< PLG for developer companies />

The core idea

PLG is not a marketing motion. It's the discipline of letting your product, your pricing, and your free tier do the work that a sales team does at a sales-led company. Sales is the layer you add on top once adoption is happening. Never the layer you start with.

Most companies miss this because they treat PLG as a tactic to bolt onto an existing motion. They run a 14-day trial, gate the API behind a sales call, charge per seat, and wonder why developers churn before they activate. The order matters. So does each individual decision inside it.

The companies that scaled developer products past $100M ARR (Stripe, Twilio, Vercel, Algolia, Datadog, MongoDB, GitHub, Cloudflare) didn't do this by accident. They got the same set of decisions right, in roughly the same sequence. This is that sequence, in 10 steps.

1. Ship a free tier with utility — your acquisition channel

Not a 14-day trial. Not a "contact sales" form. A free tier where a developer can build something, deploy it, and run it in production before paying. The free tier is your acquisition channel.

Stripe charges nothing until a developer processes a transaction. Vercel's free tier includes deployments, serverless functions, and a production URL. Twilio gives free credits, and developers send SMS messages during integration. Algolia provides 10K search requests a month for free. What these have in common: a developer can build something they'd be embarrassed to lose.

That's the activation bar. Not signups, not free-account counts. If your free tier is a demo environment with watermarks and sandbox-only limits, it's a trial, and trials don't drive PLG. Trials drive sales calls.

The most common mistake is time-based gating. A 14-day trial creates an artificial deadline that pushes developers into a buying decision before they've integrated. Usage limits don't. A developer can integrate Stripe and run in test mode for years before paying anything. By the time they do pay, they've shipped a checkout flow, embedded the API in their stack, and accumulated months of historical data they don't want to migrate. That's lock-in earned through utility, not contracts.

Everything above assumes near-zero marginal cost per free user. That assumption breaks when your product includes AI-powered features. Every prompt, generation, or inference call burns compute. The free tier still needs to exist, but it can't be open-ended. Usage caps on AI features replace unlimited access. Gate volume and speed, not feature access. Midjourney does this with Fast Mode vs. Relax Mode. Fast Mode gives instant GPU access with limited monthly hours. Relax Mode is unlimited but queued. Users pay for priority and throughput, not better outputs.
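One way to make "gate volume and speed, not feature access" concrete is a routing check that keeps every feature available but demotes over-cap calls to a queue instead of blocking them. This is a minimal sketch with hypothetical tier names and limits, not any vendor's actual policy:

```python
from dataclasses import dataclass

# Hypothetical free-tier policy: every feature stays available, but volume
# and priority are capped. Paid tiers raise the caps, not the feature set.
@dataclass
class Tier:
    name: str
    fast_calls_per_month: int   # instant, priority-served AI calls
    queued_unlimited: bool      # past the cap, fall back to a queue, not a wall

TIERS = {
    "free": Tier("free", fast_calls_per_month=100, queued_unlimited=True),
    "pro":  Tier("pro",  fast_calls_per_month=5000, queued_unlimited=True),
}

def route_request(tier_name: str, fast_calls_used: int) -> str:
    """Decide how to serve an AI call: fast lane, queue, or hard block."""
    tier = TIERS[tier_name]
    if fast_calls_used < tier.fast_calls_per_month:
        return "fast"       # within the monthly fast-lane cap
    if tier.queued_unlimited:
        return "queued"     # still served, just deprioritized (Relax Mode)
    return "blocked"

print(route_request("free", 42))    # fast
print(route_request("free", 100))   # queued
```

The design choice is the Midjourney pattern from the text: the free user never hits a dead end, only a slower lane, so the upgrade motivation is throughput rather than access.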

The trap is building a free tier so capable that it removes the reason to upgrade. Google's AI team ran into this launching Gemini subscriptions. The free tier already outperformed most use cases. Users had no reason to pay. The fix wasn't restricting features. It was designing a ceiling that creates upgrade desire without killing the experience that gets developers hooked in the first place.

Checklist

  • Audit your free tier. Can a developer build and ship something production-grade without paying?
  • Remove time-based trial gates. Usage limits work. Countdown timers don't.
  • Track what percentage of free-tier users build something they'd be embarrassed to lose. That's your activation rate.
  • If your product includes AI features, define where free stops and whether that stopping point creates the right motivation to upgrade.

2. Time to first meaningful action — the metric that predicts everything

Time to first API call is the strongest predictor of developer conversion. Every friction point between signup and value drags conversion down. It doesn't matter how good your product is downstream if developers don't get there.

The number to measure is the median time from signup to a developer's first meaningful action. For Stripe, that meant a working checkout flow in under 15 minutes. For Twilio, it was the "send your first SMS" tutorial, not docs, not a dashboard tour. For Firebase, quickstart templates that deploy a working backend in under five minutes.

The mistake here is measuring the wrong moment. "Account created" is not first action. "Logged in" is not first action. First action is the moment a developer sees the product do something useful for their use case. That's specific to your product, and you have to define it precisely. For a payments API, it's a successful test charge. For a database, it's a query returning data. For a deployment platform, it's a live URL.

Track median time-to-action weekly. Identify the top three friction points between signup and that action (onboarding flow, key generation, SDK install, first request) and fix the biggest one each quarter. Time-to-action improvements ripple through every metric downstream.
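The weekly median described above is a one-liner once events are in place. A minimal sketch, assuming a hypothetical event log with one signup timestamp and one first-action timestamp per developer (field names are illustrative, not a real schema):

```python
from datetime import datetime
from statistics import median

# Hypothetical event log: signup and (maybe) first meaningful action per dev.
events = [
    {"dev": "a", "signup": "2024-05-01T10:00", "first_action": "2024-05-01T10:12"},
    {"dev": "b", "signup": "2024-05-01T11:00", "first_action": "2024-05-02T09:00"},
    {"dev": "c", "signup": "2024-05-01T12:00", "first_action": None},  # never activated
]

def minutes_to_first_action(e):
    """Minutes from signup to first meaningful action, or None if it never happened."""
    if e["first_action"] is None:
        return None
    delta = datetime.fromisoformat(e["first_action"]) - datetime.fromisoformat(e["signup"])
    return delta.total_seconds() / 60

# Median over developers who reached the action; track the drop-off separately.
samples = [m for e in events if (m := minutes_to_first_action(e)) is not None]
print(median(samples))  # 666.0 (minutes)
```

Note the deliberate split: the median only covers developers who got there at all, and the share who never fire the action is its own number worth watching.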

Checklist

  • Define your first meaningful action precisely. Not signup, not login. The moment a developer creates something useful for their use case.
  • Measure the median time from signup to that action and track it weekly.
  • Identify the top three friction points between signup and first action. Fix the biggest one this quarter.

3. Activation is not signup — define it in one sentence

A signup is not a user. OpenView's PLG benchmarks show PLG companies are 2x more likely than sales-led companies to grow revenue 100% year over year, and 87% of standout PLG companies track an explicit activation metric.

Activation is the moment a developer experiences the product's core value for their use case. It has to be specific enough that an analyst can query it from your data warehouse on a Monday morning and a number comes back.

Slack defined activation as "2,000 messages sent by a team." That single metric reshaped their entire GTM strategy: every onboarding decision, every paid feature, every sales handoff was built around getting teams to 2,000 messages. Datadog defined activation as connecting a first integration and sending data. Amplitude defined it as a user creating their first saved chart.

If you can't write your activation metric in one sentence, it's too vague. If you can't query it from your warehouse today, it doesn't exist yet. And if you're still reporting signup counts to leadership without activation rates alongside them, you're making decisions on the wrong number.
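The "queryable on a Monday morning" test can be sketched in a few lines. This assumes a hypothetical warehouse extract of messages per team and uses Slack's published threshold as the example definition; the data and wiring are illustrative:

```python
# Activation definition (one sentence): a team has sent 2,000 messages.
ACTIVATION_THRESHOLD = 2000

# Hypothetical extract: messages sent per team.
teams = {"acme": 3150, "globex": 420, "initech": 2000, "umbrella": 0}

activated = {t for t, msgs in teams.items() if msgs >= ACTIVATION_THRESHOLD}
activation_rate = len(activated) / len(teams)

print(sorted(activated))   # ['acme', 'initech']
print(activation_rate)     # 0.5
```

If your own definition can't be reduced to a threshold check like this against data you already collect, that's the signal it's either too vague or not yet instrumented.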

Checklist

  • Write a one-sentence activation definition. If it takes more than one sentence, it's too vague.
  • Instrument it. If you can't query it from your data warehouse today, it doesn't exist yet.
  • Stop reporting signup counts to leadership without activation rates alongside them.

4. Visible artifacts — your organic loop runs on these

The strongest PLG loop: developers build things with your product that other developers can see. Those artifacts drive organic discovery without marketing spend.

Vercel's free-tier sites deploy to a vercel.app subdomain by default. Every deployment surfaces the brand. Netlify's deploy previews in GitHub PRs exposed the product to every reviewer on the team. Stripe's checkout pages are seen by millions of end users. Every transaction surfaces the brand. Webflow's published sites include Webflow branding on free plans, and every site is a billboard for the platform.

The pattern: the product creates artifacts visible to non-users. Sites, apps, APIs, integrations, embeds, deploy logs, share links. Make attribution easy but not mandatory. Make the artifact good enough that developers want to share it. And track the inbound signup volume that originates from product-generated artifacts. That's your organic loop metric, and it's the closest thing to a free customer acquisition channel that exists in B2B.
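Tracking the organic loop metric starts with attributing signups to product-generated artifacts. A minimal sketch, assuming a hypothetical referrer field on signup events and using a `*.vercel.app`-style artifact domain purely as an example:

```python
from urllib.parse import urlparse

# Illustrative: domains where your product's artifacts live.
ARTIFACT_SUFFIXES = (".vercel.app",)

# Hypothetical signup events with the referrer captured at signup.
signups = [
    {"dev": "a", "referrer": "https://cool-demo.vercel.app/"},
    {"dev": "b", "referrer": "https://news.ycombinator.com/item?id=1"},
    {"dev": "c", "referrer": "https://side-project.vercel.app/about"},
]

def from_artifact(referrer: str) -> bool:
    """True if the signup came from a product-generated artifact."""
    host = urlparse(referrer).hostname or ""
    return host.endswith(ARTIFACT_SUFFIXES)  # endswith accepts a tuple of suffixes

loop_signups = [s["dev"] for s in signups if from_artifact(s["referrer"])]
print(loop_signups)   # ['a', 'c']
```

The share of signups classified this way, tracked over time, is the organic loop metric the checklist below asks for.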

If your product doesn't create visible artifacts, you have a harder PLG problem. Internal tooling, backend-only services, and infrastructure-layer products have to find their visible surface elsewhere: open source, public docs, technical content, conference talks. The loop still has to exist. The channels are different.

Checklist

  • Identify what artifacts your product creates that are visible to non-users — sites, apps, APIs, integrations, embeds, deploy logs, share links.
  • Make attribution easy but not mandatory.
  • Track inbound signups that originate from product artifacts. That's your organic loop metric.

5. Self-serve onboarding — if a developer needs sales, it's not PLG

If a developer needs to talk to a human to get started, you don't have a PLG motion. You have a sales-led motion with a free tier on top. Docs, quickstarts, templates, and example projects do the work.

Stripe's documentation is the gold standard. Every API endpoint includes a working code example. Supabase built project templates that deploy a working app in one click. Neon's CLI creates a working Postgres database with one command, no dashboard required. None of these required a sales call to evaluate.

Run the new-developer test. Take someone who has never used your product, sit them in front of a clean machine, and time how long it takes them to sign up, build something, and deploy it without talking to anyone. If they can't, fix what blocks them before fixing anything else.

Two specific calls. First, prioritize quickstarts over comprehensive documentation. Developers want to start, not read. Comprehensive docs come second. Second, build at least one template or starter project that runs end-to-end in under 10 minutes. The developer who ships something in 10 minutes activates. The developer who spent an afternoon reading already churned.

Checklist

  • Run the new-developer test. Sit someone down at a clean machine and time how long it takes them to sign up, build, and deploy without talking to anyone.
  • Prioritize quickstarts over comprehensive docs. Developers want to start, not read.
  • Build at least one template or starter project that runs end-to-end in under 10 minutes.

6. Usage-based pricing — per-seat is friction at the wrong moment

Usage-based pricing aligns cost with value. Per-seat pricing creates friction at the exact moment you want expansion, when a second or third developer wants to start using the product.

Twilio charges per API call. Vercel charges based on bandwidth and serverless function execution. Cloudflare Workers charges per request after a generous free tier. Algolia charges per search request. In all of these, cost scales with the product's success, not headcount. A team of five developers using Stripe doesn't cost more than a team of one developer doing the same volume. That's the right answer.

Map your pricing to the unit of value a developer gets. API calls, deployments, executions, storage, bandwidth, requests, GB processed. Pick the metric that's easiest to explain on a single line and that a developer can predict from their usage. Predictability matters as much as fairness.

Two tests for your pricing model. First, is $0 a starting point? Usage limits beat time limits. A developer should be able to run on the free tier indefinitely, with the bill kicking in only when usage crosses a threshold. Second, what happens when a customer 10x's their usage? If the answer is "they call sales," you have a pricing wall, not a growth model. The whole point of usage pricing is that growth happens automatically.
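Both tests can be checked against a price sheet directly. A minimal sketch with illustrative numbers (a free allowance plus per-unit billing, not any real vendor's pricing):

```python
# Hypothetical usage-based price sheet.
FREE_UNITS = 100_000          # e.g. requests per month included free
PRICE_PER_UNIT = 0.00005      # $ per request above the free allowance

def monthly_bill(units: int) -> float:
    """Bill for a month of usage: $0 until usage crosses the free threshold."""
    billable = max(0, units - FREE_UNITS)
    return round(billable * PRICE_PER_UNIT, 2)

# Test 1: $0 is a real starting point.
print(monthly_bill(80_000))      # 0.0 (still on the free tier, indefinitely)
# Test 2: a 10x jump bills automatically -- no sales call, no pricing wall.
print(monthly_bill(1_000_000))   # 45.0
print(monthly_bill(10_000_000))  # 495.0
```

If your actual model can't be expressed as simply as this (a developer should be able to predict the second line from the first), the predictability test in the paragraph above is already failing.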

Pure per-unit billing has a failure mode at scale: bill shock. Vikas Kansal's team at Google hit this building AI subscription tiers. Unpredictable costs make enterprise buyers nervous and individual developers hesitant. The alternative is intensity tiers: prepaid volume buckets (Plus/Pro/Ultra) that give predictability while still aligning cost with usage. Developers get a taste of most capabilities in every tier, but volume and speed increase as they move up. Predictable tiers convert better than open meters because developers can budget for them.

There's a second dimension most developer tools haven't considered yet: outcome-based pricing. Instead of charging per input (API call, prompt, request), charge per successful result. Intercom's Fin AI agent charges $0.99 per resolution. The AI tries to answer for free. You only pay when the problem is solved. As developer tools add agent and automation capabilities, pricing per outcome starts to make more sense than pricing per request. A developer who pays per successful deployment or per resolved issue is paying for value delivered, not compute consumed.

Checklist

  • Map your pricing to the unit of value a developer gets — API calls, deployments, executions, storage, bandwidth, requests.
  • Make $0 a starting point. Usage limits beat time limits.
  • Model what happens when a customer 10x's their usage. If the answer is "they call sales," you have a pricing wall, not a growth model.
  • If your product has AI-powered features, map each feature to its compute cost. Pricing tiers should align with cost-to-serve, not just perceived value.
  • Evaluate whether any capability in your product could be priced on outcomes (successful completions, resolved issues, deployed builds) instead of inputs.

7. Product-qualified leads — instrument behavior, not vibes

A product-qualified lead is a user whose behavior signals they're ready for a paid plan or a sales conversation. The threshold should be specific and defensible, not a gut feeling.

Dropbox's PQL signal was a user hitting their storage limit. Atlassian identified PQLs based on growing team size on free Jira instances. Figma's PQL signal was 3+ editors working in shared files, the moment usage shifted from solo to collaborative. In each case, the signal was behavioral, queryable, and tied to expansion intent, not "this account looks active."

Define two or three behavioral signals that predict conversion from free to paid. Test them against historical data before you trust them. The simplest scoring model (signal A plus signal B equals PQL) outperforms guessing every time, and it gives sales something concrete to act on.

When PQLs do route to sales, route them with context. Not "this account is active." Specifically: three developers running production workloads across two workspaces, last 30-day request volume up 4x, hitting rate limits on the free tier. Sales will only land calls where the signal is strong, and the signal is only strong when it's specific.

For products with variable cost-to-serve, PQL scoring needs a cost dimension. A developer running 500 prompts a day is both your best conversion candidate and your biggest margin risk. Score for conversion likelihood and cost-to-serve simultaneously. High engagement plus high compute cost means this user needs to convert now, not eventually. The free tier is subsidizing their usage, and the longer they stay free, the worse your unit economics get.
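Putting the two dimensions together, a PQL classifier can be as small as two behavioral signals plus a cost-to-serve flag. Signal names and thresholds here are hypothetical; the text's advice stands — validate them against historical conversion data before trusting them:

```python
def classify(account: dict) -> str:
    """Score an account: not-yet, pql, or urgent-pql (illustrative thresholds)."""
    signal_a = account["active_devs"] >= 3           # usage has gone multi-developer
    signal_b = account["rate_limit_hits_30d"] > 0    # pushing free-tier limits
    pql = signal_a and signal_b                      # signal A + signal B = PQL
    if pql and account["compute_cost_30d"] > 50.0:   # free tier is subsidizing them
        return "urgent-pql"   # high engagement + high cost: convert now
    return "pql" if pql else "not-yet"

acct = {"active_devs": 4, "rate_limit_hits_30d": 7, "compute_cost_30d": 120.0}
print(classify(acct))   # urgent-pql
```

The dict an "urgent-pql" account carries is also exactly the context the routing paragraph above asks for: developer count, limit hits, and cost trend, not "this account looks active."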

Checklist

  • Define two or three behavioral signals that predict conversion from free to paid. Test them against historical data before you trust them.
  • Build a simple PQL scoring model. Signal A plus signal B equals PQL beats guessing.
  • Route PQLs to sales with specific context — developer count, workload type, usage trend, rate-limit hits — not "this account looks active."
  • Track cost-per-user alongside engagement-per-user. High engagement plus high cost equals your most urgent conversion target.

8. Bottom-up adoption — let developers pull the product into their org

Bottom-up adoption is the bet that individual developers will use the product first, then bring it to their team and their org. Enterprise features matter, but they come after individual adoption, not before.

GitHub grew inside enterprises one developer at a time, before it sold top-down to IT. Slack spread team by team: one team adopted, adjacent teams noticed, and IT eventually had to standardize. Datadog entered orgs through a single DevOps engineer monitoring one service. Each of these scaled because individual developers could get full value from the product without needing org-level approval.

The most important rule: no admin-only features in the critical path. If a developer can't use your product without their CTO clicking a button, you've broken bottom-up adoption. Permissions, billing, and SSO matter, but they should be optional add-ons, not blockers to first use.

Build sharing and collaboration features that naturally expose the product to non-users. When one developer adopts the product, the next developer should encounter it within a week: a shared link, a PR, a deploy preview, a notification, an artifact in the repo. Track the internal referral pattern. When a second developer joins an account, what triggered it? That trigger is your expansion lever.

Checklist

  • Make sure a single developer can get full value without org-level approval. No admin-only features in the critical path.
  • Build sharing and collaboration features that naturally expose the product to non-users.
  • Track internal referral patterns. When a second developer joins an account, what triggered it?

9. Expansion triggers — one developer is adoption, three is expansion

The moment one developer invites a second, or one workspace connects to a second, that's your expansion signal. This is the bridge between individual adoption and team or enterprise revenue, and it's the metric that separates products that grow from products that plateau.

Notion's expansion came from page-sharing. A user shares a page with a teammate, the teammate joins the workspace, and the account grows. Cross-account sharing was a stronger upgrade signal than any solo-usage metric. Linear's signature expansion event is the second engineer joining a workspace. Figma's was a design file shared with a developer, because cross-functional sharing predicted enterprise deals.

The pattern across all of these: the expansion signal is "more people or more projects," not "more usage." A single developer using the product more isn't expansion. A second developer joining is.

Measure active developer density per account. One developer is adoption. Three is expansion. That's the threshold where sales should engage. Earlier and you're spending sales time on accounts that aren't ready. Later and you're missing the window. Build features that make multi-developer collaboration better than solo use, because the moment collaboration is the obvious choice, expansion happens by default.
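Developer density per account reduces to counting distinct active developers and applying the thresholds from the text. A minimal sketch over a hypothetical activity stream of (account, developer) pairs:

```python
from collections import Counter

# Hypothetical activity stream: (account, developer) pairs of active usage.
activity = [
    ("acme", "dev1"), ("acme", "dev2"), ("acme", "dev3"), ("acme", "dev1"),
    ("globex", "dev9"),
]

# Distinct developers per account (set() dedupes repeat events).
density = Counter()
for account, dev in set(activity):
    density[account] += 1

def stage(account: str) -> str:
    """One developer is adoption; three is expansion (the sales threshold)."""
    n = density[account]
    return "expansion" if n >= 3 else "adoption" if n >= 1 else "inactive"

print(density["acme"], stage("acme"))      # 3 expansion
print(density["globex"], stage("globex"))  # 1 adoption
```

Accounts crossing from "adoption" to "expansion" week over week is the list sales should be working from, per the product-led sales step that follows.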

Checklist

  • Define your expansion signal as "more people or more projects," not "more usage."
  • Measure active developer density per account. One is adoption, three is expansion. That's the threshold for sales engagement.
  • Build features that make multi-developer collaboration better than solo use.

10. Product-led sales — sales is a layer, not a starting point

Elena Verna calls this product-led sales. Developers adopt bottom-up. Buyer personas (directors of engineering, tech leads, CTOs) approve expansion and budget. Sales engages accounts where adoption is already happening, not accounts where it might.

Twilio's sales team only engaged accounts after developers were already making API calls. Sales opened with usage data, not feature demos. MongoDB's enterprise sales targeted companies where Atlas free-tier clusters were already running. Vercel's enterprise motion focused on companies where multiple developers had already deployed projects. In all three, sales worked because the product had already done the trust-building.

Define the product adoption threshold that triggers a sales touchpoint. "Three or more active developers in one account" is better than "high engagement." The threshold has to be queryable and defensible. Sales should know exactly why this account got escalated.

The other half of this rule: don't let sales engage accounts below the adoption threshold. Premature outreach damages developer trust faster than anything else you can do. A developer who got a cold sales email two days after signing up will remember it, and they'll associate the friction with your product. Hold the line on the threshold even when pipeline pressure builds, because the alternative is faster pipeline this quarter and worse retention forever.

Checklist

  • Define the product adoption threshold that triggers a sales touchpoint. "Three or more active developers in one account" beats "high engagement."
  • Equip sales with product usage data — which products are in use, how many developers are active, what the growth trend looks like.
  • Don't let sales engage accounts below the adoption threshold. Premature outreach damages developer trust.

The order matters

These 10 steps aren't independent. They're a sequence, and each one only works because the previous ones are in place.

A free tier with utility doesn't drive growth without short time-to-action. Time-to-action doesn't matter if your activation metric is undefined. Activation can't be measured if you're charging per seat. Per-seat pricing kills the bottom-up adoption that PQLs depend on. PQLs are noise without an expansion signal. And expansion signals don't matter if sales engages accounts before they fire.

Companies fail at PLG because they pull one piece out of the sequence and bolt it onto a different motion. Free tier without usage pricing. Usage pricing without an activation metric. Activation metric without bottom-up adoption. The 10 steps above are the same set of decisions Stripe, Vercel, Twilio, Algolia, and Datadog all made, in roughly the same order.

If you're starting a developer product today, the order to follow is the order in this list. If you're adopting PLG inside a company that already has a sales motion, the order matters even more, because the friction will be highest at the steps where your existing motion contradicts the playbook. Find the contradiction and fix the contradiction. Don't try to run both at once.

PLG is the growth half of building a durable developer company. The other half is how companies innovate in the AI era.

Developer Marketing Handbook

Goals

Developer marketing builds trust first, pipeline second.
The work connects your product to how developers actually build and helps that trust translate into adoption and revenue.

A great developer experience is the foundation. It starts with discoverability, continues through docs, and carries into the product itself. Good documentation shortens time to value and builds confidence that your product can scale with real teams. Developers trust what they can inspect, so show how the product works and let the system speak for itself.

Success isn't clicks or vanity metrics. It's measurable engagement that creates product-qualified leads, builds influence across teams, and contributes to both product-led and sales-led growth.
When developers use your product by choice and advocate for it inside their company, you've done the job right.

Strategy

Start with reality, not aspiration.

Map where your product fits in the developer workflow, then help developers do that job faster or with less friction.

Lead with clarity. Explain what it is, what it does, and why it matters.

Show the system behind the product. Architecture, examples, and tradeoffs explain more than positioning ever will.
If you can do it in a clever or playful way that still feels authentic, that's bonus points.

The best developer marketing respects time, delivers value, and makes something complex feel obvious.

Journey

Awareness → Evaluation → Adoption → Advocacy.
Each stage should connect clearly to the next.

Awareness happens in places developers already spend time: GitHub, Reddit, newsletters, blogs.
Evaluation happens in your docs, demos, and sandboxes.

For most developers, the docs are the real homepage, so accuracy and structure matter more than polish.

Adoption depends on how fast they reach first success.
Advocacy is when they start teaching others what they learned from you.

Personas

Create personas based on who buys the product and who actually uses it. For example:

Buyers: CTO or Engineering Leader, Senior Engineer, Implementation Architect.
Users: Frontend, Full-stack, App Developer.
Adjacent: Ops, Product, Design.

Each persona has different pain points and goals.
CTOs and Engineering Leaders care about governance and ROI.
Senior Engineers look for performance, flexibility, and code quality.
Implementation Architects focus on how well a tool integrates and scales.
Write for what each person owns, not what you wish they cared about.

These categories are shifting. PMs and designers who build with AI tools aren't adjacent anymore. They're users. Update your personas to reflect how people actually work, not how the org chart defines them.

Messaging

Be clear first. Be clever only if it helps.
Make every message easy to scan. Lead with the point before expanding on it.
Good developer messaging is specific, practical, and rooted in how people actually build.

Clarity earns trust, but a bit of personality makes it stick.
The goal isn't to sound like marketing. It's to communicate something real that developers recognize and care about.

Build around three pillars:

  • Speed: faster builds, fewer tickets
  • Efficiency: consolidated stack, lower maintenance
  • Control: safe scale, long-term confidence

If you can back it with code, data, or proof, keep it.
If it only sounds good, cut it.

Campaigns

Treat campaigns like product launches.
Plan, ship, measure, repeat.

Each campaign should answer three questions:

  • What developer problem are we solving?
  • What proof are we showing?
  • What happens next?

Treat developer feedback like bug reports and close the loop quickly when something needs to be corrected or clarified.

Make it easy for developers to try, test, or share.
Run retros on every launch and capture what worked, what didn't, and what to change next time. Always learn from what you launch.

Content

Write with clarity and intention. Every piece should help developers build faster, learn something new, or solve a real problem.

Strong content earns attention because it's useful.
Lead with the outcome or insight, then show how to get there. Make it easy to skim from top to bottom.
Show working examples, explain tradeoffs, and include visuals or code where it helps understanding. If it doesn't teach or demonstrate something real, it doesn't belong.

Core content types

  • Blog posts: tutorials, technical breakdowns, or opinionated takes grounded in experience.
  • Guides and tutorials: step-by-step instructions that lead to a working result.
  • Integration or workflow content: explain how tools connect and where they fit in a developer's process.
  • Technical guides and code examples: deeper material for experienced readers who want implementation detail.
  • Explainer or glossary content: clear, factual definitions written to answer specific questions directly.
  • Video or live sessions: demos, interviews, or walkthroughs that show real workflows.
  • Research and surveys: reports or insights that help developers understand the state of their field.

Content strategy buckets

  1. Awareness — generate buzz and discussion. Hot takes, thought leadership, or topics that invite conversation.
  2. Acquisition — bring new developers in through problem-solving content. Tutorials, guides, and explainers that answer real questions.
  3. Enablement — help existing users succeed. Deep tutorials, documentation extensions, and practical how-to content with long-term value.
  4. Convert Paid — drive upgrades or signups. Feature-specific walkthroughs or advanced use cases that show value worth paying for.

Each piece should fit into one of these buckets and serve a clear purpose. Awareness earns attention. Acquisition builds trust. Enablement drives success. Convert Paid turns success into growth.

Clarity is the standard. Use it to earn credibility.

Community

Reddit. GitHub. Discord. Slack. YouTube and other social platforms.
Join conversations, don't start pitches.

Be helpful. Add context. Share working examples.
When your content becomes the answer people link to, you've earned credibility.

Metrics

Measure adoption and revenue, not reach.
Awareness is useful, but only if it drives activation or expansion.

Focus on signals that show impact:

  • Product or API usage
  • Time to first success
  • Product-qualified leads
  • Developer-influenced revenue
  • Retention and repeat engagement

The goal is to prove that trust earned from developers shows up later in product usage and revenue.

Developer Marketing Skill

I built a Developer Marketing Skill for Claude that helps evaluate content, strategy, and campaigns against the principles in this handbook.

Use it to stress-test messaging, review technical content, plan developer campaigns, or get feedback on positioning. It applies a "trust first, pipeline second" philosophy with an emphasis on clarity, technical credibility, and measurable engagement.

Need more resources?

Check out my curated collection of developer marketing tools, newsletters, and resources.
