GPT-5 Just Launched — What GTM Teams Need to Know

GPT-5 is out. This is OpenAI’s most capable model, and for once the launch actually matters for GTM teams — not just developers.

I’m not going to recap every feature. You can read the blog post for that. What I want to do is break down the specific capabilities that change how GTM workflows get built, and what you can do differently starting now.


The 1M Context Window Changes Everything for GTM

This is the headline feature for anyone working in GTM engineering.

GPT-5 ships with a 1 million token context window, nearly 8x GPT-4o's 128K window. To put that in practical terms: you can now feed an entire company's 10-K filing, their last 20 blog posts, their job listings page, three competitor analyses, and your full ICP document into a single prompt, and still have room left over.

Why this matters for GTM:

Account research at depth. Previously, you had to be surgical about what context you fed into a model for account research. You’d summarize a prospect’s website, pull a few key data points, and hope the model could infer the rest. With 1M tokens, you can feed it everything. The full website. The full annual report. Every recent press mention. The model gets the complete picture, and the quality of its analysis reflects that.

Multi-source enrichment in a single pass. In Clay, one of the biggest workflow design constraints has been context window limits. You'd break enrichment into multiple steps (one to analyze the company, another to analyze the person, another to synthesize) because you couldn't fit all the source data into one call. With 1M tokens, you can collapse several enrichment steps into one: fewer API calls, lower latency, simpler workflows. There's a sketch of the single-pass pattern at the end of this section.

Full campaign context for copy generation. You can now give the model your entire campaign history — every email you’ve sent, reply rates by variant, what worked for which segments — as context when generating new copy. The model doesn’t just write an email. It writes an email informed by everything your team has learned across every campaign you’ve run.

Previously, this kind of long-context work meant switching to Gemini. That tradeoff is gone now.
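
To make the single-pass pattern concrete, here's a minimal sketch using the OpenAI Python SDK. The `gpt-5` model id is taken from the launch docs but worth verifying, and `fetch_website`, `fetch_10k`, and `fetch_job_listings` are hypothetical stand-ins for whatever scrapers or enrichment sources you already run:

```python
from openai import OpenAI  # pip install openai tiktoken
import tiktoken

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def research_account(company: str) -> str:
    # Hypothetical helpers -- swap in your own scrapers / enrichment sources.
    sources = {
        "WEBSITE": fetch_website(company),
        "10-K FILING": fetch_10k(company),
        "JOB LISTINGS": fetch_job_listings(company),
    }
    context = "\n\n".join(f"=== {name} ===\n{text}" for name, text in sources.items())

    # Sanity-check the assembled context against the window before sending.
    encoder = tiktoken.get_encoding("o200k_base")
    print(f"context size: {len(encoder.encode(context)):,} tokens")

    response = client.chat.completions.create(
        model="gpt-5",
        messages=[
            {
                "role": "system",
                "content": "You are an account researcher. Produce a pain-point "
                           "analysis grounded only in the sources provided.",
            },
            {"role": "user", "content": context},
        ],
    )
    return response.choices[0].message.content
```

One call, all the context, no intermediate summarization steps to lose detail in.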


Unified Reasoning Means You Stop Switching Models

GPT-5 merges the reasoning models (o-series) and the base models (4-series) into one. This is a bigger deal than it sounds.

Before this, you had to make a choice every time you set up an AI step in a workflow. Need deep analysis? Use o3. Need fast, cheap text generation? Use 4o or 4.1. Need instruction-following? Use 4.1. The model selection was part of the workflow design, and getting it wrong meant either overspending on simple tasks or getting weak outputs on complex ones.

GPT-5 removes that decision. One model handles both fast generation and deep reasoning. You can even control the reasoning level — tell it to think harder on complex tasks and go fast on simple ones.

For GTM workflows, this simplifies architecture significantly. Your Clay enrichment steps, your email generation, your lead scoring, your research synthesis — they can all hit the same model. You don’t need to route different tasks to different models anymore. One model, one API key, one set of prompt patterns.
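
Here's a rough sketch of what that collapse looks like in a workflow. The `reasoning_effort` parameter and its values come from the launch docs (verify against the current API reference), and the task-to-effort mapping is just an illustration:

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# One model, tiered by reasoning effort instead of by model family.
EFFORT_BY_TASK = {
    "classify_reply": "minimal",   # fast and cheap: inbox categorization
    "generate_email": "low",       # quick copy generation
    "score_lead": "medium",        # some reasoning over enrichment data
    "research_account": "high",    # deep multi-source analysis
}

def run_step(task: str, prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-5",
        reasoning_effort=EFFORT_BY_TASK[task],
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```

The routing logic that used to pick between o3, 4o, and 4.1 becomes a one-line lookup.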


The Writing Quality Is a Real Upgrade

4o had a recognizable voice. If you’ve read enough AI-generated outbound, you know exactly what I mean: the slightly formal tone, the tendency toward lists, the “I hope this message finds you well” energy even when you explicitly tell it not to.

OpenAI claims GPT-5’s writing is more natural and less templated. I’ll be testing this head-to-head against Claude (which has been our default for copy generation), but the early samples are noticeably better than 4o.

For outbound, this matters more than people think. Deliverability is increasingly about whether an email “feels” human to both spam filters and recipients. If GPT-5 can produce copy that genuinely varies in structure, tone, and cadence — without the telltale AI patterns — that’s a direct improvement to inbox placement and reply rates.

The benchmark here isn’t “can it write a good email.” It’s “can it write 500 emails that are each genuinely different from each other while staying on-message and on-voice.” That’s the test. We’ll see.
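
One way to make that test concrete is to measure how much 500 generated drafts actually differ from each other. A minimal sketch, using word-set (Jaccard) overlap as a cheap proxy for templated structure; the 0.6 threshold is an arbitrary starting point, not a calibrated number:

```python
from itertools import combinations

def jaccard(a: str, b: str) -> float:
    """Word-set overlap between two drafts; 1.0 means identical vocabularies."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def variety_report(drafts: list[str], threshold: float = 0.6) -> None:
    # Flag pairs whose vocabularies overlap heavily -- a sign the model is
    # reusing one skeleton with the company name swapped out.
    flagged = [
        (i, j, score)
        for (i, a), (j, b) in combinations(enumerate(drafts), 2)
        if (score := jaccard(a, b)) > threshold
    ]
    total_pairs = len(drafts) * (len(drafts) - 1) // 2
    print(f"{len(flagged)} of {total_pairs} pairs look templated")
    for i, j, score in sorted(flagged, key=lambda t: -t[2])[:10]:
        print(f"  drafts {i} and {j}: {score:.0%} word overlap")
```

Embedding similarity would catch paraphrased skeletons too, but word overlap is free and flags the worst offenders.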


The Pricing Changes the Math on API Usage

This is where it gets interesting for teams running AI-heavy GTM workflows.

GPT-5’s API pricing is roughly 4x cheaper than Claude’s Opus. OpenAI has also released GPT-5 mini and GPT-5 nano at even lower price points. The tiered lineup (full power, mid-range, and ultra-cheap) means you can match the model to the task and keep costs under control.

What this means in practice:

Clay workflows get cheaper to run. Every Claygent call, every enrichment step that hits an LLM, every AI-powered scoring model: they all cost per token. A 4x price reduction means you can either run the same workflows at a quarter of the cost, or run four times as many enrichment steps for the same budget. For teams doing high-volume enrichment, this is significant (there’s a back-of-the-envelope calculation after this list).

You can afford to be more thorough. When API calls are expensive, you design workflows to minimize them. You compress prompts, skip enrichment steps for lower-priority prospects, and batch aggressively. When calls are cheap, you can afford to run deeper research on every prospect. More data points, more analysis, better personalization — without blowing your enrichment budget.

Nano for high-volume, low-complexity tasks. Things like email categorization, basic data extraction, simple classification — these don’t need a frontier model. GPT-5 nano at ultra-low pricing means you can run these at massive scale for almost nothing.
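
For a sense of the math, here's that back-of-the-envelope sketch. The per-million-token prices below are illustrative assumptions based on launch pricing, and the token counts are made up; check the live pricing page before budgeting off any of this:

```python
# Illustrative prices in USD per 1M tokens -- verify against the pricing page.
PRICE_PER_M_INPUT = {"gpt-5": 1.25, "gpt-5-mini": 0.25, "gpt-5-nano": 0.05}
PRICE_PER_M_OUTPUT = {"gpt-5": 10.00, "gpt-5-mini": 2.00, "gpt-5-nano": 0.40}

def cost_per_prospect(model: str, input_tokens: int, output_tokens: int) -> float:
    return (
        (input_tokens / 1e6) * PRICE_PER_M_INPUT[model]
        + (output_tokens / 1e6) * PRICE_PER_M_OUTPUT[model]
    )

# Deep research: ~200K tokens of source material in, ~2K of analysis out.
print(f"${cost_per_prospect('gpt-5', 200_000, 2_000):.3f}")      # $0.270
# Nano classification: ~1K in, ~50 out.
print(f"${cost_per_prospect('gpt-5-nano', 1_000, 50):.5f}")      # $0.00007
```

At those assumed rates, deep research on every prospect stops being the budget line that kills the workflow.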

The hope is that this pricing translates to faster integration in tools like Clay. It took a while for o3 to show up in Clay after launch. If GPT-5’s economics are as good as advertised, there’s a strong incentive for every GTM tool vendor to integrate it quickly.


What You Can Build Differently Now

Here’s the practical translation. If you’re running GTM workflows today, these are the things that change:

Collapse multi-step enrichment into single calls. If you’ve been breaking research into multiple LLM steps because of context limits, test consolidating them. Feed all the source data into one GPT-5 call and see if the output quality holds.

Run deeper account research at scale. The 1M context window means you can feed the model an entire company’s digital footprint and get analysis that actually reflects the full picture. Try pulling a prospect’s entire website plus their last earnings call plus their job listings and asking for a pain point analysis.

Test the writing quality against your current model. If you’re using Claude or GPT-4o for email generation, run a head-to-head test: same prospects, same enrichment data, same instructions. Compare the outputs for naturalness, variety, and adherence to voice guidelines (there’s a minimal harness sketch after this list). The model that produces the most genuinely varied, on-voice copy wins.

Recalculate your enrichment budget. If GPT-5 pricing holds, your per-prospect enrichment cost just dropped significantly. Figure out what that means for your workflow — can you add enrichment steps you previously cut for cost reasons? Can you run AI scoring on your full database instead of just high-priority segments?

Simplify your model routing. If you’ve been routing different tasks to different OpenAI models, test running everything through GPT-5 with different reasoning levels. One model that handles both quick classification and deep analysis is architecturally simpler than maintaining multiple model integrations.
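
For the head-to-head writing test above, a minimal harness sketch using the OpenAI SDK. Both model ids are assumed available to your key; a Claude arm would need the Anthropic SDK on top of this:

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate(model: str, instructions: str, prospect_context: str) -> str:
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": instructions},
            {"role": "user", "content": prospect_context},
        ],
    )
    return response.choices[0].message.content

# Same prospects, same enrichment data, same instructions -- only the model
# changes. Review the pairs blind so nobody knows which model wrote which.
def head_to_head(prospects: list[str], instructions: str) -> list[tuple[str, str]]:
    return [
        (generate("gpt-4o", instructions, p), generate("gpt-5", instructions, p))
        for p in prospects
    ]
```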


The Bigger Picture

Every major model release compresses the timeline for GTM teams. Capabilities that were expensive become cheap. Workflows that were complex become simple. Things that required engineering talent become accessible to operators.

GPT-5 specifically accelerates three shifts: longer context means richer inputs, cheaper pricing means broader deployment, and unified reasoning means simpler architecture. Each of those independently makes GTM workflows better. Together, they move the bar for what a small team can build.

The teams that are already running AI-powered GTM workflows will absorb this immediately. They’ll swap models, test outputs, and ship better workflows within weeks. The teams that haven’t started yet just watched the gap get wider.

Sam Altman said GPT-3 felt like talking to a high schooler, 4o like a college student, and GPT-5 like a PhD-level expert. I don’t know about PhD-level. But I do know the GTM workflows we can build with this are meaningfully better than what we could build six months ago. And that compounds fast.