AGI Is Here in 2026: Sequoia Capital Case Study and What It Means
Sequoia Capital declares AGI has arrived. See how an AI agent completed a full recruiting job in 31 minutes, why agent capabilities double every 7 months, and what this exponential growth means for your career.
AGI · AI Agent · AI Trends · Long-horizon Agent · Sequoia Capital
1796 Words
2026-01-26

On January 14, 2026, Sequoia Capital published a landmark blog post: “2026: This is AGI.”
The authors — Pat Grady (Sequoia’s co-managing partner with 19 years of investment experience) and Sonya Huang (Sequoia partner who spotted the AI megatrend back in 2022) — got straight to the point:
Long-horizon agents are functionally AGI, and 2026 will be their year.
This isn’t a random prediction from a tech blogger. It’s a formal assessment from one of the world’s most influential venture capital firms. Their conclusion is simple: Stop waiting. AGI is already here.
Defining AGI: The Ability to Figure Things Out
Years ago, Sequoia partners asked top AI researchers how they would define AGI.
The researchers looked at each other and gave a telling answer:
“Each of us has our own definition, but we’ll know it when we see it.”
At the time, this felt vague and unhelpful. But now Sequoia has offered their own answer.
Pat and Sonya were refreshingly honest:
“We’re investors, not researchers. We don’t have the credentials to offer a technical definition. But from a functional standpoint, AGI is the ability to figure things out. It’s that simple.”
This definition might sound too simple at first glance. But think about it — when you need an AI to help you with something, what do you actually care about? Whether it can get the job done. You don’t care what algorithm it uses or how many parameters it has. You care about results.
What Does “Figuring Things Out” Require?
They broke the logic down clearly:
A person who can figure things out needs three capabilities:
- Knowledge — Understanding relevant information and concepts
- Reasoning — The ability to analyze, judge, and make decisions
- Iteration — The ability to adjust course, keep trying, and find answers when stuck
An AI that can figure things out needs the same three capabilities:
- Pre-trained knowledge — The massive knowledge base that came with ChatGPT in 2022
- Reasoning ability — The deep thinking capability introduced by OpenAI’s o1 series in late 2024
- Iterative capability — The autonomous exploration ability brought by coding agents like Claude Code, released in 2025
All three puzzle pieces are now in place.
Think of it like a fresh college graduate: they have knowledge (four years of education), critical thinking skills (they can analyze problems), and the ability to learn on the job (they grow through practice). Today’s AI agents are like brilliant new hires with unlimited potential.
The 31-Minute Recruiting Job: Nobody Told It What to Do
Abstract concepts only go so far. Sequoia backed up their argument with a real-world example that speaks volumes.
The Setup
A startup founder sent an AI agent a single message:
“I need a Head of DevRel. Someone technical enough to earn respect from senior engineers, but who also loves Twitter. Our customers are platform teams. Go.”
That’s it. No detailed job description. No search strategy. No candidate list.
How the Agent Executed
The agent got to work immediately:
Step 1: LinkedIn Search
It searched LinkedIn for DevRel roles at well-known companies and found hundreds of profiles.
But it quickly spotted the problem: job titles don’t tell you much. Resumes can’t distinguish between people who excel and people who coast.
Step 2: Pivot to YouTube
So it changed tactics and searched YouTube for tech conference talks.
After finding 50+ speakers, it filtered for those with strong engagement metrics — videos with lots of likes and active comment sections, indicating the speaker genuinely connects with technical audiences.
Step 3: Twitter Cross-Reference
Next, it cross-referenced these people on Twitter.
The result: half had inactive accounts or only retweeted corporate blog posts.
But it did find about a dozen people with authentic followings — people who shared genuine opinions, engaged with developers, and posted with real taste.
Step 4: Timing Signals
The agent kept digging and noticed something subtle: three people had significantly reduced their posting frequency over the past three months.
What could this mean? Burnout or disengagement with their current role. A subtle but important signal.
Step 5: Deep Research
Then it researched these three candidates in depth:
- Candidate 1: Just announced a new position. Too late.
- Candidate 2: A startup founder who just closed a funding round. Definitely not looking to switch.
- Candidate 3: Now this one was interesting…
Step 6: Target Locked
The third candidate’s profile:
- Working in DevRel at a Series D company
- That company had recently laid off people in the marketing department (instability signal)
- Her recent talk topics focused on platform engineering — a perfect match with the startup’s direction
- 14,000 Twitter followers, posting memes that actually resonated with real engineers
- No LinkedIn updates in two months (possibly exploring new opportunities)
Final Step: Drafting the Outreach
The agent drafted a personalized outreach email referencing her recent talk, highlighting the overlap with the company’s target customer profile, and mentioning the creative freedom that comes with a small team.
Total time: 31 minutes.
What This Example Really Shows
The founder didn’t receive a job description or a long list of candidates. They got a single, precisely targeted recommendation.
The critical point: nobody told the agent how to execute any of these steps.
- Nobody said to search YouTube for conference talks
- Nobody said to use posting frequency as a proxy for job dissatisfaction
- Nobody said to monitor company layoff activity
The agent reasoned its way through: forming hypotheses, testing them, hitting dead ends, pivoting, and persisting until it found the answer.
This is exactly what Sequoia means by “figuring things out.”
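That hypothesize-test-pivot loop can be sketched in a few lines. This is a toy illustration with canned data, not Sequoia's example agent: the strategy names, confidence scores, and candidate pool below are all hypothetical stand-ins, while the real agent worked against live LinkedIn, YouTube, and Twitter data.

```python
def titles_strategy(pool):
    """Hypothesis 1: job titles identify strong candidates.
    Dead end: titles can't separate people who excel from people who
    coast, so this strategy reports low confidence in its own results."""
    return [p for p in pool if p["title"] == "DevRel"], 0.2

def talks_strategy(pool):
    """Hypothesis 2: well-received conference talks signal genuine reach."""
    return [p for p in pool if p["talk_engagement"] > 0.7], 0.8

def run_agent(pool, strategies, min_confidence=0.5):
    """Try strategies in order, pivoting whenever confidence is too low."""
    for strategy in strategies:
        candidates, confidence = strategy(pool)
        if candidates and confidence >= min_confidence:
            return candidates
    return []  # every hypothesis hit a dead end

pool = [
    {"name": "A", "title": "DevRel", "talk_engagement": 0.9},
    {"name": "B", "title": "DevRel", "talk_engagement": 0.3},
    {"name": "C", "title": "SRE",    "talk_engagement": 0.8},
]
print([c["name"] for c in run_agent(pool, [titles_strategy, talks_strategy])])
```

The point of the sketch: the pivot from titles to talks is not hard-coded as a sequence of commands; it falls out of each hypothesis being scored and discarded when it proves weak.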
The Exponential Curve: Capabilities Double Every 7 Months
Sequoia made a bold prediction in the blog post:
Long-horizon agent capabilities are roughly doubling every 7 months.
This figure comes from METR (Model Evaluation & Threat Research, a nonprofit that measures AI capabilities empirically), not from speculation.
What Happens If This Curve Continues?
Extrapolating along this exponential curve:
| Year | Task Duration Agents Can Reliably Handle |
|---|---|
| 2026 | ~30 minutes of expert-level work |
| 2028 | A full day of expert-level work |
| 2034 | An entire year of work |
| 2037 | A full century of work |
In other words: goals you planned to achieve by 2030 could be accomplished in 2026.
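A back-of-the-envelope version of that extrapolation, assuming a ~30-minute task horizon at the start of 2026 and a 2,000-hour full-time work year (both assumptions are mine, not METR's):

```python
# Extrapolate a ~7-month doubling in agent task horizon.
# Assumptions (mine, not METR's): ~30 minutes of expert work at the
# start of 2026; 2,000 hours in a full-time expert work year.

DOUBLING_MONTHS = 7
BASE_MINUTES = 30
WORK_YEAR_HOURS = 2000

def task_horizon_minutes(months_from_2026):
    """Task duration an agent can handle, N months after the start of 2026."""
    return BASE_MINUTES * 2 ** (months_from_2026 / DOUBLING_MONTHS)

for year in (2026, 2028, 2034, 2037):
    hours = task_horizon_minutes((year - 2026) * 12) / 60
    print(f"{year}: ~{hours:,.0f} hours (~{hours / WORK_YEAR_HOURS:.1f} work-years)")
```

The exact crossover years are sensitive to the assumed 2026 baseline, so the table's milestones should be read as order-of-magnitude markers rather than precise dates; what the arithmetic does make vivid is how quickly a 7-month doubling compounds.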
The Debate
This prediction sparked significant debate.
Supporters: OpenAI’s Greg Brockman shared the blog post approvingly.
Skeptics felt it was overly optimistic, ignoring real-world problems:
- Agents still make mistakes
- They hallucinate (confidently stating incorrect information)
- They lose context
- They sometimes charge confidently in completely wrong directions
Sequoia acknowledged these issues directly:
“To be clear: agents still fail. They hallucinate, lose context, and sometimes charge confidently down exactly the wrong path. But the trajectory is unmistakable, and the failures are increasingly fixable.”
The key insight isn’t the capability level at any single point in time — it’s the rate of improvement.
It’s similar to when smartphones first launched. The experience was terrible — short battery life, barely any apps, constant crashes. But the pace of evolution was undeniable, so the direction was clear.
From Talkers to Doers: Business Models Must Be Rewritten
This is what I consider the most insightful section of Sequoia’s article.
2023-2024: AI as “Talkers”
The AI applications of the past two years were fundamentally extensions of conversational ability:
- ChatGPT: Chatting with you
- AI writing assistants: Helping you draft articles
- AI coding assistants: Helping you write code snippets
These were impressive, but their impact was limited. Why? Because after the conversation, you still had to do the actual work yourself.
2026-2027: AI as “Doers”
Sequoia predicts the next generation of AI applications will be doers:
“They will feel like colleagues.”
This implies several fundamental shifts:
| Dimension | Talker Era | Doer Era |
|---|---|---|
| Usage frequency | A few times daily | Running 24/7 |
| Instances | One chat window | Multiple instances working simultaneously |
| User role | Individual contributor | Managing a team of agents |
| Interaction model | Chat conversation | Task delegation |
What This Means for Entrepreneurs
Sequoia posed four key questions:
- What work can you complete? Identify tasks requiring sustained attention
- How do you productize it? Evolve UIs from chatbots to task delegation interfaces
- Can it execute reliably? Improve feedback loops so agents can self-correct
- How do you price it? Charge based on value and outcomes, not API calls
This isn’t about adding an AI feature to your product. It’s about rethinking the entire business model.
Industries Where Agents Are Already Operating
Sequoia highlighted multiple verticals where agents are gaining traction:
| Industry | Representative Company | Agent’s Role |
|---|---|---|
| Healthcare | OpenEvidence | Specialist physician |
| Legal | Harvey | Legal associate |
| Cybersecurity | XBOW | Penetration testing expert |
| DevOps | Traversal | Site reliability engineer |
| Sales | Day AI | Sales representative |
| Recruiting | Juicebox | Executive recruiter |
| Mathematics | Harmonic | Research mathematician |
| Chip design | Ricursive | Chip engineer |
| AI research | GPT-5.2/Claude | AI researcher |
That last entry is the most significant: AI is helping build better AI. This is a self-accelerating flywheel.
What Should We Do About It?
After reading Sequoia’s article, one question naturally arises: If AI can do everything, what’s left for us?
Fear Is Natural
The headlines from 2025 have been unsettling:
- Medical diagnosis competition — AI won
- Programming competition — AI won
- Mathematical Olympiad — AI won
- Stock trading competition — AI won
This pressure is real. But fear won’t solve anything.
Back to That Definition
Remember Sequoia’s framework?
“A person who can figure things out needs three things: knowledge, reasoning, and the ability to iterate toward answers.”
This definition applies to us too. AI is evolving — and so must we.
The Key Word: Direction
The crucial skill isn’t execution anymore. It’s instruction:
- The more rigorous your thinking, the clearer your instructions
- The clearer your instructions, the more precise the results
This isn’t a crisis. It’s a shift where the ability to direct becomes more valuable than the ability to execute.
From Individual Contributor to Team Manager
Sequoia made this clear: the user’s role shifts from “individual contributor” to “managing a team of agents.”
This means:
- The competition isn’t about who writes code fastest — it’s about who decomposes problems best
- It’s not about who memorizes the most knowledge — it’s about who knows the right questions to ask
- It’s not about execution speed — it’s about directional judgment
Key Takeaways
Sequoia’s article distills down to three core points:
- A functional definition of AGI: The ability to figure things out
- All three puzzle pieces are in place: Knowledge + Reasoning + Iteration
- 2026 is the Year of the Doer: From talkers to doers, business models get rewritten
Whether or not you agree that “AGI has arrived,” one thing is certain:
The boundary of AI capability is expanding exponentially, and the rate of expansion itself is accelerating.
When the speed of improvement is itself accelerating, our predictions about the future tend to be far too conservative.
As Sequoia concluded:
“Saddle Up!”