Closed for business
Why agents change everything about API strategy
tl;dr: Some thoughts on a choice facing software companies: close off your data to AI agents, or stay open. The API wars aren’t new, but they feel particularly acute in the context of agent-led workflows.
Thanks Reggie, Mike, Joe, Alex, and Antoine for the IRL chats that encouraged me to write this.
Enjoy!
If you didn’t catch Dwarkesh’s conversation with Karpathy, I recommend checking it out. I don’t take it all as gospel, but his statement, “we are in the decade of agents, not the year of agents”, definitely stoked some thoughts. He argues that agent capabilities are progressing slower than the hype might suggest.
I think the capability-to-timeframe ratio matters here, a lot. If agent capabilities progress slowly, and this really is a decade-long thing, it changes how both new startups and legacy companies play their hand.
Consider what an agent needs to be useful. Say you’re building something to help with accounts payable. The agent needs to talk to Sage, pull data from Stripe, check Excel spreadsheets, and so on. It needs to move between systems, gathering information and compiling it into something actionable. Each data source is valuable, yes, but it’s the aggregation and massaging of the data that could be an order of magnitude more valuable.
Now imagine you’re one of those source-of-truth companies. You’re Salesforce, or HubSpot, or Sage. You’ve built your business on owning essential data. For years, that ownership was the moat. Integrations existed, sure - Zapier could connect systems, custom software could pull data across sources - but agents can do exponentially more with that data. They reason, they handle ambiguity, they create workflows that traditional integrations never could. The value gap between owning data and being where data becomes useful just got a lot wider.
This is exactly what I think is going to stir up some change.
I’ve tried to think about whether we’ve seen this movie before. The best I’ve got is how telecom carriers owned SMS infrastructure and kept it closed and monetised. WhatsApp, iMessage, and WeChat built better messaging experiences on top of data networks. Carriers tried to protect their SMS revenue by being restrictive, but users went to wherever gave them the best experience. The value shifted from owning the pipes to being where people actually communicated. It feels like the same shift is happening now with data and agents.
This creates a choice, though it’s less binary than it first appears. Companies are landing on a spectrum of strategies.
Some are closing off entirely. Google has started limiting what browsing agents can see, down to about ten links in search results. Reddit is doing the same with Perplexity. Atlassian launched Rovo, their own agent, while keeping Jira’s APIs restricted for third-party builders (for now). These companies are controlling how AI works with “their” data.
Others might be selectively open: open to their own agents while throttling competitors. Or open at a price, charging premium rates for agent access the way they might charge for API calls. Salesforce could let Einstein access everything while making it prohibitively expensive for third parties. That’s not fully closed, but it’s not really open either. Salesforce rent-seeks in this way for most of their integrations already.
And then there are companies betting on openness. Linear is here. Cursor, Codegen, and Devin all build agents that integrate deeply with Linear. We make our data structure clean and accessible. We’re betting that being the workbench where agents operate makes us stickier than trying to own every tool (or most of them) ourselves.
I think where you land on this spectrum matters enormously, but I’m not certain how it plays out.
The optimistic case for openness is that user experience wins. We can’t build the best agent for every workflow. No single company can. The technology is evolving too fast, and the use cases are too diverse. By being open, we let the best agents win, and that makes Linear more useful. We don’t lock users in, but they stay anyway because we’re the hub that connects everything. Over time, closed systems lose talent, lose ecosystem momentum, lose to whoever delivers the best experience.
But there’s a real counterargument, and it’s not solely reliant on enterprise inertia. What if proprietary data creates genuine competitive advantage? Salesforce has decades of customer interaction data. If they use that to train Einstein, and Einstein gets meaningfully better at predicting customer behavior because of it, third-party agents can’t match that. The closed loop creates a data moat that openness can’t overcome. Vertical integration wins when the integration itself is the product.
Apple proves this is possible. They’re vertically integrated, they control the stack, and they deliver experiences that feel more cohesive precisely because everything is designed together. Why wouldn’t the same logic apply to enterprise software? Just to note, Apple’s recent partnership with OpenAI weakens this argument, but let’s overlook that for now…
Back to Karpathy and the stoking of thoughts. The answer probably comes down to pace of change. If AI capabilities are evolving slowly, being closed (if you have the data) wins because one company can keep up and deliver a polished experience. But if capabilities are evolving fast, no single company can stay on the frontier across every use case. Modularity and openness win because they let you plug in whatever’s best right now.
Currently, the pace feels fast. Fast enough that I think openness wins. But I could be wrong about the pace, or wrong about whether pace even matters as much as I think it does. And if Karpathy is right, and this takes a decade to play out because capability improvement slows down, a decade is long enough for the closed players to figure out their agent strategy and close the user experience gap.
Still, I see accelerants. New companies choosing tools based on “works with all our agents” (including ones they build themselves) rather than feature completeness. Enterprise buyers starting to demand “agent-ready” as a procurement criterion, the same way “API-first” became table stakes. Or, maybe most interesting: if agents get good enough at core workflows, companies might accept a “good enough” data model that’s open over a comprehensive one that’s closed, because the agent layer compensates for gaps.
But all of that assumes agents get meaningfully better, fast. What if they plateau? What if the improvements are incremental rather than transformative? Then the whole thesis weakens. Openness matters less if the thing you’re open to isn’t that valuable.
What I keep coming back to is that in the “agent era”, being the place where data lives isn’t enough. You have to be the place where data becomes useful. And if you’re not the best at making it useful, and you block everyone else from trying, users will eventually go somewhere that doesn’t block them.
That’s the bet. User experience wins in the long run, even in the enterprise.


These are great complementary reads:
The Resonant Computing Manifesto: https://resonantcomputing.org/
It's still 1995 by Rex Woodbury: https://www.digitalnative.tech/p/its-still-1995