I recently attended Web Summit 2025, where AI was (unsurprisingly) the dominant topic. There were brilliant ideas and hard-earned experiences shared on stage, but when it came to AI, the conversations often felt shallow. It was as if everyone had asked ChatGPT, “What should I think about AI?” and settled for the median answer.

Many speakers defaulted to “what only humans can do,” sidestepping the exponential gains unfolding in real time. But that view is already outdated. Dario Amodei, CEO of Anthropic, recently warned that AI could eliminate up to half of entry-level white-collar jobs and push U.S. unemployment to between 10 and 20 percent within the next five years. He is not alone. Many of us building at the frontier know this disruption is not theoretical. It is inevitable.

So the question becomes: if we truly believe what the experts are saying—that this is the next Industrial Revolution—why are we acting like it is just another tool rollout?

If you really believe the AI assumptions, then act like it.

There is a quiet consensus forming in the AI community around three ideas:

1. The models will keep getting better, fast

2. LLMs are not the path to AGI

3. Data moats are the only real defense when the marginal cost of intelligence goes to zero

Most people nod along. But few are acting like they understand the implications. If you do, then you have to accept three uncomfortable conclusions.

1. LLMs are just autocomplete – world models are the future.

Large Language Models (LLMs) are impressive, but fundamentally, they are autocomplete engines. They predict the next word without genuine understanding of meaning, causation, or context. They lack memory, reasoning, and a grasp of how the world actually works.

In contrast, world models build internal maps of reality. They simulate outcomes, anticipate consequences, and reason across time and context. If you are serious about building with AI, stop optimizing autocomplete. Start creating systems that truly understand the world.
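To make the “autocomplete” point concrete, here is a minimal sketch of what next-word prediction looks like in practice. It assumes the Hugging Face transformers library and uses GPT-2 purely as an illustrative stand-in; frontier models are vastly larger, but the core loop is the same: score every possible next token, pick one, append it, repeat.

```python
# Minimal sketch of next-token prediction (the "autocomplete" loop).
# Assumes the Hugging Face transformers library; GPT-2 is an illustrative stand-in.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The biggest risk of treating AI as just another tool is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# One forward pass produces a score for every token in the vocabulary.
with torch.no_grad():
    logits = model(input_ids).logits  # shape: (1, sequence_length, vocab_size)

# Greedily take the single most likely next token, append it, and repeat.
for _ in range(20):
    next_id = logits[0, -1].argmax().unsqueeze(0).unsqueeze(0)
    input_ids = torch.cat([input_ids, next_id], dim=-1)
    with torch.no_grad():
        logits = model(input_ids).logits

print(tokenizer.decode(input_ids[0]))
```

Greedy decoding is used here only for brevity; production systems sample from the distribution instead. Nothing in that loop maintains an explicit model of the world being described. Everything else is behavior learned in service of predicting the next token.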

2. No data moat? No business.

AI startups that wrap foundation models, such as the ones behind ChatGPT, into lightweight products are chasing a mirage. Without proprietary data, exclusive ground truths, or tight feedback loops, there is no defensibility.

These tools are easy to replicate. Worse, as frontier model providers expand their capabilities, they often absorb the most popular startup features directly into their platforms. OpenAI’s rollout of native memory, voice, search, and agent-style tools has already rendered dozens of “AI assistant” startups redundant. What was once a differentiator becomes table stakes overnight.

Without unique data or deep integration into core workflows, these businesses have no lasting advantage. As the foundation models improve, the thin wrappers built on top of them will disappear.

3. Incumbents will not adapt – they will collapse.

This is the most uncomfortable truth: when AI-native companies start to scale, the incumbents will not just fall behind. They will fall apart. Entire categories of jobs and business models will be upended.

Markets are already pricing in AI-driven efficiencies. Forward P/E ratios assume productivity gains that most companies are not operationally ready to deliver. Most are still layering AI on top of legacy processes and existing workforce structures instead of rethinking the work itself.

It is the same mistake factories made when electricity first arrived. They swapped electric motors into steam-era layouts and saw almost no productivity gain until the factory floor was redesigned around electric power from the ground up.

This is not a tooling upgrade. Businesses will need to be rebuilt entirely—people, processes, and platforms included.

What startups should be doing instead.

If you are building a tech company today, do not just use AI. Build with a clear strategy for the world that AI is about to remake. That starts with three imperatives:

1. Understand the frontier – build toward world models.

LLMs may power early experiments, but the real shift will come from systems that can reason, plan, and simulate. Startups that deeply understand the trajectory toward world models will spot opportunities others miss.

2. Own your data advantage.

Your defensibility comes from proprietary data, closed feedback loops, and embedded access to workflows. If you are not collecting and learning from ground truth in real time, you are just a thin UI on someone else’s intelligence.

3. Choose your side: disrupt or enable.

You are either building the AI-native business that reimagines workflows from scratch or helping incumbents survive their reckoning. Most enterprises are still layering AI on top of legacy systems. When the survival panic hits, they will not need tools. They will need full-stack transformations.

Conclusion

A few weeks ago, at a private roundtable during Vancouver Startup Week, several of us talked about this exact problem. Too many teams are focused on low-hanging fruit. Simple wrappers. Easy demos. Ironically, those are often the companies with the worst odds of survival. The ones willing to tackle the hard, transformative problems may actually have the better shot.

This moment is not about bolting AI onto what already exists. It is about rebuilding from first principles in a world where intelligence is cheap and everywhere.

If you are a builder, do not ship what sounds smart. Ship what changes the world.

Francis Silva is a founder and former enterprise AI leader.
