YC's First AI Startup School
In June 2025, Y Combinator hosted its first AI Startup School in San Francisco: a free event that packed 2,500 of the brightest computer science students into one room. The speaker lineup was insane: Elon Musk, Sam Altman, Jared Kaplan, Andrew Ng, Aravind Srinivas, Fei-Fei Li and other people building the next generation of AI tools.
I followed the talks closely and highlighted some of the most interesting bits.
Sam Altman: “Don’t copy, build weird things”
Sam talked about building with conviction and being original instead of copying; he prefers people who build something different and weird. He also spoke about hiring curious, productive people who love solving hard problems.
Regarding ChatGPT, he said: “Our AI models are ahead of the products we’ve built with them.” There’s this huge gap between what models can do and what’s actually out there. That gap is a massive opportunity for startups.
He also discussed the idea of memory in ChatGPT and how it could transform personalised AI assistants into something really useful. Rather than just responding, they could help people with their day-to-day tasks. Not just chatbots, but tools that remember, help proactively, and fit into your daily life. This idea of AI that fades into the background and just gets things done is coming fast.
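To make the "memory" idea concrete, here is a toy sketch of the pattern: persist facts about the user and prepend them to every prompt. This is purely illustrative, not how ChatGPT's memory actually works; all class and function names are made up.

```python
# Toy illustration of assistant "memory": facts persist across sessions
# and get prepended to each prompt. Hypothetical, not OpenAI's implementation.

class MemoryStore:
    """Keeps simple key-value facts about the user across conversations."""

    def __init__(self):
        self.facts = {}

    def remember(self, key, value):
        self.facts[key] = value

    def as_context(self):
        # Render remembered facts as a system-style preamble.
        lines = [f"- {k}: {v}" for k, v in sorted(self.facts.items())]
        return "Known about the user:\n" + "\n".join(lines) if lines else ""


def build_prompt(memory, user_message):
    """Combine long-term memory with the new message before calling a model."""
    context = memory.as_context()
    return f"{context}\n\nUser: {user_message}" if context else f"User: {user_message}"


memory = MemoryStore()
memory.remember("timezone", "CET")
memory.remember("role", "backend developer")
prompt = build_prompt(memory, "Schedule my standup reminder.")
```

The point of the sketch is the shape of the loop: the assistant doesn't get smarter, it just stops asking you the same questions twice.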
His advice to founders: ignore the noise, don’t chase hype, and build things that only you can see. This is the best time ever to start something.
Sam also talked about the cost of running ChatGPT. High-performance systems need massive computing power and generate a lot of heat, which means complex cooling systems. All of this pushes electricity usage way up. I have to say that unlike the US and China, which have invested heavily in power-hungry AI data centres, the problem we have in Europe is limited energy generation capacity. It’s a massive bottleneck that’s holding back the deployment of large-scale AI infrastructure.
Other interesting things Sam mentioned:
Great startups don’t start big. They start with a few people chasing something different. Most people thought AGI was a joke. A tiny group believed in it, and that’s how OpenAI got started.
In the early days, don’t hire for fancy CVs or big-name companies. Hire people who are curious, sharp and get things done.
AI won’t just change tech. It’ll speed up science, reshape economies and touch everything.
Elon Musk: “We’re at the very early stage of the intelligence big bang”
Elon gave a very Elon talk. He talked about breaking things down to the basics and building up from there. Instead of copying what’s been done before, he looks at the facts, like the raw cost of materials or how much power a data centre really needs. It’s a way to spot problems and find better solutions, especially in hard tech like rockets or AI infrastructure.
He warned that digital superintelligence could be here in a year or two. His take on safety was simple: build AI that tells the truth. Not what people want to hear. He also stressed that engineers need to be useful, not just smart. Minimise ego, take responsibility and don’t break feedback loops with reality.
“We’re building things that could shape the future of civilisation, so we better get it right.”
AI, robotics, and brain tech like Neuralink are starting to come together. Neuralink could boost how fast we think, communicate, and take in information, basically helping us keep up with AI. This mix of machines and humans could speed up the path to superintelligent AI, but it also raises big questions about who we are and where this is all going.
Elon says humans are like the bootloader, the thing that starts up digital superintelligence. One day, AI will be way smarter than us. So the challenge now is making sure we build it in a way that still reflects our values.
Other interesting things Elon said:
When he started PayPal, SpaceX, or Neuralink, he wasn’t trying to build something great, just something useful.
AI should be trained to tell the truth. That’s the safety mechanism.
Neuralink is about bandwidth, about connecting brains to machines so we can keep up, communicate faster, and maybe understand AI better.
Jared Kaplan: “Asking for an answer is just the beginning, doing the task is where the value is”
Jared Kaplan, co-founder and CSO of Anthropic, had a different tone: more measured, more focused on safety and long-term thinking.
Anthropic is moving from chatbots to agentic coding assistants that can take action, use tools and complete real tasks. That means fewer prompts and more outcomes. He made it clear that they’re not trying to replace developers; they’re building tools that help them.
“Anthropic wants to help developers build with its AI, not compete with them.”
They offer tools like Claude Code so others can create new apps using their models. But it’s a tricky balance, they still need to protect their business, especially when demand is high and partners start to overlap with what they do.
Jared talked a lot about how scaling models with more data and compute still works, but results get even better when scaling is combined with reinforcement learning, especially from human feedback. That’s what helps models reason better and handle real tasks safely. Training big models is just one part; fine-tuning them the right way is just as important.
Aravind Srinivas: “Most great ideas come after launch”
Aravind is building Perplexity, a search engine powered by AI. Something like Google, but fast, simple and with answers instead of links.
What’s interesting is how Perplexity started. Early on, they tried using LLMs to run structured queries over social media data, but it didn’t scale. Then they tried something dumb: feed search results into the model and get a summarised answer. It worked, and letting users ask follow-up questions doubled engagement.
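The "feed search results into the model" trick is essentially what's now called retrieval-augmented generation. A minimal sketch of the prompt-assembly step is below; the field names and functions are mine, and the actual model call is left out since Perplexity's internals aren't public.

```python
# Sketch of the "feed search results into the model" idea.
# The result fields and function names are illustrative, not Perplexity's API.

def format_sources(results):
    """Number each search snippet so the model can cite it as [1], [2], ..."""
    return "\n".join(
        f"[{i}] {r['title']}: {r['snippet']}" for i, r in enumerate(results, start=1)
    )

def build_answer_prompt(question, results):
    """Assemble the prompt an LLM would summarise into a cited answer."""
    return (
        "Answer the question using only the sources below, citing them inline.\n\n"
        f"Sources:\n{format_sources(results)}\n\n"
        f"Question: {question}\nAnswer:"
    )

results = [
    {"title": "YC blog", "snippet": "AI Startup School took place in June 2025."},
    {"title": "Event page", "snippet": "2,500 CS students attended in San Francisco."},
]
prompt = build_answer_prompt("When was YC's AI Startup School?", results)
```

The follow-up-question part falls out almost for free: append the previous answer to the conversation and re-run retrieval with the new question.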
“The way we get things done isn’t just by giving you the answer. It’s by using tools, learning, trying, failing, fixing. That’s how real work happens.”
He also talked about product culture: make it simple, fast and clear. At Perplexity, they treat the user like they’re never wrong. You shouldn’t need to be an AI expert or know how to write perfect prompts; the product should just work. Big tech might have the best models, but they still struggle to make AI feel easy to use. Even with Google and Microsoft in the game, Aravind found a way in by focusing on better UX. According to him, the real opportunity isn’t just better models, it’s how you combine tools, knowledge and context to give useful answers.
His big bet: search engines will become smart assistants, helping you complete tasks, not just return links. Whoever figures out how to monetise that beyond traditional ads, wins.