OpenAI DevDay 2025 was HUGE. Now that the emotions have settled, let’s summarize the key announcements from the conference.
1. Apps SDK in ChatGPT
OpenAI introduced a way to chat and build directly with third-party apps inside ChatGPT: Booking.com, Zillow, Coursera, Expedia, Spotify, Canva, and Figma.
My thoughts
This feature feels like the next level of MCP (it still uses MCP under the hood, at least for now). It solves the biggest MCP issue - accessibility.
For non-technical users, using MCP was pretty inconvenient: you had to know what you wanted, find the right MCP, and install it in a not-so-friendly way.
With the Apps SDK, it’s like an AI assistant from sci-fi movies:
“Figma, create me a pitch deck slide from my company details.” Boom. Everything’s easy and available right inside the chat.
It’s also a great way for OpenAI to build a global position as a “marketplace.” They’re trying to be the proxy for other services - where users rely on ChatGPT instead of native UIs.
It’s the same strategy Google’s been using for years: why visit Booking’s website when you can book directly from Google Search - or now, from ChatGPT?
In my opinion, it might not be as disruptive as they hope, but it’s definitely a huge leap forward for AI.
You can now directly “ask” Booking to find you a hotel to stay at, Spotify to create a playlist, and even create designs with Canva or Figma - all without leaving the ChatGPT tab.
2. AgentKit - OpenAI's first no-code agent builder
The AI agents niche is currently dominated by no-code/low-code editors. The biggest players have been n8n, Make, and Zapier.
Now, OpenAI has entered this market with AgentKit - a visual builder for agents that lets you create both simple automation workflows and full AI agents with LLM support, integrations, and a built-in chat toolkit.
My thoughts
It’s a really promising update because it shows OpenAI is actively investing in agentic AI.
However, at first glance, the tool itself doesn’t seem particularly unique. The competition is strong - Make and Zapier are very mature platforms that have already added LLM support. n8n, on the other hand, is free (if self-hosted) and extremely customizable.
The biggest challenge for AgentKit might be model lock-in. Other tools are model-agnostic, so you can use any LLM you like. With OpenAI’s AgentKit, you’ll likely be limited to their models - which may not fit every use case. Different models excel at different tasks, and pricing plays a huge role.
Still, I’m excited to test AgentKit and really happy OpenAI is taking the agentic AI route.
3. Sora 2 via API
Sora 2 was recently released with closed, invitation-only access. But even with a limited user base, it went insanely viral. Social media was flooded with wild AI-generated videos - especially deepfakes (mostly of Sam Altman).
Now, Sora 2 is accessible through the API, with pricing starting at $0.10/sec of generated video.
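At $0.10 per generated second, costs add up fast. A quick back-of-the-envelope estimate (plain Python, no API calls; the only input is the starting rate quoted above - actual pricing may vary by resolution and model tier):

```python
# Rough cost estimate for Sora 2 API usage at the quoted
# starting rate of $0.10 per generated second of video.
SORA2_PRICE_PER_SEC = 0.10

def sora2_cost(seconds: float, clips: int = 1) -> float:
    """Estimated cost in USD for generating `clips` videos of `seconds` each."""
    return round(seconds * clips * SORA2_PRICE_PER_SEC, 2)

# A single 10-second clip costs about a dollar...
print(sora2_cost(10))            # 1.0
# ...but a campaign of 50 thirty-second spots is a different budget line.
print(sora2_cost(30, clips=50))  # 150.0
```

Even at the entry rate, generating video at scale is a real line item - worth estimating before wiring it into a product.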
My thoughts
Sora 2 is an amazing video model. It’s not 100% realistic, but it’s far more realistic than anything before - so realistic it could be dangerous.
We’ll probably see a flood of AI-generated videos in the coming weeks. Be ready to spot deepfakes - and teach older family members so they don’t get fooled.
On the business side, having API access opens up tons of opportunities.
Honestly, within the next 2 years, I think around 95% of TV commercials will be AI-generated. Crazy times.
4. Integrating Codex with workflows
Codex is, I’d say, the best agentic AI coding tool available at the moment.
OpenAI pushed it even further by making integration with workflows much easier.
Codex is now available through Slack integration, allowing users to mention Codex in a conversation with instructions - which automatically spawns a task to implement something in the right environment.
OpenAI also introduced the Codex SDK, which lets you use the same optimized agent from the CLI directly in code through a simple TypeScript SDK. It’s also available via GitHub Actions.
My thoughts
This is great news for both vibe coders and developers doing assisted coding.
Slack integration is super useful, and so is the native GitHub Actions support.
It creates some great developer experience flows, like:
- Running Codex tasks to fix failed builds from GitHub Actions
- Requesting new features or tweaks directly from Slack - with verification by an experienced dev
And much more.
Honestly, this makes Codex ready for serious production use - even for large projects.
And let the numbers speak for themselves: most of OpenAI’s own code is written using Codex. Love to see that.
5. New models: GPT-5 Pro, gpt-realtime-mini, and gpt-image-1-mini
It’s the least flashy news, but still worth noting.
OpenAI introduced GPT-5 Pro, now available in the API (and also in Cursor). It’s the smartest model yet - built to think deeply and deliver the best possible answers.
It’s also very expensive: $15 per 1M input tokens and $120 per 1M output tokens.
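To put those prices in perspective, here’s a small sketch (plain Python, rates taken from the paragraph above; the token counts are made-up example values, not real benchmarks):

```python
# GPT-5 Pro API pricing quoted above:
# $15 per 1M input tokens, $120 per 1M output tokens.
INPUT_PRICE_PER_M = 15.0
OUTPUT_PRICE_PER_M = 120.0

def gpt5_pro_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated cost in USD for a single API request."""
    return round(
        input_tokens / 1_000_000 * INPUT_PRICE_PER_M
        + output_tokens / 1_000_000 * OUTPUT_PRICE_PER_M,
        4,
    )

# A hefty reasoning request: 20k tokens of context in, 5k tokens out.
print(gpt5_pro_cost(20_000, 5_000))  # 0.9
```

Nearly a dollar per heavy request - fine for high-stakes tasks, but a reason to route routine work to cheaper models.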
On the other hand, smaller and cheaper models for voice and images were also introduced: gpt-realtime-mini and gpt-image-1-mini.
My thoughts
GPT-5 Pro looks amazing, but due to its cost, it’ll likely be used for very specific, high-complexity tasks.
Still, having such a model available opens up big possibilities for building advanced AI agents.
6. Milestone awards
OpenAI gave milestone awards to users and organizations that consumed over 1 trillion, 10 billion, or 1 billion tokens.
This gamified reward system was inspired by YouTube’s silver/gold/diamond play buttons - and I think it’s awesome.
It’s a fitting way to reward customers who have spent that much money - and dedicated that much time - to building with OpenAI.
For me, it felt like the AI world just took a few big steps forward. And it was super exciting to follow everything live on X - just by reading posts in real time.
Kamil Kwapisz