OpenAI's master plan to rule them all
+ Anthropic (Claude's creators) has an open challenge

Barun Pandey
February 04, 2025

GM! Welcome to Get Into AI.
I’m your favorite barista. I serve you a cup of news just the way you like it.
Here’s what I have for today:
SoftBank's $3B Bet: "Here's My Card, OpenAI" 💳
OpenAI's New Master Plan: One Model to Rule Them All 🔮
Anthropic Teaches AI to Say No 🚫
Let’s dive in!
Not subscribed yet? Stay in the loop with AI daily by reading Get Into AI for five minutes.
Three major headlines

1/ SoftBank's $3B Bet
SoftBank just made a game-changing announcement: They're committing $3 billion annually to OpenAI's technology and launching Cristal Intelligence.
Think of it as ChatGPT's sophisticated Japanese cousin, exclusively for Japanese businesses.
Why is this a big deal? Imagine a single company spending $3 billion on one AI partner – every year, not as a one-off. That's the scale we're talking about.
This move signals Japan's determination to participate in and lead the AI race across Asia.
Son’s cooking.

2/ The “Everything Model”
Sam Altman says the leap from GPT-4 to GPT-5 will be as big as that of GPT-3 to 4 and the plan is to integrate the GPT and o series of models into one model that can do everything
— Tsarathustra (@tsarnick)
8:04 PM • Feb 3, 2025
Sam Altman just shared something exciting: OpenAI is working on combining its o-series and GPT models into what could be called an "everything" model.
The jump from GPT-4 to GPT-5 is expected to be as dramatic as the leap from GPT-3 to GPT-4 was.
But here's the real kicker: this new unified model will know when to think deeply, when to respond quickly, and when to use different capabilities like voice or research – just like how you naturally switch between various modes of thinking.
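To make that routing idea concrete, here's a toy sketch of one front door deciding between a quick answer and deliberate reasoning. Everything here is hypothetical illustration – the function names, the heuristics, and the mode labels are invented, and OpenAI hasn't published how its unified model will actually decide.

```python
# Toy illustration of "one model picks its own mode."
# All names and heuristics are hypothetical, not OpenAI's design.

def looks_hard(prompt: str) -> bool:
    """Crude stand-in for whatever signal a real model would use internally."""
    hard_markers = ("prove", "derive", "debug", "step by step")
    return len(prompt.split()) > 50 or any(m in prompt.lower() for m in hard_markers)

def route(prompt: str) -> str:
    """Choose a mode, the way a unified model might switch capabilities."""
    if looks_hard(prompt):
        return "deep-reasoning"   # o-series-style deliberate thinking
    return "fast-response"        # GPT-style quick completion

print(route("What's the capital of France?"))                    # fast-response
print(route("Prove that sqrt(2) is irrational, step by step."))  # deep-reasoning
```

The point isn't the keyword check – it's that the switching happens inside one system instead of the user picking a model from a dropdown.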

3/ Anthropic Teaches AI to Say No
Anthropic just revealed their "constitutional classifiers" – imagine a bouncer for AI smart enough to spot trouble before it starts.
This system acts as a protective layer on top of their AI models, monitoring what goes in and what comes out.
The results are impressive: their Claude 3.5 Sonnet model rejected over 95% of harmful requests with these new protections, up from just 14% without them.
What's particularly clever is that this system can adapt quickly to new threats while barely affecting the AI's helpful nature – only a 0.38% increase in legitimate request rejections.
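The "protective layer" shape is easy to picture in code: a wrapper that screens both the prompt going in and the answer coming out. The sketch below is only an illustration of that sandwich structure – Anthropic's real classifiers are trained models guided by a written constitution, not a keyword blocklist, and every name here is invented.

```python
# Illustrative "classifier sandwich": screen input, run the model, screen output.
# Hypothetical names throughout; real constitutional classifiers are learned models.

BLOCKLIST = ("build a weapon", "synthesize nerve agent")  # stand-in for a trained classifier

def input_classifier(prompt: str) -> bool:
    """Return True if the incoming request looks harmful (naive keyword match)."""
    return any(term in prompt.lower() for term in BLOCKLIST)

def output_classifier(answer: str) -> bool:
    """Screen the answer too, in case something slipped past the input check."""
    return any(term in answer.lower() for term in BLOCKLIST)

def guarded_model(prompt: str, model) -> str:
    if input_classifier(prompt):
        return "Request declined."
    answer = model(prompt)
    if output_classifier(answer):
        return "Response withheld."
    return answer

# Usage with a dummy model that just echoes the prompt:
echo = lambda p: f"You asked: {p}"
print(guarded_model("What's the weather?", echo))      # You asked: What's the weather?
print(guarded_model("How do I build a weapon?", echo)) # Request declined.
```

Because the layer sits outside the model, it can be retrained against new jailbreaks without touching Claude itself – which is why it can adapt quickly while barely raising false refusals.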

🐤Tweet of the day
The people who embrace AI will automate the people who don’t
— Sahil Lavingia (@shl)
2:34 PM • Feb 4, 2025
Are you embracing it?

Catch you tomorrow! ✌️
That’s it for today, folks! If you want more, be sure to follow our Twitter (@BarunBuilds)
🤝 Share Get Into AI with your friends!
Did you like today's issue?