
GM! Welcome to Get Into AI!
Your favorite AI news guy, back again.
I'm like that friend who reads all the AI Discord servers so you don't have to wade through 3,619 messages yourself.
Time is precious, and I'm here to save you 351 minutes of reading!
Here is what I have for you:
QwQ-32B flexes on massive models
Apple's M3 Ultra/M4 Max pack ridiculous RAM
OpenAI plans $20K/month agent subscriptions
But first, words from our sponsor:
Find out why 1M+ professionals read Superhuman AI daily.
AI won't take over the world. People who know how to use AI will.
Here's how to stay ahead with AI:
Sign up for Superhuman AI. The AI newsletter read by 1M+ pros.
Master AI tools, tutorials, and news in just 3 minutes a day.
Become 10X more productive using AI.
Alright, let’s dive in!
Three major headlines for the day.
1/ Alibaba's QwQ-32B Claims to Match DeepSeek R1-671B
Alibaba's Qwen team finally released QwQ-32B, a model they claim rivals DeepSeek's R1 - despite being 20x smaller!
Using two-stage RL training (math and coding first, then general capabilities), they've created a 32B model that apparently punches way above its weight class.
While we don't have independent verification yet, the team has shared enough implementation details to make this believable.
Why it matters: This continues the trend of smaller, more efficient models challenging the giants.
If true, it means we don't necessarily need monster-sized models to get top-tier reasoning capabilities.
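Want to kick the tires yourself? The weights are out in the open, so a quick local test is doable. Here's a minimal sketch using Hugging Face transformers, assuming the model lives under the repo ID Qwen/QwQ-32B and that you have the memory for it (or grab a quantized build):

```python
# Minimal sketch: trying QwQ-32B via Hugging Face transformers.
# Assumes the weights are published under the repo ID "Qwen/QwQ-32B"
# and that your machine has enough memory (or use a quantized variant).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/QwQ-32B"  # assumed repo ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick bf16/fp16 based on the checkpoint
    device_map="auto",    # spread layers across available GPUs/CPU
)

messages = [{"role": "user", "content": "How many r's are in 'strawberry'?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Reasoning models tend to "think out loud", so leave room for long outputs.
outputs = model.generate(inputs, max_new_tokens=2048)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Treat it as a starting point, not gospel: the fun part is watching the long chain-of-thought it produces before the final answer.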

2/ Apple Unveils M3 Ultra with 512GB Unified Memory

Apple announced new Mac Studio configurations featuring the M3 Ultra with up to 512GB of unified memory.
At $10,499, it's not cheap, but it's potentially capable of running models like DeepSeek R1 or Llama 3.1 405B locally.
The unified memory architecture provides 819 GB/s bandwidth, making it surprisingly competitive with specialized AI hardware.
Why it matters: This could be a game-changer for local AI development, potentially allowing individual developers to run models that previously required expensive server setups.
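For a rough sense of what 512GB actually buys you, here's some back-of-envelope math of my own (not Apple's numbers, and weights only; the KV cache and activations add more on top):

```python
# Back-of-envelope check (my own rough math, not a benchmark):
# can a model's quantized weights fit in 512GB of unified memory?
def weight_footprint_gb(params_billions: float, bits_per_param: float) -> float:
    """Approximate memory for weights only; KV cache and activations add more."""
    bytes_total = params_billions * 1e9 * (bits_per_param / 8)
    return bytes_total / 1e9  # decimal GB, close enough for a sanity check

for name, params in [("DeepSeek R1 (671B)", 671), ("Llama 3.1 405B", 405), ("QwQ-32B", 32)]:
    for bits in (4, 8):
        print(f"{name} @ {bits}-bit: ~{weight_footprint_gb(params, bits):,.0f} GB")

# DeepSeek R1 @ 4-bit:   ~336 GB -> fits in 512GB with headroom
# Llama 3.1 405B @ 4-bit: ~203 GB -> comfortable; 8-bit (~405 GB) gets tight
```

So yes, at 4-bit quantization even the biggest open models squeeze into 512GB, which is exactly why this box has local-AI folks excited.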

3/ OpenAI Plans Premium Agents at $2K-$20K Monthly

According to The Information, OpenAI is planning to charge between $2,000 and $20,000 per month for advanced AI agents capable of automating coding and PhD-level research.
SoftBank has reportedly committed to spending $3 billion on these agent products this year alone.
Why it matters: This signals OpenAI's push into enterprise tools and reveals their pricing ambitions for truly capable AI systems.
It also suggests they believe they have agent capabilities worth those eye-watering price tags!

That's it for today, folks! If you want more, be sure to follow us on Twitter (@BarunBuilds).
Catch you tomorrow! ✌️
🤝 Share Get Into AI with your friends!
Did you like today's issue?