What is self-reflecting AI?
We discuss Elon's plans, the Dunning-Kruger effect, and AI papers.
GM! Welcome to The Status Code.
We're like explorers, uncovering new frontiers of AI discovery for you to marvel at.
Here’s what we have for today:
❌Elon’s X project
(Estimated reading time: 3 minutes 10 seconds)
Not subscribed yet? Stay in the loop on AI with five minutes of The Status Code each week.
The two main stories from last week, if you only have ~2 minutes to spare:
1/ 👯🏻Self-reflecting GPT
We got GPT-4 last month.
Researchers and experts have just had enough time to play with GPT-4 and wrap up their findings on GPT-3.5.
Let’s deep dive into two papers:
Ever heard of NPCs?
In a game, NPCs are non-player characters that help move the story along. What if they could think for themselves?
Well, some folks at Stanford and Google Research made it happen. They created a sandbox town with 25 people, each with a unique personality, some of them living in families.
Each person is a generative agent.
Then, they added three components to each agent:
Observation, Planning, and Reflection
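A toy sketch of how those three components might fit together. All the names and logic here are illustrative assumptions on my part, not the paper's actual code:

```python
class GenerativeAgent:
    """Minimal, hypothetical sketch of an agent with the three
    components above: observation, planning, and reflection."""

    def __init__(self, name, personality):
        self.name = name
        self.personality = personality
        self.memory = []  # a growing stream of natural-language records

    def observe(self, event):
        # Observation: record what happens around the agent.
        self.memory.append(("observation", event))

    def plan(self):
        # Planning: turn recent memories into a next action.
        recent = [text for kind, text in self.memory[-5:]]
        latest = recent[-1] if recent else "nothing yet"
        return f"{self.name} acts on: {latest}"

    def reflect(self):
        # Reflection: periodically condense raw memories into a
        # higher-level insight that goes back into the same stream.
        if self.memory and len(self.memory) % 10 == 0:
            insight = f"summary of the last {min(10, len(self.memory))} memories"
            self.memory.append(("reflection", insight))
```

In the paper, each of these steps is driven by a language model; the point is just that all three read from and write to one shared memory stream.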
Turns out, a large language model like GPT-3.5 encodes a wide range of human behavior learned from its training data.
These agents even threw a Valentine's Day party. One asked another to be their date!
Imagine how much scripting it would take to create a party scene and plan behaviors for each character. But, when given the freedom, they acted on their own.
Lastly, the models learned from their mistakes. If an agent behaved oddly, it would notice next time.
Speaking of reflection, we have another paper on that.
What if we made GPT learn from its mistakes?
Human innovation was built on reflection. We got here by accepting mistakes and trying new things.
Apparently, it’s almost the same with GPTs.
Each agent gets to reflect on its actions and improve its performance.
For starters, reflection is cheaper and less memory-intensive than retraining.
Researchers also found that reflection enables more intuitive search queries, with each self-correction feeding the next in a chain reaction.
In conclusion, reflection could be a second type of memory.
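That loop can be sketched roughly like this. This is my illustrative take on the idea, with made-up function names, not the paper's code:

```python
def reflexion_loop(task, act, evaluate, reflect, max_trials=3):
    """Hypothetical sketch of a Reflexion-style loop: the model acts,
    gets feedback, writes a verbal self-reflection into memory, and
    retries with those reflections in context. No weight updates:
    the lesson is stored as text, a second kind of memory."""
    reflections = []
    attempt = None
    for trial in range(max_trials):
        attempt = act(task, reflections)   # reflections go in as context
        ok, feedback = evaluate(attempt)
        if ok:
            break
        reflections.append(reflect(attempt, feedback))
    return attempt, reflections
```

Because the "learning" lives in the text of `reflections` rather than in model weights, it costs a few extra prompts instead of a training run.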
So, why do we need it?
Well, you can test social science theories with these adaptations. And game devs can create NPC foundations with less effort.
The only challenge is making sure the most relevant pieces of the agent's memory are retrieved and synthesized when needed.
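One way to picture that retrieval step: score every memory on how recent, how important, and how relevant it is, and surface only the top few. This is a loose sketch in the spirit of the generative-agents paper; the weights, scales, and `query_sim` helper are my assumptions:

```python
import math

def retrieve(memories, query_sim, now, top_k=3):
    """Hypothetical relevance-weighted retrieval:
    score = recency + importance + relevance.
    `memories` is a list of dicts with "text", "time", "importance";
    `query_sim` is an assumed helper mapping a memory's text to a 0-1
    similarity with the current situation."""
    def score(m):
        recency = math.exp(-0.1 * (now - m["time"]))  # decays over time
        importance = m["importance"] / 10              # 1-10 scale
        relevance = query_sim(m["text"])
        return recency + importance + relevance
    return sorted(memories, key=score, reverse=True)[:top_k]
```

The hard part the paper wrestles with is exactly this scoring: tune it wrong and the agent either dwells on trivia or forgets what matters.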
2/ ❌Elon’s X project
Elon Musk has big plans for Twitter! And it's more than just changing the logo to Doge.
Last week was huge for Twitter. It quietly merged into X Corp, reviving the "X" name of the online payments firm Elon founded, which later became part of PayPal.
And 10,000 graphics processing units (GPUs) were purchased for Twitter. GPUs at that scale are typically bought to train AI models.
The total investment comes to around $155M!
This is interesting, since Musk recently signed an open letter with other stakeholders calling for a pause on AI research. He has also hired AI engineers from DeepMind, like Igor Babuschkin and Manuel Kroiss.
While there's no confirmation they're working at Twitter, this could be the ChatGPT alternative Elon mentioned before.
Last we heard, Babuschkin was building a team. So, if the GPUs are in, things are taking shape. If they use Twitter data, it'll be the most advanced use of data yet.
Now, looking at Elon's track record, this will be hit or miss. His investments don't look great right now: Tesla's shares have been dropping since he took over Twitter, the Boring Co. is stalled, and Neuralink is still in development.
Try Refind, and learn a little bit of everything!
Refind helps you find high-quality articles every day in an organized way.
1 trend you can pounce on. Reading time: ~1 minute 10 seconds
"Half-knowledge is dangerous."
We've heard this many times, right? Well, it's the core idea of the Dunning-Kruger effect.
Psychologists David Dunning and Justin Kruger tested people's performance at Cornell in the '90s. They found that low-performing folks rated their skills higher than they actually were.
Take me, for example:
When I first learned to drive, I got overconfident. I drove in the city, hung out with friends (without a license), and then took the test.
Yep, you guessed it: I failed miserably.
Now, think about the large datasets behind AI.
AI is used in self-driving cars to see the environment, make decisions, and navigate. But sometimes things go wrong.
We've seen videos of Tesla cars perceiving a truck as two small cars. Or the car may have no data about a certain model of vehicle.
You’re driving, and there’s one thing the car doesn’t recognize, and BOOM, you’ve just crashed.
The Dunning-Kruger effect also shows up in AI tools like ChatGPT.
I’ve made a formula:
AI confidence = (Training Data Quality + Self-Assessment Capability) / Overconfidence Threshold
Training Data Quality: accuracy and reliability of the data
Self-Assessment Capability: system's ability to recognize its limitations
Overconfidence Threshold: A value that keeps AI systems from being too confident
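Plugging some numbers into my back-of-the-napkin formula. This is a toy illustration with made-up 0-1 scales, not a real metric:

```python
def ai_confidence(data_quality, self_assessment, overconfidence_threshold):
    """Toy version of the formula above: quality and self-assessment
    on a 0-1 scale, divided by a threshold that dampens overconfidence.
    Purely illustrative; the scales are assumptions."""
    return (data_quality + self_assessment) / overconfidence_threshold
```

Intuitively, raising the threshold (or lowering data quality) should push reported confidence down, which is the behavior we want from a well-calibrated system.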
We have to acknowledge this effect to make progress in AI.
And AI development isn't going to stop while we perfect it.
But, we can set policies and thresholds for training data. We can also make sure AI systems recognize when they're wrong, like with Reflexion.
It's high time for authorities and governments to wake up and pay attention.
🦾What happened this week
OpenAI announced their Bug Bounty program
Google’s recycle sorting robot
Stanford’s ML scholar program is now accepting applications
Mark Maunder wrote this article on AI Innovation
Alibaba is bringing its ChatGPT rival to all its products soon
OpenAGI, the LLM x domain experts paper, was released
OpenAI also released their code for their consistency model
Have a look at CAMEL, an interactive ChatGPT and agents collaboration
This article goes into why we need to slow down on AI
Intelligence platform Native AI raised $3.5M in seed funding
Marketing company Stagwell raised $1M in funding for AI platforms
The US ranks 1st among countries by investment received by startups
An AI-focused project, CryptoGPT secured $10M in funding
AI firm Xapien closed £4.5M in funding
Carbon Robotics raised $30M to sell AI-powered robots
R&D company NobleAI secured $17M in Series A funding
Investing firm In Revenue Capital raised $3.8M in funding
AI software firm Fivecast raised AU$30M in Series A funding
🐤Tweet of the week
gen z graduating into a recession while all entry level jobs are being replaced by AI
— BEP-01055 (@GiuseppeBisceg5)
Apr 13, 2023
😂 Meme of the week
That’s it for this week, folks! If you want more, be sure to follow our Twitter (@CallMeAIGuy)
🤝 Share The Status Code with your friends!
Did you like today's issue?