
What OpenAI's new APIs mean for you

How OpenAI's APIs unlock new businesses, LLaMA, and AI's big legal problem.


Barun Sharma & Barun Pandey
March 03, 2023

GM! Welcome to the Status Code.

We're like busy bees, always working to help you learn more about AI.

Here’s what we have for today:

  1. 🛠️ How you can use the new APIs from OpenAI

  2. 🦙 Is LLaMA a new breed of LLM?

  3. 👨🏾‍⚖️Legal Issues in AI: A synopsis

Not subscribed yet? Stay in the loop on AI each week by reading The Status Code for five minutes.

(Estimated reading time: 4 minutes 55 seconds)

🛠️ How you can use the new APIs from OpenAI

OpenAI released its ChatGPT and Whisper APIs last week.

But they didn't stop there.

Since December, they have optimized their models and cut ChatGPT's costs by 90%.

APIs available for ChatGPT and Whisper 👀

On-demand pricing of $0.006 / minute for Whisper

ChatGPT is running on GPT3.5 model and 10x cheaper than GPT3.

GPT3: $2 for 100k tokens
GPT3.5: $2 for 1M tokens

— The AI Guy (@CallMeAIGuy)
Mar 1, 2023

So, they are passing the cost savings on to their API consumers.

ChatGPT's underlying model, GPT3.5, is now 10x cheaper than its predecessor, GPT3.
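To get a feel for what that 10x means in practice, here's a quick cost sketch. The per-token rates are the ones quoted above; the request volume and tokens-per-request are made-up assumptions:

```python
# Rough cost comparison between GPT-3 and gpt-3.5-turbo API pricing,
# using the per-token rates quoted above. The usage numbers below are
# illustrative assumptions, not measurements.

GPT3_PRICE_PER_TOKEN = 2.00 / 100_000     # $2 per 100k tokens
GPT35_PRICE_PER_TOKEN = 2.00 / 1_000_000  # $2 per 1M tokens

def monthly_cost(tokens_per_request: int, requests_per_month: int,
                 price_per_token: float) -> float:
    """Estimate monthly API spend for a given usage pattern."""
    return tokens_per_request * requests_per_month * price_per_token

# Say a chatbot handles 50k requests/month at ~500 tokens each:
print(f"GPT-3:   ${monthly_cost(500, 50_000, GPT3_PRICE_PER_TOKEN):,.2f}")   # $500.00
print(f"GPT-3.5: ${monthly_cost(500, 50_000, GPT35_PRICE_PER_TOKEN):,.2f}")  # $50.00
```

At that (hypothetical) volume, the same workload drops from $500 to $50 a month.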

This is great.

Studies have already shown ChatGPT's effect on average productivity.

One study found that it improves productivity in two ways:

  • By decreasing the time to carry out certain tasks (by 0.8 SDs, Standard Deviations)

  • By increasing output quality (by 0.4 SDs)

A refresher on Standard Deviation

Let us say we give a math problem to 10 students. They all take different times to solve it, but the average time is 5 minutes. SD tells you how much the times vary from each other; say it's 1.41 minutes. Now, we ask them to use ChatGPT. An improvement of 0.8 SDs means they save 0.8 × 1.41 ≈ 1.13 minutes. Using ChatGPT, a student saves around 1.13 minutes on the problem.
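The refresher's arithmetic can be sketched in a few lines. The solve times below are invented so that the SD comes out to roughly 1.41 minutes:

```python
import statistics

# Reproduce the refresher's arithmetic: with a standard deviation of
# ~1.41 minutes, a 0.8 SD speed-up means roughly 0.8 * 1.41 ≈ 1.13
# minutes saved per problem.

solve_times = [3, 3, 4, 4, 5, 5, 6, 6, 7, 7]  # minutes, 10 students (made up)
mean_time = statistics.mean(solve_times)       # 5.0 minutes
sd = statistics.pstdev(solve_times)            # population SD ≈ 1.41

effect_size = 0.8                              # improvement in SDs
minutes_saved = effect_size * sd
print(f"mean={mean_time:.2f} min, sd={sd:.2f}, saved≈{minutes_saved:.2f} min")
```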

And now, with its ability open for developers to build on, interesting use cases will pop up.

Logan Kilpatrick, OpenAI's developer advocate, mentioned some use cases in a podcast with Swyx:

  • It could unlock almost zero-cost access to mental health services.

  • Personalized tutors can be a thing. People will ask questions, and the curriculum will adapt to keep them engaged in the learning process.

  • Teachers can focus on teaching and not on grading assignments.

That's not it.

He also mentioned how companies will build LLM-first experiences as a moat. (Btw, LLMs are large language models, and an LLM-first experience means building a ChatGPT-like experience on top of your own data.)

And it's already in place. OpenAI announced that many companies have already begun using ChatGPT and Whisper APIs:

  • Snapchat launched MyAI for Snapchat+ last week. A friendly chatbot for its users. It can write jokes for you if you're not a funny guy like me.

  • Speak, an AI-powered language learning app, announced a new AI speaking product for its users. It uses Whisper's transcription capabilities.

  • Quizlet announced Q-Chat. It’s a ChatGPT-enabled tutor which asks questions based on whatever you're studying.

Here's my bold (and lame prediction):

GPT-3.5 will enable use cases in niche markets beyond the content space (lame, I know. It's already happening!)

On a side note, this tweet on OpenAI's unit economics intrigued me:

Is OpenAI making money off their ChatGPT API?

YES. They’re making >$2.1 for every $1 in infra cost.

Float16 ops/token (GPT-3) = 2x175B = 350B
A100 FLOPS (F16) (25% util) = 156T/s
Tokens/s = 156T/350B = 446
Tokens/hr = 1.6M
Revenue/hr = $2/Mx1.6M = $3.2/hr
A100 Cost = $1.5/hr

— Deedy (@debarghya_das)
Mar 3, 2023

There are plenty of assumptions here. But it's a good starting point to understand how OpenAI operates as a business.
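To check the tweet's arithmetic, here's the same back-of-the-envelope math as code. Every figure here (utilization, FLOPs per token, A100 rental price) is the tweet's assumption, not verified data:

```python
# Re-run the tweet's back-of-the-envelope unit economics for serving
# a GPT-3-sized model on a single A100. All inputs are the tweet's
# assumptions.

params = 175e9                # GPT-3 parameter count
flops_per_token = 2 * params  # ~2 FLOPs per parameter per token = 350B
a100_flops = 156e12           # effective FP16 throughput at assumed utilization

tokens_per_sec = a100_flops / flops_per_token  # ≈ 446
tokens_per_hr = tokens_per_sec * 3600          # ≈ 1.6M
revenue_per_hr = 2.00 * tokens_per_hr / 1e6    # $2 per 1M tokens ≈ $3.2/hr
a100_cost_per_hr = 1.50

margin = revenue_per_hr / a100_cost_per_hr     # ≈ $2.1 earned per $1 of compute
print(f"{tokens_per_sec:.0f} tok/s -> ${revenue_per_hr:.2f}/hr revenue, "
      f"${margin:.2f} per $1 of infra")
```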

🦙 Is LLaMA a new breed of LLM?

Meta released LLaMA this week. And unlike other language models, LLaMA aims to achieve more with less.

It has some unique features:

  • It is available in several sizes (the smallest is 7 billion parameters, the largest 65 billion; GPT-3 has 175 billion)

  • It’s “open-source” (though what that label really means is something we discuss below)

So, how is LLaMA different?

Meta claims it’s in the business of democratizing AI.

They say AI has a big research problem. Researchers don’t have enough resources to study large models like GPT-3.

But since LLaMA has fewer parameters, it requires less computing power. This way, researchers can study the problems of large language models without relying on Big Tech.

And I read their paper. It’s a foundation model: you cannot build applications on top of it without assessing the risks. At least, not today.

But there’s still one elephant in the room.

LLaMA is not available to everyone.

It requires you to fill out a form, prove to them that you are an academic researcher, and wait for approval. So much for a public release.

Yann LeCun, Meta’s Chief AI Scientist, says it’s because they learned a bittersweet lesson the last time they made a language model open-source.

Their previous LLM, Galactica, turned racist even though they trained it on 48 million scientific papers.

Is that the final truth? God knows.

Anyway, if you want to access it, you can apply here.

But remember, some people have been waiting for days and still haven’t received access.

Even prominent researchers.

Is Meta only giving access to people who own Shakespeare’s First Folio or what?

Meanwhile, LLaMA weights are now available in torrents. It’s bound to happen when your best security strategy is to hope and pray that people don’t share it with others.

So, what’s Meta’s endgame?

To me, it’s not just Meta trying to tackle the burning issues of AI. They want an in on the LLM hype train. And they want to better their model.

More people test it → the model gets smarter → they switch to a paid model.

It worked for GPT-2. Maybe it’ll work for LLaMA too.

Besides, a paper from Cornell University showed that having more parameters doesn’t necessarily mean a better model.

So maybe Meta’s onto something.

👨🏾‍⚖️Legal Issues in AI: A synopsis

Last weekend, my friend prank-called a pizza place downtown using an AI voice.

She asked them what the pizza cost using The Terminator's voice. Who does that?

My friend got her pizza. But it could have been worse: she could have gone to jail. That prank is illegal in California.

So, I dug deep into the rabbit hole and found out these were also illegal:

  • Tricking facial recognition software

  • AI-assisted stalking (no, you can’t be the guy like in You)

  • Using the original art of other people to generate similar art

  • Making deep fakes of a person without their permission (if you thought this was legal, you have a problem)

Since AI's use cases are so new, the lines between what's legal and what's not are fine ones.

And there are questions that don’t have definite answers:

  • Algorithms learn from the past (and history books are biased for or against particular groups). How do we keep that bias out?

  • AI doesn’t copy. It takes inspiration and comes up with something new. But, unlike humans (with a learning curve of years), an AI can learn from someone’s work and start creating in hours. How do we make it fair?

No wonder we have lawsuits flying here and there (recall the Stable Diffusion and Getty Images situation?).

But we need a framework to decide what’s fair. So everyone’s trying to chime in:

  • Governments are trying to develop frameworks around AI transparency and to set best practices.

  • Companies are running risk assessments of their models. Those showing biases are making way for new ones.

Anyway, we’re kind of in a spot here.

It’s like having a Golden Retriever puppy if you have a 200 sq. ft apartment in New York City. You love it. You know it’s good for you. But god knows what you’re going to do when it gets bigger (and you know it’s getting bigger!).

And there's more bad news for companies: monitoring what their beast creates at such a scale is tricky.

So they have started putting in filters (to tame the beast if you like the analogy):

  • Stability AI now allows artists to opt out of their data sets in next-generation models

  • GitHub’s Copilot checks the code it generates against public code and hides suggestions that match
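GitHub hasn't published how Copilot's public-code filter works; here's a naive sketch of the general idea: suppress a suggestion whenever a long-enough run of its tokens appears verbatim in an indexed corpus of public code. The corpus, window size, and matching scheme below are all illustrative assumptions.

```python
# A naive sketch of a "matches public code" filter. The real filter is
# proprietary; this just checks whether any sufficiently long token
# window of a suggestion appears verbatim in a toy public-code corpus.

def windows(tokens, size):
    """All contiguous token runs of the given size, as joined strings."""
    return {" ".join(tokens[i:i + size]) for i in range(len(tokens) - size + 1)}

def build_index(public_snippets, window=6):
    """Index every token window that occurs in the public corpus."""
    index = set()
    for snippet in public_snippets:
        index |= windows(snippet.split(), window)
    return index

def should_hide(suggestion, index, window=6):
    """Hide a suggestion if any window of it matches public code."""
    return any(w in index for w in windows(suggestion.split(), window))

public = ["def fib(n): return n if n < 2 else fib(n - 1) + fib(n - 2)"]
index = build_index(public)

print(should_hide("def fib(n): return n if n < 2 else fib(n - 1) + fib(n - 2)", index))  # True
print(should_hide("total = sum(values)", index))  # False
```

A production filter would match at a far larger scale (and likely on normalized tokens rather than raw whitespace splits), but the shape of the check is the same.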

But these are baby steps toward a big mountain to climb.

I think the way to solve these issues is fine-tuned models.

A fine-tuned model = a fine-tuned dataset + a pre-trained model.

We’ll need to wait until Q2 to know where we are on legal issues.

🦾What happened this week?

  • OpenAI released APIs for ChatGPT and Whisper

  • OpenAI released its AGI roadmap

  • Microsoft introduced Kosmos-1, an AI model for analyzing images

  • Sewn.AI is an assistant to automate your workflow

  • Want ChatGPT for your Mac? Join the waitlist for Embra

  • Tesla reveals a walking robot named Optimus

  • Use Magic Stickers for AI-generated stickers for iMessage

  • Enhance your image in seconds using SupaRes

  • PromptLoop can build spreadsheet models! Try it!

  • DemoTime turns demos into deals

  • Rubberduck is the chatbot for your VS Code

  • Chinese military claims AI pilot has beaten human in a real-life dogfight

💰 Funding Roundup

  • AI content platform Typeface received $65 million in venture equity

  • The Czech government is creating a $58 million VC fund for AI research

  • SandboxAQ, a quantum computing AI startup, raised $500 million

  • Anthropic raised $300 million, lifting the company to a $5 billion valuation

  • Subtl.AI closes angel investment of $100k from various sources

  • Robin.AI, a London-based generative AI startup, raised $10.5M

🐤Tweet of the week 

😂 Meme of the week

That’s it for this week, folks! If you want more, be sure to follow our Twitter (@CallMeAIGuy)

🤝 Share The Status Code with your friends!

Did you like today's issue?


