OpenAI could debut a multimodal AI digital assistant soon


OpenAI has been showing some of its customers a new multimodal AI model that can both talk to you and recognize objects, according to a new report from The Information. Citing unnamed sources who’ve seen it, the outlet says this could be part of what the company plans to show on Monday.

The new model reportedly interprets images and audio faster and more accurately than OpenAI’s existing separate transcription and text-to-speech models. It would apparently be able to help customer service agents “better understand the intonation of callers’ voices or whether they’re being sarcastic,” and “theoretically,” the model can help students with math or translate real-world signs, writes The Information.

The outlet’s sources say the model can outdo GPT-4 Turbo at “answering some types of questions,” but is still susceptible to confidently getting things wrong.

It’s possible OpenAI is also readying a new built-in ChatGPT ability to make phone calls, according to developer Ananay Arora, who posted a screenshot of call-related code. Arora also spotted evidence that OpenAI had provisioned servers intended for real-time audio and video communication.

Whatever OpenAI unveils next week, it won’t be GPT-5. CEO Sam Altman has explicitly denied that the upcoming announcement has anything to do with the model that’s supposed to be “materially better” than GPT-4. The Information writes that GPT-5 may be publicly released by the end of the year.
