If you’ve ever dreamed of spending your summer whispering sweet nothings into the digital ear of one of the enchanting ChatGPT voice assistants that OpenAI debuted last month, you’ll have to keep dreaming a little longer. On Tuesday, the company announced that its “Advanced Voice Mode” feature needs more time in the oven “to reach launch standards.” The feature will first be available to a small group of users to gather feedback before rolling out to all ChatGPT paying customers in the fall.
In a post on X, OpenAI wrote, “We’re improving our models’ ability to detect and reject certain content, and we’re also working on preparing our infrastructure so we can scale to millions of users while improving the user experience and maintaining real-time responsiveness.”
We’re sharing an update on the advanced Voice Mode we demoed in the Spring Update, which we remain very excited about.
We had planned to release the alpha version to a small group of ChatGPT Plus users at the end of June, but we needed another month to reach launch criteria…
— OpenAI (@OpenAI) June 25, 2024
Voice features have been part of ChatGPT since 2023. But last month, OpenAI demonstrated an upgraded version that sounds strikingly human, drawing comparisons to Samantha, the alluring voice assistant in the 2013 film Her, voiced by Scarlett Johansson. Weeks after the announcement, the actress accused OpenAI of imitating her voice despite her having denied it permission.
OpenAI says it still plans for the new voices (minus the Johansson-like one) to roll out to paying users this fall. Another feature, which will let the voice assistant use your phone’s camera to understand the world around it, has also been delayed until then. “The exact schedule is dependent on meeting our high safety and reliability standards,” the company said.