In a nutshell: Starting this week, ChatGPT subscribers can once again ask the chatbot to search Bing, making its output more up-to-date. ChatGPT is also gaining the ability to recognize images and hold spoken conversations with paying users. OpenAI plans to open the new capabilities to everyone soon.
One of ChatGPT’s primary shortcomings has been its inability to search the internet to answer queries. OpenAI briefly added the functionality earlier this year but removed it due to unintended consequences. The company has now restored the chatbot’s internet access with additional safeguards while introducing speech and image recognition capabilities.
Users can enable the search function by selecting “Browse with Bing” under GPT-4. Previously, the chatbot could only base its responses on its training data, all of which predated September 2021. Thus, ChatGPT was unaware of events occurring after that date, limiting its effectiveness for research.
ChatGPT can now browse the internet to provide you with current and authoritative information, complete with direct links to sources. It is no longer limited to data before September 2021. pic.twitter.com/pyj8a9HWkB
– OpenAI (@OpenAI) September 27, 2023
OpenAI began enabling subscribers to tell the chatbot to conduct Bing searches in May, but deactivated the feature in July after discovering it could circumvent news outlets’ paywalls. Commanding ChatGPT to summarize a URL would give users access to the corresponding page’s content, even for news stories reserved for paying readers.
The new internet-capable version follows websites’ instructions on what information it’s permitted to crawl, preventing it from bypassing paywalls. Microsoft and Google introduced similar rules for their Bing Chat and Bard chatbots, respectively.
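As an illustration (not quoted from OpenAI's announcement), these crawling instructions are typically conveyed through a site's robots.txt file. OpenAI documents distinct user-agent tokens for its crawlers, including GPTBot for training-data collection and ChatGPT-User for on-demand browsing, so a publisher that wants to keep paywalled sections out of the chatbot's reach might publish rules along these lines (the paths shown are hypothetical):

```text
# robots.txt — example directives a publisher might use
# Block OpenAI's training crawler from the whole site
User-agent: GPTBot
Disallow: /

# Allow ChatGPT's live browsing, but keep it out of paywalled sections
User-agent: ChatGPT-User
Disallow: /premium/
Allow: /
```

A compliant crawler checks these rules before fetching a page, which is what prevents the new browsing mode from summarizing content a site has marked off-limits.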
I gave ChatGPT a screenshot of a SaaS dashboard and it wrote the code for it.
This is the future. pic.twitter.com/9xFgFdv4MM
– Mckay Wrigley (@mckaywrigley) September 27, 2023
Additionally, image recognition and a verbal interface are rolling out to subscribers and enterprise clients over the next two weeks, with free users following soon after. The new feature enables ChatGPT to interpret images on any platform, while voice chat is limited to iOS and Android.
To input an image, select the photo button to take or upload a picture; on mobile platforms, tap the plus button first. Users can show the chatbot multiple images at a time and draw on a picture to focus its attention on a specific area. OpenAI claims the functionality allows ChatGPT to compile recipes based on what it sees in a refrigerator, solve math problems, or help fix equipment.
Use your voice to engage in a back-and-forth conversation with ChatGPT. Speak with it on the go, request a bedtime story, or settle a dinner table debate.
Sound on 🔊 pic.twitter.com/3tuWzX0wtS
– OpenAI (@OpenAI) September 25, 2023
Voice functionality is found under Settings > New Features, where users must opt into verbal conversations. Then, tap the headphone icon in the top-right corner of the home screen and select from five voice types. The speech recognition system uses OpenAI’s Whisper technology, which Spotify is now also using to automatically dub podcasts into different languages.
The company is proceeding cautiously with ChatGPT’s expanded capabilities. It limited the voice technology to conversations to prevent its use for fraud or impersonation. Furthermore, OpenAI employed a red team to ensure the chatbot doesn’t say harmful things about the images it receives. The company can’t guarantee that hallucinations won’t still occur but promises that continual feedback will improve the system.