OpenAI is aiming to significantly enhance the reasoning abilities of its models, Reuters has reported.
OpenAI, the creator of the AI chatbot ChatGPT, is working on a new approach to its artificial intelligence technology, the news agency has reported.
As part of this project, code-named ‘Strawberry,’ the Microsoft-backed firm is attempting to dramatically improve the reasoning capabilities of its models, the agency said in an article on Friday.
The specifics of Strawberry’s operation are “a tightly kept secret” even within OpenAI itself, a person familiar with the matter told Reuters.
The source explained that the project involves a “specialized way” of processing an AI model after it has undergone initial training on extensive datasets. The goal is to enable the AI not just to generate answers to queries, but to plan far enough ahead to conduct so-called “deep research” by navigating the internet independently and reliably, the source elaborated.
Reuters reported reviewing an internal OpenAI document outlining a plan for deploying Strawberry to perform research. However, the agency stated it was unable to determine when the technology will become publicly available. The source described the project as a “work in progress.”
Responding to a query on the matter, an OpenAI spokesperson told Reuters: “We want our AI models to see and understand the world more like we [humans] do. Continuous research into new AI capabilities is a common practice in the industry, with a shared belief that these systems will improve in reasoning over time.” The spokesperson did not directly address Strawberry in their response.
Currently, large language models can summarize large volumes of text and compose coherent prose faster than humans, but they often struggle with common-sense problems whose solutions are intuitive to people. In such situations, the models frequently “hallucinate,” presenting false or misleading information as fact.
Researchers who spoke with Reuters said that reasoning, which has so far eluded AI models, is crucial for artificial intelligence to achieve human or superhuman levels of performance.
Last week, Yoshua Bengio, a leading expert in artificial intelligence and a pioneer in deep learning, once again raised concerns about the “many risks,” including the potential for the “extinction of humanity,” posed by private corporations racing to achieve human-level AI and beyond.
“Entities that are smarter than humans and that have their own goals: are we sure they will act towards our well-being?” the University of Montreal professor and scientific director of the Montreal Institute for Learning Algorithms (MILA) said in an article on his website.
Bengio urged the scientific community and society as a whole to undertake “a massive collective effort” to determine ways to keep advanced AI under control.