OpenAI will determine how powerful its AI systems are
OpenAI has created an internal ladder to track the progress of its large language models toward artificial general intelligence, or AI with human-like intelligence, a spokesperson told Bloomberg.
Today’s chatbots, like ChatGPT, are at Level 1. OpenAI says it is approaching Level 2, defined as a system that can solve basic problems as well as a person with a PhD. Level 3 refers to AI agents that can take actions on a user’s behalf. Level 4 involves AI that can create new innovations. Level 5, the final step to achieving AGI, is AI that can perform the work of entire organizations of people. OpenAI has previously defined AGI as "a highly autonomous system that outperforms humans at most economically valuable tasks."
OpenAI's unique structure is centered around its mission of achieving AGI, and how OpenAI defines AGI matters. The company has said that if a "value-aligned, safety-conscious project" comes close to building AGI before OpenAI does, it commits to not competing with that project and to dropping everything to assist it. The phrasing of this in OpenAI’s charter is vague, leaving room for the judgment of the for-profit entity (governed by the nonprofit), but a scale on which OpenAI can test itself and its competitors could help define when AGI is reached in clearer terms.
Still, AGI is quite a ways away: it will take billions upon billions of dollars’ worth of computing power to reach AGI, if we reach it at all. Timelines vary widely among experts, even within OpenAI: in October 2023, OpenAI CEO Sam Altman said we were "still about five years away" from reaching AGI.

The new scale, which is still under development, was introduced a day after OpenAI announced a partnership with Los Alamos National Laboratory aimed at exploring how advanced AI models like GPT-4o can safely assist in bioscientific research. A Los Alamos program manager responsible for the national security biology portfolio, who was instrumental in securing the OpenAI partnership, told The Verge that the goal is to test GPT-4o’s capabilities and establish a set of safety and other factors for the US government, against which public or private models can eventually be evaluated.
In May, OpenAI disbanded its safety team after the team's leader, OpenAI co-founder Ilya Sutskever, left the company. Jan Leike, another lead OpenAI researcher, resigned shortly after, stating in a post that "safety culture and processes have taken a backseat to shiny products" at the company. OpenAI denies that charge, but some worry about what it could mean if the company does in fact achieve AGI.

OpenAI hasn't provided details on how it assigns models to these internal levels (and declined The Verge's request for comment). However, according to Bloomberg, company executives demonstrated a research project using its GPT-4 AI model at an all-hands meeting on Thursday, and believe the project shows new skills that rise to human-like reasoning.
This scale could help provide a strict definition of progress, rather than leaving it up for interpretation. For instance, OpenAI CTO Mira Murati said in a June interview that the models in its labs are not much better than what the public already has. Meanwhile, CEO Sam Altman said late last year that the company had recently "pushed the veil of ignorance back," meaning its models had become significantly smarter.