Navigating the Path to Artificial General Intelligence

Artificial General Intelligence (AGI) is a type of AI that, if fully realized, could accomplish any intellectual task that human beings can. Unlike current AI systems, AGI would not only excel at specific tasks but would also have the capacity for generalization, adaptation, flexibility, common sense, and, most importantly and perhaps most unnervingly, autonomy. Current large language models are, according to Sam Altman, now former CEO of OpenAI, simply stepping stones to what AI will eventually become. On the Lex Fridman Podcast, Altman shed some light on GPT-4 (the latest model behind ChatGPT), AI safety, AGI, jobs, and power.

Models like GPT-4 and Bard are trained on enormous amounts of text data. There is so much content in the world that training current language models is less about gathering as much data as possible and more about filtering out the "unimportant" data. In effect, a model compresses the web into a comparatively small set of parameters that shape what it can converse about. That is one of the biggest differences between today's LLMs and AGI. Although excited, most programmers also feel some anxiety about the future of AI. As Altman puts it, "The increase in quality of life AI can deliver is extraordinary. We can make the world amazing… but people want status, people want drama, people want to create, people want to feel useful…" AGI puts all of those "wants" at risk. As AI progresses to the point where it can make things about as good as they can possibly be, fewer of life's imperfections will remain; but as AI improves, it also becomes harder to keep aligned. Altman argues that now is a "very good time" to significantly ramp up technical alignment work. There are new tools, and a greater understanding of those tools, that will hopefully stave off the serious concern of an AI takeoff.

AI takeoff is the idea that AI could improve exponentially within a matter of days. The theory implies that, if it happens, AI would become uncontrollable and too powerful to align. Altman does not consider GPT-4 an AGI just yet; as he says, "it doesn't feel that close to me." Although much better than previous versions on the technical side, it still relies on systems that are being tested, such as refusals: mechanisms that attempt to learn when to refuse to answer a question. It is "early and imperfect," but it is a big leap from other LLMs and may put some people at ease when they think about the kinds of dangerous questions others will try to ask.

For Altman, the worries about the future of AI center on disinformation, economic shocks, or "something else at a level far beyond anything we're prepared for." There will soon be many capable LLMs with very few safety controls on them, especially if market-driven pressure pushes companies to value money over safety. Economically speaking, Altman is confident that, after a relatively short period of joblessness, there will be plenty of "incredible" new jobs that enhance what people were doing before. He also believes, however, that the economic transformation caused by AI will drive political transformation: "There will be harm caused by this tool… tools do wonderful good and real bad."

So, if the "end goal" is AGI, does that mean it will know the answers to every question we have yet to answer? Not necessarily. If a user asked an AGI whether there are advanced alien civilizations somewhere in this vast universe, the model would not be able to give a clear "yes" or "no." Instead, it would give the user the tools and information to make finding that answer much easier, and that is what Altman wants people to remember. Current and future LLMs are not creatures or human beings; they are tools. People tend to anthropomorphize things they do not fully understand, and although doing so can help some people grasp the complexities of topics like AI, it can also be dangerous to project human sensibilities onto things that have none.

Altman and the people at OpenAI have done their best to keep the development of their AI language models public as a way of keeping people informed about how things are going, or, as Altman puts it, "failing publicly." It shows people where researchers are in the development process and invites more questions about AI safety and what our future will look like: "People at OpenAI feel the weight and responsibility in what we're doing." Echoing what Anthropic CEO Dario Amodei said on the Dwarkesh Patel Podcast about handing over such power to a government entity, Altman states that giving a powerful AI or AGI system to an individual or government could be "really bad." Unlike Amodei, however, Altman is a little more reassuring about the future of AI. Although he has fears about what AI will bring in the near future, Altman chooses to focus on the good that will come out of LLMs: vast improvements in programming, better tools for tackling seemingly impossible questions, better jobs. Still, he admits, "I'd be crazy not to be a little afraid, and I empathize with people who are a lot afraid."

Christian Brewster

Christian is a journalism student and former culture editor at Howard University's The Hilltop. He enjoys writing about film and music.
