Former OpenAI researcher says AGI could be achieved by 2027 but laments that shiny products get precedence over security

ChatGPT privacy settings.

What you need to know

  • A former OpenAI researcher has published a 165-page report outlining the trajectory of AI progress, the security challenges it raises, and more.

  • The paper suggests OpenAI could achieve Artificial General Intelligence (AGI) by 2027, but constraints on power supply and soaring demand for GPUs pose major obstacles to hitting that milestone.

  • According to the author, only a few hundred people have situational awareness about the technology and how its advances will affect the future.


Generative AI is a big deal in the tech landscape right now. We've seen artificial intelligence propel Microsoft to the top spot as the world's most valuable company, with a market valuation of over $3 trillion; market analysts attribute that exponential growth to the Redmond giant's early lead in adopting the technology. Even NVIDIA is on the verge of its iPhone moment with AI, having recently overtaken Apple to become the second-most valuable company in the world on the back of high GPU demand for AI advances.

Microsoft and OpenAI are arguably among the top tech firms most heavily invested in AI. However, their partnership has stirred up controversy, with insiders indicating Microsoft has turned into "a glorified IT department for the hot startup," while billionaire Elon Musk says OpenAI has seemingly transformed into a closed-source de facto subsidiary of Microsoft.

It's no secret that the two companies have a complicated partnership, and the latest controversies at OpenAI aren't helping the situation. After the launch of GPT-4o, a handful of high-level employees left OpenAI. While details behind their departures remain slim at best, Jan Leike, the former superalignment lead, indicated that he was worried about the trajectory AI development was taking at the company. He further stated that the firm was seemingly prioritizing the development of shiny products while security and privacy took a backseat.

For now, it's impossible to tell what trajectory AI will take over the next few years, though NVIDIA CEO Jensen Huang suggests we might be on the brink of the next AI wave. Huang says robotics is the next big thing, with self-driving cars and humanoid robots dominating the category.

But we might now have a bit of insight into what the future holds, courtesy of a former OpenAI researcher who recently published a 165-page report highlighting the rapid growth and adoption of AI, security, and more (via Business Insider).

Leopold Aschenbrenner worked as a researcher on OpenAI's superalignment team but was fired for allegedly leaking critical information about the company's preparedness for artificial general intelligence. Aschenbrenner counters that the information he shared was "totally normal" since it was based on publicly available information, and he suspects the company was simply looking for a way to get rid of him.

The researcher is among the OpenAI employees who refused to sign the letter calling for Sam Altman's reinstatement as CEO after the board of directors fired him last year, and Aschenbrenner believes this contributed to his dismissal. This is in the wake of former board members alleging that two OpenAI staffers had approached the board with claims of psychological abuse by the CEO, which they say contributed to a toxic atmosphere at the company. The former board members also indicated that staffers who didn't necessarily support Altman's return as CEO signed the letter because they "feared" retaliation.

OpenAI might get to the superintelligence benchmark sooner than we expected

OpenAI logo

According to Aschenbrenner's report, AI progress will continue on a steep upward trajectory. It's no secret that Sam Altman has a soft spot for superintelligence, judging by how passionately he speaks about the topic in interviews. In January, the CEO admitted that OpenAI is actively exploring advances that could eventually help it unlock this incredible feat, though he didn't disclose whether the company is taking a radical or incremental path to get there.

As you may know, superintelligence means a system with cognitive abilities that surpass human reasoning. However, concern is building around this benchmark and what it could mean for humanity. One AI researcher put p(doom), the probability that AI ends humanity, at 99.9%, arguing that the only way to avoid that outcome is to stop building AI in the first place. Interestingly, Sam Altman has admitted there's no big red button to stop the progression of AI.

With the emergence of new flagship AI models like GPT-4o, which can reason across text, audio, and more, the progression doesn't look likely to stop anytime soon. Trends in computational power and algorithmic efficiency suggest AI will continue its rapid growth, though there are critical concerns about power supply, with OpenAI looking into nuclear fusion as a plausible alternative energy source for the foreseeable future.
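For a rough sense of the trend-line reasoning behind forecasts like these, here's a minimal back-of-the-envelope sketch of how compute and efficiency gains compound. The growth rates and baseline year below are illustrative assumptions for the example, not figures taken from Aschenbrenner's report:

```python
# Illustrative extrapolation of "effective compute": the combined effect of
# raw training-compute growth and algorithmic efficiency gains.
# Both rates below are assumptions chosen for illustration only.

COMPUTE_OOM_PER_YEAR = 0.5  # assumed: raw compute grows ~0.5 orders of magnitude/year
ALGO_OOM_PER_YEAR = 0.5     # assumed: efficiency adds ~0.5 orders of magnitude/year

def effective_compute_gain(years: float) -> float:
    """Total effective-compute multiplier after `years` of both trends compounding."""
    total_ooms = years * (COMPUTE_OOM_PER_YEAR + ALGO_OOM_PER_YEAR)
    return 10 ** total_ooms

# From an assumed 2023 baseline out to 2027:
years = 2027 - 2023
print(f"{effective_compute_gain(years):,.0f}x effective compute over {years} years")
# -> 10,000x, i.e., four orders of magnitude under these assumptions
```

Under these hypothetical rates, four years of compounding yields a 10,000x jump in effective compute, which is the flavor of arithmetic behind claims that 2027-era models could be a qualitative leap beyond today's.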

Aschenbrenner says AI development could scale to greater heights by 2027, surpassing the capabilities of human AI researchers and engineers. These predictions aren't entirely farfetched: GPT-4 (referred to by some as mildly embarrassing at best) already surpasses professional analysts and advanced AI models at forecasting future earnings trends without access to qualitative data. Microsoft CTO Kevin Scott shared similar sentiments and foresees newer AI models capable of passing PhD qualifying examinations.

The report also indicates that more corporations will join the AI fray, investing trillions of dollars in the infrastructure needed to support AI advances, including data centers, GPUs, and more. This is amid reports of Microsoft and OpenAI investing over $100 billion in a project dubbed Stargate to free themselves from an overreliance on NVIDIA for GPUs.

Security, privacy, and regulation remain core priorities as AI advances

Satya Nadella and Sam Altman at OpenAI Dev Day

Reports suggest AI will eventually become smarter than people, take over their jobs, and turn work into a hobby, and there's rising concern about what this might mean for humanity. Even OpenAI CEO Sam Altman sees the need for an independent international agency to ensure AI advances are safe and regulated the way airlines are, to avert "catastrophic outcomes."

Perhaps most interesting, Aschenbrenner's report suggests that only a few hundred people truly understand AI's impact on the future, most of them working in AI labs in San Francisco (potentially a nod to OpenAI staffers).