White House tasks hackers with breaking ChatGPT

The head of ChatGPT creator OpenAI met with US Vice President Kamala Harris on 4 May, 2023, to discuss AI risks (Reuters)

The White House has challenged hackers to break ChatGPT and other AI chatbots in order to better understand the risks that the technology poses.

The test of generative artificial intelligence will take place at the Def Con 31 hacker convention in Las Vegas this August, with leading AI developers like Google, Microsoft and OpenAI all agreeing to let their products be tested.

“AI is one of the most powerful technologies of our time, but in order to seize the opportunities it presents, we must first mitigate its risks,” the White House said in a statement.

“The Administration is announcing an independent commitment from leading AI developers, including Anthropic, Google, Hugging Face, Microsoft, Nvidia, OpenAI, and Stability AI, to participate in a public evaluation of AI systems.”

The event was announced during a meeting between US Vice President Kamala Harris and tech executives at the White House, which aimed to address concerns about fast-growing AI technology.

The hacking contest aligns with the Biden Administration’s Blueprint for an AI Bill of Rights announced last year, which aims to protect citizens against potential harms associated with AI.

“This independent exercise will provide critical information to researchers and the public about the impacts of these models, and will enable AI companies and developers to take steps to fix issues found in those models,” the White House’s statement read.

Last week, a blog post from the White House Domestic Policy Council and White House Office of Science and Technology Policy warned that the technology currently poses a significant risk to workers.

Longer term, technologists and policymakers warn that advanced artificial intelligence could have catastrophic consequences for society.

A former OpenAI researcher recently said that he believed there was a “50/50 chance of doom” if AI systems reach and surpass the cognitive capacity of humans.

“I tend to imagine something like a year’s transition from AI systems that are a pretty big deal, to kind of accelerating change, followed by further acceleration, et cetera,” Dr Paul Christiano, who now runs AI research non-profit Alignment Research Center, said last month.

“I think once you have that view then a lot of things may feel like AI problems because they happen very shortly after you build AI.”