OpenAI establishing safety panel

OpenAI said it is establishing a new safety panel that will make recommendations to the company’s board regarding “critical safety and security decisions.”

The generative artificial intelligence (AI) company announced in a Tuesday blog post that the new committee will be led by chief executive Sam Altman and directors Adam D’Angelo, Nicole Seligman and Bret Taylor, who will also serve as the group’s chair.

The company said the committee’s first task will be to “evaluate and further develop” OpenAI’s processes and safeguards over the next 90 days. After that time, the committee will provide its recommendations to the board. Following the board’s review, the recommendations will be shared publicly in an update.

The formation of the committee comes amid scrutiny of the company’s approach to AI safety. Jan Leike, a former OpenAI safety researcher, resigned earlier this month, saying safety has “taken a backseat to shiny products” at the company.

The company also announced Tuesday it recently started training its “next frontier model” and expects the new systems to bring the “next level” of capabilities.

“While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment,” the company said on Tuesday.

The newly formed committee will also include a variety of the company’s policy and technical experts, and OpenAI said it will consult with former cybersecurity officials as well.
