
Brilliant Achievement

Tech News

OpenAI is launching an ‘independent’ safety board that can stop its model releases

Image: Vector illustration of the ChatGPT logo (The Verge)

OpenAI is turning its Safety and Security Committee into an independent “Board oversight committee” with the authority to delay model launches over safety concerns, according to an OpenAI blog post. The committee recommended creating the independent board after a recent 90-day review of OpenAI’s “safety and security-related processes and safeguards.”

The committee, which is chaired by Zico Kolter and includes Adam D’Angelo, Paul Nakasone, and Nicole Seligman, will “be briefed by company leadership on safety evaluations for major model releases, and will, along with the full board, exercise oversight over model launches, including having the authority to delay a release until safety concerns are addressed,” OpenAI says….


