OpenAI is turning its Safety and Security Committee into an independent “Board oversight committee” that has the authority to delay model launches over safety concerns, according to an OpenAI blog post. The committee made the recommendation to become an independent board after a recent 90-day review of OpenAI’s “safety and security-related processes and safeguards.”
The committee, which is chaired by Zico Kolter and includes Adam D’Angelo, Paul Nakasone, and Nicole Seligman, will “be briefed by company leadership on safety evaluations for major model releases, and will, along with the full board, exercise oversight over model launches, including having the authority to delay a release until safety concerns are addressed,” OpenAI says. OpenAI’s full board of directors will also receive “periodic briefings” on “safety and security matters.”
The members of OpenAI’s safety committee are also members of the company’s broader board of directors, so it’s unclear exactly how independent the committee actually is or how that independence is structured. We’ve asked OpenAI for comment.
By establishing an independent safety board, OpenAI appears to be taking a somewhat similar approach as Meta’s Oversight Board, which reviews some of Meta’s content policy decisions and can make rulings that Meta has to follow. None of the Oversight Board’s members are on Meta’s board of directors.
The review by OpenAI’s Safety and Security Committee also helped identify “additional opportunities for industry collaboration and information sharing to advance the security of the AI industry.” The company also says it will look for “more ways to share and explain our safety work” and for “more opportunities for independent testing of our systems.”