Former OpenAI employees say whistleblower protection on AI safety is not enough

Jun 04, 2024 11:53 PM

Several former OpenAI employees warned in an open letter that leading AI companies like OpenAI stifle criticism and oversight, particularly as concerns over AI safety have grown in the past few months.

The open letter, signed by 13 former OpenAI employees (six of whom chose to remain anonymous) and endorsed by “Godfather of AI” Geoffrey Hinton, formerly of Google, says that in the absence of any effective government oversight, AI companies should commit to open-criticism principles. These principles include avoiding the creation and enforcement of non-disparagement clauses, facilitating a “verifiably” anonymous process to report issues, allowing current and former employees to raise concerns with the public, and not retaliating against whistleblowers.

The letter says that while the signatories believe in AI’s potential to benefit society, they also see risks, such as the entrenchment of inequalities, manipulation and misinformation, and the possibility of human extinction. While there are serious concerns about a machine that could take over the planet, today’s generative AI has more down-to-earth problems, such as copyright violations, the inadvertent sharing of problematic and illegal images, and concerns that it can mimic people’s likenesses and mislead the public.

The letter’s signees claim that current whistleblower protections “are insufficient” because they focus on illegal activity rather than concerns that, they say, are mostly unregulated. The Department of Labor states that workers reporting violations involving wages, discrimination, safety, fraud, and withholding of time off are protected by whistleblower protection laws, which means employers cannot fire, lay off, reduce the hours of, or demote whistleblowers. “Some of us reasonably fear various forms of retaliation, given the history of such cases across the industry. We are not the first to encounter or speak about these issues,” the letter reads.

AI companies, particularly OpenAI, have been criticized for inadequate safety oversight. Google defended its use of AI Overviews in Search even after people claimed it was giving dangerous, though hilarious, results. Microsoft also came under fire for its Copilot Designer, which was generating “sexualized images of women in violent tableaus.”

Recently, several OpenAI researchers resigned after the company disbanded its “Superalignment” team, which focused on addressing AI’s long-term risks, and after the departure of co-founder Ilya Sutskever, who had been championing safety at the company. One former researcher, Jan Leike, said that “safety culture and processes have taken a backseat to shiny products” at OpenAI.

OpenAI does have a new safety team, one that is led by CEO Sam Altman.