Microsoft says it did a lot for responsible AI in inaugural transparency report

A new report from Microsoft outlines the steps the company took to release responsible AI platforms last year.

In its Responsible AI Transparency Report, which mainly covers 2023, Microsoft touts its achievements around safely deploying AI products. The annual AI transparency report is one of the commitments the company made after signing a voluntary agreement with the White House in July last year. Microsoft and other companies promised to establish responsible AI systems and commit to safety.

Microsoft says in the report that it created 30 responsible AI tools in the past year, grew its responsible AI team, and required teams making generative AI applications to measure and map risks throughout the development cycle. The company notes that it added Content Credentials to its image generation platforms, which puts a watermark on a photo, tagging it as made by an AI model.

The company says it’s given Azure AI customers access to tools that detect problematic content like hate speech, sexual content, and self-harm, as well as tools to evaluate security risks. This includes new jailbreak detection methods, which were expanded in March this year to include indirect prompt injections, where the malicious instructions are part of data ingested by the AI model.

It’s also expanding its red-teaming efforts, including both in-house red teams that deliberately try to bypass safety features in its AI models, as well as red-teaming applications that allow third-party testing before new models are released.

However, its red-teaming units have their work cut out for them. The company’s AI rollouts have not been immune to controversies.

When Bing AI first rolled out in February 2023, users found the chatbot confidently stating incorrect facts and, at one point, teaching people ethnic slurs. In October, users of the Bing image generator found they could use the platform to generate photos of Mario (or other popular characters) flying a plane into the Twin Towers. Deepfaked nude images of celebrities like Taylor Swift made the rounds on X in January, which reportedly came from a group sharing images made with Microsoft Designer. Microsoft ended up closing the loophole that allowed those pictures to be generated. At the time, Microsoft CEO Satya Nadella said the images were “alarming and terrible.”

Natasha Crampton, chief responsible AI officer at Microsoft, says in an email sent to The Verge that the company understands AI is still a work in progress, and so is responsible AI.

“Responsible AI has no finish line, so we’ll never consider our work under the Voluntary AI commitments done. But we have made strong progress since signing them and look forward to building on our momentum this year,” Crampton says.
