Microsoft President Answers the Question ‘How Do We Best Govern AI?’

Following in the footsteps of OpenAI with its recent post regarding AI superintelligence safety precautions, Microsoft has now posted its own list of guidelines for how we, as a society, can safely navigate the artificial intelligence revolution.

Penned by the company’s president and vice chair, Brad Smith, the article, titled “How do we best govern AI?”, breaks down the five key pillars of the tech giant’s proposed approach.

The first pillar of Smith’s proposal is to work within government frameworks and develop AI solutions in a way that keeps governing bodies and tech innovators on the same page. Smith also noted the importance of cooperatively building on those guidelines so that tech is able to develop quickly.

The second pillar is ensuring there are fail-safes and brakes to stop catastrophic failures in critical infrastructure systems controlled by AI. The idea is that humans would always have the ability to regain control of an AI-driven system if need be.


“In this approach, the government would define the class of high-risk AI systems that control critical infrastructure and warrant such safety measures as part of a comprehensive approach to system management,” Smith wrote. “New laws would require operators of these systems to build safety brakes into high-risk AI systems by design.”

The third pillar calls for a broad regulatory framework wherein all parties involved in AI development, deployment, and use are subject to some form of oversight, so that no individual actor can undermine safety measures. Smith stated that future AI models would benefit from laws and regulations implemented by a “new government agency.”

The fourth pillar argues that academic institutions and nonprofits should maintain access to AI and not be cut off from vital resources. In this section, Smith stated that transparency is essential, noting that “Microsoft is committing to an annual AI transparency report and other steps to expand transparency for our AI services.”

Lastly, Smith encouraged public- and private-sector collaboration on AI to address societal challenges, citing the war in Ukraine to highlight that “when like-minded allies come together, and when we develop technology and use it as a shield, it’s more powerful than any sword on the planet.”
