Microsoft Offers Assurances in 'Trustworthy AI' Manifesto

The AI sausage is getting made responsibly, Redmond asserts.

This month, Microsoft gave public assurances to users of its Copilot generative artificial intelligence products by publishing "How Microsoft 365 Delivers Trustworthy AI."

The 25-page document is part mission statement, part product blueprint and part policy manual. It was released just a few months after the general availability of Microsoft 365 Copilot, which is bringing generative AI to organizations using productivity apps such as Excel, PowerPoint and Word.

Copilot and solutions like it could be hamstrung by common concerns about the security of large language models and generative AI technologies, and Microsoft's paper seems designed to assuage those concerns. A Microsoft blog post described the manifesto as "a comprehensive document providing regulators, IT pros, risk officers, compliance professionals, security architects, and other interested parties with an overview of the many ways in which Microsoft mitigates risk within the artificial intelligence product lifecycle."

The Trustworthy AI paper covers a lot of ground. It describes the teams responsible for steering Microsoft's AI policies. It lists the AI regulations on both sides of the pond that Microsoft has adopted (and the ones it is merely "considering"). The full document is available from Microsoft; below are five highlights.

1. The 10 AI Vulnerabilities on Microsoft's Radar
Widespread use of AI opens the door to new and harder-to-detect cybersecurity threats. Microsoft is keeping a particularly close eye on these 10:


Future updates to Microsoft's Software Development Lifecycle will be designed to keep pace with these and other emerging AI security threats, according to the paper.

2. The 6 Tenets of Microsoft's Responsible AI Standard
Now in its second edition, the Responsible AI Standard (RAIS) is Microsoft's evolving internal rulebook for AI development. Wherever there's a gap between legal policies around AI and the actual capabilities of AI technology, Microsoft turns to the RAIS to guide its hand.

There are six "domains" covered in the RAIS, per Microsoft: accountability, transparency, fairness, reliability and safety, privacy and security, and inclusiveness.

"The RAIS applies to all AI systems that Microsoft creates or uses," according to the paper, "regardless of their type, function, or context."

3. Microsoft's AI Watchmen
The paper identifies three discrete groups within Microsoft that "work collaboratively" to steer the company's responsible AI efforts. They are:

All three teams spend their time "figuring out what it means for AI to be safe," says Microsoft. "Failure modes in AI don't distinguish between security and responsible AI, so our teams closely collaborate to ensure holistic coverage of risk."

4. Microsoft's 3-Part Data Security Promise
Data security and privacy are an IT team's perennial bugbears. To assure organizations that Copilot is not overstepping its boundaries when it comes to their data, Microsoft is promising these three things:

"No unauthorized tenant or user has access to your data without your consent," according to the paper. "Further, those promises are extended into LLMs. This means that as a commercial customer of Microsoft 365, your data will not be used to train Copilot LLMs without your consent, even models only used by other users within your tenant. You are always in control of how, when, and where your data is used."

Microsoft employs several security checks to make sure Copilot doesn't expose or misuse an organization's data, including data encryption (in transit and at rest) and role-based access control.
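As a rough illustration of how those two controls compose (a minimal sketch under invented assumptions, not Microsoft's implementation), the snippet below stores content encrypted at rest using the open source cryptography library and decrypts it only after a role-based access check passes:

```python
from cryptography.fernet import Fernet

# Hypothetical per-tenant key; real services use managed key hierarchies.
tenant_key = Fernet.generate_key()
cipher = Fernet(tenant_key)

# Content sits encrypted at rest; a role set gates who may read it.
stored_docs = {"doc-1": cipher.encrypt(b"Quarterly compensation summary")}
doc_roles = {"doc-1": {"hr"}}

def read_document(doc_id: str, user_roles: set[str]) -> bytes:
    """Decrypt a stored document only if the caller holds an allowed role."""
    if not (doc_roles[doc_id] & user_roles):
        raise PermissionError(f"no role grants access to {doc_id}")
    return cipher.decrypt(stored_docs[doc_id])

print(read_document("doc-1", {"hr"}))   # passes the RBAC check
# read_document("doc-1", {"sales"})     # would raise PermissionError
```

The point of pairing the two is defense in depth: even if the access check were bypassed, the data on disk is ciphertext without the tenant key.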

"Even though your copilot requests are run on multi-tenant shared hardware and in multi-tenant shared service instance," Microsoft says, "data protection is ensured through extensive software defense-in-depth against unauthorized use such as RAG [retrieval augmented generation] and other techniques."

5. Intelligence Isn't Knowledge
Copilot's results can seem uncannily well-tailored, Microsoft acknowledges, but that's not because it's violating data privacy principles. Per the paper: "Because copilots produce results tailored to you, you may be concerned that your data is being trained into an LLM or could be seen or benefit other users or tenants. That is not the case."

Microsoft gives this high-level explanation for how Copilot processes information while still respecting an organization's security policies:

Copilots use a variety of techniques to ensure highly tailored results without training your data into the models; techniques like "Retrieval Augmented Generation (RAG)" in which your copilot prompt is used to retrieve relevant information from your corpus using semantic search with an access token that ensures only data you are permitted to see can be used. That data is fed into the LLM as "grounding," along with your prompt, which enables the LLM to produce results that are both tailored for you and cannot include information you are not authorized to see.
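The mechanics are easier to see in a sketch. The Python below is a minimal, hypothetical illustration of permission-aware RAG (the corpus, roles, and keyword-overlap stand-in for semantic search are invented for the example; real systems use vector embeddings and genuine access tokens): retrieval filters the corpus by the caller's permissions first, and only what survives is handed to the model as grounding for that single request.

```python
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    text: str
    allowed_roles: set[str]  # roles permitted to read this document

# Toy in-memory corpus standing in for an organization's content.
CORPUS = [
    Document("doc-1", "Q3 budget forecast: travel spend up 12 percent.", {"finance"}),
    Document("doc-2", "All-hands agenda: product roadmap review.", {"finance", "engineering"}),
]

def retrieve(prompt: str, user_roles: set[str]) -> list[Document]:
    """Permission-filtered retrieval: documents the caller cannot see are
    excluded before relevance is even scored. Keyword overlap stands in
    for semantic search here."""
    terms = set(prompt.lower().split())
    hits = []
    for doc in CORPUS:
        if not (doc.allowed_roles & user_roles):
            continue  # the access check happens at retrieval time
        score = len(terms & set(doc.text.lower().split()))
        if score:
            hits.append((score, doc))
    return [doc for _, doc in sorted(hits, key=lambda pair: -pair[0])]

def answer_with_rag(prompt: str, user_roles: set[str]) -> str:
    grounding = "\n".join(doc.text for doc in retrieve(prompt, user_roles))
    # Grounding rides along with the prompt for this one request only;
    # nothing is written back into the model's weights.
    return f"[LLM call]\nContext:\n{grounding}\n\nQuestion: {prompt}"

print(answer_with_rag("What is the budget forecast?", {"finance"}))
```

The design point worth noticing is where the check sits: permissions are enforced before anything reaches the model, so the LLM never receives content the requesting user couldn't open directly.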

"Any sufficiently advanced technology is indistinguishable from magic," Arthur C. Clarke famously aphorized. One possible corollary, phrased clumsily, is, "Just because a technology seems markedly advanced, that doesn't mean it's doing anything nefarious."

