Microsoft has released its first annual Responsible AI Transparency Report, offering insight into the company’s ethical AI efforts. The report explains how Microsoft builds generative AI applications, decides whether to release them, supports customers in using AI responsibly, and continues to evolve its responsible AI practices.
Here are some highlights from the report:
- Microsoft has implemented a framework to map, measure, and manage risks in generative AI models. This framework informs decisions about planning, safeguards, and the appropriate use of generative AI systems.
- Risks are managed at both the platform and application levels, with monitoring, feedback, and incident-response systems in place to catch previously unknown risks. Defined risk-measurement procedures guide how Microsoft develops and uses generative AI systems, and responsible AI practices are integrated into engineering teams, the AI development lifecycle, and tooling.
- Microsoft has released 30 responsible AI tools with more than 100 features to support customers, covering areas such as risk mapping, real-time detection, and ongoing monitoring (a sketch of real-time detection follows this list). Customers also receive detailed documentation about AI applications’ capabilities, limitations, and intended uses.
- The AI Assurance Program helps customers ensure their AI applications meet legal and regulatory requirements.
- Microsoft will defend commercial customers against third-party copyright infringement claims related to Azure OpenAI Service, Copilot, or their outputs, provided customers meet basic conditions, such as not intentionally attempting to generate infringing content and using the required guardrails and content filters.
- Microsoft continues to train its employees on responsible AI: in 2023, 99% of Microsoft employees completed mandatory responsible AI training, underscoring the company’s commitment to ethical AI practices.
- Microsoft supports various programs and regularly publishes research to advance responsible AI.
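The report itself is narrative, but to make “real-time detection” concrete, here is a minimal sketch of screening model output with Azure AI Content Safety, one of the services in Microsoft’s responsible AI tooling. It assumes the azure-ai-contentsafety Python SDK; the endpoint, key, sample text, and severity threshold are placeholder assumptions, not values from the report.

```python
# A minimal sketch of real-time content detection, assuming the Azure AI
# Content Safety Python SDK (pip install azure-ai-contentsafety).
import os

from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

# Placeholder credentials read from the environment; set these before running.
client = ContentSafetyClient(
    endpoint=os.environ["CONTENT_SAFETY_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"]),
)

# Analyze a piece of generated text before returning it to the user.
response = client.analyze_text(
    AnalyzeTextOptions(text="Example model output to screen.")
)

# Each analyzed category (e.g., hate, self-harm, sexual, violence) carries a
# severity score; an application might block output above a chosen threshold.
SEVERITY_THRESHOLD = 2  # assumed application policy, not from the report
for result in response.categories_analysis:
    flagged = result.severity >= SEVERITY_THRESHOLD
    print(f"{result.category}: severity {result.severity} (blocked: {flagged})")
```

In a production application, a check like this would typically run on both user input and model output, feeding flagged cases into the monitoring and incident-response processes the report describes.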
Microsoft’s full Annual Responsible AI Transparency Report is available on the company’s website.