Google and Microsoft: Navigating the Ethical and Legal Landscape of AI

Both Google and Microsoft have recently positioned AI at the core of their commercial offerings. Microsoft now delivers integrated AI capabilities through Microsoft 365 Copilot across its Microsoft 365 suite, alongside AI services on its Azure cloud platform. Similarly, Google bundles Gemini AI within its Workspace offerings and provides a range of powerful AI services to its Cloud customers.

Both companies are at the forefront of addressing the ethical issues and legal risks associated with AI technology, and are taking proactive steps to reassure their corporate and public sector customers that AI can be implemented safely and responsibly. This article examines and compares the approaches taken by Google and Microsoft to mitigate potential problems and foster trust in AI adoption.

Ethical Frameworks and Principles

Google and Microsoft have each established comprehensive ethical frameworks built around a common set of six principles:

  1. Fairness

  2. Reliability and safety

  3. Privacy and security

  4. Inclusiveness

  5. Transparency

  6. Accountability

Governance and Implementation

Both companies have established robust governance structures to ensure the implementation of their ethical principles.

Google's Approach

Google has created a federated, bottom-up approach combined with strong top-down support from company leadership. Over 200 full-time Google employees are dedicated to implementing ethical AI practices. Key components of their governance structure include:

  1. AI Principles reviews conducted by dedicated teams

  2. Consultations with internal experts for teams developing AI applications

  3. Adversarial proactive fairness (ProFair) testing

  4. Risk rating assessments focusing on potential impacts on people and society

Google has also invested heavily in internal education, with over 32,000 employees engaging in AI Principles training since 2019.

Microsoft's Approach

Microsoft has implemented a multi-layered governance structure to operationalize its responsible AI practices:

  1. Microsoft Board oversight through the Environmental, Social, and Public Policy Committee

  2. Responsible AI Council co-led by senior executives

  3. Office of Responsible AI (ORA) tasked with policy governance and implementation

  4. AI Ethics and Effects in Engineering and Research (Aether) Committee for thought leadership

  5. Engineering teams creating AI platforms, applications, and tools

Microsoft has also developed the Responsible AI Standard, which defines goals, requirements, and practices for all AI systems developed by the company.

Addressing Specific Concerns

Both Google and Microsoft are tackling key ethical and legal issues that concern their corporate customers.

Data Privacy and Security

Google emphasizes privacy and security throughout its AI systems, ensuring that they respect user privacy and safeguard private and confidential information. The company has implemented strong safety measures at the AI model, platform, and application levels.

Microsoft has integrated privacy and security considerations into its Responsible AI Standard and requires compliance with existing privacy and security programs. The company also provides guidance on data privacy concerns related to its AI tools.

Intellectual Property and Copyright

Both companies are grappling with the complex legal landscape surrounding AI and copyright. Google is actively participating in discussions and legal proceedings related to the use of copyrighted material in AI training.

Microsoft has entered into agreements with publishers, such as Taylor & Francis, to improve the performance of its AI products while addressing copyright concerns. The company is also updating its contracts and terms of service to clarify the responsibilities of users regarding copyright and intellectual property.

Transparency and Explainability

Google has developed resources like the TensorFlow open-source toolkit to provide model transparency in a structured, accessible way. The company also emphasizes the importance of explainable AI in building trust with users and stakeholders.

Microsoft has incorporated transparency requirements into its Responsible AI Standard and provides documentation on its AI systems to help users understand how decisions are made. The company also conducts regular audits of its AI systems to ensure ongoing compliance with ethical guidelines.

Corporate Customer Reassurance

Both Google and Microsoft are taking steps to reassure their corporate customers that they can use AI safely and responsibly.

Google provides resources and best practices for responsible AI development, including fairness guidelines and technical references. The company also offers AI ethics training and workshops to help customers implement ethical AI practices within their organizations.

Microsoft has developed the AI Business School to help organizations understand the implications of AI and implement it responsibly. The company also provides transparency documents and resources to help customers navigate the ethical and legal landscape of AI adoption.

Conclusion

While the two tech giants have taken similar approaches to addressing the ethical issues and legal risks surrounding AI, there are some nuanced differences in their strategies. Google appears to place a stronger emphasis on internal education and bottom-up implementation of ethical principles, while Microsoft has developed a more structured, top-down governance model.

Both companies recognize the importance of transparency, accountability, and ongoing assessment in building trust with their corporate customers. By providing comprehensive frameworks, tools, and resources, Google and Microsoft are working to create an environment where organizations can leverage AI technologies while minimizing potential ethical and legal risks.

As the AI landscape continues to evolve, both companies will likely need to adapt their approaches to address new challenges and regulatory requirements. Public sector customers should remain engaged with these developments and work closely with their AI providers to ensure responsible and compliant implementation of AI technologies within their organizations.

 

More About Bramble Hub

Bramble Hub has been successfully connecting private sector IT companies with the public sector since 2006.
