How American Tech Giants Will Self-Regulate

The use of Artificial Intelligence (AI) systems can undoubtedly bring societal benefits, enhance economic growth, support innovation, and reshape global competitiveness among businesses and governments. At the same time, it is commonly acknowledged that certain characteristics of AI systems raise concerns, especially regarding safety, security and the protection of fundamental rights. These concerns have been acknowledged by EU institutions, such as the European Parliament, as well as at the US level (White House fact sheet, September 12, 2023).

Against this background, the European Commission unveiled a proposal for regulating Artificial Intelligence (the AI Act) in April 2021. The third trilogue on this proposal will take place on September 26th, 2023, between the Commission, the European Parliament and the Council of the EU. On the other side of the Atlantic, discussions on the regulation of AI have begun in the US Congress. Three members of Congress introduced the National AI Commission Act, described as ‘bipartisan and bicameral legislation to create a national commission to focus on the question of regulating Artificial Intelligence (AI)’. The US Senate has also started to address oversight of AI.

On September 12th, 2023, eight more American tech companies (Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability) signed up to President Joe Biden’s voluntary commitments governing AI (the second round of voluntary commitments). According to the White House, ‘these commitments represent an important bridge to government action and are just one part of the Biden-Harris Administration’s comprehensive approach to seizing the promise and managing the risks of AI’. The first round of voluntary commitments (July 21, 2023), signed by Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI, aims to help move toward safe, secure, and transparent development of AI technologies.

Under these eight voluntary commitments, the fifteen big tech companies agreed to ensure the safety of their products before introducing them to the public. The commitments rest on three principles that must guide future AI systems: safety (commitments 1 and 2), security (commitments 3 and 4), and trust (commitments 5-8).

The voluntary commitments include allowing independent experts to test the internal and external security of AI systems, and in particular the risks these systems pose to national security and society at large, before they are released (commitment 1). Commitment 2 acknowledges the importance of information sharing on the management of AI risks between the AI industry and the US government, as well as civil society and academia. The companies will also put security first by investing in cybersecurity and insider-threat safeguards to protect proprietary and unreleased model weights (commitment 3), and by facilitating third-party detection and reporting of vulnerabilities in their AI systems (commitment 4).

Earning people’s trust is also considered fundamental. To achieve this, the companies have agreed to develop robust technical mechanisms, such as watermarking systems, to ensure that users know when audio or visual content has been generated by AI (commitment 5); a toy sketch of the watermarking idea follows this paragraph. They have also agreed to publicly report the capabilities and limitations of their models or AI systems, and the areas in which they may be used appropriately or inappropriately (commitment 6), and to prioritise research on the societal risks that AI systems may pose, including harmful bias and discrimination, and risks to privacy (commitment 7). The final commitment is to develop and deploy advanced AI systems that help address society’s greatest challenges, such as climate change mitigation and adaptation, early cancer detection and prevention, and combating cyber threats, as well as to foster the education and training of students and workers so they can prosper from the benefits of AI (commitment 8).
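To make the watermarking idea concrete, the sketch below (in Python, using the Pillow imaging library) shows one deliberately naive way a provenance marker can be hidden in an image. The commitments do not prescribe any particular technique, and the tag string and function names here are assumptions made purely for illustration.

# A naive illustration of image watermarking: hide a short provenance
# tag in the least significant bits of the red channel. This is NOT the
# scheme any signatory actually uses; it only demonstrates the concept.
from PIL import Image

TAG = "AI-GENERATED"  # hypothetical provenance marker

def embed_tag(image: Image.Image, tag: str = TAG) -> Image.Image:
    # Encode the tag as a bit string and write one bit per pixel,
    # overwriting the least significant bit of the red channel.
    bits = "".join(f"{byte:08b}" for byte in tag.encode("utf-8"))
    img = image.convert("RGB")  # work on a copy in RGB mode
    pixels = img.load()
    width = img.size[0]  # assumes the image has at least len(bits) pixels
    for i, bit in enumerate(bits):
        x, y = i % width, i // width
        r, g, b = pixels[x, y]
        pixels[x, y] = ((r & ~1) | int(bit), g, b)
    return img

def read_tag(image: Image.Image, length: int = len(TAG)) -> str:
    # Recover `length` bytes by reading the red-channel LSBs back in order.
    pixels = image.convert("RGB").load()
    width = image.size[0]
    bits = "".join(
        str(pixels[i % width, i // width][0] & 1) for i in range(length * 8)
    )
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8)).decode("utf-8")

# Example: tag a blank image and read the marker back.
tagged = embed_tag(Image.new("RGB", (64, 64), "white"))
print(read_tag(tagged))  # -> AI-GENERATED

A fragile scheme like this would not survive compression, cropping or resizing; the commitments speak of robust mechanisms, which in practice means watermarks designed to persist through such transformations.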

Voluntary commitments offer an action-oriented mechanism for addressing interconnected, complex and pressing issues. They are not legally binding, but they allow consensus to be reached more easily, which, given the urgency of these issues, is arguably the best way to proceed for the time being. At the same time, the European Consumer Organisation (BEUC) has expressed concerns about the intentions of the Executive Vice-President of the European Commission, Margrethe Vestager, and the European Commissioner for the Internal Market, Thierry Breton, to draft, respectively, a joint EU-US AI voluntary code of conduct and an ‘AI Pact’ for Europe.

It will therefore be interesting to see how the voluntary commitments approach is applied on both sides of the Atlantic, and what outcomes it produces.

T. Karathanasis, S. Tsipras
