Washington: In a significant development, leading technology companies, including Amazon, Google, Meta (formerly Facebook), and Microsoft, have pledged to adhere to a set of Artificial Intelligence (AI) rules and commitments brokered by President Joe Biden’s administration on Friday.
The White House announced on Friday that seven major US companies, including OpenAI, Anthropic, and Inflection, are on board with the initiative. Their AI systems will be subject to security testing carried out, in part, by independent experts. This step is crucial to guard against potential major risks, such as biosecurity and cybersecurity threats. These safeguarding commitments aim to ensure the safety of AI products before they are released to the public.
With a drastic surge in consumption of and investment in generative AI tools capable of producing human-like text and images, the budding technologies have drawn both admiration and concern. While these tools have proven instrumental in boosting productivity, their potential to deceive people and spread disinformation is a growing worry.
To address these issues, the companies have committed to adopting methods for reporting vulnerabilities in their AI systems. They will also employ digital watermarking techniques to help differentiate between real and AI-generated images, such as deepfakes. Additionally, they will publicly disclose flaws and risks, including those related to fairness and bias.
The move comes amid calls from technology executives for AI regulation, and follows Senate Majority Leader Chuck Schumer's stated intention to introduce legislation to regulate AI, an issue that has drawn bipartisan interest.
While the White House’s efforts have been commended, some critics remain skeptical of voluntary commitments. Nonetheless, the collaboration between the government and major tech players marks a significant step toward promoting responsible AI development.