The White House has announced a new initiative to promote responsible American innovation in the field of artificial intelligence (AI). As part of the initiative, the White House hosted a first-of-its-kind Artificial Intelligence Summit. Vice President Harris emphasized the ethical, moral, and legal responsibility of private-sector companies to ensure the safety and security of their products and to comply with the law. The White House also emphasized the importance of accountability and external scrutiny for AI-driven products, as well as the role of company executives as key individuals in protecting users' rights.
Lina Khan, the chair of the U.S. Federal Trade Commission (FTC), published an opinion essay warning of the risks arising from the development of AI technology. Ms. Khan highlighted the risks AI poses to fair competition, as well as the dangers of AI-enabled fraud and automated discrimination. She called on enforcement agencies, public officials, and lawmakers to act on the matter promptly. Ms. Khan also emphasized that the FTC is committed to taking regulatory action against any unfair use of the technology, under its dual mandate of promoting fair competition and protecting consumers from unfair or deceptive practices.
The U.S. Equal Employment Opportunity Commission (EEOC), the U.S. Department of Justice’s Civil Rights Division (DOJ-CR), the Consumer Financial Protection Bureau (CFPB), and the FTC also issued a joint statement addressing the risks of AI. The statement emphasizes that AI systems may perpetuate unlawful bias, automate unlawful discrimination, and produce other harmful outcomes. It also describes the actions each agency has taken to date to ensure that the use of emerging automated systems complies with the law.
The CFPB published a circular confirming that federal consumer financial protection laws fully apply to AI systems. The DOJ-CR recently submitted a court brief explaining that the federal Fair Housing Act also applies to algorithm-based tenant screening services. The EEOC published a technical assistance document explaining how the federal Americans with Disabilities Act applies to the use of AI in decision-making about job applicants and employees. The FTC issued a report evaluating the use and impact of AI in combating online harm. The report outlines significant concerns that AI tools can be inaccurate and biased, and that they can contribute to deceptive trade practices.
On the other side of the Atlantic, members of the European Parliament have reached an initial agreement on the text of the Artificial Intelligence Act (AI Act). The AI Act aims to promote transparency in the use of AI and to protect the fundamental rights of individuals within the EU. It classifies AI systems into four risk-based categories (unacceptable, high, limited, and minimal risk) and establishes the obligations imposed on their operators.
Among other things, the updated draft of the AI Act specifies strict rules for the deployment of generative AI tools, such as ChatGPT and Midjourney. Under the updated draft, companies developing and operating such tools must disclose any copyrighted material used in their development. The draft AI Act would also require that AI models be developed in ways that minimize risks to individuals’ health, safety, and fundamental rights, as well as to the environment, democracy, and the rule of law. Where a risk cannot be mitigated, the remaining risks and the reasons they cannot be reduced must be documented.
The French data protection authority (CNIL) published an action plan for AI comprising four key prongs: understanding the functioning of AI systems and their impact on people; enabling and guiding the development of privacy-friendly AI; federating and supporting innovative players in the AI ecosystem in France and Europe; and auditing and controlling AI systems.
CNIL’s action plan will examine questions relating to fairness, the data processing underlying the operation of AI tools, the protection of online information against scraping for use in AI tools, and the implications for individuals’ rights over their data. CNIL plans to publish recommendations on the development and design of AI systems and their use in scientific research, the division of responsibilities among the stakeholders of AI tools, and the application of the various GDPR principles to AI tools.
At the industry level, the executives of OpenAI, the developer and operator of AI products such as ChatGPT, DALL·E 2, and Copilot, published an article on the governance of superintelligence. Their proposal for governance includes three pillars. First, coordination among the leading development efforts, to ensure that superintelligence is developed in a manner that allows the safe integration of AI systems into society. Second, the establishment of an agency to monitor the development of superintelligence technologies. The agency would oversee any initiative involving AI capabilities above a certain threshold, conduct system audits, test compliance with safety standards, and place restrictions on deployment and levels of security. Third, active research into making superintelligence safe, to which OpenAI and other stakeholders are already dedicating substantial efforts.
Microsoft’s president, Brad Smith, also recently discussed the regulation of AI. Mr. Smith called for the regulation of AI systems and outlined a series of steps he believes are necessary. For example, AI systems in critical infrastructure should be deployed in a way that allows them to be completely shut off or heavily restricted, like emergency brakes on trains. Mr. Smith also proposed a governmental licensing requirement for companies seeking to deploy high-capability AI models. The licensing process would require notifying the government of experiments with such systems, sharing the results of those experiments, ongoing monitoring of the system after deployment, and reporting unexpected problems that arise during use.
Mr. Smith further stated that companies must be held legally responsible for damages caused by AI. He also supports the use of watermarks on AI-generated images and videos as a measure to improve transparency and prevent deception.
Click here to read the White House’s statement on new actions to promote responsible AI.
Click here to read the opinion essay that Lina Khan, the chair of the FTC, published.
Click here to read the joint statement of U.S. federal agencies on enforcement efforts against discrimination and bias in automated systems.
Click here to read the EU Parliament’s draft of compromise amendments to the AI Act.
Click here to read the action plan of the French data protection authority (CNIL) regarding AI.
Click here to read OpenAI’s article about the governance of superintelligence.
Click here to read the interview with Brad Smith, president of Microsoft, in the New York Times.