The U.S. Federal Communications Commission (FCC) has announced the unanimous adoption of a declaratory ruling recognizing that calls made with AI-generated voices are “artificial” under the U.S. Telephone Consumer Protection Act (TCPA).
The ruling, which has already taken effect, makes it illegal to use artificial intelligence (AI) to generate voices in automated calls, a technique used primarily in fraudulent robocalls that target consumers. State Attorneys General across the U.S. now have additional tools to pursue these schemes and protect the public against scams and misleading information. Previously, Attorneys General could only act against the fraud carried out through an automated call; now the use of an AI-generated voice in such a call is itself unlawful. In practice, U.S. law enforcement has stronger means of dealing with the problem, such as blocking phone numbers, imposing fines, and filing lawsuits.
Automated calls have become particularly widespread in recent years, as the technology can mislead consumers by mimicking the voices of celebrities, political candidates, and close family members. For instance, the FCC recently accused Lingo Telecom of being the source of automated call traffic in which an AI-generated imitation of President Joe Biden’s voice urged Democratic voters not to vote in the primary. According to the FCC’s announcement, the company’s automated calls began on January 21, two days before the presidential primary in New Hampshire. The calls were spoofed to appear as if they came from the number of a former Democratic Party official in New Hampshire.
Click here to read the FCC announcement of the unanimous adoption of a Declaratory Ruling that recognizes calls made with AI-generated voices are “artificial” under the Telephone Consumer Protection Act (TCPA).
Click here to read the FCC Notice of Suspected Illegal Traffic.