Meta AI Fulfilling Left’s Dream

Meta announced on Monday that it would establish a team to combat disinformation and the abuse of artificial intelligence in the lead-up to the European Parliament elections amid concerns over misleading AI-generated election information, Reuters reported.

Meta’s head of EU affairs Marco Pancini said the tech company’s Elections Operations Center would identify possible election interference threats and deploy real-time mitigations to stop them.

The Elections Operations Center would consist of experts from Meta’s various teams, including data science, intelligence, research operations, engineering, and content policy. It would focus on combating influence operations, misinformation, and other forms of election interference while countering the risks related to AI abuse, Pancini said.

While the effort may be a noble one, Meta may want to turn its attention inward and deal with the misinformation its own artificial intelligence bot peddles.

In a recent post on X, Libs of TikTok pointed out that Meta’s AI bot conveniently omitted Donald Trump from its list of US presidents, jumping from the 44th President, Barack Obama, directly to Joe Biden, whom Meta AI listed as the 45th President.

Two hours later, Libs of TikTok tried again, and this time Meta AI got it right, listing Trump as the 45th President and Biden as the 46th President.

The rapid growth of artificial intelligence has triggered concerns that the technology could be used to mislead voters, particularly through the use of AI-generated images, videos, and audio.

Before the New Hampshire primary, a political consultant used an AI-generated voice of President Biden in robocalls to urge Democrats in the Granite State not to vote in the primary.

In early February, the Federal Communications Commission approved a new rule banning robocalls that use AI-generated audio under the Telephone Consumer Protection Act of 1991, which restricts spam callers from using pre-recorded or artificial voice messages.

Under the new rule, the FCC can fine any campaign or company using AI-generated voices as much as $23,000 a call. The rule also allows recipients of such calls to sue the companies that produce them and empowers state attorneys general to crack down on companies that employ the technology for robocalls in their states.

Copyright 2024,