
COMELEC urged to regulate AI and deepfakes, not content for 2025 poll campaigns

The Commission on Elections (COMELEC) must focus on regulating the use of artificial intelligence (AI) and deepfakes in campaigning for the 2025 elections, rather than regulating content, which could be viewed as censorship.

By Vincent Galaura

Jul 26, 2024

4-minute read


With the midterm elections looming, the Commission on Elections (COMELEC) must focus on regulating the use of artificial intelligence (AI) and deepfakes in campaign activities rather than on regulating content, according to lawyer Ona Caritos of the Legal Network for Truthful Elections (LENTE).

During the stakeholders’ forum and consultation dialogue titled “AI and Philippines Decoded,” organized by the poll body on July 18, the LENTE executive director pushed for transparency approaches over content regulation, acknowledging how difficult it would be to prohibit AI in election campaigns, especially on social media platforms.

Lawyers from various sectors gathered to discuss the legal aspects of regulating the use of artificial intelligence in Philippine election campaigns during a stakeholders’ forum and consultation dialogue organized by the Commission on Elections at Malcolm Hall, UP Diliman. (Photo courtesy of COMELEC)

LENTE proposed this approach in a June 3 position paper responding to COMELEC Chairman George Garcia’s proposal to ban AI in the midterm elections, noting that its previous research on disinformation policies in other countries found that content regulation could “lead to censorship and abuse.”

“Transparency initiatives are less draconian and effective in addressing the problem of disinformation by increasing the perception of accountability insofar as suppliers of political content are concerned,” the paper reads.

LENTE also said that AI content disclosure policies of social media platforms contribute to “a more robust transparency approach.”

According to Caritos, Meta (formerly Facebook), for instance, uses approaches such as AI content disclosure, content labeling, and policy enforcement through platform community guidelines to address disinformation and AI usage.

“So if you use AI and you are a creator, you need to disclose that you use AI. So [there are] labels if you take a look—AI-generated, altered, or synthetic content,” she explained.

In the context of the elections, another specific transparency approach Caritos identified is the “voluntary self-disclosure” of AI use in campaigning.

“For example, for candidates and political parties, in other jurisdictions, they’re disclosing that AI was used for a particular activity or campaign commercial,” she noted.

‘More constitutionally sound’

Meanwhile, Atty. Oliver Xavier Reyes, a Senior Lecturer at the University of the Philippines College of Law, suggested content-neutral regulation directed at social media platforms to address the use of AI and deepfakes for election materials.

“Regulation of platforms would be more effective than regulating the campaigns themselves. It’s more constitutionally sound because it is not their speech that is being regulated,” Reyes argued.

In the 2008 Chavez vs. Gonzales case, content-neutral regulation was defined as one “merely concerned with the incidents of the speech, or one that merely controls the time, place, or manner, and under well-defined standards,” while a content-based restraint amounts to censorship because it is concerned with the “subject-matter of the utterance or speech.”

Reyes said that disinformation can also be effectively managed by regulating platforms under commercial speech doctrines.

The Legal Information Institute of Cornell Law School defines commercial speech as “any speech which promotes at least some type of commerce.” Reyes added that it differs from regular speech as it involves “making expression with the idea of earning income.”

He further asserted that social media platforms are not “neutral platforms,” but are biased towards “generation of engagement,” citing the exploitation of platforms during campaigns through techniques like microtargeting.

According to Privacy International, microtargeting involves collecting people’s data to segment them into groups, allowing companies and political parties to tailor different messages and ads to each group.

“Platforms play along because it’s content, it’s promoting engagement. And I believe that is a commercial purpose enough [to] start regulating how platforms operate,” said Reyes.

Other recommendations

To ensure election information integrity and transparency, Caritos suggested that COMELEC establish a rigorous and systematic social media monitoring unit to track activity in social media spaces.

She also urged civil society and media companies to collaborate and partner with the poll body to share their expertise in social media monitoring.

“If COMELEC wants to establish a social media unit, then there’s a need for them to learn as fast as they can. And they can get this expertise from other CSOs (civil society organizations) doing fact-checking, for example,” said Caritos.

Moreover, she recommended adopting a code of conduct, saying, “We’ve been pushing for a code of conduct for PR firms and individuals engaged in political propaganda since 2019. I hope that AI use would be included in this code of conduct.”

Earlier, Garcia said COMELEC would release guidelines next month for the use of AI in promoting candidates in the 2025 midterm elections, based on the outcomes of the stakeholders’ forum and consultation dialogue.
