INTERNATIONAL

Deepfake Audio of the Philippine President Calling for Military Action Against China Raises Concerns

A fabricated audio recording purporting to capture Philippine President Ferdinand Marcos Jr. ordering his armed forces to confront China has unnerved government officials in Manila, who warned that the clip could affect the country’s foreign policy.

In the manipulated recording, a deepfake voice imitating Marcos Jr. is heard telling his forces to step in if China poses a danger to the Philippines, adding that he would not stand by if Beijing continued to harm Filipinos.

Deepfake technology uses artificial intelligence to create synthetic media in which a person’s voice or likeness is partially or completely replaced with someone else’s.

“We must not compromise on even one person in order to defend what is rightfully ours,” the speaker in the fabricated recording declares. The clip was reportedly posted on a YouTube channel with thousands of followers. According to the South China Morning Post, the audio was accompanied by a montage of images showing Chinese vessels in the South China Sea.

The Presidential Communications Office (PCO) confirmed on Tuesday night that the recording was entirely fabricated and warned the public about it.

“It has come to the attention of the Presidential Communications Office that there is video content posted on a popular video streaming platform circulating online that has manipulated audio designed to sound like President Ferdinand R. Marcos Jr.,” the PCO said in a statement.

“The goal of the audio deepfake is to give the impression that the President has ordered the Philippine Armed Forces to take action against a certain foreign nation. No such directive exists, nor has one ever been given,” it continued.

The PCO said it is actively developing strategies to counter misinformation, fake news, and disinformation through its Media and Information Literacy Campaign.

“We are also closely coordinating and working with government agencies and relevant private sector stakeholders to actively address the proliferation and malicious use of video and audio deepfakes and other generative AI content,” added the statement.
