In an alarming incident, a fabricated audio clip purporting to feature Philippine President Ferdinand Marcos Jr. ordering his military to act against China has raised serious concern among Manila's government officials, who caution that it could have implications for the nation's foreign policy.
The manipulated audio features a deepfake voice of Marcos Jr., in which he purportedly directs his military to intervene if China poses a threat to the Philippines, adding that he cannot tolerate further harm to Filipinos by Beijing.
Deepfake technology involves the use of artificial intelligence to replace aspects of a person’s appearance or voice with those of another individual in synthetic media.
“We cannot compromise even a single individual just to protect what rightfully belongs to us,” says the voice in the faked audio, which was reportedly released via a YouTube channel with thousands of subscribers. The audio was accompanied by a slideshow of photos showing Chinese vessels in the South China Sea, the South China Morning Post reported.
On Tuesday night, the Presidential Communications Office (PCO) issued a public warning about the manipulated media and confirmed that it was entirely fake.
“It has come to the attention of the Presidential Communications Office that there is video content posted on a popular video streaming platform circulating online that has manipulated audio designed to sound like President Ferdinand R. Marcos Jnr,” the PCO said in a statement.
“The audio deepfake attempts to make it appear as if the President has directed our Armed Forces of the Philippines to act against a particular foreign country. No such directive exists nor has been made,” it added.
The PCO said that it is actively working on measures to combat fake news, misinformation, and disinformation through its Media and Information Literacy Campaign.
“We are also closely coordinating and working with government agencies and relevant private sector stakeholders to actively address the proliferation and malicious use of video and audio deepfakes and other generative AI content,” it said.