For the first time, social media giant Facebook will set up a ‘war room’ to counter misinformation and interference in the upcoming Taiwan presidential election, to be held January 11.
Taiwan’s Central News Agency quoted an “industry insider” as saying that Facebook is setting up the war room at its Asia-Pacific offices in Taipei and Singapore, and that it will begin operations after January 1.
The war room will bring together members of different departments, including Facebook’s public policy, political advertising, content review, legal affairs, and system security. The team will include Chen Yu, director of public policy for Hong Kong, Taiwan, and Mongolia, according to the report.
With support from headquarters in Ireland and the United States, Facebook will provide 24-hour monitoring of content, including posts and advertising, using AI to identify and delete fake accounts. By bringing experts from different departments together in one room, staff can discuss issues in person and in real time to improve the efficiency of decision making.
Facebook began addressing the problem of the dissemination of false information after the 2016 US presidential election amid allegations of foreign interference and the dissemination of fake news on the popular social media platform.
Facebook set up its first election war room in September 2018, ahead of the presidential elections in Brazil and the mid-term elections in the US. Facebook cited a post falsely claiming that the Brazilian presidential election had been postponed from October 7 to October 8 due to nationwide protests as an example of an attempt to prevent people from voting. According to Facebook, the election war room team removed the fake news item within an hour, just as the post had begun to go viral.
Facebook began addressing ad transparency in the lead-up to the Taiwan elections in November this year, requiring elected officials, those seeking office, and organizations seeking to influence public opinion to confirm their identity, disclose who is responsible, and appear in a “Paid for by” disclaimer. This came after an announcement in June of a global roll-out of Facebook’s ad transparency tools.
In November 2019, Wang Li-qiang stated in a formal submission to the Australian Security Intelligence Organisation, and in interviews with Australian media, that he had personally been involved in influencing Taiwan’s nationwide municipal elections in November 2018.
Wang stated that his success in influencing the 2018 elections led to him being tasked with interfering in the upcoming Taiwan presidential elections in January 2020, with the ultimate aim of unseating President Tsai Ing-wen.
Wang claimed that during the 2018 municipal elections, more than 200,000 fake social media accounts were created to attack the Democratic Progressive Party and influence voters to support candidates favored by the Chinese Communist Party, including successful Kaohsiung mayoral candidate Han Kuo-yu, who is now running for president under the banner of the Chinese Nationalist Party (KMT).
In the meantime, Facebook has promised to continue to refine and improve its policies and tools as part of a commitment to help protect the integrity of elections in Taiwan and around the world.
Facebook claims that its machine learning and artificial intelligence technology is now able to block or disable fake accounts more effectively.
“That said, security remains an arms race and staying ahead of these adversaries will take continued improvement over time. We’re committed to the challenge,” Facebook said.
Taiwan English News is an independent publication with no corporate funding or support. If you like what you have just read, please show your support by liking or following on Facebook or Twitter, or subscribing to Taiwan English News to receive the latest news via email.