There are two categories of dissemination channels: online and offline
This post first appeared in the online magazine New Bloom on January 6, 2020, before the presidential election took place in Taiwan. This edited version is being republished on Global Voices under a content partnership agreement.
Puma Shen (沈伯洋), assistant professor at the National Taipei University’s Graduate School of Criminology and director of the Taiwan-based NGO DoubleThink Labs, which conducts research on disinformation, speaks with Brian Hioe, who writes for the online publication New Bloom, about the dissemination of fake news just before Taiwan’s presidential elections on January 11, 2020.
In recent years, China has spread disinformation as a means to influence Taiwan's elections, and Swedish democracy watchdog V-Dem's 2019 global report describes Taiwan as the place most heavily targeted by disinformation tactics in the world. On December 31, 2019, about 10 days before the elections, the country passed the Anti-Infiltration Act as a means to crack down on the spread of election disinformation.
Brian Hioe (BH): What do you think are the key means by which disinformation disseminates in Taiwan, and what is different about the way it spreads through these platforms?
Puma Shen (PS): In general, we can divide the dissemination channels into two categories. One is ‘online’. The other is ‘offline’. Online, it’s through Facebook, or through ‘content farms’ which are often based in Malaysia, with some in Taiwan. Many content farms also have fan pages on Facebook to spread the news.
This is how they were doing it last year, anyway. This year is different. Many fan pages were deleted this year, so now they first create content farm articles and then recruit individual netizens, who earn an income by helping to spread the content. These freelancers are often Malaysian, overseas Chinese, or Taiwanese.
There are also some pro-China political parties which operate their own content farms in Taiwan. They quite often have connections with Chinese authorities.
Regarding the offline element of fake news, this is more difficult to tackle. First, there are ‘rumors’. These rumors often come from village or borough chiefs or the heads of temples. While holding events, they make use of the occasion to spread rumors.
These word-of-mouth rumors usually circulate widely in grassroots communities, either to generate fear or to spread a positive image of China.
Nowadays, the dissemination of this kind of rumor has become more advanced. Rumor mongers put the texts online or circulate them through LINE chat groups. According to our research, about half of the fake news spreading in LINE groups over the last few months originated in China. These rumors do not always [take the form of] text; about half are YouTube videos.
BH: Who is behind these content farms?
PS: Sometimes you’ll be able to track down companies that are from China, but sometimes they’re just overseas Han Chinese. They started these content farms to make money off Chinese authorities like the United Front Work Department, which has a huge budget to sponsor pro-China politicians’ election campaigns.
The social media based news outlets which have direct affiliations with the Chinese Communist Party (CCP) are mostly based in China, with a few in Hong Kong. As far as commercial content farms go, many are based in Malaysia.
BH: I remember there were attempts by Chinese people to buy Taiwanese Facebook pages last year.
PS: Yes. I think that’s less effective now. They wanted to use the pages to post content farm articles, but that’s not an effective strategy, since they would be sanctioned for circulating content that violates community rules, so the takeover of Facebook pages only lasted about a month.
BH: What’s different about the ‘fake news’ strategies adopted for this election, as compared to the past?
PS: Due to the changes in Facebook’s policy, it’s now more difficult to run content farms, so I believe that they have adjusted their strategies.
First is LINE. Disinformation continues to spread on LINE, and much of it is produced by ‘little pinks’, or online patriots, on Weibo or WeChat, China's most popular social media platforms.
Compared with LINE, disinformation spread on YouTube has become a more severe problem. Since October [2019], a large number of videos emerged on YouTube spreading disinformation. Since video production involves script writing, video editing, background music [and] adding subtitles, it requires coordinated teamwork and resources. Some of these channels can upload several videos in one day.
Last year, a survey showed that YouTube is a key source of information for the supporters of KMT's [the Kuomintang of China] presidential candidate, Han Kuo-yu. Information operations on YouTube have a strategic value.
We believe they [YouTube channels] are either operated by subcontractors of political propaganda backed by Beijing, or they are directly operated by the Chinese authorities. We observed that some of these channels use simplified Chinese in their subtitles and captions. Some YouTubers use idiomatic phrases from China when they talk; sometimes, their subtitles are mixed with simplified Chinese. Recently, more Taiwanese YouTubers have been joining the disinformation industry.
BH: So would you say that the operators of disinformation have switched to hiring more Taiwanese agents?
PS: Yes, because it’s too easy to tell if the Chinese themselves are doing it. Getting Taiwanese to do it is more effective, but the people who take up this kind of job are primarily in it for the money. Usually, they just perform according to the script given to them, so if you analyze the script, you can identify where it comes from, as expressions in Taiwan and China are very different.
BH: How do you propose to combat ‘fake news’?
PS: Tracking down the sources of fake news is important. If the information is ‘made in China’, people will be more cautious about its intent and question whether it aims to influence the election.
Clarification is very important. But the majority of these information operations are not fake news; they aim to create narratives, or stories, that plant bias in people's minds.
For example, if you keep receiving information telling you that the American economy is bad while China's is big and powerful, the message would be imprinted in your head. You might then wonder if Taiwan should distance itself from the United States and develop closer economic relations with China.
Reports about China being big and powerful aren’t fake news, as there are definitely some positive aspects to its economy, so this kind of information bias is difficult to counteract.
BH: How do NGOs and government authorities address the problem?
PS: Taiwan has many NGOs fighting against disinformation. They keep track of content farms, analyze disinformation and promote media literacy.
The government also responds to fake news very quickly. Organizations such as Taiwan FactCheck Center and Co-Facts are doing fact checking on a daily basis, but we have no response system on political narratives yet. Thus far, we can only rely on disclosing the source of the narratives. This is what our organization is working on now.
BH: Is Taiwan's experience with disinformation a useful reference for other regions in the world?
PS: I think so. Take the example of Russia, [which] has engaged in disinformation operations against a number of countries, such as the Czech Republic. As a result, the Czech Republic can share its experience of dealing with Russian disinformation efforts with other countries. Similarly, Taiwan can share its experience with other countries confronting China.
Taiwan is a very good testing ground for analyzing China's disinformation operations; through our research, we have developed a sensitivity to some of the distinctive features of disinformation operators, such as their orchestration networks and linguistic characteristics.
Another example is an online campaign that took place on Instagram and Facebook one month ahead of [Taiwan's] presidential elections. Quite a number of individuals were uploading photos and videos of themselves putting their hand on their chest, and talking about how they were going to vote. The majority of them were criticizing the slow-growing economy. The campaign looked like a spontaneous act, but Taiwanese would find it very awkward because of the linguistic expression, such as the use of the hashtag ‘Declaring my voting intention’ (#宣告我的投票意志) — this is just not the way that Taiwanese talk and write.
Through our research, we have developed tools to analyze the ‘made in China’ information, and would very much like to share [our findings] with other countries and help them to resist disinformation operations from China.