11 December 2017
China expects WeChat, one of the country's largest social media platforms, to take responsibility for tackling fake news, blocking sensitive keywords and taking down "illegal content". Photo: Bloomberg

How WeChat monitors information flow and tackles fake news

In this era of “fake news”, it is important for social media platforms to be able to sift through the onrush of information and distinguish fact from falsehood. Tencent’s WeChat, arguably China’s most influential social media app, uses an internal tool, WeSeer, to monitor content and tackle disinformation and misrepresentation.

In an interview with Tech in Asia, Huamin Qu, who opened a joint artificial intelligence (AI) lab with WeChat, described WeSeer as powerful software that can predict which articles are likely to go viral in the next hour, pinpoint the key accounts driving the spread of information, and identify stories of interest, breaking readership down by segments such as location, age and community.
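WeSeer itself is proprietary and the interview gives no implementation detail, but the capabilities Qu lists amount to familiar analytics over a share log. The sketch below is a minimal Python illustration of that idea; the event schema and function names are hypothetical, not WeSeer's actual data model or API.

```python
from collections import Counter

def rank_by_share_velocity(share_events, window_minutes=60):
    """Rank articles by how many shares they received in the past hour.

    share_events: list of dicts such as
        {"article_id": "a1", "minutes_ago": 12, "reader_city": "Shenzhen", "reader_age": 34}
    (hypothetical schema, for illustration only).
    """
    recent = Counter(
        e["article_id"] for e in share_events if e["minutes_ago"] <= window_minutes
    )
    return recent.most_common()

def readership_segments(share_events, article_id):
    """Break one article's readers down by city and ten-year age bracket."""
    by_city, by_age = Counter(), Counter()
    for e in share_events:
        if e["article_id"] != article_id:
            continue
        by_city[e["reader_city"]] += 1
        by_age[10 * (e["reader_age"] // 10)] += 1
    return {"city": dict(by_city), "age_bracket": dict(by_age)}
```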

“It’s a double-edged sword,” said Qu, who is also a professor of computer science at the Hong Kong University of Science and Technology, noting that while big data analytics can be used to catch criminals, it can also be turned on other groups of people.

With its Great Firewall keeping information from the outside world from filtering into the country’s cyberspace, China expects domestic online platforms to take responsibility for tackling fake news, blocking sensitive keywords and taking down “illegal content”, which ranges from celebrity gossip to political topics.

As early as a year before the 19th National Congress of the Communist Party of China opened, WeChat began blocking relevant keywords, such as “19th Party Congress Power”, according to a report released by the Citizen Lab, a research laboratory at the University of Toronto.

As Beijing continues to tighten its grip on the country’s cyberspace, the pressure to control information will only increase for WeChat, the influential social network with almost a billion users.

In addition to predicting an article’s popularity, automating rumor detection has become an increasingly high priority for Qu’s research team.

According to Tech in Asia’s report, articles shared between accounts, both on personal news feeds and via public accounts, trace a path as they travel through the social network. In WeChat, these traces are particularly distinctive because the platform is a closed system: only first-degree contacts can see a user’s Moments posts.
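One way to picture those traces: each re-share links an account to the contact it saw the article from, so a cascade can be stored as a simple tree of share edges. The sketch below is illustrative only; the share-log format is an assumption, not anything WeChat exposes.

```python
from collections import defaultdict

def build_propagation_tree(shares):
    """Turn a share log into parent -> children edges.

    shares: list of (sharer, source) pairs, where `source` is the account the
    sharer saw the article from, or None for the original publisher.
    """
    children, roots = defaultdict(list), []
    for sharer, source in shares:
        if source is None:
            roots.append(sharer)
        else:
            children[source].append(sharer)
    return roots, children

def cascade_depth(roots, children):
    """Count how many layers the article has travelled through."""
    def walk(node):
        return 1 + max((walk(c) for c in children.get(node, [])), default=0)
    return max((walk(r) for r in roots), default=0)
```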

Some paths have an “overall pattern just like a virus”, Qu told Tech in Asia. “You suddenly capture a lot of attention or you just slowly build up.”

“Some articles pass through 50 to 60 layers within the social network as they are shared from account to account,” Qu said, adding that his team can classify articles by the sharing accounts’ behavior and identify potentially false information by analyzing these propagation paths.
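Qu does not describe the classifier itself. A common approach in published rumor-detection work is to reduce each cascade to a few numeric features, such as size, depth and how front-loaded the spread is, and then train a supervised model on cascades that human reviewers have already labeled. The following is a hedged sketch under those assumptions, with an illustrative feature set and placeholder training examples.

```python
from sklearn.linear_model import LogisticRegression

def cascade_features(share_minutes, layers):
    """Summarize one article's propagation cascade.

    share_minutes: sorted share timestamps, in minutes since publication.
    layers: number of re-share layers the article has passed through.
    """
    total = len(share_minutes)
    first_hour = sum(1 for t in share_minutes if t <= 60)
    return [
        total,                       # cascade size
        layers,                      # depth (Qu cites 50 to 60 layers for some articles)
        first_hour / max(total, 1),  # share of activity in the first hour ("virus-like" spike)
    ]

# Placeholder examples standing in for reviewer-labeled cascades.
X = [
    cascade_features([1, 2, 3, 5, 8, 40], layers=55),  # fast, deep cascade
    cascade_features([30, 200, 900], layers=3),        # slow, shallow cascade
]
y = [1, 0]  # 1 = flagged as a potential rumor, 0 = ordinary article
clf = LogisticRegression().fit(X, y)
```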

Qu’s research team also delves deeper into the accounts that shared the information to assess how credible they are. For instance, if a computer science professor shares something on artificial intelligence, the article should be considered more legitimate. “But it’s still a work in progress,” Qu admitted.
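One simple way to express that idea is to weight each share by how well the sharer’s known expertise overlaps the article’s topic, so a computer science professor sharing an AI piece lifts the article’s score. The code below is a hypothetical illustration of the principle, not the team’s actual method.

```python
def credibility_score(article_topics, sharers):
    """Average topical overlap between an article and the accounts sharing it.

    article_topics: set of topic tags, e.g. {"artificial intelligence"}
    sharers: list of dicts like {"name": ..., "expertise": set_of_topics}
    (hypothetical schema, for illustration only).
    """
    if not sharers or not article_topics:
        return 0.0
    overlaps = [
        len(article_topics & s["expertise"]) / len(article_topics)
        for s in sharers
    ]
    return sum(overlaps) / len(overlaps)

# A share from a domain expert raises the score; shares from unrelated accounts do not.
score = credibility_score(
    {"artificial intelligence"},
    [{"name": "cs_professor", "expertise": {"computer science", "artificial intelligence"}}],
)
```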

Other tech firms are also ramping up spending and effort to help weed out low-quality content and fake news.

As Tech in Asia reports, Toutiao, the news aggregation platform with over 120 million daily active users, has rapidly grown its team of content auditors and reviewers in Tianjin.

A source from the company told Reuters that it has almost a thousand reviewers, up from 30 to 40 two years ago.

Chinese social networking app Momo has also expanded its content moderation team in Tianjin. Since launching its app in 2011, it has hired more than 400 content reviewers, a spokesperson told Tech in Asia.

Wages for content auditing jobs across different tech firms range from US$455 to US$910 a month, according to job posts on Lagou, a Chinese hiring site.
