
New Study Finds Revealing Patterns in Chinese Internet Censorship

June 19, 2012

It is well established that China’s propaganda authorities employ a variety of techniques in attempting to control the spread of information on social networks, but a new study suggests the government’s last line of defense, an army of human censors who manually excise posts, is operating differently than previously thought.

Instead of simply censoring topics that are critical of the government or that make China look bad, the study finds, the country’s human censors specifically target posts that could lead to protests or other forms of collective action, leaving ample room for China’s web users to criticize their government.

The study, recently released by Harvard University’s Institute for Quantitative Social Science, also takes the first steps toward using censorship trends to predict the behavior of the Chinese government, examining cases in which major political events were preceded by drastic changes in censorship patterns.

Conducted by Harvard political scientist Gary King in conjunction with Ph.D. candidates Jennifer Pan and Margaret Roberts, the study focuses on longer form blogs and message boards, leaving aside China’s most popular Twitter-like microblog platforms, known as weibo. The findings nevertheless provide useful new insights both into the ways China censors information online and the relationship of that censorship to the government’s actions in the real world.

“This is an enormous program. Hundreds of thousands of people are involved to help the government keep secrets…and the interesting paradox is an enormous program like that, designed to keep people from seeing things, actually exposes itself,” Mr. King said in an interview. “An elephant leaves big footprints.”

Mr. King is quick to point out that the study, based on data collected by social media monitoring firm Crimson Hexagon, does not look at which websites China blocks through the Internet filtering system widely known as the “Great Firewall,” or at the many sensitive keywords censors use to control what Chinese users search for and post on social media sites.

The average Chinese netizen can use clever wordplay and wit to skirt these first two mechanisms, Mr. King says, arguing that the real threat to free speech in China comes from the armies of censors, employed by both the government and Internet companies, who manually screen and delete social network posts.

After examining more than 11 million posts made on 1,382 Chinese social media websites, the study estimates that roughly 13% of all blog posts in China are censored.

Seeking to determine the relationship between the content of a post and the likelihood it would get censored, the study broke down posts into three groups based on political sensitivity. The most sensitive category included terms like “Chen Guangcheng” (the blind legal activist who recently spent six days seeking refuge inside the U.S. embassy) and “Tiananmen”; the middle range flagged phrases like “one-child policy” and “environment and pollution”; while the lowest included phrases like “traffic in Beijing” and the names of popular video games.

Censorship rates across the three categories were closer than expected, the study found: Words of the highest sensitivity were censored 24% of the time, while medium-range words were censored 17% of the time and the least sensitive 16%.

Re-examining the data, the researchers found that not all seemingly-sensitive posts were censored equally. Complaints about power shortages during the spring of 2011 and speculation about the end of the one-child policy during the 2011 National People’s Congress, for example, were generally left untouched.

“This is a city government that treats life with contempt, this is government officials run amuck, a city government without justice, a city government that delights in that which is vulgar, a place where officials all have mistresses,” rants one Internet user in an uncensored post cited in the study.

“Negative posts do not accidentally slip through a leaky or imperfect system,” the paper notes. “The evidence indicates that the censors have no intention of stopping them, instead they are focused on removing posts that have collective action potential, regardless of whether or not they cast the Chinese leadership and their policies in a favorable light.”

Following bombings in protest of forced evictions in Fujian province in May 2011, the study found, posts critical of the government were cut – but so were posts that supported the government. The study also found that local social media websites, such as popular local bulletin-board services that work as online message boards, are increasingly censored following events specific to certain areas.

Following the Japan earthquake in 2011, for instance, posts about iodine – which many incorrectly believed could help protect against radiation – that led to a run on salt in grocery stores were removed from local services, but left up on national forums.

The paper reasons this is because, “localized, collective organization is not tolerated by the censors, regardless of whether it supports the government or criticizes it.”

The study found only two exceptions to the rule: Posts containing pornography or criticism of China’s Internet censorship were almost universally cut, regardless of when users posted or the degree of their criticism.

Perhaps not surprisingly, Mr. King noted in the interview, the censors appear to be harsher on criticism of themselves than they are on criticism of the government.

More surprising were findings that suggest that changes in censorship might be used to predict major political moves by Chinese authorities. In three cases — a treaty with Vietnam over a dispute in the South China Sea, the demotion of former Chongqing police chief Wang Lijun and the arrest of dissident artist Ai Weiwei – the study found drastic changes in censorship patterns occurred several days before the events took place.

In the case of Mr. Ai, researchers found that deletions of posts about the artist began to rise five days before his arrest, prior to any public signs or warnings that he would be arrested. Checking the rise in deletions against censorship rates for Ai Weiwei discussions throughout the year, Mr. King found the jump in censorship was statistically the highest of the year.
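The basic idea behind that comparison — flagging days when deletions run unusually far above a topic’s yearly baseline — can be sketched in a few lines of code. This is purely illustrative and not the study’s actual method; the function name, the data, and the threshold are all assumptions for the sake of the example.

```python
# Illustrative sketch: flag days when the share of deleted posts about a topic
# is unusually high relative to the rest of the year, using a simple z-score.
# The deletion-rate series and threshold below are hypothetical, not figures
# from the Harvard study.

from statistics import mean, stdev

def flag_censorship_spikes(daily_deletion_rates, threshold=1.5):
    """Return the indices of days whose deletion rate is more than
    `threshold` standard deviations above the yearly mean."""
    mu = mean(daily_deletion_rates)
    sigma = stdev(daily_deletion_rates)
    return [
        day for day, rate in enumerate(daily_deletion_rates)
        if sigma > 0 and (rate - mu) / sigma > threshold
    ]

# Hypothetical series: a baseline around 15% with a jump in the last two days,
# mimicking a rise in deletions shortly before an arrest.
rates = [0.15, 0.14, 0.16, 0.15, 0.13, 0.16, 0.15, 0.14, 0.45, 0.52]
print(flag_censorship_spikes(rates))  # → [8, 9]
```

A real analysis would need to handle trends and seasonality in posting volume, but the principle is the same: the signal is not the level of censorship, which varies by topic, but a sudden departure from a topic’s own baseline.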

“We hypothesize that the Chinese leadership took an (otherwise unobserved) decision to act approximately five days in advance and prepared for it by changing levels of censorship so that they differed from what they would have been otherwise,” Mr. King writes, adding that censorship behavior “seems to be predictive of future actions outside the Internet, [it’s] informative even when the traditional media is silent.”

The Harvard team’s findings on the predictive power of censorship are only preliminary, Mr. King said, but it’s a topic he is continuing to pursue.
