Congressional attention has turned to the Orwellian plight facing the Uyghurs, a Muslim minority in Xinjiang, in Northwest China. The Chinese state has incarcerated approximately one million Muslims in Xinjiang in “political education” camps for offenses as minor as having a beard. The authorities have placed tight restrictions on the practice of their religion and the teaching of their local language in an apparent effort to assimilate them into mainstream Han Chinese culture. Technology plays a vital role in this police state and enables new levels of intrusion into the population’s daily lives.
According to Human Rights Watch, the Chinese government “imposes pervasive and constant surveillance alongside persistent political indoctrination.” Uyghurs constantly pass through checkpoints, many of which are armed with facial recognition technology. Wi-fi “sniffers” silently gather data from network devices. Xinjiang authorities have been instructed to gather biometrics for all residents between ages 12 and 65, including fingerprints, iris scans, blood types, voice samples, and DNA samples.
Presumably, these technologies enable more effective tracking of potential dissidents. The authorities reportedly use big data technology to predict whether certain individuals might pose a threat based on their everyday activities. This predictive technology also integrates data from and interacts with the security checkpoints. Much of the data collected is collated and analyzed by artificial intelligence (AI) so that the authorities can detect perceived threats through patterns of activities as mundane as local residents’ purchasing habits—which can lead to incarceration in “political education” detention camps.
The surveillance state in Xinjiang demonstrates the dark side of surveillance equipment, big data, and AI. It also indicates the speed at which China is developing and commercializing AI. China has a competitive advantage in this space because Chinese companies have access to a massive pool of data on which to train AI and fewer privacy laws to obey.
Recent reports suggest that China is exporting this technology to other countries with questionable human rights records, suggesting that Xinjiang-style extensive surveillance is likely to be a contagious malady. Moreover, certain U.S. technology brands are reportedly providing hardware that supports widescale facial recognition to Chinese companies—the same Chinese companies deeply implicated in the Xinjiang surveillance system. Well-known U.S. institutional investors also have invested in these Chinese surveillance companies.
What should the consequences be for companies implicated in the mass surveillance and incarceration of an ethnic group such as the Uyghurs? A starting point would be for the Trump administration to establish a human rights policy for China and consistently and publicly voice deep concern about the situation confronting the Uyghurs, working with allies in multilateral fora to pressure China.
The administration’s approach should include strategies to address China’s use of technology for repression. One possibility is to seek ways to deny foreign companies significantly involved in such violations access to U.S. markets, capital, and technology that they could integrate into their systems. U.S. companies also should develop policies to avoid knowingly providing technology that supports or enables the abuses. The U.S. government, technology companies, and investors all have a role to play in addressing the situation.
Indeed, action has already begun. For example, the House of Representatives included a provision in the National Defense Authorization Act (NDAA) for 2019 preventing the U.S. government from buying surveillance cameras from two large Chinese companies that currently sell to the U.S. Army. Those same two companies are deeply implicated in the surveillance system in Xinjiang, although Congress’s action was likely based primarily on concerns that their cameras could be used to spy on sensitive U.S. security infrastructure. This step is a small one but starts to create red flags for not only U.S. government entities but also companies purchasing this technology.
The United States could also consider applying sanctions authorized by the Global Magnitsky Act to the Chinese companies most deeply implicated in China’s surveillance and repression of the Uyghurs to restrict their access to global customers. Indeed, 17 senators recently requested that the Trump administration do so. The Global Magnitsky Act permits sanctions against entities “involved in gross violations of internationally recognized human rights committed against individuals in any foreign country who seek…to obtain, exercise, defend, or promote internationally recognized human rights and freedoms, such as the freedoms of religion, expression, association, and assembly…” The Executive Order implementing the Global Magnitsky Act enables the sanctioning of entities indirectly involved in such violations if they are “responsible for or complicit in, or to have directly or indirectly engaged in” human rights abuses or corruption. Such language could apply to Chinese companies knowingly providing surveillance equipment being used to control and incarcerate hundreds of thousands of Uyghurs and eliminate their culture and religion. If these companies were sanctioned under the Global Magnitsky Act, it would prevent U.S. firms from engaging commercially with them.
Congress also has begun to focus attention on the U.S. companies supplying technology to Chinese counterparts that play a key role in the Xinjiang surveillance state. In May 2018, Senators Marco Rubio (R-FL) and Chris Smith (R-NJ) wrote a letter to the commerce secretary expressing concern about the situation in Xinjiang and questioning why U.S. firms had been able to sell products to the Chinese authorities for use in surveillance systems despite existing export control measures. Those measures limit the export of equipment for crime control and detection to certain countries, including China, that might use it abusively. The challenge is that such restrictions do not cover all technology that might be used in surveillance because the regulations have not been updated to restrict the export of newer technology such as facial recognition or AI in most circumstances. The particular technology highlighted in the senators’ letter was a DNA sequencer, which was not subject to export control restrictions for crime control reasons, as those restrictions apply only to products or technologies more obviously used in policing.
Presaging potential further action, Congress also authorized a National Security Commission on Artificial Intelligence in this year’s NDAA. The primary goal of the commission is to ensure that the United States is competitive in the development of AI, but the authorizing language also asks the commission to consider the lawfulness and ethics of such technology as used by U.S. security agencies or foreign powers.
Such mostly reactive efforts are the start of what is likely to be a long process as the United States comes to terms with new technology developed at home and abroad, and its use by authoritarian states. Technology—and especially AI—in China will undoubtedly continue to develop apace regardless of U.S. laws. Yet this is no excuse for a regulatory race to the bottom. New measures—regulatory or otherwise—are likely to be needed to address: 1) the incorporation of U.S. technology into products used for gross human rights violations in Xinjiang (and elsewhere) and 2) the U.S. marketplace for Chinese (or other) companies deeply involved in the surveillance state. Both government regulation and company policies should aim to limit adverse impacts, while enabling and encouraging the development of such technology for its potential positive uses. AI in particular could be a powerful tool for good if developed thoughtfully, with appropriate human rights safeguards, as a recent report by Harvard Law’s Berkman Klein Center for the Internet and Society demonstrates.
One longer-term option could be to update export control restrictions so that dual-use U.S. technology is not integrated into repressive technology known to be used by governments abroad. Such regulation is challenging, although vital, precisely because such technology has both benign and repressive applications (i.e., is “dual use”).
Alternatively, a federal agency could be vested with authority to regulate technology imports based on whether they are frequently used in gross human rights abuses, much as the Food and Drug Administration regulates pharmaceutical safety, thereby diminishing the U.S. marketplace for foreign technology used for significant human rights violations. Such efforts would help address the U.S. role in financially supporting repressive technologies.
Companies, including investors, also should play a proactive role in diminishing their support for companies and regimes involved in gross human rights abuses related to widespread surveillance. Knowingly providing equipment to actors involved in such violations is often in violation of those companies’ professed commitments to respect human rights, in keeping with international frameworks such as the U.N. Guiding Principles on Business and Human Rights. As a starting point, purportedly responsible companies should voluntarily agree not to purchase or invest in such technology or collaborate with the surveillance companies clearly implicated in the Xinjiang violations.
Companies will need a robust decision-making framework to guide their actions as more governments engage in such widespread surveillance and crackdowns on certain populations. As a first step, the U.S. government could support company efforts by developing basic human rights due diligence guidance for companies exporting technology with surveillance capabilities, helping them mitigate the adverse human rights impacts associated with that technology.
In coming years, collaboration across sectors and expertise will be needed to develop a framework that addresses the wider and growing problem of governments using AI for surveillance. This will require government engagement. However, given the current limited understanding of technology and AI on Capitol Hill, businesses should assume a role as a constructive actor at the negotiation table, so that any government-supported frameworks or regulations are appropriately tailored, and should consider developing their own decision-making guidelines and make those public. Human rights organizations should also have a seat at the table so that the results are human rights compatible.
In short, any regulations or frameworks should be pro-innovation and pro-human rights. The two need not be mutually exclusive.
Amy Lehr is the director of the Human Rights Initiative at the Center for Strategic and International Studies in Washington, D.C.
Commentary is produced by the Center for Strategic and International Studies (CSIS), a private, tax-exempt institution focusing on international public policy issues. Its research is nonpartisan and nonproprietary. CSIS does not take specific policy positions. Accordingly, all views, positions, and conclusions expressed in this publication should be understood to be solely those of the author(s).