Experts fear sales of the technology also export authoritarian ideas about biometric surveillance. The second largest exporter is the US.
By Will Knight January 24, 2023
EARLY LAST YEAR, the government of Bangladesh began weighing an offer from an unnamed Chinese company to build a smart city on the Bay of Bengal with infrastructure enhanced by artificial intelligence. Construction of the high-tech metropolis has yet to begin, but if it proceeds it may include face recognition software that can use public cameras to identify missing persons or track criminals in a crowd—capabilities already standard in many Chinese cities.
The project is among those that make China the world leader in exporting face recognition, according to a study by academics at Harvard and MIT published last week by the Brookings Institution, a prominent think tank.
The report counts 201 export deals involving face recognition by Chinese companies, compared with 128 by US firms. China also leads in AI exports more broadly, accounting for 250 of a total 1,636 deals involving some form of AI across 136 importing countries; the US was again second, with 215 AI deals.
The report argues that these exports may enable other governments to perform more surveillance, potentially harming citizens’ human rights. “The fact that China is exporting to these countries may kind of flip them to become more autocratic, when in fact they could become more democratic,” says Martin Beraja, an economist at MIT involved in the study whose work focuses on the relationship between new technologies like AI, government policies, and macroeconomics.
Face recognition technology has numerous practical applications, including unlocking smartphones, providing authentication in apps, and finding friends in social media posts. The MIT-Harvard researchers focused on deals involving so-called smart city technology, where face recognition is often deployed to enhance video surveillance. The research used information on global surveillance projects from the Carnegie Endowment for International Peace and data scraped from Chinese AI companies.
In recent years US lawmakers and presidents have expressed concern that China is gaining an edge over the US in AI technology. The report seems to offer hard evidence of one area where that shift has already occurred.
“It bolsters the case for why we need to be setting parameters around this type of technology,” says Alexandra Seymour, an associate fellow at the Center for New American Security who studies the policy implications of AI.
There is growing bipartisan interest in the US in restricting Chinese technology worldwide. Under President Trump, the US government imposed rules designed to restrict the use of Huawei’s 5G technology in the US and elsewhere and took aim at China’s AI firms with a chip embargo. The Biden administration followed with a more sweeping blockade that prevents Chinese companies from accessing cutting-edge chips or semiconductor manufacturing technology, and has placed sanctions on Chinese providers of face recognition used to monitor Uyghur Muslims.
Further efforts to limit the export of face recognition from China could perhaps take the form of sanctions on countries that import the technology, Seymour says. But she adds that the US also needs to set an example for the rest of the world by regulating the use of facial recognition itself.
The fact that the US is the world’s second largest exporter of face recognition technology risks undermining the idea—promoted by the US government—that American technology naturally embodies values of freedom and democracy.
Use of facial recognition is rising among US police departments, and while some cities have restricted the technology, there are no national standards governing its use. Some US companies, such as Clearview AI, have developed and are exporting face recognition tools that can connect a surveillance camera image of a person to their online identity, a use case that civil liberties groups argue invades citizens’ privacy without legal justification.
Seymour says the best way for the US to counter China’s success in exporting face recognition may be to regulate its use at home and to then offer alternatives to Chinese technology abroad. “Having a conversation around values will help to shape some of the limitations that need to be set on these technologies,” she says. But the prospects of the US Congress agreeing on meaningful limits to the technology look slim.
Chinese companies have come to dominate face recognition technology partly because of ties to government entities that can provide huge quantities of photos as well as significant funding for the technology’s development. In a paper published in November 2021, Beraja and his coauthors argued that innovation in the development of face recognition AI can flourish in autocracies because of close alignment between the technology and government goals.
Controlling the spread of unsavory uses of face recognition could be difficult, because the same technology can have many more benign uses.
And David Yang, one of Beraja’s coauthors and an economist at Harvard University, says recent US moves to contain Chinese technology have focused more on preventing development of new capabilities, not limiting the transfer of existing ones. “China has already developed a comprehensive suite of surveillance AI tech that it can sell,” he says. “The recent restrictions do nothing to change that.”
Seymour of the Center for New American Security says other emerging areas of AI could also be set to develop into powerful new surveillance tools whose proliferation should be carefully monitored.
Face recognition was one of the first practical applications of AI to emerge after vastly improved image-processing algorithms based on artificial neural networks surfaced in the early 2010s. Seymour suggests that the large language models behind clever conversational tools such as ChatGPT could follow a similar path, for example by being adapted into more effective ways to censor web content or analyze communications.