On Tuesday, China’s cyber regulator released draft regulations on facial recognition. The proposed rules further cement individuals’ right to opt out of facial recognition, a right upheld in 2021 by the Supreme People’s Court on the grounds of protecting personal information.
The Supreme People’s Court’s decision came after a court in the eastern city of Hangzhou ruled in favor of law professor Guo Bing, who sued a wildlife park in 2019 for breach of contract after the park made facial recognition the basis of its entry process. Upon learning of the change, he asked for a refund of his annual membership fee, which the park refused to give him.
Guo sued the park, “not to get compensation but to fight the abuse of facial recognition.” His efforts were at least partially successful; the lawsuit resulted in the first court ruling establishing that citizens can demand their personal data be deleted.
Facial recognition-enabled surveillance cameras have only become more ubiquitous since 2019, which is likely to make enforcement of Tuesday’s draft protections difficult and slow. The curbs will also apply almost entirely to businesses: not just tech companies, but a wide range of brick-and-mortar retail stores that use facial recognition cameras for data-gathering purposes such as identifying, and attempting to better serve, repeat customers.
The regulations would also be significant for surveillance equipment firms and app developers. If enforced as written, the rules could significantly reduce demand for surveillance cameras, and apps would have to adjust their use of face-based identification.
Curbs like these are meant to protect individuals from misused or hacked personal data. The regulations outright ban the technology in hotel rooms, public bathrooms, and changing rooms and stipulate that companies offer “non-biometric” identification means whenever possible. They also require that businesses obtain consent from individual users before storing their face data and grant special protections to minors, whose parents or guardians would have to provide such consent.
Explicitly mentioning large public spaces such as museums, airports, stadiums, and hotels, the draft regulations say such venues should not make facial recognition a requirement for entry or service. Those who opt in to facial recognition should give their consent only after being fully informed of the scope of its use.
Carving out wide exceptions for instances involving “public safety” and “national security,” the regulations prohibit the use of facial recognition technology to analyze individuals’ health status, race, ethnicity, or religion. The exceptions notably entitle provincial and central authorities to essentially continue using facial recognition as usual.
These draft regulations enable continued unchecked state surveillance and overt government exceptionalism. But they also grant individuals the right to protect their personal data (in this case, the especially intimate data of one’s face) from businesses that stand to market and profit off people’s likenesses. While data gathered by private interests is by no means safe from the authorities (officials who request data are likely to get it), data amassed by the government is unlikely to be handed over to businesses. These regulations thus cut off one of the major groups seeking citizens’ data. A half-completed task shouldn’t be praised, but it is half a step further than the United States has taken.
And it fits seamlessly into China’s broader regulatory framework on AI. Facial recognition is the latest example of China’s strategy of regulating targeted AI uses that accord with its “big three” data laws: the Cybersecurity Law, Personal Information Protection Law, and Data Security Law. These regulations clarify to technology companies what applications are acceptable and, at least on paper, codify the rights of individuals to sue if their privacy is violated.
The draft regulations would limit the convenience factor this technology offers private proprietors—while maintaining their public and law enforcement applications. This is why the regulations could be dismissed as merely throwing the privacy-concerned public a bone while not actually curbing government access to personal data, facial or otherwise.
While it’s true that businesses are the target of the regulations, data protections from the greed of private interests are far from meaningless. Businesses around the world continue to pursue data-centric profit models that sideline (if not entirely erase) personal privacy in the process.
Plus, while Chinese authorities are prone to endorsing an expansive definition of “public security,” the regulations require that even the government’s use of facial recognition technologies be limited to legitimate security concerns. That means government agencies are responsible for not “unlawfully disclosing” data such as “personal images and identity-recognition information.”
China’s AI regulations entrench the governance structure and norms China’s leaders are comfortable with: demanding stringent regulatory compliance from companies while not quite holding the party-state system, and those working within it, to the same standards. Authorities worry about losing control, either by failing internally to take full advantage of tech-enabled surveillance or by leaving such surveillance so unregulated that people feel unprotected, causing civil unrest.
Similarly, the United States’ sustained lack of national data laws—let alone targeted regulations on AI-enabled applications like facial recognition—is a testament to American lawmakers’ fear of becoming Big Brother.
Both countries are guilty of failing to appreciate the beautiful logic of the middle ground.