China’s approach to regulating recommendation algorithms

On January 4th, 2022, China adopted the “Provisions on the Management of Algorithmic Recommendations in Internet Information Services”, which enter into force on March 1st, 2022. The regulation follows an initial draft published by the Cyberspace Administration of China in August 2021 and seeks to regulate algorithms, particularly those employed for ‘recommendation’ purposes in search filters, social media, online stores, content services, and gig work platforms. The objective, in the Provisions’ own terms, is to “carry forward the Core Socialist Values, preserve national security and the societal public interest, protect the lawful rights and interests of citizens, (…) and promote the healthy and orderly development of internet information services.”

The newly published regulation begins by defining recommendation algorithm technologies as those used for “generation and synthesis, individualized pushing, sequence refinement, search filtering, schedule decision-making, and so forth to provide users with information.” It then sets out a series of limitations and obligations for service providers.

With regard to user protection, the regulation obliges service providers to check, assess, and verify their algorithm mechanisms, as well as their models, data, and outcomes, in order to prevent practices that could induce addiction in users. The newly adopted text also intends to give users more control over their personal data: providers will not be allowed to use ‘unlawful’ or ‘negative’ keywords to profile users, and they will have to equip users with the ability to select or delete the ‘user labels’ that target their personal traits. Furthermore, the law contains a provision aimed at combating online manipulation, forbidding actions such as registering fake accounts, manipulating user accounts, and generating fake engagement.

The regulation also states that providers “must not use algorithms to interfere with information presentation such as by blocking information, making excessive recommendations, manipulating the order of top content lists or search results, or controlling hot searches or selections, influencing online public opinion or evading oversight and management.” In addition, providers will be required to give users the option not to be targeted based on their individual characteristics, as well as the option to stop using the services altogether.

Certain groups, namely workers, consumers, and minors, are granted enhanced protection. For example, providers of ‘work coordination services’ are required to “improve algorithms related to assigning orders, salary composition and payment, work times, rewards and penalties, and so forth.” This is seen as a measure protecting workers on ‘gig work platforms’ such as transport and delivery services. Other provisions aim to protect consumer rights by prohibiting discriminatory practices in e-commerce transactions, such as differentiated pricing of products and services based on the user’s profile. Minors also benefit from protective measures, as the regulation forbids content that could undermine their physical and psychological health.

In terms of content, the regulation does not target only what might affect minors. More broadly, it prohibits the dissemination of ‘forbidden’ information and obliges providers not to circulate ‘negative’ information. Service providers must “strengthen the management of information security to identify illegal and negative content” and take measures to stop it from spreading. In addition, providers of ‘Internet news information’ must obtain a permit from the State and must not generate or synthesise fake news “or transmit news information from units that are not within the scope provided by the State.”

Overall, providers are expected to exercise careful oversight of their algorithms and “implement entity responsibility for algorithm security.” This is to be achieved through technical measures to check algorithm mechanisms, ethics reviews, data protection, countering telecommunications network fraud, security assessments and monitoring, and emergency responses, among others. Providers are also required to be more transparent: they must disclose the rules that govern their services and inform users of the means and basic principles of the recommendation services provided, their intended purposes, and their main operation mechanisms.

How, then, should we assess China’s latest attempt to regulate recommendation algorithms? According to Kendra Schaefer of Trivium China, the text contains ‘groundbreaking’ rules on algorithms, although the mechanisms for enforcing them are not clear. Other commentators have argued that, even though the Provisions seek to address multiple concerns (disinformation, online addiction, gig employment issues, etc.), they also reflect the country’s fears, such as “the fear of disaffection and consequent social mobilizations”. Moreover, the requirement that algorithms orient towards “mainstream values” and transmit “positive energy” has raised great concern: some of the measures can be seen as a way of censoring information that spreads values the Chinese government considers unacceptable. Besides threatening freedom of speech, the rules require ‘a great deal of interpretation’ as to what kind of content would be considered ‘beneficial’. This could have far-reaching consequences for the tech industry, imposing rules that may be difficult to implement and forcing companies to ‘adjust’ the direction of their recommendation algorithms.

Source: China Law Translate, “Provisions on the Management of Algorithmic Recommendations in Internet Information Services” (unofficial translation), January 4, 2022.

SCJ
