China’s Internet Algorithm Recommendation Regulations: What US Companies Should Know | Blanc Roma LLP

With China’s new online recommender system regulations taking effect on March 1, 2022, US companies whose apps and websites used in China employ content recommendation algorithms and similar technologies must comply. For those still evaluating their policies and practices and wanting to learn more about the new regulations, the following summary of key elements may be helpful. As with any applicable regulation, companies should consult their legal advisors for guidance.

What are China’s regulations called?

According to the English translation available from Stanford University’s DigiChina project, the regulations are titled the “Internet Information Service Algorithmic Recommendation Management Provisions.”

What do the regulations cover?

Article 2 states that the regulations apply to the use of “algorithmic recommendation technology” to provide internet information services within the territory of the People’s Republic of China. Specifically, the rules cover the following techniques (discussed below): “using generative or synthetic-type, personalized recommendation-type, ranking and selection-type, search filter-type, dispatch and decision-making-type, and other such algorithmic technologies to provide information to users.”

What are algorithmic recommendation systems?

The term “recommendation technology” can be broadly interpreted to refer to any algorithm that decides what content (i.e., an “object”) is displayed on a platform, such as an application or a website. A brief summary of how object recommendation works may be helpful.

In object recommendation, a user is matched with products they may like, including physical objects such as clothes and virtual objects such as streaming movies or other content. Making these matches requires data about users, including personal information and feedback. In the case of Amazon and Netflix, for example, user feedback may take the form of written ratings and reviews. In the case of music recommendations, preferences supplied by the user can be used (for example, a preferred genre taken from the user’s profile).

A machine learning technique called collaborative filtering relies on such feedback as well as online behavior (for example, what someone has already viewed or read, which links they have clicked, etc.) to make future recommendations. It works by comparing a user’s feedback, inputs, and behavior data with the same types of data from other users to calculate a degree of similarity with respect to a particular object. In effect, this lets other users “vote” on what a given user might want to read, watch, or buy based on their shared interests and online activities.

Other data about users can be used to rank or filter the selected items, reducing the list of recommended objects to those most likely to be of interest. Advocates of recommender systems argue that without user input and behavior data, the content a person is shown will be less attractive and interesting to them (and thus less valuable to the company making the recommendations, at least from an advertising revenue perspective).
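The collaborative filtering technique described above can be sketched in a few lines of Python. This is a hypothetical toy example (the user names, items, and ratings are invented for illustration): it computes cosine similarity between users’ rating vectors and then scores each item the target user has not rated by a similarity-weighted average of other users’ ratings.

```python
import math

# Hypothetical user-item feedback matrix: each user maps items (e.g. movies)
# to a rating from 1 to 5. A value of 0 means "no feedback yet".
ratings = {
    "alice": {"movie_a": 5, "movie_b": 3, "movie_c": 0},
    "bob":   {"movie_a": 4, "movie_b": 3, "movie_c": 5},
    "carol": {"movie_a": 1, "movie_b": 5, "movie_c": 4},
}

def cosine_similarity(u, v):
    """Cosine similarity between two users' rating vectors."""
    dot = sum(u[i] * v[i] for i in u)
    norm_u = math.sqrt(sum(x * x for x in u.values()))
    norm_v = math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

def recommend(target, ratings):
    """Score each unrated item for the target user, weighting other users'
    ratings by how similar those users are to the target."""
    target_ratings = ratings[target]
    scores = {}
    for item, r in target_ratings.items():
        if r != 0:
            continue  # already rated; nothing to recommend
        weighted, sim_total = 0.0, 0.0
        for other, other_ratings in ratings.items():
            if other == target or other_ratings[item] == 0:
                continue
            sim = cosine_similarity(target_ratings, other_ratings)
            weighted += sim * other_ratings[item]
            sim_total += sim
        if sim_total:
            scores[item] = weighted / sim_total
    # Highest predicted rating first
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(recommend("alice", ratings))
```

Production systems use far larger matrices and more sophisticated models (matrix factorization, neural networks), but the core idea, predicting preferences from similar users’ behavior, is the same.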

Since the Chinese regulations target “internet information services,” the content being regulated appears to include text articles, user comments, videos, and similar content displayed on media sites (including social media), search engines, and possibly e-commerce sites. Firms that are unsure whether they are subject to the regulations should seek the advice of their legal counsel.

Why are recommendation systems regulated?

While we can only speculate on the reasons behind the Chinese regulations, Article 4 states that providers of algorithmic recommendation services must “abide by laws and regulations, observe social morality and ethics, abide by commercial ethics and professional ethics, and respect the principles of fairness and justice, openness and transparency, science and reason, and sincerity and trust.” Presumably, Chinese authorities view content recommendation technologies as affecting at least some of these values (which also reflect concerns that lawmakers and stakeholders in the United States, Europe, and elsewhere have raised about recommendation and other AI technologies). Article 12 is illustrative: it states that providers must “avoid creating a harmful influence on users, and prevent and reduce controversies or disputes,” which, as we have seen, can lead to social division, disorder, and other problems. Indeed, some social researchers argue that recommendation systems and other content decision systems undermine individual privacy because the technologies rely on collecting and processing private, personal information about users. Recommendation systems are also seen as affecting personal autonomy and potentially undermining public interests, including national security. For example, recommendation techniques can be used to steer users (and the groups with whom they identify) toward certain content, which may influence their behavior or actions in predictable ways.

What activities are prohibited by China’s regulations?

Article 6 states that algorithmic recommendation service providers may not use their services “to engage in activities harmful to national security and the general social interest, disrupt economic and social order, infringe the legitimate rights and interests of other persons, and other similar acts prohibited by laws and administrative regulations.” They also may not use algorithmic recommendation services to disseminate information prohibited by laws and administrative regulations, and they must take measures to prevent and limit the dissemination of harmful information. This appears to be a direct response to some of the concerns mentioned above.

What do the regulations require?

Articles 7 through 12 describe affirmative steps regulated entities should take to comply, including incorporating ethical design processes (also known as “ethical AI”) into the design and development of algorithms and systems, monitoring systems after they are deployed, and reporting to authorities.

Article 13 creates a first-of-its-kind national permitting program for artificial intelligence. (To the best of our understanding, no other country yet has a national permitting framework for similar AI technologies.) Specifically, algorithmic recommendation service providers that provide internet news information services must obtain an internet news information service permit from the authorities. In the United States, a permit is written permission to conduct specific activities identified in the permit; it may contain specific operating conditions, record-keeping and reporting requirements, operator certification requirements, and penalties for non-compliance. Under the Chinese regulations, permit holders must standardize their internet news information collection, editing, and publishing services, republishing services, and broadcast platform services; they “may not create or synthesize fake news information and may not publish news information not published by work units within the scope specified by the State.” The regulations do not appear to exempt small entities or those with relatively few monthly active users.

Notably, an earlier version of Article 10 prohibited the use of “discriminatory or biased user tags” in algorithmic recommendation systems, a prohibition that does not appear in the final version of the regulations.

What protection do users have under the regulations?

Articles 16 through 22 are intended to give users new rights, including protections for minors and the elderly. These include the right to receive notice, the ability to opt out, deletion of user data, and freedom from unreasonable “differential treatment.” Rights to notice, opt-out, and control over user data are also provided by data privacy laws such as the European Union’s General Data Protection Regulation (“GDPR”) and the California Consumer Privacy Act (“CCPA”), among others.

Are there regulatory enforcement and oversight mechanisms in place?

Articles 23 through 30 provide for the administration and oversight of the regulations. For brevity, these provisions are not summarized here.

What obligations do companies face if they violate the regulations?

Article 31 states that the authorities may, depending on severity, issue a warning, a “criticism report,” or a “correction order” to violators. Violators’ services may be suspended, and fines ranging from 10,000 to 100,000 yuan (currently about $1,580 to $15,800) may be imposed. Violations may also rise to the level of criminal conduct and be prosecuted. Interpreting the regulations is the responsibility of the Cyberspace Administration of China, with the assistance of the Ministry of Industry and Information Technology, the Ministry of Public Security, and the State Administration for Market Regulation.

Do other countries have similar laws?

China has beaten the United States and the European Union in issuing regulations aimed squarely at content recommendation systems. The United States has no law or regulation directly targeting recommendation systems, although the FTC Act authorizes the Federal Trade Commission (“FTC”) to regulate deceptive and unfair practices. The FTC has used this authority to regulate activities involving the collection, processing, and sale of user data, which, as described above, is key to any recommendation system. Similarly, the European Union indirectly regulates recommendation systems through its 2016 Regulation “on the protection of natural persons with regard to the processing of personal data and on the free movement of such data” (the GDPR). The European Union’s proposed Artificial Intelligence Act, introduced in April 2021 but not yet in effect, defines an “artificial intelligence system” as software that can, “for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.” Thus, depending on how EU law is interpreted, including what constitutes a “high-risk” AI system, it could at least indirectly regulate recommendation systems due to their potentially high-risk impact on personal rights and public interests.