China's Algorithm Oversight: A Model for the US and EU?
China's data regulator, the Cyberspace Administration of China (CAC), has published a list of 30 algorithms submitted for review by major tech companies such as Alibaba and ByteDance. The filings follow new laws that place these tech giants under stricter oversight.
The CAC requires these firms to self-assess the security of their algorithms, the types of data they collect (including sensitive biometric or identity information), and the data sources used to train them. The move also opens the door for the CAC to conduct its own evaluations of how these companies operate.
The publicly released summary offers minimal insight, merely outlining what each company's algorithm does. For example, it notes that Weibo's recommendation system suggests “Weibo content that aligns with users' historical browsing habits.” Informative, but it reveals nothing about the underlying complexity.
What remains unclear is how Douyin (the Chinese counterpart to TikTok, also owned by ByteDance) actually uses user data in its recommendation system. We know Douyin collects the data; how that data is leveraged to match users with particular videos, and the full range of sources involved, remains undisclosed.
This development may not clarify much for casual observers, but it gives the government a way to investigate how these platforms make automated decisions. The CAC will likely have plenty of questions and may even bring in engineers to understand how decisions are encoded in the algorithms. Its subsequent regulatory actions should offer a window into what it finds.
Tech companies, of course, would not willingly submit to such scrutiny. They will likely attempt to protect their closely guarded trade secrets, much like Meta has done in the US when pressed to disclose algorithmic details.
The central issue revolves around the dynamics between businesses and governments in the world’s largest economies. Both China and the US are scrutinizing social media platforms, albeit with potentially differing motivations.
Officially, China cites “data abuses” by these tech firms as the reason for the oversight. The CAC has already issued targeted fines for data privacy violations, such as the $1.2 billion penalty levied on ride-hailing service DiDi earlier this year. But the algorithm registry looks like part of a broader push to reshape the relationship between technology services and consumers. The EU and the US are wrestling with similar questions, though without digging as deeply for answers.
Unofficially, it’s plausible that China has recognized the detrimental impact of social media in Western nations. Specifically, it may have realized that these platforms can influence political opinions, leaving the populace vulnerable to manipulation by outside entities. This is particularly concerning for China’s one-party state.
Should the Chinese government comprehend the intricacies of these algorithms, the potential for enhancing their propaganda efforts becomes significant. This knowledge could serve two main purposes:
- Controlling domestic narratives: China could amplify its messaging while suppressing dissenting viewpoints and strengthening its influence over the tech sector.
- Influencing foreign narratives: By understanding the operational logic of major tech algorithms, China could manipulate trends and narratives beyond its borders.
China is already exploring such strategies. A recent Buzzfeed report revealed that:
"Four former ByteDance employees, each of whom worked on [US-based news app] TopBuzz, claimed that ByteDance instructed staff to insert specific pro-China messages into the app."
This TopBuzz strategy appears simplistic yet effective.
Moreover, China could leverage social media to exacerbate divisions within Western nations, although the returns on that tactic may be diminishing.
While this scenario might sound alarming, it reflects the historical context of foreign policy over the last century. Justin Hart, in Empire of Ideas: The Origins of Public Diplomacy and the Transformation of U.S. Foreign Policy, notes that the U.S. State Department utilized musicians, including Louis Armstrong, as propaganda instruments during the 1960s. Armstrong's 1960 tour of Africa, sponsored by the U.S., was intended to bolster the country's image abroad; unbeknownst to him, CIA operatives exploited this opportunity to gather intelligence on local leaders.
The contemporary equivalent might involve TikTok.
ByteDance claims that TikTok operates independently, with its headquarters in Singapore and limited influence from Beijing. TikTok has transitioned its operations to Oracle's cloud servers and requested Oracle to audit its recommendation algorithms.
However, skepticism persists in the U.S., with many politicians worried that ByteDance employees may still access American users' data.
Recently, regulators in several EU states concluded that EU citizens' data cannot lawfully be processed in the U.S. by Google, making it unlikely they would consent to EU data being sent to ByteDance's servers when that case eventually reaches the courts.
Nonetheless, China has absorbed valuable insights from the U.S.'s historical cultural dominance. It will likely engage in the charade of a separation between state and business to further its long-term ambitions.
Facebook built its remarkable growth on the social graph; the next chapter of social media appears to belong to TikTok's content graph.
China's desire to control what its citizens see is increasingly challenged by opaque algorithms, live content, and real-time interaction. Even if a critical threshold has already been crossed, the state wants to act before it is too late. Unlike the U.S., which might refrain from exercising such power, China is poised to assert its influence.
As previously noted, social networks are shifting their emphasis from social graphs connecting users with their friends’ content to TikTok-style feeds that curate content globally based on its potential to retain user engagement. The primary objective is to keep users engaged at all costs. Companies like Facebook, Twitter, and even Amazon are now striving to emulate TikTok's model.
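To make that distinction concrete, here is a minimal, purely illustrative sketch in Python (the data structures and names are hypothetical, not any platform's actual code): a social-graph feed draws candidates only from accounts a user follows, while a content-graph feed ranks a global candidate pool by predicted engagement.

```python
from typing import Dict, List, Set

def social_graph_feed(user: str,
                      follows: Dict[str, Set[str]],
                      posts_by_author: Dict[str, List[str]]) -> List[str]:
    """Social-graph model: candidates come only from accounts the user follows."""
    feed: List[str] = []
    for author in follows.get(user, set()):
        feed.extend(posts_by_author.get(author, []))
    return feed

def content_graph_feed(all_posts: List[str],
                       predicted_engagement: Dict[str, float],
                       k: int = 10) -> List[str]:
    """Content-graph model: candidates come from the whole corpus,
    ranked by a model's predicted engagement for this particular user."""
    ranked = sorted(all_posts,
                    key=lambda post: predicted_engagement.get(post, 0.0),
                    reverse=True)
    return ranked[:k]
```

The practical difference is that the second function has no notion of "friends" at all: whatever the engagement model scores highest is what the user sees.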
Earlier this year, a leak disclosed that TikTok shares an internal document titled “TikTok Algo 101” with select departments. While the document does not unveil significant secrets, it provides an overview of the company’s objectives:
- “The company’s ultimate goal is to grow daily active users by enhancing user retention rates and increasing the total time spent on the app each time TikTok is accessed.”
- Each video is evaluated based on likes, comments, and viewing duration, with a formula for success detailed in the document: Plike × Vlike + Pcomment × Vcomment + Eplaytime × Vplaytime + Pplay × Vplay.
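As a rough illustration of how that formula might be applied, here is a hedged Python sketch. The leaked document does not define its terms, so the reading below is an assumption: the P-terms are treated as predicted probabilities that a user will like, comment on, or play a video, the E-term as predicted watch time, and the V-terms as tunable weights.

```python
def score_video(p_like: float, p_comment: float,
                e_playtime: float, p_play: float,
                v_like: float, v_comment: float,
                v_playtime: float, v_play: float) -> float:
    """Predicted value of showing one candidate video to one user,
    following the structure of the leaked formula (interpretation assumed)."""
    return (p_like * v_like
            + p_comment * v_comment
            + e_playtime * v_playtime
            + p_play * v_play)

# Hypothetical model outputs and weights, for illustration only:
# the feed would compute this score for every candidate video and
# surface the highest-scoring ones to the user.
print(score_video(p_like=0.08, p_comment=0.01, e_playtime=14.5, p_play=0.92,
                  v_like=1.0, v_comment=4.0, v_playtime=0.2, v_play=0.5))
```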
In this framework, fame alone does not guarantee reach. Anyone can go viral by producing content that resonates with a specific audience segment, which makes the app addictive for creators and consumers alike.
In a competitive environment focused on capturing attention, where “quality” is subjective and quantity is essential, AI programs may emerge as ideal content creators. Brands in China are increasingly adopting virtual influencers for their livestream shopping events, as they can generate new avatars tailored to every micro-segment of the audience. With appropriate data inputs, this technology could yield endless variations on content themes, which the algorithm would immediately deliver to relevant users.
A report from Media Matters for America revealed that TikTok "sends users spiraling down a 'rabbit hole' of increasingly extreme right-wing content," sometimes starting from fitness videos and progressively introducing more radical viewpoints. There is little evidence to suggest that the algorithms are intentionally designed with this outcome in mind, underscoring the necessity for regulation.
Opinions diverge on the best approach to safeguard users.
Some argue that as long as the results align with the inputs, everything is fine. This means that if users consent to share their data with a platform and are satisfied with the outcomes, it is generally acceptable. They ordered the hot dog, liked the taste, so who cares about the process in between?
Tech companies will advocate for this perspective to protect their intellectual property for as long as possible. They may agree to some level of oversight, potentially from an external board of their choosing. However, this is likely to be inadequate, as these same companies require maximum engagement to sustain their advertising platforms.
When considering whether the U.S. and EU should emulate China’s approach, it is crucial to focus on the desired outcome and work backward. It remains uncertain whether increased transparency from tech giants will equip the state with the necessary insights to implement effective reforms.
A former Reddit CEO highlighted on Twitter the challenges of eradicating misinformation on the platform, suggesting that every well-meaning policy inadvertently creates new issues.
While the term may be uncomfortable, a problem this pervasive demands a “holistic” solution, one that spans psychology, behavioral science, socioeconomics, and politics, as well as statistics, platform design, artificial intelligence, and data management.
While excellent work is already being done in each of these fields, the challenge lies in integrating them into a cohesive solution. Adjusting the algorithm is akin to using a scalpel when an axe is needed.
In many respects, the intentions of both China and the U.S. in this domain are aligned. Both aim to tighten control over major tech companies and maintain influence over the content consumed by their populations. However, the methods of achieving this will differ due to the distinct nuances of each nation's objectives.
The U.S. could benefit from observing China in one key aspect: the latter is taking proactive measures to advance its own goals. Given the potentially harmful nature of some of those objectives, this should serve as additional motivation for the U.S. to take action.
Algorithms are fundamental to the functioning—both beneficial and detrimental—of social networks. However, they are not inherently “evil”; any malice stems from external factors, and effective regulation must consider the entire system. Merely glimpsing the algorithms is just the beginning.