TikTok or Nah?
I'm a fan of the old television series The Twilight Zone, and of one of its classic episodes in particular, an entry titled "The Monsters Are Due on Maple Street." In it, the civility of a small town in Anywhere, U.S.A. breaks down amidst the Fear, Uncertainty, and Doubt sown among its residents by a credibly perceived, yet ultimately imagined, threat. A simple power outage, and the rumors that follow it, turn neighbor against neighbor as aliens watch the chaos ensue from behind the controls. An allegory for the Red Scare of the 1950s, the episode serves as a powerful demonstration of the impact that misinformation can have on communities.
While more than 60 years have passed since the episode's original airing, the concerns of the body politic it highlights bear an eerie relevance to the debate surrounding Congress' recent proposal to "ban" TikTok (technically, force a strategic divestiture), right down to fears related to the spread of communist propaganda. Yet, one need not look far to see how the effects of algorithmic and AI-enabled misinformation are already unleashing havoc in digital communities not unlike that on Maple Street.
As the Senate decides whether to take up the House bill, policymakers should weigh several key risks that cut both for and against H.R. 7521.
The TikTok content recommendation algorithm is one of the key features under scrutiny by Congress. Guidance published by NIST holds that trustworthy AI systems are transparent, explainable, and interpretable, among other characteristics. Explainability and interpretability are core tenets of AI trustworthiness because they help operators and oversight groups understand "how" and "why" a system's decisions are made. Because outsiders cannot see why the TikTok algorithm recommends the content it does, they cannot verify that it is operating as designed, and any manipulation of its results, whether benign or adversarial in nature, would be difficult to detect. As online advertising mechanics have demonstrated, algorithmic targeting can be achieved in increasingly precise ways.
Similarly, if the TikTok algorithm cannot be explained, it also cannot reasonably be monitored for shifts in the types of content being presented to users. If we do not understand how the algorithm is designed to work, what benchmarks would we measure against to detect aberrant changes? Continuous monitoring of AI systems is another core principle outlined in the NIST AI Risk Management Framework.
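To make the monitoring point concrete, here is a minimal sketch of what "benchmarks to measure against" could look like in practice: compare the topic mix of content a feed serves today against a known baseline, and flag the shift when it exceeds a threshold. The topic categories, counts, and threshold below are all illustrative assumptions, not a description of how TikTok or any regulator actually works.

```python
from collections import Counter
import math

def topic_distribution(items, topics):
    """Turn a list of observed content topics into a probability distribution."""
    counts = Counter(items)
    total = len(items)
    return [counts.get(t, 0) / total for t in topics]

def kl_divergence(baseline, observed, eps=1e-9):
    """KL divergence D(observed || baseline); larger values mean greater drift."""
    return sum(
        o * math.log((o + eps) / (b + eps))
        for o, b in zip(observed, baseline)
        if o > 0
    )

# Hypothetical topic labels and a drift threshold chosen purely for illustration.
TOPICS = ["news", "entertainment", "politics", "sports"]
DRIFT_THRESHOLD = 0.1

# Baseline: what the feed historically served (invented counts).
baseline = topic_distribution(
    ["entertainment"] * 50 + ["sports"] * 30 + ["news"] * 15 + ["politics"] * 5,
    TOPICS,
)

# Today's observation: politics content has surged (invented counts).
today = topic_distribution(
    ["entertainment"] * 30 + ["sports"] * 20 + ["news"] * 15 + ["politics"] * 35,
    TOPICS,
)

drift = kl_divergence(baseline, today)
if drift > DRIFT_THRESHOLD:
    print(f"ALERT: feed composition drifted from baseline (KL = {drift:.3f})")
```

The point of the sketch is the prerequisite it exposes: without access to a trusted baseline of how the algorithm is supposed to behave, there is nothing to measure drift against, which is precisely the transparency gap critics describe.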
In truth, these critiques apply to almost all social media recommendation algorithms, and groups such as the Electronic Frontier Foundation and the ACLU have accordingly argued against the "TikTok ban" on First Amendment grounds. They also argue that by singling out TikTok, the House bill fails to go far enough: what Americans need is comprehensive consumer data privacy legislation that applies equally to all platforms across the digital ecosystem. That is a worthy goal, and Congress should pursue it. Yet TikTok remains an outlier as the only foreign-controlled social media platform with a substantial American audience. The overwhelming number of calls, e-mails, and even threats made to Congress in response to the proposed ban shows just how much power and influence the platform has over the real-world actions of American citizens. In some cases, the call was even coming from inside the house.
While an opaque, unmonitorable misinformation machine would seem an inarguable national security risk...well, the heart wants what it wants. And, as with most things Prohibition, if the people want it badly enough, they will turn to third-party marketplaces to get it. Banning the app from official app stores could result in even greater risks to both individual user security and that of their broader communities. As Maurice Dawson, Assistant Professor at the Illinois Institute of Technology, noted recently on the podcast The 21st Show, the app will continue to exist on devices where it is already installed but will no longer receive the benefit of security updates. Similarly, sideloaded apps downloaded from unofficial sources and installed with elevated privileges can further compromise the user's device, or even the home and business networks that the device connects to. So TikTok doesn't go away, but many of the current security safeguards do, and new attack vectors emerge.
Regardless of how the Senate acts, one thing is certain — individuals can control their own reactions to the content they see online.
- Pay attention to what "the algorithm" is feeding you and how it makes you feel. Does it result in feelings of Fear, Uncertainty, or Doubt (FUD)? These are signs to step away.
- Do your own research. Get a variety of opinions and understand how to identify trusted information sources.
- Slow down. Don't rush to conclusions or action. See above.
- Ask why.
See staysafeonline.org for more resources on safeguarding yourself from online misinformation and other internet-safety tips.