New Year, Same Tech Policy Fights
People often make fun of the phrase, “New year, new me.” No matter how copious one’s New Year’s resolutions, the turning of the calendar at midnight on January 1 changes little. Everybody is the same at 12:01 am as they were at 11:59 pm.
This dynamic also applies to policy and legal debates. A new Congress has just been seated. A new President will soon take his seat behind the Resolute Desk. State legislatures are readying themselves for new sessions. However, the debates and challenges facing policymakers remain the same — particularly with respect to cybersecurity. Hackers and other malicious actors remain a pervasive and ever-growing threat. Yet politicians still seek to subordinate cybersecurity and privacy to other priorities, such as economic “fairness” or children’s online safety.
Enter the U.S. District Court for the Northern District of California, which ruled over the holiday break that much of the Golden State’s “Protecting Our Kids from Social Media Addiction Act” (SB 976) did not warrant a preliminary injunction. The court subsequently issued a broader short-term injunction as the U.S. Court of Appeals for the Ninth Circuit joined the fray.
The ruling’s twists and turns are worth exploring in full, as California’s law has many facets. However, for the App Security Project’s purposes, one section deserves particular attention. Regarding SB 976’s “age assurance” requirements, the court notes:
The regulations may even allow covered entities to estimate age using tools that run in the background and require no user input. For example, many companies now collect extensive data about users’ activity throughout the internet that allow them to develop comprehensive profiles of each user for targeted advertising. The regulations implementing SB 976 could permit covered entities to use such advertising profiles or similar techniques to estimate age without needing a user to affirmatively submit information.
Followed to its necessary conclusions, this reasoning arrives at quite a frightening destination as far as user privacy and data security are concerned. The court would have social media companies achieve legal compliance through pervasive data collection. Lawmakers, policy experts, and users increasingly understand the inherent creepiness and personal dangers of this approach, and many are working on policy and technical solutions to ameliorate precisely the problems this prescription would create.
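To make concrete what “tools that run in the background and require no user input” could mean in practice, consider a deliberately simplified sketch. Everything in it — the features, the signals, the thresholds — is invented for illustration; none of it is drawn from SB 976, the court’s ruling, or any real platform’s systems. The point is simply that “frictionless” age estimation presupposes a rich surveillance profile to estimate from.

```python
# Hypothetical sketch only: what "background" age estimation from an
# advertising-style profile might look like. Every feature, signal, and
# threshold below is invented for illustration; none of it comes from
# SB 976, the court's ruling, or any real platform.

from dataclasses import dataclass


@dataclass
class BehavioralProfile:
    """A toy stand-in for the 'comprehensive profiles' the court describes."""
    topics_followed: set[str]          # inferred interest categories
    median_session_hour: int           # 0-23: when the user is typically active
    years_of_activity_history: float   # how long the service has observed them


def estimate_age_bracket(profile: BehavioralProfile) -> str:
    """Guess an age bracket from surveillance-derived signals.

    Every signal is a noisy proxy, so the classifier inevitably produces
    false positives (adults flagged as minors) and false negatives (minors
    passed as adults), and a motivated user can game any of these inputs.
    """
    minor_signals = 0
    if profile.topics_followed & {"gaming", "homework-help", "teen-fashion"}:
        minor_signals += 1
    if 15 <= profile.median_session_hour <= 22:
        minor_signals += 1  # after-school hours; plenty of adults match too
    if profile.years_of_activity_history < 3:
        minor_signals += 1  # short history; also true of any new adult user
    return "likely-minor" if minor_signals >= 2 else "likely-adult"


if __name__ == "__main__":
    user = BehavioralProfile(
        topics_followed={"gaming", "cooking"},
        median_session_hour=17,
        years_of_activity_history=1.5,
    )
    print(estimate_age_bracket(user))  # -> "likely-minor" (possibly wrongly)
```

Even this toy version surfaces the problems identified in the passage below: every input is a noisy behavioral proxy, misclassification in both directions is baked in, and each signal is trivially gamed. The only way to make the guesses less wrong is to collect still more data.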
As law professor Eric Goldman explains:
First, not every regulated service collects enough data to do this well. Second, we definitely don’t want to regulatorily encourage more services to data-mine kids. #Ironic. Third, any automated data mining will routinely make Type I/Type II errors, and it’s also easily gamed by spiking the dataset. Fourth, will this kind of data mining be legal in light of the existing and emerging privacy laws? ….
If this court thinks automated “behind-the-scenes” data mining is a reasonable path towards protecting child safety online, then we’re doomed. Meanwhile, I hope the court’s openness to this kind of age authentication solution acts as the much-needed red-alert to the privacy community about the privacy threats emerging from the child safety regulatory pushes. The privacy invasions caused by mandatory age authentication have the realistic potential to overwhelm any other privacy gains made elsewhere.
This is the kids-online-safety movement’s perpetual struggle — and the perpetual struggle of those who seek to hyper-regulate the internet generally. The logic goes thus: To solve problem X, invasive policy solution Y is required — and the thought ends there. The trouble, of course, is that invasive policy solutions almost always bring tradeoffs. As the court failed to see, new modes of operation imposed on online platforms are purchased with sizeable opportunity costs, usually ending in a net loss. Such mandates also often run squarely and blindly into the barriers against speech suppression erected by the First Amendment. There’s no such thing as a free lunch, and there’s no such thing as a costless age-verification regime.
The court’s blindness to the unseen ramifications of online regulation is just one instance of an all-too-common inattention to unintended consequences. Regulations that force digital ecosystems open invariably come at the cost of weakened cybersecurity. Government-mandated content-moderation guidelines come at the cost of censorship. Big-is-bad antitrust enforcement — and the kneejerk promotion of the apparent short-term interests of small companies — comes at the cost of suppressed innovation. Finally, of course, mandated age verification comes at the cost of users’ privacy and data security.
It’s a new year, but the laws of economics, the underlying technical variables, and the political factors that bear on technology policy remain the same. That may annoy certain would-be technocrats, but pretending otherwise will do nobody any good. Policymakers in Washington, D.C. and in state capitals must remember that tradeoffs and knowledge problems remain in the digital world — perhaps even more so than in the physical world. Moreover, as the Supreme Court has ruled emphatically, the Constitution’s ordinary protections for speech and other civil liberties apply online, just as they do anywhere else.
Ignoring tradeoffs resembles ignoring a credit card balance: you can fool yourself for a time, but the bill will inevitably come due.
Published on January 13, 2025