
- A series of lawsuits against big tech companies pressuring them to modify how their software operates
- Critics consider this a legal strategy to dictate how that software controls the flow of information
- Ironically, it pushes the platforms to do exactly what they were sued for: serving information arbitrarily.
Free expression means that any ordinary person is free to make their ideas known, without the slightest fear of being censored by corporations or sued and oppressed by institutions.
Digital platforms shape public discourse, influence social norms and mold individual perceptions, so the mechanisms by which these platforms operate are under intense scrutiny.
While they are presented as powerful tools of free expression, the reality is that they are being deliberately modified through legal strategies to achieve indirect, covert censorship under the pretext of promoting "safety."
A lawsuit claiming that social networks cause addiction
Los Angeles Superior Court Judge Carolyn Kuhl ordered three of the most influential tech executives - Mark Zuckerberg of Meta, Evan Spiegel of Snap and Adam Mosseri of Instagram - to testify in one such lawsuit.
The case alleges that social media platforms were deliberately designed with addictive features intended to harm the mental health of young users.
The court's decision marks a seismic shift: algorithmic design choices are being treated as actionable, legally enforceable conduct.
These lawsuits are not unusual.
The matter is neither trivial nor new: in recent years there has been a curious trend of this type of lawsuit against big tech companies.
Politicians sue them because search results show criticism of their governments' impositions and questionable honesty.
Companies sue them because search results show content denouncing their harmful products or the frauds they commit against their customers.
Public figures sue them because search results expose content that was supposedly their private information.
The result of this curious rain of lawsuits is that these honest, poor, harassed digital mega-corporations are terrified into strictly complying with the law.
As a result, today anyone can use a popular web search engine and verify that there is absolutely nothing interesting to find, beyond hundreds of monotonous sites that insistently repeat whatever already appears in the mass media.
Subtly influencing software operation
The implications are significant: by framing software design as conduct subject to legal scrutiny, regulators are now prepared to use the law to influence how platforms operate, and with it, what the masses are allowed to see.
All of this is ostensibly to protect young people, but it could serve to censor and control speech more broadly.
Under these principles, the same platforms could be forced to do exactly what the lawsuits were supposed to prevent: manipulating the software so that it offers only the content and information deemed "acceptable" by arbitrary decisions.
This legal strategy opens a dangerous path on which safety becomes a pretext for regulations that limit what can be said, seen or shared.
The possibility of indirect censorship
The danger lies in the possibility of regulation becoming a fundamentally indirect form of censorship.
Instead of openly banning certain topics or viewpoints - an approach that provokes public backlash - regulators could take advantage of legal mandates that require platform modifications.
They could force alterations to recommendation algorithms under the banner of "safety" and "mental health," but the real effect would be to influence, and ultimately manipulate, the flow of information and ideas.
Indeed, regulators could be controlling not only content, but the very framework through which users experience and interpret the digital world.
This strategy - legally compelling changes to recommendation algorithms - could serve as a powerful and very subtle form of censorship.
Platforms, wishing to avoid legal sanctions or reputational damage associated with "harmful content," may preemptively restrict or modify certain types of information marked "fake" or "harmful."
Algorithms, which operate as intermediaries in the distribution of information, could be redesigned to suppress or prioritize particular narratives, opinions or perspectives.
This is not direct censorship in the classical sense but an indirect one: using legal mandates to impose control over the flow of speech without explicitly prohibiting words or ideas.
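To make the mechanism concrete, here is a minimal, purely hypothetical sketch of how a single "safety" penalty inside a ranking function can bury content without ever banning it. Every name, label and weight below is an assumption chosen for illustration, not any platform's actual code.

```python
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    engagement_score: float   # relevance signal the platform already computes
    flagged_harmful: bool     # label attached by a moderation or legal process

def rank(items, suppression_weight=0.1):
    """Order items by engagement, multiplying flagged items by a penalty.

    Nothing is deleted or banned outright; flagged items simply sink
    below everything else, which is the indirect effect described above.
    """
    def score(item):
        penalty = suppression_weight if item.flagged_harmful else 1.0
        return item.engagement_score * penalty
    return sorted(items, key=score, reverse=True)

feed = [
    Item("Official statement", 0.6, flagged_harmful=False),
    Item("Critical investigation", 0.9, flagged_harmful=True),
]

for item in rank(feed):
    print(item.title)
# Prints "Official statement" first: the more engaging but flagged item
# drops out of sight even though it was never removed.
```

The point of the sketch is that a one-line weighting change, mandated in the name of safety, is enough to decide which narratives remain visible.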
Openly enforcing software manipulation
Ironically, this approach could exacerbate the problem it aims to address, namely the mental health crisis among vulnerable users, particularly young people.
By tampering with algorithms, regulators could inadvertently undermine mental well-being, creating a paradoxical situation where efforts to curb addictive or harmful content end up influencing thought patterns, perceptions, and even self-esteem.
Instead of fostering a healthier digital environment, manipulating algorithms under legal pressures could reinforce echo chambers, reduce diversity of thought, and intensify feelings of alienation, loneliness, or anxiety.
The strategic use of legal mechanisms grounded in safety concerns, although apparently protective, can serve as a backdoor for powerful entities to influence public discourse and thought.
