
Disinformation Is Among the Greatest Threats to Our Democracy. Here Are Three Key Ways to Fight It

In October 2019, the late Supreme Court Justice Ruth Bader Ginsburg was asked what she thought historians would see when they looked back on the Trump era in United States history. Justice Ginsburg, known for her colorful and often blistering legal opinions, replied tersely, "An aberration."

As President Biden's administration settles in, many feel an enormous sense of relief, an awareness that the United States dodged a proverbial bullet. But how do we ensure that Justice Ginsburg's prediction becomes reality? This is not an academic question; Trump's recent speech at CPAC all but announced his desire to return in 2024. Only by recognizing the underlying reason he succeeded in the first place, and by making the structural changes necessary to prevent someone like him from succeeding again, can we head off this eventuality.

To understand what we must do to prevent the return of someone like Trump, or of Trump himself, we first need to define what Trumpism really is and how it came to be. The seeds of Trumpism in America have been analyzed to exhaustion, but something specific emerged in 2016 that holds the key to Trump's rise to power: Online disinformation.

Modern online disinformation exploits the attention-driven business model that powers most of the internet as we know it today. Platforms like Google and Facebook make staggering amounts of money by capturing our attention so they can show us paid advertisements. That attention is harvested by algorithms that measure which content we engage with and automatically serve us more of the same.

The problem, of course, emerges when these algorithms automatically recommend and amplify our worst tendencies. As humans, we evolved to respond more strongly to negative stimuli than to positive ones. These algorithms detect that and reinforce it, selecting content that sends us down increasingly negative rabbit holes. Resentful about losing your job? Here's a video someone made about how immigrants stole that job from you. Hesitant about the COVID-19 vaccine? Here's a post from another user stoking a baseless anti-vaccine conspiracy theory. Notice that truth is nowhere in this calculus: the only metric the algorithm rewards is engagement, and it turns out that disinformation and conspiracy theories make perfect fodder for this algorithmic amplification.
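To make that incentive concrete, here is a deliberately simplified Python sketch of an engagement-only feed ranker. Everything in it (the Post class, the predicted_engagement score, the sample numbers) is illustrative rather than drawn from any real platform's system; the point is simply that when the scoring function knows nothing about truth, false but inflammatory content wins by construction.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_engagement: float  # clicks, shares, dwell time the model expects
    fact_checked: bool           # known to the platform, but never consulted below

def rank_feed(posts: list[Post], limit: int = 10) -> list[Post]:
    """Order a feed purely by predicted engagement.

    Note what is missing: truthfulness never enters the score, so a
    false but outrage-inducing post outranks an accurate, measured one.
    """
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)[:limit]

feed = rank_feed([
    Post("Measured, accurate report", predicted_engagement=0.02, fact_checked=True),
    Post("Outrageous conspiracy claim", predicted_engagement=0.31, fact_checked=False),
])
print([p.text for p in feed])  # the conspiracy claim ranks first
```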

This is also the perfect setup for someone like Trump to create further political turmoil in the future. People like him will say or do literally anything to grab attention as long as it benefits them. They lie outright solely to further their own immediate interests. Their disregard for truth is pathological. Their entire personas are fabrications designed to maximize ratings. Unless we make fundamental changes to this system, our modern information environment, which rewards engagement above all else, is tailor-made for someone like Trump to succeed.


Now is the time for strong tech reform. Disinformation is a product of a toxic internet business model, and Congress wields the power to change the conditions from which it emerged. This is not a call for a "truth police." I do not advocate regulating disinformation directly; so-called "anti-fake-news" laws passed in other countries are ripe for political exploitation to suppress free speech and antagonize dissidents, activists, and political rivals. Instead, by regulating the toxic business models underpinning our information environment, we will create a healthier ecosystem that stems the flow of disinformation, mitigates harm, and leads to a freer, more productive conversation.

We need strong legislation in three areas. The first is privacy. Personal data has become the most valuable resource in the world, with companies that collect it en masse replacing even oil companies as the most central businesses in the global economy. How does this relate to disinformation? For starters, personal data is the fuel that powers this algorithmic distortion engine. But the connection goes deeper. As Dr. Johnny Ryan, Senior Fellow at the Irish Council for Civil Liberties and the Open Markets Institute, argues, targeted advertising has robbed traditional media outlets of their primary asset: Their audience. This effect has devastated publisher revenues and destroyed the business models that previously supported quality journalism, leaving a news vacuum that is being filled by disinformation. Strong federal privacy laws that go beyond those already in place in Europe and California would help stem the damage to the media business model and re-establish an environment where quality journalism could thrive once again.

The second area is antitrust. Right now, the digital ad tech market is dominated by two companies, Facebook and Google. Advertisers around the world pay, on average, two-thirds of their digital ad budgets to just these two companies. This money is ultimately what pays for the internet we use so freely. It is also what disinformation peddlers seek to capture when they create salacious content to share on social media. By driving engagement and traffic to their sites, these malicious actors display ads and make money.

All of this creates a risky situation for advertisers, who are loath to have their ads appear alongside disinformation, say, extolling the recent riots at the U.S. Capitol or dissuading people from getting the COVID-19 vaccine. And yet, the advertisers have little to no say over where their ads are appearing. They pay Google or Facebook, and those companies use algorithms to choose where to place the ads. And when the advertisers complain—as many did last summer as part of the #StopHateForProfit campaign—these companies simply defy them. They have no competition and thus little incentive to better serve their customers, the advertisers, by blocking ads—and funding—to sites peddling disinformation and hate.

The final and most important area is content liability. Often referred to as "Section 230" reform, after Section 230 of the Communications Decency Act of 1996, this is among the most difficult but most critical areas of tech reform. Section 230 essentially makes internet platforms immune from legal liability both for the content that users post and for the decisions platforms make to remove that content. Some call this law "the twenty-six words that created the internet."

Under Section 230, Facebook, for example, assumes no liability when its algorithms suggest that people join white supremacist groups, even though the company's own internal research found that 64% of extremist group joins were driven by its recommendation tools. Facebook bears no responsibility when those people become radicalized and commit hate crimes or attack public buildings, even though it was the platform that originally set them down the path.

While there is widespread consensus that Section 230 needs to be reformed, it remains an open question how to do so in a way that adequately protects free speech. When the law was originally conceived, the popular analogy used at the time was that a platform was like a chalkboard in a town square—anyone could write anything on it, and the owner of the chalkboard was neither liable for what was written, nor what they chose to erase. It made sense in the early days of the internet when these platforms had far fewer users and weren’t dominated by algorithmically curated feeds.

Today, the chalkboard analogy no longer applies. Billions of users are effectively feeding these algorithms an infinite supply of content from which to populate our feeds. A better analogy for today’s algorithmically driven platform would be one of those ransom notes from the movies, where the kidnapper cuts out magazine letters to spell words. We don’t accuse the magazines of the kidnapping even though they printed the letters, and just because the kidnapper used letters from magazines to spell the words doesn’t mean they didn’t “write” the note. In the same way, it’s not the content that’s the problem on the modern social media platform. It’s the algorithms that are weaving that content into personalized toxic narratives in order to drive engagement at the cost of everything else.

This is what researchers mean when they distinguish freedom of speech from freedom of reach. We are all entitled to freedom of speech, though even that right has limits in the face of imminent harm, and a clear link is emerging between online toxicity and offline harm. Algorithmic amplification, or "reach," by contrast, is the product of decisions made by private companies through the algorithms they program. Not only is there no intrinsic right to have those private companies show your posts to other users; those companies should also be exposed to liability when their algorithmic decisions to amplify content end up harming you or someone else. This is even truer when it comes to online advertising. Everyone has the right to say what they want on the web, but no one has a right to the money that advertisers pay Google or Facebook to place ads next to that content. That should be entirely the choice of the advertisers footing the bill.

Given this model, a solution becomes clearer. Any time a platform makes a decision—or programs an algorithm to make a decision—to show content to a user, the platform should assume liability for that decision. If Facebook recommends a white supremacist group to a user, and that user joins the group, becomes radicalized, plans violence there, and harms someone, the victim should be able to hold Facebook accountable for recommending the group and facilitating the planning. Sure, Facebook didn’t solely cause the harm, but they were involved and thus bear some responsibility. Right now, under the blanket immunity provided by Section 230, the victim can’t even take Facebook to court to hash it out, let alone hold them accountable. That’s what needs to change.
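In code terms, this proposal amounts to treating every algorithmic recommendation as a recorded, reviewable decision rather than an anonymous act of the machine. The Python sketch below is purely a hypothetical illustration; the function name, fields, and log format are assumptions of mine, not any platform's actual practice or any language from proposed legislation.

```python
import json
import time

def recommend_group(user_id: str, group_id: str, model_score: float) -> None:
    """Hypothetical sketch: record each recommendation the platform makes,
    so the decision can later be examined, for instance in litigation."""
    decision = {
        "timestamp": time.time(),
        "user": user_id,
        "recommended_group": group_id,
        "model_score": model_score,    # why the algorithm surfaced this group
        "policy_version": "2021-03",   # which ranking rules were in force
    }
    # An append-only log turns an opaque algorithmic choice into a
    # decision the platform can be asked to answer for in court.
    with open("recommendation_decisions.log", "a") as log:
        log.write(json.dumps(decision) + "\n")

recommend_group(user_id="u123", group_id="g456", model_score=0.87)
```

The point is not the logging mechanics but the shift in posture: once recommendation decisions are recorded and attributable, a victim can ask a court to review them, which is exactly what blanket Section 230 immunity currently prevents.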

Social media platforms have profoundly affected the balance of power in media, giving space to voices that have historically been suppressed. But we need to update the rules that govern our information environment in order to prevent another nihilistic narcissist from gaining power. The Global Disinformation Index, the nonprofit that I co-founded, has been working with the tech sector to stop the creation and spread of online disinformation. To date, we've partnered with tech companies, governments, and advocacy groups to disrupt the financial incentives that reward spreading disinformation. Our industry-focused advocacy work is a start, but the world's policymakers must now do their part. The EU is already leading the way with reforms through the Digital Services and Digital Markets Acts. These legislative initiatives, should they become law, seek to create platform liability and level the playing field in ways that will make the web safer and more competitive. But since most of the companies affected are American, the most powerful levers for combating disinformation lie in the hands of Congress. It's time to pull them and relegate Trumpism to the "aberration" Justice Ginsburg predicted it would be.
