Comments
ACCORDING TO LIZ - First there was the 2016 presidential campaign scandal, when Cambridge Analytica waded into American politics by vacuuming up the personal data of 50 million Facebook users to generate profiles of persuadable voters who could be emotionally leveraged to give opportunistic right-wing politicians – initially Ted Cruz and then Donald Trump – an edge over their Democratic opponents.
Then in Myanmar, aka Burma, Buddhist nationalists took Facebook a step further, using it – with company executives’ full knowledge – to fan animosity against the minority Muslim Rohingya. Amnesty International found that it was Facebook’s algorithms that “proactively amplified and promoted content… which incited violence, hatred and discrimination against the Rohingya.”
The Rohingya genocide was one of many examples of bad actions by the company and its founder detailed in Careless People by Sarah Wynn-Williams, Facebook’s former Director of Public Policy.
Her tell-all book details how the company’s engineers, egged on by senior executives, specifically designed algorithms to incentivize user engagement to keep people using it, building numbers for investors, driving up the price of its stock and profits for the company. With no thought to ancillary damage.
When cornered, Mark Zuckerberg’s inner circle acknowledged the results but blamed them on a lack of user moderation, while claiming that, in the interest of free speech, moderating the site was not only not their responsibility but an impediment to their users’ right to vent hate speech and have it amplified.
However, their designer algorithms were purposefully set to select what content to promote, what to move up trending lists, what other Facebook groups to recommend users join, all with the intent of increasing engagement, not providing truthful information or reasoned analysis.
Solely to amplify hits, no matter how shocking. Forcing eyeball numbers stratospheric to attract advertisers and investors. Pushing up the value of the company… and executive-owned shares.
Algorithms that selected what people saw by automatically tagging the most outrageous posts to follow users’ selected content – as many as 70% of which were diametrically different, designed to get viewers to go down a rabbit hole searching for more of the same.
Instead of logging off. Instead of disengaging.
A U.N. fact-finding mission concluded that by its deliberate dissemination of hate-filled content through those algorithms, Facebook – essentially the only form of internet for the vast majority of Burmese – played a “determining role” in the ongoing ethnic cleansing of their Rohingya neighbors.
Algorithms designed to outrage and create a false engagement methodically favored the extreme and the shocking over intelligent comments and measured speech. Is this what human society has come to?
Yes, it put a fascist criminal in the White House – twice. But is that anything to celebrate?
Why can’t our government condemn such practices and ban social media and other platforms’ algorithms from intentionally amplifying hatred?
Because of profits.
Because the American capitalist system encourages offending companies to structure their algorithms to monetize every eyeball that alights on their posts. Both directly through paid advertising and indirectly through political advancement of policies and politicians that benefit them.
Because the might-makes-right of the Middle Ages has become the money-makes-might of the twenty-first century’s Robber Barons.
The media featured Trump in so many news stories during his first run for office, because the outrage sold views and advertisers liked the eyeballs, that they pushed Clinton’s policies out from front and center.
This opened the floodgates to what became the MAGA mob, formed from conspiracy theorists and other subsets of Americans disgruntled with the trajectory of their lives. People unwilling to accept responsibility for their perceived failure, happy to embrace blaming anyone else.
Creating a convenient audience for the ones who exploit that most basic human survival emotion – fear. Rapidly accelerating the divisiveness that has become endemic in American culture today.
Meanwhile, basking in positive feedback from their employers, coders relentlessly unleashed algorithms with more and more power to provoke negative feelings. Failing to consider the consequences. Or that they had any responsibility to mitigate the damages.
At a time when the government itself is exacerbating divisiveness, algorithms provide echo chambers to amplify fears. Unfortunately, the human animal is still structured to respond more strongly to fear-provoking imagery, so fear-provoking posts dominate anything promoting peace and positive feelings.
In 2012, YouTube viewings hit 100 million hours a week – pretty amazing. But riding a rush of power and conceit, the executives wanted more, setting a goal of one billion hours by 2016.
Trial and error testing showed that once again outrage drives up engagement, especially in their target audiences. Not cute cats or mounting concerns about the hazards of too much screen time.
So their algorithms were programmed to tempt millions of viewers with the most outrageous claims and conspiracy theories instead of a broad spectrum of content, mostly more moderate views or mindless entertainment. It didn’t matter if it was offensive and filled with hatred and lies; those who didn’t pursue the bait were going to sign off anyway, so it was a win/win scenario.
And by 2016 YouTube eyeballs were watching one billion hours… a day.
Governments should offer solutions, not be part of the problem.
Wars, sanctioned or covert, have always had psychological elements, whether demonizing opponents or spreading disinformation behind enemy lines. Social media has become another tool in arsenals with more and more sophisticated algorithms, increasingly augmented by A.I., spreading propaganda and engaging in psychological warfare.
In Russia and Ukraine, China and North Korea, Israel and Gaza. And the United States.
Domestically, misinformation and deliberate obstruction of knowledge is rampant and can often be deadly.
Fears spread on social media, community groups, and messaging apps are escalating jittery responses to immigration on both sides – those who are brainwashed to see foreigners as threats and those whose very futures are increasingly at risk.
While a president and a department of homeland insecurity seem to perceive masked men with lethal weapons, rolling up in unmarked vehicles with their license plates removed to chase immigrants, as just one big frat party of a hunt.
And at a time when pregnancy termination options have been curtailed, on a platform where millions of teens get their news, TikTok’s algorithm pushes anti-abortion propaganda at people searching for basic information about medication abortion. Claiming, in the top search result, that a woman is “very likely to see their deceased baby” is pure unadulterated BS.
What is expelled is less tissue and blood than many see in their monthly period. And probably far more welcome.
Scaring people when they are able to get a safe medical abortion can lead to far more dangerous consequences later – even carrying a baby to term is riskier – especially when in some states doctors are prohibited from doing anything to help the mother if it may harm the fetus.
While human institutions can act independently to weed out socially corrosive biases and lean on others in both the public and private sectors to curb abuses and rectify errors, there is no leaning on the immense powerhouses that Facebook and YouTube and TikTok have built.
They and others of their ilk intentionally exploit the fact that there is radically greater use of their apps when there are no regulations, or at least unenforceable ones, and when the more outrageous the fake news and their postings are, the more hits they get. The more they profit.
The real and the truthful are consigned with the more mundane aspects of our news to be buried behind the cacophony of more strident, more exaggerated voices. Drowning out what is important to individuals in their everyday lives.
Algorithm abusers are enmeshing themselves with A.I., which is also advancing in leaps and bounds, while their greedy corporate creators push heavily not only to put a pause on future regulations but to eviscerate those already passed at the state level, even as knowledgeable people across a spectrum of backgrounds wake up to the potential dangers.
By the numbers, most postings on YouTube, TikTok, and Facebook are not fake news and don’t incite genocide.
But that small minority – those that social media algorithms ensure receive a disproportionate number of views – are the ones that drive engagement by vulnerable Americans further down those rabbit holes.
The above draws from a number of sources but was inspired by Yuval Harari’s book Nexus. Read it yourself and be scared, really scared.
(Liz Amsden is a former Angeleno now living in Vermont and a regular CityWatch contributor. She writes on issues she’s passionate about, including social justice, government accountability, and community empowerment. Liz brings a sharp, activist voice to her commentary and continues to engage with Los Angeles civic affairs from afar. She can be reached at [email protected].)