We mostly think that we can bring about change by relying on people’s reason. But as social psychologist Jonathan Haidt wrote in The Righteous Mind, “Anyone who values truth should stop worshipping reason.”
This is an edited excerpt of an article from thewholestory.
For decades, economists assumed that human beings were reasonable actors, operating in a rational world. When people made mistakes in free markets, rational behavior would, it was assumed, generally prevail. Then, in the 1970s, psychologists like Daniel Kahneman began to challenge those assumptions. Their experiments showed that humans are subject to all manner of biases and illusions.
“We are influenced by completely automatic things that we have no control over, and we don’t know we’re doing it,” as Kahneman put it. The good news was that these irrational behaviors are also highly predictable. So economists have gradually adjusted their models to account for these systematic human quirks.
Campaigners instinctively understand certain things about human psychology: we know how to grab the brain’s attention and stimulate fear, sadness or anger. We can summon outrage in five words or less. We value the ancient power of storytelling, and we get that good stories require conflict, characters and scene. But in the present era of tribalism, it feels like we’ve reached our collective limitations.
So our collective challenge in changing hearts and minds is: how can we avoid reinforcing the polarization of attitudes? How can we constructively use the conflict between opposing sides to advance the debate rather than entrench existing attitudes?
The lesson for anyone working amidst intractable conflict: complicate the narrative. First, complexity leads to a fuller, more accurate story. Second, it boosts the odds that your work will matter — particularly if it is about a polarizing issue. When people encounter complexity, they become more curious and less closed off to new information. They listen, in other words.
There are many ways to complicate the narrative, as described in detail under the six strategies below. But the main idea is to feature nuance, contradiction and ambiguity wherever you can find it. This does not mean calling advocates for both sides and quoting both; that is simplicity, and it usually backfires in the midst of conflict. “Just providing the other side will only move people further away,” says social psychologist Peter T. Coleman in his book The Five Percent. Nor does it mean creating a moral equivalence between neo-Nazis and their opponents. That is just simplicity in a cheap suit. Complicating the narrative means finding and including the details that don’t fit the narrative — on purpose.
The idea is to revive complexity in a time of false simplicity. “The problem with stereotypes is not that they are untrue but that they are incomplete,” novelist Chimamanda Ngozi Adichie says in her mesmerizing TED Talk “The Danger of a Single Story.” “It’s impossible to engage properly with a place or a person without engaging with all of the stories of that place and that person.”
As researchers have established in hundreds of experiments over the past half-century, the way to counter the kind of tribal prejudice we are seeing is to expose people to the other tribe or new information in ways they can accept. When conflict is cliché, complexity is breaking news.
As LGBTI activists, we are often drawn to simplify our stories. First, because we need to mobilise our supporters, and mobilisation requires messages that are simple, sharp and action-focused. Second, because we are drenched in attacks from our opponents, which are invariably simplified, if not simplistic, so we react by doing the same. Third, because simplifying helps us make sense of a world that often just looks too absurd to grasp.
But if we want to have a deep and wide impact on attitudes, bringing complexity back into the debate may be non-negotiable.
This article appeared in MIT Technology Review. It is so fascinating that we are sharing it with the LGBTI community.
As the Arab Spring convulsed the Middle East in 2011 and authoritarian leaders toppled one after another, I traveled the region to try to understand the role that technology was playing. I chatted with protesters in cafés near Tahrir Square in Cairo, and many asserted that as long as they had the internet and the smartphone, they would prevail. In Tunisia, emboldened activists showed me how they had used open-source tools to track the shopping trips to Paris that their autocratic president’s wife had taken on government planes. Even Syrians I met in Beirut were still optimistic; their country had not yet descended into a hellish war. The young people had energy, smarts, humor, and smartphones, and we expected that the region’s fate would turn in favor of their democratic demands.
Back in the United States, at a conference talk in 2012, I used a screenshot from a viral video recorded during the Iranian street protests of 2009 to illustrate how the new technologies were making it harder for traditional information gatekeepers—like governments and the media—to stifle or control dissident speech. It was a difficult image to see: a young woman lay bleeding to death on the sidewalk. But therein resided its power. Just a decade earlier, it would most likely never have been taken (who carried video cameras all the time?), let alone gone viral (how, unless you owned a TV station or a newspaper?). Even if a news photographer had happened to be there, most news organizations wouldn’t have shown such a graphic image.
At that conference, I talked about the role of social media in breaking down what social scientists call “pluralistic ignorance”—the belief that one is alone in one’s views when in reality everyone has been collectively silenced. That, I said, was why social media had fomented so much rebellion: people who were previously isolated in their dissent found and drew strength from one another.
Digital connectivity provided the spark, but the kindling was everywhere.
Twitter, the company, retweeted my talk in a call for job applicants to “join the flock.” The implicit understanding was that Twitter was a force for good in the world, on the side of the people and their revolutions. The new information gatekeepers, which didn’t see themselves as gatekeepers but merely as neutral “platforms,” nonetheless liked the upending potential of their technologies.
I shared in the optimism. I myself hailed from the Middle East and had been watching dissidents use digital tools to challenge government after government.
But a shift was already in the air.
During the Tahrir uprising, Egypt’s weary autocrat, Hosni Mubarak, had clumsily cut off internet and cellular service. The move backfired: it restricted the flow of information coming out of Tahrir Square but caused international attention on Egypt to spike. He hadn’t understood that in the 21st century it is the flow of attention, not information (which we already have too much of), that matters. Besides, friends of the spunky Cairo revolutionaries promptly flew in with satellite phones, allowing them to continue giving interviews and sending images to global news organizations that now had even more interest in them.
Within a few weeks, Mubarak was forced out. A military council replaced him. What it did then foreshadowed much of what was to come. Egypt’s Supreme Council of the Armed Forces promptly opened a Facebook page and made it the exclusive outlet for its communiqués. It had learned from Mubarak’s mistakes; it would play ball on the dissidents’ turf.
The generals in Egypt learned from Hosni Mubarak’s mistakes.
Within a few years, Egypt’s online sphere would change dramatically. “We had more influence when it was just us on Twitter,” one activist prominent on social media told me. “Now it is full of bickering between dissidents [who are] being harassed by government supporters.” In 2013, on the heels of protests against a fledgling but divisive civilian government, the military would seize control.
Power always learns, and powerful tools always fall into its hands. This is a hard lesson of history but a solid one. It is key to understanding how, in seven years, digital technologies have gone from being hailed as tools of freedom and change to being blamed for upheavals in Western democracies—for enabling increased polarization, rising authoritarianism, and meddling in national elections by Russia and others.
But to fully understand what has happened, we also need to examine how human social dynamics, ubiquitous digital connectivity, and the business models of tech giants combine to create an environment where misinformation thrives and even true information can confuse and paralyze rather than informing and illuminating.
2. The audacity of hope
Barack Obama’s election in 2008 as the first African-American president of the United States had prefigured the Arab Spring’s narrative of technology empowering the underdog. He was an unlikely candidate who had emerged triumphant, beating first Hillary Clinton in the Democratic primary and then his Republican opponent in the general election. Both his 2008 and 2012 victories prompted floods of laudatory articles on his campaign’s tech-savvy, data-heavy use of social media, voter profiling, and microtargeting. After his second win, MIT Technology Review featured Bono on its cover, with the headline “Big Data Will Save Politics” and a quote: “The mobile phone, the Net, and the spread of information—a deadly combination for dictators.”
However, I and many others who watched authoritarian regimes were already worried. A key issue for me was how microtargeting, especially on Facebook, could be used to wreak havoc with the public sphere. It was true that social media let dissidents know they were not alone, but online microtargeting could also create a world in which you wouldn’t know what messages your neighbors were getting or how the ones aimed at you were being tailored to your desires and vulnerabilities.
Digital platforms allowed communities to gather and form in new ways, but they also dispersed existing communities, those that had watched the same TV news and read the same newspapers. Even living on the same street meant less when information was disseminated through algorithms designed to maximize revenue by keeping people glued to screens. It was a shift from a public, collective politics to a more private, scattered one, with political actors collecting more and more personal data to figure out how to push just the right buttons, person by person and out of sight.
All this, I feared, could be a recipe for misinformation and polarization.
Shortly after the 2012 election, I wrote an op-ed for the New York Times voicing these worries. Not wanting to sound like a curmudgeon, I understated my fears. I merely advocated transparency and accountability for political ads and content on social media, similar to systems in place for regulated mediums like TV and radio.
The backlash was swift. Ethan Roeder, the data director for the Obama 2012 campaign, wrote a piece headlined “I Am Not Big Brother,” calling such worries “malarkey.” Almost all the data scientists and Democrats I talked to were terribly irritated by my idea that technology could be anything but positive. Readers who commented on my op-ed thought I was just being a spoilsport. Here was a technology that allowed Democrats to be better at elections. How could this be a problem?
There were laudatory articles about Barack Obama’s use of voter profiling and microtargeting.
3. The illusion of immunity
The Tahrir revolutionaries and the supporters of the US Democratic Party weren’t alone in thinking they would always have the upper hand.
The US National Security Agency had an arsenal of hacking tools based on vulnerabilities in digital technologies—bugs, secret backdoors, exploits, shortcuts in the (very advanced) math, and massive computing power. These tools were dubbed “nobody but us” (or NOBUS, in the acronym-loving intelligence community), meaning no one else could exploit them, so there was no need to patch the vulnerabilities or make computer security stronger in general. The NSA seemed to believe that weak security online hurt its adversaries a lot more than it hurt the NSA.
That confidence didn’t seem unjustified to many. After all, the internet is mostly an American creation; its biggest companies were founded in the United States. Computer scientists from around the world still flock to the country, hoping to work for Silicon Valley. And the NSA has a giant budget and, reportedly, thousands of the world’s best hackers and mathematicians.
Since it’s all classified, we cannot know the full story, but between 2012 and 2016 there was at least no readily visible effort to significantly “harden” the digital infrastructure of the US. Nor were loud alarms raised about what a technology that crossed borders might mean. Global information flows facilitated by global platforms meant that someone could now sit in an office in Macedonia or in the suburbs of Moscow or St. Petersburg and, for instance, build what appeared to be a local news outlet in Detroit or Pittsburgh.
There doesn’t seem to have been a major realization within the US’s institutions—its intelligence agencies, its bureaucracy, its electoral machinery—that true digital security required both better technical infrastructure and better public awareness about the risks of hacking, meddling, misinformation, and more. The US’s corporate dominance and its technical wizardry in some areas seemed to have blinded the country to the brewing weaknesses in other, more consequential ones.
4. The power of the platforms
In that context, the handful of giant US social-media platforms seem to have been left to deal as they saw fit with what problems might emerge. Unsurprisingly, they prioritized their stock prices and profitability. Throughout the years of the Obama administration, these platforms grew boisterously and were essentially unregulated. They spent their time solidifying their technical chops for deeply surveilling their users, so as to make advertising on the platforms ever more efficacious. In less than a decade, Google and Facebook became a virtual duopoly in the digital ad market.
Facebook also gobbled up would-be competitors like WhatsApp and Instagram without tripping antitrust alarms. All this gave it more data, helping it improve its algorithms for keeping users on the platform and targeting them with ads. Upload a list of already identified targets and Facebook’s AI engine will helpfully find much bigger “look-alike” audiences that may be receptive to a given message. After 2016, the grave harm this feature could do would become obvious.
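To make the mechanism concrete, here is a minimal sketch of the idea behind look-alike matching: represent each user as a vector of behavioral features, then rank the wider population by similarity to the seed list. This is emphatically not Facebook’s actual system, whose features and models are proprietary; the function and toy data below are invented for illustration.

```python
# Hypothetical sketch of "look-alike" matching, NOT Facebook's real system.
# Idea: average the feature vectors of the uploaded seed users, then rank
# every other user by cosine similarity to that average.
import numpy as np

def lookalike_audience(seed_vectors, candidate_vectors, top_k=3):
    """Return indices of the candidates most similar to the seed centroid."""
    centroid = seed_vectors.mean(axis=0)
    centroid /= np.linalg.norm(centroid)  # normalize for cosine similarity
    scores = (candidate_vectors @ centroid) / np.linalg.norm(candidate_vectors, axis=1)
    return np.argsort(scores)[::-1][:top_k]

# Toy data: each column could be an interest, demographic or engagement signal.
seeds = np.array([[1.0, 0.9, 0.1],
                  [0.8, 1.0, 0.0]])        # users already identified as targets
candidates = np.array([[0.9, 0.8, 0.2],   # behaves like the seeds
                       [0.1, 0.0, 1.0],   # behaves very differently
                       [1.0, 1.0, 0.1]])  # behaves almost identically
print(lookalike_audience(seeds, candidates, top_k=2))  # -> [2 0]
```

The unsettling part is not the math, which is simple, but the scale of the behavioral data that feeds it.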
Meanwhile, Google—whose search rankings can make or break a company, service, or politician, and whose e-mail service had a billion users by 2016—also operated the video platform YouTube, increasingly a channel for information and propaganda around the world. A Wall Street Journal investigation earlier this year found that YouTube’s recommendation algorithm tended to drive viewers toward extremist content by suggesting edgier versions of whatever they were watching—a good way to hold their attention.
This was lucrative for YouTube but also a boon for conspiracy theorists, since people are drawn to novel and shocking claims. “Three degrees of Alex Jones” became a running joke: no matter where you started on YouTube, it was said, you were never more than three recommendations away from a video by the right-wing conspiracist who popularized the idea that the Sandy Hook school shooting in 2012 had never happened and the bereaved parents were mere actors playing parts in a murky conspiracy against gun owners.
Though smaller than Facebook and Google, Twitter played an outsize role thanks to its popularity among journalists and politically engaged people. Its open philosophy and easygoing approach to pseudonyms suit rebels around the world, but they also appeal to anonymous trolls who hurl abuse at women, dissidents, and minorities. Only earlier this year did it crack down on the use of bot accounts that trolls used to automate and amplify abusive tweeting.
Twitter’s pithy, rapid-fire format also suits anyone with a professional or instinctual understanding of attention, the crucial resource of the digital economy.
Say, someone like a reality TV star. Someone with an uncanny ability to come up with belittling, viral nicknames for his opponents, and to make boastful promises that resonated with a realignment in American politics—a realignment mostly missed by both Republican and Democratic power brokers.
Donald Trump’s campaign excelled at using Facebook as it was designed to be used by advertisers.
Donald Trump, as is widely acknowledged, excels at using Twitter to capture attention. But his campaign also excelled at using Facebook as it was designed to be used by advertisers, testing messages on hundreds of thousands of people and microtargeting them with the ones that worked best. Facebook had embedded its own employees within the Trump campaign to help it use the platform effectively (and thus spend a lot of money on it), but they were also impressed by how well Trump himself performed. In later internal memos, reportedly, Facebook would dub the Trump campaign an “innovator” that it might learn from. Facebook also offered its services to Hillary Clinton’s campaign, but her campaign chose to use them much less than Trump’s did.
Digital tools have figured significantly in political upheavals around the world in the past few years, including others that left elites stunned: Britain’s vote to leave the European Union, and the far right’s gains in Germany, Hungary, Sweden, Poland, France, and elsewhere. Facebook helped Philippine strongman Rodrigo Duterte with his election strategy and was even cited in a UN report as having contributed to the ethnic-cleansing campaign against the Rohingya minority in Myanmar.
However, social media isn’t the only seemingly democratizing technology that extremists and authoritarians have co-opted. Russian operatives looking to hack into the communications of Democratic Party officials used Bitcoin—a cryptocurrency founded to give people anonymity and freedom from reliance on financial institutions—to buy tools such as virtual private networks, which can help one cover one’s traces online. They then used these tools to set up fake local news organizations on social media across the US.
There they started posting materials aimed at fomenting polarization. The Russian trolls posed as American Muslims with terrorist sympathies and as white supremacists who opposed immigration. They posed as Black Lives Matter activists exposing police brutality and as people who wanted to acquire guns to shoot police officers. In so doing, they not only fanned the flames of division but provided those in each group with evidence that their imagined opponents were indeed as horrible as they suspected. These trolls also incessantly harassed journalists and Clinton supporters online, resulting in a flurry of news stories about the topic and fueling a (self-fulfilling) narrative of polarization among the Democrats.
5. The lessons of the era
How did all this happen? How did digital technologies go from empowering citizens and toppling dictators to being used as tools of oppression and discord? There are several key lessons.
First, the weakening of old-style information gatekeepers (such as media, NGOs, and government and academic institutions), while empowering the underdogs, has also, in another way, deeply disempowered underdogs. Dissidents can more easily circumvent censorship, but the public sphere they can now reach is often too noisy and confusing for them to have an impact. Those hoping to make positive social change have to convince people both that something in the world needs changing and that there is a constructive, reasonable way to change it. Authoritarians and extremists, on the other hand, often merely have to muddy the waters and weaken trust in general so that everyone is too fractured and paralyzed to act. The old gatekeepers blocked some truth and dissent, but they blocked many forms of misinformation too.
The old information gatekeepers blocked some truth and dissent but also many forms of misinformation.
Second, the new, algorithmic gatekeepers aren’t merely (as they like to believe) neutral conduits for both truth and falsehood. They make their money by keeping people on their sites and apps; that aligns their incentives closely with those who stoke outrage, spread misinformation, and appeal to people’s existing biases and preferences. Old gatekeepers failed in many ways, and no doubt that failure helped fuel mistrust and doubt; but the new gatekeepers succeed by fueling mistrust and doubt, as long as the clicks keep coming.
Third, the loss of gatekeepers has been especially severe in local journalism. While some big US media outlets have managed (so far) to survive the upheaval wrought by the internet, this upending has almost completely broken local newspapers, and it has hurt the industry in many other countries. That has opened fertile ground for misinformation. It has also meant less investigation of and accountability for those who exercise power, especially at the local level. The Russian operatives who created fake local media brands across the US either understood the hunger for local news or just lucked into this strategy. Without local checks and balances, local corruption grows and trickles up to feed a global corruption wave playing a major part in many of the current political crises.
The fourth lesson has to do with the much-touted issue of filter bubbles or echo chambers—the claim that online, we encounter only views similar to our own. This isn’t completely true. While algorithms will often feed people some of what they already want to hear, research shows that we probably encounter a wider variety of opinions online than we do offline, or than we did before the advent of digital tools.
Rather, the problem is that when we encounter opposing views in the age and context of social media, it’s not like reading them in a newspaper while sitting alone. It’s like hearing them from the opposing team while sitting with our fellow fans in a football stadium. Online, we’re connected with our communities, and we seek approval from our like-minded peers. We bond with our team by yelling at the fans of the other one. In sociological terms, we strengthen our feeling of “in-group” belonging by increasing our distance from and tension with the “out-group”—us versus them. Our cognitive universe isn’t an echo chamber, but our social one is. This is why the various projects for fact-checking claims in the news, while valuable, don’t convince people. Belonging is stronger than facts.
A similar dynamic played a role in the aftermath of the Arab Spring. The revolutionaries were caught up in infighting on social media as they broke into ever smaller groups, while at the same time authoritarians were mobilizing their own supporters to attack the dissidents, defining them as traitors or foreigners. Such “patriotic” trolling and harassment is probably more common, and a bigger threat to dissidents, than attacks orchestrated by governments.
This is also how Russian operatives fueled polarization in the United States, posing simultaneously as immigrants and white supremacists, angry Trump supporters and “Bernie bros.” The content of the argument didn’t matter; they were looking to paralyze and polarize rather than convince. Without old-style gatekeepers in the way, their messages could reach anyone, and with digital analytics at their fingertips, they could hone those messages just like any advertiser or political campaign.
Fifth, and finally, Russia exploited the US’s weak digital security—its “nobody but us” mind-set—to subvert the public debate around the 2016 election. The hacking and release of e-mails from the Democratic National Committee and the account of Clinton campaign manager John Podesta amounted to a censorship campaign, flooding conventional media channels with mostly irrelevant content. As the Clinton e-mail scandal dominated the news cycle, neither Trump’s nor Clinton’s campaign got the kind of media scrutiny it deserved.
This shows, ultimately, that “nobody but us” depended on a mistaken interpretation of what digital security means. The US may well still have the deepest offensive capabilities in cybersecurity. But Podesta fell for a phishing e-mail, the simplest form of hacking, and the US media fell for attention hacking. Through their hunger for clicks and eyeballs, and their failure to understand how the new digital sphere operates, they were diverted from their core job into a confusing swamp. Security isn’t just about who has more Cray supercomputers and cryptography experts but about understanding how attention, information overload, and social bonding work in the digital era.
This potent combination explains why, since the Arab Spring, authoritarianism and misinformation have thrived, and a free-flowing contest of ideas has not. Perhaps the simplest statement of the problem, though, is encapsulated in Facebook’s original mission statement (which the social network changed in 2017, after a backlash against its role in spreading misinformation). It was to make the world “more open and connected.” It turns out that this isn’t necessarily an unalloyed good. Open to what, and connected how? The need to ask those questions is perhaps the biggest lesson of all.
6. The way forward
What is to be done? There are no easy answers. More important, there are no purely digital answers.
There are certainly steps to be taken in the digital realm. The weak antitrust environment that allowed a few giant companies to become near-monopolies should be reversed. However, merely breaking up these giants without changing the rules of the game online may simply produce a lot of smaller companies that use the same predatory techniques of data surveillance, microtargeting, and “nudging.”
Ubiquitous digital surveillance should simply end in its current form. There is no justifiable reason to allow so many companies to accumulate so much data on so many people. Inviting users to “click here to agree” to vague, hard-to-pin-down terms of use doesn’t produce “informed consent.” If, two or three decades ago, before we sleepwalked into this world, a corporation had suggested so much reckless data collection as a business model, we would have been horrified.
There are many ways to operate digital services without siphoning up so much personal data. Advertisers have lived without it before, and they can do so again; it’s probably better if politicians can’t do it so easily. Ads can be attached to content, rather than directed to people: it’s fine to advertise scuba gear to me if I am on a divers’ discussion board, for example, rather than using my behavior on other sites to figure out that I’m a diver and then following me around everywhere I go—online or offline.
But we didn’t get where we are simply because of digital technologies. The Russian government may have used online platforms to remotely meddle in US elections, but Russia did not create the conditions of social distrust, weak institutions, and detached elites that made the US vulnerable to that kind of meddling.
Russia meddled in US politics, but it didn’t create the conditions that made the US vulnerable to such meddling.
Russia did not make the US (and its allies) initiate and then terribly mishandle a major war in the Middle East, the after-effects of which—among them the current refugee crisis—are still wreaking havoc, and for which practically nobody has been held responsible. Russia did not create the 2008 financial collapse: that happened through corrupt practices that greatly enriched financial institutions, after which all the culpable parties walked away unscathed, often even richer, while millions of Americans lost their jobs and were unable to replace them with equally good ones.
Russia did not instigate the moves that have reduced Americans’ trust in health authorities, environmental agencies, and other regulators. Russia did not create the revolving door between Congress and the lobbying firms that employ ex-politicians at handsome salaries. Russia did not defund higher education in the United States. Russia did not create the global network of tax havens in which big corporations and the rich can pile up enormous wealth while basic government services get cut.
These are the fault lines along which a few memes can play an outsize role. And not just Russian memes: whatever Russia may have done, domestic actors in the United States and Western Europe have been eager, and much bigger, participants in using digital platforms to spread viral misinformation.
Even the free-for-all environment in which these digital platforms have operated for so long can be seen as a symptom of the broader problem, a world in which the powerful have few restraints on their actions while everyone else gets squeezed. Real wages in the US and Europe are stuck and have been for decades while corporate profits have stayed high and taxes on the rich have fallen. Young people juggle multiple, often mediocre jobs, yet find it increasingly hard to take the traditional wealth-building step of buying their own home—unless they already come from privilege and inherit large sums.
If digital connectivity provided the spark, it ignited because the kindling was already everywhere. The way forward is not to cultivate nostalgia for the old-world information gatekeepers or for the idealism of the Arab Spring. It’s to figure out how our institutions, our checks and balances, and our societal safeguards should function in the 21st century—not just for digital technologies but for politics and the economy in general. This responsibility isn’t on Russia, or solely on Facebook or Google or Twitter. It’s on us.
Zeynep Tufekci is an associate professor at the University of North Carolina and a contributing opinion writer at the New York Times.
Tips from Rise For Climate

If you’re planning an action or event and want to know how best to cover it online using just your mobile phone (and a few other tools), here are some tips. With just a few steps, you can ensure that many people see and hear about what happened.
1. For videos, it’s best to hold the phone horizontally (unless it’s an Instagram Story). Decide whether you also want to hire professional photographers and videographers; this can really help.
2. Be strategic about which social media platforms you use. You don’t have to share on all of them. Focus on where you have the most followers and where people in your area go to get their information.
3. Some basic equipment can help.
Backup battery pack for your phone.
Monopod or tripod for stability. Hold that phone steady!
Microphone for audio. (If you don’t have a microphone, just get as close as possible to people when you interview them.) You can use a lavalier microphone for interviews or a shotgun microphone for ambient sound. There are versions of both that attach to a phone.
Bring your own pocket wifi or at least make sure you have plenty of data on your phone.
4. Be Safe!
Go in pairs or as a team, and make a plan for how you will work together. Maybe one person focuses on Twitter while the other livestreams on Facebook.
Have a plan for how you will meet up if you get separated.
Make sure you have a security plan in place (depending on your situation.)
For more resources on security best practices for filming sensitive situations, check out the organization Witness.
5. Encourage everyone to share their experience.
Remind participants to post about the event on their own social media – and to use the hashtag! It’s great to have coverage of an event from many perspectives.
This article, which first appeared on creativetimesreport, may seem irrelevant at first sight, but it’s actually a VERY IMPORTANT one! It is a great example of someone who went out of her “comfort zone” and stopped preaching to the converted – a strategy at the heart of all good campaigning work. Her example, and the lessons she shares, are enlightening!
Angela Washko, The Council on Gender Sensitivity and Behavioral Awareness in World of Warcraft, 2012.
[Chastity]: Abortion is wrong and any woman who gets one should be sterilized for life.
[Purpwhiteowl]: should i mention the rape theory?
[Snuh]: What if they don’t have the means to pay for the child and got raped?
[Xentrist]: clearly Chastity in sick
[Snuh]: What if they are 14 years old and were raped?
[Chastity]: I was raped growing up. Repeatedly. By a family member. If i had gotten pregnant i wouldnt have murdered the poor child. because THE CHILD did not rape me.
This intense and personal discussion regarding the ethics of abortion unfolded in the lively city of Orgrimmar, one of the capitals of an online universe populated by more than 7 million players: World of Warcraft (WoW). After several years of raiding dungeons with guilds, slaying goblins and sorcerers, wearing spiked shoulder pads with eyeballs embedded in them and flying on dragons over flaming volcanic ruins, I decided to abandon playing the game as directed. Fed up with the casual sexism exhibited by players on my servers, in 2012 I founded the Council on Gender Sensitivity and Behavioral Awareness in World of Warcraft to facilitate discussions about the misogynistic, homophobic, racist and otherwise discriminatory language used within the game space.
As a gamer who is also an artist and a feminist, I consider it my responsibility to dispel stereotypes about gamers—especially WoW players—who have been mislabeled as unattractive, mean-spirited losers. At the same time, I question my fellow gamers’ propagation of the hateful speech that earns them those epithets. The incredible social spaces designed by game developers suggest that things could have been otherwise; in WoW’s guilds, teams come together for hours to discuss strategy, forming intimate bonds as they exercise problem-solving and leadership skills. Unfortunately, somewhere along the way, this promising communication system bred codes to let women and minorities know that they didn’t belong.
Angela Washko, The Council on Gender Sensitivity and Behavioral Awareness: Red Shirts and Blue Shirts (The Gay Agenda), 2014 (excerpt).
Trying to explain to someone who has never played WoW (or any similar game) that the orcs and elves riding flying dragons are engaging in meaningful long-term relationships and collaborative team-building experiences can be a little difficult. Typical Urban Dictionary entries for WoW define the game as “crack in CD-ROM form” and note, “players are widely stereotyped as fat guys living in there parents basements with out a life or a job or a girl friend [sic].” One only needs to look into the ongoing saga of #gamergate—an online social movement orchestrated by thousands of gamers to silence women and minorities who have raised questions about their representation and treatment within the gaming community—to see how certain individuals play directly into the hands of this stereotype by attempting to lay exclusive claim to the “gamer” identity. But gamers, increasingly, are not a homogeneous social group.
World of Warcraft is a perfect Petri dish for conversations about feminism with people who are uninhibited by IRL accountability
When women and minorities who love games question why they are abused, poorly represented or made to feel out of place, self-identified gamers often respond with an age-old argument: “If you don’t like it, why don’t you make your own?” Those on the receiving end of this arrogant question are doing just that, reshaping the gaming landscape by independently designing their own critical games and writing their own cultural criticism. Organizations like Dames Making Games, game makers like Anna Anthropy, Molleindustria and Merritt Kopas and game writers like Leigh Alexander, Samantha Allen, Lana Polansky and others listed on The New Inquiry’s Gaming and Feminism Syllabus are becoming more and more visible and broadly distributed in opposition to an industry that cares much more about consumer sales data and profit than about cultural innovation, storytelling and diversity of voices.
What’s especially strange about the sexism present in WoW is that players not only come from diverse social, economic and racial backgrounds but are also, according to census data taken by the Daedalus Project, 28 years old on average. (“It’s just a bunch of 14-year-old boys trolling you” won’t cut it as a defense.) If #gamergate supporters need to respect this diversity, many non-gamers also need to accept that the dichotomy between the physical (real) and the virtual (fake) is dated; in game spaces, individuals perform their identities in ways that are governed by the same social relations that are operative in a classroom or park, though with fewer inhibitions. That’s why—instead of either continuing on quests to kill more baddies or declaring the game a trivial, reactionary space where sexists thrive and abandoning it—I embarked on a quest to facilitate conversations about discriminatory language in WoW’s public discussion channels. I realized that players’ geographic dispersion generates a population that is far more representative of American opinion than those of the art or academic circles that I frequent in New York and San Diego, making it a perfect Petri dish for conversations about women’s rights, feminism and gender expression with people who are uninhibited by IRL accountability.
Angela Washko, The Council on Gender Sensitivity and Behavioral Awareness in World of Warcraft, 2012.
WoW, like many other virtual spaces, can be a bastion of homophobia, racism and sexism existing completely unchecked by physical world ramifications. Because of the time investment the game requires, only those dedicated enough to go through the leveling process will ever make it to a chatty capital city (like Orgrimmar, where most of my discussions take place), meaning that only the most avid players are capable of raising these issues within the game space. At such moments, the diplomatic facades required of everyday social and professional life are broken down, and an inverse policy of “radical truth” emerges. When I asked them about the underrepresentation of women in WoW—less than 15 percent of the playerbase is female—some of these unabashed purveyors of “truth” have attributed it not to the outspoken misogyny of players like themselves but to the “fact” that gaming is a naturally male activity. Many of the men I’ve talked to suggest that women are also inherently more interested in playing “healer” characters. These arguments are made as if they were obviously true—as if they were rooted in science.
When I ask men why they play female characters, I’ve repeatedly been told: “I’d rather look at a girl’s butt all day in WoW”
Women now have to “come out” as women in the game space, risking ridicule and sexualization, as more than half the female avatars running around in WoW are played by men (women, by contrast, are rarely interested in playing men). Unfortunately this is not because WoW is an empathetic utopia in which men play women to better understand their experiences and perspectives; WoW merely offers men another opportunity to control an objectified, simulated female body. When I ask men why they play female characters, I’ve repeatedly been told: “I’d rather look at a girl’s butt all day in WoW,” “because it would be gay to look at a guy’s butt all day” and “I project an attractive human woman on my character because I like to watch pretty girls.” I found these responses, which were corroborated by a study recently cited in Slate, disturbing to say the least. They also bring to mind Laura Mulvey’s discussion of the male gaze in her influential essay “Visual Pleasure and Narrative Cinema,” published in 1975: “In a world ordered by sexual imbalance, pleasure in looking has been split between active/male and passive/female. The determining male gaze projects its phantasy on to the female form which is styled accordingly. In their traditional exhibitionist role women are simultaneously looked at and displayed, with their appearance coded for strong visual and erotic impact.”
The simulated avatar woman customized and controlled by a man who gets pleasure out of projecting his fantasy onto her is in strict competition with the woman who talks back—the woman who plays women because, as Taetra points out in the image below, for women it is logical to do so. Women haven’t been socialized to capitalize on—or in many contexts even to admit to having—sexual desires and consequently do not project sexual objects to conquer and control onto their avatars.
Angela Washko, The Council on Gender Sensitivity and Behavioral Awareness: Playing a Girl, 2013 (excerpt).
As I continued to facilitate discussions about the discriminatory language usage on various WoW servers, I realized that the topic generating the most negative responses and the greatest misunderstanding was “feminism.” Here’s a small sample of the responses I’ve gotten when asking for player definitions of feminism (and framing my question as part of a research project):
[Chastity]: Feminists are man hating whores who think their better than everyone else. Personally I think a woman’s job is to stay home, take care of her house, her babies, her kitchen and her man. And before you ask, yes I am female
[Xentrist]: Feminism is about EQUAL rights for women
[Hyperjump]: well all you really need to know is pregnant, dish’s, naked, masturbate, shaven, and solid firm titties. feminism is all about big titties and long stretchy nipples for kids to breastfeed.
[Taetra]: Feminism is the attention whore term of saying that women are better than men and deserve everything if not more than them, which is not true in certain terms. Identifying with the female society instead of humans. Working against the males instead of with.
[Yukarri]: isnt it when somebody acts really girly
[Try]: google it bro
[Holypizza]: girls have boobs. gb2 kitchen
[Raspberrie]: idk like angry more rights for females can’t take a kitchen joke kind of lady
[Defeated]: is that supporting woman who don’t make me sammichs? they need to make my samwicths faster
[Kigensobank]: i dont know if WOW is the best place to ask for feminists
[Mallows]: I think that hardcore feminists often think that women are better lol and they change their mind when they don’t like something that men have that is undesirable
[Alvister]: da fuq
[Misstysmoo]: lol feminism is another way communism to be put into society under the pretense of protecting women
[Seirina]: Feminists are women who think they are better than men. Theyre nuts. Men and women are equal. We’re just sexier.
[Yesimapally]: Big Chicks who love a buffet but hate to shave their hairy armpits??
[Nimrodson]: i think it’s a word with too many negative/positive connotations to be worth defining
[Dante]: woman are usefull as healer
[Scrub]: yes, women were discriminated against while back, but after many feminist movements the laws were changed. It is now the 21st century and women have all if not more rights then men do. so the feminist activists are doing nothing more then creating drama
The tone of many of these comments reflects what one might find on a men’s rights forum. Recently the gaming and men’s rights communities have overlapped unambiguously, as Roosh V—a so-called pick-up artist dubbed “the Web’s most infamous misogynist” by The Daily Dot—just created an online support site for #gamergate supporters despite not being a gamer himself. I conducted an interview with him for another (seemingly unrelated) project a week before he announced this site.
Most of the women I’ve addressed in WoW do not see themselves as victims within this system, likely because their scarcity greatly increases their value as projected-upon objects of desire (as long as they don’t ask too many questions) without having it related to the physical body outside of the screen. Among the women I’ve talked to, I’ve found that there are two common yet distinct responses to my questions about feminism and being a woman inside of WoW. Response type #1: “Feminists hate men and feminism encourages physically attractive women to be sluts.” Response type #2: “Feminism is about equal rights for women, but I don’t talk about it in WoW because bringing up issues about the community’s exclusivity compromises my participation in competitive play and makes me a target for ridicule.”
Opportunities to interact online without potential repercussions for one’s offline life are becoming fewer and fewer.
Of course phrases like “get back to the kitchen/gb2kitchen” or “make me a sandwich” can be said in jest, but they nonetheless reinforce conservative viewpoints regarding women’s roles. The overwhelmingly popular belief communicated in this space—that women are not biologically wired to play video games (but rather to cook, clean, produce and take care of babies, maintain long, dye-free hair and faithfully serve their deserving men)—creates a barrier for women who hope to excel in the game and participate in its social potential. This barrier keeps women from being taken seriously for their contributions within the game beyond existing as abstracted, fetishized sex objects. Women who reject this role may be publicly demonized and called “feminazis.”
Unfortunately I did not learn how to turn WoW into a space for equitable, respectful conversation, as I had intended. Instead I came away with some thoughts about how much bigger the issues are than the game itself. Back in the days of dial-up modems, when my family finally realized the impending necessity of “getting the internet,” there was a huge fear of allowing anyone to know “who you really were.” Anonymity was the default then, and protecting your identity was key to avoiding scams, having your credit card information stolen, being stalked IRL or whatever else parents everywhere imagined might happen if someone on the internet knew your “real identity.”
What I learned early on from playing MUD games (text-based multiplayer dungeon games—precursors to MMORPGs like WoW) was that you could actually be quite intimate, revealing and honest with little consequence. There was no connection to your physical self in that kind of setting. But that seems to have changed drastically since the transition from Web 1.0 to 2.0. Web 2.0 has all but eliminated the idealized possibilities of performing an anonymous virtual self, moving internet users toward performing an (often professionalized) online version of one’s physical self (i.e., branding). The possibility of anonymity has disappeared as an increasing number of sites, Facebook foremost among them, require us to use our real names and identities to interact with other individuals online. Opportunities to interact online without potential repercussions for one’s offline life are becoming fewer and fewer.
Angela Washko, The Council on Gender Sensitivity and Behavioral Awareness in World of Warcraft, 2013
Though I had initially hoped to convince many WoW players to reconsider the communal language they had adopted there, I quickly realized both that this was a terribly icky colonialist impulse on my part and that the language’s persistence was related to a more complicated desire to hold on to a set of values that is becoming increasingly outdated and unacceptable. Throughout my interventions in the massively multiplayer video game space, I’ve found that WoW is a space in which the suppressed ideologies, feelings and experiences of an ostensibly politically correct American society flourish.
“It’s just a bunch of 14-year-old boys trolling you” won’t cut it—gamers are not a homogeneous social group.
In many areas of physical space, racism, homophobia and misogyny play out systemically rather than overtly. It has fallen out of fashion to openly be a sexist, homophobic bigot, so people carve out marginal spaces where this language can live on. WoW is a space in which the learned professional and social behaviors (or performances) that we all employ as we shift from context to context in our everyday life outside of the screen are unnecessary. At the same time, this anonymity produces one of the few remaining opportunities to have a space for solidarity among those who are extremely socially conservative in a seemingly unsurveilled environment unattached to participants’ professional and social identities. For the players I talk to, my research project provides a potentially meaningful platform to share concerns about how social value systems are evolving while protected by the facade of their avatars.
Thanks to the emerging visibility and solidarity of visual artists, writers, game makers and other cultural producers fostering a “queer futurity of games” (to quote Merritt Kopas) and more inclusive internet spaces in general, I believe that new spaces will be produced by and for those targeted by #gamergate and its ilk. I hope that efforts will move beyond examining how marginalized groups are represented and move toward creating game spaces that promote empathy. Rather than playing a female blood elf solely because you like the design of her ass, players would be allowed to fully experience the perspective of a person they might not understand or agree with. Perhaps by living as an other in this queer utopian game space, players will come to respect people unlike themselves; at the least, they will have a harder time denying that the experiences of other gamers are valid, acceptable and even worth celebrating.
2019 is a blessed year for science fiction fans. It is a crucial year for Neo-Tokyo in Katsuhiro Otomo’s cult manga Akira. But it is also, and especially, the year in which Ridley Scott’s Blade Runner is set.
(Adapted from an article published in Le Monde)
We are now in the real 2019. Los Angeles is not yet completely smothered by the pollution Ridley Scott depicted. But the androids at the heart of Blade Runner’s plot are already here – albeit in a very different form from the replicants, those artificial beings impossible to distinguish from a human without resorting to a complex test.
The replicants of the real 2019 do not haunt the basements of giant megacities but the depths of the Web. And they are everywhere, as New York magazine summarizes in a long article entitled “How Much of the Internet Is Fake?”. A lot, it turns out. A substantial portion of website traffic is generated by automated programs rather than by humans. Some are useful and well known, like Google’s crawlers, which roam the Web to index pages and their updates almost in real time. Others, however, are designed to pass as humans. Their goal is simple: to inflate visit or view statistics, or even to click on advertisements. You can buy thousands of views of a YouTube video for a few euros, and there are automated networks that click on ads to inflate their numbers and bring earnings to the more or less legitimate sites that host them.
The problem is such that in 2013, according to the Times, almost half of the clicks on YouTube were made by robots – leading the company’s engineers to fear a phenomenon of “inversion”: once machine clicks exceeded human ones, the anti-bot tools would end up classifying human traffic as the “fake” traffic and turning against the site’s legitimate users.
The “inversion moment” never officially arrived. Without matching the complexity of Blade Runner’s Voight-Kampff test, anti-spam tools have improved. Historically, the most common was the CAPTCHA, which asked the user to decipher one or two badly distorted words to prove they were human. That test proved too simple in the face of increasingly sophisticated robots and has largely been replaced by a more analytical one, which asks the user to identify objects in images. Google and others are already working on a new generation of tools that analyze how the mouse moves on the screen to guess whether it is being manipulated by a real person.
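As an illustration of the kind of signal such tools might look at, here is a deliberately naive sketch of a mouse-trajectory heuristic. Real systems rely on far richer signals and machine-learned models; the two features and the thresholds below are invented for this example.

```python
# A naive, illustrative heuristic in the spirit of mouse-movement analysis.
# Real anti-bot systems use far richer signals and learned models; both
# features and thresholds here are invented for the sketch.
import math

def looks_automated(points):
    """points: list of (x, y, t) cursor samples, in order; needs >= 2 samples."""
    # Feature 1: straightness = direct distance / total path length.
    path_len = sum(math.dist(points[i][:2], points[i + 1][:2])
                   for i in range(len(points) - 1))
    direct = math.dist(points[0][:2], points[-1][:2])
    straightness = direct / path_len if path_len else 1.0

    # Feature 2: timing regularity = variance of inter-sample intervals.
    gaps = [points[i + 1][2] - points[i][2] for i in range(len(points) - 1)]
    mean_gap = sum(gaps) / len(gaps)
    variance = sum((g - mean_gap) ** 2 for g in gaps) / len(gaps)

    # Human cursors wander and hesitate; scripted ones tend to move in
    # near-perfect lines with metronome-like timing.
    return straightness > 0.99 and variance < 1e-6

# A scripted, perfectly linear drag sampled every 10 ms:
bot_trace = [(i, i, i * 0.01) for i in range(50)]
print(looks_automated(bot_trace))  # True
```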
But knowing that a human is clicking is not always enough. The Russian propagandists of the Internet Research Agency, who tried to influence the US presidential election, are very human, as are the employees of the “click farms” who inflate their customers’ YouTube view counts.
And as control tools improve, so do the skills of those who generate fake traffic.
In the past two years, simple AI tools have made it much easier to create “deepfakes”: faked – and mostly pornographic – videos in which one person’s face is superimposed, fairly convincingly, onto a body in another video. “The fact is that trying to protect yourself from the Internet and its depravity is basically a lost cause… The Internet is a vast wormhole of darkness that eats itself,” actress Scarlett Johansson, a regular victim of deepfakes, said in a rather disillusioned interview with the Washington Post.
The worst is perhaps yet to come: on the Internet, pornographic innovations usually find other uses, and 2019 could be a good year for shady political videos. Because at the end of the day, one of the biggest differences between Blade Runner’s 2019 and the one we are about to experience is that “replicating” is no longer the preserve of a single multinational like the Tyrell Corporation in the film. Today almost anyone can, at little cost, buy or build a small robot factory. More than the original Philip K. Dick novel on which Blade Runner was based ever foresaw, the replicants are now truly among us.
Is humanity controlled by alien lizards? – how fake news and robots influence us from within our own social circles.
Even today, 12% of Americans believe that humanity is controlled by alien lizards who have taken human shape. Replace “alien lizards” with “bots”, and the laughable conspiracy theory might not seem so funny anymore.
Increasingly, social debates and political elections are manipulated by social bots, and the most worrying news is that opponents of a cause or a party can manipulate its supporters from within their own social circles. We must absolutely understand how this works against our social struggles if we are to keep control of our campaigning strategies.
One of the best-established truths of campaigning is that people are only really influenced by the attitudes and behaviors of other members of their social circles, as conformity bias drives most of us to follow what we perceive our peers to think and do.
And nowhere do these patterns appear more clearly than on social media, where clicks, likes and comments lead most of us to distinguish what is appropriate from what is not.
Political strategists have constantly researched how to make the most of this, using individuals as one of their main channels for propagating their ideas.
In recent years, the explosion in the use of social bots, combined with the shameless use of fake news, has given these strategists worryingly powerful tools for influencing attitudes and behaviors, including our own.
The increasing presence of bots in social and political discussions
Social (ro)bots are software-controlled accounts that artificially generate content and establish interactions with non-robots. They seek to imitate human behavior and pass as human in order to interfere in spontaneous debates and create forged discussions.
The strategists behind the bots create fake news and fake opinions, then disseminate them in millions of messages across social media platforms.
With this type of manipulation, robots create the false sense of broad political support for a certain proposal, idea or public figure. These massive communication flows modify the direction of public policies, interfere with the stock market, spread rumors, false news and conspiracy theories and generate misinformation.
In all social debates, it is now becoming common to observe the orchestrated use of robot networks (botnets) to generate a movement at a given moment, manipulating trending topics and the debate in general. Their presence has been evidenced in all recent major political confrontations, from Brexit to the US elections and, very recently, the Brazilian elections:
On October 17, the daily Folha de S.Paulo revealed that four services specializing in mass messaging on WhatsApp (Quick Mobile, Yacows, Croc Services, SMS Market) had signed contracts worth several million dollars with companies supporting Jair Bolsonaro’s campaign.
According to the revelations, the four companies sent hundreds of millions of messages to large lists of WhatsApp accounts, which they collected via cellphone companies and other channels.
What these artificial flows represent in terms of proportion is frightening.
According to a Brazilian study led by the Getúlio Vargas Foundation, which analyzed the Twitter discussions during the TV debates of the 2014 Brazilian presidential election, 6.29 percent of the interactions on Twitter during the first round were generated by social bots: software-controlled accounts mass-producing posts to manipulate the discussion on social media. During the second round the proliferation was even worse, with bots creating 11 percent of the posts. And during the 2017 general strike, more than 22 percent of the Twitter interactions between users in favor of the strike were triggered by this type of account.
The foundation conducted several more case studies, all with similar results.
Twitter is Bot land
Robots spread more easily on Twitter than on Facebook, for a variety of reasons. Twitter's restricted character count imposes a communication style that is easy for software to imitate. In addition, the ability to use @ to mention users, even those not connected to one's account, allows robots to tag real people at random, closely mimicking genuine human interaction.
Robots also take advantage of the fact that people generally apply little critical thinking when following a profile on Twitter, and usually reciprocate when they receive a new follower. Even on Facebook, where people tend to be more careful about accepting new friends, experiments show that 20% of real users accept friend requests indiscriminately, and 60% accept when they have at least one friend in common. In this way, robots add a large number of real people at once, follow the real pages of famous people, and follow a large number of other robots, creating mixed communities of real and fake profiles (Ferrara et al., 2016).
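To see how quickly those acceptance rates compound, here is a minimal simulation sketch in Python. Only the 20% and 60% acceptance rates come from the experiments cited above; the two-wave mechanism, population size and function names are our own assumptions, made purely for illustration.

```python
import random

# Illustrative simulation of the infiltration dynamic described above.
# Only the acceptance rates come from the cited experiments
# (Ferrara et al., 2016); everything else is an assumption.

P_STRANGER = 0.20  # chance a stranger's friend request is accepted
P_MUTUAL = 0.60    # chance of acceptance with at least one mutual friend

def infiltrate(num_targets: int, waves: int = 2, seed: int = 42) -> int:
    """Return how many real users a single bot befriends."""
    rng = random.Random(seed)
    accepted: set[int] = set()
    for wave in range(waves):
        for target in range(num_targets):
            if target in accepted:
                continue
            # After the first wave, assume earlier acceptances have
            # created mutual friends with the remaining targets.
            p = P_MUTUAL if (wave > 0 and accepted) else P_STRANGER
            if rng.random() < p:
                accepted.add(target)
    return len(accepted)

print(infiltrate(1000))  # typically around 680 of 1000 after two waves
```

Under these assumptions, a single bot ends up connected to roughly two-thirds of its targets after only two rounds of requests, which is why mixed communities of real and fake profiles form so quickly.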
How WhatsApp is distorting the debate in Brazil
But Twitter is not the only channel. All social media platforms face the same infiltration strategies, depending on which platform the targeted group uses.
In most countries, WhatsApp is a medium restricted to private communication within a close circle.
But in Brazil, it has largely replaced social media. Of 210 million Brazilians, 120 million have an active WhatsApp account. In 2016, a Harvard Business Review study indicated that 96% of Brazilians who owned a smartphone used WhatsApp as their preferred messaging app.
Although disseminating information is relatively difficult, with WhatsApp groups limited to 256 people, the influence of messages is extremely high, as levels of trust within WhatsApp groups are higher than anywhere else. Investing in reaching these groups therefore turns out to be extremely effective.
Furthermore, regulation and traceability of fake news are extremely difficult as messages are encrypted.
As a result, according to Agence France Presse, some Brazilians have reported receiving up to 500 messages per day.
And the impact of this tactic should not be underestimated: Comprova, an internet watchdog created by over 50 journalists, found that 56% of the fifty most viral images circulating in these groups propagated fake news or presented misleading facts.
The "virality" of fake news is particularly strong for images and memes, such as the one claiming that Fernando Haddad, the PT candidate, intended to impose "gay kits" in schools.
Not only is this highly immoral but, in the case of Brazil, also illegal, as the law only allows a party to send messages to its enrolled supporters. Not to mention that it constitutes illegal funding of a political campaign.
Following the disclosure, WhatsApp closed 100,000 accounts linked to the four companies, but this represents only a fraction of the problem, and in any case the damage was done.
This manipulation is generated within supporter groups to discredit their opponents
As has happened since the very beginnings of politics, influencers act within groups of supporters of a cause or party to discredit their opponents and tighten the group.
The same strategy is applied to target movable audiences and win them over.
In this respect, the major change that bots bring is the size and speed of the manipulation.
Attacking from within
But the worrying trend is that these armies of fake news and opinion distortion also attack our movements from within.
In October 2018, the University of Washington released the results of an investigation into the social media discussions surrounding the 2016 US presidential election. It showed that many tweets apparently sent by #BlackLivesMatter supporters were not posted by real supporters but by Russia's Internet Research Agency (IRA) as part of its influence campaign targeting the election. The same was true, of course, of #BlueLivesMatter.
The creepy graph below shows the IRA accounts in orange, within the larger blue circles of pro- and anti-BLM conversations.
The IRA accounts impersonated activists on both sides of the conversation. On the left were IRA accounts that enacted the personas of African-American activists supporting #BlackLivesMatter. On the right were IRA accounts pretending to be conservative US citizens or political groups critical of the #BlackLivesMatter movement. Infiltrating the BLM movement by amplifying radical opinions was a clear strategy to undermine electoral support for Hillary Clinton by encouraging BLM supporters not to vote.
Outrageous fake news coming from our opponents is relatively easy to spot and dismiss. But when more subtle fake news and artificial massification of opinion use our own frames and come from what appear to be elements of our own movements, the danger is much greater.
What does this mean for SOGI campaigning?
Political pressure on social media platforms to strengthen regulation is mounting from governments and from multilateral institutions such as the EU.
Issues of sexual orientation, gender identity or expression, and sex characteristics are almost always used by conservatives to discredit progressives and whip up moral panics.
Supporting institutional efforts to control fake news would therefore generally work in our favor.
More and more public and private initiatives are being developed to bust fake profiles.
For example, Brazil developed PegaBot, a tool that estimates the probability that a profile is a social bot (flagging, for example, profiles that post more than once per second).
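As a rough illustration of how such frequency-based detection can work, here is a minimal Python sketch. This is not PegaBot's actual algorithm: real detectors combine many signals, and the function name and threshold below are assumptions chosen only to mirror the "more than one post per second" example.

```python
from datetime import datetime, timedelta

# A minimal sketch of frequency-based bot scoring, in the spirit of
# tools like PegaBot. NOT the real algorithm: the threshold is an
# assumption mirroring the "more than one post per second" example.

def bot_likelihood(post_times: list[datetime]) -> float:
    """Return a rough 0..1 score based solely on posting frequency."""
    if len(post_times) < 2:
        return 0.0  # not enough activity to judge
    times = sorted(post_times)
    span_seconds = max((times[-1] - times[0]).total_seconds(), 1e-9)
    rate = (len(times) - 1) / span_seconds  # posts per second
    # Humans rarely sustain one post per second, so a rate of 1.0
    # or more maps to the maximum score.
    return min(rate, 1.0)

# Example: ten posts half a second apart score far above human pace.
burst = [datetime(2019, 1, 1, 12, 0, 0) + timedelta(milliseconds=500 * i)
         for i in range(10)]
print(bot_likelihood(burst))  # prints 1.0
```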
The BBC reports that, through the International Fact Checking Network (IFCN), a branch of the Florida-based journalism think tank Poynter, Facebook users in the US and Germany can now flag articles they believe to be deliberately false; these are then passed to third-party fact-checkers signed up with the IFCN.
Those fact-checkers come from media organisations like the Washington Post and from websites such as the urban-legend debunking site Snopes.com. The third-party fact-checkers, says IFCN director Alexios Mantzarlis, "look at the stories that users have flagged as fake and if they fact check them and tag them as false, these stories then get a disputed tag that stays with them across the social network." Another warning appears if users try to share the story, although Facebook neither prevents such sharing nor deletes the fake news story. The "fake" tag will, however, negatively impact the story's score in Facebook's algorithm, meaning that fewer people will see it pop up in their news feeds.
The opposite could also be encouraged, with "fact-checked" labels issued by certified sources and given priority by social media algorithms.
Of course, this would raise strong concerns over who would hold the "truth label" and how it might be used to silence voices outside the ruling systems.
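To make the mechanism concrete, here is a toy Python sketch of how a "disputed" tag could penalize a story's ranking, and how the proposed "fact-checked" label could boost it. Facebook's actual ranking algorithm is proprietary; the class, field names and weights below are all invented for illustration.

```python
from dataclasses import dataclass

# A toy sketch of how flags could feed into news-feed ranking.
# All names and weights are invented; real feed ranking is proprietary.

@dataclass
class Story:
    base_engagement: float      # clicks, likes, shares, etc.
    disputed: bool = False      # tagged false by third-party fact-checkers
    fact_checked: bool = False  # hypothetical "verified" label

def feed_score(story: Story) -> float:
    """Return an illustrative ranking score for one story."""
    score = story.base_engagement
    if story.disputed:
        score *= 0.3  # heavy downrank: far fewer feeds surface it
    if story.fact_checked:
        score *= 1.5  # the proposed opposite: boost verified content
    return score

print(feed_score(Story(1000.0, disputed=True)))      # 300.0
print(feed_score(Story(1000.0, fact_checked=True)))  # 1500.0
```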
But beyond these and other initiatives to get the social media platforms to exert control, campaign organisations also need to take direct action.
As a systematic step, educating our own social circles on fake news and bots now seems unavoidable.
We might even need to disseminate internal information to our readers, membership or followers, warning them of possible infiltration of the debates by fake profiles posing as radicals. But this might also discredit the genuinely radical thinking that we desperately need.
One of the most useful activities could be to increase our presence in other social circles and help them identify and combat fake news. Some people are so entrenched in their hatred that they will believe almost anything that justifies it. But most people are genuinely looking for true information; after all, no one likes to be lied to and manipulated. If we keep identifying and exposing fake news within the social circles of these moderate people, we can surely achieve something, if only by getting specific profiles blocked through reporting.
The net is ablaze with discussions on how to counter the manipulation of public opinion by bots. As some of the first victims of this manipulation, we surely have our part to play.