
Category: social media

Trolls attack from within: How we are all being manipulated from within our own communities

This article is reproduced from Medium.com

For researchers in online disinformation and information operations, it’s been an interesting week. On Wednesday, Twitter released an archive of tweets shared by accounts from the Internet Research Agency (IRA), an organization in St. Petersburg, Russia, with alleged ties to the Russian government’s intelligence apparatus. This data archive provides a new window into Russia’s recent “information operations.” On Friday, the U.S. Department of Justice filed charges against a Russian citizen for her role in ongoing operations and provided new details about their strategies and goals.

Information operations exploit information systems (like social media platforms) to manipulate audiences for strategic, political goals—in this case, one of the goals was to influence the U.S. election in 2016.

In our lab at the University of Washington (UW), we’ve been accidentally studying these information operations since early 2016. These recent developments offer new context for our research and, in many ways, confirm what we thought we were seeing—at the intersection of information operations and political discourse in the United States—from a very different view.

A few years ago, UW PhD student Leo Stewart initiated a project to study online conversations around the #BlackLivesMatter movement. This research grew to become a collaborative project that included PhD student Ahmer Arif, iSchool assistant professor Emma Spiro, and me. As the research evolved, we began to focus on “framing contests” within what turned out to be a very politicized online conversation.

Framing can be a powerful political tool.

The concept of framing has interesting roots and competing definitions (see Goffman, Entman, Benford and Snow). In simple terms, a frame is a way of seeing and understanding the world that helps us interpret new information. Each of us has a set of frames we use to make sense of what we see, hear, and experience. Frames exist within individuals, but they can also be shared. Framing is the process of shaping other people’s frames, guiding how other people interpret new information. We can talk about the activity of framing as it takes place in classrooms, through news broadcasts, political ads, or a conversation with a friend helping you understand why it’s so important to vote. Framing can be a powerful political tool.

Framing contests occur when two (or more) groups attempt to promote different frames—for example, in relation to a specific historical event or emerging social problem. Think about the recent images of the group of Central American migrants trying to cross the border into Mexico. One framing for these images sees these people as refugees trying to escape poverty and violence and describes their coordinated movement (in the “caravan”) as a method for ensuring their safety as they travel hundreds of miles in hopes of a better life. A competing framing sees this caravan as a chaotic group of foreign invaders, “including many criminals,” marching toward the United States (due to weak immigration laws created by Democrats), where they will cause economic damage and perpetrate violence. These are two distinct frames and we can see how people with political motives are working to refine, highlight, and spread their frame and to undermine or drown out the other frame.

In 2017, we published a paper examining framing contests on Twitter related to a subset of #BlackLivesMatter conversations that took place around shooting events in 2016. In that work, we first took a meta-level view of more than 66,000 tweets and 8,500 accounts that were highly active in that conversation, creating a network graph (below) based on a “shared audience” metric that allowed us to group accounts together based on having similar sets of followers.
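The paper itself does not spell out the exact formula behind the "shared audience" metric, but the general idea can be sketched with a common set-overlap measure. The sketch below uses Jaccard similarity over follower sets, with made-up account names and follower IDs, to show how accounts with overlapping audiences end up linked in such a graph; the threshold value is likewise an illustrative assumption.

```python
from itertools import combinations

# Hypothetical follower sets for a handful of accounts (the real study
# used some 8,500 accounts and their actual Twitter followers).
followers = {
    "activist_a": {1, 2, 3, 4, 5},
    "activist_b": {2, 3, 4, 5, 6},
    "critic_c":   {7, 8, 9, 10},
    "critic_d":   {8, 9, 10, 11},
}

def jaccard(s1, s2):
    """Fraction of the combined audience that follows both accounts."""
    return len(s1 & s2) / len(s1 | s2)

# Link accounts whose audiences overlap above a chosen threshold.
THRESHOLD = 0.3
edges = [
    (a, b)
    for a, b in combinations(followers, 2)
    if jaccard(followers[a], followers[b]) >= THRESHOLD
]
print(edges)  # the two "activist" accounts pair up, as do the two "critic" accounts
```

Clustering accounts by shared audience rather than by direct interaction is what lets the graph reveal communities even among accounts that never mention each other.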

“Shared Audience” Network Graph of Accounts in Twitter Conversations about #BlackLivesMatter and Shooting Events in 2016. Courtesy of Kate Starbird/University of Washington.

That graph revealed that, structurally, the #BlackLivesMatter Twitter conversation had two distinct clusters or communities of accounts—one on the political “left” that was supportive of #BlackLivesMatter and one on the political “right” that was critical of #BlackLivesMatter.

Next, we conducted qualitative analysis of the different content that was being shared by accounts on the two different sides of the conversation. Content, for example, like these tweets (from the left side of the graph):

Tweet: Cops called elderly Black man the n-word before shooting him to death #KillerCops #BlackLivesMatter


And these tweets (from the right side of the graph):

Tweet: Nothing Says #BlackLivesMatter like mass looting convenience stores & shooting ppl over the death of an armed thug.

Tweet: What is this world coming to when you can’t aim a gun at some cops without them shooting you? #BlackLivesMatter.

In these tweets, you can see the kinds of “framing contests” that were taking place. On the left, content coalesced around frames that highlighted cases where African-Americans were victims of police violence, characterizing this as a form of systemic racism and ongoing injustice. On the right, content supported frames that highlighted violence within the African-American community, implicitly arguing that police were acting reasonably in using violence. You can also see how the content on the right attempts to explicitly counter and undermine the #BlackLivesMatter movement and its frames—and, in turn, how content from the left reacts to and attempts to contest the counter-frames from the right.

Our research surfaced several interesting findings about the structure of the two distinct clusters and the nature of “grassroots” activism shaping both sides of the conversation. But at a high level, two of our main takeaways were how divided those two communities were and how toxic much of the content was.

Our initial paper was accepted for publication in autumn 2017, and we finished the final version in early October. Then things got interesting.

A few weeks later, in November 2017, the House Intelligence Committee released a list of accounts, given to them by Twitter, that were found to be associated with Russia’s Internet Research Agency (IRA) and their influence campaign targeting the 2016 U.S. election. The activities of these accounts—the information operations that they were part of—had been occurring at the same time as the politicized conversations we had been studying so closely.

Looking over the list, we recognized several account names. We decided to cross-check the list of accounts with the accounts in our #BlackLivesMatter dataset. Indeed, dozens of the accounts in the list appeared in our data. Some—like @Crystal1Johnson and @TEN_GOP—were among the most retweeted accounts in our analysis. And some of the tweet examples we featured in our earlier paper, including some of the most problematic tweets, were not posted by “real” #BlackLivesMatter or #BlueLivesMatter activists, but by IRA accounts.

To get a better view of how IRA accounts participated in the #BlackLivesMatter Twitter conversation, we created another network graph (below) using retweet patterns from the accounts. Similar to the graph above, we saw two different clusters of accounts that tended to retweet other accounts in their cluster, but not accounts in the other cluster. Again, there was a cluster of accounts (on the left, in magenta) that was pro-BlackLivesMatter and liberal/Democrat and a cluster (on the right, in green) that was anti-BlackLivesMatter and conservative/Republican.

Retweet Network Graph of Accounts in Twitter Conversations about #BlackLivesMatter and Shooting Events in 2016. Courtesy of Kate Starbird/University of Washington

Next, we identified and highlighted the accounts identified as part of the IRA’s information operations. That graph—in all its creepy glory—is below, with the IRA accounts in orange and other accounts in blue.

Retweet Network Graph plus IRA Troll Accounts. Courtesy of Kate Starbird/University of Washington

As you can see, the IRA accounts impersonated activists on both sides of the conversation. On the left were IRA accounts like @Crystal1Johnson, @gloed_up, and @BleepThePolice that enacted the personas of African-American activists supporting #BlackLivesMatter. On the right were IRA accounts like @TEN_GOP, @USA_Gunslinger, and @SouthLoneStar that pretended to be conservative U.S. citizens or political groups critical of the #BlackLivesMatter movement.

Ahmer Arif conducted a deep qualitative analysis of the IRA accounts active in this conversation, studying their profiles and tweets to understand how they carefully crafted and maintained their personas. Among other observations, Arif described how, as a left-leaning person who supports #BlackLivesMatter, it was easy to problematize much of the content from the accounts on the “right” side of the graph: Some of that content, which included racist and explicitly anti-immigrant statements and images, was profoundly disturbing. But in some ways, he was more troubled by his reaction to the IRA content from the left side of the graph, content that often aligned with his own frames. At times, this content left him feeling doubtful about whether it was really propaganda after all.

This underscores the power and nuance of these strategies. These IRA agents were enacting caricatures of politically active U.S. citizens. In some cases, these were gross caricatures of the worst kinds of online actors, using the most toxic rhetoric. But, in other cases, these accounts appeared to be everyday people like us, people who care about the things we care about, people who want the things we want, people who share our values and frames. These suggest two different aspects of these information operations.

First, these information operations are targeting us within our online communities, the places we go to have our voices heard, to make social connections, to organize political action. They are infiltrating these communities by acting like other members of the community, developing trust, gathering audiences. Second, these operations begin to take advantage of that trust for different goals, to shape those communities toward the strategic goals of the operators (in this case, the Russian government).

One of these goals is to “sow division,” to put pressure on the fault lines in our society. A divided society that turns against itself, that cannot come together and find common ground, is one that is easily manipulated. Look at how the orange accounts in the graph (Figure 3) are at the outside of the clusters; perhaps you can imagine them literally pulling the two communities further apart. Russian agents did not create political division in the United States, but they were working to encourage it.

Their second goal is to shape these communities toward their other strategic aims. Not surprisingly, considering what we now know about their 2016 strategy, the IRA accounts on the right in this graph converged in support of Donald Trump. Their activity on the left is more interesting. As we discussed in our previous paper (written before we knew about the IRA activities), the accounts in the pro-#BlackLivesMatter cluster were harshly divided in sentiment about Hillary Clinton and the 2016 election. When we look specifically at the IRA accounts on the left, they were consistently critical of Hillary Clinton, highlighting previous statements of hers they perceived to be racist and encouraging otherwise left-leaning people not to vote for her. Therefore, we can see the IRA accounts using two different strategies on the different sides of the graph, but with the same goal (of electing Donald Trump).

The #BlackLivesMatter conversation isn’t the only political conversation the IRA targeted. With the new data provided by Twitter, we can see there were several conversational communities they participated in, from gun rights to immigration issues to vaccine debates. Stepping back and keeping these views of the data in mind, we need to be careful, both in the case of #BlackLivesMatter and these other public issues, to resist the temptation to say that because these movements or communities were targeted by Russian information operations, they are therefore illegitimate. That IRA accounts sent messages supporting #BlackLivesMatter does not mean that ending racial injustice in the United States aligns with Russia’s strategic goals or that #BlackLivesMatter is an arm of the Russian government. (IRA agents also sent messages saying the exact opposite, so we can assume they are ambivalent at most).

If you accept this, then you should also be able to think similarly about the IRA activities supporting gun rights and ending illegal immigration in the United States. Russia likely does not care about most domestic issues in the United States. Their participation in these conversations has a different set of goals: to undermine the U.S. by dividing us, to erode our trust in democracy (and other institutions), and to support specific political outcomes that weaken our strategic positions and strengthen theirs. Those are the goals of their information operations.

One of the most amazing things about the internet age is how it allows us to come together—with people next door, across the country, and around the world—and work together toward shared causes. We’ve seen the positive aspects of this with digital volunteerism during disasters and online political activism during events like the Arab Spring. But some of the same mechanisms that make online organizing so powerful also make us particularly vulnerable, in these spaces, to tactics like those the IRA are using.

Passing along recommendations from Arif, if we could leave readers with one goal, it’s to become more reflective about how we engage with information online (and elsewhere), to tune in to how this information affects us (emotionally), and to consider how the people who seek to manipulate us (for example, by shaping our frames) are not merely yelling at us from the “other side” of these political divides, but are increasingly trying to cultivate and shape us from within our own communities.

Kate Starbird

Asst. Professor of Human Centered Design & Engineering at UW. Researcher of crisis informatics and online rumors. Aging athlete. Army brat.

Why People Share: The Psychology of Social Sharing

We recently found this article on coschedule.com and, though it's business-focused, some of its lessons seem transferable to our sector. Below are the elements we suggest you look at; see how they resonate with your current practice on social media.

“People buy (and share content) from those that they know, like, and trust. Most sharing, as it turns out, is primarily dependent on the personal relationships of your readers. The data shows that the likelihood of your content being shared has more to do with your readers’ relationship to others than their relationship to you.

The most common reasons people share something with others are pretty surprising. Let’s look at the data.


  1. To bring valuable and entertaining content to others. 49% say sharing allows them to inform others of products they care about and potentially change opinions or encourage action.
  2. To define ourselves to others. 68% share to give people a better sense of who they are and what they care about.
  3. To grow and nourish our relationships. 78% share information online because it lets them stay connected to people they may not otherwise stay in touch with.
  4. Self-fulfillment. 69% share information because it allows them to feel more involved in the world.
  5. To get the word out about causes or brands. 84% share because it is a way to support causes or issues they care about.

It was also found that some users share as an act of “information management.” 73% of respondents said that they process information more deeply, thoroughly and thoughtfully when they share it.

As if that wasn’t enough, you also need to realize that good content comes with a high entertainment factor. Rather than a generic stock image, consider custom graphics or charts that present your content to readers in a brand new way. If you haven’t before, consider a video or infographic as a way to add more value, and more entertainment, to your content.

Connect Your Readers To Others

Your readers have an instinctual need to connect with others. Just look at the success of social networks like Facebook and Twitter. People like people.

In content marketing, the fabric of these connections is directly related to the content that we consume and share with our online network.

Here’s a small example: when is the last time that you left a comment on a post without sharing the post itself? Probably never. When we attach a conversation to a piece of content, we become very likely to share that content with others.

In addition, some readers will actually share their comment with a social share. The Facebook and Google+ commenting utilities prove how closely these two things are connected.

One way to do this is to try to end as many posts as possible with a question that our readers can answer in the comments. While they don’t always answer, the question will often get them thinking and help them apply what they’ve read.

Another option is to occasionally take on a controversial topic. Handled well, this gets people talking and helps them connect with others.

Make Them Feel More Valuable

In the New York Times study one respondent was quoted as saying that she enjoyed “getting comments that I sent great information and that my friends will forward it to their friends because it’s so helpful. It makes me feel valuable.”

This is pretty cool! Not only can your content help your readers become a subject matter expert in their field, but it can also help them look like one for their peers.

Why Facebook Is a Waste of Time—and Money—for Arts Nonprofits

The team from Artistic Activism takes a stand on an issue that is a major preoccupation for all non-profits. A bold move, but are we ready to give up FB?

This article appeared first on ARTNET

By Steve Lambert

The co-founder of the nonprofit Center for Artistic Activism explains why his company has officially de-friended Facebook.

Facebook CEO Mark Zuckerberg in San Francisco, California. Photo: Josh Edelson/AFP/Getty Images.

Like many nonprofits, we use Facebook to connect with our audiences, and they use Facebook to stay in touch with us. It’s not our preferred way, but it’s where more than 4,000 people have chosen to stay informed about what we do at the Center for Artistic Activism. Part of our philosophy at the C4AA is to meet people where they are, and, undeniably, hundreds of millions of people (and some bots) are on Facebook. However, looking at the statistics provided by Facebook, we’ve come to realize that the connection we were after isn’t actually made.

That’s why we’ve decided to stop putting effort into Facebook. The world’s largest social network has become an increasingly inhospitable place for nonprofits.

We currently have 4,093 “fans” of our page on Facebook. For a scrappy organization focused on artistic activism, that’s not bad (especially since we never bought followers to boost our numbers). Those thousands came from years of hard work doing outreach.

From left: Steve Lambert, Rebecca Bray, and Stephen Duncombe, directors of the C4AA. Courtesy of Steve Lambert.

Stephen Duncombe and I started the organization around 2009, shortly after Facebook asked organizations to create “pages” to help differentiate from personal “profiles.” In those early years, we used our fan page to share the progress we were making to support artists and activists fighting corruption in West Africa, to help save lives in the opioid crisis, to get proper healthcare for LGBTQ people in Eastern Europe, and our work to make activism more creative, fun, and effective.

After trainings and other events, our page was especially active as new alumni from countries around the world joined to stay in touch. However, in recent years, the traffic dropped off.

Looking at the Numbers

During that time, we’ve grown significantly as an organization—adding staff positions, increasing programming—but I wouldn’t blame our Facebook followers for thinking the C4AA was dormant, if not dead.

They weren’t seeing everything we shared—and may not have been seeing anything. They’ve asked to hear from us, but Facebook decides if and when they actually do. And in reality, it’s not often. Here are the stats Facebook provides us:

Screenshot of C4AA’s Facebook analytics. Courtesy of Steve Lambert.

This shows how many people (anyone, not exclusively fans of our page) have seen our posts over the past three months. With a few exceptions, you can see most posts don’t reach more than a tenth of the number who have opted to follow our page. In recent weeks, we’ve reached an average of around 3 percent.
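The "around 3 percent" figure is simply average post reach divided by the page's follower count. The per-post reach numbers below are invented for illustration (the article only shows a screenshot), scaled to land near the reported percentage against the stated 4,093 fans.

```python
followers = 4093                        # the C4AA page's fan count
reach_per_post = [125, 140, 110, 130]   # hypothetical recent post reach

avg_reach = sum(reach_per_post) / len(reach_per_post)
pct = 100 * avg_reach / followers
print(f"average reach: {pct:.1f}% of followers")  # prints "average reach: 3.1% of followers"
```

Any page admin can run the same arithmetic against their own analytics export to see what fraction of their opted-in audience a typical post actually reaches.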

This is by design. People think the Facebook algorithm is complicated, and it does weigh many factors, but reaching audiences through their algorithm is driven by one thing above all others: payment. Facebook’s business model for organizations is to sell your audience back to you.

In the past, you could boost your social media reach by writing better posts and including images and video. But in recent years, targeted spending on advertising has overtaken all other tips and tricks. To reach more people who already requested to hear from the C4AA, we’d have to give our donors’ money to Facebook to “boost” our posts.

Now, are we simply against paying Facebook? Do we not want to give our donors’ money to one of the largest corporations on the planet, one that has enriched its leadership and shareholders by not paying the artists, journalists, and everyday people who give the site value? Do we want to withhold support to a company that’s barely taken responsibility for enabling Russian disinformation to reach US citizens in an effort to undermine democratic elections? Do we think that Facebook is turning the internet from an autonomous, social democratic space into an expanding, poorly managed shopping mall featuring a food court of candied garbage and Jumbotrons blasting extreme propaganda that’s built on top of the grave of the free and open web? Yes, yes, yes, and yes. That’s why we’ve never been big fans, much less paid to use Facebook.

Why Facebook Is Bad News

However, for the sake of argument, let’s imagine that we accept that this is Facebook’s business model, and it is free to create its own rules on its private platform. Fine. There’s still a broader inequity to address.

Facebook’s pricing treats nonprofits and artists the same as a multinational corporation like Coca-Cola, a high-end neighborhood boutique hair salon, or a vitamin supplement scam. The advertising model makes no exceptions for nonprofits—even though we have nothing to sell and our mission, legally bound, is for the common good.

This difference in purpose is significant. It’s why the US government does not charge taxes to nonprofits, and the postal service offers reduced rates. Even other tech companies put nonprofits in a different category. PayPal charges less to process charitable donations and enables fundraising opportunities through partners like eBay.

At the C4AA, we use the messaging system Slack, and were delighted to learn it offers a significant discount to non-profits to upgrade from their free plan to the standard plan. That discount? 100 percent. To upgrade to the top plan, the Plus Plan, the discount is 85 percent. Slack partners with the non-profit TechSoup, which arranges discounted software, hardware, and support from for-profits to nonprofit organizations. One TechSoup partner, Google—yes, that Google—offers thousands of in-kind dollars for “ad grants” so nonprofits can compete to communicate alongside for-profit companies.

Facebook offers no such discount. It considers all communication from any organization to be a form of “advertising.” Facebook will take the money of anyone who pays—whether to sell products or discord.

Sure, we can keep posting there anyway for free, but less than 3 percent of our followers would know.

Meanwhile, the Facebook-using public—around two billion people—is unaware of what they are missing. My social network may consist of a mix of the causes I care about, artists who challenge my thinking, independent news organizations I trust, some friends and family, and even a few businesses I like. But what I select is not what I see—at least not entirely. And this is a system that puts artists and nonprofits at a disadvantage.

In the past two years, we’ve seen this problem get worse. After the 2016 election, the C4AA began considering this decision more seriously, and after much internal discussion among our leadership and a few board members, along with last week’s indictments, we felt it was time. As much as Facebook and Mark Zuckerberg claim to want to build community and bring the world closer together, their business decisions tell another story.

Looking Ahead

For some nonprofits, paying Facebook for access to supporters is a deal they’re willing to make. No judgment here. C4AA staff still use it to stay in touch with friends. Many organizations we work alongside use Facebook for advocacy efforts. We know for some it may not be a reasonable option to withdraw. We’re not insisting anyone needs to adhere to some arbitrary purity standard. We’ve just decided Facebook is not for us.

For now, we’ve found our email newsletters much more effective because at least we know the message reaches the subscribers’ inbox. And while we are no longer investing our time or our donors’ money into Facebook, it’s not a complete departure. We’re letting automated systems repost from our website and from other social networks.

Leaving history’s biggest social network feels risky. We don’t want to lose those 4,000-plus people—though, in a way, they’ve been lost for a long time. And we remember: It’s not that big of a deal! This makes us only slightly more radical than the Unilever Corporation.

If you’re at a nonprofit and wondering what you can do, have a conversation with your leadership and make a conscious choice. Look at your Facebook stats. Are you reaching your audience? Is paying worth it? Is the money, content, and audience you give Facebook consistent with the goals and mission of your organization?

The Center for Artistic Activism is at C4AA.org. You can sign up for the Center for Artistic Activism email newsletter here. You could also follow us on Facebook, but what would be the point?

Steve Lambert is an associate professor of new media at the State University of New York at Purchase College, a co-founder and co-director of the Center for Artistic Activism, and an artist whose work can be seen at visitsteve.com.

Can we enter the fight against extremism?

Very useful for activists: maybe homophobic campaigns can be identified as extremism and erased from YouTube!

Terrorism is an attack on open societies, and addressing the threat posed by violence and hate is a critical challenge for us all. Google and YouTube are committed to being part of the solution. We are working with government, law enforcement and civil society groups to tackle the problem of violent extremism online. There should be no place for terrorist content on our services.

While we and others have worked for years to identify and remove content that violates our policies, the uncomfortable truth is that we, as an industry, must acknowledge that more needs to be done. Now.

We have thousands of people around the world who review and counter abuse of our platforms. Our engineers have developed technology to prevent re-uploads of known terrorist content using image-matching technology. We have invested in systems that use content-based signals to help identify new videos for removal. And we have developed partnerships with expert groups, counter-extremism agencies, and the other technology companies to help inform and strengthen our efforts.

Today, we are pledging to take four additional steps.

First, we are increasing our use of technology to help identify extremist and terrorism-related videos. This can be challenging: a video of a terrorist attack may be informative news reporting if broadcast by the BBC, or glorification of violence if uploaded in a different context by a different user. We have used video analysis models to find and assess more than 50 per cent of the terrorism-related content we have removed over the past six months. We will now devote more engineering resources to apply our most advanced machine learning research to train new “content classifiers” to help us more quickly identify and remove extremist and terrorism-related content.

Second, because technology alone is not a silver bullet, we will greatly increase the number of independent experts in YouTube’s Trusted Flagger programme. Machines can help identify problematic videos, but human experts still play a role in nuanced decisions about the line between violent propaganda and religious or newsworthy speech. While many user flags can be inaccurate, Trusted Flagger reports are accurate over 90 per cent of the time and help us scale our efforts and identify emerging areas of concern. We will expand this programme by adding 50 expert NGOs to the 63 organisations who are already part of the programme, and we will support them with operational grants. This allows us to benefit from the expertise of specialised organisations working on issues like hate speech, self-harm, and terrorism. We will also expand our work with counter-extremist groups to help identify content that may be being used to radicalise and recruit extremists.

Third, we will be taking a tougher stance on videos that do not clearly violate our policies — for example, videos that contain inflammatory religious or supremacist content. In future these will appear behind an interstitial warning and they will not be monetised, recommended or eligible for comments or user endorsements. That means these videos will have less engagement and be harder to find. We think this strikes the right balance between free expression and access to information without promoting extremely offensive viewpoints.

Finally, YouTube will expand its role in counter-radicalisation efforts. Building on our successful Creators for Change programme promoting YouTube voices against hate and radicalisation, we are working with Jigsaw to implement the “Redirect Method” more broadly across Europe. This promising approach harnesses the power of targeted online advertising to reach potential Isis recruits, and redirects them towards anti-terrorist videos that can change their minds about joining. In previous deployments of this system, potential recruits have clicked through on the ads at an unusually high rate, and watched over half a million minutes of video content that debunks terrorist recruiting messages.

We have also recently committed to working with industry colleagues—including Facebook, Microsoft, and Twitter—to establish an international forum to share and develop technology and support smaller companies and accelerate our joint efforts to tackle terrorism online.

Collectively, these changes will make a difference. And we’ll keep working on the problem until we get the balance right. Extremists and terrorists seek to attack and erode not just our security, but also our values: the very things that make our societies open and free. We must not let them. Together, we can build lasting solutions that address the threats to our security and our freedoms. It is a sweeping and complex challenge. We are committed to playing our part.

Express yourself(ie)!

Expressing ourselves is at the heart of every campaign.

Our expression is what makes us visible, what makes us liked or disliked, what brings us enemies and allies.

Expressions come in many forms, and each campaigner faces an early, crucial choice: whose expression are we considering, and in what form?

The answer to the first question is very often “Everyone’s”: while many campaigns choose to have celebrities, moral authorities or selected individuals carry a standard message, many others increasingly choose to call for public expression.

Public expression campaigns have the combined benefit of generating original content, which can serve as a basis for advocacy (for example, when the campaign aims at collecting powerful stories that will then be brought to decision makers), and of reinforcing the community by drawing more people into the action.

But inviting the public to express themselves is not necessarily easy.

The answer to the second question is often “Selfies”. Many campaigns are indeed based on people sending selfies, arguably the easiest form of participation, both for those who contribute and for those in charge of validating the content: a split second tells you whether a photo is OK to post, or to remain on a Facebook page or Tumblr account, whereas written contributions often take a long time to read, and it can at times be difficult to determine whether they are acceptable.

Most selfie campaigns will be based on people sending a picture of themselves holding a sign with their message.

But as time goes by, selfie campaigns have become quite worn out, and campaigners need fresh ideas for public expression campaigns.

In a previous article, we documented the ‘Kiss the Pride’ initiative which invited the public to send ‘Rainbow lips’ selfies.

We also documented how nudity and sexuality are being used in selfie campaigns.

There are many ways in which a selfie campaign can be tailored to the campaign’s message.

A feminist campaign once asked the public to deconstruct images of masculinity/patriarchy.


A campaign from an LGBT organisation, which wanted to make the point that legal and social obstacles to expressing your full sexuality left people incomplete, asked the public to send half portraits of themselves and created a giant display of these submissions.


In some contexts, coming out as LGBT is simply too risky to allow for a selfie campaign. But there are creative ways around it: the very incapacity to show your face publicly can become the message of your campaign. French photographer Philippe Castetbon created a campaign in which people sent creative shots of themselves in which they remained unidentifiable. The message was clear: repressive legislation and social climates deprive people of the very basis of their identity, their image. In places where criminal laws are in force, selfies can feature people’s faces masked by prison bars.


Holding a mirror in front of your face when you take the selfie is also a powerful way to demonstrate how the person looking at you (and maybe condemning you) could easily be in your place.


Need more ideas to inspire your next selfie campaign? Check out



If you feel your public needs advice on taking good selfies, check out these, and also see below a nice infographic from the postplanner site.






Virtual reality gets real in latest campaigns

It’s difficult to imagine how LGBT campaigning can integrate VR. Would an experience of rejection and discrimination filmed in VR and brought to the viewer be an effective tool? VR has been called the “empathy machine”, but there is little experimentation yet as to how far this goes. In any case, there are bound to be many discussions on this in the future, so LGBT campaigners should probably get themselves on top of things.

From Greenpeace’s Mobilisation Lab


Learning from the frontlines of VR at Greenpeace and beyond
Since the first mission to remote Amchitka, Alaska, in 1971, Greenpeace has heightened awareness by pushing the boundaries of reporting. Storytelling – and bringing people into the conversation about what’s at stake – is always evolving as technologies, cultural sensitivities, and the problems themselves shift.

Journey to the Arctic virtual reality

This summer, in keeping with this evolution and tradition of experimentation, Greenpeace launched A Journey to the Arctic. The project was the organisation’s first virtual reality (VR) campaign, addressing the rapid and devastating impact of climate change in the Arctic.

Using new technology – not to mention an expensive and uncharted one that asks viewers to wear silly masks that can cause motion sickness – is always a leap of faith. How did Greenpeace pull this campaign off and what can we learn?

Taking People to a Place Nobody Ever Sees

A Journey to the Arctic depicts the sublime beauty of Northern Svalbard (a pristine Norwegian archipelago), immersing us in the beautiful, remote, and yet integrally important Arctic region that has become increasingly fragile due to human-related climate change. With a VR viewer strapped to your face and your head swirling around to explore, you begin your journey in front of the Arctic Sunrise as it breaks its way through the ice to Svalbard.

A Journey to the Arctic slowly takes viewers deep inside a glacier, with hints of all the wildlife hidden within the snow and ice. You even see a mother polar bear with a cub, curiously investigating the camera – or rather six GoPro cameras for 360-degree video.

Bringing people from around the world to the frontlines of climate change is critical, especially as  Arctic ice melt accelerates. Yet doing so without further damaging the environment requires a mediated experience. Rasmus Törnqvist, the project’s Director of Photography, chose VR for its power to transport people and elicit emotional responses.

Empathy and Ecotourism 2.0

Törnqvist, who  began working for Greenpeace as a campaigner 11 years ago, told us that VR “provides a unique opportunity to take millions of people to the arctic,” calling it Ecotourism 2.0.

The VR experience is still too new for definitive results, Törnqvist says, but initial findings are promising. A Journey to the Arctic was created with face-to-face campaigning in mind. The film may be viewed anywhere but at 3.5 minutes it was made to test how VR works with campaigners and fundraisers on the street.

When people on the street see the VR video, “most of the time they’re amazed and ready to support,” said Törnqvist. In some instances, campaigners have credited VR for more than doubling donations. Törnqvist hopes that within a few more months of campaigning, they will have provable stats to see just how effective this new technology is for Greenpeace.

Getting to Behavior Change (and Impact)

This positive response mirrors results found by Stanford University’s Virtual Human Interaction Lab. The Lab’s former Hardware Manager, Cody Karutz, told us by email that several studies show VR can nourish empathy and, more importantly, behavior change, in relation to the environment. One study showed a relationship between immersive video and reductions in hot water use. Two other studies found that VR can be used to give people an animal’s perspective and thereby create greater feelings of connection between the self and nature.

Karutz told us that A Journey to the Arctic “gives the user enough time to accommodate to the Arctic spaces.” However, he says, “the piece is still focused on showcasing an exotic space.” This helps reduce one’s psychological distance from the issue, which is important. But bringing the issue home to the user’s local reality is integral to the work’s success. The campaigner, the human handing the viewer the VR goggles, needs to frame the story and give the user a hook; that framing is an integral part of the piece.

An Empathy Machine is not Enough

The conversation around VR in tech spaces tends to highlight its empathetic powers. In a 2015 TED talk, Vrse CEO and founder Chris Milk called VR an “empathy machine.” Törnqvist takes inspiration from Milk’s work but rejects that framing, instead calling VR an “amplifier of emotion.”

The technology can isolate, enrage, or build empathy; the context, framing, and work is what makes the difference. As Jeremy Bailenson, founding director of the Stanford lab on VR, said, “It’s up to us to choose.”

Ainsley Sutherland, a fellow at BuzzFeed Open Lab who studied VR and empathy while at MIT, has also been critical of efforts to cast empathy creation as the most important aspect of virtual reality. Sutherland wrote that VR “cannot reproduce internal states, only the physical conditions that might influence that.” There are hundreds of relational factors, such as where you use VR, how it is presented to the user, and by whom the story is framed that can create, hinder, or alter the emotional connection between the VR environment and the viewer.

The VR Experience is More than Goggles

Contexts (and campaigners) frame the story. Greenpeace’s Törnqvist notes the continued primary and powerful role of the campaigner – and campaign. Greenpeace found that the setting in which the VR is shared influences the user experience. On a crowded street, few people will agree to sit and wear awkward headgear. Those that do have a less immersive experience than users at festivals or other locations. Törnqvist attributed this to a more relaxed, convivial setting. The quality of the VR experience matters, but context can make or break the VR as well.

Törnqvist tells us that anyone who says they know how to make great VR films is either lying or from the future. However, he and others have some important lessons based on countless hours of filming. In largely stationary shots that allow the viewer to control where they look, building a didactic narrative is less effective.

Place as story. Evan Wexler, Technical Director and Cinematographer for On the Brink of Famine, says the key value of VR lies in building an experience of the site itself. Wexler calls this “place as story.” We see this in A Journey to the Arctic when the narrator invites us, upon arrival in Svalbard, to simply “just look around.” Wexler and Törnqvist both note that it’s important to find the right location at which to focus the viewer’s attention – the place where their presence may have a transformative effect.

Positive emotions are more powerful. Törnqvist also found that positive emotions tend to create more powerful experiences. He chose Svalbard as an environment still largely untouched by humans in order to “offer the same awe and passion that we [at Greenpeace] feel about the planet.” The footage shows what sublime landscape is at stake, not what is already lost.

Create depth. Wexler and Törnqvist also discussed the importance and challenge of creating depth. 360 degree stationary camera rigs do not offer a large depth of field. Have the key subject nearby and other objects of value at middle and far distances to create a richer environment. This obviously presented some challenges in Svalbard, a land largely comprised of snow.

Where’s the audience? The empathy and impact of any communications medium depends on the reader or viewer. In his TED talk, Chris Milk points out the importance of connecting his film for the United Nations, Clouds Over Sidra, to those with the power to make a difference.

Clouds over Sidra - Virtual Reality

The UN screened Milk’s film about Sidra, a 12-year-old Syrian girl in the Zaatari Refugee Camp, at the World Economic Forum’s meeting in Davos, Switzerland. It’s useful for campaigns to consider how and where their targeted audiences will view VR stories.

Where to Go with Virtual Reality

Karutz notes that there is a great lack of interaction in most VR video, including A Journey to the Arctic. Without “embodied engagement with the user and the VR environment,” Karutz says, there could be less lasting behavior change. This can be mediated by the campaigners, but Greenpeace is already working on pushing VR even further.

Pete Speller of Greenpeace International is working with The Feelies, a multi-sensory design team, and Alchemy VR, experts in creating compelling virtual reality narrative experiences, on a VR project that takes viewers inside Sawré Muybu village, home of the Munduruku Indigenous People in the Amazon rainforest.

In the Tapajós project, as it’s called, multi-sensory viewing pods will complement the VR film to create an immersive experience incorporating sounds, imagery, motion, smell and touch. The goal is to create a deeper connection to the Munduruku people and Amazon rainforest. The work will be publicly launched in Rio de Janeiro in early 2017.

“A fundamental of Greenpeace has always been the act of bearing witness,” Törnqvist told me. “Now, with VR, we have an opportunity for anyone to do so.” VR is a new way of telling stories but using it effectively requires a creative coupling with all the old tools campaigners have been honing for decades. Finding that balance remains the challenge.

What do those temporary Facebook profile pictures really mean?

As so many of us focus our work around online campaigns, it’s really useful to know how social media drives norms (or doesn’t…). Here’s a great article from Scientific American that might help inform some of your future plans!

Also worth reading on the same topic is the Washington Post’s article “More than 26 million people have changed their Facebook picture to a rainbow flag. Here’s why that matters.”

From Scientific American

“We know that online peer pressure is powerful. But what we don’t know is whether that pressure is driving real change.

Sharing your opinions and thoughts online is as simple as clicking a button. But you might want to hold off on clicking that button if your opinion or thinking differs from the at-the-moment sentiment sweeping through your social network. To do otherwise might bring the ire of your connections, and with it ostracism from the group. While it has never been easier to share online, it’s also never been harder to share things that differ from public sentiment, or to not offer an opinion in the wake of emotionally charged events. Peer pressure, which was once categorically regarded as a negative driver of drugs and deviant behavior, has morphed into a broader expression of social pressure in online spaces and is more aligned with maintaining group norms.

Why is this an issue? There is a difference between norms that arise as a result of social consideration and norms that are driven by social momentum. The former are designed to improve a group’s cohesiveness by establishing degrees of sameness through agreement; they can be challenged and debated, and there is room for them to change to meet the needs of the widest possible group set. The latter, however, are driven by emotional responses. They become established quickly and decisively, spreading like wildfire, and bear a violence toward those who disagree. This has rightly been described as mob mentality because there is little discussion or debate; and while some people are relieved to have their beliefs finally expressed publicly, others follow because they are swept along by the expressions of the group or because they are afraid to stand apart from the group. In the online world, this has recently been helpful in highlighting cases of harassment, but caution is warranted. There is a speed-to-action online that is troubling in that it quickly establishes a stigma tied to behavior or thinking that differs and forces people to act in less than meaningful ways.

In recent years, both of these circumstances have played out on Facebook. In 2012, Facebook allowed users to indicate their organ donor status. Later that year, Facebook asked users to pledge to vote in the presidential election. Both actions were marked by a sharable status that a user could use to broadcast action/intent to his or her network. The organ donor initiative was meant to help reduce the misconceptions that plague the donor community and prevent donor sign-ups. It drew criticism because it highlighted a personal choice as something a person could now be judged on, calling out a status that may differ between people and matter more than whether you both liked the television show Friends. Similarly, “I Voted” was meant to mobilize people based on peer pressure. The idea being that if the majority of your friends had voted, you might want to as well. While most people will agree that becoming an organ donor or casting a vote is not a bad thing, the pressure to indicate that you’re in sync with your community might result in a false reporting of your status. There was no means of verifying that you were an organ donor or that you voted. What mattered, however, was the show of solidarity, which was driven by an emotional wave of activism and change, respectively.

Behaviors and thoughts spread much in the same way that viruses do: they’re most powerful, and contagious, when passed between people who have close contact with each other. Within social networks–both online and offline–there is evidence to suggest that in groups where there is a great deal of overlap between members in terms of shared connections and interests, there are higher rates of adoption of behaviors and thinking because members are receiving reinforced signals about certain patterns. In these types of clustered networks, behavior and thought exist as complex contagions, requiring multiple points of contact before “infection” is established.

Researchers Nicholas Christakis and James Fowler gave us a good example of the power of clustered networks by tracing obesity, smoking cessation, and happiness through the Framingham network. This network was revealed following a medical study that collected information on personal contacts, which allowed the participants’ social networks to be mapped years later, and for researchers to trace the spread of certain behaviors. Christakis and Fowler found that:

  • If a person became obese, the likelihood that his friend would also become obese increased by 171%.
  • When smokers quit, their friends were 36% more likely to also quit. (Although this effect diminishes as the separation between contacts grows, and loses its efficacy at four degrees of separation.)
  • Happy friends increased the likelihood of an individual being happy by 8%.

The Framingham data illustrated a potential impact of the connections within a network. Our networks help us establish a sense of what’s acceptable–right down to expanding waistlines. The more social reinforcement we receive that certain actions are appropriate, the more likely we are to adopt those actions ourselves.

The catch here is that the Framingham data represents an offline dataset. So in the case of the smokers who quit and influenced their friends to follow, this happened without a temporary profile picture or an “I quit smoking” Facebook status. This behavior played out offline where it was vetted and assessed before it was adopted. That kind of critical thinking is often missing from the online pressure to conform. What does it mean if your profile picture was not updated? Maybe you’re not active on Facebook often, in which case, you’d probably get a pass. But if you are active, does it mean you condone the attacks? What do we really accomplish with these kinds of acts of solidarity? Ultimately, it sends a message about who we are as people; it serves to distinguish us from an other–it says we aren’t like them, we aren’t bad people. But does it stop there?

Beyond our responses to acts of terrorism, we are establishing new data points upon which we can be judged. In the Framingham study, smokers mingled freely with nonsmokers in 1971 and they were distributed evenly throughout the network. However, by 2001 as groups of smokers quit, those who persisted were socially isolated. What if we required people to list their status as smokers or non-smokers–how would our networks shift as a result of this information? The temporary profile picture is a great way to get people to initially think about what is happening around them. But what does it mean beyond that? How does it drive change in a meaningful way? Right now, it may be a conversation point, but it may also provide an easy way out of having to take action in the real world. There are presently voices online highlighting ways that people can help–but will people feel that need to once they’ve updated their profile picture?”
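The “complex contagion” idea in the excerpt above, that behaviors in clustered networks need multiple reinforcing contacts before they spread, can be illustrated with a minimal threshold model. This sketch is for intuition only; the graph and threshold values are invented and are not taken from the Framingham study or the article:

```python
def spread(neighbors, seeds, threshold):
    """Simulate contagion on a graph: a node adopts once at least
    `threshold` of its neighbors have adopted (1 = simple contagion,
    2+ = complex contagion requiring reinforcement)."""
    adopted = set(seeds)
    changed = True
    while changed:
        changed = False
        for node, nbrs in neighbors.items():
            if node not in adopted and sum(n in adopted for n in nbrs) >= threshold:
                adopted.add(node)
                changed = True
    return adopted

# A small clustered network: nodes 0-3 share overlapping ties (triangles),
# while nodes 4 and 5 hang off the end of a sparse chain.
clustered = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3],
             3: [1, 2, 4], 4: [3, 5], 5: [4]}

# Simple contagion (one exposure suffices) reaches everyone from one seed.
print(len(spread(clustered, {0}, 1)))      # → 6

# Complex contagion (two exposures needed) spreads through the clustered
# part but stalls where the reinforcing overlap runs out.
print(len(spread(clustered, {0, 1}, 2)))   # → 4
```

With a threshold of 1, a single seed reaches the whole network; with a threshold of 2, adoption spreads only where ties overlap, mirroring the excerpt’s point that clustered networks sustain complex contagions while sparse chains do not.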


Strategizing social media

“New Tactics in Human Rights” brings together experts in “Online Discussions” on various topics.

In this online conversation, they explored:

  • How to define your social media goals and targets;
  • Strategizing about how to reach your stakeholders with social media;
  • Making decisions about the resources you should devote to building and maintaining a social media presence;
  • How to use social media without putting your staff and your constituents at risk.

This online conversation was an opportunity to exchange experiences, lessons-learned and best practices among practitioners using social media strategically in human rights work.

Tactic examples shared in the conversation:

Engage your audience

With hundreds of millions of people around the world participating in social networks, it’s become passé to try to “be the message”. If your campaign mainly aims at building your sense of community, it’s OK to generate expressions and send them to one another, which is typically what selfie campaigns do. But if you’re trying to change anybody’s heart and mind, your campaign shouldn’t send a message; it shouldn’t even generate expressions. It should convene a conversation, as only conversations move positions.

One finding that stood out from a survey by the Case Foundation was that 74 percent of non-profits use social media as a megaphone to announce events and share what they’re up to, instead of seeking out conversations.

Online campaigning is about polarizing a discussion effectively, and then curating the conversation to make your side more compelling. And to create the conversation, you have to engage your audience.

Here are some tips for achieving this:

Find the right tactic to avoid a ghost town.

If there isn’t any engagement with your social media efforts, it generally keeps new visitors from engaging. How do you first encourage engagement? You can recruit a handful of very loyal supporters – staff and volunteers – and get them to commit to participating in your social media efforts.

If you have some clout, you should also recruit leaders. See for example how the “Internet for schools” campaign by Social Driver and The Alliance for Education Excellence started their mobilisation:

“In order to start the campaign off, Social Driver identified 20 key influencers in the education space that they knew could kick-start a conversation. Once they were onboard and excited, Social Driver had them each make a video explaining why internet access and WiFi is important for schools – and post it with a tag to the FCC. From there, the snowball started rolling. Other educators, parents and students started to make similar videos, and even more people started to call on the FCC to expand internet and wifi coverage. All those conversations, impressions, and direct calls on the FCC gave the campaign the voice and weight it needed to be heard.”

Get your people out there to create content!

People don’t want to just participate in a campaign, they want to be the message.

The pinnacle of this is citizen journalism.

For a compelling introduction to what that is, we highly recommend this interesting TED Talk about Storify.com.

Ask questions

This gets high interaction rates. Posts on Facebook with a question mark generate twice as much engagement as other posts.

Include photos or graphics with posts

A tweet with a visual gets on average 50% more engagement than one without.

Every user matters

Responding to a single comment isn’t just about keeping that one user engaged. It shows all users that THEY would matter too, which is the single strongest driver of engagement.

Negative comments are an opportunity! There is nothing better to start a conversation and wake up sleepy troops than a good troll. Interact, don’t delete.

Make sub-communities

Being or becoming part of a group is one of the fundamental drivers of our engagement. The more this community is made visible, the more people will engage. Look at the #hometovote campaign in Ireland: one of the target groups of the campaign to get people to support same-sex marriage and vote in the referendum was Irish people from the diaspora. To get them to come back, which represents a very strong form of engagement, campaigners created this very specific community, formalised by a hashtag among other expressions, which gave people the additional sense of belonging that propelled them to act.

Get out there

We spend most of our time preaching to the choir, when we should really spend at least 70% of our time reaching out to our target audience, the moveable middle: those people whose views, attitudes and actions can be shifted.

Are we really investing in the spaces that our target group is in, instead of talking to one another? We definitely need to assess where we spend our time, and adjust when needed.