

Webinar: Defeating anti-Trans attacks: A narrative strategy

Ultra-conservative movements around the world are using anti-Trans rhetoric as part of a broader attempt to seize power and take away our freedoms.

How can we fight back, harnessing the power of narratives that unite us across gender, race and class?

A wide-ranging research project in the USA has investigated this question, and it offers some very interesting indications of what to say… and what not to!

Jay Marcellus, Director of Narratives at ASO Communications and former Director of Communication at the US Transgender Law Center, will share these insights and the methodology behind them, and will lead a Q&A session on narrative research.

Join us on June 3 at 15:00 Coordinated Universal Time (UTC)

Register here

 

 

Online workshop: Practicing Narrative Work

Everybody talks about “narrative work”, but what does it mean exactly and, most importantly, how do you do it?

Come and meet the experts for an online interactive workshop that will get you to practice and enhance your skills.

April 24, 15:00 Coordinated Universal Time (UTC)

Register HERE

Lucila Sandoval Herrera is a Latin American communicator. She works at the intersection of narrative work, strategic intersectional communications and collaborative processes, from a feminist and decolonial perspective. She will take you through the Iceberg exercise, a practical reframing tool that helps us find the undercurrents of a narrative.

Otieno P. Odongo is a Kenyan strategic communications practitioner and a narratives enthusiast. He will discuss lessons learned from two years of work with parents of LGBTIQ+ persons from Uganda, Kenya and Tanzania to identify harmful narratives and to find and spread alternative and counter-narratives.

Joel Bedos created and manages the Sogicampaigns project. He will lead a discussion around the positive and negative narratives in the story of an African-American Trans man, and will present the organisation’s free online course on Narrative Work.

 

Surfing AI trends for campaigning

Campaigners have always known how to surf trends in order to “piggyback” their message.

A previous article offered several examples of this, including how campaigners for Palestine used the Pokémon Go app during its 2016 craze.

Memes, likewise, provide daily support for campaign messages.

With AI becoming ubiquitous, campaigns are bound to rely increasingly on these tools for their communications. One recent example is Greenpeace Greece’s use of the popular Ghibli-style AI filters, as reported in this article from Adsofbrands.

Greenpeace and Ogilvy Greece Challenge “Ghiblification” Trend with Stark Environmental Campaign

Apr. 03, 2025

In response to the recent surge of AI-generated images emulating the whimsical aesthetics of Studio Ghibli—a phenomenon dubbed “Ghiblification”—Greenpeace, in collaboration with Ogilvy Greece, has launched a provocative campaign highlighting the harsh realities of environmental destruction that no filter can beautify.

The “Ghiblification” trend gained momentum in March 2025, as users worldwide began utilizing AI tools to transform personal photos and popular memes into the distinctive style of Studio Ghibli, known for films like My Neighbor Totoro and Spirited Away. This movement has sparked both admiration and ethical debates concerning AI’s role in art and its implications for human creativity.

Greenpeace’s campaign taps into this viral trend by applying Ghibli-style filters to devastating images—forest fires, floods, polluted coastlines—creating an unsettling contradiction. The result is a powerful statement: no filter can hide the truth. No aesthetic can erase the damage. And no AI can replace real environmental action.

Launched just as the global trend exploded, the campaign aims to awaken awareness and inspire real-world engagement. By using the same tools and visual language that dominate timelines and feeds, Greenpeace flips the narrative—urging people to look beyond the surface.

Because real change doesn’t come from filters. It comes from action. Support Greenpeace. Support the truth.

 

Interactive web workshop: Can AI help us win hearts and minds?

April 3, 2025 – 3pm Central European Time/9am EST

With hate-mongers increasingly in power around the world, large parts of public opinion are being swayed towards conservative attitudes.

Changing hearts and minds is therefore more urgent than ever for our movements, and to do this we have to find the most effective ways to address our audiences.

But how can we know what our audiences truly think, and which messages are most likely to build consensus? How can we go beyond the anecdotal, piecemeal results that traditional focus groups and interviews often provide?

Ever wondered whether AI could help with this?

Remesh is one tool that provides AI-powered focus groups, allowing for large-scale conversation and real-time results that highlight consensus, divergence, common themes, and much more.

In this 1-hour webinar we will:
– Examine how Remesh has been utilized to empower the collective voice of everyday people to influence positive change in their communities.
– Collectively experience a Remesh conversation to see its potential, as all attendees will be invited to become participants in a virtual online focus group on a range of questions relevant to our field.
– Discuss the pros and cons, and the potential risks, involved in using AI technology.

Please register HERE or on sogicampaigns.org for further info.

Should your nonprofit join the rising star social network, Bluesky?

This article was published on Hashtagnonprofit.

Bluesky is a new social media platform that’s being developed as an alternative to sites like X/Twitter. In just a few weeks after the US elections, its user base jumped from 13 million to over 21 million, with 3.5 million daily active users.

The growth of Bluesky is partly due to increasing frustration with X/Twitter’s policies and user experience, and partly due to Bluesky’s promise of decentralisation, cleaner content feeds, and user control. Decentralisation means no single company owns or controls the entire platform. Instead, it uses open-source technology so people and organisations can create their own communities or “networks” that connect to each other.
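
For the more technically inclined, that openness also means your posts are not locked behind one company’s tools. As a purely illustrative sketch, assuming the community-maintained atproto Python SDK (the handle and app password below are placeholders, not real credentials), posting to Bluesky programmatically can look something like this:

    # Illustrative sketch only: posting to Bluesky over the open AT Protocol
    # using the community-maintained "atproto" Python SDK (pip install atproto).
    # The handle and app password below are placeholders.
    from atproto import Client

    client = Client()
    # Log in with an app password created in Bluesky's settings, never your main password
    client.login("yourorg.bsky.social", "your-app-password")
    # Publish a short post (Bluesky limits posts to 300 characters)
    client.send_post(text="Our new report on community organising is out today!")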

In style, Bluesky feels like the early days of Twitter. With a 300-character limit on posts, you will typically find less noise, more engagement, and content from people you actually follow, rather than an algorithm-driven feed full of bots and ads.

With nonprofits always looking for new ways to reach people, share their work, and connect with supporters, Bluesky could be a good option—but it’s not perfect.

Why nonprofits should consider using Bluesky

  • Bluesky is still new, which means nonprofits can stand out and take the time to develop their audience and their voice.
  • The “decentralised” model means no single company controls it. Nonprofits have more say in how they share their content, and there’s less risk of sudden changes, like what has happened with other platforms.
  • Right now, Bluesky’s smaller community makes it easier to have meaningful chats and connect directly with people who care about your work.
  • Joining a new platform like Bluesky shows your nonprofit is open to fresh ideas and keeping up with the times.

Why nonprofits might avoid Bluesky, or wait a while before using the app

  • Bluesky doesn’t have as many users as X/Twitter or Facebook.
  • Managing another social media account takes time and energy, and nonprofits often have limited staff and resources.
  • If you have had your social media accounts for several years, you have probably forgotten how much time and effort it takes to build up an audience on a new platform. It is a slow process!
  • Bluesky is new, and it’s hard to know if it will still be popular in a year or two. Many social media platforms, like Threads, get popular very quickly, but start to lose active users over time.

(…)

The Bottom Line

Bluesky has potential, but it’s not for everyone. If you have the time and resources to try something new, it could help your nonprofit stand out. But if you’re already stretched thin, focus instead on making the most of the platforms where your audience already is.

 

How to use AI-assisted mass focus groups

This article was written with material taken from Loopanel.

AI is transforming the way we conduct focus groups, making the process faster, more efficient, and more insightful. From generating AI focus group questions to analysing transcripts and drafting reports, AI focus group tools are helping researchers at every stage of the process.

 

The research challenge

Anyone who has conducted social research will have faced these challenges:

  • Focus groups are too small to be really representative of the targeted demographic
  • In-person focus group processes generate a lot of social bias: they can tell you what people generally say about an issue, but not necessarily how they feel. These focus groups can be useful for determining messages that play into social conformity, but they fail to give insights into other, e.g. values-based, approaches
  • In-person focus groups are also difficult to organise, expensive and time-consuming

Recently developed AI-assisted tools can provide alternative or complementary methods, well worth exploring.

How does it work?

  • A question is asked on a dedicated app or platform to a range of participants (recruited randomly through the app, sampled by the provider, or brought in by the client)
  • Participants respond in their own words. Using free-form text, they can share their honest thoughts without constraints, giving you access to a rich source of data that supports your hypothesis — or takes you down an unexpected path you hadn’t considered.
  • Participants evaluate responses shared by others. This helps you understand how well responses reflect the views of specific segments and discover areas of resonance.
  • Each response is immediately analysed to understand its meaning and how similar or different it is to other responses. This makes it possible to predict how participants would vote on every response they didn’t see (since voting on thousands of responses would be impossible); a rough sketch of this idea in code follows this list.
  • Results are generated instantly, organised and analysed by themes, codes, sentiment, etc.
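
To make the similarity step above a little more concrete, here is a rough, self-contained sketch of the general idea: embed free-text responses as vectors, group them into themes, and estimate a participant’s vote on a response they never saw from the responses they did rate. The libraries, toy responses and numbers are illustrative assumptions, not the inner workings of any particular platform.

    # Rough sketch of the general idea behind AI-assisted mass focus groups:
    # turn free-text responses into vectors, cluster them into themes, and
    # estimate unseen votes from semantic similarity. Toy data for illustration.
    from sentence_transformers import SentenceTransformer  # pip install sentence-transformers
    from sklearn.cluster import KMeans
    from sklearn.metrics.pairwise import cosine_similarity

    responses = [
        "I worry most about healthcare costs for my family.",
        "Medical bills are what keep me up at night.",
        "Housing is simply unaffordable in my city.",
        "Rents keep rising much faster than wages.",
    ]

    model = SentenceTransformer("all-MiniLM-L6-v2")
    vectors = model.encode(responses)  # one embedding vector per response

    # Group similar responses into rough "themes"
    themes = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
    for text, theme in zip(responses, themes):
        print(f"theme {theme}: {text}")

    # A participant agreed strongly (0.9) with response 0 but never saw response 1.
    # Estimate their unseen vote from the similarity between the two responses.
    similarity = cosine_similarity([vectors[0]], [vectors[1]])[0, 0]
    print(f"predicted agreement with the unseen response: {0.9 * similarity:.2f}")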

Benefits:

One of the most significant benefits of AI in focus groups is its ability to streamline time-consuming tasks. AI-powered tools can assist researchers in creating questionnaires, transcribing audio recordings, and generating reports. 
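
For example, transcription, one of the most tedious of these tasks, can now largely be handed to open speech-to-text models. A minimal sketch, assuming the open-source openai-whisper package (which also requires ffmpeg to be installed) and a hypothetical recording file name:

    # Minimal sketch: transcribing a focus group recording with the open-source
    # Whisper model (pip install openai-whisper; requires ffmpeg on the system).
    # "focus_group_session1.mp3" is a hypothetical file name used for illustration.
    import whisper

    model = whisper.load_model("base")                     # small, fast general-purpose model
    result = model.transcribe("focus_group_session1.mp3")  # run speech-to-text
    print(result["text"])                                  # full transcript as plain text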

AI algorithms also excel at identifying patterns, sentiment, and key themes within large volumes of qualitative data, helping researchers quickly spot trends and connections across multiple focus groups and draw more comprehensive conclusions.

One of the biggest challenges of focus groups is the amount of data you’re dealing with. If you’re running multiple focus groups, you may end up with hundreds of pages of transcripts, notes, and observations. While we as people find this volume of data overwhelming, AI can help categorise it for us, making it easier to consume, process, and review.
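
As a concrete illustration of that kind of categorisation, here is a short sketch that tags transcript excerpts with a theme and a sentiment label using off-the-shelf models from the Hugging Face transformers library. The excerpts and the candidate theme labels are invented for the example.

    # Sketch: tagging transcript excerpts with a theme (zero-shot classification)
    # and a sentiment label, using off-the-shelf models (pip install transformers).
    # Excerpts and theme labels are invented for illustration.
    from transformers import pipeline

    excerpts = [
        "Honestly, the new clinic hours made it much easier for me to get seen.",
        "I still can't afford the co-pay, so I just skip my appointments.",
    ]

    theme_tagger = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
    sentiment = pipeline("sentiment-analysis")

    candidate_themes = ["access to care", "cost and affordability", "quality of care"]

    for text in excerpts:
        top_theme = theme_tagger(text, candidate_labels=candidate_themes)["labels"][0]
        mood = sentiment(text)[0]["label"]
        print(f"{top_theme:<25} {mood:<9} {text}")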

Challenges:

However useful it can be, it’s important to remember that AI is not a replacement for human expertise and judgement. While AI can provide valuable assistance, researchers must still use their skills and knowledge to design effective focus group studies, interpret the results, and make informed decisions. As with any new technology, it’s crucial to understand the limitations and potential drawbacks of AI in focus groups, such as the risk of biased or inaccurate outputs. This risk stems in part from AI models being trained largely on data from mainly white, Western audiences.

Furthermore, AI is not able to pick up decisive cues such as facial expressions, or nuances such as humour and sarcasm.

Using AI in focus groups involves collecting and analysing large amounts of personal data. This raises important concerns about privacy and data security. Researchers must be transparent about how they’re using AI and ensure they have robust measures in place to protect participants’ personal information. For example, be very careful if you’re using open chatbots like ChatGPT to analyse data—make sure you’ve opted out of letting them use your data for training purposes.

Providers:

A useful list of tools, with their respective qualities, is available HERE.