Crowdsourcing violence

October 27, 2024

Digital mobs, crowdsourcing their anger and violence, gather strength from the ease with which rumours spread online

— Photos by Rahat Dar


Crowdsourcing violence—a term I often use to describe the practice of invoking digital mobs through organised, premeditated means—can be understood through deeply personal stories. “Love jihad,” for instance, is a case in which misinformation is engineered, through intricate means, to spiral into acts of aggression. In nearly all such cases, pre-existing societal fault-lines, such as religious or ethnic hatred and discrimination, are exploited to provoke rage, a potent catalyst for virality on social networks, often with devastating consequences.

Imagine a young couple in love, caught in the crosshairs of this digital mob. Their relationship, once private and sacred, becomes a battleground as a single rumour—amplified through social media—brands them as part of a fabricated conspiracy. In towns and cities across India, these couples are not just shamed; they are hunted. Mobs, fuelled by the false narrative that Hindu women are being lured into converting to Islam through marriage, take it upon themselves to ‘defend’ their communities. The digital crowd organises itself, mobilised by rage and misinformation, to enforce its version of justice, often violently disrupting lives in the process. For the couple, what was once a pure bond turns into a life-threatening ordeal, where love is overshadowed by fear.

This isn’t an isolated event. These digital mobs, crowdsourcing their anger and violence, gather strength from the ease with which rumours spread online. Misinformation, often seeded by political or religious actors, finds its way into the hearts and minds of ordinary people, transforming them into unwitting soldiers of a false cause. In the case of ‘love jihad,’ the violence that follows isn’t organised in traditional ways; there’s no central leader giving orders. Instead, ordinary citizens, united by the outrage they feel through their screens, become perpetrators. With each shared post and viral video, they become more convinced of their righteousness, and the violence that ensues becomes a collective act of enforcement—uncoordinated, decentralised, but devastating in its impact.

The concept of crowdsourcing violence, as seen in the ‘love jihad’ narrative, finds a parallel in the recent college case in Lahore. Just as with the baseless allegations surrounding interfaith relationships, the Lahore case spiralled into chaos through misinformation. In both instances, the digital space became a battleground, where false rumours were strategically spread to stoke public outrage. What started as unverified reports of a student’s assault at a college in Lahore quickly spread across social media, leading to protests, fear and unrest in the community.

In most such cases, the real damage is done by the digital mob—a leaderless, self-organising force fuelled by rage and misinformation. Ordinary citizens with no personal connection to the situation become instruments of violence and chaos. They act out of a sense of righteousness, convinced by the stories they consume online yet oblivious to the truth. This crowdsourcing of violence, whether through the lens of religion in India or through the distorted narrative of a college assault in Lahore, demonstrates the power of digital platforms to ignite real-world harm. Misinformation no longer stays within the boundaries of a screen; it spills into our streets and cities, leaving lives and institutions broken in its wake.

The elephant in the room is the social media platforms, which profit significantly from the phenomenon of crowdsourcing violence. Their algorithms are designed to amplify content that generates engagement, regardless of whether that content is benign or harmful. Misinformation, particularly the kind that stokes anger or fear, spreads rapidly because it provokes strong emotional reactions, leading to more shares, comments and likes. In cases like ‘love jihad’ or the college incident in Lahore, platforms benefit from the virality of unverified information: the heated debates, outrage and mob mentality drive high traffic and user activity, which, in turn, boost ad revenue. These platforms profit from the attention economy, in which increased engagement translates directly into financial gain through advertising.

The damage inflicted on individuals and communities in the process is profound. The algorithms that favour sensationalism over accuracy create a feedback loop where divisive, hate-filled content dominates, leading to real-world violence and unrest. As digital mobs are mobilised, social media platforms remain complicit in amplifying the misinformation that fuels them. Their design choices prioritise profit over social responsibility, allowing harmful narratives to proliferate, often at the cost of human lives and societal cohesion.

To address the harmful spread of misinformation and the crowdsourcing of violence, we must seek solutions that preserve the benefits of social media platforms while tackling the root causes of digital mob behaviour. Banning platforms, as with the suspension of X, or criminalising speech is not the answer. Besides inviting political abuse, such measures would likely do more harm than good, pushing toxic discourse into darker, unregulated spaces where it is even harder to counter. Instead, we need targeted, actionable solutions that focus on regulation, platform responsibility and user empowerment.

First, social media platforms must be held accountable for the content their algorithms amplify. One effective measure is algorithmic transparency: platforms should disclose how their algorithms prioritise content, allowing both users and regulators to understand the mechanics behind the spread of misinformation. Users should also have the option to customise their timelines to favour credible, fact-checked information over sensationalised posts. This could be reinforced by partnerships with independent media outlets, ensuring that false or unverified information is flagged and slowed down before it gains dangerous momentum.

Second, regulation needs to shift its focus from content censorship to structural accountability. Rather than banning harmful speech outright, which risks infringing on free expression and other fundamental rights, regulators can mandate that platforms build more robust content moderation systems and respond more quickly to harmful viral trends, and can enforce stricter penalties for the intentional spread of misinformation. Enhanced AI tools could be deployed, for instance, to detect and disrupt the spread of violent narratives before they result in real-world harm.

Finally, the most sustainable solution lies in enhancing digital literacy. Empowering users to critically evaluate the information they encounter can break the cycle of crowdsourced violence at its source. This can be done through comprehensive media literacy programmes integrated into school curricula, and through public campaigns that teach people how to identify misinformation and avoid contributing to digital mobs.

Social media platforms perform an invaluable service, providing spaces for connection, activism and innovation. Instead of turning them into scapegoats, we must collaborate to ensure they contribute to public good by curbing the spread of dangerous misinformation, fostering critical thinking and holding themselves accountable for the content they amplify. The challenge is great, but the solutions are within reach if approached through cooperation, innovation and responsible regulation.


The writer is the director and founder of Media Matters for Democracy. He writes on media and digital freedoms, media sustainability and countering misinformation.
