Hollywood has popularised the concept of ‘artificial intelligence’ through a spectrum of portrayals, ranging from the advanced, near-sentient digital assistant Jarvis powering Iron Man’s suits to self-aware autonomous machines turning against humanity. While these depictions are largely fictional, they obscure the actual, pervasive impact of AI on our everyday lives, which remains largely unnoticed by the general public. Consider: if you’re reading this article, you probably found it on your social media feed. Ever wondered why it appeared on your timeline but not on that of the person next to you, who might be using the same platform? The answer lies in the AI-based algorithms used by social media companies. These algorithms quietly and constantly shape our online experiences, tailoring content to our individual behaviours and preferences in ways we often don’t realise.
Micro-targeting on social media platforms refers to the use of sophisticated algorithms to analyse vast amounts of user data, such as interests, browsing habits and demographic information. This enables advertisers and content creators to deliver highly personalised content to specific groups of users, who are then more likely to engage with it, as it resonates with their individual preferences and behaviours. In electoral campaigns, this can be abused: political entities can disseminate highly customised messages to sway specific voter groups. Because micro-targeting often remains invisible to the general public, it can be used to push misinformation or polarising content at vulnerable or impressionable demographics. The practice can manipulate public opinion under the radar and undermine the democratic process by creating echo chambers and exacerbating societal divisions, as voters receive tailored information that reinforces and exploits their biases rather than a balanced view of political discourse.
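To make the mechanism concrete, the audience-selection step of micro-targeting can be sketched in a few lines of code. This is a deliberately simplified illustration, not any platform’s actual system: the profile fields, campaign criteria and scoring weights below are all hypothetical, and real ad-delivery systems are vastly more complex.

```python
# A minimal, illustrative sketch of micro-targeting: score each user
# profile against a campaign's target criteria and deliver the message
# only to close matches. All fields and weights here are hypothetical.

users = [
    {"id": 1, "age": 22, "city": "Lahore", "interests": {"cricket", "politics"}},
    {"id": 2, "age": 45, "city": "Karachi", "interests": {"cooking"}},
    {"id": 3, "age": 31, "city": "Lahore", "interests": {"politics", "tech"}},
]

def match_score(user, campaign):
    """Count how many of the campaign's target criteria this user meets."""
    score = 0
    if campaign["min_age"] <= user["age"] <= campaign["max_age"]:
        score += 1
    if user["city"] in campaign["cities"]:
        score += 1
    # One point per overlapping interest.
    score += len(user["interests"] & campaign["interests"])
    return score

campaign = {
    "min_age": 18, "max_age": 35,
    "cities": {"Lahore"},
    "interests": {"politics"},
}

# Only users above a match threshold ever see the message;
# everyone else never knows it exists.
audience = [u["id"] for u in users if match_score(u, campaign) >= 3]
print(audience)  # users 1 and 3 qualify; user 2 is never shown the ad
```

The key point the sketch illustrates is invisibility: the filtering happens server-side, so the excluded user has no way of knowing that a tailored political message was shown to their neighbour.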
To contextualise our thought experiment, it’s important to look at some key statistics. Recent surveys reveal that Pakistan hosts around 72.9 million social media accounts, representing more than 30 percent of its total population. In comparison, the Election Commission of Pakistan has recorded approximately 127 million registered voters, a considerable number of whom are likely to be active on social media platforms. This overlap between social media users and eligible voters underscores the significant influence that social media platforms may wield in shaping public opinion and determining electoral results in Pakistan.
Join these dots, and top them with the fact that social media companies prioritise their “bottom lines” above all else. The emerging picture suggests a landscape where the AI-powered timelines of social media companies, equipped with tools like micro-targeting and fuelled by user traffic and (political) ads, could significantly sway voter behaviour.
Another aspect of AI that saw substantial growth in 2023 is generative AI: advanced artificial intelligence systems that can create new content, including text, images, music and video. Such synthetic media, sometimes referred to as “deepfakes” when they impersonate real people, make it increasingly difficult for individuals to distinguish between what is real and what is artificial. This deceptive content can be used to manipulate public opinion, exacerbate political polarisation and incite social unrest, as was seen recently during elections in Bangladesh.
The question then arises: in our digital landscape, where nearly everything from AI-driven timelines to the political economy and monetisation of disinformation is contributing to polarisation, creating echo chambers and presenting a distorted worldview to users, what can be done to minimise the harm?
A strategy that has been demonstrated to be effective is enhancing information literacy among internet users. This means equipping individuals with the skills to critically evaluate and understand information in the digital age, so that they are better able to navigate the complex landscape of online content and discern reliable sources from misleading ones. This approach fosters a more discerning and informed user base, capable of challenging disinformation and reducing the impact of digital echo chambers. The strategy, however, has its own set of challenges. First, the digital divide presents a significant hurdle: access to information literacy resources is often limited in underprivileged or remote areas. Second, the rapid evolution of technology and the ever-changing landscape of misinformation tactics require that information literacy programmes be continuously updated, demanding substantial resources and adaptability. Additionally, combating deeply entrenched biases and preconceived notions can be difficult, as these heavily influence how information is perceived and accepted. Finally, the financial resources required to make information literacy resources available on a national scale are substantial, and such programmes often cannot be implemented without structural policy measures, such as updates to the education curriculum.
As AI continues to shape our digital interactions and perceptions, the role of enhanced information literacy emerges as a pivotal tool for the empowerment of information ecosystem. It is a crucial step in equipping individuals to navigate the complex online landscape, ensuring a more informed and resilient digital society against the backdrop of ever-advancing technology. This pursuit, while demanding, is essential in upholding the integrity and balance of our increasingly digital world.
The writer is the director and founder of Media Matters for Democracy. He writes on media and digital freedoms, media sustainability and countering misinformation