The generative AI boom has brought to the fore many ethical considerations which policymakers need to take into account
The year 2023 was a pivotal period, witnessing substantial developments across various sectors of artificial intelligence.
Strides in AI-powered personalisation, transforming how advertisers engage with consumers on digital platforms, have been particularly noteworthy. Moreover, the application of AI in healthcare has paved the way for assisting doctors in diagnostics through the analysis of medical imaging including X-rays and MRIs. Simultaneously, AI-powered cybersecurity has empowered machines to continuously monitor and counter threats, fortifying the protection of digital assets for companies and organisations. Among the myriad developments and applications of AI in 2023, the rise of generative AI, commonly known as Gen AI, stands out as the most substantial.
Generative AI, at its core, refers to the technology that empowers machines to autonomously produce content, spanning text, images and even videos. OpenAI’s ChatGPT, a powerful language model, exemplifies the capabilities of generative AI, spearheading the revolution in this domain. The advent of ChatGPT and similar tools has not only facilitated automation but has also propelled the development of applications that transcend traditional boundaries, delving into creative and cognitive realms. As the number of applications grows and accessibility becomes more widespread, real-life instances of generative AI in action are beginning to emerge.
In Pakistan, Judge Mohammad Munir of the Mandi Bahauddin district court made history by incorporating AI assistance in a case involving pre-arrest bail for a juvenile, leveraging ChatGPT to conduct a quick legal review and streamline the judicial process. Recognising the potential of AI, the National Judicial Automation Committee, reconstituted by Chief Justice Qazi Faez Isa, is actively working towards introducing AI into legal processes. The committee aims to prepare a national plan integrating technology and AI to enhance the efficiency of the judiciary.
However, the ascent of generative AI is not without its challenges.
Legal and safety concerns loom large, with the proliferation of deep fakes presenting a particularly complex issue. Deep fakes, manipulative media content often generated by AI, pose a significant threat to the credibility and authenticity of information. Instances of deep fake videos and images have been noted in political scenarios, such as the Bangladesh elections, where they were employed in electoral campaigns, as reported by the Financial Times. The FT report mentions various instances of deep fake videos being used by pro-government outlets and influencers to push a political narrative. Even former US President Donald Trump found himself ensnared by deep fakes, with AI-generated images depicting his arrest circulating on Twitter. Such content can have profound implications on the integrity and sanctity of electoral and democratic processes.
Striking the right balance between innovation and regulation is crucial to ensuring the responsible and ethical deployment of generative AI. Balancing the benefits of technological advancement with the imperative to prevent misuse requires a nuanced and collaborative effort between technologists, policymakers and society at large.
In Pakistan’s local political context, Imran Khan’s recent AI-assisted speech adds another layer of complexity to the debate. Should deep fake videos, created with consent and used as an alternate form of expression, be considered a legitimate means for political leaders to convey their messages? Or are these leaders inadvertently normalising the use of deep fakes as credible content, blurring the lines between reality and manipulation?
The destructive potential of generative AI is starkly evident in cases of technology-facilitated gender-based violence. Instances of pornographic images and videos of young girls and women being circulated online for blackmail or defamation are on the rise. According to a study by Deeptrace, quoted by Forbes, the main application of deep fakes today is generating sexually explicit videos for cyber exploitation: the report found that 96 per cent of deep fake videos on the internet are pornographic, and their victims are overwhelmingly women. Additionally, nude images of women parliamentarians and politicians have been maliciously generated through Gen AI to defame them and threaten their political standing. The violation of privacy and the potential for reputational damage underscore the urgent need for robust regulations to curb misuse.
As we delve deeper into the era of generative AI, the challenges in policymaking and implementation become increasingly apparent. Ethical dilemmas surrounding the use of AI, particularly in political and gender-based contexts, demand comprehensive regulations that safeguard individuals and institutions from potential harm.
However, regulating generative AI poses its own complex challenges. One approach is to label AI-generated content with watermarks or other distinctive markers, helping users distinguish authentic material from generated content.
Platforms, too, are developing responses to these emerging challenges, with Google and Meta announcing policies requiring campaigns to disclose whether political adverts have been digitally altered.
Navigating this intricate landscape, the responsibility falls on us to foster an environment where generative AI can thrive without compromising our values and integrity. Societal awareness, combined with proactive regulatory frameworks, will be instrumental in harnessing the potential of generative AI for the greater good while mitigating its adverse impacts. The ongoing dialogue between stakeholders, encompassing technologists, policymakers and the general public, will be crucial in shaping a future where AI is a force for positive change rather than a source of potential harm.
The writer is an Islamabad-based journalist. He tweets at @asadbeyg