Artificial intelligence, real impact

August 18, 2024

AI regulation will be incomplete without a human rights approach

The Summer Olympics 2024 were significant for many reasons, including the first gold medal for Pakistan in 40 years. The Games also saw extensive use of artificial intelligence. Those who tuned into the gymnastics events might have noticed that, among the many cameras trailing the athletes, some were feeding an automated, AI-powered judging system under development. The Paris Games were also the site of an algorithmic video surveillance system put in place to monitor all activity in and around Olympic venues and ‘predict’ attacks. This uncritical embrace of AI in areas where human judgment is essential has raised alarm about the future.

AI has become a buzzword of late, a catch-all term for a very wide range of applications of artificial intelligence. Some see AI as a panacea for many problems. Some governments, too, are extolling the virtues of AI as heralding the ‘fourth industrial revolution.’ Companies across the world are adopting AI systems as substitutes for labour-intensive work. AI entered the public consciousness with the popularity of ChatGPT, bringing forth debates on the ethics of AI use as well as accelerated calls for its regulation. In Pakistan, the government is trying to catch up with the rest of the world on AI. It is now rushing to pass an AI policy in the hope that Pakistan too can ride the AI wave. While the impetus for many of the government’s AI initiatives, including the National Task Force on Artificial Intelligence formed in 2023, is grounded in a desperate need for economic growth, AI regulation will be incomplete without a human rights approach that focuses on equity, harm reduction and non-discrimination. This is particularly important because AI is being deployed in myriad ways that have implications for our individual and collective welfare.

The use of AI in law enforcement and policing has long been criticised, given that AI often relies on biased and flawed datasets to make determinations on issues such as sentencing, identifying suspects and facial recognition. A large body of research has demonstrated that predictive policing and facial recognition technologies relying on AI are discriminatory and disproportionately target marginalised communities, often exacerbating existing policing biases. Further, research published by WIRED and The Markup in 2023 found that the success rate of predictive policing systems was abysmally low, between 0.1 percent and 0.6 percent. The same holds for AI use by governments in service delivery such as healthcare, education and welfare provision. Governments are increasingly relying on AI to conduct preliminary screenings for the dispensation of services, often further excluding communities that are already marginalised in these systems.

The tip of the iceberg for many has been generative AI, particularly its application to generating text and synthetic media. This has heightened anxieties about the role of AI in exacerbating online misinformation and disinformation operations. Bills have been proposed across the world to stymie AI-generated content as it becomes increasingly cheap and easy to create realistic images and text.

The European Union adopted the AI Act in March 2024. Europe has long been at the forefront of technology regulation, passing the pivotal GDPR, which took effect in 2018, and the Digital Services Act in 2022. The EU AI Act applies a risk-based approach, classifying AI systems as posing ‘unacceptable risk,’ ‘high risk,’ ‘limited risk’ or ‘minimal risk.’ Anything classified as an unacceptable risk is prohibited outright. A sliding scale of restrictions and safeguards applies to the high- and limited-risk categories, while providers of minimal-risk AI are encouraged to adopt voluntary measures. Activists and digital rights advocates have said that the Act, passed after several amendments, is severely watered down, conceding ground to Big Tech lobbying.

In Pakistan, the government’s announcement last month that it plans to present the AI policy to the federal cabinet in August raised alarm regarding the process adopted to develop it, given the lack of consultation with digital rights and human rights organisations. A draft of the policy was shared on the Pakistan Telecommunication Authority’s website in May 2023; more recent drafts have not been made public. The publicly available draft is rife with ambiguities as to how AI ethics and human rights safeguards will be implemented. While the policy seeks to encourage the growth of AI development in Pakistan, it says nothing about restricting the internet shutdowns and throttling that have emerged as a major impediment for the IT industry. Important aspects such as the enormous energy consumption and environmental impact of AI also remain unaddressed, despite the fact that Pakistan suffers from electricity and fuel shortages.

The Global South is trailing in the AI race. The development of AI is dominated by companies such as Google and OpenAI, while the Global South has been providing the cheap, exploited labour required to label the datasets used to train AI. If Pakistan hopes to participate in AI development at a global scale, it must be mindful not to recreate the same exploitative structures that underpin AI development and deployment today. It should adopt a human rights approach that spans the design, development and application of AI.


The writer is a researcher and campaigner on human rights and digital rights issues