We live in a world where we are increasingly dependent on artificial intelligence (AI) to answer our many questions and, in some cases, to write text for voice memos, college tests and even high school papers. This is perhaps inevitable. But is it safe? The answer may be in the negative.
According to a review of eight points of interaction with AI conducted this year by the Columbia Journalism Review, at least 60 per cent of the answers given by AI were inaccurate or blatantly incorrect. Other studies have come up with similar results, and some have suggested that as many as 80 per cent of answers may be inaccurate in one way or another. It is also known that AI cannot pick up nuance the way humans can; it simply churns out information and presents it in a form now recognised by many as single-dimensional in its approach.
So far, humans are proving to be more intelligent than AI. In many countries, for example, school and college teachers have been trained, or have themselves picked up the means, to spot AI-generated texts, given that the same algorithms are used again and again. One example is the frequent framing of answers around why 'x is better than y', with the text continuing from that point on. There are, of course, other and possibly more serious problems emerging with AI and its use.
In the first place, AI is being pushed by social media at an extremely rapid pace. Facebook, Instagram, YouTube and other platforms come up each day with new advertisements and new suggestions on how AI should be used. There are also scores of advertisements on social media and in other forums on how to enroll for classes in AI, especially aimed at seniors or those over 40 who may not be so familiar with AI and its usage. In other words, those who have so far escaped this method of information gathering are being taught how to use it.
The other problem lies in the fact that multiple companies are at least considering, or already planning, the use of AI to assess the efficiency of workers and to test their abilities. There are even suggestions that it may be used as a means to hire or dismiss people. This in itself is extremely dangerous. In the first place, AI misses the subtleties of the human mind. It cannot, for instance, judge who is ethical and who is not. It may rank a worker who takes longer to produce high-quality material in, say, writing, scientific work or mathematical equations lower than somebody who produces low-quality work at a faster pace. This raises all kinds of issues about the future and how humans will be judged by a technology that threatens to overtake them.
Geoffrey Hinton, widely regarded as the father of AI for his work at Google, has joined a group of other AI researchers in a campaign against his own creation. These scientists and programmers warn that AI may be at least as dangerous as the nuclear bomb. They say that, by picking up information generated by other AI tools, AI technology and its creations may overtake those produced by humans. This is dangerous in many ways. It undermines human creativity and talent, the delicate nuances and arts that human minds are able to put together because of their versatility, and their ability to produce poetry of the standard of the classical poets, as well as that of so many modern poets and writers in every genre that exists around the world.
There are other things we need to be wary of as well. In Pakistan, researchers have pointed out that AI is contributing to fake news of every kind; it is dangerous in that it can falsify information far faster than humans can, putting out fake news that can even be used to bring blasphemy charges against people. We all know that AI can replicate voices extremely closely, at a level not discernible to the ordinary listener or sometimes even to experts. This is obviously extremely problematic, especially in the environment we live in today.
We have also seen how US President Donald Trump has produced cartoons through the use of AI depicting the arrest of young people, notably Hispanics and other non-white persons, by ICE teams. These cartoons replicate those created by liberal, progressive cartoonists in other countries. This raises questions of plagiarism, among other issues, and should warn us that AI is a tool we need to be wary of. Its high consumption of energy and other inputs is a different topic, but something else we should be on the alert for in today's world.
In other words, AI is not as safe or as useful as some of us would like to think. The open encouragement of plagiarism and falsified essays in the advertisements put out by some AI companies is essentially immoral. Students and schoolchildren need to be encouraged to use their own minds and put forward their own thoughts. AI takes away this ability and threatens to create a robotic world in which everyone is uniform and can access only specific types of information, based on the phrases they type out and ask AI to turn into essays or other material for them.
There is, of course, a possibility that AI will simply self-destruct by picking up material produced by one AI system and feeding it into others, creating an endless stream of repetitive material. In some cases, this is already happening. But as humans, we need to find a way to rein in these techniques before AI consumes us and takes away all that we have learned over years of human innovation.
Somewhere, the human mind needs to live on. The scientists warning about this need to be understood, or at the very least listened to. They have already written letters to the heads of major tech companies, warning that this iteration of technology is pushing unneeded AI at us. As intelligent beings, we need to watch out for this and protect ourselves against a technology that in some ways threatens all of us and our future generations as we become more dependent on what can be an extremely problematic tool.
The writer is a freelance columnist and former newspaper editor. She can be reached at: kamilahyat@hotmail.com