Prophecies of doom and media hullabaloo surrounding Artificial Intelligence (AI) have hit the headlines for years – but this time it seems different. It is who, and how many, are creating the noise.
A Goldman Sachs report in March sounded the alarm bells, warning that AI could replace the equivalent of 300 million jobs. Soon after, the world’s richest man, Elon Musk, got in on the act – along with 1,800 technology researchers and executives – calling for a six-month pause in the development of AI systems such as OpenAI’s GPT-4 (the latest version of the groundbreaking tech that powers ChatGPT). They warned that powerful digital minds were being created “that no-one – not even their creators – can understand, predict or reliably control”.
More recently, even more dire warnings have been sounded, with experts such as the heads of OpenAI and Google DeepMind postulating that AI could lead to the extinction of humanity. AI pioneer Yoshua Bengio also called for urgent action to protect the public.
Accessible & disruptive
The launch of OpenAI’s ChatGPT has been transformational. While chatbots have been around for some years, and have been considered for a whole range of industrial uses from logistics to medicine, this latest computer chatterbox has astounded commentators with its language ability. Aside from its conversational aplomb, it has shown its versatility with capabilities for writing music, debugging computer programs and completing high-level academic examinations.
Sean McMinn is the Director of the Center for Education Innovation at the Hong Kong University of Science and Technology (HKUST), where he manages a special teaching and learning development fund specific to AI called the Education and Generative AI (EDGE-AI) projects fund. He believes ChatGPT’s accessibility is the key game-changer.
“This is about accessibility and ease of use,” he says. “AI tools have been developing for a number of years, but they have never been too widely accessible. The user interface of ChatGPT [though] is extremely basic and anyone with a device and internet connection can interact with the Chatbot with ease. Because of this, and the perceived usefulness of the output it generates, it is not surprising that it has gained such attention worldwide. Add to that, the [AI] tools are more powerful than they were just a year ago.”
McMinn believes it is largely irrelevant that most people do not understand the technology that ChatGPT is built on, namely Large Language Models (LLMs). By far the most important consideration is that it is a very disruptive technology.
Academia alarm
The academic community has been particularly perturbed by the latest developments in AI. ChatGPT (GPT stands for Generative Pre-trained Transformer) can write introductions to scientific articles, tackle high-level computational mathematics and even complete college courses. The University of Hong Kong was initially so concerned that it implemented a blanket ban on its use; it has since softened its stance, greenlighting ChatGPT for staff (but not student) use until guidelines are introduced. Hong Kong Baptist University is also developing guidelines. One of McMinn’s main roles at HKUST is to explore technology-enhanced teaching and learning (TETL) methods and advise university management and faculty on strategies for implementing such initiatives. “While a lot is still unknown and speculative, many people believe that AI has the potential to enhance or transform how we teach and assess,” he says.
Given the ongoing speculation and uncertainty about the impact of generative AI on the education sector and workplace, he readily understands why some institutions may choose a cautious approach. The overriding issue concerns integrity. “How do we know what students submit is their own work? Do we have valid evidence that they are learning? Are the assessment tools teachers are using able to distinguish between AI output and student work?”
He believes the uncertainty surrounding the issue could have some positives – spurring into action institutions which have been slow to revamp how and what they teach and forcing a review of what knowledge and skills students need in today’s fast-changing world.
McMinn does voice concerns that banning the use of generative AI tools may be doing our students a disservice, and he advocates a shift of energy towards preparing students for an AI-driven world. “These tools are not going away; in fact, we will probably see faster advancement in the next few years,” he says. “We should be preparing students for the future of work where AI tools will be commonplace in the workforce.”
Workplace worries
Speculation has been rampant about which jobs will be affected or lost as a result of recent developments. The Goldman Sachs report said AI could assume about a quarter of the work now done by humans. Accountants, lawyers, doctors, journalists, data managers and professionals in other sectors are all thought to be at risk from an AI surge, and there has been speculation that Hollywood film studios might consider replacing writers with generative artificial intelligence.
“Many professions that rely on creativity will be disrupted, but I am not certain they will disappear completely. In some ways, AI tools have the affordance for new forms of creativity. I think there will still be a need for writers – just in a different form perhaps,” says McMinn.
He does believe ChatGPT has reduced the time needed for brainstorming and drafting. “Now, we can generate ideas and drafts, and spend more time on editing. So, knowing that process will change, we can start to focus on higher-order tasks that require more analytical thinking.”
There are various tools available in Hong Kong to assist with writing and brainstorming ideas – Poe.com, Bing Chat, and Google Bard via a VPN. But he warns: “Always check the content these tools generate. LLMs hallucinate and there is a lot of inherent bias that could be harmful to users.”
Google is slowly integrating AI into its search engine, much the same way Microsoft has integrated AI into Bing. “I think users will just interact differently with Google and Microsoft’s Bing search,” he says.
Some professionals may need to think about upskilling or reskilling, and acquiring expertise relevant to their career that AI cannot master. McMinn is confident jobs that do not exist now will emerge in the AI-driven near future. “AI still lacks contextual awareness or metacognitive tasks. Humans will still be important for critical and relative thinking, as well as problem-solving for tasks that are contextual,” he says.
Ethical issues
There are undoubtedly complex issues surrounding the widespread dissemination of AI. A US lawyer recently admitted to using AI for case research – only for it to invent fictitious case citations. Part of McMinn’s role at the university is to highlight the ethical implications of such technology and ensure stakeholders are aware of the risks.
One concern is information bias. New Scientist recently highlighted the implications of ChatGPT and GPT-4 being more familiar with books that appear online than with those that do not. “Always be cognisant that these tools are inherently biased,” says McMinn. “Much of the bias is inherited from the data sets used to train the AI tools, but also from the content policies developed by the companies. Most of these biases are unintentional, but they are there.”
Since misinformation can spread at a rapid rate, with serious societal implications, institutions and companies need to develop clear AI policies and guidelines for their stakeholders. Apple co-founder Steve Wozniak recently warned that AI may make scams harder to spot.
“I do also worry that these tools may reinforce echo chambers, especially with all the talk of personalising experiences with AI,” says McMinn. “We also need to be mindful that bad actors will use these tools to scam people.”
Being human
Educator Svetlana Chigaeva-Heddad has been exploring the capabilities and limitations of ChatGPT and other generative AI technology. “I cannot emphasise enough how important it is to think through the process of engaging with these tools and reflecting on what our engagement with these tools means for us as humans and for our agency in the future,” she says.
Pointing out the connection between language and thought, she adds: “Given that generative AI tools are based on human language, is it really impossible to imagine that there may be artificial general intelligence which will be able to think like humans and perform tasks at the highest levels that we currently assume to be unique to humans?”