Mind the chat

Research in the International Journal of Technological Learning, Innovation and Development considers the growing influence of the large language model (LLM) ChatGPT. This and related tools are often referred to colloquially as generative artificial intelligence (AI). The team has looked at how such tools might affect higher education.

Sami Mejri of Khalifa University, Moatsum Al Awida of Abu Dhabi University, Stavroula Kalogeras of Heriot-Watt University, Dubai, and Bayan Abu Shawar of Al Ain University, UAE, discuss some of the opportunities and risks faced by academic institutions where students and educators are using LLMs. This kind of software can be prompted to generate text with many of the characteristics of human writing and has already become a powerful tool in many areas. However, there are growing concerns about the impact of LLMs and related tools on academic integrity and the nature of education.

The team surveyed faculty, staff, and student groups and found a tension between the potential for AI-driven educational innovation and the need to safeguard the principles of academic integrity. Many respondents suggested that ChatGPT has the potential to reshape student engagement, creativity, and communication. However, respondents also saw risks in its use, not least reduced student effort and a rise in what might be considered academic dishonesty.

Such tools auto-generate coherent text based on the vast datasets used to train them, acting, one might ungenerously say, like a glorified autocomplete, and their widespread use might therefore undermine students' intellectual development. Conversely, it might be argued that, setting aside the questions of copyright and plagiarism surrounding the origins of those training datasets, using LLMs productively requires a level of creativity in devising prompts that trigger particular kinds of output. There is also a great need to validate and fact-check any output from such tools.

The researchers point to several implications of their findings. Higher education must adapt to the digital age and to the emergence of AI tools such as ChatGPT. These tools might transform not only how students learn but also how educators assess them. Traditional methods of assessment, such as essays and written exams, may need to be rethought as LLMs come to the fore.

As mentioned, prompting the likes of ChatGPT involves a creativity of its own, and critical thinking, a foundational skill of education, might in the long term be taught through the assessment and validation of LLM output in ways not previously possible with, say, published text. Educators might prompt their students to prompt an AI, but the learning and the critical thinking then come from interpreting and assessing the LLM's output and comparing it with how people might respond to the same prompts.

There is no obvious answer to where AI sits within education. We should recognise that AI and LLMs are tools, and all tools can be used for good or ill. Educators will need to acquire an overarching understanding of these new tools, just as they did with earlier technological developments, and then act as guides as well as instructors so that their students can learn to use the tools positively.

Mejri, S., Al Awida, M., Kalogeras, S. and Shawar, B.A. (2024) ‘ChatGPT: an emerging innovation or a threat to creativity and knowledge generation?’, Int. J. Technological Learning, Innovation and Development, Vol. 15, No. 4, pp.425–448.