Dive Brief:
- As students begin using ChatGPT for homework, assignments and tutoring, educators should also speak with them about the limitations of artificial intelligence.
- Wikipedia’s rise offers a good analogy, said Danny Liu, associate professor and member of the Educational Innovation team at the University of Sydney in Australia. When the site first appeared, he said, teachers explained to students that using Wikipedia as a scholarly source was not appropriate but that it still held value.
- “Teachers helped students to develop their information and digital literacy, for example, around Wikipedia, so that they would take a critical lens to things they see online,” Liu said. “A similar approach needs to be taken with generative AI.”
Dive Insight:
The adoption of AI within education is well underway. While some districts and schools are looking to ban or limit the use of AI tools among students, others are considering ways to use large language models such as ChatGPT with classes.
Liu believes one way educators can help students understand the effectiveness and limitations of AI is to use it with them. And in particular, he said, teachers should craft lessons around AI specific to the subject matter.
“It's also critical to show students generative AI in action, in ways that are relevant to the discipline,” he said. “This way, teachers can demonstrate the benefits of AI whilst being able to highlight its shortcomings.”
A key drawback of ChatGPT, for example, is that it produces "hallucinations," or incorrect details and data. The tool also doesn't provide footnotes or references, which Wikipedia does, making it hard to trace information back to its source. That's another reason educators want students to be careful about relying on AI-generated responses. But Liu believes AI tools will improve.
“As generative AI gets more connections with the live internet, with scholarly databases, etc., new tools powered by generative AI will be able to draw from not just the neural network of the underlying large language model, but also from real and reliable sources,” he said. “The shortcomings of hallucination will likely be reduced in the future because of these advances and more.”