Table 1. Overview of students’ misconceptions about ChatGPT, based on qualitative analysis of responses
1. Understanding of Information Sources
Although more than half of the students (n=183) recognized that ChatGPT generates text based on internet data, many showed confusion about where this information comes from. A total of 115 students did not fully understand the diversity of sources or the model’s inability to verify accuracy.
What this means for teaching: Students may overestimate the reliability of AI-generated content. This highlights the need to strengthen source evaluation and verification skills when using AI.
2. Understanding of AI Outputs
Students valued the alignment between AI output and the task most highly (n=81), and some demonstrated awareness of potential inaccuracies (n=62). However, 47 students expressed uncritical trust, treating AI output as authoritative. Others viewed AI primarily as a tool for rewriting (n=45) or summarizing information (n=34).
What this means for teaching: Students are already developing partial critical awareness, but this needs to be extended. AI should be framed not just as an answer generator, but as a tool for supporting thinking, drafting, and revision.
3. Understanding of How AI Works
A majority of students (n=64) had only a vague association with the term “algorithm,” while only a minority (n=33) understood that the core mechanism is a probability‑based text‑prediction process. Anthropomorphizing AI—treating it as if it “thinks” or “knows”—was a common misconception.
What this means for teaching: AI literacy should go beyond surface-level explanations. Students benefit from a basic conceptual understanding of how large language models generate text, helping them avoid attributing human-like reasoning to AI.
4. Patterns of AI Use
Only a few students (n=7) spontaneously mentioned the need to verify information sources, revealing a clear tension between efficiency and critical verification. Many prioritized efficiency over accuracy, indicating a gap between task completion and critical engagement.
What this means for teaching: Educational practice should emphasize that AI serves as an aid for idea generation and text refinement, not a substitute for independent thinking. Students must be trained to habitually verify the sources and factual accuracy of AI outputs.
Conclusion
Overall, the study shows that students’ misunderstandings span four key areas—information sources, outputs, mechanisms, and patterns of use—and these directly affect their ability to use AI critically. Based on these findings, several directions for teaching emerge:
- Strengthen information literacy by helping students evaluate sources, question outputs, and recognize the limits of AI-generated content
- Guide more intentional use of AI, positioning it as a tool for supporting thinking rather than completing tasks
- Build a basic understanding of how AI works, helping students move beyond anthropomorphic assumptions
- Embed verification and reflection into assignments, making critical evaluation a routine part of learning
AI is not a shortcut to learning but a tool that must be properly understood and used with care. Only by helping students open the "black box" of AI in the classroom can educators enable them to harness these technologies to learn more efficiently while preserving critical thinking. Artificial intelligence should serve as a facilitator of learning, not a substitute for thinking.
Reference: Bråten, I., Latini, N., & Strømsø, H. I. (2026). Exploring students’ (mis)conceptions about ChatGPT-generated text: A qualitative study. Education and Information Technologies. https://doi.org/10.1007/s10639-026-13938-w
Faculty Spotlights
WKU Faculty Share Practices on AI in Teaching and Assessment at UNNC Forum
Three faculty members from Wenzhou-Kean University were invited to present at the Technology-enhanced Teaching Forum 2026, held on 10 April at the University of Nottingham Ningbo China (UNNC).
The forum, themed The Assessment Revolution: From Pilots to Practice in the AI Era, brought together educators exploring how generative AI is reshaping teaching and assessment. Representing Wenzhou-Kean University, our faculty shared their experiences and insights on integrating AI into teaching practice, contributing to discussions on moving from experimentation toward sustainable implementation.
Presenter: Jie Zhao
Jie Zhao, a student of Svetlana Vikhnevich, presented the work on her behalf due to a scheduling conflict. Drawing on the study Buddy AI: Enhancing Freshman ESL Reading and Vocabulary through Interactive Chatbots, the presentation explored an AI-powered chatbot that serves as a personalized “study buddy” for freshman English as a Second Language (ESL) learners, enhances vocabulary retention and reading comprehension, and offers practical guidance on designing custom AI chatbot personas for classroom use.
The participation of our faculty in the forum provided an opportunity to engage with current conversations around AI in teaching and assessment.
As AI continues to evolve, these exchanges highlight the importance of exploring how we can design more effective pedagogical approaches and assessment practices. We also welcome colleagues to continue sharing their experiences and perspectives, whether through formal events or informal conversations, as we collectively navigate the opportunities and challenges of AI in education.