DEVELOPING ACADEMIC INTEGRITY AND WRITTEN COMMUNICATION THROUGH CHATGPT-TYPE GENERATIVE MODELS
Keywords:
generative artificial intelligence, ChatGPT, academic integrity, academic writing, higher education
Abstract
The rapid development of generative artificial intelligence, especially language models such as ChatGPT, has created new opportunities and challenges for higher education. On the one hand, these tools can help students improve their academic writing by supporting idea development, language accuracy, and text organization. On the other hand, their use raises serious concerns about academic integrity, including plagiarism, authorship, and the misuse of automatically generated texts. This article explores how ChatGPT-type generative models can contribute to the development of academic integrity and students' written communication skills when used in a transparent and pedagogically guided way. The study is based on a qualitative analysis of existing academic literature and focuses on theoretical discussion rather than empirical data. The findings suggest that generative AI does not necessarily weaken academic honesty; when integrated responsibly, it can instead support ethical awareness, reflective writing practices, and the gradual improvement of academic writing skills. The article argues for a balanced approach that combines clear ethical guidelines, AI literacy, and instructional support.
References
UNESCO. (2023). Guidance for generative AI in education and research. UNESCO.
UNESCO. (2021). Recommendation on the Ethics of Artificial Intelligence. UNESCO.
International Center for Academic Integrity. (2021). The Fundamental Values of Academic Integrity (3rd ed.). International Center for Academic Integrity.
Kasneci, E., Sessler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., Gasser, U., Groh, G., Günnemann, S., Hüllermeier, E., Krusche, S., Kutyniok, G., Michaeli, T., Nerdel, C., Pfeffer, J., Poquet, O., Sailer, M., Schmidt, A., Seidel, T., Stadler, M., Weller, J., Kuhn, J., & Kasneci, G. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences, 103, 102274. https://doi.org/10.1016/j.lindif.2023.102274
Liang, W., Yuksekgonul, M., Mao, Y., Wu, E., & Zou, J. (2023). GPT detectors are biased against non-native English writers. Patterns, 4(7), 100779. https://doi.org/10.1016/j.patter.2023.100779
Bhullar, P. S., Joshi, M., & Chugh, R. (2024). ChatGPT in higher education: A synthesis of the literature and a future research agenda. Education and Information Technologies, 29, 21501–21522. https://doi.org/10.1007/s10639-024-12723-x
Klyshbekova, M., & Abbott, P. (2023). ChatGPT and assessment in higher education: A magic wand or a disruptor? Electronic Journal of e-Learning, 21(5), 343–356. https://doi.org/10.34190/ejel.21.5.3114
Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., Shamseer, L., Tetzlaff, J. M., Akl, E. A., Brennan, S. E., Chou, R., Glanville, J., Grimshaw, J. M., Hróbjartsson, A., Lalu, M. M., Li, T., Loder, E. W., Mayo-Wilson, E., McDonald, S., … Moher, D. (2021). The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ, 372, n71. https://doi.org/10.1136/bmj.n71
OpenAI. (2023). GPT-4 technical report. arXiv. https://arxiv.org/abs/2303.08774
Thorp, H. H. (2023). ChatGPT is fun, but not an author. Science, 379(6630), 313. https://doi.org/10.1126/science.adg7879
Hyland, K. (2019). Second language writing (3rd ed.). Cambridge University Press.
Swales, J. M. (1990). Genre analysis: English in academic and research settings. Cambridge University Press.
Flower, L., & Hayes, J. R. (1981). A cognitive process theory of writing. College Composition and Communication, 32(4), 365–387. https://doi.org/10.2307/356600
Biggs, J., & Tang, C. (2011). Teaching for quality learning at university (4th ed.). Open University Press.
Nicol, D., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199–218. https://doi.org/10.1080/03075070600572090
