Addressing Academic Misconduct and GenAI at Bournemouth University

1. Executive Summary

This report examines the efforts made in the United Kingdom, with a particular focus on Bournemouth University (BU), to address academic misconduct related to the use of generative artificial intelligence (GenAI). The rapid advancement of GenAI tools, such as ChatGPT, has introduced significant challenges to academic integrity in higher education. UK universities, including Bournemouth University, have been proactive in implementing policies, providing resources, and fostering discussions on the ethical use of GenAI. The report also highlights the contributions of Dr. Steph Allen, a Principal Academic in Learning Development and Academic Integrity at Bournemouth University, who has been actively involved in addressing these challenges.

2. Introduction

The integration of generative artificial intelligence (GenAI) tools into academic environments has introduced significant challenges to maintaining academic integrity. UK universities have been at the forefront of addressing these challenges, given the rapid adoption of GenAI by students and the associated risks of academic misconduct. This report investigates the specific efforts made by Bournemouth University and other UK institutions to combat academic misconduct related to GenAI use, focusing on the period from 2023 to 2025.

3. Bournemouth University’s Approach to GenAI and Academic Misconduct

3.1 Policy Framework and Guidelines

Bournemouth University has taken a structured approach to address academic misconduct involving generative AI. Key elements of its policy framework include:

  1. Explicit Prohibition and Acknowledgment Requirements:

    • BU has declared that using generative AI tools, such as ChatGPT, to complete assignments without proper acknowledgment constitutes an academic offence under its “Policy and Procedure for Academic Offences” (6H plagiarism) [1].
    • Students are required to acknowledge the use of AI tools by naming the tool, describing how it was used, and detailing any modifications made to the AI-generated content. This includes specifying prompts used and how the output was integrated into their work [2].
  2. Consequences for Misconduct:

    • Failure to comply with these rules can lead to severe penalties, including being required to repeat a unit, receiving a zero or capped mark, or even expulsion [3].
  3. Turnitin AI Detection Tool:

    • While Turnitin has introduced AI detection capabilities, BU has decided not to use this functionality at present. The decision was based on concerns about the tool’s reliability, lack of detailed guidance from Turnitin, and the need for a more considered approach to integrating such tools into BU’s policies [4] [5].
  4. Policy Updates:

    • BU regularly reviews its academic integrity policies to ensure they remain relevant and effective. Recent changes include removing the 30% similarity parameter from Turnitin reports, leaving the determination of academic misconduct to the marker’s academic judgment [6].

3.2 Educational Initiatives and Resources

Bournemouth University has invested in educational initiatives to promote understanding and ethical use of generative AI among students and staff:

  1. Generative AI Resource Hub:

    • BU has established a central repository of knowledge, resources, and guidance on generative AI in higher education. This hub includes best practices, ethical considerations, and strategies for integrating AI into teaching and learning [7] [8].
  2. Workshops and Training:

    • A series of workshops and drop-in sessions have been organized to support staff and students in understanding and using generative AI responsibly. Topics include “Hacking GenAI,” ethics and integrity, and inclusivity [9].
    • Weekly AI drop-ins provide opportunities for academics to ask questions and receive guidance on integrating AI into their teaching practices [10].
  3. Student AI Literacy:

    • BU has developed a “Student AI Literacy” series of exercises designed to improve students’ skills in prompting, evaluating, and ethically using AI tools. These exercises are conducted in classroom environments to foster hands-on learning [11] [12].
  4. Authentic Assessment Practices:

    • BU is rethinking assessment strategies to create authentic learning opportunities that integrate AI use while maintaining academic integrity. This includes managing group work, assessment equivalencies, and GenAI literacy [13] [14].

3.3 Collaborative Efforts and Research

Bournemouth University has engaged in collaborative efforts and research to address the challenges posed by generative AI:

  1. Collaboration with Turnitin:

    • Although BU has not adopted Turnitin’s AI detection tool, it continues to assess the tool’s functionality and its alignment with BU’s policies on academic offences [15] [16].
  2. Research on AI and Academic Integrity:

    • BU researchers have explored the implications of generative AI on academic integrity, including the need for better quality control, standards, and regulation of large language models [17].
  3. Engagement with Broader Academic Community:

    • BU has participated in discussions and initiatives to promote academic integrity in the age of AI. This includes hosting webinars and workshops on topics such as “Teaching for Integrity in the Age of AI” and “Future-proofing Academic Integrity” [18] [19].

3.4 Challenges and Considerations

Despite its proactive measures, BU faces several challenges in addressing academic misconduct related to generative AI:

  1. Reliability of AI Detection Tools:

    • AI detection tools, including Turnitin’s, have been criticized for their lack of reliability and potential bias against non-native English speakers [20].
  2. Balancing Innovation and Integrity:

    • BU aims to strike a balance between empowering creativity and ensuring academic integrity. This involves promoting the ethical use of AI while preventing its misuse [21].
  3. Evolving Nature of Generative AI:

    • The rapid development of generative AI technologies necessitates continuous updates to policies, resources, and educational initiatives [22].

4. Broader UK Context: Case Studies and Institutional Responses

4.1 University of Nottingham

The University of Nottingham has taken a proactive approach to address GenAI-related academic misconduct. The institution has trained academic markers to detect changes in students’ writing styles, which may indicate the use of AI tools [23]. Additionally, the university has emphasized the importance of regularly communicating its academic misconduct policies to students, including updates reflecting the evolution of AI technology [24] [25]. Nottingham has also opted out of using Turnitin’s AI detection tool due to concerns about its reliability.

4.2 University of Oxford

Oxford University has adopted a nuanced approach by incorporating GenAI into its assessment methods while maintaining strict guidelines to prevent misuse. Professor Steve New of Oxford has encouraged students to use AI tools thoughtfully and critically to enhance their essays, provided they fact-check and make significant edits to AI-generated content [26]. Students are required to submit an “AI statement” alongside their assignments, detailing how they used GenAI tools [27]. This policy aims to ensure transparency and uphold academic integrity.

4.3 University of Cambridge

Cambridge University has warned students against over-reliance on GenAI tools, emphasizing the importance of developing critical thinking skills. While the use of GenAI has not been banned, the university has made it clear that submitting AI-generated content as original work constitutes academic misconduct [28]. Cambridge has also explored the use of GenAI as a collaborative coach and for time management, reflecting a balanced approach to its integration [29].

4.4 Glasgow Caledonian University

Glasgow Caledonian University has experienced a significant increase in academic misconduct cases, with numbers rising from 422 in 2020/21 to 742 in 2021/22 and 711 in 2022/23 [30]. The university has responded by working with experts to explore how GenAI can be used as a learning tool while implementing measures to detect and deter its misuse [31].

4.5 Edinburgh Napier University

The Business School at Edinburgh Napier University reported 1,395 cases of academic misconduct over two years, highlighting the scale of the issue [32]. The university has emphasized the need for robust policies and inclusive assessment processes to mitigate the risks associated with GenAI.

5. Challenges in Detecting and Addressing GenAI Misuse

One of the primary challenges faced by UK universities is the difficulty in reliably detecting AI-generated content. Studies have shown that AI detection tools, such as Turnitin’s AI writing detection capability, have accuracy rates ranging from 33% to 81%, depending on the provider and methodology [33]. This has led some institutions to rely on oral examinations (vivas) to verify the originality of students’ work. For example, in 2022/23, 825 out of 952 breaches investigated at UK universities required viva examinations [34].

Additionally, rapid advances in GenAI technology have made it increasingly difficult to distinguish human-written from AI-generated content: in one study, ChatGPT-generated answers went undetected in 94% of cases and often outperformed real student submissions in assessments [35].
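The reliability figures above understate the practical problem: even a detector with a high headline accuracy produces many false accusations when most submissions are honest, because of the base-rate effect. The sketch below illustrates this with Bayes' rule; the prevalence, sensitivity, and false-positive values are hypothetical numbers chosen for illustration, not figures from any cited study or tool.

```python
def flag_precision(prevalence, sensitivity, false_positive_rate):
    """Probability that a flagged submission really is AI-written (Bayes' rule).

    prevalence          - fraction of all submissions that are AI-written
    sensitivity         - fraction of AI-written work the detector flags
    false_positive_rate - fraction of genuine work wrongly flagged
    """
    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * false_positive_rate
    return true_positives / (true_positives + false_positives)

# Hypothetical scenario: 10% of submissions are AI-written, the detector
# catches 80% of them, but it also wrongly flags 5% of genuine work.
print(round(flag_precision(0.10, 0.80, 0.05), 2))  # prints 0.64
```

Under these assumptions, roughly one in three flagged students would be innocent, which is why institutions such as BU pair (or replace) detector output with human academic judgment and viva examinations.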

6. Collaborative Efforts and Shared Resources

UK universities have recognized the need for collaboration to address the challenges posed by GenAI. The Russell Group, comprising 24 leading research-intensive universities, has developed guiding principles for the ethical use of GenAI in education. These principles emphasize the importance of AI literacy, adapting teaching and assessment methods, and ensuring academic rigor and integrity [36] [37]. The group has also encouraged universities to share best practices and work collaboratively to refine their approaches to GenAI [38].

6.1 Russell Group’s Five Guiding Principles

In July 2023, the Russell Group released five guiding principles aimed at ensuring the ethical and responsible use of generative AI in education. These principles emphasize:

  1. Supporting students and staff to become AI-literate [39] [37].
  2. Equipping staff to help students use generative AI tools effectively and appropriately [37] [40].
  3. Adapting teaching and assessment to incorporate the ethical use of generative AI while ensuring academic integrity [37] [41].
  4. Sharing best practices as the technology evolves [42] [43] [44].
  5. Evaluating the benefits and drawbacks of generative AI with contextual sensitivity [45].

These principles were developed collaboratively with input from AI experts and member universities, reflecting a coordinated approach to addressing the challenges of generative AI [46] [47].

6.2 Collaboration with Dutch Universities

In February 2024, the Russell Group hosted representatives from 14 Dutch research-intensive universities to discuss AI in education. This workshop facilitated knowledge exchange on best practices in assessments, inclusivity, and innovation in teaching and learning. The event underscored the importance of international collaboration in addressing the evolving challenges of generative AI [48] [49].

6.3 Interdisciplinary and Inter-Sectoral Collaboration

UK universities have recognized the need for interdisciplinary and inter-sectoral collaboration to address the complexities of generative AI. This includes partnerships between faculties of computer science, ethics, education, and law, as well as engagement with industry leaders and professional bodies [50]. Such collaborations aim to develop robust ethical frameworks, innovative assessment designs, and inclusive educational environments.

6.4 Shared Resources and Training Initiatives

UK universities have prioritized the development of AI literacy programs for both staff and students. These programs include workshops, seminars, and online courses designed to raise awareness about the ethical use of generative AI and its potential risks [51] [52]. For example, Durham University has created a micro-course exploring the appropriate use of AI tools [53].

Several universities have developed toolkits and frameworks to guide the ethical use of generative AI.

The Russell Group has committed to sharing best practices across its member universities to ensure consistency and coordination in addressing generative AI challenges. This includes the dissemination of resources, guidelines, and case studies to support the ethical integration of AI in education [42] [43] [44].

7. Recent Policies and Guidelines on Academic Misconduct (2023-2025)

7.2 Policies on Generative AI Use

7.3 Guidelines for Ethical AI Use

Penalties for academic misconduct involving AI vary across institutions but generally align with the severity of the offence.

7.5 Detection and Prevention Measures

7.6 Collaborative Efforts and Sector-Wide Initiatives

7.7 Challenges and Future Directions

8. Contributions of Dr. Steph Allen

Dr. Steph Allen, a Principal Academic in Learning Development and Academic Integrity at Bournemouth University, has been actively involved in addressing the challenges posed by academic misconduct, particularly in the context of generative artificial intelligence (GenAI). Her work spans academic publications, conference presentations, and initiatives aimed at fostering academic integrity in the evolving educational landscape.

8.1 Academic Publications

Dr. Steph Allen has contributed to several academic publications that explore the intersection of academic misconduct and generative AI. These include:

  1. “Managing the Mutations: Academic Misconduct in Australia, New Zealand, and the UK”
    This publication examines how academic misconduct has evolved in response to technological advancements, including generative AI. It highlights the need for consistent policies and preventative education to address the mutations of academic misconduct across different regions [81] [82].

  2. “Academic Integrity: Tales of the Unexpected”
    This work, published by Bournemouth University, delves into unexpected challenges in maintaining academic integrity, likely including the implications of generative AI [83].

  3. “Artificial Intelligence: How Have Learning Developers Engaged?”
    This publication explores how learning developers have responded to the challenges and opportunities presented by AI, including its impact on academic integrity [84].

  4. “Academic Integrity and Artificial Intelligence: A Student-Led R/Evolution”
    This publication discusses the role of students in shaping the discourse around academic integrity in the age of AI, emphasizing the importance of student engagement in ethical AI use [85].

  5. “AI Assistance or AI Replacement? An Academic Integrity Conversation”
    This work addresses the ethical dilemmas surrounding the use of AI in academic settings, particularly the distinction between AI as a tool for assistance versus a replacement for original work [86].

  6. “Enhancing Academic Integrity: Avoiding Academic Offences During COVID-19”
    Although focused on the pandemic, this publication likely provides insights into how rapid technological adoption, including AI, has influenced academic misconduct [87].

  7. “AI-Text Detection: The Challenges for Academic Integrity”
    This publication specifically addresses the limitations and challenges of AI detection tools in maintaining academic integrity, a critical issue in the context of generative AI [88].

  8. “Generative AI Has Been the Catalyst for Assessment Change but What Impact Has This Had on Education Fraud, If Any?”
    This work explores how generative AI has necessitated changes in assessment design and its implications for academic misconduct [89].

8.2 Presentations and Conferences

Dr. Allen has also been an active participant in conferences and forums discussing academic integrity and generative AI:

  1. International Student Round Table on Artificial Intelligence in the Digital Society (2023)
    Dr. Allen co-organized and co-hosted this event, which tackled monumental topics at the intersection of AI and ethics, including academic integrity [90].

  2. AI and Academic Integrity: What Next? (2024)
    Dr. Allen was a confirmed speaker at this practice-led conference, which explored the challenges generative AI presents to academic integrity. The conference focused on identifying AI-led academic misconduct, documenting and investigating breaches, and rethinking assessment approaches in the age of AI [91].

  3. SRHE Conference 2021: Academic Integrity: Campaigning Now and Beyond
    Although predating the widespread adoption of generative AI, this presentation underscores Dr. Allen’s long-standing commitment to academic integrity [92].

  4. ASCILITE Panel on the Perils of Artificial Intelligence (2024)
    Dr. Allen participated in this panel, which discussed the ethical and regulatory concerns surrounding AI in higher education, including its impact on academic integrity [93].

  5. ICAI Annual Conference (2024)
    Dr. Allen contributed to discussions on academic integrity in the era of generative AI, emphasizing the need for ethical guidelines and preventative strategies [94].

8.3 Initiatives and Contributions

Dr. Allen has been involved in several initiatives aimed at addressing academic misconduct in the context of generative AI:

  1. Promoting Preventative Education
    Dr. Allen advocates for preventative education as a key strategy to combat academic misconduct. Her work emphasizes the importance of educating both staff and students about ethical AI use and the risks of misconduct [82].

  2. Developing Ethical Guidelines
    Dr. Allen has contributed to the development of ethical guidelines for the use of generative AI in education. These guidelines aim to balance the benefits of AI with the need to uphold academic integrity [86] [89].

  3. Supporting Assessment Redesign
    Recognizing the challenges posed by generative AI, Dr. Allen has been involved in initiatives to redesign assessments to promote higher-order thinking and reduce opportunities for misconduct [89].

  4. Fostering Collaborative Efforts
    Dr. Allen has engaged in collaborative efforts with other educators and institutions to share best practices and develop effective strategies for maintaining academic integrity in the age of AI [91].

8.4 Key Themes in Dr. Allen’s Work

Dr. Allen’s contributions reflect several recurring themes: preventative education as the first line of defence against misconduct, the development of ethical guidelines for GenAI use, the redesign of assessments to promote higher-order thinking, and collaboration across institutions to share effective practice.

9. Recommendations and Best Practices

Based on the case studies, reports, and collaborative efforts examined, the following recommendations have emerged as effective strategies for addressing GenAI-related academic misconduct:

  1. Transparency and Communication: Clearly communicate policies on the use of GenAI to students and staff, ensuring that expectations are well understood [95].

  2. AI Literacy Training: Provide training for students and staff to use GenAI tools ethically and effectively [96] [97].

  3. Authentic Assessments: Design assessments that minimize opportunities for misconduct, such as in-class assignments, two-stage exams, and group projects [98].

  4. AI Statements: Require students to disclose their use of GenAI tools in their assignments, promoting transparency and accountability [99] [27].

  5. Collaboration and Research: Foster collaboration among institutions to share resources, develop robust policies, and conduct research on the ethical use of GenAI.

  6. Proactive Approaches: Emphasize prevention over detection by implementing proactive strategies such as ethical education, assessment redesign, and fostering a culture of integrity.

  7. Balanced Use of AI Detection Tools: Recognize the limitations of AI detection tools and adopt a balanced approach that combines technology with human judgment.

  8. Student Engagement: Involve students in discussions about ethical AI use and the development of academic integrity policies.

  9. Continuous Policy Review: Regularly review and update academic integrity policies to keep pace with technological advancements and emerging challenges.

  10. Interdisciplinary Collaboration: Encourage partnerships between different academic disciplines to develop comprehensive approaches to academic integrity in the age of AI.

10. Conclusion

The rise of generative AI has presented significant challenges to academic integrity at UK universities. While institutions have adopted a range of strategies to meet these challenges, their effectiveness varies. Bournemouth University has demonstrated a comprehensive, forward-thinking approach: by implementing clear policies, providing educational resources, and fostering collaboration, it aims to uphold academic integrity while embracing the opportunities that generative AI offers.

Collaborative efforts, such as those led by the Russell Group, and the development of innovative assessment methods offer promising pathways to uphold academic standards in the age of AI. The contributions of researchers like Dr. Steph Allen have been instrumental in shaping the discourse around academic integrity and generative AI use in higher education.

However, ongoing research and adaptation will be essential to navigate the evolving landscape of GenAI in education. Universities must continue to balance the prevention of academic misconduct with the promotion of academic freedom and innovation, fostering a culture of integrity that embraces the potential of AI while maintaining the core values of higher education.

As the field continues to evolve, it is clear that a multifaceted approach involving policy development, educational initiatives, collaborative research, and ongoing dialogue will be crucial in addressing the challenges posed by generative AI in academic settings. The efforts made by Bournemouth University and other UK institutions serve as valuable models for the global higher education community in navigating this complex and rapidly changing landscape.

References

  1. Bournemouth University declares using ChatGPT to help with assignments an academic offence. https://thetab.com
  2. Using Artificial Intelligence to support assignments or assessments | Bournemouth University. https://www.bournemouth.ac.uk
  3. Bournemouth University declares using ChatGPT to help with assignments an academic offence. https://thetab.com
  4. Turnitin’s new AI writing detection capability. https://microsites.bournemouth.ac.uk
  5. Turnitin’s new AI writing detection capability. https://microsites.bournemouth.ac.uk
  6. Important changes to our policy around academic offences | Bournemouth University. https://www.bournemouth.ac.uk
  7. Embracing Generative AI: Navigating the Future of Education at Bournemouth University. https://microsites.bournemouth.ac.uk
  8. Engaging Assessment Practices in the Era of Generative AI. https://microsites.bournemouth.ac.uk
  9. Generative AI workshops – Wed 29th Nov. https://microsites.bournemouth.ac.uk
  10. Embracing Generative AI: Navigating the Future of Education at Bournemouth University. https://microsites.bournemouth.ac.uk
  11. FLIE. https://microsites.bournemouth.ac.uk
  12. News. https://microsites.bournemouth.ac.uk
  13. Embracing Generative AI: Navigating the Future of Education at Bournemouth University. https://microsites.bournemouth.ac.uk
  14. AI and Assessment and Feedback workshops. https://microsites.bournemouth.ac.uk
  15. Turnitin’s new AI writing detection capability. https://microsites.bournemouth.ac.uk
  16. Turnitin’s new AI writing detection capability. https://microsites.bournemouth.ac.uk
  17. BU Research Blog | artificial intelligence | Bournemouth University. https://blogs.bournemouth.ac.uk
  18. Events and workshops. https://microsites.bournemouth.ac.uk
  19. Centre for Fusion Learning Innovation and Excellence. https://microsites.bournemouth.ac.uk
  20. Guidance on AI Detection and Why We’re Disabling Turnitin’s AI Detector. https://www.vanderbilt.edu
  21. Embracing Generative AI: Navigating the Future of Education at Bournemouth University. https://microsites.bournemouth.ac.uk
  22. Risks of AI-enabled academic misconduct flagged in new study. https://www.pinsentmasons.com
  23. UK universities warned to ‘stress-test’ assessments as 92% of students use AI | Universities | The Guardian. https://www.theguardian.com
  24. ChatGPT: student AI cheating cases soar at UK universities. https://www.timeshighereducation.com
  25. These are the Russell Group unis that have banned students from using ChatGPT. https://thetab.com
  26. Academic misconduct and AI software. https://exchange.nottingham.ac.uk
  27. Academic misconduct and AI software. https://exchange.nottingham.ac.uk
  28. The Boar. https://theboar.org
  29. The Boar. https://theboar.org
  30. The Boar. https://theboar.org
  31. ‘I received a first but it felt tainted and undeserved’: inside the university AI cheating crisis | Artificial intelligence (AI) | The Guardian. https://www.theguardian.com
  32. The financial impact of AI on institutions through breaches of academic integrity - HEPI. https://www.hepi.ac.uk
  33. These are the Russell Group unis that have banned students from using ChatGPT. https://thetab.com
  34. The financial impact of AI on institutions through breaches of academic integrity - HEPI. https://www.hepi.ac.uk
  35. Academic Integrity in the Age of AI | EDUCAUSE Review. https://er.educause.edu
  36. The financial impact of AI on institutions through breaches of academic integrity - HEPI. https://www.hepi.ac.uk
  37. Risks of AI-enabled academic misconduct flagged in new study. https://www.pinsentmasons.com
  38. The UK’s top universities reached an agreement on how to deal with generative AI. https://qz.com
  39. On the Russell Group principles on AI in education. https://medium.com
  40. On the Russell Group principles on AI in education. https://medium.com
  41. Maintaining academic integrity in the AI era. https://cte.ku.edu
  42. Russell Group Universities issue guidelines on how students can use ChatGPT in their studies | Regulatory Blog | Kingsley Napley. https://www.kingsleynapley.co.uk
  43. New UK university principles promote AI literacy and integrity. https://www.universityworldnews.com
  44. Maintaining academic integrity in the AI era. https://cte.ku.edu
  45. Maintaining academic integrity in the AI era. https://cte.ku.edu
  46. University creates ‘AI’ category for academic misconduct after rise in cases. https://www.varsity.co.uk
  47. University creates ‘AI’ category for academic misconduct after rise in cases. https://www.varsity.co.uk
  48. University creates ‘AI’ category for academic misconduct after rise in cases. https://www.varsity.co.uk
  49. ChatGPT: student AI cheating cases soar at UK universities. https://www.timeshighereducation.com
  50. A comprehensive AI policy education framework for university teaching and learning - International Journal of Educational Technology in Higher Education. https://educationaltechnologyjournal.springeropen.com
  51. University creates ‘AI’ category for academic misconduct after rise in cases. https://www.varsity.co.uk
  52. Academic misconduct. https://www.westminster.ac.uk
  53. Academic Misconduct. https://student.londonmet.ac.uk
  54. Academic Misconduct. https://student.londonmet.ac.uk
  55. UK universities draw up guiding principles on generative AI | Artificial intelligence (AI) | The Guardian. https://www.theguardian.com
  56. Academic Misconduct. https://student.londonmet.ac.uk
  57. Academic Misconduct. https://www.salford.ac.uk
  58. Section 9: Student Academic Misconduct Procedure. https://www.ucl.ac.uk
  59. Academic Misconduct Penalties. https://www.kent.ac.uk
  60. The financial impact of AI on institutions through breaches of academic integrity - HEPI. https://www.hepi.ac.uk
  61. UK universities warned to ‘stress-test’ assessments as 92% of students use AI | Universities | The Guardian. https://www.theguardian.com
  62. Risks of AI-enabled academic misconduct flagged in new study. https://www.pinsentmasons.com
  63. Academic Integrity Charter for UK Higher Education. https://www.qaa.ac.uk
  64. Academic Integrity in the United Kingdom: The Quality Assurance Agency’s National Approach. https://link.springer.com
  65. Academic integrity. https://www.qaa.ac.uk
  66. The financial impact of AI on institutions through breaches of academic integrity - HEPI. https://www.hepi.ac.uk
  67. AI & Academic Integrity. https://teaching.cornell.edu
  68. Risks of AI-enabled academic misconduct flagged in new study. https://www.pinsentmasons.com
  69. Reassessing academic integrity in the age of AI: A systematic literature review on AI and academic integrity - ScienceDirect. https://www.sciencedirect.com
  70. Reassessing academic integrity in the age of AI: A systematic literature review on AI and academic integrity - ScienceDirect. https://www.sciencedirect.com
  71. Dr Steph Allen. https://staffprofiles.bournemouth.ac.uk
  72. Stephen Tee, Steph Allen, Jane Mills & Melanie Birks, Managing the mutations: academic misconduct Australia, New Zealand, and the UK - PhilPapers. https://philpapers.org
  73. Dr Steph Allen. https://staffprofiles.bournemouth.ac.uk
  74. Dr Steph Allen. https://staffprofiles.bournemouth.ac.uk
  75. Dr Steph Allen. https://staffprofiles.bournemouth.ac.uk
  76. Dr Steph Allen. https://staffprofiles.bournemouth.ac.uk
  77. Dr Steph Allen. https://staffprofiles.bournemouth.ac.uk
  78. Dr Steph Allen. https://staffprofiles.bournemouth.ac.uk
  79. Dr Steph Allen. https://staffprofiles.bournemouth.ac.uk
  80. #unsdg4 #unsdg16 #unsdg17 #bournemouthuni | Dr. Steph Allen. https://www.linkedin.com
  81. AI and Academic Integrity: What Next? — HE Professional. https://heprofessional.co.uk
  82. Dr Steph Allen. https://staffprofiles.bournemouth.ac.uk
  83. International Conference of Artificial Intelligence in Higher Education 2024. https://open-publishing.org
  84. Annual Conference Registration. https://academicintegrity.org
  85. Leading UK universities issue joint statement on the use of AI | DailyAI. https://dailyai.com
  86. Principles on the use of generative AI tools in education. https://www.russellgroup.ac.uk
  87. Principles on the use of generative AI tools in education. https://www.russellgroup.ac.uk
  88. Leading UK universities issue joint statement on the use of AI | DailyAI. https://dailyai.com
  89. Principles on the use of generative AI tools in education. https://www.russellgroup.ac.uk
  90. Top UK Universities Develop Guiding Principles for Ethical Use of AI in Education. https://mpost.io
  91. On the Russell Group principles on AI in education. https://medium.com
  92. Russell Group hosts Dutch universities to discuss AI in education. http://russellgroup.ac.uk
  93. Principles on the use of generative AI tools in education. https://www.russellgroup.ac.uk
  94. Russell Group hosts Dutch universities to discuss AI in education. http://russellgroup.ac.uk
  95. Russell Group hosts Dutch universities to discuss AI in education. http://russellgroup.ac.uk
  96. Leading UK universities issue joint statement on the use of AI | DailyAI. https://dailyai.com
  97. Leading UK universities issue joint statement on the use of AI | DailyAI. https://dailyai.com
  98. Challenges and Opportunities of Generative AI for Higher Education as Explained by ChatGPT. https://www.mdpi.com
  99. Navigating the Future: Higher Education policies and guidance on generative AI - Artificial intelligence. https://nationalcentreforai.jiscinvolve.org
  100. Navigating the Future: Higher Education policies and guidance on generative AI - Artificial intelligence. https://nationalcentreforai.jiscinvolve.org
  101. (PDF) Generative Artificial Intelligence (AI) Education Policies of UK Universities. https://www.researchgate.net
  102. AI and ethics: Investigating the first policy responses of higher education institutions to the challenge of generative AI - Humanities and Social Sciences Communications. https://www.nature.com
  103. AI and ethics: Investigating the first policy responses of higher education institutions to the challenge of generative AI - Humanities and Social Sciences Communications. https://www.nature.com
  104. Challenges and Opportunities of Generative AI for Higher Education as Explained by ChatGPT. https://www.mdpi.com
  105. Linking artificial intelligence facilitated academic misconduct to existing prevention frameworks - International Journal for Educational Integrity. https://edintegrity.biomedcentral.com
  106. Frontiers Publishing Partnerships | Generative AI in Higher Education: Balancing Innovation and Integrity. https://www.frontierspartnerships.org
  107. Generative AI Academic Integrity Resources. https://academictechnology.umich.edu
  108. New UK university principles promote AI literacy and integrity. https://www.universityworldnews.com
  109. AI & Academic Integrity. https://teaching.cornell.edu
  110. Reassessing academic integrity in the age of AI: A systematic literature review on AI and academic integrity - ScienceDirect. https://www.sciencedirect.com
  111. New UK university principles promote AI literacy and integrity. https://www.universityworldnews.com