Addressing Academic Misconduct and GenAI at Bournemouth University
1. Executive Summary
This report examines the efforts made in the United Kingdom, with a particular focus on Bournemouth University (BU), to address academic misconduct related to the use of generative artificial intelligence (GenAI). The rapid advancement of GenAI tools, such as ChatGPT, has introduced significant challenges to academic integrity in higher education. UK universities, including Bournemouth University, have been proactive in implementing policies, providing resources, and fostering discussions on the ethical use of GenAI. The report also highlights the contributions of Dr. Steph Allen, a Principal Academic in Learning Development and Academic Integrity at Bournemouth University, who has been actively involved in addressing these challenges.
2. Introduction
The integration of generative artificial intelligence (GenAI) tools into academic environments has introduced significant challenges to maintaining academic integrity. UK universities have been at the forefront of addressing these challenges, given the rapid adoption of GenAI by students and the associated risks of academic misconduct. This report investigates the specific efforts made by Bournemouth University and other UK institutions to combat academic misconduct related to GenAI use, focusing on the period from 2023 to 2025.
3. Bournemouth University’s Approach to GenAI and Academic Misconduct
3.1 Policy Framework and Guidelines
Bournemouth University has taken a structured approach to address academic misconduct involving generative AI. Key elements of its policy framework include:
- Explicit Prohibition and Acknowledgment Requirements:
- BU has declared that using generative AI tools, such as ChatGPT, to complete assignments without proper acknowledgment constitutes an academic offence under its “Policy and Procedure for Academic Offences” (6H plagiarism) [1].
- Students are required to acknowledge the use of AI tools by naming the tool, describing how it was used, and detailing any modifications made to the AI-generated content. This includes specifying prompts used and how the output was integrated into their work [2].
- Consequences for Misconduct:
- Failure to comply with these rules can lead to severe penalties, including being required to repeat a unit, receiving a zero or capped mark, or even expulsion [3].
- Turnitin AI Detection Tool:
- While Turnitin has introduced AI detection capabilities, BU has decided not to use this functionality at present. The decision was based on concerns about the tool’s reliability, lack of detailed guidance from Turnitin, and the need for a more considered approach to integrating such tools into BU’s policies [4] [5].
- Policy Updates:
- BU regularly reviews its academic integrity policies to ensure they remain relevant and effective. Recent changes include removing the 30% similarity parameter from Turnitin reports, leaving the determination of academic misconduct to the marker’s academic judgment [6].
3.2 Educational Initiatives and Resources
Bournemouth University has invested in educational initiatives to promote understanding and ethical use of generative AI among students and staff:
- Generative AI Resource Hub:
- Workshops and Training:
- A series of workshops and drop-in sessions have been organized to support staff and students in understanding and using generative AI responsibly. Topics include “Hacking GenAI,” ethics and integrity, and inclusivity [9].
- Weekly AI drop-ins provide opportunities for academics to ask questions and receive guidance on integrating AI into their teaching practices [10].
- Student AI Literacy:
- Authentic Assessment Practices:
3.3 Collaborative Efforts and Research
Bournemouth University has engaged in collaborative efforts and research to address the challenges posed by generative AI:
- Collaboration with Turnitin:
- Research on AI and Academic Integrity:
- BU researchers have explored the implications of generative AI on academic integrity, including the need for better quality control, standards, and regulation of large language models [17].
- Engagement with the Broader Academic Community:
3.4 Challenges and Considerations
Despite its proactive measures, BU faces several challenges in addressing academic misconduct related to generative AI:
- Reliability of AI Detection Tools:
- AI detection tools, including Turnitin’s, have been criticized for their lack of reliability and potential bias against non-native English speakers [20].
- Balancing Innovation and Integrity:
- BU aims to strike a balance between empowering creativity and ensuring academic integrity. This involves promoting the ethical use of AI while preventing its misuse [21].
- Evolving Nature of Generative AI:
- The rapid development of generative AI technologies necessitates continuous updates to policies, resources, and educational initiatives [22].
4. Broader UK Context: Case Studies and Institutional Responses
4.1 University of Nottingham
The University of Nottingham has taken a proactive approach to address GenAI-related academic misconduct. The institution has trained academic markers to detect changes in students’ writing styles, which may indicate the use of AI tools [23]. Additionally, the university has emphasized the importance of regularly communicating its academic misconduct policies to students, including updates reflecting the evolution of AI technology [24] [25]. Nottingham has also opted out of using Turnitin’s AI detection tool due to concerns about its reliability.
4.2 University of Oxford
Oxford University has adopted a nuanced approach by incorporating GenAI into its assessment methods while maintaining strict guidelines to prevent misuse. Professor Steve New of Oxford has encouraged students to use AI tools thoughtfully and critically to enhance their essays, provided they fact-check and make significant edits to AI-generated content [26]. Students are required to submit an “AI statement” alongside their assignments, detailing how they used GenAI tools [27]. This policy aims to ensure transparency and uphold academic integrity.
4.3 University of Cambridge
Cambridge University has warned students against over-reliance on GenAI tools, emphasizing the importance of developing critical thinking skills. While the use of GenAI has not been banned, the university has made it clear that submitting AI-generated content as original work constitutes academic misconduct [28]. Cambridge has also explored the use of GenAI as a collaborative coach and for time management, reflecting a balanced approach to its integration [29].
4.4 Glasgow Caledonian University
Glasgow Caledonian University has experienced a significant increase in academic misconduct cases, with numbers rising from 422 in 2020/21 to 742 in 2021/22 and 711 in 2022/23 [30]. The university has responded by working with experts to explore how GenAI can be used as a learning tool while implementing measures to detect and deter its misuse [31].
4.5 Edinburgh Napier University
The Business School at Edinburgh Napier University reported 1,395 cases of academic misconduct over two years, highlighting the scale of the issue [32]. The university has emphasized the need for robust policies and inclusive assessment processes to mitigate the risks associated with GenAI.
5. Challenges in Detecting and Addressing GenAI Misuse
One of the primary challenges faced by UK universities is the difficulty in reliably detecting AI-generated content. Studies have shown that AI detection tools, such as Turnitin’s AI writing detection capability, have accuracy rates ranging from 33% to 81%, depending on the provider and methodology [33]. This has led some institutions to rely on oral examinations (vivas) to verify the originality of students’ work. For example, in 2022/23, 825 out of 952 breaches investigated at UK universities required viva examinations [34].
Additionally, the rapid advancements in GenAI technology have made it increasingly difficult to distinguish between human-written and AI-generated content. ChatGPT-generated answers have gone undetected in 94% of cases and often outperformed real student submissions in assessments [35].
6. Collaborative Efforts and Shared Resources
UK universities have recognized the need for collaboration to address the challenges posed by GenAI. The Russell Group, comprising 24 leading research-intensive universities, has developed guiding principles for the ethical use of GenAI in education. These principles emphasize the importance of AI literacy, adapting teaching and assessment methods, and ensuring academic rigor and integrity [36] [37]. The group has also encouraged universities to share best practices and work collaboratively to refine their approaches to GenAI [38].
6.1 Russell Group’s Five Guiding Principles
In July 2023, the Russell Group released five guiding principles aimed at ensuring the ethical and responsible use of generative AI in education. These principles emphasize:
- Supporting students and staff to become AI-literate [39] [37].
- Equipping staff to help students use generative AI tools effectively and appropriately [37] [40].
- Adapting teaching and assessment to incorporate the ethical use of generative AI while ensuring academic integrity [37] [41].
- Sharing best practices as the technology evolves [42] [43] [44].
- Evaluating the benefits and drawbacks of generative AI with contextual sensitivity [45].
These principles were developed collaboratively with input from AI experts and member universities, reflecting a coordinated approach to addressing the challenges of generative AI [46] [47].
6.2 Collaboration with Dutch Universities
In February 2024, the Russell Group hosted representatives from 14 Dutch research-intensive universities to discuss AI in education. This workshop facilitated knowledge exchange on best practices in assessments, inclusivity, and innovation in teaching and learning. The event underscored the importance of international collaboration in addressing the evolving challenges of generative AI [48] [49].
6.3 Interdisciplinary and Inter-Sectoral Collaboration
UK universities have recognized the need for interdisciplinary and inter-sectoral collaboration to address the complexities of generative AI. This includes partnerships between faculties of computer science, ethics, education, and law, as well as engagement with industry leaders and professional bodies [50]. Such collaborations aim to develop robust ethical frameworks, innovative assessment designs, and inclusive educational environments.
6.4 Shared Resources and Training Initiatives
UK universities have prioritized the development of AI literacy programs for both staff and students. These programs include workshops, seminars, and online courses designed to raise awareness about the ethical use of generative AI and its potential risks [51] [52]. For example, Durham University has created a micro-course exploring the appropriate use of AI tools [53].
Several universities have developed toolkits and frameworks to guide the ethical use of generative AI. For instance:
- University College Cork developed the SATLE-funded (AI)²ed toolkit, which focuses on academic integrity and artificial intelligence [54].
- Queen’s University Belfast introduced the RAISE framework, emphasizing responsible use, AI best practices, integrity, support, and equitable access [55].
The Russell Group has committed to sharing best practices across its member universities to ensure consistency and coordination in addressing generative AI challenges. This includes the dissemination of resources, guidelines, and case studies to support the ethical integration of AI in education [42] [43] [44].
7. Recent Policies and Guidelines on Academic Misconduct (2023-2025)
7.1 Recognition of AI-Related Academic Misconduct
- The academic year 2023/24 marked the first time that academic misconduct specifically related to AI was formally distinguished from other types of misconduct in some UK universities [56].
- Cambridge University recorded its first cases of AI-related academic misconduct in 2024, with three cases formally documented [57] [58].
- The University of Sheffield reported a dramatic increase in AI-related misconduct cases, rising from six in 2022/23 to 92 in 2023/24, with 79 penalties issued [59].
- Similarly, the University of Glasgow saw a rise in suspected AI-related misconduct cases, from 36 in 2022/23 to 130 in 2023/24 [59].
7.2 Policies on Generative AI Use
- Many universities have updated their academic conduct policies to reflect the emergence of generative AI. For example, eight Russell Group universities, including Oxford and Cambridge, have classified the use of AI tools for assignments as academic misconduct [60].
- The University of Cambridge treats the submission of AI-generated content in assessed work as academic misconduct [61].
- The University of Westminster explicitly states that using generative AI systems like ChatGPT to produce content for assessment constitutes an academic offence [62].
- London Metropolitan University permits the responsible use of generative AI but requires students to document or reference its use in assessments [63] [64].
7.3 Guidelines for Ethical AI Use
- UK universities have begun to emphasize the ethical use of AI in education. For instance, the Russell Group universities have developed guiding principles to ensure students and staff are AI-literate and can use generative AI tools appropriately [65].
- These principles include adapting teaching and assessment methods to incorporate ethical AI use, ensuring equal access to AI tools, and educating students about the risks of plagiarism, bias, and inaccuracy in generative AI [65].
7.4 Penalties for AI-Related Misconduct
Penalties for academic misconduct involving AI vary across institutions but generally align with the severity of the offence. For example:
- The University of Manchester applies penalties ranging from a mark of zero for the assessment to expulsion for severe cases [66] [67].
- The University of Nottingham reserves the right to revoke awards and apply penalties for confirmed cases of academic misconduct [68].
- The University of Kent categorizes offences into minor, significant, and serious, with penalties scaling accordingly [69].
7.5 Detection and Prevention Measures
- Universities are increasingly relying on viva examinations to verify the originality of student work when AI use is suspected, as AI detection tools remain imperfect [70].
- The University of Sussex and other institutions have implemented stress-testing of assessments to address the widespread use of AI tools, with 92% of students reportedly using AI in 2025 [71].
- Institutions are also investing in inclusive assessment processes to mitigate the risks of AI exploitation [72].
7.6 Collaborative Efforts and Sector-Wide Initiatives
- The Quality Assurance Agency (QAA) has played a pivotal role in promoting academic integrity across UK higher education. It has developed an Academic Integrity Charter to provide a baseline for institutional policies and practices [73].
- The QAA has also issued guidance on addressing contract cheating and the use of essay mills, which were criminalized in England through the Skills and Post-16 Education Act 2022 [74] [75].
- Universities are encouraged to share resources and best practices to combat academic misconduct involving generative AI, as highlighted in recent studies [76].
7.7 Challenges and Future Directions
- The rapid adoption of generative AI has outpaced the development of robust detection tools, leading to challenges in identifying AI-generated content [77].
- There is a need for more research to inform institutional responses and establish clear guidelines for the ethical use of AI in education [78] [79].
- Universities must balance the prevention of academic misconduct with the promotion of academic freedom and innovation, fostering a culture of integrity [80].
8. Contributions of Dr. Steph Allen
Dr. Steph Allen, a Principal Academic in Learning Development and Academic Integrity at Bournemouth University, has been actively involved in addressing the challenges posed by academic misconduct, particularly in the context of generative artificial intelligence (GenAI). Her work spans academic publications, conference presentations, and initiatives aimed at fostering academic integrity in the evolving educational landscape.
8.1 Academic Publications
Dr. Steph Allen has contributed to several academic publications that explore the intersection of academic misconduct and generative AI. These include:
- “Managing the Mutations: Academic Misconduct in Australia, New Zealand, and the UK”: This publication examines how academic misconduct has evolved in response to technological advancements, including generative AI. It highlights the need for consistent policies and preventative education to address the mutations of academic misconduct across different regions [81] [82].
- “Academic Integrity: Tales of the Unexpected”: This work, published by Bournemouth University, delves into unexpected challenges in maintaining academic integrity, likely including the implications of generative AI [83].
- “Artificial Intelligence: How Have Learning Developers Engaged?”: This publication explores how learning developers have responded to the challenges and opportunities presented by AI, including its impact on academic integrity [84].
- “Academic Integrity and Artificial Intelligence: A Student-Led R/Evolution”: This publication discusses the role of students in shaping the discourse around academic integrity in the age of AI, emphasizing the importance of student engagement in ethical AI use [85].
- “AI Assistance or AI Replacement? An Academic Integrity Conversation”: This work addresses the ethical dilemmas surrounding the use of AI in academic settings, particularly the distinction between AI as a tool for assistance versus a replacement for original work [86].
- “Enhancing Academic Integrity: Avoiding Academic Offences During COVID-19”: Although focused on the pandemic, this publication likely provides insights into how rapid technological adoption, including AI, has influenced academic misconduct [87].
- “AI-Text Detection: The Challenges for Academic Integrity”: This publication specifically addresses the limitations and challenges of AI detection tools in maintaining academic integrity, a critical issue in the context of generative AI [88].
- “Generative AI Has Been the Catalyst for Assessment Change but What Impact Has This Had on Education Fraud, If Any?”: This work explores how generative AI has necessitated changes in assessment design and its implications for academic misconduct [89].
8.2 Presentations and Conferences
Dr. Allen has also been an active participant in conferences and forums discussing academic integrity and generative AI:
- International Student Round Table on Artificial Intelligence in the Digital Society (2023): Dr. Allen co-organized and co-hosted this event, which tackled monumental topics at the intersection of AI and ethics, including academic integrity [90].
- AI and Academic Integrity: What Next? (2024): Dr. Allen was a confirmed speaker at this practice-led conference, which explored the challenges generative AI presents to academic integrity. The conference focused on identifying AI-led academic misconduct, documenting and investigating breaches, and rethinking assessment approaches in the age of AI [91].
- SRHE Conference 2021: Academic Integrity: Campaigning Now and Beyond: Although predating the widespread adoption of generative AI, this presentation underscores Dr. Allen’s long-standing commitment to academic integrity [92].
- ASCILITE Panel on the Perils of Artificial Intelligence (2024): Dr. Allen participated in this panel, which discussed the ethical and regulatory concerns surrounding AI in higher education, including its impact on academic integrity [93].
- ICAI Annual Conference (2024): Dr. Allen contributed to discussions on academic integrity in the era of generative AI, emphasizing the need for ethical guidelines and preventative strategies [94].
8.3 Initiatives and Contributions
Dr. Allen has been involved in several initiatives aimed at addressing academic misconduct in the context of generative AI:
- Promoting Preventative Education: Dr. Allen advocates for preventative education as a key strategy to combat academic misconduct. Her work emphasizes the importance of educating both staff and students about ethical AI use and the risks of misconduct [82].
- Developing Ethical Guidelines: Dr. Allen has contributed to the development of ethical guidelines for the use of generative AI in education. These guidelines aim to balance the benefits of AI with the need to uphold academic integrity [86] [89].
- Supporting Assessment Redesign: Recognizing the challenges posed by generative AI, Dr. Allen has been involved in initiatives to redesign assessments to promote higher-order thinking and reduce opportunities for misconduct [89].
- Fostering Collaborative Efforts: Dr. Allen has engaged in collaborative efforts with other educators and institutions to share best practices and develop effective strategies for maintaining academic integrity in the age of AI [91].
8.4 Key Themes in Dr. Allen’s Work
Dr. Allen’s contributions reflect several recurring themes:
- Proactive Approaches: Emphasizing prevention over detection, Dr. Allen advocates for proactive strategies such as ethical education, assessment redesign, and fostering a culture of integrity.
- Ethical AI Use: Her work highlights the importance of using AI as a tool for assistance rather than a replacement for original thought, and she stresses the need for clear guidelines and ethical frameworks [86] [89].
- Challenges of AI Detection: Dr. Allen has critically examined the limitations of AI detection tools, advocating for a balanced approach that combines technology with human judgment [88].
- Student Engagement: Recognizing the role of students in shaping academic integrity, Dr. Allen has emphasized the importance of involving students in discussions about ethical AI use.
9. Recommendations and Best Practices
Based on the case studies, reports, and collaborative efforts examined, the following recommendations have emerged as effective strategies for addressing GenAI-related academic misconduct:
- Transparency and Communication: Clearly communicate policies on the use of GenAI to students and staff, ensuring that expectations are well understood [95].
- AI Literacy Training: Provide training for students and staff to use GenAI tools ethically and effectively [96] [97].
- Authentic Assessments: Design assessments that minimize opportunities for misconduct, such as in-class assignments, two-stage exams, and group projects [98].
- AI Statements: Require students to disclose their use of GenAI tools in their assignments, promoting transparency and accountability [99] [27].
- Collaboration and Research: Foster collaboration among institutions to share resources, develop robust policies, and conduct research on the ethical use of GenAI.
- Proactive Approaches: Emphasize prevention over detection by implementing proactive strategies such as ethical education, assessment redesign, and fostering a culture of integrity.
- Balanced Use of AI Detection Tools: Recognize the limitations of AI detection tools and adopt a balanced approach that combines technology with human judgment.
- Student Engagement: Involve students in discussions about ethical AI use and the development of academic integrity policies.
- Continuous Policy Review: Regularly review and update academic integrity policies to keep pace with technological advancements and emerging challenges.
- Interdisciplinary Collaboration: Encourage partnerships between different academic disciplines to develop comprehensive approaches to academic integrity in the age of AI.
10. Conclusion
The rise of generative AI has presented significant challenges to academic integrity in UK universities, including Bournemouth University. While institutions have adopted a range of strategies to address these challenges, the effectiveness of these measures varies. Bournemouth University has demonstrated a comprehensive and forward-thinking approach to addressing academic misconduct related to generative AI. By implementing clear policies, providing educational resources, and fostering collaboration, BU aims to uphold academic integrity while embracing the opportunities offered by generative AI.
Collaborative efforts, such as those led by the Russell Group, and the development of innovative assessment methods offer promising pathways to uphold academic standards in the age of AI. The contributions of researchers like Dr. Steph Allen have been instrumental in shaping the discourse around academic integrity and generative AI use in higher education.
However, ongoing research and adaptation will be essential to navigate the evolving landscape of GenAI in education. Universities must continue to balance the prevention of academic misconduct with the promotion of academic freedom and innovation, fostering a culture of integrity that embraces the potential of AI while maintaining the core values of higher education.
As the field continues to evolve, it is clear that a multifaceted approach involving policy development, educational initiatives, collaborative research, and ongoing dialogue will be crucial in addressing the challenges posed by generative AI in academic settings. The efforts made by Bournemouth University and other UK institutions serve as valuable models for the global higher education community in navigating this complex and rapidly changing landscape.
References
- Bournemouth University declares using ChatGPT to help with assignments an academic offence. https://thetab.com
- Using Artificial Intelligence to support assignments or assessments | Bournemouth University. https://www.bournemouth.ac.uk
- Bournemouth University declares using ChatGPT to help with assignments an academic offence. https://thetab.com
- Turnitin’s new AI writing detection capability. https://microsites.bournemouth.ac.uk
- Turnitin’s new AI writing detection capability. https://microsites.bournemouth.ac.uk
- Important changes to our policy around academic offences | Bournemouth University. https://www.bournemouth.ac.uk
- Embracing Generative AI: Navigating the Future of Education at Bournemouth University. https://microsites.bournemouth.ac.uk
- Engaging Assessment Practices in the Era of Generative AI. https://microsites.bournemouth.ac.uk
- Generative AI workshops – Wed 29th Nov. https://microsites.bournemouth.ac.uk
- Embracing Generative AI: Navigating the Future of Education at Bournemouth University. https://microsites.bournemouth.ac.uk
- FLIE. https://microsites.bournemouth.ac.uk
- News. https://microsites.bournemouth.ac.uk
- Embracing Generative AI: Navigating the Future of Education at Bournemouth University. https://microsites.bournemouth.ac.uk
- AI and Assessment and Feedback workshops. https://microsites.bournemouth.ac.uk
- Turnitin’s new AI writing detection capability. https://microsites.bournemouth.ac.uk
- Turnitin’s new AI writing detection capability. https://microsites.bournemouth.ac.uk
- BU Research Blog | artificial intelligence | Bournemouth University. https://blogs.bournemouth.ac.uk
- Events and workshops. https://microsites.bournemouth.ac.uk
- Centre for Fusion Learning Innovation and Excellence. https://microsites.bournemouth.ac.uk
- Guidance on AI Detection and Why We’re Disabling Turnitin’s AI Detector. https://www.vanderbilt.edu
- Embracing Generative AI: Navigating the Future of Education at Bournemouth University. https://microsites.bournemouth.ac.uk
- Risks of AI-enabled academic misconduct flagged in new study. https://www.pinsentmasons.com
- UK universities warned to ‘stress-test’ assessments as 92% of students use AI | Universities | The Guardian. https://www.theguardian.com
- ChatGPT: student AI cheating cases soar at UK universities. https://www.timeshighereducation.com
- These are the Russell Group unis that have banned students from using ChatGPT. https://thetab.com
- Academic misconduct and AI software. https://exchange.nottingham.ac.uk
- Academic misconduct and AI software. https://exchange.nottingham.ac.uk
- The Boar. https://theboar.org
- The Boar. https://theboar.org
- The Boar. https://theboar.org
- ‘I received a first but it felt tainted and undeserved’: inside the university AI cheating crisis | Artificial intelligence (AI) | The Guardian. https://www.theguardian.com
- The financial impact of AI on institutions through breaches of academic integrity - HEPI. https://www.hepi.ac.uk
- These are the Russell Group unis that have banned students from using ChatGPT. https://thetab.com
- The financial impact of AI on institutions through breaches of academic integrity - HEPI. https://www.hepi.ac.uk
- Academic Integrity in the Age of AI | EDUCAUSE Review. https://er.educause.edu
- The financial impact of AI on institutions through breaches of academic integrity - HEPI. https://www.hepi.ac.uk
- Risks of AI-enabled academic misconduct flagged in new study. https://www.pinsentmasons.com
- The UK’s top universities reached an agreement on how to deal with generative AI. https://qz.com
- On the Russell Group principles on AI in education. https://medium.com
- On the Russell Group principles on AI in education. https://medium.com
- Maintaining academic integrity in the AI era. https://cte.ku.edu
- Russell Group Universities issue guidelines on how students can use ChatGPT in their studies | Regulatory Blog | Kingsley Napley. https://www.kingsleynapley.co.uk
- New UK university principles promote AI literacy and integrity. https://www.universityworldnews.com
- Maintaining academic integrity in the AI era. https://cte.ku.edu
- Maintaining academic integrity in the AI era. https://cte.ku.edu
- University creates ‘AI’ category for academic misconduct after rise in cases. https://www.varsity.co.uk
- University creates ‘AI’ category for academic misconduct after rise in cases. https://www.varsity.co.uk
- University creates ‘AI’ category for academic misconduct after rise in cases. https://www.varsity.co.uk
- ChatGPT: student AI cheating cases soar at UK universities. https://www.timeshighereducation.com
- A comprehensive AI policy education framework for university teaching and learning - International Journal of Educational Technology in Higher Education. https://educationaltechnologyjournal.springeropen.com
- University creates ‘AI’ category for academic misconduct after rise in cases. https://www.varsity.co.uk
- Academic misconduct. https://www.westminster.ac.uk
- Academic Misconduct. https://student.londonmet.ac.uk
- Academic Misconduct. https://student.londonmet.ac.uk
- UK universities draw up guiding principles on generative AI | Artificial intelligence (AI) | The Guardian. https://www.theguardian.com
- Academic Misconduct. https://student.londonmet.ac.uk
- Academic Misconduct. https://www.salford.ac.uk
- Section 9: Student Academic Misconduct Procedure. https://www.ucl.ac.uk
- Academic Misconduct Penalties. https://www.kent.ac.uk
- The financial impact of AI on institutions through breaches of academic integrity - HEPI. https://www.hepi.ac.uk
- UK universities warned to ‘stress-test’ assessments as 92% of students use AI | Universities | The Guardian. https://www.theguardian.com
- Risks of AI-enabled academic misconduct flagged in new study. https://www.pinsentmasons.com
- Academic Integrity Charter for UK Higher Education. https://www.qaa.ac.uk
- Academic Integrity in the United Kingdom: The Quality Assurance Agency’s National Approach. https://link.springer.com
- Academic integrity. https://www.qaa.ac.uk
- The financial impact of AI on institutions through breaches of academic integrity - HEPI. https://www.hepi.ac.uk
- AI & Academic Integrity. https://teaching.cornell.edu
- Risks of AI-enabled academic misconduct flagged in new study. https://www.pinsentmasons.com
- Reassessing academic integrity in the age of AI: A systematic literature review on AI and academic integrity - ScienceDirect. https://www.sciencedirect.com
- Dr Steph Allen. https://staffprofiles.bournemouth.ac.uk
- Stephen Tee, Steph Allen, Jane Mills & Melanie Birks, Managing the mutations: academic misconduct Australia, New Zealand, and the UK - PhilPapers. https://philpapers.org
- #unsdg4 #unsdg16 #unsdg17 #bournemouthuni | Dr. Steph Allen. https://www.linkedin.com
- AI and Academic Integrity: What Next? HE Professional. https://heprofessional.co.uk
- International Conference of Artificial Intelligence in Higher Education 2024. https://open-publishing.org
- Annual Conference Registration. https://academicintegrity.org
- Leading UK universities issue joint statement on the use of AI. DailyAI. https://dailyai.com
- Principles on the use of generative AI tools in education. https://www.russellgroup.ac.uk
- Top UK Universities Develop Guiding Principles for Ethical Use of AI in Education. https://mpost.io
- On the Russell Group principles on AI in education. https://medium.com
- Russell Group hosts Dutch universities to discuss AI in education. http://russellgroup.ac.uk
- Challenges and Opportunities of Generative AI for Higher Education as Explained by ChatGPT. https://www.mdpi.com
- Navigating the Future: Higher Education policies and guidance on generative AI - Artificial intelligence. https://nationalcentreforai.jiscinvolve.org
- (PDF) Generative Artificial Intelligence (AI) Education Policies of UK Universities. https://www.researchgate.net
- AI and ethics: Investigating the first policy responses of higher education institutions to the challenge of generative AI - Humanities and Social Sciences Communications. https://www.nature.com
- Linking artificial intelligence facilitated academic misconduct to existing prevention frameworks - International Journal for Educational Integrity. https://edintegrity.biomedcentral.com
- Frontiers Publishing Partnerships | Generative AI in Higher Education: Balancing Innovation and Integrity. https://www.frontierspartnerships.org
- Generative AI Academic Integrity Resources. https://academictechnology.umich.edu
- New UK university principles promote AI literacy and integrity. https://www.universityworldnews.com