Addressing AI

Miriam.G.66 Posts: 2
edited February 8 in Thought Leadership
While most students want to learn and earn their credits with honesty and integrity, there are now more opportunities than ever to cheat, so many that some students may not even realize when they are cheating, especially in an online environment. What are your thoughts on addressing this issue? Does Brightspace enable AI detectors for students to use when submitting assignments?

Answers

  • Hello @Miriam.G.66

Thank you for reaching out to us through the Brightspace Community!

There is a PIE (Product Idea Exchange) request similar to your query that you can upvote so our team can review the idea:

    Add Turnitin AI writing indicator to Assignment Submissions page (D10330)

    https://desire2learn.brightidea.com/ideas/D10330

We highly encourage you to upvote the idea to raise its visibility, and to share your input and suggestions in the idea's comments section.

Should you have any questions or concerns, please do not hesitate to contact us.

    Thanks and Regards,

    Prithvi

Chris.S.534 Posts: 200
    edited June 2023

Hi Miriam, there are several third-party tools available to integrate with Brightspace that perform AI-based text analysis, such as Copyleaks. Others are listed in the Integration Hub; see https://integrationhub.brightspace.com/browse/app-all-categories/assessment?search=Plagiarism. There is also a great blog post by Kari Clarkson, Content Marketing Specialist at D2L, discussing some of the broader AI issues in education: https://www.d2l.com/blog/the-eruption-of-ai-content-tools-in-higher-education/ Hope that helps!

RS.S.997 Posts: 2
    Based on this discussion and the article https://community.d2l.com/brightspace/kb/articles/20477-a-standards-based-approach-to-ai-detection:

The article "A Standards-Based Approach to AI Detection" by Mike Johnston focuses on the integration of AI detection technologies in education, particularly through D2L's partnership with 1EdTech and its involvement in the LTI Working Group. This initiative aims to create a new industry standard, the Asset Processor, to streamline the process of evaluating learner submissions for plagiarism or AI-generated content.

However, an essential concern arises from the potential inaccuracies of AI detection systems. These systems, as the article notes, may flag a significant portion of original writing as AI-generated, leading to a false-accusation rate of around 10% or more. This presents a considerable challenge in educational settings, where teachers might mistakenly accuse students of using AI assistance when their work is original, creating an environment of distrust.
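To put that false-positive figure in perspective, here is a minimal sketch of how a per-assignment error rate compounds over a term. The class size, number of assignments, and the 10% rate itself are illustrative assumptions, not figures taken from the article:

```python
# Illustrative only: how an assumed 10% false-positive rate plays out.
class_size = 100            # hypothetical: students all submitting original work
false_positive_rate = 0.10  # hypothetical: detector wrongly flags 10% of original writing
assignments = 10            # hypothetical: graded submissions per term

# Expected wrongly flagged submissions on a single assignment
flagged_per_assignment = class_size * false_positive_rate
print(f"Expected false flags per assignment: {flagged_per_assignment:.0f}")

# Chance that one honest student is flagged at least once across the term
p_at_least_once = 1 - (1 - false_positive_rate) ** assignments
print(f"Chance an honest student is flagged at least once: {p_at_least_once:.0%}")
```

Under these assumptions, roughly two-thirds of entirely honest students would face at least one false flag over ten assignments, which is the compounding effect behind the "environment of distrust" concern.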

    The problem underscores the need for a more nuanced approach to AI in education.

    Rather than outright banning or relying solely on AI detection, there's a pressing need to develop more accurate and fair systems. Moreover, considering the growing relevance of AI skills in various aspects of work and life, it's crucial for educational systems to adapt by not only teaching students about AI but also integrating it thoughtfully into the learning process. This approach should aim to harness the benefits of AI while maintaining academic integrity and supporting original student work.

    The article highlights the potential of the Asset Processor standard to support a wide range of applications beyond plagiarism and AI detection, including accessibility, transcription, and linguistic analysis. This broad applicability could be leveraged to enhance educational experiences rather than merely policing them.

    While the development of standards like the Asset Processor is a step forward in integrating AI into education, it's imperative to address the limitations and ethical concerns of AI detection systems. This includes improving accuracy to avoid false accusations and integrating AI as a tool for learning and skill development, preparing students for a future where AI is increasingly integral.