Christmas crackers: How to integrate GenAI in assessment
This blog post provides guidance and recommendations for using GenAI tools in assessment within higher education (HE).
(This post is an AI-generated translation of my original post in Spanish; if you spot any inconsistencies or mistakes, please email me at maricruzgarciavallejo@gmail.com.)
In the United Kingdom, "Christmas crackers" are cardboard tubes wrapped in shiny plastic, traditionally used during Christmas celebrations. Typically, a Christmas cracker is opened by two people, each holding one end of the tube and pulling it apart. Inside, the cracker contains small, holiday-themed gifts along with pieces of advice, akin to the messages found in "Chinese fortune cookies."
I wanted these recommendations for the ethical use of AI in university assessments to be like a Christmas cracker—a small gift offered to the community.
I wrote the following recommendations for the course "Developing AI Competencies Applied to Assessment: Towards Smarter Evaluation," which I designed for the Vice-Rectorate of Educational Innovation at ULPGC. With this course, the university has become a pioneer in offering a program specifically designed to train teaching staff in integrating generative AI into their assessment strategies and methods.
I hope that the following recommendations are useful to the community:
1. Assessing Students' Competence in and Access to Generative AI First
Before implementing any policies, it is crucial to determine students' level of competence in AI and their access to generative AI tools:
- What types of generative AI tools do students currently use to support their learning?
- Are all students familiar with such tools?
- Do all students have equitable access to generative AI tools?
- Are students aware of the risks and limitations of these tools?
2. Establishing Clear Policies on Generative AI Use
Institutions should set out unambiguous policies regarding the use of generative AI in coursework and assessments. Where students lack the maturity to understand the purpose of assessment, academic staff should establish the usage policy; in other cases, policies may be developed collaboratively by students and staff. The policy must clarify:
a. Which generative AI tools are permitted and which are not.
b. The acceptable and prohibited uses of generative AI.
c. How contributions from AI tools should be cited in coursework and assessments.
d. What constitutes misuse or plagiarism involving AI tools, and the disciplinary measures that will follow such instances.
For instance, if the policy allows the use of AI for text editing, it must specify what ‘editing’ entails. A clear policy might state: “Generative AI may be used to correct spelling, punctuation, and basic grammar, but it must not be used to rewrite sentences or paragraphs for improved clarity or conciseness.”
Ambiguity must be avoided when establishing and agreeing upon these policies.
3. Collaborating on Policies with Students
In cases where students are deemed sufficiently mature, they should be involved in creating AI usage policies for the course. This fosters openness, honesty, and trust, which are critical components of academic integrity. When students and staff collaborate non-hierarchically to determine ethical AI usage, it becomes easier to comprehend and adhere to the agreed policies and their underlying purpose. These “rules of the game” are co-created and mutually respected.
4. Revising AI Policies Throughout the Course
Policies established at the beginning of the course may need to be evaluated and revised as the course progresses. New dilemmas or considerations may arise, such as the emergence of new AI tools capable of emulating human creativity. In such cases, discussions with students can help to adapt the policy, for instance, by debating the advantages and risks of using such tools in course activities.
5. Designing Assessment Tasks That Require Active Student Participation
To mitigate the risk of students relying solely on AI tools, incorporate tasks that require active participation and cannot be completed entirely by generative AI. Examples include:
- Oral presentations delivered in class on a specific course topic.
- Group projects where each member evaluates the contributions of their peers to the final product.
- Individual case studies with unique circumstances and data.
- Real-world, context-specific problems requiring students to propose solutions tailored to specific social or professional contexts.
6. Encouraging Active Learning
Learning activities that promote active engagement and synchronous interaction, whether in face-to-face, hybrid, or online environments, can deter inappropriate or excessive reliance on AI. Examples include:
- Facilitated debates, simulations, and problem-solving exercises.
- Immediate responses to questions posed during live sessions.
Active learning in synchronous environments limits opportunities for generative AI to complete tasks on behalf of students.
7. Evaluating the Process, Not Just the Outcome
For tasks conducted outside the classroom or in asynchronous online settings, students should provide evidence of their work process. Examples of such evidence include:
- Notes from discussions with peers or instructors.
- Email correspondence investigating possible solutions.
- Drafts, early attempts, and hypotheses considered or discarded.
Reflective components, such as those outlined in the PAIR framework, can also be included. For example, students could describe how they arrived at their solution, the challenges encountered, and how these were addressed.
8. Incorporating Formative and Progressive Assessment
Instead of relying solely on a final assessment, consider implementing formative or summative assessments throughout the course. This approach better tracks students’ genuine learning processes. Examples include:
- Requiring a prototype or draft of a project for feedback from peers or instructors.
- Embedding peer evaluation tasks within the curriculum.
- Using self-assessment as a tool for students to evaluate their own work.
Progressive assessment fosters engagement with the agreed AI usage policies, ensuring students actively contribute to group projects and evaluations.
9. Integrating Reflection into Assessment
Reflection facilitates students’ awareness of their learning processes. Reflective questions in assessments might include:
- “What challenges did you encounter while completing this task or project?”
- “What knowledge did you acquire?”
- “What skills did you develop?”
These questions can also prompt students to consider the AI-related competencies they have gained in line with the course’s AI usage policies.
10. Including Metacognitive Elements
Questions with metacognitive elements are challenging for AI to answer fully and encourage students to focus on their learning process. Examples include:
- “Describe how your work demonstrates the knowledge, skills, or competencies specified in the assessment criteria.”
- “Explain how you would apply knowledge or skill X in a professional setting.”
11. Leveraging Course Materials for Summative Assessment
Content generated through course activities (e.g., discussion posts, group tasks, presentations, quizzes, and surveys) can be referenced in exam questions and assignments. By restricting tasks to specific sources, such as course materials or class interactions, students are encouraged to engage directly with the learning environment rather than relying on AI tools.
12. Avoiding Tasks Suited to AI’s Strengths in Uncontrolled Environments
Generative AI excels in cognitive tasks such as memorisation, analysis, summarisation, and producing general content (e.g., definitions, data analysis, and creative outputs). While it is ethical to allow AI as a supportive tool for such tasks, assessments in uncontrolled environments should incorporate additional complexity, such as:
- Metacognitive questions.
- Context-specific materials or data.
- Reflective components.
13. Restricting Certain Assessments to Controlled Environments
Where assessments include tasks that generative AI performs well, such as summarisation or general content production, they should be conducted in supervised, controlled environments. For online courses, this may involve the use of online proctoring software to ensure academic integrity.