The VERIFY AI Accuracy Method
- Karen Walstra

- Feb 8
Updated: Feb 9
A Discerning Action Plan

We need to teach learners that any AI response, no matter which AI tool is being used, needs to be checked.
Teach AI literacy using the VERIFY AI Accuracy Method: a six-step process for teaching learners to verify digital content, built around the acronym V-E-R-I-F-Y.
The free, short, self-paced course below explains the six stages, encouraging learners to be AI literate: critical thinkers who are diligent and discerning when using AI. At the end of the course is a teacher information section with resources, and a commitment certificate to download.
AI is designed to please the end user: it will try to give you the best response it thinks appropriate.
That response may carry some form of bias (contextual, algorithmic, and so on). It may also hallucinate, presenting inaccurate information, or incorrect and misleading fake news, as if it were correct.
So we need to be discerning! We need to VERIFY every piece of content.
The VERIFY AI Accuracy Method Short Course
Below is a brief course with a series of videos and notes, to learn from or to use when teaching "The VERIFY AI Accuracy Method" Short Course. At the end of the course are teacher resources and explanations about this course.
Scroll down to read the content and view the videos!

The VERIFY AI Accuracy Method helps learners practice and use the VERIFY framework to interrogate every AI response:
V - Voice your questions
E - Explore other sources
R - Remember what you know
I - Identify Bias
F - Fact-check specific details
Y - Your judgement matters
Explore the whole method

V - Voice your questions
The V in the VERIFY AI Accuracy Method is for "Voice your questions!"
Pause and ask, "Does this actually make sense?"
The V is for VOICE your questions. It encourages you to pause and critically reflect immediately after receiving an AI response. Look at the response and ask yourself, "Does this actually make sense?" What is your gut feel? Do you want to use this response? Do not simply accept the output as absolutely correct and true; as the user, keep asking yourself, "Does this actually make sense?"
Use your common sense and general knowledge as you reflect on whether the information makes sense or is worth using!
Be an active reflective participant! Be the human-in-the-loop!
This initial check acts as a logical filter, preventing the uncritical acceptance of information that may be coherent and grammatically correct, but logically or contextually flawed.
The V is for voice your questions.

E - Explore other sources
The E in the VERIFY AI Accuracy Method is to apply the "Three Source Rule".
The E is for EXPLORE other sources. It encourages you to validate the information provided by the AI. This step mandates the application of the "Three Source Rule."
We need to check against other sources because AI may:
Hallucinate: AI can create made-up information, sources, statistics, or facts that sound good and believable. You may think it is correct, but it could be an error, so check!
Fail on context and reasoning: AI can struggle to understand the nuances of human language, slang, or the broader context of a situation or conversation.
Produce or spread fake news: AI can be used to create or spread fake news very quickly! Fake news is created through a combination of deliberate fabrication, the exploitation of social media algorithms, and the intentional manipulation of information to sway public opinion, generate profit, or cause chaos. It is designed to mimic the look and feel of legitimate news reporting while lacking the integrity and values of traditional journalism.
Produce harmful content: AI can produce dangerous or inappropriate content because of "jailbreaking" (prompt hacking) or improper training data.
Cause security failures: AI systems, especially autonomous ones, can cause operational failures or expose sensitive data.
This rule dictates that a fact should never be trusted unless it can be corroborated (verified) in three independent, human-verified places, such as established textbooks, reputable news sites, or high-authority (trusted) reference websites.
This triangulation helps distinguish reliable data from potential fabrication. So, never trust a fact unless you can find it in three independent, human-verified places!
E is for Explore other sources
A quick look at the "Three Source Rule."

R - Remember what you know
The R in the VERIFY AI Accuracy Method is to use a "Brain Check" to compare the AI's answer with what you have already learned in class.
The R is for REMEMBER what you know. This involves performing a "Brain Check" to ground the new information in existing knowledge.
You, as the learner, are advised to compare the AI’s answer with what you have already learned in class.
So building your content knowledge, and understanding what you are learning at school, is extremely important!
You, as the human, must make the decision! By using your own education as a benchmark, you can spot inconsistencies or contradictions that an automated AI system might not know or may overlook.
The R is for remembering what you know, doing your own "brain check" and comparing the AI information with what YOU know!

I - Identify Bias
Ask the questions: "Whose voice is missing from this answer?
Does this promote a stereotype?"
The I is for Identify Bias. This critical step focuses on the social and ethical implications of the content. Whenever you explore, create or use AI, the response you receive always needs to be checked. AI can be helpful and reduce work time, BUT it can also be biased. From an African point of view, African countries contribute a tiny portion of the data used to train artificial intelligence models. We are many people, but a large digital divide means we produce less data compared to other continents and countries. African perspectives and views are therefore less evident in the digital world, leading to potential biases against local users.
Examples of AI bias:
Machine learning bias or algorithm bias: Reflects and perpetuates human biases within a society, including historical and current social inequality.
Availability bias: Overemphasises frequently occurring, easily recalled or vivid data, which leads to skewed outputs or distorted information.
Confirmation bias: Reinforces and confirms existing beliefs, ignoring other research.
Contextual bias: The AI model struggles to understand or interpret the context of a conversation or prompt accurately, which can lead to misinformation.
Training data bias: The datasets used to teach the AI model are skewed, unrepresentative or imbalanced because of societal, historical or technical errors, causing the AI to produce unfair, inaccurate, or discriminatory results.
Cognitive bias: Favours data gathered from a small group of people in a small area, then applies it to a range of populations around the globe.
So, as the user, you should interrogate the text by asking, "Whose voice is missing from this answer?" and considering whether the output might "promote a stereotype". This helps to uncover the subtle prejudices or gaps in representation that often exist within AI training data.
I is for Identify Bias and ask the questions, "Whose voice is missing from this answer?" and "Does this promote a stereotype?"

F - Fact-check specific details
Be extra cautious with dates, names, and statistics, as these are what AI "hallucinates" most often.
The F is for Fact-check specific details. While general concepts might be correct, AI often struggles with specifics, detail and precision. This step warns users to be extra cautious with dates, names, and statistics, noting that these are the areas where AI most often "hallucinates" or confidently invents false information.
Because of the data it is trained on, AI can hallucinate, creating made-up information, sources, statistics, or facts that sound good and believable. You may think the output is correct, but it is wrong. Rigorous verification is required for the concrete details, so always fact-check!
The F in the VERIFY method is to fact-check specific details such as dates, names and statistics.

Y - Your judgement matters
You are the expert of your own work, add your own creativity and unique perspective.
The Y represents "Your judgement matters."
As the learner using AI, you are the expert of your own work, and your own human insight is irreplaceable. If the AI output feels generic or "cookie-cutter," you should intervene and add your own creativity and unique perspective, making the work your own and elevating it beyond a standard automated AI response.
The Y explains that your individual judgement matters: work through your work and add your own take on the information.
Think about . . .

In Conclusion:
Tuesday 10 February 2026 is Safer Internet Day. The theme, "Smart tech, safe choices", focuses on equipping users with the skills to safely navigate AI, chatbots, and voice assistants.
With AI all around us, teaching AI literacy and critical thinking is more important than ever. We should encourage students to know their work and to reflect on it critically, with or without the use of AI.

The VERIFY acronym as a method is important when explaining the responsible use of AI to your learners, for the following reasons:
Ensuring Accuracy: It enforces the "Three Source Rule," requiring that facts be corroborated by three independent, human-verified sources such as textbooks or news websites before being trusted. This is particularly necessary for checking dates, names, and statistics, which are the specific areas where AI "hallucinates" most often.
Detecting Bias: It compels learners to critically assess the social implications of an answer, prompting them to ask questions like "Whose voice is missing?" or to determine if the output promotes a stereotype.
Empowering Human Agency: It reminds learners that their judgment matters and that they are the experts on their own work. This prevents reliance on generic, "cookie-cutter" AI responses and encourages the addition of the individual's unique perspectives and creativity.
Promoting Critical Thinking: It encourages learners to perform a "Brain Check" to compare AI answers with what they have already learned in class, and to pause and ask if the information logically makes sense.
The VERIFY method is an acronym and also a powerful mnemonic device. A mnemonic device (pronounced nuh-MON-ik) is a tool designed to help you remember something more easily. It is a series of steps to help learners interrogate AI responses.
The VERIFY AI Accuracy Method is important because it equips learners with a structured approach to interrogate every AI response rather than accepting outputs at face value. It fosters true AI literacy by teaching them to distinguish between tasks that can be delegated to a machine and instances where "human empathy and judgement are irreplaceable".
We need to teach learners to be critical thinkers, especially when using AI, by teaching AI literacy.
Below is a video summary of the VERIFY Method and teacher tips about it.
"The VERIFY AI Accuracy Method" is one idea for you as a teacher to use in your lessons.
Here is a full colour infographic of the VERIFY Method to download (PDF or jpg):
A jpg:

PDF of the colourful VERIFY Method image above.
Here is a grey-scale infographic of the VERIFY AI Accuracy Method to download (PDF or jpg):
A jpg:

PDF of the grey-scale VERIFY Method image above.
Here is the VERIFY AI Accuracy Method Commitment Certificate to download (PDF or jpg)
For you as a teacher, to show commitment to your students/learners, and for you to use with your learners to make them aware of how to use AI responsibly.
jpg

PDF.
My explanation of working with AI:
I collaborated with NotebookLM, providing one resource. The videos and images were created using prompts and the attached note. I checked the contents and at times asked for alternative versions, clarifying the prompt details. I also referred to a range of sources.
I look forward to your feedback and comments if you use it with your learners.
Sources:
Ade-Ibijola, A., Okonkwo, C. (2023). Artificial Intelligence in Africa: Emerging Challenges. In: Eke, D.O., Wakunuma, K., Akintoye, S. (eds) Responsible AI in Africa. Social and Cultural Studies of Robots and AI. Palgrave Macmillan, Cham. https://doi.org/10.1007/978-3-031-08215-3_5
Daryna Antoniuk (February 15th, 2024). Lack of data makes AI more biased in African countries, says former tech official. The Record Media. https://therecord.media/lack-of-data-makes-ai-more-biased-in-africa
Evidently AI (October 8, 2025) When AI goes wrong: 13 examples of AI mistakes and failures
Forbes (Sep 06, 2023) Navigating The Biases In LLM Generative AI: A Guide To Responsible Implementation https://www.forbes.com/sites/forbestechcouncil/2023/09/06/navigating-the-biases-in-llm-generative-ai-a-guide-to-responsible-implementation/?sh=3cad6ab25cd2
James Holdsworth (no date). What is AI bias? IBM Think. IBM. https://www.ibm.com/think/topics/ai-bias
Julie Rogers and Alexandra Jonker (no date) What is data bias? IBM Think. IBM. https://www.ibm.com/think/topics/data-bias#
Merriam-Webster. (n.d.). Acronym. In Merriam-Webster.com dictionary. https://www.merriam-webster.com/dictionary/acronym
Radović, T., & Manzey, D. (2019). The Impact of a Mnemonic Acronym on Learning and Performing a Procedural Task and Its Resilience Toward Interruptions. Frontiers in Psychology, 10. https://doi.org/10.3389/fpsyg.2019.02522
Siagian, D. T., Maida, N., Irianto, D. M., & Sukardi, R. R. (2023). The Effectiveness of Mnemonic Device Techniques in Improving Long-Term Memory in Learning in Elementary Schools: A Literature Review. Equator Science Journal, 1(1), 24–30. https://doi.org/10.61142/esj.v1i1.4
Springer Nature Link. (01 January 2023) Artificial Intelligence in Africa: Emerging Challenges https://link.springer.com/chapter/10.1007/978-3-031-08215-3_5
UNESCO (24 October 2025) AI can make mistakes: Why media literacy matters more than ever. https://www.unesco.org/en/articles/ai-can-make-mistakes-why-media-literacy-matters-more-ever#
University of Maryland Library. (Jan 13, 2026) Artificial Intelligence (AI) and Information Literacy: Assess Content. https://lib.guides.umd.edu/c.php?g=1340355&p=9880574#
University of Michigan Library (Jan 20, 2026) "Fake News," Lies, and Misinformation. Library Research guides. https://guides.lib.umich.edu/fakenews#
Utica University. Bloom’s taxonomy https://www.utica.edu/academic/Assessment/new/Blooms%20Taxonomy%20-%20Best.pdf







