AI Literacy in the Biomedical Sciences

AI Writing Detection

When using the tools and techniques on this page, keep in mind that you cannot tell whether a piece of writing is AI-generated just by reading it, even if you are sure that you can. Neither can AI writing detectors, which are notoriously unreliable. AI humanizing services exist specifically to make AI-generated writing sound more "natural" to both human readers and AI writing detectors. Any attempt to discern whether a piece of writing is AI-generated will be biased against groups such as English-language learners, whose writing is disproportionately flagged as AI-generated, and those who cannot afford AI humanizers. It is fine to judge a piece of writing against a rubric or on its own merits, but do not attempt to determine whether a specific piece of writing was written by AI.

Evaluation Tools and Tests

Checking the veracity and trustworthiness of images, videos, or writing is a crucial step, whether they are AI-generated or not. Several frameworks exist for evaluating the trustworthiness of information, and most take the form of an acronym. Any of these tests works well for checking the reliability of information you find:

  • CRAAP (Currency, Relevance, Authority, Accuracy, and Purpose)
  • RADAR (Relevance, Authority, Date, Appearance, Reason for writing)
  • SIFT (Stop, Investigate the source, Find better coverage, Trace claims, quotes and media to the original context)
  • ZODIAC (Zooming in, Other opinions and sources, Date of publication/revision, Intended audience, Author/Authority, Consistency)

All of these tests have a few things in common. Each asks the reader to do a little digging into the veracity of the material rather than taking it at face value. Common tasks include checking the authority of the author or publisher, checking the date to make sure the news is current, and considering the purpose or reason for writing the piece: is it informative, persuasive, or something else?

Evaluating News About AI (ROBOT Test)

The ROBOT test (Reliability, Objective, Bias, Ownership, Type) is designed specifically to evaluate the trustworthiness of news about AI. If you come across news about an AI tool, especially news about a new tool or feature coming from a company that creates AI tools (like OpenAI or Google), look for the following:

  • Reliability: If the news was written by someone outside of an AI company, what are their credentials? If it was written by an AI company, are they giving you a full picture? Could they be withholding information (including trade secrets) or painting a biased picture of their tools?
  • Objective: What is the objective of the AI tool itself (what is it for?), and what is the objective of the information shared about it? Is the writer writing solely to inform, or do they have a financial incentive?
  • Bias: What biases may exist in the tool itself? What ethical issues might arise from using the tool?
  • Ownership: Who owns the AI tool - is the group private or public? Who has access to the tool?
  • Type: What type of AI tool is being covered - does it generate words, images, video, etc.? Is it something more complex, like a self-driving car? What information was input into the AI? Does it require human intervention?
