When using the tools and techniques on this page, keep in mind that you cannot tell whether a piece of writing is AI-generated just by reading it, even if you feel sure that you can. Neither can AI writing detectors, which are notoriously unreliable. AI "humanizing" services exist specifically to make AI-generated writing sound more natural to both human readers and AI detectors. If you attempt to discern whether a piece of writing is AI-generated, you risk being biased against groups such as English-language learners or those who can't afford AI humanizers. It is fine to judge a piece of writing against a rubric or on its own merits, but do not attempt to determine whether a specific piece of writing was written by AI.
Checking the veracity and trustworthiness of images, videos, and writing is a crucial step, whether they are AI-generated or not. Several existing frameworks can help you check the trustworthiness of information, and they usually come in the form of an acronym. Any of these tests works well for checking the reliability of information you find:
All of these tests have a few things in common. They ask the reader to do a little digging into the veracity of the material rather than taking it at face value. Common tasks include checking the authority of the author or publisher, checking the date to make sure the news is current, and considering the purpose or reason for writing the piece - is it informative, persuasive, or something else?
The ROBOT test (Reliability, Objective, Bias, Ownership, Type) is designed specifically to evaluate the trustworthiness of news about AI. If you come across news about an AI tool, especially news about a new tool or feature from a company that creates AI tools (like OpenAI or Google), here is what to look for: