"But there is a difference between recognising AI use and proving its use. So I tried an experiment. … I received 122 paper submissions. Of those, the Trojan horse easily identified 33 AI-generated papers. I sent these stats to all the students and gave them the opportunity to admit to using AI before they were locked into failing the class. Another 14 outed themselves. In other words, nearly 39% of the submissions were at least partially written by AI."
Article archived: https://web.archive.org/web/20251125225915/https://www.huffingtonpost.co.uk/entry/set-trap-to-catch-students-cheating-ai_uk_691f20d1e4b00ed8a94f4c01
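(Checking the arithmetic in the quote: 33 + 14 = 47 flagged submissions out of 122, and 47/122 ≈ 38.5%, hence "nearly 39%".)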


My math undergrad classes were largely like that, too, and that was before there were smartphone solver apps, let alone "AI". A typical grade breakdown was 10% assignments, 30% midterm, and 60% final in first and second year; in third and fourth year, it was entirely midterm + final.
They gave a few marks for assignments in the lower years because students arriving from high school often think grades are the only thing that matters, and won't practice unless it's for marks. If you haven't figured out that practice is important by third year…
And agreed re: changing the focus of our assessment, just as "trivia-style" assessment of memorized facts in history should no longer be used by anyone in a post-search Web 2.0 world. (Although it was never good assessment, regardless.)