• taiyang@lemmy.world · ↑100 · 1 day ago

    I’m the type to be in favor of new tech, but this really is a downgrade after seeing it available for a few years. Midterms hit my classes this week and I’ll be grading them next week. I’m already seeing people try to pass off GPT as their own work, but the quality of the answers has really dropped in the past year.

    Just this last week, I was grading a quiz on persuasion where, for fun, I have students pick an advertisement to analyze. You know, to personalize the experience. This was right after the Super Bowl, so we’re swimming in examples. It can even be audio, like a podcast ad, or a fucking bus bench, or literally anything else.

    60% of them used the Nike Just Do It campaign, not even a specific commercial. I knew something was amiss, so I asked GPT what example it would probably use if asked. Sure enough, Nike Just Do It.

    Why even cheat on that? The universe has a billion ad examples. You could even feed GPT one and have it analyze it for you. It’d be wrong, ’cause you have to reference the book, but at least it wouldn’t be as blatant.

    I didn’t unilaterally give them 0s, but they usually got it wrong anyway, so I didn’t really have to. I did warn them that using it this way on the midterm will likely get them in trouble, though, as it is against the rules. I don’t even care that much because, again, it’s usually worse quality anyway, but I have to grade this stuff; I don’t want to suffer like a sci-fi magazine getting thousands of LLM submissions trying to win prizes.

    • faythofdragons@slrpnk.net · ↑1 · 3 hours ago

      Why even cheat on that? The universe has a billion ad examples.

      I’m not one of your students, but I do remember how I thought in high school. Both of my parents worked, so I was the one that had to cook dinner and help my little brothers with their homework, then I had multiple hours of my own homework to do.

      While I do enjoy analyzing media, the homework I struggled with would get priority. I was the oldest, so I didn’t have anybody to ask for help with questions, and I often had to spend more time than intended on topics I struggled with. So I’d waste the whole night struggling with algebra and chemistry, then do the remaining ‘easy’ assignments as quickly and carelessly as possible so I could get to bed before midnight. Getting points knocked off for shoddy work is far preferable to getting a zero for not doing it at all, and if I could get to bed at a reasonable time, I wouldn’t lose points in the morning class for falling asleep.

      It just… makes sense to cheat sometimes.

    • RunawayFixer@lemmy.world · ↑3 · 7 hours ago

      Students cheating is always going to be a thing; only the technology evolves. It’s always been an interesting cat-and-mouse game imo, as long as you’re not too personally affected (sorry).

      I was a student when the internet started to spread and some students had internet at home, while most teachers were still oblivious. There was a French book report due, and 4 kids had picked the same book because they had found a good summary online. 3 of the kids hand-wrote a summary of the summary; 1 kid printed out the original summary and handed that in. 3 kids received a 0, and the 4th got a warning to not let others copy his work :D

      • taiyang@lemmy.world · ↑1 · 6 hours ago

        Lol, well, it sounds like a bad assignment if you can get away with just a summary, although I guess since it’s a language class(?) it’s more reasonable. I’m not really shaken up over this type of thing, though. I’m not pro-cheating, but it’s not about justice or morality; it’s because education is for the students’ benefit and they’re missing out on growth. We really need more critical thinkers in this world. Like, desperately need them. Lol

    • Vespair@lemm.ee · ↑3 ↓1 · 10 hours ago

      The reason chatgpt would recommend Nike, though, is its human-based training data. This means that for most humans the Nike ad campaign would also be the first suggestion to come to mind.

      I’m not saying LLMs aren’t having an impact, or denying that said impact is negative, but the way people talk about them is infuriating because it just displays a lack of understanding or forethought on how these systems work.

      People always talk about how they can tell something “sounds like chatgpt” or, as is the case here, is the default chatgpt answer, while ignoring that the only reason it is so is the real human patterns it is mimicking.

      Brief caveat: of course chatgpt is wildly fallible, and when producing purely generative content it pulls from nowhere because it’s just remixing unrelated sources, but for things within the normal course of discussion its output is vastly more human-like than we want to pretend.

      I would almost guarantee that Nike’s “Just Do It” was the single most popular answer to this kind of assignment before chatgpt existed too.

      • taiyang@lemmy.world · ↑6 · 9 hours ago

        Except I’ve given this quiz prior to GPT and no, it wasn’t used even once, because it’s not even a current advertisement campaign. My average 19-year-old usually uses examples from influencers, for instance, so I get stuff like Hello Fresh or Better Help, usually specific to an ad read on a stream from the past couple of weeks. After all, the question asks for ads they’ve seen and remembered.

        Also, you neglect how these models get data. It’s likely pulled not because it’s a favorite, but because GPT steals from textbooks, blogs, etc., and those sources use it as a go-to example (especially if the author leans on 90s examples). Plus, never mind that your Joe Schmo internet user isn’t the same as the group I’m teaching; most of them weren’t even alive when the Just Do It campaign started, lol.

        It really undermines the point of coming up with their own examples and applying theory to something from their lives. I am not inherently anti-GPT, but this is a very bad use case.

    • Bamboodpanda@lemmy.world · ↑1 ↓1 · 8 hours ago

      "I recall Ethan Mollick discussing a professor who required students to use LLMs for their assignments. However, the trade-off was that accuracy and grammar had to be flawless, or their grades would suffer. This approach makes me think—we need to reshape our academic standards to align with the capabilities of LLMs, ensuring that we’re assessing skills that truly matter in an AI-enhanced world.

      • taiyang@lemmy.world · ↑1 · 6 hours ago

        That’s actually something that was discussed like, two years ago within the institutions I’m connected to. I don’t think it was ever fully resolved, but I get the sense that the inaccurate results made it too troublesome.

        My mentality coming out of an education degree is that if your assessment can be done by AI, you’re relying too much on memorization and not enough on critical thinking. I complain in my reply, but the honest truth is these students mostly lost points because they didn’t apply theory to the example (although that’s because the example wasn’t fully understood, since it wasn’t their own). K-12 generally fails at this, which is why freshmen have the hardest time with these things, GPT or otherwise.

    • Shou@lemmy.world · ↑28 · 24 hours ago

      As someone who has been a teenager: cheating is easy, and class wasn’t as fun as video games. Plus, what teenager understands the importance of an assignment, or the skill it is supposed to make them practice?

      That said, I unlearned copying summaries when I heard I had to talk about the books I “read” as part of the final exams in high school. The examiner would ask very specific plot questions often not included in the online summaries people posted… unless those summaries were too long to read. We had no other option but to take it seriously.

      As long as there’s nothing in the assignment that GPT can’t do for them, they won’t learn how to write/do the assignment.

      Perhaps use GPT to fail assignments? If GPT comes up with the same subject and writing style/quality, subtract points/give 0s.

      • taiyang@lemmy.world · ↑15 · 22 hours ago

        I have a similar background and, no surprise, it’s mostly a problem in my asynchronous class. The ones who have my in-person lectures are much more engaged, since it is a fun topic and I don’t enjoy teaching unless I’m also making them laugh. No dice with asynchronous.

        And yeah, I’m also kinda doing that with my essay questions, requiring stuff you sorta can’t just summarize. Critical thinking matters, even if you’re not just trying to detect GPT.

        I remember reading that GPT isn’t really foolproof at verifying bad usage, and I’m not willing to fail anyone over it unless I have to. False positives and all that. Hell, I just used GPT as a sounding board for a few new questions I’m writing, and its advice wasn’t bad. There are good ways to use it, just… you know, not so stupidly.