• glitchdx@lemmy.world (+2/-1) · 1 hour ago

    I have a book that I’m never going to write, but I’m still making notes and attempting to organize them into a wiki.

    Using almost natural conversation, I can explain a topic to the GPT, have it ask me questions to get me to write more, then have it summarize everything back to me in a format suitable for the wiki. In longer conversations, it will also point out possible connections between unrelated topics. It does get things wrong sometimes, though, such as forgetting which faction a character belongs to.

    I’ve noticed that GPT-4o is better for exploring new topics, as it has more creative freedom, while o1 is better for combining multiple fragmented summaries, as it usually doesn’t make shit up.

  • traches@sh.itjust.works (+10) · 4 hours ago

    I have a guy at work who keeps inserting obvious AI slop into my life and asking me to take it seriously. Usually it’s a meeting agenda that’s packed full of corpo-speak and doesn’t even make sense.

    I’m a software dev, and Copilot is sorta OK sometimes, but it also calls my code a hack every time I start a comment, and that hurts my feelings.

  • 2ugly2live@lemmy.world (+7) · 5 hours ago

    I used it once to write a polite “fuck off” letter to an annoying customer, and tried to see how it would revise a short story. The first one was fine, but using it on a story just made it bland and simplified a lot of the vocabulary. I could see people using it as a starting point, but I can’t imagine people just using whatever it spits out.

    • rumba@lemmy.zip (+1) · 2 hours ago

      just made it bland, and simplified

      Not always, but for the most part, you need to tell it more about what you’re looking for. Your prompts need to be deep and clear.

      “Change it to a relaxed tone, but make me feel emotionally invested; 10th-grade reading level; add descriptive words that fit the text; throw in an allegory and some metaphors.” The more you tell it, the more it’ll do. It’s not creative; it’s just making the text fit whatever you ask for. If you don’t give enough direction, you’ll just get whatever the random noise rolls, which isn’t always what you’re looking for. It’s not uncommon to need to write a whole paragraph about what you want from it. When I’m asking it for something creative, it sometimes takes half a dozen change requests. Once in a while it’ll be so far off base that I’ll clear the conversation and just try again. The way the randomness works, it will likely give you something completely different on the next try.

      My favorite thing to do is give it a proper outline of what I need it to write, and set the voice, tone, objective, and complexity. Whatever it gives back, I spend a good solid paragraph critiquing. When it’s >80% of how I like it, I take the text and do copy edits until I’m satisfied.

      It’s def not a magic bullet for free work. But it can let me produce something that looks like I spent an hour on it when I spent 20 minutes, and that’s not nothing.

  • Binette@lemmy.ml (+6) · 5 hours ago

    Not much. Every single time I asked it for help, it either gave me a recursive answer (e.g., if I ask “how do I change this setting?” it answers “by changing this setting”) or gave me a wrong answer. If I can’t already find it with a search engine, then it’s pretty useless to me.

  • Fedegenerate@lemmynsfw.com (+7) · 6 hours ago

    It’s my rubber duck/judgment-free space for homelab solutions. Have a problem? ChatGPT it, then Google its suggestions. Found a random command line? Ask ChatGPT what it does.

    I understand that I don’t understand it. So I sanity-check everything going into and coming out of it. Every detail is a placeholder, for security. Mostly, it’s just a space to find out why my solutions don’t work, find out what solutions might work, and a final check before implementation.

  • PeriodicallyPedantic@lemmy.ca (+7/-1) · 6 hours ago

    It’s changed my job: I now have to develop stupid AI products.

    It has changed my life: I now have to listen to stupid AI bros.

    My outlook: it’s for the worst. If the LLM suppliers can make good on the promises they make to their business customers, we’re fucked. And if they can’t, then this was all a huge waste of time and energy.

    Alternative outlook: if this were a tool given to the people to help their lives, that’d be cool, and it would even forgive some of the terrible parts of how the models were trained. But that’s not how it’s happening.

  • jg1i@lemmy.world (+36/-2) · 11 hours ago

    I absolutely hate AI. I’m a teacher and it’s been awful to see how AI has destroyed student learning. 99% of the class uses ChatGPT to cheat on homework. Some kids are subtle about it, others are extremely blatant about it. Most people don’t bother to think critically about the answers the AI gives and just assume it’s 100% correct. Even if sometimes the answer is technically correct, there is often a much simpler answer or explanation, so then I have to spend extra time un-teaching the dumb AI way.

    People seem to think there’s an “easy” way to learn with AI, that you don’t have to put in the time and practice to learn stuff. News flash! You can’t outsource creating neural pathways in your brain to some service. It’s like expecting to get buff by asking your friend to lift weights for you. Not gonna happen.

    Unsurprisingly, the kids who use ChatGPT the most are the ones failing my class, since I don’t allow any electronic devices during exams.

    • mrvictory1@lemmy.world (+1) · 1 hour ago

      Are you teaching at a university? Also, you said “99% of the class uses ChatGPT”; are there really so few people who don’t use AI?

    • polle@feddit.org (+7) · 8 hours ago

      As a student, I get annoyed the other way around. Just yesterday I had to tell my group for an assignment that we need to understand the system physically and code it ourselves in MATLAB, not copy-paste code from ChatGPT, because it’s way too complex. I’ve seen people waste hours like that. It’s insane.

    • Infinite@lemmy.zip (+4/-2) · 6 hours ago

      Sounds like your curriculum needs updating to incorporate the existence of these tools. As I’m sure you know, kids - especially smart ones - are going to look for the lazy solution. An AI-detection arms race is wasting time and energy, plus mostly exercising the wrong skills.

      AVID could be a resource for teaching ethics and responsible use of AI. https://avidopenaccess.org/resource/ai-and-the-4-cs-critical-thinking/

    • PeriodicallyPedantic@lemmy.ca (+3/-1) · 6 hours ago

      I’m generally ok with the concept of externalizing memory. You don’t need to memorize something if you memorize where to get the info.

      But you still need to learn how to use the data you look up, and determine whether it’s accurate and suitable for your needs. ChatGPT rarely is, and people’s blind faith in it is frightening.

  • frog_brawler@lemmy.world (+3) · edited · 7 hours ago

    I get an email from corporate about once a week that mentions it in some way. It gets mentioned in just about every all hands meeting. I don’t ever use it. No one on my team uses it. It’s very clearly not something that’s going to benefit me or my peers in the current iteration, but damn… it’s clear as day that upper management wants to use it but they don’t know how to implement it.

  • sudneo@lemm.ee (+11) · 11 hours ago

    After two years, it’s quite clear that LLMs still don’t have any killer feature. The industry’s marketing was already talking about skyrocketing productivity, but in reality very few jobs have changed in any noticeable way, and LLMs are mostly used for boring or bureaucratic tasks, which usually makes those tasks even more boring or useless.

    Personally, I have subscribed to Kagi Ultimate, which gives access to an assistant based on various LLMs, and I use it to generate snippets of code that I use for labs (training), like AWS policies, or to build commands from CLI flags, small things like that. For code it goes wrong very quickly, and anyway I find it much harder to re-read and unpack verbose code generated by someone else than to simply write my own. I don’t use it for anything that has to do with communication; I find that unnecessary and disrespectful, since it’s quite clear when the output is from an LLM.

    For these reasons, I generally think it’s a potentially useful nice-to-have tool, nothing revolutionary at all. Considering the environmental harm it causes, I am really skeptical the value is worth the damage. I am categorically against the people in my company who want to introduce “AI” (currently banned) for anything other than documentation lookup and similar tasks. In particular, I really don’t understand how obtuse people can be in thinking that email and presentations are good use cases for LLMs. The last thing we need is to make useless communication longer, with LLMs on both sides producing or summarizing bullshit. I can totally see, though, that some people find it easier to envision shortcutting bullshit processes with LLMs than simply changing or removing those processes.

  • LovableSidekick@lemmy.world (+27) · edited · 13 hours ago

    Never explored it at all until recently, I told it to generate a small country tavern full of NPCs for 1st edition AD&D. It responded with a picturesque description of the tavern and 8 or 9 NPCs, a few of whom had interrelated backgrounds and little plots going on between them. This is exactly the kind of time-consuming prep that always stresses me out as DM before a game night. Then I told it to describe what happens when a raging ogre bursts in through the door. Keeping the tavern context, it told a short but detailed story of basically one round of activity following the ogre’s entrance, with the previously described characters reacting in their own ways.

    I think that was all it let me do without a paid account, but I was impressed enough to save this content for a future game session and will be using it again to come up with similar content when I’m short on time.

    My daughter, who works for a nonprofit, says she uses ChatGPT frequently to help write grant requests. In her prompts she even tells it to ask her questions about any details it needs to know, and she says it does, and incorporates the new info to generate its output. She thinks it’s a super valuable tool.

  • raspberriesareyummy@lemmy.world (+5/-1) · 10 hours ago

    The only thing I have to worry about is not wasting my time responding to LLM trolls in Lemmy comments. People admitting to using LLMs in conversation with me instantly lose my respect, and I consider them lazy dumbfucks :p

    • PeriodicallyPedantic@lemmy.ca (+4) · edited · 3 hours ago

      You can lose respect for me if you want; I generally hate LLMs, but as a D&D DM I use them to generate pictures I can hand out to my players, to set the scene. I’m not a good enough artist and I don’t have the time to become good enough just for this purpose, nor rich enough to commission an artist for a work with a 24h turnaround time lol.

      I’m generally ok with people using LLMs to make their lives easier, because why not?

      I’m not ok with corporations using LLMs that have stolen the work of others to reduce their payroll or remove the fun, creative parts of jobs, just so some investors get bigger dividends or execs get bigger bonuses.

      • raspberriesareyummy@lemmy.world (+2) · 4 hours ago

        I’m generally ok with people using LLMs to make their lives easier, because why not?

        Because 1) it adds to killing our climate and 2) it increases dependence on western oligarchs/technocrats, who are generally horrible people and enemies of the public.

        • PeriodicallyPedantic@lemmy.ca (+1) · 3 hours ago

          I agree, but the crux of my post is that it doesn’t have to be that way - it’s not inherent to the training and use of LLMs.

          I think your second point is what makes the first point worse: this is happening at an industrial scale, with profit the only concern. We pay technocrats for the use of their services, and they use that money to train more models without a care for the devastation it causes.

          I think a lot of the harm caused by model training could be forgiven if the models were used to better the quality of life of the masses, but they’re not; they’re mainly used to enrich technocrats and business owners at any expense.

  • Caboose12000@lemmy.world (+13) · edited · 13 hours ago

    I got into Linux right around when it was first happening, and I don’t think I would’ve made it through my own noob phase if I didn’t have a friendly robot to explain all the stupid mistakes I was making while re-training my brain to think in Linux.

    A friendly expert or mentor, or even just a regular established Linux user, could probably have done a better job; the AI had me do weird things semi-often. But I didn’t have anyone in my life who liked Linux, let alone had time to be my personal mentor in it, so the AI was a decent solution for me.

  • RalphFurley@lemmy.world (+5/-1) · 12 hours ago

    I love using it for writing scripts that need to sanitize data. One example: I had a bash script that looped through a CSV of domain names and ran AXFR lookups to grab the DNS records and dump them into text files.

    These were domains on a Windows server that was being retired. The Python script I had Copilot write cleaned up the output and made the new zone files ready for import into PowerDNS, making sure the SOA and all that junk was set. PowerDNS would then import the new zone files into a SQL backend.
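    That cleanup step might look something like this. This is just a minimal sketch assuming standard dig AXFR output; the function name, field layout, and duplicate-SOA handling are my illustration, not the actual Copilot-written script:

    ```python
    def clean_axfr_dump(text: str) -> str:
        """Strip dig's comment lines from an AXFR dump and keep one SOA,
        leaving tab-separated records ready for a zone file."""
        records = []
        seen_soa = False
        for line in text.splitlines():
            line = line.strip()
            # dig prefixes its chatter with ';' -- drop it, plus blank lines
            if not line or line.startswith(";"):
                continue
            fields = line.split(None, 4)  # name, TTL, class, type, rdata
            if len(fields) < 5:
                continue
            name, ttl, klass, rtype, rdata = fields
            if rtype == "SOA":
                if seen_soa:
                    # AXFR repeats the SOA record at the end of the transfer
                    continue
                seen_soa = True
            records.append("\t".join((name, ttl, klass, rtype, rdata)))
        return "\n".join(records)
    ```

    The `maxsplit=4` keeps multi-field rdata (like the SOA’s serial/refresh/retry values) intact as a single column instead of splitting it apart.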

    Sure, I could’ve written it myself, but I’m not a Python developer. It took about 10 minutes of prompting, checking the code, re-prompting, then testing. Saved me a couple hours of work, easy.

    I use it all the time to knock out simple automation tasks when something like Ansible isn’t apropos.