I know the reputation AI has on Lemmy, but some users (like myself) have found that LLMs can be useful tools.
What are fellow AI users using these tools for? And which models do you find the most useful?
The only thing that comes to mind: I wanted a shell script that moved every file in a directory to another directory, one at a time, slowly, and I didn’t want to learn sh from scratch, so I asked an LLM for a script that would do it.
The script didn’t work, but figuring out how to fix it was easier than writing it from scratch.
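For the curious, the fixed version ended up roughly like this (the paths and the delay here are placeholders, not the exact script):

```sh
#!/bin/sh
# Move every regular file from SRC to DST one at a time, pausing between moves.
# SRC, DST, and the 5-second delay are placeholders; adjust to taste.
SRC="/path/to/source"
DST="/path/to/destination"

for f in "$SRC"/*; do
    [ -f "$f" ] || continue   # skip subdirectories and anything that isn't a regular file
    mv -- "$f" "$DST"/
    sleep 5
done
```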
I felt bad for the environment I was destroying, and I would never pay for this shit.
Honestly I’m part of the problem a little bit.
In my hobby project I used GitHub Copilot to help me ramp up on unfamiliar tech. I was integrating three unfamiliar platforms in an unfamiliar programming language, and it helped expose the APIs and language features I didn’t know about. It was almost like a tutorial; it’d make some code that was kinda broken, but fixing it would introduce me to new language features and API resources that would help me. Which was nice, because I struggle to just read API specs.
I’ve also used it in my D&D campaign to create images of new settings. It’s just a three-player weekly game, so it’s hard to justify paying an artist for a rush job. Not great, I know. I hope the furry community has more backbone than I do, because they’re singlehandedly keeping the illustration industry afloat at this point.
The one the other department tried, which failed to meet expectations dramatically. It gave management a healthy dose of reality on “AI”.
My CRM system at work has what’s called “Genius AI” integrated into it. When customer service reps receive calls that require site visits, the AI auto-fills the work ticket from the phone conversation: it adds the contact name and numbers and even puts in a brief description of what service they require. The AI also transcribes our calls into text so we can refer back to them or get caught up on a job when someone is out sick. It wasn’t life-changing and it wasn’t forced on us, but as a simple aid it makes life a bit easier.
Supermaven code autocomplete in VS Code is really nice.
If we’re talking chatbots, I see them as another level of abstraction on top of search, which is useful, but I have concerns about the energy use. Other uses I’ve encountered are mostly convenience: one tool can do a bunch of things that individual pieces of software already do, as a one-stop shop. I haven’t been directly involved in other aspects, but I’m aware of how they’re baked into things like facial recognition and tracking.
Don’t use AI chat as a replacement for search except on popular subjects with broad consensus, which unfortunately is when you generally don’t need it.
It’s fine as long as it gives references to check out. I mean, it’s not fine because of the energy usage, but if that were solved I would use it for search. Again, as long as it tells me its sources.
If what you want is sources, a regular Google search will do a better job if it’s a popular subject.
ChatGPT and its kin will be inexplicably creative in their choice of sources, and in their summaries thereof. And sometimes in the sources themselves.
If it’s something you care to get right, just skip AI.
If it’s meaningless, then it’s harmless in its potential inaccuracy.

See, just like a normal search, it’s up to you to evaluate it. AI search is, as I said, another abstraction. Not using it is like turning off the little snippets search engines show nowadays and going back to clicking each and every link. The problem is people just taking the response as gospel with no critical thought.
Like a normal search, except it only provides like four links, its choice of links is even worse than Google SEO, and it provides inaccurate summaries of them rather than relevant text snippets.
So yes, people just taking the response as gospel is bad. But also it’s just worse than search if you’re using it as a search.
You go to the sites just like you would with the snippets, and if they don’t pan out you can rephrase or just go back to a normal search. That’s what I meant by a level of abstraction. You can skip the scrolling, check what it gives you, and if it’s good you save time (much like the snippets saved time); if not, you’re no worse off and you correct or go slightly older school. As much as I agree the chatbots can be wrong, I don’t find them to usually be off base. They generally find pretty decent resources. Now, when they’re wrong they can be really wrong, but that’s no different from someone searching and just using the first result without going through and evaluating each link. It reminds me of when a neighbor in the dorms was explaining HTML to me as he was making a site and I was like, why? Just use Gopher.
But like I said, if you’re using it like a search then it’s just worse.
It’s way slower, it returns way fewer results, and the summaries can never be as accurate as verbatim quotes from the pages themselves.

The only reason to use AI is to get the summary, and then you’re not using it as a search. Maybe it provides some references you can use to fact-check, but that’s still not a search.
You have to remember, LLMs are literally just auto-complete. They don’t have the goal of giving you an answer or resources; they have the goal of producing text that completes the conversation in a way that looks similar to what they’ve seen before. If they can give you a biased answer supported by cherry-picked references, that’s just as valid a completion, because it looks like how such a conversation might be completed.
In situations where the response can be inherently assessed for correctness (e.g. art, though that’s a whole other ethical issue) or correctness can be automatically verified (e.g. programming), there is some value in limited use.
Although I personally think the implications of putting it in the hands of business owners aren’t worth it, that’s another topic.
I chat with LLMs to help me remember shit I already know, or to change the tone of stuff I write, but that’s about it.
AI is great at helping me multitask. For example, with AI, I can generate misinformation and destroy the environment at the same time!
Shit. Where I come from, people don’t need AI for that. They just hang a Trump flag in the back of their truck and roll coal through town.
Why not both? ¯\_(ツ)_/¯
It sucks so much that if the US had kept up with green energy infrastructure (or nuclear power), all these datacenters (not just AI) could be running on abundant and cheap power without killing our environment.
xAI running off of fucking diesel generators should be a crime, but environmental and human health issues get less attention than “look everyone, it called itself Hitler, so crazy!!!”
Can you provide some specific examples? I can think of a few ways to implement some of that for my own use case.
If you read their comment you can see they did.
Not all AI is bad; there are types of AI that are actually useful (in a good way) for people. Don’t equate AI with LLMs. LLMs are just one branch of AI. SMH, people…
A lot of what we take for granted in software nowadays was once considered “AI”. Every NPC that follows your character in a video game while dynamically accounting for obstacles and terrain features uses the A* algorithm, which is commonly taught in college courses on “AI”. Gmail sorting spam from non-spam (and not really all that well, honestly)? That’s “AI”. The first version of Google’s search algorithm was also “AI”.
If you’re asking about LLMs, none. Zero. Zip. Nada. Not a goddamned one. LLMs are a scam that need to die in a fire.
Also, a lot of things that were considered automation before are now rebranded as AI. They are still often good as well.
I don’t think there are many consumer use cases for things like LLMs, but highly focused, specialized models seem useful. Like protein folding, identifying promising medications, or finding patterns in giant scientific datasets.
I use it to help give me ideas for D&D character building and campaigns. I used it to help me write a closing poem for my character who sacrificed himself for the greater good at the end of a long two-year run. It gave me the scaffold and then I added some details. It also generated my latest character picture based on some criteria.
Otherwise, it gave me some recommendations on how to flavor up a dish the other day. Again I used it but added my own flair.
I asked it a question to help me remember a movie title based on some criteria (tip-of-my-tongue style), and it nailed it spot on.
I’ll tell you one place I hated it today: the Hardee’s drive-through line. Robot voice drives me up the wall.
I use it to help me write emails at work pretty regularly. I have pretty awful anxiety, and it can take me a while to make sure my wording is right. I don’t like using it, not really, but would I rather waste four hours of my time typing up an email to all the bosses that doesn’t sound stupid AF, or would I rather ask for help and edit what it gives me instead?
I know people use it to summarize policy or to brainstorm or to come up with very rough drafts.
I understand the connotations of using it, but I would definitely not say there’s zero consumer use case for it at all.
LLMs can be useful in hyperfocused, contained environments where the model is trained on a specific data set to provide a service for one specific function only. It won’t be able to answer random questions you throw at it, but it can be helpful with the one thing it’s trained to do.
Also known as “narrow AI”. You know, like a traffic camera that can put a rectangle on every car in the picture but nothing else. Those kinds of narrow applications have been around for decades already.
I tried Whisper+ voice-to-text this week.
It uses a downloaded 250 MB model from Hugging Face and processes voice completely offline.
The accuracy is 100% for known words, so far.
I use it for transcribing texts, messages and diary entries.
* I’d be interested to know if it has a large power drain per use.
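For anyone who wants to try the same idea on a desktop, the openai-whisper CLI does roughly the same thing offline (assuming Python and ffmpeg are installed; the model name below is just a placeholder, pick whichever size fits your hardware):

```sh
# Install the CLI and transcribe a recording entirely offline.
# "base" is a small placeholder model; larger ones are more accurate but slower.
pip install openai-whisper            # requires ffmpeg on PATH
whisper recording.wav --model base --language en --output_format txt
```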
The only AI tool I’ve found actually useful and reliable is AI denoise for photo editing.
“AI” as in the “generative AI” that’s been hyped and mainstream for the last five years: JetBrains’ locally run code line completion. Sometimes faster than typing it yourself, if there’s enough context.
Machine learning stuff that existed well before but got exactly zero hype: image tagging/face detection.
JetBrains’ local completion isn’t even an LLM; it’s a sort of ML fuckery that’s very light on compute requirements. They released it initially just before the AI craze.