

Morons existed long before 2009. They are not a new phenomenon that accounts for a 40% increase in casualties. So your point, astute though it may be, is tangential to the article.


I would definitely get both remakes if they ever see the light of day (and the reviews are decent). KOTOR was a great game and story.


Interesting. My grandpa was Albanian. Not that he ever talked about it, really. Or maybe I was too young at the time to listen.
Anyhow, I’m glad we weren’t dicks to his people. There aren’t many countries you can say that about.


I don’t have context for your question, and sometimes someone gaslights you into thinking what they want is the right thing. But taking your question at face value, having someone encouraging you to be your best self isn’t a bad thing. I think that’s one of the ways we grow as people.


The fuck is it about right wingers that makes them so fascinated with other people’s junk that they can’t keep their hands off of them?


I’ve noticed, at least with the model I occasionally use, that the best way I’ve found to consistently get western eyes isn’t to specify round eyes or to ban almond-shaped eyes, but to make the character blonde and blue-eyed (or make them a cowgirl or some other stereotype rarely associated with Asian women). If you want to generate a western woman with straight black hair, you are going to struggle.
I’ve also noticed that if you want a chest smaller than DDD, it’s almost impossible with some models — unless you specify that they are a gymnast. The model makers are so scared of generating a chest that could ever be perceived as less than robustly adult that just generating realistic proportions is impossible by default. But for some reason gymnasts are given a pass, I guess.
This can be addressed with LoRAs and other tools, but every time you run into one of these hard associations, you have to assemble a bunch of pictures demonstrating the feature you want, and the images you choose had better not be too self-consistent or you might accidentally bias some other trait you didn’t intend to.
Contrast a human artist who can draw whatever they imagine without having to translate it into AI terms or worry about concept-bleed. Like, I want portrait-style, but now there are framed pictures in the background of 75% of the gens, so instead I have to replace portrait with a half-dozen other words: 3/4 view, posed, etc.
Hard association is one of the tools AI relies on — a hand has five fingers and is found at the end of an arm, etc. The associations it makes are based on the input images, and the images selected or available are going to contain other biases just because, for example, there are very few examples of Asian women wearing cowboy hats and lassoing cattle.
Now, I rarely have any desire to generate images, so I’m not playing with cutting edge tools. Maybe those are a lot better, but I’d bet they’ve simply mitigated the issues, not solved them entirely. My interest lies primarily in text gen, which has similar issues.


The model is publicly available. You and I can run it — I do. People will continue to do research long after the bubble bursts. People will continue to make breakthroughs. The technology will continue forward, just at a slower, healthier pace once the money tightens up.


We started putting our shit up almost immediately after Halloween. I don’t mind all the gaudy bullshit, just the work and storage space. I just want to put up projector lights. My wife complains that they look like someone didn’t put any effort in — I said that’s exactly why I like them. At least we were able to agree on a prelit tree with no extra ornaments. I do miss the extravagant trees my grandma put up when I was little but it’s so much breakable glass shit.
Last year I put up permanent Govee lights. They were pretty good but then we had our roof redone this fall and I noticed half of them don’t work now. C’est la vie.


The people releasing public models aren’t the ones doing this for profit. Mostly. I know OpenAI and DeepSeek both have. Guess I’ll have to go look up who trained GLM, but I suspect the resources will always be there to push the technology forward at a slower pace. People will learn to do more with fewer resources, and that’s where the bulk of the gains will be made.
Edit: A Chinese university trained GLM. Which is the sort of place where I expect research will continue to be done.


I pay for it. One of the services I pay for is about $25/mo and they release about one update a year or so. It’s not cutting edge, just specialized. And they are making a profit doing a bit of tech investment and running the service, apparently. But also they are just tuning and packaging a publicly available model, not creating their own.
What can’t be sustained is this sprint to AGI or to always stay at the head of the pack. It’s too much investment for tiny gains that ultimately don’t move the needle a lot. I guess if the companies all destroy one another until only one remains, or someone really does attain AGI, they will realize gains. I’m not sure I see that working out, though.


What does that chatbot add?


“AI chatbot”. Which means ChatGPT to 99% of people, almost certainly including the journalist, who doesn’t live under a rock. They are just avoiding naming it.


what is the message to the audience? That ChatGPT can investigate just as well as BBC.
What about this part?
Either it’s irresponsible to use ChatGPT to analyze the photo or it’s irresponsible to present to the reader that chatbots can do the job. Particularly when they’ve done the investigation the proper way.
Deliberate or not, they are encouraging Facebook conspiracy debates by people who lead AI to tell them a photo is fake and think that’s just as valid as BBC reporting.


Okay I get you’re playing devil’s advocate here, but set that aside for a moment. Is it more likely that BBC has a specialized chatbot that orchestrates expert APIs including for analyzing photos, or that the reporter asked ChatGPT? Even in the unlikely event I’m wrong, what is the message to the audience? That ChatGPT can investigate just as well as BBC. Which may well be the case, but it oughtn’t be.
My second point still stands. If you sent someone to look at the thing and it’s fine, I can tell you the photo is fake or manipulated without even looking at the damn thing.


To whom? People without jets? That sounds like a really niche market…


A “chatbot” is not a specialized AI.
(I feel like maybe I need to put this boilerplate in every comment about AI, but I’d hate that.) I’m not against AI or even chatbots. They have their uses. This is not using them appropriately.


It’s been a minute, but as I recall, he did not treat evidence of wrongdoing the same between Hillary and Trump.
The case against Comey is ridiculous, but frankly, he is having the day he created in the first place. Fuck that guy. Not that I think he should be found guilty of anything here, but I’m not celebrating his win, just Trump’s loss.


A BBC journalist ran the image through an AI chatbot which identified key spots that may have been manipulated.
What the actual fuck? You couldn’t spare someone to just go look at the fucking thing rather than asking ChatGPT to spin you a tale? What are we even doing here, BBC?
A photo taken by a BBC North West Tonight reporter showed the bridge is undamaged
So they did. Why are we talking about ChatGPT then? You could just leave that part out. It’s useless. Obviously a fake photo has been manipulated. Why bother asking?
To take her to the gravel pit and brain her is no less than she deserves.