25+ yr Java/JS dev
Linux novice - running Ubuntu (no Windows/Mac)

  • 0 Posts
  • 1.7K Comments
Joined 1 year ago
Cake day: October 14th, 2024

  • I’ve noticed, at least with the model I occasionally use, that the best way I’ve found to consistently get western eyes isn’t to specify round eyes or to ban almond-shaped eyes, but to make the character blonde and blue-eyed (or make them a cowgirl or some other stereotype rarely associated with Asian women). If you want to generate a western woman with straight black hair, you are going to struggle.

    I’ve also noticed that if you want a chest smaller than DDD, it’s almost impossible with some models unless you specify that they are a gymnast. The model makers are so scared of generating a chest that could ever be perceived as less than robustly adult that just generating realistic proportions is impossible by default. But for some reason gymnasts are given a pass, I guess.

    This can be addressed with LoRAs and other tools, but every time you run into one of these hard associations, you have to assemble a bunch of pictures demonstrating the feature you want, and the images you choose had better not be too self-consistent or you might accidentally bias some other trait you didn’t intend to. (A rough sketch of that kind of workaround is at the end of this comment.)

    Contrast that with a human artist, who can draw whatever they imagine without having to translate it into AI terms or worry about concept-bleed. Like, I want portrait-style, but now there are framed pictures in the background of 75% of the gens, so instead I have to replace portrait with a half-dozen other words: 3/4 view, posed, etc.

    Hard association is one of the tools AI relies on: a hand has 5 fingers and is found at the end of an arm, etc. The associations it makes are based on the input images, and the images selected or available are going to contain other biases just because, for example, there are very few examples of Asian women wearing cowboy hats and lassoing cattle.

    Now, I rarely have any desire to generate images, so I’m not playing with cutting edge tools. Maybe those are a lot better, but I’d bet they’ve simply mitigated the issues, not solved them entirely. My interest lies primarily in text gen, which has similar issues.
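
    Purely as illustration, here is roughly what that kind of workaround looks like with the diffusers library. This is a minimal sketch, not anyone’s actual setup: the base checkpoint, the LoRA file path, and the prompts are all placeholders.

    ```python
    # Minimal sketch: countering a hard association with a LoRA plus a negative prompt.
    # The checkpoint name, LoRA path, and prompts below are hypothetical placeholders.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # any Stable Diffusion checkpoint works here
        torch_dtype=torch.float16,
    ).to("cuda")

    # A LoRA trained on a small set of images showing the under-represented trait.
    pipe.load_lora_weights("./loras/western-straight-black-hair.safetensors")

    image = pipe(
        # Swap "portrait" for framing words that don't drag framed pictures into the scene.
        prompt="3/4 view, posed studio photo, western woman, straight black hair",
        # The negative prompt suppresses the traits the association keeps pulling in.
        negative_prompt="framed pictures on wall, blonde hair, blue eyes",
        num_inference_steps=30,
        guidance_scale=7.0,
    ).images[0]
    image.save("out.png")
    ```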



  • We started putting our shit up almost immediately after Halloween. I don’t mind all the gaudy bullshit, just the work and storage space. I just want to put up projector lights. My wife complains that they look like someone didn’t put any effort in; I said that’s exactly why I like them. At least we were able to agree on a prelit tree with no extra ornaments. I do miss the extravagant trees my grandma put up when I was little, but it’s so much breakable glass shit.

    Last year I put up permanent Govee lights. They were pretty good but then we had our roof redone this fall and I noticed half of them don’t work now. C’est la vie.


  • The people releasing public models mostly aren’t the ones doing this for profit, though I know OpenAI and DeepSeek both have. Guess I’ll have to go look up who trained GLM, but I suspect the resources will always be there to push the technology forward at a slower pace. People will learn to do more with fewer resources, and that’s where the bulk of the gains will be made.

    Edit: A Chinese university trained GLM. Which is the sort of place where I expect research will continue to be done.


  • I pay for it. One of the services I pay for is about $25/mo, and they release about one update a year or so. It’s not cutting edge, just specialized. And they are making a profit doing a bit of tech investment and running the service, apparently. But also they are just tuning and packaging a publicly available model, not creating their own.

    What can’t be sustained is this sprint to AGI or to always stay at the head of the pack. It’s too much investment for tiny gains that ultimately don’t move the needle a lot. I guess if the companies all destroy one another until only one remains, or someone really does attain AGI, they will realize gains. I’m not sure I see that working out, though.





  • What is the message to the audience? That ChatGPT can investigate just as well as the BBC.

    What about this part?

    Either it’s irresponsible to use ChatGPT to analyze the photo or it’s irresponsible to present to the reader that chatbots can do the job. Particularly when they’ve done the investigation the proper way.

    Deliberate or not, they are encouraging Facebook conspiracy debates by people who lead AI to tell them a photo is fake and think that’s just as valid as BBC reporting.


  • Okay, I get you’re playing devil’s advocate here, but set that aside for a moment. Is it more likely that the BBC has a specialized chatbot that orchestrates expert APIs, including one for analyzing photos, or that the reporter asked ChatGPT? Even in the unlikely event I’m wrong, what is the message to the audience? That ChatGPT can investigate just as well as the BBC. Which may well be the case, but it oughtn’t be.

    My second point still stands. If you sent someone to look at the thing and it’s fine, I can tell you the photo is fake or manipulated without even looking at the damn thing.





  • A BBC journalist ran the image through an AI chatbot which identified key spots that may have been manipulated.

    What the actual fuck? You couldn’t spare someone to just go look at the fucking thing rather than asking ChatGPT to spin you a tale? What are we even doing here, BBC?

    A photo taken by a BBC North West Tonight reporter showed the bridge is undamaged

    So they did. Why are we talking about ChatGPT then? You could just leave that part out. It’s useless. Obviously a fake photo has been manipulated. Why bother asking?


  • MagicShel@lemmy.zip to Voyager@lemmy.world · Can we do better than this?
    I’ve never looked at the codebase, but I could make a reasonable guess as to what it’s about. I’m guessing it has to do with pasting a fediverse link into a post and making it point to the actual instance instead of the federated link.

    I could be wrong, but it looks like it would make sense to the maintainers of the code.

    Or am I just missing a joke here?

    Edit: sorry if this comes across as dickish. I don’t have context for the question. Frankly, it looks like a very efficient use of words; I’m a little bit in awe. Take away one word and it would be gibberish to me. My own commit messages are not nearly as efficient.






  • It’s openings, not employment. Which is why I asked whether the charts pasted here are showing employment or openings. And why I complained that the chart cuts off everything pre-Covid. If employment is going down, that’s a problem. If job openings are going down, it isn’t AI but a regression to the mean. This video is the same jobs trend looked at through a different lens. It’s pretty clear and logical that the demand for more seasoned professionals is more static than for juniors.

    These are numbers taken from public data and put into context, and I don’t think the fact that it’s posted on TikTok is relevant to the math. TikTok just has a better discovery algorithm for me; that’s where I saw this guy’s work and started following him, and the short-form length keeps the content from outrunning my attention span.

    That all being said, if employment of juniors is trending down and not just reverting to the mean, then I agree with the conclusion that this is a doomsday scenario cooking over the next 40 years. I have been saying for a couple of years that this is a concern to watch out for. But so far I haven’t seen numbers that concern me. I’ll be continuing to watch this space closely because it’s directly related to my interests.