• 3 Posts
  • 102 Comments
Joined 3 years ago
Cake day: July 16th, 2023



  • Try as I might, I can’t imagine he actually believes he’s a competent politician.

    Well, a mid-tier politician, but a maximally corrupt empty suit, “promoted” as a lobby puppet into important-sounding positions on supervisory boards and other pointless leadership circles. This man has spent his whole life surrounded by sycophants who are busy blowing powdered sugar up his ass for their own profit. Whether he simply doesn’t understand that (anymore), or doesn’t care, makes no difference, but it explains his pathological narcissism. Maybe part of it is innate, but a lot of it is learned. I’m absolutely certain Merz is the kind of person who admires himself in the mirror every morning as the absolute hottest stud, not as motivation or with a bit of self-irony, but with full-throated conviction.



  • Just let them leave, or even better, pressure them to withdraw and finally show some teeth. With this whole “pwease amewica do not leave us” act, all they show is that Trump is right with his paper tiger argument. Fuck the US; reform NATO or fully rebuild it properly, without interference from rogue and enemy states.

    So fucking what that the EU doesn’t have as many forces as the US does? If Europe or NATO learned anything from the Ukraine war, it’s that a kid with a gamepad and a 100 USD drone jerry-rigged with explosives can take out heavy armor. What’s an American soldier in NATO worth if he’ll go rogue on orders from the tangerine? And when it comes to a direct conflict with the US, it’s not about military strength, but only about how far the current president is willing to go. If they decide on scorched earth (conventional, not nuclear), you’re fucked, no matter whether you’re in NATO or they’re in NATO, simply because of the stockpile of painful shit they have access to.

    Restructure NATO as a proper defense alliance centered around European forces and under European command, where every member contributes defensive capabilities, without hiding behind single members, and with their own technology instead of constantly relying on US and Israeli tech.



  • Personal opinion: I came over when the API changes went live, simply because being forced onto their official “app”, which was a pile of garbage in every aspect for me, was too much. And since that started the exodus, I couldn’t be arsed to mess around with 3rd-party apps to make them work again; I was too lazy, and it simply didn’t feel worth it anymore. Made a Lemmy account, lurked mostly as I did on Reddit; the content for doomscrolling was mostly lacking in quantity. For me there’s now enough content for the daily scrolling session, where the quality posts run out about when I start to get bored or need to get my ass up, so it’s a win-win here.

    It really does feel like more people are here. What I’m missing are a few more different users, because it kinda feels like most people here are very similar in their views, but that would probably also pave the way for more defederation drama.

    Since I’m mostly lurking and liberally use the block instance/community feature to simply hide content I’m not interested in, for me personally it has only gotten better. So I jump in, get my daily fix of memes, news and other random interesting things, comment occasionally, and get back to whatever. I honestly lack an alternative, so it’s as good as it gets for now, and I’m happy with that.


  • I think the face is what’s most off-putting. It looks like it came straight out of one of those generic “AI girlfriend” mobile ads. The environment is alright-ish, I guess; for me personally it’s a bit too deep into the uncanny valley too, because it looks too smooth, too perfect. It seems realistic, but the more and closer you look, the more it feels wrong, because you don’t have that perfection in real life either, and the AI takes all the realism and detail out of the picture.

    Try taking any picture you have, smack it into any AI upscaler, upscale it to 4x or 8x, and compare the pictures directly side by side. Faces lose every kind of wrinkle, detail and marking. The generated image loses all the identity it had, and while objectively it looks “good” as in “it’s upscaled correctly, it’s smooth and looks detailed”, any and all of the details that gave the image something are lost. It just has that “AI look”: the generic, hyper-realistic “perfection”.


  • Yeah, they did that with RTX Remix afaik? But that at least kept the style of the game; it just, or mostly, upscaled textures. The face filter on DLSS 5 is the weird thing for me. Especially their example on Requiem put me off. I love the style of the game, the acting was good, it’s been a good mix of great graphics and realism while keeping the Resi aesthetic, but the gooner-AI-girlfriend Grace is really uncanny valley territory.



  • All that to say: nothing I’ve said in this thread is the consensus of the moderation team, because there is no consensus among the moderation team. If there were a feature to let me remove my moderator badge from comments, I’d use it. There is an option to “speak as moderator”, but I think that does… kind of the opposite of what I’d want? (Like it’d make the moderator badge bigger and meaner-looking or something. I dunno.)

    Absent a feature to remove my mod badge on a per-comment basis, though, maybe I should either 1) try to initiate some effort to get the mods all on one page, 2) step down as mod, or 3) not comment in a way that might be interpreted as a statement of the consensus of the mod team. I haven’t thought through which of those, if any, I should do.

    I didn’t mean to call you out over the mod badge; it was a legit question. Because if this community is meant to be anti-AI sentiment only, that’s fair game, and then I can just call it quits and block the community on my end, because I have zero interest in either techbro vibe-coding stories or “AI is bad” stories.

    Sorry if the comment came across wrong - I only realized there that you were a mod, and probably misinterpreted your comment as well, since the whole comment chain before had been kinda heated too.

    But I definitely think that you guys (as in the whole moderation team, whoever that might be in the end) should make it clear whether this is anti-AI only, no discussions, or a critical discussion of AI where, given the name, the majority is obviously anti-AI, but critical discussion is welcomed and encouraged. Both are fine, but y’all need to bite the bullet and decide which it is; the current state is… weird, to say the least.


  • Is that your personal opinion, or is that the general consensus of the moderation team? Because if it’s the former, I couldn’t give less of a fuck about it; if it’s the latter, then you guys should probably rewrite the community description:

    “A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype.”

    I wouldn’t call something a discussion if no differing or more nuanced opinion is allowed, and putting that up as a community description while reacting with “yeah, here’s the door, you kinda definitely should leave” is a fucking joke.



  • I’ve never told anyone how to do their activism. I’ve criticized the consequences of said activism, which still hasn’t been answered, and the lack of objective arguments for this specific instance of activism, which I still haven’t heard, except for “AI bad lol”.

    I’ve been rolling with the definition of slop that’s kinda universally agreed on, that is, low-quality, spammy, AI-generated content, and I’d ask again for an example of that in vim, but since your definition is “LLM used = slop”, I guess I don’t need to. Also, you’ve missed the irony of telling me not to tell people what slop is, while telling me what slop is right after.

    I don’t understand how you can be so dense as to call that an easy problem that’s just boom, done. It’s not about compiling and aliasing it; you can do that on probably any commit of vim. It’s about the maintenance and longevity of the fork. Who’s gonna support it, and will it have a level of maintenance that makes it productively usable in the long term? It’s been forked from a “pre-AI” state of vim; how do we know it doesn’t already contain LLM-generated (as in LLM-assisted) content from before the official guidelines mentioned that? If all that makes evi stand out is a strict no-AI policy, how is that gonna be checked and enforced (e.g. a human developer using LLM tooling on their local machine without disclosing it)? Who are the developers behind it - greetings from xz and similar supply chain attacks? How are upstream changes and fixes handled, since you’ll use it at some point with elevated privileges or to edit sensitive files? But yeah, fuck all of that, it compiles and you can just alias it, right? So we can talk about the severe problems in the open issues - will Vim script be renamed too, and we need to rename vimrc to evirc asap - and boom, done.

    I’ve said it here at some point already: screeching “AI IS BAD REEEE” is not helping the case, it’s discrediting the “movement” or “activism” as a whole. AI will not go away. When the bubble bursts, people will stop shoving it everywhere, but it will stay where it can be used properly. Software engineering is something where it CAN be used properly, since whatever you’re building doesn’t give a flying fuck about who wrote the code. It’s either good, or it’s bad. Instead of worthless decisions on principle, do better. Coach and talk with people on how to do better, how to live in a world with AI responsibly and for good. Avoid, boycott and fork the ones deciding not to do that, based on objective reasons, and build it better. That’s what activism is about - using your actions to bring about change for the better, isn’t it? And I don’t see how a hard fork, with all the mentioned consequences and problems, for the simple reason of the vim maintainers saying “disclose AI usage”, leads to anything better just because an antislop and no-ai tag got shoveled into the Codeberg repository.


  • Of course it works fucking fine if it’s a hard fork of a stable state.

    What mental gymnastics? The ones you’re doing right now. You haven’t answered a single question from my comment. And what “problem” did you solve exactly? Has any issue come up because of the acceptance of AI in vim? What kind of “slop” is actually there that makes vim problematic for you?

    People vibe-coding random bullshit ideas because they now can do indeed produce slop. A bunch of highly experienced devs who have worked on a successful project for years, properly using the tools at their disposal, is not slop. You’re lending your public voice to a split of the community and of the project for made-up bullshit reasons, based on no objective proof but claims of slop and principle.

    I’d trust the original vim maintainers to decide what’s a good or bad pull request over a bunch of random people who hard forked for literally no reason.


  • Wait, so because vim is allowing code written with AI, we’re switching to a random fork? The mental gymnastics here are insane once again. Is someone assuming the vim maintainers are gonna run agentic requests? How is this project gonna handle upstream changes into its own main? Cherry-picking only “confirmed human-only” commits? Decisions like that, made out of spite, with zero thought and purely on principle, do not help against slop. You’re just adding human slop to the AI slop.


  • x1gma@lemmy.world to Selfhosted@lemmy.world · “Certificates...ugh” · 3 months ago
    The easiest way would be to set up Caddy to use ACME on the servers, and never care about certificates again. See https://caddyserver.com/docs/automatic-https.

    If you insist on your centralized solution, which is perfectly fine imo, just place the certificates in a directory accessible to Caddy, and make sure to keep the permissions minimal, so that the keys are only readable by authorized users.
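    As a sketch of the “minimal permissions” part (paths are placeholders - on a real install you’d use something like /etc/caddy/certs and chown the files to root and Caddy’s service group, whose name depends on your distro):

```shell
# Demo in a local directory; adapt owner/group/paths to your actual setup.
mkdir -p certs-demo
touch certs-demo/internal.crt certs-demo/internal.key
chmod 750 certs-demo                # only owner and group may enter the directory
chmod 644 certs-demo/internal.crt   # the certificate itself is public anyway
chmod 640 certs-demo/internal.key   # the private key must not be world-readable
ls -l certs-demo
```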

    If the certificates are only for Caddy, there’s no reason to mess around in system folders.
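    A minimal sketch of both options in a Caddyfile (domains, ports and file paths are placeholders for your own setup):

```
# Option 1: automatic HTTPS - Caddy obtains and renews the cert via ACME on its own.
example.com {
    reverse_proxy localhost:8080
}

# Option 2: centrally managed certs - point the tls directive at your own files.
internal.example.com {
    tls /path/to/certs/internal.crt /path/to/certs/internal.key
    reverse_proxy localhost:8081
}
```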


  • No, I think the distinction is already made, and there are words for that. Adding additional terms like “generators” or “pretend intelligence” does not help create clarity. In my opinion, the current definitions/classifications are enough. I get Stallman’s point, and his definition of intelligence seems to differ from how I would define intelligence, which is probably the main disagreement.

    I definitely would call an LLM intelligent. Even though it does not understand context the way a human could, it is intelligent enough to produce an answer that is correct. Doing this by basically pure stochastics is pretty intelligent in my book. My car’s driving assistant, even if it’s not fully self-driving, is pretty damn intelligent and understands the situation I’m in: adapting speed, understanding signs, reacting to what other drivers do. I definitely would call that intelligent. Is it human-like intelligence? Absolutely not. But for this specific, narrow use case it works pretty damn well.

    His main point seems to be breaking the hype, but I do not think that can or will be achieved like that. This will not convince the tech bros or the investors. And people who are simply uninformed will not understand an even more abstract concept.

    In my opinion, we should educate people more on where the hype is actually coming from: NVIDIA. Personally, I hate Jensen Huang, but unfortunately he’s been doing a terrific job as NVIDIA’s CEO. They’ve positioned themselves as the hardware supplier and infrastructure layer for the core components of AI, and are investing in and partnering with AI providers, hyperscalers and other component suppliers in a circle of cash flow. Any investment they make, they get back multiplied, which also boosts all the other related entities. The only thing that went “10x” as promised by AI is NVIDIA stock. They are taking capex to a whole new level right now.

    And that’s what we should be discussing more, instead of clinging to words. Every claim any company makes about AI should automatically be assumed to be a lie, especially any AI claim from a hyperscaler, AI provider or hardware supplier, and especially-especially from NVIDIA. Every single claim they make relates directly to revenue. Every positive claim is revenue. Every negative word is loss. In this circle of money they are running, we’re talking about thousands of billions of USD. People have done way worse for way less money.



  • I disagree with this post and with Stallman.

    LLMs are AI. What people are actually confused about is what AI is, and what the difference between AI and AGI is.

    There is no universal definition of AI, but multiple definitions which are mostly very similar: AI is the ability of a software system to perform tasks that would typically require human intelligence, like learning, problem solving, decision making, etc. Since the basic idea is that artificial intelligence imitates human intelligence, we would need a universal definition of human intelligence - which we don’t have.

    Since this definition is rather broad, there is an additional classification: ANI, artificial narrow intelligence, or weak AI, is an intelligence inferior to human intelligence, which operates purely rule-based and for specific, narrow use cases. This is what LLMs, self-driving cars, and assistants like Siri or Alexa fall into.

    AGI, artificial general intelligence, or strong AI, is an intelligence equal or comparable to human intelligence, which operates autonomously based on its perception and knowledge. It can transfer past knowledge to new situations, and it can learn. It’s a theoretical construct that we have not yet achieved, no one knows when or if we ever will, and unfortunately it’s also one of the first things people think of when AI is mentioned.

    ASI, artificial super intelligence, is basically an AGI with an intelligence superior to a human’s in every aspect. It’s the apex predator of all AI: better, smarter and faster at anything than a human could ever be. Even more theoretical.

    Saying LLMs are not AI is plain wrong, and if our goal is a realistic, proper way of working with AI, we shouldn’t be doing the same thing as the tech bros.


  • It is not a lie but a widely accepted and agreed-on definition that precedes LLMs by years, and was created by people way smarter than you and I combined, who have spent more time in AI research than most people here.

    An LLM is an ANI (artificial narrow intelligence), and any ANI is an AI, the broader term for any artificial intelligence. An ANI does not operate on intelligence the way a human does; its intelligence is a set of rules. A search engine algorithm is a set of rules. Your phone’s keyboard is a set of rules. T9 typing on your old Nokia is a set of rules and can be classified as an ANI. An LLM has rules for how it spits out the next token.
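    The “set of rules for the next token” idea can be sketched in a toy way (this is a bigram word counter, nothing like a real LLM in scale, but the same prediction principle that T9 and keyboard suggestions are built on):

```python
# Toy next-token predictor: "rules" learned by counting which word follows which.
from collections import Counter, defaultdict

def train_bigrams(text):
    """Build the rule table: for each word, count its followers."""
    table = defaultdict(Counter)
    words = text.split()
    for cur, nxt in zip(words, words[1:]):
        table[cur][nxt] += 1
    return table

def next_token(table, word):
    """Apply the rule: emit the most frequent follower, or None if unseen."""
    followers = table.get(word)
    return followers.most_common(1)[0][0] if followers else None

corpus = "the cat sat on the mat and the cat slept on the mat"
table = train_bigrams(corpus)
print(next_token(table, "the"))  # "cat" - it follows "the" most often here
print(next_token(table, "on"))   # "the" - the only word seen after "on"
```

    Scale the table up to billions of learned weights instead of word counts and you get the gist of why it is rule-based rather than human-style understanding.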

    There is no universal definition of AI, because we would first need a universal definition of human intelligence. Since there is no single universal definition, you’re free to disagree with that one. But calling it disinformation or a lie, and claiming that no computer program is intelligent, is simply wrong.