• 0 Posts
  • 25 Comments
Joined 1 year ago
Cake day: June 19th, 2023


  • Doug7070@lemmy.world to Technology@lemmy.world · *Permanently Deleted*
    9 months ago

    What, don’t you enjoy the incredible feature of your car being a rolling computer that constantly receives over-the-air software updates? Don’t you want to experience the joy of being stuck waiting for a forced Windows update, except instead of your computer it’s your car? Why would anybody not want this incredible and so clearly beneficial experience?!?



  • There are in fact a huge number of reliable counters to drones, including but not limited to anti-aircraft gun systems, anti-aircraft lasers, RF jamming devices (especially effective against cheap or makeshift drones), and more. Drones are currently an emergent threat without a robust countermeasure scheme, but given their massive role in the Ukraine war, that is not going to go unaddressed for long. From a purely mechanical standpoint, small drone munitions are also physically very vulnerable, so they can be readily destroyed by anti-air autocannon fire or even laser weapons if you assume RF jamming will not solve the problem.


  • There is a massive fundamental difference between having a person see your face in public, or even having a basic security camera record your face, and having a system recognize your biometric data and stalk you through every public environment with extreme precision.

    The general public should absolutely not accept being tracked through every public place by private corporate entities for undisclosed purposes. We can and should aggressively push government representatives to take strong regulatory action to outlaw this behavior and punish violations harshly.

    Will making these efforts actually change matters? Maybe, maybe not. Will throwing your hands up, assuming it’s impossible to change anything, and deciding we should all just lie down and accept it lead to the worst possible outcome? Absolutely.



  • I’ve found a very simple expedient for avoiding any such issues: just don’t use things like ChatGPT in the first place. While they’re an interesting gadget, I have been extremely critical of the massively over-hyped pitches of how useful LLMs actually are in practice, and I have regarded them with the same scrutiny and distrust as I did the people trying to sell me expensive monkey pictures during the crypto boom. Just as I came out better off because I didn’t add NFTs to my financial assets then, I suspect that not integrating ChatGPT or its competitors into my workflow now will turn out to be a solid bet, given that the current landscape of LLM-based tools is pretty much exclusively a corporate-dominated minefield, riddled with dubious ethics and doubts about what these tools are even ultimately good for.



  • This is something I think a lot of people don’t get about all the current ML hype. Even if you disregard all the other huge ethics issues around sourcing training data, what does anybody think is going to happen if you take the modern web, a huge sea of extremist social media posts, SEO-optimized scams and malware, and general data toxic waste, and train a model on it without rigorously pushing it away from being deranged? There’s a reason all the current AI chatbots have had countless hours of human moderation adjustment to make them remotely acceptable to deploy publicly, and even then there are plenty of infamous examples of them running off the rails and saying deranged things.

    Talking about an “uncensored” LLM basically just comes down to saying you’d like the unfiltered experience of a robot that will casually regurgitate all the worst parts of the internet at you. So unless you’re actively trying to produce a model to do illegal or unethical things, I don’t quite see the point of contention, or what “censorship” could actually mean in this context.