• varsock@programming.dev · +26/-7 · 1 year ago

      to add to this, I’d like standardization of qualifications and competencies - kind of like a license, so I don’t have to “demonstrate” myself during interviews.

      I hate being in a candidate pool that all have a degree and experience, we all go through a grueling interview process on college basics, and the “best one gets picked.” Company says “our interview process works great, look at the great candidates we hire.” like, duh, your candidate pool was already full of qualified engineers with degrees/experience, what did you expect to happen?

      • v_krishna@lemmy.ml · +24/-2 · 1 year ago

        I’m betting you aren’t involved in hiring? The number of engineers I’ve interviewed with graduate degrees from top universities who are fundamentally unable to actually write production quality code is mind-boggling. I would NEVER hire somebody without doing some panel with coding, architecture/systems design, and behavioral/social interviews.

        • RonSijm@programming.dev · +12 · 1 year ago

          This. I’ve had someone on my team who was completely self-taught, with no relevant education, and was a great dev.

          I’ve also interviewed someone who supposedly had a master’s degree and a couple of certificates, and who couldn’t remember how to create a loop during the interview.

          I don’t know how you could properly implement “standardization of qualification and competencies” without just min-maxing it in a way that favors academics.

          • varsock@programming.dev · +1 · edited · 1 year ago

            Good question. Software and computing practices are changing much faster than in other fields, but with time, pillars are becoming better and better defined: production-quality code, CI/CD, DevOps, etc.

            Civil engineers have a successful licensure process established. See my comment regarding that.

            But an approach where a candidate would spend time under a “licensed professional software engineer” would favor practical work experience over academic credentials.

        • thelastknowngod@lemm.ee · +6 · 1 year ago

          As a counterbalance to that, though, interviewers need to understand what they are hiring for and tailor the questions asked to those requirements.

          For example, there is genuinely very little coding required of an SRE these days, but EVERY job interview wants you to do some leetcode-style algorithm design… Since containers took over, the number of times I have used anything beyond relatively unremarkable bash scripts is exceptionally small. It’s extremely unlikely that I will be responsible for a task so dependent on performance that I need to design a perfect O(1) algorithm. On Terraform, though, I’m a fucking surgeon.

          SRE specifically should HEAVILY focus on system design and almost all other things should have much much less priority… I’ve failed plenty of skill assessments just because of the code though.

          • lysdexic@programming.dev · +6 · 1 year ago

            As a counterbalance to that, though, interviewers need to understand what they are hiring for and tailor the questions asked to those requirements.

            This does not happen. At all.

            Back in reality, we have recruiters who can’t even spell the names of the tech stacks they are hiring for, asking for impossible qualifications such as years of experience in tech stacks that were released only a few months ago.

            From my personal experience, cultural fit and prior experience are far more critical hiring factors, and experience in a tech stack is only relevant in dictating how fast someone can onboard onto a project.

            Furthermore, engineering is all about solving problems you have never met before. Experience is important, but you don’t assess that with leetcode or trivia questions.

        • nilloc@discuss.tchncs.de · +6 · 1 year ago

          Programming should be more like other trades: apprentice for a year or two before getting journeyman status, then work up to master status. Pay and job changing become fairer, and we get some reasonable fucking hours and rules to keep us from making overworked mistakes.

          Companies know what they’re getting at each programmer’s level (specific experience will still matter, but the baseline will be much more standard).

          And workers get experience and learn from the graybeards instead of ChatGPT-ing their way into a job they don’t understand.

          • varsock@programming.dev · +2 · edited · 1 year ago

            The trades are a great example of having to work under a professional. Other engineering disciplines also have successful licensure processes. See my comment regarding that.

            There are parallels to be drawn among licensed professionals (doctors, CPAs, lawyers, civil engineers): they all spend time under a professional, and that professional then signs off and bears some responsibility for vouching for the trainee.

            • lysdexic@programming.dev · +1 · 1 year ago

              There are parallels to be drawn among licensed professionals (doctors, CPAs, lawyers, civil engineers): they all spend time under a professional, and that professional then signs off and bears some responsibility for vouching for the trainee.

              We need to keep in mind that the main value proposition of these licenses is to bar people from practicing. There is no other purpose.

              In some activities this gatekeeping mechanism is well justified: a doctor who kills people out of incompetence should be prevented from practicing, and so should accountants who embezzle and civil engineers whose subpar designs and buildings get people killed.

              Your average software developer doesn’t handle stuff that gets people killed. Society gains nothing by preventing a software developer from implementing a button in a social network webapp.

              • varsock@programming.dev · +1 · edited · 1 year ago

                society gains nothing by preventing a software developer from implementing …

                I see the point you are trying to make but I respectfully disagree. Technology is at the core of seemingly every field and at the core of technology is software. Will it result in direct bodily harm? Rarely. But indirectly the impact is certainly more substantial.

                Take the internet as an example. The significance of the internet and information sharing cannot be disputed. Disruptions to information sharing can send ripples through essential services. Networking these days is accomplished via software-defined networking techniques. And we are becoming more dependent on technology and automation.

                I can see why the indirect risk is not as scary as the direct risk, but you have to admit, as automation grows and decisions are made for us, regulation of those who build these systems should not be overlooked. Professional engineers have a code of ethics they must adhere to, and if you read through it you can see the value it would bring.

                As a counter example to your “doctors are licensed to not kill people” - orthodontists, who move teeth around, pose no fatal risk to their patients. Should they be exempt from being licensed?

                EDIT:

                Just yesterday, Reuters published news that Musk and managers at Tesla knew about Autopilot defects but marketed otherwise. If those working on it had been licensed, negligence and deceitfulness could have cost them their licenses and prevented them from working in this field again. It would bring accountability.

        • BrianTheeBiscuiteer@lemmy.world · +5 · 1 year ago

          Evaluation is fine but I’d like to eliminate the overuse of leetcode style questions. I’ve used those skills in exactly two places: school and tech interviews. If the tech job is actually for a scientific/biotech company then it’s fair game.

          Also, schools don’t really prepare students for writing production code (if it’s a compsci degree). I learned about unit testing, mocking, dependency injection, environment configs, REST APIs, and UI/UX on the job.
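          For instance, dependency injection plus a hand-rolled test double - two of the items on that list - can look roughly like this. A toy Python sketch with invented names, not any specific framework:

```python
class Mailer:
    """Production interface; the real version would talk to SMTP."""
    def send(self, to: str, body: str) -> None:
        raise NotImplementedError

class Greeter:
    # The mailer is injected instead of constructed inside the class,
    # so a test can substitute a fake without touching the network.
    def __init__(self, mailer: Mailer):
        self.mailer = mailer

    def greet(self, user: str) -> None:
        self.mailer.send(user, f"Hello, {user}!")

class FakeMailer(Mailer):
    """Test double: records calls instead of sending anything."""
    def __init__(self):
        self.sent = []

    def send(self, to: str, body: str) -> None:
        self.sent.append((to, body))

# A unit test asserts on the recorded interactions.
fake = FakeMailer()
Greeter(fake).greet("ada")
assert fake.sent == [("ada", "Hello, ada!")]
```

The same shape carries over to mocking libraries and DI containers; the point is only that the dependency arrives from outside.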

          • v_krishna@lemmy.ml · +1 · 1 year ago

            It varies by who does the interview, but I push for much simpler stuff than leetcode - not puzzle problems, but more “design a program that can represent a parking structure and provide a function that could be used by the ticket printer to determine where a new car should park, as well as one that runs upon exit to determine payment.”

            Then, if they are actually solid, we can dive into complexity and optimization; and if they can’t write a class or a function at all (and especially if they can’t model a problem in this way), it’s really obvious.
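            A passing answer to that prompt doesn’t need to be fancy; a sketch along these lines (Python, with the names and the per-started-hour pricing rule invented for illustration) already shows the candidate can model the problem:

```python
import math
from dataclasses import dataclass, field
from datetime import datetime, timedelta

HOURLY_RATE = 3.0  # hypothetical pricing: charge per started hour

@dataclass
class Ticket:
    spot: int
    entered: datetime

@dataclass
class ParkingStructure:
    capacity: int
    occupied: set = field(default_factory=set)

    def print_ticket(self, now: datetime) -> Ticket:
        """Ticket printer: assign the lowest-numbered free spot."""
        for spot in range(self.capacity):
            if spot not in self.occupied:
                self.occupied.add(spot)
                return Ticket(spot, now)
        raise RuntimeError("structure is full")

    def payment_on_exit(self, ticket: Ticket, now: datetime) -> float:
        """Exit gate: free the spot and charge per started hour."""
        self.occupied.discard(ticket.spot)
        hours = math.ceil((now - ticket.entered) / timedelta(hours=1))
        return HOURLY_RATE * max(1, hours)

lot = ParkingStructure(capacity=2)
t0 = datetime(2024, 1, 1, 9, 0)
ticket = lot.print_ticket(t0)
fee = lot.payment_on_exit(ticket, t0 + timedelta(minutes=90))  # 1.5h -> 2 started hours
```

From there the follow-up questions about complexity (linear spot scan vs. a free-list) and edge cases fall out naturally.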

        • AA5B@lemmy.world · +3 · edited · 1 year ago

          And yet that could rule out people like me. I have a history of delivering software for longer than most developers have been alive, across many technologies, languages, and toolsets, for several industries. My resume looks fantastic, I can pull together a larger strategy and project plan in my sleep, and I deliver cost-effective, quality solutions.

          However after jumping across all these technologies, I really rely on my IDE for the syntax. I’ll use a plugin for the cli syntax of whatever tool, framework or cloud service we’re using today.

          I like to think I’m extremely qualified, but that programming test on paper will get me every time (why the eff is anything on paper these days?), and certifications are a thing for early in your career.

        • varsock@programming.dev · +1 · 1 year ago

          I think it’s important to check for competencies that are valuable to the employer during the interview process. However, many - though admittedly not all - employers use time-constrained, college-level puzzles that a candidate can usually only solve if they have seen them before.

          I’ve been on both sides of the interview process. In my day-to-day I use a debugger to verify and step through code all the time. HackerRank, the leading platform to test candidates and generate a metric report, doesn’t even have a debugger. Off-by-one index mistakes are sooo common to see from a candidate who is under time pressure; a few iterations with a debugger and the problem is solved. I advocate for candidates to develop in their own env and share their screen, or bring it with them. But anyway, I’m ranting.
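          As a hypothetical illustration (not from any real interview), this is the sort of off-by-one a single debugger pass exposes:

```python
def max_adjacent_sum(nums):
    """Largest sum of two adjacent elements in a list."""
    best = nums[0] + nums[1]
    # Under time pressure, candidates often write range(len(nums)) here,
    # which makes nums[i + 1] read one index past the end. Stepping
    # through with a debugger surfaces the IndexError on the last
    # iteration immediately; the fix is range(len(nums) - 1).
    for i in range(len(nums) - 1):
        best = max(best, nums[i] + nums[i + 1])
    return best
```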

          I agree with most comments arguing against standardization and pointing out its weaknesses. I didn’t say it works great; I just wish we had what some other professions have. See my comment about other engineering disciplines with a successful licensure process.

      • lysdexic@programming.dev · +6 · 1 year ago

        to add to this, I’d like standardization of qualifications and competencies - kind of like a license, so I don’t have to “demonstrate” myself during interviews.

        I strongly disagree. There is already a standardization of qualifications and competencies in the form of cloud vendor certifications. They are all utter bullshit and a huge money grab, and they do nothing to attest to someone’s experience or competence.

        Certifications also validate optimizing for the wrong metric, rewarding a “papers, please” attitude towards recruitment instead of actually demonstrating competence, skill, and experience.

        Also, certifications validate the parasitic role of the IT recruiter, the kind responsible for barring candidates for not having decades of experience in tech stacks they can’t even spell and that were released just a few months ago. Relying on certifications empowers parasitic recruiters to go from clueless filterers to outright gatekeepers, and in the process validates business models built on circumventing their own certification requirements.

        We already went down this road. It’s a disaster. The only needs this approach meets are ladder-pulling by incompetent people who paid for irrelevant certifications, and a legal mechanism to prevent extremely incompetent people from practicing; the latter serves absolutely no purpose in software development.

        • varsock@programming.dev · +2 · 1 year ago

          I agree with what you said, it is a shit show. But I wish it weren’t so.

          My good friend is a civil engineer, and to obtain a Professional Engineer (PE) license he had to complete a four-year college degree, work under a PE-licensed engineer for at least four years, pass two intensive competency exams, and earn a license from his state’s licensure board. Then, to retain the license, PEs must continually maintain and improve their skills throughout their careers.

          This licensing approach is prohibitive to just “paying your way” through. It never caught on in software and computer engineering because of how quickly the field was (and still is) changing. But certain pillars are becoming better defined, such as CI/CD, production-safe code and practices, and DevOps.

  • aport@programming.dev · +49/-1 · 1 year ago

    Attention and awareness of the ways in which modern technology is harming ourselves.

    We’re providing people with the electronic equivalent of heroin, from a young age, completely rewiring our brains and detaching us from nature and each other.

    • onlinepersona@programming.dev (OP) · +22/-3 · 1 year ago

      The statistic that ~90% of American teens own an iPhone was shocking to me. It makes me think that from a young age, children are taught not to question but just to accept their cage. If closed source is all they grow up with, open source will be foreign to them. And that, in a way, is worse than growing up with Windows, which doesn’t completely lock you in.

    • fuck_u_spez_in_particular@lemmy.world · +4 · 1 year ago

      This! I feel it myself: my ADHD was much better when I stayed in a relatively natural setting with only a little technology for a few weeks (I did some programming there, though, and boy was I focused on complex problems without medication - one of my best coding sessions, I think). I’m pretty sure that a lot of the ADHD, but also other psychiatric issues like autism or social anxiety, diagnosed these days is because of this unhealthy environment we have created. In other words, our modern technology promotes psychiatric issues such as ADHD, autism, and social anxiety.

  • porgamrer@programming.dev · +39 · 1 year ago

    Three things off the top of my head:

    • Unionisation
    • Way more stuff publicly funded with no profit motive
    • Severe sanctions on US tech giants all around the world, with countries building up their own workforces and tech infrastructure. No more East India Company bullshit.
    • AA5B@lemmy.world · +4/-1 · 1 year ago

      Severe sanctions on US tech giants

      For the hell of it? Because they’re inherently evil? Protectionism to develop local industry?

      I’ve worked for a few, but not the consumer giants most people think of. I haven’t found them evil, and they support employees across the world.

      I’ll go even further with developing countries in particular. From my perspective, entire software industries were built on multi-national funding, and we still pay better than local companies. The biggest change over the last decade or two has been switching models from cheapest outsourcing to employing local talent everywhere

      • porgamrer@programming.dev · +2 · 1 year ago

        This is just like, my opinion, but here you go:

        If you live in the western sphere, the US tech giants control half of your critical infrastructure and invade every aspect of your personal and professional life. If you live outside the US, they do not answer to you or to anyone you can vote for. They lean on your government for permission to turn your whole existence into a series of transactions, and then extract as much value as possible from each one. The money doesn’t swirl around your community making everyone richer. Instead, 5% goes to pay a few nice salaries in your biggest city, and the rest gets funneled straight out of the country and into California.

        Even Europe - their imperial mentor and favourite uncle - is treated like shit. Europe built half of their technology but controls none of it. There is not a single European tech giant. Every last one is American, with extensive ties to the US government and security apparatus.

          • Miaou@jlai.lu · +1 · 1 year ago

            To give an example: I recently saw an article about a Frenchman trying to file some paperwork, which was possible online… except the account required you to install an Android app, and the app used Google services.

            The author was saying that, since he doesn’t think he should have to create a Google account to fill in some paperwork, he would send a letter instead. A damn letter, like it’s Germany or something.

            • AA5B@lemmy.world · +1 · 1 year ago

              Haha yeah, I was eager to watch the recent SpaceX test launch, but their official feed required a Twitter account. So I patronized some random YouTuber instead.

  • Gabadabs@kbin.social · +37/-1 · 1 year ago

    More focus on the ability to maintain, repair, and perhaps even upgrade existing tech. So often people are pushed to upgrade constantly, and devices aren’t really built to last anymore. For example, those yearly trade in upgrade plans that cell phone providers do. It sucks knowing that, once the battery in my cell phone finally dies, the whole phone is essentially garbage and has to be replaced. I miss my older smartphones that still had replaceable batteries, because at least then it’s just the battery that’s garbage.
    We’re throwing so much of our very limited resources right into landfills because of planned obsolescence.

    • onlinepersona@programming.dev (OP) · +3 · 1 year ago

      I think the solution to this will come by itself: the supply chain will break down and people will have to learn to make do with what they have. It was like that in the Soviet Union, is like that in some parts of the world right now, and can easily return if we don’t get climate change in check.

    • AA5B@lemmy.world · +2 · 1 year ago

      once the battery in my cell phone finally dies, the whole phone is essentially garbage

      I don’t get this. I understand they aren’t user replaceable but surely you can get it replaced? Given how good batteries are, they easily last 2-3 years. iPhones are supported for 5-6 years so you only ever need one replacement

      Getting my iPhone battery replaced has typically cost about $75, not all that different from a decade ago spending $35 for a user replaceable battery for a flip phone

      One major difference now is that at least iOS gives me a good measurement of battery health, so I can make data-driven decisions.

  • Falst@lemmy.world · +31 · 1 year ago

    More privacy and less profit 🫣

    I realize most people would rather not pay for a service they currently get for free (which is partly due to the lack of transparency regarding how our data is used).

    • onlinepersona@programming.dev (OP) · +5 · 1 year ago

      It’s possible that a donation-based society might work. However, I’m not sure how that can be achieved in parallel with a profit-based society (the one we mostly have to take part in).

      IMO one way is to force the issue by making certain methods of profit impossible or not worth it in the long run. Something like “don’t use it? you lose it” in terms of patents or proprietary solutions. For example if a company stops producing and supporting something, then it has to release the designs, code, and intellectual property to the public.

  • AlmightySnoo 🐢🇮🇱🇺🇦@lemmy.world · +35/-4 · 1 year ago

    Have developers be more mindful of the e-waste they contribute to by indirectly deprecating CPUs when they skip over portions of their code and say “nah, it isn’t worth it to optimize that thing; everyone today should have an X-core CPU and Y GB of RAM anyway.” Complacency like that is why software as simple in functionality as its equivalent from one or two decades ago is ten times more demanding today.

    • space@lemmy.dbzer0.com · +18 · 1 year ago

      Yes!! I enjoy playing with retro tech and was actually surprised at how much you can do with an ancient Pentium II machine, and how responsive the software of the time was.

      I really dislike how inefficient modern software is. Like stupid chat apps that use more RAM while sitting in the background than computers had 15-20 years ago…

    • onlinepersona@programming.dev (OP) · +12 · 1 year ago

      It leads to software obesity, and it’s a real thing. I think it has to do with developer machines being beefy: if you write something that runs well on yours and don’t have a slow machine to test on, you don’t know just how badly it actually performs.

      But it also has to do with programming languages. It’s much much easier to prototype in Python or Javascript and often the prototype becomes the real thing. Who really has time (and/or money) to rewrite their now functional program in a language that is performant?
      IMO there doesn’t seem to be a clear solution.

      • whofearsthenight@lemm.ee · +4 · 1 year ago

        I don’t think the languages themselves are even the problem; it’s the toolchain. Certainly, if you went back to C or whatever, you could design more performant systems, but I think the problem overall stems from modern toolchains being kind of ridiculous. It is entirely common, in any language, to load in massive libraries that suck up hundreds of MB of RAM (if not gigs) to get a slightly nicer function to lowercase text or something.

        The other confounding factor is “write once, run anywhere,” which in practice means there is a lot of shared code that does nothing on your machine. The most obvious example is Electron. Pretty much all of the Electron apps I use regularly (mostly just Discord and Slack) are conceptually simple apps whose analogues used to run in a few hundred MB of storage and tens of MB of RAM.

        Oh, one other sidenote: how many CPUs are wasting cycles on things that no one wants, like extremely complex ad tracking and data mining?

        I know why this is the case, and ease of development does enable us to have software we probably otherwise wouldn’t, but I think it’s a real blight on modern computing, and I think it’s solvable. Probably the dumbest idea, but translation layers that produce platform-native code can be vastly improved. Especially in a world where we have generative AI, there has to be a way to say “hey, I’ve got this JavaScript function; I need it to work in Kotlin, Swift, C++, etc.”

          • porgamrer@programming.dev · +1 · 1 year ago

            LLVM is ironically a very slow compiler back-end, whose popularity has contributed to a general slow-down in compilation speed across the whole industry (it’s even slow at doing debug builds for fast iteration).

            WASM has some promise though

            • onlinepersona@programming.dev (OP) · +1 · 1 year ago

              It doesn’t really matter if the compiler is slow if the result is optimized and fast 🤷 Rust compiles slower than C, but that’s because C has no safeguards (beyond static typing). Very often the wasted CPU cycles are on the user’s end, not the developer’s.

  • planish@sh.itjust.works · +31/-2 · 1 year ago

    The death of the device and the return of the system.

    A device is a sealed thing provided on a take it or leave it basis, often designed to oppose the interests of the person using it. Like hybrid corn, a device is infertile by design: you cannot use a device to develop, test, and program more devices.

    A system is a curated collection of interchangeable hardware and software parts. Some parts are only compatible with certain other parts, but there is no part that cannot be replaced with an alternative from a different manufacturer. Like heirloom seeds, systems are fertile: systems can be used to design and program both other systems and devices.

    A system is a liberatory technology for manipulating information, while a device is a carceral technology for manipulating people.

  • cmeerw@programming.dev · +26/-1 · 1 year ago

    not being forced to have an Android or Apple smartphone, so more open standards and just Web apps instead of proprietary apps

    • lysdexic@programming.dev · +2 · 1 year ago

      (…) so more open standards and just Web apps instead of proprietary apps

      What do you classify as “proprietary apps”, and from the user’s standpoint where do you see a difference between them and web apps?

      • cmeerw@programming.dev · +17 · 1 year ago

        Pretty much anything that’s only available via an app store. The difference with web apps is that I can also use them on a laptop/PC and I have a bit more control about tracking (by using ad/tracking blockers).

  • profoundlynerdy@programming.dev · +26/-2 · 1 year ago

    A pivot away from cargo-cult programming and excessive containerization, towards simplicity and the fewest dependencies possible for a given task.

    Too many projects look like a Jenga tower gone horribly wrong. This has significant maintainability and security implications.

  • CatTrickery@lemmy.world · +25/-1 · 1 year ago

    Phones with fully open source drivers including the bootloader and decent specs. Give me a UEFI over fastboot any day.

    I’d also love it if electron and sexism would kindly go away.

  • KseniyaK@lemmy.ca · +25/-1 · edited · 1 year ago

    I would like to see:

    1. Corporations treating their customers like people, not just bags of money.
    2. Corporations and employers stopping their spying on people. It makes me feel so unsafe, like I can’t really trust them.
    3. People becoming more tech literate.
    4. Open source software, such as Linux, being used by more people, especially those who are not so tech literate.
  • RonSijm@programming.dev · +28/-4 · 1 year ago

    Probably less elitism. “Oh you build it in x language? Well that’s a shit language. You should use y language instead. We should be converting everything to y language because y language is the most superior language!”

    (If this feels like a personal attack, Rust programmers, yes. But other languages as well)

      • RonSijm@programming.dev · +3 · 1 year ago

        Well sure, it depends on the context. If it’s a shitpost on /c/programmer_humor, whatever, meaningless banter.

        If it’s a serious question (maybe from a beginner) asking how to do something in their language, and the response is “it would be a lot easier in y language,” I don’t think that’s particularly helpful.

    • SorteKanin@feddit.dk · +5/-1 · 1 year ago

      As someone who’s quite vocal about my support for Rust, I can definitely see how it can go overboard.

      But on the other end of the spectrum, saying that all languages are just as good or capable and it doesn’t matter which one you use is definitely wrong. There are meaningful differences. It all comes down to what your needs are (and what you/your team knows already, unless you’re willing to learn new stuff).

      • RonSijm@programming.dev · +1 · 1 year ago

        Yeah, I kept my original comment language-agnostic (just referring to it as “y language”) - but added the extra wink to Rust because they generally seem to be the worst offenders.

        I have years of experience in loads of languages - PHP, Ruby, Java, Python, C#, C++, Rust - and that’s probably how I’d order the level of elitism. PHP devs know everything they’re doing is shit; Python should probably be next in the ranking of how shit they are, but they’re not self-aware enough (sarcastic elitism aside here).

        Anyway, besides that: at the other end of the elitism spectrum there is Rust. Someone like me says something about Rust in a generally unrelated-to-Rust thread like this, a Rust enthusiast sees it, and it devolves into a dumbass back-and-forth about how good Rust is.

    • huntrss@feddit.de
      link
      fedilink
      arrow-up
      3
      ·
      1 year ago

      C’mon, a little bit of flexing is so nice.

      But, I get what you’re saying. I usually filter out this bullshit (because I’m a Rustacean myself 😜) but this doesn’t mean that it is as easy for someone else as it is for me.

    • tatterdemalion@programming.dev
      link
      fedilink
      arrow-up
      2
      ·
      1 year ago

      The cargo culting is always going to happen and turn into elitism. But it stems from real advantages of specific technologies, and sometimes you should actually consider that the tech you’re using is irresponsible when better alternatives exist.

    • Xanx@lemmy.zip
      link
      fedilink
      English
      arrow-up
      4
      ·
      edit-2
      1 year ago

      Maybe we could just reduce it to 4 time zones, and no DST

            • Pyro@lemmy.world
              link
              fedilink
              English
              arrow-up
              5
              ·
              1 year ago

              I think what they meant is requiring that only UTC time should be in the database. This prevents ambiguity when pulling dates/times back out; with many poorly designed systems it’s not possible to know whether a stored date represents UTC time or local time.

              At my work we store local time in our database and I hate it. We only serve customers in our country, which only has one time zone, so that’s fine for now. But we’ve definitely made it harder for ourselves to expand if we ever wanted to.
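              A minimal sketch of that store-UTC, convert-at-the-edge approach, using Python’s stdlib zoneinfo (the zone name here is just an example):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Store UTC only; the database value is unambiguous.
stored = datetime(2023, 6, 1, 12, 30, tzinfo=timezone.utc)

# Converting to a user's local zone is purely a presentation step,
# done on the way out rather than baked into the stored value.
local = stored.astimezone(ZoneInfo("America/New_York"))
print(local.isoformat())  # 2023-06-01T08:30:00-04:00
```

              The same stored instant can be rendered for any zone on demand, which is exactly what a local-time schema makes hard.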

              • AA5B@lemmy.world
                link
                fedilink
                arrow-up
                2
                ·
                1 year ago

                Better yet, why are you not using a Date type? That’s very precisely defined, more scalable, and comes with plenty of functionality for doing math or converting among time zones. You don’t have to care what time zone you’re in, and can give people the answer in any time zone

                • Pyro@lemmy.world
                  link
                  fedilink
                  English
                  arrow-up
                  2
                  ·
                  1 year ago

                  I get the advantage, and if I could change our schema with a click of my fingers I would, but it’s not that easy. We do use the native date type in our schema, but the dates we store in there are in local time. It’s bad, I know. It was originally written by a couple of people about 15 years ago, so software standards were a lot more lax back then.

                  We already have many customers with lots of data that are currently using this product, so it’s unfortunately non-trivial to fix all of their data with the current systems we have in place.

                  We developers often want to fix so many things, but we’re told what to do based on what the business cares about rather than what we actually want to fix. That’s why we always end up building shit on top of shit: the business doesn’t want to pay us to rewrite 15-20 years’ worth of legacy code, even though doing so would make the product an order of magnitude better in every conceivable way.
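                  For what it’s worth, if the legacy zone is known and fixed, the per-row conversion itself is small — the hard part is the operational risk, not the code. A hypothetical sketch (the zone name and single-zone assumption are illustrative, not from the actual schema):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Hypothetical assumption: every legacy row holds a naive timestamp
# written in one known local zone (Europe/Copenhagen as a stand-in).
LEGACY_ZONE = ZoneInfo("Europe/Copenhagen")

def legacy_to_utc(naive_local: datetime) -> datetime:
    # Attach the known legacy zone, then normalize to UTC for storage.
    return naive_local.replace(tzinfo=LEGACY_ZONE).astimezone(timezone.utc)

# June is CEST (UTC+2), so 10:00 local becomes 08:00 UTC.
print(legacy_to_utc(datetime(2023, 6, 1, 10, 0)).isoformat())
```

                  Ambiguous instants around DST fall-back transitions are the real trap in a migration like this, which is part of why it stays non-trivial.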

        • UFODivebomb@programming.dev
          link
          fedilink
          arrow-up
          4
          ·
          1 year ago

          That was a somewhat tongue-in-cheek comment. I think time zones are silly, and people could easily account for what a particular point in time means for them at their location. More realistically, UTC should be the only way a timestamp is stored.

  • j4k3@lemmy.world
    link
    fedilink
    English
    arrow-up
    20
    arrow-down
    1
    ·
    1 year ago

    Data is a part of a person’s individual self. Storing such data on another person is owning a part of their person. It is slavery, used for exploitation and manipulation, and it is wrong.

    This distinction is the difference between a new age of feudalism with all of the same abuses that happened in the last one, or a future with citizens and democracy.

    Never trust anyone with a part of yourself. Trust, no matter how well initially intentioned, always leads to abuse of power.

    • sbv@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      3
      ·
      1 year ago

      I agree with the sentiment that personal data is owned by whoever it is about. And that other organizations shouldn’t be able to exploit it.