Every industry is full of technical hills that people plant their flag on. What is yours?
Give me a job or I'm dead, how about that hill.
Do not power law fit your process data for predictive models. No. Stop. Put the keyboard down. Your model will almost certainly fail to extrapolate beyond the training range. Instead, think for at least two seconds about the chemistry and the process, maybe review your kinetics textbook, and only then may you fit to a physics-based model for which you will determine proper statistical significance. Poor fit? Too bad, revise your assumptions or reconsider whether your “data” are really just noise.
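If it helps, a minimal sketch of what "fit to a physics-based model" can look like in practice, using SciPy's curve_fit on the linearized Arrhenius form; the temperatures, rate constants, and units below are made-up placeholders, not real process data:

```python
# Sketch: fit rate-constant data to a physics-based (Arrhenius) model instead
# of a bare power law. Placeholder data only.
import numpy as np
from scipy.optimize import curve_fit

R = 8.314  # J/(mol*K)

def arrhenius_ln(inv_T, ln_A, Ea):
    """Linearized Arrhenius form: ln k = ln A - (Ea/R) * (1/T)."""
    return ln_A - (Ea / R) * inv_T

T = np.array([300.0, 320.0, 340.0, 360.0, 380.0])        # K (placeholder)
k = np.array([1.2e-4, 8.7e-4, 5.1e-3, 2.5e-2, 1.0e-1])   # 1/s (placeholder)

popt, pcov = curve_fit(arrhenius_ln, 1.0 / T, np.log(k))
perr = np.sqrt(np.diag(pcov))  # 1-sigma uncertainties on the parameters
print(f"ln A = {popt[0]:.2f} ± {perr[0]:.2f}")
print(f"Ea   = {popt[1]/1000:.1f} ± {perr[1]/1000:.1f} kJ/mol")
```

Parameter significance then comes out of the residuals and the covariance, and an Arrhenius form at least has a chance of extrapolating sensibly beyond the training range, unlike a bare power law.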
Always run qNMR with an internal standard if you are using it to determine purity. And, as a corollary, do not ignore unidentified peaks. Yes, even if it “has always been that way”.
DOE models almost certainly tell you less than you think they do, especially when cross-terms are involved, or when the effects are categorical, or when running a fractional factorial design…
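A quick sketch of the fractional factorial point, with made-up numbers: in a 2^(3−1) half fraction built with the generator C = A·B, the column used to estimate the main effect of C is exactly the A·B interaction column, so the two effects are aliased and the model cannot separate them.

```python
# Sketch only (NumPy): aliasing in a 2^(3-1) half fraction with generator C = A*B.
import numpy as np

A = np.array([-1, +1, -1, +1])
B = np.array([-1, -1, +1, +1])
C = A * B  # the generator: each run sets C to the product of A and B

y = np.array([12.0, 15.0, 14.0, 20.0])  # made-up responses

effect_C  = np.mean(y[C == +1]) - np.mean(y[C == -1])
effect_AB = np.mean(y[A * B == +1]) - np.mean(y[A * B == -1])
print(effect_C == effect_AB)  # True: the C effect and the A*B effect are indistinguishable
```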
Maybe not technical, but teaching is weird.
If people aren’t having fun/engaged they’re not learning much. People don’t care how much you know until they know how much you care. It’s so frustrating to come across someone who writes the standards you’re supposed to follow and they are the most boring and fake teacher you’ve experienced.
There is no goddamn reason to continue to use magneto ignition in aircraft engines. I’ve been a Rotax authorized service technician for 13 years, I have never seen the digital CDI installed on a Rotax 900 series engine fail in any way, and you’ve still got two. Honestly I believe a CDI module is more reliable and less prone to failure than a mechanical magneto. The only reason why we’re still using pre-WWII technology in modern production aircraft engines is societal rot.
Transparency + blur + drop shadow is peak UI design and should remain so for the foreseeable future. It provides depth, which adds visual context. Elements onscreen should not appear flat; our human predator brains are hardwired and physiologically evolved to parse depth information.
Can you give an example?
I'm not him, but there's a tiny shadow underneath the cursor on most windows, and probably most everything else. Buttons look slightly 3D so they appear to pop out.
What of, more specifically?
An image of the 3 concepts together as you describe.
A plain text physical password notebook is actually more secure than most people think. It’s also boomer-compatible. My folks understand that things like their social security cards need to be kept secure and out of public view. The same can be applied to a physical password notebook. I also think a notebook can be superior to the other ways of generating and storing passwords, at least in some cases.
- Use the same password for everything: obviously insecure.
- Use complex unique passwords for everything: you'll never remember them. If complex passwords are imposed as a technical control (even worse if you have to change them often), you'll just end up with passwords on post-its.
- Use a password manager: you're putting all your eggs in one basket. If the manager gets breached, there goes everything.
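For what it's worth, a rough sketch of how the unique-per-site passwords could be generated before being written into the notebook, using Python's secrets module; the length and character set here are arbitrary choices, adjust them to each site's rules.

```python
# Sketch only: one random password per site, to be copied into the notebook.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def new_password(length: int = 16) -> str:
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

for site in ("email", "bank", "forum"):
    print(f"{site}: {new_password()}")
```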
I understand, somewhat, this being discouraged at work but I agree that doing it for personal passwords with the notebook at home is fine. I’ve met people opposed to ever writing down passwords and I think it’s just a rote reaction based on work training.
If you have a notebook at home with all your passwords then somebody needs to break into your house to get them, which is pretty good security.
But will you be diligent enough to make a new password for every single website using this method?
Dynamic typing sucks.
Type corrosion is fine, structural typing is fine, but the compiler should be able to tell if types are compatible at compile time.
This is one of those things like a trick picture where you can’t see it until you do, and then you can’t unsee it.
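A rough sketch of what I mean, assuming Python with mypy as the checker (the class and function names are made up): structural typing, but still verified before the program runs.

```python
# Structural typing via typing.Protocol, checked statically by mypy.
from typing import Protocol

class Quacks(Protocol):
    def quack(self) -> str: ...

class Duck:
    def quack(self) -> str:
        return "quack"

class Dog:
    def bark(self) -> str:
        return "woof"

def annoy(animal: Quacks) -> None:
    print(animal.quack())

annoy(Duck())  # fine: Duck structurally satisfies Quacks
annoy(Dog())   # mypy rejects this line; plain dynamic Python would only fail at runtime
```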
I started with C/C++, so typing was static and I never thought about it too much. Then when I started with Python I loved the dynamic typing, until it started to cause problems, and type hints weren't a thing back then. Now it's one of my largest annoyances with Python.
A similar one is the None type: seems like a great idea, until it's not. Rust's solution is much, much better. Similar for error handling, although I feel less strongly about that one.
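To illustrate the None point, a minimal sketch (find_user and its lookup table are invented) where type hints let mypy flag the possible None before runtime, which is about as close as Python gets to Rust's Option:

```python
# With a hinted Optional return, the checker forces you to handle the None case.
from typing import Optional

def find_user(user_id: int) -> Optional[str]:
    users = {1: "alice", 2: "bob"}
    return users.get(user_id)

name = find_user(3)
# print(name.upper())    # mypy: error, "name" may be None
if name is not None:
    print(name.upper())  # fine once None has been ruled out
```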
It's not possible to have objects that are statically typed.
Wut
Coming from a background where all the data types are fixed and static (C, PLCs), it took me so very long to get used to Python's willy-nilly variables where everything just kinda goes, until it doesn't. Then it breaks, but it would've been fine if we just damn knew what these variables were.
Now my brain just goes “it’s all just strings”
Now my brain just goes “it’s all just strings”
Dynamic typing does kinda smell like primitive obsession, now that you’ve brought it to my attention lol
Do you mean type coercion?
Dyac
The hill I am willing to die on is: FUCK AI. I'll be dead before I let it write a single line of code.
I don't let it write code per se, but I've found it useful for writing regex for me to paste into Notepad++ find/replace commands.
There are a load of things in IT where using a processor is the wrong choice, and using an FPGA instead would have made a lot of problems a non-issue.
Is that controversial? I’ve always assumed people avoid FPGAs just because they’re unfamiliar with them.
Tell that to the people who think they will soon replace this expensive and complicated FPGA stuff with something running on a cheap MPU programmed by an intern. For thirty years now…
Yikes.
Welcome to my world.
I think it’s because FPGAs are an intermediate to just making your own ASIC.
If you're at a scale where making a new ASIC is your go-to, congratulations on your job at Google or Apple. I don't even know if FaceMeta would do that. Designing and fabricating a new chip is a whole thing.
If you're using modern fabrication techniques, a couple of 10 µF MLCC capacitors in small packages are just as good as the traditional decade of capacitors (10 µF, 1 µF, 0.1 µF) for decoupling in pretty much every situation, and you have fewer part varieties to worry about on your bill of materials.
If you don't understand that development, security, and operations are all one job, you will constantly make crap and probably point at some other team to make excuses about it, but it will actually be your fault.
Programs have to run. They have to be able to change to meet needs. Implementing working security measures is one of those needs.
The number of times I've had to slap devs' hands because they wanted to just disable security, or remind security that just shutting things down is denial of service, is crazy. If it can't deploy, or is constantly down, or uses a stupid amount of resources, it's also worthless, no matter what it looked like for the split second you ran it on the dev machine.
The next patch isn't going to fucking fix it if no one who writes patches knows about the damn issue. Workarounds are hidden technical debt, and you have to assume they will fucking break on some update later. If you are not updating because updates break your unreported workarounds, you will get ignored by the devs at some point, and they are right to do so.
If you depend on something, communicate with the team that works on it. We can send a fucking petabyte of info around the world and to the moon and back before some people will write a fucking ticket, email, or even an IM. Look dumb and ask the stupid question rather than be an actual idiot who leaves something broken for the next decade. We're all dumb, it's why we built computers, so get over it and just talk to people. If you really struggle with it, don't just communicate, try to over-communicate: say an obvious thing now and again just to keep the dialogue open and test that you're really on the same page.
That's my rant/hill, borne from ulcers earned supporting crappy IT orgs and from having to overcome my own shortcomings to actually say something in the channels where things can actually change, instead of just griping about it in private.
We’re all dumb, it’s why we built computers,
I love that.
I don’t know if the basic idea that it’s okay to look dumb will ever catch on, though. There’s a lot of self interest and direct ego motive going against it.
And it's a balance, too; self-interest and ego get a lot done. It's just getting overprotective of ego, or too self-interested (very hard in an economy where a lot of employers are straight-up conmen), that leads to these pain points.
I do find that the rooms where I've had the pleasure of sitting with the smartest people in a field were always full of reasonably humble people.
Okay, I'm pretty late to the party, but here we go. My field is illustration and art, and color theory in particular is something that is all too often taught plainly wrong. I think it was in the 1950s that Johannes Itten introduced his book on color theory. In this book, he states that there are three "Grundfarben" (base colors) that will mix into every color. He explained this model with a color ring that you will still find almost anywhere. This model, and the claim that there are three base colors, is wrong.
There are different angles from which you can approach color mixing in art, and it always depends on what you want to do. When we speak about colors, we actually mean the experience that we humans have when light rays fall into our eyes. So it's actually a perceptual phenomenon, which means there are small statistical differences from individual to individual. For example, a greenish blue might be a little more green for one person and a little more blue for another.
Every color, however, has its opposite color. Everybody can test this: look into a red (not too bright) light for some time and then at a white wall. The color you will see is the opposite. The two cancel each other out and become white/neutral.
Itten's color model, however, is not based on perception. In this model yellow is opposed to violet, which might mix to a neutral color with pigments but not with light rays. But even that doesn't work a lot of the time. I mean, even his book is printed in six colors, even though his three base colors are supposedly enough to produce every color…
Throughout history, plenty of color models have been less than correct, of course. What is so infuriating in Itten's case is that he plainly ignored the more accurate color theory that already existed (by Albert Henry Munsell) and created his own with whatever rules he believed to be correct.
Even today, this model and its rules are taught at art schools, and you can see his color circle plastered all over the internet.
Tldr: Johannes Itten's color model is wrong, even though it's almost everywhere.
(Added tldr)
Fun fact:
OKLab, which was created recently by Björn Ottosson as a hobby project, is a pretty accurate perceptual color space. It is open source and has been adopted by Photoshop for black-and-white conversion.
I kinda hope painting apps will also implement it as a standard model for color pickers.
Professionally: Waterfall release cycle kills innovation, and whoever advocates it should be fired on the spot. MVP releases and small, incremental changes and improvements are the way to go.
Personally: Don't use CSS if tables do what you need. Don't use JavaScript for static web pages. Don't overcomplicate things when building websites.
Use tables for presenting tabular data, not for layout of non-tabular data.
If you'd put it in a relational DB or a spreadsheet, then a table is fine.
It’s also bad for accessibility.
Don’t use CSS if tables do what you need.
As a web dev, please don’t. Use a table if you have data that should be (re)presented. Don’t use tables for layout. Please use semantic HTML elements, for the love of accessibility.
Weird i haven’t seen this one yet: the cloud is just someone else’s computers.
…which are much more secure than yours ?
If you’re selfhosting, the cloud is your someone else’s computer ;)
It is, but I’m ready to officially throw in the towel and embrace the fact that running your own hardware is not much more than a hobby these days. I’ve preached and preached the value of multi or hybrid cloud, only for the people with money to pour it down the same hole time and time again.
I've always said IT is essentially an entirely CYA-driven industry. Having someone to blame is more valuable to them than uptime, and if they can show their outages, even if the numbers suck, were not their fault (easy to do when all your competitors are down at the same time), it's all good…
Update- lol, YouTube is currently down.
Geopolitics is kind of coming to the rescue, since it’s bad if your server is subject to a hostile power’s laws. Although it remains to be seen if there’s fundamental change, or just what we call in Canada “maplewashing”.
Hardly a hot take really…
OP didn’t really ask for a hot take…
It was kind of implied, though.
How do you die on a hill if nobody’s fighting you? Is it just a hill suicide? That wasn’t in any war I’ve read about. I guess Life of Brian had something a bit like that.
Dying on the hill doesn’t mean it has to be controversial or a “hot take” IMO, but whatever.
I fucking hate AI in HR/hiring. I try so hard not to spread my personal data to LLMs/AI ghouls, and the moment I apply for a job I need to survive, I have to accept that the HR department's AI sorting hat now knows a shit ton about me. I just hope these are closed systems. If anyone from an HR department knows more, please let me know.
I'm lucky in that I've been in the same job for ages (since before AI), so I haven't had to deal with this yet, but a friend of mine was using AI to write his resume recently, and I had the thought that the resume is probably being written by an AI, then sent to another AI to read, and that you could conceivably get a job with a resume that no human has ever entirely read. Probably not an original thought, but it had never occurred to me before lol.
You could also starve in the street after your résumé is rejected by several levels of LLMs, never having had human eyes land on it once.
Yeah probably the more likely outcome.









