Fess up. You know it was you.
One time I was deleting a user from our MySQL-backed RADIUS database.
DELETE FROM PASSWORDS;
And yeah, if you don’t have a WHERE clause? It just deletes everything. About 60,000 records for a decent-sized ISP.
That afternoon really, really sucked. We had only ad-hoc backups. It was not a well-run business.
Now when I interview sysadmins (or these days devops), I always ask about their worst cock-up. It tells you a lot about a candidate.
Always skeptical of people that don’t own up to mistakes. Would much rather they own it and speak to what they learned.
This is what I was told when I started work. If you make a mistake, just admit to it. They most likely won’t punish you for it if it wasn’t out of pure negligence
It’s difficult because you have a 50/50 chance of getting a manager who doesn’t respect mistakes and will immediately get you fired for it (to the best of their abilities), versus one who considers such a mistake to be very expensive training.
I simply can’t blame people for self-defense. I interned at a ‘non-profit’ where there had apparently been a revolving door of employees being fired for making entirely reasonable mistakes, and looking back at it a dozen years later, it’s no surprise that nobody was getting anything done in that environment.
Incredibly short-sighted, especially for a nonprofit. You just spent some huge amount of time and money training a person to never make that mistake again; why would you throw that investment away?
Exactly!
I was a sysadmin in the US Air Force for 20 years. One of my assignments was working at the headquarters for AFCENT (Air Forces Central Command), which oversees every deployed base in the middle east. Specifically, I worked on a tier 3 help desk, solving problems that the help desks at deployed bases couldn’t figure out.
Normally, we got our issues in tickets forwarded to us from the individual base’s Communications Squadron (the IT squadron at a base). But one day, we got a call from the commander of a base’s Comm Sq. Apparently, every user account on the base had disappeared and he needed our help restoring accounts!
The first thing we did was dig through server logs to determine what caused it. No sense fixing it if an automated process was the cause and would just undo our work, right?
We found one Technical Sergeant logged in who had run a command to delete every single user account in the directory tree. We sought him out and he claimed he was trying to remove one individual, but accidentally selected the tree instead of the individual. It just so happened to be the base’s tree, not an individual office or squadron.
As his rank implies, he’s supposed to be the technical expert in his field. But this guy was an idiot who shouldn’t have been touching user accounts in the first place. Managing user accounts is an Airman’s job: a simple job given to our lowest-ranking members as they’re learning how to be sysadmins. And he couldn’t even do that.
It was a very large base. It took 3 days to recover all accounts from backup. The Technical Sergeant had his admin privileges revoked and spent the rest of his deployment sitting in a corner, doing administrative paperwork.
BEGIN TRAN
ROLLBACK TRAN
This. My comment was going to be “what kind of maniac uses auto commit?”
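For anyone who hasn’t seen that habit spelled out, here’s a minimal sketch (T-SQL syntax; the table and WHERE clause are invented for illustration):

BEGIN TRAN;

DELETE FROM passwords
WHERE user_id = 12345;

-- Sanity-check how many rows the DELETE actually hit.
SELECT @@ROWCOUNT AS rows_deleted;

-- Keep ROLLBACK as the default; rerun with COMMIT TRAN only once the count looks right.
ROLLBACK TRAN;

Run that way, you get to see the damage before it becomes permanent; autocommit gives you no such second chance.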
I always write the WHERE clause first, since a fuck-up in my early 20s lost a loans company £40k of business.
My trick is writing it as a SELECT statement first, making sure it’s returning the right number of records, and then switching out the SELECT for DELETE. Hasn’t steered me wrong yet.
This.
The hero we don’t deserve.
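To make that trick concrete, a quick sketch (the table and WHERE clause are invented; the point is that the filter never changes between the two runs):

-- Step 1: run it as a SELECT and eyeball the rows that come back.
SELECT * FROM passwords WHERE last_login < '2015-01-01';

-- Step 2: once the result looks right, swap SELECT * for DELETE and leave the WHERE clause untouched.
DELETE FROM passwords WHERE last_login < '2015-01-01';

It pairs nicely with the BEGIN TRAN habit above: select, delete, check the row count, then commit.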
I worked for a company where the testing database was also the only backup.
Accidentally deleted an entire column in a police department’s evidence database early in my career 😬
Thankfully, it only contained filepaths that could be reconstructed via a script. But I was sweating 12+1 bullets. Spent two days rebuilding that.
And if you couldn’t reconstruct, you still had backups, right? … right?!
What the fuck is a “backups”?
He’s the guy that sits next to fuckups
Oh sweet summer child
deleted an entire column in a police department’s evidence database
Based and ACAB-pilled
Did you know that “Terminate” is not an appropriate way to stop an AWS EC2 instance? I sure as hell didn’t.
Explain more?
Noob was told to change some parameters on an AWS EC2 instance, requiring a stop/start. Selected terminate instead, killing the instance.
Crappy company, running production infrastructure in AWS without giving proper training or securing a suitable backup process.
“Stop” is the AWS EC2 verb for shutting down a box, but leaving the configuration and storage alone. You do it for load balancing, or when you’re done testing or developing something for the day but you’ll need to go back to it tomorrow. To undo a Stop, you just do a Start, and it’s just like power cycling a computer.
“Terminate” is the AWS EC2 verb for shutting down a box, deleting the configuration and (usually) deleting the storage as well. It’s the “nuke it from orbit” option. You do it for temporary instances or instances with sensitive information that needs to go away. To undo a Terminate, you weep profusely and then manually rebuild everything; or, if you’re very, very lucky, you restore from backups (or an AMI).
Apparently Terminate means stop and destroy. Definitely something to use with care.
Maybe there should be some warning message… Maybe a question requiring you to manually type “yes I want it” or something.
Maybe an entire feature that disables it so you can’t do it accidentally, call it “termination protection” or something
It doesn’t help that the web UI used to hide Stop. I think it still does.
I didn’t call out a specific dimension on a machined part; instead of making it explicit, I left it to the machinist to figure out what needed to be done.
That part was a 2 ton forging with two layers of explosion-bonded cladding on one side. The machinist faced all the way through a cladding layer before realizing something was off.
The replacement had a 6 month lead time.
That’s hilarious, actually. Pretty recently I “caused” a line stop because a marker feature (for visuals at assembly, so a pretty meaningless dimension overall) was very much over-dimensioned (we’re talking depth, rad, width, location from the step), and to top it off, instead of a spot drill just doing a .01 plunge, they interpolated it (why, I have zero clue). So it had been leaving dwell marks for at least the past 10 months, and because the feature was over-dimensioned, all of those parts had to be put on hold, because the DoD demands perfection (aircraft engine parts).
It was the bad old days of sysadmin, where literally every critical service ran on an iron box in the basement.
I was on my first on-call rotation. Got my first call from the help desk: Exchange was down, it was 3 AM, and the on-call backup and the Exchange SMEs weren’t responding to pages.
Now I knew Exchange well enough, but I was new to this role and this architecture. I knew the system was clustered, so I quickly pulled the documentation and logged into the cluster manager.
I reviewed the docs several times, we had Exchange server 1 named something thoughtful like exh-001 and server 2 named exh-002 or something.
Well, I’d reviewed the docs, and the help desk and stakeholders were desperate to move forward, so I initiated a failover from clustered mode with 001 as the primary to unclustered mode, pointing directly at server 10.x.x.xx2.
What’s that you ask? Why did I suddenly switch to the IP address rather than the DNS name? Well that’s how the servers were registered in the cluster manager. Nothing to worry about.
Well… Anyone want to guess which DNS name 10.x.x.xx2 was registered to?
Yeah. Not exh-002. For some crazy legacy reason the DNS names had been remapped in the distant past.
So anyway, that’s how I turned a 15-minute outage into a 5-hour one.
On the plus side, I learned a lot and didn’t get fired.
I once “biased for action” and removed some “unused” NS records to “fix” a flakey DNS resolution issue without telling anyone on a Friday afternoon before going out to dinner with family.
Turns out my fix did not work and those DNS records were actually important. Checked on the website halfway into the meal and freaked the fuck out once I realized the site had gone from resolving 90% of the time to not resolving at all. The worst part was that when I finally got the guts to report on the group channel that I’d messed up, DNS was somehow still resolving for both our internal monitoring and for everyone else who tried it manually. My issue got shooed away, and I was left there not even sure what to do next.
I spent the rest of my time on my phone, refreshing the website and resolving domain names in an online Dig tool over and over again, anxiety growing, knowing I couldn’t do anything to fix my “fix” while I was outside.
Once I came home I ended up reversing everything I did which seemed to bring it back to the original flakey state. Learned the value of SOPs and taking things slow after that (and also to not screw with DNS).
If this story has a happy ending, it’s that we did eventually fix the flakey DNS issue later, going through a more rigorous review this time. On the other hand, how and why I, a junior at the time, became the de facto owner of an entire product’s DNS infra remains a big mystery to me.
Hopefully you learned a rule I try to live by despite not listing it: “no significant changes on Friday, no changes at all on Friday afternoon”.
"Man who deployed Friday, works Saturday. "
I spent over 20 years in the military in IT. I took down the network at every base I was ever at, each time finding a new way to do it. Sometimes, but rarely, intentionally.
Took out a node center by applying the patches GD recommended… it took an entire weekend to restore all the shots, and my ass got fed three-quarters of the way into the woodchipper before it came out that the vendor was at fault for this debacle.
Updated WordPress…
Previous Web Dev had a whole mess of code inside the theme that was deprecated between WP versions.
Fuck WordPress for static sites…
I fixed a bug and gave everyone administrator access once. I didn’t know that bug was… in use (is that the right way to put it?) by the authentication library. So every successful login request, instead of getting back the user who had just logged in, got back the first user in the DB: “admin”.
Had to take down prod for that one. In my four years there, that was the only time we ever took down prod without an announcement.
UPDATE without a WHERE.
Yes in prod.
Yes it can still happen today (not my monkey).
Yes I wrap everything in a rollback now.
I did something similar. It was a list box with a hidden first row representing the id. Somehow the header row got selected and an update with “WHERE id = id” got run.
I did this once. But only once. The panic I felt in that moment is something I will never forget. I was able to restore the data from a recent backup before it became a problem, though.
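In the same spirit, the “wrap everything in a rollback” habit for an UPDATE might look something like this (T-SQL syntax; the table, column, and expected row count are made up):

BEGIN TRAN;

UPDATE accounts
SET status = 'disabled'
WHERE account_id = 42;

-- Commit only if exactly the one expected row was touched; otherwise back out.
IF @@ROWCOUNT = 1
    COMMIT TRAN
ELSE
    ROLLBACK TRAN

It won’t catch everything, but a WHERE-less UPDATE that suddenly reports thousands of rows gets rolled back instead of committed.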
Plugged a serial cable into a UPS that was not expecting RS232. Took down the entire server room. Beyoop.
That’s a common one I’ve seen on r/sysadmin.
I think APC is the company with the stupid issue.
Took down the entire server room
ow, goddamn…
You don’t have two unrelated power inputs? (UPS and regular power)
This was 2001 at a shoestring dialup ISP that also did consulting and had a couple small software products. So no.
I took down an ISP for a couple of hours because I forgot the ‘add’ keyword at the end of a Cisco configuration line.
That’s a rite of passage for anyone working on Cisco’s shit TUI. At least it’s gotten better with some of the newer stuff; IOS-XR supports commits and diffing.
- Create a database,
- Have the organisation manually populate it with lots of records using a web app,
- Accidentally delete the database.
All in between backups.
“acknowledge all” used to behave a bit differently in Cisco UCS Manager. Well, at least the notifications of pending actions all went away… because they were no longer pending.
It wasn’t the “worst” in terms of how much time it wasted, but the worst in terms of how tricky it was to figure out. I submitted a change list that worked on my machine as well as on 90% of the build farm and most other dev and QA machines, but threw a baffling linker error on the remaining 10%. It turned out that the change worked fine on any machine that used to have a particular old version of Visual Studio installed, even though we no longer used that version and had phased it out for a newer one. The code I had written depended on a library that was no longer in current VS installs but got left behind when the old version was uninstalled. So only very new computers were hitting the error, mostly belonging to newer hires, who were the least equipped to figure out what was going on.
That reminds me of when some of my former colleagues and I were at a training on programming an industrial camera system that judges the quality of produced parts. I’m not really a programmer, just a guy who can troubleshoot and google stuff, and occasionally hack together some simple code with heavy help from Google too.
The guy was a German programmer (we’re Czech and we communicated in English) who coded the whole thing in Omron software, but he also wrote his own plugin for it. All was well when he was showing it to us on the big screen, but when he sent us the program file so we could experiment with it (changing parameters, adding steps to the flow…), the app would crash. I finally delved into the app logs and, with the help of Google, found it was because he had compiled his plugin with debug flags; it worked for him because he had the VS debug DLLs installed, but we didn’t.
I feel a repressed memory or two stirring 😐