
When you're having technical problems, turning the device off and on again is almost always the smartest first step to try.
What would you do if machines all over a building needed attention, but some of them were sitting behind locked doors?
That is what happened to the IT team in this story, so they went to the circuit breakers, cut power to the whole building, and switched it back on, which fixed the issue.
Weird problems require even weirder solutions.
Remember the Heartbleed bug?
That nasty vulnerability in the OpenSSL library that caused some hectic days back in 2014?
For our company, that bug came at a very unfortunate moment: The regulatory agency responsible for us had ordered a security audit just then – and passing it was critical.
It should all run smoothly.
In theory, getting all our devices in order for the audit’s vulnerability check should’ve been a breeze. 90% of our user devices consisted of custom Linux thin clients, with a very streamlined deployment process:
- Get update files,
- Push update to test group,
- Validate it,
- Deploy image files to production,
- All devices update themselves automatically on their next reboot (a rough sketch of this step follows the list).
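For readers curious what that "update on next reboot" step might look like in practice, here is a minimal sketch, assuming each client compares its local image version against a version file on an internal deployment server at boot. The server URL, file paths, and the imaging command are all hypothetical and not from the original story.

```python
# Hypothetical sketch of the "update on next reboot" step for a Linux thin client.
# Hostnames, paths, and the imaging command below are made up for illustration.
import subprocess
import urllib.request

DEPLOY_SERVER = "http://deploy.internal.example/thinclient/current_version"  # assumed URL
LOCAL_VERSION_FILE = "/etc/thinclient-image-version"                         # assumed path


def read_local_version() -> str:
    """Version of the image currently installed on this client."""
    with open(LOCAL_VERSION_FILE) as f:
        return f.read().strip()


def read_published_version() -> str:
    """Version currently published by the deployment server."""
    with urllib.request.urlopen(DEPLOY_SERVER, timeout=10) as resp:
        return resp.read().decode().strip()


def main() -> None:
    local, published = read_local_version(), read_published_version()
    if local != published:
        # Pull and apply the new image before the login screen comes up.
        # "apply-thinclient-image" stands in for whatever imaging tool was used.
        subprocess.run(["apply-thinclient-image", published], check=True)


if __name__ == "__main__":
    main()
```

Run something like this from an early boot service and, as the story describes, any machine switched on in the morning is patched before anyone logs in.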
When users follow instructions, there aren’t issues.
This worked great for all machines that had been powered off: when users came in and switched them on, the clients updated themselves before login and were current for the audit that same morning.
Those that users had left running at the end of their workday would've just required a remotely triggered reboot. Due to a freak coincidence, however, the current OS build suffered from a previously undiscovered bug that prevented any remote shutdown or reboot command from executing reliably; a sweep like the one sketched below simply wouldn't run.
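Normally, kicking off that reboot remotely is trivial. Here is a rough sketch of what such a sweep might look like, assuming key-based SSH access and passwordless sudo on the clients; the hostnames and options are illustrative, not from the original post. It was exactly this kind of command that the bug broke.

```python
# Minimal sketch of a remote reboot sweep over a list of thin clients.
# Assumes key-based SSH and passwordless sudo; hostnames are hypothetical.
import subprocess

HOSTS = ["thinclient-001", "thinclient-002"]  # in reality, hundreds of clients

for host in HOSTS:
    # The bug described above meant commands like this no longer ran reliably.
    result = subprocess.run(
        ["ssh", "-o", "ConnectTimeout=5", host, "sudo reboot"],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        print(f"{host}: reboot command failed ({result.stderr.strip()})")
```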
Finding those issues during a major upgrade like this can be hectic.
So we urgently needed to find a solution, or we'd be left with a serious number of vulnerable devices in the fleet!
Brainstorming within our team led to the conclusion that manually tracking down and rebooting every one of the hundreds of thin clients left running was too time-consuming and too prone to human error.
Well, this makes things even more difficult.
Some machines were also locked behind office doors that IT had no key for. Then one of us had a brainwave:
“Hang on – aren’t those machines set up with ‘Restore on Power Loss = Last State’ in the BIOS?”
Now that is some creative thinking.
You know what IT did have a key for? The main facilities room which housed the central power breakers for our HQ.
Power cycling the whole building did the trick: All previously running thin clients powered back up and fetched the update.
They did what they had to do to get the work done.
By the time the auditor arrived in the morning, 100% of our fleet was current with the Heartbleed fix, and we passed with flying colors.
Sometimes IT people have to get creative on how to overcome a problem.
Let’s see what the people in the comments have to say about it.
I like the way this commenter thinks.
Yup, the ultimate help desk move.
This commenter had the same thought.
This is next level.
Have you tried turning it off and on?
If you liked that story, check out this post about an oblivious CEO who tells a web developer to “act his wage”… and it results in 30% of the workforce being laid off.