
I had a colleague who built a bunch of smaller systems for the company I work at. He didn't want to waste his time building a "perfect" system (which I generally agree with; the question is just where to draw the line).

But because building the prototype took him so long, it usually went into production without being hardened (basic input validation was missing, for example. It wouldn't allow anything malicious, but instead of a validation error it'd just 500).

When he left, literally less than a week later, one of his systems started crashing. It was a prototype that nobody except him could maintain, because it was built on a fancy new technology that wasn't even at v1 at the time; its own documentation said it would be production ready when v1 was released. Another dev and I tried to fix it, but every time we touched it, it just got worse.

At some point, we gave up and just configured a cron job to restart it every 12h. He could probably have fixed it, but to us it was just black magic.
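For the curious, the workaround was nothing fancier than a crontab entry. A minimal sketch, assuming a systemd service (the service name here is made up):

# restart the flaky service at minute 0, every 12 hours
0 */12 * * * systemctl restart frankenstein.service

Bumping the frequency later just meant changing */12 to */8, */6, and so on.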

Anyhow, this rant isn't about him. AFAIK all his systems are still working, as long as you provide the correct input. Nor is it about the management decisions which led to this Frankenstein service being on life support, which we had to increase to a restart every 8 hours, then 6h, 4h, 3h, .....

It's about the service itself. Every day I look forward to the rewrite being done so I can nuke the whole git repository.

I was even thinking about moving all the related files onto a USB stick and setting it on 🔥 once we're done rewriting it....

Maybe next month, or in two. Hopefully before we have to configure the cron job to restart the service every couple of minutes....
