
So we have an API that my team is supposed to send messages to in a fire-and-forget kind of style.

We are dependent on it. If it fails, there is some annoying manual labor involved in cleaning up the mess. (If it even can be cleaned up; sometimes it is also time-sensitive.)

Yet once in a while, that endpoint just swallows the request. No response, no error, nothing; it is just gone.
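(For what it's worth, even a fire-and-forget client can at least notice a vanished request with a timeout and hand the payload back for replay, instead of leaving manual cleanup. A minimal sketch in Python with requests; the endpoint URL, payload shape, and retry counts are my own assumptions, not the real API:)

```python
import logging
import requests

log = logging.getLogger(__name__)

# Hypothetical endpoint; the actual API in question is not named.
ENDPOINT = "https://internal-api.example.com/messages"

def send_with_retry(payload: dict, retries: int = 3, timeout: float = 5.0) -> bool:
    """Fire the message, but don't forget it: time out, retry,
    and report failure so the caller can queue the payload for
    replay rather than clean it up by hand later."""
    for attempt in range(1, retries + 1):
        try:
            resp = requests.post(ENDPOINT, json=payload, timeout=timeout)
            resp.raise_for_status()
            return True
        except requests.RequestException as exc:
            # A "vanished" request surfaces here as a timeout or connection error.
            log.warning("attempt %d/%d failed: %s", attempt, retries, exc)
    return False
```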

Digging through that API's log files, nothing pops up. But then I notice the size of the log files: about 30 GB of good old plain-text logs.

It turns out that API has taken the LOG EVERYTHING approach so much to heart that it logs itself to death.

Is circular logging such bleeding-edge technology? It's not like there aren't external solutions for it either, like Loggly or Kibana. But oh, one might have to pay for those. So: just dump it all to disk :/
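(It really isn't bleeding edge; Python's standard library, for one, has shipped rotating log handlers forever. A rough sketch with made-up file names and size caps:)

```python
import logging
from logging.handlers import RotatingFileHandler

# Cap the logs at ~500 MB total: 1 active file + 4 backups, 100 MB each,
# with the oldest file overwritten instead of the disk filling up.
handler = RotatingFileHandler(
    "api.log",                   # hypothetical file name
    maxBytes=100 * 1024 * 1024,  # rotate after 100 MB
    backupCount=4,               # keep 4 old files besides the active one
)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))

root = logging.getLogger()
root.addHandler(handler)
root.setLevel(logging.INFO)  # and maybe not DEBUG-everything
```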

This is, once again, a combination of developers thinking "I don't need to care about disk space! It's cheap!" and managers thinking "100 GB should be enough for that server cluster. Let's cap its HDD at 100 GB and save some money!"

And then, here I stand trying to keep my sanity :/

Comments
Sounds like something a client of ours has. It logs everything. And this is an intranet for a large multinational corporation. Each day, about 15 GB of logs accumulate on the server, and it keeps them for 7 days.