Once I had a customer report that her computer was giving her out-of-disk-space errors. This was weird because we redirect their My Documents and Desktop to network file shares via script. Like, wtf could be using up the disk? While walking to her system I figured the drive was going bad. Nope.
Just a 250+ GB log file from a chat program that she used. Like OMG, that was amazing.
Can you mention which “chat program” was that?
it was an early version of Pidgin
Meanwhile sysadmins:
In my experience, if your logs are growing that fast for a reason, you’ll get to see it again… and again… and again. And show it to people going, “WTF, have you ever seen anything like this before?”
In my case Docker didn’t set a default max size for container logs, so they just grew and grew without bound. I also had the highest log level turned on to debug something, so it was constantly logging a ton of data.
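For reference, current Docker versions do let you cap the default `json-file` log driver so this can’t happen; a minimal `/etc/docker/daemon.json` would look like this (the 10 MB / 3-file numbers are just example values):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```

The same options can be set per container with `docker run --log-opt max-size=10m --log-opt max-file=3 …`. Note this only applies to containers created after the daemon config changes.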
I’ve had that happen with database logs at a place where I used to work, back in 2015–2016.
The reason was a very shitty system that, for some reason, threw around 140 completely identical delete queries per millisecond. When I say completely identical, I mean it. It’d end up something like this in the log:
2015-10-22 13:01:42.226 = delete from table_whatever where id = 1 and name = 'Bob' and other_identifier = '123';
2015-10-22 13:01:42.226 = delete from table_whatever where id = 1 and name = 'Bob' and other_identifier = '123';
2015-10-22 13:01:42.226 = delete from table_whatever where id = 1 and name = 'Bob' and other_identifier = '123';
-- repeated over and over with the exact same fucking timestamp, then repeated again with slightly different parameters and a different timestamp
Of course, “no way it’s our system, it handles too much data, we can’t risk losing it, it’s your database that’s messy”. Yeah, sure, like I set up triggers to repeat every fucking delete query. Fucking morons. Since they were “more important”, database logging was disabled.
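If you ever do need to prove which deletes are actually hitting the database, a row-level audit trigger is one way to capture them server-side. A minimal sketch with SQLite (the table and column names here are hypothetical, loosely modeled on the log excerpt above, not the original system’s schema):

```python
import sqlite3

# In-memory database standing in for the real server (schema is made up).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table_whatever (id INTEGER, name TEXT, other_identifier TEXT);
    CREATE TABLE delete_log (
        logged_at        TEXT DEFAULT CURRENT_TIMESTAMP,
        id               INTEGER,
        name             TEXT,
        other_identifier TEXT
    );

    -- Record every row that gets deleted, so there's evidence of what arrived.
    CREATE TRIGGER log_deletes AFTER DELETE ON table_whatever
    BEGIN
        INSERT INTO delete_log (id, name, other_identifier)
        VALUES (OLD.id, OLD.name, OLD.other_identifier);
    END;
""")

# Simulate the app firing the same delete over and over: each round inserts
# the row and deletes it again, and each deleted row lands in the audit log.
for _ in range(3):
    conn.execute("INSERT INTO table_whatever VALUES (1, 'Bob', '123')")
    conn.execute(
        "DELETE FROM table_whatever "
        "WHERE id = 1 AND name = 'Bob' AND other_identifier = '123'"
    )
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM delete_log").fetchone()[0]
print(count)  # 3 -- one audit row per deleted row
```

SQLite triggers fire once per affected row, so a statement that matches nothing logs nothing; bigger databases offer statement-level query logging on top of this, which is closer to what produced the log above.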
Just add
: > logfile
to crontab and run it once a minute, problem solved.
Yeah, I centralize all my server logs; they point to a nifty location called /dev/null. It’s so good at collection and compression that it never grows in size!
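Joking aside, the `: > logfile` trick genuinely truncates in place: `:` is the shell no-op builtin, and the redirect empties the file without unlinking it, which matters because deleting a file a daemon still has open doesn’t free the space. A quick demo with a throwaway file (the `/tmp/demo_app.log` path is just for illustration):

```shell
LOG="/tmp/demo_app.log"                 # throwaway path for the demo
seq 1000 > "$LOG"                       # simulate a log that has grown
echo "before: $(wc -c < "$LOG") bytes"

: > "$LOG"    # ':' does nothing; the redirect truncates the file in place
echo "after: $(wc -c < "$LOG") bytes"   # after: 0 bytes
```

As a crontab entry that would be something like `* * * * * : > /tmp/demo_app.log` (hypothetical path), though in practice logrotate with `copytruncate` is the less nuclear option.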