A timely reminder: if you have data that your business depends on, get it backed up properly. If you have personal data that you wouldn't like to lose, back that up too.
http://journalspace.com/ has been well covered in the media. It wasn't one of the big players on the web, but it had a sizeable user base. Visit the link to see what it has become now that its database has been wiped.
There are many ways to back up your data, depending on budget and 'continuity' requirements. For example, with a PHP/MySQL website such as this you'd certainly pull an image of the MySQL data down to some form of backup device. You might do this daily. But for disaster recovery, this form of backup requires you to build a new server from scratch (or at least the system software), then import your data, then spend the next six months finding and applying the performance tweaks that had taken you years to discover.
You're talking hours to days of downtime. But you will get back online.
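To make this concrete, here's a minimal sketch of that first level in Python. The database name, backup path and credential handling are placeholders (not anything JS actually ran), and in practice you'd also copy the resulting file off the machine:

```python
#!/usr/bin/env python3
"""Minimal nightly MySQL dump sketch - paths and names are illustrative only."""
import datetime
import gzip
import pathlib
import subprocess

BACKUP_DIR = pathlib.Path("/var/backups/mysql")  # hypothetical backup device mount
DB_NAME = "blog"                                 # hypothetical database name

def nightly_dump() -> pathlib.Path:
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    target = BACKUP_DIR / f"{DB_NAME}-{datetime.date.today().isoformat()}.sql.gz"
    # --single-transaction takes a consistent snapshot of InnoDB tables without
    # locking the site; credentials are assumed to live in ~/.my.cnf.
    dump = subprocess.run(
        ["mysqldump", "--single-transaction", DB_NAME],
        check=True, capture_output=True,
    )
    # Buffering the whole dump in memory is fine for a small site; stream it
    # for anything bigger.
    with gzip.open(target, "wb") as fh:
        fh.write(dump.stdout)
    return target

if __name__ == "__main__":
    print(f"wrote {nightly_dump()}")
```

Run that from cron once a night and you've covered the "my data still exists somewhere" base - nothing more.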
The next level is a complete snapshot of the server. This is most commonly done with tape backups - you place a complete copy of the server's file system on tape, perhaps once a week with daily incrementals, depending on your data sizes and backup overhead. All you need to do then is get server hardware similar to what you have lost and pull the image back onto the disks. Well, it's rarely that simple. But the expense of tape buys you time - you'll be back up and running in hours - and there are other important advantages, including robust offsite backup.
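As a flavour of the full-plus-incremental scheme, here's a rough sketch built on GNU tar's --listed-incremental snapshots. The source path, snapshot file and tape device are placeholders, and tape rotation, verification and getting the media offsite are all left out:

```python
#!/usr/bin/env python3
"""Weekly full / daily incremental sketch using GNU tar - paths are illustrative."""
import datetime
import pathlib
import subprocess

SOURCE = "/srv/www"                                   # what we want to protect
SNAPSHOT = pathlib.Path("/var/backups/srv-www.snar")  # tar's incremental state file
DEVICE = "/dev/nst0"                                  # tape drive (could be a file instead)

def run_backup() -> None:
    # On Sundays, remove the state file: with no snapshot to compare against,
    # GNU tar writes a full (level 0) backup; on other days it only archives
    # files changed since the last run.
    if datetime.date.today().weekday() == 6 and SNAPSHOT.exists():
        SNAPSHOT.unlink()
    subprocess.run(
        ["tar", "--create", f"--listed-incremental={SNAPSHOT}",
         "--file", DEVICE, SOURCE],
        check=True,
    )

if __name__ == "__main__":
    run_backup()
```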
After that, you get into standby servers and clusters. These increase your availability to the point where the failure of a single server is completely hidden from your users.
Note that these are cumulative backup systems - if you have a cluster, you also have tape backups of at least one typical machine. If you have tape, you also have MySQL dumps.
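And to give a flavour of the standby-server level: a replica only hides failures from your users if somebody notices when it stops replicating. Here's a trivial health-check sketch using the third-party pymysql library; the host, credentials and lag threshold are made up for illustration:

```python
#!/usr/bin/env python3
"""Replica health-check sketch - connection details are placeholders."""
import sys
import pymysql  # third-party MySQL client library

def replica_healthy(host: str) -> bool:
    conn = pymysql.connect(host=host, user="monitor", password="secret",
                           cursorclass=pymysql.cursors.DictCursor)
    try:
        with conn.cursor() as cur:
            cur.execute("SHOW SLAVE STATUS")
            status = cur.fetchone()
    finally:
        conn.close()
    if status is None:
        return False  # the server isn't configured as a replica at all
    lag = status["Seconds_Behind_Master"]
    return (status["Slave_IO_Running"] == "Yes"
            and status["Slave_SQL_Running"] == "Yes"
            and lag is not None and lag < 300)

if __name__ == "__main__":
    host = sys.argv[1] if len(sys.argv) > 1 else "db-standby.example.com"
    if replica_healthy(host):
        print("replica OK")
    else:
        print(f"WARNING: replica on {host} is broken or lagging")
        sys.exit(1)
```

Wire that into whatever monitoring you already have, and a dead standby stops being a nasty surprise on the day you need it.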
I have designed, tested and implemented backup solutions for a variety of situations, including PHP/MySQL hosting on both shared and dedicated systems, with PHP server clusters and MySQL replication. I'll shamelessly plug my services here - I'm not looking for full-time work, but if you've got a need for this sort of system administration consultancy for your small business or start-up, then give me a shout. You'll find an email address in the contact link at the bottom of the page.
In contrast to other blogs and news sites, I'd like to discuss the real reason behind JournalSpace's (JS's) failure: the technical problem of data loss seems to have been caused by a human, and the fundamental problem was certainly human.
JS's manager doesn't seem to be a very technical person. He or she didn't need to be; there were employees to look after that aspect. From the publicly available information it looks like this unfortunate businessman was taken for a ride by a thoroughly unprofessional idiot.
In business, you really need to trust your employees. It's an unfortunate truth that you can't. This is why computer networks get locked down, you can't install software on your work PC, you can't visit Facebook, you're not allowed in the server room, etc.
But there are some employees that you have to trust. The ones who look after the money. The ones who look after your data. If your company is a jeweller's, somebody is going to handle the diamonds - will it be the contract cleaner who comes in on Saturday mornings and is a different face every week?
JS has been exposed to a toxic and dangerous employee. I wish they'd name names so we know who to avoid. There are two aspects here:
1. The person was expected to be technically literate. If they're in charge, or the only IT guy there, they need to be better than average. Indeed, they reportedly boasted about being smart.
2. The person must be trustworthy. They have your future (your business data) in their hands; you must know that whatever happens, they will try their hardest to ensure its safety.
In JS's case, the person was neither, and unfortunately no alarm bells rang to warn that this might be the case.
Until, that is, it was too late. The employee was caught stealing, and on being ejected caused damage to the servers - I assume something along the lines of deleting system files. This behaviour from someone so trusted is shocking.
At this point, JS should have called in security experts to go over each and every system the employee had touched, but that's easy to say in hindsight. I have worked on post-attack forensics, rescuing systems which had been compromised and then patching the holes. In this case a strong security policy, covering new and existing employees, would also be needed.
I'm reminded of this from last year:
http://www.zedshaw.com/rants/rails_is_a_ghetto.html
Would you trust that person with everything you have worked for and built up? It's a shame that such people are not always so clearly marked out by publishing their own career suicide notes.