My daughter's boyfriend recently had a hard drive failure on his laptop. From a hardware perspective, the vendor met their obligation and swapped out the drive with a new, higher-capacity replacement. I helped him restore and update his OS and reload his typical apps, but it was time-consuming, and he was without his machine for a couple of days.
While he was thrilled to now have a 500 GB hard drive, I pointed out that this only meant he would lose even more data the next time he experienced a catastrophic failure, since he's one of the millions of users who do not regularly back up their data, even after enduring a significant data loss.
This posting isn't meant to call him out for poor data management practices. In fact, he's probably the rule rather than the exception. But it does lead to a larger, more salient point - how critical is your data, where is it located, and how do you get it back if something goes wrong?
In the old days, it was pretty easy - you saved everything on your C drive, and if you didn't maintain a copy elsewhere, when your machine died, your data died with it. Today, we live in a much more distributed environment. If you're using any sort of "cloud" computing platform, you might be surprised to learn that even if you don't store your email and data files on a local drive, they could disappear forever, and you'd be back in the information stone age.
Talk to T-Mobile Sidekick users who have gone days without access to their calendars, address books, and other information that many of us would consider critical to our day-to-day functioning. For some users, that data might be lost for all eternity, although recovery efforts are underway. Regardless, if you can't get to your data for days or weeks, what's your plan?
As more data-driven infrastructure moves "to the cloud", it's time to ask ourselves what, exactly, is keeping that cloud aloft. Whether it's a thin client on the desktop or a handheld device like an iPhone or BlackBerry, the heavy lifting of processing is performed off-device - usually in "the cloud" - with the results displayed locally. Significant portions of your data reside not on the device itself, but on back-end infrastructure that you never see and never have direct access to.
How is that back-end maintained from a business continuity - disaster recovery (BC-DR) perspective? In T-Mobile's case, it appears that there was no hot-swap available when the primary server - in this case, the ONLY server - failed, and no redundancy built in for exactly this scenario. Can you say single point of failure?
Back in my consulting days, any BC-DR planning started with a minimum recommendation: data needs to reside in production, in a backup, AND in a backup stored off-site. Without all three copies, it's virtually impossible to recover fully and quickly from a data or communications failure.
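To make that concrete, here's a minimal sketch of the three-copy rule in Python. The paths and the idea of simply timestamping full copies are placeholder assumptions - in practice you'd point these at your own drives and probably reach for a proper backup tool - but the shape is the point: one production copy, one local backup, one pushed off-site.

```python
# A minimal sketch of the "production, backup, off-site backup" rule.
# All three paths below are hypothetical placeholders - substitute your own.
import shutil
from datetime import datetime
from pathlib import Path

PRODUCTION = Path.home() / "Documents"           # where the data lives
LOCAL_BACKUP = Path("/backup/documents")         # e.g., a second internal drive
OFFSITE_BACKUP = Path("/mnt/offsite/documents")  # e.g., a mounted remote share

def snapshot(source: Path, destination_root: Path) -> Path:
    """Copy the source tree into a timestamped folder under destination_root."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    target = destination_root / stamp
    shutil.copytree(source, target)
    return target

if __name__ == "__main__":
    for destination in (LOCAL_BACKUP, OFFSITE_BACKUP):
        copied_to = snapshot(PRODUCTION, destination)
        print(f"Backed up {PRODUCTION} -> {copied_to}")
```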
Use Gmail or Google Docs? What's your plan if the Internet goes down where you are, or if Google stops responding? Do you maintain local replicas of your mail, calendar, and data? Do you even know how to do that?
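If not, here's one rough sketch of what a local mail replica could look like, using Python's standard imaplib against Gmail's IMAP server. The address and password are placeholders (the account needs IMAP access enabled, and likely an app-specific password), and a real archive would fetch incrementally rather than pulling everything each run:

```python
# A rough sketch of keeping a local copy of Gmail via IMAP.
# Credentials are placeholders; this naively re-downloads the whole inbox.
import imaplib
from pathlib import Path

USER = "you@gmail.com"     # placeholder address
PASSWORD = "app-password"  # placeholder credential
ARCHIVE = Path("mail_archive")
ARCHIVE.mkdir(exist_ok=True)

with imaplib.IMAP4_SSL("imap.gmail.com") as conn:
    conn.login(USER, PASSWORD)
    conn.select("INBOX", readonly=True)   # read-only: don't disturb the mailbox
    status, data = conn.search(None, "ALL")
    for num in data[0].split():
        msg_id = num.decode()
        status, msg_data = conn.fetch(msg_id, "(RFC822)")
        raw_message = msg_data[0][1]
        # Save each message as a standalone .eml file you can open offline
        (ARCHIVE / f"{msg_id}.eml").write_bytes(raw_message)
```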
If you use iTunes to sync your iPhone, what happens if your backup becomes corrupted, or your machine dies and takes iTunes with it? Not only have you lost your music, but also other pieces critical to maintaining business and personal communication. I don't know many people who make a backup of their iTunes data and store it away from their primary iTunes machine.
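Copying that backup somewhere safer doesn't take much. Here's a sketch, assuming the usual macOS location for iTunes device backups (the path differs on Windows, and the external-drive path is a placeholder):

```python
# A quick sketch of copying the iTunes device-backup folder to another drive.
# MobileSync path below is the typical macOS location; adjust for your setup.
import shutil
from datetime import datetime
from pathlib import Path

ITUNES_BACKUPS = Path.home() / "Library/Application Support/MobileSync/Backup"
EXTERNAL_DRIVE = Path("/Volumes/ExternalDrive/iphone-backups")  # placeholder

stamp = datetime.now().strftime("%Y%m%d")
target = EXTERNAL_DRIVE / stamp
shutil.copytree(ITUNES_BACKUPS, target)
print(f"Copied {ITUNES_BACKUPS} to {target}")
```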
When large companies like T-Mobile fall short and users are outraged at losing access to their stuff, it's perfectly acceptable to hold those companies accountable for delivering good BC-DR as part of their normal service offering. But if you are responsible for your own data management, how would you fare in a crisis?