
Converting an Existing System to RAID

Well, it’s time to move my SQL Ledger over to my Linux server (it’s been on my Mac laptop since I started with it), and to date I haven’t had RAID on that system.

One way to do this is to back up, re-install, use the installer to create the RAID set, and restore from backup.
But I wanted to minimize downtime and convert the system in place. Could it be done? How much downtime?
First Step: Install the new hard drive. I got a Maxtor 200GB disk from Outpost.com for $66. Maxtor isn’t my first choice in disks (I prefer Seagate), but this is a RAID-1 drive, so it can fail and I won’t care too much. It took longer to un-cable and re-cable the server than it did to install the new drive. 10 minutes of downtime.
The system comes back up, and I create new partitions on the new disk, the same size as the old partitions. I was originally going to put root and boot on RAID-1 and leave a swap partition on each disk, for performance. But a drive failure with active swap can be very bad, so I decided to forgo performance and put swap on RAID-1 as well.
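If you’d rather clone the layout than partition by hand, sfdisk can do it in one shot. Here’s a minimal sketch, assuming the old disk is /dev/hda and the new one is /dev/hdb (device names and partition numbers are illustrative; check yours with fdisk -l first):

    # Copy the partition layout from the old disk to the new one
    # (assumes old disk /dev/hda, new disk /dev/hdb -- adjust to your hardware)
    sfdisk -d /dev/hda | sfdisk /dev/hdb

    # Mark each partition on the new disk as "Linux raid autodetect" (type fd)
    # so the kernel can assemble the arrays at boot
    sfdisk --change-id /dev/hdb 1 fd
    sfdisk --change-id /dev/hdb 2 fd
    sfdisk --change-id /dev/hdb 3 fd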
Now, everything about software RAID has changed from Red Hat 8 to Fedora Core 3. It’s gotten better and easier, but different. Previously I would boot off a floppy or CD, create a raidtab, make a mirror set, and build first the old disk, then the new one, into the new RAID set. Update LILO and reboot. This method creates a perfect clone, with lots of downtime. Unfortunately, the documentation on kernel.org still describes this method even though the raidtools it relies on are long abandoned.
The better way to do it today is to create the RAID sets, define them as having a missing disk, and create them with only the new disk as a member. You copy the data from the existing disks onto the RAID set (use the rsync method I wrote about at http://blog.bfccomputing.com/index.php?p=25, re-running it on /var again and again until just before you reboot). Then there are some grub machinations to do, a new initrd, a reboot onto the RAID set, and finally you bring the old partitions into the RAID set; a sketch of the whole sequence follows below. There’s a small possibility of losing some data this way if there are changes just before you reboot, and that’s the downside of this method. But very high uptime is the advantage. You can go to a lower runlevel for the final sync to lock out network users, which all but eliminates this problem (maybe you’ll lose some final syslog entries).
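Here’s a condensed sketch of that sequence with FC3-era tools. The device names (/dev/hda old, /dev/hdb new) and partition layout (/boot on 1, swap on 2, / on 3) are assumptions for illustration, and the exact grub and mkinitrd invocations will vary with your setup:

    # 1. Create the arrays degraded, with the old disk listed as "missing"
    mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/hdb1   # /boot
    mdadm --create /dev/md1 --level=1 --raid-devices=2 missing /dev/hdb2   # swap
    mdadm --create /dev/md2 --level=1 --raid-devices=2 missing /dev/hdb3   # /

    # 2. Make filesystems and swap on the new arrays
    mke2fs -j /dev/md0
    mkswap /dev/md1
    mke2fs -j /dev/md2

    # 3. Copy the live system onto the arrays (the rsync method from the
    #    earlier post); repeat the rsync until just before the reboot
    mkdir -p /mnt/md2
    mount /dev/md2 /mnt/md2
    rsync -avx / /mnt/md2/
    mount /dev/md0 /mnt/md2/boot
    rsync -avx /boot/ /mnt/md2/boot/

    # 4. Point grub at the new disk and build an initrd that includes
    #    the raid1 module
    grub <<EOF
    root (hd1,0)
    setup (hd1)
    quit
    EOF
    mkinitrd -v /boot/initrd-raid.img $(uname -r)

    # 5. Drop to single-user to lock out network users, do the final
    #    rsync pass, point /etc/fstab and grub.conf on the arrays at
    #    the /dev/md* devices, then reboot onto the RAID set
    telinit 1

    # 6. Once running from the arrays, add the old partitions; the
    #    mirrors rebuild in the background while the system runs
    mdadm --add /dev/md0 /dev/hda1
    mdadm --add /dev/md1 /dev/hda2
    mdadm --add /dev/md2 /dev/hda3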
I found a great tutorial at http://wiki.clug.org.za/clugwiki/index.php/RAID-1_in_a_hurry_with_grub_and_mdadm. It assumes you’re a Linux admin, but you don’t have to already know the mdadm tool to follow it.
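Once the old partitions are added back in, you can watch the mirrors rebuild; for example (/dev/md2 is the root array from the sketch above):

    # Watch the resync progress of all arrays
    cat /proc/mdstat

    # Or get full detail on one array
    mdadm --detail /dev/md2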
So, I’m up on a completely RAID-1 system, and there was less than 15 minutes of downtime from the hardware install through three reboots.