Spare disks do not take part in the array until one of the active disks fails. Thus, spare disks add a nice extra safety margin, especially to RAID-5 systems that perhaps are hard to get to physically. One can allow the system to run for some time with a faulty device, since all redundancy is preserved by means of the spare disk.
You cannot be sure that your system will keep running after a disk crash though. Also, once reconstruction to a hot-spare begins, the RAID layer will start reading from all the other disks to re-create the redundant information. If multiple disks have built up bad blocks over time, the reconstruction itself can actually trigger a failure on one of the "good" disks. This will lead to a complete RAID failure. If you do frequent backups of the entire filesystem on the RAID array, then it is highly unlikely that you would ever get in this situation - this is another very good reason for taking frequent backups.
Remember, RAID is not a substitute for backups. The RAID layer handles device failures just fine: crashed disks are marked as faulty, and reconstruction is immediately started on the first spare disk available.
Faulty disks still appear and behave as members of the array. The RAID layer just treats crashed devices as inactive parts of the array.
If you are going after high performance, you should make sure that the busses to the drives are fast enough. Also, you should only have one device per IDE bus; IDE is really bad at accessing more than one drive per bus. And excellent performance can be achieved too. It all boils down to: All disks fail, sooner or later, and one should be prepared for that. Data integrity: Earlier, IDE had no way of assuring that the data sent onto the IDE bus would be the same as the data actually written to the disk.
This was due to a total lack of parity, checksums, etc. With the Ultra-DMA standard, IDE drives now do a checksum on the data they receive, and thus it becomes highly unlikely that data get corrupted. Performance: I am not going to write thoroughly about IDE performance here. The really short story is: IDE drives are fast, although they are not (as of this writing) found in 10,000 or 15,000 rpm versions. If a drive does fail, the RAID layer will mark the disk as failed, and if you are running RAID levels 1 or above, the machine should work just fine until you can take it down for maintenance.
Not only would two disks ruin the performance, but the failure of a disk often guarantees the failure of the bus, and therefore the failure of all disks on that bus. In a fault-tolerant RAID setup (RAID levels 1, 4, 5), the failure of one disk can be handled, but the failure of two disks (the two disks on the bus that fails due to the failure of the one disk) will render the array unusable.
Also, when the master drive on a bus fails, the slave or the IDE controller may get awfully confused. One bus, one drive, that's the rule. Considering the much lower price of IDE disks versus SCSI disks, an IDE disk array can often be a really nice solution if one can live with the relatively low number (around 8, probably) of disks one can attach to a typical system.
IDE has major cabling problems when it comes to large arrays. Even if you had enough PCI slots, it's unlikely that you could fit much more than 8 disks in a system and still get it running without data corruption caused by too long IDE cables.
Furthermore, some of the newer IDE drives come with a restriction that they are only to be used a given number of hours per day. Although hot swapping of drives is supported to some extent, it is still not something one can do easily. IDE doesn't handle hot swapping at all. Sure, it may work for you, if your IDE driver is compiled as a module (only possible in the 2.2 kernel series). But you may just as well end up with a fried IDE controller, and you'll be looking at a lot more down-time than just the time it would have taken to replace the drive on a downed system.
The main problem, except for the electrical issues that can destroy your hardware, is that the IDE bus must be re-scanned after disks are swapped. While newer Linux kernels do support re-scan of an IDE bus (with the help of the hdparm utility), re-detecting partitions is still something that is lacking. Normal SCSI hardware is not hot-swappable either. It may however work. If your SCSI driver supports re-scanning the bus, and removing and appending devices, you may be able to hot-swap devices.
However, on a normal SCSI bus you probably shouldn't unplug devices while your system is still powered up. But then again, it may just work - or you may end up with fried hardware.
If your SCSI driver dies when a disk goes down, your system will go with it, and hot-plug isn't really interesting then. With SCA, it is possible to hot-plug devices. Unfortunately, this is not as simple as it should be, but it is both possible and safe. The arguments to the "scsi remove-single-device" commands are: Host, Channel, Id and Lun. If you encounter problems or find easier ways to do this, please discuss this on the linux-raid mailing list.
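As an illustration, on kernels that provide the /proc/scsi/scsi interface, a device can be told to leave and re-join the bus like this; the "0 0 2 0" values are example Host/Channel/Id/Lun numbers, not a recommendation:

    # tell the kernel to forget the device at Host 0, Channel 0, Id 2, Lun 0
    echo "scsi remove-single-device 0 0 2 0" > /proc/scsi/scsi
    # ...physically swap the SCA drive, then announce the new one at the same address
    echo "scsi add-single-device 0 0 2 0" > /proc/scsi/scsi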
What you need: preferably a kernel from the 2.4 series (alternatively, a 2.0 or 2.2 kernel with the RAID patches applied), the RAID tools, and patience, pizza, and your favorite caffeinated beverage. Have a look at /proc/mdstat - remember it, that file is your friend. If you do not have that file, maybe your kernel does not have RAID support. It should tell you that you have the right RAID personality (e.g. RAID-1) registered, and that no RAID devices are currently active.
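For example, on a kernel with RAID support but no arrays running yet, /proc/mdstat might look something like this (the exact set of personalities depends on your kernel configuration):

    Personalities : [linear] [raid0] [raid1] [raid5]
    unused devices: <none>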
Issue a nice make install to compile and then install mdadm and its documentation, manual pages and example files. If you are running Gentoo, the package is available in the portage tree and you can simply run emerge mdadm. Other distributions may also have this package available. Now, let's go mode-specific. Linear mode: ok, so you have two or more partitions which are not necessarily the same size (but of course can be), which you want to append to each other. I set up a raidtab for two disks in linear mode.
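The original file is not reproduced here, but a representative /etc/raidtab for two disks in linear mode looks roughly like this (the partition names are just examples):

    raiddev /dev/md0
            raid-level            linear
            nr-raid-disks         2
            chunk-size            32
            persistent-superblock 1
            device                /dev/sdb6
            raid-disk             0
            device                /dev/sdc5
            raid-disk             1

With raidtools, the array would then be created with mkraid /dev/md0; the rough mdadm equivalent is mdadm --create /dev/md0 --level=linear --raid-devices=2 /dev/sdb6 /dev/sdc5.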
If a disk dies, the array dies with it. There's no information to put on a spare disk. You're probably wondering why we specify a chunk-size here, when linear mode just appends the disks into one large array with no parallelism. Well, you're completely right, it's odd. Just put in some chunk size and don't worry about this any more. The parameters speak for themselves. Next, RAID-0: you have two or more devices, of approximately the same size, and you want to combine their storage capacity and also combine their performance by accessing them in parallel.
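With mdadm, such a striped array might be created roughly like this (the device names and the chunk size are assumptions, not a prescription):

    # create a two-disk RAID-0 (striped) array with 32 kB chunks
    mdadm --create /dev/md0 --level=0 --raid-devices=2 --chunk=32 /dev/sdb1 /dev/sdc1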
RAID-0 has no redundancy, so when a disk dies, the array goes with it. Such a command should initialize the superblocks and start the RAID device; you should see (in /proc/mdstat) that your device is now running. Next, RAID-1: you have two devices of approximately the same size, and you want the two to be mirrors of each other. You may also have more devices, which you want to keep as stand-by spare disks, that will automatically become part of the mirror if one of the active devices breaks.
Ok, now we're all set to start initializing the RAID (see the sketch below). The mirror must be constructed, i.e. the contents of the two devices must be synchronized. Reconstruction is done using idle I/O bandwidth, so your system should still be fairly responsive, although your disk LEDs should be glowing nicely. The reconstruction process is transparent, so you can actually use the device even though the mirror is currently under reconstruction.
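For the record, the mirror could be created with mdadm along these lines (again, the device names and the optional spare are assumptions); the second command lets you watch the resync progress and its estimated time to completion:

    # create a two-disk mirror with one stand-by spare
    mdadm --create /dev/md0 --level=1 --raid-devices=2 --spare-devices=1 /dev/sdb1 /dev/sdc1 /dev/sdd1
    # watch the reconstruction progress
    cat /proc/mdstat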
Try formatting the device while the reconstruction is running. It will work. Also you can mount it and use it while reconstruction is running. Of course, if the wrong disk breaks while the reconstruction is running, you're out of luck. Now for RAID-4: note that I haven't tested this setup myself.
The setup below is my best guess, not something I have actually had up running. If you use RAID-4, please write to the author and share your experiences.
The requirements: you have three or more devices of roughly the same size, one device is significantly faster than the other devices, and you want to combine them all into one larger device, still maintaining some redundancy information. You may also have a number of devices you wish to use as spare disks. For RAID-5, the requirements are similar: you have three or more devices of roughly the same size, you want to combine them into a larger device, but still maintain a degree of redundancy for data safety. You may also have a number of devices to use as spare disks, that will not take part in the array before another device fails.
This "missing" space is used for parity redundancy information. Thus, if any disk fails, all data stay intact. But if two disks fail, all data is lost. A chunk size of 32 kB is a good default for many general purpose filesystems of this size. It holds an ext2 filesystem with a 4 kB block size. You could go higher with both array chunk-size and filesystem block-size if your filesystem is either much larger, or just holds very large files.
Ok, enough talking. Hopefully your disks start working like mad, as they begin the reconstruction of your array. If the device was successfully created, the reconstruction process has now begun.
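For reference, a minimal mdadm invocation for such a RAID-5 array might look like this (the device names, the number of disks and the spare are all placeholders):

    # three active disks, one spare, 32 kB chunks, RAID-5
    mdadm --create /dev/md0 --level=5 --raid-devices=3 --spare-devices=1 --chunk=32 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2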
Your array is not consistent until this reconstruction phase has completed. However, the array is fully functional (except for the handling of device failures, of course), and you can format it and use it even while it is reconstructing. Earlier, the RAID tools would read the /etc/raidtab file and then initialize the array, which required that the filesystem holding /etc/raidtab was mounted - unfortunate if you want to boot on a RAID. Also, the old approach led to complications when mounting filesystems on RAID devices. The persistent superblocks solve these problems.
This allows the kernel to read the configuration of RAID devices directly from the disks involved, instead of reading from some configuration file that may not be available at all times. The persistent superblock is mandatory if you want auto-detection of your RAID devices upon system boot.
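If you are curious, the persistent superblock stored on a component device can be inspected with mdadm (the partition name is just an example):

    # print the RAID superblock written on this component device
    mdadm --examine /dev/sdb1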
Autodetection is described in the Autodetection section. The chunk-size deserves an explanation. You can never write completely in parallel to a set of disks: if you had two disks and wanted to write a byte, you would have to write four bits on each disk - actually, every second bit would go to disk 0 and the others to disk 1.
Hardware just doesn't support that. Instead, we choose some chunk-size, which we define as the smallest "atomic" mass of data that can be written to the devices. A write of 16 kB with a chunk size of 4 kB, will cause the first and the third 4 kB chunks to be written to the first disk, and the second and fourth chunks to be written to the second disk, in the RAID-0 case with two disks.
Thus, for large writes, you may see lower overhead by having fairly large chunks, whereas arrays that primarily hold small files may benefit more from a smaller chunk size. Chunk sizes must be specified for all RAID levels, including linear mode. However, the chunk-size does not make any difference for linear mode. For optimal performance, you should experiment with the value, as well as with the block-size of the filesystem you put on the array. The chunk-size is given in kilobytes, so "4" means "4 kB". For RAID-0, data is written "almost" in parallel to the disks in the array.
Actually, chunk-size bytes are written to each disk, serially. If you specify a 4 kB chunk size, and write 16 kB to an array of three disks, the RAID system will write 4 kB to disks 0, 1 and 2, in parallel, then the remaining 4 kB to disk 0.
A 32 kB chunk-size is a reasonable starting point for most arrays. But the optimal value depends very much on the number of drives involved, the content of the file system you put on it, and many other factors. Experiment with it, to get the best performance.
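As a reminder of where this value is set: with raidtools it is the chunk-size line in /etc/raidtab (given in kilobytes), and with mdadm it is the --chunk option at creation time. A sketch, with placeholder device names:

    # /etc/raidtab excerpt: 32 means 32 kB
    #     chunk-size      32
    # mdadm equivalent when the array is created:
    mdadm --create /dev/md0 --level=5 --raid-devices=4 --chunk=32 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1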
The following tip was contributed by michael (freenet-ag). There is more disk activity at the beginning of ext2fs block groups. On a single disk, that does not matter, but it can hurt RAID-0, if all block groups happen to begin on the same disk.
With a 4 kB stripe size and 4 kB block size, each block occupies one stripe. With two disks, the product of stripe size and number of disks is 8 kB. The default block group size is 32768 blocks, so all block groups start on disk 0, which can easily become a hot spot, thus reducing overall performance.
Unfortunately, the block group size can only be set in steps of 8 blocks (32 kB when using 4 kB blocks), so you cannot avoid the problem by adjusting the block group size with the -g option of mkfs(8). If you add a disk, the stripe-times-disks product becomes 12 kB, so the first block group starts on disk 0, the second block group starts on disk 2 and the third on disk 1.
The load caused by disk activity at the block group beginnings then spreads over all disks. In case you cannot add a disk, try a stripe size of 32 kB. The stripe-times-disks product is then 64 kB.
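If you follow this route, the block group size has to be chosen to match. A hypothetical mke2fs invocation for the two-disk, 32 kB-stripe case might look like this (the device name is a placeholder, and the -g value is explained just below):

    # assumed: /dev/md0 is a two-disk RAID-0 with 32 kB chunks, 4 kB blocks
    mke2fs -b 4096 -g 32760 /dev/md0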
Since you can change the block group size in steps of 8 blocks (32 kB when using 4 kB blocks), using a block group size of 32760 blocks (as in the sketch above) solves the problem. Additionally, the block group boundaries should fall on stripe boundaries. That is no problem in the examples above, but it could easily happen with larger stripe sizes. For RAID-1, the chunk-size doesn't affect writes to the array, since all data must be written to all disks no matter what.
For reads however, the chunk-size specifies how much data to read serially from the participating disks. Since all active disks in the array contain the same information, the RAID layer has complete freedom in choosing from which disk information is read - this is used by the RAID code to improve average seek times by picking the disk best suited for any given read operation.
When a write is done on a RAID-4 array, the parity information must be updated on the parity disk as well. Updating a parity chunk requires either the original chunk, the new chunk and the old parity block, or all chunks except the parity chunk in the stripe (in the first case, the new parity is simply the old parity XOR'ed with the old and new data chunks). The RAID code will pick the easiest way to update each parity chunk as the write progresses.
The parity calculation itself is extremely efficient, so while it does of course load the main CPU of the system, this impact is negligible. If the writes are small and scattered all over the array, the RAID layer will almost always need to read in all the untouched chunks from each stripe that is written to, in order to calculate the parity chunk.
This will impose extra bus overhead and latency due to the extra reads. There is a special option available when formatting RAID-4 or -5 devices with mke2fs: the -R stride=nn option lets mke2fs place the ext2-specific data structures more intelligently on the RAID device. If the chunk-size is 32 kB, it means that 32 kB of consecutive data will reside on one disk. If we want to build an ext2 filesystem with a 4 kB block-size, we realize that there will be eight filesystem blocks in one array chunk.
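Putting those numbers together - 32 kB chunks and 4 kB blocks give a stride of eight blocks - the filesystem might be created like this (the md device name is assumed; newer e2fsprogs spell the option -E stride=8 instead of -R):

    # eight 4 kB filesystem blocks per 32 kB array chunk => stride=8
    mke2fs -b 4096 -R stride=8 /dev/md0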
I am unsure how the stride option will affect other RAID levels. If anyone has information on this, please send it in my direction. The ext2fs block size severely influences the performance of the filesystem. You should always use a 4 kB block size on any filesystem larger than a few hundred megabytes, unless you store a very large number of very small files on it. The following is about life with a software RAID system - that is, communicating with the arrays and tinkering with them.
Note that, when it comes to manipulating md devices, you should always remember that you are working with entire filesystems. So, although there may be some redundancy to keep your files alive, you must proceed with caution.
No mystery here: a quick look at the standard log and stat files is enough to notice a drive failure. When a disk crashes, though, lots of kernel errors are reported. Some nasty examples, for the masochists: kernel: scsi0 channel 0 : resetting for second half of retries.
Looking at /proc/mdstat won't hurt; let's learn how to read the file. The first number on an array's status line (the "n" in the [n/m] pair) is the number of disks a complete RAID device is defined to have. Let's say it is "n". The raid role numbers [#] following each device indicate its role, or function, within the raid set.
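An illustrative (made-up) /proc/mdstat excerpt for a two-disk mirror with one spare might look like this - here n is 2, sda1 and sdb1 hold roles 0 and 1, and sdc1, with role number 2, is the spare:

    md0 : active raid1 sdc1[2] sdb1[1] sda1[0]
          104320 blocks [2/2] [UU]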
Any device with "n" or higher are spare disks. Also, if you have a failure, the failed device will be marked with F after the [ ]. The spare that replaces this device will be the device with the lowest role number n or higher that is not marked F. Once the resync operation is complete, the device's role numbers are swapped. Finally, remember that you can always use raidtools or mdadm to check the arrays out.
If you plan to use RAID to get fault-tolerance, you may also want to test your setup, to see if it really works. Now, how does one simulate a disk failure? The short story is that you can't, except perhaps by putting a fire axe through the drive you want to "simulate" the fault on. You can never know what will happen if a drive dies.
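That said, if you only want to exercise the RAID layer's failure handling, rather than a real electrical failure, mdadm can mark a member as faulty by software and later remove and re-add it (the device names are examples):

    # pretend /dev/sdc2 has failed
    mdadm --manage /dev/md1 --fail /dev/sdc2
    # remove it from the array and, after "replacing" it, add it back
    mdadm --manage /dev/md1 --remove /dev/sdc2
    mdadm --manage /dev/md1 --add /dev/sdc2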