It's a little hard to understand this.
I wouldn't worry about software RAID/LVM, as that will work with whatever mix of drive speeds or sizes you have. RAID 10 always uses 2+2 pairs and multiples of 2 thereafter (2 for usage and 2 for redundancy), so 4/6/8/10/12 drives and so on. So 4x 1TB HDDs would only give you 2TB in total. But RAID 10 will always work to the smallest drive's size, and for consistency in speed each disk can only be used in one array.
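The capacity rule above can be sketched as a quick calculation (a toy example assuming 4x 1TB drives; the figures are placeholders):

```shell
# RAID 10 usable capacity: half the drives hold mirrors, and every
# member is limited to the size of the smallest drive in the set.
set -- 1000 1000 1000 1000           # drive sizes in GB (example figures)
count=$#
smallest=$(printf '%s\n' "$@" | sort -n | head -n1)
usable=$(( count / 2 * smallest ))
echo "RAID 10 usable: ${usable} GB"  # 2000 GB for 4x 1TB
```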
Are you trying to make 3 arrays on 4 disks simultaneously? Mix and match different sizes of disks? Out of 4TB in total, do you want all 4TB available in one array (no redundancy)?
What exactly do you have and what do you want to do?
I know the RAID controller is going to have to do more work across the arrays if I do what I am planning. Which, I take it, is what you call "consistency in speed".
You are correct in stating that I want 3 arrays on 4 disks simultaneously.
These disks are all identical, 4x 1TB.
The RAID software (in the BIOS) allows me to create TWO RAID 10 arrays on this same set of disks.
And I am wondering why not three; is this a limitation of the scheduler? I just want / desire / need a threesome of /LOGICAL/ disks.
Basically I am turning my 4 disks into 3 disks with the added benefit of (some) speed and redundancy. Performance-wise, there is usually only one array that is actively being used (which is the system drive).
So you are correct that I want:
1TB + 1TB + 1TB + 1TB --> array 1 (750GB) + array 2 (250GB) + array 3 (1000GB) ===> leading to 3 logical disks in my operating system.
But I wonder whether this is a limitation in the controller or just in the configuration software (BIOS), since in the latter case you would only need to mod the BIOS to take advantage of 3 arrays.
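For what it's worth, the arithmetic for that split works out: each RAID 10 array needs a slice of usable/2 on every one of the 4 disks, and the three slices together fill a 1TB drive exactly (a back-of-the-envelope sketch; the sizes are just my example figures):

```shell
# Each RAID 10 array stripes across 2 disks and mirrors onto the other 2,
# so an array with U GB usable needs U/2 GB on each of the 4 disks.
total=0
for usable in 750 250 1000; do
  slice=$(( usable / 2 ))
  total=$(( total + slice ))
  echo "array of ${usable} GB usable -> ${slice} GB per disk"
done
echo "slices per disk: ${total} GB"   # 375 + 125 + 500 = 1000 GB
```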
I see; the problem here would be the RAID 10 bit. It's not a pair: it is logically, but not RAID-logically. How deep / how many levels can your RAID go; can you build a nested array, 1+0+whatever?
I think the latter will be a limitation of the hardware; you may be able to buy a PCIe RAID card able to do as you wish. But deeper and deeper RAIDs normally live in huge server environments. If your mobo isn't having it, it's unlikely to work.
Can you not partition your 2TB RAID array? Or do you physically need 3 logical disks? Partitions would be the way to go, or software stripes.
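As an aside, Linux software RAID would sidestep the two-array limit entirely: mdadm builds arrays from partitions rather than whole disks, so you can carve each disk into three slices and make three RAID 10 arrays. A sketch (untested; it assumes the four disks are sda..sdd, each already partitioned into sdX1=375GB, sdX2=125GB, sdX3=500GB, and it only echoes the commands, since mdadm needs root and real disks):

```shell
# Untested sketch: three RAID 10 arrays over per-disk partitions.
# Device names and sizes are assumptions for illustration only;
# the commands are echoed rather than executed.
for n in 1 2 3; do
  echo mdadm --create /dev/md$((n - 1)) --level=10 --raid-devices=4 "/dev/sd[abcd]${n}"
done
# Each resulting mdN device then carries its own partition table,
# which gives exactly the 3 logical disks discussed above.
```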
Having multiple arrays and partitioning a single array is really the same thing, but at different levels. When you have multiple raids (or disks) you have 2 levels of organisation. If you have a single disk, you just have one level of organisation. It makes it, at least conceptually, harder to "take out" one disk and store it elsewhere, even though perhaps currently the tools are not really up to par for that.
But I like to get conceptually as close to what I want as possible, because it enables me to feel safe, to feel in power, and it might enable me to get to the solution I'm after. Moreover, it means the data is more organised in my mind.
I understand; sorry for the late reply. Given your needs, I'm curious whether a specific and maybe more advanced RAID controller than the one on the average home user's mobo may be a better solution. If you have the time and a few £/$/€, you could buy and try a few PCIe RAID controller boards. My understanding is that while the controllers on our mobos are good, they are built to a price, and a more specialised controller will offer more advanced options and even better performance in some cases.
What is the mobo in question? Could you send a support ticket to AMD to see if they can offer any advice, or ask on your mobo maker's forum to see if anyone using the same board has achieved what you're trying?
I have previously only attempted Asus technical support, however they directed me to AMD. I have not attempted a ticket with AMD directly yet.
I really wonder if the behaviour is hard-limited in the firmware, or just a configuration issue / limitation.
In the latter case, perhaps a simple BIOS mod might be the call for answers ;-).
An interesting item regardless. If a driver/firmware is able to schedule IO calls across 2 different arrays, then 3 or more should also simply be within the realm of the possible.
I may try to ask AMD directly, thank you. I will also ask on the bios MODS forum ;-).
As regards PCI or PCIe cards, this is difficult.
There is not much on offer.
I know the German "DawiControl" cards have very poor firmware. I have one running in some machine, but I forwent the RAID capability, because it was a Linux-only system and software RAID was easier to accomplish, and also more versatile. The DawiControl card uses a Silicon Image 3114 chip, I believe; that is the PCI card.
The PCIe x1 card, the DC-324e, uses a Marvell 88SX7042 chip, but I have no experience with that. I would have to ask around. But where to ask?
Besides, software support won't be nearly as good as with the AMD RAIDXpert software. Which is actually quite nice.
So before I attempt DawiControl I would have to pursue AMD more.
I am asking on some other forum if people have experience with these things.
Sounds like you're on the road.
Naturally I would recommend the Overclockers forum, just because anyone who buys a product from Overclockers.co.uk normally signs up, and it's good advice most of the time.
I am solely using LVM at the moment, but I don't have a need for so much speed, as it's only a small data server and the network is the bottleneck for now.
I hope someone here can jump in on this post with some current feedback of experiences. It's been bumped, and should be visible for a little while.
For me it's more like advancement, I want to get ahead in this game you might say.
In the sense of: using RAID is just a natural progression for me. I like the redundancy. I also avoid SSD which makes RAID more important.
Random read/write performance of SSDs is of course a vast multiple of that of spinning disks. With RAID you can get some improvement. Still, my system doesn't feel faster than any other system, although sequential copies are sometimes good. Maybe it sounds stupid, but I'm using 2.5" rotating disks. It's not wholly stable, but it's fun.
That's the point, right? Water cooling, OTT CrossFire setups, RAID: all part of the fun. My last home RAID use was back in the Athlon days, but back then everything special made an impact. And as games or whatever get bigger and take longer to load, you'll be the one laughing!
I have found this that might be of interest... RAID Basics. It's a little old, talking about SCSI, but it may help you get what you want efficiently. My recommendation would be to do the 2+2 RAID and partition, or get a better-supported RAID controller.
Thank you for your interest.
The reason I want those 3 logical disks is because that is, more or less, the setup I had back at home.
See, it also has to do with my encryption setup.
Let's call it research into setups that will grant me a certain mode of plausible deniability.
Back at home I had a main disk with two partitions: C and D:. I had an external drive (let's call it G:) for backup of the D: partition, and I had another separate harddisk that served as my E: partition, on which I kept more confidential data.
So basically I had 3 harddisks with a total of 4 partitions, not including any Windows 7 boot partition that it may have created.
Normally in Windows there is an additional boot partition. So in order to have 5 partitions, I must use an extended partition, which of course is possible. Then I would have a single partition table (MBR) with 5 partitions, 2 primary for Windows, 1 extended, and 3 data partitions inside of that as logicals, with another primary available if the need should arise.
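That layout can be written down as an sfdisk script (a hypothetical sketch: the device name and all sizes are placeholders, and applying it would of course overwrite the existing partition table):

```shell
# Untested sketch of the MBR layout described above, as sfdisk input.
# /dev/sdX and all sizes are placeholders for illustration only.
sfdisk /dev/sdX <<'EOF'
label: dos
# two primaries for Windows (boot + C:)
size=100MiB, type=7
size=200GiB, type=7
# extended partition spanning the rest of the disk
type=5
# three logical data partitions inside the extended
size=100GiB, type=7
size=100GiB, type=7
type=7
EOF
```

This uses three of the four primary slots (two Windows primaries plus the extended), leaving one primary free should the need arise.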
However, for encryption there are different modes: full disk encryption in its real form also encrypts the partition table and any partition metadata, such as what type of partition you have.
It makes it just a bit harder for an attacker or adversary (as they are called) to know what you are up to or what your computer systems look like. Naturally there are some who advise against full disk encryption of that type, because not having a partition table, and having individually encrypted partitions, also makes it that much harder for YOU to recover data in the case of a failure. However, that is really what backups are for.
Not that I would necessarily use this. On my home computer I had a system partition encrypted (C:) but D: was encrypted separately. However I don't think my Windows 7 had created that boot partition, not sure.
My E: and G: also didn't exactly use FDE at the master table level, just at the partition level.
When the police came, ;-), there were 4 different passwords to be retrieved from me. Thus far I have given them only one.
I prefer to not let them know whether I am using Windows or Linux, hence the partition table encryption, I guess. In addition to the fact that...... It's just research, and practicality ;-).
So there you have my 80% answer as to why I want 3 logical disks: it's the setup I had back at home. 3 disks gives 3 partition tables. 3 arrays gives 3 partition tables.
*THESE ARE NOT NESTED ARRAYS*. These are just /ARRAY-LEVEL/ partitioning.
Of course it could also have implications for backup, but normally you do not back up encrypted volumes; that makes no sense. You back up unencrypted data, and then you encrypt that.
Restoring a system then always involves first copying the data and then re-encrypting it. This will give you different encryption headers, etc. I do not know how else to do it, but it is not entirely ideal. Backing up encrypted data voids compression, and you cannot skip empty sectors either. That is why containers are not very suitable for backup; they are always larger than the data itself. With backup, you really use an encrypted channel, and you decrypt and re-encrypt the data at either end. That's at least how you do it today, or how I do it today.
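The compression point is easy to demonstrate: ciphertext looks like random data, so compressing it gains nothing, which is why you compress (and back up) the plaintext first. A toy illustration (no real cipher involved; /dev/urandom simply stands in for encrypted data):

```shell
# Repetitive plaintext compresses well; random bytes (a stand-in for
# ciphertext) do not compress at all.
yes "backup me" | head -c 100000 > plain.bin
head -c 100000 /dev/urandom > cipher.bin
plain_gz=$(gzip -c plain.bin | wc -c | tr -d ' ')
cipher_gz=$(gzip -c cipher.bin | wc -c | tr -d ' ')
echo "plaintext:  100000 -> ${plain_gz} bytes gzipped"
echo "ciphertext: 100000 -> ${cipher_gz} bytes gzipped"
rm -f plain.bin cipher.bin
```

In practice the "encrypted channel" would be something like rsync over SSH, with encryption re-applied at the destination.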
That's about the 90% answer I guess. You cannot save a partition "container" while retaining its original encryption keys. So restoring a system always means creating a new system. It's not really "up and running".
Let's say I am trying to investigate the ways in which safe systems can be saved and restored so you can be up and running again in less time.