Install ESXi Software RAID Debian


Recently I needed to set up software RAID1 during the Debian installation process. As it turned out, this was simpler than I initially expected. I will describe it briefly here using screenshots captured during my initial tests. As an example, I will configure RAID1 for the root file system and swap space using two storage devices, without any additional spare devices.

Step 1: Perform the normal installation process up to the disk partitioning menu.
Step 2: Select the manual partitioning method in the disk partitioning menu.
Step 3: Create an empty partition table on each disk that will be used to create the RAID1 array.
Step 4: Create partitions on the first disk. During partition creation, select "physical volume for RAID" as the partition type, then replicate the same layout on the second disk (a command-line shortcut for this is sketched below).
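If you would rather not repeat the partitioning by hand, one optional shortcut (not part of the original screenshots, and assuming sfdisk is available from an installer console such as ALT + F2) is to copy the partition table from the first disk to the second:

# sfdisk -d /dev/sda | sfdisk /dev/sdb

This dumps the layout of /dev/sda and replays it onto /dev/sdb, so both disks end up with identical "physical volume for RAID" partitions.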


Step 5: Execute the "configure software RAID" option. You will be asked to write the changes applied to the partition tables; do so, so that the partitions created in the previous step can be used to create the RAID arrays. Create a new MD device for each pair of identical partitions on the recently configured disks. Choose RAID1 as the device type, select 2 as the number of active devices for the RAID1 array, and select 0 as the number of spare devices.

A few related notes. With root on software RAID + LVM, older Debian installer releases could not install GRUB to a RAID device and would automatically switch to LILO when root was on RAID; after installation, reboot Debian. On the ESXi side, additional drivers may need to be installed to enable certain add-on peripherals; you can download them from downloads.vmware.com. ESXi does not support storage LUNs exposed from an on-board SATA controller with software RAID, and vSphere 6.0 supports booting ESXi hosts in Unified Extensible Firmware Interface (UEFI) mode.

Select the identical partitions on the recently configured disks (e.g. md0 → [sda1, sdb1] and md1 → [sda2, sdb2]).
Step 6: Create the root file system on the first RAID1 device and the swap space on the second RAID1 device. Select the "finish partitioning and write changes to disk" option to confirm the changes applied to the RAID1 devices.
Step 7: Continue the installation process up to the "install the GRUB boot loader on a hard disk" menu.
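For reference only (the installer performs all of this through its menus), the equivalent operations done by hand with mdadm would look roughly like this; the file-system type and device names are assumptions based on the example above:

# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
# mkfs.ext4 /dev/md0
# mkswap /dev/md1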

By default GRUB will be installed only on the first disk, so switch to the second (ALT + F2) or third (ALT + F3) console before the system reboots and execute the following commands to install it on the second disk as well:

# chroot /target /bin/bash
# grub-install /dev/sdb
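As an optional sanity check (not part of the original procedure), you can verify from the same chroot that both arrays are assembled and syncing before rebooting:

# cat /proc/mdstat
# mdadm --detail /dev/md0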

I currently run a bare-metal Linux server that has a 5x1TB RAID5.

I'd like to completely migrate that RAID over to my ESXi host, without losing the data on the RAID. Is this possible?

The old system is a home-built P4 with an onboard SATA controller. The new system is a Dell PowerEdge 2950 Gen III with the following controllers: PERC 6/E, PERC 6/i, and SAS 6/iR.

I know the PERC controllers don't support 'pass-through'. Using a PERC, I'd have to create a RAID0 array for each disk, losing my data.

I bought the SAS 6/iR knowing it's an HBA, but once I got it in the system, I'm not presented with a 'dumb' controller; it wants me to set up a RAID as well. I'm also aware that once I get direct drive access from the controller, I'd also need to set up VMware to hand that drive to the VM, unmolested, to run the 'md' RAID. My real question is this: since I've already got ESXi running on the PowerEdge, is there a way (through SSH or otherwise) that I can verify my disks are accessible (e.g., verify the superblock) before I start mucking about with ESXi passthrough?

Present the storage via NFS to the ESXi host. Other than that, no, it's not possible to do what you want. ESXi itself doesn't support any software RAID implementation in any way, shape or form.

You can't get the HBA to see your existing data since the on-disk format is different, and even if you could magically do that, you'd be presenting a file system that ESXi doesn't support. And if you present the file system as a block device using iSCSI, you run into the same roadblock: ESXi wants VMFS or NFS. There is no way to move the disks into the machine running ESXi and still have either ESXi or a VM running on it access data on those disks. I interpreted your post as asking for a way to do just that; did I misunderstand? The only way to have either ESXi or a VM running on it access the data without moving the data temporarily off those drives is via the network.

If you want to present the data to ESXi, then your only option is NFS. If you want to present the data natively to a VM, then (based on what I understand you want) your best bet is probably to present the volumes as iSCSI from the old machine and have the initiator in the VM map them; of course, that means you can't have them mounted on the old machine as well, unless the file system on those volumes is a cluster file system.
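As a rough sketch of the NFS route mentioned above (the host name, export path, and datastore name are illustrative assumptions, not values from this thread), the old Linux box would export the media file system:

# echo '/srv/media esxi-host(rw,sync,no_subtree_check,no_root_squash)' >> /etc/exports
# exportfs -ra

and the ESXi host would then mount that export as a datastore:

# esxcli storage nfs add --host=oldserver --share=/srv/media --volume-name=media-nfs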

If iSCSI is not the way to go, then Samba or what have you will have to suffice. It doesn't even matter whether or not ESXi can parse the file system; it will not be able to assemble the RAID array since it doesn't know what LVM is in the first place. Even if it magically could, since the PERC HBA won't let you present the individual drives as a JBOD, you can't use raw device mappings to attach the drives to the VM. The takeaway is this (bears repeating): There is no way to move the disks into the machine running ESXi and still have either ESXi or a VM running on it access data on those disks.

If you need the disks in the ESXi machine running on the PERC HBA, you will need to move the data temporarily somewhere else and you will have to let the HBA format the drives to construct its array. Edit: That being said, if you're going to run any form of RAID5, software or hardware, on consumer level drives, I hope you have good backups or can afford to lose the data.

Yes, I'm trying to physically move the disks over to the new ESXi machine and decommission the old machine. I agree that NFS/iSCSI/Samba is a workaround for not losing data, but it doesn't solve my original problem of being able to decommission the extra computer.

I believe you have done a pretty good job beating into my thick skull that in no way will I be able to retain the data without restoring from a backup after moving the physical drives. I think I've come to terms with that. Which brings me to my next question: I can build a new RAID (RAID 10, actually, in case you were wondering) that the PERC controller will present to the OS (ESXi) as a single volume. This volume will ONLY be utilized by one VM. The Linux OS in that VM will take care of presenting the data to the network. Can someone at least point me to some documentation on how to efficiently give this volume to a single VM? In my head, I don't think turning the array into a VMFS volume with a ~5 TB .vmdk file is an efficient solution.

I believe the SAS 6/iR generation of controllers can only create and support a maximum RAID container size of 2 TB. You'd need to purchase an aftermarket H700 PCIe card to make one big RAID set.

Hard drive controllers, such as the Dell PERC and SAS controllers, need firmware support for >2 TB drives. Dell supports these large hard drives on the PERC H700 and H800, and on the H200 controller at a later date. In addition, firmware support is also specific to the type of drive (SAS or SATA).

It just seems redundant, and not in a good way. Maybe I'm stuck thinking incorrectly.

'Bare' way: partition the entire drives as 'RAID'. The RAID controller sees all the drives and presents them as a single volume to the OS. In the OS, partition (again) the 'virtual' volume as whatever, format it to fit your needs, done.

'VMware' way: partition the entire drives as 'RAID'. The RAID controller sees all the drives and presents them to ESXi as a single volume. Once in ESXi, partition the volume as something ESXi can use, specify how much space is allocated to a particular VM (in my case, all of it), essentially creating a VMDK file. Finally, inside the VM, the guest OS can see the space specified.
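For what it's worth, carving out such a virtual disk on a VMFS datastore from the ESXi shell would look roughly like the following sketch; the size, datastore name, and path are illustrative assumptions:

# vmkfstools -c 5000g -d thin /vmfs/volumes/datastore1/mediavm/media.vmdk

The resulting .vmdk would then be attached to the VM as an additional disk through the vSphere Client.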

Since I want to give the ENTIRE RAID volume to a single guest OS, I'm thinking some of these steps aren't necessary?

It's probably outside of your use scope, as I'm guessing you only have the one ESXi host.

I guess I am thinking about this more from a clustered deployment where things like vMotion and whatnot come into play, which was the angle I jumped on before I thought too much about it. Even in a single-host deployment I still see the benefits of just doing it all in VMFS, the biggest being that next time you migrate to new hardware you don't have to muck with all of this stuff again: you just stand it up, copy the VMDKs, and boot the machines. I guess I've just taken the road of letting the hosts do as much of the hardware operations as possible and keeping the guests simple.

The SAS 6/iR card, if you leave the config blank and don't define any arrays, will present all the disks directly to ESXi; I think you can also flash it with an alternate firmware that strips the RAID mode completely.

Once there, you can (although it's not supported) present those disks as physical-mode RDMs to a VM, in which you can run software RAID. You may also be able to pass the SAS 6/iR through to a VM wholesale (the 2950 might be too old for that; I've only done it on the next generation of hardware). In either case, you'll need somewhere to boot ESXi from and to hold the VMX and VMDK files for this software-RAID VM; this can be booting from USB and holding the VM on an external NFS device, or you can use the PERC 6/E for this if you have an external disk chassis. If you want to use this RAID for storing VMs, then you'll need to export it back to the host with NFS (or iSCSI).
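For reference, creating such an unsupported physical-mode RDM from the ESXi shell typically looks like the sketch below; the device identifier and datastore path are placeholders, not values from this thread:

# ls /vmfs/devices/disks/
# vmkfstools -z /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx /vmfs/volumes/datastore1/mediavm/disk1-rdm.vmdk

The resulting .vmdk is only a mapping file; it gets attached to the VM as an existing disk, and the guest's md RAID then sees the raw drive underneath.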

Further, as another poster mentioned, RAID 5 on 1TB disks is risky. Since the 2950 is usually a 6-slot chassis (unless you have the 8x2.5in model), consider adding a disk and reshaping the array into RAID 6.

Man, where to start? RAID x is a block device; you don't want to 'migrate' any legitimate RAID data from one block device (software or hardware) to another block device. I think you're looking to transfer at the file level?

Second, if you're looking to set up software RAID in a VM, not only are you going to have a bad time, you don't know what you're doing. You need to read up on vSphere storage. Third, you're telling me you have three different hardware RAID controllers in a 2950? Fourth, 'I bought the SAS 6/ir knowing it's a HBA': RAID controllers are not HBAs (host bus adapters). Fifth, ESXi is an operating system. This OS can read VMFS or NFS, and that's it: no ext, ZFS, Btrfs, or any other Linux file system you can think of.

You cannot present a superblock or anything else native to Linux to a vSphere system. The volume has to be in the form of NFS and even then it would just be a volume to run FROM.

Sixth, 'I'm also aware that once I get direct drive access from the controller, I'd also need to set up VMware to hand that drive to the VM, unmolested, to run the md RAID': I'm lost by this point, but let's say you get this block device presented to vSphere and you're not using NFS. vSphere is going to format that device with VMFS and you will lose all your data.

Block-level migration to VM file-level migration... is that what we're talking about here? Edit: my question-quoting format skills suck.

I'm technically only running two controllers: one for the internal backplane, one for the external PowerVault. The SAS 6/iR is not a true hardware RAID controller; it's FakeRAID, essentially proprietary software RAID.

I agree with your statement regarding the PERC controllers. On the old system, I had a RAID volume set up to hold all my media for my network: TV shows, movies, software, etc. The only purpose for this RAID volume was media storage, so that if I hosed the OS partition, I could rebuild the OS drive and still keep all my media.

I was attempting to move the physical RAID drives to the ESXi box and still be able to keep my data. (This RAID volume is 3x larger than any single spare disk I have lying around.) I'm not trying to preserve the OS on the machine; I'm simply trying to preserve my media on the RAID.
