vSphere upgrade saga: Adding a D2200SB w/P4000 VSA

I am in need of more primary storage to increase the size of my virtual environment. I have already updated my tertiary storage (as I explained in an earlier post), but now it is time for the primary tier. The choices were to trade spindle speed for capacity by moving to larger but slower disks, to add another disk tray to my SAN, or to go the route of a virtual storage appliance (VSA) within vSphere.

Since I did not want even more power supplies running (to save on energy and cooling costs), I chose the VSA route. VSAs need to be backed by disks, so I am going two different ways. The first is to use a storage blade for my HP c3000, a D2200sb, and the second is to increase the on-blade storage. Why two ways? Because I want to use two distinctly different VSA technologies: the VMware Storage Virtual Appliance (SVA) and HP's P4000 (formerly LeftHand Networks) products.

However, the problem was still capacity vs. speed. For the D2200sb I chose six 900GB 10K SAS drives, and for the VMware SVA I will be using 300GB 15K SAS drives. The 300GB 15K SAS drives are an upgrade for the drives in each blade: since I was using 146GB 15K drives, going to 300GB 15K increases capacity without changing my existing functionality. I would have gone to bigger 15K drives, but none exist in the 2.5” form factor from HP. At this time I am unsure whether these new disks will become primary or secondary storage, but they will increase my available storage capacity quite a bit while I implement storage hypervisors, and they will finally let me play with VMware's VASA and VAAI APIs.

So with this all in mind and all the bits ordered, it was time to put things into the rack. Here is how things ended up being placed:

* Slot 2 blade was moved to Slot 3

* D2200SB was placed in Slot 2

* Slot 1 blade had a battery-backed write cache module added to its controller to help with the local disks. The D2200sb already has such a cache.

* Replaced Disk 2 in each blade with a 300GB 15K SAS drive (in place of the 146GB 15K SAS)

* Placed the 146GB 15K SAS drives into a spare DL380 G5 (unused at this time, but it will become another virtualization host eventually)

As an aside, I chose this layout rather than just using slots 1-4 (the top slots) for my blades because the c3000 is split into two zones (slots 1, 2, 5, and 6 form one zone; the other four slots form the other), which impacts power draw, networking, etc. Within a zone, inter-blade communication stays on the backplane and never hits the Flex-10 switch. With Slot 3 now in use, I have to cross the HP Flex-10 switch to interconnect my blades. It is a small thing to change, but each blade enclosure has its own idiosyncrasies.

This led to some interesting issues:

1. It required me to reboot the blade in Slot 1 twice to finally team it with the D2200SB, which uses the slot-to-slot PCI-e extender on the backplane of the c3000. So slot 1 teams with slot 2, slot 3 with slot 4, slot 5 with slot 6, and slot 7 with slot 8. The teaming is very different within a c7000. If you have a c7000, read its documentation. You can see the dashed outline in the image above, which shows that slot 1 is extended into slot 2 using the PCI-e extender.

2. There is no way to configure the drives in the D2200sb within VMware vSphere ESXi unless you install the HP Utility Bundle for ESXi. I now have two HP-specific Offline Bundles for my blades: the HP ESXi Management Bundle and the HP Utility Bundle. Granted, if I had used the HP-specific installation of ESXi, these bundles would not be needed unless they had updates. Installing this required two remediations in Update Manager: the first to install the latest security patches from VMware, and the second to install the HP Utility Offline Bundle (an esxcli alternative is sketched after this list). Since I had to enter maintenance mode anyway, I figured I might as well upgrade vSphere too.

3. Moving the blade from Slot 2 to Slot 3 was not as simple as I thought. HP Virtual Connect Flex-10 requires you to set a server profile. That blade already had a server profile, but the Virtual Connect Manager considered the slot to be empty (which it had been for a few years).

4. The blade in Slot 1 had been powered off for quite a while as well. Now that it is on, there is a Virtual Connect issue with this blade not being seen on one of my more important networks.
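
If you would rather not use Update Manager for a one-off bundle such as the HP Utility Offline Bundle mentioned in item 2, the same install can be done from the ESXi shell while the host is in maintenance mode. This is only a sketch: the datastore path and file name below are placeholders for wherever you stage HP's zip.

~ # esxcli software vib install -d /vmfs/volumes/datastore1/hp-utility-bundle.zip   # placeholder path and file name
~ # esxcli software vib list | grep -i hp                                           # confirm the HP VIBs are present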

To solve the Virtual Connect issue of the blade in Slot 3 not being seen, I reset the Virtual Connect Manager. This also fixed the inability to talk to some of my Slot 1 networks. Perhaps it had something to do with the recent Virtual Connect firmware upgrade. So here is one key bit of this ongoing saga:

If you update the Virtual Connect Firmware, be sure to reset your Virtual Connect Manager.
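
For reference, the same reset can also be issued from the Virtual Connect Manager CLI over SSH rather than through the web GUI. This is a sketch, assuming you have the VCM address and Administrator credentials for your enclosure (the address below is made up):

$ ssh Administrator@10.0.0.50        # hypothetical Virtual Connect Manager address
-> reset vcm                         # restarts VC Manager; add -failover if you have a redundant module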

Now to update the array(s) and verify that the rebuild finished for the 300GB upgrade. This requires logging into the ESXi node as root, which is the first time I have logged in as root in several years. It is not something you should do regularly, and if the HP ACU CLI could be run from a web interface via HPSIM I would not log in as root at all. This is a break-glass action, reserved for hardware problems or configuration changes only. Here is a running log of what I did:
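
(If SSH or the ESXi Shell is not already enabled on the host, turn it on first; in the vSphere Client that is under Configuration > Security Profile > Services. If you already have a local ESXi Shell available, the vim-cmd calls below are the equivalent. This is a sketch rather than part of my log.)

~ # vim-cmd hostsvc/enable_ssh       # allow the SSH service to run
~ # vim-cmd hostsvc/start_ssh        # and start it for this session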

Find all the controllers.


~ # /opt/hp/hpacucli/bin/hpacucli
HP Array Configuration Utility CLI 9.20.9.0
Detecting Controllers...Done.
Type "help" for a list of supported commands.
Type "exit" to close the console.

=> ctrl all show config

Smart Array P410i in Slot 0 (Embedded) (sn: 500143800522A790)

  array A (SAS, Unused Space: 0 MB)

    logicaldrive 1 (136.7 GB, RAID 1, OK)

    physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 146 GB, OK)
    physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 300 GB, OK)

  SEP (Vendor ID PMCSIERA, Model SRC 8x6G) 250 (WWID: 500143800522A79F)

Smart Array P410i in Slot 3 (sn: 5001438021F98500)

  unassigned

    physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 900.1 GB, OK)
    physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 900.1 GB, OK)
    physicaldrive 1I:1:3 (port 1I:box 1:bay 3, SAS, 900.1 GB, OK)
    physicaldrive 1I:1:4 (port 1I:box 1:bay 4, SAS, 900.1 GB, OK)
    physicaldrive 1I:1:5 (port 1I:box 1:bay 5, SAS, 900.1 GB, OK)
    physicaldrive 1I:1:6 (port 1I:box 1:bay 6, SAS, 900.1 GB, OK)

  Expander 250 (WWID: 5001438022E788A6, Port: 1I, Box: 1)

  Enclosure SEP (Vendor ID HP, Model D2200sbx12) 248 (WWID: 5001438022E788A5, Port: 1I, Box: 1)

  SEP (Vendor ID PMCSIERA, Model SRC 8x6G) 249 (WWID: 5001438021F9850F)

=>

The key here is to check for two things:

1. The first is the status of the disks in logicaldrive 1, which is the two-drive RAID 1 array into which the 300GB drive was added. As we can see, everything is OK, which implies the rebuild happened smoothly. Since we are upgrading the capacity of all the nodes, we have to check the rebuild on every node, not just one. On a second node the same command may yield the following results, which show the array is still rebuilding. The speed of the rebuild depends upon the cache in use within the controller.


~ # /opt/hp/hpacucli/bin/hpacucli
HP Array Configuration Utility CLI 9.20.9.0
Detecting Controllers...Done.
Type "help" for a list of supported commands.
Type "exit" to close the console.

=> ctrl all show config

Smart Array P410i in Slot 0 (Embedded) (sn: 5001438005280E90)

  array A (SAS, Unused Space: 0 MB)

    logicaldrive 1 (136.7 GB, RAID 1, Recovering, 35% complete)

    physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 146 GB, OK)
    physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 300 GB, Rebuilding)

2. The second item is the controller slot and the physical drive numbers for the 900GB drives, from which we will create an array for use by the HP P4000. This applies only to the node that shares the PCI-e bus with the D2200sb.
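
If you only care about the D2200sb's controller, hpacucli can also be scoped to a single slot rather than dumping every controller. Slot 3 matches my layout and will differ in other enclosures; the second command lists each drive with the port:box:bay address used in the create command below.

=> ctrl slot=3 show status
=> ctrl slot=3 physicaldrive all show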

Next we create the array using RAID 5. I would have chosen RAID 6, but it is not available on my P410i controller; we are limited to RAID 0, 1, 1+0, and 5. So I have chosen RAID 5 with a spare drive. The first command below creates the array from five of the 900GB 10K SAS drives; the second shows that the array was created.


=> ctrl slot=3 create type=ld drives=1I:1:1,1I:1:2,1I:1:3,1I:1:4,1I:1:5 raid=5
=> ctrl slot=3 show config

Smart Array P410i in Slot 3         (sn: 5001438021F98500)

  array A (SAS, Unused Space: 0 MB)

    logicaldrive 1 (3.3 TB, RAID 5, OK)

    physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 900.1 GB, OK)
    physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 900.1 GB, OK)
    physicaldrive 1I:1:3 (port 1I:box 1:bay 3, SAS, 900.1 GB, OK)
    physicaldrive 1I:1:4 (port 1I:box 1:bay 4, SAS, 900.1 GB, OK)
    physicaldrive 1I:1:5 (port 1I:box 1:bay 5, SAS, 900.1 GB, OK)

  unassigned

    physicaldrive 1I:1:6 (port 1I:box 1:bay 6, SAS, 900.1 GB, OK)

  Expander 250 (WWID: 5001438022E788A6, Port: 1I, Box: 1)

  Enclosure SEP (Vendor ID HP, Model D2200sbx12) 248 (WWID: 5001438022E788A5, Port: 1I, Box: 1)

  SEP (Vendor ID PMCSIERA, Model SRC 8x6G) 249 (WWID: 5001438021F9850F)

With the array created, we have to add the spare drive to it using the following command. Adding a spare requires the array name, which in this case is A. Once more we verify with the 'show config' command, this time to ensure the spare was assigned, and then exit hpacucli.


=> ctrl slot=3 array A add spares=1I:1:6
=> ctrl slot=3 show config

Smart Array P410i in Slot 3         (sn: 5001438021F98500)

  array A (SAS, Unused Space: 0 MB)

    logicaldrive 1 (3.3 TB, RAID 5, OK)

    physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 900.1 GB, OK)
    physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 900.1 GB, OK)
    physicaldrive 1I:1:3 (port 1I:box 1:bay 3, SAS, 900.1 GB, OK)
    physicaldrive 1I:1:4 (port 1I:box 1:bay 4, SAS, 900.1 GB, OK)
    physicaldrive 1I:1:5 (port 1I:box 1:bay 5, SAS, 900.1 GB, OK)
    physicaldrive 1I:1:6 (port 1I:box 1:bay 6, SAS, 900.1 GB, OK, spare)

  Expander 250 (WWID: 5001438022E788A6, Port: 1I, Box: 1)

  Enclosure SEP (Vendor ID HP, Model D2200sbx12) 248 (WWID: 5001438022E788A5, Port: 1I, Box: 1)

  SEP (Vendor ID PMCSIERA, Model SRC 8x6G) 249 (WWID: 5001438021F9850F)

At this time you should log out of the node and disable SSH or ESXi Shell access. There is no need to run any more commands as the root user or directly on the host; this was a break-glass situation, justified only because we added new hardware.
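
The services can be switched back off in the vSphere Client under Configuration > Security Profile > Services. If you prefer to do it from the shell before closing the session, something like the following turns the policies back off (stopping the running SSH service itself is best done from the client, since doing it from inside this session would drop the connection):

~ # vim-cmd hostsvc/disable_ssh          # turn the SSH service policy back off
~ # vim-cmd hostsvc/disable_esx_shell    # same for the local ESXi Shell
~ # exit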

To see this storage, we need to go into the vCenter Configuration tab for the node on which we ran all the previous commands. Once we "Rescan All…", we can see the new volume when we go to "Add Storage…".
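
For completeness, "Rescan All…" does the same thing as a storage rescan from the command line; had we still been in the shell, the equivalent would have been something like:

~ # esxcli storage core adapter rescan --all
~ # esxcli storage core device list          # the new HP logical volume should show up here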

Now we have 3.27 TB of local storage to use for our VSA. It should be noted that, for redundancy, you may want two D2200sbs so that data can be replicated between nodes; with a single enclosure there are redundancy limitations. The RAID 5 + spare overhead is pretty hefty, as we lose 900 GB to the spare and roughly another 900 GB to parity, but the overhead is not as high as it would be with RAID 1 or 1+0, though it is also not as fast. I am hoping the onboard cache and the VSA cache mechanisms will aid performance.
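
For anyone checking the math on that 3.27 TB, it works out roughly like this (drive vendors quote decimal gigabytes, while vSphere reports binary units):

6 x 900 GB drives, 1 held out as a spare   -> 5 drives in the RAID 5 set
RAID 5 parity costs one drive's capacity   -> 4 x 900 GB usable
900 GB (decimal) is about 838 GiB          -> 4 x 838 GiB is roughly the 3.27 TB vSphere reports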

Also in this series:
vSphere upgrade saga: Veeam upgrade
vSphere upgrade saga: Upgrading the storage on your Iomega ix2-200
vSphere upgrade saga: Upgrading vCenter Operations Manager
vSphere upgrade saga: Fixing backup and other virtual appliances
vSphere upgrade saga: vSphere ESXi and Host Profiles
vSphere upgrade saga: vCloud Director 5.1
vSphere upgrade saga: Upgrading to vCNS 5.1
vSphere upgrade saga: vCenter 5.1
vSphere upgrade saga: Upgrading HPSIM
vSphere upgrade saga: Getting all the bits
vSphere upgrade saga: Fixing VMware View
