Deploy EqualLogic's Virtual Storage Manager (VSM)

EqualLogic has a great appliance that you can deploy to vSphere and use to provision datastores to your cluster, create smart copies, replicas, and clones, as well as configure replication. EqualLogic Virtual Storage Manager (VSM) is a must-have if you have storage running on either a PS or FS array. The latest version of VSM, v3.5.3, supports vSphere 5.5.

To deploy a new VSM appliance:

Download the latest OVA (v3.5.3), release notes, and installation and user guides from EqualLogic's website.

Download the VSM appliance

Using the vSphere client (not the web client), confirm the vCenter managed IP is set. This will ensure that VSM can identify and communicate with vCenter.

Administration + vCenter Server Settings + Runtime Settings. Confirm that vCenter's IP address and FQDN are listed; if not, add them.

vCenter Runtime Settings

Time to deploy:

Click on File + Deploy OVF Template. The Deploy OVF Template wizard launches. Browse to the OVA file you downloaded. The appliance will need 15GB of space if thick provisioned (2.2GB if thin).

Click Next twice. Accept the EULA twice. Click next.

Deploy OVF
Browse to the OVA
  • Name the appliance as it should appear in vCenter. Select an inventory location, click next.

Name the VSM appliance

  • Select the host/cluster on which the appliance should run. Click next.
  • Select a resource pool, click next. Select the datastore, click next.
  • Choose a disk format. Thick provisioned is a good choice. Click next.
  • If prompted to select a network where the NIC should be attached, choose it from the drop down and click next.

VSM Properties:

vsm properties

Enter the values as required and click next.

  • FQDN hostname
  • Time zone
  • NTP servers
  • vCenter http and https ports
  • vCenter username
  • vCenter password
  • Default gateway
  • DNS servers
  • IP address
  • Netmask

Review the settings. Click the check box next to power on after deployment and click Finish. Close the 'completed successfully' dialog box.

View the Tasks & Events tab on the VSM appliance to check the status. Look for the "VSM server starting up" entry. This confirms the appliance is ready.

vsm server starting

Review the Summary tab of the VM. Do not update VMware Tools on the appliance. The out-of-date Tools status just means that VMware has updated the Tools since the appliance was released; you can safely ignore it.

vsm summary tab
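
As an aside (my own addition, not part of the original walkthrough), the same deployment can be scripted with VMware's ovftool instead of the wizard. The names, datastore, network, and credentials below are placeholders for your environment; probing the OVA with ovftool by itself should validate the package and list the OVF properties (the FQDN, NTP, IP, and other values from the properties page above), which can then be passed on the command line with --prop:key=value flags.

ovftool vsm-3.5.3.ova

ovftool --acceptAllEulas --name=VSM01 --datastore=Datastore01 --network="VM Network" --diskMode=thick vsm-3.5.3.ova vi://administrator@vcenter.example.com/MyDatacenter/host/MyCluster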

Enable the VSM plug-in:

Click on Plug-ins + Manage Plug-ins. Right-click the VSM plug-in and click Enable.

plugins

Enable Plug-in

Close the Plug-in Manager and confirm that Dell EqualLogic VSM is listed under Solutions and Applications.

under home

When the appliance is deployed successfully, click close and launch the VSM console.

The default credentials are root/eql

Change the default password:

VSM cli

Select 1 + Select 4

Enter a new root password. Press enter to return to main menu.

Configuring VMware vSphere Storage APIs for Storage Awareness (VASA)

For more info on VASA, read Cormac Hogan’s blog post.

Select 3 to configure VASA

configure vasa

Press enter to continue and restart your vSphere client.

Launch VSM:

Click on Home and launch VSM & log in with your vSphere admin credentials.

Launch VSM

login

These are the vCenter credentials you use to log into the client.

Configure Storage Network (optional):

Click the configure VSM properties icon in the toolbar.

configure vsm in gui

Enable the second NIC on the VSM server and configure it with an IP on your iSCSI subnet. You must already have a port group configured on that subnet, or the Enabled check box will be grayed out. *Note* The storage network can only be configured from the GUI; it cannot be set from the CLI.

cfg-storagenw

Click OK to close. You’ll be prompted to restart. Restart the vSphere client as well. When the VSM server is back up (see console), enable the plug-in.

Add a PS series group:

Click Groups in the navigation pane. Under the getting started tab, click Add PS series group.

Add PS group

In the add PS series group box, enter the group name or IP and credentials (I suggest using grpadmin). Click add and OK.

add-ps-creds

Monitor recent tasks to see when it’s complete.

add-ps-tasks

You’re done.

Deploy a VDP appliance and Migrate VDR restore points

Looks like my new vSphere 5.1 cluster upgrade is coming along swimmingly. Time to get these VMs backed up and the old restore points accessible. Enter VDP.

VDP is VMware's newest VM backup and recovery solution, supported for vSphere 5.1. As before, it's integrated with vCenter and can be managed via the vSphere web client. It comes in two flavors, and depending on the flavor you get your choice of deduplication store sizes: 0.5TB, 1TB, and 2TB in the Basic version, and many more choices in the Advanced version. Data is deduplicated across all backup jobs, and VDP uses changed block tracking and VADP to lighten the load on your ESXi hosts and keep the backup window much shorter by capturing only the blocks that have changed. It works with both the Windows-based vCenter Server and the Linux-based vCenter Server Appliance.

Here are the steps I took to deploy my VDP appliance as well as migrate the restore points from VDR.

Scenario:

I had one VDR appliance that backed up both Windows and Linux VMs in 4.1, just with separate dedupe stores. To back up all Windows VMs I configured a 900GB disk as the target, and a 500GB disk for Linux. Since one appliance can only contain one dedupe store, I will deploy two appliances: one 0.5TB and one 1TB. Take a look at the VDP Admin guide for more details on how to size, configure, etc.

Prerequisite:

  • A datastore presented to your ESXi hosts that is large enough to hold the dedupe store, OS, logs, checkpoints, etc. It must be formatted as VMFS-5.
  • Download the appliance from VMware's website.

Deploy VDP appliance:

  • Launch vSphere web client. From Home, click on vCenter > Datacenter > Objects Tab > Actions Icon > Deploy OVF Template
  • Browse to the downloaded OVA appliance file and click next.
  • Review the details and click next.
  • Accept the EULA and click next.
Review Deploy Details
  • Enter the name of the appliance and select a folder or datacenter in which to deploy it.
  • Select the resource where the template will run and click next.
  • Select a datastore that has sufficient space, then select a virtual disk format. Thin provisioned is best to begin with; you can convert to thick once the VM is deployed, whereas deploying thick up front may fail if space is tight. Click next.
Select Datastore and disk format
  • Select the network the appliance will run on and click next.
  • Enter the IP and DNS information and click next.
Enter IP and DNS info
  • Review the details, click the check box next to power on and click finish.
Review your settings
  • To check the status of the deploy, review recent tasks. When the template is powered up, point your supported browser to https://IP_OR_Hostname:8543/vdp-configure
  • Log in with username: root, password: changeme.
Log in to configure VDP
  • On the welcome screen, click next.
  • Confirm the network settings are correct and click next.
Confirm or edit network settings
  • Select your timezone, click next.
  • Enter your new root password. It must follow all of the listed criteria and must be exactly 9 characters, no more, no fewer (so weird). Click next.
Enter your new root password for VDP
  • Enter your vCenter information:
    • vCenter service account username
    • Password
    • FQDN or IP
    • If you’re running vCenter on a non-default port, change it here.
    • Check the box next to use vCenter for SSO authentication.
    • Click Test Connection; if successful, click OK, then Finish.
  • Click Restart Now to reboot the appliance. The restart may take ~30 minutes. Monitor its progress from the console, and when the blue "Welcome" screen is up you can proceed. Be warned, you may be back in 'install mode' instead of 'maintenance mode' after the appliance comes back online. I thought it was me, but there are people out there who had to go through the process more than once. If you're back at the configuration wizard, just humor it and answer all the questions again.

vdp-welcome

View the Maintenance Interface

  • Launch the appliance. Once configured, it's now on its maintenance interface. Point your browser to https://VDPIP_or_Hostname:8543/vdp-configure/
  • When the system health check completes, view the status of the appliance. Page 24 of the VDP admin guide explains what the maintenance interface is used for:
    • "Viewing Status"—Allows you to see the services currently running (or currently stopped) on the VDP Appliance.
    • "Starting and Stopping Services"—Allows you to start and stop selected services on the VDP Appliance.
    • "Collecting Logs"—Allows you to download current logs from the VDP Appliance.
    • "Changing vSphere Data Protection Configuration"—Allows you to view or change network settings, configure vCenter Registration, or view or edit system settings (timezone information and vSphere Data Protection credentials).
    • "Rolling Back an Appliance"—Allows you to restore the VDP Appliance to an earlier known and valid state.
    • "Upgrading the vSphere Data Protection Appliance"—Allows you to upgrade ISO images on your vSphere Data Protection Appliance.
  • *Note* Maintenance services will be stopped for the first 24-48 hours after deployment. This is so your 1st backup window is uninterrupted by maintenance activities.

Migrate Restore Points

  • Point browser to https://VDPIP_or_HOSTNAME:8543/vdp-migration and log in with your VDP root password.
  • Click Attach VDR. This takes some time and will depend on the size of your backup set.

attach-vdr

You can select the jobs you'd like to keep and edit the retention. This takes a very long time, so be patient.

migrate

VMware KB on migrating restore points

Keeping VDR around for restores

Officially, VDR is not supported in 5.1; however, if you have the VDR plug-in in vCenter, you can still do restores. It's best to do a restore rehearsal and never overwrite the existing VM. It's recommended to keep VDR and VDP on separate hosts and to keep VDR powered off until you need to do a restore. If you're using DRS, just create a rule that separates the two virtual machines so they won't wind up on the same host by mistake.

ESXi 4.1 to 5.1 Upgrades with VUM

This is a really good post about how to remove software modules that are no longer needed. I came upon this issue when upgrading from ESXi 4.1 to ESXi 5.1. One of my hosts wouldn't remove the drivers.
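
For reference, once a host is on ESXi 5.x you can list the installed packages (VIBs) and pull an unneeded driver by name from the shell with esxcli; on 4.1 the equivalent was the older esxupdate/vihostupdate tooling. The VIB name placeholder below is whatever shows up in the list output for the module you're trying to get rid of (reboot the host afterwards):

esxcli software vib list
esxcli software vib remove -n <driver-vib-name>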

esxupdate encountered error

I uploaded the offline bundle zip to VUM and created a host extension baseline and ran it against the host. The host had a kernel panic and didn’t boot up.

kernel panic

I had to upgrade that particular host from the ISO and forgo Update Manager, so all is well.

I was even given this post from VMware technical support when I called. I’d already happened upon it, but it was impressive that his blog post was on the tech’s radar.

Grow (Extend) an LVM on a Linux VM

We've all been there: you're running an app on a VM and you see that it is quickly running out of free space. Since we're not constrained by any physical limitation, we can just allocate more space to the guest and grow the disk.

As with any hard drive partitioning, make sure you back up any critical data. Since this is a VM and I'm running VDR, I can run a quick backup and begin. Also, remove any snapshots if you have them.

Power down the VM. Add space to the hard disk under Edit Settings.

Edit Settings on the VM

Power the VM back up and run # df -Th to see the current disk usage and filesystem type.

df -h output before growing the disk

Run # ls -al /dev/sda* to view the disk and its existing partitions.

fdisk /dev/sda

:n (new partition)

:p (create primary partition)

:3 (the partition number)

: first and last cylinder (keep the defaults)

: p (print the partition table and review the new partition on /dev/sda#)

:w (write the table to disk and exit)

Reboot the server or run # partprobe. If partprobe throws an error, just reboot. This is to make sure the kernel actually re-reads the new partition table.

Run # vgdisplay to view the volume group. In the FREE PE / Size section, you will see that there isn't any free space yet. Take note of the proper name of the volume group.

Output from vgdisplay

Run # vgextend vg_insertyourvghere /dev/sda# (if your LVM version complains that the device is not a physical volume, run # pvcreate /dev/sda# first).

Do another #vgdisplay to confirm the free space on the volume group.

freespace

It shows there is now 4GB of free space in the volume group. Now we extend the logical volume into that free space.

Do an #lvdisplay to get the proper name of the logical volume you’re going to extend.

Now, time to extend the logical volume.

# lvextend -L +4G /dev/vg_kimathegreat/lv_root

lvextend

I reduced the size slightly to avoid the error about  “Insufficient free space”.
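
An alternative that sidesteps the rounding problem entirely is to extend by extents instead of by an explicit size; asking for 100% of the free extents gives the logical volume exactly what the volume group has left:

# lvextend -l +100%FREE /dev/vg_kimathegreat/lv_root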

Now to resize the file system. If this is a partition that can be unmounted, you can run # umount before running resize2fs; for ext3/ext4, resize2fs can also grow the filesystem online while it's mounted (which is the only option for the root filesystem).

# resize2fs /dev/vg_kimathegreat/lv_root

Run df -h to see the new size of the partition and the increased free space.

newfreespace
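
To recap, here is the whole flow in one place, using the device and volume names from this example (swap in your own; the pvcreate step is only needed if your LVM version refuses to extend the volume group onto a raw partition):

# fdisk /dev/sda        (n, p, 3, accept the cylinder defaults, w)
# partprobe /dev/sda    (or reboot)
# pvcreate /dev/sda3    (only if vgextend complains the device is not a physical volume)
# vgextend vg_kimathegreat /dev/sda3
# lvextend -l +100%FREE /dev/vg_kimathegreat/lv_root
# resize2fs /dev/vg_kimathegreat/lv_root
# df -Th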

Now, there are a few caveats I've run across, mainly when running fdisk /dev/sda. It will let me create a partition, but depending on where the sectors start and end, I won't be able to add that partition to the volume group with vgextend.

If that happens, run # cfdisk and see where the partitions are and whether you have any free space. If you do, that is where you create your partition. You can make your corrections with fdisk and cfdisk by deleting the small partition and creating a new one in the larger chunk of free space. Just be careful not to delete anything critical.

As you can see, I'm missing sda3. That was because the first run of fdisk /dev/sda created a partition using only a few megs of space. I didn't note the sector count and created the primary partition in the smaller block of free space. I used fdisk (d) to delete that partition and then created a new one.

missingsda3

Having VT enabled on your CPU isn't all you need to run a nested ESXi environment in VMware Workstation 9

I built an ESXi 4.1 test environment in Workstation 9 to test and document an upgrade from vSphere 4.1U2 to 5.1U1a.

Overview of ESXi environment

My test box is:

  • Dell PowerEdge 2900 with (2) dual-core Intel Xeon 5160s
  • 24GB RAM
  • iSCSI LUN attached to hold my VMDKs
  • 2008 R2 Enterprise SP1 with VMware Workstation 9.0.2 (upgraded from 8.0.2)

I wanted to install some 64-bit VMs inside my ESXi servers, and after some cursory reading and searching, I was under the impression that I could. The first thing was to make sure the CPU could handle a 64-bit OS (Intel 64), VT (Intel Virtualization Technology, VT-x), and XD (Execute Disable Bit). I went to the Intel site and looked up my processor. Looks like I was ready to go. (I missed the part about EPT=No at the bottom; as Mike would say, "RTFM".)
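
In hindsight, a quicker way to check for this from the Windows host itself (my suggestion, not something I did at the time) is Sysinternals Coreinfo, which dumps the virtualization-related CPU features:

coreinfo -v

Supported features are marked with an asterisk and unsupported ones with a dash, so an EPT line without an asterisk would have told me up front that nested 64-bit guests weren't going to happen.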

After installing the DC, SQL, and vCenter server and making sure communication between the three was working, I created my first ESXi VM, and when I powered it on I got this warning:

Intel VT-X/EPT is disabled for this ESX VM

vt-xept_disabled

There is a setting on the VM under Processors that should be enabled. I checked the box and proceeded with my installation.

esxinstall-eror-fixed
Edit the settings on the ESXi VM and enable VT

I configured the host and added it to the cluster, so I thought I was in the clear. Somewhere along the line, before I tried to install a guest, I saw this message:

Virtualized Intel VT-x/EPT alert
I clicked yes to continue. Clicking no wouldn’t let me proceed.

I searched this message and the previous one and found several posts that suggested confirming the BIOS settings on the machine that's running Workstation, enabling VT in the VM settings (we did that), and vmx or /etc/vmware/config hacks to fix it; nothing worked. I went so far as to update the BIOS on the server just in case there were features that were not exposed in the version I was currently running.
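
For completeness, the vmx hack those posts refer to usually boils down to one line added to the nested ESXi VM's .vmx file; as far as I can tell it's the same thing the Virtualize Intel VT-x/EPT checkbox under the VM's processor settings writes out in Workstation 9:

vhv.enable = "TRUE"

It's worth knowing about, but as I found out, it only does something useful if the physical CPU underneath actually supports EPT.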

Here are several errors I got when I tried to install 2003 R2 64-bit on the ESXi host:

buildingcluster-01

buildingcluster-error01
buildingcluster-error02

I posted on the VMware community forum for Workstation and got a response that I really didn't anticipate:

The implication of the message, “Virtualized Intel VT-x/EPT is not supported on this platform,” is that your host does not support EPT.

OK, so there is one more technology that is required to run a nested 64-bit environment. But what is EPT, and when did it become so important?

The ability to run a nested environment is fairly new. Newer processors (both Intel and AMD) have begun to include Extended Page Tables (EPT) in their silicon (AMD's equivalent is nested page tables). EPT lets the hardware keep a second level of page tables for each guest VM, mapping guest physical memory to host physical memory, instead of the hypervisor having to maintain that mapping in software.

I have my nested environment running fine; there just aren't any 64-bit VMs running on any of the ESXi 4.1 hosts.

For more info on EPT and Memory Management read here and here.

Adding space to a vDisk on a PERC 5/i Controller Part 1 of 3

We've all been there: running a server that's way past its prime on six-year-old hardware. The application grows, the free space on the partitions shrinks. Then one day, you get the call that things aren't working 'right' because there is no space. You suggest a P2V or reinstalling the application on a VM. You're met with stares, grumbles of "I'm too busy," and looks of utter discontent.

The only thing to do is add more disks and grow the drives in question.

How to add drives to a vDisk on a PERC 5/i integrated controller

(disclaimer! These are the steps I took, they may not work for you in your environment)

  • Before performing any hard disk reconfigurations, please make sure you have a good (tested) backup of your server. As with all HD reconfigurations, there is a risk of data loss.
  • Confirm that Dell OpenManage Server Administrator (OMSA) is installed. If not, download and install it.
  • Add the hard drives to the server and reboot. They should be the same size or larger.
  • Boot into the PERC (RAID controller) BIOS configuration utility. On the Dell PowerEdge it's Ctrl+R.
  • There is an additional menu that now appears up top called Foreign View. Ctrl+N over to the Foreign View menu and view the additional disk group. You'll see it's marked as foreign.

PERC 5/i Integrated Bios Configuration

  • Highlight the controller where the foreign configuration exists and press F2. Use the arrow keys to expand the menu and select Clear. Clear will delete the foreign configuration (for example, if you're using disks from another server). Press Enter.

Clear foreign configuration

  • Press OK if you’re sure you want to clear the configuration.

Confirm Clear

  • Press OK. You’ll see that the foreign view menu has disappeared.

noforeignview

  • Exit the BIOS utility by pressing ESC. Reboot the server.
  • Log into the server and launch Server Administrator. Expand the Connector and the physical disks. The state of the disks should read “Ready” and the used RAID disk space as 0GB.

disk ready

  • From the left pane, click on the virtual disk and click the down arrow next to Available Tasks. Select Reconfigure and click Execute. (A CLI equivalent using omconfig is sketched at the end of this post.)

osma-reconfig

  • Click the connector to view the physical disks. Select the disks that you just installed by clicking the check box.

Select Physical Disks to add to the vDisk

  • The new disks are now listed under selected physical disks. Click continue.

osma-reconfigvdisk-1of3b

  • Select the RAID level currently used by the virtual disk. If your virtual disk is currently RAID-5, make sure you select RAID-5. Click continue.

Select Disk Attributes

  • Review the new virtual disk configuration. Click finish.

osma-reconfigvdisk-3of3

  • The disk reconstruction will begin and the progress will be displayed.

omsa-reconstruct

  • This will take several hours to complete. Once finished, view Disk Management. There you can see the new unallocated space at the end of the disk.

diskmgmt-unallocated

  • To grow C:\ the free space must immediately follow that partition. Use gparted to move the partitions around (steps to follow in part 3). First, you can grow D:\ by using diskpart (part 2).
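
As an aside, OMSA can also drive this reconfigure from the command line instead of the web GUI. I did it through the GUI, so treat the syntax below as an assumption to check against your OMSA version's documentation; the controller, vdisk, and pdisk IDs come from the omreport output, and the pdisk list should include the existing members of the virtual disk plus the newly added drives:

omreport storage vdisk controller=0
omreport storage pdisk controller=0
omconfig storage vdisk action=reconfigure controller=0 vdisk=0 raid=r5 pdisk=0:0:0,0:0:1,0:0:2,0:0:3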

Continue on to Step 2

Adding space to a vDisk on a PERC 5/i Controller Part 2 of 3

To Grow the D:\ Drive

This is part 2 of the “Adding space to a vDisk on a PERC 5/i Controller” post.

Back to Part 1

  • Launch diskpart from the command prompt: #diskpart (enter)
  • It opens in another window. Enter #LIST DISK (enter) and review the disks on the server.
  • Enter #LIST VOLUME (enter) to view the volumes. Note the volume numbers next to the drive letters. You will select the volume that you would like to grow.

diskpart-listvolume

  • #LIST PARTITION (to view the partitions)

diskpart-listpartition

  • Enter #SELECT DISK # (the number of the disk you're working with). This brings the focus onto that disk. OR
  • Enter #SELECT VOLUME 2 (the D:\ is what we're growing so we will have enough space to 'give' to C:\)
  • Decide on the size that you'd like to grow the volume by, in MB. In this example, we grew it by roughly 100GB. Enter #EXTEND SIZE=100000 (SIZE is in MB, so 100000 is ~100GB)
  • You'll see the success message immediately following.
  • Enter #LIST VOLUME (to review the size of the new volume. It will have an asterisk beside it)

diskpart-extend

  • Enter #LIST DISK to view the available space on the vDisk.

diskpart-listdisk-done

  • Enter #EXIT to close diskpart.
  • Go back to disk management and view the size change of D:\ and the unallocated space.

diskmgmt-unallocated2
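
Incidentally, the same sequence can be run non-interactively: diskpart reads its commands from a text file via the /s switch, which is handy if you have several servers to do. The volume number and size below are just this example's values.

rem grow-d.txt, run it with: diskpart /s grow-d.txt
list volume
select volume 2
extend size=100000
list volume
exit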

Next Step: Growing the C:\ Drive

Adding space to a vDisk on a PERC 5/i Controller Part 3 of 3

To Grow the C:\ Drive

This is part 3 of the “Adding space to a vDisk on a PERC 5/i Controller” post.

Back to Part 1 or Part 2.

Growing the system partition is a much more elaborate task. Download gparted (do not use 0.15.0, it's buggy) and burn the ISO to disc. Make sure you have a full system backup before performing any disk modifications. Yes, that means YOU if this is a production system.

Reboot the system with the gparted disc in the drive. Press the Enter key when you get to the boot screen. Follow the remaining prompts to get to the GUI. When asked which mode you prefer, press 0, then Enter.

gparted-1

  • Review the partitions and familiarize yourself with your disk layout. Note which partition is the boot partition. For Windows, that is your C:\ drive. In this example, C:\ is /dev/sda2 and D:\ is inside the extended partition, /dev/sda5.
  • Now this is where it gets tricky. Here I shrank the /dev/sda5 partition from the left so the free space ends up ahead of it. Then I shrank /dev/sda3 (the extended partition). Lastly, I grew the C:\ drive partition, /dev/sda2.

gparted-resize

gparted-resize2

gparted-resize3

gparted-resize4

  • Notice the several unallocated slices between the partitions; these occur when you have cylinder-aligned and MiB-aligned partitions on the same disk. Also, notice the unallocated space at the end of the disk (like when we started). This will be taken care of in Windows; we could handle it here, but it seems quicker to do from the OS.
  • Once your partitions appear as you'd like them, click Apply. This will apply all three queued operations to the disk. Depending on the size of your disk, this may take some time.
  • Click Details for more information about what's happening.

gparted-apply

  • Once the operations are complete, click close.

gparted-done

  • Reboot the server and remove the CD. Your system will run a check disk on C:\ (and maybe D:\). Let it complete and allow the system to boot to Windows.
  • When your system boots up, open Disk Management and view the newly sized C:\ and D:\. Note the unallocated space still at the end of the drive. Also note that D:\ no longer has a drive letter; reassign the same drive letter.

diskmgmt-unallocated3

  • Now run diskpart and extend the volume.
  • Enter #List DISK
  • Enter #LIST VOLUME
  • Enter #SELECT VOLUME #
  • Enter #EXTEND. This will take all available space and use it to extend the partition.
  • Reboot the server and allow for another check disk to run. This time it will be for the D:\ drive. Depending on the size of the drive, this may take some time.
  • Once the server reboots, open up disk management and review the disk layout.

diskmgmt-done

Adding AD Users and Computers to Windows 2008 R2

I'm human. Someone asked me how to get to Active Directory Users and Computers (ADUC) on a 2008 R2 server and my first response was, look under Administrative Tools. Well, as luck would have it, it's not there. Where is it, MMC? admintools? adminpak? I wasn't sure. This is a new server and not under my purview, so I just assumed it was there. Sometimes I think I've forgotten more than I remember, but then it dawned on me: this is 2008 R2 and EVERYTHING is either a role or a feature.

Problem

How to add Active Directory Users and Computers to a Windows 2008 R2 Server

Solution

  • Launch Server Manager
  • Click on/Expand Features
  • Click Add Features
  • Scroll down and expand Remote Server Administration Tools
    • Expand Role Administration Tools + AD DS and AD LDS Tools
    • Select AD DS Snap-Ins and Command-line Tools

    Add Remote Server Administration Tools

  • Click Next. Confirm the installation selections and click Install.
  • Click Close when the installation succeeds.
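
If you'd rather skip the GUI, the same tools can be added from an elevated command prompt. ServerManagerCmd is marked as deprecated on 2008 R2 but still works; RSAT-ADDS should be the command-line identifier for the AD DS tools group, and you can confirm the exact ID with the -query switch first:

ServerManagerCmd -query
ServerManagerCmd -install RSAT-ADDS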

aduc-installed

When powering on a vMA template: Cannot initialize property 'vami.netmask0.vSphere_Management_Assistant…has no associated network protocol profile.

I’m new to 5.1 and I’m chugging along, getting my new cluster up and running. Deploying a template was a walk in the park in 4.1. This is where you find out you don’t know what you don’t know.

Problem

When I power on the vMA template, I get this error:

VM Power-On Error

This is caused by not having created an IP Pool for your vApps. What is an IP Pool, you say? Here is an explanation from the vSphere 5.1 online documentation:

IP pools provide a network identity to vApps. An IP pool is a network configuration that is assigned to a network used by a vApp. The vApp can then leverage vCenter Server to automatically provide an IP configuration to its virtual machines.

Solution

You'll have to configure an IP Pool in order to get your template powered on. Click on the datacenter in the vSphere client. There is a new tab called IP Pools; click on it to configure a pool.

IP Pool Tab

Click Add. The New IP Pool Properties box appears. Give the pool a name. Depending on which version of IP you're using, click on the corresponding tab.

New IP Pool Properties

Enter the subnet and gateway information as it pertains to your environment. I did not check Enable IP pool; you may or may not have to, depending on your environment. Click on the DNS tab and configure DNS as needed. Go through the other tabs and configure them as they apply. Since I'm only using IPv4 without DHCP, it requires limited config. Click OK when you're finished.

ippool-finish

You should now be able to power on your vMA template.

Similar information can be found here: