If you have ever had to reattach a disconnected SR to your XenServer (I published a blog post with instructions on how to do this some time ago), you will already be familiar with the xe pbd-plug command.
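For anyone who has not used it before, here is a minimal sketch of that reattach workflow (the UUIDs are placeholders you need to fill in for your own SR):

```shell
# Find the PBD linking the SR to the host, and check whether it is attached:
xe pbd-list sr-uuid=<sr-uuid> params=uuid,currently-attached
# Plug the PBD back in to reattach the SR:
xe pbd-plug uuid=<pbd-uuid>
```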
What better way to start 2017 than with a post on my future projects? There are several things I would like to try in the next 12 months: some I have been meaning to try for a long time, others that made the list only recently. Of course, like every year, this list will change a million times and I will add and remove entries as I go along, but for now, these are some of the things I want to spend some time playing with in 2017.
You might encounter this error message in XenServer when launching a VM:
This VM needs storage that cannot be seen from that server
I am not sure why the guys over at Citrix decided to refer to the host you are trying to launch the VM on as “that server”, but this error message simply means that your host cannot access the SR this VM is stored on. This usually means that the SR got detached for some reason; it is very likely that the SR will show up as unplugged in XenCenter, and trying to repair it from the XenCenter client will return another error message, typically something like this:
Error parameters: , Logical Volume mount/activate error
The first Google result for the error message in the title of this post is a thread on the Citrix Discussions website recommending that you eject any disc from the DVD drive of the VM, but in my case there was no disc inserted, so this wasn’t helpful. What worked for me was removing the SR and adding it back to XenServer. I then restored my VM metadata so that all the drives could be attached to the correct VMs once again.
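A sketch of that remove/re-add workflow, with placeholder UUIDs and a local LVM SR used purely as an illustrative example (adjust the type and device-config for your own storage):

```shell
# Detach and forget the SR (this removes it from the XenServer database,
# but leaves the data itself on disk):
xe pbd-unplug uuid=<pbd-uuid>
xe sr-forget uuid=<sr-uuid>

# Reintroduce the SR with its original UUID, then recreate and plug its PBD:
xe sr-introduce uuid=<sr-uuid> type=lvm name-label="Local storage" content-type=user
xe pbd-create host-uuid=<host-uuid> sr-uuid=<sr-uuid> device-config:device=/dev/sdb
xe pbd-plug uuid=<pbd-uuid>

# Restore VM metadata from a backup so each VDI reattaches to the right VM:
xe vm-import filename=<metadata-backup> metadata=true
```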
Here is yet another weird XenServer issue, but luckily this one was super simple to solve. After rebooting my XenServer 6.5 and trying to launch a VM, I got this error message in xsconsole:
Failed: The specified host is disabled
It goes without saying that I did not disable the host myself before the reboot. The funny thing is that everything seemed to work as usual (SSH access, local shell, etc.); the only problem was that I could not start any VM.
I just took a look at the documentation and I saw that there is a command to manually re-enable a host:
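Based on the xe CLI documentation, that command is xe host-enable (the host name and UUID below are placeholders):

```shell
# Re-enable the host so VMs can be started on it again:
xe host-enable host=<host-name-or-uuid>
# Verify that the host is now enabled:
xe host-param-get uuid=<host-uuid> param-name=enabled
```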
Simply running this command allowed me to start virtual machines on this host again.
You have got to love informative error messages. This one right here is one of the most obscure ones I have seen so far:
Exit Maintenance Mode Failed
(“‘NoneType’ object has no attribute ‘xenapi’”,) in XenServer
This error message came up in XenServer after a reboot any time I would try to make my XenServer host exit maintenance mode. Also, I only noticed that the host was in maintenance mode because I could not launch any VM. Trying to launch a VM would result in an error message telling me to exit maintenance mode first.
Luckily, even if the error message itself is absolutely useless, the solution was extremely simple: free up some disk space.
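A quick way to check whether this is your problem, assuming the usual dom0 layout where log files under /var/log are a common culprit:

```shell
# Check free space on dom0's root filesystem:
df -h /
# Find the biggest log files, which are usually safe to rotate or truncate:
ls -lhS /var/log | head
```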
I am a fan of thin provisioning, I like the idea of only using the necessary space for something, together with the flexibility of knowing that if I need more space, this will be taken care of automatically.
Enabling thin provisioning on XenServer is very simple at installation time, and not as straightforward if you want to convert an existing SR from thick to thin provisioning. So here are the two scenarios.
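If you missed the option at installation time, you can still create a new thin-provisioned local SR by hand. A sketch using the EXT (VHD-on-ext3) SR type, with a placeholder host UUID and device:

```shell
# Create a thin-provisioned local SR; type=ext stores VHD files on ext3,
# so disk space is only allocated as the VMs actually use it:
xe sr-create host-uuid=<host-uuid> name-label="Local thin SR" \
    type=ext content-type=user device-config:device=/dev/sdb
```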
Here I am with another XenServer issue that was easy to solve but quite difficult to troubleshoot. I have applied a few XenServer updates (I am running XenServer 6.5 FP1), but I started having issues after installing XS65ESP1022 and XS65ESP1023.
Specifically, after rebooting the host following the installation of these two updates, I noticed that my host would not come out of maintenance mode anymore. If I tried to do so from XenCenter, I would get an error message saying “Server still booting”. This, of course, was not true, since I could access the console on the host.
After a lot of trial and error I found a solution, but it turns out there were at least a couple of problems that I needed to solve before being up and running again.
This week I had the chance to experience my first XenServer crash. And by crash I mean that my XenServer suddenly became unresponsive: I lost the console connection, XenCenter would not connect anymore, and I had no way to understand what was really going on until I connected a display to the server itself. It was then that I realized that my XenServer was going through an infinite boot sequence.
Upon further inspection, I noticed that there was a kernel panic right before the screen with the XenServer logo, and the error message was this:
Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)
I did not find any working solution, so maybe something got corrupted on the boot USB drive (I had actually started noticing some slowdowns when SSH-ing to the machine right before the crash). I also tried installing XenServer again using the installation CD (and the same USB drive as the destination), but the process failed while backing up the existing XenServer data on the USB stick, specifically during the backup of the /var directory. So something on the boot drive probably did get corrupted.
I finally tried playing around with BIOS boot settings for a while but I had no success, so I brought things back to how they were before. The only thing left for me to do was to restore XenServer from backup.
If you are running ESX 5.0, Workstation 8.0, or Fusion 4.0 or higher, additional configuration is needed so that the virtual HPET setting does not prevent the virtual machine from booting.
I have found that this holds true for Citrix XenServer as well. If you don’t touch the HPET setting, FreeNAS 9.3 will fail to boot on XenServer 6.5.
I have also found this bug report against FreeNAS 9.3 and XenServer 6.2, which is currently in resolved status. However, since I have experienced this same issue on XenServer 6.5, I have decided to put together some screenshots and instructions on how to solve this issue and allow FreeNAS 9.3 to boot successfully on XenServer 6.5.
There are two different phases to allow this to work:
- Disable HPET from GRUB to allow FreeNAS to boot successfully the first time
- Disable HPET within FreeNAS to avoid stumbling upon the same issue on following reboots
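A sketch of the two phases, assuming FreeNAS 9.3’s GRUB boot loader and the standard FreeBSD kernel hint for disabling HPET:

```shell
# Phase 1: at the GRUB menu, press "e" to edit the boot entry and append
# the following kernel hint to the line that loads the FreeNAS kernel:
#   hint.hpet.0.clock=0
# Then boot with the edited entry (Ctrl-X or F10).

# Phase 2: once FreeNAS is up, make the change permanent from the web UI:
#   System -> Tunables -> Add Tunable
#     Variable: hint.hpet.0.clock
#     Value:    0
#     Type:     Loader
```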
There is a quick way to check which VMs don’t have XenServer Tools installed. Just run the following command on your host:
xe vm-list PV-drivers-up-to-date=false
Note that for this command to work, the virtual machines need to be running.
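If the default output is too verbose, the same filter can be combined with a params selector; a small variation on the command above:

```shell
# Show only the names of running VMs whose tools are out of date or missing:
xe vm-list PV-drivers-up-to-date=false power-state=running params=name-label
```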