VMware 5.5 – “A general system error occurred: vim.fault.NotFound”

Today I was faced with a problem I had never encountered before with VMware. I had several machines that were failing to vMotion for various reasons, all popping up with an oddball error:

“A general system error occurred: vim.fault.NotFound”

It was a “PC LOAD LETTER? WTF DOES THAT EVEN MEAN?” moment for me (an Office Space reference, if you live under a rock). After about four hours of Googling I found that this has to do with networking. So I connected via SSH to the host the machine resides on and ran the following:

# esxcli network vm list

The above lists the machines running on this host. You can use grep to make it easier to locate the VM guest that has the issue. Take note of the World ID number in the first column, then run the following:

# esxcli network vm port list -w WORLDID


The output should look something like this:
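For reference, the port list prints one block of key/value pairs per vNIC; the sketch below is from memory, and every value in it is invented, with the DVPort ID showing the sort of odd value this article is about:

```
   Port ID: 33554486
   vSwitch: dvSwitch-Prod
   Portgroup: dvPortGroup-VLAN100
   DVPort ID: C-5539
   MAC Address: 00:0c:29:aa:bb:cc
   IP Address: 0.0.0.0
   Team Uplink: vmnic2
   Uplink Port ID: 33554434
   Active Filters:
```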

I noticed that the DVPort ID did not look right compared with the rest of the guests on that VLAN. The rest were just four digits; this one had a letter and a special character. Now that I could see the problem from the host's point of view, I went back to my VMware fat client and navigated to Home > Inventory > Networking (this is in the top “address bar” area).

Select the datacenter in the left-hand pane. Now begin to search (the lower search box in the right pane) by clicking the down arrow and setting some new filters: “Port ID, Name, Port group or connectee contains:”


I searched for my “C-5539”. As I expected, nothing came back. So I cleared my filters and searched by the VM name instead; it did not appear either. Something was off, and there were tons of free ports in this port group.

At this point I could see the problem the same way the host did. Here's how I fixed it without causing a service disruption to a live Linux machine.


Right-click on the VM in the inventory => Edit Settings => Network adapter 1 => (in the bottom right-hand corner) Switch to advanced settings. Change the Port ID to a free port. (You can find a free port by going to the Networking section again, selecting the VLAN the machine is on, and scrolling down until you see blank slots; grab the number on the left.)


Enter the port number into the “Port ID:” field, select “OK”, and perform a vMotion. That is all!


Changing a password on a MacBook Pro without the previous password

It’s not often I get to play with MacBooks these days. Today we had an interesting situation: a user had passed away, and of course nobody knew the password on their account, which was the only account on the machine. But it had data on it that needed to be recovered (and of course the drive was encrypted as well, to make matters worse). In Linux we would typically just boot to single-user mode, type passwd for the user's account, and be done with it. In OS X you can do this to some extent if the drive is NOT encrypted (just run an fsck on the disk, then mount -uw /, and you should be able to alter the password). In any case, here is how I did it.



  • Reboot and hold the “command” + “s” keys to boot into single user mode
  • Mount the disk
/sbin/mount -uw /
  • Delete the .AppleSetupDone file – this tricks the OS into thinking it's new out of the box and needs to go through its setup process again.
rm /var/db/.AppleSetupDone
  • Reboot the machine by typing “reboot” followed by the return key

Now you should be at a setup screen. Click through the prompts, and when it asks about transferring data BE SURE TO CLICK “DO NOT TRANSFER MY DATA“. Don't worry, the original data is still there.

  • Connect to your Wi-Fi through the setup if you care to
  • Create a new account (this will be a new administrator account on the machine but the old one will still be there as well)
  • Finish setup and login to the machine with your new admin account.
  • Open System Preferences from the apple menu and then click Users & Groups
  • Click the little lock at the bottom left of the screen and enter the password you created earlier for your new account
  • Select the user name of the original admin account and click “reset password” on the right
  • Type in the new password and repeat it in the “verify” box and click “change password”
  • You can now login as the original admin account with the new password you specified in the step above.

Updating ManageIQ to the latest code release


This is fairly simple and documented on various sites; I am putting it up here so I don't have to search for it again. Essentially I am on fine-3 and want to make sure I have the latest updates for the application, as I am currently stuck on a bug. Here is how you can update your current MIQ version:

SSH into the appliance

Modify the git config to use HTTPS

git config --global url."https://".insteadOf git:// 

Do a git pull

git pull


Do a bundle install

bundle install

Compile everything

bundle exec rake evm:compile_assets


Reboot the appliance or restart the application (I rebooted the entire machine).

Upgrading ManageIQ from Botvinnik to latest release

Good Morning Everyone,

It’s been a while since I last wrote, but this process is one that I feel needs to be well documented. As a disclaimer, ManageIQ's upgrade process is terrible, as an in-place upgrade is neither possible nor supported. One more note: there is currently a bug (in ManageIQ Fine-3 specifically) preventing reports from being exported; information on that bug can be found here: https://bugzilla.redhat.com/show_bug.cgi?id=1471014. On to the show.

1. Log in to your old appliance. In my case it's Botvinnik, but theoretically it could be any release.

2. Do a database dump (make sure you have the appropriate space first).


pg_dump --format custom --file ~/db_backup.cpgd vmdb_production

3. Download, install, configure your new appliance http://manageiq.org/download/


##########From this point forward you are working ONLY ON THE NEW APPLIANCE#############



4. On the new appliance (fine-3 in my case), stop the evmserverd service:

systemctl stop evmserverd

5. Drop the current DB that came on the new appliance

dropdb vmdb_production


#NOTE: Make sure you have enough space for the following part. You will likely need to increase the disk size somewhere on your machine to store the database backup from step 2.

6. SCP the files from the old machine to the new machine

scp root@OLD_APPLIANCE_IP:~/db_backup.cpgd ~/
scp root@OLD_APPLIANCE_IP:/var/www/miq/vmdb/GUID /var/www/miq/vmdb/

7. Create a new DB on the New Appliance

createdb vmdb_production

8. Restore your database that you copied over

pg_restore --dbname=vmdb_production ~/db_backup.cpgd

9. On your new appliance move to the vmdb directory by simply typing “vmdb”

10. Run the following commands in order

rake db:migrate

bin/rails r tools/purge_duplicate_rubyrep_triggers.rb

systemctl start evmserverd


11. At this point I ran into an error regarding a v2_key issue. To get around this, do the following:

Select the option to Generate Custom Encryption Key (12).
Follow the prompts to fetch the key from the Old Appliance (2)
Follow the prompts for VMDB location. (/var/www/miq/vmdb/certs/v2_key)
Use the same Region number as the Old VMDB Appliance.


12. One more rake to run:

rake evm:automate:reset


13. Start the evm service if it didn't start in step 10 (give the appliance 5 minutes or so to get going):

systemctl start evmserverd


After this you should have all of the data from your old production MIQ appliance on your shiny new upgraded appliance.

Redirecting tomcat to a public URL in lieu of local server IP

I ran into an issue recently where an app team requested help redirecting their Tomcat instance to the public URL that we serve off of our F5. I don't do this often, so I feel the need to document what I did to get this working.

  1. Find the Tomcat installation folder and the server.xml file; in my case /ds1/apache-tomcat-7.0.59/conf/server.xml

NOTE: Make a backup of your file in case you mess up.


cp server.xml server.xml.ori


2. Open the file with your favorite editor


[root@somehost ~]# vi /ds1/apache-tomcat-7.0.59/conf/server.xml

3. Locate the Connector section of your conf file. Mine started on line 71:


     71     <Connector port="8080" protocol="HTTP/1.1"
     72                connectionTimeout="20000"
     73                redirectPort="8443" />

4. Add the following underneath line 71, or underneath the <Connector port="8080" protocol="HTTP/1.1" line wherever it may live for you.
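The exact lines from the original post are not reproduced here, so treat this as a sketch: when Tomcat sits behind a proxy such as an F5, the attributes usually added to the Connector for this purpose are proxyName, proxyPort, and scheme (the hostname and port below are assumptions for illustration):

```xml
           proxyName="app.example.com"
           proxyPort="443"
           scheme="https"
           secure="true"
```

With these set, Tomcat builds redirects and self-referencing links using the public name and port instead of the local server IP.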



5. Remove the redirectPort="8443" line from the Connector and close the Connector right after the secure="true" attribute. Your finished product should look like this:

    <Connector port="8080" protocol="HTTP/1.1"
               secure="true" />

6. If your app needs to be started as a specific user, su over to that user at this point and restart the Tomcat installation. You will no longer have a Tomcat instance that uses internal-only links.

Ubuntu MATE 16.04 LTS connecting to PEAP network without a CA cert

This little problem has been haunting me since the release of Ubuntu MATE 16.04 LTS. Everything worked great: integrated Bluetooth worked on my Pi, Wi-Fi worked awesome, it was snappy. And then I attempted to connect it to my company's corporate network via wireless. No go. It kept telling me the password was incorrect (I am logged into about 100 servers, give or take; I think I know my password). This turned out to be an issue with wpa_supplicant 2.4 and should be resolved in 2.5, but I don't have that kind of time to wait. Here's how you get around it:

  1. Download wpasupplicant 2.3 : http://ftp.us.debian.org/debian/pool/main/w/wpa/wpasupplicant_2.3-1+deb8u3_armhf.deb
  2. Download wpagui: http://ftp.us.debian.org/debian/pool/main/w/wpa/wpagui_2.3-1+deb8u3_armhf.deb

While connected to a wireless or wired network open your MATE Terminal and do the following:


sudo apt-get remove wpasupplicant

Now find the packages I had you download. We begin with wpasupplicant 2.3-1: double-click on the file. A message will pop up complaining that there are newer versions in the repo; ignore it and click Install in the top right.

Then follow the same process for your package “wpagui”

Now that this is complete, we have successfully gotten rid of the buggy wpa_supplicant package. However, when you uninstalled it you lost Network Manager as well. Luckily, if you haven't restarted your machine you are still connected to the internet. Run the following command to get it back:

sudo apt-get install -y network-manager

Once this is complete, restart your Pi and you will be able to connect to your corporate or otherwise “secure” network.

Automatic Password generator to work with Ansible

I recently had an idea to change all of my root passwords across my entire environment using Ansible. Ansible made this pretty straightforward; however, their approach STILL required human interaction, which I didn't really want. I wanted a cron job that would run every X months: it would launch a script that edits my playbook, runs the playbook, and finally emails the new password to me. (The email part, I know, is dicey; I am working on using curl to add the password to our password management program via REST instead, and I will update this article with those edits when that is done.) This seemingly simple task caused me a huge headache, because the hash contained too many characters for sed to pass properly, and the python module (libpass) is a POS that won't accept variables passed in a string. I will start off with the Ansible playbook I'm running to edit the accounts. It's standard, really, and you can find it right on Ansible's web site:


- hosts: test
  tasks:
  - name: Change root password
    user: name=root update_password=always password=asd76aseJFSADA6/


(yes that’s all it contains)

As you can see from the above, there's nothing special about changing passwords with Ansible; you just have to generate the hash manually and then edit the playbook with the updated hash for each password change (which drives me nuts). To automate this process I made a bash script, which in hindsight is pretty simple. It uses openssl to generate a short random password (rand -base64 6 yields 8 base64 characters), uses crypt to create the hash, adds it to the playbook, and then launches the playbook (see comments in the script).

#Generate a random password (6 bytes of entropy, 8 base64 characters)
clearpass=$(openssl rand -base64 6)
#Use sed to remove the user line, clearing out the last password hash
sed -i '/user/d' /etc/ansible/playbooks/rootpass.yml
#Use crypt to make a hash of the password
HASH=$(openssl passwd -crypt "${clearpass}")
#Add the user line back into the file with the new hash
echo "    user: name=root update_password=always password=${HASH}" >> /etc/ansible/playbooks/rootpass.yml
#Email me the new password
mail -s "${clearpass}" john.foo@bar.com < /dev/null
#Launch the playbook
ansible-playbook /etc/ansible/playbooks/rootpass.yml
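To see the delete-then-append step in isolation, the same sed/echo dance can be run against a throwaway copy of the playbook; the path and the hash below are made up for the demo:

```shell
# Work on a temp file so the real playbook is untouched (demo only)
playbook=$(mktemp)
cat > "$playbook" <<'EOF'
- hosts: test
  tasks:
  - name: Change root password
    user: name=root update_password=always password=OLDHASH
EOF
NEWHASH='newhash123'          # stand-in for the crypt hash
sed -i '/user/d' "$playbook"  # drop the stale user: line
echo "    user: name=root update_password=always password=${NEWHASH}" >> "$playbook"
grep 'user:' "$playbook"
```

The key point is that sed deletes by pattern, so the script never needs to know the old hash; it only ever appends a fresh line.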

And there it is! So far I have tested this method on CentOS 5, 6, and 7; I cannot verify functionality on any other OS. However, I assume that as long as you have the openssl libs installed you should be good to go. Enjoy, and hopefully I will save some other nerd from jumping through hoops.

Partition-less root file system on CentOS 7

I have been on a quest over the last 6 months to create a completely scalable CentOS VM template for deployments. This is not only to save my precious after-hours and weekend time but to provide better uptime to my customers. Today I finally accomplished this, and I would like to share how you can make your own scalable VM. In this post I am only going to focus on /, because I feel the rest of the volumes are fairly easy to handle with dd and can be dealt with on the fly as it is if you are just building a template. I know there are going to be a bunch of LVM fanboys out there asking "why not just use LVM?" Simple answer: LVM had its purpose on hardware. In the virtual world you can add space to the disk directly instead of adding a disk to a volume group; it was just the partitions holding us back from doing things on the fly, and the Linux installers never gave an option to place the OS on a raw disk. Anyway, here is how I accomplished this.

1. Add a small disk to the virtual machine (512 MB to 1 GB) and make it SCSI (0:1).

2.  Reboot the machine

3. Your new drive should now be /dev/sdb. Use the following commands to make it bootable and turn the old /boot off:


[root@centos7x64 ~]# parted -s /dev/sdb 'mklabel msdos'
[root@centos7x64 ~]# parted -s /dev/sdb 'u s mkpart primary 2048 1048575'
[root@centos7x64 ~]# parted -s /dev/sdb 'set 1 boot on'
[root@centos7x64 ~]# parted -s /dev/sda 'set 1 boot off'


4. Create a filesystem on the new partition:

[root@centos7x64 ~]# mkfs.xfs -L boot /dev/sdb1

5. Make a copy of your fstab, then add the new disk to it: use the UUID you get from the second command below to replace the UUID next to /boot:

[root@centos7x64 ~]# cp /etc/fstab /etc/fstab.orig
[root@centos7x64 ~]# blkid -o value -s UUID /dev/sdb1
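For reference, the resulting /boot line in /etc/fstab looks something like this (the UUID below is a placeholder for the value blkid printed):

```
UUID=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee  /boot  xfs  defaults  0 0
```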

6. Mount your new /boot disk and copy the existing /boot directory over:

[root@centos7x64 ~]# mount /dev/sdb1 /mnt
[root@centos7x64 ~]# tar -C /boot -cpf - . | tar -C /mnt -xpvf -


7. Unmount /boot and remount your new boot disk

[root@centos7x64 ~]# umount /mnt; mount /dev/sdb1 /boot


8. Update grub so the config has the correct UUIDs:

[root@centos7x64 ~]# grub2-mkconfig -o /boot/grub2/grub.cfg

9. Shut down the VM and make your new boot disk SCSI 0:0 and the old root disk SCSI 0:1

10. Start the machine up and verify it boots properly. Use df to make sure you are booted from the appropriate disk; /boot should be on /dev/sda1:

/dev/sda1       508M  194M  315M  39% /boot

11. Power down the VM and add an additional hard disk to the machine for root (10-15 GB should do).

12. Power the machine on

13. Create a filesystem directly on the raw device:

[root@centos7x64 ~]# mkfs.xfs -L root /dev/sdf

(If your mkfs tool warns that /dev/sdf is an entire device, not just one partition, answer y to proceed.)

14. Take note of your device names: the old root should be /dev/sdb and your new root disk should be /dev/sdf at this point. (It's OK if yours are not exactly like this; we will change the ordering later.)

15. Go download a rescue disk and or live distro for the following. I used this Rescue CD

16. Mount the ISO to the CD drive on the VM and reboot. Interrupt the boot and boot from the disc.

17. Open a terminal, mount our disks /dev/sdb1 and /dev/sdf, and copy over the data:

root@ubuntu:~# mkdir /mnt/sdb1
root@ubuntu:~# mkdir /mnt/sdf
root@ubuntu:~# mount /dev/sdb1 /mnt/sdb1
root@ubuntu:~# mount /dev/sdf /mnt/sdf
root@ubuntu:~# tar -C /mnt/sdb1 -cpf - . | tar -C /mnt/sdf -xpf -

18. Modify the fstab on the new root disk (/dev/sdf in my case). The commands below append the new UUID to the fstab file; take the new UUID and replace the old / UUID with it:

root@ubuntu:~# cp /mnt/sdf/etc/fstab /mnt/sdf/etc/fstab.orig2
root@ubuntu:~# blkid -o value -s UUID /dev/sdf >> /mnt/sdf/etc/fstab
root@ubuntu:~# vi /mnt/sdf/etc/fstab

19. Now that we have our new / disk copied and in fstab, we once more need to update the grub2 config files to use the new UUIDs, so we will need to mount our /dev/sda1 disk to get access to this information. There really isn't a pretty way to do this through the console, so here is what I did:

root@ubuntu:~# mkdir /mnt/sda1
root@ubuntu:~# mount /dev/sda1 /mnt/sda1
root@ubuntu:~# blkid | grep /dev/sdb1
/dev/sdb1: UUID="3b130000-d0fa-4778-a973-fdaf1f2a4f51" TYPE="xfs" ## Old root
root@ubuntu:~# blkid | grep /dev/sdf
/dev/sdf: UUID="50fb9289-db06-4ca0-beb8-42797b96154d" TYPE="xfs"  ## New root
root@ubuntu:~# vi /mnt/sda1/boot/grub2/grub.cfg

20. Inside vi, use the following:
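Assuming the UUIDs found in step 19, a global substitution that swaps the old root UUID for the new one everywhere in grub.cfg would look something like:

```vim
:%s/3b130000-d0fa-4778-a973-fdaf1f2a4f51/50fb9289-db06-4ca0-beb8-42797b96154d/g
:wq
```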



21. Power down the machine, make sure your new /boot disk (the small one) is SCSI (0:0) and your new / disk is SCSI (0:1), and then power the VM back on to verify functionality.

22. If it boots, make a clone of the machine at this point and leave it powered off, just in case. Then remove the old root disk from the machine, hit OK, and attempt to reboot. If your machine goes FUBAR, delete it, power on your clone, and find the correct root disk to remove.


ManageIQ – LDAP authentication

Today I was tasked with making my ManageIQ instance available to all of our AD users, provided they were in a certain group. I saw no clear documentation on how this is done (much to my dismay). The problem for us: although you can go to Configure->Configuration->Server->Authentication, click on the Mode drop-down menu, and select LDAP, there is no place for the username and password of our read-only LDAP account (see below).


The two items blocked out there are my LDAP server address and the trailing domain for the end users. But as you can see, there is no place for me to add my read-only user to get access to the LDAP server.

(Please take note that ManageIQ will give you doom and gloom about manually editing the configuration files.)

So this is how we configure it to work:

1. On the third row of options (where Authentication is currently highlighted), go to the “Advanced” tab.

2. In the “File” drop-down menu, make sure “EVM Server Main Configuration” is selected.

 *NOTE* If you set up your LDAP host on the Authentication page previously, the LDAP host settings will already be in there for you, along with your port.

3. Edit the following lines to reflect your environment's situation:

base o=<base DN>
- <ldaphostaddress>
binddn uid=<bind username>
bindpw <bind password>



 *NOTE* For troubleshooting, log in to the appliance command line and run a tail on /var/www/miq/vmdb/log/evm.log; it will serve you well to grep for “ERROR”:

   #  tail -f /var/www/miq/vmdb/log/evm.log |grep ERROR

4. Now I wanted to set up roles based on groups, but first I had to tell MIQ where to look. Go back to the Authentication page (Configure->Configuration->Server->Authentication), scroll down to the bottom, and underneath “Role Settings” check the “Get Users Groups from LDAP” and “Get Roles from Home Forest” boxes. Then fill in the “Base DN” (where you want it to look for said groups), “Bind DN” (the user account used to query for groups), and finally “Bind Password”, which is obviously your Bind DN account's password. It should look like:


5. Click Validate. It should come back with a little success message at the top of the page.


(I swear we are almost there)

6. In the lower left-hand corner of the screen, click on “Access Control”, then click on “Groups”.


7. You will notice a “Configuration” button at the top. Click on that, then “Add a new Group”.


8. At the top, give the group a Description (keep it the same as the group name in AD). Then check the “Look Up LDAP Groups” box to the right, and just below that choose a default Role for the group (these are the equivalent of ACLs and control access, so choose wisely).

9. Below this section you have “LDAP Group to Look Up”: give it a user to look up that is in the group of interest, then enter the LDAP username and password and click “Retrieve”. Once this is done it will give you a drop-down at the top, “LDAP Groups for User”. Select the appropriate group, and at this point you can click Add (in the bottom right-hand corner of the screen). If you want to narrow down their access so they can only view certain things, go to the “Assign Filters” section at the very bottom of the page and assign the hosts, clusters, VMs, or tags that the group is allowed to view.


Now anyone who is in that group can log in to ManageIQ and will automatically have the permissions from the role you chose (“approver” in my case). You should be all set!

CUPS admin page “403 Forbidden”

I ran into a group recently who needed (wanted) to administer CUPS from a graphical page in lieu of using the good ol' trusty command line. Personally I don't care for admin pages when it takes me a total of 3 commands and 30 seconds to fix pretty much any CUPS issue, but the end users are my customers, so I obliged. After speaking with my network team and getting the appropriate firewall rules in place (port 631 TCP), I noticed that when I went to the page all I got was a “403 Forbidden” error. What I did to fix the issue is as follows (probably 2 minutes' worth of work):

1. Edit cupsd.conf, located in /etc/cups/ (you will likely need to prefix the command with sudo if you are not root):


[john@localhost ~] $ vi /etc/cups/cupsd.conf


2. Locate the section that contains <Location /admin> as well as <Location /admin/conf>


3. Underneath each section add “Allow all” so that it looks like this:
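A sketch of the result, assuming the stock Order lines are present in your file (surrounding directives may differ in your version of CUPS):

```
<Location /admin>
  Order allow,deny
  Allow all
</Location>

<Location /admin/conf>
  AuthType Default
  Require user @SYSTEM
  Order allow,deny
  Allow all
</Location>
```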


4. Write and quit the file (Esc + :wq + Enter, if you didn't know).

5. Use the cupsctl command to enable remote administration:

[john@localhost cups]# cupsctl --remote-admin


6. Restart CUPS. (NOTE: if you are on CentOS 7+ or any newer RHEL derivative, it's in your best interest to get used to the systemctl command: “systemctl restart cups”.)

[john@localhost cups]# service cups restart
Stopping cups:                                             [  OK  ]
Starting cups:                                             [  OK  ]

7. Check whether you can get to the page at http://machinename:631. Depending on how your company has things set up, it is in your best interest to fully qualify the machine name. The end result should be that you land on a page that looks like this:


That is all, you are done!