So I recently bought a decked-out Leopard Extreme from System76 with 64GB of RAM and SSDs for storage, and I wanted to set up a development environment on it for hacking on OpenStack Heat.

(By the way, I'm going to be working on Heat to add autoscaling support! That's cool!)

Devstack is a really nice tool made by members of the OpenStack community that sets up a single-node OpenStack deployment. I've used it in the past, but I really didn't want to install it directly into my Ubuntu desktop OS running on my new machine, because it tends to stomp all over a lot of things. I want to be able to create an isolated test environment so I can blow it away if I ever need to start from scratch.

Generally I love LXC for this kind of stuff, but unfortunately Devstack doesn't play well with it, because it needs access to a number of physical-ish interfaces on the host. I haven't tried too hard here, but usually there are issues with networking, block storage, and permissions. So I just went with KVM, since I knew my new machine should support nested virtualization for fast OpenStack guests.

I'd like to thank Sam Corbett, who wrote a blog post titled Getting Started with Heat, DevStack & Vagrant. I learned a lot about setting up a dev environment from that, but I wanted the speed of nested KVMs instead of Vagrant/VirtualBox.

As an aside, there seems to be a lot of confusion in the community about whether it's possible to have OpenStack spawn KVM guests when it's itself running in a KVM guest. Many say it's impossible, and that you must fall back to software-based QEMU virtualization, but I'd like to be clear that it is possible to do KVM in KVM. If you have nested virtualization enabled as I describe below, devstack will Just Work and spawn your instances with KVM. I recommend running "ps ax | grep kvm" after you spin up a guest to check whether it's using kvm or not.

Nested virtualization with KVM in KVM

The first thing I needed to do was enable nested virtualization -- I'm not sure why it wasn't already enabled, but oh well. You can check whether it is like so:

host$ cat /sys/module/kvm_intel/parameters/nested

This prints Y when nested virtualization is enabled and N when it isn't.

(Actually, the first way I discovered I didn't have nested virtualization available was that I created a kvm instance and ran "kvm-ok" inside of it, which reported that kvm wouldn't work).
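If you want that check in script form, here's a tiny sketch (the file path argument exists only so you can exercise the function on a machine without the kvm_intel module loaded; on a real host you'd call it with no arguments):

```shell
# check_nested: report whether nested virtualization is enabled.
# The parameter file normally lives at /sys/module/kvm_intel/parameters/nested;
# it contains Y (or 1, on some kernels) when nesting is on.
check_nested() {
    file="${1:-/sys/module/kvm_intel/parameters/nested}"
    if [ ! -r "$file" ]; then
        echo "kvm_intel module not loaded?"
        return 2
    fi
    case "$(cat "$file")" in
        Y|1) echo "nested virtualization: enabled" ;;
        *)   echo "nested virtualization: disabled" ;;
    esac
}
```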

So I'm still a little bit confused about this part, since most things I read said I'd need to reconfigure grub or change stuff in /etc/modprobe.d, but all I had to do was this:

host$ sudo modprobe -r kvm-intel
host$ sudo modprobe kvm-intel nested=1

Of course this will only work if your kvm-intel module isn't currently in use, which basically means you've shut down all your guests and turned off libvirtd. I didn't expect this to survive a reboot, but it did. YMMV!
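If it doesn't survive a reboot for you, the belt-and-suspenders fix is a modprobe.d entry. A sketch (kvm-nested.conf is a file name I made up, and the directory is parameterized only so you can try the function without root):

```shell
# write_nested_conf: persist "nested=1" for the kvm-intel module across
# reboots by dropping an options line into /etc/modprobe.d.
write_nested_conf() {
    dir="${1:-/etc/modprobe.d}"
    echo "options kvm-intel nested=1" > "$dir/kvm-nested.conf"
}
```

On the real host you'd run it as root with no argument; the next time kvm-intel is loaded it will pick the option up.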

By the way, if you have an AMD machine instead of Intel, I think everything's the same except that the module is called "kvm-amd".

Create the OpenStack host

Next step: create an Ubuntu 12.04 KVM guest on my physical host. Devstack only officially supports 12.04 right now (not, for example, 13.04), and I didn't feel like trying to figure out what would break, so I just went straight for the standard.

I used virt-manager, which provides a really nice GUI for managing your virtual machines with libvirtd. I did a pretty standard install, with 30G of RAM and 20G of disk space, pointing it at an Ubuntu mirror as the source for a network-based install. I didn't customize anything else (not even the network!).

Word to the wise: if you're doing this headless without virt-manager, I know from past experience that it's a bit of a pain to get access to the console on an Ubuntu installer; I think I had to use the "mini" installer last time I tried that, and pass some extra arguments to virt-install to get it to use the console instead of a GUI.

So, that's pretty much it. Once the install finishes, ssh into the KVM guest that you just created.

Enable NAT for the OpenStack guests

Run this command:

guest$ sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

(Ideally, set this up so that it runs on every boot of your OpenStack host, but I haven't gotten around to that yet.)
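If you do want it to survive reboots, one low-tech sketch (an assumption on my part -- I haven't actually wired this up) is an /etc/rc.local fragment on the OpenStack host. It also turns on IP forwarding, which MASQUERADE needs in order to actually route the guests' traffic:

```shell
# Hypothetical /etc/rc.local fragment for the OpenStack host (runs as root at boot).
# Forwarding must be enabled for the MASQUERADE rule to route guest traffic.
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
exit 0
```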

Set up Devstack

Devstack is almost as easy to use as the front page of makes you believe. In my experience, though, networking will not be configured correctly unless you edit your localrc ahead of time and specify some stuff. Unfortunately, I am super ignorant about all things networking, so it took me a while to figure out exactly what to put in.

libvirt happened to allocate an IP for my new guest, bound to its virtual eth0 interface (this is hooked up to a new virbr0 interface on my host, which has its own IP on the same subnet).

Here's the localrc I came up with, given that information:

HOST_IP=            # the guest's eth0 IP; my virbr0 network is a /24
HOST_IP_IFACE=eth0
PUBLIC_INTERFACE=eth0
FLAT_INTERFACE=eth0
FLOATING_RANGE=

There are a couple of interesting things here. The easy things are HOST_IP, HOST_IP_IFACE, and PUBLIC_INTERFACE (well, actually, I wasn't sure if PUBLIC_INTERFACE should be the same as HOST_IP_IFACE at first, but it turns out to work fine). FLAT_INTERFACE I pretty much just copied out of someone else's example.

FLOATING_RANGE gave me the most pause; this is where I had to relearn about netmasks. My main goal was to allow floating IPs to be allocated on the same network that the OpenStack host was on, so that my physical host machine would be able to access them (and I could easily use my web browser, or whatever, to connect to web apps in the OpenStack guests). Now, I don't really know how libvirt or virt-manager allocates guest IPs on my virbr0 network, but I wanted to make sure that I at least chose a floating IP range that wouldn't stomp on the IPs already in use. After tinkering with an online IP address calculator for a while, I eventually settled on a small subrange of the virbr0 network for FLOATING_RANGE. This is the area I'm least comfortable with, so I'm not sure if it's a good way to set it up in the long run; it works for me for now!

At this point you should be able to just run ./ and everything should work.

Do the crap that you're going to have to do after every run of ./

Add your SSH key to nova (you should already know how to generate or copy your SSH key into place):

guest$ nova keypair-add --pub-key ~/.ssh/ mykey

Allow all TCP traffic into the OpenStack guests:

guest$ nova secgroup-add-rule default tcp 1 65535

Boot an instance!

guest$ nova boot --flavor m1.small --image cirros-0.3.1-x86_64-uec --key-name mykey testinstance

Now you should be able to ssh to your instance (the cirros image's default user is "cirros") and make sure you have Internet access by pinging some outside host:

guest$ ssh cirros@<the fixed IP that "nova list" shows for testinstance>
$ ping <some outside host>

Now let's make sure our floating IPs work. Exit back out to your OpenStack host and run:

guest$ nova floating-ip-create
+-----------------+-------------+----------+--------+
| Ip              | Instance Id | Fixed Ip | Pool   |
+-----------------+-------------+----------+--------+
|                 | None        | None     | public |
+-----------------+-------------+----------+--------+
guest$ nova add-floating-ip testinstance <the new floating IP>
guest$ ssh cirros@<the new floating IP>

If you're in, then everything works! You should also be able to connect to that floating IP address from your physical host, if everything's right.

Extra Credit: setting up Heat

This was super easy, with a bit of copy-n-paste. You'll need to add the heat services to your localrc file:
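For reference, here's a sketch of what those additions look like -- the service names follow devstack's Heat instructions of the era, and the image URL is a placeholder, so treat the details as assumptions rather than gospel:

```shell
# Hypothetical localrc fragment for enabling Heat (service names assumed
# from devstack's Heat docs; substitute the real image URL from those docs).
ENABLED_SERVICES+=,heat,h-api,h-api-cfn,h-api-cw,h-eng
IMAGE_URLS+=",<URL of a cfntools-enabled image such as an F17 qcow2>"
```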


And then run ./ again. Enabling Heat on your devstack will cause it to download some images from the specified URLs and configure a Heat instance to talk to the rest of Devstack (don't worry, the images will only be downloaded once and cached in your files/ directory). It will also disable the cirros images, so they won't be available after your ./ call.

Then create a stack:

guest$ wget <URL of the WordPress_Single_Instance.template file>
guest$ heat stack-create teststack -f WordPress_Single_Instance.template -P "InstanceType=m1.large;DBUsername=wp;DBPassword=verybaddpassword;KeyName=mykey;LinuxDistribution=F17"

Use "heat stack-list" and "nova list" to figure out when your stack and instance are ready. I then sshed into the instance and watched the /var/log/cloud-init-output.log logfile to see how the provisioning was going. When it was all done, I associated a floating IP with the instance (as described above) and was able to hit it with the web browser on my desktop. Yay!

Extra Credit: sharing directories with NFS

Since I wanted to do Heat hacking, I wanted an easy way to share source code between my physical machine and the OpenStack host. At first I tried sshfs, but that's incredibly slow, so I went with NFS. I was scared at first, but it was pretty trivial to set up.

All I had to do on my physical host was install the nfs-kernel-server package and then edit /etc/exports to add the following line:

/home/radix/Projects    *(rw,async,root_squash,no_subtree_check)

Then, in my OpenStack guest, I installed nfs-common and added the following line to /etc/fstab (where "host" stands for the name or address of my physical machine):

host:/home/radix/Projects    /home/radix/Projects    nfs    defaults    0    0

Don't forget to mkdir /home/radix/Projects in the guest. You can then reboot the OpenStack host or just mount /home/radix/Projects.

Whew! Well, that was a lot of typing. I hope I haven't left anything out. If you have any issues, ping me and I'll try to update this guide.