Virtualized Openstack single node installation with Fuel on Ubuntu KVM

This document contains instructions for installing Openstack entirely on one host. It is a virtualized install of Openstack with Fuel on KVM; the end result is a single KVM host running Fuel, a Controller, and a Compute node. The host configuration allows Fuel to run with all defaults, which should make the install easier for first-time installers. This procedure was done on an Ubuntu 16.04 Desktop host and will work with all versions of Fuel from Fuel 6 through Fuel 10.
Starting with Fuel 9, an install will run with one compute. Updates to the various Fuel releases are highly likely to change the required number of controller and compute servers. Your mileage may vary.
Motivations

The three big benefits of doing a KVM all-in-one Openstack deployment are increased virtualization performance, reduced hardware requirements, and a minimized difference between development and production.
KVM is an efficient use of resources. Some vendors suggest using VirtualBox for a virtualized install, but VirtualBox runs a nested VM with software emulation: when the compute server starts an instance, it runs in software that emulates a VM on the compute node. Modern processors have instructions to support running a VM that starts a VM in hardware, and this feature makes it viable to do useful work with a virtualized Openstack install. Based on a VirtualBox feature request that has been open for many years, it's safe to conclude that waiting for VirtualBox to get nested virtualization is not an option. It's my opinion that a VirtualBox install of Openstack stops being useful the moment the install is complete.
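Before committing to the install, it's worth confirming that the host CPU actually exposes the hardware virtualization extensions and that the KVM module has nesting enabled. A minimal check, assuming standard Linux /proc and /sys paths (the kvm_intel path applies to Intel hosts; AMD hosts use kvm_amd instead):

```shell
# Count logical CPUs advertising hardware virtualization (vmx=Intel, svm=AMD).
vcpus=$(grep -c -E 'vmx|svm' /proc/cpuinfo 2>/dev/null)
vcpus=${vcpus:-0}
echo "virtualization-capable logical CPUs: $vcpus"

# Nested virtualization flag, if the KVM module is loaded (Intel path shown).
nested_file=/sys/module/kvm_intel/parameters/nested
if [ -r "$nested_file" ]; then
    echo "nested virtualization: $(cat "$nested_file")"
else
    echo "kvm_intel not loaded; on AMD hosts check /sys/module/kvm_amd/parameters/nested"
fi
```

If the CPU count comes back 0, virtualization is either unsupported or disabled in the BIOS, and the rest of this procedure will not work.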
Developing applications on Openstack without the need for a stack of computers is more efficient. Running a minimal stack of four computers (Fuel, controller, two computes), a switch, and some type of NAT device(s) all on physical hardware is expensive to purchase, takes up space, consumes electricity, requires cooling, and is time-consuming to install and operate. That is a lot of overhead for a proof of concept or a one-person development environment.
Minimizing the difference between development and deployment reduces errors. "It worked in devstack, but not in production" is sometimes a problem.
Configuration needed for default Fuel deployment

The default Fuel network is the network segment that contains the 10.20.0.x IP addresses on VLAN 1. The Fuel documentation refers to this network as the 'Admin' network; I guess its full name would be the Fuel Administration Network. Fuel also assumes the Fuel server is at 10.20.0.2 and that the default gateway is 10.20.0.1. The Fuel server provides the PXE boot service. In addition, the default Fuel install assumes VLANs 100 and 101 are used for the Storage and Management networks respectively, on the same network segment. Regardless of how Fuel is used to do an install, the Fuel defaults assume the network switch, network bridge, or emulated network segment supports VLAN-tagged and untagged packets on the same segment. As a refresher, Ethernet packets without a VLAN tag default to VLAN 1. Hence, when a node boots up with 'boot from LAN' selected, it will DHCP boot; these DHCP packets are untagged and therefore default to VLAN 1. Some real-world hardware supports setting DHCP boot to happen on a VLAN, and Fuel supports putting the Fuel network on a VLAN as well. This feature makes Fuel much more straightforward to install into an existing network. Conversely, it's more work and a bit steeper learning curve to set up Fuel in a lab environment.
The default install of Fuel needs Network Address Translation (NAT) of the 10.20.0.x and 172.16.0.x networks to the public Internet. This is the default behavior for virsh and virt-manager with the 192.168.122.x network, which virsh appropriately names 'default'. We will leverage virsh and virt-manager to do the NAT and Virtual Machine (VM) management of our nodes.
Secret Sauce

After much digging around the net, it became apparent that KVM, Linux bridge, virsh, and virtio all work together with VLAN-tagged frames. The answer turned out to be really easy: the vconfig command is used to tell a bridge what VLAN traffic is allowed on it. The default behavior of a Linux bridge is to deny all VLANs, so we need to explicitly tell it which VLANs are allowed. This is done with the vconfig command, run on the host:

# vconfig add br1 100
# vconfig add br1 101

To test this, run the following inside a KVM instance attached to the bridge:

# vconfig add eth1 100
# ifconfig eth1.100 172.16.0.128 netmask 255.255.255.0
br1 will now be able to carry both untagged and tagged traffic. This enables the Fuel server to use untagged traffic for PXE and tagged traffic for the Openstack Management, Openstack Storage, and Neutron-controlled tenant networks.
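As a side note, vconfig comes from the deprecated vlan package; on newer hosts the same job is done with iproute2. The helper below is a sketch that only prints the equivalent `ip link` commands so they can be reviewed before being run as root (the interface name and VLAN IDs are whatever you pass in):

```shell
# Print (don't run) the iproute2 equivalents of `vconfig add <dev> <vid>`.
gen_vlan_cmds() {
    dev=$1
    shift
    for vid in "$@"; do
        echo "ip link add link $dev name $dev.$vid type vlan id $vid"
        echo "ip link set $dev.$vid up"
    done
}

# Review the output, then pipe it to a root shell when it looks right:
#   gen_vlan_cmds br1 100 101 | sudo sh
gen_vlan_cmds br1 100 101
```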
The how-to, for installing Fuel in KVM via virt-manager

This is the sequence of steps to get it installed and working. Steps 3, 4, 5, 6, and 7 are covered in more detail in later sections.
- Install a host computer with Ubuntu 16.04
- a user named stack exists on the Ubuntu host
- the host processor supports virtualization and it is enabled in the BIOS
- virtualization packages are installed: sudo apt-get install qemu-kvm libvirt-bin ubuntu-vm-builder bridge-utils virt-manager vlan
- Define two bridges in virsh, br1 and br2
- Add needed vlans to br1
- Create soft allocated files for vm images
- Install Fuel in first vm
- Create nodes; each node needs:
- 2 network adaptors set to br1 and br2
- network adaptors use virtio device model
- boot options have the NIC for br1 selected; this makes the node PXE boot from br1
- select a soft allocated file for the VM's VirtIO disk: node-1, node-2, etc.
- After Fuel is installed, boot the nodes; each node will PXE boot from the Fuel server
- On the host, open a web browser to 10.20.0.2. For more details on how to configure Fuel, see the Ghetto Stack blog post.
Define Network Bridges in virsh

Create the following two files, br1.xml and br2.xml:
<network>
  <name>br1</name>
  <forward mode='nat'/>
  <bridge name='br1' stp='on' delay='0'/>
  <ip address='10.20.0.1' netmask='255.255.255.0'>
  </ip>
</network>
<network>
  <name>br2</name>
  <forward mode='nat'/>
  <bridge name='br2' stp='on' delay='0'/>
  <ip address='172.16.0.1' netmask='255.255.255.0'>
  </ip>
</network>
virsh net-define br1.xml
virsh net-define br2.xml
virsh net-start br1
virsh net-start br2
virsh net-autostart br1
virsh net-autostart br2
Add VLAN to br1
for i in 1 100 101 102 103 $(seq 1000 1029)
do
  echo $i
  vconfig add br1 $i
done
Create soft allocated files for vm images
mkdir -p $HOME/virt_image
for i in 0 1 2 3
do
  truncate -s 512G $HOME/virt_image/node-$i.img
done

To check the files' actual disk usage, run:

du -h $HOME/virt_image/node-*
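The truncate trick works because the files are sparse: 512G is only the apparent size, and disk blocks are allocated as the guests actually write. A quick demonstration of the difference (the scratch path is arbitrary):

```shell
# Create a 1 GiB sparse file and compare its apparent size to the
# blocks actually allocated on disk.
demo=/tmp/sparse_demo.img
truncate -s 1G "$demo"
apparent=$(stat -c %s "$demo")    # apparent size in bytes
blocks=$(stat -c %b "$demo")      # 512-byte blocks actually allocated
echo "apparent: $apparent bytes, allocated: $((blocks * 512)) bytes"
rm -f "$demo"
```

du reports the allocated figure, which is why it stays small until the VMs start filling their disks.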
Use virt-manager to create VMs

virt-manager is a UI application, which means it needs to be run from the X-Window interface. This blog post used the default Desktop Ubuntu 16.04 install. I'm a big fan of the command line interface, but creating a small number of VMs, adjusting their configuration, and restarting them are just a lot easier with a UI, and virt-manager is a really good tool to fill this need. Since we're in path-of-least-resistance mode, this installation used Ubuntu's default Unity shell; I just expect the default shell to be less buggy. Run virt-manager from the command line. There is no need to run virt-manager as root.
do the following in virt-manager
- Configure the first node for the Fuel server
- Only one NIC is needed
- set network interface to use br1
- use a soft allocated file node-0.img for VirtIO Disk
- download Fuel ISO from here or here or here
- Under "IDE CDROM 1", connect the device to the file downloaded above, typically /home/stack/Downloads/*.iso
- Under Boot Options, select IDE CDROM 1 as the Boot device
- start the VM
- install Fuel with all defaults.
- de-select IDE CDROM 1 as the boot device, select only VirtIO Disk 1
- reboot VM
- Configure nodes 1 through N (at least 3)
- each node needs two NICs: the first NIC is set to br1 (with NAT), the second to br2 (with NAT)
- for the VirtIO disk, select one of the soft allocated files created above: $HOME/virt_image/node-?.img
- Under Boot Options, select the NIC that is on br1; this enables the VM to PXE boot from the Fuel server
- start node
- repeat for the desired number of nodes
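If you'd rather script node creation than click through virt-manager, the same VMs can be defined with virt-install (from the virtinst package). This sketch only prints the commands so they can be reviewed before being run as root; the RAM, vCPU, and disk values are my assumptions, not Fuel requirements:

```shell
# Print one virt-install command per node for review; adjust sizes to taste.
gen_node_cmd() {
    n=$1
    echo "virt-install --name node-$n" \
         "--ram 8192 --vcpus 2" \
         "--disk path=$HOME/virt_image/node-$n.img,bus=virtio" \
         "--network network=br1,model=virtio" \
         "--network network=br2,model=virtio" \
         "--pxe --boot network,hd --noautoconsole"
}

for n in 1 2 3; do
    gen_node_cmd "$n"
done
```

The --pxe and --boot network,hd options mirror the virt-manager setting of booting from the br1 NIC first.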
Using Fuel

Open a web browser to 10.20.0.2. The Fuel web page should appear. Log in as admin with password admin.
If you're in need of a click-by-click set of instructions for installing Openstack with Fuel, I've done that in a previous blog post, Ghetto Stack.
Using the installed Openstack from another host

It's not always convenient to get on the host. Port forwarding can be used to gain access to the Fuel server VM. Run the following commands on the Ubuntu host to set up port forwarding to the Fuel dashboard (http).
In your web browser, enter the IP address or hostname of your host with port 8443, e.g. 192.168.0.100:8443. For Keystone:

# ssh -L 8443:10.20.0.2:8443 email@example.com

For the Fuel API:

# ssh -L 8000:10.20.0.2:8000 firstname.lastname@example.org
# ssh -L 8773:10.20.0.2:8773 email@example.com
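The three forwards can be bundled into a single ~/.ssh/config entry so that one `ssh fuel-host` sets them all up. The hostname and username below are placeholders for your own:

```
Host fuel-host
    HostName 192.168.0.100
    User stack
    LocalForward 8443 10.20.0.2:8443
    LocalForward 8000 10.20.0.2:8000
    LocalForward 8773 10.20.0.2:8773
```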
Remote usage tip

x2go works for remote access to the host. It allows use of virt-manager without being on the console. In addition, install the MATE and XFCE bindings.
Recommended br0 addition tip

Most setups will have network access on the host through a port labeled eth0. If we create a Linux bridge called br0 and attach it to eth0, the host will behave just as it always has. Now, in virt-manager, we can add another Ethernet port to any of our VMs and specify br0. After IP addresses are correctly configured, you can access the VMs from your local network without port forwarding. If a VM has a Fuel-default public IP address, you're still going to need port forwarding. The follow-on is that the public IP range in the Fuel environment setup can be changed from the Fuel default of 172.16.0.x to a range in your eth0 network; choose br0 as the segment or interface when configuring a Fuel environment. Then, when you allocate a public IP address in Openstack, it will get an IP address on br0 that is accessible from outside your all-in-one host.
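On Ubuntu 16.04 (which uses ifupdown rather than netplan), br0 can be made persistent in /etc/network/interfaces. This fragment assumes the physical NIC is eth0 and that your LAN hands out addresses via DHCP:

```
auto br0
iface br0 inet dhcp
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
```

After an `ifdown eth0 && ifup br0` (or a reboot), the host's address moves to br0 and eth0 becomes a bridge member.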
Memory sizing tip
Nodes consume about 10 GB of RAM just to make it through the install. That means a 32 GB system will be swapping or thrashing when you spin up a VM on a three-node system. A three-node system is a minimal Fuel, one controller, one compute. A 96 GB RAM system with an SSD is a good system; Ubuntu will use the extra memory for buffering and caching.
Notes on Failed Deployment
Fuel is built on Puppet and an orchestration controller. Puppet is built on the concept of idempotent allocation of work to the host; there is not a lot of programmatic feedback to catch and deal with errors or exceptions. After an arbitrary timeout period and an arbitrary number of retries, Puppet will give up, hence the deployment will fail. If you're running on a resource-starved machine, Linux will deal with it and just take longer with paging, swapping, etc. Successful deployment is highly dependent on the version of Fuel you choose and on your configuration. Fuel 6 is the least resource-hungry version of Fuel I've used. In the event of a failed install, I suggest upping the amount of RAM and virtual CPUs, using SSDs, etc. It will eventually work. Some data points: on a Dell 720 with 192 GB of RAM running Fuel 9 and two 64 GB nodes (one controller and one compute), the host was consuming all remaining RAM, about 40-50 GB, for buff/cache. I've had successful installs of Fuel 9 on a Dell 710 with 96 GB of RAM. Fuel 6 will install on a Mac with 16 GB of RAM and VirtualBox. Each version of Fuel tracks a version of Openstack, and as Openstack grows, the installer takes more resources to run.
Fuel 5 is Openstack Icehouse
Fuel 6 is Openstack Juno
Fuel 7 is Openstack Kilo
Fuel 8 is Openstack Liberty
Fuel 9 is Openstack Mitaka
Fuel 10 is Openstack Newton
Fuel 11 is Openstack Ocata
Fuel 12 is Openstack Pike - work in progress