Sunday, October 26, 2014

Mirantis Fuel 5.1


I have been using Mirantis Fuel (https://github.com/stackforge/fuel-library/) for about a year now.  I started with version 4.x, and am now using version 5.1.  It is a great open source project!

Fuel is used to set up and manage clusters of machines; it automates the deployment and installation of OpenStack via a nice and intuitive web UI. Note that Fuel also offers a CLI, even if I do not use it as much as the UI.

In our lab, Fuel is currently used to manage two different clusters. The first cluster is composed of 4 compute nodes and 1 controller node and is managed by Fuel 4.1. It runs OpenStack Havana on Ubuntu Precise and has now been in use for more than 7 months. The second cluster is composed of 15 compute nodes and 1 controller node. It is managed by Fuel 5.1 and runs Icehouse on CentOS; it is not yet used in production. Both clusters are set up with Neutron and VLAN tagging. As a comparison, we also have a manually installed OpenStack Havana on Ubuntu Precise, also with Neutron but using GRE tunnels for network virtualization. All in all, we have had to deal with tweaking and setting up OpenStack quite a lot (nova, glance, cinder, neutron, etc.).

Starting with the good things

Fuel has a sound architecture with different layers that cover the different areas of provisioning: finding the right image, identifying the different machines to manage, and automating the deployment and configuration of OpenStack. Each layer is made of open source parts like cobbler, puppet, etc., and all of Fuel is available in Git. So it is a fully open source solution, which makes it easy to read the source code and to contribute back.

The most important node in a Fuel deployment is the Master node, aka the node where Fuel is running. This node acts as the central coordinator. The Master runs a PXE server (cobbler http://www.cobblerd.org/ at the moment) so that when a new machine is connected to the network, it can discover via DHCP what to boot. Since at this point Fuel does not know the node, a default bootstrap image is assigned to the newly discovered node. This bootstrap image runs a script called the Nailgun agent. The agent collects the server's hardware information and submits it back to the Nailgun service on the Master. This allows for rapid inspection and smart information collection about all the physical machines making up the cluster. It also exposes each machine's interfaces and disks visually in the Fuel UI. This makes Nailgun a critical service: any command sent by the user, via UI or CLI, is received and executed by Nailgun. Nailgun stores the state of the system, ongoing deployments, roles and all discovered nodes in a PostgreSQL database. This is a critical part of the system for recovering in case of failures, and it makes it relatively safe to wipe out an environment or node and re-create it from scratch.
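For example, once nodes have booted the bootstrap image they show up in Nailgun and can be listed from the Master, either with the fuel CLI or directly against the Nailgun REST API. A minimal sketch (the 10.20.0.2 address is the usual default Fuel admin IP, an assumption here; adjust to your setup):

 fuel node list
 # or query the Nailgun REST API directly
 curl http://10.20.0.2:8000/api/nodes | python -m json.tool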

Once data has been collected from all the nodes, it is trivial in the Fuel UI to create an environment by assigning roles to nodes. The UI is flexible and allows for setting various networking options for the different communications between OpenStack, nodes, virtual machines, storage, and management. From there it is simple to click on a button and watch the cluster being set up. Internally, the Nailgun service generates a template of the configuration and submits it as a task via RabbitMQ to the Astute service. According to the Fuel documentation, the Astute service is in fact a set of processes that call a defined list of orchestration actions. For example, one of the actions is to tell cobbler how to generate a different provisioning profile for each node, based on the environment settings and the options set by the user. As a result, each node, identified by its MAC address, can get its own settings, storage options, etc. This is initially tricky to understand, and can sometimes lead to problems, especially when a node's system record is not removed from cobbler.
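A quick way to see (and clean up) what cobbler believes about the nodes is to inspect its system records; in 5.1 cobbler runs in its own container. A sketch, with a hypothetical node name:

 dockerctl shell cobbler
 cobbler system list                    # one record per provisioned node
 cobbler system remove --name=node-32   # drop a stale record, if any
 exit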

As part of the deployment, Fuel installs an "mcollective" agent on each node. I am not 100% sure what those do, but according to the documentation, these agents become responsible for listening on RabbitMQ for further commands from Nailgun. The final step is to use puppet to provision each node according to the specified recipe and user settings.
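A simple way to check that the agents are alive is mcollective's ping. I believe the mco client is available in the astute container in 5.x, though that location is an assumption on my part:

 dockerctl shell astute
 mco ping    # every deployed node's mcollective agent should answer
 exit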

See the Fuel developer documentation (http://docs.mirantis.com/fuel-dev/develop/architecture.html) for more info, including sequence diagrams.

When we started with Fuel 4.x, we were amazed at how easy the provisioning was. We were however using old hardware (HP G4s and G5s), and that created some issues due to the P400 controller that most of the machines used. Hats off to the Mirantis people on the #fuel IRC channel as they are really friendly and helpful. Thanks to kalya, evg and MiroslavAnashkin, we were eventually able to fix some issues and contribute back some code to Fuel.

In short, it is a very smart, asynchronous and layered approach to provisioning a complex environment.


The not so good things


Version 5.1 further layered the components by introducing each as a separate Docker (https://www.docker.com/) container. However, in this case, maybe Mirantis jumped too far too fast. Version 5 also contains way more bugs than 4.x, and while some of the bugs are quite basic, some are quite a pain.

One of the first bugs is that an environment with more than 10 nodes simply fails to deploy. This was fixed in 5.0.1, then came back in 5.1. Then log rotation is not working, and since the master node collects all the logs from each of the remotely managed hosts, the disk fills up. I did not notice this until too late, and even if the bug can easily be worked around manually, the machine was already registering an inode failure when it was found. More on this later.
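The workaround itself is simple enough: watch the disk and prune old remote logs by hand. The log path below is where my 5.1 Master keeps them; treat it as an assumption and adjust for your setup:

 df -h /                                # check free space on the Master
 du -sh /var/log/docker-logs/remote/*   # disk usage per managed node
 # prune remote logs older than two weeks, then restart the syslog container
 find /var/log/docker-logs/remote -type f -mtime +14 -delete
 dockerctl restart rsyslog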

There are also some merges that were not done in the Mirantis OpenStack packaging. As a result, the nova CLI is missing the server-group capabilities. This is similar to the following problem with RDO (nova-server-group-apis-missing-in-rdo-icehouse-installation). Not a big problem, except I wanted to use that. Of course, it is possible to download the git code for nova, rebuild it locally and apply the package, but Mirantis relabels the packages so it is a bit difficult to track. See https://answers.launchpad.net/fuel/+question/255847.

Docker is great and I really like it. Containers are much more sensible than hypervisors in many situations. Putting each Fuel component into its own container makes a lot of sense, and since the containers communicate via RabbitMQ it is very logical. But coming from previous Fuel versions, at first I tried to re-do some of the tweaks that were done in 4.x and got confused. For example, setting the dnsmasq options on the host had no effect. As it turns out, the cobbler container hosts dnsmasq, so the options have to be set within that container and not in the files on the server hosting the containers. This is a bit confusing, as it is hard to guess which container does what at times. Docker is also still young and unstable. When the disk filled up due to the missing log rotation, the container running the PostgreSQL database (fuel/postgres_5.1:latest) started to flag an inode corruption.
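Concretely, the dnsmasq change has to be made inside the cobbler container. Assuming the stock cobbler layout, where the dnsmasq options live in cobbler's template, something like this should apply it:

 dockerctl shell cobbler
 vi /etc/cobbler/dnsmasq.template   # dnsmasq options live here, not on the host
 cobbler sync                       # regenerate the dnsmasq config from the template
 exit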

EXT4-fs error (device dm-5): __ext4_ext_check_block: bad header/extent in inode #150545: invalid magic - magic 0, entries 0, max 0(0), depth 0(0)

And Docker has this issue of file system corruption when using devicemapper: https://github.com/docker/docker/issues/7229.
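To check whether a given Master is exposed to that issue, docker itself reports which storage driver it uses:

 docker info | grep -i 'storage driver'   # devicemapper is the one affected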

Not sure what caused what, but the result is that the container with all the data of my running OpenStack environment is now reporting an inode error, and my disk is full.

Mirantis introduced a new feature to backup/restore the whole Master. This is great. So, first let's delete some logs, restart rsyslog (dockerctl restart rsyslog), and launch that. Bad surprises. First the backup tars all containers, then it tries to compress them into an lrz archive. These two operations require twice the amount of disk space of the final compressed tarball. The result is that it takes at least 25GB of disk space to make a valid backup, and the compression phase is extremely long (count about 1-2h for a backup). Personally I don't understand why compression is used; a simple tarball would have been sufficient. Worse, if something fails, the compression occurs anyway and then all files except the compressed one are deleted. Finally, when doing the restore, decompression alone takes about 40 minutes. And I got an error during the compression:

 backup failed: tar: Removing leading `/' from member names
 Compressing archives...
 tar: /var/backup/fuel/backup_2014-10-17_1025/fuel_backup_2014-10-17_1025.tar: Wrote only 6144 of 10240 bytes
 tar: Error is not recoverable: exiting now

And no files except a corrupt archive: at the end of the "backup" operation, all files except the compressed binary are erased. Anyhow, after adding an NFS mount with lots of space, I managed to finalize the backup. It was then possible to launch the restore on another machine with the same Fuel 5.1 release pre-installed.
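For reference, mounting the NFS export over the default backup location was enough to work around the space problem; a sketch, with a hypothetical NFS export:

 mkdir -p /var/backup/fuel
 mount -t nfs nfs-server:/export/fuel-backups /var/backup/fuel   # hypothetical export
 dockerctl backup   # writes under /var/backup/fuel by default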

Many posts point to the Docker networking parts still being a work in progress. It seems that is what bit me during the restore. Once it finished (count about 2 hours), nothing was working. All containers reported as active and running, while in fact they all had network problems. I could not reach the web UI, fuel CLI commands returned either 500 or "bad gateway" errors, etc. A Linux "route" command indicated that the network setup on the node was broken. I fixed that, but none of the containers recovered. So, no go. Not sure that this feature is actually working... In any case it is neither resilient nor fast.

By the time I had finished the above, the Master (the original failing node) had reached a critical stage. The UI was non-responsive and could not be relaunched, trying to relaunch the nginx container was generating a missing container exception, and the inode errors were getting more frequent. At that point, I tried to make a backup of the PostgreSQL database, and found out that this was not documented anywhere. It was possible to reconstruct the procedure from reading the source code though, and Miroslav on #fuel gave me instructions. But it was Friday, and I was tired, so I went home. When I came back on Monday, the Master was not responding to the PostgreSQL backup commands. Fortunately, the Master runs on an ESX machine and I had a snapshot, so I used that to restore the Master. There I had made a mistake: the snapshot preceded my current environment, and I ended up with a Fuel Master managing a set of nodes with different identifiers than the ones expected. Cobbler generates a unique id for each node on each deployment, and this id is incremented by one for each node per deployment. "node32" in the Fuel database was now "node48"... Sigh... Funny enough, this generated errors in the network checks and prevented resetting the environment. I had to re-create the whole thing from scratch.

Long story short: Docker is great but when it fails it hurts, and restore/backup in Fuel is not very resilient.

Since it is difficult to find, here is the procedure for backing up Nailgun PostgreSQL data:

 dockerctl shell postgres   # enter the postgres container
 sudo -u postgres pg_dump nailgun > /var/www/nailgun/dump_file.sql
 exit

This path /var/www/nailgun/ is cross-mounted between all the containers, so the dump appears in the root filesystem at the same path.

To restore the PostgreSQL data from a dump, place the dump file in /var/www/nailgun/ and then:

 dockerctl shell postgres
 # wipe the current schema, then load the dump
 sudo -u postgres psql nailgun -S -c 'drop schema public cascade; create schema public;'
 sudo -u postgres psql nailgun < /var/www/nailgun/dump_file.sql
 exit
 # restart the services so they pick up the restored state
 dockerctl restart nailgun
 dockerctl restart nginx
 # re-sync cobbler with the restored node records
 dockerctl shell cobbler
 cobbler sync
 exit

In conclusion, Fuel is great and Mirantis provides great free support via IRC. But it seems to me that versions 5.x are not yet ready for production. Upgrades are also still an issue: Mirantis relabels each OpenStack package, making them hard to track, and Fuel keeps tight control over which packages are available by owning the repositories. As a result, I wonder if a simpler setup would not allow for more rapid upgrades (maybe using a simpler puppet setup); OpenStack Juno just came out and fixes thousands of Icehouse bugs... All in all, I am grateful for having my environment managed by Fuel, yet sometimes I have doubts.

Thanks again to kalya, evg and MiroslavAnashkin for their patience and help.
