tjmcs edited this page Jun 12, 2012 · 36 revisions

Project Overview

The Razor Microkernel is a small, in-memory Linux kernel that is used by the Razor Server for dynamic, real-time discovery and inventory of the nodes that the Razor Server is managing. The Razor Server accomplishes these tasks by using the Razor Microkernel as the default boot image for any node in the network for which it can't find a specific policy that maps a model to that node.

There are a number of alternatives available today when it comes to small, in-memory Linux kernels, so several constraints were applied when making the choice of which distribution to use as the baseline for development of the Razor Microkernel:

  • The image (ISO) for the distribution should be smaller than 256MB in size (to speed up delivery of the image to the node)
  • Only distributions that were being actively developed were considered
  • The distribution should be based on a relatively recent Linux kernel (v3.0.0 or later) so that we could be fairly confident that it would support the newer hardware we knew we would find in many modern datacenters
  • Since we knew we would be using Facter as part of the node discovery process, the distribution needed to include a pre-built version of Ruby
  • In order to support the development of custom extensions to our base Microkernel down the line, the distribution needed to provide an easy mechanism for building such extensions
  • In order to support commercial versions of these extensions, the distribution needed to be licensed under a "commercial friendly" open-source license.

Given these constraints, there were a number of lightweight, in-memory kernels out there that seemed to meet our requirements, at least at first glance (Damn Small Linux, SliTaz, Porteus, and Puppy Linux all come to mind). But once we applied all of the constraints we had in mind, one distribution stood out above the others: Tiny Core Linux. Tiny Core Linux (or TCL) easily met all of our constraints (even a few constraints that we hadn't thought of initially):

  1. TCL is very small (the "Core" distribution is an ISO that is only 8MB in size) and is designed to run completely in memory (the default configuration assumes no local storage exists and only takes up about 20MB of system memory when fully booted)
  2. TCL is built using a (very) recent kernel; at the time of this writing, the latest release of TCL (v4.5.4, posted less than a week ago) uses a v3.0.21 Linux kernel, so we knew that it would provide support for most of the hardware that we were likely to see.
  3. TCL can easily be extended (either during the boot process or dynamically, while the kernel is running) by installing TCL Extensions (which we will call TCEs for short). An extensive set of pre-built TCEs are available for download and installation (including Ruby). The complete set of extensions can be found here.
  4. It is relatively simple to build your own TCE mirror, allowing for download and installation of TCEs from a local server (rather than having to pull down the extensions you need across the network).
  5. Tools exist to build your own TCEs if you can't find a pre-built TCE for a package that you might need.
  6. The licensing terms under which TCL is available (GPLv2) are relatively commercial-friendly, allowing for later development of commercial extensions for the Microkernel (as long as those extensions are not bundled directly into the ISO). This would not be the case for a distribution licensed under GPLv3.

With the foundation for our Microkernel chosen, we set out to build the Razor Microkernel itself. Since we knew that the Microkernel would be using Facter for discovery of the capabilities of the nodes that the Microkernel was deployed to, and given the focus on keeping things small and simple in Razor, we decided that we would develop any additional components that we might need (our Razor Microkernel Controller, for example) using Ruby. In the next section, we'll describe these components (and their interaction with the Razor Server) in a bit more detail.

Components that make up the Razor Microkernel

As was mentioned previously, our Razor Microkernel has been built using the "Core" distribution from Tiny Core Linux. We have added a number of standard TCL extensions (and their dependencies) to this "Core" distribution in order to support the node discovery process:

  1. ruby.tcz - an extension that provides everything needed to run Ruby (v1.8.7) within the Microkernel
  2. bash.tcz - an extension containing the "bash" shell; out of the box the TCL distribution only provides users with access to the "ash" shell
  3. dmidecode.tcz - an extension containing the standard dmidecode UNIX command; this command is used by the Microkernel Controller during the discovery process
  4. scsi-3.0.21-tinycore.tcz - an extension that provides the tools, drivers, and kernel modules needed to access SCSI disks; without this extension any SCSI disks attached to the node are not visible to the Microkernel
  5. lshw.tcz - an extension containing the standard lshw UNIX command; this command is used by the Microkernel Controller during the discovery process
  6. firmware-bnx2.tcz - an extension that provides the firmware files necessary to access the network using a Broadcom NetXtreme II networking card during the system boot process; without this extension the network cannot be accessed using this type of NIC (which is fairly common on some newer servers).
  7. openssh.tcz - an extension containing the openssh daemon; this extension is only included in "development" Microkernel images, on a "production" Microkernel image this package is not included (to prevent unauthorized access to the underlying systems via SSH).

These extensions (which we'll refer to as the "built-in extensions") are installed automatically during the boot process, so they are readily available for use in the Microkernel setup and initialization process.
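To make the extension-loading step above concrete, here is a minimal sketch of how dependency-aware installation of TCEs could work. In TCL, an extension "foo.tcz" can ship a "foo.tcz.dep" file listing the extensions it depends on, one per line; the dependency map and extension names below are illustrative assumptions, not the Microkernel's actual boot code.

```ruby
# Illustrative dependency map (standing in for parsed .tcz.dep files);
# the dependency entries below are examples, not the real dependency graph.
DEPS = {
  "ruby.tcz"    => ["openssl-1.0.0.tcz"],
  "openssh.tcz" => ["openssl-1.0.0.tcz"],
  "bash.tcz"    => []
}

# Return extensions in install order: dependencies first, each listed once.
def install_order(ext, deps, ordered = [])
  return ordered if ordered.include?(ext)
  (deps[ext] || []).each { |dep| install_order(dep, deps, ordered) }
  ordered << ext
  ordered
end

install_order("ruby.tcz", DEPS).each do |ext|
  # each extension would then be loaded with something like:
  #   system("tce-load", "-i", ext)
end
```

Installing dependencies first matters because `tce-load` mounts each extension as it is requested; loading an extension before its libraries are present would leave it broken until the next boot.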

In addition to these "built-in extensions", the Razor Microkernel supports the download and installation of additional extensions (and their dependencies) from a local TCE mirror once the kernel has completed its boot process (but prior to starting up the Razor Microkernel Controller and any dependencies). Currently, there is only one of these "additional extensions" that is installed by the Razor Microkernel during the system initialization process (an Open VM Tools extension that we have built ourselves), and that extension is actually downloaded from an internal TCE mirror that is built into the Razor Microkernel itself. This initial configuration can easily be extended to support the inclusion of additional extensions in the Razor Microkernel ISO if necessary, or other "additional extensions" could be obtained from an external TCE mirror somewhere else in the local network (perhaps this external TCE mirror functionality could be supported from within the Razor Server itself). Changing over from using an internal TCE mirror to using an external TCE mirror is as simple as changing a URL in the Razor Server configuration file. Like all such server-side changes to the Microkernel configuration, this change would be pushed out to all Microkernel instances on their next (or even first) checkin.

In addition to the extensions mentioned previously, we have also pre-installed a number of "Ruby Gems" in our Razor Microkernel. Currently, this list of gems includes the following:

  1. daemons - a gem that provides the capability to wrap existing Ruby classes/scripts as daemon processes (that can be started, stopped, restarted, etc.); this gem is used primarily to wrap the Razor Microkernel Controller as a daemon process.
  2. facter - provides us with access to Facter, which is used to discover many "facts" about the systems that the Microkernel is deployed to (other "facts" are discovered using the standard lshw, lscpu, and dmidecode UNIX commands).
  3. json_pure - provides the functionality needed to parse/construct JSON requests, which is critical when interacting with the Razor Server; the json_pure gem is used because it is purely Ruby based, so we don't have to install any additional packages like we would have to do in order to use the more "performant" (but partly C-based) json gem instead.
  4. stomp - used by the MCollective daemon to communicate with its agents in the network via an ActiveMQ message queue

These Ruby Gems are installed dynamically (as part of the system initialization process), and are used within the Razor Microkernel Controller (and the services that it depends on). Currently these gems are installed based on a local list (and a local set of gems) that are "burned into the Microkernel ISO", but it would be a fairly simple matter to pull this list (and the gems themselves) from an external server in the local network instead.

The list of gems actually installed during the boot process is contained within the '/opt/gems/gem.list' file in the Microkernel ISO, but the source of this file is actually the 'opt/gems/gem.list' file from the Razor-Microkernel project. More information on how the ISO file is built from the Razor-Microkernel project is provided on a separate page in this Wiki, and a link to that page is provided (along with links to other pages containing more detailed information about the Microkernel) in the References section at the bottom of this page.
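The boot-time gem installation described above can be sketched as follows. The '/opt/gems/gem.list' path comes from the text, but the parsing rules and install command are assumptions, not the project's actual code.

```ruby
# Hypothetical sketch of reading the gem list at boot; blank lines and
# "#" comment lines are skipped (an assumed convention for this example).
def parse_gem_list(text)
  text.lines.map(&:strip).reject { |line| line.empty? || line.start_with?("#") }
end

# At boot, something along these lines would run, installing each gem
# from the local set of gems bundled into the ISO:
#   parse_gem_list(File.read("/opt/gems/gem.list")).each do |name|
#     system("gem", "install", "--local", "/opt/gems/#{name}.gem")
#   end
```

Using `gem install --local` keeps the whole step offline, which matches the current "burned into the ISO" approach; pointing the same loop at files fetched from a server in the local network would implement the alternative mentioned above.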

Microkernel Initialization

During the Microkernel initialization process, there are several key services that are started. This set of services includes the following:

  1. The Microkernel Controller - a Ruby-based daemon process that interacts with the Razor Server via HTTP
  2. The Microkernel TCE Mirror - a WEBrick instance that provides a completely internal web-server that can be used to obtain TCL extensions that should be installed once the boot process has completed. As was mentioned previously, the only extension that is currently provided by this mirror is the Open VM Tools extension (and its dependencies).
  3. The Microkernel Web Server - a WEBrick instance that can be used to interact with the Microkernel Controller via HTTP; currently this server is only used by the Microkernel Controller itself to save any configuration changes it might receive from the Razor Server (saving a new configuration actually triggers a restart of the Microkernel Controller by this web server instance), but this is the most likely point of interaction between MCollective and the Microkernel Controller in the future.
  4. The MCollective Daemon - as was mentioned previously, this process is not currently used, but it is available for future use
  5. The OpenSSH Daemon - only installed and running if we are in a "development" Microkernel; in a "production" Microkernel this daemon process is not started (in fact, the package containing this daemon process isn't even installed, as was noted above).
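The "save configuration" behavior of the Microkernel Web Server (item 3 above) can be sketched as a small validate-then-write step. The configuration key and file path used here are illustrative assumptions, not the actual implementation.

```ruby
require "json"

# Hypothetical sketch: persist a configuration received over HTTP.
# The path and key names are examples only.
def save_mk_config(json_text, path)
  config = JSON.parse(json_text)                 # reject malformed input early
  File.write(path, JSON.pretty_generate(config))
  # in the real flow, the web server would then restart the Microkernel
  # Controller daemon so that it picks up the new configuration
  config
end
```

Parsing before writing means a malformed POST body raises an error instead of clobbering the last known-good configuration on disk, which is why the restart is only triggered after a successful save.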

When the system is fully initialized, the components that are running (and the connections between them) look something like this:

[Diagram: Microkernel Components (Post-Boot)]

The Razor Microkernel itself includes the four components on the left-hand side of this diagram (the OpenSSH Daemon may also be running but, since it is optional, it is not shown here). There are several interactions that are shown on this diagram that are worth describing in a bit more detail (since those interactions will help new users understand the underlying structure of the Microkernel and how the services that are running in the Microkernel work together to accomplish the task at hand):

  1. The Microkernel Controller (a Ruby-daemon process) interacts with the Razor Server and Microkernel Web Server via HTTP/HTTPS (the former via the Microkernel checkin requests, the latter via POSTs that are made to the Microkernel Web Server instance whenever a new configuration is received by the Microkernel Controller from the Razor Server).
  2. The Microkernel Web Server may restart the Microkernel Controller instance occasionally (typically this is done in order to force the Microkernel Controller to pick up and use a new configuration that it received from the Razor Server).
  3. The Razor Server can communicate with the Microkernel Controller (indirectly) via the MCollective daemon and the HTTP requests that MCollective agents running on the Microkernel submit to the Microkernel Web Server (for example, to push out a new configuration to a set of controllers or to force a set of controllers to restart). We have set up the framework for this communications channel and have tested it in the past by changing the Microkernel configuration through it, but the channel is not actively used in the current release of the Razor Microkernel; it has been maintained for possible use in future releases.
  4. The Microkernel Controller can download and install TCL Extensions at any time (either from the Local TCE Mirror that is running within the Microkernel itself or from a Remote TCE Mirror). Currently only the local TCE mirror is used (and that mirror is only used to install a few extensions that are needed when the Microkernel first boots or reboots). However, using a remote mirror after the Microkernel Controller has been started is as simple as making a change in the Microkernel Controller's configuration, a change that can be made on the Razor Server itself (and that would be picked up by the Microkernel Controller on its next checkin).
    • It should be noted here that while we are showing the Remote TCE Mirror as being a part of the Razor Server in this diagram, this functionality could actually be provided by any web server in the local network; the Razor Server is just one possible source for this functionality.
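The checkin interaction in item 1 above amounts to the Controller periodically issuing an HTTP request to the Razor Server. A minimal sketch of building such a request follows; the URL path, port, and parameter names are illustrative assumptions, not the Razor Server's actual API.

```ruby
require "uri"

# Hypothetical sketch of constructing a Microkernel checkin URI; consult
# the Razor Server documentation for the real endpoint and parameters.
def checkin_uri(base_url, node_uuid, last_state)
  uri = URI(base_url)
  uri.path = "/razor/api/node/checkin"
  uri.query = URI.encode_www_form(uuid: node_uuid, last_state: last_state)
  uri
end

# The Microkernel Controller would then issue something like:
#   require "net/http"
#   response = Net::HTTP.get_response(checkin_uri(server_url, uuid, "idle"))
#   JSON.parse(response.body)   # the reply may carry commands or a new config
```

Because the server's reply can include a new configuration, this single request/response cycle is enough to propagate the server-side configuration changes described earlier to every Microkernel instance on its next checkin.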

References

For more detailed information about the Razor Microkernel (what it is, how it works, and how to build your own Microkernel ISOs), users should check the pages on this project Wiki that discuss these topics in more detail:

  • An-Overview-of-the-Razor-Microkernel - Provides users with an overview of the Razor Microkernel, the Razor Microkernel boot process, and how the Razor Server uses the Razor Microkernel to perform dynamic discovery and registration of nodes in the network.
  • Building-a-new-Razor-Microkernel-ISO - Describes the process of building a new Microkernel ISO using the tools provided by the Razor-Microkernel project.

For more detailed information about Tiny Core Linux (the basis for our Razor Microkernel), users are referred to the following pages:

  • The Tiny Core Linux Wiki - This Wiki contains a lot of detailed information about Tiny Core Linux, including how to install and use TCL, how to remaster TCL, and details about TCL's internals (including the boot process)
  • The main Tiny Core Linux Project Page - This page contains links to TCL News, FAQs, Downloads, and Forums
  • The Tiny Core Base "Final Releases" Page - This page in the Tiny Core Linux Forum contains news about the current list of "final releases" (including links to Changelog-style information and discussions about each release)