Building Blocks for Data Center Cloud Architectures

Chapter Description

This chapter from The Policy Driven Data Center with ACI: Architecture, Concepts, and Methodology describes the components of a cloud infrastructure and how ACI provides network automation for the cloud. It also explains the Amazon Web Services approach; covers the role of the various orchestration tools; introduces some key concepts on how to automate the provisioning of servers and how to get started with OpenStack; explains the OpenStack modeling of the cloud infrastructure; and discusses the administrator’s task of mapping the requirements of IaaS services onto the models of these technologies.

Automating Server Provisioning

In large-scale cloud deployments with thousands of physical and virtual servers, administrators must be able to provision servers in a consistent and timely manner.

This section is of interest to the network administrator for several reasons:

  • Some of these technologies can also be used to maintain network equipment configurations.
  • Cisco ACI reuses some of the concepts from these technologies that have proven effective for maintaining network configurations.
  • A complete design of ACI must include support for these technologies because the compute attached to ACI will use them.

The high-level approach to automating server provisioning consists of performing the following:

  • PXE booting a server (physical or virtual)
  • Deploying a stock or customized OS on the server, together with Puppet/Chef/CFEngine agents

For these reasons, a typical cloud deployment requires the following components:

  • A DHCP server
  • A TFTP server
  • An NFS/HTTP or FTP server to deliver the kickstart files
  • A master for Puppet or Chef or similar tools
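As an illustration, the DHCP and TFTP roles above can be served by a single dnsmasq instance; the addresses and paths below are assumptions, not a prescribed configuration:

```
# /etc/dnsmasq.conf -- illustrative values; adjust to your network
dhcp-range=192.168.10.100,192.168.10.200,12h   # address pool for PXE clients
dhcp-boot=pxelinux.0                           # boot filename handed out via DHCP
enable-tftp                                    # serve boot files over TFTP
tftp-root=/var/lib/tftpboot                    # location of pxelinux.0 and friends
```

The kickstart and package repositories would still be served by a separate HTTP/NFS/FTP server.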

PXE Booting

In modern data centers, administrators rarely install new software via removable media such as DVDs. Instead, administrators rely on PXE (Preboot eXecution Environment) booting to image servers.

The booting process occurs in the following sequence:

  1. The host boots up and sends a DHCP request.
  2. The DHCP server provides the IP address and the location of the PXE/TFTP server.
  3. The host sends a TFTP request for pxelinux.0 to the TFTP server.
  4. The TFTP server provides pxelinux.0.
  5. The host runs the PXE code and requests the kernel (vmlinuz).
  6. The TFTP server provides the vmlinuz kernel and the location of the kickstart configuration files (NFS/HTTP/FTP, and so on).
  7. The host requests the kickstart configuration from the server.
  8. The HTTP/NFS/FTP server provides the kickstart configuration.
  9. The host requests the installation packages (for example, RPMs).
  10. The HTTP/NFS/FTP server provides the RPMs.
  11. The host runs Anaconda, the installer, which requests the post-installation scripts.
  12. The HTTP/NFS/FTP server provides the scripts and the Puppet/Chef installation information.
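The TFTP stage of the sequence above (steps 3 through 8) is typically driven by a PXELINUX configuration file; the following is a minimal sketch, where the kernel names and the kickstart URL are illustrative assumptions:

```
# /var/lib/tftpboot/pxelinux.cfg/default -- illustrative
DEFAULT install
LABEL install
  KERNEL vmlinuz
  APPEND initrd=initrd.img ks=http://192.168.10.1/ks/server.cfg
```

The APPEND line is how the host learns where to fetch the kickstart configuration (step 6).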

Deploying the OS with Chef, Puppet, CFEngine, or Similar Tools

One of the important tasks that administrators have to deal with in large-scale data centers is maintaining up-to-date compute nodes with the necessary level of patches, the latest packages, and with the intended services enabled.

You can maintain configurations by creating VM templates or a golden image and instantiating many copies, but this process produces a monolithic image, and repeating it every time a change is required is a lengthy task. It is also difficult, if not impossible, to propagate updates to the configuration or libraries to all the servers generated from the template. A better approach is to use a tool such as Chef, Puppet, or CFEngine: you create a bare-bones golden image or VM template, and then push day-2 configuration changes to the servers with the tool.

These tools offer the capability to define the node end state with a language that is abstracted from the underlying OS. For instance, you don’t need to know whether to install a package with “yum” or “apt”; simply define that a given package is needed. You don’t have to use different commands on different machines to set up users, packages, services, and so on.

If you need to create a web server configuration, define it with a high-level language. Then, the tool creates the necessary directories, installs the required packages, and starts the processes listening on the ports specified by the end user.

These tools are built on two key principles: a “declarative” model (you define the desired end state, not the steps to reach it) and idempotent configurations (you can rerun the same configuration multiple times and it always yields the same result). The ACI policy model relies on the same declarative approach. (You can find more details about the declarative model in Chapter 3, “The Policy Data Center.”)

With these automation tools, you can also simulate the result of a given operation before it is actually executed, implement the change, and prevent configuration drift.
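The declarative, idempotent behavior can be illustrated with a toy Ruby sketch (not actual Chef or Puppet code): a converge step computes only the differences between the current and desired state, so running it a second time changes nothing.

```ruby
# Toy illustration of declarative, idempotent convergence.
# Returns the new state plus the list of keys that actually changed.
def converge(current, desired)
  changes = desired.reject { |k, v| current[k] == v }  # only what differs
  [current.merge(desired), changes.keys]
end

state   = { "httpd" => "absent",  "ntp" => "running" }
desired = { "httpd" => "running", "ntp" => "running" }

state, changed  = converge(state, desired)  # first run: httpd is brought to "running"
state, changed2 = converge(state, desired)  # second run: nothing left to change
```

Real tools apply the same idea to packages, services, and files instead of hash entries.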

Chef

The following list provides a reference for some key terminology used by Chef:

  • Node: The server (but could be a network device).
  • Attributes: The configuration of a node.
  • Resources: Packages, services, files, users, software, networks, and routes.
  • Recipe: The intended end state of a collection of resources. It is defined in Ruby.
  • Cookbook: The collection of recipes, files, and so on for a particular configuration need. A cookbook is based on a particular application deployment and defines all the components necessary for that application deployment.
  • Templates: Configuration files or fragments with embedded Ruby code (.erb) that is resolved at run time.
  • Run list: The list of recipes that a particular node should run.
  • Knife: The command line for Chef.
  • Chef client: The agent that runs on a node.

Normally the administrator performs configurations with Knife from a Chef workstation, which holds a local repository of the configurations. The cookbooks are saved on the Chef server, which pushes them to the nodes, as shown in Figure 2-2.

Figure 2-2 Chef Process and Interactions

The recipe that is relevant to the action to be performed on the device is configured on the Chef workstation and uploaded to the Chef server.
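To tie the terminology together, the following is a minimal sketch of a Chef recipe; the cookbook layout, file paths, and package/service names are assumptions. It declares package, template, and service resources, with the template notifying the service to restart when its rendered content changes.

```ruby
# ntp/recipes/default.rb -- hypothetical cookbook layout
package 'ntp'                        # installed via yum or apt, as appropriate

template '/etc/ntp.conf' do          # rendered from an embedded-Ruby (.erb) template
  source 'ntp.conf.erb'
  notifies :restart, 'service[ntp]'  # restart only if the file actually changed
end

service 'ntp' do
  action [:enable, :start]
end
```

Adding this recipe to a node's run list makes the Chef client converge the node to this end state on every run.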

Puppet

Figure 2-3 illustrates how Puppet operates. With the Puppet language, you define the desired state of resources (users, packages, services, and so on), simulate the deployment of the desired end state as defined in the manifest file, and then apply the manifest file to the infrastructure. Finally, you can track the deployed components, track the changes, and keep configurations from drifting from the intended state.

Figure 2-3 Puppet

The following is a list of some key terminology used in Puppet:

  • Node: A server or network device
  • Resource: The object of configuration: packages, files, users, groups, services, and custom server configuration
  • Manifest: A source file written in the Puppet language (.pp)
  • Class: A named block of Puppet code
  • Module: A collection of classes, resource types, files, and templates, organized around a particular purpose
  • Catalog: The compiled collection of all resources to be applied to a specific node, including the relationships between those resources
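As a sketch of the Puppet language, the following manifest declares the desired state of a small web server; the package, service, and path names are RHEL-style assumptions:

```puppet
# site.pp -- illustrative manifest
package { 'httpd':
  ensure => installed,
}

file { '/var/www/html/index.html':
  ensure  => file,
  content => "hello\n",
  require => Package['httpd'],
}

service { 'httpd':
  ensure  => running,
  enable  => true,
  require => Package['httpd'],
}
```

Puppet compiles such manifests into a catalog for each node and converges the node to that state, regardless of the underlying OS commands involved.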