Building Blocks for Data Center Cloud Architectures

Chapter Description

This chapter from The Policy Driven Data Center with ACI: Architecture, Concepts, and Methodology describes the components of a cloud infrastructure and how ACI provides network automation for the cloud. It also explains the Amazon Web Services approach, covers the role of the various orchestration tools, introduces key concepts for automating the provisioning of servers and for getting started with OpenStack, explains how OpenStack models the cloud infrastructure, and discusses the administrator’s task of mapping the requirements of IaaS services onto the models of these technologies.

Orchestrators for Infrastructure as a Service

Amazon EC2, VMware vCloud Director, OpenStack, and Cisco UCS Director are IaaS orchestrators that unify the provisioning of virtual machines, physical machines, storage, and networking and can power up the entire infrastructure for a given user environment (called a container, virtual data center, or tenant).

These tools all enable the following common operations (a scripted example follows the list):

  • Creating a VM
  • Powering up a VM
  • Powering down a VM
  • Power cycling a VM
  • Changing ownership of a server
  • Taking a snapshot of an image
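
In Amazon EC2, for example, these lifecycle operations map directly onto API calls. The following is a minimal sketch using the boto3 AWS SDK for Python; the region, instance ID, and image name are placeholders:

    import boto3

    # Connect to the EC2 API in a given region.
    ec2 = boto3.client("ec2", region_name="us-east-1")
    instance_id = "i-0123456789abcdef0"  # placeholder instance

    # Power up, power down, and power cycle a VM.
    ec2.start_instances(InstanceIds=[instance_id])
    ec2.stop_instances(InstanceIds=[instance_id])
    ec2.reboot_instances(InstanceIds=[instance_id])

    # Take a snapshot of the instance as a new image (AMI).
    response = ec2.create_image(InstanceId=instance_id, Name="my-vm-snapshot")
    print("New AMI:", response["ImageId"])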

vCloud Director

VMware supports the implementation of clouds with the use of vCloud Director. vCloud Director builds on top of vCenter, which in turn coordinates VMs across a number of hosts that are running vSphere. Figure 2-4 illustrates the features of vCloud Director, which provides tenant abstraction and resource abstraction and a vApp Catalog for users of the cloud computing service.

Figure 2-4 vCloud Director Components

Figure 2-5 shows how vCloud Director organizes resources differently from vCenter: it provides them as part of a hierarchy with the Organization at the top. Inside the Organization there are multiple virtual data centers (vDCs).

Figure 2-5 vCloud Director Organization of Resources

OpenStack

Chapter 6, “OpenStack,” covers the details of OpenStack as it relates to ACI. The purpose of this section is to explain how OpenStack fits in cloud architectures.

Projects and Releases

Each functional area of OpenStack is a separate project. For the purpose of cloud deployments, you don’t have to use the entire OpenStack set of capabilities; you can, for instance, leverage just the APIs of a particular project (an example follows the project list below).

The list of projects is as follows:

  • Nova for compute
  • Glance, Swift, and Cinder for image management, object storage, and block storage, respectively
  • Horizon for the dashboard, self-service portal, and GUI
  • Neutron for networking and IP address management
  • Ceilometer (the Telemetry project) for metering
  • Heat for orchestration
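
As an example of leveraging a single project’s API, the following minimal sketch authenticates against Keystone (using the v2.0 identity API of that era) and then calls Nova’s REST API to list the servers in a project. The endpoint URLs, tenant ID, and credentials are placeholders:

    import json
    import requests

    KEYSTONE = "http://controller:5000/v2.0"    # placeholder endpoint
    NOVA = "http://controller:8774/v2/<tenant_id>"  # placeholder endpoint

    # Obtain a token from Keystone (Identity).
    auth = {"auth": {"tenantName": "demo",
                     "passwordCredentials": {"username": "admin",
                                             "password": "secret"}}}
    resp = requests.post(KEYSTONE + "/tokens", data=json.dumps(auth),
                         headers={"Content-Type": "application/json"})
    token = resp.json()["access"]["token"]["id"]

    # Use the token to list servers through Nova (Compute).
    servers = requests.get(NOVA + "/servers",
                           headers={"X-Auth-Token": token})
    for server in servers.json()["servers"]:
        print(server["id"], server["name"])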

Release names are important because capabilities can change significantly from one release to the next. At the time of this writing, you may encounter the following releases:

  • Folsom (September 27, 2012)
  • Grizzly (April 4, 2013)
  • Havana (October 17, 2013)
  • Icehouse (April 17, 2014)
  • Juno (October 2014)
  • Kilo (April 2015)

The releases of particular interest currently to the network administrator are Folsom, because it introduced the Quantum component to manage networking, and Havana, which renamed Quantum to Neutron. Neutron gives more flexibility to manage multiple network components simultaneously, especially with the ML2 architecture, and is explained in detail in Chapter 6.

The concept of the plug-in is significant for Neutron: it is how networking vendors tie into the OpenStack architecture. A vendor supplies a Neutron plug-in that OpenStack can use to configure that vendor’s specific networking devices through a common API.
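
To give an idea of what this looks like, the following is a minimal sketch of an ML2 mechanism driver, assuming the Havana-era neutron.plugins.ml2.driver_api module; the class and the device-side behavior are hypothetical:

    from neutron.plugins.ml2 import driver_api as api

    class ExampleMechanismDriver(api.MechanismDriver):
        """Hypothetical driver that mirrors Neutron network events
        to a vendor's device controller."""

        def initialize(self):
            # Called once when Neutron starts; a real driver would open
            # a session to the device controller here.
            self.device = None

        def create_network_postcommit(self, context):
            # Called after the network is committed to the Neutron
            # database; context.current holds the network attributes.
            network = context.current
            print("would program the device for network", network["id"])

        def delete_network_postcommit(self, context):
            print("would remove network", context.current["id"])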

Multi-Hypervisor Support

OpenStack manages compute via the Nova component, which can control a variety of hypervisors and compute back ends, such as the following (see the sketch after this list):

  • Kernel-based Virtual Machine (KVM)
  • Linux Containers (LXC), through libvirt
  • Quick EMUlator (QEMU)
  • User Mode Linux (UML)
  • VMware vSphere 4.1 update 1 and newer
  • Xen, Citrix XenServer, and Xen Cloud Platform (XCP)
  • Hyper-V
  • Baremetal, which provisions physical hardware via pluggable subdrivers
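
Several of these back ends (KVM, QEMU, LXC, UML) are driven through libvirt, so the hypervisor choice largely comes down to which libvirt URI Nova connects to. A minimal sketch using the libvirt Python binding, where the URI is the only hypervisor-specific part:

    import libvirt

    # qemu:///system covers both KVM and plain QEMU guests;
    # lxc:/// covers Linux Containers; uml:///system covers User Mode Linux.
    conn = libvirt.open("qemu:///system")

    # List the domains (instances) known to this hypervisor.
    for domain in conn.listAllDomains():
        print(domain.name(), domain.isActive())
    conn.close()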

Installers

The installation of OpenStack is a big topic because it has historically been complicated. In fact, Cisco took the initiative to provide a rapid scripted OpenStack installation to facilitate the adoption of OpenStack. Many other installers now exist.

When installing OpenStack for proof-of-concept purposes, you often hear the following terminology:

  • All-in-one installation: Places the OpenStack controller and the node components all on the same machine
  • Two-roles installation: Places the OpenStack controller on one machine and a compute node on another machine

To get started with OpenStack, you typically download DevStack, which provides an all-in-one installation of the latest-and-greatest version. DevStack is a means for developers to quickly “stack” and “unstack” a full OpenStack environment, which allows them to develop and test their code. The scale of DevStack is naturally limited.

If you want to perform an all-in-one installation of a particular release, you can use the Cisco installer for Havana by following the instructions at http://docwiki.cisco.com/wiki/OpenStack:Havana:All-in-One, which uses the Git repo at https://github.com/CiscoSystems/puppet_openstack_builder. Chapter 6 provides additional information about the install process.

There are several rapid installers currently available, such as these:

  • Red Hat OpenStack provides Packstack and Foreman
  • Canonical/Ubuntu provides Metal as a Service (MaaS) and Juju
  • SUSE provides SUSE Cloud
  • Mirantis provides Fuel
  • Piston Cloud provides its own installer

Architecture Models

When deploying OpenStack in a data center, you need to consider the following components:

  • A PXE server/Cobbler server (Quoting from Fedora: “Cobbler is a Linux installation server that allows for rapid setup of network installation environments. It glues together and automates many associated Linux tasks so you do not have to hop between lots of various commands and applications when rolling out new systems, and, in some cases, changing existing ones.”). A scripted Cobbler example follows this list.
  • A Puppet server to provide image management for the compute nodes and potentially to image the OpenStack controller node itself
  • One or more nodes for the OpenStack controller, running Keystone, Nova (api, cert, common, conductor, scheduler, and console), Glance, Cinder, the Dashboard (Horizon), and Quantum with Open vSwitch
  • The nodes running the virtual machines with Nova (common and compute) and Quantum with Open vSwitch
  • The nodes providing the proxy to the storage infrastructure
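
As referenced in the first bullet, Cobbler itself is scriptable. The following is a minimal sketch using Cobbler’s XML-RPC API to register a new system for PXE installation; the hostname, profile name, and credentials are placeholders:

    import xmlrpc.client

    # Cobbler exposes an XML-RPC endpoint at /cobbler_api.
    server = xmlrpc.client.ServerProxy("http://cobbler.example.com/cobbler_api")
    token = server.login("cobbler", "secret")  # placeholder credentials

    # Create a new system record bound to an existing install profile.
    system_id = server.new_system(token)
    server.modify_system(system_id, "name", "compute-node-01", token)
    server.modify_system(system_id, "profile", "ubuntu-precise-x86_64", token)
    server.save_system(system_id, token)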

Networking Considerations

Cisco products provide plug-ins that allow the provisioning of network functionality to be part of the OpenStack orchestration. Figure 2-6 illustrates the architecture of the networking infrastructure in OpenStack.

Figure 2-6 OpenStack Networking Plug-ins

Networks in OpenStack represent an isolated Layer 2 segment, analogous to a VLAN in the physical networking world. They can be mapped to VLANs or VXLANs and become part of the ACI End Point Groups (EPGs) and Application Network Policies (ANPs). As Figure 2-6 illustrates, the core plug-in infrastructure offers the option to have vendor plug-ins. This topic is described in Chapter 6.
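
For example, a cloud administrator can create a network explicitly mapped to a VLAN by using Neutron’s provider extension (this requires admin credentials). A minimal sketch against the Neutron REST API; the endpoint, token, and segmentation details are placeholders:

    import json
    import requests

    NEUTRON = "http://controller:9696/v2.0"    # placeholder endpoint
    TOKEN = "<token obtained from Keystone>"   # see the earlier Keystone example

    # Create a network pinned to VLAN 100 on the physical network "physnet1".
    body = {"network": {"name": "web-tier",
                        "provider:network_type": "vlan",
                        "provider:physical_network": "physnet1",
                        "provider:segmentation_id": 100}}
    resp = requests.post(NEUTRON + "/networks", data=json.dumps(body),
                         headers={"Content-Type": "application/json",
                                  "X-Auth-Token": TOKEN})
    print(resp.json()["network"]["id"])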

UCS Director

UCS Director is an automation tool that abstracts provisioning away from the individual element managers and configures compute, storage, and ACI networking as part of an automated workflow to provision applications. With UCS Director (UCSD), the administrator defines server policies, application network policies, storage policies, and virtualization policies, and UCSD applies these policies across the data center, as shown in Figure 2-7.

Figure 2-7 UCS Director

The workflow can be defined in a very intuitive way via the graphical workflow designer.

UCSD has both a northbound API and a southbound API. The southbound API allows UCSD to be an extensible platform.
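
The northbound API is what orchestrators such as CIAC call. The following sketch is illustrative only: the REST conventions shown (the X-Cloupia-Request-Key header and the opName query parameter) follow UCS Director’s REST API, but the specific operation name is an assumption and varies by release:

    import requests

    UCSD = "https://ucsd.example.com"       # placeholder address
    ACCESS_KEY = "<user REST access key>"   # generated in the UCSD GUI

    # UCS Director REST calls are expressed as operation names passed as
    # query parameters; the access key travels in the request header.
    resp = requests.get(UCSD + "/app/api/rest",
                        params={"formatType": "json",
                                "opName": "userAPIGetAllCatalogs",  # assumed opName
                                "opData": "{}"},
                        headers={"X-Cloupia-Request-Key": ACCESS_KEY},
                        verify=False)  # lab setup with a self-signed certificate
    print(resp.json())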

Cisco Intelligent Automation for Cloud

Cisco Intelligent Automation for Cloud (CIAC) is a tool that offers a self-service portal powered by an orchestration engine to automate the provisioning of virtual and physical servers. Although the lines between UCSD and CIAC are somewhat blurred, CIAC uses the UCSD northbound interface and complements the orchestration with the ability to standardize operations such as offering a self-service portal, opening tickets, performing chargeback, and so on. CIAC orchestrates across UCSD, OpenStack, and Amazon EC2, and integrates with Puppet/Chef. It also measures the utilization of resources (vNICs, hard drive usage, and so on) for pricing purposes.

Figure 2-8 illustrates the operations performed by CIAC for PaaS via the use of Puppet.

Figure 2-8 CIAC Operations

Figure 2-9 illustrates more details of the provisioning part of the process.

Figure 2-9 CIAC Workflow

CIAC organizes the data center resources with the following hierarchy:

  • Tenants
  • Organizations within tenants
  • Virtual data centers
  • Resources

Figure 2-10 illustrates the hierarchy used by CIAC.

Figure 2-10 Hierarchy in CIAC

The user is offered a complete self-service catalog that includes different options with the classic Bronze, Silver, and Gold “containers” or data centers to choose from, as illustrated in Figure 2-11.

Figure 2-11 Containers

Conciliating Different Abstraction Models

One of the tasks of an administrator is to create a cloud infrastructure that maps the abstraction model of the service being offered onto the abstractions of the components that make up the cloud.

A typical offering may consist of a mix of VMware-based workloads, OpenStack/KVM-based workloads with an ACI network, and UCSD/CIAC orchestration. Each technology has its own way of creating hierarchies and virtualizing compute and network resources.

Table 2-1 provides a comparison between the different environments.

Table 2-1 Differences Among VMware vCenter Server, VMware vCloud Director, OpenStack, Amazon EC2, UCS Director, CIAC, and ACI

Platform Type/Property | VMware vCenter Server | VMware vCloud Director   | OpenStack (Essex) | Amazon AWS (EC2) | UCS Director   | CIAC            | ACI
-----------------------|-----------------------|--------------------------|-------------------|------------------|----------------|-----------------|----------------
Compute POD            | Data center           | Organization             | OpenStack PE ID   | Account          | Account        | Server          | N/A
Tenant                 | Folder                | Organization             | N/A               | Account          | N/A            | Tenant          | Security domain
Organization           | Folder                | N/A                      | N/A               | N/A              | Group          | Organization    | Tenant
VDC                    | Resource pool         | Organization VDC         | Project           | Account          | VDC            | VDC             | Tenant
VLAN Instance          | vCenter network       | Org network/network pool | Network ID        | Network ID       | Network policy | Network         | Subnet
VM Template            | Full path             | VM template HREF         | Image ID          | AMI ID           | Catalog        | Server template | N/A

In ACI the network is divided into tenants, and the administration of tenants is organized with the concept of a security domain. Different administrators are associated with one or more security domains and, similarly, each tenant network can be associated with one or more security domains. The result is a many-to-many mapping, which allows the creation of sophisticated hierarchies. Furthermore, if two ACI tenant networks represent two different organizations within the same CIAC tenant, it is possible to share resources and enable communication between them.

In CIAC, a tenant can contain different organizations (e.g., departments), and each organization can own one or more virtual data centers (aggregates of physical and virtual resources). Network and other resources can be either shared or segregated, and the API exposed by the ACI controller (APIC) to the orchestrator makes this straightforward to automate, as the sketch below illustrates.
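
As an illustration of that API, the following is a minimal sketch that logs in to the APIC and creates a tenant through its REST interface; the controller address and credentials are placeholders:

    import json
    import requests

    APIC = "https://apic.example.com"  # placeholder address

    session = requests.Session()

    # Authenticate; the APIC returns a session cookie that requests.Session
    # automatically reuses on subsequent calls. verify=False is only for
    # lab setups with self-signed certificates.
    login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "secret"}}}
    session.post(APIC + "/api/aaaLogin.json", data=json.dumps(login),
                 verify=False)

    # Create a tenant object (fvTenant) under the policy universe (uni).
    tenant = {"fvTenant": {"attributes": {"name": "Organization-A"}}}
    resp = session.post(APIC + "/api/mo/uni.json", data=json.dumps(tenant),
                        verify=False)
    print(resp.status_code)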