At the time of this writing, most large-scale data center deployments are designed with the principles of cloud computing at the forefront. This is equally true for data centers that are built by providers or by large enterprises. This chapter illustrates the design and technology requirements for building a cloud.
Introduction to Cloud Architectures
The National Institute of Standards and Technology (NIST) defines cloud computing as “a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.” (See http://csrc.nist.gov/groups/SNS/cloud-computing.)
Data center resources, such as individual servers or applications, are offered as elastic services, which means that capacity is added on demand, and when the compute or application is not needed, the resources providing it can be decommissioned. Amazon Web Services (AWS) is often regarded as the pioneer of this concept and many similar services that exist today.
Cloud computing services are often classified along two dimensions:
- Cloud delivery model: Public cloud, private cloud, or hybrid cloud
- Service delivery model: Infrastructure as a Service, Platform as a Service, or Software as a Service
The cloud delivery model indicates where the compute is provisioned. The following terminology is often used:
- Private cloud: A service on the premises of an enterprise. A data center designed as a private cloud offers shared resources to internal users. A private cloud is shared by tenants, where each tenant is, for instance, a business unit.
- Public cloud: A service offered by a service provider or cloud provider such as Amazon, Rackspace, Google, or Microsoft. A public cloud is typically shared by multiple tenants, where each tenant is, for instance, an enterprise.
- Hybrid cloud: Offers some resources for workloads through a private cloud and other resources through a public cloud. The ability to move some compute to the public cloud is sometimes referred to as cloud burst.
The service delivery model indicates what the user employs from the cloud service:
- Infrastructure as a Service (IaaS): A user requests a dedicated machine (typically a virtual machine), along with storage and networking infrastructure, and installs their own applications on it. Examples include Amazon AWS and VMware vCloud Express.
- Platform as a Service (PaaS): A user requests a database, web server environment, and so on. Examples include Google App Engine and Microsoft Azure.
- Software as a Service (SaaS) or Application as a Service (AaaS): A user runs applications such as Microsoft Office, Salesforce, or Cisco WebEx on the cloud instead of on their own premises.
The cloud model of consuming IT services, and IaaS in particular, is based on the concept that the user requests services from a catalog through a self-service portal and that the provisioning workflow is completely automated. This ensures that the user of the service doesn’t need to wait for IT personnel to allocate VLANs, stitch in load balancers or firewalls, and so on. The key benefit is that the fulfillment of the user’s request is quasi-instantaneous.
Until recently, configurations were performed via the CLI on a box-by-box basis. ACI now offers the ability to instantiate “virtual” networks of a very large scale with a very compact description expressed in Extensible Markup Language (XML) or JavaScript Object Notation (JSON).
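To give a sense of how compact such a description can be, the following Python sketch posts a single JSON document describing a tenant, its private network, bridge domain, and an application profile to the APIC REST API. The APIC address, credentials, and object names are hypothetical placeholders; this is a minimal sketch of the idea, not a production script.

```python
# Minimal sketch (APIC address, credentials, and object names are hypothetical
# placeholders) of pushing one compact JSON description to the APIC REST API.
# The classes used (fvTenant, fvCtx, fvBD, fvAp, fvAEPg) belong to the ACI
# object model covered in later chapters.
import requests

APIC = "https://apic.example.com"   # hypothetical APIC address
LOGIN = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}

# One JSON document describing an entire "virtual network": a tenant with a
# private network (VRF), a bridge domain bound to it, and an application
# profile containing a single endpoint group.
TENANT = {
    "fvTenant": {
        "attributes": {"name": "Customer1"},
        "children": [
            {"fvCtx": {"attributes": {"name": "Customer1-VRF"}}},
            {"fvBD": {
                "attributes": {"name": "Customer1-BD"},
                "children": [
                    {"fvRsCtx": {"attributes": {"tnFvCtxName": "Customer1-VRF"}}}
                ]
            }},
            {"fvAp": {
                "attributes": {"name": "WebApp"},
                "children": [
                    {"fvAEPg": {"attributes": {"name": "Web"}}}
                ]
            }}
        ]
    }
}

session = requests.Session()
# Authenticate, then create the whole tenant hierarchy with a single POST.
session.post(f"{APIC}/api/aaaLogin.json", json=LOGIN, verify=False)
response = session.post(f"{APIC}/api/mo/uni.json", json=TENANT, verify=False)
print(response.status_code, response.text)
```

The same description could equally be expressed in XML; in either case, it is typically the orchestration layer described next, rather than the end user, that generates and submits such documents.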
Tools such as Cisco UCS Director (UCSD) and Cisco Intelligent Automation for Cloud (CIAC) orchestrate the ACI services together with compute provisioning (such as via Cisco UCS, VMware vCenter, or OpenStack) to provide a fast provisioning service for the entire infrastructure (which the industry terms a virtual private cloud, a virtual data center, or a container).
The components of the cloud infrastructure are represented at a very high level in Figure 2-1. The user (a) of the cloud service (b) orders a self-contained environment (c), represented by the container with firewalls, load balancers, and virtual machines (VMs). CIAC provides the service catalog function, while UCSD and OpenStack operate as the element managers.
Figure 2-1 Building Blocks of a Cloud Infrastructure
This request is serviced by the service catalog and portal via the orchestration layer (d). The orchestration layer can be composed of several components. Cisco, for instance, offers CIAC, which interacts with various element managers to provision compute, network, and storage resources.
Figure 2-1 also explains where Application Centric Infrastructure (ACI) and, more precisely, the Cisco Application Policy Infrastructure Controller (APIC), fit in the cloud architecture.