January 15, 2015

The Elastic Cloud Project - Architecture

This post is the continuation of The Elastic Cloud Project post.

There is a team at Cisco, called the System Development Unit, that creates reference architectures and CVDs (Cisco Validated Designs).
They work with the product Business Units to define the best way to approach common use cases with the best technology.
But at the time of this project, they hadn’t completed their job yet (some of the products were not even released).
So we had to invent the solution based on our understanding of the end-to-end architecture and integrate the technologies in the field.

As I explained before, the most important components were:
- servers - Cisco UCS blades and rack mount servers
- network - Cisco ACI fabric, including the APIC software controller
- virtualization - ESXi, Hyper-V, KVM
- cloud and orchestration software - Cisco PSC and CPO, Openstack (PSC and CPO, plus pre-built services, make up Cisco Intelligent Automation for Cloud - IAC)
IAC can integrate different “element managers” in the datacenter, so that their resources are used to deliver the cloud services (e.g. a single VM or a Virtual Data Center - VDC).
Element managers include vmware vCenter and Openstack, so an end user can get a VDC based on one of these platforms.
There is an automated process in IAC, called CloudSync, that discovers all the resources available in the element managers and allows the admin to select the ones to use to provision services (resource management and lifecycle management are amongst the features of the product).

The ACI architecture
I will cover it in detail in one of the next posts, but essentially ACI (http://www.cisco.com/c/en/us/solutions/data-center-virtualization/application-centric-infrastructure/index.html), which stands for Application Centric Infrastructure, is a holistic architecture with centralized automation and policy-driven application profiles. ACI delivers software flexibility with the scalability of hardware performance.
Cisco ACI consists of the APIC software controller and a fabric of Nexus 9000 switches in a spine-leaf topology, complemented by integrations with the main hypervisors and with L4-L7 devices.

The policies that you create in the software controller (APIC) are enforced by the fabric, including physical and virtual networks.
You describe the behavior your application needs from the network, not the configuration you want to apply.
This makes the collaboration between the application designer and the network managers easier, because the required behavior can be described graphically by an Application Network Profile.
A profile contains End Point Groups (EPGs, representing the deployment units of the application: both physical and virtual servers) and Contracts (which define the way EPGs can communicate).
A profile can be saved as an XML or JSON document, stored in a repository, included in the Devops lifecycle, used to clone an environment, and managed by any orchestrator.
ACI is integrated with the main virtualization platforms (ESXi, Hyper-V, KVM).
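
To give a concrete idea, here is a minimal sketch of how such a profile could look when serialized as JSON, built in Python for readability. The profile, EPG and contract names are invented for illustration; the class names (fvAp, fvAEPg, fvRsCons, fvRsProv) come from the APIC object model, but the exact attributes depend on the ACI release.

import json

# hypothetical 3-tier profile: the "app" EPG provides a contract
# that the "web" EPG consumes (i.e. the web tier is allowed to talk to the app tier)
profile = {
    "fvAp": {
        "attributes": {"name": "3TierApp"},
        "children": [
            {"fvAEPg": {"attributes": {"name": "web"},
                        "children": [{"fvRsCons": {"attributes": {"tnVzBrCPName": "web-to-app"}}}]}},
            {"fvAEPg": {"attributes": {"name": "app"},
                        "children": [{"fvRsProv": {"attributes": {"tnVzBrCPName": "web-to-app"}}}]}},
        ],
    }
}

# the resulting document can be stored in a repository, versioned and re-applied by any orchestrator
print(json.dumps(profile, indent=2))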

To deploy our 3-tier application on 3 different hypervisors, we had to manage vmware and Openstack separately - but within a single process, because everything had to be provisioned with a single click.
Initially we based our custom implementation of the new service on the standard IAC services, using them as building blocks.
So we did not have to implement from scratch the code to create a network, create a VDC, create a virtual server, trigger CloudSync, or integrate the virtual network with the hardware fabric.
This sequence of operations was common to the Openstack environment and the vmware environment.
The main workflow was built with two parallel branches: the Openstack branch (creating 2 web servers on one network and 1 application server on another network) and the vmware branch (doing the same for the database tier).



The problem is that the integration of IAC with Openstack, in the 4.0 release that we used at that time, only dealt with Nova - which, in turn, manages both KVM and Hyper-V servers.
No Neutron integration was available out of the box, hence no virtual networks for the Openstack based VDC.
So we built the Neutron integration from scratch (implementing direct REST calls to the Neutron API) to create the networks.
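
To give a rough idea of what those direct calls looked like, here is a minimal Python sketch that creates a tenant network and its subnet through the Neutron v2.0 API. Endpoints, credentials and names are placeholders, and in the real solution the calls were issued from the orchestration workflow rather than from a standalone script.

import requests

KEYSTONE = "http://controller:5000/v2.0"   # Keystone v2, current at the time
NEUTRON = "http://controller:9696/v2.0"

# get a token for the target Openstack tenant
auth = {"auth": {"tenantName": "elastic-demo",
                 "passwordCredentials": {"username": "demo", "password": "secret"}}}
token = requests.post(KEYSTONE + "/tokens", json=auth).json()["access"]["token"]["id"]
headers = {"X-Auth-Token": token}

# create the virtual network for the web tier
net = {"network": {"name": "web-net", "admin_state_up": True}}
net_id = requests.post(NEUTRON + "/networks", json=net, headers=headers).json()["network"]["id"]

# add a subnet so that the instances attached to the network get an IP address
subnet = {"subnet": {"network_id": net_id, "ip_version": 4, "cidr": "10.10.1.0/24"}}
requests.post(NEUTRON + "/subnets", json=subnet, headers=headers)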

The ACI plugin for Neutron does the rest: it talks to the APIC controller to create the corresponding EPG (End Point Group).
This implementation has been fed back to IAC 4.1 by Cisco engineering, so in the current release it is available out of the box.

Solution for Openstack
The Cisco ACI plugin was installed in Openstack Neutron, to allow it to integrate with the APIC controller.
This is transparent to the Openstack user, who keeps working in the usual way: create networks, create routers, create VM instances.
Instructions are sent by Openstack to APIC, so that the corresponding constructs are deployed in the APIC data model (Application Profiles, End Point Groups, Contracts).
The orchestrator can then use these objects to create a specific application logic, spanning the heterogeneous server farms and allowing networks in KVM to connect to networks in ESXi and Hyper-V.
So the workflow that we built only needed to work with the native API in Openstack.

Logical flow:
— web tier —
create a virtual network for the web tier via Neutron API
     the Neutron plugin for ACI calls - implicitly - the APIC controller and creates a corresponding EPG.
     the Neutron plugin for OVS creates a virtual network in the hypervisor's virtual switch
trigger the CloudSync process, so that the new network is discovered and attached to the VDC
create a VM for the web server and attach it to the network created for the web tier
     this was initially done by reusing the existing IAC service “Provision a new VM”
— application server tier — 
create a virtual network for the app tier via Neutron API
     the Neutron plugin for ACI calls - implicitly - the APIC controller and creates a corresponding EPG.
     the Neutron plugin for OVS creates a virtual network in the hypervisor's virtual switch
trigger the CloudSync process, so that the new network is discovered and attached to the VDC
create a VM for the application server and attach it to the network created for the app tier
     this was initially done by reusing the existing IAC service “Provision a new VM”
— connect the tiers via the controller —  
connect the two EPGs with a Contract, which specifies the business rules of the application to be deployed
     this is done via the APIC Controller’s API, creating the Application Profile for the new application in the right Tenant (a sketch of this call follows the list)
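
A rough Python sketch of that call is shown below. The APIC address, credentials, tenant, profile, contract and filter names are placeholders; the class names (vzBrCP, vzSubj, fvRsProv, fvRsCons) come from the APIC object model, but the exact payload may vary with the ACI release. Posting the subtree to /api/mo/uni.json merges it into the existing Tenant, so that - with the right names - the EPGs already created through Neutron are simply bound to the new Contract.

import requests

APIC = "https://apic.example.com"
session = requests.Session()
session.verify = False  # lab setup with a self-signed certificate

# authenticate; APIC returns a session cookie that requests keeps for us
login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "secret"}}}
session.post(APIC + "/api/aaaLogin.json", json=login)

# create the Contract (one subject referencing a pre-existing filter)
# and bind it to the two EPGs: the app EPG provides it, the web EPG consumes it
payload = {"fvTenant": {"attributes": {"name": "ElasticCloud"}, "children": [
    {"vzBrCP": {"attributes": {"name": "web-to-app"}, "children": [
        {"vzSubj": {"attributes": {"name": "subj1"}, "children": [
            {"vzRsSubjFiltAtt": {"attributes": {"tnVzFilterName": "allow-app-port"}}},
        ]}},
    ]}},
    {"fvAp": {"attributes": {"name": "3TierApp"}, "children": [
        {"fvAEPg": {"attributes": {"name": "app"},
                    "children": [{"fvRsProv": {"attributes": {"tnVzBrCPName": "web-to-app"}}}]}},
        {"fvAEPg": {"attributes": {"name": "web"},
                    "children": [{"fvRsCons": {"attributes": {"tnVzBrCPName": "web-to-app"}}}]}},
    ]}},
]}}
session.post(APIC + "/api/mo/uni.json", json=payload)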


Solution for vmware
The APIC controller has a direct integration with vmware vCenter, so the integration is slightly different from the Openstack case:
The operations are performed directly against the APIC API and, when you create an EPG there, APIC uses the vCenter integration to create a corresponding virtual network (a Port Group) in the Distributed Virtual Switch.
So we added a branch to the main process to operate on APIC and vCenter, to complete the deployment of the 3-tier application with the database tier.

Logical flow:
— database server tier — 
call the APIC REST interface, implementing the right sequence (authentication, create Tenant, Bridge Domain, End Point Group, Application Network Profile).
     specifically, an EPG for the database tier is created in the APIC data model, and this triggers the creation of a port group in vCenter (a sketch of this sequence follows the list).
trigger the CloudSync process, so that the new network is discovered and attached to the VDC
create a VM for the database server and attach it to the network created for the database tier
     this was initially done by reusing the existing IAC service “Provision a new VM”
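
The sequence described in the first step could look roughly like the following Python sketch. The APIC address, credentials, tenant, Bridge Domain, profile and VMM domain names are placeholders; associating the EPG to the VMware VMM domain (fvRsDomAtt) is what makes APIC push the corresponding port group to the Distributed Virtual Switch, assuming the VMM domain has already been configured.

import requests

APIC = "https://apic.example.com"
session = requests.Session()
session.verify = False  # lab setup with a self-signed certificate

# step 1 - authenticate; APIC returns a session cookie that requests keeps for us
login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "secret"}}}
session.post(APIC + "/api/aaaLogin.json", json=login)

# step 2 - create (or merge into) the Tenant with a Bridge Domain, the Application
# Network Profile and the database EPG in a single POST
tenant = {"fvTenant": {"attributes": {"name": "ElasticCloud"}, "children": [
    {"fvBD": {"attributes": {"name": "bd-db"},
              "children": [{"fvSubnet": {"attributes": {"ip": "10.10.3.1/24"}}}]}},
    {"fvAp": {"attributes": {"name": "3TierApp"}, "children": [
        {"fvAEPg": {"attributes": {"name": "db"}, "children": [
            {"fvRsBd": {"attributes": {"tnFvBDName": "bd-db"}}},
            {"fvRsDomAtt": {"attributes": {"tDn": "uni/vmmp-VMware/dom-DVS-Elastic"}}},
        ]}},
    ]}},
]}}
session.post(APIC + "/api/mo/uni.json", json=tenant)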


Service Chaining
The communication between End Point Groups can be enriched by adding network services: load balancing, firewalling, etc.
L4-L7 services are managed by APIC by calling external devices, which can be either physical or virtual.
This automation is based on the availability of device packages (sets of scripts for the target device), and a protocol (OpFlex) has been defined to allow the declarative model supported by ACI to be adopted by 3rd party L4-L7 devices.
Cisco and its partners are working through the IETF and open source community to standardize OpFlex and provide a reference implementation.


In the next post, I will describe the methodology we used to integrate the individual pieces of the architecture, how we learned to use the APIs exposed by the target systems (APIC and Openstack), and how we inserted these calls into the orchestration flow.

Link to next post: The Elastic Cloud Project - Methodology

