January 19, 2015

The Elastic Cloud project - Methodology

This post is the continuation of the post The Elastic Cloud Project - Architecture.
Here I will explain how we worked on the project: the sequence of activities that were required and the basic technologies we adopted.
The concepts are mostly explained with pictures and screenshots, because an image is often worth 1000 words.
If you are interested in more detail, please add a comment or send me a message: I’ll be glad to provide detailed information.

To begin with, we had to:
  • map the data model of the products used to understand what objects should be created, for a Tenant, in all the layers of the architecture
  • create sequence diagrams to make the interaction clear to all the members of the team - and to the customer
  • understand how the APIs exposed by Openstack Neutron and by Cisco APIC work, how they are invoked and what results they produce
  • implement workflows in the CPO orchestrator to call the APIC controller and reuse the existing services in Cisco IAC
  • integrate Hyper-V compute nodes in Openstack Nova
  • create a new service in the Service Catalog to order the deployment of our 3 tiers application

Some details about the activities above:

1 - Map the data model of the products used to understand what objects should be created, for a Tenant, in all the layers of the architecture



I know that some of you still don’t know Cisco ACI… I promise that I will post an “ACI for dummies” soon.   :-)


  
This picture shows how concepts in Openstack Neutron map to concepts in Cisco ACI:
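Since the picture is not reproduced here, a rough sketch of the mapping in Python may help. The pairs below follow the generally documented mapping of the APIC ML2 driver of that era; they are my summary, not taken from the missing image:

```python
# Hedged sketch: how Openstack Neutron concepts map to Cisco ACI concepts,
# per the APIC ML2 driver documentation of the time (assumed, not from the
# original picture).
NEUTRON_TO_ACI = {
    "tenant (project)": "Tenant",
    "network": "EPG (End Point Group) + Bridge Domain",
    "subnet": "Subnet (under the Bridge Domain)",
    "router": "Contract (allowing traffic between EPGs)",
    "external network": "L3Out (external routed network)",
}

for neutron_obj, aci_obj in NEUTRON_TO_ACI.items():
    print(f"{neutron_obj:18} -> {aci_obj}")
```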


2 - Create sequence diagrams to make the interaction clear to all the members of the team


3 - Understand how the APIs exposed by Openstack Neutron and by Cisco APIC work, how they are invoked and what results they produce

This is a call to the Cisco APIC controller, using XML
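The screenshot is missing, so here is a minimal sketch of the same kind of call in Python with the `requests` library. The `aaaLogin.xml` and `mo/uni.xml` endpoints and the `aaaUser`/`fvTenant` payloads come from the public APIC REST API documentation; the controller address and credentials are placeholders:

```python
import requests

APIC = "https://apic.example.com"   # hypothetical controller address


def build_login_xml(user: str, password: str) -> str:
    """Body of the aaaLogin call; APIC answers with a session token cookie."""
    return f"<aaaUser name='{user}' pwd='{password}'/>"


def build_tenant_xml(name: str) -> str:
    """Body that creates a Tenant managed object under the 'uni' root."""
    return f"<fvTenant name='{name}'/>"


def create_tenant(apic: str, user: str, password: str, tenant: str) -> None:
    session = requests.Session()
    # The token comes back as the APIC-cookie; the Session object resends
    # it automatically on all the following calls.
    r = session.post(f"{apic}/api/aaaLogin.xml",
                     data=build_login_xml(user, password), verify=False)
    r.raise_for_status()
    r = session.post(f"{apic}/api/mo/uni.xml",
                     data=build_tenant_xml(tenant), verify=False)
    r.raise_for_status()
```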


This is a call to the Openstack Nova API, using JSON:
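Again the screenshot is missing, so here is a hedged sketch of the JSON flow: first a Keystone v2.0 token request (the identity API in use at the time), then a Nova "boot server" call with the token in the `X-Auth-Token` header. Endpoints and values are placeholders:

```python
import json
import requests

# Hypothetical endpoints of the Openstack controller
KEYSTONE = "http://controller.example.com:5000/v2.0"
NOVA = "http://controller.example.com:8774/v2"


def build_auth_body(tenant: str, user: str, password: str) -> dict:
    """Keystone v2.0 token request body."""
    return {"auth": {"tenantName": tenant,
                     "passwordCredentials": {"username": user,
                                             "password": password}}}


def boot_server(tenant_id: str, token: str,
                name: str, image: str, flavor: str) -> dict:
    """POST /servers boots a VM; the token travels in X-Auth-Token."""
    headers = {"X-Auth-Token": token, "Content-Type": "application/json"}
    body = {"server": {"name": name, "imageRef": image, "flavorRef": flavor}}
    r = requests.post(f"{NOVA}/{tenant_id}/servers",
                      headers=headers, data=json.dumps(body))
    r.raise_for_status()
    return r.json()
```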

To do this, we used a REST client to learn the individual behavior of each API and how the parameters need to be passed.
A REST call is essentially an HTTP call (GET, POST, PUT or DELETE) whose body contains an XML or JSON document.
Some HTTP headers are required to specify the content type and to hold security information (like a token for single sign-on, which is returned by the authorization call and must be resent in all the following calls to be recognized).
So we adopted Postman, a plugin for the Chrome browser (the latest version is also released as a standalone application), to practice with the REST calls. Then, after we learned how to manage them, we just copied the same content (plus the headers) into the “http call” tasks in the CPO workflow editor.



The XML or JSON documents that we passed are essentially static templates with some placeholders for current values: e.g. the Tenant name, the Network name, etc. were filled in according to the user input.
Of course the XML element tags are described in the APIC product documentation, so you don’t have to reverse engineer their meaning   ;-)
Another way to get ready-to-use XML is to export it from the APIC user interface: if you select an object that has already been created (either through the GUI or the API), you can export the corresponding XML definition:



This is how we copied the XML content from the tests made in Postman and replaced some elements with placeholders for current values (which are variables in the workflow designer):
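The placeholder idea can be sketched in a few lines of Python. CPO uses its own variable syntax in the workflow editor; the `${...}` notation below is just the standard library's `string.Template`, and the XML content is a hypothetical example:

```python
from string import Template

# Hypothetical template: XML captured from a working Postman call, with the
# values that depend on the user's order turned into placeholders.
TENANT_TEMPLATE = Template(
    "<fvTenant name='${tenant_name}'>"
    "<fvBD name='${bd_name}'/>"
    "</fvTenant>"
)

# At run time the orchestrator fills the placeholders from the user input.
xml_body = TENANT_TEMPLATE.substitute(tenant_name="Customer1",
                                      bd_name="BD_Customer1")
print(xml_body)
```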

This is how the variables appear in the workflow instance viewer, after the process has been executed because a user ordered the service:


4 - Implement workflows in the CPO orchestrator to call the APIC controller and reuse the existing services in Cisco IAC

This is an example of the services that Cisco IAC provides out of the box.
They are also available through the API exposed by the product, so we created a custom workflow that reused some of these services as building blocks for our use case implementation.
This is the workflow editor, where we created the orchestration flow:



5 - Integrate Hyper-V
At the time of this project, direct support for Microsoft Hyper-V was not available in Openstack Nova.
But a free library was available from Cloudbase, so we decided to install it on our Hyper-V servers, so that the virtual data center (VDC) we had created in Cisco IAC thanks to the integration with Openstack could also use Hyper-V resources to provision VMs.
More detail on the integration can be found here: http://www.cloudbase.it/openstack/
In the current Openstack release (Juno), Hyper-V servers are managed directly.
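For the curious, the Cloudbase installer essentially deploys a nova-compute agent on the Hyper-V host and points it at the Hyper-V driver. The fragment below is only indicative of what the resulting nova.conf looked like in that era (driver path per the Cloudbase documentation of the time; all values are placeholders):

```ini
[DEFAULT]
# Assumed configuration on the Hyper-V host, per Cloudbase docs of the era
compute_driver = nova.virt.hyperv.driver.HyperVDriver
glance_api_servers = controller.example.com:9292
rabbit_host = controller.example.com
```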


6 - Create a new service in the Service Catalog

Conclusion

This project had a complexity that derived from being among the first teams in the world to try the integration of so many disparate technologies: Cisco software products for Service Catalog and Orchestration, three hypervisors (ESXi, Hyper-V and KVM), physical networks (Cisco ACI) and virtual networks in all the hypervisors, and Openstack.
I didn't tell you, but also load balancers and firewalls were integrated.
Maybe I will post some detail about the Layer 4 - Layer 7 service chaining in the next weeks.
We had to learn the concepts before learning the products. Actually the investigation of the APIs and their integration was the easiest part... and was also fun for my distant memories as a programmer   :-)

Now, with the current release of the products involved in this project, everything would be much easier.
Their features are more complete (actually the integration of the Neutron API in the management of Virtual Data Centers in ACI was fed back to our engineering team during this project).
Skills available in the field are deeper and more widespread.

I've already implemented the same use case with alternative architectures twice.
Cisco UCS Director was used once, replacing the IAC orchestration and pre-built services.
And, in another variation, the Openstack APIs were integrated directly instead of reusing the existing services that manage the Openstack VDC in IAC.
Just to have more fun... ;-)
