
April 21, 2015

ACI for (Smarter) Simple Minds


In a previous post I tried to describe the new Cisco ACI architecture in simple terms, from a software designer's standpoint.
My knowledge of networking is limited compared to my colleagues at Cisco who hold CCIE certifications… I am a software guy who just understands the API   ;-)
Still, now I would like to share some more technical information in the same “not for specialists” language.
You can still go to the official documentation for the details, or look at one of the brilliant demos recorded on YouTube.

These are the main points that I want to describe:
- You don’t program the single switches, but the entire fabric (via the software controller)
- The fabric has all active links (no spanning tree)
- Policies and performance benefit from an ASIC design that perfectly fits the SDN model
- You can manage the infrastructure as code (hence, really do DevOps)
- The APIC controller also manages L4-7 network services from 3rd parties
- Any orchestrator can drive the API of the controller
- The virtual leaf of the fabric extends into the hypervisor (AVS)
- You get immediate visibility of the Health Score for the Fabric, Tenants, Applications

The next picture shows how the fabric is built, using two types of switches: the Spines are used to scale and to connect all the Leaves in a non-blocking fabric that ensures performance and reliability.
The Leaf switches hold the physical ports where servers are attached: both bare metal servers (i.e. running an Operating System directly) and virtualized servers (i.e. running the ESXi, Hyper-V or KVM hypervisors).
The software controller for the fabric, named APIC, runs on a cluster of (at least) 3 dedicated physical servers and is not in the data path: so it does not affect the performance and reliability of the fabric, as can happen with other solutions on the market.

The ACI fabric supports more than 64,000 dedicated tenant networks. A single fabric can support more than one million IPv4/IPv6 endpoints, more than 64,000 tenants, and more than 200,000 10G ports. The ACI fabric enables any service (physical or virtual) anywhere with no need for additional software or hardware gateways to connect between the physical and virtual services and normalizes encapsulations for Virtual Extensible Local Area Network (VXLAN) / VLAN / Network Virtualization using Generic Routing Encapsulation (NVGRE).

The ACI fabric decouples the endpoint identity and associated policy from the underlying forwarding graph. It provides a distributed Layer 3 gateway that ensures optimal Layer 3 and Layer 2 forwarding. The fabric supports standard bridging and routing semantics without standard location constraints (any IP address anywhere), and removes flooding requirements for the IP control plane Address Resolution Protocol (ARP) / Generic Attribute Registration Protocol (GARP). All traffic within the fabric is encapsulated within VXLAN.

The ACI fabric decouples the tenant endpoint address, its identifier, from the location of the endpoint that is defined by its locator or VXLAN tunnel endpoint (VTEP) address. The following figure shows decoupled identity and location.


Forwarding within the fabric is between VTEPs. The mapping of the internal tenant MAC or IP address to a location is performed by the VTEP using a distributed mapping database. After a lookup is done, the VTEP sends the original data packet encapsulated in VXLAN with the Destination Address (DA) of the VTEP on the destination leaf. The packet is then de-encapsulated on the destination leaf and sent down to the receiving host. With this model, we can have a full mesh, loop-free topology without the need to use the spanning-tree protocol to prevent loops.

You can attach virtual servers or physical servers that use any network virtualization protocol to the Leaf ports, then design the policies that define the traffic flow among them regardless of the local encapsulation (on the server or on its hypervisor).
So the fabric acts as a normalizer for the encapsulation and allows you to match different environments in a single policy.

Forwarding is not limited to nor constrained by the encapsulation type or encapsulation-specific ‘overlay’ network:





As explained in ACI for Dummies, policies are based on the concept of the EPG (End Point Group).
Special EPGs represent the outside network (outside the fabric, meaning other networks in your data center, the Internet or an MPLS connection):



The integration with the hypervisors is achieved through a bidirectional connection between the APIC controller and the element manager of the virtualization platform (vCenter, System Center VMM, Red Hat RHEV-M...). Their APIs are used to create local virtual networks that are connected and integrated with the ACI fabric, so that policies are propagated to them.
The ultimate result is the creation of Port Groups, or their equivalent, where VMs can be connected.
A Port Group represents an EPG.
Events generated by the VM lifecycle (power on/off, vMotion...) are sent back to APIC so that the traffic is managed accordingly.



How Policies are enforced in the fabric

The policy contains a source EPG, a destination EPG and rules known as Contracts, made of Subjects (security, QoS...). They are created in the Controller and pushed to all the leaf switches where they are enforced.
When a packet arrives at a leaf, if the destination EPG is known it is processed locally.
Otherwise it is forwarded to a Spine, to reach the destination EPG through a Leaf that knows it.

There are 3 cases; the local and global tables in the leaf are used depending on whether the destination EP is known:
1 - If the target EP is known and local (local table) to the same leaf, it is processed locally (no traffic through the Spine).
2 - If the target EP is known and remote (global table), it is forwarded to the Spine to be sent to the destination VTEP, which is known.
3 - If the target EP is unknown, the traffic is sent to the Spine for proxy forwarding (meaning the Spine discovers the destination VTEP).
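
To make the three cases concrete, here is a minimal Python sketch of the decision logic described above. It is purely conceptual: the table names and data structures are invented for illustration, and the real logic lives in the switch hardware.

def deliver_locally(packet, port):
    print(f"deliver {packet['id']} on local port {port}")

def send_vxlan(packet, dst_vtep):
    print(f"encapsulate {packet['id']} in VXLAN towards VTEP {dst_vtep}")

def forward(packet, local_table, global_table, spine_proxy_vtep):
    dst = packet["dst_endpoint"]
    if dst in local_table:
        # Case 1: endpoint attached to the same leaf -> no traffic through the Spine
        deliver_locally(packet, local_table[dst])
    elif dst in global_table:
        # Case 2: endpoint known and remote -> encapsulate towards its VTEP via the Spine
        send_vxlan(packet, dst_vtep=global_table[dst])
    else:
        # Case 3: endpoint unknown -> the Spine proxy resolves the destination VTEP
        send_vxlan(packet, dst_vtep=spine_proxy_vtep)

forward({"id": "pkt1", "dst_endpoint": "10.0.0.20"},
        local_table={"10.0.0.10": "eth1/1"},
        global_table={"10.0.0.20": "172.16.0.2"},
        spine_proxy_vtep="172.16.0.100")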



You can manage the infrastructure as code.

The fabric is stateless: this means that all the configuration/behavior can be pushed to the network through the controller's API. The definitions of Contracts and EPGs, of PODs and Tenants, and every Application Profile are (sets of) XML documents that can be saved as text.
Hence you can save them in the same repository as the source code of your software applications.

You can extend the DevOps pipeline that builds the application, deploys it and tests it automatically by adding a build of the required infrastructure on demand.
This means that you can use a slice of a shared infrastructure to create an environment just when it's needed and destroy it soon after, returning the resources to the pool.
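
Just to give an idea of what such a text artifact looks like, here is a simplified sketch of an Application Profile in XML, based on the ACI object model. Class names, attributes and values are indicative only (check the APIC documentation for the exact schema):

<fvTenant name="Customer1">
  <fvBD name="BD1"/>
  <vzBrCP name="web-to-app">
    <vzSubj name="http"/>
  </vzBrCP>
  <fvAp name="WebApp">
    <fvAEPg name="Web">
      <fvRsBd tnFvBDName="BD1"/>
      <fvRsCons tnVzBrCPName="web-to-app"/>
    </fvAEPg>
    <fvAEPg name="App">
      <fvRsBd tnFvBDName="BD1"/>
      <fvRsProv tnVzBrCPName="web-to-app"/>
    </fvAEPg>
  </fvAp>
</fvTenant>

A file like this can live in the same Git repository as the application code, be reviewed and versioned, and be pushed to the controller by a pipeline step.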

You can also use this approach for Disaster Recovery, simply building a clone of the main DC if it's lost.

Any orchestrator can drive the API of the controller.

The XML (or JSON) content that you send to build the environment and the policies is based on a standard language. The APIs are well documented and lots of samples are available.
You can practice with the API, learn how to use them with any REST client and then copy the same calls into your preferred orchestrator.
While some products have out-of-the-box native integration with APIC (Cisco UCSD, Microsoft), any other can easily be used with the approach I described above.
See an example in The Elastic Cloud Project.

The APIC controller also manages L4-7 network services from 3rd parties.

The concept of the Service Graph allows automated and scalable L4-L7 service insertion. Based on a routing rule, the fabric forwards the traffic into a Service Graph, which can be one or more service nodes pre-defined in a series. Using the service graph simplifies and scales service operations: the following pictures show the difference from traditional management of network services.




The same result can be achieved with the insertion of a Service Graph in the contract between two EPGs:



The virtual leaf of the fabric extends into the hypervisor (AVS).

Compared to other hypervisor-based virtual switches, AVS provides cross-consistency in features, management, and control through Application Policy Infrastructure Controller (APIC), rather than through hypervisor-specific management stations. As a key component of the overall ACI framework, AVS allows for intelligent policy enforcement and optimal traffic steering for virtual applications.

The AVS offers:
  • Single point of management and control for both physical and virtual workloads and infrastructure
  • Optimal traffic steering to application services
  • Seamless workload mobility
  • Support for all leading hypervisors with a consistent operational model across implementations for simplified operations in heterogeneous data centers



Cisco AVS is compatible with any upstream physical access layer switch that complies with the Ethernet standard, including Cisco Nexus Family switches. Cisco AVS is compatible with any server hardware listed in the VMware Hardware Compatibility List (HCL). Cisco AVS is a distributed virtual switch solution that is fully integrated into the VMware virtual infrastructure, including VMware vCenter for the virtualization administrator. This solution allows the network administrator to configure virtual switches and port groups to establish a consistent data center network policy.

The next picture shows a topology that includes Cisco AVS with Cisco APIC and VMware vCenter with the Cisco Virtual Switch Update Manager (VSUM).





 

Health Score

The APIC uses a policy model to combine data into a health score. Health scores can be aggregated for a variety of areas such as for infrastructure, applications, or services.

The APIC supports the following health score types:
      System—Summarizes the health of the entire network.
      Leaf—Summarizes the health of leaf switches in the network. Leaf health includes hardware health of the switch including fan tray, power supply, and CPU.
      Tenant—Summarizes the health of a tenant and the tenant’s applications.



Health scores allow you to isolate performance issues by drilling down through the network hierarchy to isolate faults to specific managed objects (MOs). You can view network health by viewing the health of an application (by tenant) or by the health of a leaf switch (by pod).



You can subscribe to a health score to receive notifications if the health score crosses a threshold value. You can receive health score events via SNMP, email, syslog, and Cisco Call Home.  This can be particularly useful for integration with 3rd party monitoring tools. 

Health Score Use case: 
An application administrator could subscribe to the health score of their application - and receive automatic notifications from ACI if the health of the specific application is degraded from an infrastructure point of view - truly an application-aware infrastructure.
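
As a rough idea of how a monitoring script could read the health of an application through the REST API, here is a sketch in Python. Hostnames, credentials and object names are placeholders, and I am assuming the query option that returns health objects together with the managed object; check the APIC REST API reference for the exact syntax.

import requests

APIC = "https://apic.example.com"   # placeholder APIC address
session = requests.Session()

# Authenticate: the returned token is stored as a cookie and resent automatically
session.post(f"{APIC}/api/aaaLogin.json", verify=False,
             json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# Read the Application Profile of tenant "Customer1" together with its health score
r = session.get(f"{APIC}/api/mo/uni/tn-Customer1/ap-WebApp.json",
                params={"rsp-subtree-include": "health"}, verify=False)
print(r.json())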


Conclusion

I hope that these few lines were enough to show the advantage that modern network architectures can bring to your Data Center.
Cisco ACI joins all the benefits of SDN and overlay networks with a powerful integration with the hardware fabric, so you get flexibility without losing control, visibility and performance.

One of the most important aspects is the normalization of the encapsulation, so that you can merge different network technologies (from heterogeneous virtual environments and bare metal) into a single well managed policy model.

Policies (specifically, the Application Network Policies created in APIC based on EPGs and Contracts) allow easier communication between software application designers and infrastructure managers, because they are simple to represent, create/maintain and enforce.

Now all you need is just a look at ACI Fundamentals on the Cisco web site.


April 8, 2015

Software Defined Networking For Dummies


A very simple, yet complete description of what SDN is, now available as a free ebook that you can download from http://www.cisco.com/go/sdnfordummies


Software defined networking (SDN) is a new way of looking at how networking and cloud solutions should be automated, efficient, and scalable in a new world where application services may be provided locally, by the data center, or even the cloud. This is impossible with a rigid system that’s difficult to manage, maintain, and upgrade. Going forward, you need flexibility, simplicity, and the ability to quickly grow to meet changing IT and business needs.

Software Defined Networking For Dummies, Cisco Special Edition, shows you what SDN is, how it works, and how you can choose the right SDN solution. This book also helps you understand the terminology, jargon, and acronyms that are such a part of defining SDN.
Along the way, you’ll see some examples of the current state of the art in SDN technology and see how SDN can help your organization. 


You can find additional information about Cisco’s take on SDN by visiting:
http://cisco.com/go/aci
http://cisco.com/go/sdn
http://blogs.cisco.com/tag/sdn

March 17, 2015

The Elastic Cloud project - Porting to UCSD

Porting to a new platform

This post shows how we did the porting of the Elastic Cloud project to a different platform.
The initial implementation was done on Cisco IAC (Intelligent Automation for Cloud) orchestrating Openstack, Cisco ACI (Application Centric Infrastructure) and 3 hypervisors.

Later we decided to implement the same use case (deploy a 3 tier application to 3 different hypervisors, using Openstack and ACI) with Cisco UCS Director, aka UCSD.

The objective was to offer another demonstration of flexibility and openness, targeting IT administrators rather than end users like we did in the first project.
You will find a brief description of UCS Director in the following paragraphs: essentially it is not used to abstract complexity, but to allow IT professionals to do their job faster and error-proof.
UCSD is also a key element in a new Cisco end-to-end architecture for cloud computing, named Cisco ONE Enterprise Cloud suite.

The implementation was supported by the Cisco dCloud team, the organization that provides excellent remote demo capabilities on a number of Cisco technologies. They offered me the lab environment to build the new demo and, in turn, the complete demo will be offered publicly as a self service environment on the dCloud platform.

The dCloud demo environment

Cisco dCloud provides Customers, Partners and Cisco Employees with a way to experience Cisco Solutions. From scripted, repeatable demos to fully customizable labs with complete administrative access, Cisco dCloud can work for you. Just log in to dcloud.cisco.com with your Cisco account and you'll find all the available demos:


Cisco UCS Director

UCSD is a great tool for Data Center automation: it manages servers, network, storage and hypervisors, providing a consistent view of the physical and virtual resources in your DC.

Despite the name (which could associate it with Cisco UCS servers only), it integrates with a multi-vendor heterogeneous infrastructure, offering a single dashboard plus the automation engine (with a library containing 1300+ tasks) and an SDK to create your own adapters if needed.

UCSD offers open APIs, so that you can run its workflows from the UCSD catalog or from a 3rd party tool (a portal, an orchestrator, a custom script).
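
To give an idea, a 3rd party script could submit a UCSD workflow with a single REST call like the sketch below. This is based on my recollection of the UCSD northbound API (the operation name, the X-Cloupia-Request-Key header and the opData structure): the address, the API key, and the workflow name and inputs are placeholders, so verify the details against the API documentation linked at the end of this post.

import json
import requests

UCSD = "https://ucsd.example.com"      # placeholder UCS Director address
API_KEY = "0123456789abcdef"           # the user's REST access key

# Workflow name and input parameters, passed as a name/value list
op_data = {
    "param0": "Deploy 3 Tier Application",
    "param1": {"list": [{"name": "tenantName", "value": "Customer1"}]},
    "param2": -1                       # no parent service request
}

r = requests.get(f"{UCSD}/app/api/rest",
                 headers={"X-Cloupia-Request-Key": API_KEY},
                 params={"formatType": "json",
                         "opName": "userAPISubmitWorkflowServiceRequest",
                         "opData": json.dumps(op_data)},
                 verify=False)
print(r.json())   # returns the service request ID used to track the execution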

There is a basic workflow editor, which we used to create the custom process integrating Openstack, ACI and all the hypervisors to implement our use case. We don't consider UCSD a full business-level orchestrator because it's not meant to also integrate the BSS (Business Support Systems) in your company, but it does the automation of the DC infrastructure, including Cisco and 3rd party technologies, pretty well.

Implementing the service in UCS Director

Description of the process

The service consists of the deployment of the classic 3 tier application with a single click.
The first 2 tiers of the application (the web and application servers and their networks) are deployed on Openstack. The first version of the demo uses KVM as the target hypervisor for both tiers; the next version will replace one of the Openstack compute nodes with Hyper-V.
The 3rd tier (the database and its network) is deployed on ESXi.
On every hypervisor, virtual networks are created first. Then virtual machines are created and attached to the proper network.

To connect the virtual networks in their different virtualized environments we used Cisco ACI, creating policies through the API of the controller.
One End Point Group is created for each of the application tiers, and Contracts are created to allow the traffic to flow from one tier to the next (and only there).
If you are not familiar with the ACI policy model, you can see my ACI for Dummies post.

All these operations are executed by a single workflow created in the UCSD automation engine.
We just dropped the tasks from the library to the workflow editor, provided input values for each task (from the output of previous tasks) and connected them in the right sequence drawing arrows.
The resulting workflow executes the same sequence of atomic actions that the administrator would do manually in the GUI, one by one.

The implementation was quite easy because we were porting an identical process created in Cisco IAC: the tool to implement the workflow is different, but the sequence and the content of the tasks is the same.

Integration out-of-the-box

Most of the tasks in our process are provided by the UCSD automation library: all the operations on ACI (through its APIC controller) and on ESXi VM and networks (through vCenter).




When you use these tasks, you can immediately see the effect in the target system.
As an example, this is the outcome of creating a Router in Openstack using UCSD: the two networks are connected in the hypervisor and the APIC plugin in Neutron talks immediately to Cisco ACI, creating the corresponding Contract between the two End Point Groups (please check the Router ID in Openstack and the Contract name in APIC).



 

Custom tasks

The integration with Openstack required us to build custom tasks, adding them to the library.
We created 15 new tasks, to call the API exposed by the Openstack subsystems: Neutron (to create the networks) and Nova (to create the VM instances).
The new tasks were written in Javascript, tested with the embedded interpreter, then added to the library.




After that, they were available in the automation library among the tasks provided by the product itself.
This is a very powerful demonstration of the flexibility and ease of use of UCSD.



I should add that the custom integration with Openstack was built for fun, and as a demonstration.
To implement the deployment of the tiers of the application to 3 different hypervisors we could use the native integration that UCSD has with KVM, Hyper-V and ESXi (through their managers).
There's no need to use Openstack as a mediation layer, as we did here.


The workflow editor

Here you can drag 'n drop the task, validate the workflow, run the process to test it and see the executed steps (with their log and all their input and output values).









Amount of effort

There were two main activities in building this demo:
- creating the custom tasks to integrate Openstack
- creating the process to automate the sequence of atomic tasks.

The first activity (skills required: Javascript programming and understanding of the Openstack API) took 1 hour per task: a total of 2 days.
Jose, who created the custom tasks, has also published a generic custom task to execute REST API calls from UCSD: https://github.com/erjosito/stuff/blob/master/UCSD_REST_custom_tasks.wfdx
In addition, he suggests a simple method to understand what REST call corresponds to an Openstack CLI command.
If you use the --debug option in the Openstack CLI you will see it immediately.

As an example, to boot a new instance:
nova --debug boot --image cirros-0.3.1-x86_64-uec --flavor m1.tiny --nic net-id=f85eb42a-251b-4a75-ba90-723f99dbd00f vm002
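
For reference, the REST call that the --debug output reveals behind this command is conceptually similar to the Python sketch below. The compute endpoint, the token and the image and flavor references are placeholders (the token is obtained from Keystone first); the network UUID is the one passed on the command line.

import requests

NOVA = "http://controller:8774/v2/<tenant_id>"   # placeholder compute endpoint
TOKEN = "<token obtained from Keystone>"         # resent on every call

body = {"server": {
    "name": "vm002",
    "imageRef": "<uuid of the cirros-0.3.1-x86_64-uec image>",
    "flavorRef": "<id of the m1.tiny flavor>",
    "networks": [{"uuid": "f85eb42a-251b-4a75-ba90-723f99dbd00f"}]
}}

r = requests.post(f"{NOVA}/servers",
                  headers={"X-Auth-Token": TOKEN},
                  json=body)
print(r.status_code, r.json())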


The second activity (create the process, test it step by step, expose it in the catalog and run it end to end) took 3 sessions of 2 hours each.
This was made easier by the experience we gained during the implementation of the Elastic Cloud Project. We already knew the atomic actions we needed to perform, their sequence and the input/output parameters for each action.
If we had to build everything from scratch, I would add 2-3 days to understand the use case.


Demo available on dCloud

The demo will be published on the Cisco dCloud site soon for your consumption.
There are also a number of demonstrations available already, focused on UCS Director.
You can learn how UCSD manages the Data Center infrastructure, how it drives the APIC controller in the ACI architecture, and how it is leveraged by Cisco IAC when it uses the REST API exposed by UCSD.

Acknowledgement

A lot of thanks to Simon Richards and Manuel Garcia Sanes from Cisco dCloud, to Russ Whitear from my same team and to Jose Moreno from the Cisco INSBU (Insieme Business Unit).
Great people that focus on Data Center orchestration and many other technologies at Cisco!

You can also find a powerful, yet easy demonstration of how UCSD workflows can be called from a client (a front end portal, another orchestrator...) at Invoking UCS Director Workflows via the Northbound REST API



March 1, 2015

ACI for Simple Minds

Cisco ACI means Application Centric Infrastructure 

Why application designers and developers don't want to speak to network engineers. 


In my previous life I was an enterprise architect and I led design and development of software systems in many projects. When we were in the phase of planning the procurement and the setup of the various environments for the project (dev, test, QA, prod) I was bored by the meetings with the infrastructure guys.
What I needed was a given amount of memory and CPU power, that I could calculate myself, on a single big server or on a number of smaller machines. Then I needed connectivity among the different deployment units in my architecture (a cluster of web servers, a cluster of application servers, a database and some pre-existing systems), and just some services like load balancing.
But those nasty network engineers and the ugly security guys wanted to discuss a long list of requirements and settings: vlans, ip addresses, subnets, firewalls, quality of service, access lists   :-(
I was only interested in application tiers and dependencies, SLA, application performances and compliance and I wanted to discuss that in my language, not in their unfamiliar slang.


How the system engineers see the world: a number of devices with their configuration.

 


How I see the world: a number of servers (or processes) with their role in the application. We can call them End Points.




The communication can be described as a contract.

Provided by some end points, consumed by others. 

And saved as a reusable policy, that could be applied to End Point Groups:

 


Eventually, network services like load balancers or firewalls can be added (creating a service graph):



You can easily understand that our meetings were not that easy    ;-)
It was not their fault (and of course it was not... mine): we only saw the world from different angles, or maybe with different glasses.

For the software guys, abstracting the topology of the deployment is essential. For the system guys, the devil is in the details and they need to know exactly what traffic is flowing to engineer the setup accordingly.

Having a set of policies that describe the desired behavior makes the conversation easy: what service is offered by an end point group, what group (or single end point) can consume it, what SLA should be enforced, etc... Contracts could be: access to a web application on http on port 80, access to monitoring agents or to log collectors, access to an LDAP server for authentication and so on.
You will see later in the post that ACI allows this conversation.

After long discussions and escalations, the setup of the environment was never as fast as I needed.
It was not the DevOps era yet, but we still tried to roll out many builds of the application for rapid prototyping and quick wins.


Now imagine that you were able to agree on the policy definition.
Having those policies instantly enforced on all your network devices without touching them one by one, in a consistent way that prevents human errors and grants compliance by default, would be a miracle... or magic.

Now we have a network architecture that makes this miracle real: Cisco ACI.
A single software controller (redundant, of course) manages all the network connectivity, security and the network services like load balancers and firewalls.
The network is a hardware fabric, with great performance, scalability and resiliency that I will not discuss here (see the links below for the details), that extends smoothly into the virtual networks of any vendor or open source solution, enforcing the policies for physical servers and VMs as end points without any difference.
The controller (named APIC) has a GUI but, most importantly, a rich set of open APIs that can be invoked by your scripts, by orchestration tools from Cisco or 3rd parties, or by cloud management systems.
You can create the policies from here, and also see the "telemetry" of the network with an easy display of the health score of the fabric or of individual applications as well.






Use cases for ACI

Fast provisioning

A stateless network like ACI can be provisioned and completely reshaped in seconds by pushing new policies through the controller.
This concept is pretty similar to what the UCS Service Profile made possible in the server industry, introducing stateless computing.
You can add the complete configuration for a new application to a multitenant shared infrastructure, you can create a new tenant environment, or you can create the test and production environments just by cloning the development environment and applying any needed policies to ensure compliance.
Everything is represented as an XML document or a JSON data structure: in any case a small piece of text data that can be saved, versioned and built automatically by an automation script or tool.
Infrastructure as code is one of the pillars of DevOps.

Physical and virtual networking managed the same

When you design End Point Groups and their Contracts, they can be mapped to physical servers (i.e. servers running a single Operating System, like Linux, Unix or Windows servers) or to VMs running on any hypervisor.
Traffic from a VM is encapsulated and isolated from other VMs, then the policies are applied to allow it to flow to the destination (physical or virtual).
The spine-leaf architecture of the fabric is extended by a virtual leaf that runs in the hypervisor, under the control of the APIC Controller.



Service Graph

The integration of network services (LB, FW, etc.) from 3rd parties is easy thanks to the Opflex protocol, which allows the extension of the declarative (vs imperative) configuration style.
You can add the services to a contract, and all the end points that offer and consume that contract will benefit from the insertion without any need for local configuration (e.g. changing the default gateway to the newly inserted firewall).
Many 3rd party vendors have added - or will soon add - Opflex agents to their products. The wide ecosystem of ACI becomes richer every day.

Easy deprovisioning

Often, when an existing application is deprovisioned, the network and security configurations created for it are not deleted.
This is due to different reasons: either because it's hard to find them (not all organizations use a CMDB to track everything) or because the person responsible for the operation is afraid of removing firewall rules, ACLs or VLANs that could potentially be used by a different application, creating a problem as a consequence.




If you have all the policies for a given application defined as attributes of a specific Application Network Profile in APIC, simply removing that ANP will clean up all the configurations. You were not applying rules to ports (where other applications could be attached) but to end points.
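
In practice, the cleanup can be as small as a single REST call that deletes the Application Network Profile object. Here is a sketch with hypothetical names, assuming an authenticated session as in the usual APIC examples:

import requests

APIC = "https://apic.example.com"                   # placeholder APIC address
cookies = {"APIC-cookie": "<token from aaaLogin>"}  # authentication token

# Deleting the ANP removes the EPGs and the contract relations defined under it
requests.delete(f"{APIC}/api/mo/uni/tn-Customer1/ap-WebApp.json",
                cookies=cookies, verify=False)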



Finally, what advantages can you get from ACI?


Centralized Policy-Defined Automation Management
 • Holistic application-based solution that delivers flexibility and automation for agile IT
 • Automatic fabric deployment and configuration with single point of management
 • Automation of repetitive tasks, reducing configuration errors

Open and Comprehensive End-to-End Security
 • Open APIs, open standards, and open source elements that enable software flexibility for DevOps teams, and firewall and application delivery controller (ADC) ecosystem partner integration
 • Automatic capture of all configuration changes integrated with existing audit and compliance tracking solutions
 • Detailed role-based access control (RBAC) with fine-grained fabric segmentation

Real-Time Visibility and Application Health Score
 • Centralized real-time health monitoring of physical and virtual networks
 • Instant visibility into application performance combined with intelligent placement decisions
 • Faster troubleshooting for day-2 operation

Application Agility
 • Management of application lifecycle from development, to deployment, to decommissioning in minutes
 • Automatic application deployment and faster provisioning based on predefined profiles
 • Continuous and rapid delivery of virtualized and distributed applications

If you liked this post, you may also want to read ACI for (smarter) Simple Minds. You have passed the basic stage now   :-)

Links

Serious product documentation

ACI Marketing page
ACI at a glance
ACI in one page
Application Centric Infrastructure (ACI) Documentation
Learning ACI - Adam's blog

Cartoons (2 min. each)


February 12, 2015

DevOps - Tools and Technology

This post is the continuation of the DevOps - Operational model post in this blog.

We have seen how DevOps processes and organization can help the agility of IT, enabling huge value for the business.
Let’s investigate the tools that smart organizations use to implement DevOps in the real world.
And let’s try to understand how, in addition to code management, the lifecycle of a software application can be optimized by managing the infrastructure as code.
At the end of the day, we want to apply the following picture to the infrastructure as well.




Usually different environments are created to run applications, often cloned for each Tenant (customer, project...): development, integration test, QA test, production, Disaster Recovery.
The infrastructure must provide similar topology and functions, with different scale and HA requirements.
Those environments are sometimes used for a few days, then they are no longer needed and the resources could be reused for the next project.
If we were able to generate a new environment "end to end" when it is required, and to release all the resources to a shared pool afterwards, this would help a lot in the optimization of resource usage.
The economy of scale provided by shared infrastructure and resource pools will add to the simplicity and speed of the operations.

The following picture shows the cycle of the builds (for both the sw application and the infrastructure) that optimizes the time and the resources.




There are a number of tools and solutions that can help automate this process.
Some apply to specific phases, others to the end-to-end DevOps cycle.
Collaboration tools also help the team(s) work together for their own and the entire company's benefit: from http://www.collab.net/solutions/devops



The most used DevOps tools, as far as I know from direct experience and investigation, are Jenkins, Vagrant, Puppet and Chef.
Here is another possible chain of tools that covers the entire process:


Stateless Infrastructure (also known as SDDC)

We understood that the maximum benefit comes from being able to create and destroy environments on demand, allocating resources just when needed (we can also consider Disaster Recovery as an important use case in this scenario, but in that case you should also ensure that data has been replicated before the event).

Infrastructure as code is a core capability of DevOps that allows organizations to manage the scale and the speed with which environments need to be provisioned and configured to enable continuous delivery.
Evolving around the notion of infrastructure as code is the notion of software-defined environments.
Whereas infrastructure as code deals with capturing node definitions and configurations as code, software-defined environments use technologies that define entire systems made up of multiple nodes — not just their configurations, but also their definitions, topologies, roles, relationships, workloads and workload policies, and behavior.

Stateless Computing and Stateless Networking are important innovations that some vendors (Cisco could be considered a leader here) brought to the market in the last 5 years.
Policy-based configuration and the availability of software controllers for all the components of the architecture allow the separation of the modeling from the physical topology.

Servers

As an example, UCS servers (up to 160 in one management domain, but domains can be joined to share resources and policies) are stateless.
You can imagine each server (either a blade or a rack-mount server) as a dumb piece of iron, before you push its identity, its features (e.g. number, type and configuration of the network interfaces) and its behavior as a piece of configuration.
It is like adding the soul to a body.
Later you can move the same soul to a different body (maybe more powerful, such as from a 2-CPU server to a 4-CPU one). The new machine will be restarted as if it were the same.
This can be useful to recover a faulty server or to do DR, but also to repurpose a server farm in a few minutes (and possibly restore the previous state the day after).
The state (identity, features and behavior) is defined by an XML document that can be stored, versioned and managed as code in a repository (in addition to the embedded UCS Manager).
This abstraction of the server from the actual machine makes the management easier and was the main factor for the incredible success of UCS as a server platform.

Networks

Similarly, in the networking domain, we have had a quantum leap in network management with Cisco ACI (Application Centric Infrastructure).
For those who have not met ACI yet, I have published an “ACI for Dummies” post.
In a few words, ACI brings the management of physical and virtual networks together.
It has a very performant and scalable fabric, made of spine and leaf switches, that are managed by a software controller called APIC.
APIC also integrates the virtual switches in the different hypervisors, so that its policy model can be extended to the virtual end points.
A GUI is provided to manage APIC, but essentially you would drive it through the excellent open API offered to orchestration systems and - of course - DevOps tools.
XML (or JSON) artifacts can be stored in a repository as code, and pushing them to APIC will create your new Data Center on the fly.
You can create new Tenants with dedicated resources, or deploy the infrastructure for a new application in such a way that it is isolated (in terms of security, performance and stability) from others, though running on a shared infrastructure.
It would take just the time of a REST call, where you push the new policy to the controller.
And of course you could use the same templates in the different environments: development, integration test, QA test, production, Disaster Recovery

The previous generation of network devices (e.g. the Nexus family) can be managed in a DevOps scenario as well.
They offer APIs and have Puppet agents onboard. And a version of the APIC controller has also been created for networks outside ACI (APIC-EM - https://developer.cisco.com/site/apic-em/discover/overview/).
The Cisco DevNet community provides a lot of information and samples at https://developer.cisco.com/site/devnet/home/index.gsp

I wrote a short post on Ansible here: http://lucarelandini.blogspot.com/2015/05/a-powerful-devops-tool-ansible.html where a great recorded session from the Openstack Summit is linked.

You might be interested also in my post on DevOps, Docker and Cisco ACI.

 

January 19, 2015

The Elastic Cloud project - Methodology

This post is the continuation of the post The Elastic Cloud Project - Architecture.
Here I will explain how we worked in the project: the sequence of activities that were required and the basic technologies we adopted.
The concepts are mostly explained by using pictures and screen shots, because an image is often worth 1000 words.
If you are interested in more detail, please add a comment or send me a message: I’ll be glad to provide detailed information.

To begin with, we had to:
  • map the data model of the products used to understand what objects should be created, for a Tenant, in all the layers of the architecture
  • create sequence diagrams to make the interaction clear to all the members of the team - and to the customer
  • understand how the API exposed by Openstack Neutron and from Cisco APIC work, how they are invoked and what results they produce
  • implement workflows in the CPO orchestrator to call the APIC controller and reuse the existing services in Cisco IAC
  • integrate Hyper-V compute nodes in Openstack Nova
  • create a new service in the Service Catalog to order the deployment of our 3 tiers application

Some detail about the activities above:

1 - Map the data model of the products used to understand what objects should be created, for a Tenant, in all the layers of the architecture



I know that some of you still don’t know Cisco ACI… I promise that I will post an “ACI for Dummies” article soon.   :-)


  
This picture shows how concepts in Openstack Neutron map to concepts in Cisco ACI:


2 - Create sequence diagrams to make the interaction clear to all the members of the team


3 - Understand how the API exposed by Openstack Neutron and from Cisco APIC work, how they are invoked and what results they produce

This is a call to the Cisco APIC controller, using XML


This is a call to the Openstack Nova API, using JSON:

To do this, we used a REST client to learn the individual behavior and how the parameters need to be passed.
A REST call is essentially an HTTP call (GET or POST) where the body contains XML or JSON documents.
Some HTTP headers are required to specify the content type and to hold security information (like a token for single sign-on, which is returned by the authorization call and must be resent in all the following calls to be recognized).
So we adopted Postman, a plugin for the Chrome browser (the latest version is also released as a standalone application), to practice with the REST calls. Then, after we learned how to manage them, we just copied the same content (plus the headers) into the “http call” tasks in the CPO workflow editor.
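
A minimal sketch of that token mechanism for APIC, in Python (hostname, credentials and tenant name are placeholders; the aaaLogin call and the APIC-cookie token are how I remember the API, so verify against the APIC documentation):

import requests

APIC = "https://apic.example.com"    # placeholder APIC address

# Authorization call: returns a token that must be resent in all following calls
login = '<aaaUser name="admin" pwd="password"/>'
r = requests.post(f"{APIC}/api/aaaLogin.xml", data=login, verify=False)
token = r.cookies.get("APIC-cookie")

# Subsequent call: create a Tenant, resending the token as a cookie
tenant = '<fvTenant name="Customer1"/>'
requests.post(f"{APIC}/api/mo/uni.xml", data=tenant,
              cookies={"APIC-cookie": token}, verify=False)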



The XML or JSON variables that we passed are essentially static documents with some placeholders for the current values, i.e. the Tenant name, the Network name, etc. were passed according to the user input.
Of course the XML element tags are described in the APIC product documentation, you don’t have to reverse engineer their meaning   ;-)
Another way to get the XML ready to use is to export it from the APIC user interface: if you select an object that has been created already (either through the GUI or the API), you can export the corresponding XML definition:



This is how we copied the XML content from the test made in Postman and replaced some elements with placeholders for current values (that are variables in the workflow designer):

This is how the variables appear in the workflow instance viewer, after you have executed the process because a user ordered the service:


4 - Implement workflows in the CPO orchestrator to call the APIC controller and reuse the existing services in Cisco IAC

An example of the services that Cisco IAC provides out of the box.
They are also available through the API exposed by the product, so we created a custom workflow that reused some of the services as building blocks for our use case implementation.
This is the workflow editor, where we created the orchestration flow:



5 - Integrate Hyper-V
At the time of this project, direct support for Microsoft Hyper-V was not available in Openstack Nova.
But a free library was available from Cloudbase, so we decided to install it on our Hyper-V server, so that the virtual data center (VDC) we had created in Cisco IAC thanks to the integration with Openstack could also use Hyper-V resources to provision the VMs.
More detail on the integration can be found here: http://www.cloudbase.it/openstack/
In the current Openstack release (Juno), Hyper-V servers are managed directly.


6 - Create a new service in the Service Catalog

Conclusion

This project had a complexity that derived from being among the first teams in the world to try the integration of so many disparate technologies: Cisco software products for Service Catalog and Orchestration, three hypervisors (ESXi, Hyper-V, and KVM), physical networks (Cisco ACI) and virtual networks in all the hypervisors, and Openstack.
I didn't tell you, but also load balancers and firewalls were integrated.
Maybe I will post some detail about the Layer 4 - Layer 7 service chaining in the next weeks.
We had to learn the concepts before learning the products. Actually the investigation of the APIs and their integration was the easiest part... and was also fun for my old memories as a programmer   :-)

Now, with the current release of the products involved in this project, everything would be much easier.
Their features are more complete (actually the integration of the Neutron API in the management of Virtual Data Centers in ACI was fed back to our engineering during this project).
Skills available in the field are deeper and more widespread.

I've already implemented the same use case with alternative architectures twice.
Cisco UCS Director was used once, replacing the IAC orchestration and pre-built services.
And, in another variation, the Openstack APIs were integrated directly instead of reusing the existing services that manage the Openstack VDC in IAC.
Just to have more fun... ;-)

January 15, 2015

The Elastic Cloud Project - Architecture

This post is the continuation of The Elastic Cloud Project post.

There is a team at Cisco, called the System Development Unit, that creates reference architectures and CVDs (Cisco Validated Designs).
They work with the product Business Units to define the best way to approach common use cases with the best technology.
But at the time of this project, they hadn’t completed their job yet (some of the products were not even released).
So we had to invent the solution based on our understanding of the end-to-end architecture and integrate the technologies in the field.

As I explained before, the most important components were:
- servers - Cisco UCS blades and rack mount servers
- network - Cisco ACI fabric, including the APIC software controller
- virtualization - ESXi, Hyper-V, KVM
- cloud and orchestration software - Cisco PSC and CPO, Openstack (PSC and CPO, plus pre-built services, make up Cisco Intelligent Automation - IAC)
IAC can integrate different “element managers” in the datacenter, so that their resources are used to deliver the cloud services (e.g. single VMs or Virtual Data Centers - VDCs).
Element managers include vmware vCenter and Openstack, so an end user can get a VDC based on one of these platforms.
There is an automated process in IAC, called CloudSync, that discovers all the resources available in the element managers and allows the admin to select those he wants to use to provision services (resource management and lifecycle management are amongst the features of the product).

The ACI architecture
I will cover it in detail in one of my next posts, but essentially ACI (http://www.cisco.com/c/en/us/solutions/data-center-virtualization/application-centric-infrastructure/index.html), which stands for Application Centric Infrastructure, is a holistic architecture with centralized automation and policy-driven application profiles. ACI delivers software flexibility with the scalability of hardware performance.
Cisco ACI consists of:



The policies that you create in the software controller (APIC) are enforced by the fabric, including physical and virtual networks.
You describe the behavior your application needs from the network, not the configuration you need.
This is easier for the application designer, in the collaboration with network managers, because it can be graphically described by the Application Network Profile.
A profile contains End Point Groups (EPGs, representing deployment units of the application: both physical and virtual servers) and Contracts (that define the way EPGs can communicate).
A profile can be saved as an XML or JSON document, stored in a repository, participate in the DevOps lifecycle, be used to clone an environment and be managed by any orchestrator.
ACI is integrated with the main virtualization platforms (ESXi, Hyper-V, KVM).

To deploy our 3 tier application on 3 different hypervisors, we had to manage vmware and Openstack separately - but in a single process, because everything had to be provisioned with a single click.
Initially we based our custom implementation of the new service on the standard IAC services, using them as building blocks.
So we did not have to implement the code to create a network, create a VDC, create a Virtual Server, trigger CloudSync, or integrate the virtual network with the hardware fabric.
This sequence of operations was common to the Openstack environment and the vmware environment.
The main workflow was built with two parallel branches: the Openstack branch (creating 2 web servers on one network and 1 application server on another network) and the vmware branch (doing the same for the database tier).



The problem is that the integration of IAC with Openstack, in the 4.0 release that we used at that time, only deals with Nova - which, in turn, manages both KVM and Hyper-V servers.
No Neutron integration was available out of the box, hence no virtual networks for the Openstack-based VDC.
So we built the Neutron integration from scratch (implementing direct REST calls to the Neutron API) to create the networks.

The ACI plugin for Neutron does the rest: it talks to the APIC controller to create the corresponding EPG (End Point Group).
This implementation has been fed back to IAC 4.1 by the Cisco engineering, so in the current release it is available out of the box.

Solution for Openstack
A plugin distributed with Cisco ACI was installed in Openstack Neutron, to allow it to integrate with the APIC controller.
This is transparent to the Openstack user, who goes on working in the usual way: create network, create router, create VM instances.
Instructions are sent by Openstack to APIC, so that the corresponding constructs are deployed in the APIC data model (Application Profiles, End Point Groups, Contracts).
The orchestrator can then use these objects to create a specific application logic, spanning the heterogeneous server farms and allowing networks in KVM to connect to networks in ESXi and Hyper-V.
So the workflow that we built only needed to work with the native API in Openstack.

Logical flow:
— web tier —
create a virtual network for the web tier via Neutron API (a sketch of this call is shown after this list)
     the Neutron plugin for ACI calls - implicitly - the APIC controller and creates a corresponding EPG.
     the Neutron plugin for OVS creates a virtual network in the hypervisor's virtual switch
trigger the CloudSync process, so that the new network is discovered and attached to the VDC
create a VM for the web server and attach it to the network created for the web tier
     this was initially done by reusing the existing IAC service “Provision a new VM"
— application server tier — 
create a virtual network for the app tier via Neutron API
     the Neutron plugin for ACI calls - implicitly - the APIC controller and creates a corresponding EPG.
     the Neutron plugin for OVS creates a virtual network in the hypervisor's virtual switch
trigger the CloudSync process, so that the new network is discovered and attached to the VDC
create a VM for the application server and attach it to the network created for the app tier
     this was initially done by reusing the existing IAC service “Provision a new VM"
— connect the tiers via the controller —  
connect the two EPG with a Contract, that specifies the business rules of the application to be deployed
     this is done via the APIC Controller’s API, creating the Application Profile for the new application in the right Tenant
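
For reference, here is a sketch in Python of the Neutron calls used in the “create a virtual network” steps above. The endpoint, the token and the CIDR are placeholders; the point is that the workflow only talks to the native Openstack API, and the ACI plugin mirrors the network as an EPG without any extra call:

import requests

NEUTRON = "http://controller:9696/v2.0"    # placeholder Neutron endpoint
TOKEN = "<token obtained from Keystone>"   # resent on every call

# Create the virtual network for the web tier; the ACI plugin maps it to an EPG
net = requests.post(f"{NEUTRON}/networks",
                    headers={"X-Auth-Token": TOKEN},
                    json={"network": {"name": "web-tier"}}).json()

# Give it a subnet so that instances can get an address on it
requests.post(f"{NEUTRON}/subnets",
              headers={"X-Auth-Token": TOKEN},
              json={"subnet": {"network_id": net["network"]["id"],
                               "ip_version": 4,
                               "cidr": "192.168.10.0/24"}})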


Solution for vmware
The APIC controller has a direct integration with vmware vCenter, so the integration is slightly different from the Openstack case:
The operations are performed directly against the APIC API and, when you create an EPG there, APIC uses the vCenter integration to create a corresponding virtual network (a Port Group) in the Distributed Virtual Switch.
So we added a branch to the main process to operate on APIC and vCenter, to complete the deployment of the 3 tier application with the database tier. 

Logical flow:
— database server tier — 
call the APIC REST interface, implementing the right sequence (authentication, create Tenant, Bridge Domain, End Point Group, Application Network Profile).
     specifically a EPG for the database tier is created in the APIC data model, and this triggers the creation of a port group in vCenter.
trigger the CloudSync process, so that the new network is discovered and attached to the VDC
create a VM for the database server and attach it to the network created for the database tier
     this was initially done by reusing the existing IAC service “Provision a new VM"


Service Chaining
The communication between End Point Groups can be enriched by adding network services: load balancing, firewalling, etc.
L4-L7 services are managed by APIC by calling external devices, which could be either physical or virtual.
This automation is based on the availability of device packages (sets of scripts for the target device), and a protocol (OpFlex) has been defined to allow the declarative model supported by ACI to be adopted by all 3rd party L4-L7 devices.
Cisco and its partners are working through the IETF and open source community to standardize OpFlex and provide a reference implementation.


In the next post, I will describe the methodology we used to integrate the individual pieces of the architecture, and how we learned to use the APIs exposed by the target systems (APIC and Openstack) and to insert these calls into the orchestration flow.

Link to next post: The Elastic Cloud Project - Methodology