
July 28, 2017

Protecting your border or offering a service to others?

The value of automation in the Data Center

Everyone is aware of the value of automation.
Many companies and individual engineers have implemented various ways to save time, from shell scripts to complex programs to fully automated IaaS solutions.

It helps reduce the so-called "Shadow IT", a phenomenon that happens when developers can't get a fast enough response from the company's IT and rush to the public cloud to get what they need. They complete and release their project sooner, but trouble often starts in the production phase of the deployment (unexpected additional budget for IT, new technologies that the IT team is not ready to manage, etc.).


shadow IT happens when corporate IT is not fast enough

Admittedly, some IT departments are organized in silos (a team responsible for servers, one for storage, one for networking, one for virtual machines and, of course, one for security...) and provisioning even simple requests takes too long.


process inefficiency due to silos and wait time


Pressure on the infrastructure managers

So there is inefficiency in the company, and it affects the business outcome of every project:
longer time to market for strategic initiatives, higher costs for infrastructure and people.
Finger pointing starts, to identify who is responsible for the bottleneck.

The efficiency of teams and individuals is questioned, and responsibility is cascaded through the organization: from project managers to developers, to the server team, to the storage team and, generally, to the network team at the end of the chain... so that they have no one else to blame.

Those at the top (they consider themselves at the top of the value chain) believe - or try to demonstrate - that their work is slowed down by the inefficiency of the teams they depend on. They suggest solutions like: "you said that your infrastructure is programmable, now give me your API and I will create everything I need on demand".

Of course this approach could bring some value (not much, as we'll see in the rest of the post), but it undermines the relevance of the specialist teams that are supposed to manage the infrastructure according to best practices, to apply architectural blueprints optimized for the company's specific business, and to know the technology in deeper detail.
So they can't accept being bypassed by a bunch of developers who want to corrupt the system, playing with precious assets with their dirty hands.



The definitive question is: who owns the automation?
Should it be left to the people who know what they need (e.g. developers)?
Or should it be owned by the people who know how the technology works (i.e. IT administrators) and who, at the end of the day, are responsible for the SLA, including the performance, security and reliability that could be affected by a configuration made by others?


In my opinion, and based on the experience shared with many customers, the second answer is the correct one.
By definition the developer is not an expert in security: while he can easily program a switch via its REST API to get a network segment, it's not the same when traffic needs to be secured and inspected.


The IT Admin patrolling the infrastructure


Offering a self service catalog (or API)

A first, immediate solution could be the introduction of an easy automation tool like Cisco UCS Director, which manages almost every element in a multi-vendor Data Center infrastructure: from servers to networks to storage to virtualization, in a single dashboard. What is more interesting is that every atomic action you do in the GUI is also reflected in a task in the automation library, which allows you to create custom workflows chaining all the tasks of a process that you want to automate.
A common example of an automation workflow is the creation of a four-hypervisor server farm.
A single workflow starts on the SAN storage, creating a volume and four LUNs where the hypervisors will be installed to enable remote boot for the servers. Then a network is created (or the existing management network is reused) and four Service Profiles (the definition of a server in Cisco UCS) are created from a template, with an individual IP address, MAC address and WWN for each network interface. Next, zoning and masking are executed to map every new server to a specific LUN, and the service profiles are associated with four available servers (either blades or rack-mount servers). The hypervisors are installed via PXE boot, writing directly to the remote storage, then configured, customized and finally added to a (new) cluster in the hypervisor manager (e.g. vCenter).

The whole process takes less than one hour: you can launch it and go to lunch, and when you're back you'll find the cluster up and running. Compare that to manual provisioning of the same server farm, possibly performed by a number of different teams (see the picture above): it would take days, sometimes weeks.
Other use cases are simpler: maybe just creating a three-tier application with VMs and dedicated networks.

Once the automation workflow has been built and validated, it can be used by the IT admin or by Operations every day, to save time and ensure a consistent outcome (no manual errors). But it can also be offered as a service to all the departments that depend on IT for their projects.

You can build a service catalog with enterprise features: multitenancy, role-based access control, reporting, chargeback, approvals, etc. But you can also offer (secured) access to the API to launch the workflows, giving a degree of autonomy to your consumers - possibly with a resource quota: you don't want everyone to be able to create dozens of VMs every hour if the capacity of the system can't sustain it.

They will appreciate the efficiency improvement, for sure.


What's in it for me?


If you allow your internal clients to self serve, you will: 

  • get fewer requests for trivial tasks, which consume time and give no satisfaction (let them play with it),
  • be the hero of the productivity increase (no requests pending in your queue)
  • dedicate your time and skill to designing the architectural blueprint that will be offered as a service to your clients (so that everybody plays according to your rules)
  • use policy based provisioning, so that you define the rules just once and map them to tenants and environments: every deployment will inherit them
  • maintain control on resource consumption and system capacity, hence on costs and budget
  • increase your relevance: they will come to you to discuss their needs, propose new services, collaborate in governance

Example: network provisioning


The discussion above is valid for the entire infrastructure in the Data Center.
Now let me tell you the story of a customer that implemented it specifically for networking.

They were influenced by the SDN trend and initially fell into the marketing trap of "SDN means software-implemented networking, hence overlays". Then they realized the advantage provided by ACI and selected it as their SDN platform ("software-defined networking", thanks to the software controller and the ACI policy model).

Developers and the Architecture department asked for access to the exposed API, to self-provision what they needed for new projects, but this was seen as an invasion of their territory (see the picture with the dirty hands).

It would have worked, but it implied a transfer of knowledge and a delegation of responsibility over a critical asset. At the end of the day, if developers and software designers had deep networking knowledge, specialists would not exist.

So the network admins built a number of workflows in UCS Director, using the hundreds of tasks offered by the automation library, to implement use cases ranging from basic tasks (allow this VM to be reached from the DMZ) to more complex scenarios (create a new environment for a multi-tier application, including load balancer and firewall configuration, plus access from the monitoring tools, with a single request).


3 tier application blueprint
Blueprint designed in collaboration with Security and Software Architects



Graphical editor for the workflows, with the tasks library


These workflows are offered through a web portal (a service catalog offered by UCSD out of the box) and through the REST API exposed by UCSD. Sample calls were provided to consumers as Python clients, PowerShell clients and Postman collections, so that the higher-level orchestration tool maintained by the Architecture department was able to invoke the workflows immediately, inserting them into the business process automation that was already in place.


Example of a Python client running a UCSD workflow
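
As a reference, a minimal sketch of such a client could look like the following, using the requests library. Everything specific here is a placeholder: the UCSD address, the API key (generated in the user's UCSD profile), the workflow name and its inputs; the operation name and the opData layout should be verified in the UCSD REST API browser for your release (see also the reference below on invoking UCS Director workflows via the northbound API).

import json
import requests

UCSD = "https://ucsd.example.com"      # placeholder UCS Director address
API_KEY = "xxxxxxxxxxxxxxxx"           # taken from the user's UCSD profile

def run_workflow(name, inputs):
    """Submit a UCSD workflow (service request) through the northbound REST API."""
    op_data = {
        "param0": name,                                                    # workflow name
        "param1": {"list": [{"name": k, "value": v} for k, v in inputs.items()]},
        "param2": -1,                                                      # no parent service request
    }
    params = {
        "formatType": "json",
        "opName": "userAPISubmitServiceRequest",   # verify in the UCSD API browser
        "opData": json.dumps(op_data),
    }
    headers = {"X-Cloupia-Request-Key": API_KEY}
    resp = requests.get(UCSD + "/app/api/rest", params=params,
                        headers=headers, verify=False)
    resp.raise_for_status()
    return resp.json()   # contains the service request ID, useful to track the execution

if __name__ == "__main__":
    print(run_workflow("Create 3 Tier Application",
                       {"Tenant": "marketing", "Environment": "dev"}))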



All the executions of the workflows - launched through the self service catalog or through the REST API - are tracked in the system and the administrator can inspect the requests and their outcome:

The IT admin can audit the requests for the automation workflows
The Service Requests are audited and can be inspected and rolled back

Any run of the workflow can be inspected in full detail; look at the tabs in the window:


The IT admin can inspect any run of the workflows
The Admin has full control (see the tabs in the window)


References

Cisco UCS Director
Cisco ACI 
ACI for Simple Minds
ACI for (Smarter) Simple Minds
Invoking UCS Director Workflows via the Northbound API 



January 22, 2017

Hybrid Cloud and your applications lifecycle: 7 lessons learned


Hybrid Cloud is a must nowadays; I will not spend a word convincing you (you'd not be reading this post if you didn't believe it). This is the story of a real project.

This post provides more context about the story I summarized at Just 1 step to deploy your applications in the cloud(s).
The structure of the post is:
  • Motivation
  • Use Cases
  • Time
  • Software Stack
  • Benefit of the architecture we implemented
  • Lessons Learned (the most important part)




The motivation for hybrid cloud, and most of the work in my customers' projects, includes the following areas:
- Cost control (there is a strong debate: some swear it's cheaper, others have discovered hidden costs, e.g. network traffic in production, after they made a business case just on the cost of VM provisioning).
- Governance model (IT must find a way to maintain control over resource usage, design patterns, compliance and security when application developers choose the private cloud or the public cloud).
- Mature technical solution: architecture and technology (there are many good products and system integrators in the market).

But, once you have made a decision, what will you run in the hybrid cloud?
Will your applications be spread across the boundary of your datacenter (one tier inside, other tiers outside)?
Or can we say that it is rather a multi-cloud deployment, where you have a number of resource pools that you can use as a target for deployments?

This project was run by a large corporation to test how a hybrid cloud can be built and operated, and to verify the impact on their current organization.
It is not a full production environment; it's a pilot project that demonstrated on a small scale how easily you can build a software-defined, fully automated data center, including resource pools from both your local data center(s) and public cloud providers.

The solution is expected to be cost effective, of course, but the greater benefits come from business agility and consistent governance.


Use Cases:

The evaluation was focused on 3 main use cases, all requiring that end users order the deployment of a complex software stack from a service catalog: the target for the deployment can be either the private cloud or the public cloud, or a combination of the two. These are the areas where the implementation demonstrated the value of the multi-cloud solution:
  • Business Intelligence (self-deployment of R Studio and additional tools)
  • ETL (self-deployment of a common software for ETL that data scientists would use in autonomy)
  • High Performance Computing (HPC) on OpenStack, with the integration of a DevOps pipeline.

Subject matter experts were provided from different lines of business in the company to support the implementation activities and evaluate the result. 
The use cases represent some frequent activities that the company needs in its usual business, especially in R&D. Improving efficiency and quality in the associated processes will have an impact on the overall business outcome. The applications selected for the self-service catalog are deployed frequently (every week) and have an installation process that takes time (several man-days, accounting for both infrastructure and software setup), delaying business objectives.

Time:

All the activities in the project were delivered on time (six weeks), including the setup of the hardware and software systems for the hybrid cloud, the implementation of the 3 main use cases and some additional use cases, the functional tests and the stress tests. This demonstrates that a proper selection of the technology and a good organization of the project allow for an immediate return.
Challenges like the setup of remote access to the lab for remote experts, constraints in the lab's networking and security configuration, and some missing information about the process to install the applications (essential to build the model for the automation) slowed down the implementation. See Lessons Learned.

Software Stack:

This is a complete end-to-end solution: its adoption will happen with a phased approach, starting from the components that provide an easy and immediate impact on the most critical business requirements and adopting some non-functional components later to complete the architecture. The extension from a private cloud (based on any combination of VMware, other hypervisors and OpenStack) to a hybrid cloud (integrating AWS, Azure and more) was very quick (it is just a matter of configuration and of defining the governance model). The checkmarks in the picture show what we implemented in the short timeframe of the project. The rest is part of a phased plan. The blue boxes show the components provided by Cisco.


a full solution for the hybrid cloud

The fundamental component in this architecture is Cisco CloudCenter (CCC), which has two main roles:
- providing an orchestration solution that offers users the possibility to self-deploy complex software stacks from blueprints offered in a catalog, 
- brokering cloud resources from both private and public clouds (in the project we integrated VMware, OpenStack and AWS, but more clouds are supported).
CloudCenter manages the lifecycle of software applications in the cloud (at a level of abstraction where the underlying physical infrastructure does not matter).
The OpenStack use cases for HPC are supported by a Cisco Validated Design named UCSO: it includes a reference architecture for running the Red Hat OSP8 distribution on a certified hardware platform made of Cisco UCS servers and Nexus 9000 switches. The setup process and the operations are defined by the official deployment guide, and Cisco's technical support assumes responsibility for the entire stack, including the Red Hat software.
The management of the entire DC infrastructure from a single orchestration platform was made possible by Cisco UCSD (UCS Director): a single dashboard and workflow engine to manage servers, network and storage, both physical and virtual. The status, the performance and the remaining capacity of all the systems were monitored with Cisco UCSPM (UCS Performance Manager).


Benefits of the architecture we implemented

The implementation of the multi-cloud solution demonstrated the major benefits that a hybrid cloud delivers.
  • A consistent architecture based on software (and eventually hardware) components that integrate easily and satisfy all the business and technical requirements.
  • All components in the architecture are loosely coupled and their integration is based on standard protocols and documented open APIs. As a consequence, every component can be replaced by an alternative solution (from a different vendor, from open source, from a custom build) with no fear of vendor lock-in.
  • The adoption of a hybrid cloud solution can happen gradually, starting with a core implementation with the most critical components (e.g. CCC, ACI and UCSO), adding more features as a second step (infrastructure automation and monitoring) with UCSD and UCSPM, eventually a unified service catalog and ITSM portal later.

Lessons Learned:

  1. use cases
  2. network topology
  3. security and trust
  4. reusable work (repositories and services)
  5. engage SME and business owners
  6. document
  7. refine (iterations, devops)

Use Cases 
The selection of the use cases is important. You need a quick return to demonstrate the value of the hybrid cloud: the adoption of the hybrid model should address immediate business needs that the end users can appreciate, rather than be driven just by an industry trend.
IT projects should not start because a new technology is very smart, but because the outcome makes the business easier and more productive.
Always engage your end users in the planning phase and avoid academic use cases that have limited appeal to the decision makers. In this project we were lucky because the preparation was done by the steering committee very well in advance.
Once the models for the automation were ready, we could test any combination of deployment targets for the application tiers: everything in the private cloud, everything in the public cloud, or the front end deployed on one side and the back end on the other. The benchmarking capabilities of the product (CCC) allowed us to compare the price/performance ratio of the different options based on vSphere, OpenStack and AWS - specifically for each application, with tailored reporting.
 
Network Topology
A hybrid environment connects - by definition - areas that were designed separately (your datacenter and the public cloud). They have security policies and configurations that are not meant to work together, and this makes the integration difficult. Before you start the setup, dedicate the right amount of time to collecting all the requirements and designing the connectivity properly.
We had some issues with the network proxies and the firewalls because of the protocols and ports that we needed to open to allow a proper integration of the Cloud Management Platform (running on premise) with the orchestration engine (with one instance running in each cloud region used in the project, to leverage the local API exposed by the cloud provider and to manage the lifecycle of the applications in the cloud). 

communication among the components of Cisco CloudCenter

Another important requirement is to have a unique repository for all the artifacts, the blueprints and the installation packages for the applications: it should be reachable from all the target clouds that you plan to use, regardless of its location (it can be either in the private or in the public cloud, but all the servers you deploy will access it to stand up a new instance of the application).
The same applies to any public repository that is used in the setup of the applications (both commercial software and open source components, e.g. packages installed using yum).
See also CCC Components Overview for more detail.

Security and Trust
It's important that a good level of trust is established between the architects building the hybrid cloud and the operations team, especially the security guys. Special rules and new policies need to be set up to allow the new platform to work; it's impossible to keep the same old governance model that addresses a single end-user identity.
Sometimes I feel like I'm living - again - the same conflict that I had with Database Administrators, when I tried to configure JDBC database connection pools in the first Java application servers in the late 90's. The system should be trusted, and a delegation of the decisions (authentication, authorization and audit) accepted.

Reusable work (repositories and services)
When you model a software application to automate its deployment, you should identify any building block that can potentially be reused in a different model. If you create a reusable (parametric) deliverable and save it individually in a common repository, next time you'll have the work ready to be reused.
This applies to architectural building blocks like database servers, web servers, load balancers, firewalls, distributed caches, etc. 
If they have been created as separate services, instead of just being part of a monolithic model, they will appear in your designer's palette every time you model a similar application, and you can drag and drop them into the topology. We did that in the project and we saved a lot of time in the implementation.

Engage SME and business owners
It is important that subject matter experts (SME) collaborate on the definition of the blueprints and the building of the automation model. Even though documentation exists for the deployment of the application, you should work together.
The user knows all the requirements, he knows how to verify and troubleshoot, he has encountered all the setup issues already.
I've learned that the best way to document the setup process for an application, so that you can use it as a reference for the automation, is to ask the SME to install it in front of you in a clean environment where the application was never run, and record a video of the process. It's faster than writing documentation, more complete and reliable. We did that using the desktop sharing feature in Webex and we recorded the sessions.  

Document
While you do the work, keep track of all the steps. Take (maybe informal) notes, but mostly take a lot of screenshots to document what you did. You can keep them on a wiki or on a shared folder; they will help a lot when you have time to create the formal documentation of the project. If you need to troubleshoot, eventually involving other people, this information will be invaluable.
Of course, versioning and taking snapshots of all deliverables also helps in case you need to go back for whatever reason. 

Refine (iterations, devops)
Create the implementation for a minimum viable product (MVP) as soon as you can. Get the product (i.e. the entire self service catalog, or just the implementation of a single application blueprint) to early customers as soon as possible, to get their feedback before you go too deep in the implementation.
Applied to a hybrid cloud scenario, this will help to evaluate:
- quality of the service you are building, including documentation
- how much the users need it and use it in the real world
- performance of the distributed environment and any bottleneck (network, computing, configuration)
- security implications 
You will have all the time to make it perfect, through iterations that improve the implementation, collect feedback and allow for tuning the design and the configuration. There is no need to work in a hurry and make mistakes while you keep your users waiting for a final "perfect" product without seeing any progress.

October 20, 2015

DevOps, Docker and Cisco ACI - part 2

This post is a follow up to the initial discussion of the DevOps approach based on Linux containers (specifically with Docker).
Here I elaborate on the advantage provided by Cisco ACI (and some more projects in the open source space) when you work with containers.

Policies and Containers  

Cisco ACI offers a common policy model for managing IT operations.
It is agnostic: bare metal, virtual machines, and containers are treated the same, offering a unified policy language: one clear security model, regardless of how an application is deployed.


ACI models how components of an application interact, including their requirements for connectivity, security, quality of service (e.g. reserved bandwidth for a specific service), and network services.   ACI offers a clear path to migrate existing workloads towards container-based environments without any changes to the network policy, thanks to two main technologies:
  • ACI Policy Model and OpFlex   
  • Open vSwitch (OVS)   

OpFlex is a distributed policy protocol that allows application-centric policies to be enforced within a virtual switch such as OVS.
Each container can attach to an OVS bridge, just as a virtual machine would, and the OpFlex agent helps ensure that the appropriate policy is established within OVS (because it's able to communicate with the Controller, bidirectionally). 



The result of this integration is the ability to build and manage a complete infrastructure that spans across physical, virtual, and container-based environments.
Cisco plans to release support for ACI with OpFlex, OVS, and containers before the end of 2015.


Value added from Cisco ACI to containers

I will explain how ACI supports the two main networking types in Docker: veth and macvlan.
This can be done already, because it's not based only on OpFlex.

Containers Networking option 1 - veth

veth is the networking mode that leads to virtual bridging with br0 (a Linux bridge, the default option with Docker) or OVS (Open vSwitch, usually adopted with KVM and OpenStack).
As a premise, I remind you that ACI manages the connectivity and the policies for bare metal servers, VMs on any hypervisor and network services (LB, FW, etc.) consistently and easily:

Cisco ACI as the any to any network integration in the data center

On top of that, you can add containers running on bare metal Linux servers or inside virtual machines (different use cases make one of the options preferred, but from an ACI standpoint it's the same):


That means that applications (and the policies enabling them) can span any platform: servers, VMs and containers at the same time. Every service or component that makes up the application can be deployed on the platform that is most convenient for it in terms of scalability, reliability and management:


And the extension of the ACI fabric to virtual networks (with the OpFlex-enabled OVS switch) allows applying the policies to any virtual endpoint that uses virtual Ethernet, like Docker containers configured with the veth mode.

ACI model brought to all workloads at the same time: Docker, VM, bare metal


Advantages from ACI with Docker veth:

With this architecture we can get two main results:
- Consistency of connectivity and services policy across physical, virtual and/or container workloads (LXC and Docker);
- Abstraction of the end-to-end network policy for location independence, together with Docker portability (via shared repositories).

Containers Networking option 2 - macvlan

MACVLAN does not use a network bridge for the Ethernet side of a Docker container to connect to.
You can think of MACVLAN as a hypothetical cable where one end is the container's eth0 and the other end is the interface on the physical switch / ACI leaf.
The hook between them is the VLAN (or the trunked VLAN) in between.
In short, when specifying a VLAN with MACVLAN, you tell a container bound to eth0 on Linux to use VLAN XX (defined as access or trunked).
The connectivity is established when the match happens with the other end of the cable, on VLAN XX on the switch (access or trunk).

At this point you can match VLANs with EPGs (End Point Groups) in ACI, to build policies that group containers as endpoints needing the same treatment, i.e. applying Contracts to the groups of containers:
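
To make that mapping concrete, here is a minimal sketch based on the acitoolkit Python library (a set of Python bindings for the ACI object model). The APIC address, credentials, interface coordinates and VLAN ID are placeholders, and the exact classes should be double-checked against the acitoolkit documentation.

from acitoolkit.acitoolkit import (Session, Tenant, AppProfile, EPG,
                                   Context, BridgeDomain, Interface, L2Interface)

# Log in to the APIC (placeholder URL and credentials)
session = Session('https://apic.example.com', 'admin', 'password')
session.login()

# Tenant / application profile / EPG that will group the containers
tenant = Tenant('Containers')
vrf = Context('ctx1', tenant)
bd = BridgeDomain('bd1', tenant)
bd.add_context(vrf)
app = AppProfile('docker-app', tenant)
web_epg = EPG('web-containers', app)
web_epg.add_bd(bd)

# Leaf interface where the Docker host is attached, and the VLAN used by the
# macvlan interfaces of the containers: binding it to the EPG means every
# container sending traffic on that VLAN is treated as a member of the EPG
intf = Interface('eth', '1', '101', '1', '10')     # pod 1, leaf 101, eth1/10
vlan100 = L2Interface('vlan100', 'vlan', '100')
vlan100.attach(intf)
web_epg.attach(vlan100)

# Push the whole configuration to the APIC in a single REST call
resp = session.push_to_apic(tenant.get_url(), tenant.get_json())
print(resp.ok)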


Advantages from ACI with Docker macvlan:

This configuration provides two advantages (the first one is common to veth):
- It extends the portability of Docker-based applications, thanks to the independence of ACI policies from the server's location.
- It increases network throughput by 5% to 15% (mileage varies; further tuning and tests will provide more detail) because there's no virtual switching consuming CPU cycles on the host.



Intent based approach

A new intent-based approach is making its way in networking. An intent-based interface enables a controller to manage and direct network services and network resources based on a description of the "intent" for network behaviors. Intents are described to the controller through generalized and abstracted policy semantics, instead of OpenFlow-like flow rules. The intent-based interface allows for a descriptive way to get what is desired from the infrastructure, unlike the current SDN interfaces, which are based on describing how to provide different services. This interface will accommodate orchestration services and network- and business-oriented SDN applications, including OpenStack Neutron, Service Function Chaining, and Group Based Policy.

Docker plugins for networks and volumes

Cisco is working on an open source project that aims at enabling intent-based configuration for both networking and volumes. This will exploit the full ACI potential in terms of defining the system behavior via policies, but it will also work with non-ACI solutions.
Contiv netplugin is a generic network plugin for Docker, designed to handle networking use cases in clustered multi-host systems.
It's still a work in progress and details can't be shared at this time, but... stay tuned to see how Cisco is leading in the open source world as well.



Mantl: a Stack to manage Microservices  

And, just so you know, another project that Cisco is delivering targets the lifecycle and the orchestration of microservices.
Mantl has been developed in house, as a framework to manage the cloud services offered by Cisco. It can be used by everyone for free under the Apache License.
You can download Mantl from github and see the documentation here.

Mantl allows teams to run their services on any cloud provider. This includes bare metal services, OpenStack, AWS, Vagrant and GCE. Mantl uses tools that are industry-standard in the DevOps community, including Marathon, Mesos, Kubernetes, Docker, Vault and more.
Each layer of Mantl's stack allows for a unified, cohesive pipeline, whether you are supporting and managing Mesos or Kubernetes clusters during a peak workload or starting new VMs with Terraform. Whether you are scaling up by adding new VMs in preparation for a launch, or deploying multiple nodes on a cluster, Mantl allows you to work with every piece of your DevOps stack in a central location, without backtracking to debug or recompile code to ensure that the microservices you need will function when you need them to.

When working in a container-based DevOps environment, having the right microservices can be the difference between success and failure on a daily basis. Through the Marathon UI, one can stop and restart clusters, kill sick nodes, and manage scaling during peak hours. With Mantl, adding more VMs for QA testing or live use is simple with Terraform — without one needing to piece together code to ensure that both pieces work well together without errors. Addressing microservice conflicts can severely impact productivity. Mantl cuts down time spent working through conflicts with microservices so DevOps can spend more time working on an application.






Key takeaways

ACI offers a seamless policy framework for application connectivity for VMs, physical hosts and containers.

ACI integrates with Docker without requiring gateways (otherwise required if you build the overlay from within the host), so virtual and physical workloads can be merged in the deployment of a single application.

Intent-based configuration makes networking easier. Plugins enabling intent-based configuration for Docker and its integration with SDN solutions are coming fast.

Microservices are a key component of cloud-native applications. Their lifecycle can be complicated, but tools are emerging to orchestrate it end to end. Cisco Mantl is a complete solution for this need and is available for free on GitHub.


References

Much of the information has been taken from the following sources.
You can refer to them for a deeper investigation of the subject:

https://docs.docker.com/userguide/
https://docs.docker.com/articles/security/
https://docs.docker.com/articles/networking/   
http://www.dedoimedo.com/computers/docker-networking.html
https://mesosphere.github.io/presentations/mug-ericsson-2014/  
Exploring Opportunities: Containers and OpenStack
ACI for Simple Minds
http://www.networkworld.com/article/2981630/data-center/containers-key-as-cisco-looks-to-open-data-center-os.html
http://blogs.cisco.com/datacenter/docker-and-the-rise-of-microservices    
ACI and Containers white paper 
Cisco and Red Hat white paper   
Opendaylight and intent
Intent As The Common Interface to Network Resources
Mantl Introduces Microservices as a Stack
Project mantl



October 9, 2015

DevOps, Docker and Cisco ACI - part 1

In this post I try to describe the connection between the need for a faster IT, the usage of Linux containers to quickly deploy cloud-native applications, and the advantage provided by Cisco ACI in container networking.
The discussion is split in two posts to make it more... agile.
A big thank you to Carlos Pereira (@capereir), Frank Brockners (@brockners) and Juan Lage (@JuanLage) who provided content and advice on this subject.

DevOps – it’s not tooling, it’s a process optimization


I will not define DevOps again; you can find it in this post and in this book.
I just want to remind you that it's not a product or a technology, but a way of doing things.
Its goal is to bring down the fences between the software development teams and the operations team, streamlining the flow of an IT project from development to production.
Steps are:
  1. alleviate bottlenecks (systems or people) and automate as much as possible, 
  2. feed information back so problems are solved by design in the next iteration, 
  3. iterate as often as possible (continuous delivery).



Business owners push IT to deliver faster, and application development via DevOps is changing the behavior of IT.
Gartner defined Bimodal IT as the parallel management of cloud-native applications (DevOps) and more mature systems that require consolidated best practices (like ITIL) and tools supporting their lifecycle.



One important aspect of DevOps is that the infrastructure must be flexible and provisioned on demand (and disposed of when no longer needed). So, if it is programmable, it fits much better in this vision.

 

Infrastructure as code

Infrastructure as code is one of the mantras of DevOps: you can save the definition of the infrastructure (and the policies that define its behavior) in a source code repository, just as you do with the code of your applications.
In this way you can automate the build and the management very easily.

There are a number of tools supporting this operational model. Some examples:



One more example of a tool for DevOps is the ACI toolkit, a set of Python libraries that expose the ACI network fabric to DevOps as a code library.

 
You can download it from:


The ACI Toolkit exposes the ACI object model to programming languages so that you can create, modify and manage the fabric as needed.
Remember that one of the most important advantages of Cisco's vision of SDN is that you can manage the entire system as a whole.
There is no need to configure or manage single devices one by one, as in other approaches to SDN (e.g. OpenFlow).
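
As a small illustration, here is a sketch of what "the fabric as code" looks like with the toolkit: two EPGs and a contract between them, pushed to the APIC in one call. Names, credentials and the filter details are placeholders, and the exact signatures should be verified against the acitoolkit samples.

from acitoolkit.acitoolkit import Session, Tenant, AppProfile, EPG, Contract, FilterEntry

# Connect to the APIC (placeholder URL and credentials)
session = Session('https://apic.example.com', 'admin', 'password')
session.login()

# Describe the application as objects: a tenant, an app profile and two EPGs
tenant = Tenant('DevOps-Demo')
app = AppProfile('my-app', tenant)
web = EPG('web', app)
db = EPG('db', app)

# A contract allowing the web tier to reach the database tier on TCP/3306
contract = Contract('web-to-db', tenant)
FilterEntry('mysql', parent=contract, etherT='ip', prot='tcp',
            dFromPort='3306', dToPort='3306')
db.provide(contract)
web.consume(contract)

# The whole intent is sent to the controller with a single REST call
resp = session.push_to_apic(tenant.get_url(), tenant.get_json())
print('Pushed:', resp.ok)

A definition like this can be versioned in a source code repository together with the application, which is exactly the "infrastructure as code" approach described above.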




So you can create, modify and delete all of the following objects and their relationships:




Linux Containers and Docker


Docker is an open platform for sysadmins and developers to build, ship and run distributed applications. Applications are easily and quickly assembled from reusable and portable components, eliminating the siloed approach across development, QA and production environments.

Individual components can be microservices coordinated by a program that contains the business process logic (an evolution of SOA, or Service Oriented Architecture). They can be deployed independently and scaled horizontally as needed, so the project benefits from flexibility and efficient operations. This is of great help in DevOps.

At a high level, Docker is made of:
- Docker Engine: a portable, lightweight runtime and packaging tool
- Docker Hub: a cloud service for sharing applications and automating workflows
There are more components (Machine, Swarm), but they are beyond the basic overview I'm giving here.



Docker’s main purpose is the lightweight packaging and deployment of applications.   

Containers are lightweight, portable, isolated, self-sufficient "slices of a server" that contain any application (often they contain microservices).
They deliver on the full DevOps goal:
- Build once… run anywhere (Dev, QA, Prod, DR).
- Configure once… run anything (any container).  

Processes in a container are isolated from processes running on the host OS or in other Docker containers.
All processes share the same Linux kernel.
Docker leverages Linux containers to provide separate namespaces for containers, a technology that has been present in Linux kernels for 5+ years. The default container format is called libcontainer. Docker also supports traditional Linux containers using LXC.
It also uses control groups (cgroups), which have been in the Linux kernel even longer, to implement auditing and limiting of resources (such as CPU, memory and I/O), and union file systems that support layering of the container's file system.
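
To see the cgroups part in action, here is a minimal sketch using the Docker SDK for Python (an assumption on my side: the docker package is installed and a local daemon is running; the image and the limit values are arbitrary). The limits passed at run time are enforced by the kernel through cgroups.

import docker  # Docker SDK for Python: pip install docker

client = docker.from_env()

# The resource limits below are translated by Docker into cgroup settings
container = client.containers.run(
    "alpine:latest",
    command="sleep 300",
    detach=True,
    mem_limit="256m",          # memory cgroup: max 256 MB
    nano_cpus=500_000_000,     # cpu cgroup: half a CPU
    pids_limit=64,             # pids cgroup: max 64 processes
)

print(container.name, container.status)
container.stop()
container.remove()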

 

Kernel namespaces isolate containers, avoiding visibility between containers and containing faults.   Namespaces isolate:
◦     pid (processes)
◦     net (network interfaces, routing)
◦     ipc (System V interprocess communication [IPC])
◦     mnt (mount points, file systems)
◦     uts (host name)
◦     user (user IDs [UIDs])    

Containers or Virtual Machines


Containers are isolated, portable environments where you can run applications along with all the libraries and dependencies they need.
Containers aren’t virtual machines. In some ways they are similar, but there are even more ways that they are different. Like virtual machines, containers share system resources for access to compute, networking, and storage. They are different because all containers on the same host share the same OS kernel, and keep applications, runtimes, and various other services separated from each other using kernel features known as namespaces and cgroups.
Not having a separate instance of a guest OS for each VM saves space on disk and memory at runtime, also improving performance.
Docker added the concept of a container image, which allows containers to be used on any host with a modern Linux kernel. Soon Windows applications will enjoy the same portability among Windows hosts as well.
The container image allows for much more rapid deployment of applications than if they were packaged in a virtual machine image.



Containers networking

When Docker starts, it creates a virtual interface named docker0 on the host machine.
docker0 is a virtual Ethernet bridge that automatically forwards packets between any other network interfaces that are attached to it.
For every new container, Docker creates a pair of "peer" interfaces: a "local" eth0 interface inside the container and a uniquely named one (e.g. vethAQI2QT) out in the namespace of the host machine.
Traffic going outside is NATted.
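
A quick way to see this addressing from code is the Docker SDK for Python; a hedged sketch (the image name is just an example, and the SDK itself is an assumption, not something covered in this post):

import docker

client = docker.from_env()

# Start a throwaway container on the default docker0 bridge
c = client.containers.run("alpine:latest", "sleep 60", detach=True)
c.reload()  # refresh attributes so NetworkSettings is populated

net = c.attrs["NetworkSettings"]
print("Container IP :", net["IPAddress"])   # address on the docker0 subnet
print("Gateway      :", net["Gateway"])     # the docker0 bridge itself
# Outbound traffic from this address is NATted (MASQUERADE) by iptables on the host

c.stop()
c.remove()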




You can create different types of networks in Docker:

veth: a peer network device is created with one side assigned to the container and the other side is attached to a bridge specified by the lxc.network.link.   

vlan: a vlan interface is linked with the interface specified by the lxc.network.link and assigned to the container.

phys:  an already existing interface specified by the lxc.network.link is assigned to the container.

empty: will create only the loopback interface (at kernel space).

macvlan:  a  macvlan interface is linked with the interface specified by the lxc.network.link and assigned to the container.  It also specifies the mode the macvlan will use to communicate between  different macvlan on the same upper device.  The accepted modes are: private, Virtual Ethernet Port Aggregator (VEPA) and bridge

Docker Evolution - release 1.7, June 2015  

Important innovations have been introduced in the latest release of Docker; some of them are still experimental.

Plugins  

A big new feature is a plugin system for the Engine; the first two plugin types available are for networking and volumes. This gives you the flexibility to back them with any third-party system.
For networks, this means you can seamlessly connect containers to networking systems such as Weave, Microsoft, VMware, Cisco, Nuage Networks, Midokura and Project Calico.  For volumes, this means that volumes can be stored on networked storage systems such as Flocker.

Networking  

The 1.7 release includes a huge update to how networking is done.



Libnetwork provides a native Go implementation for connecting containers.  The goal of libnetwork is to deliver a robust Container Network Model that provides a consistent programming interface and the required network abstractions for applications.
NOTE: libnetwork project is under heavy development and is not ready for general use.
There are many networking solutions available to suit a broad range of use-cases. libnetwork uses a driver / plugin model to support all of these solutions while abstracting the complexity of the driver implementations by exposing a simple and consistent Network Model to users.

Containers can now communicate across different hosts (Overlay Driver).  You can now create a network and attach containers to it.

Example:
docker network create -d overlay net1    
docker run -itd --publish-service=myapp.net1 debian:latest  
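
The same flow can also be driven from Python through the Docker SDK; a rough sketch (note that in the SDK, and in later Docker releases, the container joins the network via a network parameter rather than the experimental --publish-service flag shown above; names are placeholders):

import docker  # Docker SDK for Python

client = docker.from_env()

# Create a multi-host overlay network (requires swarm mode or an external
# key-value store, depending on the Docker version)
net1 = client.networks.create("net1", driver="overlay")

# Run a container attached to that network
container = client.containers.run(
    "debian:latest",
    command="sleep infinity",
    detach=True,
    network="net1",
)

print(container.name, "attached to", net1.name)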

Orchestration and Clustering for containers  

Real-world deployments are automated; single CLI commands are less used. The most important orchestrators are Mesos/Marathon, Google Kubernetes and Docker Swarm.
Most use JSON or YAML formats to describe an application: a declarative language that says what an application looks like.
That is similar to the ACI declarative language, a high-level abstraction to say what an application needs from the network, and have the network implement it.
This validates Cisco's vision with ACI, very different from the NSXs of the world.

Next post explains the advantage provided by Cisco ACI (and some other projects in the open source space) when you use containers.


References

Much of the information has been taken from the following sources.
You can refer to them for a deeper investigation of the subject:

https://docs.docker.com/userguide/
https://docs.docker.com/articles/security/
https://docs.docker.com/articles/networking/   
http://www.dedoimedo.com/computers/docker-networking.html
https://mesosphere.github.io/presentations/mug-ericsson-2014/ 
http://blog.oddbit.com/2014/08/11/four-ways-to-connect-a-docker/
Exploring Opportunities: Containers and OpenStack
ACI for Simple Minds
http://www.networkworld.com/article/2981630/data-center/containers-key-as-cisco-looks-to-open-data-center-os.html
http://blogs.cisco.com/datacenter/docker-and-the-rise-of-microservices    
ACI and Containers white paper 
Cisco and Red Hat white paper    

Some content from the Docker documentation reused based on the Apache 2 License.