The discussion is split into two posts to make it more... agile.
A big thank you to Carlos Pereira (@capereir), Frank Brockners (@brockners) and Juan Lage (@JuanLage), who provided content and advice on this subject.
DevOps – it’s not tooling, it’s a process optimization
I will not define DevOps again: you can find a definition in this post and in this book.
I just want to stress that it's not a product or a technology, but a way of doing things.
Its goal is to bring fences down between the software development teams and the operations team, streamlining the flow of an IT project from development to production.
Steps are:
- alleviate bottlenecks (systems or people) and automate as much as possible,
- feed information back so problems are solved by design in the next iteration,
- iterate as often as possible (continuous delivery).
Business owners push IT to deliver faster, and application development via DevOps is changing the behavior of IT.
Gartner defined Bimodal IT as the parallel management of cloud-native applications (DevOps) and more mature systems that require consolidated best practices (like ITIL) and tools supporting their lifecycle.
One important aspect of DevOps is that the infrastructure must be flexible and provisioned on demand (and disposed when no longer needed). So, if it is programmable it fits much better in this vision.
Infrastructure as code
Infrastructure as code is one of the mantras of DevOps: you can save the definition of the infrastructure (and the policies that define its behavior) in a source code repository, just as you do with the code of your applications.
In this way you can automate builds and management very easily.
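To make the idea concrete, here is a minimal sketch of the pattern: a desired-state definition lives in version control next to the application code, and an automated pipeline renders it into device configuration on every commit. The dictionary layout and the CLI syntax emitted below are made up for illustration, not taken from any real tool:

```python
# Hypothetical desired-state definition. In a real setup this would be a
# YAML/JSON file versioned in the same git repository as the application.
desired_state = {
    "tenant": "acme",
    "networks": [
        {"name": "web", "vlan": 100},
        {"name": "db",  "vlan": 200},
    ],
}

def render_config(state):
    """Render the desired state into a (toy) device configuration.
    A CI pipeline could run this on every commit and apply the result."""
    lines = []
    for net in state["networks"]:
        lines.append("vlan {0} name {1}-{2}".format(
            net["vlan"], state["tenant"], net["name"]))
    return "\n".join(lines)

print(render_config(desired_state))
```

Because the input is plain text in a repository, every infrastructure change gets the same review, diff and rollback workflow as an application change.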
There are a number of tools supporting this operational model. Some examples:
One more example of a DevOps tool is the ACI Toolkit, a set of Python libraries that exposes the ACI network fabric to DevOps as a code library.
You can download it from:
The ACI Toolkit exposes the ACI object model to programming languages so that you can create, modify and manage the fabric as needed.
Remember that one of the most important advantages of Cisco's vision of SDN is that you can manage the entire system as a whole.
There is no need to configure or manage single devices one by one, as in other approaches to SDN (e.g. OpenFlow).
So you can create, modify and delete all of the following objects and their relationships:
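To give a feel for what managing the fabric as code looks like, here is a toy sketch in the spirit of the ACI Toolkit. The MO class below is a simplified stand-in I wrote for illustration, not the real acitoolkit API; fvTenant, fvAp and fvAEPg are actual ACI object class names:

```python
import json

class MO(object):
    """A minimal 'managed object': every ACI-style object has a class,
    a name, optional children, and serializes itself to JSON so that a
    controller-style REST API could consume the whole tree in one call."""
    def __init__(self, cls, name, parent=None):
        self.cls, self.name, self.children = cls, name, []
        if parent is not None:
            parent.children.append(self)

    def to_json(self):
        return {self.cls: {"attributes": {"name": self.name},
                           "children": [c.to_json() for c in self.children]}}

# Build a tenant -> application profile -> EPG tree, then emit its JSON.
tenant = MO("fvTenant", "acme")
app = MO("fvAp", "ecommerce", parent=tenant)
epg = MO("fvAEPg", "web", parent=app)
print(json.dumps(tenant.to_json(), indent=2))
```

The point is that the whole application topology is a single object tree, pushed to the controller in one shot instead of device-by-device configuration.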
Docker is an open platform for sysadmins and developers to build, ship and run distributed applications. Applications are quickly and easily assembled from reusable, portable components, eliminating the siloed approach between development, QA, and production environments.
Individual components can be microservices coordinated by a program that contains the business process logic (an evolution of SOA, or Service Oriented Architecture). They can be deployed independently and scaled horizontally as needed, so the project benefits from flexibility and efficient operations. This is of great help in DevOps.
At a high-level, Docker is built of:
- Docker Engine: a portable and lightweight, runtime and packaging tool
- Docker Hub: a cloud service for sharing applications and automating workflows
There are more components (Machine, Swarm) but that's beyond the basic overview I'm giving here.
Docker’s main purpose is the lightweight packaging and deployment of applications.
Containers are lightweight, portable, isolated, self-sufficient "slices of a server" that contain any application (often they contain microservices).
They deliver on the full DevOps goal:
- Build once… run anywhere (Dev, QA, Prod, DR).
- Configure once… run anything (any container).
Processes in a container are isolated from processes running on the host OS or in other Docker containers.
All processes share the same Linux kernel.
Docker leverages Linux containers to provide separate namespaces for containers, a technology that has been present in Linux kernels for 5+ years. The default container format is called libcontainer. Docker also supports traditional Linux containers using LXC.
It also uses Control Groups (cgroups), which have been in the Linux kernel even longer, to implement resource (CPU, memory, I/O) accounting and limiting, and union file systems that support layering of the container's file system.
Kernel namespaces isolate containers, avoiding visibility between containers and containing faults. Namespaces isolate:
◦ pid (processes)
◦ net (network interfaces, routing)
◦ ipc (System V interprocess communication [IPC])
◦ mnt (mount points, file systems)
◦ uts (host name)
◦ user (user IDs [UIDs])
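You can see these namespaces directly on a Linux host: the kernel exposes one entry per namespace under /proc/&lt;pid&gt;/ns. A small sketch using only the standard library (it returns an empty list on hosts without procfs, so it degrades gracefully off Linux):

```python
import os

def kernel_namespaces(pid="self"):
    """List the namespaces a process belongs to, as exposed by the Linux
    kernel under /proc/<pid>/ns (pid, net, ipc, mnt, uts, user, ...)."""
    ns_dir = "/proc/{0}/ns".format(pid)
    if not os.path.isdir(ns_dir):   # non-Linux hosts don't expose /proc/<pid>/ns
        return []
    return sorted(os.listdir(ns_dir))

print(kernel_namespaces())  # e.g. ['ipc', 'mnt', 'net', 'pid', 'user', 'uts']
```

Every container Docker starts gets its own set of these entries, which is exactly what keeps its processes, network stack and mounts invisible to its neighbors.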
Containers or Virtual Machines
Containers are isolated, portable environments where you can run applications along with all the libraries and dependencies they need.
Containers aren’t virtual machines. In some ways they are similar, but there are even more ways that they are different. Like virtual machines, containers share system resources for access to compute, networking, and storage. They are different because all containers on the same host share the same OS kernel, and keep applications, runtimes, and various other services separated from each other using kernel features known as namespaces and cgroups.
Not having a separate instance of a guest OS for each VM saves disk space and memory at runtime, also improving performance.
Docker added the concept of a container image, which allows containers to be used on any host with a modern Linux kernel. Soon Windows applications will enjoy the same portability among Windows hosts as well.
The container image allows for much more rapid deployment of applications than if they were packaged in a virtual machine image.
Containers networking
When Docker starts, it creates a virtual interface named docker0 on the host machine.
docker0 is a virtual Ethernet bridge that automatically forwards packets between any other network interfaces that are attached to it.
For every new container, Docker creates a pair of "peer" interfaces: one "local" eth0 interface inside the container, and one with a unique name (e.g. vethAQI2QT) in the namespace of the host machine.
Traffic going outside is NATted.
You can create different types of networks in Docker:
veth: a peer network device is created with one side assigned to the container and the other side is attached to a bridge specified by the lxc.network.link.
vlan: a vlan interface is linked with the interface specified by the lxc.network.link and assigned to the container.
phys: an already existing interface specified by the lxc.network.link is assigned to the container.
empty: will create only the loopback interface (at kernel space).
macvlan: a macvlan interface is linked with the interface specified by the lxc.network.link and assigned to the container. It also specifies the mode the macvlan will use to communicate between different macvlans on the same upper device. The accepted modes are: private, Virtual Ethernet Port Aggregator (VEPA) and bridge.
Docker Evolution - release 1.7, June 2015
Important innovations have been introduced in the latest release of Docker, some of which are still experimental.
Plugins
A big new feature is a plugin system for the Engine; the first two plugin types available are for networking and volumes. This gives you the flexibility to back them with any third-party system. For networks, this means you can seamlessly connect containers to networking systems such as Weave, Microsoft, VMware, Cisco, Nuage Networks, Midokura and Project Calico. For volumes, it means that volumes can be stored on networked storage systems such as Flocker.
Networking
The release includes a huge update to how networking is done. Libnetwork provides a native Go implementation for connecting containers. The goal of libnetwork is to deliver a robust Container Network Model that provides a consistent programming interface and the required network abstractions for applications.
NOTE: libnetwork project is under heavy development and is not ready for general use.
There are many networking solutions available to suit a broad range of use-cases. libnetwork uses a driver / plugin model to support all of these solutions while abstracting the complexity of the driver implementations by exposing a simple and consistent Network Model to users.
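The driver/plugin model can be sketched as a registry that hides each backend behind one consistent interface: callers always ask for "a network", and the concrete backend (bridge, overlay, a vendor plugin) is selected by name. This is a hypothetical illustration in Python, not libnetwork's actual Go API:

```python
# A toy registry in the spirit of a driver/plugin model: users program
# against one interface; each driver hides its implementation details.
DRIVERS = {}

def register_driver(name):
    def deco(cls):
        DRIVERS[name] = cls
        return cls
    return deco

@register_driver("bridge")
class BridgeDriver(object):
    def create_network(self, net):
        return "bridge network {0} on local host".format(net)

@register_driver("overlay")
class OverlayDriver(object):
    def create_network(self, net):
        return "overlay network {0} spanning multiple hosts".format(net)

def create_network(name, driver="bridge"):
    """The one consistent entry point; the driver name picks the backend."""
    return DRIVERS[driver]().create_network(name)

print(create_network("net1", driver="overlay"))
```

A vendor plugin simply registers one more driver; nothing changes for the user, which is exactly the abstraction libnetwork aims for.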
Containers can now communicate across different hosts (Overlay Driver). You can now create a network and attach containers to it.
Example:
docker network create -d overlay net1
docker run -itd --publish-service=myapp.net1 debian:latest
Orchestration and Clustering for containers
Real-world deployments are automated; single CLI commands are rarely used. The most important orchestrators are Mesos/Marathon, Google Kubernetes and Docker Swarm. Most use JSON or YAML formats to describe an application: a declarative language that says what an application looks like.
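As an illustration, an application definition in the style of Marathon is just a JSON document declaring what should run; the field names below follow Marathon's app format, while the values are invented. Scaling then becomes nothing more than a change to the desired state:

```python
import json

# A declarative application definition in the style of Mesos/Marathon:
# it says WHAT should run (image, resources, scale), not HOW to run it.
app = {
    "id": "myapp",
    "container": {"type": "DOCKER",
                  "docker": {"image": "debian:latest"}},
    "cpus": 0.5,
    "mem": 128,
    "instances": 3,
}

def scale(app, instances):
    """Scaling is just an update to the desired state; the orchestrator
    converges the running containers toward it."""
    updated = dict(app)
    updated["instances"] = instances
    return updated

print(json.dumps(scale(app, 5), indent=2))
```

The orchestrator watches the declared state and starts or stops containers until reality matches it.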
That is similar to ACI's declarative language: a high-level abstraction to say what an application needs from the network, and have the network implement it.
This validates Cisco's vision with ACI, very different from the NSXs of the world.
Next post explains the advantage provided by Cisco ACI (and some other projects in the open source space) when you use containers.
References
Much of the information has been taken from the following sources. You can refer to them for a deeper investigation of the subject:
https://docs.docker.com/userguide/
https://docs.docker.com/articles/security/
https://docs.docker.com/articles/networking/
http://www.dedoimedo.com/computers/docker-networking.html
https://mesosphere.github.io/presentations/mug-ericsson-2014/
http://blog.oddbit.com/2014/08/11/four-ways-to-connect-a-docker/
Exploring Opportunities: Containers and OpenStack
ACI for Simple Minds
http://www.networkworld.com/article/2981630/data-center/containers-key-as-cisco-looks-to-open-data-center-os.html
http://blogs.cisco.com/datacenter/docker-and-the-rise-of-microservices
ACI and Containers white paper
Cisco and Red Hat white paper
Some content from the Docker documentation reused based on the Apache 2 License.