
July 25, 2018

Have you ever considered CI/CD as a Service?

Introduction    


Would you like to create a complete, fully configured environment for Continuous Integration / Continuous Delivery with a single request, saving you tons of time in configuring and managing the pipeline? If so, keep reading this post, the first of a series of three where Stefano and I will focus on automation in CloudCenter to align with a DevOps methodology. The entire series is co-authored with Stefano Gioia, a colleague of mine at Cisco. We will be talking about a solution we’ve built on top of Cisco CloudCenter to support CI/CD as a Service.

To keep things simple, we've decided to split the story into three posts.  
•  In the first post (this one), we will introduce the use case of CI/CD as a Service, describing it in detail to show the business and technical benefits.  
•  The second post will guide you through automating the deployment of a complete CI/CD environment in a few minutes, by implementing a service in the catalog exposed by Cisco CloudCenter.  
•  In the third post, we will show how to apply CI/CD to the lifecycle of a sample application.   

A little refresher on DevOps    


Before taking this journey, let’s first clarify what we mean by the term “DevOps.” DevOps is not a technology or a magic wand that will instantly help you unify the development (Dev) and operations (Ops) of your applications.
DevOps encompasses more than just software development.   
It's a philosophy of cooperation between different teams in a company, mostly Development (Dev) and Operations (Ops), with the ultimate goal of being more productive and successful in launching new (or updating existing) services to reflect what your customers want.    
As shown in the picture below, this is how we see DevOps: as a human brain. The right hemisphere of the human brain is said to be creative, conceptual and holistic: the opposite of the left hemisphere, which is rational and analytic.




    
Apparently, these are two distinct aspects that cannot always work together. But nature finds a way to let them cooperate for the benefit of the human body.
The very same concept can be applied to your company: your Dev and Ops teams have to collaborate to get significant benefits in productivity, such as shorter deployment cycles, which mean an increased frequency of software releases and, finally, a better reaction to market and customer needs by quickly deploying new application features.

What about Continuous Integration / Continuous Delivery / Continuous Deployment?    


Today Continuous Integration, Delivery and Deployment are common practices in IT software development.
The central concept is continuously making small changes to the code, and building, testing and delivering more often, more quickly and more efficiently, to be able to respond rapidly to changing business contexts.
The picture below illustrates a sample CI/CD process divided into stages:  


Stages in the CI/CD process

       

• Continuous Integration 

A common practice of frequently integrating and continuously merging code changes from a team of developers into a shared code repository. Quite often, after new code is committed to the repository, the server triggers a “build” and runs some basic tests. Once the application is built and all the tests pass, it’s time to move to the next step: delivery.
  

•  Continuous Delivery  

Simply means delivering the build to a specific target (environment), like Integration Test, Quality Assurance, or Pre-Production.   

•  Continuous Deployment  

Is fundamentally an extension of Delivery (and sometimes it’s included in the Delivery process). It allows you to repeat the deployment of your application to production, even many times per day. The production environment could be an on-premises environment or a public cloud. In some advanced scenarios, applications can be deployed in a hybrid model, for example with the database on premises while the business logic and front end run in a public cloud. This is called a hybrid deployment.

Usually, when defining a CI/CD pipeline you will need at least the following components:
•  A code repository to host and manage all your source code 
•  A build server to build an application from source code 
•  An integration server/orchestrator to automate the build and run test code 
•  A repository to store all the binaries and items related to the application 
•  Tools for automatic configuration and deployment    
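To make the flow above concrete, here is a minimal, hedged sketch of how those components chain together: a checkout, a build, some tests, and the publication of the resulting artifact. The repository URL, the make targets and the artifact paths are hypothetical placeholders, not a reference to any specific toolchain.

import subprocess
import shutil

# Hypothetical settings: replace with your repository, build system and artifact store.
REPO_URL = "https://example.com/acme/webapp.git"
WORKDIR = "/tmp/webapp-build"
ARTIFACT = "dist/webapp.tar.gz"
ARTIFACT_STORE = "/srv/artifacts/"


def run(cmd, cwd=None):
    """Run a shell command and fail fast, as a CI server would."""
    print(f"$ {cmd}")
    subprocess.run(cmd, shell=True, cwd=cwd, check=True)


def pipeline():
    # Continuous Integration: fetch the latest code, build it, run the basic tests.
    run(f"git clone --depth 1 {REPO_URL} {WORKDIR}")
    run("make build", cwd=WORKDIR)   # placeholder build step
    run("make test", cwd=WORKDIR)    # placeholder unit tests

    # Continuous Delivery: publish the binary to the artifact repository.
    shutil.copy(f"{WORKDIR}/{ARTIFACT}", ARTIFACT_STORE)

    # Continuous Deployment (optional): push to the target environment.
    run("make deploy ENV=qa", cwd=WORKDIR)   # placeholder deployment step


if __name__ == "__main__":
    pipeline()

In real life this logic lives inside tools like Jenkins or Travis; the point is simply that every stage is scripted and repeatable, so the pipeline can be triggered by every commit.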

Let’s take a look at the typical challenges of implementing a CI/CD process.  
First of all, having one single CI/CD toolset in your company is not a good option.      


Every LOB uses a different CI/CD toolset

    
As you can see from the picture above, every LOB (line of business) might have different requirements (and sometimes even different developer teams inside the same LOB), and most likely they will use different technologies to create a new application or business service. They might even use different programming languages (Java, Node.js, .NET, etc.) and be more familiar with a specific tool, e.g. GitHub rather than Subversion.

How can you accommodate this diversity?    
You probably guessed it right: you can create multiple CI/CD chains and install multiple tools (perhaps on VMs or in containers), depending on the requirements coming from the developers and/or the LOB.
However, how much time will you spend configuring a new CI/CD chain every time, for each LOB or Dev Team? Moreover, what about maintaining and upgrading all the components of the CI/CD toolchain to stay compliant with any new requirements from your security department?

How long does it take to prepare, configure, deploy and manage multiple CI/CD chains? This sounds like a typical Shadow IT problem, the phenomenon that happens when a Developer Team or LOB users can’t get a fast enough response from IT and rush to a public cloud to get what they need. In that case, the solution was to implement automation and self-service, to quickly provide the necessary environment to the end users with the same speed and flexibility as the public cloud.

Wouldn’t it be much easier to adopt the same approach and simply automate the deployment and configuration of the CI/CD chain with a single request generated by a simple HTML form?
Good news: that’s precisely the purpose of “CI/CD as a Service.”   

Introducing “CI/CD as a Service” 


Let us first clarify an important point: we are not discussing relocating your CI/CD resources and processes to the cloud and consuming them from there. Nor are we discussing the automation tasks performed by the CI/CD pipeline (pushing code to the repository, automatically building the code, testing and deploying).

What we are proposing here is to automate the deployment and configuration of the tools that are part of your CI/CD Pipeline. With a single request, you will be able to select, create, deploy and configure the tools that are part of your automated CI/CD pipeline.    

The key thing here is that the customer (an LOB, for example) can decide which components will be part of the CI/CD pipeline. One pipeline could be composed of GitLab, Jenkins, Maven and Artifactory, while another one could be composed of SVN, Travis and Nexus.

Your customers will have their own CI/CD chain preconfigured and ready to be used, so they can be more productive and focus on what matters: creating new features and new applications to sustain your business and competitive advantage.

Let's now have a look at some of the technical and business benefits you can expect if you embrace CI/CD as a Service.

From a technical point of view, here are some good points:

•  Adaptable: your LOB/Dev Team can cherry-pick the tools they need from your catalog 
•  Preconfigured: all the selected components, once deployed, are configured to work together immediately 
•  Error-free: since you have automated all the steps to deploy and configure the elements, there is no room for human error or misconfiguration 
•  Clean: you always have a clean, stable and up-to-date environment ready to be used 
•  Multi-tenant: it serves multiple Lines of Business (LOBs) in your company: each LOB can have its own environment 
•  Easy to plug in: as it’s callable through a REST API, you can easily integrate it with your IT Service Management system for self-service (see the sketch after this list) 
•  Independent: the solution can run on top of any infrastructure / on-premises private cloud    
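As a hint of what “callable through a REST API” can look like in practice, here is a hedged sketch of ordering a preconfigured CI/CD chain from a self-service catalog. The endpoint, payload fields and credentials are hypothetical placeholders, not the actual CloudCenter API.

import requests

# Hypothetical self-service catalog endpoint; not the real CloudCenter API.
CATALOG_URL = "https://catalog.example.com/api/v1/services/cicd"
AUTH = ("demo_user", "demo_password")   # placeholder credentials

# The caller (an LOB, for instance) picks the tools for its own pipeline.
order = {
    "tenant": "lob-finance",
    "repository": "gitlab",
    "ci_server": "jenkins",
    "build_tool": "maven",
    "artifact_repo": "artifactory",
    "target_cloud": "on-prem-vmware",
}

response = requests.post(CATALOG_URL, json=order, auth=AUTH, timeout=30)
response.raise_for_status()

# The catalog would return a reference to the newly deployed, preconfigured chain.
print(response.json())

Because the request is a plain HTTP call, the same order could just as easily come from an HTML form or from your ITSM portal.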

Don’t stop the business: are you running out of on-premises resources? The solution allows you to quickly deploy the service, temporarily, in a public cloud to avoid blocking the development of your critical project.

The majority of customers are interested in the CI/CD approach, and they are actively looking for a solution that can be easily implemented and maintained; therefore, we firmly believe that such a solution from Cisco will be seen as an enabler for their business strategy.

In the next post, we will present a solution that decouples the tools utilized in the CI/CD pipeline from the deployment targets. The use case is implemented with Cisco CloudCenter (CCC), a fundamental component of the Cisco Multicloud Solution.

Credits

This post has been authored by Stefano Gioia, a colleague of mine at Cisco.




January 22, 2017

Hybrid Cloud and your applications lifecycle: 7 lessons learned


Hybrid cloud is a must nowadays; I will not spend a word convincing you (you would not be reading this post if you didn’t believe it). This is the story of a real project.

This post provides more context about the story I summarized at Just 1 step to deploy your applications in the cloud(s).
The structure of the post is:
  • Motivation
  • Use Cases
  • Time
  • Software Stack
  • Benefit of the architecture we implemented
  • Lessons Learned (the most important part)




Motivation for hybrid cloud, and most of the work in my customers' projects, includes the following areas:
- Cost control (there is a strong debate: some swear it’s cheaper, others have discovered hidden costs, e.g. network traffic in production, after building a business case only on the cost of VM provisioning).
- Governance model (IT must find a way to maintain control over resource usage, design patterns, compliance and security when application developers choose a private or a public cloud).
- A mature technical solution: architecture and technology (there are many good products and system integrators in the market).

But, once you have made a decision, what will you run in the hybrid cloud?
Will your applications be spread across the boundary of your datacenter (one tier inside, other tiers outside)?
Or can we say that it is rather a multi-cloud deployment, where you have a number of resource pools that you can use as a target for deployments?

This project was run by a large corporation to test how a hybrid cloud can be built and operated, and to verify the impact on their current organization.
It is not a full production environment; it’s a pilot project that demonstrated on a small scale how easily you can build a software-defined, fully automated data center, including resource pools from both your local data center(s) and public cloud providers.

The solution is expected to be cost effective, of course, but the greater benefits come from business agility and consistent governance.


Use Cases:

The evaluation was focused on 3 main use cases, all requiring that end users order the deployment of a complex software stack from a service catalog: the target for the deployment can be either the private cloud or the public cloud, or a combination of the two. These are the areas where the implementation demonstrated the value of the multi-cloud solution:
  • Business Intelligence (self-deployment of R Studio and additional tools)
  • ETL (self-deployment of a common software for ETL that data scientists would use in autonomy)
  • High Performance Computing (HPC) on OpenStack, with the integration of a DevOps pipeline.

Subject matter experts were provided by different lines of business in the company to support the implementation activities and evaluate the results.
The use cases represent some frequent activities that the company needs in its usual business, especially in R&D. Improving efficiency and quality in the associated processes will have an impact on the overall business outcome. The applications selected for the self-service catalog are deployed frequently (every week) and their installation process takes time (several man-days, accounting for both infrastructure and software setup), delaying business objectives.

Time:

All the activities in the project were delivered on time (six weeks), including the setup of the hardware and software systems for the hybrid cloud, the implementation of the 3 main use cases and some additional ones, the functional tests and the stress tests. This demonstrates that a proper selection of the technology and a good organization of the project allow for an immediate return.
Challenges such as setting up remote access to the lab for remote experts, constraints in the networking and security configuration of the lab, and some missing information about the process to install the applications (essential to build the model for the automation) slowed down the implementation. See the lessons learned below.

Software Stack:

This is a complete end-to-end solution: its adoption will happen with a phased approach, starting from the components that grant an easy and immediate impact on the most critical business requirements and adopting some non-functional components later to complete the architecture. The extension from a private cloud (based on any combination of VMware, other hypervisors and OpenStack) to a hybrid cloud (integrating AWS, Azure and more) was very quick (it is just a matter of configuration and definition of the governance model). Checkmarks in the picture show what we realized in the short timeframe of the project; the rest is part of a phased plan. The blue boxes show the components provided by Cisco.


a full solution for the hybrid cloud

The fundamental component in this architecture is Cisco CloudCenter (CCC), which has two main roles: 
- providing an orchestration solution that offers users the possibility to self-deploy complex software stacks from blueprints offered in a catalog, 
- brokering cloud resources from both private and public clouds (in the project we integrated VMware, OpenStack and AWS, but more clouds are supported).
CloudCenter manages the lifecycle of software applications in the cloud (at a level of abstraction where the underlying physical infrastructure does not matter).
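To give a flavor of what “self-deploy from a blueprint” means in practice, here is a hedged sketch of driving such an orchestrator through a REST call: the same application profile is submitted twice, once targeting the private cloud and once targeting AWS. The endpoint and payload are illustrative placeholders, not the documented CloudCenter API.

import requests

# Hypothetical orchestrator endpoint and credentials; the real CloudCenter API differs.
BASE_URL = "https://ccc.example.com/api/v1"
AUTH = ("demo_user", "demo_api_key")


def deploy(profile: str, target_cloud: str, environment: str) -> dict:
    """Ask the orchestrator to deploy an application profile to a given cloud."""
    payload = {
        "applicationProfile": profile,   # e.g. a blueprint from the catalog
        "targetCloud": target_cloud,     # "vmware-dc1", "openstack-lab", "aws-eu-west-1", ...
        "environment": environment,      # "dev", "qa", "prod", ...
    }
    r = requests.post(f"{BASE_URL}/deployments", json=payload, auth=AUTH, timeout=60)
    r.raise_for_status()
    return r.json()


# The same blueprint, deployed on premises and in the public cloud.
print(deploy("rstudio-bi-stack", "vmware-dc1", "dev"))
print(deploy("rstudio-bi-stack", "aws-eu-west-1", "dev"))

The point of the abstraction is exactly this: the blueprint stays the same, only the deployment target changes.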
The OpenStack use cases for HPC are supported by a Cisco Validated Design named UCSO: it includes a reference architecture for running the Red Hat OSP8 distribution on a certified hardware platform made of Cisco UCS servers and Nexus 9000 switches. The setup process and the operations are defined by the official deployment guide, and Cisco's technical support assumes responsibility for the entire stack, including the Red Hat software.
The management of the entire DC infrastructure from a single orchestration platform was made possible by Cisco UCSD (UCS Director): a single dashboard and workflow engine to manage servers, network and storage, both physical and virtual. The status, performance and remaining capacity of all the systems were monitored with Cisco UCSPM (UCS Performance Manager).


Benefits of the architecture we implemented

The implementation of the multi-cloud solution demonstrated the major benefits that a hybrid cloud delivers.
  • A consistent architecture based on software (and eventually hardware) components that integrate easily and satisfy all the business and technical requirements.
  • All components in the architecture are loosely coupled, and their integration is based on standard protocols and documented open APIs. As a consequence, every component can be replaced by an alternative solution (from a different vendor, from open source, from a custom build) with no fear of vendor lock-in.
  • The adoption of a hybrid cloud solution can happen gradually, starting with a core implementation of the most critical components (e.g. CCC, ACI and UCSO), adding more features as a second step (infrastructure automation and monitoring) with UCSD and UCSPM, and eventually a unified service catalog and ITSM portal later.

Lessons Learned:

  1. use cases
  2. network topology
  3. security and trust
  4. reusable work (repositories and services)
  5. engage SME and business owners
  6. document
  7. refine (iterations, devops)

Use Cases 
The selection of the use cases is important. You need a quick return to demonstrate the value of the hybrid cloud: the adoption of the hybrid model should address immediate business needs that the end users can appreciate, rather than be driven just by an industry trend. 
IT projects should not start because a new technology is very smart, but because the outcome makes the business easier and more productive.
Always engage your end users in the planning phase and avoid academic use cases that have a limited appeal on the decision makers. In this project we were lucky because the preparation was done by the steering committee very well in advance.
Once the models for the automation were ready, we could test any combination of the deployment for the application tiers: everything in the private cloud, everything in the public cloud, or the front end deployed on one side and the back end on the other side. The benchmarking capabilities of the product (CCC) allowed us to compare the price/performance ratio of the different options based on vSphere, OpenStack and AWS, specifically for each application, with tailored reporting.
 
Network Topology
A hybrid environment connects - by definition - areas that were designed separately (your datacenter and the public cloud). They have security policies and configurations that are not meant to work together, and this makes it difficult. Before you start the setup, dedicate the right time to collect all the requirements and to design the connectivity properly. 
We had some issues with the network proxies and the firewalls because of the protocols and ports that we needed to open to allow a proper integration of the Cloud Management Platform (running on premises) with the orchestration engine (with one instance running in each cloud region used in the project, to leverage the local API exposed by the cloud provider and to manage the lifecycle of the applications in the cloud). 

communication among the components of Cisco CloudCenter

Another important requirement is to have a unique repository for all the artifacts, the blueprints and the installation packages for the applications: it should be reachable from all the target clouds that you plan to use, regardless of its location (it can be either in the private or in the public cloud, but all the servers you deploy will access it to stand up a new instance of the application). 
The same applies to any public repository that is used in the setup of the applications (both commercial software and open source components, e.g. packages installed using yum).
See also CCC Components Overview for more detail.

Security and Trust
It's important that a good level of trust is established between the architects building the hybrid cloud and the operations team, especially the security guys. Special rules and new policies need to be set up to allow the new platform to work; it's impossible to keep the same old governance model that addresses a single end-user identity. 
Sometimes I feel like I'm living - again - the same conflict I had with Database Administrators when I tried to configure JDBC database connection pools in the first Java application servers in the 90's. The system should be trusted, and a delegation of the decisions (authentication, authorization and audit) accepted.

Reusable work (repositories and services)
When you model a software application to automate its deployment, you should identify any building block that can potentially be reused in a different model. If you create a reusable (parametric) deliverable and save it individually in a common repository, next time you'll have the work ready to be reused.
This applies to architectural building blocks like database servers, web servers, load balancers, firewalls, distributed caches, etc. 
If they have been created as separate services, instead of just being part of a monolithic model, they will appear in your designer's palette every time you model a similar application, and you can drag and drop them into the topology. We did that in the project and we saved a lot of time in the implementation. 

Engage SME and business owners
It is important that subject matter experts (SMEs) collaborate on the definition of the blueprints and the build of the automation model. Even though documentation exists for the deployment of the application, you should work together. 
The user knows all the requirements, knows how to verify and troubleshoot, and has already encountered all the setup issues.
I've learned that the best way to document the setup process for an application, so that you can use it as a reference for the automation, is to ask the SME to install it in front of you in a clean environment where the application has never run, and to record a video of the process. It's faster than writing documentation, and more complete and reliable. We did that using the desktop sharing feature in Webex and we recorded the sessions.  

Document
While you do the work, keep track of all the steps. Take (maybe informal) notes, but mostly take a lot of screenshots to document what you did. You can keep them on a wiki or in a shared folder; they will help a lot when you have time to create the formal documentation of the project. If you need to troubleshoot, eventually involving other people, this information will be invaluable.
Of course, versioning and taking snapshots of all deliverables also helps in case you need to go back for whatever reason. 

Refine (iterations, devops)
Create the implementation for a minimum viable product (MVP) as soon as you can. Get the product (i.e. the entire self service catalog, or just the implementation of a single application blueprint) to early customers as soon as possible, to get their feedback before you go too deep in the implementation.
Applied to a hybrid cloud scenario, this will help to evaluate:
- quality of the service you are building, including documentation
- how much the users need it and use it in the real world
- performance of the distributed environment and any bottlenecks (network, computing, configuration)
- security implications 
You will have all the time to make it perfect, through iterations that improve the implementation, collect feedback and allow for tuning of the design and the configuration. There is no need to work in a hurry and make mistakes while you keep your users waiting for the final "perfect" product without seeing any progress.

September 6, 2015

The Phoenix Project - how DevOps can change your life

It’s been a long time since my last post: as promised, I only post information from my experience in the real world and I avoid echoing messages from marketing   :-)
I’ve not been at rest, though: I’ve been working on customer projects that can’t be mentioned publicly (yet).

But I’ve also been on vacation and I could finally read a great book, “The Phoenix Project”. 
It is a novel and a very educational read at the same time.
I wholeheartedly recommend you read it (though I’m not earning anything from the book) because I enjoyed it a lot and I learned important lessons that deserve to be spread, for our common benefit as an IT community.








You are not required to be an IT professional but, if you are, you will benefit the most and it will recall many familiar stories.
Since I’ve led some mission-critical projects, and my skin still bears the marks of both tragedy and triumph, this story reminded me of those great moments. 
If you are new to DevOps, you can read my introductory posts in this blog.

Essentially, The Phoenix Project describes the evolution of IT in a company that, on the verge of a complete failure, pioneers DevOps and revolutionizes the way they work.
The impact on the core business is huge and their strategy creates a gap with the competition thanks to agility and flexibility.
Personal lives are also affected, because the new organization ends the tribal war among Development, Operations, Security and the business stakeholders: they establish respect, trust and satisfaction for all the involved parties.
Of course the DevOps methodology is not a magic wand that makes the miracle for them: it is the outcome of a new way of thinking and working together.
This is a story of people, rather than technology.

If every IT department put itself in the shoes of the others, instead of finger-pointing, they could help each other reach a common goal.
If the whole IT is not a counterpart of the LOBs but is a partner (understanding why they are asked something instead of focusing on how to do it), they can offer a huge value to the company… and be highly rewarded (see the coup de théâtre at the end of the story).
This would stop the “dysfunctional marriage” between two parties that don’t understand each other and suffer from a forced relationship.
In my experience, most business people see IT as the provider of a service that is never satisfactory.
On the other side, IT sees that business people don’t understand the complexity and the effort required and ask for impossible things.
In most cases, they are bound to a traditional way of working and don’t even raise their head to see that they already own what’s needed to win.
They are overwhelmed by current tasks, troubleshooting and budget cuts, so they can’t think strategically.

The great idea, here, is importing the concepts and the experience from Lean Manufacturing into IT.
They start considering the IT organization similar to a production plant and optimizing its organization.
Finding bottlenecks and avoiding rework are the first steps, then automation follows to free the smart guys from the routine work and so the quality skyrockets.
At the end of the story, the release of new features required by the business no longer takes months (with high risk at roll-out): they can deploy 10 project builds per day!

That is not impressive if you consider that these days some companies achieve thousands of deployments per day thanks to Continuous Integration and Continuous Deployment.
But it is light years ahead of what most of my customers are doing, though some are exploring DevOps now.
Of course, one organization cannot change overnight.
You shouldn’t see the adoption of DevOps as a single step, and be scared by the effort.
In the book, they learn gradually and improve accordingly: you could do the same.
They go through a process made of the Three Ways, until they master them all.
A brief description of the three ways follows, thanks to Richard Campbell:

The First Way – Systems Thinking
• Understand the entire flow of work
• Seek to increase the flow of work
• Stop problems early and often – Don’t let them flow downstream
• Keep everyone thinking globally
• Deeply understand your systems

First Way Goals
• One source of truth – Code, environment and configuration in one place
• Consistent release process – Automation is essential (one click)
• Decrease cycle times, Faster release cadence

The Second Way – Feedback Loops
• Understand and respond to the needs of all customers (internal and external)
• Shorten and amplify all feedback loops
• With feedback comes quality

Second Way Goals
• Defects and performance issues fixed faster
• Ops and InfoSec user stories appear as part of the application
• Everyone is communicating better
• More work getting done

The Third Way – Synergy
• Consistent process and effective feedback result in agility
• Now use that agility to experiment
• You only learn from failure – So fail often, but recover quickly

Third Way Goals
• Ability to anticipate, even define new business needs through visibility in the systems
• Ability to test and optimize new business opportunities in the system while managing risk
• Joy

You should not think that The Phoenix Project is a technical book: though I’ve learned new things or reinforced concepts I knew already, the value I found in it is motivational.
It really moves you to action, and you want to measure the immediate improvement you can get.
More, you want to partner with other stakeholders to achieve common goals.

The Essence of DevOps
• Better Software, Faster
• Pride in the Software You Build and Operate
• Ability to Identify, Respond and Improve Business Needs

My final take from this story is that everybody in the IT (like in other fields) should:

- take risks and innovate - if you fail, the result would probably not be worse than standing still
- invest time - even at the cost of delaying important targets - to think strategically: the return will more than repay the effort
- study what others have already done: learning by example is much easier
- always try to understand your counterpart before fighting on principle; there could be a common advantage if you shift your perspective

Some useful references:
Other DevOps books:
- Visible Ops Handbook (Gene Kim)
- Web Operations (Allspaw/Robbins)
- Continuous Delivery (Humble/Farley)
- The Lean Startup (Eric Ries)

May 23, 2015

A powerful DevOps tool: Ansible

Ansible is a radically simple IT automation platform that makes your applications and systems easier to deploy. Avoid writing scripts or custom code to deploy and update your applications: automate in a language that approaches plain English, using SSH, with no agents to install on remote systems.
At the Openstack Summit in Vancouver I attended a great session presented by two Cisco colleagues:
Juergen Brendel (@brendelconsult), David Lapsey (@devlaps) both from Cisco Metacloud.
These are my notes, which you may find useful as an easy introduction.
But I suggest you watch the recording of their session, linked at the end of this post, because it is very educational.

Configuration Management tools
They are better than scripts, which in turn are better than written manual instructions, which are better than a seasoned administrator's memory.
CM tools describe the desired state of a resource (i.e. a server) via assertions (ensure that… it exists / is installed / ...): a declarative way to provision resources.
Comparison of existing tools:
Puppet dates from 2005, Chef from 2009 - they are powerful and rich.
Salt dates from 2011, Ansible from 2012 - they are easy and quick.
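As a toy illustration of the declarative idea (not how any of these tools is actually implemented), here is a minimal Python sketch of such an "ensure" assertion, assuming a Debian/Ubuntu system: you state the desired state, and the code acts only if the actual state differs.

import subprocess


def ensure_package_installed(name: str) -> None:
    """Assert the desired state: the package must be present (Debian/Ubuntu assumed)."""
    installed = subprocess.run(["dpkg", "-s", name], capture_output=True).returncode == 0

    if installed:
        print(f"{name}: already installed, nothing to do (idempotent)")
    else:
        print(f"{name}: not found, converging to the desired state")
        subprocess.run(["apt-get", "install", "-y", name], check=True)


# Declare the desired state; running this twice changes nothing the second time.
ensure_package_installed("nginx")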

Ansible
It's written in Python and uses YAML to create Playbooks (descriptions of the desired state).
It's simple: no central server to maintain, no key management, NO AGENT on the managed servers - but it requires SSH and Python on the target server (PowerShell support is coming).
Ansible executes commands in explicit order (so there are no race conditions due to dependencies).

Modules
Modules are pieces of code that do a single thing.
There are hundreds of modules available to reuse.
They’re copied to the target server at runtime, executed there (they return results) and then deleted.

Inventory file
It defines hosts and groups them so that you can apply the same commands to all of them at once.
Ad hoc commands apply to groups - example: ansible -i hosts europe -a "uname -a", where europe is a group.

Playbooks
They are written in YAML and tell Ansible what to do (a sequence of tasks).

Projects layout
An Ansible project is made of:
config files
inventory files
group variables
YAML files (playbooks)
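Tying the layout together: a playbook is typically run against an inventory with the ansible-playbook command. Below is a small Python wrapper around that CLI call, just to show how the pieces fit; the file names are placeholders, and in real life you would normally run the command directly.

import subprocess
from typing import Optional

# Placeholder file names from a typical project layout.
INVENTORY = "hosts"      # inventory file defining hosts and groups
PLAYBOOK = "site.yml"    # YAML playbook describing the desired state


def run_playbook(inventory: str, playbook: str, limit: Optional[str] = None) -> None:
    """Run ansible-playbook against an inventory, optionally limited to a group."""
    cmd = ["ansible-playbook", "-i", inventory, playbook]
    if limit:
        cmd += ["--limit", limit]   # e.g. restrict the run to the "europe" group
    subprocess.run(cmd, check=True)


# Apply the playbook to the whole inventory, then only to one group.
run_playbook(INVENTORY, PLAYBOOK)
run_playbook(INVENTORY, PLAYBOOK, limit="europe")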

Roles
contain tasks, handlers, templates, files, vars
apply to servers (that have the same role)
can be included in playbooks

Usage of API
to manage infrastructure and services
there are modules available for public cloud and private cloud management systems

Vagrant
Vagrant is a tool that matches Ansible very well:
it is used to create VMs in the cloud
it can use Ansible as a provisioner
it is written in Ruby
commands:
vagrant up - creates the VM
vagrant provision - calls Ansible

Takeaways
A single Ansible playbook can be used to deploy apps locally and in the cloud.
Download Ansible for free from GitHub.


February 12, 2015

DevOps - Tools and Technology

This post is the continuation of the DevOps - Operational model post in this blog.

We have seen how DevOps processes and organization can help the agility of IT, enabling a huge value for the business.
Let’s investigate the tools that smart organizations use to implement DevOps in the real world.
And let’s try to understand how, in addition to code management, the lifecycle of a software application can be optimized by managing the infrastructure as code.
At the end of the day, we want to apply the following picture to the infrastructure as well.




Usually different environments are created to run an application, often cloned for each Tenant (customer, project...): development, integration test, QA test, production, Disaster Recovery.
The infrastructure must provide similar topology and functions, with different scale and HA requirements.
Those environments are sometimes used for a few days, then they are no longer needed and the resources can be reused for the next project.
If we were able to generate a new environment "end to end" when it is required, and to release all the resources to a shared pool afterwards, this would help a lot in optimizing resource usage.
The economy of scale provided by shared infrastructure and resource pools will add to the simplicity and speed of the operations.

The following picture shows the cycle of the builds (for both the software application and the infrastructure) that optimizes time and resources.




There are a number of tools and solutions that can help automate this process.
Some apply to specific phases, others to the end-to-end DevOps process.
Collaboration tools also help the team(s) work together for their own and the entire company's benefit: from http://www.collab.net/solutions/devops



The most used DevOps tools, as far as I know from direct experience and investigation, are Jenkins, Vagrant, Puppet and Chef.
Here is another possible chain of tools that covers the entire process:


Stateless Infrastructure (also known as SDDC)

We understood that the maximum benefit comes from being able to create and destroy environments on demand and allocate resources just when needed (we can also consider Disaster Recovery as an important use case in this scenario, but in that case you should also ensure that data has been replicated before the event).

Infrastructure as code is a core capability of DevOps that allows organizations to manage the scale and the speed with which environments need to be provisioned and configured to enable continuous delivery.
Evolving around the notion of infrastructure as code is the notion of software-defined environments.
Whereas infrastructure as code deals with capturing node definitions and configurations as code, software-defined environments use technologies that define entire systems made up of multiple nodes — not just their configurations, but also their definitions, topologies, roles, relationships, workloads and workload policies, and behavior.

Stateless Computing and Stateless Networking are important innovations that some vendors (Cisco could be considered a leader here) have brought to the market in the last 5 years.
Policy based configuration and the availability of software controllers for all the components of the architecture allow the separation of the modeling from the physical topology.

Servers

As an example, UCS servers (up to 160 in one management domain, but domains can be joined to share resources and policies) are stateless.
You can imagine each server (either a blade or a rack-mount server) as a dumb piece of iron, before you push its identity, its features (e.g. number, type and configuration of the network interfaces) and its behavior as a piece of configuration.
It is like adding the soul to a body.
Later you can move the same soul to a different body (maybe a more powerful one, such as from a 2-CPU server to a 4-CPU one). The new machine will restart as if it were the same.
This can be useful to recover a faulty server or to do DR, but also to repurpose a server farm in a few minutes (and eventually restore the previous state the day after).
The state (identity, features and behavior) is defined by an XML document that can be stored, versioned and managed as code in a repository (other than in the embedded UCS Manager).
This abstraction of the server from the actual machine makes the management easier and was the main factor for the incredible success of UCS as a server platform.

Networks

Similarly, in the networking domain, we have had a quantum leap in network management with Cisco ACI (Application Centric Infrastructure).
For those who have not met ACI yet, I have published an “ACI for Dummies” post.
In a few words, ACI brings the management of physical and virtual networks together.
It has a very performant and scalable fabric, made of spine and leaf switches, which are managed by a software controller called APIC.
APIC also integrates the virtual switches in the different hypervisors, so that its policy model can be extended to the virtual end points.
A GUI is provided to manage APIC, but essentially you would drive it through the excellent open API offered to orchestration systems and - of course - DevOps tools.
XML (or JSON) artifacts can be stored in a repository as code, and pushing them to APIC will create your new Data Center on the fly.
You can create new Tenants with dedicated resources, or deploy the infrastructure for a new application in such a way that it is isolated (in terms of security, performance and stability) from others, though running on a shared infrastructure.
It would take just the time of a REST call, where you push the new policy to the controller.
And of course you could use the same templates in the different environments: development, integration test, QA test, production, Disaster Recovery.
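As a hedged illustration of "pushing the policy to the controller" (based on the publicly documented APIC REST API, but simplified: the address, credentials and tenant name are placeholders, and certificate validation is disabled for brevity), creating a new tenant boils down to a couple of REST calls like these:

import requests

APIC = "https://apic.example.com"   # placeholder APIC address
USER, PWD = "admin", "password"     # placeholder credentials

session = requests.Session()
session.verify = False              # lab only: skip certificate validation

# 1) Authenticate; APIC returns a session cookie that the Session object keeps.
login = {"aaaUser": {"attributes": {"name": USER, "pwd": PWD}}}
session.post(f"{APIC}/api/aaaLogin.json", json=login).raise_for_status()

# 2) Push a policy artifact (here, a minimal tenant) to the controller.
tenant = {"fvTenant": {"attributes": {"name": "DevTest-Tenant"}}}
resp = session.post(f"{APIC}/api/mo/uni.json", json=tenant)
resp.raise_for_status()
print(resp.json())

Storing JSON artifacts like this one in the same repository as the application code is what makes the network part of the "infrastructure as code" cycle described above.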

The previous generation of network devices (e.g. the Nexus family) can be managed in a DevOps scenario as well.
They offer APIs and have Puppet agents onboard. And a version of the APIC controller has also been created for networks outside ACI (APIC-EM - https://developer.cisco.com/site/apic-em/discover/overview/).
The Cisco DevNet community provides a lot of information and samples at https://developer.cisco.com/site/devnet/home/index.gsp

I wrote a short post on Ansible here: http://lucarelandini.blogspot.com/2015/05/a-powerful-devops-tool-ansible.html where a great recorded session from the Openstack Summit is linked.

You might be interested also in my post on DevOps, Docker and Cisco ACI.

 

February 2, 2015

DevOps - Operational model

This post is the continuation of the Why DevOps: definition and business benefit post.

As it happens in other areas of the IT, technology is an important factor for success but it is not the most important one.
The human factor is what really makes the difference for successful projects.
So skills, common goals, organization and governance (and a business strategy) will make you win with any tool.
But if you lack them, the best technology in the world will fail to provide a positive outcome.

In this post we’ll see how a lot of companies have adopted DevOps practices, using a variety of products (that we'll examine next time), and got an important return.

Why Projects Fail: The Business Management Chasm

Question: Over the past year, what percentage of your current projects have failed to meet your success criteria?
Answer: 19% (n=84)
Question: Why?
Answer:
  1. Poor requirements gathering/scope creep: 23%
  2. Lack of resources (staff and budget): 21%
  3. Changed business priorities: 19%
  4. Lack of business stakeholders ownership: 16%
  5. Testing delays: 10%
  6. User requirements changes: 10%
  7. Vendor performance: 1% 
If you sum up points 3 and 4 you get 35%.
You can easily see that if the application lifecycle was leaner and faster, they wouldn't lose their chances for success.
Quick wins are the most important key to leading a project to its final goal: you should deliver tangible value as early as possible, to keep traction and be able to react to changes.



Businesses today are moving toward continuous delivery as a methodology and tool to meet the ever-increasing demand to deliver better software faster. Continuous delivery, with its emphasis on keeping software in a release-ready state at all times, can be seen as a natural evolution from continuous integration and agile software development practices. However, the cultural and operational challenges to achieve continuous delivery are even greater.
For most organisations, continuous delivery requires adaptation and extension of existing software release processes. The roles, relationships, and responsibilities of people across the organisation may be impacted. The tools used to deliver, update, and maintain software must support automation and collaboration properly, minimising delays and providing tight feedback cycles across the organisation. While these changes can be a huge challenge to implement for organisations that must live within regulatory and operational constraints, there are many practical steps you can take to make real progress today.

With that in mind, here are 7 key pre-requisites organisations should consider when making a successful transition to Continuous Delivery.
1. Make Sure Development, QA & Operations Teams Have Shared Goals & Communicate
2. Get Continuous Integration Right Before Making The Step To Continuous Delivery
3. Automate & Version Everything
4. Share Tools & Procedures Between Teams
5. Make Your Application Production-Friendly: Make Deployments Non-Events
6. Make Your Infrastructure Project-Friendly: Empower The People & The Teams
7. Make Application Versions Ready To Be Shipped Into Production

Continuous Delivery is not just about a set of tools, ultimately it is also about the people and organisational culture. Technology, people and process all have to be aligned to make Continuous Delivery successful in any organisation, a collaborative approach is fundamental to its success. If organisations are to reap the rewards of a more fluid, automated approach to software development that can also provide them business agility – they need to implement these best practice steps on the path to Continuous Delivery.


(1) “Emphasize the performance of the entire system” – a holistic viewpoint from requirements all the way through to Operations
(2) “Creating feedback loops” – to ensure that corrections can continually be made. A TQM philosophy, basically.
(3) “Creating a culture that fosters continual experimentation and understanding that repetition and practice are the pre-requisites to mastery”
These are excellent guidelines at a high level, but we’d like to see a more operational definition. So we’ve made up our own list!
As a starter – we propose that;
  1. You must have identified executive sponsors / stake holders who you are actively working with to promote the DevOps approach.
  2. You must have developed a clear understanding of your organisation’s “value chain” and how value is created (or destroyed) along that chain.
  3. You must have organizationally re-structured your development and operations teams to create an integrated team – otherwise you’re still in Silos.
  4. You must have changed your team incentives (e.g. bonus incentives) to reinforce that re-alignment – without shared Goals you’re still in Silos.
  5. You must be seeking repeatable standardized processes for all key activities along the value chain (the “pre-requisite to mastery”)
  6. You must be leveraging automation where possible – including continuous integration, automated deployments and “infrastructure as code”
  7. You must be adopting robust processes to measure key metrics – PuppetLab’s report focuses on improvement in 4 key metrics – Change Frequency, Change Lead Time, Change Failure Rate and MTTR. We suggest Availability, Performance and MTBF should be in there too.
  8. You must have identified well-defined feedback mechanisms to create continuous improvement.


Of course, you will need some investment to get there. It can be gradual and the payback from the adoption of DevOps will help next steps:



Two main processes that make DevOps work are Continuous Integration and Continuous Delivery.

Continuous integration (CI) is the practice, in software engineering, of merging all developer working copies with a shared mainline several times a day.
CI was originally intended to be used in combination with automated unit tests written through the practices of test-driven development. Initially this was conceived of as running all unit tests in the developer's local environment and verifying they all passed before committing to the mainline.
Later elaborations of the concept introduced build servers, which automatically run the unit tests periodically or even after every commit and report the results to the developers.
In addition to automated unit tests, organisations using CI typically use a build server to implement continuous processes of applying quality control in general — small pieces of effort, applied frequently. In addition to running the unit and integration tests, such processes run additional static and dynamic tests, measure and profile performance, extract and format documentation from the source code and facilitate manual QA processes. This continuous application of quality control aims to improve the quality of software, and to reduce the time taken to deliver it, by replacing the traditional practice of applying quality control after completing all development.



Continuous Delivery (CD) is a design practice used in software development to automate and improve the process of software delivery. Techniques such as automated testing, continuous integration and continuous deployment allow software to be developed to a high standard and easily packaged and deployed to test environments, resulting in the ability to rapidly, reliably and repeatedly push out enhancements and bug fixes to customers at low risk and with minimal manual overhead. The technique was one of the assumptions of extreme programming but at an enterprise level has developed into a discipline of its own, with job descriptions for roles such as "buildmaster" calling for CD skills as mandatory.



Continuous delivery defines a deployment pipeline as a set of validations through which a piece of software must pass on its way to release. Code is compiled if necessary and then packaged by a build server every time a change is committed to a source control repository, then tested by a number of different techniques (possibly including manual testing) before it can be marked as releasable.


Characteristics of a Successful DevOps Team

No matter how you’re using DevOps practices — whether your company has a DevOps department or cross-functional teams that share DevOps tools and practices — there are distinct characteristics of DevOps teams that align with high IT performance.
Here’s a checklist that’s food for thought (and fuel for future improvement!).
These points are drawn from the 2014 State of DevOps Report, and from suggestions of DevOps experts like Paul Duvall, Jez Humble and Joanne Molesky.

Effective DevOps teams don’t think of issues as “someone else’s problem”. 

Developers, IT operations, quality assurance engineers, database admins, and business analysts collaborate, and everyone checks code into the version control system. Everyone is part of the delivery process — and held accountable for it.

We Automate Build, Deployment, and Testing Phases.

With automation, you reduce the chance of human error as you transition code from one phase to the next. Because you’re automating configuration of all environments, you’re minimizing issues caused by writing code in a development environment that is different from the production environment.

Our Culture Reflects Open Communication and Collaboration.

Developers and IT operations attend planning meetings, standups, and release postmortems. Developers share responsibility for writing testable and deployable code, and if code fails in production, the team is kept in the loop, working together to review causes and identify solutions. 

We Have Routine Deployment Processes and Shared Monitoring Practices.

Team members can accurately report how long it’ll take to deploy a new feature, or even a few lines of code, to production. They can identify and remove roadblocks, without a lot of red tape. They understand the key performance and availability metrics to measure, and track them against larger business goals.

We Implement a Continuous Delivery Pipeline.

Continuous delivery, implemented right, lets you release changes continually to production. That lets you test new features with real customers, facilitating quick feedback about how they’re being used. Continuous delivery helps companies make better business decisions and move more quickly than their competitors.

We Use Version Control For All Production Artifacts.

Version control systems help you track changes and quickly find the source of an error, reducing time to recovery. Everything required to launch a change into the production environment must be checked into version control, including application code, application and system configurations, tests, and deployment scripts.

We Trust Each Other, and Collectively Enable Continuous Improvement.

We deliver on our promises to the business, and to our customers. We continually work on developing collaboration, clear communication and trust between team members. We are continually learning and improving as a team. Most important of all: We spend less time fighting fires and more time focusing on great work.



When it’s well executed, continuous delivery allows an organization to respond more quickly to its market and to customers, both internal and external. It also makes life saner for people in IT operations, software development and quality testing teams. Instead of long periods of development punctuated by looming deadlines, big dramatic releases and panicked remediation of serious bugs, software releases are small, predictable and less dramatic… even boring :-)

Top Benefits of Continuous Delivery

Deliver software with fewer bugs and lower risk.
When you release smaller changes more frequently, you catch errors much earlier in the development process. When you implement automated testing at every stage of development, you don’t pass failed code to the next stage. And it’s easier to roll back smaller changes when you need to.

Release new features to market more frequently — and learn.
Releasing new features early and often — even in a minimally viable state — means you get more frequent feedback, giving you the ability to iterate and learn from your customers. Enlisting customers as development partners gives them a sense of co-ownership and loyalty, and makes them more likely to forgive when you stumble.

Respond to market conditions more quickly.
Market conditions change constantly. Whether you’ve just discovered a new product is losing money, or that more customers are visiting your site from smartphones than laptops, it’s much easier to make a fast change if you are already practicing continuous delivery.

Life is saner for everyone: IT operations, software development, QA, product owners and business line owners.
Continuous delivery means the responsibility for software delivery is distributed much more widely, and this shared responsibility and collaboration make life better. Continuous delivery also takes a lot of stress out of software releases. Releasing smaller changes more often gets everyone used to a regular, predictable pace, leaving room to come up with ideas and actually enjoy your work. Best of all, a successful release becomes a shared success, one you can all celebrate together.


In the next post, we’ll discuss the most used tools for DevOps and how the infrastructure can be managed “as code”, meaning it is dynamically provisioned to create the needed environment every time you deploy a new version of the code.
Link to the DevOps - Tools and Technology post.
