By Tytus Kurek, cloud product manager at Canonical – the publisher of Ubuntu.
OpenStack is one of the three most active open-source projects in the world, with 1,125 developers from 165 organizations contributing to the 20th release of the cloud computing platform, Train, last fall. OpenStack’s combined worldwide market size is $7.7 billion according to 451 Research.
While OpenStack has become standard for implementing cloud computing infrastructure, organizations typically face significant challenges when adopting it. Those challenges may include:
Lack of resources: OpenStack is a large ecosystem that can grow very fast and usually requires a dedicated team to maintain it. This may be an obstacle for small and mid-size organizations, pushing them toward public clouds that are often more costly and may not meet security requirements.
Lack of knowledge and experience: OpenStack demands specialized skills that are in short supply. From the outset, technical questions have to be answered before OpenStack can even be deployed. This knowledge gap can lead to poor choices in architecture and hardware.
Security concerns: Because OpenStack is a complex ecosystem, security must be addressed at various layers. It is crucial to find the right balance of robust security while avoiding over-engineering that can kill the project.
Ongoing operations: Potential challenges don’t stop once the OpenStack cloud is deployed. Consideration needs to be given to the ongoing operations and upgrades that can become complex and fragile.
There’s good news, however: Organizations can benefit from a successful OpenStack deployment when using tools that dramatically accelerate and simplify the entire process. This can be broken down into five phases: design, model, deploy, operate, and transfer. Let’s look at each.
Several important questions must be answered at the start of the OpenStack journey: Which reference architecture should be used? Which vendors should hardware be purchased from? Which hypervisor and SDN solution is the best choice, and what about security?
During the design phase, these questions are answered. The requirements are gathered and turned into deliverables based on input from analysts and architects. At the end of this phase, it is clear what is going to be built, when it will be built, and at what cost.
Typically, the design phase is followed directly by the deploy phase. Although the model phase brings its own benefits, many organizations have struggled with a lack of tools to translate the model into a real implementation.
Fortunately, application modelling tools are now available. These allow operators to model the machines being deployed and their hardware specifications; the applications being deployed and their configurations; the number of application units and their placement; and the relations between the applications.
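To make those factors concrete, here is a minimal sketch of what such a declarative model might capture. The structure and names are invented for illustration (tools such as Juju express the same ideas in their own bundle format); it simply shows machines with hardware specs, applications with configuration, unit counts with placement, and relations between applications.

```python
# Illustrative deployment model (hypothetical structure, not a real tool's API).
model = {
    # Machines being deployed and their hardware specifications.
    "machines": {
        "0": {"cores": 16, "ram_gb": 64, "disk_gb": 2000},
        "1": {"cores": 16, "ram_gb": 64, "disk_gb": 2000},
    },
    # Applications, their configuration, unit counts, and unit placement.
    "applications": {
        "keystone":     {"units": 1, "placement": ["0"],      "config": {"region": "RegionOne"}},
        "mysql":        {"units": 1, "placement": ["1"],      "config": {}},
        "nova-compute": {"units": 2, "placement": ["0", "1"], "config": {}},
    },
    # Relations between the applications.
    "relations": [
        ("keystone", "mysql"),
        ("nova-compute", "keystone"),
    ],
}

def total_units(model):
    """Sum the number of application units the model will deploy."""
    return sum(app["units"] for app in model["applications"].values())

print(total_units(model))  # -> 4
```

A model like this is what lets a deployment tool translate intent into a real implementation, rather than leaving that translation to manual runbooks.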
Even if properly designed and modelled, a manual OpenStack deployment and configuration is complex and can take weeks. That’s because OpenStack consists of various interconnected components that have to be carefully configured to work together.
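The interdependence between components is a large part of why manual deployment is slow: each service can only be configured once the services it relies on are up. The sketch below illustrates that idea with a deliberately simplified (and assumed) dependency graph between a handful of OpenStack services; real clouds have many more components and relations.

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Simplified, illustrative dependency graph: each service maps to the
# services it depends on. Real OpenStack deployments are far larger.
deps = {
    "mysql":    set(),
    "rabbitmq": set(),
    "keystone": {"mysql"},
    "glance":   {"mysql", "keystone"},
    "neutron":  {"mysql", "rabbitmq", "keystone"},
    "nova":     {"mysql", "rabbitmq", "keystone", "glance"},
}

# A valid installation order configures each service only after its
# dependencies are running -- exactly what a deployment tool automates.
order = list(TopologicalSorter(deps).static_order())
print(order)
```

Deployment automation tools effectively compute and execute an ordering like this, along with the per-service configuration, which is why they can collapse weeks of manual work into hours.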
There are various tools that allow automated OpenStack installation in multi-node scenarios, but in any more sophisticated scenario, such as integration with a third-party SDN solution or a non-default storage backend, complexity grows quickly and raises the need for abstraction.
Before a single package can be installed, however, the underlying infrastructure has to be prepared. Since OpenStack usually runs on bare metal, physical hardware and network equipment needs to be installed and configured first. Adopting a reference architecture can accelerate this process. Once all machines are racked and cabled, the entire cloud can be deployed in less than an hour.
A successful deployment is just the first step toward an OpenStack private cloud. On "Day 2," the team may discover that requirements were not gathered carefully enough and the cluster is quickly running out of capacity. Fortunately, there are some spare nodes that can be borrowed, but how quickly can they be made part of the cluster? Can the OpenStack cloud scale out on demand?
Down the road, the OpenStack cloud will need an upgrade to the latest stable version. Yet, this can mean having to deploy a new cloud and migrate the workloads — a time-consuming process.
OpenStack maintenance is difficult, but fortunately, all of these complex operations — such as scaling out the cluster, relocating application units, or recovering a database — can be fully automated through technology that provides service orchestration capabilities.
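To show what "scaling out becomes a one-line operation" might look like, here is a hypothetical orchestration helper. The class and method names are invented for illustration only; real orchestration tools (Juju, for example) expose comparable primitives through their own CLIs and APIs.

```python
# Hypothetical orchestration sketch -- not any real tool's API.
class Cluster:
    def __init__(self, units):
        # application name -> number of running units
        self.units = dict(units)

    def scale_out(self, app, count=1):
        """Add `count` units of an application to the running cluster."""
        if app not in self.units:
            raise KeyError(f"unknown application: {app}")
        self.units[app] += count
        return self.units[app]

cluster = Cluster({"nova-compute": 3, "ceph-osd": 3})
cluster.scale_out("nova-compute", 2)   # bring two borrowed spare nodes into service
print(cluster.units["nova-compute"])   # -> 5
```

The point is the interface, not the bookkeeping: once operations are expressed as orchestrated actions rather than manual procedures, adding capacity on "Day 2" is a routine command instead of a project.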
There are situations where all of the above are insufficient. For example, what if a critical security vulnerability is found to be affecting the environment? Security patches and bug fixes usually become available within a few days, but what if the cloud needs patching in a few hours? What if there is no in-house knowledge to fix a simple issue? Furthermore, what if an organization simply lacks enough people to operate OpenStack?
In these cases, it is wise to bring in outside experts to assist with OpenStack operations, including support services and managed services. These services can cover critical security patches, 24/7 support, and more — all of which aim to ensure maximum uptime and stability.
Organizations may also consider fully offloading the effort and risk associated with OpenStack operations by hiring a partner for fully managed services. This allows the company to fully focus on maximizing the business value brought by OpenStack while outsourcing the challenges.
Although OpenStack is a complex system consisting of various interconnected services, by following these five steps, enterprises can significantly reduce the pains of the adoption process. As a result, they can reduce the budget spent on dedicated headcount and focus on maximizing the value brought by the cloud platform rather than struggling with its implementation.