Four Ways to Manage Rapidly Evolving DevOps Tools

We all want rapid innovation, but we don't want to have to integrate with it every day.

What Is a DevOps Toolset?
DevOps is all about removing barriers to rapid, safe delivery of new experiences to your customers. Much of this revolves around automating error-prone, human-driven processes so that they can be standardized, scaled, and varied programmatically. Some of the types of tools used in a DevOps-minded organization might include version control systems, automation servers, and configuration management systems. Many tools can be used across categories, with varying degrees of success. Some vendors offer products that claim to address all of these needs with one solution; few deliver on that promise.

The Pain
Automation is not helpful if it is isolated within each stage of the software delivery and operations cycle. Steps need to flow continuously, with each tool integrating into the overall picture: your CI server needs to talk to your version control system, and probably your IaaS system (for ephemeral test environments). The IaaS system has to talk to the monitoring system, to ensure new nodes are monitored and decommissioned nodes are not flagged as down. And everything has to talk to the communications layer, without filling the channel with noise.
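
To make that flow concrete, here is a minimal sketch of the kind of glue a CI job might run after provisioning an ephemeral test node: register the node with monitoring and post a short notice to the team channel. The endpoints, payload fields, and hostnames are hypothetical placeholders for illustration, not any particular product's API.

# Illustrative glue a CI job might run after provisioning an ephemeral
# test node. All URLs, payload fields, and hostnames are hypothetical
# placeholders, not a real product's API.
import requests

MONITORING_API = "https://monitoring.internal.example/api/v1"   # hypothetical
CHAT_WEBHOOK = "https://chat.internal.example/hooks/deploys"    # hypothetical

def register_node(hostname, build_id):
    # Tell the monitoring system about the new node so it is watched from
    # the moment it exists, and tagged for later decommissioning.
    resp = requests.post(
        f"{MONITORING_API}/hosts",
        json={"hostname": hostname, "tags": ["ephemeral", f"build-{build_id}"]},
        timeout=10,
    )
    resp.raise_for_status()

def announce(hostname, build_id):
    # Keep the communications layer informed without flooding it: one short
    # message per provisioned environment.
    requests.post(
        CHAT_WEBHOOK,
        json={"text": f"Test environment {hostname} is up for build {build_id}"},
        timeout=10,
    )

if __name__ == "__main__":
    register_node("test-node-42.internal.example", "1234")
    announce("test-node-42.internal.example", "1234")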

This is a huge integration job, especially as each piece is changing rapidly. As the DevOps landscape evolves, it can be very hard to get stability in your architecture.

Approach: Stop Time
One approach is to simply adopt a current version of a tool and commit to using it. Instead of dealing with upgrades and tool changes, your team's time will now be focused on delivering features. This may sound appealing, as it is much easier to certify a toolset for security, and to fully operationalize it, if it does not change. Each toolset change requires changes in run books and training, so fewer changes mean fewer team-impacting shifts.

Unfortunately, this approach is typically not practical for more than a brief period of time (perhaps a year). Any new features or bug fixes for the tool will not be available to the team and, worse, as tools come in and out of favor, you may find yourself using a tool that no longer has community support or documentation, or has reached end-of-life. In addition, if one of the tools is a SaaS offering (which is increasingly likely), you may find that the provider has dropped support for your desired version of the API, or shifted its product line to an entirely new API structure.

More subtly, the implementation of the tools may make it very difficult to freeze time. For example, many tools have cross-dependencies on libraries that result in forced upgrades when you upgrade one component. While this is tedious on a day-to-day basis, it can be disastrous if left unmanaged for long periods: you may want to upgrade one piece because of a security issue, but be forced to upgrade several pieces because of overly tight version dependencies. It may even force you to swap out a tool that is no longer actively developed and supported. Each dependency change can have a snowball effect, and the longer the interval between updates, the more likely it is to become an avalanche.
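
One way to keep a frozen toolchain honest is to record the versions you certified and check for drift. Below is a minimal sketch; the pinned manifest, the tools listed, and the version numbers are assumptions for illustration, and the parsing simply grabs whatever --version output each CLI happens to emit.

# Compare the tool versions actually installed against a frozen manifest,
# so a "stopped clock" toolchain at least reports drift. The manifest and
# version numbers below are illustrative assumptions only.
import json
import subprocess

PINNED = {
    "terraform": "0.11.7",
    "chef-client": "12.21.31",
}

def installed_version(tool):
    # Most CLI tools answer --version, but output formats differ, so grab
    # the first token that starts with a digit.
    try:
        out = subprocess.run([tool, "--version"], capture_output=True, text=True).stdout
    except FileNotFoundError:
        return "not installed"
    for token in out.replace("v", " ").split():
        if token and token[0].isdigit():
            return token
    return "unknown"

def report_drift():
    # Report every tool whose installed version no longer matches the pin.
    drift = {}
    for tool, pinned in PINNED.items():
        actual = installed_version(tool)
        if actual != pinned:
            drift[tool] = {"pinned": pinned, "installed": actual}
    print(json.dumps(drift if drift else {"status": "all tools match pinned versions"}, indent=2))

if __name__ == "__main__":
    report_drift()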

Approach: Try to Keep Up
On the opposite end of the spectrum from stopping time, some shops choose to constantly stay on the bleeding edge of the tooling space and adopt tools, versions and workflows as they become available. Typically, passionate individuals will follow a particular project closely, which can translate to expertise within the team and excellent support for the tool.

More commonly, however, individual passion does not translate directly into organizational success. Not all groups will have the same tolerance for instability and integration rework, so it's easy for this approach to become divisive and increase friction among teams. Enthusiasm for the tool may wane, or the passionate individual may be reassigned or leave the company. Even if support for a tool is broad, staying on top of the latest changes will always require significant effort and involve tradeoffs: the tools that integrate with the new tool may not yet support its new features.

One compromise that often works well is to have a "skunk works," or R&D group, that experiments with new techniques, then sees them through to adoption once they are stable and integrated with the rest of the toolchain in use. It is important to brand this group as a tooling team, not a "DevOps Team."

Approach: Outsourcing
Whenever you're faced with a large integration project across several closely related tools, consider looking to a major vendor for an integrated approach. For example, using a combination of AWS offerings, you can construct a working, nearly complete DevOps toolchain that you can be assured will work well together. By spending more on professional services, you can also have Amazon build out any missing pieces of integration to ensure you have a smooth flow. Other vendors provide similar offerings.

Vendor lock-in is a major drawback here. It will be very difficult, if not prohibitive, to switch providers at a later date: the APIs, tool features, and capabilities will be similar but different enough to invalidate all integration efforts to date. Additionally, outsourcing the entire toolchain will not be cheap, and the ever-evolving nature of the tools means that the custom development cost will never go away - the integration work will never be "done."

Most pernicious of all, by hiring people outside your organization to build your DevOps toolchain, you explicitly push out the lessons learned by having developers and operations staff work side by side to solve each other's problems. When people share their problems, they tend to come up with solutions quickly. But if a vendor is providing the interface, tooling, and support, there is a big wall between the people who encounter a new problem and the people who remedy it. That is antithetical to the DevOps approach, and your team will not magically "become DevOps-y" if they don't actually solve problems together.

Approach: Make a Local Wrapper API
Increasingly, the underlying components of the toolchain are offered as services with an API in front of them. In some cases the service runs on-premises, in others it is SaaS; either way, the coupling is much looser. This allows you to write your own API endpoints, which perform the tasks your internal customers need while calling out to the various back-end tools and services. Users need not know which components are actually tools (possibly with awful dependencies) and which are services; in some cases, you may also choose to hide provider-specific details, such as which cloud provider was used to provision a node. Passionate individuals can work on the internal API layer, adapting it to the latest version when new features are desirable. This can usually be done while still providing a stable, backwards-compatible API to the internal customer. The local wrapper API is also an ideal location for various bits of integration (security checks, inventory management, and so on).
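
As a rough illustration, a wrapper endpoint might look like the sketch below, here written with Flask. The route names, the /v1 prefix, and the stubbed provider call are assumptions for illustration, not a prescribed design.

# Minimal local wrapper API sketch: internal customers ask for a node and
# never learn which provider or tool fulfilled the request. Route names,
# the /v1 prefix, and the stubbed back end are illustrative assumptions.
from flask import Flask, jsonify, request

app = Flask(__name__)

def provision_with_provider(size, region):
    # In a real wrapper this would call the chosen cloud provider's SDK or
    # an on-premises tool; it is stubbed here so the internal contract stays
    # stable even if the back end changes.
    return {"id": "node-0001", "size": size, "region": region, "state": "building"}

@app.route("/v1/nodes", methods=["POST"])
def create_node():
    body = request.get_json(force=True)
    node = provision_with_provider(body.get("size", "small"), body.get("region", "us-east"))
    # A natural place for cross-cutting integration: security checks,
    # inventory registration, chat notifications, and so on.
    return jsonify(node), 202

if __name__ == "__main__":
    app.run(port=8080)

Because internal customers only ever see the stable /v1 contract, the team behind the wrapper can swap out the provisioning back end, or chase a tool's latest version, without breaking callers.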

Locally developed APIs are not without their drawbacks. Each internal customer has to agree to use the API; the documentation and support must be excellent, and the value compelling, or people will revert to simply using the various tools directly. That may not be a problem, especially for groups with unusual needs. Interface levelling - in which unique features of a provider are masked in favor of broader commonalities - can often be more of a problem; in some cases, it may make sense for a user needing Azure-specific features to have an Azure-specific part of the API, for example. Finally, it can easily turn into "one API to rule them all," with scope creep pulling more and more services under a roof that was never designed to accommodate such diversity.

That highlights an important aspect of the local wrapper API: when internal customers have unmet needs, they can reach out internally to the API developers and operators - which, in fact, may be the same people.

There is no obvious, simple way forward when dealing with rapidly changing toolsets. Each approach has serious drawbacks, but also some compelling advantages. Most organizations will end up using a mixture of approaches - perhaps "stopping time" with automation tools like Chef, staying on the bleeding edge by leveraging the latest features of a platform like AWS, and gluing together their monitoring, communications, and inventory control systems with a custom local API. Approaches will vary from group to group as well. As with anything in DevOps, the goal is not some ideal destination as implemented at a unicorn company, but rather gradual, continuous improvement to the processes that most impact your ability to deliver and operate quality software quickly.

More Stories By Clinton Wolfe

Clinton Wolfe leads the DevOps Practice at OmniTI and has been helping organizations deploy and operate web-scale applications since the early days of the web. He is especially versed in testable infrastructure, the people problems of highly constrained workplaces, and aligning business needs with engineering capabilities. He collects metrics for fun.
