Making Agile Real: The Zen of Continuous Delivery

By Andrew Phillips

Much of what is written about Continuous Delivery at present seems to revolve around technical challenges and technical choices: “Which is better: Puppet, Chef or Salt?”, “Should I use Jenkins, Go or XL Release?”, “How do I build a CD pipeline with containers?” etc. etc. If we’re looking at CD properly, though, this is the sideshow – an implementation detail at best. The real CD story is much bigger.

Don’t get me wrong: as a technical kind of person, I can get very enthusiastic about the tech: there are a lot of cool tools and frameworks out there and the tooling landscape is expanding rapidly. Working with evolving technology is a lot of fun, and there are plenty of challenges to be solved around supporting Continuous Delivery at scale…which is why we develop XL Release, XL Deploy and XL Test.

The fact that we now have a set of tools to automatically build and deploy applications, provision environments, run tests and more is not what I think is important about CD, however. Even the notion of wiring all these tools up efficiently to create delivery pipelines isn’t all that interesting. To me, the thing that is really exciting about Continuous Delivery is that it can allow us to fundamentally change the way we interact with our users. I think CD can finally allow us to turn software delivery into something approaching a modern consumer experience.

In short, Continuous Delivery isn’t a toolset, and it isn’t a business process either. Continuous Delivery is a new way of doing business.

What does this “new way of doing business” look like? What is the mindset, or the culture of a Continuous Delivery organization? To me, these are the most important aspects: focus on the end user and improve through data.

Part 1: Focus on the end user

Having a focus on the end user sounds like an obvious goal that every organization would claim to aspire to, but given the way we release software today, there is often an enormous gap between the people creating the software and those who actually use it. I’ve worked on numerous teams where we never really knew who was actually going to use the software, or why. We certainly didn’t have any personal contact with the end users of our software, and thus little ability to empathize with their needs and constraints, or put ourselves into their shoes to try to understand how the system could best work for them.

This gap, which was reinforced by organizational structures that put lots of layers between developers and users, resulted in something like an “emotional variant” of Conway’s Law: not only did we build systems that reflected the layers of the organization, we developed mentalities and boundaries of empathy to match: “The system is totally unworkable for the users? OK, but look how clean our repository layout is/how elegantly we’ve managed to decouple these components/how complete our test coverage is…”

If your team knows your users, if you have some kind of personal connection with them, if you meet them from time to time for a chat, to talk about software, but also baseball, or family, or your next vacation, or whatever…you cannot feel proud of a system that is totally unworkable for them.

It’s not enough just to create an emotional connection, however. In order to turn that intrinsic motivation into great software and a great user experience, every member of the team needs to understand how the overall service they are creating is put together and delivered to the user. They also need to be able to contribute to improving the service if they see an opportunity to do so – whether in “their patch” (their own area of expertise) or outside of it.

If you’re thinking that this sounds suspiciously like “product teams” or “end-to-end teams” or “DevOps,” you’re very much on the right track. The key point here, though, is that this goes beyond development, QA and Operations to include the business and, beyond them, the end users for whom the software is actually being built. And the aim of giving everyone visibility over the entire process, and the ability to influence it, is not because more pairs of eyes can spot more opportunities for optimization: it’s because a more engaged team that has a greater influence on the delivered service and a stronger connection to the end user is more motivated to look for improvements in the first place.

Part 2: Improve through data

If we have a team that is motivated to continuously make things better for our end users, how do we identify where the opportunities for improvement are in the first place? If we have given them the chance to make changes, how do we figure out whether those changes are helping? In a word: data, data and more data.

It’s not as though we don’t have any data today, of course. Most organizations collect tons of information on many, many aspects of system behaviour. But the vast majority of this information is of a technical, operational nature: in most enterprises, it’s much, much easier to figure out whether a particular CPU in your data center is doing OK than to understand whether a particular user of your services is satisfied.

Which of the two pieces of knowledge is more important for your business, however? And if it’s the user that really matters, why then are we basing so many of our decisions about the user-facing functionality and behaviour of our services on educated guesses and gut feel, without any kind of follow-up plan to figure out whether we guessed right?

Improving through data means applying the same methodology to making services better for our users that we apply to improving the technical performance of our systems (a minimal sketch follows the list below):

  1. Measure to identify the pain point
  2. Formulate a hypothesis as to what is causing the pain point
  3. Make a small change to a single part of the system based on the hypothesis
  4. Measure to check for the expected improvement
  5. Rinse and repeat
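
In practice, step 4 can be as lightweight as a two-proportion comparison on a single user-facing metric. Here is a minimal sketch in Python; the metric (checkout conversion) and the sample numbers are hypothetical, purely to illustrate the shape of the check.

```python
from math import sqrt

def conversion_lift(control_users, control_conversions,
                    variant_users, variant_conversions):
    """Compare a user-facing metric before and after a small change.

    Returns the observed lift and an approximate z-score for the difference
    between the two conversion rates (a simple two-proportion test).
    """
    p_control = control_conversions / control_users
    p_variant = variant_conversions / variant_users
    # Pooled rate under the null hypothesis "the change made no difference".
    pooled = (control_conversions + variant_conversions) / (control_users + variant_users)
    std_err = sqrt(pooled * (1 - pooled) * (1 / control_users + 1 / variant_users))
    return p_variant - p_control, (p_variant - p_control) / std_err

# Hypothetical numbers: 10,000 users on the old checkout flow, 10,000 on the changed one.
lift, z = conversion_lift(10_000, 410, 10_000, 468)
print(f"lift={lift:.4f}, z={z:.2f}")  # |z| above roughly 1.96 suggests a real effect rather than noise
```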

So: no more all-night feature brainstorming sessions; instead, suggestions based on accurate, detailed information on how your users are actually interacting with your services. Feature requests with measurable (!) estimates of how the feature is supposed to affect users. Service architectures that allow for small changes to be quickly and efficiently deployed to production, and reverted if necessary. And, of course, applications that are designed to collect whatever data is required to allow the impact of new features to be measured and the associated hypothesis to be verified or disproved.
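
That last point means the application itself has to emit the events that later verify or disprove the hypothesis. A minimal sketch, with hypothetical event names and a local file standing in for whatever analytics pipeline the organization actually runs:

```python
import json
import time

def track(event_name, user_id, properties=None):
    """Record one user interaction so a feature's hypothesis can be checked later.

    Here the event is appended to a local newline-delimited JSON file; in a real
    system it would go to whatever analytics collector the organization already uses.
    """
    event = {
        "event": event_name,
        "user_id": user_id,
        "timestamp": time.time(),
        "properties": properties or {},
    }
    with open("events.ndjson", "a", encoding="utf-8") as sink:
        sink.write(json.dumps(event) + "\n")

# Hypothetical usage: the feature request "one-click reorder will raise repeat purchases"
# becomes two measurable events, emitted where the feature is actually used.
track("reorder_button_shown", "u-123", {"variant": "one-click"})
track("reorder_completed", "u-123", {"variant": "one-click"})
```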

Enough dreaming already…?

If you’re thinking that this all sounds very nice but will certainly not happen anytime soon in your organization, bear this in mind: you’re probably working with people in your own organization who are doing this today. Tell this story to your colleagues in Marketing or even HR and you’re quite likely to hear that this is all rather old hat. Advertising campaigns designed and targeted by analyzing the effect of previous efforts. A/B testing to help choose between multiple options, or to ensure comparison against a valid baseline. Full instrumentation to track every user interaction and collect all relevant metrics. Standard stuff in Marketing nowadays.
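
The A/B testing mentioned above rests on one small technical ingredient: deterministic variant assignment, so that each user consistently sees the same variant and the control group remains a valid baseline. A minimal sketch, with hypothetical experiment and variant names:

```python
import hashlib

def assign_variant(user_id, experiment, variants=("control", "new-flow")):
    """Deterministically bucket a user into one experiment variant.

    Hashing (experiment, user_id) means the same user always lands in the same
    bucket, which keeps the comparison against the baseline valid across visits.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode("utf-8")).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Hypothetical usage: stable for this user and experiment on every call.
print(assign_variant("u-123", "checkout-redesign"))
```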

Or take HR: Personal development programs tailored to individual preferences and learning styles. Training courses that adapt to your current performance and suggest additional programs that might be useful for you. Again, all pretty normal today.

And all of it powered by software! The stuff that we are writing. It’s more than a little ironic that we have made it possible for so many fields to revolutionize their ways of working, yet have so far been unable to provide a better user experience ourselves.

That, to me, is what Continuous Delivery (and, by extension, Agile) is all about: revolutionizing the way we serve our users. Making the delivery of IT services a modern experience. An experience we can be proud of.

Let’s do it!
