SOA Is Here - Are You Ready for IT?

How loosely coupled applications and their need for stronger governance will impact your IT organization

While significant attention has been paid to the benefits offered by service-oriented architecture (SOA), and in turn to a growing understanding of the challenges that SOA poses, far less consideration has been given to the changes that this approach will impose on the IT organization itself. With the discussion around SOA having shifted from "if" and "why" to "when" and "how," organizations embarking on an SOA strategy must now address three important questions: How will you manage your SOA? How will you pay for it? And how will you staff it?

As most would agree, using existing services within an SOA to develop and support new applications gives IT the opportunity to take a quantum leap forward in productivity and efficiency. As a result, enterprises can address a variety of process requirements faster and more completely than otherwise possible. However, this expanded reuse of existing assets is predicated on consistent adherence to common standards, which requires most IT organizations to demonstrate far more discipline around governance than they have delivered to date. In practice, the faster development cycles this approach produces run headfirst into the greater scrutiny an SOA demands, which significantly reduces the margin of error and eliminates many of the safety nets on which enterprises have come to rely.

Consequently, Eric Austvold of AMR Research recently wrote [Service-Oriented Architectures: The Promise and the Challenge (October 6, 2005)], "SOA will expose the gap between the disciplined and undisciplined IT organization, creating the opportunity for fantastic success and spectacular failure." For example, competing SOA fiefdoms are rising in some organizations. At some point, mass confusion will emerge as these systems are unable to reconcile which "get credit" service is which. Instead of using SOA to streamline their operations, these organizations run the risk of adding further complexity as a new layer of middleware - the super SOA - is needed to coordinate the various initiatives. The end result is that this "hybrid" approach further limits abstraction, cost effectiveness, and enterprise flexibility. In short, the approach to developing, deploying, and managing enterprise applications within an SOA needs to change if the promised benefits are to be secured, and that change reaches deep into the IT organization itself.

The Rise of the Shared Service Organization
Most IT organizations are already familiar with the concept of a shared service organization, which is often used to support "common" enterprise assets such as mainframe computing, networks, and the corporate database infrastructure. As applications increasingly become universal enterprise services, these individual services need to be viewed as shared corporate assets as well. The rationale for a shared service model, already proven for other asset management requirements, therefore begins to make sense here too.

For example, while many application development and deployment functions will remain closely tied to specific business units or operating groups, there is also an overriding need for the enterprise itself to govern the use of these common assets. As a matter of fact, the effectiveness of these governance efforts will be the key determinant of SOA success. Granted, some of these governance issues are technological in nature and can be solved with centralized registries, automated service monitoring, a common metadata repository, or through the use of an enterprise service bus. However, an even more fundamental need exists to simply define the standards that these technologies will use and to monitor and enforce usage requirements across the asset life cycle. To fulfill this requirement, an SOA Center of Excellence is needed.

Depending on the unique parameters of the organization and its culture, the role of the SOA Center of Excellence can range from light oversight and simple coordination to overriding responsibility for the delivery of services. In any of these scenarios, the fundamental goal should be the elimination of any doubt regarding the appropriate usage of a specific asset, and the SOA Center of Excellence should ultimately deliver the discipline and coordination needed to ensure efficient and effective operations.

As such, an SOA Center of Excellence should be entrusted with maintaining a single view of available services via a common registry along with their definitions. This organization would also be responsible for the enforcement of specific standards that govern usage such as metadata models, versioning standards, release protocols, and testing procedures.
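
To make the idea of a governed registry entry more concrete, the following sketch models the kind of record a common registry might hold. The field names and the ServiceDefinition class are illustrative assumptions on my part, not a reference to any particular registry product or standard schema.

// Illustrative sketch only: the fields shown (owner, version, releaseStatus)
// are assumptions about what an SOA Center of Excellence might mandate,
// not a standard registry schema.
public record ServiceDefinition(
        String name,          // unique service name in the common registry
        String owner,         // accountable business or IT group
        String version,       // governed by the enterprise versioning standard
        String releaseStatus, // e.g., DRAFT, CERTIFIED, DEPRECATED
        String wsdlUrl) {     // pointer to the formal interface definition

    public static void main(String[] args) {
        ServiceDefinition svc = new ServiceDefinition(
                "GetCredit", "Consumer Lending", "1.2.0", "CERTIFIED",
                "https://services.example.com/credit/v1?wsdl");
        System.out.println(svc); // one governed, discoverable service record
    }
}

Keeping every service described in one such place is what allows usage standards to be monitored and enforced across the asset life cycle.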

Beyond managing these services, the SOA Center of Excellence can also deliver the training and additional development standards needed to ensure a common SOA development methodology. The most forward-looking enterprises will even look to this organization to help prioritize long-term technology investments against existing business and IT requirements, with the goal of ensuring that the SOA fully supports all of them.

Another important role for the SOA Center of Excellence is helping to overcome the human factors that can limit service reuse. As anyone who has ever run a development shop can attest, many projects are hampered by user concerns about the quality or suitability of "third-party" services, or by an unwillingness to make up-front investments whose benefits accrue mainly to those who later reuse the service. To overcome this grassroots resistance by developers, a variety of "carrot & stick" approaches can and should be employed, and many of the enforcement tools fall under the existing mandate for service governance. On the carrot side, reuse of existing services can be encouraged by integrating registry information into the development platform to maximize awareness of available services, an approach typically supported by other forms of educational outreach. Because the ultimate goal is to create a culture in which service reuse is recognized and appreciated, it's not unreasonable to suggest that organizations tally "reusage" and reward contributors accordingly.
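
Because tallying "reusage" is easy to mechanize, here is a minimal sketch of how a registry-backed counter might track it. The ReuseTally class and its methods are hypothetical names invented for illustration.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: count how many projects bind to each existing
// service so that reuse can be recognized and rewarded.
public class ReuseTally {
    private final Map<String, AtomicInteger> counts = new ConcurrentHashMap<>();

    // Called whenever a new project registers its use of an existing service.
    public void recordReuse(String serviceName) {
        counts.computeIfAbsent(serviceName, k -> new AtomicInteger()).incrementAndGet();
    }

    public int reuseCount(String serviceName) {
        AtomicInteger c = counts.get(serviceName);
        return (c == null) ? 0 : c.get();
    }

    public static void main(String[] args) {
        ReuseTally tally = new ReuseTally();
        tally.recordReuse("GetCredit");
        tally.recordReuse("GetCredit");
        System.out.println("GetCredit reused " + tally.reuseCount("GetCredit") + " times");
    }
}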

Paying the Piper
Of course, these added development and management steps produce additional up-front costs that the enterprise must address if it is to enjoy the benefit of subsequent reuse. With regard to specific models for addressing development costs, a number of approaches have already begun to emerge. The simplest to implement is what I would call the "anti-enterprise" model, in which these additional costs are borne solely by the development group, because they're the ones in the most immediate need of the core functionality. The additional cost associated with service enablement simply becomes a mandated requirement for all development efforts. Unfortunately, this approach is often shortsighted because it gives little incentive, outside of decree, for investing the additional funds needed to ensure widespread reuse of the developed service. As such, organizations are left to pursue the bare minimum as opposed to the optimal.

Likewise, some organizations have taken a "head in the sand" approach that ignores the issue of added cost altogether, arguing that service reuse is so new a concept that little data exists for developing a cost model. The true cost of service enablement is therefore left out of the overall budget. The challenge this creates is that the IT organization or business group may later be unable to show an effective ROI for these projects. Thus, users have an incentive to do the bare minimum possible, including avoiding the requirement altogether.

Arguably, the best approach is to recognize these costs up front, because doing so encourages both accountability and efficiency throughout the development process. For example, the added cost of service enablement can be defined as a fixed percentage of the total project cost, with these additional costs fully borne by a dedicated source of enterprise funding. With regard to specific budget parameters, a recent study by the Aberdeen Group offers some guidance. According to the research firm, a $10 billion company with a $300 million annual IT budget can save $30 million a year within five years by service-enabling 75 percent of its applications. As such, a $2 million fund for service enablement would produce a very favorable ROI.
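
Working the Aberdeen figures through as simple arithmetic shows why. The savings and budget numbers below come from the study as cited; treating the $2 million fund as an annual outlay is my own simplifying assumption.

// Simple arithmetic on the Aberdeen figures cited above. Treating the
// $2 million enablement fund as an annual outlay is an assumption made
// for illustration.
public class SoaRoiSketch {
    public static void main(String[] args) {
        double annualSavings  = 30_000_000; // projected savings per year
        double enablementFund =  2_000_000; // dedicated enablement funding
        double netBenefit = annualSavings - enablementFund;
        double roiRatio   = netBenefit / enablementFund;
        System.out.printf("Net annual benefit: $%,.0f%n", netBenefit); // $28,000,000
        System.out.printf("Return on the fund: %.0fx%n", roiRatio);   // 14x
    }
}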

In addition, enterprise budget models also need to address the costs associated with actual usage. For example, who bears the budgetary impact when a service developed by your group is subsequently employed as the cornerstone of another group's business model? For most organizations, the chargeback mechanisms or other activity-based pricing they already employ will become the model for funding these ongoing costs. Specific mechanisms could include shared service units, in which costs are closely tied to consumption; tiered service units, which make allowance for each group's business objectives and modify pricing accordingly; or an enterprise pool model that relies on headcount or other non-usage-based metrics. The important point to remember is that these fees are in lieu of additional development costs, and therefore represent significant savings for the business.
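
To contrast the three mechanisms, consider the rough sketch below. All rates, call volumes, and headcounts are invented for illustration; real chargeback schedules would come from the enterprise's existing pricing models.

// Rough sketch contrasting the three chargeback mechanisms described
// above. Every rate and usage figure here is invented for illustration.
public class ChargebackSketch {
    // Shared service units: cost tied directly to consumption.
    static double sharedServiceUnits(long calls, double ratePerCall) {
        return calls * ratePerCall;
    }

    // Tiered service units: consumption pricing adjusted for each
    // group's business objectives via a tier multiplier.
    static double tieredServiceUnits(long calls, double baseRate, double tierMultiplier) {
        return calls * baseRate * tierMultiplier;
    }

    // Enterprise pool: costs allocated by headcount, independent of usage.
    static double enterprisePool(double totalCost, int groupHeadcount, int totalHeadcount) {
        return totalCost * groupHeadcount / totalHeadcount;
    }

    public static void main(String[] args) {
        System.out.printf("Shared service units: $%,.2f%n", sharedServiceUnits(120_000, 0.05));
        System.out.printf("Tiered service units: $%,.2f%n", tieredServiceUnits(120_000, 0.05, 0.8));
        System.out.printf("Enterprise pool:      $%,.2f%n", enterprisePool(500_000, 40, 1_000));
    }
}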

About the Author

Lance Hill is the vice president of webMethods' product and solution marketing, where he leads a number of strategic initiatives focused on the development, commercialization, and adoption of webMethods' SOA-based technology. Prior to joining webMethods, he served as the vice president of enterprise engineering, and later of the Fusion Technology Group, at National City Bank. In this capacity, he spearheaded the creation of an internal, end-to-end solution delivery and support organization with responsibility for integration, application development, workflow, imaging, business intelligence, and portal technology.
