Virtualization 2.0 Is All About Manageability

IT organizations need a new breed of management tools

In this example, if the applications supporting the business service were running on physical machines, we would have concluded, based on the earlier scenario, that the database server was the root cause of the performance problem. However, the applications are in fact running in a virtual infrastructure, i.e., inside VMs. Suppose the Oracle database server is running on the same physical server as a Citrix application and a media server (see Figure 2). A sudden surge in requests to the media server can drive disk access on the physical server up to the point where the disk becomes a bottleneck.

Figure 2: Root-cause diagnosis in a virtual infrastructure is even harder than in a physical infrastructure.  Oracle, Citrix, and Media Server applications are hosted on VMs residing on the same physical server. A sudden surge in requests to the media server causes excessive disk reads on the physical server, thereby slowing down the performance of the Oracle database server.

At this stage, queries handled by the database server start to take longer and longer. Thus the database slowdown in Figure 1 may actually be caused by a sudden increase in workload to the media server in Figure 2. In this case, the root cause of the problem is a disk bottleneck on the physical server caused by an increase in workload for the media server application.

From this example, it should be clear that root-cause diagnosis technologies for virtual environments need to go beyond how they operate in a physical world. For true root-cause diagnosis, the VMs running on each physical server must be auto-discovered, the applications running inside each VM must be detected, and the monitoring system should automatically determine which applications coexist on the same physical server. This information is then used to determine where the root cause of a problem lies, as the sketch below illustrates.
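To make this concrete, here is a minimal sketch in Python of how auto-discovered co-location knowledge might be used; the inventory map, per-VM disk I/O figures, and threshold are all invented for illustration, since a real monitoring product would discover and measure these automatically:

    # Hypothetical inventory: physical host -> {VM -> application inside it}.
    inventory = {
        "host-01": {"vm-oracle": "Oracle DB", "vm-citrix": "Citrix",
                    "vm-media": "Media Server"},
        "host-02": {"vm-web": "Web Server"},
    }

    # Hypothetical per-VM disk read rates (operations/sec) sampled by the monitor.
    disk_io = {"vm-oracle": 120, "vm-citrix": 80, "vm-media": 4500, "vm-web": 60}

    def co_located_suspects(slow_vm, threshold=1000):
        """List sibling VMs on the same physical server whose disk I/O exceeds
        a threshold -- candidates for a 'noisy neighbor' root cause."""
        for host, vms in inventory.items():
            if slow_vm in vms:
                return [(vm, vms[vm], disk_io[vm])
                        for vm in vms
                        if vm != slow_vm and disk_io[vm] > threshold]
        return []

    print(co_located_suspects("vm-oracle"))
    # -> [('vm-media', 'Media Server', 4500)]: the media server, not the
    #    database, is the likely root cause of the disk bottleneck.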

The extent of the automation determines the cost savings that a monitoring solution offers. Reduced downtime contributes directly to a business's bottom line. Further, by pinpointing the root cause of a problem, a monitoring solution can save the endless hours of finger-pointing that go on in most IT organizations, yielding cost savings from improved operational efficiency and fewer staff hours spent in routine fire-fighting.

7. Scale as the monitored infrastructure grows - As virtualization penetrates the enterprise, a large deployment will have hundreds of physical servers and thousands of VMs that require monitoring. In fact, as desktop virtualization becomes popular, the ratio of VMs to physical servers could be as high as 30:1. The monitoring solution must be able to scale to handle infrastructures of this size.

8. Support for virtualized desktop environments - Virtual Desktop Infrastructure (VDI) is emerging as a viable alternative to Citrix- and terminal server-based remote access technologies. For situations where each user requires his or her own desktop rather than shared access to an operating system (e.g., for software development or to run a legacy application), VDI is becoming the technology of choice for remote access.

Virtual desktop environments have different characteristics from environments where VMs are used to host server applications such as databases and web servers (see Table 1). VDI environments also bring an ecosystem of new application technologies, such as connection brokers and terminal access controllers. A Virtualization 2.0 Ready monitoring solution should be capable of handling the diverse monitoring requirements of both virtual server and virtual desktop environments; the short data-model sketch after Table 1 contrasts the two.

Virtualized Application Server Environments | Virtual Desktop Environments
Few VMs (<10) per physical server | 30-40 VMs per physical server
VMs are mostly powered on all the time | VMs are powered on/off dynamically
Monitoring is mostly from the VM perspective (which VMs are on, what resources they are using) | Monitoring is needed from the user perspective (who is logged in, what resources they are using)
In-depth application monitoring is required (Citrix, Oracle, etc.) | In-depth monitoring of the applications on the desktop is not required

Table 1: Differences exist in monitoring requirements between virtualized application server environments and virtual desktop environments. A Virtualization 2.0 Ready monitoring solution should be able to handle both environments.
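One way to read Table 1 is as two different monitoring record shapes. The field names below are illustrative assumptions, not a real product's schema; they simply contrast a VM-centric sample with a user-session-centric one:

    from dataclasses import dataclass

    @dataclass
    class ServerVMSample:            # virtualized application server environment
        vm_name: str                 # which VM is powered on
        cpu_pct: float               # what resources the VM is using
        app: str                     # in-depth app target (Oracle, Citrix, ...)

    @dataclass
    class DesktopSessionSample:      # virtual desktop environment
        user: str                    # who is logged in
        vm_name: str                 # desktop VM, powered on/off dynamically
        cpu_pct: float               # what resources that user's session uses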

9. Offer personalized views for the various stakeholders in an organization to enable collaborative management - Different stakeholders responsible for supporting a business service need different views of the monitored infrastructure. Virtualization administrators, application experts, database administrators, infrastructure architects, help desk personnel, and capacity planners each have distinct roles and responsibilities. The monitoring system must be flexible enough to provide each stakeholder with views aligned with his or her role in the organization, as sketched below.
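A minimal sketch of such role-based views follows; the role names and metric keys are assumptions invented for the example, not any product's schema:

    # Hypothetical role-to-metric mapping; a real product would make this configurable.
    VIEWS = {
        "virtualization_admin": {"host_cpu", "vm_cpu", "vm_memory"},
        "database_admin":       {"oracle_query_time", "oracle_sessions", "vm_cpu"},
        "help_desk":            {"service_status", "user_sessions"},
        "capacity_planner":     {"host_cpu", "host_memory", "vm_count_trend"},
    }

    def view_for(role, all_metrics):
        """Project the full metric set down to what this stakeholder needs."""
        wanted = VIEWS.get(role, set())
        return {k: v for k, v in all_metrics.items() if k in wanted}

    metrics = {"host_cpu": 72, "vm_cpu": 55, "oracle_query_time": 340,
               "service_status": "degraded", "user_sessions": 112}
    print(view_for("help_desk", metrics))
    # -> {'service_status': 'degraded', 'user_sessions': 112}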

Organizational Process Challenges in Virtualization 2.0
While the previous discussion focused on the monitoring requirements for Virtualization 2.0, it is equally important to understand that Virtualization 2.0 also affects the core of most organizations' operational processes.

Most organizations handle VM provisioning much as they do physical server procurement. Business units and application owners specify the sizing of the virtual machines they need, and the virtualization group provisions the VMs on the physical servers it manages. The virtualization group, however, usually has no visibility into which applications are hosted inside the VMs. As long as the physical servers are overprovisioned and few VMs run in parallel, this siloed approach, in which the virtualization group and the application teams do not interact, is sufficient.

But with Virtualization 2.0, organizations seek better return on investment for virtualization technologies and deploy more complex applications inside virtual environments. Now it is no longer sufficient for the virtualization group to remain oblivious to the resource requirements of the application groups and their VMs. For instance, two memory-intensive applications hosted on the same physical server may contend for the same resources, thereby affecting each other's performance.

Of course, by strictly partitioning the resource usage of each VM, the virtualization group can offer performance guarantees. But this has two key disadvantages. First, strict partitioning reduces the possibility of resource sharing across VMs, thereby limiting the consolidation benefits that virtualization offers. Second, due to limitations in virtualization technologies, not all resources can be completely isolated across virtual machines; disk I/O is one example. Hence, Virtualization 2.0 requires that virtualization groups play a more active role in how VMs are provisioned: understanding which applications are to be hosted in each VM, what assumptions have been made regarding their workloads and resource requirements, and how the workload of each application varies over time and with load. All of these details are essential for effective load balancing and for optimizing resource usage in a virtual infrastructure.

For example, by hosting a memory-intensive application and a CPU-intensive application on the same physical server, rather than placing all the CPU-intensive applications together, the virtualization group can make the best use of the available resources. The sketch below illustrates this pairing idea.
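A toy placement heuristic can make this concrete. The resource profiles and the greedy pairing below are invented for illustration; real placement must also account for capacity, affinity rules, and workload variation over time:

    # Made-up workload profiles: fraction of a host's CPU and memory each app needs.
    apps = {
        "batch-analytics": {"cpu": 0.8, "mem": 0.2},   # CPU-intensive
        "in-memory-cache": {"cpu": 0.2, "mem": 0.8},   # memory-intensive
        "report-engine":   {"cpu": 0.7, "mem": 0.3},
        "session-store":   {"cpu": 0.3, "mem": 0.7},
    }

    def pair_complementary(apps):
        """Greedily pair the most CPU-hungry app with the most memory-hungry
        one, so co-located apps contend for different resources."""
        cpu_heavy = sorted(apps, key=lambda a: apps[a]["cpu"], reverse=True)
        mem_heavy = sorted(apps, key=lambda a: apps[a]["mem"], reverse=True)
        placed, pairs = set(), []
        for c in cpu_heavy:
            if c in placed:
                continue
            partner = next((m for m in mem_heavy
                            if m not in placed and m != c), None)
            placed.update({c, partner} if partner else {c})
            pairs.append((c, partner))
        return pairs

    print(pair_complementary(apps))
    # -> [('batch-analytics', 'in-memory-cache'), ('report-engine', 'session-store')]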

Yet another problem that virtualization administrators have to contend with under Virtualization 2.0 is finger-pointing and problem diagnosis (see Figure 3). A single business service often spans multiple application and network tiers, so when a problem occurs, it is unclear what caused the problem; i.e., is it the network? The application? The database? The server? In a virtualized infrastructure, there are additional possibilities for where the problem could lie: In a VM? In the physical server? In the hardware? In the virtual network interface? In the SAN?

Figure 3: Monitoring an IT infrastructure as silos does not suffice because finger-pointing across silo administrators takes endless hours, resulting in high downtime for the business service.

Since most administrators already have silo tools for monitoring and management, there is no common dashboard from which the entire infrastructure can be monitored and diagnosed. Without one, virtualization administrators must get accustomed to working in a multi-silo organization where finger-pointing is common. Monitoring and management solutions that provide deep visibility into every layer of every tier of the infrastructure, and that serve as a common dashboard for all the different administrators in an organization, can go a long way toward ensuring that Virtualization 2.0 environments operate properly; the simple sketch below suggests one way such a dashboard can shortcut the finger-pointing.
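As an illustration, the following sketch (the layer names and dependencies are assumptions made up for this example, not any product's model) orders simultaneous alerts by how low they sit in the dependency stack, so triage starts at the most fundamental alerting layer instead of with each silo defending its own tier:

    # Hypothetical dependency chain: each layer depends on the one below it.
    DEPENDS_ON = {
        "business_service": "application",
        "application": "database",
        "database": "vm",
        "vm": "physical_server",
        "physical_server": "san",
    }

    def depth(layer):
        """Count how many layers sit below this one; 0 = bottom of the stack."""
        d = 0
        while layer in DEPENDS_ON:
            layer = DEPENDS_ON[layer]
            d += 1
        return d

    # Three tiers are alerting at once; examine the bottom-most layer first,
    # since its failure can explain the symptoms above it.
    alerts = ["application", "database", "san"]
    print(sorted(alerts, key=depth))
    # -> ['san', 'database', 'application']: start triage at the SAN.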

Conclusion
Virtualization 2.0 calls for fundamental changes in how virtualized environments are monitored if that monitoring is to be effective and efficient. This article outlined the key management and organizational challenges that must be overcome as the use of virtualization continues to increase in production enterprise environments.


About the Author

Srinivas Ramanathan is the founder and CEO of eG Innovations (www.eginnovations.com), a global provider of performance monitoring and triage solutions for both virtual and physical IT infrastructures. The company’s eG VM Monitor software was chosen as the Gold level winner in the Application and Infrastructure Management category in the Best of VMworld 2008 Awards. He has a PhD in computer science and engineering from the University of California, San Diego.
