
The Goldfish Effect

When you combine virtualization with auto-scaling without implementing proper controls you run the risk of scaling yourself silly or worse – broke.

You virtualized your applications. You set up an architecture that supports auto-scaling (on-demand) to free up your operators. All is going well, until the end of the month.

Applications are failing. Not just one, but all of them. After hours of digging into operational dashboards and logs and monitoring consoles you find the problem: one of the applications, which experiences extremely heavy processing demands at the end of the month, has scaled itself out too far and too fast for its environment. One goldfish has gobbled up the food and has grown too large for its bowl.


It’s not as crazy an idea as it might sound at first. If you haven’t implemented the right policies in the right places in your shiny new on-demand architecture, you might just be allowing such a scenario to occur. Whether the trigger is unforeseen legitimate demand or a DoS-style attack, without the right limitations (policies) in place to ensure that an application has scaling boundaries, you might inadvertently cause a denial of service and outages for other applications by consuming the resources they need.
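One way to picture such a scaling boundary is a per-application policy that clamps whatever instance count the autoscaler asks for. This is a minimal sketch, not any particular product's API; the names and numbers are illustrative:

```python
from dataclasses import dataclass

@dataclass
class ScalingPolicy:
    """Per-application scaling boundaries; field names are illustrative."""
    min_instances: int
    max_instances: int  # the hard ceiling: the size of the bowl

def scale_decision(current: int, requested_delta: int, policy: ScalingPolicy) -> int:
    """Clamp whatever instance count the autoscaler requests to the policy bounds."""
    desired = current + requested_delta
    return max(policy.min_instances, min(desired, policy.max_instances))

# A month-end spike (or a DoS-driven one) cannot push the app past its ceiling.
policy = ScalingPolicy(min_instances=2, max_instances=10)
print(scale_decision(current=8, requested_delta=50, policy=policy))  # prints 10
```

However enthusiastic the demand, the goldfish never outgrows the bowl; the ceiling turns a runaway scale-out into a bounded one.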

Automating provisioning and scalability is a Good Thing. It shifts the burden from people to technology, and it is often only through codifying the processes IT follows to scale an application in a more static, manual network that inefficiencies can be discovered and subsequently eliminated. But an easily missed variable in this equation is the set of limitations once imposed by physical containment. An application can only be scaled out as far as its physical containers, and no further. Virtualization breaks an application free from those physical limitations and allows it to ostensibly scale out across a larger pool of compute resources located in various physical nooks and crannies across the data center.

But when you virtualize resources you need to perform capacity planning in a new way. Capacity planning becomes less about physical resources and more about costs and priorities for processing. It becomes a concerted effort to strike a balance between applications such that resources are used efficiently based on prioritization and criticality to the business rather than on what’s physically available. It becomes a matter of metering and budgets and factoring costs into the auto-scaling process.
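Making cost a first-class input to the auto-scaling decision can be as simple as metering projected spend against a budget before granting a scale-out. A hedged sketch, with purely illustrative names and prices:

```python
def within_budget(spent_so_far: float, instance_cost_per_hour: float,
                  extra_instances: int, hours_remaining: float,
                  monthly_budget: float) -> bool:
    """Approve a scale-out only if projected month-end spend stays within budget."""
    projected = spent_so_far + extra_instances * instance_cost_per_hour * hours_remaining
    return projected <= monthly_budget

# 4 extra instances at $0.50/hr for the remaining 100 hours of the month
print(within_budget(spent_so_far=800.0, instance_cost_per_hour=0.5,
                    extra_instances=4, hours_remaining=100.0,
                    monthly_budget=1000.0))  # prints True (projected spend: $1000)
```

The point is not the arithmetic but where it sits: in the scaling decision path, so that "scaling yourself broke" requires an explicit budget change rather than an unattended spike.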


From a technical perspective this means you need strategic points of control at which such decisions are made and policies enforced. The system controlling provisioning in the auto-scaling process must take into consideration not only Application A and its resource requirements, but Applications B and C and D as well. It must have visibility into the total pool of available resources and be able to make scaling decisions in real time. It must be able to view the data center from a holistic point of view, treating resources much as an operating system treats a CPU: scheduling discrete workloads based on a variety of parameters.
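The operating-system analogy can be made concrete with a toy allocator that grants resources from one shared pool in order of business priority, rather than first-come-first-served. This is only a sketch of the scheduling idea, with assumed application names and priorities:

```python
def allocate(pool_capacity: int, requests: list[tuple[str, int, int]]) -> dict[str, int]:
    """Grant resources from a shared pool, highest business priority first.

    requests: (application, priority, units_requested); larger priority wins.
    """
    grants: dict[str, int] = {}
    remaining = pool_capacity
    for app, _priority, amount in sorted(requests, key=lambda r: r[1], reverse=True):
        granted = min(amount, remaining)  # never grant more than is left in the pool
        grants[app] = granted
        remaining -= granted
    return grants

requests = [("app_a", 1, 60), ("app_b", 3, 50), ("app_c", 2, 30)]
print(allocate(100, requests))  # app_b gets 50, app_c gets 30, app_a gets only 20
```

Even in this crude form, the holistic view matters: app_a's request is trimmed not because of its own limits but because higher-priority applications were considered first.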

From these variables can be derived the limitations that impose policy on applications and resource consumption. Those limitations must be enforced, but they must also be flexible. The limitations that make sense on the 5th of May are not necessarily applicable on the 31st of May, and some may be applicable only at certain times of day. The orchestration of the data center is about balancing all the variables and ensuring that limitations can be increased just as quickly as they can be decreased. A set of heuristics needs to be developed that takes all the variables into consideration and solves the riddle: which application gets what resources, and for how long? How can we automatically adjust the network to meet the needs of all applications? We must be able to balance the performance needs of one application against the time-sensitive processing of another. Can we add an acceleration policy to the one to reduce resource consumption and give its extra resources to the other? Are the costs of applying the acceleration policy worth the benefit of meeting both SLAs? Can we degrade functionality in Application Z to reduce its consumption because Application B is experiencing unanticipated demand?
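The 5th-of-May versus 31st-of-May point amounts to making the scaling ceiling a function of the calendar and the clock rather than a constant. A hypothetical schedule, with made-up application names and thresholds, might look like:

```python
import datetime

def max_instances(app: str, when: datetime.datetime) -> int:
    """Return the scaling ceiling in force for this app at this moment."""
    if app == "month_end_billing" and when.day >= 28:
        return 40            # extra headroom for the end-of-month crunch
    if 9 <= when.hour < 17:
        return 20            # business-hours default
    return 10                # overnight default

print(max_instances("month_end_billing", datetime.datetime(2010, 5, 31, 2, 0)))  # prints 40
print(max_instances("month_end_billing", datetime.datetime(2010, 5, 5, 2, 0)))   # prints 10
```

A real heuristic would weigh SLAs, costs, and observed demand as well, but the shape is the same: the limit is looked up, not hard-coded, so it can rise and fall as quickly as policy dictates.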

Once the decision is made, it must still be enforced, and that means collaboration and integration.

The orchestration or management system responsible for provisioning must be able to communicate with the infrastructure responsible for delivering those applications to enforce the decisions it has made. When one application is being scaled back, limitations on the number of instances or connections to the instances available should be communicated to the load balancing solution, in real-time, to ensure that the policy is enforced. When an application is being allowed to scale out by adding more instances the same communication must occur, and limitations must be increased or modified to match the new policy.
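That collaboration can be sketched as the orchestrator pushing each scaling decision to the load balancer as a matching connection limit, in the same step. The load-balancer class below is a stand-in, not any vendor's actual API, and the 100-connections-per-instance figure is an assumption:

```python
class LoadBalancerStub:
    """Stand-in for a real load balancer's management API (hypothetical)."""
    def __init__(self) -> None:
        self.connection_limits: dict[str, int] = {}

    def set_connection_limit(self, app: str, max_connections: int) -> None:
        self.connection_limits[app] = max_connections

def apply_scaling_decision(decisions: dict[str, int], lb: LoadBalancerStub,
                           conns_per_instance: int = 100) -> None:
    """Mirror each scale-in/scale-out decision as a connection limit on the LB."""
    for app, instances in decisions.items():
        lb.set_connection_limit(app, instances * conns_per_instance)

lb = LoadBalancerStub()
apply_scaling_decision({"app_a": 4, "app_b": 1}, lb)
print(lb.connection_limits)  # {'app_a': 400, 'app_b': 100}
```

The detail that matters is the coupling: a decision that never reaches the delivery infrastructure is a policy on paper, not an enforced one.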

A new kind of network is needed to support this kind of dynamism; a dynamic infrastructure, a connected infrastructure, a collaborative and interactive infrastructure. An integrated infrastructure.

Hat tip to Brenda Michelson of Elemental Links for offering up the goldfish analogy during a recent Twitter conversation and James Urquhart for his clarity of thought on the subject.

More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
