Logs for Better Clouds - Part 6

Not all Log Management solutions are created equal

Log Collection and Reporting requirements
So far in this series we have addressed:

Trust, visibility, transparency. SLA reports and service usage measurement.

Daisy chaining clouds. Transitive Trust.

Intelligent reports that don't give away confidential information.

Logs. Log Management.

Now, not all Log Management solutions are created equal, so what are some high-level Log Collection and Reporting requirements that a sound solution should meet?

Log Collection
A sound Log Management solution needs to be flexible enough to collect logs from a wide variety of log sources, including bespoke applications and custom equipment. Universal Collection matters because it lets us collect, track and measure all of the metrics in scope for our use case: for example, the number and size of mails scanned for viruses, the number and size of files encrypted and stored in a Digital Vault, the number of network packets processed, or the number of virtual CPUs consumed.
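To make Universal Collection concrete, here is a minimal Python sketch. Everything in it is invented for illustration: the UsageRecord type, the parse_mail_scan_log function and the log line format are assumptions, not taken from any real product.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class UsageRecord:
    timestamp: datetime   # when the operation happened
    source: str           # which system emitted the log
    metric: str           # e.g. "mails_scanned", "files_vaulted", "vcpu_hours"
    count: int            # how many operations this line accounts for
    size_bytes: int       # payload size, when applicable

def parse_mail_scan_log(line: str) -> UsageRecord:
    # Parse a made-up antivirus gateway line such as:
    #   "2011-03-01T10:15:00Z scanned=42 bytes=1048576"
    ts, scanned, size = line.split()
    return UsageRecord(
        timestamp=datetime.fromisoformat(ts.replace("Z", "+00:00")),
        source="mail-gateway",
        metric="mails_scanned",
        count=int(scanned.split("=")[1]),
        size_bytes=int(size.split("=")[1]),
    )

record = parse_mail_scan_log("2011-03-01T10:15:00Z scanned=42 bytes=1048576")
print(record.metric, record.count)  # mails_scanned 42

The point of normalizing to one record type is that each new log source only needs its own small parser; everything downstream (reports, counters, SLA views) stays unchanged.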

And collection needs to be as painless and transparent as possible. Log Management needs to be an enabler, not a disabler! For the solution to be operationally sound, it needs to integrate easily even into complex environments.

Open Reporting Platform
In addition to easy, universal collection, the Log Management solution needs to be an open platform that lets a Cloud Provider define very specific custom reports on both standard and non-standard types of logs.

Many different types of reports will be used, but they fall into two categories.

External-facing reports are the ones shared with adjacent layers: service usage reporting, SLA compliance, security traceability and so on. These have to show information about all the resources required to render a service without disclosing information considered confidential.

Internal reports deal with internal "housekeeping" needs: security monitoring, operational efficiency, business intelligence and the like.

And for the sake of Trust, all of these reports need to be generated with the confidence that all data (all raw logs in our case) has been accounted for and computed.

We can see that many internal and external-facing reports need to be generated and precisely customized, and again this needs to be achieved easily. Hence the requirement for an open reporting platform.

This allows several populations of users to generate their own sets of ad hoc reports showing exactly what they need to see, based on their specific needs and requirements.
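As an illustration of what "open" could mean in practice, here is a toy Python sketch in which a report is nothing more than a named filter plus an aggregation over collected records. The Report class and the sample records are assumptions made for this example, not a real product API.

from collections import defaultdict

class Report:
    def __init__(self, name, predicate, group_by):
        self.name = name
        self.predicate = predicate   # which records are in scope
        self.group_by = group_by     # how to bucket them

    def run(self, records):
        totals = defaultdict(int)
        for r in records:
            if self.predicate(r):
                totals[self.group_by(r)] += r["count"]
        return dict(totals)

records = [
    {"metric": "mails_scanned", "tenant": "acme",   "count": 42},
    {"metric": "vcpu_hours",    "tenant": "acme",   "count": 8},
    {"metric": "mails_scanned", "tenant": "globex", "count": 7},
]

# External-facing flavor: per-tenant service usage, aggregated so that
# no raw log lines are disclosed to the client.
usage = Report("usage", lambda r: True, lambda r: (r["tenant"], r["metric"]))
print(usage.run(records))
# {('acme', 'mails_scanned'): 42, ('acme', 'vcpu_hours'): 8, ('globex', 'mails_scanned'): 7}

Because a report is just data plus two small functions, each population of users can define its own ad hoc views without touching the collection side.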

Operational Model
The following diagram depicts the high-level Operational Model of a Cloud Provider, with the Log Management solution and the associated flows of information.

Figure 6 - Log Management solution and interaction within a Cloud Provider

At Layer N, internal processing is composed of processes A through F, each having its logs collected by the Log Management solution at Layer N.

These "local" logs, logs about local processing, will be augmented by logs collected from the subcontracting layer, which will give visibility into the complete lifecycle of end-to-end processing.

Logs are the data points that will be used 1) as "counters" of minute operations for pay-per-use purposes, 2) for SLA reporting, and 3) for traceability, as well as for security, operational efficiency and so on.
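A hedged sketch of point 1, turning log-derived counters into a pay-per-use charge; the rate card and metric names below are invented for illustration only.

# Maps metric name -> price per counted operation (illustrative values).
RATE_CARD = {
    "mails_scanned": 0.001,   # per mail scanned for viruses
    "files_vaulted": 0.010,   # per file encrypted and stored
    "vcpu_hours":    0.050,   # per virtual-CPU hour consumed
}

def bill(counters: dict) -> float:
    # counters maps metric name -> total count derived from raw logs.
    return sum(RATE_CARD[m] * n for m, n in counters.items())

print(bill({"mails_scanned": 42_000, "vcpu_hours": 8}))  # 42.4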

The requirement for inter-layer visibility means that there are logs and reports that a Cloud Provider (Layer N) needs from its subcontractor (its Layer N+1). Likewise, logs and reports from Layer N will need to be made available to its client (its Layer N-1). If logs are deemed confidential and a Cloud Provider does not want them collected by its client(s), then proper reports need to be put in place to give the client visibility into the mutually agreed-upon metrics without disclosing the actual confidential raw logs.

Anti-inference solutions and approaches already exist in the database world and can be used in this situation.
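One simple technique of this kind, sketched here in Python with purely illustrative data and threshold, is to suppress aggregate buckets smaller than some minimum size k, so that a client cannot infer details about individual operations from an external-facing report.

K = 5  # minimum group size before a bucket may be disclosed

def external_view(aggregates: dict, k: int = K) -> dict:
    # aggregates maps a bucket (e.g. hour of day) -> event count.
    # Buckets below k are withheld rather than reported exactly.
    return {bucket: (n if n >= k else "<suppressed>")
            for bucket, n in aggregates.items()}

hourly = {"09:00": 120, "10:00": 3, "11:00": 87}
print(external_view(hourly))
# {'09:00': 120, '10:00': '<suppressed>', '11:00': 87}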

Sounds complex?

Actually, it's not that bad: just understand what information you need from the layer below, and what you'll need to give to the layer above. Work out your reports so that you get the information you need and give the information that is required.

In the case of a major dispute, when indisputable proof is required, all the raw logs are centralized and easily accessible via the layer in question anyway.

Next time, we'll talk about the requirements concerning integrity and proof of immutability of logs, and what that means for the end-to-end treatment of logs, especially storage.

More Stories By Gorka Sadowski

Gorka is a natural-born entrepreneur with a deep understanding of technology, IT security and how these create value in the marketplace. Today he offers innovative European startups the opportunity to benefit from Silicon Valley ecosystem accelerators. Gorka has spent the last 20 years initiating, building and growing businesses that provide technology solutions to industry. From General Manager for Spain, Italy and Portugal at LogLogic, defining Next Generation Log Management and Security Forensics, to Director at Unisys France, bringing Cloud Security service offerings to market; from Director of Emerging Technologies at NetScreen, defining the Next Generation Firewall, to Director of Performance Engineering at INS, removing WAN and Internet bottlenecks: Gorka has always been involved in innovative technology and IT security solutions, creating successful business units within established groups and helping launch breakthrough startups such as KOLA Kids OnLine America, a social network for safe computing for children; SourceFire, a leading network security solution provider; and Ibixis, a boutique European business accelerator.
