i-Technology Viewpoint: Laziness Sometimes Pays

The Gains Made by Better Algorithms Almost Always Outstrip the Gains From Better Hardware

Let me begin with a philosophical rant. There is a motto from scientific computing that carries over to many areas of computer science:

/The gains made by better algorithms almost always outstrip the gains from better hardware./

I've frequently seen algorithm improvements pay off by factors of tens to tens of thousands in CPU time. One change I made in a numerical algorithm improved CPU requirements by a factor of 50,000: from weeks on a supercomputer to minutes on a workstation.

Any business-savvy engineer knows that algorithm improvements come at a price: the engineer's time. Striking that balance makes software systems move forward rather than staggering to a halt in bloat and dysfunction. It also helps to use people who actually know what they are doing: knowing how to compile code doesn't make you a software engineer any more than knowing how to spell makes you a writer. End of rant.

On to (rant-related) business. On most Web sites, think of how many times a data source will be used to retrieve the same data and produce the same content over and over again. Most successful services deliver highly redundant information to their users. For example, the JDJ website will deliver this (same) content to perhaps a hundred thousand users. If the servers are overtaxed, customers will experience significant delays or malfunctions.

There are several useful solutions to this. Well-configured caching proxy servers come to mind, although server-side scripting makes this difficult. Buying more hardware will eventually fix the problem, which may be the correct business solution.

But what about asking programmers to be a little more lazy?

For this article I've included the source for the LazyFileOutputStream. It acts just like a regular FileOutputStream except that, if created on a file that already exists, it /reads/ the data from the file instead of writing it. The stream compares what is already in the file with what you are currently writing. If at any point the data you are writing differs from what is already there, the stream automatically switches to a write mode that overwrites the remainder of the file with the changes.
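To make the idea concrete, here is a minimal sketch of how such a stream can work. This is not the published source (the correct link appears in the comments below), which buffers its comparisons for efficiency; this sketch compares one byte at a time for clarity, and names the early-exit method abandon(), matching the published class rather than the abort() used in the text below:

    import java.io.File;
    import java.io.IOException;
    import java.io.OutputStream;
    import java.io.RandomAccessFile;

    public class LazyFileOutputStream extends OutputStream {

        private final RandomAccessFile file;
        private boolean writing = false;   // still comparing until the first mismatch
        private boolean different = false; // did anything differ from the old contents?

        public LazyFileOutputStream(File f) throws IOException {
            file = new RandomAccessFile(f, "rw");
        }

        @Override
        public void write(int b) throws IOException {
            if (!writing) {
                long pos = file.getFilePointer();
                int old = file.read();      // -1 once past the end of the old file
                if (old == (b & 0xFF)) {
                    return;                 // same byte as before: leave the file alone
                }
                file.seek(pos);             // first mismatch: back up, switch modes
                writing = true;
                different = true;
            }
            file.write(b);
        }

        /** True if anything written so far differed from the old contents. */
        public boolean isDifferent() {
            return different;
        }

        /** "I'm done now; leave the rest of the file alone." */
        public void abandon() throws IOException {
            file.close();
        }

        @Override
        public void close() throws IOException {
            long pos = file.getFilePointer();
            if (pos != file.length()) {
                different = true;           // the old file had a leftover tail
                file.setLength(pos);        // truncate it
            }
            file.close();
        }
    }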

The upshot: if your program generates the same output twice, the output file is unmodified the second time, preserving the original modification date. By simply changing FileOutputStream to LazyFileOutputStream, any downstream processing can use the timestamps on the files to check whether it needs to do anything at all. If the timestamp hasn't changed, then neither have the contents.
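For example, a downstream step can skip its work entirely with a timestamp test like this (the class and file names here are hypothetical):

    import java.io.File;

    class UpToDateCheck {
        // Skip regeneration when the existing output is at least as new as
        // its source. Because the lazy stream leaves unchanged files (and
        // their modification times) untouched, this test stays meaningful.
        static boolean isUpToDate(File source, File target) {
            return target.exists()
                && target.lastModified() >= source.lastModified();
        }
    }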

But wait, there's more! In addition to the standard close(), the LazyFileOutputStream also supports abort(). This method effectively states, "I'm done now; leave the rest of the file alone." The remainder of the file will be the same, even without reproducing it. This means that if you determine early in the processing of a file that it's going to turn out the same, you can simply abort() and leave it alone. It's similar to the idea of not changing the modification dates on files that are rewritten with the same data, but it also saves CPU time for the current processing step as well as for downstream processing.

Certain template engines produce part of the output before you can conveniently intervene to decide whether you really need to regenerate it. By opening the output as a lazy file, you can just abort() early and keep the old version, with the old modification time, around for downstream processing.
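A sketch of that early exit, using the class sketched above (the emitted header and the inputsChanged flag are hypothetical stand-ins):

    import java.io.File;
    import java.io.IOException;
    import java.nio.charset.StandardCharsets;

    class TemplateStep {
        // The engine has already emitted a header before we can decide
        // whether regeneration is needed; abandoning keeps the old file,
        // contents and modification time intact.
        static void render(File target, boolean inputsChanged) throws IOException {
            LazyFileOutputStream out = new LazyFileOutputStream(target);
            out.write("<!-- generated -->\n".getBytes(StandardCharsets.UTF_8));
            if (!inputsChanged) {
                out.abandon(); // leave the rest of the old file alone
                return;
            }
            // ... regenerate the remainder of the output here ...
            out.close();
        }
    }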

Okay, rant concluded and point made: CPUs around the planet are spinning through the same data tens of thousands of times, producing the same content tens of thousands of times. Instead of buying great big servers to manage this, a smart caching policy based on lazy file writers and some modification-time testing could save some sites that same wild-sounding factor of 50,000, without having to buy 50,000 new servers.

Anecdote #1. There is a technical advantage to this style of writing data as well: most storage devices are easier to read from than write to, and file access tends to follow the 80/20 rule: 80% of accesses are reads, 20% are writes. The LazyFileOutputStream takes advantage of that for the many files that are simply rewritten with the same content.

Anecdote #2. There must be a few curled toes out there saying, "Why not a LazyFileWriter?" There are good technical reasons for the OutputStream: the data must be compared in its raw /byte/ form for the idea to work correctly, and you can always wrap the stream in an OutputStreamWriter, followed by a BufferedWriter, which is what I recommend.
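Assuming the sketch above, that wrapping might look like this; the laziness still operates on the raw encoded bytes beneath the character layer:

    import java.io.BufferedWriter;
    import java.io.File;
    import java.io.IOException;
    import java.io.OutputStreamWriter;
    import java.io.Writer;
    import java.nio.charset.StandardCharsets;

    class LazyWriterFactory {
        // Character-level convenience on top of the byte-level lazy stream.
        static Writer open(File target) throws IOException {
            return new BufferedWriter(new OutputStreamWriter(
                    new LazyFileOutputStream(target), StandardCharsets.UTF_8));
        }
    }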

Now I'm even done with the anecdotes. Have a nice day.

More Stories By Warren MacEvoy

Warren D. MacEvoy is Assistant Professor in the Department of Computer Science, Mathematics & Statistics at Mesa State College, Grand Junction, Colorado.



Most Recent Comments
Bruce VanOrder 10/19/04 09:25:42 AM EDT

I remember the first PC I bought for myself ... A CompuAdd 286 with lots of memory - 2 MB RAM and a whopping 40 MB hard drive! On this gargantuan drive I was able to put everything I needed.... WordPerfect 5.1, TurboPascal 5.5, Lotus 123, dBaseIII+, etc, etc, and a few games .... AND I STILL HAD ROOM!

Now I have a Pentium with 256Mb RAM & a 20 Gb hard drive...
MS Office Professional, Borland Delphi, JBuilder, Oracle, SQL Server.

dBaseIII+ could fit on a 1.44Mb 3.5" floppy !

Those were the days, my friend; we thought they'd never end ..
:-)

Warren MacEvoy 10/16/04 12:57:28 AM EDT

Response to Mark M.

Back in the bad old days, designers would kick around which sort would be better to use. Now practically all sorting problems are best solved with Collections.sort() or by using a TreeSet or TreeMap. This is a total win: it's faster to write, easier to maintain, and better optimized than any roll-your-own sort. So there's almost no context: the Collections sort is almost always better.
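For instance, a trivial illustration of both options:

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.Collections;
    import java.util.List;
    import java.util.SortedSet;
    import java.util.TreeSet;

    class SortExamples {
        public static void main(String[] args) {
            // Library sort: quicker to write and better tuned than a
            // roll-your-own sort.
            List<String> names = new ArrayList<>(Arrays.asList("carol", "alice", "bob"));
            Collections.sort(names);

            // Or keep the collection sorted as it is built.
            SortedSet<String> sorted = new TreeSet<>(names);
            System.out.println(names + " / " + sorted);
        }
    }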

I'll claim LazyFileOutputStream sits one step lower than this: it's almost never worse, and sometimes better, than a plain FileOutputStream. If you are writing small chunks to an unbuffered stream (or calling flush() after every character), then the adapter pattern it uses to implement its magic may cost you a little in time (though negligibly compared to the other costs of that approach). There is also a buffer overhead because of the (IMHO silly) decision to leave fundamental memory operations like POSIX memcmp out of the Java system libraries. But you're writing to a file, and, well, that's just kind of slow anyway.

But what you gain is information. When you're done, .isDifferent() will tell you whether there was a change, without your having to keep the old copy around to compare, and the timestamps will tell you even if downstream processing occurs in some logically distant place, like another process.
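In code, the pattern looks something like this (generate() is a hypothetical stand-in for whatever produces the output):

    import java.io.File;
    import java.io.IOException;
    import java.io.OutputStream;
    import java.nio.charset.StandardCharsets;

    class ChangeDetection {
        // Rebuild dependents only when the output really changed; no old
        // copy of the file is needed for the comparison.
        static void regenerate(File target) throws IOException {
            LazyFileOutputStream out = new LazyFileOutputStream(target);
            generate(out);
            out.close();
            if (out.isDifferent()) {
                // ... propagate updates to downstream consumers ...
            }
        }

        // Hypothetical content producer.
        static void generate(OutputStream out) throws IOException {
            out.write("example content\n".getBytes(StandardCharsets.UTF_8));
        }
    }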

So there's very little to lose in almost any situation, and a great deal to gain if:

1. your template processing is file-based, and
2. you generally rebuild things only when they are out of date with respect to their dependencies.

Without timestamp information, implementing part 2 may have seemed like a waste of time (which it would have been, since every template rebuild would have looked like a change), but switching to LazyFileOutputStream can make it effective.

Warren MacEvoy 10/15/04 07:07:48 PM EDT

My apologies, but somehow the wrong link was placed for the source file. The correct address for the LazyFileOutputStream the article refers to is:

http://bpp.sourceforge.net/download/bpp-0.8.5b/src/bpp/LazyFileOutputStr...

JavaDoc'ed at:
http://bpp.sourceforge.net/download/bpp-0.8.5b/doc/javadoc/index.html

You might also note that the class uses abandon() instead of abort(), which is a minor change.

This has nothing to do with

http://www.jdocs.com/ant/1.6.2/api/org/apache/tools/ant/util/LazyFileOut...

Again, my apologies for any confusion this may have created...

Mark M 10/15/04 11:47:56 AM EDT

Response to Warren M.

The key question is not how much more complicated it is to write the class. The key question is: what is the context of the problem? Too often, generic solutions to problems are presented (even if that is not the author's intention, these things can easily be misinterpreted as such), and their validity and necessity almost always depend on the context of the problem. You yourself emphasize the need for context in your response to Jim T. You have created a useful tool for yourself given the context of the problem you were trying to solve. When the next programmer comes along, the context may be completely different. Oftentimes, many are led to believe incorrectly in one-size-fits-all philosophies; for instance, it is widely viewed within the industry by working folk like myself that the notions of Bertrand Meyer and Kent Beck conflict, when in fact both may be valid solutions under differing contexts. Lack of context is the biggest complaint I have with books on process in this industry. Without it, many arguments are neither valid nor invalid, just ambiguous. There is at least one really bright fella who says a lot about context when he writes. His name is Fred Brooks.

Warren MacEvoy 10/14/04 10:42:09 PM EDT

Response to Mark M.

I agree that it is usually a waste of time to optimize without profiling to know where your problems are. You must also have a business argument that the problem needs to be solved and that optimization is the best way to solve it.

It is wrong to think that optimizations must be complicated. There's plenty of code out there that makes poor or no use of Collections, and that would be faster to write, maintain, and execute if better choices were made. Good programmers should know how to use these features to improve turnaround, defect rates, and efficiency (the rant part of my article).

The purpose of the article is to point out another kind of "low-hanging fruit" related to file processing. After all, how much more complicated is it to write "LazyFileOutputStream" compared to "FileOutputStream"?

Response to Jim T.

Completely? Substantially. "Completely" claims they have nothing to learn from each other, yet there are many business problems with a short lifetime, and plenty of rustic scientific codes are dutifully solving the problems they were designed to solve twenty and thirty years after they were written.

Again, the optimizations I'm suggesting don't need to be complicated. The LazyFileOutputStream is as simple as the code it replaces. How does that hurt readability or maintainability?

As far as longevity goes, I like the analogy of building a wall. The last row of the wall (business or scientific) can be very slipshod and the wall will still be a wall. Much software is written with the (sometimes correct) assumption that it will be part of the last row of bricks. But people change their minds, and what was once the last row is the last row no more. In the real world, this is why tens of thousands of people die when there is an earthquake in a third-world country.

Should businesses be happy with a software design model analogous to the slums of Mexico City?

Response to Justin S.

Edit one line of an XML configuration file, changing one attribute. Many elements of your design depend on this XML file, but almost none of them depend on this one attribute. Your solution suggests detailed code to see whether the attributes each dependency requires have changed, which would be hugely complicated to write and maintain.

Mine asks you to rebuild the elements that directly depend on the configuration file. If they don't change, then you don't have to propagate updates further. Not a perfect optimization, but a much more practical one.

The LazyFileOutputStream supports your idea if you choose to pursue it. If a template decides it does not need to regenerate a target, it can simply abort() to leave the current contents alone without going to the trouble of regenerating all of it.

Justin Sadowski 10/14/04 08:27:34 PM EDT

While I agree with your thoughts about the value of avoiding writing the same data over and over, I have to disagree with your LazyFileOutputStream solution. If you find that you are writing the same data to the same file repeatedly, I would suggest that you improve this by avoiding the rewriting altogether, instead of just making the rewriting more efficient.

For example, perhaps you are writing the same output repeatedly because you are operating on the same input; e.g., the data in a database hasn't changed, or a source XML file hasn't changed. If you can detect that your input hasn't been modified, you can avoid writing the output altogether.

I would like to hear more details about the specific situations in which you have used LazyFileOutputStream -- I would be interested to hear an example of a situation where my logic above does not apply.

Jim T. 10/14/04 07:09:02 PM EDT

Scientific computing and business computing are completely different. In the scientific community you usually have a very small number of highly skilled people working on a program. That just isn't so in the business world. In the business world I care much more about readability and maintainability than speed for 99% of our code. In science, nobody will be using my programs 5 years from now; the data will have all been analyzed and the papers published. In business, the exact same code will be in use 5 years from now (or at least it will be the basis of the code). I believe this is true because it is true of my code from 5 years ago: the physics code is gone/useless and the business code is being resold every day.

mark mcconkey 10/14/04 05:51:12 PM EDT

Several years back I began reading Kent Beck's stuff (XP) and it struck a chord for me because many of my experiences were similar. I believe Kent's general notion is something to the effect that one should not optimize up front because it's too difficult to predict the future, and the majority of the time you will have made your code unreadable for no reason whatsoever. Of course, any seasoned programmer has experienced enough to have a feel for when big troubles are over the hill and thus when some optimization up front will be needed. I think, though, that what is missing in your article is a discussion of context. If I have 3 weeks to finish something that will take 6, and 50 bigwigs in a Fortune 500 company have goals dependent upon the completion of my software, it doesn't matter how clever I am. It matters how fast I can produce what is needed. On the other hand, the creators of Amazon probably needed to be quite clever in order to deal with the magnitude of hits on their servers. Without context, it's sort of useless to talk about optimization.
