Green IT is nothing new, or so it would appear; Wikipedia, for instance, dates the term back to 1992. Yet Green IT still concentrates very much on the operation and use of applications, with little attention to what goes on during their development. Up in our ivory towers, we software engineers often feel that such matters do not concern us. In fact, a great deal can be achieved with little effort: often a quite manageable set of decisions leads to significant differences in power consumption.

Unorthodox means
The real hurdle in energy-efficient software development is not so much technical as mental. Developers must decide case by case whether departing from familiar engineering paths toward alternative methods will save an appreciable amount of energy. These alternatives include some that may seem unorthodox at first glance.

In our case, in-memory computing is a very good example: if data processing happens entirely in main memory, virtually limitless performance seems a plausible promise. As a result, some developers see little point in devoting time to energy saving or runtime optimization. However, even in the in-memory world, there are cases that justify a departure from standard procedures.

Take indexing, for example. “But how can that be?” colleagues will ask almost reflexively. After all, indexes are simply a tool to compensate for the performance deficits of row-based databases, where a selective query would otherwise force a scan of the whole table. To optimize response times, you index the column used by the impending query. The index duplicates the column and sorts its content as the query requires. The result: shorter response times.
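
To make the mechanics concrete, here is a minimal sketch using SQLite as a stand-in for a row-based database; the sales table and its columns are purely illustrative. EXPLAIN QUERY PLAN shows the full scan turning into an index search:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (order_id INTEGER, customer TEXT, amount REAL)")
    conn.executemany(
        "INSERT INTO sales VALUES (?, ?, ?)",
        [(i, f"customer_{i % 1000}", i * 0.5) for i in range(100_000)],
    )

    # Without an index, the selective predicate forces a full table scan.
    print(conn.execute(
        "EXPLAIN QUERY PLAN SELECT * FROM sales WHERE customer = 'customer_42'"
    ).fetchall())

    # The index duplicates the column and keeps it sorted, so the database
    # can search it directly instead of reading every row.
    conn.execute("CREATE INDEX idx_sales_customer ON sales (customer)")
    print(conn.execute(
        "EXPLAIN QUERY PLAN SELECT * FROM sales WHERE customer = 'customer_42'"
    ).fetchall())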

In the in-memory world, such a workaround seems completely outdated. After all, the column-based structure of in-memory databases means that each column is effectively an index anyway. Why should developers give indexes any further thought? Yet there are cases where they bring genuine benefits: extremely large columns that are queried very selectively and very frequently. We calculated the resource savings for a table with 100 million entries. Without an index, power consumption was 5.4 watt-seconds per query; with the index, it was 0.035 watt-seconds – a reduction by a factor of more than 150.
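
Scaled up, that difference becomes tangible. The short calculation below extrapolates from our measured per-query figures; the volume of one million queries per day is an assumed illustration, not part of the measurement:

    # Per-query consumption measured on the 100-million-row table.
    without_index_ws = 5.4    # watt-seconds, full column scan
    with_index_ws = 0.035     # watt-seconds, index access

    print(f"reduction factor: {without_index_ws / with_index_ws:.0f}x")  # ~154x

    # Hypothetical query volume, purely for illustration.
    queries_per_day = 1_000_000
    saved_kwh = (without_index_ws - with_index_ws) * queries_per_day / 3_600_000
    print(f"daily savings at that volume: {saved_kwh:.2f} kWh")  # ~1.49 kWh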

Computing requirements are growing, the availability of energy is shrinking
Some developers may object that such queries are the exception rather than the rule. Yet, in the face of the huge changes that digitalization is sparking, we are moving headlong into an application world where the number of highly specific queries will increase by leaps and bounds. If we follow current IoT scenarios through to their logical conclusion, the lot sizes we encounter will shrink drastically. It is hardly an exaggeration to say that our systems won’t have to process one sales order with maybe 1,000 items, but rather 1,000 sales orders with one item each.

Our architectures will therefore face new challenges, not least regarding the endurance of the end devices involved: fewer and fewer of them will have permanent access to the power grid. In decentralized IoT solutions, it will become increasingly important to complete computational work with minimal electricity.

Clever use of caches
We software developers should devote much more of our own brain power to caches and consider them across the entire system architecture on which our applications run. Wherever possible, the aim is to buffer required data where the computational tasks arise. This is the only way to avoid energy-intensive and time-consuming round trips through the complete stack of a system landscape.

Let us start at the front end with the implementation of suitable browser caches. Moving on to the network, content delivery networks are the architecture of choice. They enable data and documents to be kept as close as possible to the components that access them.
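
At the HTTP level, browser and CDN caching is controlled by response headers such as Cache-Control. The following sketch uses only Python’s standard library; the server, the content, and the one-day max-age are assumed values for illustration:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class CachingHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = b"<html><body>mostly static content</body></html>"
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            # Let the browser (and any CDN edge in between) reuse this
            # response for a day instead of repeating the round trip.
            self.send_header("Cache-Control", "public, max-age=86400")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), CachingHandler).serve_forever()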

If it makes more sense to process the request in the backend, we can use the application server’s cache infrastructure to hold metadata or frequently used application data.
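
What such a cache looks like depends on the application server in use; as a minimal stand-in, here is a sketch with Python’s functools.lru_cache, where load_customer_metadata and its backend lookup are hypothetical:

    from functools import lru_cache

    @lru_cache(maxsize=1024)
    def load_customer_metadata(customer_id: int) -> dict:
        # Hypothetical expensive lookup; in a real system this would be
        # a database query or a remote service call.
        print(f"fetching metadata for customer {customer_id}")
        return {"id": customer_id, "segment": "retail"}

    load_customer_metadata(42)  # first call: round trip to the backend
    load_customer_metadata(42)  # repeat call: answered from the cache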

At the database level, the use of SQL plan caches is recommended. These caches store the execution plans generated for previous SQL statements. If such a plan – that is, the already optimized access path – is reused, considerable energy is saved: preparing a statement typically consumes around two watt-seconds. In itself that is an insignificant amount, but because many production systems execute 100,000 statements per second, the potential total savings are quite attractive.
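
Applications only benefit from the plan cache if their statements are reusable: parameterized queries keep the statement text stable, so a cached plan can be matched, whereas SQL assembled by string formatting produces a new text – and a new preparation – on every call. Here is a sketch with Python’s sqlite3 module, whose connection-level statement cache plays an analogous role:

    import sqlite3

    conn = sqlite3.connect(":memory:", cached_statements=128)
    conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")

    # Wasteful: each iteration produces a different statement text,
    # so every statement is parsed and planned from scratch.
    for i in range(3):
        conn.execute(f"INSERT INTO orders VALUES ({i}, {i * 9.99})")

    # Cache-friendly: one stable statement text with placeholders; the
    # prepared plan is reused and only the bound values change.
    for i in range(3, 6):
        conn.execute("INSERT INTO orders VALUES (?, ?)", (i, i * 9.99))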

A strategy to see the wood for the trees
The more you come to grips with sustainable programming, the greater the range of options becomes. We should nevertheless devise an end-to-end strategy. As a starting point, it helps to remind ourselves of the four major energy guzzlers in software: CPU, main memory, hard disk, and network. And it is important to bear in mind that their energy requirements are interconnected: reducing consumption at one point always affects the energy appetite of the other three.

There is no magic bullet for all software-development areas and organizations. As described, our optimization strategy is based on these three cornerstones:

  • In-memory computing rather than disk I/O
  • Caches rather than CPU cycles
  • Content delivery networks and code pushdown rather than high data transfer and a large number of round trips (code pushdown is sketched below)
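
To illustrate the third cornerstone: code pushdown means letting the database do the work instead of shipping raw rows to the application. A minimal sketch, again with sqlite3 and an illustrative sales table:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
    conn.executemany("INSERT INTO sales VALUES (?, ?)",
                     [("EMEA", 10.0), ("EMEA", 20.0), ("APJ", 5.0)])

    # Without pushdown: every matching row crosses the process (or
    # network) boundary and the application aggregates it itself.
    total = sum(amount for (amount,) in
                conn.execute("SELECT amount FROM sales WHERE region = 'EMEA'"))

    # With pushdown: the aggregation runs where the data lives and a
    # single value travels back.
    (total,) = conn.execute(
        "SELECT SUM(amount) FROM sales WHERE region = 'EMEA'").fetchone()
    print(total)  # 30.0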

Although this strategy increases energy consumption in main memory, the total energy balance is positive thanks to savings in the other components (that is, CPU, hard disk, and network). And, at the same time, we improve the end-to-end response time and the user experience.