By way of contrast, monolithic application architectures continue to fall out of favour. Less than 1% of teams are sticking with their monoliths and have no plans to transition away in the next year, down from 2% in 2021 and 5.5% in 2020.
But there’s already evidence creeping in that APIs, and the microservices used alongside them to compose applications, are prone to old problems like code bloat, complexity and inefficiency.
That isn’t just harming their performance; it’s also a financial drain, and increasingly a hindrance to sustainability goals and licence to operate. At the end of the day, the cost of running an inefficient architecture pattern is high.
Resource utilisation has become a more pressing concern in recent years, initially driven by financial pressures. The latest Google Cloud survey found that a vast majority of tech execs see sustainability as a key goal for their business; they just don’t know how to measure it or what to do about it.

Cloud costs, for example, may be inflated by running application workloads on over-specified instances where a smaller, cheaper instance would suffice. Similarly, as we decompose the monolith into microservices, we go from a handful of in-app connections to a rapidly multiplying web of microservices all talking to each other over various networks. Very chatty application architectures can cause network or bandwidth cost overruns, and are likewise candidates for rightsizing.
The buck on this typically stops with the CIO (or possibly with their boss, the CFO).
Increasingly, though, we anticipate application teams coming under additional pressure to trim resource utilisation, and for that pressure to come from non-traditional places, for non-traditional reasons.
As more organisations commit to net-zero emissions or sustainable development goals (SDGs), or certify to green or B Corp standards, every business unit and product or development team will face pressure to demonstrate efficiency. Australia alone has about 257 certified B Corps, and the number is growing all the time.
This trend is driven partly by social licence to operate and by internal ethics and culture, but also by customer spend. Eighty-five percent of consumers have shifted their purchasing behaviour to prioritise sustainability in the last five years, and a third will pay more for products that are sustainable.
Within these organisations, CIOs are likely to come under pressure from CEOs, chief sustainability officers, or even boards, to rein in their teams, particularly the amount of compute and networking resources they consume.
Within the next 10 years, IT is predicted to account for 21% of all the energy consumed in the world. For net-zero committed companies, that equates to a lot of carbon offsets.
Code, microservice and API bloat, or otherwise outdated architectural design decisions, will become obvious in this scenario, because the carbon offset costs they incur will be higher than for comparable teams with slimmer, more efficient codebases. There will be nowhere for teams with libraries of inefficient APIs or microservices to hide.
The low-hanging fruit of efficiency
APIs represent perhaps the single largest area of opportunity to extract greater efficiencies, due to their number and widespread use.
By some estimates, APIs are responsible for 83% of traffic running through CDNs. But how much of this traffic is really necessary?
One of the key design issues seen in APIs today is that they return either too much information or no new information at all. Neither is acceptable, and both point to flawed design.
APIs that facilitate customer requests for data from a backend system can be overly helpful, sending 100 fields when most of the time consumers only use the top 10. Sending 90 unused and unhelpful fields every time that API is called means resources are being wasted.
Consider building a parser that lets consumers pass the names of the fields they want in the request (GET /customer/32944?fields=name,email), or building in GraphQL instead of REST so consumers can specify only the fields they need. Bear in mind, though, that consumers will need to be educated on how to make computationally efficient requests rather than always asking for everything. A tool like Insomnia can also help you optimise and refine the design of your API, so you can make sure what you build isn’t wasteful or unnecessary.
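To make the first option concrete, here is a minimal sketch of field filtering on a REST endpoint. It assumes a Flask backend, and the customer record and its field names are hypothetical placeholders.

```python
# Minimal sketch of field filtering on a REST endpoint (Flask assumed).
# The customer record and its fields are hypothetical placeholders.
from flask import Flask, jsonify, request

app = Flask(__name__)

# Stand-in for a backend lookup that would normally return ~100 fields.
CUSTOMER_RECORD = {
    "id": 32944,
    "name": "Jane Citizen",
    "email": "jane@example.com",
    "phone": "+61 400 000 000",
    "address": "1 Example St",
    # ...plus many more rarely used fields
}

@app.route("/customer/<int:customer_id>")
def get_customer(customer_id):
    record = dict(CUSTOMER_RECORD, id=customer_id)
    fields = request.args.get("fields")                # e.g. ?fields=name,email
    if fields:
        wanted = {f.strip() for f in fields.split(",")}
        record = {k: v for k, v in record.items() if k in wanted}
    return jsonify(record)                             # only the requested fields cross the wire
```

Called as GET /customer/32944?fields=name,email, this returns two fields instead of the full record; specifying only the fields you need is also exactly what a GraphQL query gives consumers natively.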
An increasing number of APIs don’t respond to customer requests for data at all, but are set up to automatically poll a system, potentially every x seconds, looking for changes they can extract for downstream processing. These types of APIs have proliferated, and an estimated 98.5% of API polls return no new information. The vast majority of polls are a waste of energy because the polling frequency is set too high. Even an efficient service that polls for updates every 10 seconds will be worse than one that simply pushes updates when there are any. And when there is an update, we want only the new data to be sent, not the full record. Policies like rate limiting can also be implemented in an API gateway to control how many requests a client can make in any given time period.
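Where polling can’t be avoided, conditional requests are one simple way to stop every poll from transferring and re-processing a payload that hasn’t changed. The sketch below is illustrative only: it assumes the upstream API supports ETag / If-None-Match headers, and the endpoint URL and polling interval are placeholders.

```python
# Illustrative conditional poller: downloads a payload only when it has changed.
# Assumes the upstream API sets an ETag header; the URL and interval are placeholders.
import time
import requests

URL = "https://api.example.com/orders/changes"   # hypothetical endpoint
POLL_INTERVAL_SECONDS = 300                      # poll sparingly, not every few seconds

def process(changes):
    """Placeholder for downstream handling of the new data only."""
    print(f"received {len(changes)} changed records")

etag = None
while True:
    headers = {"If-None-Match": etag} if etag else {}
    resp = requests.get(URL, headers=headers, timeout=10)

    if resp.status_code == 304:
        pass                                     # nothing changed: no body transferred, nothing to do
    elif resp.ok:
        etag = resp.headers.get("ETag")
        process(resp.json())
    time.sleep(POLL_INTERVAL_SECONDS)
```

Better still, invert the model entirely: have the producer push a webhook containing only the delta when something actually changes, so consumers do no polling at all.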
To reduce the energy consumption of APIs, we must ensure they are as efficient as possible. We need to eliminate unnecessary processing, minimise their infrastructure footprint and monitor and govern their consumption.
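On the governance point, rate limiting is normally switched on as configuration in your API gateway rather than written by hand, but as a rough illustration of what the policy does, the sketch below enforces a fixed-window request budget per client; the limit, window and client identification are arbitrary assumptions.

```python
# Rough illustration of a fixed-window rate limit (in practice, a gateway policy, not app code).
# The limit, window and client identification are arbitrary assumptions.
import time
from collections import defaultdict

LIMIT = 60               # max requests per client...
WINDOW_SECONDS = 60      # ...per 60-second window

_counters = defaultdict(lambda: [0, 0.0])    # client_id -> [request_count, window_start]

def allow_request(client_id: str) -> bool:
    """Return True if this client is still within its per-window request budget."""
    count, window_start = _counters[client_id]
    now = time.monotonic()
    if now - window_start >= WINDOW_SECONDS:
        _counters[client_id] = [1, now]      # start a new window
        return True
    if count < LIMIT:
        _counters[client_id][0] += 1
        return True
    return False                             # over budget: reject, queue or slow the caller
```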
When you use an API management platform that is itself efficient and reduces your infrastructure footprint, you are in a much better position to ensure your APIs are built and managed with sustainability in mind. At the same time, you can play your part in making IT decisions today to support our precious planet and its future.