Monday, 23 November 2020 12:25

How continuous improvement jumped the network fence

By Mike Hicks, ThousandEyes

This year has redefined the reasons for baselining your network.

GUEST OPINION by Mike Hicks, Principal Solutions Architect at ThousandEyes: For businesses on a cloud or DevOps journey, continuous improvement is a desirable outcome. It encourages new features and enhancements to be delivered at speed and scale, and there’s a whole ecosystem of tooling designed to help businesses achieve that.

But continuous improvement isn’t just for coders and cloud engineers. As networks become more software-defined, the application of software engineering practices like continuous improvement to the network is a natural progression.

In addition, continuous improvement has taken on elevated importance in 2020, as networks are subject to heavier load and greater variability in the demands placed upon them.

In turn, network management needs to become more dynamic to deal with changing topologies, demands and patterns of access.

Defining the baseline

On any continuous improvement journey, one must first set a baseline.

A baseline shows what is happening today in your environment. Without a baseline, it is difficult to know what you might be able to change in your network in order to optimise it.

However, when a baseline is defined, it sometimes becomes viewed as a maximum - rather than minimum - standard. In much the same way as a service level agreement (SLA) risks driving the behaviour of doing just enough to meet it, businesses that set a baseline for network performance might wind up just tracking close to it.

That misunderstands not only the purpose but also the structure of what a baseline is or should be. A baseline isn't just one figure - it should comprise measures for all the components used to deliver a great experience to users. It can then be used to understand how to move these experiences forward. What changes can be made that would create the most significant optimisation?

A baseline provides visibility into what users are experiencing today, and in doing so shows where there are opportunities to optimise your setup.

It is not a stopping point, nor a static point-in-time metric. Instead, it should be dynamic and form the basis for a cycle of continuous improvement to take place.
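To make that concrete, here is a minimal sketch of what a multi-metric baseline might look like in practice. The metric names, sample values and percentile choices are illustrative assumptions rather than a prescription; the point is that the baseline is a set of per-component measures, not a single figure.

```python
from statistics import quantiles

# Hypothetical recent measurements for one network path, collected by
# whatever monitoring tooling is in place (values are illustrative only).
measurements = {
    "latency_ms":      [38, 41, 40, 55, 39, 43, 61, 40, 42, 44],
    "packet_loss_pct": [0.0, 0.1, 0.0, 0.4, 0.0, 0.2, 0.0, 0.0, 0.1, 0.3],
    "jitter_ms":       [2.1, 2.4, 1.9, 3.8, 2.2, 2.0, 4.1, 2.3, 2.5, 2.2],
}

def build_baseline(samples):
    """Summarise each metric as a median plus a 95th-percentile band,
    so the baseline captures both typical and worst-case behaviour."""
    baseline = {}
    for metric, values in samples.items():
        cuts = quantiles(values, n=100)  # 99 percentile cut points
        baseline[metric] = {"p50": cuts[49], "p95": cuts[94]}
    return baseline

print(build_baseline(measurements))
```

Re-running the same summary after each change to the environment is what turns this from a one-off snapshot into the "living" baseline discussed below.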

It’s a living thing

This year has caused many businesses to re-evaluate the performance they get from their existing networks and infrastructure setups. In March alone, network disruptions spiked more than 60% compared to earlier in the year, before the impact of the pandemic. This, in turn, has fuelled a rapid acceleration of digital transformation initiatives.

These disruptions exposed businesses that did not have accurate measures of their baselines or adequate visibility into how their network infrastructures were performing. As the operating environment and modes of work shifted from a controlled corporate network environment to a remote workforce, those without visibility faced greater uncertainty in where they needed to focus investments and upgrades to improve the user experience for both employees and customers. Some made assumptions; others looked to past data to try to project future needs.

What businesses need to work towards is a ‘living’ baseline; one that can be dynamically adjusted to network conditions, to where people are and how they’re connecting into the network.

Into the setup

New monitoring technology that powers visibility into cloud and Internet networks allows businesses to see and improve both customer and employee digital experiences everywhere, thanks to thousands of global vantage points recording billions of daily measurements.

With that level of visibility into networks that businesses today rely on but don’t control, organisations can see what kind of performance baseline they are achieving today. They can then undergo a period where they make changes or remediations to that environment, and then check whether or not the baseline has improved.

Performance thresholds can be set in the system, with alerts tied to deviations from them. Ultimately this should start to function as a kind of closed loop, whereby the thresholds are adjusted downward as ongoing improvements are made to the environment.
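As a rough illustration of that closed loop, the sketch below alerts on a breach and tightens the threshold once improvement is observed. The threshold, headroom and latency figures are hypothetical, and the function is not any particular vendor's API.

```python
# Closed-loop sketch: alert on deviations from the current threshold, and
# ratchet the threshold down once the observed p95 latency improves.
THRESHOLD_MS = 120.0   # hypothetical alerting threshold for path latency
HEADROOM = 1.2         # keep 20% headroom above the observed p95

def check_and_adjust(p95_latency_ms, threshold_ms):
    """Alert if the observed p95 breaches the threshold; otherwise tighten
    the threshold towards the improved baseline, never loosening it."""
    if p95_latency_ms > threshold_ms:
        print(f"ALERT: p95 latency {p95_latency_ms:.0f} ms exceeds "
              f"threshold {threshold_ms:.0f} ms")
        return threshold_ms
    return min(threshold_ms, p95_latency_ms * HEADROOM)

# After a remediation, the observed p95 drops to 80 ms, so the threshold
# tightens from 120 ms to 96 ms and the cycle repeats.
THRESHOLD_MS = check_and_adjust(p95_latency_ms=80.0, threshold_ms=THRESHOLD_MS)
print(f"New threshold: {THRESHOLD_MS:.0f} ms")
```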

For example, the triggering of a certain threshold may warrant a change of service provider or the application of extra quality-of-service (QoS) provisions. Once the difference this change has made is apparent, the thresholds can be adjusted and the cycle of improvements repeated.

This is the basis for a continuous improvement model, one where you can make changes based on the thresholds that you're seeing, and quantify the impact of those changes back to the organisation. A QoS overlay might save one second of processing time per transaction. That means being able to pass more transactions over the same network. If you're an insurance company or a health company, for example, you can quantify that as the ability to process more transactions each day.
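A quick back-of-the-envelope calculation shows how that one-second saving translates into throughput. The five-second transaction time and eight-hour window below are assumed figures, purely for illustration.

```python
# Illustrative throughput gain from saving one second per transaction
# on a single processing channel.
before_s = 5.0           # assumed end-to-end time per transaction
saved_s = 1.0            # time saved by the QoS overlay
window_s = 8 * 3600      # an 8-hour processing day

tx_before = window_s / before_s
tx_after = window_s / (before_s - saved_s)
print(f"{tx_before:.0f} -> {tx_after:.0f} transactions per day "
      f"(+{(tx_after / tx_before - 1) * 100:.0f}%)")
# 5760 -> 7200 transactions per day (+25%)
```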

For organisations to operate in a continuous improvement model that brings business benefits such as increased productivity, they first need a clear understanding of what their environments are doing today, so they can see where to optimise for the biggest bang for buck on connectivity and experience delivery.

