Not quite a crystal ball, but data-driven decisions are arguably the best way to mitigate the risks of a future in flux and deliver tomorrow’s goals today.
Deterministic models, which model the future based on the past, don't handle change well. The past is a great predictor of the future, but only until it's not. The great toilet paper debacle of 2020 is a shining example. If you're lucky enough to be in an industry that sees very little change in either supply or demand and faces no threats to its supply chain, then perhaps luck is all you need.
If you’re not, I suggest you keep reading.
For many, S&OP planning is a heterogeneous process, consisting of a mashup of information derived from disparate systems. Organisational data often lives in silos. Although you may have an ERP or WMS in place, these systems alone are rarely enough for S&OP planning, procurement included.
Hence, each department often has its own additional set of spreadsheets, apps, and other information-gathering and analysis techniques that are all then pieced together to arrive at an S&OP plan. To say it's a complex thing is putting it lightly. It's a beast. Now let's add to the confusion.
The Bullwhip Effect
Rest assured, once you've mashed up your S&OP plan and have secured all of your resources, it will be time to do it all over again, because the landscape never remains static.
The Wall Street Journal describes the bullwhip effect as: “This phenomenon occurs when companies significantly cut or add inventories. Economists call it a bullwhip because even small increases in demand can cause a big snap in the need for parts and materials further down the supply chain.”
I, however, prefer Wikipedia’s description, “The bullwhip effect is a distribution channel phenomenon in which demand forecasts yield supply chain inefficiencies. It refers to increasing swings in inventory in response to shifts in consumer demand as one moves further up the supply chain.”
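As a rough illustration of those "increasing swings", here is a minimal, hypothetical Python simulation of a three-tier supply chain. Each tier re-forecasts demand from the orders it receives and orders up to a safety buffer; the numbers and the policy are illustrative assumptions, not drawn from either quoted definition.

```python
import random
import statistics

random.seed(42)

# Consumer demand: steady around 100 units per period with modest noise.
consumer_demand = [100 + random.gauss(0, 5) for _ in range(200)]

def tier_orders(incoming_demand, smoothing=0.3, safety_factor=2.0):
    """Order-up-to policy: each period the tier re-forecasts demand
    (exponential smoothing) and orders enough to cover the new forecast
    plus a buffer proportional to it. Chasing every forecast change is
    what amplifies variability as you move upstream."""
    forecast = incoming_demand[0]
    position = forecast * safety_factor   # current inventory position
    orders = []
    for d in incoming_demand:
        forecast = smoothing * d + (1 - smoothing) * forecast
        target = forecast * safety_factor  # desired inventory position
        order = max(0.0, d + (target - position))
        position = target
        orders.append(order)
    return orders

retailer = tier_orders(consumer_demand)
wholesaler = tier_orders(retailer)
factory = tier_orders(wholesaler)

for name, series in [("consumer", consumer_demand), ("retailer", retailer),
                     ("wholesaler", wholesaler), ("factory", factory)]:
    print(f"{name:10s} std dev of orders: {statistics.stdev(series):6.1f}")
```

Even with consumer demand barely moving, the standard deviation of orders grows at every tier, so the factory sees far wilder swings than the shop floor ever does: the whip cracks hardest at the end furthest from the customer.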
Avoiding the costly downfalls of the bullwhip effect and maintaining efficiencies requires a highly responsive, agile supply chain and a robust procurement and S&OP process. Without these, companies are left where many of them find themselves today: in reactive mode, spending their days putting out fires and piecing back together a broken plan.
Which brings us to big data and data analytics.
Big Data vs Data Analytics vs Data Science
With new tech and big data comes a bevy of big buzzwords. Let’s decipher.
Big Data: Gartner defines big data as high-volume, high-velocity and/or high-variety information assets that demand cost-effective, innovative forms of information processing that enable enhanced insight, decision making, and process automation.
Data Science: IBM describes data science as a multidisciplinary approach to extracting actionable insights from the large and ever-increasing volumes of data collected and created by today's organisations. Data science encompasses preparing data for analysis and processing, performing advanced data analysis, and presenting the results to reveal patterns and enable stakeholders to draw informed conclusions.
Data Analytics: The science of analysing raw data in order to draw conclusions about that information. Many of the techniques and processes of data analytics have been automated into mechanical processes and algorithms that work over raw data for human consumption.
Data analytics techniques can reveal trends and metrics that would otherwise be lost in the mass of information. This information can then be used to optimise processes to increase the overall efficiency of a business or system.
Building Supply Chain Agility
Until now, a lack of capabilities and the absence of usable, manageable data have hindered better-informed, value-based decision making and proactive risk management. But no longer must this be the case.
Organisations today have a host of new technology and powerful allies to leverage in their fight against the Amazonian giants and retail powerhouses. Procurement, supply chain risk management, and S&OP planning can all be fuelled by big data, data analytics, data science, or whatever you choose to call it.
Big data analytics provides tools such as probabilistic and predictive modelling that can be leveraged for supply chain risk management methods such as scenario building, what-if analysis, and risk/reward analysis.
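To make the what-if and risk/reward idea concrete, here is a minimal Monte Carlo sketch in Python. All of the figures (demand, costs, fill rates, disruption probabilities) are invented for illustration; the point is the method of comparing scenarios, not the numbers.

```python
import random
import statistics

random.seed(7)

def simulate_scenario(disruption_prob, expedite=False, trials=10_000):
    """Monte Carlo what-if: estimate mean weekly profit under a given
    supplier-disruption probability, with or without paying for an
    expedited backup source. All figures are hypothetical."""
    profits = []
    for _ in range(trials):
        demand = max(0.0, random.gauss(1000, 150))      # units per week
        disrupted = random.random() < disruption_prob
        if disrupted and expedite:
            unit_cost, fill_rate = 7.0, 0.95            # pay more, keep service
        elif disrupted:
            unit_cost, fill_rate = 5.0, 0.60            # cheap, but shelves empty
        else:
            unit_cost, fill_rate = 5.0, 0.98
        sold = demand * fill_rate
        profits.append(sold * 12.0 - sold * unit_cost)  # 12.0 = selling price
    return statistics.mean(profits), statistics.stdev(profits)

for p in (0.05, 0.20):
    base_mean, _ = simulate_scenario(p, expedite=False)
    hedge_mean, _ = simulate_scenario(p, expedite=True)
    print(f"disruption p={p:.2f}  no-expedite mean {base_mean:,.0f}  "
          f"expedite mean {hedge_mean:,.0f}")
```

Running scenarios side by side like this turns a gut-feel trade-off into a number: the more likely the disruption, the more the expedited backup earns its premium.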
IBM is an excellent example of a company leveraging these powerful tools for improved risk management and increased supply chain agility. Watson, IBM's cognitive computing engine, monitors and evaluates IBM's global supply chain around the clock to identify potential disruptions, evaluate possible impacts, and propose risk-mitigation action plans. It then pushes alerts to laptops and mobile devices, having already taken into consideration risk/reward trade-offs, corporate risk appetite rules, and more.
In logistics, companies use data science and IoT to pull traffic and weather data from sensors, monitors, and forecast systems. Vehicle diagnostics, driving patterns, and location information can also be monitored globally, across every node in your supply chain, regardless of the mode of transportation, for real-time, actionable data that predicts and mitigates the risks of delays.
Furthermore, you can fine-tune your supply chain to find even more areas of improvement by identifying variability in delivery times, sourcing the root causes, and finding more consistent routes and suppliers, thereby lowering lead times and reducing the need for hefty safety stocks.
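The link between lead-time consistency and safety stock can be shown with the classic safety-stock formula for variable demand and variable lead time. The service level and the demand and lead-time figures below are hypothetical, chosen only to show the effect of taming lead-time variability.

```python
import math

def safety_stock(z, mean_demand, std_demand, mean_lead, std_lead):
    """Classic safety-stock formula when both daily demand and lead time
    (in days) vary: z * sqrt(L * sigma_d^2 + d^2 * sigma_L^2)."""
    return z * math.sqrt(mean_lead * std_demand**2
                         + mean_demand**2 * std_lead**2)

Z_95 = 1.65                      # z-score for roughly a 95% service level
demand, sd_demand = 200.0, 40.0  # units/day, illustrative

# Same average lead time; only the supplier's consistency differs.
erratic = safety_stock(Z_95, demand, sd_demand, mean_lead=10, std_lead=4)
steady = safety_stock(Z_95, demand, sd_demand, mean_lead=10, std_lead=1)

print(f"safety stock with erratic supplier:    {erratic:,.0f} units")
print(f"safety stock with consistent supplier: {steady:,.0f} units")
```

With the average lead time unchanged, cutting the lead-time standard deviation from four days to one shrinks the required safety stock by roughly two thirds in this example, which is exactly why hunting down delivery-time variability pays off.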
Behold the power of data: build your supply chain agility, safeguard your business continuity, improve your operational efficiencies, and reach your corporate objectives, despite what the future may hold.