A large national retail and consumer products company in the US had systems in place to forecast, monitor and control the progress of popular items, which made up a large portion of daily sales, through its fast-moving supply chain. These systems resided in physical data centres, and if those data centres were to suffer an outage, the resulting disruption to the distribution network and store inventories would cause significant loss of revenue for every hour services were unavailable.
The cost of creating redundant legacy systems to mitigate this risk was significant. But the data that helped forecast and control product movement, such as SKUs, purchase orders, invoice data and pricing, would not change, and neither would the intelligence the company needed to extract from that data during every minute of operation. Only the system that translated one into the other had to be backed up, and that backup needed the most reliable architecture available.
Rather than add data centre redundancy that would only extend legacy systems at significant cost, Deloitte counselled the organisation to create a backup solution born in the cloud. Deloitte then designed and implemented the system on an aggressive timetable. A 150-member team assembled from more than 13 Deloitte practice groups dived in to create a cloud-native solution hosted on the organisation’s Microsoft Azure cloud platform, using services such as Azure Databricks and Apache Spark to provide advanced artificial intelligence and machine learning capabilities.
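The case study does not publish the forecasting pipeline itself. As a minimal sketch of what one Spark step on Azure Databricks might look like for this kind of demand forecasting, the table and column names below (sku_movement, units_sold, store_id, event_ts) are hypothetical, not details from the engagement:

```python
# Hypothetical sketch of a Spark forecasting step on Azure Databricks.
# Table and column names are illustrative assumptions only.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("demand-forecast").getOrCreate()

# Daily unit movement per SKU and store, ingested from operational feeds.
daily = (
    spark.table("sku_movement")
    .groupBy("sku", "store_id", F.to_date("event_ts").alias("day"))
    .agg(F.sum("units_sold").alias("units"))
)

# Trailing 28-day average daily demand as a simple forecast feature.
trailing = Window.partitionBy("sku", "store_id").orderBy("day").rowsBetween(-27, 0)
forecast = daily.withColumn("avg_daily_demand", F.avg("units").over(trailing))

# Persist for the replenishment and UI services to read during a failover.
forecast.write.mode("overwrite").saveAsTable("demand_forecast")
```

A real pipeline would fit far richer models than a trailing average; the sketch only shows where Databricks and Spark sit in the flow.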
Integrating the cloud services in compliance with the company’s security requirements took customisation, a collaborative achievement among development, compliance and operations members of the team from both the organisation and Deloitte. The system was designed to forecast movement of the most in-demand retail items and to take over replenishment decisioning within minutes when needed. With it in place, the threat that any data centre outage posed to the distribution network was greatly reduced.
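The mechanics of taking over “within minutes” are not detailed in the source. One plausible shape, sketched below purely as an assumption, is a watchdog that polls a health endpoint in the primary data centre and hands replenishment decisioning to the cloud system after a run of failed checks; the URL, thresholds and caller-supplied hook are invented for illustration:

```python
# Hypothetical failover sketch: shift replenishment decisioning to the
# cloud system when the primary data centre stops answering health checks.
# Endpoint URL and thresholds are illustrative assumptions.
import time
import urllib.request

PRIMARY_HEALTH_URL = "https://primary-dc.example.com/health"  # hypothetical
CHECK_INTERVAL_S = 30
FAILURES_BEFORE_FAILOVER = 4  # roughly two minutes of missed checks

def primary_is_healthy() -> bool:
    try:
        with urllib.request.urlopen(PRIMARY_HEALTH_URL, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

def run_watchdog(activate_cloud_decisioning) -> None:
    """Poll the primary; hand replenishment decisioning to the cloud
    system after consecutive failed checks. The activation callback is
    supplied by the caller."""
    failures = 0
    while True:
        if primary_is_healthy():
            failures = 0
        else:
            failures += 1
            if failures >= FAILURES_BEFORE_FAILOVER:
                activate_cloud_decisioning()
                return
        time.sleep(CHECK_INTERVAL_S)
```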
Because the supply chain backup system’s ultimate output was not automated control but intelligence for action by human decision-makers, the team built in a user-centric web interface. Cloud microservices and APIs kept the solution scalable and flexible. In side-by-side tests, the new system’s predictive metrics were notably more accurate than those of the existing primary production system. Though the cloud system “takes over” only during a data centre outage, its AI-powered forecasting runs all the time, giving distribution centre and store managers an overlay they can use to wring greater control out of the older business-as-usual inventory system.
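The article does not show the interface layer either. As a sketch under assumptions, a small HTTP microservice along these lines could expose the forecast overlay to the web interface; the route, payload shape and choice of Flask are illustrative, not the engagement’s actual API:

```python
# Illustrative microservice exposing forecast intelligence to the web UI.
# Route, payload shape and lookup data are assumptions for this sketch.
from flask import Flask, jsonify, abort

app = Flask(__name__)

# Stand-in for a read from the demand_forecast table produced upstream.
_FORECASTS = {
    ("SKU123", "store-42"): {"avg_daily_demand": 18.5, "days_of_cover": 3.2},
}

@app.get("/forecast/<sku>/<store_id>")
def get_forecast(sku: str, store_id: str):
    forecast = _FORECASTS.get((sku, store_id))
    if forecast is None:
        abort(404)
    return jsonify({"sku": sku, "store_id": store_id, **forecast})

if __name__ == "__main__":
    app.run(port=8080)
```

Serving the overlay through a stateless service like this is one way microservices and APIs keep a solution scalable: instances can be added behind a load balancer without coordination.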
1. The team's use of Agile best practices and DevOps techniques from start to finish helped deliver the solution incrementally.
2. In end-to-end testing, the system met all plan requirements with zero critical issues.
3. The system as designed has been extended from the retailer's core inventory to speciality areas such as pharmacy and member sales.
4. The reference architecture of the new inventory system is adaptable to other parts of the business, and the company's Finance department is already working to adopt it.
5. In side-by-side tests, the new system’s predictive metrics were notably more accurate than the existing primary production system.
<12 months
The project moved from discovery to implementation in less than 12 months, finishing three weeks ahead of schedule.
10x lower TCO
The total cost of ownership (TCO) of the cloud-based system was one-tenth that of a backup built on data centre redundancy.
72 hours to minutes
The recovery time objective for emerging from the effects of a data centre outage shrank from 72 hours to a matter of minutes.
300+ user stories
Designing the new system involved more than 300 user stories, and 230 data pipelines ingest and process its operational data.