WHAT IS MACHINE LEARNING?

This article introduces machine learning and the concepts directly related to it.

Machine learning is the field of study that gives computers the ability to learn without being explicitly programmed. It is a subset of artificial intelligence (AI) and computer science that focuses on the use of data and algorithms to imitate the way humans learn, gradually improving its accuracy as it does so. By using statistical learning and optimisation methods, computers can analyse datasets and identify patterns in the data. Machine learning techniques leverage data mining to identify historic trends that inform future models.

According to the University of California, Berkeley, the typical supervised machine learning algorithm consists of three main components:

  • A decision process: A recipe of calculations or other steps that takes in the data and returns a guess at the kind of pattern the algorithm is looking to find in the data.
  • An error function: A method of measuring how good the guess was by comparing it to known examples (when they are available). Did the decision process get it right? If not, how do you quantify how bad the miss was?
  • An updating or optimisation process: The algorithm looks at the miss and then updates how the decision process comes to the final decision so that the miss will not be as great the next time.
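The three components above can be sketched in a few lines of Python. The toy dataset, the linear "decision process" and the learning rate below are illustrative assumptions, not taken from any specific library:

```python
# A minimal sketch of the three components of a supervised learning algorithm,
# fitting a one-parameter linear model by gradient descent (illustrative only).

# Toy labelled data: y is roughly 3 * x
data = [(1.0, 3.1), (2.0, 5.9), (3.0, 9.2), (4.0, 11.8)]

w = 0.0  # the model's single parameter

def decision_process(x, w):
    """Takes in data, returns a guess (here: a simple linear prediction)."""
    return w * x

def error_function(w):
    """Measures how good the guesses are against the known examples (MSE)."""
    return sum((decision_process(x, w) - y) ** 2 for x, y in data) / len(data)

def update(w, lr=0.01):
    """Nudges the parameter so the next miss is smaller (gradient descent)."""
    grad = sum(2 * (decision_process(x, w) - y) * x for x, y in data) / len(data)
    return w - lr * grad

for _ in range(200):
    w = update(w)

print(round(w, 2))  # close to the underlying slope of 3
```

After 200 update rounds the parameter settles near the least-squares solution, which is the loop the three Berkeley components describe: guess, measure the miss, update.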

Machine learning is a key component in the growing field of data science. Using statistical methods, algorithms are trained to make classifications or predictions and uncover key insights from data.

HOW DOES A MACHINE LEARNING ALGORITHM LEARN?

The technology company Nvidia distinguishes four learning models that are defined by the level of human intervention:

  • Supervised learning: If you are learning a task under supervision, someone is with you, prompting you and judging whether you’re getting the right answer. Supervised learning is similar in that it uses a full set of labelled* data to train an algorithm.
  • Unsupervised learning: In unsupervised learning, a deep learning model is handed a dataset without explicit instructions on what to do with it. The training dataset is a collection of examples without a specific desired outcome or correct answer. The neural network then attempts to find structure in the data automatically by extracting useful features and analysing how they relate. It learns by looking for patterns.
  • Semi-supervised learning: Semi-supervised learning is, for the most part, just what it sounds like: a training dataset with both labelled and unlabelled data. This method is particularly useful in situations where extracting relevant features from the data is difficult or where labelling examples is a time-intensive task for experts.
  • Reinforcement learning: In this kind of machine learning, AI agents are trying to find the optimal way to accomplish a particular goal or improve the performance of a specific task. If the agent takes action that moves the outcome towards the goal, it receives a reward. To make its choices, the agent relies both on learnings from past feedback and on exploration of new tactics that may present a larger payoff. The overall aim is to predict the best next step that will earn the biggest final reward. Just as the best next move in a chess game may not help you eventually win the game, the best next move the agent can make may not result in the best final result. Instead, the agent considers the long-term strategy to maximise the cumulative reward. It is an iterative process: the more rounds of feedback, the better the agent’s strategy becomes. This technique is especially useful for training robots to make a series of decisions for tasks such as steering an autonomous vehicle or managing inventory in a warehouse.

* Fully labelled means that each example in the training dataset is tagged with the answer the algorithm should produce on its own. So a labelled dataset of flower images would tell the model which photos were of roses, daisies and daffodils. When shown a new image, the model compares it to the training examples to predict the correct label.
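The reinforcement-learning loop described above – act, receive a reward, update, and balance exploration of new tactics against exploiting what has worked – can be sketched with a simple two-armed bandit. The payoff probabilities and exploration rate below are hypothetical:

```python
# A minimal sketch of reward-driven learning: a two-armed bandit where the
# agent learns which arm pays off more often (hypothetical payoffs).
import random

random.seed(0)
true_payoffs = [0.3, 0.7]   # arm 1 pays off more often (unknown to the agent)
estimates = [0.0, 0.0]      # the agent's learned value of each arm
counts = [0, 0]
epsilon = 0.1               # exploration rate: occasionally try a new tactic

for _ in range(5000):
    # Explore occasionally, otherwise exploit the best-known arm
    if random.random() < epsilon:
        arm = random.randrange(2)
    else:
        arm = 0 if estimates[0] >= estimates[1] else 1
    reward = 1.0 if random.random() < true_payoffs[arm] else 0.0
    counts[arm] += 1
    # Update the arm's estimated value from the feedback (incremental average)
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

# After many rounds of feedback the agent favours the better arm
print(counts[1] > counts[0])
```

The more rounds of feedback, the closer the agent's estimates get to the true payoffs – the iterative improvement the article describes.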

In all four learning models, the algorithm learns from datasets based on human rules or knowledge.

In the domain of artificial intelligence, you will come across the terms machine learning (ML), deep learning (DL) and neural networks (artificial neural networks – ANN). Artificial intelligence and machine learning are often used interchangeably, as are machine learning and deep learning. But, in fact, these terms are progressive subsets within the larger AI domain, as illustrated in Figure 1.

Figure 1. Artificial neural networks are a subset of deep learning, which is a subset of machine learning, which in turn is a subset of artificial intelligence.

Therefore, when discussing machine learning, we must also consider deep learning and artificial neural networks.

THE DIFFERENCE BETWEEN MACHINE LEARNING AND DEEP LEARNING IS THE WAY AN ALGORITHM LEARNS

Unlike machine learning, deep learning does not require human intervention to process data. Deep learning automates much of the feature extraction piece of the process, eliminating some of the manual human intervention required, which means it can be used for larger data sets.

“Non-deep” machine learning is more dependent on human intervention: human experts must first determine the set of features so that the algorithm can understand the differences between data inputs, and this usually requires more structured data for the learning process.

“Deep” machine learning can leverage labelled datasets, also known as supervised learning, to inform its algorithm. However, it does not necessarily require a labelled dataset. It can ingest unstructured data in its raw form (e.g., text and images), and it can automatically determine the set of features that distinguishes between different categories of data. Figure 2 illustrates the difference between machine learning and deep learning.

Figure 2. The difference between machine learning and deep learning.

Deep learning uses multiple layers to progressively extract higher-level features from the raw input. For example, in image processing, lower layers may identify edges, while higher layers may identify the concepts relevant to a human, such as digits or letters or faces.

In deep learning, each level learns to transform its input data into a slightly more abstract and composite representation. In an image-recognition application, the raw input may be a matrix of pixels. The first representational layer may abstract the pixels and encode edges; the second layer may compose and encode arrangements of edges; the third layer may encode a nose and eyes; and the fourth layer may recognise that the image contains a face. Importantly, a deep learning process can learn on its own which features to place optimally in which level. This does not fully eliminate the need for manual tuning – for example, varying numbers of layers and layer sizes can provide different degrees of abstraction. The word “deep” in “deep learning” refers to the number of layers through which the data is transformed.

NEURAL NETWORKS

An artificial neural network (ANN) is a computer system designed to classify information in the same way a human brain does, while retaining the innate advantages computers hold over us, such as speed, accuracy and lack of bias. For example, an ANN can be taught to recognise images and classify them according to the elements they contain. Essentially, it works on a system of probability – based on the data fed to it, it can make statements, decisions or predictions with a degree of certainty. The addition of a feedback loop enables “learning”: by sensing or being told whether its decisions are right or wrong, the network modifies the approach it takes in the future.

Artificial neural networks learn multiple levels of detail, or representations, of the data. Through these different layers, information passes from low-level parameters to higher-level parameters. These different levels correspond to various levels of data abstraction, leading to learning and recognition. An ANN is based on a collection of connected units called artificial neurons (analogous to biological neurons in a biological brain). Each connection (synapse) between neurons can transmit a signal from one neuron to another. The receiving (postsynaptic) neuron can process the signal(s) and then signal to the neurons connected to it downstream. Neurons may have a state, generally represented by a real number, typically between 0 and 1. Neurons and synapses may also have a weight that varies as learning proceeds, increasing or decreasing the strength of the signal sent downstream. Typically, neurons are organised in layers, as illustrated in Figure 3. Different layers can perform different kinds of transformations on their inputs. Signals travel from the first (input) layer to the last (output) layer, possibly after traversing the layers multiple times.
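The layered structure can be sketched in a few lines: each neuron takes a weighted sum of the signals it receives and squashes it into a state between 0 and 1. The weights and biases below are hand-picked for illustration, where a real network would learn them:

```python
# A minimal sketch of signals passing through the layers of an ANN
# (weights and biases are illustrative; training would adjust them).
import math

def sigmoid(x):
    """Squashes a neuron's summed input into a state between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """Each neuron: weighted sum of incoming signals, then activation."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Input layer (2 values) -> hidden layer (3 neurons) -> output layer (1 neuron)
x = [0.5, 0.8]
hidden = layer(x, weights=[[0.2, -0.4], [0.7, 0.1], [-0.5, 0.9]],
               biases=[0.0, 0.1, -0.2])
output = layer(hidden, weights=[[1.0, -1.0, 0.5]], biases=[0.0])

print(0.0 < output[0] < 1.0)  # every neuron state lies between 0 and 1
```

Learning would consist of adjusting the weight on each synapse so that the output layer's answer moves closer to the desired one.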

Figure 3. Layers in an artificial neural network.

USES OF MACHINE LEARNING

There are many applications for machine learning; it is one of the three key elements of Intelligent Automation and an autonomous operating model within Industry 4.0. Computer programs can read text and work out whether the writer was making a complaint or offering congratulations. They can listen to a piece of music, decide whether it is likely to make someone happy or sad, and find other pieces of music to match the mood. In some cases, they can even compose their own music that either expresses the same themes or is likely to be appreciated by the admirers of the original piece.

Neural networks have been used on a variety of tasks, including computer vision, speech recognition, machine translation, social network filtering, playing board and video games, and medical diagnosis. As of 2017, neural networks typically have a few thousand to a few million units and millions of connections. Although this number is several orders of magnitude less than the number of neurons in a human brain, these networks can perform many tasks at a level beyond that of humans (e.g., recognising faces, playing “Go”).

 

INVESTING IN INDUSTRY 4.0 TECHNOLOGIES YIELDS SIGNIFICANT BENEFITS

In 2018 the World Economic Forum (WEF) launched an initiative, Shaping the Future of Advanced Manufacturing and Production, to demonstrate the true potential of Industry 4.0 technologies to transform the very nature of manufacturing. Learnings from 69 frontrunner companies, with 450 use cases in action, reveal that organisations investing in Industry 4.0 technology are realising significant improvements in productivity, sustainability, operating cost, customisation and speed to market.

Here are just a few numbers from the 450 use cases: labour productivity up by 32% to 86%, order lead times down by 29% to 82%, field quality up 32%, manufacturing costs down 33%, OEE up 27%, new product design lead time down 50%.

Additionally, frontrunner companies showed that by investing in Industry 4.0 technologies they can solve business problems while simultaneously reducing environmental detractors such as waste, consumption and emissions. While the greatest environmental benefits come from core green sustainability initiatives (such as commitments to renewable energy), Industry 4.0 use cases have shown significant environmental impact as well, reducing energy consumption by more than one-third and water use by more than one-quarter.

Of the 69 frontrunners within the WEF initiative to date across the globe, 64% have been able to drive growth by adopting Industry 4.0 solutions. In all those cases, with little to no capital expenditure, they were able to unlock capacity and grow by coupling some of the technology solutions with a much more flexible production system. The business case is big and the payback is short, both for large companies and for SMEs.

HOWEVER, MOST COMPANIES STRUGGLE TO IMPLEMENT

Most companies struggle to start and scale an Industry 4.0 transformation because they lack people with the right skills and knowledge and because of a limited understanding of the technology and vendor landscape. On average, 72% of companies don’t get beyond the pilot phase.

AIMA enables manufacturing companies to understand where they stand and to design an implementation roadmap that helps them start their Industry 4.0 implementation journey or progress to the next level. AIMA assesses your operations along eight elements, as shown in Figure 1.

Figure 1. The AIMA comprises eight elements with 33 categories in total.

In total, the eight elements are made up of 33 categories (see Figure 2), and each category spans the four fundamental building blocks of Industry 4.0: processes, technology, people and competencies, and organisation.

Figure 2. The eight elements of the AIMA with the 33 categories, each spanning processes, technology, people and competencies, and organisation.

HOW AIMA SUPPORTS YOU ON YOUR INDUSTRY 4.0 IMPLEMENTATION JOURNEY

AIMA helps you:

  • build knowledge
  • tear down interdepartmental walls and create strategic alignment
  • understand where your operations stand – what is strong and must be maintained
    and what needs to be improved
  • understand what your key areas are and what you need to focus on.

AIMA helps you establish a company-specific interpretation of key principles and concepts. It creates an improved case for change and provides more momentum to implement the change.

HOW AIMA WORKS

AIMA consists of four steps:

  • Preparation – get to know the members of the leadership team and understand the vision and strategy, how the team views market developments, challenges and opportunities, and how the company is developing within this context; also take an inventory of expectations for the days ahead.
  • The first workshop day – identification of and alignment on the case for change: an introduction to Industry 4.0, exploring how it affects the strategy (and its execution), testing the extent of alignment within the leadership team and identifying, or checking whether there is, a case for change.
  • The second workshop day – the Industry 4.0 Maturity Assessment: assessing operations using a selection from the AIMA categories, prioritising the KPIs and identifying the focus areas.
  • The third workshop day – design of the implementation roadmap: a sequence of steps that addresses processes, technology, people & capabilities and organisation, identification of risks and design of a risk mitigation plan.

Focusing on these areas will accelerate performance improvements in operations. AIMA provides the insights, designs an implementation roadmap and is a strategic tool to regularly assess progress and refine your roadmap based on new insights. Starting at the operations leadership level allows us to create an overall framework. AIMA is then deployed at the next level down, into the respective factories. Again, we begin with preparation, followed by three workshop days, now with the factory leadership team:

  • Preparation – get to know the members of the factory leadership team and understand the factory vision and strategy, how the team views market developments, challenges and opportunities, and how the factory is developing within this context; also take an inventory of expectations for the days ahead.
  • The first workshop day – identification of and alignment on the case for change: an introduction to Industry 4.0, exploring how it affects the strategy (and its execution), testing the extent of alignment within the factory leadership team and identifying, or checking whether there is, a case for change.
  • The second workshop day – the Industry 4.0 Maturity Assessment: assessing operations using a selection from the AIMA categories, prioritising the KPIs and identifying the focus areas.
  • The third workshop day – design of the implementation roadmap: prioritisation of factory KPIs and identification of focus areas, a sequence of steps that addresses processes, technology, people & capabilities and organisation, identification of risks and design of a risk mitigation plan.

Making improvements in these focus areas will make the biggest impact on the factory’s performance within the overall framework. Leveraging this cascaded approach creates the biggest wins for the whole business rather than just a sub-optimisation of an individual factory.

AIMA OUTCOMES FOR YOUR ORGANISATION

AIMA provides four key outcomes:

  • Understanding of Industry 4.0, its key principles and concepts, and how they affect strategy (execution)
  • Alignment within the operations leadership team and factory leadership teams
  • Understanding of your Industry 4.0 maturity level / readiness
  • Priority of focus areas to create short-term business value within a long-term context

PUT YOUR PEOPLE AT THE CENTRE OF YOUR INDUSTRY 4.0 IMPLEMENTATION

AIMA will generate initial momentum. However, it is worth noting that any Industry 4.0 implementation will only be successful if you put your people at the centre of it.

The biggest challenge for a company is not choosing the right technology, but overcoming the lack of a digital culture and the relevant skills in the organisation. Investing in the right technologies is important – but success or failure does not ultimately depend on specific sensors, algorithms or analysis programs.

The crux lies in a wide range of people-oriented factors. Axisto supports you in the development of a robust digital culture and ensures change is developed from within and is driven by clear leadership from the top.

WHY AXISTO?

Axisto was founded in 2006 to help companies accelerate their operational performance – fast, measurable and lasting. We have executed more than 150 projects across Europe.

We have concrete on-the-ground experience, which is why our approach is practical and pragmatic. We combine subject-matter expertise with excellent change management skills.

We see change through and do whatever it takes to make our clients successful.

 

THE GOAL OF USING INTELLIGENT AUTOMATION

The goal of using Intelligent Automation (IA) is to achieve better business outcomes by streamlining and scaling decision making across the business. IA adds value by increasing process speed, reducing costs, improving compliance and quality, increasing process resilience and optimising decision results. Ultimately, it improves customer and employee satisfaction, improves cash flow and EBITDA, and decreases working capital.

WHAT IS INTELLIGENT AUTOMATION?

IA is a concept that leverages a new generation of software-based automation. It combines methods and technologies to execute business processes automatically on behalf of knowledge workers. This automation is achieved by mimicking the capabilities that knowledge workers use in performing their work activities (e.g., language, vision, execution and thinking & learning). IA effectively creates a software-based digital workforce that enables synergies by working hand-in-hand with the human workforce.

On the simpler end of the spectrum, IA performs repetitive, low-value-add and tedious work activities such as reconciling data or digitising and processing paper invoices. On the other end, IA augments workers by providing them with superhuman capabilities – for example, the ability to analyse millions of data points from various sources in a few minutes and generate insights from them.

THREE KEY COMPONENTS OF INTELLIGENT AUTOMATION

IA consists of three key components:

Business Process Management with Process Mining to provide greater agility and consistency to business processes.

 

Robotic Process Automation (RPA). Robotic process automation uses software robots, or bots, to complete repetitive manual tasks. RPA is the gateway to artificial intelligence, and it can leverage insights from artificial intelligence to handle more complex tasks and use cases.

Artificial Intelligence. By using machine learning and complex algorithms to analyse structured and unstructured data, businesses can develop a knowledge base and formulate predictions based on that data. This is the decision engine of IA.

WHERE AND HOW TO START WITH INTELLIGENT AUTOMATION?

Implementing Intelligent Automation might come across as a daunting endeavour, but it doesn’t need to be. Like any business leader, you will have a keen eye on accelerating operations performance, which in essence means improving the behaviour and outcomes of your business processes. Process Mining is a perfect tool to help you with that.

Process Mining is a data-driven analysis technique – delivered as analysis software – for objectively analysing and monitoring business processes. It does this based on the transactional data recorded in a company’s business information systems. The analysis software is system agnostic and doesn’t require any adaptation of your systems. Process Mining provides fact-based insight into how processes run in daily reality: all process variants (you will be surprised how many variations of one process there actually are in your business) and where the key problems and opportunities lie to improve process efficiency and effectiveness.
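The core idea can be sketched in a few lines: group transactional records by case, order them by time, and count the distinct paths (the process variants). The event log below is a hypothetical purchase-order example, not output from any real system:

```python
# A minimal sketch of the Process Mining idea: reconstruct process variants
# from transactional records (hypothetical event log for illustration).
from collections import Counter

# Each record: (case id, activity, timestamp) as logged by business systems
event_log = [
    ("PO-1", "create order", 1), ("PO-1", "approve", 2), ("PO-1", "pay", 3),
    ("PO-2", "create order", 1), ("PO-2", "pay", 2),  # approval was skipped!
    ("PO-3", "create order", 1), ("PO-3", "approve", 2), ("PO-3", "pay", 3),
]

# Group events per case, ordered by time, to get each case's actual path
cases = {}
for case, activity, ts in sorted(event_log, key=lambda e: (e[0], e[2])):
    cases.setdefault(case, []).append(activity)

# Count how often each distinct path (process variant) occurs
variants = Counter(tuple(path) for path in cases.values())
for path, n in variants.most_common():
    print(n, "x", " -> ".join(path))
```

Even this toy log surfaces a deviation (an order paid without approval) – the kind of fact-based insight into process variants that commercial Process Mining tools deliver at scale.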

Process Mining is also an excellent way to prepare for the introduction of Robotic Process Automation, which could be the most relevant next step on your IA journey. Process Mining can be used purely as an analysis tool, but it can also be installed permanently to constantly monitor process performance and issues. It is a non-intimidating approach and enables a gradual implementation of Intelligent Automation.

THE IMPORTANCE OF A COMPANY-WIDE VISION AND SHARED ROADMAP

However, at some point – sooner rather than later – it is important to establish and communicate a comprehensive, company-wide vision for what you want Intelligent Automation to achieve: how automation will deliver value and boost competitive advantage. You need a shared roadmap for a successful implementation that covers processes, technology (including legacy systems), people & competencies and organisation.

Such a shared Intelligent Automation/Industry 4.0 Roadmap ensures a consistent, thoughtful approach to selecting, developing, applying, and evolving the IA/I4.0 structure to achieve the intended impact. The Axisto Industry 4.0 Maturity Assessment (AIMA) is an effective way to create such a shared implementation roadmap.

THE CRUX TO SUCCESS LIES IN A WIDE-RANGE OF PEOPLE-ORIENTED FACTORS

Importantly, the biggest challenge for a company is not choosing the right technology, but overcoming the lack of a digital culture and the relevant skills in the organisation. Investing in the right technologies is important – but success or failure does not ultimately depend on specific sensors, algorithms or analysis programs. The implementation and scaling of Intelligent Automation/Industry 4.0 requires a fundamental shift in mindset and behaviours at all levels of the organisation. The crux to success lies in a wide range of people-oriented factors.

Digital transformation programmes, new strategies and performance improvement programmes are all notoriously difficult to implement successfully. The vast majority of change initiatives struggle to achieve and maintain the planned programme goals; in fact, only 30% are successful. Timely and complete delivery of a critical initiative is therefore a true determinant of competitive advantage for any company.

 The vast majority of change initiatives stumble over the very thing they are trying to transform: the attitudes and behaviours of people at all levels of the organisation. Our Change Insider® (CI) measures people’s attitudes and behaviours towards the initiative and how they experience it. Based on these insights, the CI facilitates concrete strategic and tactical actions you need to take to make sure your change initiative is delivered successfully: on time, in full and in a sustainable manner.

HOW THE CHANGE INSIDER® ENSURES SUCCESS

The CI measures people’s perception of an initiative and how they experience it. This is done by asking a number of custom-designed questions as part of a short online survey that takes about 6 minutes to complete. The questions are created by collaborating with people from a cross-section of your organisation and cover the context, objectives, content and approach of your specific initiative. Therefore, the questions are tailored to your organisation and your change initiative. These are the crucial questions that live in your organisation about this specific initiative.

Everyone within the scope of the initiative answers these crucial questions. They do this in a confidential manner and can also add further comments. The survey results are then presented in practical, actionable reports for every relevant cross-section of your organisation.

The reports allow you to compare business units, departments, teams and levels in the organisation. You can see how your people are experiencing the initiative and how the chosen approach influences both the adoption of the change and the actual change itself. The feedback also highlights any differences between different levels or parts of the organisation. Therefore, you can carry out differentiated interventions and keep the entire initiative on track. The survey is repeated at fixed intervals. This way, the CI tracks the effect of interventions on the adoption of change, the actual change itself and the perception of the initiative over time. This provides information about what needs to be done when and where in the organisation to achieve the desired progress and sustainability of the change (see Figure 1).

Figure 1. An example from a CI report that demonstrates how people are experiencing the initiative by showing the development of answers to critical questions over two survey cycles.

The goal of the Change Insider® is very different to employee engagement surveys. The CI focuses on bringing about sustainable change with a specific initiative. Employee engagement surveys measure how dedicated employees are to their workplace or their employer.

THE DYNAMICS OF CHANGE

People’s ability to change their attitudes and behaviours is determined mainly by their perceptions and intentions. So we must first change perceptions and intentions before any change in attitudes and behaviours occurs. But how do we do this? The best way to influence people’s perceptions and intentions is to provide information and encourage people to gain new experiences.

Perceptions and intentions reflect people’s motivations and indicate how hard people are willing to try, or how much effort they intend to put in, to display the required behaviour. During a change process, people are confronted with two forces: first, a change tension (the perceived need and urgency of the initiative) and, second, the power to change (the willingness to support and adopt the change and the ability to contribute). Both forces are needed in a programme to bring about change.

The way people experience these forces is the most important indicator of people’s perceptions and intentions towards an initiative. The rating for these two indicators gives the best prediction about a person’s intention to adopt the attitude and behaviour that is required. In different parts and levels of the organisation, the two forces are likely to develop differently, as shown in Figure 2. This drives the need for specific interventions for different parts of the organisation.

Figure 2. An example from a CI report showing the survey results of three teams (A, B, M) and the development of the effects of interventions over two survey cycles. The teams and their development can easily be compared.

The Change Insider® provides fact-based guidance for precisely these differentiated interventions to enable the timely and complete delivery of your mission-critical initiative.

“Tell me where you spend your money and I will tell you what your strategy is.” There is probably no better sentence to describe the potential difference between an intended strategy and a de facto strategy. Zero-based budgeting (ZBB) is a powerful approach to accelerate growth, create value and make your strategy happen.

WHAT IS ZERO-BASED BUDGETING?

ZBB starts from a blank sheet of paper, not from last year’s budget. On a very granular level, you start by determining what resources various business units require to deliver the strategic goals. You then address individual cost categories across all business units and justify all expenditure. In ZBB the base line is not last year’s budget, but “zero”.

ZBB was introduced in the 1960s and was slow in gaining traction. It had a brief spell of popularity and then sank into obscurity. Now, supported by advances in digitisation, it is on the rise again. But it is no longer just being used in the consumer packaged goods industry, nor focused only on sales and general administrative expenditure. It has begun to spread across industries and functions. And rightfully so, because ZBB is appropriate for any industry and all functions: procurement, supply chain, sales and marketing, service and support, and others.

ZERO-BASED BUDGETING IS NOT JUST A COST-CONTROL TOOL

Many companies use it as a cost-control tool. However, this vastly underestimates its real power. When used in a strategic context, ZBB can reconfigure cost structures, free up investment funds and accelerate growth. Successful companies start with a solid “What by How” objective that gives the company direction. The related goals then lead to questions about which investments are necessary and what the total cost structure needs to be to enable these investments. This way, ZBB is tightly integrated with the company’s strategy. It addresses both the cost discipline and the investments and opportunities that drive growth. However, using ZBB as a one-time exercise won’t cut it.

ZERO-BASED BUDGETING TRANSFORMS YOUR BUSINESS

ZBB is not a one-time exercise; it is a way of doing business and part of the DNA of an organisation. Its implementation not only redesigns your processes, policies and systems, but also instils new mindsets and behaviours. ZBB establishes clear cost accountability and disciplines to reduce and permanently eliminate costs that add little or no value. At the same time, it establishes a clear accountability to maximise the added value of the right expenditure. ZBB challenges companies to operate more efficiently and effectively across functions, geographies, divisions and business units to grow the top line and margin. It drives people to make conscious, strategic decisions and to get the right things done.

ZERO-BASED BUDGETING IN GOOD TIMES AND IN BAD TIMES – MAINTAIN STRATEGIC MOMENTUM

During a recession – and more so just afterwards – successful companies grow their EBIT whereas others stall. So why do some companies win while others lose? The common denominator with the winners is that they maintain a strict cost discipline and fund their growth levers in both the high and low phases of the economic cycle. They maintain strategic momentum regardless of market conditions.

We know that the total shareholder return a company achieves is mainly determined by its margin. The companies that generate a significantly higher long-term value grow their EBIT most and implement the required change during economic highs – i.e., pre-emptively. So the earlier a company transforms, the better its future performance.

AND WHAT ABOUT LEAN SIX SIGMA (LSS)?

Lean is often talked about as being an extensive toolbox. This misses the point. Lean is all about mindset and behaviours – it’s about strict cost discipline and a fast cash conversion cycle. Lean originated at Toyota when it was rebuilding its business just after World War II. The company was cash strapped – as were its customers.

The whole concept of flow within Toyota’s way of working was, and still is, to ensure a fast cash conversion cycle and eliminate low value-added costs. What’s more, they approached everything from the customer’s point of view – what is the customer willing to pay for? Everything else is waste. Having a fast cash conversion cycle creates the opportunity to grow faster. And that is what they did.

Similarly, Six Sigma is often talked about as being an extensive toolbox. But Six Sigma is also all about mindset and behaviours – one of relentlessly eliminating variation. Six Sigma was developed at Motorola in the late 1980s. The company was crippled by the cost of poor quality, which drained their margins and eroded their revenue. For the company to have a viable future, it had to drive down variation.

SO HOW ARE ZERO-BASED BUDGETING AND LEAN SIX SIGMA RELATED?

Zero-based budgeting is the overarching approach to drive the short- and long-term success of a company. From a business strategy point of view, first the “What by How” objective is set and then the top goals and targets are set. ZBB views the company as a whole from the highest level, informed by its purpose, vision and ambition. It affects every aspect of a company: the operating model including the organisation structure and policies. ZBB thrives on the right mindset and behaviours that are incorporated in the DNA of the organisation.

The mindset and behaviours behind Lean Six Sigma (LSS) fit fully with the mindset and behaviours behind zero-based budgeting. ZBB will steer the selection of tools from the LSS toolbox that best contribute to the business needs in the company’s drive to deliver on its vision and ambition – in the same way that Toyota and Motorola developed and acquired skills and tools that were in line with their business needs and informed by their mindset.

Disciplined cash and working capital management drives good operational and financial performance. However, performance in order-to-cash, inventory management and procure-to-pay slumped over the five years prior to the COVID outbreak. A closer analysis reveals that inventory optimisation poses companies the biggest challenge – in both volatile and non-volatile markets. More cash, lower inventory, better service: good inventory management is the key.

DELIVER DOUBLE DIGIT INVENTORY REDUCTIONS AND MAINTAIN OR IMPROVE SERVICE LEVELS

Decades of experience have taught us that going straight for the inventories themselves is both the quickest and the surest way of delivering a high-performing supply chain. Inventory sits right at the heart of your supply chain and is both a symptom and cause of your supply chain performance. Getting inventory right keeps your customers happy, increases flow and reduces cost and waste and frees up cash.

At Axisto, we combine the practical business focus of management consulting with the high-speed analytical capability of advanced information technology. We rapidly distil practical insights from data in Enterprise Resource Planning (ERP) systems. Our people concentrate on the human challenges of implementing and sustaining resilient and lean supply chains.

Our unique approach to supply chain puts inventory optimisation front and centre. This allows us to help deliver double-digit reductions in inventory while maintaining or improving service levels – at speed and with lower risk than traditional approaches.

OUR INVENTORY MANAGEMENT PROPOSITIONS

Axisto provides three inventory management propositions: inventory optimisation programmes, inventory analytics and inventory maturity assessments.

Our starting point with most clients is a quick scan. On the basis of just 3 standard reports from your ERP system, we quantify improvement potential item by item as well as overall. The output is both an immediate high-level quantification of improvement potential and the basis of a road map to deliver sustainable improvements quickly.
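
As a purely hypothetical illustration (not Axisto’s actual method), an item-by-item quick scan of this kind can be sketched as a comparison of stock on hand against a simple target stock derived from demand, lead time and service level. All field names, the z-score table and the formula below are assumptions for the sketch:

```python
import math

# Illustrative z-scores for a normal-approximation safety stock;
# the service levels and values here are assumptions.
Z = {0.90: 1.28, 0.95: 1.64, 0.98: 2.05}

def target_stock(avg_daily_demand, demand_std, lead_time_days, service_level):
    """Cycle stock over the lead time plus a simple safety stock."""
    cycle = avg_daily_demand * lead_time_days
    safety = Z[service_level] * demand_std * math.sqrt(lead_time_days)
    return cycle + safety

def quick_scan(items):
    """Quantify excess stock value per item and overall."""
    results = []
    for it in items:
        target = target_stock(it["avg_daily_demand"], it["demand_std"],
                              it["lead_time_days"], it["service_level"])
        excess_units = max(0.0, it["on_hand"] - target)
        results.append({"sku": it["sku"],
                        "excess_value": excess_units * it["unit_cost"]})
    total = sum(r["excess_value"] for r in results)
    return results, total
```

Fed with per-SKU extracts from standard ERP reports, a sketch like this yields both a per-item view and an overall figure, which is the shape of output the quick scan described above produces.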

INVENTORY OPTIMISATION PROGRAMMES

We provide expert analytics and effective change management backed up by a clearly measurable business case. Improvements to inventory positions of 20% or more, sometimes much more, are usually achievable within the first year, at a high return on investment.

INVENTORY ANALYTICS

Do you find it difficult to really understand what your inventory data is telling you, or what you should do about it? Do you have optimisation tools that are difficult to use or which give results you know to be wrong, but you’re not sure why? With the proprietary technology that we use, we provide clients with rapid actionable insights into their inventory data.

In addition, we help clients with a range of targeted analytical exercises, ranging from strategic inventory positioning (where in your supply chain should you hold inventory?) through to setting inventory policies for items that are hard to optimise, such as spare parts, or make to order products.

INVENTORY MATURITY ASSESSMENTS

Inventory is influenced by almost every aspect of your business. Therefore, it can be hard to know at an enterprise level where the biggest opportunities for further improvement are, or how you compare to your competitors.

Axisto can take the temperature of your inventory management. We combine a granular, bottom-up quantitative assessment of your potential for improvement with a qualitative overview of your people, processes and systems, including relevant benchmarks, to give you actionable insights into where to find the next step change in your performance journey.

A CASE

CHALLENGE

A medium-sized industrial manufacturing firm with a strong market position and profitability had little historical focus on inventory. As a consequence, inventory was gradually increasing. It was time to act.

RESULTS

Inventory was reduced by more than 50% from the initial baseline over a period of 3 years, while service levels were maintained or improved. Improvements in the underlying data led to a better understanding of how and why to act – inventory management capability was significantly developed within the client’s teams.

SOME QUOTES

“We finally have full transparency of what we have, so we can make fact-based decisions on a weekly basis.” – Automotive manufacturer

“Since starting a programme, we have reduced our inventories by over 50%.” – Industrial manufacturer

“The results are exceptional and have made a major difference to our cash flow.” – Global manufacturing company

“The inventory programme brought a wide range of process issues into sharp focus, with an impact much broader than just inventory.” – Market-leading manufacturer

These days, customers expect shorter fulfilment timeframes and have a lower tolerance for late or incomplete deliveries. At the same time, supply chain leaders face growing costs and volatility. Process mining creates value in the supply chain by creating transparency and visibility across it and by proposing decisions, with their trade-offs, for real-time optimisation of flows.

FULL TRANSPARENCY

Instead of working with the designed process flow or the flow depicted in the ERP system, process mining monitors the actual process at whatever granularity you want: the end-2-end process, procure-2-pay, manufacturing, inventory management, accounts payable, or a specific type of product, supplier, customer, individual order or individual SKU. It also monitors compliance, conformance, and cooperation between departments, or between clients, your own departments and suppliers.
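
The core mechanism behind this is reconstructing actual process variants from an event log rather than assuming the designed flow. A minimal sketch, with hypothetical event-log fields (`case_id`, `timestamp`, `activity`) standing in for what a real process mining tool would extract from the ERP system:

```python
from collections import Counter

def variants(event_log):
    """Group events by case, order them by timestamp, and count each
    distinct activity sequence (i.e. each actual process variant)."""
    cases = {}
    for e in sorted(event_log, key=lambda e: (e["case_id"], e["timestamp"])):
        cases.setdefault(e["case_id"], []).append(e["activity"])
    return Counter(tuple(trace) for trace in cases.values())

# Toy purchase-order log: PO-2 skips the approval step, so the actual
# process deviates from the designed flow.
log = [
    {"case_id": "PO-1", "timestamp": 1, "activity": "create order"},
    {"case_id": "PO-1", "timestamp": 2, "activity": "approve"},
    {"case_id": "PO-1", "timestamp": 3, "activity": "receive goods"},
    {"case_id": "PO-2", "timestamp": 1, "activity": "create order"},
    {"case_id": "PO-2", "timestamp": 2, "activity": "receive goods"},
]
```

Counting variants this way is what exposes non-conformance: the deviating sequence shows up as its own variant instead of staying hidden behind the designed flow.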

VISIBILITY ACROSS THE SUPPLY CHAIN

Dashboards are created to suit your requirements. They are flexible and can easily be altered whenever your needs change or bottlenecks shift. They create real-time insights into the process flow. At any time you know how much revenue is at stake because of inventory issues, what the root causes are, which decisions you can take, and what their effects and trade-offs will be.

If supplier reliability is not at the target level at the highest reporting level, you can easily drill down in real-time to a specific supplier and a particular SKU to discover what is causing the problem. Suppliers can also be benchmarked against the best-practice service levels of competing suppliers.
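
Such a drill-down amounts to re-aggregating the same delivery records at progressively finer grain. A sketch under assumed field names (`supplier`, `sku`, `on_time`), not any particular tool’s API:

```python
def on_time_rate(records, keys):
    """Share of on-time deliveries, grouped by the given key fields.
    Call with keys=["supplier"] for the top level, then with
    keys=["supplier", "sku"] to drill down to a particular SKU."""
    totals, hits = {}, {}
    for r in records:
        k = tuple(r[key] for key in keys)
        totals[k] = totals.get(k, 0) + 1
        hits[k] = hits.get(k, 0) + (1 if r["on_time"] else 0)
    return {k: hits[k] / totals[k] for k in totals}
```

Running the supplier-level view first and then the supplier/SKU view on the same records mirrors the dashboard drill-down: the coarse view flags the underperforming supplier, the fine view isolates the SKU causing it.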

MAKING INFORMED DECISIONS AND TAKING THE RIGHT ACTIONS

The interactive reports highlight gaps between actual and target values and give details of the discrepancies (figure A). By clicking on one of the highlighted issues, you can assign an appropriate action to a specific person (figure B) – or this can even happen automatically when a discrepancy is detected. Direct communication about the action is facilitated in real-time (figure C).

Fig. A: details of the discrepancies. Fig. B: pop-up to write a task. Fig. C: exchanging information.

HOW PROCESS MINING CREATES VALUE IN THE SUPPLY CHAIN – WRAP UP

Process mining is an effective tool to optimise end-2-end supply chain flows in terms of margin, working capital, inventory level and profile, cash, order cycle times, supplier reliability, customer service levels, sustainability, risk, predictability, etc. Because process mining monitors the actual process flows in real-time, it creates full transparency and therefore adds significant value beyond classic BI suites. Process mining can be integrated with existing BI applications to enhance reporting and decision-making. We consider process mining a core element of Industry 4.0.

THIS INTERVIEW WAS PUBLISHED BY THE GUARDIAN

Zoë Corbyn

Sun 6 Jun 2021 09.00 BST

‘AI systems are empowering already powerful institutions – corporations, militaries and police’: Kate Crawford. Photograph: Stephen Oxenbury

The AI researcher on how natural resources and human labour drive machine learning and the regressive stereotypes that are baked into its algorithms

Kate Crawford studies the social and political implications of artificial intelligence. She is a research professor of communication and science and technology studies at the University of Southern California and a senior principal researcher at Microsoft Research. Her new book, Atlas of AI, looks at what it takes to make AI and what’s at stake as it reshapes our world.

You’ve written a book critical of AI but you work for a company that is among the leaders in its deployment. How do you square that circle?
I work in the research wing of Microsoft, which is a distinct organisation, separate from product development. Unusually, over its 30-year history, it has hired social scientists to look critically at how technologies are being built. Being on the inside, we are often able to see downsides early before systems are widely deployed. My book did not go through any pre-publication review – Microsoft Research does not require that – and my lab leaders support asking hard questions, even if the answers involve a critical assessment of current technological practices.

What’s the aim of the book?
We are commonly presented with this vision of AI that is abstract and immaterial. I wanted to show how AI is made in a wider sense – its natural resource costs, its labour processes, and its classificatory logics. To observe that in action I went to locations including mines to see the extraction necessary from the Earth’s crust and an Amazon fulfilment centre to see the physical and psychological toll on workers of being under an algorithmic management system. My hope is that, by showing how AI systems work – by laying bare the structures of production and the material realities – we will have a more accurate account of the impacts, and it will invite more people into the conversation. These systems are being rolled out across a multitude of sectors without strong regulation, consent or democratic debate.

What should people know about how AI products are made?
We aren’t used to thinking about these systems in terms of the environmental costs. But saying, “Hey, Alexa, order me some toilet rolls,” invokes into being this chain of extraction, which goes all around the planet… We’ve got a long way to go before this is green technology. Also, systems might seem automated but when we pull away the curtain we see large amounts of low paid labour, everything from crowd work categorising data to the never-ending toil of shuffling Amazon boxes. AI is neither artificial nor intelligent. It is made from natural resources and it is people who are performing the tasks to make the systems appear autonomous.

Unfortunately the politics of classification has become baked into the substrates of AI

Problems of bias have been well documented in AI technology. Can more data solve that?
Bias is too narrow a term for the sorts of problems we’re talking about. Time and again, we see these systems producing errors – women offered less credit by credit-worthiness algorithms, black faces mislabelled – and the response has been: “We just need more data.” But I’ve tried to look at these deeper logics of classification and you start to see forms of discrimination, not just when systems are applied, but in how they are built and trained to see the world. Training datasets used for machine learning software casually categorise people into just one of two genders; label people according to their skin colour as one of five racial categories; and attempt, based on how people look, to assign moral or ethical character. The idea that you can make these determinations based on appearance has a dark past and unfortunately the politics of classification has become baked into the substrates of AI.

You single out ImageNet, a large, publicly available training dataset for object recognition…
Consisting of around 14m images in more than 20,000 categories, ImageNet is one of the most significant training datasets in the history of machine learning. It is used to test the efficiency of object recognition algorithms. It was launched in 2009 by a set of Stanford researchers who scraped enormous amounts of images from the web and had crowd workers label them according to the nouns from WordNet, a lexical database that was created in the 1980s.

Beginning in 2017, I did a project with artist Trevor Paglen to look at how people were being labelled. We found horrifying classificatory terms that were misogynist, racist, ableist, and judgmental in the extreme. Pictures of people were being matched to words like kleptomaniac, alcoholic, bad person, closet queen, call girl, slut, drug addict and far more I cannot say here. ImageNet has now removed many of the obviously problematic people categories – certainly an improvement – however, the problem persists because these training sets still circulate on torrent sites.

And we could only study ImageNet because it is public. There are huge training datasets held by tech companies that are completely secret. They have pillaged images we have uploaded to photo-sharing services and social media platforms and turned them into private systems.

You debunk the use of AI for emotion recognition but you work for a company that sells AI emotion recognition technology. Should AI be used for emotion detection?
The idea that you can see from somebody’s face what they are feeling is deeply flawed. I don’t think that’s possible. I have argued that it is one of the most urgently needed domains for regulation. Most emotion recognition systems today are based on a line of thinking in psychology developed in the 1970s – most notably by Paul Ekman – that says there are six universal emotions that we all show in our faces that can be read using the right techniques. But from the beginning there was pushback and more recent work shows there is no reliable correlation between expressions on the face and what we are actually feeling. And yet we have tech companies saying emotions can be extracted simply by looking at video of people’s faces. We’re even seeing it built into car software systems.

What do you mean when you say we need to focus less on the ethics of AI and more on power?
Ethics are necessary, but not sufficient. More helpful are questions such as, who benefits and who is harmed by this AI system? And does it put power in the hands of the already powerful? What we see time and again, from facial recognition to tracking and surveillance in workplaces, is these systems are empowering already powerful institutions – corporations, militaries and police.

What’s needed to make things better?
Much stronger regulatory regimes and greater rigour and responsibility around how training datasets are constructed. We also need different voices in these debates – including people who are seeing and living with the downsides of these systems. And we need a renewed politics of refusal that challenges the narrative that just because a technology can be built it should be deployed.

Any optimism?
Things are afoot that give me hope. This April, the EU produced the first draft omnibus regulations for AI. Australia has also just released new guidelines for regulating AI. There are holes that need to be patched – but we are now starting to realise that these tools need much stronger guardrails. And giving me as much optimism as the progress on regulation is the work of activists agitating for change.

The AI ethics researcher Timnit Gebru was forced out of Google late last year after executives criticised her research. What’s the future for industry-led critique?
Google’s treatment of Timnit has sent shockwaves through both industry and academic circles. The good news is that we haven’t seen silence; instead, Timnit and other powerful voices have continued to speak out and push for a more just approach to designing and deploying technical systems. One key element is to ensure researchers within industry can publish without corporate interference, and to foster the same academic freedom that universities seek to provide.

Atlas of AI by Kate Crawford is published by Yale University Press (£20). To support the Guardian order your copy at guardianbookshop.com. Delivery charges may apply.