In all sectors, companies are dealing with disruptions of increasing frequency and magnitude. Businesses must quickly scale down operations and then ramp them up again once demand returns. They have to switch product portfolios depending on the availability of components. Events that have caused havoc in the past decade include the Fukushima earthquake and tsunami in Japan, the Suez Canal blockage, lockdowns related to Covid-19 and its variants, semiconductor shortages, staff shortages, the war in Ukraine, exploding energy costs and high inflation.
Understandably, most of these disruptions took leadership teams by surprise. The worst of these disruptions have taken a toll on business output, revenue and profitability. Recovery can take months or even years.
Process mining provides the much-needed overview of the end-to-end supply chain, with better insight and information for proactive collaboration, both internally and across the overall supply chain. It also proposes decisions, together with their consequences, for real-time optimisation of flows.
PROCESS MINING – WHAT IT IS AND WHAT IT CAN DO
It provides all insights for targeted performance & efficiency improvements: fast, end-to-end and fact-based.
DISCOVER AND IMPROVE YOUR REAL PROCESSES
APPLICATIONS OF PROCESS MINING
Instead of working with the designed process flow or the process flow depicted in the ERP system, process mining monitors the actual process at whatever granularity you want: the end-to-end process, procure-to-pay, manufacturing, inventory management, accounts payable, or a specific type of product, supplier, customer, individual order or individual SKU. Process mining also monitors compliance, conformance, and cooperation between departments or between clients, your own departments and suppliers.
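To make this concrete, here is a minimal sketch of process discovery and conformance checking with the open-source pm4py library. The event-log file name is an assumption for illustration; a real log would be extracted from the ERP system.

    import pm4py

    # Load an event log exported from the business systems (file name is illustrative).
    log = pm4py.read_xes("purchase_to_pay.xes")

    # Discover the actual process model from the recorded events.
    net, im, fm = pm4py.discover_petri_net_inductive(log)

    # Conformance: how many cases actually follow the discovered flow?
    diagnostics = pm4py.conformance_diagnostics_token_based_replay(log, net, im, fm)
    fit = sum(d["trace_is_fit"] for d in diagnostics)
    print(f"{fit} of {len(diagnostics)} cases conform to the model")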
OVERVIEW OF THE ENTIRE SUPPLY CHAIN
Dashboards are created to suit your requirements. These are flexible and can easily be altered whenever your needs change and/or bottlenecks shift. They create real-time insight into the process flow. At any time, you know how much revenue is at stake because of inventory issues, what the root causes are, which decisions you can take, and what their effects and trade-offs will be.
If supplier reliability is below target at the highest reporting level, you can easily drill down in real time to a specific supplier and a particular SKU to discover what is causing the problem. Suppliers can also be held to the best-practice service level of competing suppliers.
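A minimal sketch of such a drill-down on transactional data, assuming an illustrative order-line extract; the file and column names are hypothetical:

    import pandas as pd

    # Order lines as recorded in the ERP system (file and column names are illustrative).
    orders = pd.read_csv("order_lines.csv", parse_dates=["promised_date", "delivered_date"])
    orders["on_time"] = orders["delivered_date"] <= orders["promised_date"]

    # Top level: overall supplier reliability.
    print("Overall on-time rate:", orders["on_time"].mean().round(3))

    # Drill down: reliability per supplier, then per SKU for the worst supplier.
    per_supplier = orders.groupby("supplier")["on_time"].mean().sort_values()
    worst = per_supplier.index[0]
    per_sku = orders[orders["supplier"] == worst].groupby("sku")["on_time"].mean().sort_values()
    print(f"Worst supplier: {worst}")
    print(per_sku.head())  # the SKUs driving the problem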
MAKING INFORMED DECISIONS AND TAKING THE RIGHT ACTIONS
The interactive reports highlight gaps between actual and target values and give details of the discrepancies (figure A). By clicking on one of the highlighted issues, you can assign an appropriate action to a specific person (figure B); this can even happen automatically when a discrepancy is detected. Direct communication about the action is facilitated in real time (figure C).
A process manufacturer with a network of assets spread across Europe needed to respond more flexibly to changes in customer demand while maintaining high asset utilisation, low working capital and low transport costs.
The situation was complex. The assets differed, each with its own characteristics. The outflow from the installations could not simply be stopped between production runs, and a change of material resulted in a massive production loss, although a product-type change without a change of material was doable.
The producer had 25 production lines and served 1,000 customers with a total of 2,500 products. In short, the perfect complex planning issue for which our More Optimal platform was designed.
The planners had been working with a combination of SAP and Excel spreadsheets. They were handling a huge number of variables and attempting to incorporate increasingly shorter delivery times. The planners understood their trade, but the complexity of the puzzle was too great for the resources available. There was much to gain.
Our generic More Optimal platform makes it possible to create a customer-specific application in a short time, with all relevant planning rules built in. The platform is set up in close consultation with the user. First, the relevant Key Performance Indicators (KPIs) were defined. These included (1) demand fulfilment, (2) asset pull / productivity, (3) inventory, (4) transport costs and (5) planning effort.
In a number of joint work sessions, we established the planning process and drew up the rules for allocating products to the various production lines. In addition, the transport options relating to production locations and the rules for product changes were built in. By working closely with the planners at every step, we gradually developed the application on the More Optimal platform; it now shows in real time the consequences of the planners’ decisions and gives advice on how to improve the planning.
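To make the kind of planning rules involved tangible, here is a minimal sketch of a product-to-line allocation as a linear programme using the open-source PuLP solver. The lines, products, costs and capacities are invented for illustration; the More Optimal platform itself encodes far richer rules.

    from pulp import LpProblem, LpMinimize, LpVariable, lpSum

    # Illustrative data: combined production + transport cost per unit, per (line, product).
    lines, products = ["L1", "L2"], ["P1", "P2", "P3"]
    demand = {"P1": 100, "P2": 80, "P3": 60}
    capacity = {"L1": 150, "L2": 120}
    cost = {("L1", "P1"): 4, ("L1", "P2"): 6, ("L1", "P3"): 5,
            ("L2", "P1"): 5, ("L2", "P2"): 4, ("L2", "P3"): 7}

    model = LpProblem("allocation", LpMinimize)
    x = {(l, p): LpVariable(f"x_{l}_{p}", lowBound=0) for l in lines for p in products}

    model += lpSum(cost[l, p] * x[l, p] for l in lines for p in products)  # minimise cost
    for p in products:                       # demand fulfilment
        model += lpSum(x[l, p] for l in lines) == demand[p]
    for l in lines:                          # line capacity
        model += lpSum(x[l, p] for p in products) <= capacity[l]

    model.solve()
    for (l, p), v in x.items():
        if v.value() > 0:
            print(l, p, v.value())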
The application is also used to evaluate what-if scenarios and their impact on the KPIs. The manufacturer uses this functionality as part of the annual planning and budgeting process and relies on it for concrete operational issues on a more regular basis.
Companies that pack fresh products face massive complexity and unpredictability. They process many different products, all of which have specific requirements in terms of quality, class and size. They deal with a multitude of packaging requirements and variability in price agreements for each customer. And they handle huge swings in supply and demand. But the time frame in which packers must match supply and demand is short.
How do you balance customer requirements with product and process complexity to achieve high customer satisfaction and high ‘valorisation’? And how do you deal with last minute changes in supply and demand – for example, if a batch is rejected because it does not meet the quality requirements?
The packer had been using Excel spreadsheets to allocate products on packaging lines and carry out detailed line planning. This had caused misunderstandings and mistakes – and a higher workload than necessary for the planners. They were losing time creating iterative plans, and there was uncertainty about which version of the plan was most up-to-date and about which numbers were correct.
We knew that the More Optimal platform would resolve these problems and explained the benefits to our client. The need was so great and the benefits so obvious that the packer did not even want a ‘proof of value’, but immediately decided to develop and implement a dedicated application based on the More Optimal platform.
The goals were to (1) arrive at a workable schedule faster, (2) run the operation more efficiently, (3) shorten lead times to improve product freshness, (4) fulfil demand better and (5) increase flexibility.
The More Optimal platform makes it possible to build a customer-specific application in a short time with all relevant planning rules built in. The application is set up in close consultation with the user. First, the relevant Key Performance Indicators (KPIs) are defined to quantitatively determine the quality of the allocation plan. Two of these KPIs were demand fulfilment and lead time (related to product freshness).
In a number of joint work sessions, we drew up the allocation rules for products and determined how products from suppliers should be allocated to customers. By working intensively with the packer, we developed a dedicated application that shows the consequences of the decisions made by the planners and gives advice for better planning. This application was further expanded, with support from the planners, to optimise the detailed planning per packaging line, minimise changeover times on the lines and increase the throughput capacity (OEE) of the lines. The application measures operational performance against the agreed KPIs.
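Changeover minimisation on a single line can be illustrated with a simple nearest-neighbour heuristic over a changeover-time matrix. This is a hypothetical sketch, not the application’s actual algorithm, which also weighs freshness, demand and line speed:

    # Greedy nearest-neighbour sequencing over a changeover-time matrix (minutes).
    changeover = {
        ("A", "B"): 5, ("A", "C"): 20, ("B", "A"): 5,
        ("B", "C"): 10, ("C", "A"): 20, ("C", "B"): 10,
    }

    def sequence(jobs, start):
        order, current, remaining = [start], start, set(jobs) - {start}
        while remaining:
            # always run the job with the cheapest changeover from the current one next
            nxt = min(remaining, key=lambda j: changeover[current, j])
            order.append(nxt)
            remaining.remove(nxt)
            current = nxt
        return order

    print(sequence(["A", "B", "C"], "A"))  # ['A', 'B', 'C']: 5 + 10 minutes of changeover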
Artificial Intelligence is hot. We can hardly do anything without coming into contact, consciously or unconsciously, with forms of Artificial Intelligence. And it is becoming increasingly important. This article is an introduction to the field of Artificial Intelligence. It starts with a definition and then explores the different sub-specialties, complete with description and some applications.
WHAT IS ARTIFICIAL INTELLIGENCE?
Artificial Intelligence (AI) uses computers and machines to imitate people’s problem-solving and decision-making skills. One of the leading textbooks in the field of AI is Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig. In it, they elaborate four possible goals or definitions of AI:
Systems that think like people
Systems that behave like people
Systems that think rationally
Systems that act rationally
Artificial intelligence plays a growing role in the (Industrial) Internet of Things ((I)IoT), among other areas, where (I)IoT platform software can provide integrated AI capabilities.
SUB-SPECIALTIES WITHIN ARTIFICIAL INTELLIGENCE
Several sub-specialties belong to the domain of Artificial Intelligence. While there is some interdependence between many of these specialties, each has unique characteristics that contribute to the overarching theme of AI. The Intelligent Automation Network distinguishes seven sub-specialties, shown in figure 1.
Each subspecialty is further explained below.
Machine learning is the field that focuses on using data and algorithms to let computers imitate the way humans learn, without being explicitly programmed, while gradually improving accuracy. The article “Axisto – an introduction to Machine Learning” takes a closer look at this specialty.
MACHINE LEARNING AND PREDICTIVE ANALYTICS
Predictive analytics and machine learning go hand in hand. Predictive analytics encompasses a variety of statistical techniques, including machine learning algorithms. Statistical techniques analyse current and historical facts to make predictions about future or otherwise unknown events. These predictive analytics models can be trained over time to respond to new data.
The defining functional aspect of these approaches is that predictive analytics provides a predictive score (a probability) for each “individual” (customer, employee, patient, product SKU, vehicle, part, machine, or other organisational unit) to determine, inform or influence organisational processes involving large numbers of “individuals”. Applications can be found in, for example, marketing, credit risk assessment, fraud detection, manufacturing, healthcare and government activities, including law enforcement.
Unlike other Business Intelligence (BI) technologies, predictive analytics is forward-looking: past events are used to anticipate the future. Often the unknown event of interest lies in the future, but predictive analytics can be applied to any type of “unknown”, be it past, present or future – for example, identifying suspects after a crime has been committed, or detecting credit card fraud as it occurs. The core of predictive analytics is capturing relationships between explanatory variables and predicted variables from past events, and exploiting them to predict the unknown outcome. Of course, the accuracy and usefulness of the results strongly depend on the level of data analysis and the quality of the assumptions.
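A minimal sketch of such per-individual scoring with scikit-learn, using synthetic data in place of real customer records (all data here is generated purely for illustration):

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Synthetic "individuals": explanatory variables plus a known past outcome.
    X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Capture the relationship between explanatory variables and the outcome...
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # ...and exploit it: a predictive score (a probability) per individual.
    scores = model.predict_proba(X_test)[:, 1]
    print(scores[:5].round(2))  # e.g. probability of churn/fraud/failure per case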
Machine Learning and predictive analytics can make a significant contribution to any organisation, but implementation without thinking about how they fit into day-to-day operations will severely limit their ability to deliver relevant insights.
To extract value from predictive analytics and machine learning, it is not just the architecture that needs to be in place to support these solutions. High-quality data must also be available to feed them and help them learn. Data preparation and quality are therefore key factors for predictive analytics. Input data can span multiple platforms and contain multiple big data sources; to be usable, these data must be centralised, unified and put into a coherent format.
To this end, organisations must develop a robust approach to monitor data governance and ensure that only high-quality data is captured and stored. Furthermore, existing processes need to be adapted to include predictive analytics and machine learning as this will enable organisations to improve efficiency at every point in the business. Finally, they need to know what problems they want to solve in order to determine the best and most appropriate model.
NATURAL LANGUAGE PROCESSING (NLP)
Natural language processing is the ability of a computer program to understand human language as it is spoken and written – also known as natural language. NLP is a way for computers to analyse and extract meaning from human language so that they can perform tasks such as translation, sentiment analysis, and speech recognition.
This is difficult, as it involves a lot of unstructured data. The style in which people speak and write (“tone of voice”) is unique to individuals and constantly evolves to reflect popular language use. Understanding context is also a problem – something that requires semantic analysis from machine learning. Natural Language Understanding (NLU) is a branch of NLP that picks up these nuances through machine reading comprehension rather than simply understanding literal meanings. The purpose of NLP and NLU is to help computers understand human language well enough to converse naturally.
All these functions get better the more we write, speak and talk to computers: they are constantly learning. A good example of this iterative learning is a feature like Google Translate that uses a system called Google Neural Machine Translation (GNMT). GNMT is a system that works with a large artificial neural network to translate more smoothly and accurately. Instead of translating one piece of text at a time, GNMT tries to translate entire sentences. Because it searches millions of examples, GNMT uses a broader context to derive the most relevant translation.
The following is a selection of tasks in natural language processing (NLP). Some of these tasks have direct real-world applications, while others more often serve as sub-tasks used to solve larger tasks.
Optical Character Recognition (OCR)
Determining the text associated with a given image representing printed text.
Speech Recognition (Speech-to-Text)
Determining the textual representation of speech from an audio recording of one or more speakers. This is the opposite of text-to-speech and is an extremely difficult problem. In natural speech, there are hardly any pauses between consecutive words, so speech segmentation is a necessary subtask of speech recognition (see ‘Word Segmentation’ below). In most spoken languages, the sounds representing successive letters merge into one another in a process called coarticulation, so the conversion of the analogue signal to discrete characters can be very difficult. And since words in the same language are spoken by people with different accents, speech recognition software must be able to recognise a wide variety of inputs as textually equivalent.
Text-to-Speech
Transforming the elements of a given text to produce a spoken representation. Text-to-speech can be used to help the visually impaired.
Word Segmentation (Tokenization)
Splitting a piece of continuous text into individual words. For a language like English, this is fairly trivial, as words are usually separated by spaces. However, some written languages such as Chinese, Japanese and Thai do not mark word boundaries in this way, and for those languages text segmentation is a significant task that requires knowledge of the vocabulary and morphology of words in the language. Word segmentation is also applied in, for example, creating bag-of-words representations in data mining.
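A small sketch of the contrast: whitespace splitting handles English, while segmenting a language without marked word boundaries needs vocabulary knowledge. The tiny Japanese lexicon below is purely illustrative:

    # Whitespace tokenization is near-trivial for English...
    print("the cat sat on the mat".split())
    # ['the', 'cat', 'sat', 'on', 'the', 'mat']

    # ...but fails for languages without marked word boundaries. A dictionary-based
    # longest-match over a (tiny, illustrative) lexicon sketches the idea:
    lexicon = {"東京", "都", "に", "住む"}

    def segment(text):
        tokens, i = [], 0
        while i < len(text):
            # take the longest lexicon entry matching at position i, else one character
            match = max((w for w in lexicon if text.startswith(w, i)),
                        key=len, default=text[i])
            tokens.append(match)
            i += len(match)
        return tokens

    print(segment("東京都に住む"))  # ['東京', '都', 'に', '住む']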
Document AI
A Document AI platform sits on top of NLP technology, allowing users with no previous experience of artificial intelligence, machine learning or NLP to quickly train a computer to extract the specific data they need from different document types. NLP-powered Document AI enables non-technical teams, e.g. lawyers, business analysts and accountants, to quickly access information hidden in documents.
Grammatical Error Correction
Grammatical error detection and correction involves a wide range of problems at all levels of linguistic analysis (phonology/orthography, morphology, syntax, semantics, pragmatics). It has a major impact because it affects hundreds of millions of people who use or learn a second language. With the development of powerful neural language models such as GPT-2, it can be regarded since 2019 as a largely solved problem in terms of spelling, morphology, syntax and certain aspects of semantics. Various commercial applications are available in the market.
Machine Translation
Automatically translating text from one human language to another. This is one of the most difficult problems, as all kinds of knowledge are required to do it properly: grammar, semantics, real-world facts, etc.
Natural Language Generation (NLG)
Converting information from computer databases or semantic intent into human-readable language.
Natural Language Understanding (NLU)
NLU concerns the understanding of human language, such as Dutch, English and French. It allows computers to understand commands without the formalised syntax of computer languages, and to communicate back to people in their own language. The main goal of NLU is to create chat- and voice-enabled bots that can communicate with the public unsupervised. A typical application is question answering: determining the answer to a question posed in human language. Typical questions have a specific correct answer, such as “What is the capital of Finland?”, but sometimes open questions are also considered (such as “What is the meaning of life?”).

How does understanding natural language work? NLU analyses data to determine its meaning, using algorithms to reduce human speech to a structured ontology: a data model made up of semantic and pragmatic definitions. Two fundamental concepts of NLU are intent recognition and entity recognition. Intent recognition is the process of identifying the user’s sentiment in input text and determining its purpose; this is the first and most important part of NLU, as it captures the meaning of the text. Entity recognition focuses on identifying the entities in a message and extracting key information about them. There are two types: named entities, which are grouped into categories such as people, businesses and locations, and numeric entities, which are recognised as numbers, currency and percentages.
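A minimal sketch of entity recognition with the open-source spaCy library, plus intent recognition reduced to keyword rules for illustration (a production system would train a classifier instead). The example sentence and intent table are invented:

    import spacy

    # Requires: pip install spacy && python -m spacy download en_core_web_sm
    nlp = spacy.load("en_core_web_sm")
    doc = nlp("Book me a flight from Amsterdam to Helsinki on Friday for $250.")

    # Entity recognition: named entities (places, dates) and numeric entities (money).
    for ent in doc.ents:
        print(ent.text, ent.label_)   # e.g. Amsterdam GPE, Friday DATE, $250 MONEY

    # Intent recognition, sketched here as simple keyword rules.
    intents = {"book_flight": ["book", "flight"], "cancel": ["cancel"]}
    text = doc.text.lower()
    intent = next((name for name, kws in intents.items()
                   if all(kw in text for kw in kws)), "unknown")
    print(intent)  # book_flight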
Describing an image in natural language, and, conversely, generating an image that matches a given description.
Natural language processing – understanding people – is key to AI justifying its claim to intelligence. New deep learning models are constantly improving the performance of AI in Turing tests. Google’s Director of Engineering Ray Kurzweil predicts that AIs will “reach human levels of intelligence by 2029”.
By the way, what people say is sometimes very different from what people do. Understanding human nature is by no means easy. More intelligent AIs expand the perspective of artificial consciousness, opening up a new field of philosophical and applied research.
Speech recognition is also known as automatic speech recognition (ASR), computer speech recognition or speech-to-text. It is a capability that uses natural language processing (NLP) to process human speech into a written format. Many mobile devices incorporate speech recognition into their systems to perform voice searches, e.g. Apple’s Siri.
An important area of speech in AI is speech-to-text, the process of converting audio and speech into written text. It can help visually or physically impaired users and can promote safety with hands-free operation. Speech-to-text systems rely on machine learning algorithms that learn from large datasets of human voice samples to reach adequate quality. Speech-to-text has value for businesses because it can help transcribe video or phone calls. Text-to-speech converts written text into audio that sounds like natural speech. These technologies can be used to help people with speech disorders. Amazon’s Polly is an example of a technology that uses deep learning to synthesise human-sounding speech, for purposes such as e-learning and telephony.
Speech recognition is a task in which speech received by a system through a microphone is checked against a large vocabulary database using pattern recognition. When a word or phrase is recognised, the system responds with the corresponding verbal response or performs a specific task. Examples include Apple’s Siri, Amazon’s Alexa, Microsoft’s Cortana and Google’s Google Assistant. These products must be able to recognise a user’s speech input and assign the correct speech output or action. Even more sophisticated are attempts to create speech from brain waves for those who cannot speak or have lost the ability to speak.
An expert system uses a knowledge base about its application domain and an inference engine to solve problems that normally require human intelligence. An inference engine is the part of the system that applies logical rules to the knowledge base to derive new information. Examples of expert systems include financial management, business planning, credit authorisation, computer installation design and airline planning. For example, an expert traffic management system can help design smart cities by acting as a “human operator” to relay traffic feedback for appropriate routes.
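A minimal forward-chaining sketch of an inference engine, with hypothetical traffic rules and facts, to show how new information is derived from a knowledge base:

    # Rules are (premises, conclusion) pairs; the engine applies them to the
    # knowledge base until nothing new follows (forward chaining).
    rules = [
        ({"long_queue", "rush_hour"}, "congestion"),
        ({"congestion"}, "reroute_traffic"),
    ]
    facts = {"long_queue", "rush_hour"}  # the knowledge base

    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)   # derive new information
                changed = True

    print(facts)  # now also contains 'congestion' and 'reroute_traffic'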
A limitation of expert systems is that they lack the common sense people have, such as an understanding of the limits of their skills and how their recommendations fit into the bigger picture. They lack the self-awareness of people. Expert systems are not a substitute for decision makers because they lack human capabilities, but they can dramatically ease the human work required to solve a problem.
PLANNING, SCHEDULING AND OPTIMISATION
AI planning is the task of determining how a system can best achieve its goals. It is choosing sequential actions that have a high probability of changing the state of the environment incrementally in order to achieve a goal. These types of solutions are often complex. In dynamic environments with constant change, they require frequent trial-and-error iteration to fine-tune.
Planning is making schedules: temporal assignments of activities to resources, taking goals and constraints into account. In algorithmic terms, planning determines the sequence and timing of the actions an algorithm generates. These tasks are typically performed by intelligent agents, autonomous robots and unmanned vehicles. When designed properly, such systems can solve organisational scheduling problems cost-effectively. Optimisation can be achieved using one of the most popular machine learning and deep learning optimisation strategies: gradient descent. This trains a machine learning model by changing its parameters iteratively to drive a given function towards a (local) minimum.
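A minimal sketch of gradient descent on a least-squares objective, with synthetic data, to show the iterative parameter updates the text describes:

    import numpy as np

    # Gradient descent on f(w) = ||Xw - y||^2, with synthetic data.
    rng = np.random.default_rng(0)
    X, true_w = rng.normal(size=(100, 3)), np.array([1.0, -2.0, 0.5])
    y = X @ true_w

    w = np.zeros(3)
    lr = 0.01                          # learning rate (step size)
    for _ in range(500):
        grad = 2 * X.T @ (X @ w - y)   # gradient of the loss w.r.t. the parameters
        w -= lr * grad / len(y)        # iterative update towards the minimum
    print(w.round(2))                  # converges towards [ 1., -2., 0.5]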
Intelligence is at one end of the Intelligent Automation spectrum, while Robotic Process Automation (RPA), software robots that mimic human actions, is at the other end. One is concerned with replicating how people think and learn, while the other is concerned with replicating how people do things. Robotics develops complex sensor-motor functions that enable machines to adapt to their environment. Robots can sense the environment using computer vision.
The main idea of robotics is to make robots as autonomous as possible through learning. Although robots have not achieved human-like intelligence, there are many successful examples of robots performing autonomous tasks such as carrying boxes and picking up and putting down objects. Some robots can learn decision-making by associating an action with a desired outcome. Kismet, a robot at M.I.T.’s Artificial Intelligence Lab, learns to recognise both body language and voice and to respond appropriately. This MIT video gives a good impression.
Computer vision is an area of AI that trains computers to capture and interpret information from image and video data. By applying machine learning (ML) models to images, computers can classify and respond to objects, such as facial recognition to unlock a smartphone or approve intended actions. When computer vision is coupled with Deep Learning, it combines the best of both worlds: optimised performance combined with accuracy and versatility. Deep Learning offers IoT developers greater accuracy in object classification.
Machine vision goes one step further by combining computer vision algorithms with image registration systems to better control robots. An example of computer vision is a computer that can “see” the unique series of stripes on a universal product code, scan it and recognise it as a unique identifier. Optical Character Recognition (OCR) uses image recognition of letters to decipher printed records and/or handwriting, despite the wide variety of fonts and handwriting variations.
WHAT IS MACHINE LEARNING?
This article is an introduction to machine learning and directly related concepts.
Machine learning is the field of study that gives computers the ability to learn without being explicitly programmed. It is a subset of artificial intelligence (AI) and computer science that focuses on the use of data and algorithms to imitate the way humans learn, gradually improving accuracy as it goes. By using statistical learning and optimisation methods, computers can analyse datasets and identify patterns in the data. Machine learning techniques leverage data mining to identify historic trends and inform future models.
According to the University of California, Berkeley, the typical supervised machine learning algorithm consists of three main components (tied together in the sketch after this list):
A decision process: A recipe of calculations or other steps that takes in the data and returns a guess at the kind of pattern in the data that the algorithm is looking to find.
An error function: A method of measuring how good the guess was by comparing it to known examples (when they are available). Did the decision process get it right? If not, how do you quantify how bad the miss was?
An updating or optimisation process: The algorithm looks at the miss and then updates how the decision process comes to the final decision so that the miss will not be as great the next time.
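A minimal sketch putting the three components together on a toy labelled dataset (y = 2x + 1): the decision process makes a guess, the error function measures the miss, and the update shrinks the next miss:

    # 1. decision process: a linear rule; 2. error function: the miss against a
    # labelled example; 3. update: nudge the parameters so the next miss is smaller.
    data = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]  # labelled examples: x -> y = 2x + 1
    w, b, lr = 0.0, 0.0, 0.05

    for _ in range(2000):
        for x, y in data:
            guess = w * x + b          # decision process: make a guess
            miss = guess - y           # error function: how bad was it?
            w -= lr * miss * x         # updating process: adjust so the
            b -= lr * miss             # next miss will not be as great
    print(round(w, 2), round(b, 2))    # approaches 2.0 and 1.0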
Machine learning is a key component in the growing field of data science. Using statistical methods, algorithms are trained to make classifications or predictions and uncover key insights from data.
HOW DOES A MACHINE LEARNING ALGORITHM LEARN?
The technology company Nvidia distinguishes four learning models, defined by the level of human intervention:
Supervised learning: If you are learning a task under supervision, someone is with you, prompting you and judging whether you’re getting the right answer. Supervised learning is similar in that it uses a full set of labelled* data to train an algorithm.
Unsupervised learning: In unsupervised learning, a deep learning model is handed a dataset without explicit instructions on what to do with it. The training dataset is a collection of examples without a specific desired outcome or correct answer. The neural network then attempts to automatically find structure in the data by extracting useful features and analysing its structure. It learns by looking for patterns.
Semi-supervised learning: Semi-supervised learning is, for the most part, just what it sounds like: a training dataset with both labelled and unlabelled data. This method is particularly useful in situations where extracting relevant features from the data is difficult or where labelling examples is a time-intensive task for experts.
Reinforcement learning: In this kind of machine learning, AI agents try to find the optimal way to accomplish a particular goal or improve the performance of a specific task. If the agent takes an action that moves the outcome towards the goal, it receives a reward. To make its choices, the agent relies both on learnings from past feedback and on exploration of new tactics that may present a larger payoff. The overall aim is to predict the best next step that will earn the biggest final reward. Just as the best next move in a chess game may not help you eventually win the game, the best next move the agent can make may not produce the best final result. Instead, the agent considers the long-term strategy to maximise the cumulative reward. It is an iterative process: the more rounds of feedback, the better the agent’s strategy becomes. This technique is especially useful for training robots to make a series of decisions for tasks such as steering an autonomous vehicle or managing inventory in a warehouse. A minimal sketch of this feedback loop follows below.
* Fully labelled means that each example in the training dataset is tagged with the answer the algorithm should produce on its own. So a labelled dataset of flower images would tell the model which photos were of roses, daisies and daffodils. When shown a new image, the model compares it to the training examples to predict the correct label.
In all four learning models, the algorithm learns from datasets based on human rules or knowledge.
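As promised above, a tiny reinforcement-learning sketch: Q-learning on a one-dimensional track, where the agent learns to walk right towards a reward. The environment and all parameters are invented for illustration:

    import random

    # States 0..4 on a track; the reward sits at state 4. Actions: move left/right.
    random.seed(0)
    n_states, actions = 5, [+1, -1]
    Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
    alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration

    for _ in range(200):                    # episodes: rounds of feedback
        s = 0
        while s != n_states - 1:
            if random.random() < epsilon:
                a = random.choice(actions)  # explore a new tactic
            else:
                a = max(actions, key=lambda act: Q[s, act])  # exploit past learnings
            s2 = min(max(s + a, 0), n_states - 1)
            reward = 1.0 if s2 == n_states - 1 else 0.0
            # update the estimate of long-term (cumulative future) reward
            Q[s, a] += alpha * (reward + gamma * max(Q[s2, b] for b in actions) - Q[s, a])
            s = s2

    print([max(actions, key=lambda a: Q[s, a]) for s in range(4)])  # [1, 1, 1, 1]: always go right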
In the domain of artificial intelligence, you will come across the terms machine learning (ML), deep learning (DL) and neural networks (artificial neural networks – ANN). Artificial intelligence and machine learning are often used interchangeably, as are machine learning and deep learning. But, in fact, these terms are progressive subsets within the larger AI domain, as illustrated in Figure 1.
Therefore, when discussing machine learning, we must also consider deep learning and artificial neural networks.
THE DIFFERENCE BETWEEN MACHINE LEARNING AND DEEP LEARNING IS THE WAY AN ALGORITHM LEARNS
Unlike machine learning, deep learning does not require human intervention to process data. Deep learning automates much of the feature extraction piece of the process, eliminating some of the manual human intervention required, which means it can be used for larger data sets.
“Non-deep” machine learning is more dependent on human intervention for the learning process to happen because human experts must first determine the set of features so that the algorithm can understand the differences between data inputs, and this usually requires more structured data for the learning process.
“Deep” machine learning can leverage labelled datasets, also known as supervised learning, to inform its algorithm. However, it does not necessarily require a labelled dataset. It can ingest unstructured data in its raw form (e.g., text and images), and it can automatically determine the set of features that distinguishes between different categories of data. Figure 2 illustrates the difference between machine learning and deep learning.
Deep learning uses multiple layers to progressively extract higher-level features from the raw input. For example, in image processing, lower layers may identify edges, while higher layers may identify the concepts relevant to a human, such as digits or letters or faces.
In deep learning, each level learns to transform its input data into a slightly more abstract and composite representation. In an image-recognition application, the raw input may be a matrix of pixels. The first representational layer may abstract the pixels and encode edges; the second layer may compose and encode arrangements of edges; the third layer may encode a nose and eyes; and the fourth layer may recognise that the image contains a face. Importantly, a deep learning process can learn on its own which features to place optimally at which level. This does not fully eliminate the need for manual tuning – for example, varying the number of layers and the layer sizes can provide different degrees of abstraction. The word “deep” in “deep learning” refers to the number of layers through which the data is transformed.
An artificial neural network (ANN) is a computer system designed to classify information in the same way a human brain does, while retaining the innate advantages computers hold over us, such as speed, accuracy and lack of bias. For example, it can be taught to recognise images and classify them according to the elements they contain. Essentially, it works on a system of probability: based on the data fed to it, it can make statements, decisions or predictions with a degree of certainty. The addition of a feedback loop enables “learning” – by sensing or being told whether its decisions are right or wrong, the network modifies its future approach.
Artificial neural networks learn multiple levels of detail or representations of the data. Through these different layers, information passes from low-level parameters to higher-level parameters; the different levels correspond to various levels of data abstraction, leading to learning and recognition. An ANN is based on a collection of connected units called artificial neurons (analogous to biological neurons in a biological brain). Each connection (synapse) between neurons can transmit a signal from one neuron to another. The receiving (postsynaptic) neuron can process the signal(s) and then signal the neurons connected to it downstream. Neurons may have state, generally represented by real numbers, typically between 0 and 1. Neurons and synapses may also have a weight that varies as learning proceeds, increasing or decreasing the strength of the signal they send downstream. Typically, neurons are organised in layers, as illustrated in Figure 3. Different layers can perform different kinds of transformations on their inputs. Signals travel from the first (input) layer to the last (output) layer, possibly after traversing the layers multiple times.
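A minimal sketch of such a layered network in PyTorch, with an input, a hidden and an output layer, weights that change as learning proceeds, and neuron states squashed into (0, 1). The data is random and purely illustrative:

    import torch
    import torch.nn as nn

    # Signals travel input -> hidden -> output; the connection weights are learned.
    model = nn.Sequential(
        nn.Linear(4, 8),    # input layer -> hidden layer (4 inputs, 8 neurons)
        nn.Sigmoid(),       # neuron state squashed to a real number in (0, 1)
        nn.Linear(8, 1),    # hidden layer -> output layer
        nn.Sigmoid(),
    )

    x = torch.rand(10, 4)                        # 10 examples, 4 features each
    target = torch.randint(0, 2, (10, 1)).float()
    loss_fn = nn.BCELoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

    for _ in range(100):                         # feedback loop: adjust the weights
        optimizer.zero_grad()
        loss = loss_fn(model(x), target)
        loss.backward()                          # how wrong is each weight?
        optimizer.step()                         # strengthen/weaken connections
    print(loss.item())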
USES OF MACHINE LEARNING
There are many applications for machine learning; it is one of the three key elements of Intelligent Automation and of an autonomous operating model within Industry 4.0. Computer programs can read text and work out whether the writer was making a complaint or offering congratulations. They can listen to a piece of music, decide whether it is likely to make someone happy or sad, and find other pieces of music to match the mood. In some cases, they can even compose their own music that either expresses the same themes or is likely to be appreciated by admirers of the original piece.
Neural networks have been used on a variety of tasks, including computer vision, speech recognition, machine translation, social network filtering, playing board and video games, and medical diagnosis. As of 2017, neural networks typically have a few thousand to a few million units and millions of connections. Although this number is several orders of magnitude less than the number of neurons in a human brain, these networks can perform many tasks at a level beyond that of humans (e.g., recognising faces, playing “Go”).
A container terminal reached the limits of its capacity due to a further increase in the number of units to be processed. The terminal also had to become more attractive for ships to dock, with faster loading and unloading and shorter waiting times. Furthermore, container ships are getting bigger, increasing complexity and time pressure at the terminal.
The assignment was to increase efficiency to make more inbound and outbound truck movements possible and to shorten ship waiting times.
Based on data from the ERP system regarding plan and actual over a representative period, the current working method of the terminal was reconstructed in our Planning Platform. The actual operation was visualised and animated, allowing the movements of each individual container to be tracked from position to position. The reconstruction was validated and further fine-tuned in a highly interactive process with the client.
Subsequently, with our Planning Platform, the current operational performance of the terminal was determined based on jointly identified Key Performance Indicators (KPIs), such as the mooring time per barge, the number of crane movements in/out and the number of truck movements in/out. Next, a simulation of an optimised operation was performed using exactly the same dataset and boundary conditions. Comparing the KPIs of the current and optimised operations immediately gave a clear picture of the improvement potential.
In close collaboration with the client, the plan was then optimised step by step, starting with a small number of containers, until the total plan was optimised. After each step, the improvement was measured against the identified KPIs.
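A minimal sketch of such a KPI comparison over the same dataset, assuming illustrative extracts of container moves for the current and optimised plans (file and column names are hypothetical):

    import pandas as pd

    current = pd.read_csv("moves_current.csv", parse_dates=["arrival", "departure"])
    optimised = pd.read_csv("moves_optimised.csv", parse_dates=["arrival", "departure"])

    def kpis(df):
        # The jointly identified KPIs, computed per scenario.
        return pd.Series({
            "avg mooring time (h)": (df["departure"] - df["arrival"])
                                     .dt.total_seconds().div(3600).mean(),
            "crane moves": int(df["crane_moves"].sum()),
            "truck moves": int(df["truck_moves"].sum()),
        })

    report = pd.DataFrame({"current": kpis(current), "optimised": kpis(optimised)})
    report["improvement"] = report["current"] - report["optimised"]
    print(report)  # the improvement potential per KPI, side by side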
INVESTING IN INDUSTRY 4.0 TECHNOLOGIES YIELDS SIGNIFICANT BENEFITS
In 2018, the World Economic Forum (WEF) launched an initiative, Shaping the Future of Advanced Manufacturing and Production, to demonstrate the true potential of Industry 4.0 technologies to transform the very nature of manufacturing. Learnings from 69 frontrunner companies, with 450 use cases in action, reveal that organisations investing in Industry 4.0 technology are realising significant improvements in productivity, sustainability, operating cost, customisation and speed to market.
Here are just a few numbers from the 450 use cases: labour productivity up by 32% to 86%, order lead times down by 29% to 82%, field quality up 32%, manufacturing costs down 33%, OEE up 27%, new product design lead time down 50%.
Additionally, frontrunner companies showed that by investing in Industry 4.0 technologies they can solve business problems while simultaneously reducing environmental detractors such as waste, consumption and emissions. While the greatest environmental benefits come from core green sustainability initiatives (such as commitments to renewable energy), Industry 4.0 use cases have shown significant environmental impact as well, reducing energy consumption by more than one-third and water use by more than one-quarter.
Of the 69 frontrunners identified to date across the globe within the WEF initiative, 64% have been able to drive growth by adopting Industry 4.0 solutions. In all those cases, with little to no capital expenditure, they were able to unlock capacity and grow by coupling technology solutions with a much more flexible production system. The business case is big and the payback is short, both for large companies and for SMEs.
HOWEVER, MOST COMPANIES STRUGGLE TO IMPLEMENT
Most companies struggle to start and scale an Industry 4.0 transformation because they lack people with the right skills and knowledge, and because of a limited understanding of the technology and vendor landscape. On average, 72% of companies don’t get beyond the pilot phase.
AIMA, the Axisto Industry 4.0 Maturity Assessment, enables manufacturing companies to understand where they stand and to design an implementation roadmap that helps them start their Industry 4.0 implementation journey or progress to the next level. AIMA assesses your operations along eight elements, as shown in Figure 1.
In total, the eight elements are made up of 33 categories (see Figure 2), and each category spans the four fundamental building blocks of Industry 4.0: processes, technology, people and competencies, and organisation.
HOW AIMA SUPPORTS YOU ON YOUR INDUSTRY 4.0 IMPLEMENTATION JOURNEY
AIMA helps you:
tear down interdepartmental walls and create strategic alignment
understand where your operations stand – what is strong and must be maintained, and what needs to be improved
understand what your key areas are and what you need to focus on.
AIMA helps you establish a company-specific interpretation of key principles and concepts. It creates an improved case for change and provides more momentum to implement the change.
HOW AIMA WORKS
AIMA consists of four steps:
Preparation – getting to know the members of the leadership team and understanding the vision and strategy, how the team views market developments, challenges and opportunities, and how the company develops within this context; plus an inventory of expectations for the days ahead.
The first workshop day – identification of, and alignment on, the case for change: an introduction to Industry 4.0, exploring how it affects the strategy (and its execution), testing the extent of alignment within the leadership team and identifying (or checking whether there is) a case for change.
The second workshop day – the Industry 4.0 Maturity Assessment: assessing operations using a selection from the AIMA categories, prioritising the KPIs and identifying the focus areas.
The third workshop day – design of the implementation roadmap: a sequence of steps addressing processes, technology, people & capabilities and organisation; identification of risks; and design of a risk-mitigation plan.
Focusing on these areas will accelerate performance improvements in operations. AIMA provides the insights, designs an implementation roadmap and serves as a strategic tool to regularly assess progress and refine your roadmap based on new insights. Starting at the operations leadership level allows us to create an overall framework. AIMA is then deployed at the next level down, in the respective factories. Again, we begin with preparation, followed by three workshop days, now with the factory leadership team:
Preparation – getting to know the members of the factory leadership team and understanding the factory vision and strategy, how the team views market developments, challenges and opportunities, and how the factory develops within this context; plus an inventory of expectations for the days ahead.
The first workshop day – identification of, and alignment on, the case for change: an introduction to Industry 4.0, exploring how it affects the strategy (and its execution), testing the extent of alignment within the factory leadership team and identifying (or checking whether there is) a case for change.
The second workshop day – the Industry 4.0 Maturity Assessment: assessing operations using a selection from the AIMA categories, prioritising the KPIs and identifying the focus areas.
The third workshop day – design of the implementation roadmap: prioritisation of factory KPIs and identification of focus areas; a sequence of steps addressing processes, technology, people & capabilities and organisation; identification of risks; and design of a risk-mitigation plan.
Making improvements in these focus areas will have the biggest impact on the factory’s performance within the overall framework. This cascaded approach creates the biggest wins for the whole business, rather than a sub-optimisation of an individual factory.
AIMA OUTCOMES FOR YOUR ORGANISATION
AIMA provides four key outcomes:
Understanding of Industry 4.0, its key principles and concepts, and how they affect strategy (execution)
Alignment within the operations leadership team and factory leadership teams
Understanding of your Industry 4.0 maturity level / readiness
Priority of focus areas to create short-term business value within a long-term context
PUT YOUR PEOPLE AT THE CENTRE OF YOUR INDUSTRY 4.0 IMPLEMENTATION
AIMA will generate initial momentum. However, it is worth noting that any Industry 4.0 implementation will only be successful if you put your people at the centre of it.
The biggest challenge for a company is not choosing the right technology, but overcoming the lack of digital culture and skills in the organisation. Investing in the right technologies is important – but success or failure does not ultimately depend on specific sensors, algorithms or analysis programs.
Axisto was founded in 2006 to help companies accelerate their operational performance – fast, measurable and lasting. We have executed more than 150 projects across Europe.
We have concrete on-the-ground experience, which is why our approach is practical and pragmatic. We combine subject-matter expertise with excellent change management skills.
We see change through and do whatever it takes to make our clients successful.
THE GOAL OF USING INTELLIGENT AUTOMATION
The goal of using Intelligent Automation (IA) is to achieve better business outcomes by streamlining and scaling decision-making across the business. IA adds value by increasing process speed, reducing costs, improving compliance and quality, increasing process resilience and optimising decision outcomes. Ultimately, it improves customer and employee satisfaction, improves cash flow and EBITDA, and decreases working capital.
WHAT IS INTELLIGENT AUTOMATION?
IA is a concept leveraging a new generation of software-based automation. It combines methods and technologies to execute business processes automatically on behalf of knowledge workers. This automation is achieved by mimicking the capabilities that knowledge workers use in performing their work activities (e.g., language, vision, execution and thinking & learning). IA effectively creates a software-based digital workforce that enables synergies by working hand in hand with the human workforce.
At the simpler end of the spectrum, IA performs repetitive, low-value-added and tedious work activities, such as reconciling data or digitising and processing paper invoices. At the other end, IA augments workers with superhuman capabilities, for example the ability to analyse millions of data points from various sources in a few minutes and generate insights from them.
THREE KEY COMPONENTS OF INTELLIGENT AUTOMATION
IA consists of three key components:
Business Process Management with Process Mining to provide greater agility and consistency to business processes.
Robotic Process Automation (RPA). Robotic process automation uses software robots, or bots, to complete repetitive manual tasks. RPA is the gateway to Artificial Intelligence and can leverage AI insights to handle more complex tasks and use cases.
Artificial Intelligence. By using machine learning and complex algorithms to analyse structured and unstructured data, businesses can develop a knowledge base and formulate predictions based on that data. This is the decision engine of IA.
WHERE AND HOW TO START WITH INTELLIGENT AUTOMATION?
Implementing Intelligent Automation might come across as a daunting endeavour, but it doesn’t need to be. Like any business leader, you will have a keen eye on accelerating operations performance, which in essence means improving the behaviour and outcomes of your business processes. Process Mining is a perfect tool to help you with that.
Process Mining is a data-driven analysis technique, i.e., analysis software, to objectively analyse and monitor business processes. It does this based on transactional data that is recorded in a company’s business information systems. The analysis software is system agnostic and doesn’t need any adaptation of your systems. Process Mining provides fact-based insight into how processes run in daily reality: all process variants (you will be surprised how many variations of one process there actually are in your business) and where the key problems and opportunities lie to improve process efficiency and effectiveness.
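A minimal sketch of how those process variants surface from transactional data: each case’s ordered activity sequence is one variant. The event-log extract and column names are hypothetical:

    import pandas as pd

    # Transactional data recorded by the business systems (names are illustrative).
    events = pd.read_csv("event_log.csv", parse_dates=["timestamp"])

    # Each case's ordered activity sequence is one variant of the process.
    variants = (events.sort_values("timestamp")
                      .groupby("case_id")["activity"]
                      .agg(" -> ".join)
                      .value_counts())
    print(len(variants), "variants of this one process")
    print(variants.head())   # the most common paths through the process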
Process Mining is also an excellent way to prepare for the introduction of Robotic Process Automation, which could be the most relevant next step on your IA journey. Process Mining can be used purely as an analysis tool, but it can also be installed permanently to constantly monitor the performance of, and the issues in, your processes. It is a non-intimidating approach and a gradual implementation of Intelligent Automation.
THE IMPORTANCE OF A COMPANY-WIDE VISION AND SHARED ROADMAP
However, at some point, sooner rather than later, it is important to establish and communicate a comprehensive, company-wide vision for what you want Intelligent Automation to achieve: how automation will deliver value and boost competitive advantage. You need a shared roadmap for a successful implementation, covering processes, technology (including legacy systems), people & competencies and organisation.
Such a shared Intelligent Automation/Industry 4.0 Roadmap ensures a consistent, thoughtful approach to selecting, developing, applying, and evolving the IA/I4.0 structure to achieve the intended impact. The Axisto Industry 4.0 Maturity Assessment (AIMA) is an effective way to create such a shared implementation roadmap.
THE CRUX OF SUCCESS LIES IN A WIDE RANGE OF PEOPLE-ORIENTED FACTORS
Importantly, the biggest challenge for a company is not choosing the right technology, but overcoming the lack of digital culture and skills in the organisation. Investing in the right technologies is important – but success or failure does not ultimately depend on specific sensors, algorithms or analysis programs. Implementing and scaling Intelligent Automation/Industry 4.0 requires a fundamental shift in mindset and behaviours at all levels of the organisation. The crux of success lies in a wide range of people-oriented factors.
A global insurance company was receiving 40–50 claims a day that needed to be evaluated and verified against several factors before being approved for payment. Most of the claims arrived as unstructured data, either PDFs or scanned documents, making it difficult to pull information from them and enter it into the various systems in a timely manner. As a result, claims weren’t being processed fast enough, and the company was closing each year with millions of dollars in claims left open, which hurt customer service.
The company implemented RPA with ABBYY FlexiCapture to streamline claims processing and payments. Software robots took scanned claims sent by email and ran them through FlexiCapture to turn the unstructured data into structured formats readable by robots. From there, the robots took the data, verified that all the information was correct and checked all exceptions. Claims that were accurate were approved for payment and sent back to the brokers. If any information was incorrect, or there were exceptions, the claims were routed to an employee for further investigation.
Industry 4.0 means the convergence of the digital and manufacturing industries: all physical assets are digitised and integrated into digital ecosystems with partners across the value chain.
Industry 4.0 represents a huge step in performance. You can improve your speed, flexibility and productivity by 40%. You can develop a new business strategy and take the opportunity to innovate your products and services portfolio.
Axisto works with you to map the digital maturity of your business with our AIMA (Axisto Industry 4.0 Maturity Assessment) and choose the elements that will deliver the most value in line with your vision. Well-chosen pilots will help you get on the learning curve and achieve some initial success. You will gain insights into the skills gap, and this can direct your HR strategy. We can help you to properly organise data analytics and develop your organisation more digitally. Axisto’s experience will ensure you avoid any pitfalls on your journey to becoming a digital enterprise.
Importantly, the biggest challenge for a company is not choosing the right technology, but overcoming the lack of digital culture and skills in the organisation. Investing in the right technologies is important – but success or failure does not ultimately depend on specific sensors, algorithms or analysis programs. The crux lies in a wide range of people-oriented factors. Axisto supports you in developing a robust digital culture and ensures change is developed from within and driven by clear leadership from the top.