"Automation, data reduction and training will make machine learning technology easier, cheaper and faster for businesses to adopt and implement across a variety of sectors."

The key trends Deloitte predicts for the intensity of enterprise machine learning use:

  • In 2018, large and medium-sized enterprises will intensify their use of machine learning. 
  • The number of machine learning implementations and pilot projects should double compared with 2017, and double again by 2020. 

What is machine learning and how are enterprises using it?

Machine learning (ML) is an artificial intelligence (AI), or cognitive, technology that enables systems to learn and improve from experience – by exposure to data – without being explicitly programmed.

Most enterprises using ML have only a handful of deployments and pilots under way. In a 2017 Deloitte survey of US executives who said their companies were actively using cognitive technologies and who were familiar with those activities, 62% had five or fewer implementations and five or fewer pilots under way.

Why is enterprise use of machine learning likely to increase in 2018?

A number of factors have held back adoption of ML. These include: a shortage of qualified practitioners, the current immaturity of tools and frameworks for ML, the time and cost challenges in obtaining large data sets required by some ML model-development techniques, and the inscrutability of some ML models. 

In 2018 and beyond, these five key developments will make it easier and faster to develop ML solutions:

1. Automating data science. 

Time-consuming ML tasks, such as data exploration and feature engineering, can increasingly be automated. Automating these tasks should help shrink the time required to execute a machine learning proof of concept from months to days.
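
As a loose illustration of what such automation can look like in code, the sketch below uses scikit-learn (assumed available) to generate, filter and tune candidate features automatically rather than hand-crafting them; the data set and parameter grid are purely illustrative.

```python
# A minimal sketch of automated feature engineering and model tuning with scikit-learn.
from sklearn.datasets import load_breast_cancer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X, y = load_breast_cancer(return_X_y=True)

# Candidate features are generated and filtered automatically rather than hand-crafted.
pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("features", PolynomialFeatures(degree=2, include_bias=False)),
    ("select", SelectKBest(score_func=f_classif)),
    ("model", LogisticRegression(max_iter=5000)),
])

# The search also tunes how many generated features to keep.
search = GridSearchCV(
    pipeline,
    param_grid={"select__k": [10, 30, 100], "model__C": [0.1, 1.0, 10.0]},
    cv=5,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```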

2. Reducing the need for training data. 

A number of techniques are emerging that aim to reduce the amount of training data required for ML, including the use of synthetic data, generated algorithmically to mimic the characteristics of the real data; and transfer learning, whereby an ML model is pre-trained on one data set as a shortcut to learning a new data set in a similar domain, such as language translation or image recognition. 
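
As a minimal sketch of the transfer-learning idea, the code below (using PyTorch and torchvision, assumed installed) reuses an ImageNet-pretrained backbone and retrains only a small classification head, so far less task-specific training data is needed; the class count and dummy batch are hypothetical.

```python
# Transfer learning: reuse a pre-trained backbone, train only a new head.
import torch
import torch.nn as nn
from torchvision import models

# Load a network pre-trained on one data set (ImageNet).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pre-trained layers so they act as a fixed feature extractor.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer for a new task with, say, 5 classes (hypothetical);
# only this layer is trained, so far less labelled data is needed.
num_classes = 5
model.fc = nn.Linear(model.fc.in_features, num_classes)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_classes, (8,))
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```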

3. Accelerating training.

Hardware manufacturers are developing specialised hardware (such as graphics processing units, field-programmable gate arrays and application-specific integrated circuits) to slash the time required to train ML models, by accelerating the calculations required and the transfer of data within the chip.  
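
From a developer's point of view, targeting such accelerators can be as simple as moving the model and data onto the device. The sketch below shows this with PyTorch (assumed installed), running on a GPU when one is present; the model and batch are purely illustrative.

```python
# The same training loop runs on an accelerator when one is available.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Dummy batch; placing the model and data on the device is the only code change needed.
x = torch.randn(256, 128, device=device)
y = torch.randint(0, 10, (256,), device=device)

for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```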

4. Explaining results. 

Machine learning models often suffer from a critical flaw: many are black boxes. The inability to explain with confidence how they make their decisions makes them unsuitable or unpalatable for many applications. A number of techniques have been developed that help shine light into the black box of certain ML models, making them easier to interpret and more accurate. 
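
One widely used technique of this kind is permutation importance, shown below as a sketch using scikit-learn (assumed available): each input feature is shuffled in turn and the drop in held-out accuracy is measured, giving a model-agnostic view of what the black box relies on. The data set and model are illustrative.

```python
# Permutation importance as a simple, model-agnostic interpretability technique.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

# Train an otherwise opaque ensemble model.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much held-out accuracy drops:
# the bigger the drop, the more the model relies on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[idx]}: {result.importances_mean[idx]:.3f}")
```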

5. Deploying locally. 

ML use will grow along with the ability to deploy it where it is needed. As we predicted last year, ML is increasingly coming to mobile devices and smart sensors, expanding the technology's applications to smart homes and cities, autonomous vehicles, wearable technology, and the industrial Internet of Things. 
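
As one illustration of local deployment, the sketch below exports a pretrained image model to TorchScript using PyTorch and torchvision (assumed installed), producing a self-contained artifact that mobile and embedded runtimes can load without a Python interpreter; the model choice and file name are illustrative.

```python
# Export a model to TorchScript for on-device deployment.
import torch
from torchvision import models

model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
model.eval()

# Trace the model with an example input to produce a self-contained artifact
# that can be loaded by a mobile or embedded runtime without Python.
example = torch.randn(1, 3, 224, 224)
scripted = torch.jit.trace(model, example)
scripted.save("mobilenet_v2_scripted.pt")  # hypothetical output path
```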

What can companies do in relation to machine learning?

  • Look for opportunities to automate some of the work of oversubscribed data scientists, and ask consultants how they can use data science automation.
  • Keep an eye on emerging techniques, such as data synthesis and transfer learning, that could ease the bottleneck often created by the challenge of acquiring training data. 
  • Find out what computing resources optimised for ML are offered by cloud providers. Companies running workloads in their own data centres may want to investigate adding specialised hardware to the mix.
  • Explore state-of-the-art techniques for improving interpretability that may not yet be in the commercial mainstream, as interpretability of ML is still in its early days. 
  • Track the performance benchmarks being reported by makers of next-generation chips, to help predict when on-device deployment is likely to become feasible.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.