Deep Learning

Deep learning methods have found widespread application; one of the most prominent examples is speech recognition.

Deep nets are the current state of the art in pattern recognition, but they build upon the decades-old technology of neural networks discussed in the previous module. It took many decades after the initial concept to arrive at functional deep nets because they are very hard to train; the method suffered from an issue called the vanishing gradient problem. Until around 2006, deep nets underperformed relative to shallower nets and other machine learning algorithms, but everything started to change after three breakthrough papers published at that time, and today they are the hottest topic in machine learning. Deep learning is a machine learning method based on neural networks. What distinguishes deep learning from the more general approach of neural networks is its use of multiple layers within the network to represent different levels of abstraction. Deep learning algorithms use a cascading structure of multiple layers of nonlinear processing units for feature extraction and transformation.1 Each successive layer uses the output from the previous layer as its input, and in this way the network learns multiple levels of representation that correspond to different levels of abstraction.
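
To make this cascade concrete, the following is a minimal sketch in Python using NumPy: each layer takes the previous layer's output as its input and applies a nonlinear transformation. The weights here are random and untrained, purely to illustrate the structure; in a real deep net they would be learned from data.

```python
# Minimal sketch of a cascading layer structure. Weights are random for
# illustration only; a real network learns them during training.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, n_out):
    """One layer of nonlinear processing units: linear map followed by ReLU."""
    w = rng.normal(size=(x.shape[0], n_out))
    b = np.zeros(n_out)
    return np.maximum(0.0, x @ w + b)   # ReLU nonlinearity

x = rng.normal(size=64)        # raw input features
h1 = layer(x, 32)              # first level of representation
h2 = layer(h1, 16)             # built from the output of the previous layer
h3 = layer(h2, 8)              # successively more abstract representation
print(h3.shape)                # (8,)
```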

Just like other neural networks, deep learning software attempts to mimic the activity in the layers of neurons in the neocortex. It uses multiple layers of nodes, with each successive layer using the output from the previous layer as input, and varying the number and size of the layers provides different degrees of abstraction. Deep learning exploits this idea of hierarchical representation, where higher-level, more abstract concepts are learned from lower-level ones. When the input consists of only ten or so parameters, other forms of machine learning, such as support vector machines or logistic regression, are typically better. Basic classification engines and shallow neural networks are not sufficient for complex tasks, because in a shallow net the number of nodes required in each layer grows exponentially with the number of possible patterns in the data; eventually training becomes expensive and accuracy starts to deteriorate. Thus, when the patterns get very complex, deep nets start to outperform their competition.

Pattern Abstraction

The power of deep learning can be largely ascribed to breaking the processing of patterns down and distributing it across many different layers in the network. For example, if we apply such a system to detect flowers in an image, the lowest layers detect edges, the next layers use those edges to detect the different parts of the flower, such as petals and stalk, and higher layers combine those parts into the whole flower. This process of using simpler patterns as modules that can be combined into more complex patterns is a key part of the power of deep learning. As another example, if you feed the network a set of images of lorries, down at the lowest layers there will be detectors for things like edges, higher up there will be things that look like tires, wheels or a cab, and at a level above that, things that are clearly identifiable as lorries.2
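
As a rough illustration of this layered pattern composition, the sketch below stacks a few convolutional layers using TensorFlow's Keras API (TensorFlow is introduced later in this article). The layer sizes and the edges-to-parts-to-object labelling are illustrative assumptions rather than a prescribed recipe.

```python
# Hedged sketch of a layered image model: successive layers build increasingly
# abstract features from the output of the layers below them.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 3)),          # RGB image
    tf.keras.layers.Conv2D(16, 3, activation="relu"),    # low level: edges, colour contrasts
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),    # mid level: petals, stalk, wheels, cab...
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),    # high level: whole flower or lorry
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),      # e.g. "flower" vs "not flower"
])
model.summary()
```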

Training deep learning networks typically involves providing the system with a great many labelled examples, and thus often demands both large amounts of data and significant computing capacity.

Once the network is trained, you can put an image in at the front, and the nodes will fire when they see the thing they have been trained to identify. In the example of face detection, the network first learns simple features like edges and color contrasts; these form more complex facial features like the eyes and nose, which are then combined to form the face. The neural network does all of this on its own during the training process, without any direction from the person building it. These neural nets are almost always built for a specific task, such as voice recognition or various other forms of data mining.
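
The following is a hedged sketch of that train-then-predict workflow, again using TensorFlow's Keras API; the images and labels here are random placeholders standing in for a real labelled dataset such as photographs of faces and non-faces.

```python
# Hedged sketch: train on many labelled examples, then put a new image in at
# the front and read off the output activations.
import numpy as np
import tensorflow as tf

train_images = np.random.rand(100, 64, 64, 3).astype("float32")   # placeholder data
train_labels = np.random.randint(0, 2, size=(100,))               # placeholder labels

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

model.fit(train_images, train_labels, epochs=2, verbose=0)   # learn features from examples

new_image = np.random.rand(1, 64, 64, 3).astype("float32")   # "put one image in at the front"
print(model.predict(new_image, verbose=0))                   # per-class activations
```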

The system self-organizes in such a way that the nodes in the layers closest to the input data become reactive to simple features, and as you move through the layers the features that the neurons respond to become of higher and higher order. Interestingly, people have found a very similar structure in our own brains, where successive layers of the visual system also extract higher and higher order features. Once you have a deep learning network trained in this way, it should also be possible to run it in reverse: if you have trained a network so that it knows what a cat looks like, it should be able to produce new pictures that look like cats. Networks used in this way are called generative neural networks. Deep nets take a long time to train, but the advent of new hardware in the form of graphics processing units can reduce the processing time by one or even two orders of magnitude.3
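
One way to see this self-organization in practice is to probe what an intermediate layer outputs for a given input. The sketch below does this with TensorFlow's Keras API; the model is untrained and the layer names are hypothetical, so it only shows the mechanics of reading features partway up the hierarchy.

```python
# Hedged sketch: expose an intermediate layer's output. In a trained network
# the early layers would respond to simple features and later layers to
# higher-order ones.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu", name="low_level"),
    tf.keras.layers.Conv2D(16, 3, activation="relu", name="mid_level"),
    tf.keras.layers.Conv2D(32, 3, activation="relu", name="high_level"),
])

# Build a second model that outputs the activations of a chosen layer.
probe = tf.keras.Model(inputs=model.input,
                       outputs=model.get_layer("mid_level").output)

image = np.random.rand(1, 64, 64, 3).astype("float32")
activations = probe(image)        # feature maps produced partway up the hierarchy
print(activations.shape)          # (1, 60, 60, 16)
```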

Applications

There are now many different types of deep net to choose from. For text analysis, such as named-entity recognition and sentiment analysis, recursive neural tensor networks are typically used. Image recognition often involves a convolutional net or a deep belief net, object recognition typically uses a convolutional net, and recurrent nets are used for speech recognition. These deep learning algorithms can also be applied to unsupervised learning tasks, which is an important benefit because unlabeled data is far more abundant than labeled data. The end result of training a deep learning net is a self-organizing stack of transducers, well tuned to their operating environment and capable of modeling complex non-linear relationships.
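
As a generic illustration of a recurrent architecture of the kind used for sequence data such as text or speech, the sketch below stacks an embedding, an LSTM layer, and a classifier using TensorFlow's Keras API; the vocabulary size, sequence length, and layer widths are illustrative assumptions rather than any particular published architecture.

```python
# Hedged sketch of a recurrent net for sequence classification.
import tensorflow as tf

sequence_model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(50,)),                        # 50 time steps (token ids)
    tf.keras.layers.Embedding(input_dim=10000, output_dim=64), # map tokens to vectors
    tf.keras.layers.LSTM(64),                                  # recurrent layer carries context across steps
    tf.keras.layers.Dense(3, activation="softmax"),            # e.g. three output categories for the sequence
])
sequence_model.summary()
```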

A deep learning platform is an out-of-the-box application that lets you configure deep nets without needing to know how to code. A platform provides a set of tools and an interface for building custom deep nets: typically, it offers a selection of deep nets to choose from, along with the ability to integrate data from different sources, manipulate data, and manage models through a user interface. Some platforms may also help with performance when a net needs to be trained on a large data set. The downside is that you are constrained by the platform's selection of deep nets as well as its configuration options, but for anyone looking to quickly deploy a deep net, a platform is the best way to go. There is now a variety of such tools; one of the most widely used is TensorFlow, an open-source library of machine learning methods created by Google that has grown rapidly in popularity.

1. Wang, J., Ma, Y., Zhang, L., Gao, R. and Wu, D. (2018). Deep learning for smart manufacturing: Methods and applications. Journal of Manufacturing Systems.

2. A friendly introduction to Deep Learning and Neural Networks. (2018). [online video] Available at: https://www.youtube.com/watch?v=BR9h47Jtqyw&t=1066s [Accessed 12 Feb. 2018].

3. Recurrent Neural Networks – Ep. 9 (Deep Learning SIMPLIFIED). (2018). [online video] Available at: https://www.youtube.com/watch?v=_aCuOwF1ZjU [Accessed 12 Feb. 2018].
