Every day we encounter AI in some form: from everyday phone features such as face detection and speech or image recognition, to more sophisticated applications like self-driving cars, genetic disease prediction, etc. We think it is time to finally sort out what it consists of and how it works.
In the beginning was the ANN
In general terms, an Artificial Neural Network (ANN) is a technology for pattern recognition that passes input through layers of simulated neural connections. It was inspired by the human brain and the way it works.
In its simplest form, an ANN can have only three layers of neurons: the input layer (where the data enters the system), the hidden layer (where the information is processed) and the output layer (where the system decides what to do based on the data).
An ANN that is made up of more than three layers – i.e. an input layer, an output layer, and multiple hidden layers – is called a ‘deep neural network’, and this is what underpins deep learning. A deep learning system is self-teaching: it learns as it goes by filtering information through multiple hidden layers, in a way similar to humans.
A Deep Neural Network (DNN) is a neural network with a certain level of complexity – more than two layers. A DNN combines simple mathematical operations in layers, which allows it to express more complex dependencies. And as depth increases, so do complexity and the level of abstraction, i.e. the data is processed in more complex ways.
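As a minimal sketch of the three-layer idea above, here is a tiny pure-Python forward pass through an input, hidden, and output layer. All weights, biases, and inputs are made-up values chosen only for illustration:

```python
import math

def sigmoid(x):
    # classic S-shaped activation, squashes any value into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # each neuron: weighted sum of the inputs plus a bias, then the activation
    return [sigmoid(sum(w * v for w, v in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# toy parameters (hypothetical values, just for illustration)
hidden_w = [[0.5, -0.2], [0.3, 0.8]]   # 2 hidden neurons, 2 inputs each
hidden_b = [0.1, -0.1]
output_w = [[1.0, -1.0]]               # 1 output neuron
output_b = [0.0]

x = [0.7, 0.2]                      # input layer: the data enters here
h = layer(x, hidden_w, hidden_b)    # hidden layer: the information is processed
y = layer(h, output_w, output_b)    # output layer: the network's decision
```

A deeper network simply chains more `layer` calls between input and output; each extra call composes another transformation on top of the previous ones.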
Everything into neat little piles
A neural network is a directed graph of nodes connected by synaptic and activation links, characterized by the following properties:

- Each neuron is represented by a set of linear synaptic links and, possibly, a nonlinear activation link.
- The neuron's synaptic links are used to weigh the corresponding input signals.
- The weighted sum of the input signals determines the induced local field of each particular neuron.
- Activation links transform the induced local field of the neuron into an output signal.
- The neural network is trained on examples, building the input-output correspondence for a specific task defined by data.
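The properties above can be sketched for a single neuron in a few lines of Python. Here `math.tanh` stands in for the activation link, and the inputs, weights, and bias are arbitrary illustrative numbers:

```python
import math

def neuron(inputs, weights, bias):
    # induced local field: the weighted sum of the input signals plus the bias
    v = sum(w * x for w, x in zip(weights, inputs)) + bias
    # the activation link transforms the local field into the output signal
    return math.tanh(v)

# v = 0.4*1.0 + 0.6*(-0.5) + 0.1 = 0.2, so the output is tanh(0.2)
out = neuron([1.0, -0.5], [0.4, 0.6], 0.1)
```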
From a mathematical point of view, training a neural network is a problem of multi-parameter nonlinear optimization.
The signal propagates from the input layer to the output layer of the neural network, passing through a series of parameterized transformations. Deep learning algorithms are distinguished from shallow learning algorithms by the number of these transformations: deep learning is generally characterized by more than two nonlinear layers.
Thus, deep learning refers to machine learning algorithms that model high-level abstractions using numerous nonlinear transformations.
Deep learning addresses the central problem of representation learning. It introduces representations that are expressed in terms of simpler representations obtained at lower levels, which allows a computer to build complex concepts from simpler ones. A typical example is the deep feedforward network, or multilayer perceptron (MLP).
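To illustrate the optimization view, here is a toy gradient descent on the smallest possible "network": one sigmoid neuron with a single weight, fitted to one training example. The input, target, learning rate, and iteration count are arbitrary choices for the sketch:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

x, t = 1.5, 0.8   # one training example: input and target output (toy values)
w = 0.0           # the single parameter to learn
lr = 0.5          # learning rate

for _ in range(200):
    y = sigmoid(w * x)
    # gradient of the squared error (y - t)^2 with respect to w, via the chain rule
    grad = 2 * (y - t) * y * (1 - y) * x
    w -= lr * grad   # step downhill on the error surface

# after these steps the network's output approaches the target
```

A real network repeats exactly this update for millions of weights at once, with the gradients computed layer by layer via backpropagation.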
It wasn’t smooth at first

For a long time, applying neural networks ran into problems with DNN training, caused above all by vanishing and exploding gradients. These problems were solved after the following innovations:
- Increasing the size of data sets to several TB.
- Increasing the size of the models.
- The use of massively parallel GPGPU computing for training, along with growing hardware performance.
- The use of piecewise linear functions as the nonlinearity, for example rectified linear units (ReLU).
- Gradient descent algorithms with adaptive learning rates (AdaDelta, AdaGrad, RMSProp, Adam).
- Development of regularization methods for neural networks:
  - Convolutional networks, where a priori knowledge about the input data (spatial relations in images) acts as regularization.
  - Dropout.
  - Mini-batch normalization (batch normalization).
- The Network In Network architectural pattern.
- Dimensionality reduction via 1×1 convolutions in convolutional networks.
- Development of the deep residual learning method.
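The role of ReLU in fighting vanishing gradients can be sketched numerically. Backpropagation multiplies one derivative factor per layer; the sigmoid's derivative never exceeds 0.25, so a long chain of sigmoid layers shrinks the gradient toward zero, while ReLU contributes a factor of exactly 1 on its active side. A toy comparison (the depth and evaluation point are arbitrary):

```python
import math

def sigmoid_grad(z):
    # derivative of the sigmoid: s(z) * (1 - s(z)), at most 0.25
    s = 1.0 / (1.0 + math.exp(-z))
    return s * (1 - s)

def relu_grad(z):
    # derivative of ReLU: exactly 1 for positive inputs, 0 otherwise
    return 1.0 if z > 0 else 0.0

depth = 50
# product of per-layer derivative factors at z = 1 (toy backprop chain)
sig_chain = sigmoid_grad(1.0) ** depth    # shrinks toward zero with depth
relu_chain = relu_grad(1.0) ** depth      # stays at 1 regardless of depth
```

This is of course a simplification (real gradients also involve the weights), but it shows why swapping sigmoids for ReLUs made very deep networks trainable.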
All this finally made it possible to build neural networks more than 150 layers deep; today depth is practically unlimited and can reach thousands of layers.
Updates are coming
Based on the general definition of DNNs, they can be used to solve any problem where artificial neural networks were previously used. However, DNNs show significantly better results and open up more opportunities.
Using DNN methods in Computer Vision, it is possible to create an application that analyzes different sports games and gives detailed statistics about player performance.
All the analytics is produced by DNN video recognition algorithms. Such a system can process and transform data into the required stats: player/ball speed, expected goals, successful entries, failed passes, etc. Thanks to DNN, recognizing both an individual player's movements with the ball and the performance of a whole team is not a problem. It can even identify players by the numbers on their jerseys.
Having a good coach under a team's belt is really good, but when they are empowered with such an analytical system, isn't it a double threat, huh?
Or it is possible to develop a system for manufacturing needs that finds fragments of a video stream containing the required objects and gathers the necessary information. For example, such a system can:

- Monitor and control different moving parts of assembly lines in real time;
- Collect the coordinates of selected device parts from a camera video stream;
- Check the state of any mechanical system, as well as people's movements and gestures, in real time;
- Measure the coordinates of the necessary fragments or parts of the required machines, or use a DNN to process information from the video streams.
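A frame-by-frame monitoring loop of this kind could be sketched as follows. Everything here is hypothetical: `detect_parts` stands in for a real DNN object detector, and the "frames" are stubbed as plain tuples instead of decoded video:

```python
def detect_parts(frame):
    # placeholder for a DNN detector; a real one would take an image
    # and return detected objects with their coordinates
    return [("bolt", frame[0], frame[1])]

def monitor(stream):
    # walk the video stream frame by frame and log each part's coordinates
    log = []
    for frame in stream:
        for part_id, x, y in detect_parts(frame):
            log.append({"part": part_id, "coords": (x, y)})
    return log

stream = [(10, 20), (11, 21), (12, 22)]   # fake "frames" for the sketch
log = monitor(stream)
```

In a production system the log entries would feed dashboards or alerts, e.g. flagging a part whose coordinates drift outside its expected range.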
These are just a few brief examples of DNN capabilities. If we dig deeper, there are probably no boundaries for them at all.
We really mean to learn
As DNNs “grow stronger”, we suppose they will soon reach almost every sphere of our lives. And since modern technologies have developed to understand humans and their needs, now is a good moment to learn and understand how these technologies work. As Immortal Technique once said, “you never know” (Hi, Skynet).
Keen to learn about some more examples of DNN use? We have them! Check out the system that remotely measures the mass of homogeneous products in real time, or a real example of DNN use in CV: a system for evaluating sports players' performance and helping them improve their training.
We are always eager to share our best practices and wide open to learning something new, so if you have any questions or ideas, feel free to write to us. Let's develop the world together!