Deep Learning architectures learn representations of data by modeling high-level abstractions in the data. The word "deep" refers to the use of multiple processing layers. These layers form a hierarchical learning structure that is well suited to extracting new insights from big data. The main architectures can be classified into Deep Neural Networks, Convolutional Neural Networks, Deep Belief Networks, and Recurrent Neural Networks, each with unique characteristics. In Deep Learning there are many layers between the input and output layers, and each processing layer is composed of linear and non-linear transformations. Not all layers are made of neurons, but the analogy helps in understanding them.
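To make the idea of stacked linear and non-linear transformations concrete, here is a minimal sketch of a forward pass through a small multi-layer network, written in plain NumPy. The layer sizes, the ReLU activation, and the function names are illustrative assumptions, not part of any particular framework:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Non-linear activation applied between layers.
    return np.maximum(0.0, x)

# Hypothetical layer sizes for illustration: 4 inputs -> 8 -> 8 -> 2 outputs.
sizes = [4, 8, 8, 2]
weights = [rng.standard_normal((m, n)) * 0.1 for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    # Each hidden layer applies a linear transform (matrix multiply plus bias)
    # followed by a non-linearity; the final layer is left linear.
    for W, b in zip(weights[:-1], biases[:-1]):
        x = relu(x @ W + b)
    return x @ weights[-1] + biases[-1]

batch = rng.standard_normal((3, 4))  # a batch of 3 input vectors
out = forward(batch)
print(out.shape)                     # (3, 2)
```

Adding more weight matrices to the list makes the network "deeper"; without the non-linearities in between, the whole stack would collapse into a single linear transformation.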
The overall idea is not new, but in former times it was limited by insufficient computing power to perform learning through a huge number of layers. Traditional neural networks often had two to four layers, while deep learning networks can have many more. As a consequence, deep learning architectures require much more computing power. This power is available today, partly because of the rise of GPGPUs, which are used by many deep learning frameworks. Deep Learning performs (unsupervised) learning of multiple levels of features, whereby higher-level features are derived from lower-level features, thus forming a hierarchical representation. There are a couple of further new approaches, partly derived from traditional artificial neural networks, and more information can be found on this page. Another important element of the idea is the availability of big data, which did not exist at this scale in the past.
Deep Learning Details
We refer to the following video about this subject: