An autoencoder is an unsupervised learning technique often used for dimensionality reduction of large datasets. It maps data from a high-dimensional space to a lower-dimensional one, typically through nonlinear transformations learned by a neural network. The idea of an autoencoder is thus to preserve the important features of the data while discarding non-essential parts such as noise. In this sense it is similar to other dimensionality reduction techniques such as Principal Component Analysis (PCA), which, however, is limited to linear projections.
It is unsupervised learning because no labels are needed to train such deep learning models. Image data offers a good example for understanding an autoencoder. An autoencoder takes an image as input and tries to reconstruct it as output from a compressed representation that uses far fewer bits. The architecture consists of an encoder and a decoder, with a latent space, also known as the bottleneck, in between: the image is heavily compressed at the latent space. This compression is achieved by training the deep learning network so that the latent space represents the input image as faithfully as possible. In other words, the network learns an efficient image representation in an unsupervised way.
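The encoder–decoder idea can be sketched in a few lines of plain NumPy. The following is a minimal toy example, not a production implementation: all names, dimensions, and the synthetic data are illustrative assumptions. An encoder with a `tanh` nonlinearity compresses 8-dimensional inputs into a 2-dimensional bottleneck, a decoder reconstructs the input from that code, and both are trained by gradient descent on the reconstruction error alone, with no labels involved.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 200, 8, 2  # samples, input dimension, latent (bottleneck) dimension

# Toy data that lies near a 2-D subspace plus a little noise,
# so a 2-D bottleneck can represent it well.
latent_true = rng.normal(size=(n, k))
X = latent_true @ rng.normal(size=(k, d)) + 0.05 * rng.normal(size=(n, d))

W_enc = 0.1 * rng.normal(size=(d, k))  # encoder weights
W_dec = 0.1 * rng.normal(size=(k, d))  # decoder weights

def forward(X):
    Z = np.tanh(X @ W_enc)  # encoder: nonlinear projection into the latent space
    X_hat = Z @ W_dec       # decoder: reconstruction from the latent code
    return Z, X_hat

def mse(X, X_hat):
    return np.mean((X_hat - X) ** 2)  # reconstruction error

_, X_hat = forward(X)
loss_before = mse(X, X_hat)

lr = 0.1
for _ in range(500):
    Z, X_hat = forward(X)
    E = (X_hat - X) * (2.0 / X.size)                  # gradient of MSE w.r.t. X_hat
    grad_dec = Z.T @ E                                # gradient for decoder weights
    grad_enc = X.T @ ((E @ W_dec.T) * (1 - Z ** 2))   # backprop through tanh
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

_, X_hat = forward(X)
loss_after = mse(X, X_hat)
print(f"reconstruction MSE before: {loss_before:.4f}, after: {loss_after:.4f}")
```

Note that the training signal is the input itself: the network is only asked to reproduce `X` from the bottleneck, which is why no labels are required.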
The following video offers more details: