What Is Machine Learning?
Machine learning is a subfield of artificial intelligence concerned with algorithms that improve automatically through experience with training data, rather than through explicit programming. It is widely regarded as one of the most promising paths toward more human-like artificial intelligence.
Machine-learning algorithms can be broadly classified into three categories:
- Supervised learning: You present example inputs paired with their desired outputs (labels) and let the algorithm learn a rule that maps inputs to outputs.
- Unsupervised learning: You don’t provide any labels, so the algorithm is allowed to find its own structure for processing inputs (e.g., discovering hidden patterns in data).
- Reinforcement learning: The algorithm repeatedly interacts with a dynamic environment with a specific goal, such as winning a game or driving a car. The algorithm approximates the optimal solution to the problem through repeated trial and error.
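As a minimal sketch of the supervised category, here is a tiny nearest-neighbor classifier in plain Python. The data points and labels are invented purely for illustration; the point is that the labels in the training set are the "supervision" the algorithm learns from.

```python
# Minimal supervised-learning sketch: 1-nearest-neighbor classification.
# The (feature, label) pairs in `train` are the labeled examples.

def nearest_neighbor(train, query):
    """Return the label of the training point closest to the query."""
    _, label = min(train, key=lambda pair: abs(pair[0] - query))
    return label

# Toy labeled data: small values are "low", large values are "high".
train = [(1.0, "low"), (2.0, "low"), (8.0, "high"), (9.0, "high")]

print(nearest_neighbor(train, 1.5))  # near the "low" cluster -> "low"
print(nearest_neighbor(train, 8.5))  # near the "high" cluster -> "high"
```

Real supervised learners (decision trees, neural networks, and so on) learn far richer mapping rules, but the input/output contract is the same: labeled examples in, a predictive rule out.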
In this article, we’ll give a brief overview of machine learning and deep learning, and the differences between the two concepts.
What Is Deep Learning?
Deep learning is a branch of machine learning that uses artificial neural networks to approximate human-like intelligence. Inspired by biological neurons, a deep-learning model arranges simple weighted units into a graph of layered nodes and edges. Deep-learning algorithms excel at processing unstructured data such as images and language.
Technically, to be classified as "deep," a neural network must contain one or more hidden layers between the input and output layers of a multilayer perceptron, the base structure of a deep neural network. These layers are considered "hidden" because they have no direct connection to the outside world. Examples of deep-learning architectures include:
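To make the value of a hidden layer concrete, here is a small sketch: a two-input network whose single hidden layer lets it compute XOR, something no single perceptron can do. The weights and thresholds are hand-picked for illustration rather than learned from data.

```python
# A tiny feed-forward network with one hidden layer, weights set by hand.
# A lone perceptron cannot compute XOR; the hidden layer makes it possible.

def step(z):
    """Threshold activation: fire (1) if the weighted sum is positive."""
    return 1 if z > 0 else 0

def xor_net(x1, x2):
    h1 = step(x1 + x2 - 0.5)    # hidden node: behaves like OR
    h2 = step(x1 + x2 - 1.5)    # hidden node: behaves like AND
    return step(h1 - h2 - 0.5)  # output: OR and not AND, i.e. XOR

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", xor_net(a, b))  # prints 0, 1, 1, 0 in turn
```

In a trained network the weights would be found by an algorithm such as backpropagation, but the layered structure is exactly this: each hidden node computes a weighted sum of the layer below and passes it through an activation function.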
- Feed-forward (FF): Data passes in one direction from the input layer through the hidden layers and out the output layer—all nodes are connected and data never cycles back through the hidden layers. FF is used in data compression and basic image processing.
- Recurrent neural networks (RNN): A variant of the FF network that adds a time delay to the hidden layers, giving the network access to information from previous iterations. This feedback loop approximates memory and makes RNNs well suited to language processing. A good example is predictive text, which relies on the words you use most often to tailor its suggestions.
- Convolutional neural networks (CNN): A convolution is a mathematical operation on two functions that produces a third function describing how one modifies the other. Used primarily for image recognition and classification, CNNs are the "eyes" of AI. The hidden layers in a CNN act as mathematical filters, using weighted sums over neighboring pixels to identify edges, color, contrast, and other features of an image.
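The filtering step at the heart of a CNN can be sketched in a few lines of Python: slide a small grid of weights (the kernel) across an image and take a weighted sum at each position. The image and kernel values below are made up for illustration; the kernel is a simple vertical-edge detector.

```python
# Sketch of the convolution operation used in a CNN's hidden layers:
# slide a kernel over the image and compute a weighted sum at each spot.

def convolve2d(image, kernel):
    """Valid-mode 2D convolution (cross-correlation, as CNNs use it)."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            s = sum(image[i + a][j + b] * kernel[a][b]
                    for a in range(kh) for b in range(kw))
            row.append(s)
        out.append(row)
    return out

# A 4x4 image: dark (0) on the left, bright (9) on the right.
image = [[0, 0, 9, 9] for _ in range(4)]

# Vertical-edge kernel: responds strongly where left and right differ.
kernel = [[-1, 1],
          [-1, 1]]

print(convolve2d(image, kernel))  # large values mark the vertical edge
```

The output is near zero in the flat regions and large along the column where dark meets bright, which is exactly how early CNN layers pick out edges before deeper layers combine them into shapes and objects.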