Another big family of classifiers consists of decision trees and their ensemble descendants.
The idea of a decision tree is to split the data set into two or more subsets at each step of the algorithm, so that the desired classes become progressively better isolated. Each step therefore produces a split of the data set, and each split can be represented graphically as a node. The sequence of nodes, i.e. the sequence of splits, can be visualized as a tree whose branches define a rule path for isolating the desired classes.
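As a minimal sketch of these ideas, the following example fits a single decision tree with scikit-learn (an assumption; the original does not name a library) and prints the resulting rule paths. The iris dataset stands in for "the data set" discussed above.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

# Each internal node of the fitted tree is one split of the data set;
# max_depth limits how many splits a rule path may contain.
tree = DecisionTreeClassifier(max_depth=3, random_state=42)
tree.fit(X_train, y_train)

print(f"test accuracy: {tree.score(X_test, y_test):.2f}")

# export_text renders the sequence of splits as a textual tree,
# where each branch is a rule path leading to a class.
print(export_text(tree))
```

The printed rules read as "feature <= threshold" conditions, making the split-by-split logic of the classifier directly inspectable.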
In the modern ensemble evolution of the decision tree, we work with a number of trees instead of only one. Their cooperation in the decision process makes the algorithm more robust against misclassification. Random forests and gradient boosted trees are two widely used ensemble implementations.
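To illustrate the ensemble idea, the sketch below trains a random forest with scikit-learn (an assumed library choice) on the same iris data: many trees, each fitted on a bootstrap sample, vote on the final class.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

# 100 trees cooperate in the decision: each is trained on a bootstrap
# sample of the data, and their majority vote is the prediction.
forest = RandomForestClassifier(n_estimators=100, random_state=0)

# 5-fold cross-validation gives a more stable accuracy estimate.
scores = cross_val_score(forest, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```

Averaging over many trees reduces the variance of any single tree, which is the robustness gain described above.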
- Gradient Boosted Tree (coming)