Chapter 13: Classifiers
- 13.1.1. Fisher linear discriminant
- 13.1.2. Least mean square classifier
- 13.1.2.1. Performing linear regression
- 13.1.3. Logistic classifier
- 13.1.3.1. Polynomial expansion
- 13.1.3.2. Optimization algorithm
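To make 13.1.1 concrete, here is a minimal two-class Fisher linear discriminant in NumPy. It is an illustrative sketch, not the toolbox's own API; the synthetic data and the midpoint decision threshold are assumptions made for this example.

```python
import numpy as np

def fisher_ld_fit(X0, X1):
    """Fisher LD for two classes: returns projection w and threshold b."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Within-class scatter = sum of the per-class scatter matrices.
    Sw = (np.cov(X0, rowvar=False) * (len(X0) - 1)
          + np.cov(X1, rowvar=False) * (len(X1) - 1))
    w = np.linalg.solve(Sw, m1 - m0)   # direction maximizing class separation
    b = w @ (m0 + m1) / 2.0            # threshold at the projected midpoint
    return w, b

def fisher_ld_predict(X, w, b):
    return (X @ w > b).astype(int)     # 1 = class with mean m1

rng = np.random.default_rng(0)
X0 = rng.normal([0.0, 0.0], 1.0, size=(100, 2))
X1 = rng.normal([3.0, 3.0], 1.0, size=(100, 2))
w, b = fisher_ld_fit(X0, X1)
print(fisher_ld_predict(np.vstack([X0[:3], X1[:3]]), w, b))
```

Solving `Sw w = m1 - m0` instead of forming the explicit inverse of `Sw` keeps the sketch numerically better behaved.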
- 13.2.1. Introduction
- 13.2.2. Nearest mean classifier
- 13.2.2.1. Scaled nearest mean
- 13.2.3. Linear discriminant assuming normal densities
- 13.2.4. Quadratic classifier assuming normal densities
- 13.2.5. Gaussian model or classifier
- 13.2.6. Constructing Gaussian model from parameters
- 13.2.7. Generating data based on Gaussian model
- 13.2.8. Gaussian mixture models
- 13.2.8.1. Automatic estimation of number of mixture components
- 13.2.8.2. Choosing number of mixture components manually
- 13.2.8.3. Clustering data using a mixture model
- 13.2.9. Regularization of Gaussian models
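The 13.2 entries above revolve around classifiers that model each class with a normal density. A minimal NumPy sketch of the quadratic case (per-class mean, covariance, and prior, cf. 13.2.4) might look as follows; it is an illustration, not the toolbox's implementation, and the synthetic data is an assumption.

```python
import numpy as np

def fit_gaussians(X, y):
    """Per-class mean, covariance matrix, and prior probability."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (Xc.mean(axis=0), np.cov(Xc, rowvar=False),
                     len(Xc) / len(X))
    return params

def log_gauss(X, mean, cov):
    d = X.shape[1]
    diff = X - mean
    maha = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(cov), diff)
    logdet = np.linalg.slogdet(cov)[1]
    return -0.5 * (maha + logdet + d * np.log(2 * np.pi))

def predict(X, params):
    classes = sorted(params)
    # Per-class log-likelihood plus log-prior; pick the maximum.
    scores = np.column_stack([log_gauss(X, m, S) + np.log(p)
                              for m, S, p in (params[c] for c in classes)])
    return np.asarray(classes)[scores.argmax(axis=1)]

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (80, 2)), rng.normal(3, 0.5, (80, 2))])
y = np.repeat([0, 1], 80)
print(predict(X[:5], fit_gaussians(X, y)))
```

Because each class keeps its own covariance, the decision boundary is quadratic; sharing one pooled covariance would reduce this to the linear discriminant of 13.2.3, and 13.2.9 covers regularizing the covariance estimates when data is scarce.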
- 13.3.1. k-NN classifier
- 13.3.1.1. Changing neighborhood size k
- 13.3.1.2. Data scaling
- 13.3.1.3. Different k-NN formulations
- 13.3.2. Prototype selection for k-NN
- 13.3.3. k-means as a classifier
- 13.3.3.1. Adjusting number of centers in the k-means classifier
- 13.3.3.2. Changing k in the final k-NN classifier
- 13.3.3.3. Prototype pruning
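For the nearest-neighbor entries above (13.3.1), a brute-force k-NN fits in a few lines of NumPy. This sketch assumes the Euclidean metric and majority voting; as 13.3.1.2 suggests, features should normally be scaled to comparable ranges before distances are computed.

```python
import numpy as np

def knn_predict(Xtr, ytr, Xte, k=3):
    # Pairwise squared Euclidean distances, shape (n_test, n_train).
    d2 = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(axis=2)
    nn = np.argsort(d2, axis=1)[:, :k]       # indices of the k nearest
    # Majority vote among the k neighbor labels, per test point.
    return np.array([np.bincount(ytr[row]).argmax() for row in nn])

rng = np.random.default_rng(2)
Xtr = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
ytr = np.repeat([0, 1], 50)
print(knn_predict(Xtr, ytr, np.array([[0.0, 0.0], [4.0, 4.0]]), k=5))
```

Prototype selection (13.3.2) and the k-means classifier (13.3.3) both aim at replacing `Xtr` with a much smaller set of representatives so the same nearest-neighbor rule runs faster.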
- 13.4.1. Introduction
- 13.4.2. Adjusting smoothing parameter manually
- 13.4.3. Vector smoothing
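The 13.4 entries center on a smoothing parameter, which suggests a Parzen-style kernel density classifier; under that assumption (the top-level title is not given above), a minimal Gaussian-kernel sketch in NumPy looks like this, with `h` playing the role of the smoothing parameter tuned manually in 13.4.2. This is illustrative only, not the toolbox's API.

```python
import numpy as np

def parzen_predict(Xtr, ytr, Xte, h=0.5):
    classes = np.unique(ytr)
    preds = []
    for x in Xte:
        scores = []
        for c in classes:
            d2 = ((Xtr[ytr == c] - x) ** 2).sum(axis=1)
            # Average of Gaussian kernels centered on the class-c samples;
            # class priors are omitted for brevity.
            scores.append(np.exp(-d2 / (2.0 * h * h)).mean())
        preds.append(classes[int(np.argmax(scores))])
    return np.array(preds)

rng = np.random.default_rng(3)
Xtr = np.vstack([rng.normal(0, 1, (60, 2)), rng.normal(3, 1, (60, 2))])
ytr = np.repeat([0, 1], 60)
print(parzen_predict(Xtr, ytr, np.array([[0.0, 0.0], [3.0, 3.0]]), h=0.5))
```

Small `h` yields spiky, overfit density estimates while large `h` oversmooths, which is why the smoothing parameter deserves the manual tuning described in 13.4.2.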
- 13.5.1. Introduction
- 13.6.1. Feed-forward networks
- 13.6.1.1. Adjusting the number of units and iterations
- 13.6.1.2. Details on the process of training
- 13.6.1.3. Data scaling for neural networks
- 13.6.1.4. Continuing the training process
- 13.6.2. Radial basis function (RBF) networks
- 13.6.2.1. Number of units
- 13.6.2.2. Soft outputs per unit
- 13.6.2.3. Speed and scalability
- 13.6.3. Deep convolutional networks
- 13.6.3.1. Introduction
- 13.6.3.1.1. Installation and GPU support
- 13.6.3.1.2. Deep learning example
- 13.6.3.2. Fine-tuning deep network training
- 13.6.3.3. Defining architecture in a separate cell array
- 13.6.3.4. Details about training process
- 13.6.3.5. Repeatability of training
- 13.6.3.6. Providing custom training/validation sets
- 13.6.3.7. Training without GUI
- 13.6.3.8. Suppressing all display output
- 13.6.3.9. Convolution layer (conv)
- 13.6.3.10. Fully connected layers
- 13.6.3.11. Batch-normalization layer (bnorm)
- 13.6.3.12. Maximum spatial pooling (mpool)
- 13.6.3.13. Rectified linear unit (relu)
- 13.6.3.14. Dropout (dropout)
- 13.6.3.15. Custom MatConvNet installation
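To make the 13.6.1 entries concrete, here is a tiny one-hidden-layer feed-forward network trained by plain gradient descent. It is a NumPy-only sketch, not the toolbox's trainer; the unit count, learning rate, and iteration count are illustrative choices (cf. 13.6.1.1), and the inputs here are already near unit scale, sidestepping the scaling issue raised in 13.6.1.3.

```python
import numpy as np

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
y = np.repeat([0.0, 1.0], 100)[:, None]

n_hidden, lr = 8, 0.1                        # illustrative choices

W1 = rng.normal(0, 0.5, (2, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.5, (n_hidden, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):                        # iteration count, cf. 13.6.1.1
    H = np.tanh(X @ W1 + b1)                 # hidden-layer activations
    p = sigmoid(H @ W2 + b2)                 # predicted class-1 probability
    g_out = (p - y) / len(X)                 # cross-entropy output gradient
    g_hid = (g_out @ W2.T) * (1 - H * H)     # backprop through tanh
    W2 -= lr * (H.T @ g_out); b2 -= lr * g_out.sum(axis=0)
    W1 -= lr * (X.T @ g_hid); b1 -= lr * g_hid.sum(axis=0)

print("training accuracy:", float(((p > 0.5) == (y > 0.5)).mean()))
```

Continuing training from the current weights (13.6.1.4) amounts to running the same loop again without reinitializing `W1`, `W2`, `b1`, `b2`.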
13.7 Support vector classifier
- 13.7.1. Introduction
- 13.7.2. Linear support vector machine
- 13.7.3. Polynomial support vector machine
- 13.7.4. Grid search for sigma and C parameters
- 13.7.5. Multi-class using one-against-all approach
- 13.7.6. Multi-class using one-against-one approach
- 13.7.7. One-class support vector machines (RBF)
- 13.7.8. Repeatability of grid search
- 13.7.8.1. Fixing random seed
- 13.7.8.2. Split data outside
- 13.7.9. Probabilistic output for two-class SVM
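As a sketch of the linear case in 13.7.2, the following trains a linear SVM by stochastic subgradient descent on the hinge loss (a Pegasos-style scheme, chosen here as an assumption; the toolbox's own trainer is not shown). The regularization strength, epoch count, and bias handling are illustrative.

```python
import numpy as np

def linear_svm_fit(X, y, lam=0.01, epochs=50, seed=0):
    """y must be in {-1, +1}. Returns weights w and bias b."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1]); b = 0.0; t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            t += 1
            eta = 1.0 / (lam * t)            # decaying step size
            margin = y[i] * (X[i] @ w + b)
            w *= (1 - eta * lam)             # shrink toward zero (regularizer)
            if margin < 1:                   # hinge-loss violator
                w += eta * y[i] * X[i]
                b += eta * y[i]              # simple (unregularized) bias step
    return w, b

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0, 1, (80, 2)), rng.normal(3, 1, (80, 2))])
y = np.repeat([-1, 1], 80)
w, b = linear_svm_fit(X, y)
print("training accuracy:", float((np.sign(X @ w + b) == y).mean()))
```

The grid search of 13.7.4 would wrap such a fit in a double loop over candidate (sigma, C) values, selecting by cross-validated error; the one-against-all and one-against-one schemes of 13.7.5 and 13.7.6 combine several such two-class machines into a multi-class decision.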
13.8 Decision trees and random forests
- 13.8.1. Decision tree classifier
- 13.8.1.1. Growing tree without pruning
- 13.8.1.2. Controlling the tree pruning
- 13.8.1.3. Using decision tree for feature selection
- 13.8.2. Random forest classifier
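Finally, a compact CART-style decision tree with Gini-impurity splits illustrates 13.8.1; a simple depth limit stands in for the pruning discussed in 13.8.1.1 and 13.8.1.2. This is a NumPy-only sketch, not the toolbox's implementation.

```python
import numpy as np

def gini(y):
    _, counts = np.unique(y, return_counts=True)
    p = counts / len(y)
    return 1.0 - (p ** 2).sum()

def build(X, y, depth=3):
    if depth == 0 or len(np.unique(y)) == 1:
        return np.bincount(y).argmax()            # leaf: majority label
    best = None
    for f in range(X.shape[1]):                   # try every feature...
        for t in np.unique(X[:, f])[:-1]:         # ...and every threshold
            mask = X[:, f] <= t
            score = (mask.mean() * gini(y[mask])
                     + (~mask).mean() * gini(y[~mask]))
            if best is None or score < best[0]:
                best = (score, f, t, mask)
    if best is None:                              # no valid split possible
        return np.bincount(y).argmax()
    _, f, t, mask = best
    return (f, t, build(X[mask], y[mask], depth - 1),
                  build(X[~mask], y[~mask], depth - 1))

def predict_one(node, x):
    while isinstance(node, tuple):                # descend to a leaf
        f, t, left, right = node
        node = left if x[f] <= t else right
    return node

rng = np.random.default_rng(6)
X = np.vstack([rng.normal(0, 1, (60, 2)), rng.normal(3, 1, (60, 2))])
y = np.repeat([0, 1], 60)
tree = build(X, y)
print([int(predict_one(tree, x)) for x in X[:5]])
```

A random forest (13.8.2) would train many such trees on bootstrap samples, restrict each split search to a random subset of features, and combine the trees' votes.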