- 1.1. This manual
- 1.2. Introduction to perClass
- 1.2.1. Versions
- 1.2.2. System requirements
- 1.2.3. Useful general commands
- 1.2.3.1. Displaying perClass version and license information
- 1.2.3.2. Demo examples
- 1.2.3.3. Providing direct feedback to PR Sys Design
- 1.2.3.4. Controlling messages displayed by perClass
- 1.3. Release notes
- 2.1. Introduction
- 2.2. Installing free perClass Lite
- 2.3. Installing perClass Toolbox
- 2.4. Making the USB dongle work on Linux
- 2.5. Installing perClass Group (floating license)
- 2.5.1. Starting the license server
- 2.5.2. Using perClass on client machines
- 2.6. License error messages
- 2.6.1. No license for product (-1)
- 2.6.2. Wrong host for license (-4)
- 2.6.3. Requested version not supported (-6)
- 4.1. Introduction
- 4.2. Basic handling of labels
- 4.3. Creating labels
- 4.3.1. Label construction by per-sample class names
- 4.3.2. Label construction with unique categories and per-sample indices
- 4.3.2.1. Categories as cell arrays
- 4.3.2.2. Categories as character arrays
- 4.3.2.3. Categories in an sdlist object
- 4.3.3. Label construction by consecutive labeling
- 4.3.4. Labels with one entry per class
- 4.4. Operations on labels
- 4.4.1. Accessing label information
- 4.4.2. Retrieving class sizes and fractions
- 4.4.3. Searching for samples with specific labels
- 4.4.4. Subsets of labels by index
- 4.4.5. Relabeling: Changing class names and defining meta-classes
- 4.4.6. Concatenating label sets
- 4.5. Creating label lists
- 4.5.1. From list of class names
- 4.5.2. Using string prefix
- 4.6. Operations on label lists
- 4.6.1. Accessing list content
- 4.6.2. Converting between names and indices
- 5.1. Introduction
- 5.2. Constructing data sets
- 5.3. Importing and exporting data sets
- 5.3.1. Importing data sets
- 5.3.1.1. Importing multiple properties
- 5.3.2. Exporting data sets
- 5.4. Basic operations on data sets
- 5.4.1. Accessing samples, features, and classes
- 5.4.2. Accessing raw data
- 5.4.3. Accessing class labels
- 5.5. Data set properties
- 5.5.1. Displaying available properties
- 5.5.2. Retrieving properties
- 5.5.3. Setting properties
- 5.5.3.1. Sample properties
- 5.5.3.2. Feature properties
- 5.5.3.3. Data properties
- 5.5.4. Testing presence of a property
- 5.5.5. Removing properties
- 5.6. Multiple sets of labels
- 5.7. Selecting data subsets by property values
- 5.8. Selecting data subsets randomly
- 5.9. Renaming classes and defining meta-classes
Chapter 6: Data visualization
- 6.1. Interactive scatter plot
- 6.1.1. Legend
- 6.1.2. Changing features
- 6.1.3. Sample inspector
- 6.1.4. Switching between different sets of labels
- 6.1.5. Visualizing subsets of samples
- 6.1.6. Visualizing confusion matrix with visible and hidden samples
- 6.1.7. Bringing a class to top, z-order of classes
- 6.1.8. Creating new label set
- 6.1.9. Hand-painting class labels
- 6.1.10. Tagging individual samples
- 6.1.11. Label visible samples as...
- 6.1.12. Renaming classes
- 6.1.13. Visualizing live feature distributions in scatter plot
- 6.2. Interactive plot of per-class feature distributions
Chapter 7: Image visualization
- 7.1. Introduction
- 7.2. Visualizing images with sdimage
- 7.3. Creating image data set objects directly
- 7.4. Hand-painting class labels
- 7.5. Cropping images
- 7.6. Saving image data set to workspace
- 7.7. Working with image subsets
- 7.8. Creating image matrix from a data set
- 7.9. Storing multiple images in data sets
- 7.10. Connecting sdimage and sdscatter
- 7.11. Clustering an image with k-means
- 7.12. Defining connected components
Chapter 8: Feature extraction
- 8.1.1. perClass feature extraction
- 8.1.2. Feature extraction domains
- 8.1.2.1. Extracting features in local image neighborhoods
- 8.1.2.2. Representing objects identified in an image
- 8.1.2.3. Representing a band in spectral data
- 8.1.2.4. Transforming color space
- 8.2. Feature extraction in local image regions
- 8.2.1. Introduction
- 8.2.2. Quick example
- 8.2.3. Region feature types
- 8.2.3.1. Raw neighborhood values
- 8.2.3.2. Local mean and standard deviation
- 8.2.3.3. Local histograms
- 8.2.3.4. Features extracted from local histograms
- 8.2.3.5. Co-occurrence matrices
- 8.2.3.6. Gaussian filter and its derivatives
- 8.2.3.7. Sobel filter
- 8.2.3.8. User-defined filter bank
- 8.2.3.9. Leung-Malik multi-orientation/multi-scale filter bank
- 8.2.3.10. Schmid rotationally-invariant filter bank
- 8.2.3.11. Maximum Response filter bank combining multiple orientations
- 8.2.3.12. Maximum Response filter bank combining multiple orientations and scales
- 8.2.3.13. Gray-level morphology features
- 8.2.3.14. User-defined feature extractors
- 8.2.4. Grid definition
- 8.2.5. Visualization of feature images computed on a grid
- 8.2.6. Computing features based on a foreground mask
- 8.2.7. Propagating image labels
- 8.2.8. Visualizing regions in original image
- 8.3. Describing objects identified in an image
- 8.3.1. Introduction
- 8.3.2. Quick example of the entire process
- 8.3.2.1. Preparing pixel classifier
- 8.3.2.2. Applying pixel classifier
- 8.3.2.3. Segmenting objects
- 8.3.2.4. Extracting object features
- 8.3.3. Object features
- 8.3.3.1. Object size
- 8.3.3.2. Mean of object pixels
- 8.3.3.3. Sum of object pixels
- 8.3.3.4. Histogram of a specific input feature per object
- 8.3.3.5. Shape features on object mask
- 8.3.3.6. Shape features on object content
- 8.3.3.7. Example of computing per-object histogram of local gradient
- 8.3.4. Copying labels into object data set
- 8.3.5. Bounding box of objects
- 8.4. Feature extraction for spectral data
- 8.4.1. Introduction
- 8.4.2. Quick example
- 8.4.3. Spectral pre-processing
- 8.4.3.1. Computation of spectral indices
- 8.4.4. perClass band extractors
- 8.4.5. Examples
- 8.4.5.1. Defining bands by clustering
- 8.4.5.2. Defining band extraction pipeline
- 8.4.5.3. Displaying band information
- 8.4.5.4. LDA spectral feature extractor
- 8.4.5.5. Defining bands manually
Chapter 9: Handling nominal features
- 9.1. Introduction
- 9.2. Creating data sets with nominal features
- 9.3. Testing if data set contains nominal features
- 9.4. Displaying info about nominal data
- 9.5. Converting nominal feature to labels
- 9.6. Training a pipeline on a nominal data set
- 9.7. Combining nominal data sets
- 9.8. Testing if two nominal representations are identical
- 9.9. Making two nominal representations identical
- 9.10. Applying pipelines to nominal data sets
- 9.11. Turning labels into nominal features
Chapter 10: Interfacing databases
- 10.1. Introduction
- 10.2. Creating an empty database
- 10.3. Opening an existing database from Matlab
- 10.3.1. Converting a query subset
- 10.3.2. Converting a query into a numerical matrix
- 10.4. Closing database connections
- 10.5. Sorting entries
- 10.6. Inserting new records
- 10.7. Updating existing records
- 10.8. Table information
- 11.1. Introduction
- 11.1.1. Execution on new data
- 11.1.2. Accessing pipeline steps
- 11.1.3. Displaying pipeline details
- 11.1.4. Untrained pipelines
Chapter 12: Dimensionality reduction and data representation
- 12.1. Feature extraction
- 12.1.1. Principal Component Analysis (PCA)
- 12.1.2. Linear Discriminant Analysis (LDA)
- 12.2. Feature space expansion
- 12.3. Feature selection
- 12.3.1. Manual feature selection
- 12.3.2. Individual feature selection (feature ranking)
- 12.3.3. Random feature selection
- 12.3.4. Forward search
- 12.3.5. Backward search
- 12.3.6. Floating search
- 12.3.7. Initialization of the selection searches
- 12.3.8. Using a decision tree classifier for feature selection
- 13.1.1. Fisher linear discriminant
- 13.1.2. Least mean square classifier
- 13.1.2.1. Performing linear regression
- 13.1.3. Logistic classifier
- 13.1.3.1. Polynomial expansion
- 13.1.3.2. Optimization algorithm
- 13.2.1. Introduction
- 13.2.2. Nearest mean classifier
- 13.2.2.1. Scaled nearest mean
- 13.2.3. Linear discriminant assuming normal densities
- 13.2.4. Quadratic classifier assuming normal densities
- 13.2.5. Gaussian model or classifier
- 13.2.6. Constructing Gaussian model from parameters
- 13.2.7. Generating data based on Gaussian model
- 13.2.8. Gaussian mixture models
- 13.2.8.1. Automatic estimation of number of mixture components
- 13.2.8.2. Choosing number of mixture components manually
- 13.2.8.3. Clustering data using a mixture model
- 13.2.9. Regularization of Gaussian models
- 13.3.1. k-NN classifier
- 13.3.1.1. Changing neighborhood size k
- 13.3.1.2. Data scaling
- 13.3.1.3. Different k-NN formulations
- 13.3.2. Prototype selection for k-NN
- 13.3.3. k-means as a classifier
- 13.3.3.1. Adjusting number of centers in the k-means classifier
- 13.3.3.2. Changing k in the final k-NN classifier
- 13.3.3.3. Prototype pruning
- 13.4.1. Introduction
- 13.4.2. Adjusting smoothing parameter manually
- 13.4.3. Vector smoothing
- 13.5.1. Introduction
- 13.6.1. Feed-forward networks
- 13.6.1.1. Adjusting the number of units and iterations
- 13.6.1.2. Details on the process of training
- 13.6.1.3. Data scaling for neural networks
- 13.6.1.4. Continuing the training process
- 13.6.2. Radial-Basis Function (RBF) networks
- 13.6.2.1. Number of units
- 13.6.2.2. Soft outputs per unit
- 13.6.2.3. Speed and scalability
- 13.6.3. Deep convolutional networks
- 13.6.3.1. Introduction
- 13.6.3.1.1. Installation and GPU support
- 13.6.3.1.2. Deep learning example
- 13.6.3.2. Fine-tuning deep network training
- 13.6.3.3. Defining architecture in a separate cell array
- 13.6.3.4. Details about training process
- 13.6.3.5. Repeatability of training
- 13.6.3.6. Providing custom training/validation sets
- 13.6.3.7. Training without GUI
- 13.6.3.8. Suppressing all display output
- 13.6.3.9. Convolution layer (conv)
- 13.6.3.10. Fully-connected layers
- 13.6.3.11. Batch-normalization layer (bnorm)
- 13.6.3.12. Maximum spatial pooling (mpool)
- 13.6.3.13. Rectified linear unit (relu)
- 13.6.3.14. Dropout (dropout)
- 13.6.3.15. Custom MatConvNet installation
- 13.7. Support vector classifier
- 13.7.1. Introduction
- 13.7.2. Linear support vector machine
- 13.7.3. Polynomial support vector machine
- 13.7.4. Grid search for sigma and C parameters
- 13.7.5. Multi-class using one-against-all approach
- 13.7.6. Multi-class using one-against-one approach
- 13.7.7. One-class support vector machines (RBF)
- 13.7.8. Repeatability of grid search
- 13.7.8.1. Fixing random seed
- 13.7.8.2. Splitting data outside the grid search
- 13.7.9. Probabilistic output for two-class SVM
- 13.8. Decision trees and random forests
- 13.8.1. Decision tree classifier
- 13.8.1.1. Growing tree without pruning
- 13.8.1.2. Controlling the tree pruning
- 13.8.1.3. Using decision tree for feature selection
- 13.8.2. Random forest classifier
Chapter 14: Performance evaluation
- 14.1. Introduction
- 14.2. Confusion matrices
- 14.2.1. Normalized confusion matrices
- 14.2.2. Visualizing confusion matrix in a figure
- 14.2.3. Storing confusion matrices as strings
- 14.2.4. Rectangular confusion matrices
- 14.2.5. Confusion matrices for a set of operating points
- 14.2.6. Visualization of the per-class errors
- 14.2.7. Cross-validation by rotation
- 14.2.8. How are the errors computed?
- 14.2.9. Setting random seed
- 14.3. Accessing algorithms trained in cross-validation
- 14.4. Accessing per-fold data sets
- 14.5. Cross-validation by randomization
- 14.6. Leave-one-out evaluation
- 14.6.1. Leave-one-out over property
Chapter 15: Classifier optimization with ROC Analysis
- 15.1. Introduction
- 15.2. Using sdroc objects
- 15.2.1. Setting current operating point
- 15.2.2. Performing decisions based on ROC
- 15.2.3. Interactive visualization of ROC decisions
- 15.2.4. Interactive visualization of confusion matrices
- 15.2.5. Defining constraints in a confusion matrix
- 15.2.6. Interactively minimizing errors in a confusion matrix
- 15.2.7. Accessing estimated performances
- 15.2.8. Using different performance measures
- 15.3. Multi-class ROC Analysis
- 15.4. ROC Analysis using target thresholding (detection)
- 15.5. Selecting application-specific operating point
- 15.5.1. The most common use case
- 15.5.2. Applying performance constraints
- 15.5.3. Constraints using the low-level methods
- 15.5.4. Cost-sensitive optimization
- 15.5.5. Applying multiple performance constraints
- 15.6. Estimating ROC with variances
Chapter 16: Detection and rejection
- 16.1. Detection
- 16.1.1. Training a one-class detector
- 16.1.2. Renaming detector decisions
- 16.1.3. Defining a target class covering all samples in the data
- 16.1.4. Adjusting the number of rejected target examples
- 16.1.5. Training a two-class detector
- 16.1.6. Repeatable two-class detector
- 16.1.7. Visualizing detector decisions on image data
- 16.1.8. Specifying performance measures for internal ROC
- 16.1.9. Storing confusion matrices in detector ROC
- 16.2. Rejection
Chapter 17: Classifier combining and cascades
- 17.1. Classifier combining introduction
- 17.2. Soft-output combining
- 17.2.1. Stacking multiple classifiers
- 17.2.1.1. Creating stack pipelines by concatenation
- 17.2.1.2. Creating stack pipelines from cell arrays
- 17.2.1.3. Accessing base classifiers from the stack
- 17.2.2. Fixed combiners
- 17.2.2.1. Comparable soft output types
- 17.2.2.2. Changing fixed combination rule
- 17.2.3. Trained combiners
- 17.3. Crisp combining of classifier decisions
- 17.3.1. Stacks of classifiers returning decisions
- 17.3.2. Crisp combining with 'all agree' rule
- 17.3.3. Crisp combining with 'at least' rule
- 17.4. Hierarchical classifiers and cascades
Chapter 18: Cluster analysis
- 18.1. Introduction
- 18.2. Clustering with sdcluster command
- 18.2.1. Clustering all data irrespective of classes
- 18.2.2. How to cluster an already clustered data set?
- 18.2.3. Changing default cluster names
- 18.2.4. Removing cluster labels
- 18.3. Clustering algorithms
- 18.3.1. k-means algorithm
- 18.3.1.1. Obtaining cluster labels with sdkmeans
- 18.3.1.2. Accessing prototypes derived by sdkmeans
- 18.3.2. k-centers algorithm
- 18.3.3. Gaussian mixture model
Chapter 19: Custom algorithms
Chapter 20: Classifier deployment using perClass runtime library
- 20.1. Introduction
- 20.2. Execution of classifiers with the command-line sdrun utility
- 20.2.1. Displaying pipeline information
- 20.2.2. Executing a classifier on a data file
- 20.2.3. Executing a classifier on samples provided in a string
- 20.2.4. Displaying license info
- 20.3. Classifier execution in Microsoft Excel worksheets
- 20.4. Executing classifiers from LabView
- 20.5. Executing classifiers from Matlab/Matlab compiler
- 20.5.1. Loading a classifier pipeline
- 20.5.2. Executing a classifier on new data
- 20.5.3. Working with multiple pipelines
- 20.5.4. Removing pipelines from memory
- 20.6. Classifier embedding using C/C++ language API
- 20.6.1. Complete C application example
- 20.6.2. Using multiple pipelines
- 20.6.3. Handling decisions
- 20.7. Directly applying pipelines to uint8/uint16 data
- 20.7.1. Introduction
- 20.7.2. Feature selection on uint8 data
- 20.7.3. Runtime API supporting uint8/uint16 data types
- 20.8. Measuring time of classifier execution
- 20.8.1. Timers in C API
- 20.8.2. Timing classifier execution via sdrun