perClass Documentation
version 5.4 (7-Dec-2018)
SDTEST Estimate classifier error/performance on test set

    ERR=SDTEST(PD,DATA)   % using trained pipeline and data set
    ERR=SDTEST(DATA,PD)   % the same; arguments accepted in either order
    ERR=SDTEST(DATA,DEC) % from a test set and decisions
    ERR=SDTEST(LAB,DEC)  % from labels and decisions

 Specifying performance measures:
    [ERR,R,T]=SDTEST(DATA,PD,'measures',{'class-errors','TPr','apple','precision','banana'})

 Measures that take a target class (see MEASURES below) are followed in
 the cell array by the class name: here TPr is computed for the 'apple'
 class and precision for the 'banana' class.

 INPUTS
   DATA     Labeled test set (SDDATA)
   PD       Trained pipeline returning decisions
   LAB      Ground truth labels (SDLAB)
   DEC      Classifier decisions (SDLAB)

 OUTPUT
   ERR      Estimated error/performance measures (scalar or vector)
   R        SDROC object holding the computed performance measures
   T        Execution time in seconds

 DESCRIPTION
 SDTEST estimates errors or performances by applying the trained
 pipeline PD to the test set, or by comparing true labels with
 classifier decisions. By default, SDTEST returns the mean error over
 classes. Other measures may be specified using the 'measures' option,
 similarly to SDROC.
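
 As a minimal sketch, the calling styles above may be combined as
 follows, assuming variables DATA, PD, LAB and DEC as described under
 INPUTS (how they are obtained is not shown here):

    % default: mean error over classes
    ERR = sdtest(DATA,PD)

    % requesting specific measures; also returns an SDROC object R
    % and the execution time T
    [ERR,R,T] = sdtest(DATA,PD,'measures',{'class-errors'})

    % comparing ground-truth labels directly with decisions
    ERR = sdtest(LAB,DEC)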

 MEASURES
  TP,FP,TN,FN  - true/false positives and negatives (counts of samples)
                 param: target class
  TPr,FPr,TNr,FNr - fractions of true/false positives/negatives (normalized
                    by the total number of samples per class), param: target class
  sensitivity   - identical to TPr
  specificity   - identical to TNr
  class-errors  - all per-class errors
  mean-error    - mean error over classes (param: class priors,
                  default: equal priors)
  precision     - TP/(TP+FP), the fraction of target decisions that are
                  truly target samples (param: target class; see the
                  example after this list)
  posfrac       - (TP+FP)/N, the fraction of all samples flagged as the
                  target class
  detrate       - detection rate, (TP+FP)/Nt (param: target class)
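
 As an illustration of parameterized measures, the sketch below
 requests per-class errors together with sensitivity and precision for
 specific target classes. The class names 'apple' and 'banana' are
 taken from the example above and stand for whatever classes the data
 set actually contains:

    % each measure taking a target class is followed by the class
    % name in the cell array
    [ERR,R] = sdtest(DATA,PD,'measures', ...
                     {'class-errors','sensitivity','apple', ...
                      'precision','banana'})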

 SEE ALSO
 SDROC
