perClass Documentation
version 5.4 (7-Dec-2018)
 SDROC Estimate ROC of a classifier

    R=SDROC(TS,P) % provide test set and trained classifier
    R=SDROC(OUT)  % provide soft outputs out=ts*-p

    R2=SDROC(R,OUT)  % re-estimate from OUT using op.points in R

 INPUT
    TS    test set
    P     trained classifier pipeline
    OUT   data set with classifier soft outputs

 OUTPUT
    R,R2  SDROC objects

 OPTIONS
  'target' Name of the target decision.
  'non-target'  Name of the non-target decision in the resulting op.point.
  'reject'  Add a reject option and construct a reject curve.
            - if a fraction in (0,1) is given, set the threshold by
              rejecting that fraction of all samples
            - if an SDOPS or SDROC object is given, use its current op.point
  'measures' Cell array with measure names and parameters.
  'noconfmat' - Do not store confusion matrices ('confmat' option is used by default)
  'polarity',P - Set polarity of the soft output (P is 'similarity' or 'distance')
  'maxpoints',M - Set the maximum number of operating points (2000 by default)
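
 A hedged sketch of combining the options above; it assumes the variables
 ts (test set) and p (trained classifier) from the examples below, and uses
 only the option names listed in this section:

```matlab
% Limit the ROC to 500 operating points and tell SDROC that lower soft
% outputs mean "more target-like" (distance polarity); skip storing
% confusion matrices to save memory.
r = sdroc(ts, p, 'maxpoints', 500, ...
                 'polarity', 'distance', ...
                 'noconfmat');
```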

 DESCRIPTION
 SDROC performs ROC analysis of classifier P on the test set TS.
 Alternatively, soft output set OUT can be provided.
 SDROC performs two-class or multi-class analysis using output thresholding
 or weighting. The result is an object holding the estimated measures at a
 set of operating points.
 The ROC can be visualized with the interactive SDDRAWROC plot. Operating
 points can be selected using SETCUROP or CONSTRAIN.
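
 A hedged sketch of this interactive workflow; it assumes ts and p as in the
 examples below, and that SETCUROP accepts an operating-point index (check
 its own help page for the exact calling convention):

```matlab
r = sdroc(ts, p);      % estimate the ROC
sddrawroc(r);          % inspect the curve interactively
r = setcurop(r, 10);   % select e.g. the 10th operating point as current
```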

 EXAMPLES
 Estimate from test set and trained classifier:
   p=sdparzen(tr)
   r=sdroc(ts,p)
 Estimate from soft outputs:
   out=ts*-p   % -p removes decision step so that out is sddata with soft outputs
   r=sdroc(out)
 Specifying the performance measures to estimate:
   r=sdroc(ts,p,'measures',{'FPr','apple','TPr','apple'}) % measure name followed by its class name
   r=sdroc(ts,p,'measures',{'custom:F',@custom_F_measure}) % custom measure

 SEE ALSO
 CUSTOM_F_MEASURE, SETCUROP