perClass Documentation
version 5.4 (7-Dec-2018)
 SDSVC  Support vector machine (trained by libSVM)


   DATA      SDDATA object

   P         Pipeline object
   E         Structure with grid search results

  'type'     kernel type: 'RBF','poly','linear' (default: RBF)
  'sigma'    RBF sigma (default: select by grid-search)
  'degree'   Polynomial degree (default: select by grid-search)
  'C'        cost parameter C (default: select by grid-search)
  'w',W      Per-class weights for the cost parameter C in imbalanced problems (W is a vector with one weight per class)
  'noscale'  Do not include data scaling
  'test'     Provide external sddata for evaluating error in parameter search
  'tsfrac'   If 'test' is not specified, fraction of DATA selected
             randomly per class for evaluating error criterion (def: 0.25)
  'one-against-one'  Use one-against-one multi-class strategy instead of
                     default one-against-all
  'verbose'  Show verbose output of libsvm
  'no shrink' Disable libsvm shrinking heuristic (may speed up optimization)
  'prob'      Probabilistic soft outputs (for two-class or one-against-one multiclass)
  'target',T Numerical target vector for regression
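
 A minimal usage sketch (the sddata set b and its construction are
 illustrative, not part of this reference):

   b = sddata(data,lab);          % illustrative sddata training set
   p = sdsvc(b)                   % RBF kernel, sigma and C by grid search
   p = sdsvc(b,'type','linear')   % linear SVM
   p = sdsvc(b,'sigma',2,'C',10)  % fix both parameters (no grid search)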

 SDSVC trains a support vector machine using libSVM. By default, the RBF
 kernel is used, with the sigma and C parameters optimized by a grid search
 minimizing the mean error. Polynomial and linear SVMs are available via
 the 'type' option. For multi-class problems, the one-against-all strategy
 is adopted by default; the one-against-one strategy may be selected using
 the identically-named option.
 By default, for the RBF and polynomial kernels, sdsvc scales the data
 (standardization). Scaling may be switched off using the 'noscale' option.
 sdsvc splits DATA into a subset used for training the model and a subset
 used for error estimation/parameter selection (by default, 25% of DATA).
 This fraction may be adjusted with the 'tsfrac' option. Alternatively, the
 user may provide an external set for error estimation using the 'test'
 option.
 The trained SDSVC pipeline provides access to the kernel parameter
 (sigma/degree), the C constant used in training, the number of support
 vectors (svcount) and the indices of support vectors in the training set
 (svind).
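
 A sketch of reading these fields, assuming a trained pipeline p in which
 the SVM is the second step (the first being scaling):

   p = sdsvc(b);       % b is an illustrative sddata training set
   p(2).sigma          % RBF kernel width selected by grid search
   p(2).C              % cost parameter used in training
   p(2).svcount        % number of support vectors
   p(2).svind          % indices of support vectors in the training set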

 Probabilistic soft outputs are available for two-class classifiers (or
 one-against-one multi-class) via the 'prob' option (Platt 2000, improved
 by Lin et al.; see references below).
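
 For example (a sketch; b is an illustrative two-class sddata set):

   p = sdsvc(b,'prob')   % soft outputs estimate class probabilities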

 For imbalanced problems, a weight for each class can be specified with the
 'C weights',W option (or the 'w',W shorthand), where W is a vector with
 one weight per class. Multi-class problems are supported with the
 one-against-all strategy.
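
 A sketch of per-class weighting for a two-class problem with a rare second
 class (the weight values are illustrative):

   p = sdsvc(b,'w',[1 10])   % penalize errors on the second class 10x more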

 Support-vector indices refer to the training set. For example, with a
 trained pipeline p and training set b:

   origSV=b( p(2).svind ) % p(2) because the first step is scaling


 Chih-Chung Chang and Chih-Jen Lin, LIBSVM : a library for support vector
 machines, 2001.

 J.Platt, Probabilistic outputs for support vector machines and comparison
 to regularized likelihood methods, Advances in Large Margin Classifiers,
 Cambridge, MA, 2000.
