SkopeRules

The documentation of the SkopeRules module.

class skrules.skope_rules.SkopeRules(feature_names=None, precision_min=0.5, recall_min=0.01, n_estimators=10, max_samples=0.8, max_samples_features=1.0, bootstrap=False, bootstrap_features=False, max_depth=3, max_depth_duplication=None, max_features=1.0, min_samples_split=2, n_jobs=1, random_state=None, verbose=0)[source]

An easily interpretable classifier that optimizes simple logical rules.

Parameters:

feature_names : list of str, optional

The names of each feature to be used for returning rules in string format.

precision_min : float, optional (default=0.5)

The minimal precision of a rule to be selected.

recall_min : float, optional (default=0.01)

The minimal recall of a rule to be selected.

n_estimators : int, optional (default=10)

The number of base estimators (rules) to use for prediction. More are built before selection. All are available in the estimators_ attribute.

max_samples : int or float, optional (default=.8)

The number of samples to draw from X to train each decision tree, from which rules are generated and selected.

  • If int, then draw max_samples samples.
  • If float, then draw max_samples * X.shape[0] samples.

If max_samples is larger than the number of samples provided, all samples will be used for all trees (no sampling).

max_samples_features : int or float, optional (default=1.0)

The number of features to draw from X to train each decision tree, from which rules are generated and selected.

  • If int, then draw max_samples_features features.
  • If float, then draw max_samples_features * X.shape[1] features.

bootstrap : boolean, optional (default=False)

Whether samples are drawn with replacement.

bootstrap_features : boolean, optional (default=False)

Whether features are drawn with replacement.

max_depth : integer or List or None, optional (default=3)

The maximum depth of the decision trees. If None, nodes are expanded until all leaves are pure or until all leaves contain fewer than min_samples_split samples. If an iterable is passed, n_estimators trees are trained for each tree depth, which lets you create and compare rules of different lengths.

max_depth_duplication : integer, optional (default=None)

The maximum depth of the decision tree used for rule deduplication. If None, no deduplication occurs.

max_features : int, float, string or None, optional (default=1.0)

The number of features considered (by each decision tree) when looking for the best split:

  • If int, then consider max_features features at each split.
  • If float, then max_features is a fraction and int(max_features * n_features) features are considered at each split.
  • If “auto”, then max_features=sqrt(n_features).
  • If “sqrt”, then max_features=sqrt(n_features) (same as “auto”).
  • If “log2”, then max_features=log2(n_features).
  • If None, then max_features=n_features.

Note: the search for a split does not stop until at least one valid partition of the node samples is found, even if it requires effectively inspecting more than max_features features.

min_samples_split : int, float, optional (default=2)

The minimum number of samples required to split an internal node for each decision tree.

  • If int, then consider min_samples_split as the minimum number.
  • If float, then min_samples_split is a fraction and ceil(min_samples_split * n_samples) is the minimum number of samples for each split.

n_jobs : integer, optional (default=1)

The number of jobs to run in parallel for both fit and predict. If -1, then the number of jobs is set to the number of cores.

random_state : int, RandomState instance or None, optional

  • If int, random_state is the seed used by the random number generator.
  • If RandomState instance, random_state is the random number generator.
  • If None, the random number generator is the RandomState instance used by np.random.

verbose : int, optional (default=0)

Controls the verbosity of the tree building process.

Attributes

rules_ (dict of tuples (rule, precision, recall, nb)) The collection of n_estimators rules used in the predict method. The rules are generated by fitted sub-estimators (decision trees). Each rule satisfies the recall_min and precision_min conditions. The selection is done according to OOB precisions.
estimators_ (list of DecisionTreeClassifier) The collection of fitted sub-estimators used to generate candidate rules.
estimators_samples_ (list of arrays) The subset of drawn samples (i.e., the in-bag samples) for each base estimator.
estimators_features_ (list of arrays) The subset of drawn features for each base estimator.
max_samples_ (integer) The actual number of samples used to train each base estimator.
n_features_ (integer) The number of features when fit is performed.
classes_ (array, shape (n_classes,)) The class labels.

Methods

decision_function(X)[source]

Average anomaly score of X from the base classifiers (rules).

The anomaly score of an input sample is computed as the weighted sum of the binary rules outputs, the weight being the respective precision of each rule.

Parameters:

X : array-like, shape (n_samples, n_features)

The input samples.

Returns:

scores : array, shape (n_samples,)

The anomaly score of the input samples. The higher, the more abnormal. Positive scores represent outliers, null scores represent inliers.
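The weighted-sum computation described above can be sketched with plain numpy. The rule precisions and the activation matrix below are illustrative stand-ins, not values produced by a fitted model:

```python
import numpy as np

# One column per rule: activations[i, j] = 1 if rule j fires on sample i.
activations = np.array([[1, 1],   # both rules fire
                        [0, 1],   # only the weaker rule fires
                        [0, 0]])  # no rule fires -> inlier
precisions = np.array([0.9, 0.6])  # precision of each rule (the weights)

# Anomaly score: precision-weighted sum of the binary rule outputs.
scores = activations @ precisions
print(scores)  # [1.5 0.6 0. ]
```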

fit(X, y, sample_weight=None)[source]

Fit the model according to the given training data.

Parameters:

X : array-like, shape (n_samples, n_features)

Training vector, where n_samples is the number of samples and n_features is the number of features.

y : array-like, shape (n_samples,)

Target vector relative to X. Has to follow the convention 0 for normal data, 1 for anomalies.

sample_weight : array-like, shape (n_samples,) optional

Array of weights assigned to individual samples, typically the monetary amount in the case of transaction data. Used to grow regression trees producing further rules to be tested. If not provided, each sample is given unit weight.

Returns:

self : object

Returns self.

predict(X)[source]

Predict if a particular sample is an outlier or not.

Parameters:

X : array-like, shape (n_samples, n_features)

The input samples. Internally, they will be converted to dtype=np.float32.

Returns:

is_outlier : array, shape (n_samples,)

For each observation, tells whether (1) or not (0) it should be considered an outlier according to the selected rules.

predict_top_rules(X, n_rules)[source]

Predict whether a particular sample is an outlier, using the n_rules best performing rules.

Parameters:

X : array-like, shape (n_samples, n_features)

The input samples. Internally, they will be converted to dtype=np.float32.

n_rules : int

The number of rules used for the prediction. If one of the n_rules best performing rules is activated, the prediction is equal to 1.

Returns:

is_outlier : array, shape (n_samples,)

For each observation, tells whether (1) or not (0) it should be considered an outlier according to the selected rules.
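The top-rules prediction can be sketched in numpy: a sample is flagged as soon as one of the n_rules best rules fires. The activation matrix, with columns ordered from best to worst rule, is illustrative:

```python
import numpy as np

n_rules = 2
# Columns ordered from best to worst rule.
activations = np.array([[0, 1, 0, 0],   # the second-best rule fires -> outlier
                        [0, 0, 1, 0],   # only the third-best rule fires -> inlier
                        [0, 0, 0, 0]])  # nothing fires -> inlier
is_outlier = (activations[:, :n_rules].sum(axis=1) > 0).astype(int)
print(is_outlier)  # [1 0 0]
```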

rules_vote(X)[source]

Score representing a vote of the base classifiers (rules).

The score of an input sample is computed as the sum of the binary rules outputs: a score of k means that k rules have voted positively.

Parameters:

X : array-like, shape (n_samples, n_features)

The input samples.

Returns:

scores : array, shape (n_samples,)

The score of the input samples. The higher, the more abnormal. Positive scores represent outliers, null scores represent inliers.
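The vote described above is simply an unweighted row sum of the binary rule outputs; the activation matrix below is illustrative, not produced by a fitted model:

```python
import numpy as np

# activations[i, j] = 1 if rule j fires on sample i.
activations = np.array([[1, 1, 0],
                        [0, 1, 0],
                        [0, 0, 0]])
votes = activations.sum(axis=1)  # a vote of k means k rules fired
print(votes)  # [2 1 0]
```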

score_top_rules(X)[source]

Score representing an ordering between the base classifiers (rules).

The score is high when the instance is detected by a high-performing rule. If there are n rules, ordered by increasing OOB precision, a score of k means that the kth rule has voted positively, but none of the (n - k) more precise rules ranked above it.

Parameters:

X : array-like, shape (n_samples, n_features)

The input samples.

Returns:

scores : array, shape (n_samples,)

The score of the input samples. Positive scores represent outliers, null scores represent inliers.
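Under one plausible reading of the description above, with rules indexed 1..n by increasing OOB precision, a sample's score is the index of the most precise rule that fires on it (0 when no rule fires). A numpy sketch with an illustrative activation matrix:

```python
import numpy as np

n = 3
# Columns indexed 1..n by increasing OOB precision.
activations = np.array([[1, 0, 1],   # the best rule (index 3) fires -> score 3
                        [1, 1, 0],   # rules 1 and 2 fire, not rule 3 -> score 2
                        [0, 0, 0]])  # no rule fires -> inlier, score 0
indices = np.arange(1, n + 1)
scores = (activations * indices).max(axis=1)
print(scores)  # [3 2 0]
```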