[role="xpack"]
[[ml-configuring-detector-custom-rules]]
=== Customizing detectors with rules and filters

<<ml-rules,Rules and filters>> enable you to change the behavior of anomaly
detectors based on domain-specific knowledge.

Rules describe _when_ a detector should take a certain _action_ instead
of following its default behavior. To specify the _when_, a rule uses
a `scope` and `conditions`. You can think of `scope` as the categorical
specification of a rule, while `conditions` are the numerical part.
A rule can have a scope, one or more conditions, or a combination of
scope and conditions.
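
For instance, stripped down to just the `custom_rules` part of a detector, a
rule that combines scope and conditions could look like the following sketch
(the field, filter, and threshold here are hypothetical placeholders):

[source,js]
----------------------------------
"custom_rules": [{
  "actions": ["skip_result"],
  "scope": {
    "my_field": {"filter_id": "my_filter"}
  },
  "conditions": [{
    "applies_to": "actual",
    "operator": "lt",
    "value": 10.0
  }]
}]
----------------------------------
// NOTCONSOLE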

Let us see how rules can be configured through the following examples.

==== Specifying rule scope

Let us assume we are configuring a job in order to detect DNS data
exfiltration. Our data contain the fields `subdomain` and
`highest_registered_domain`. We can use a detector that looks like
`high_info_content(subdomain) over highest_registered_domain`. If we run such
a job, it is possible that we discover a lot of anomalies on frequently used
domains that we have reasons to trust. As security analysts, we are not
interested in such anomalies. Ideally, we could instruct the detector to skip
results for domains that we consider safe. Using a rule with a scope allows
us to achieve this.

First, we need to create a list of our safe domains. Such lists are called
`filters` in {ml}. Filters can be shared across jobs.

We create our filter using the {ref}/ml-put-filter.html[put filter API]:

[source,js]
----------------------------------
PUT _xpack/ml/filters/safe_domains
{
  "description": "Our list of safe domains",
  "items": ["safe.com", "trusted.com"]
}
----------------------------------
// CONSOLE

Now, we can create our job, specifying a scope that uses the `safe_domains`
filter for the `highest_registered_domain` field:

[source,js]
----------------------------------
PUT _xpack/ml/anomaly_detectors/dns_exfiltration_with_rule
{
  "analysis_config" : {
    "bucket_span":"5m",
    "detectors" :[{
      "function":"high_info_content",
      "field_name": "subdomain",
      "over_field_name": "highest_registered_domain",
      "custom_rules": [{
        "actions": ["skip_result"],
        "scope": {
          "highest_registered_domain": {
            "filter_id": "safe_domains",
            "filter_type": "include"
          }
        }
      }]
    }]
  },
  "data_description" : {
    "time_field":"timestamp"
  }
}
----------------------------------
// CONSOLE

As time advances and we see more data and more results, we might encounter new
domains that we want to add to the filter. We can do that by using the
{ref}/ml-update-filter.html[update filter API]:

[source,js]
----------------------------------
POST _xpack/ml/filters/safe_domains/_update
{
  "add_items": ["another-safe.com"]
}
----------------------------------
// CONSOLE
// TEST[setup:ml_filter_safe_domains]
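
The update filter API can also remove items from a filter. For example, if we
later decide a domain is no longer safe (the domain here is a hypothetical
placeholder), we can remove it:

[source,js]
----------------------------------
POST _xpack/ml/filters/safe_domains/_update
{
  "remove_items": ["no-longer-safe.com"]
}
----------------------------------
// CONSOLE
// TEST[setup:ml_filter_safe_domains]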

Note that we can provide scope for any of the `partition`, `over`, or `by`
fields. In the following example, we scope multiple fields:

[source,js]
----------------------------------
PUT _xpack/ml/anomaly_detectors/scoping_multiple_fields
{
  "analysis_config" : {
    "bucket_span":"5m",
    "detectors" :[{
      "function":"count",
      "partition_field_name": "my_partition",
      "over_field_name": "my_over",
      "by_field_name": "my_by",
      "custom_rules": [{
        "actions": ["skip_result"],
        "scope": {
          "my_partition": {
            "filter_id": "filter_1"
          },
          "my_over": {
            "filter_id": "filter_2"
          },
          "my_by": {
            "filter_id": "filter_3"
          }
        }
      }]
    }]
  },
  "data_description" : {
    "time_field":"timestamp"
  }
}
----------------------------------
// CONSOLE

Such a detector will skip results when the values of all three scoped fields
are included in the referenced filters.
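
Note that `filter_type` defaults to `include`. Setting it to `exclude`
inverts the match, so the rule applies when a value is _not_ in the filter.
For example, if we only wanted results for a specific set of domains, a scope
along these lines (the filter name is a hypothetical placeholder) would skip
results for every other domain:

[source,js]
----------------------------------
"scope": {
  "highest_registered_domain": {
    "filter_id": "domains_of_interest",
    "filter_type": "exclude"
  }
}
----------------------------------
// NOTCONSOLE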

==== Specifying rule conditions

Imagine a detector that looks for anomalies in CPU utilization.
Given a machine that is idle for long enough, a small movement in CPU
utilization could result in anomalous results where the `actual` value is
quite small, for example, 0.02. Given our knowledge about how CPU utilization
behaves, we might determine that anomalies with such small actual values are
not interesting for investigation.

Let us now configure a job with a rule that will skip results where CPU
utilization is less than 0.20.

[source,js]
----------------------------------
PUT _xpack/ml/anomaly_detectors/cpu_with_rule
{
  "analysis_config" : {
    "bucket_span":"5m",
    "detectors" :[{
      "function":"high_mean",
      "field_name": "cpu_utilization",
      "custom_rules": [{
        "actions": ["skip_result"],
        "conditions": [
          {
            "applies_to": "actual",
            "operator": "lt",
            "value": 0.20
          }
        ]
      }]
    }]
  },
  "data_description" : {
    "time_field":"timestamp"
  }
}
----------------------------------
// CONSOLE

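In this example, the condition applies to the `actual` value of a result.
Conditions can also apply to other result properties, such as `typical` or
`diff_from_typical`. For instance, a condition like the following sketch (the
threshold is a hypothetical value) matches results where the absolute
difference between the actual and typical values is less than 5.0:

[source,js]
----------------------------------
"conditions": [{
  "applies_to": "diff_from_typical",
  "operator": "lt",
  "value": 5.0
}]
----------------------------------
// NOTCONSOLE
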
When there are multiple conditions, they are combined with a logical `and`.
This is useful when we want the rule to apply to a range. We simply create
a rule with two conditions, one for each end of the desired range.

Here is an example where a count detector will skip results when the count
is greater than 30 and less than 50:

[source,js]
----------------------------------
PUT _xpack/ml/anomaly_detectors/rule_with_range
{
  "analysis_config" : {
    "bucket_span":"5m",
    "detectors" :[{
      "function":"count",
      "custom_rules": [{
        "actions": ["skip_result"],
        "conditions": [
          {
            "applies_to": "actual",
            "operator": "gt",
            "value": 30
          },
          {
            "applies_to": "actual",
            "operator": "lt",
            "value": 50
          }
        ]
      }]
    }]
  },
  "data_description" : {
    "time_field":"timestamp"
  }
}
----------------------------------
// CONSOLE

==== Rules in the life-cycle of a job

Rules only affect results created after the rules were applied.
Let us imagine that we have configured a job and it has been running
for some time. After observing its results, we decide that we can employ
rules to get rid of some uninteresting results. We can use the
{ref}/ml-update-job.html[update job API] to do so. However, the rule we add
will only be in effect for results created from that moment onwards. Past
results will remain unaffected.
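
For example, a request along these lines (the job name and the rule itself
are hypothetical placeholders) would add a rule to the first detector of an
existing job:

[source,js]
----------------------------------
POST _xpack/ml/anomaly_detectors/our_existing_job/_update
{
  "detectors": [{
    "detector_index": 0,
    "custom_rules": [{
      "actions": ["skip_result"],
      "conditions": [{
        "applies_to": "actual",
        "operator": "lt",
        "value": 0.20
      }]
    }]
  }]
}
----------------------------------
// NOTCONSOLE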

==== Using rules vs. filtering data

It might appear that using rules is just another way of filtering the data
that feeds into a job. For example, a rule that skips results when the
partition field value is in a filter sounds equivalent to having a query
that filters out such documents. But it is not. There is a fundamental
difference. When the data is filtered before reaching a job, it is as if it
never existed for the job. With rules, the data still reaches the job and
affects its behavior (depending on the rule actions).

For example, a rule with the `skip_result` action means all data will still
be modeled. On the other hand, a rule with the `skip_model_update` action
means results will still be created even though the model will not be updated
by data matched by the rule.
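
The two actions can also be combined in a single rule. As a minimal sketch,
assuming a filter of known noisy values exists, the following would both hide
results for the scoped values and keep that data out of the model:

[source,js]
----------------------------------
"custom_rules": [{
  "actions": ["skip_result", "skip_model_update"],
  "scope": {
    "my_partition": {"filter_id": "known_noisy_values"}
  }
}]
----------------------------------
// NOTCONSOLE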