Issues are a tool for calling attention to your data problems and taking action on them. An Issue is generated when a metric has an alert.

You can view your issues by selecting the Issues tab from the top of the Bigeye homepage. The default view is by table, shown below. The Open and Closed counts in the table view are table counts. You can click on the table name to drill into the issues on that table.


You can click on View by to toggle to a list of individual issues. The Open and Closed counts in the issue view are issue counts.


To help you address your most important data problems first, Issues now have a priority score from 1 to 100, where 100 is the highest possible priority. The Issues view is sorted by priority by default, though you can also sort by created date. Priorities are broken into three categories:


You can see an Issue’s priority score by hovering your mouse over the priority icon:


Priority categorization (high/medium/low) is based on alerts across all Bigeye workspaces. If most of your issues are 'high' priority, those alerts are among the most anomalous of any workspace. If most of your issues are 'low' priority, give yourself a pat on the back, because your data is among the most stable of all users!

Priority scores (1-100) are currently based on an alert's severity. Severity measures how far the metric's actual value is from the expected (predicted) value. The more anomalous the metric value, the higher the severity, and thus the priority. Because severity values can range from 0 to very large numbers, we normalize severity to a 1-100 priority score for simplicity. In the future, priority scores will also consider other factors (such as how popular a table is or whether it is covered by an SLA), and we'd love to hear your feedback on what you'd like to see included.
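Bigeye's exact normalization formula isn't published, but conceptually it squashes an unbounded severity value into the bounded 1-100 range. A minimal sketch of one way to do this, using a hypothetical logarithmic scaling and an assumed `cap` constant (not Bigeye's actual implementation):

```python
import math

def priority_score(severity: float, cap: float = 1000.0) -> int:
    """Map an unbounded severity (>= 0) to a 1-100 priority score.

    Hypothetical log-based normalization; `cap` is an assumed tuning
    constant above which every severity maps to 100.
    """
    # Clamp severity into [0, cap] so extreme values don't overflow the scale.
    s = max(0.0, min(severity, cap))
    # Logarithmic scaling keeps small severities distinguishable while
    # compressing very large ones toward the top of the range.
    return max(1, round(100 * math.log1p(s) / math.log1p(cap)))
```

A log scale is a natural choice here because severities near zero are common and should spread across the low scores, while rare, extreme severities should all land near 100 rather than dominating the scale linearly.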

When an Issue has more than one alert, and those alerts have varying severity values, the priority is based on the highest severity score among all the alerts.

The Priority of a table is equal to the highest issue priority on the table.
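Both rules above are simple max-aggregations: an issue inherits the severity of its worst alert, and a table inherits the priority of its worst issue. A sketch with hypothetical data shapes (not Bigeye's API):

```python
from dataclasses import dataclass

@dataclass
class Alert:
    severity: float

@dataclass
class Issue:
    alerts: list  # list[Alert]

    @property
    def priority(self) -> float:
        # An issue's priority is driven by its most severe alert.
        return max(a.severity for a in self.alerts)

def table_priority(issues: list) -> float:
    # A table's priority equals the highest issue priority on the table.
    return max(i.priority for i in issues)
```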

Clicking into an individual issue shows a time series visualization of the underlying alerting metric, as well as a timeline of the issue.


Giving feedback to the Autothresholds model

Once you have reviewed the issue, you can either Acknowledge or Close it. Acknowledging an issue mutes it for 24 hours and indicates to others on your team that you are investigating the root cause.

If you choose to Close the issue, you will be asked to give some feedback, which will help your Autothresholds adapt more precisely to your data. The choices are:

  • Good alert (Metric was anomalous and you were correctly alerted). If the metric is an Autometric, you will be asked whether you would like to adapt the thresholds with the metric value.

    • Select Adapt thresholds with data if the alert represents a fundamental change in your data, will now be the new normal, and you would like the model to adjust to it.
    • Select Do not adapt thresholds with data if the alert represents a one-off anomaly. Because Autometric thresholds are trained from existing data, you'll want to remove that broken data to prevent biasing the training model.
  • Bad alert (Metric was not anomalous and you were incorrectly alerted). If the metric is an Autometric, the model will always adapt (and widen) the thresholds with data.

Note: Alert and Training Feedback are required fields to close the Issue.