Insights Report - Classification
Statistical Report of Model Training Performance (Standard Predictive)
After training a model, you will want to review its performance; the insights report provides this information. This section describes the Classification report. Other sections describe how the regression and forecasting reports differ.
This section reviews how well the model performed against held-out data. During training, the model uses 80% of your data for training and reserves the remaining 20% as a validation set to test against. In this case, the model correctly predicted 95% of that held-out data. You can retrain the model with different settings and training speeds to see how that changes the accuracy.
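To make the 80/20 split and the accuracy figure concrete, here is a minimal sketch in Python. The file name, the stroke column, and the scikit-learn model are assumptions for illustration; Akkio performs the split and model training internally.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Hypothetical dataset; assumes numeric feature columns and a binary "stroke" label.
df = pd.read_csv("patients.csv")
X, y = df.drop(columns=["stroke"]), df["stroke"]

# Hold out 20% of the rows as the validation ("reserve") set.
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The report's headline accuracy is measured on the held-out 20%.
print(f"Validation accuracy: {accuracy_score(y_val, model.predict(X_val)):.2%}")
```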
You can also select 'See accuracy details' to drill down into specifics:
Depending on your use case, false positives and false negatives can carry very different costs. This drill-down into how the model performed on those two groupings can give you insights that will be valuable when you put the model to use. In this case, we have a common situation: significantly more false negatives than false positives. Since this model predicts a health concern, it may be worth tweaking the decision thresholds to be more conservative and err on the side of false positives, but we will explore that in the Decision Threshold Graph section.
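The false positive and false negative counts come straight from a confusion matrix. A minimal sketch, continuing with the assumed model and validation set from the example above:

```python
from sklearn.metrics import confusion_matrix

# For binary labels, confusion_matrix returns [[tn, fp], [fn, tp]].
tn, fp, fn, tp = confusion_matrix(y_val, model.predict(X_val)).ravel()
print(f"False positives: {fp}, false negatives: {fn}")
```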
This is how well the model performs at predicting each outcome. (A short sketch computing these metrics from the confusion matrix follows the list below.)
Accuracy - measures how often a prediction is correct. It is calculated by dividing the number of correct predictions by the total number of predictions.
Precision - is the fraction of true positives out of the predicted positives. This is useful to consider when the cost of a false positive is high, such as in email spam detection. Higher is better.
Recall - is the fraction of actual positives your model captures. This is useful to consider when the cost of a false negative is high, such as in cancer prediction. Higher is better.
F1 Score - is the harmonic mean of precision and recall, balancing false positives and false negatives in a single metric. This is useful for comparing different ML models that predict the same outcome. Higher is better.
Count - is the number of times this outcome appears in the validation set.
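As noted before the list, each of these metrics derives from the four confusion-matrix counts. A minimal sketch, reusing the tn, fp, fn, tp values computed earlier:

```python
accuracy  = (tp + tn) / (tp + tn + fp + fn)                # fraction of all predictions that are correct
precision = tp / (tp + fp)                                 # of the predicted positives, how many are real
recall    = tp / (tp + fn)                                 # of the actual positives, how many were caught
f1        = 2 * precision * recall / (precision + recall)  # harmonic mean of precision and recall
print(f"accuracy={accuracy:.3f} precision={precision:.3f} recall={recall:.3f} f1={f1:.3f}")
```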
Click on 'Show Advanced Model Details.'
Akkio tests several models for each training and only returns the best-performing model for the data. This section details how that model performed over time and provides information on what type of model it is. In this case, we used a Deep Neural Network with Attention.
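Akkio's model search is internal, but the general idea of training several candidates and keeping the one with the best validation score can be sketched as follows. The candidate list is an assumption for illustration, reusing the split from the earlier sketch:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

# Illustrative candidates only; Akkio's actual search space is not public.
candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=100),
    "neural_network": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500),
}

# Train each candidate and score it on the held-out validation set.
scores = {name: m.fit(X_train, y_train).score(X_val, y_val) for name, m in candidates.items()}
best = max(scores, key=scores.get)
print(f"Best model: {best} (validation accuracy {scores[best]:.2%})")
```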
This section shows which fields contributed the most to determining the likelihood of the requested outcome, in this case, a stroke. Here you can see that average glucose level accounted for almost a quarter of the factors determining whether a stroke occurred. This should track with existing domain knowledge and pass the 'sniff test.' On the right of this section, you can also see how that field affected the outcome; in this case, higher average glucose levels led to more strokes.
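Akkio computes these contributions internally; one common, model-agnostic way to approximate field importance is permutation importance, which shuffles one field at a time and measures how much validation accuracy drops. A sketch using the assumed variables from earlier:

```python
from sklearn.inspection import permutation_importance

# Bigger accuracy drops when a field is shuffled mean that field mattered more.
result = permutation_importance(model, X_val, y_val, n_repeats=10, random_state=42)
for name, score in sorted(zip(X_val.columns, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```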
Similar to Top Fields, Top Factors shows which values within specific fields most often led to the outcome (stroke). This will look similar to the fields data but now targets the information in each field. As you can see here, an age between 65 and 82 has more impact than a blood glucose level between 123.94 and 271.74, even though the blood glucose field contributes more overall.
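One simple way to check a factor like this yourself is to compare the outcome rate inside a value range against the overall rate. A minimal sketch using the ranges from the example; the column names are assumptions:

```python
# Outcome rate within each value range versus the dataset as a whole.
overall = df["stroke"].mean()
age_rate = df.loc[df["age"].between(65, 82), "stroke"].mean()
glucose_rate = df.loc[df["avg_glucose_level"].between(123.94, 271.74), "stroke"].mean()
print(f"age 65-82: {age_rate:.2%} vs overall {overall:.2%}")
print(f"glucose 123.94-271.74: {glucose_rate:.2%} vs overall {overall:.2%}")
```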
Segments break the data up into similar groupings based on outcomes. A group of patients with a high risk of stroke is shown here and displays the values they have in common. Older patients with high glucose and BMI form this high-risk group.
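A segment like this one can be reproduced as a simple filter. The thresholds below are assumptions for illustration; Akkio discovers segment boundaries automatically:

```python
# Hypothetical boundaries for the high-risk segment described above.
high_risk = df[(df["age"] > 65) & (df["avg_glucose_level"] > 150) & (df["bmi"] > 30)]
print(f"{len(high_risk)} patients in segment, stroke rate {high_risk['stroke'].mean():.2%}")
```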
The Decision Threshold Graph allows users to visualize their data by breaking it down into categories of Unlikely, Uncertain, and Likely results. By adjusting sliders, users can observe how changing the threshold affects the percentage distribution across these categories. Here's a breakdown of the key components in the Decision Threshold Graph:
Density - Indicates the rate at which the outcome of interest occurs within a specific group compared to the overall dataset. Density provides insights into how concentrated or dispersed the relevant outcomes are within that group.
Group Size - Represents the number of rows within a particular group as a percentage of the total dataset. Understanding the group size relative to the whole dataset helps in assessing the impact and significance of that specific group on the overall analysis.
Sample rows - shows rows of your data sorted by the probability of the outcome of interest. By adjusting the slider, users can explore rows with varying levels of likelihood, providing a practical way to analyze how the data is distributed across probabilities of the outcome.
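To make density and group size concrete, here is a minimal sketch that buckets validation rows into Unlikely, Uncertain, and Likely using two assumed threshold positions, then computes both statistics for each bucket:

```python
# Predicted probability of the positive outcome for each validation row.
proba = model.predict_proba(X_val)[:, 1]

# Slider positions are assumptions; the report lets you adjust them interactively.
low, high = 0.25, 0.75
buckets = {
    "Unlikely":  proba < low,
    "Uncertain": (proba >= low) & (proba < high),
    "Likely":    proba >= high,
}

overall_rate = y_val.mean()
for name, mask in buckets.items():
    group_size = mask.mean()                     # share of all rows in this bucket
    density = y_val[mask].mean() / overall_rate  # outcome rate relative to the whole dataset
    print(f"{name}: group size {group_size:.1%}, density {density:.2f}x")
```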