Akkio FAQ
Frequently Asked Questions about the platform
This section contains common questions we have come across.
Yes, actions purchased by an account owner are used by the teams they own. However, users on your team who also own a separate team will not use your actions for that other team.
No.
Trial accounts automatically convert to free accounts and will not charge you unless you choose to upgrade. You can also delete your account and data at any time from the Account page, accessible from the cog on the home page of the app.
We suggest downsampling the majority class to 10% or even 2% of its original size, so Akkio can learn what causes the model to predict the minority class.
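For users who prefer to rebalance a dataset before uploading, the idea can be sketched in plain Python. The `downsample_majority` helper and the churn labels below are illustrative, not part of Akkio:

```python
import random

def downsample_majority(rows, label_key, majority_label, keep_fraction, seed=0):
    """Keep only a fraction of the majority-class rows so the
    minority class makes up a larger share of the training data."""
    rng = random.Random(seed)
    majority = [r for r in rows if r[label_key] == majority_label]
    minority = [r for r in rows if r[label_key] != majority_label]
    kept = rng.sample(majority, int(len(majority) * keep_fraction))
    return kept + minority

# 1,000 "no-churn" rows vs. 50 "churn" rows: heavily imbalanced.
data = [{"label": "no-churn"}] * 1000 + [{"label": "churn"}] * 50
balanced = downsample_majority(data, "label", "no-churn", 0.10)
# The majority is reduced to 100 rows, so the minority is now a third of the data.
```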
No, Akkio doesn’t have the ability to exclude prediction outcomes based on input words, nor to let users select exclusion words.
Akkio’s algorithms look at 256 features of text (e.g. words, word order, length). Akkio focuses on learning the user’s business language.
The baseline is Akkio guessing (predicting) the most frequent class in a dataset. The comparison to baseline shows how Akkio’s selected model performs relative to that naive strategy (e.g. 5.6x better than baseline means the model was 5.6 times better than always predicting the most frequent class).
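The arithmetic behind the baseline comparison can be sketched as follows; the function names are ours, not Akkio's:

```python
from collections import Counter

def baseline_accuracy(labels):
    """Accuracy of always predicting the most frequent class."""
    top_count = Counter(labels).most_common(1)[0][1]
    return top_count / len(labels)

def lift_over_baseline(model_accuracy, labels):
    """How many times better the model is than the naive baseline."""
    return model_accuracy / baseline_accuracy(labels)

labels = ["a"] * 40 + ["b"] * 30 + ["c"] * 30
# Baseline: always guess "a" -> 40% accurate.
# A model at 90% accuracy is 0.90 / 0.40 = 2.25x better than baseline.
```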
We don’t use any Bayes or Naive Bayes models. We use Neural Networks, Random Forests, Linear and Logistic Regression, among other models.
No, Akkio uses model architectures that remove the need for it.
We do not currently support that but it is a roadmap feature.
We treat them as null fields; if there were matching null fields in the training set, we look for patterns there.
Akkio is robust to missing data and will tell a user how accurate the model is with missing data.
A user can improve their model’s performance by providing more data or doing data cleaning/imputation with Chat Explore.
No. While the tool is evolving and there will be limitations on its understanding, it is not case-sensitive.
As of now, no. The best workaround is to merge the data in Akkio, download the merged dataset, re-upload it, and then run Chat Explore on it.
In the future, merge will be part of data prep, and then Chat Explore will work on merged datasets.
Yes, you can deploy a data prep workflow back into the integration it came from.
Yes, all shareable content can be white-labeled on plans that allow white-labeling.
Akkio doesn’t remove multicollinearity beforehand but addresses it in the modeling step by trying a variety of models that differ in their sensitivity to multicollinearity.
Akkio uses k-fold cross validation to avoid model overfitting.
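Akkio's internal implementation isn't exposed, but the general k-fold idea can be sketched like this; all names below are illustrative:

```python
import random

def k_fold_indices(n, k, seed=0):
    """Shuffle row indices, then deal them into k roughly equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validate(rows, k, train_fn, score_fn):
    """Train on k-1 folds, score on the held-out fold, and average the
    k held-out scores: an estimate of out-of-sample performance that
    exposes overfitting before a model is finalized."""
    folds = k_fold_indices(len(rows), k)
    scores = []
    for held_out in folds:
        held = set(held_out)
        train = [rows[j] for j in range(len(rows)) if j not in held]
        test = [rows[j] for j in held_out]
        model = train_fn(train)
        scores.append(score_fn(model, test))
    return sum(scores) / k
```

Because every row is held out exactly once, a model that merely memorizes the training folds scores poorly on the folds it never saw.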
This is an upcoming feature but is not currently supported.
Depending on the data distribution, Akkio might apply a log transform.
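As a rough illustration of why a log transform helps with right-skewed targets: `log1p` compresses large values, and `expm1` inverts the transform so predictions can be reported on the original scale. This is a generic sketch, not Akkio's internal code:

```python
import math

def log_transform(values):
    """Compress a right-skewed target with log1p (handles zeros safely)."""
    return [math.log1p(v) for v in values]

def inverse_log_transform(values):
    """Map predictions back to the original scale with expm1."""
    return [math.expm1(v) for v in values]

# A heavily skewed target becomes much more evenly spread after the transform:
prices = [10, 100, 1000, 100000]
transformed = log_transform(prices)
```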
For time series modeling, does Akkio show which Time Series algorithm was selected?
At present we don't show this; however, it is on our roadmap.
Can a model be tweaked where a regressor is added? Can the model be configured by the user?
This is something our Engineering team is discussing. From our understanding, they don't think this would be too hard to implement.
How does Akkio determine Top Fields?
Top Fields are determined by measuring how much the predicted value changes as a field's (column's) values change, similar to Permutation Importance.
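The permutation-importance idea the answer refers to can be sketched generically; the toy model and helper below are illustrative, not Akkio's implementation:

```python
import random

def permutation_importance(predict, X, y, column, seed=0):
    """Shuffle one column's values across rows and measure how much
    accuracy drops; a larger drop means a more influential field."""
    def accuracy(rows):
        return sum(predict(r) == target for r, target in zip(rows, y)) / len(y)
    baseline = accuracy(X)
    shuffled = [row[:] for row in X]
    col_values = [row[column] for row in shuffled]
    random.Random(seed).shuffle(col_values)
    for row, value in zip(shuffled, col_values):
        row[column] = value
    return baseline - accuracy(shuffled)

# Toy model: the prediction is just column 0, so column 1 is irrelevant.
X = [[i, 0] for i in range(20)]
y = list(range(20))
predict = lambda row: row[0]
```

Shuffling the irrelevant column changes nothing, while shuffling the column the model depends on destroys its accuracy.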
No. If a user is having trouble connecting with one of the pre-built integrations, they might not have given sufficient permissions, or there may be an authorization error.
Yes, data is moved into Akkio and stored natively.
The API can be used to connect to other systems. We are also always working to expand our native integrations, so we encourage you to reach out to support with requests for new ones.
10 million rows.
No. As noted above, you can use a free Akkio trial with integrations, but the free Snowflake demo does not function with Akkio.
Yes, Akkio encrypts data at rest and in flight.
No, however we are SOC 2 Type II compliant.
Akkio sends the dataset's shape, column names, data types, and typical values for each column to GPT. Specifically, we use the commercial API and opt out of data sharing. Details on the OpenAI API policy can be found on their website.
Yes, we do, and we inherit Amazon's and Google's security.
This is coming very soon; if this is a requirement for your business, please reach out to support.
We use several modeling methods, including Neural Networks, Random Forests, and Decision Trees. They can be described as follows:
Neural networks model complex input-target relationships using linear and non-linear transformations, optimized by gradient descent.
Random forests use bagging and feature randomness to combine the outputs of multiple decision trees for higher accuracy and reduced overfitting.
Decision trees recursively split input data based on feature values, aiming for homogeneous target variable subsets, determined by techniques like entropy, Gini impurity, or information gain.
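As an illustration of the splitting criteria mentioned above, Gini impurity and the resulting split gain can be computed like this (a generic sketch, not Akkio's code):

```python
from collections import Counter

def gini_impurity(labels):
    """Gini impurity: probability that two randomly drawn labels differ.
    0.0 means a perfectly homogeneous subset."""
    n = len(labels)
    return 1.0 - sum((count / n) ** 2 for count in Counter(labels).values())

def split_gain(parent, left, right):
    """Impurity reduction from splitting `parent` into two subsets;
    a tree picks the split with the highest gain."""
    n = len(parent)
    weighted = (len(left) / n) * gini_impurity(left) \
             + (len(right) / n) * gini_impurity(right)
    return gini_impurity(parent) - weighted
```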
Different algorithms make varying assumptions about the data distribution. Non-parametric models like decision trees and random forests make fewer assumptions, while neural networks assume differentiability in input-target relationships. Though statistical significance isn't directly evaluated, performance metrics like accuracy, precision, recall, and F1 score can be used to assess a model's effectiveness.
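The performance metrics named above follow directly from confusion-matrix counts; a minimal sketch for the binary case:

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1 from confusion-matrix counts
    (true/false positives and negatives) for a binary classifier."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)   # of predicted positives, how many were right
    recall = tp / (tp + fn)      # of actual positives, how many were found
    f1 = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

m = classification_metrics(tp=40, fp=10, fn=20, tn=30)
# accuracy 0.70, precision 0.80, recall ~0.667, F1 ~0.727
```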
Multicollinearity is addressed within the platform to help remove redundant features and improve model performance.
Singularity, often caused by a high degree of correlation between features or perfect collinearity, can be resolved by removing one of the collinear features.
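The pairwise-correlation approach to resolving singularity can be sketched as follows; this is a generic illustration with our own helper names, not Akkio's internal logic:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length columns."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def drop_collinear(columns, threshold=0.95):
    """Given {name: values}, drop the later column of any pair whose
    absolute correlation exceeds the threshold."""
    names = list(columns)
    dropped = set()
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if a in dropped or b in dropped:
                continue
            if abs(pearson(columns[a], columns[b])) > threshold:
                dropped.add(b)
    return [n for n in names if n not in dropped]
```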
We are generally robust to outliers, but if necessary, they can be removed with chat data prep or the soon-to-be-launched data cleaning tool. Some models, like decision trees and random forests, are less sensitive to outliers than others.
We provide insight into the driving factors for all models as part of the model creation process. We call this the insights report, and it works to make the model decision-making more transparent.
The goal of our platform is to provide ML capabilities without the need for code, so the transparency comes through these reports in digestible form. More detail can be found by drilling into the advanced sections of the report.
Five requests per second. However, these requests can be bulk calls, which makes API handling more case-by-case. Feel free to reach out to support about your specific use case.
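If you want to stay under the limit from your own client code, a simple client-side throttle can be sketched like this; the `RateLimiter` class is illustrative and not part of any Akkio SDK:

```python
import time

class RateLimiter:
    """Client-side throttle: never let more than max_calls start
    within any rolling window of `period` seconds."""
    def __init__(self, max_calls=5, period=1.0):
        self.max_calls = max_calls
        self.period = period
        self.calls = []  # start times of recent calls

    def wait(self):
        while True:
            now = time.monotonic()
            # Keep only the calls still inside the rolling window.
            self.calls = [t for t in self.calls if now - t < self.period]
            if len(self.calls) < self.max_calls:
                break
            # Sleep until the oldest call ages out of the window.
            time.sleep(self.period - (now - self.calls[0]))
        self.calls.append(time.monotonic())

# Usage: call limiter.wait() before each API request.
```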