Model execution logs
The Execution Logs dialog (opened via ⚙️ > Logs) records every model run across your workspace. It captures metadata such as start and end times, model type, hyperparameters, and test-set metrics, providing a single place to verify, audit, and debug model training at scale. Each log entry includes the following fields:
| Field | Description |
| --- | --- |
| Start time / Finished time | UTC timestamps marking the beginning and end of training. |
| Duration | How long the run took (e.g., 2m37s). |
| Status | `done-ok`, `error`, or `running`; useful for spotting failures quickly. |
| Model name & Model type | Friendly name plus the model class (classification, regression, time-series, etc.). |
| Model code | Unique 12-character hash, required for API calls (`/prediction`, `/fetch-result`, etc.); see the example after this table. |
| Tenant code | Internal workspace identifier (visible to multi-tenant admins). |
| Actionable Insights goal | “Show Value…” link with the text prompt you supplied when enabling AI insights. |
| Model advanced run parameters | All non-default parameters: outlier threshold, collinearity cutoff, imbalance handling, etc. |
| Metrics for test dataset | F1, Accuracy, and AUC for classification; R², MAE, and RMSE for regression/time-series. |
| Trained model hyperparameters | Grid-search results or any user-defined hyperparameter settings. |
| Dataset shape | Rows and columns fed into the trainer after preprocessing. |
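Once a run appears in the logs, its model code can be used against the prediction endpoints. The snippet below is a minimal sketch, not a definitive client: only the `/prediction` and `/fetch-result` paths come from this page, while the base URL, authentication header, payload fields, and the asynchronous job flow are assumptions to check against your API reference.

```python
import requests

# All of the following values are illustrative assumptions; substitute the
# real base URL, credentials, and payload shape from your API reference.
BASE_URL = "https://api.example.com"   # hypothetical endpoint host
API_KEY = "YOUR_API_KEY"               # hypothetical auth token
MODEL_CODE = "a1b2c3d4e5f6"            # 12-character hash from the Model code column

headers = {"Authorization": f"Bearer {API_KEY}"}

# Submit a prediction request against the trained model.
resp = requests.post(
    f"{BASE_URL}/prediction",
    headers=headers,
    json={"model_code": MODEL_CODE, "rows": [{"feature_a": 1.0, "feature_b": "x"}]},
    timeout=30,
)
resp.raise_for_status()
job = resp.json()

# Fetch the finished result; an asynchronous job identifier is assumed here.
result = requests.get(
    f"{BASE_URL}/fetch-result",
    headers=headers,
    params={"model_code": MODEL_CODE, "job_id": job.get("job_id")},
    timeout=30,
)
result.raise_for_status()
print(result.json())
```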
- Use the 🔍 field beneath each column header to search by model name, code, or date.
- Click the funnel icon to filter the view: only errors, a specific model type, or a date range.
- **Confirm completion.** Refresh the logs to make sure today’s run shows `done-ok`.
- **Grab the model code** for API endpoints without opening the model UI.
- **Compare durations** to detect unusually long or short runs, which can hint at data issues (see the sketch after this list).
- **Audit hyperparameters** before sharing results with stakeholders.
- **Investigate failures** (`error` status) and cross-reference them with the advanced run parameters.
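For duration comparisons across many runs, it can help to copy the log rows out and scan them programmatically. Below is a minimal sketch: the 2m37s duration format comes from the table above, while the sample rows and the 3x-median outlier threshold are assumptions.

```python
import re
import statistics

# Hypothetical rows copied from the Execution Logs table: (model name, duration).
rows = [
    ("churn-classifier", "2m37s"),
    ("sales-forecast", "2m41s"),
    ("churn-classifier", "2m29s"),
    ("demand-model", "2m52s"),
    ("churn-classifier", "14m05s"),  # suspiciously long run
]

def to_seconds(duration: str) -> int:
    """Parse a duration string like '2m37s' or '1h2m3s' into seconds."""
    scale = {"h": 3600, "m": 60, "s": 1}
    return sum(int(v) * scale[u] for v, u in re.findall(r"(\d+)([hms])", duration))

median = statistics.median(to_seconds(d) for _, d in rows)

# Flag runs more than 3x the median in either direction (assumed threshold).
for name, duration in rows:
    s = to_seconds(duration)
    if s > 3 * median or s < median / 3:
        print(f"Unusual duration for {name}: {duration}")
```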
- **Refresh first.** Click Refresh Logs after a run to pull the latest status.
- **Export for audit.** Copy rows or take a screenshot before purging old models.
- **Track trends.** Rising durations or frequent errors can indicate growing data size or schema drift.
- **Secure access.** Only Admins can view logs; restrict role permissions if needed.