# Model execution logs

### Overview

The Execution Logs dialog (opened via ⚙️ > Logs) records every model run across your workspace. It captures metadata such as start/end times, model type, hyper-parameters, and test-set metrics, giving you a single place to verify, audit, and debug model training at scale.

<figure><img src="https://3727300098-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FgnR78y9L7FDWeb4jdvdW%2Fuploads%2FCFR7EB49WtBqX3peNhz2%2Fimage.png?alt=media&#x26;token=3c55d31b-a603-44b0-8ea3-1b12bac357ac" alt=""><figcaption></figcaption></figure>

***

### Key Features

| Column                         | What it tells you                                                                           |
| ------------------------------ | ------------------------------------------------------------------------------------------- |
| Start time / Finished time     | UTC timestamps marking the beginning and end of training.                                   |
| Duration                       | How long the run took (e.g., 2m37s).                                                        |
| Status                         | `done-ok`, `error`, or `running`; useful for spotting failures quickly.                     |
| Model name & Model Type        | Friendly name plus classification / regression / time-series, etc.                          |
| Model Code                     | Unique 12-character hash—required for API calls (`/prediction`, `/fetch-result`, etc.); see the sketch after this table. |
| Tenant code                    | Internal workspace identifier (visible for multi-tenant admins).                            |
| Actionable Insights goal       | “Show Value…” link with the text prompt you supplied when enabling AI insights.             |
| Model advanced run parameters  | All non-default parameters—outlier threshold, collinearity cutoff, imbalance handling, etc. |
| Metrics for test dataset       | F1, Accuracy, AUC for classification; R², MAE, RMSE for regression/time-series.             |
| Trained model hyper-parameters | Captures grid-search results or any user-defined hyper-settings.                            |
| Dataset shape                  | Rows and columns fed into the trainer after preprocessing.                                  |
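
The Model Code column holds the identifier the prediction API expects. The sketch below shows roughly how a code copied from the logs might be passed to the `/prediction` endpoint from Python; the base URL, authentication header, and payload field names are assumptions for illustration, so replace them with the values documented for your workspace.

```python
import requests

# All values below are placeholders: swap in the base URL, auth header,
# and payload fields documented for your workspace's API.
BASE_URL = "https://your-workspace.example.com/api"   # assumed
API_KEY = "YOUR_API_KEY"                              # assumed auth scheme
MODEL_CODE = "a1b2c3d4e5f6"  # the 12-character Model Code copied from the logs

response = requests.post(
    f"{BASE_URL}/prediction",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model_code": MODEL_CODE,                       # field name is an assumption
        "data": [{"feature_1": 42, "feature_2": "A"}],  # example payload rows
    },
    timeout=30,
)
response.raise_for_status()
print(response.json())
```

A call to `/fetch-result` would follow the same pattern, with the request body adjusted to that endpoint's documented schema.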

***

### Filters and Search

* Use the 🔍 field beneath each header to search by model name, code, or date.
* Click the funnel icon to show only errors, a specific model type, or a date range.

***

### Typical Workflows

* Confirm completion – Refresh logs to ensure today’s run shows `done-ok`.
* Grab model code for API endpoints without opening the model UI.
* Compare durations to detect unusually long or short runs, hinting at data issues.
* Audit hyper-parameters before sharing results with stakeholders.
* Investigate failures (`error` status) and cross-reference with the advanced run parameters (see the sketch after this list).
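
As a rough illustration of the duration and failure checks above, the sketch below parses durations in the `2m37s` format and flags failed or unusually slow runs. It assumes you have pasted log rows into a CSV whose headers match the column names in the table; the export format and exact header names are assumptions, not a documented feature.

```python
import csv
import re

def parse_duration(text: str) -> int:
    """Convert a duration such as '2m37s' or '1h03m12s' into total seconds."""
    units = {"h": 3600, "m": 60, "s": 1}
    return sum(int(value) * units[unit]
               for value, unit in re.findall(r"(\d+)([hms])", text))

# Assumes log rows were pasted into a CSV with headers matching the table above.
with open("execution_logs.csv", newline="") as f:
    rows = list(csv.DictReader(f))

# Median duration of successful runs, used as a rough baseline.
ok_durations = sorted(parse_duration(r["Duration"]) for r in rows if r["Status"] == "done-ok")
typical = ok_durations[len(ok_durations) // 2] if ok_durations else 0

for r in rows:
    if r["Status"] == "error":
        print(f"FAILED: {r['Model name']} ({r['Model Code']}) at {r['Start time']}")
    elif ok_durations and parse_duration(r["Duration"]) > 2 * typical:
        print(f"SLOW:   {r['Model name']} took {r['Duration']} (typical ~{typical}s)")
```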

***

### Best Practices

* Refresh first – Click Refresh Logs after a run to pull the latest status.
* Export for audit – Copy rows or take a screenshot before purging old models.
* Track trends – Rising durations or frequent errors can indicate growing data size or schema drift (see the sketch below).
* Secure access – Only Admins can view logs; restrict role permissions if needed.
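
One lightweight way to track trends is to bucket exported rows by calendar week and watch error counts and average durations. As in the workflow sketch, the CSV export and the `Start time` timestamp format are assumptions for illustration.

```python
import csv
import re
from collections import defaultdict
from datetime import datetime

def parse_duration(text: str) -> int:
    """Convert a duration such as '2m37s' into total seconds."""
    units = {"h": 3600, "m": 60, "s": 1}
    return sum(int(v) * units[u] for v, u in re.findall(r"(\d+)([hms])", text))

# Same assumed CSV export as above; the "Start time" format is also an assumption.
weekly = defaultdict(lambda: {"runs": 0, "errors": 0, "seconds": 0})

with open("execution_logs.csv", newline="") as f:
    for row in csv.DictReader(f):
        started = datetime.strptime(row["Start time"], "%Y-%m-%d %H:%M:%S")
        bucket = started.strftime("%Y-W%W")   # calendar-week bucket, e.g. '2024-W07'
        weekly[bucket]["runs"] += 1
        if row["Status"] == "error":
            weekly[bucket]["errors"] += 1
        else:
            weekly[bucket]["seconds"] += parse_duration(row["Duration"])

for week in sorted(weekly):
    stats = weekly[week]
    finished = stats["runs"] - stats["errors"]
    avg = stats["seconds"] / finished if finished else 0
    print(f"{week}: {stats['runs']} runs, {stats['errors']} errors, avg {avg:.0f}s")
```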
