Model execution logs


Overview

The Execution Logs dialog (opened via ⚙️ > Logs) records every model run across your workspace. It captures metadata such as start and end times, model type, hyper-parameters, and test-set metrics, giving you a single place to verify, audit, and debug model training at scale.


Key Features

Each log entry exposes the following columns:

  • Start time / Finished time: UTC timestamps marking the beginning and end of training.

  • Duration: How long the run took (e.g., 2m37s).

  • Status: done-ok, error, or running; useful for spotting failures quickly.

  • Model name & Model Type: Friendly name plus classification / regression / time-series, etc.

  • Model Code: Unique 12-character hash, required for API calls (/prediction, /fetch-result, etc.); see the sketch after this list.

  • Tenant code: Internal workspace identifier (visible for multi-tenant admins).

  • Actionable Insights goal: “Show Value…” link with the text prompt you supplied when enabling AI insights.

  • Model advanced run parameters: All non-default parameters, such as the outlier threshold, collinearity cutoff, and imbalance handling.

  • Metrics for test dataset: F1, Accuracy, AUC for classification; R², MAE, RMSE for regression/time-series.

  • Trained model hyper-parameters: Grid-search results or any user-defined hyper-settings.

  • Dataset shape: Rows and columns fed into the trainer after preprocessing.
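
The Model Code shown here is what the REST endpoints expect. As a rough illustration, the sketch below posts a prediction request using a code copied straight from the logs; the base URL, header name, and body fields are placeholders rather than the documented contract, so check the Prediction API pages for the exact request format.

```python
import requests

# Placeholders: the base URL, auth header, and body fields below are illustrative only.
# See the Prediction API section for the documented request format.
BASE_URL = "https://your-graphite-note-instance.example.com/api"
API_KEY = "your-api-key"

# 12-character Model Code copied from the Model Code column of the execution logs.
MODEL_CODE = "a1b2c3d4e5f6"

response = requests.post(
    f"{BASE_URL}/prediction",
    headers={"X-API-KEY": API_KEY},          # assumed header name
    json={
        "model_code": MODEL_CODE,            # assumed field name
        "data": [{"feature_1": 42, "feature_2": "new"}],  # rows to score
    },
    timeout=30,
)
response.raise_for_status()
print(response.json())
```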


Filters and Search

  • Use the 🔍 field beneath each header to search by model name, code, or date.

  • Click the funnel icon to show only errors, a specific model type, or a date range.


Typical Workflows

  • Confirm completion – Refresh logs to ensure today’s run shows done-ok.

  • Grab model code for API endpoints without opening the model UI.

  • Compare durations to detect unusually long or short runs, which can hint at data issues (see the sketch after this list).

  • Audit hyper-parameters before sharing results with stakeholders.

  • Investigate failures (error status) and cross-reference with advanced parameters.
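
Duration comparisons have to happen outside the dialog, since there is no built-in alerting on run time. A minimal sketch, assuming you have copied a handful of values from the Duration column by hand; the run names and the 2x-median threshold are illustrative, not part of the product.

```python
import re
from statistics import median

# Hypothetical values transcribed from the Duration column of the execution logs.
durations = {
    "Churn run #12": "2m37s",
    "Churn run #13": "2m41s",
    "Churn run #14": "9m02s",
}

def to_seconds(text: str) -> int:
    """Convert a duration string such as '2m37s' or '1h03m12s' into seconds."""
    factors = {"h": 3600, "m": 60, "s": 1}
    return sum(int(value) * factors[unit] for value, unit in re.findall(r"(\d+)([hms])", text))

seconds = {run: to_seconds(text) for run, text in durations.items()}
typical = median(seconds.values())

for run, secs in seconds.items():
    # Flag runs far from the typical duration; these often point at data issues.
    if secs > 2 * typical or secs < typical / 2:
        print(f"{run}: {secs}s vs. a typical {typical:.0f}s, check the dataset shape and parameters")
```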


Best Practices

  • Refresh first – Click Refresh Logs after a run to pull the latest status.

  • Export for audit – Copy rows or take a screenshot before purging old models (see the sketch below).

  • Track trends – Rising durations or frequent errors can indicate growing data size or schema drift.

  • Secure access – Only Admins can view logs; restrict role permissions if needed.
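
The dialog itself has no export button, so an audit copy is a manual step. One hedged way to keep a record before purging old models is to transcribe the rows you care about and write them to a CSV file, as in the sketch below; every value shown is a placeholder.

```python
import csv

# Hypothetical rows transcribed from the Execution Logs dialog; the keys mirror the
# dialog's columns, but the values are placeholders.
rows = [
    {"model_code": "a1b2c3d4e5f6", "model_name": "Churn v3", "status": "done-ok",
     "started": "2024-05-01 08:12:03", "finished": "2024-05-01 08:14:40", "duration": "2m37s"},
    {"model_code": "f6e5d4c3b2a1", "model_name": "Churn v2", "status": "error",
     "started": "2024-04-28 09:01:10", "finished": "2024-04-28 09:01:55", "duration": "0m45s"},
]

with open("execution_logs_audit.csv", "w", newline="") as handle:
    writer = csv.DictWriter(handle, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)

print("Wrote", len(rows), "log rows to execution_logs_audit.csv")
```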