After training completes, the Onyx Engine platform provides tools to analyze results, compare models, and download trained weights.

Jobs Tab

The Jobs tab shows all training and optimization jobs:
[Screenshot: Jobs tab showing training history]
Each job displays:
  • Status: Queued, Running, Completed, or Failed
  • Model name: Target model for this job
  • Dataset: Source training data
  • Progress: Iterations completed
  • Metrics: Final loss values

Model Details

Click on a model to view detailed information:
[Screenshot: Model details view]

Overview

  • Configuration: Architecture, hyperparameters, features
  • Training history: Loss curves over iterations
  • Source dataset: Which data trained this model

Versions

Each training run creates a new model version. The platform tracks:
  • Version ID
  • Training date
  • Final metrics
  • Configuration differences
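Configuration differences between versions amount to a key-by-key diff of two configuration dictionaries. A minimal sketch (the field names here are illustrative, not the platform's actual schema):

```python
def config_diff(a: dict, b: dict) -> dict:
    """Return keys whose values differ, mapped to (old, new) pairs."""
    keys = set(a) | set(b)
    return {k: (a.get(k), b.get(k)) for k in keys if a.get(k) != b.get(k)}

# Hypothetical configurations for two versions of the same model
v1 = {"hidden_size": 64, "num_layers": 2, "lr": 1e-3}
v2 = {"hidden_size": 128, "num_layers": 2, "lr": 1e-3}
print(config_diff(v1, v2))  # {'hidden_size': (64, 128)}
```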

Test Predictions

Visualize model predictions on test data:
  • Ground truth: Actual values from dataset
  • Predictions: Model outputs
  • Error: Difference between predicted and actual
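The error column is simply the per-sample difference between prediction and ground truth; a mean squared error summarizes it over the test set. An illustrative sketch with made-up values:

```python
def errors(y_true, y_pred):
    """Per-sample error: predicted minus actual."""
    return [p - t for t, p in zip(y_true, y_pred)]

def mse(y_true, y_pred):
    """Mean squared error over all samples."""
    e = errors(y_true, y_pred)
    return sum(x * x for x in e) / len(e)

truth = [1.0, 2.0, 3.0]
preds = [1.1, 1.9, 3.2]
print([round(e, 2) for e in errors(truth, preds)])  # [0.1, -0.1, 0.2]
print(round(mse(truth, preds), 4))                  # 0.02
```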

Lineage Tracking

View the complete data-to-model lineage:
[Screenshot: Data lineage visualization]
The lineage view shows:
  • Which raw datasets were processed
  • Which training datasets were created
  • Which models were trained from each dataset
  • Version history for each object
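Conceptually, the lineage view walks a directed graph from raw data to trained models. A toy sketch (the object names are invented for illustration):

```python
# Toy lineage graph: each object maps to the objects derived from it.
lineage = {
    "raw_dataset": ["train_dataset_v1", "train_dataset_v2"],
    "train_dataset_v1": ["model_a"],
    "train_dataset_v2": ["model_b", "model_c"],
}

def downstream(obj, graph):
    """All objects reachable from obj: the datasets and models it produced."""
    out = []
    for child in graph.get(obj, []):
        out.append(child)
        out.extend(downstream(child, graph))
    return out

print(downstream("raw_dataset", lineage))
# ['train_dataset_v1', 'model_a', 'train_dataset_v2', 'model_b', 'model_c']
```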

Comparing Models

To compare multiple models:
  1. Select models from the Table view
  2. Click “Compare”
  3. View side-by-side metrics and configurations
Comparison shows:
  • Architecture differences
  • Hyperparameter differences
  • Loss curve overlays
  • Final metric comparison
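A side-by-side metric comparison boils down to aligning each model's final metrics by name. A hedged sketch, assuming the final metrics are available as plain dictionaries (the metric names and values here are made up):

```python
def compare(models: dict) -> list:
    """Render one row per metric, with each model's value side by side."""
    metric_names = sorted({m for vals in models.values() for m in vals})
    rows = []
    for metric in metric_names:
        cells = ", ".join(f"{name}={vals.get(metric)}" for name, vals in models.items())
        rows.append(f"{metric}: {cells}")
    return rows

results = {
    "model_a": {"single_step_loss": 0.004, "multi_step_loss": 0.02},
    "model_b": {"single_step_loss": 0.006, "multi_step_loss": 0.015},
}
for row in compare(results):
    print(row)
```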

Downloading Models

Via Platform

  1. Navigate to the model
  2. Click “Download”
  3. Save the .pt file locally

Via SDK

from onyxengine import Onyx

# Initialize the client
onyx = Onyx()

# Download latest version
model = onyx.load_model('my_model')

# Download specific version
model = onyx.load_model('my_model', version_id='abc123...')

# Model is cached locally at ~/.onyx/models/

Understanding Metrics

Single-Step Loss

  • Measures one-step prediction accuracy
  • Lower is better
  • Good baseline metric for model quality

Multi-Step Loss

  • Measures trajectory simulation accuracy
  • More relevant for deployment
  • Sensitive to error accumulation
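The distinction can be seen with a toy linear system: a small one-step error stays small under single-step evaluation but compounds when the model is rolled out on its own predictions. Purely illustrative numbers, not real dynamics:

```python
def step(x):
    """True one-step dynamics (illustrative)."""
    return 0.9 * x + 1.0

def model_step(x):
    """Learned model with a small constant one-step error."""
    return 0.9 * x + 1.02

x = 1.0
# Single-step: error after one prediction from the true state.
single_err = abs(model_step(x) - step(x))

# Multi-step: roll the model forward from its own predictions.
true_x, pred_x = x, x
for _ in range(20):
    true_x = step(true_x)
    pred_x = model_step(pred_x)
multi_err = abs(pred_x - true_x)

print(round(single_err, 3))  # 0.02
print(multi_err)             # roughly 0.18: one-step errors compound over the rollout
```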

Interpreting Values

Loss Range      Interpretation
< 0.001         Excellent fit
0.001 - 0.01    Good fit
0.01 - 0.1      Moderate fit, may need improvement
> 0.1           Poor fit, check configuration
Loss values are relative to your data. Compare across models trained on the same dataset.
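The bands above can be encoded as a simple lookup, handy for flagging runs in scripts. The thresholds are the rough guidelines from the table, not hard rules:

```python
def interpret_loss(loss: float) -> str:
    """Map a loss value to a rough qualitative band."""
    if loss < 0.001:
        return "Excellent fit"
    if loss < 0.01:
        return "Good fit"
    if loss < 0.1:
        return "Moderate fit, may need improvement"
    return "Poor fit, check configuration"

print(interpret_loss(0.004))  # Good fit
```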

Optimization Results

For optimization jobs, additional information is available:

Trial Comparison

  • Each trial’s configuration
  • Metrics for each trial
  • Best trial identification
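Best-trial identification is a minimum over the trials' losses. A sketch with made-up trial records:

```python
# Hypothetical trial records from an optimization job
trials = [
    {"id": 1, "params": {"lr": 1e-2}, "loss": 0.08},
    {"id": 2, "params": {"lr": 1e-3}, "loss": 0.004},
    {"id": 3, "params": {"lr": 1e-4}, "loss": 0.02},
]
best = min(trials, key=lambda t: t["loss"])
print(best["id"])  # 2
```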

Hyperparameter Analysis

  • Which parameters had the biggest impact
  • Correlation between parameters and performance
  • Recommended configurations
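Impact analysis often starts with the correlation between a hyperparameter and the resulting metric. A self-contained Pearson correlation sketch; the trial data is invented for illustration:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# e.g. hidden size per trial vs. final loss
hidden = [32, 64, 128, 256]
loss = [0.05, 0.03, 0.02, 0.015]
print(round(pearson(hidden, loss), 2))  # negative: larger networks fit better here
```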

Managing Versions

Setting Default Version

  1. Navigate to the model
  2. Go to Versions tab
  3. Click “Set as Default” on your preferred version
Calling load_model() without a version_id returns the default version.

Deleting Versions

  1. Navigate to the model version
  2. Click “Delete”
  3. Confirm deletion
Deleted versions cannot be recovered. Downloaded copies remain valid.

Exporting Results

Export Metrics

Download training metrics as CSV:
  • Loss history
  • Learning rate schedule
  • Validation metrics
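The exported CSV can be read with Python's standard csv module. The column names below are assumptions about the export format, not the platform's documented schema:

```python
import csv
import io

# Hypothetical metrics export; actual column names may differ.
csv_text = """iteration,train_loss,val_loss
100,0.05,0.06
200,0.02,0.03
300,0.01,0.02
"""

rows = list(csv.DictReader(io.StringIO(csv_text)))
final_val = float(rows[-1]["val_loss"])
print(final_val)  # 0.02
```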

Export Configuration

Download model configuration as JSON:
  • Architecture parameters
  • Feature definitions
  • Training settings

Next Steps