```python
from onyxengine.modeling import RNNConfig

RNNConfig(
    outputs: List[Output],
    inputs: List[Input],
    dt: float,
    sequence_length: int = 1,
    rnn_type: Literal['RNN', 'LSTM', 'GRU'] = 'LSTM',
    hidden_layers: int = 2,
    hidden_size: int = 32,
    dropout: float = 0.0,
    bias: bool = True
)
```
Configuration class for Recurrent Neural Network models.
## Parameters

- `outputs` (List[Output]): List of output feature definitions.
- `inputs` (List[Input]): List of input feature definitions.
- `dt` (float): Time step in seconds. Must match your dataset's sampling rate.
- `sequence_length` (int, default 1): Length of the input sequence. Range: 1-100.
- `rnn_type` (str, default 'LSTM'): Type of recurrent unit. Options: 'RNN', 'LSTM', 'GRU'.
- `hidden_layers` (int, default 2): Number of stacked RNN layers. Range: 1-10.
- `hidden_size` (int, default 32): Number of hidden units per layer. Range: 1-1024.
- `dropout` (float, default 0.0): Dropout rate between RNN layers. Range: 0.0-1.0.
- `bias` (bool, default True): Whether to include bias terms.
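Only `outputs`, `inputs`, and `dt` are required; the remaining parameters fall back to the defaults above. A minimal sketch (the feature names here are placeholders):

```python
from onyxengine.modeling import RNNConfig, Input, Output

# Minimal config: only outputs, inputs, and dt are required.
# Defaults apply: sequence_length=1, rnn_type='LSTM', hidden_layers=2,
# hidden_size=32, dropout=0.0, bias=True.
config = RNNConfig(
    outputs=[Output(name='acceleration')],  # placeholder feature name
    inputs=[Input(name='control_input')],   # placeholder feature name
    dt=0.01,  # must match the dataset's sampling rate
)
```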
## RNN Types

| Type | Description | Use Case |
|---|---|---|
| `'RNN'` | Basic recurrent unit | Simple sequences, debugging |
| `'LSTM'` | Long Short-Term Memory | Long-range dependencies (default) |
| `'GRU'` | Gated Recurrent Unit | Similar to LSTM, fewer parameters |
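To make "fewer parameters" concrete: an LSTM uses four weight matrices per layer where a GRU uses three, so at the same hidden size a GRU is roughly 25% smaller. The following standalone sketch uses PyTorch's equivalently named layers to compare sizes; it is an illustration, not onyxengine's internals:

```python
# Standalone illustration with PyTorch's RNN/LSTM/GRU layers (not
# onyxengine code): compare parameter counts at the same hidden size.
import torch.nn as nn

for unit in (nn.RNN, nn.LSTM, nn.GRU):
    layer = unit(input_size=3, hidden_size=64, num_layers=2, batch_first=True)
    n_params = sum(p.numel() for p in layer.parameters())
    print(f"{unit.__name__}: {n_params} parameters")
# RNN uses 1 weight matrix per layer, GRU 3, LSTM 4, which is why
# GRU sits between RNN and LSTM in size.
```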
## Example

```python
from onyxengine.modeling import RNNConfig, Input, Output

outputs = [Output(name='acceleration')]
inputs = [
    Input(name='velocity', parent='acceleration', relation='derivative'),
    Input(name='position', parent='velocity', relation='derivative'),
    Input(name='control_input'),
]

config = RNNConfig(
    outputs=outputs,
    inputs=inputs,
    dt=0.01,
    sequence_length=12,
    rnn_type='LSTM',
    hidden_layers=2,
    hidden_size=64,
    dropout=0.1,
    bias=True
)
```
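With `dt=0.01` and `sequence_length=12`, each prediction conditions on the most recent 12 samples, i.e. 0.12 s of input history.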
## Architecture

```
Input: (batch, sequence_length, num_inputs)
    ↓ RNN/LSTM/GRU layers (with hidden state)
    ↓ Take final hidden state
    ↓ Linear(hidden_size, num_outputs)
Output: (batch, num_outputs)
```
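As a rough mental model, the data flow above can be sketched in PyTorch. This is an illustrative stand-in, not onyxengine's actual implementation:

```python
# Illustrative PyTorch sketch of the data flow above; onyxengine's real
# implementation may differ.
import torch
import torch.nn as nn

class RNNSketch(nn.Module):
    def __init__(self, num_inputs=3, hidden_size=64, hidden_layers=2,
                 num_outputs=1, dropout=0.1):
        super().__init__()
        self.rnn = nn.LSTM(num_inputs, hidden_size, num_layers=hidden_layers,
                           dropout=dropout, batch_first=True)
        self.head = nn.Linear(hidden_size, num_outputs)

    def forward(self, x):
        # x: (batch, sequence_length, num_inputs)
        out, _ = self.rnn(x)     # (batch, sequence_length, hidden_size)
        final = out[:, -1, :]    # final hidden state of the last layer
        return self.head(final)  # (batch, num_outputs)

x = torch.randn(8, 12, 3)        # batch=8, sequence_length=12, num_inputs=3
print(RNNSketch()(x).shape)      # torch.Size([8, 1])
```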
## RNNOptConfig

For hyperparameter optimization:

```python
from onyxengine.modeling import RNNOptConfig

rnn_opt = RNNOptConfig(
    outputs=outputs,
    inputs=inputs,
    dt=0.01,
    rnn_type={"select": ['RNN', 'LSTM', 'GRU']},
    sequence_length={"select": [4, 8, 12, 16]},
    hidden_layers={"range": [2, 4, 1]},
    hidden_size={"select": [32, 64, 128]},
    dropout={"range": [0.0, 0.4, 0.1]},
    bias=True
)
```
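Reading the search spaces: `{"select": [...]}` appears to pick from an explicit list of candidates, while `{"range": [min, max, step]}` sweeps an interval, so `dropout={"range": [0.0, 0.4, 0.1]}` would try 0.0, 0.1, 0.2, 0.3, and 0.4. Parameters given fixed values, like `bias=True`, are excluded from the search.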