tsgm.models.architectures¶
Package Contents¶
- class Sampling[source]¶
Bases:
tensorflow.keras.layers.Layer
Custom Keras layer for sampling from a latent space.
This layer samples from a latent space using the reparameterization trick during training. It takes as input the mean and log variance of the latent distribution and generates samples by adding random noise scaled by the standard deviation to the mean.
- call(inputs: Tuple[tsgm.types.Tensor, tsgm.types.Tensor]) tsgm.types.Tensor [source]¶
Generates samples from a latent space.
- Parameters:
inputs (tuple[tsgm.types.Tensor, tsgm.types.Tensor]) – Tuple containing mean and log variance tensors of the latent distribution.
- Returns:
Sampled latent vector.
- Return type:
tsgm.types.Tensor
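As an illustration, the snippet below is a minimal sketch of the reparameterization trick that such a layer implements; the SamplingSketch class and the concrete shapes are assumptions for demonstration, not tsgm's exact implementation.

```python
import tensorflow as tf
from tensorflow import keras


class SamplingSketch(keras.layers.Layer):
    """Draws z = mean + exp(0.5 * log_var) * eps, with eps ~ N(0, I)."""

    def call(self, inputs):
        z_mean, z_log_var = inputs                      # unpack latent statistics
        eps = tf.random.normal(shape=tf.shape(z_mean))  # standard normal noise
        return z_mean + tf.exp(0.5 * z_log_var) * eps   # scale by std, shift by mean


# Usage: sample 4 latent vectors of dimension 8 (illustrative shapes).
z_mean = tf.zeros((4, 8))
z_log_var = tf.zeros((4, 8))
z = SamplingSketch()([z_mean, z_log_var])
print(z.shape)  # (4, 8)
```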
- class Architecture[source]¶
Bases:
abc.ABC
Abstract base class from which all architectures in this module inherit.
- class BaseGANArchitecture[source]¶
Bases:
Architecture
Base class for defining architectures of Generative Adversarial Networks (GANs).
- property discriminator: tensorflow.keras.models.Model¶
Property for accessing the discriminator model.
- Returns:
The discriminator model.
- Return type:
keras.models.Model
- Raises:
NotImplementedError – If the discriminator model is not found.
- property generator: tensorflow.keras.models.Model¶
Property for accessing the generator model.
- Returns:
The generator model.
- Return type:
keras.models.Model
- Raises:
NotImplementedError – If the generator model is not implemented.
- get() Dict [source]¶
Retrieves both discriminator and generator models as a dictionary.
- Returns:
A dictionary containing discriminator and generator models.
- Return type:
Dict
- Raises:
NotImplementedError – If either discriminator or generator models are not implemented.
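For concreteness, the following is a minimal usage sketch of the GAN accessors, assuming a concrete subclass such as cGAN_Conv4Architecture (documented below) and illustrative dimensions.

```python
import tsgm

# Illustrative dimensions; any concrete BaseGANArchitecture subclass works the same way.
arch = tsgm.models.architectures.cGAN_Conv4Architecture(
    seq_len=64, feat_dim=1, latent_dim=16, output_dim=2
)

models = arch.get()                 # dictionary containing the discriminator and generator models
discriminator = arch.discriminator  # the same models, exposed through the properties
generator = arch.generator

discriminator.summary()
generator.summary()
```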
- class BaseVAEArchitecture[source]¶
Bases:
Architecture
Base class for defining architectures of Variational Autoencoders (VAEs).
- property encoder: tensorflow.keras.models.Model¶
Property for accessing the encoder model.
- Returns:
The encoder model.
- Return type:
keras.models.Model
- Raises:
NotImplementedError – If the encoder model is not implemented.
- property decoder: tensorflow.keras.models.Model¶
Property for accessing the decoder model.
- Returns:
The decoder model.
- Return type:
keras.models.Model
- Raises:
NotImplementedError – If the decoder model is not implemented.
- get() Dict [source]¶
Retrieves both encoder and decoder models as a dictionary.
- Returns:
A dictionary containing encoder and decoder models.
- Return type:
Dict
- Raises:
NotImplementedError – If either encoder or decoder models are not implemented.
- class VAE_CONV5Architecture(seq_len: int, feat_dim: int, latent_dim: int)[source]¶
Bases:
BaseVAEArchitecture
This class defines the architecture for a Variational Autoencoder (VAE) with Convolutional Layers.
- Parameters:
seq_len (int) – Length of the input sequence.
feat_dim (int) – Dimensionality of input features.
latent_dim (int) – Dimensionality of the latent space.
Initializes the VAE_CONV5Architecture.
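A minimal usage sketch, assuming illustrative dimensions:

```python
import tsgm

arch = tsgm.models.architectures.VAE_CONV5Architecture(
    seq_len=64, feat_dim=1, latent_dim=8
)

models = arch.get()     # dictionary containing the encoder and decoder models
encoder = arch.encoder  # the same models, exposed through the properties
decoder = arch.decoder

encoder.summary()
decoder.summary()
```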
- class cVAE_CONV5Architecture(seq_len: int, feat_dim: int, latent_dim: int, output_dim: int = 2)[source]¶
Bases:
BaseVAEArchitecture
Conditional Variational Autoencoder (cVAE) architecture with convolutional layers.
- class cGAN_Conv4Architecture(seq_len: int, feat_dim: int, latent_dim: int, output_dim: int)[source]¶
Bases:
BaseGANArchitecture
Architecture for Conditional Generative Adversarial Network (cGAN) with Convolutional Layers.
Initializes the cGAN_Conv4Architecture.
- class tcGAN_Conv4Architecture(seq_len: int, feat_dim: int, latent_dim: int, output_dim: int)[source]¶
Bases:
BaseGANArchitecture
Architecture for Temporal Conditional Generative Adversarial Network (tcGAN) with Convolutional Layers.
Initializes the tcGAN_Conv4Architecture.
- class cGAN_LSTMConv3Architecture(seq_len: int, feat_dim: int, latent_dim: int, output_dim: int)[source]¶
Bases:
BaseGANArchitecture
Architecture for Conditional Generative Adversarial Network (cGAN) with LSTM and Convolutional Layers.
Initializes the cGAN_LSTMConv3Architecture.
- class BaseClassificationArchitecture(seq_len: int, feat_dim: int, output_dim: int)[source]¶
Bases:
Architecture
Base class for classification architectures.
Initializes the base classification architecture.
- Parameters:
seq_len (int) – Length of input sequences.
feat_dim (int) – Dimensionality of input features.
output_dim (int) – Number of output classes.
- property model: tensorflow.keras.models.Model¶
Property to access the underlying Keras model.
- Returns:
The Keras model.
- Return type:
keras.models.Model
- class ConvnArchitecture(seq_len: int, feat_dim: int, output_dim: int, n_conv_blocks: int = 1)[source]¶
Bases:
BaseClassificationArchitecture
Convolutional neural network architecture for classification. Inherits from BaseClassificationArchitecture.
Initializes the convolutional neural network architecture.
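A minimal training sketch using the underlying Keras model; the data shapes, one-hot label format, and loss choice are assumptions for illustration.

```python
import numpy as np
import tsgm

arch = tsgm.models.architectures.ConvnArchitecture(
    seq_len=64, feat_dim=2, output_dim=3, n_conv_blocks=2
)
model = arch.model  # plain keras.models.Model

model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

X = np.random.normal(size=(128, 64, 2)).astype("float32")  # (n_samples, seq_len, feat_dim)
y = np.eye(3)[np.random.randint(0, 3, size=128)]           # one-hot labels (assumed format)
model.fit(X, y, epochs=1, batch_size=16)
```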
- class ConvnLSTMnArchitecture(seq_len: int, feat_dim: int, output_dim: int, n_conv_lstm_blocks: int = 1)[source]¶
Bases:
BaseClassificationArchitecture
Convolutional LSTM architecture for classification. Inherits from BaseClassificationArchitecture.
Initializes the convolutional LSTM architecture.
- Parameters:
seq_len (int) – Length of input sequences.
feat_dim (int) – Dimensionality of input features.
output_dim (int) – Number of output classes.
n_conv_lstm_blocks (int, optional) – Number of convolutional LSTM blocks (default is 1).
- class BlockClfArchitecture(seq_len: int, feat_dim: int, output_dim: int, blocks: list)[source]¶
Bases:
BaseClassificationArchitecture
Architecture for classification using a sequence of blocks.
Inherits from BaseClassificationArchitecture.
Initializes the BlockClfArchitecture.
- class BasicRecurrentArchitecture(hidden_dim: int, output_dim: int, n_layers: int, network_type: str, name: str = 'Sequential')[source]¶
Bases:
Architecture
Base class for recurrent neural network architectures.
Inherits from Architecture.
- Parameters:
hidden_dim – int, the number of units (e.g. 24)
output_dim – int, the number of output units (e.g. 1)
n_layers – int, the number of layers (e.g. 3)
network_type – str, one of ‘gru’, ‘lstm’, or ‘lstmLN’
name – str, model name (default: 'Sequential')
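A minimal instantiation sketch using the example values from the parameter list above; how the resulting network is built and trained is outside the scope of this section.

```python
import tsgm

# Values taken from the documented examples above; 'gru' is one of the supported network types.
arch = tsgm.models.architectures.BasicRecurrentArchitecture(
    hidden_dim=24, output_dim=1, n_layers=3, network_type="gru"
)
```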
- class TransformerClfArchitecture(seq_len: int, feat_dim: int, num_heads: int = 2, ff_dim: int = 64, n_blocks: int = 1, dropout_rate=0.5, output_dim: int = 2)[source]¶
Bases:
BaseClassificationArchitecture
Transformer-based architecture for classification.
Inherits from BaseClassificationArchitecture.
Initializes the TransformerClfArchitecture.
- Parameters:
seq_len (int) – Length of input sequences.
feat_dim (int) – Dimensionality of input features.
num_heads (int) – Number of attention heads (default is 2).
ff_dim (int) – Feed forward dimension in the attention block (default is 64).
dropout_rate (float, optional) – Dropout probability (default is 0.5).
n_blocks (int, optional) – Number of transformer blocks (default is 1).
output_dim (int, optional) – Number of output classes (default is 2).
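A minimal instantiation sketch with mostly default hyperparameters; the concrete dimensions are assumptions for illustration.

```python
import tsgm

arch = tsgm.models.architectures.TransformerClfArchitecture(
    seq_len=128, feat_dim=4, num_heads=2, ff_dim=64, n_blocks=2, output_dim=2
)
arch.model.summary()  # Keras model exposed via the inherited `model` property
```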
- class cGAN_LSTMnArchitecture(seq_len: int, feat_dim: int, latent_dim: int, output_dim: int, n_blocks: int = 1, output_activation: str = 'tanh')[source]¶
Bases:
BaseGANArchitecture
Conditional Generative Adversarial Network (cGAN) with LSTM-based architecture.
Inherits from BaseGANArchitecture.
Initializes the cGAN_LSTMnArchitecture.
- Parameters:
seq_len (int) – Length of input sequences.
feat_dim (int) – Dimensionality of input features.
latent_dim (int) – Dimensionality of the latent space.
output_dim (int) – Dimensionality of the output.
n_blocks (int, optional) – Number of LSTM blocks in the architecture (default is 1).
output_activation (str, optional) – Activation function for the output layer (default is “tanh”).
- class WaveGANArchitecture(seq_len: int, feat_dim: int = 64, latent_dim: int = 32, output_dim: int = 1, kernel_size: int = 32, phase_rad: int = 2, use_batchnorm: bool = False)[source]¶
Bases:
BaseGANArchitecture
WaveGAN architecture, from https://arxiv.org/abs/1802.04208
Inherits from BaseGANArchitecture.
Initializes the WaveGANArchitecture.
- Parameters:
seq_len (int) – Length of input sequences.
feat_dim (int) – Dimensionality of input features.
latent_dim (int) – Dimensionality of the latent space.
output_dim (int) – Dimensionality of the output.
kernel_size (int, optional) – Size of the convolution kernels (default is 32).
phase_rad (int, optional) – Phase shuffle radius for WaveGAN (default is 2).
use_batchnorm (bool, optional) – Whether to use batchnorm (default is False)
- class BaseDenoisingArchitecture(seq_len: int, feat_dim: int, n_filters: int = 64, n_conv_layers: int = 3, **kwargs)[source]¶
Bases:
Architecture
Base class for denoising architectures in DDPM (Denoising Diffusion Probabilistic Models, tsgm.models.ddpm).
- Attributes:
arch_type: A string indicating the type of architecture, set to “ddpm:denoising”.
_seq_len: The length of the input sequences.
_feat_dim: The dimensionality of the input features.
_n_filters: The number of filters used in the convolutional layers.
_n_conv_layers: The number of convolutional layers in the model.
_model: The Keras model instance built using the _build_model method.
Initializes the BaseDenoisingArchitecture with the specified parameters.
- Args:
seq_len (int): The length of the input sequences.
feat_dim (int): The dimensionality of the input features.
n_filters (int, optional): The number of filters for convolutional layers. Default is 64.
n_conv_layers (int, optional): The number of convolutional layers. Default is 3.
**kwargs: Additional keyword arguments to be passed to the parent class Architecture.
- property model: tensorflow.keras.models.Model¶
Provides access to the Keras model instance.
- Returns:
keras.models.Model: The Keras model instance built by _build_model.
- class DDPMConvDenoiser(**kwargs)[source]¶
Bases:
BaseDenoisingArchitecture
A convolutional denoising model for DDPM.
This class defines a convolutional neural network architecture used as a denoiser in DDPM. It predicts the noise added to the input samples during the diffusion process.
- Attributes:
arch_type: A string indicating the architecture type, set to “ddpm:denoiser”.
Initializes the DDPMConvDenoiser model with additional parameters.
- Args:
**kwargs: Additional keyword arguments to be passed to the parent class.
- _build_model() tensorflow.keras.Model [source]¶
Constructs and returns the Keras model for the DDPM denoiser.
- The model consists of:
A 1D convolutional layer to process input features.
An additional input layer for time embedding to incorporate timestep information.
n_conv_layers convolutional layers to process the combined features and time embeddings.
A final convolutional layer to output the predicted noise.
- Returns:
keras.Model: The Keras model instance for the DDPM denoiser.
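A minimal usage sketch; the sequence length and feature dimension are assumptions, passed through **kwargs to BaseDenoisingArchitecture as documented above.

```python
import tsgm

arch = tsgm.models.architectures.DDPMConvDenoiser(seq_len=64, feat_dim=1)
denoiser = arch.model  # Keras model taking the noisy samples plus a time-embedding input
denoiser.summary()
```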