tsgm.models.timeGAN

Module Contents

class LossTracker[source]

Bases: collections.OrderedDict

Dictionary of lists; extends Python's OrderedDict so that assigning to an existing key appends rather than overwrites. Example: given {'loss_a': [1], 'loss_b': [2]}, setting key='loss_a' with value=0.7

gives {'loss_a': [1, 0.7], 'loss_b': [2]}, and then setting key='loss_c' with value=1.2 gives {'loss_a': [1, 0.7], 'loss_b': [2], 'loss_c': [1.2]}


__setitem__(key: Any, value: Any) → None[source]

Set self[key] to value, appending to the existing list when the key is already present.

to_numpy() → numpy.typing.NDArray[source]

Returns:

a 2-D array of loss histories (one row per tracked loss)

labels() → List[source]

Returns:

the list of loss names (dictionary keys)
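The behavior documented above can be sketched as a small OrderedDict subclass. This is an illustrative reimplementation, not the library's code:

```python
from collections import OrderedDict
from typing import Any, List

import numpy as np


class LossTracker(OrderedDict):
    """Dictionary of lists: each key holds the history of one loss."""

    def __setitem__(self, key: Any, value: Any) -> None:
        # Append to an existing history; start a new list for a new key.
        if key in self:
            self[key].append(value)
        else:
            super().__setitem__(key, [value])

    def to_numpy(self) -> np.ndarray:
        # 2-D array with one row per tracked loss.
        return np.array(list(self.values()))

    def labels(self) -> List:
        return list(self.keys())
```

For example, after `t['loss_a'] = 1` and `t['loss_a'] = 0.7`, `t['loss_a']` holds `[1, 0.7]`, matching the example above.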

class TimeGAN(seq_len: int = 24, module: str = 'gru', hidden_dim: int = 24, n_features: int = 6, n_layers: int = 3, batch_size: int = 256, gamma: float = 1.0)[source]

Bases: tensorflow.keras.Model

Time-series Generative Adversarial Networks (TimeGAN)

Reference: Jinsung Yoon, Daniel Jarrett, Mihaela van der Schaar, “Time-series Generative Adversarial Networks,” Neural Information Processing Systems (NeurIPS), 2019.

Paper link: https://papers.nips.cc/paper/8789-time-series-generative-adversarial-networks

compile(d_optimizer: tensorflow.keras.optimizers.Optimizer = keras.optimizers.legacy.Adam(), g_optimizer: tensorflow.keras.optimizers.Optimizer = keras.optimizers.legacy.Adam(), emb_optimizer: tensorflow.keras.optimizers.Optimizer = keras.optimizers.legacy.Adam(), supgan_optimizer: tensorflow.keras.optimizers.Optimizer = keras.optimizers.legacy.Adam(), ae_optimizer: tensorflow.keras.optimizers.Optimizer = keras.optimizers.legacy.Adam(), emb_loss: tensorflow.keras.losses.Loss = keras.losses.MeanSquaredError(), clf_loss: tensorflow.keras.losses.Loss = keras.losses.BinaryCrossentropy()) → None[source]

Assign optimizers and loss functions.

Parameters:
  • d_optimizer – An optimizer for the GAN’s discriminator

  • g_optimizer – An optimizer for the GAN’s generator

  • emb_optimizer – An optimizer for the GAN’s embedder

  • supgan_optimizer – An optimizer for the adversarial supervised network

  • ae_optimizer – An optimizer for the autoencoder network

  • emb_loss – A loss function for the embedding recovery

  • clf_loss – A loss function for the discriminator task

Returns:

None

_train_autoencoder(X: tensorflow.python.types.core.TensorLike, optimizer: tensorflow.keras.optimizers.Optimizer) → float[source]

  1. Embedding network training: minimize E_loss0

_train_supervisor(X: tensorflow.python.types.core.TensorLike, optimizer: tensorflow.keras.optimizers.Optimizer) → float[source]

  1. Training with supervised loss only: minimize G_loss_S

_train_generator(X: tensorflow.python.types.core.TensorLike, Z: tensorflow.python.types.core.TensorLike, optimizer: tensorflow.keras.optimizers.Optimizer) → Tuple[float, float, float, float, float][source]

  1. Joint training (the generator is trained twice for every discriminator update): minimize G_loss

_train_embedder(X: tensorflow.python.types.core.TensorLike, optimizer: tensorflow.keras.optimizers.Optimizer) → Tuple[float, float][source]

Train embedder during joint training: minimize E_loss

_train_discriminator(X: tensorflow.python.types.core.TensorLike, Z: tensorflow.python.types.core.TensorLike, optimizer: tensorflow.keras.optimizers.Optimizer) → float[source]

minimize D_loss
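The joint-training schedule described above — generator and embedder updated twice per step, discriminator updated afterwards — can be sketched as below. The loss threshold guarding the discriminator update (0.15 here) follows the original TimeGAN reference code and is an assumption about this implementation; the callables are placeholders for the `_train_*` methods:

```python
from typing import Callable


def joint_training_step(
    train_generator: Callable[[], None],
    train_embedder: Callable[[], None],
    train_discriminator: Callable[[], None],
    check_discriminator_loss: Callable[[], float],
    threshold: float = 0.15,  # assumed value, following the original TimeGAN code
) -> None:
    # The generator (and embedder) are trained twice per joint step.
    for _ in range(2):
        train_generator()
        train_embedder()
    # Only update the discriminator while it is not already too strong.
    if check_discriminator_loss() > threshold:
        train_discriminator()
```

This asymmetry keeps the discriminator from overpowering the generator early in joint training.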

static _compute_generator_moments_loss(y_true: tensorflow.python.types.core.TensorLike, y_pred: tensorflow.python.types.core.TensorLike) → float[source]
Parameters:
  • y_true – TensorLike

  • y_pred – TensorLike

Returns:

G_loss_V (float)
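A moments-matching loss of this kind can be sketched in NumPy as follows. This mirrors the TimeGAN formulation — penalizing differences in per-feature batch means and standard deviations — but is an illustrative reimplementation, not the library's exact code:

```python
import numpy as np


def generator_moments_loss(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Penalize differences in the per-feature mean and standard deviation
    between a real batch (y_true) and a generated batch (y_pred)."""
    mean_true, mean_pred = y_true.mean(axis=0), y_pred.mean(axis=0)
    var_true, var_pred = y_true.var(axis=0), y_pred.var(axis=0)
    eps = 1e-6  # numerical stability inside the square roots
    g_loss_mean = np.mean(np.abs(mean_pred - mean_true))
    g_loss_std = np.mean(np.abs(np.sqrt(var_pred + eps) - np.sqrt(var_true + eps)))
    return float(g_loss_mean + g_loss_std)
```

The loss is zero when the two batches share their first two moments, which pushes the generator toward the real data's marginal statistics.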

_check_discriminator_loss(X: tensorflow.python.types.core.TensorLike, Z: tensorflow.python.types.core.TensorLike) → float[source]
Parameters:
  • X – TensorLike

  • Z – TensorLike

Returns:

D_loss (float)

_generate_noise() → tensorflow.python.types.core.TensorLike[source]

Generate a random vector.

Returns:

Z, the generated random vector

get_noise_batch() → Iterator[source]

Return an iterator of random noise vectors
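The two noise utilities above might look roughly like this sketch; the uniform distribution and the shapes (taken from the class defaults `seq_len=24`, `n_features=6`, `batch_size=256`) are assumptions:

```python
from typing import Iterator

import numpy as np

SEQ_LEN, N_FEATURES, BATCH_SIZE = 24, 6, 256  # mirror the TimeGAN defaults above


def generate_noise(rng: np.random.Generator) -> np.ndarray:
    # One random window Z of shape (seq_len, n_features).
    return rng.uniform(0.0, 1.0, size=(SEQ_LEN, N_FEATURES))


def get_noise_batch(rng: np.random.Generator) -> Iterator[np.ndarray]:
    # Endless iterator over batches of noise windows.
    while True:
        yield np.stack([generate_noise(rng) for _ in range(BATCH_SIZE)])
```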

_get_data_batch(data: tensorflow.python.types.core.TensorLike, n_windows: int) → Iterator[source]

Return an iterator of shuffled input data
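A shuffled-batch iterator of this shape can be sketched as follows; sampling with replacement and the default batch size are assumptions, not the library's exact behavior:

```python
from typing import Iterator

import numpy as np


def get_data_batch(
    data: np.ndarray, n_windows: int, batch_size: int = 256
) -> Iterator[np.ndarray]:
    # Endless iterator over randomly sampled batches of training windows.
    rng = np.random.default_rng()
    while True:
        idx = rng.integers(0, n_windows, size=batch_size)
        yield data[idx]
```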

fit(data: tensorflow.python.types.core.TensorLike | tensorflow.data.Dataset, epochs: int, checkpoints_interval: int | None = None, generate_synthetic: Tuple = (), *args, **kwargs)[source]
Parameters:
  • data – TensorLike, the training data

  • epochs – int, the number of epochs for the training loops

  • checkpoints_interval – int, the interval for printing loss values (losses are printed every checkpoints_interval epochs). Default: None (no printing)

  • generate_synthetic – tuple of int, the epochs at which synthetic data samples are generated. Default: () (no generation)

Returns:

None

generate(n_samples: int) → tensorflow.python.types.core.TensorLike[source]

Generate synthetic time series.

Parameters:
  • n_samples – int, the number of synthetic samples to generate