aphin.identification package
Submodules
aphin.identification.aphin module
- class aphin.identification.aphin.APHIN(*args, **kwargs)[source]
Bases:
PHBasemodel, ABC
Autoencoder-based port-Hamiltonian Identification Network (ApHIN)
- build_autoencoder(x)[source]
Build the encoder and decoder of the autoencoder.
- Parameters:
x (array-like) – Input data.
- Returns:
Tuple containing inputs and outputs of the autoencoder.
- Return type:
tuple
- build_model(x, u, mu)[source]
Build the model.
- Parameters:
x (array-like) – Full state with shape (n_samples, n_features).
u (array-like, optional) – Inputs with shape (n_samples, n_inputs).
mu (array-like, optional) – Parameters with shape (n_samples, n_params).
- build_nonlinear_autoencoder(z_pca)[source]
Build a fully connected autoencoder with layers of size layer_sizes.
- Parameters:
z_pca (tf.Tensor) – Input to the autoencoder.
- Returns:
Tuple containing encoded and decoded tensors.
- Return type:
tuple
- build_pca_decoder(z_dec)[source]
Build a linear decoder which is equivalent to the backprojection of the PCA.
- Parameters:
z_dec (tf.Tensor) – Decoded PCA tensor.
- Returns:
Decoded tensor.
- Return type:
tf.Tensor
- build_pca_encoder(x, x_input)[source]
Calculate the PCA of the data and build a linear encoder which is equivalent to the PCA.
- Parameters:
x (array-like) – Input data.
x_input (tf.Tensor) – Input tensor.
- Returns:
Encoded PCA tensor.
- Return type:
tf.Tensor
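For intuition, the PCA encoder and its back-projection can be sketched in plain NumPy: the encoder projects the data onto the leading right singular vectors, and the linear decoder maps back to the full space. This is a minimal illustration of the idea, not the TensorFlow layers that build_pca_encoder and build_pca_decoder actually construct; the data shapes and the reduced dimension r are assumptions.

```python
import numpy as np

# Minimal NumPy sketch of a PCA encoder/decoder pair; the actual
# build_pca_encoder/build_pca_decoder methods create equivalent linear
# layers in TensorFlow. Data shapes and the reduced dimension r are assumptions.
rng = np.random.default_rng(0)
x = rng.standard_normal((1000, 50))   # (n_samples, n_features), assumed centered
r = 8                                 # reduced PCA dimension

_, _, vt = np.linalg.svd(x, full_matrices=False)
V_r = vt[:r].T                        # leading PCA modes, (n_features, r)

z_pca = x @ V_r                       # linear encoding (projection onto PCA modes)
x_lin = z_pca @ V_r.T                 # linear decoding (PCA back-projection)

rel_err = np.linalg.norm(x - x_lin) / np.linalg.norm(x)
print(f"relative PCA projection error: {rel_err:.3f}")
```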
- calc_latent_time_derivatives(x, dx_dt)[source]
Calculate time derivatives of latent variables given the time derivatives of the input variables.
- Parameters:
x (array-like) – Full state with shape (n_samples, n_features).
dx_dt (array-like) – Time derivative of state with shape (n_samples, n_features).
- Returns:
Tuple containing latent variables and their time derivatives.
- Return type:
tuple
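Conceptually, this method applies the chain rule dz/dt = (dz/dx) @ dx/dt. The sketch below shows that rule with a generic Keras encoder and tf.GradientTape.batch_jacobian; the stand-in encoder and all shapes are assumptions, not the APHIN internals.

```python
import tensorflow as tf

# Chain-rule sketch behind calc_latent_time_derivatives:
# dz/dt = (dz/dx) @ dx/dt, evaluated sample-wise with batch_jacobian.
# The encoder below is a generic stand-in, not the APHIN encoder.
encoder = tf.keras.Sequential(
    [tf.keras.layers.Dense(8, activation="tanh"), tf.keras.layers.Dense(2)]
)

x = tf.random.normal((16, 10))      # full state (n_samples, n_features)
dx_dt = tf.random.normal((16, 10))  # time derivative of the state

with tf.GradientTape() as tape:
    tape.watch(x)
    z = encoder(x)                  # latent variables (n_samples, r)

dz_dx = tape.batch_jacobian(z, x)   # (n_samples, r, n_features)
dz_dt = tf.einsum("srn,sn->sr", dz_dx, dx_dt)  # latent time derivatives
print(z.shape, dz_dt.shape)         # (16, 2) (16, 2)
```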
- calc_pca_time_derivatives(x, dx_dt)[source]
Calculate time derivatives of PCA variables given the time derivatives of the input variables.
- Parameters:
x (array-like) – Full state with shape (n_samples, n_features).
dx_dt (array-like) – Time derivative of state with shape (n_samples, n_features).
- Returns:
Tuple containing PCA coordinates and their time derivatives.
- Return type:
tuple
- calc_physical_time_derivatives(z, dz_dt)[source]
Calculate time derivatives of physical variables given the time derivatives of the latent variables.
- Parameters:
z (array-like) – Latent state with shape (n_samples, r).
dz_dt (array-like) – Time derivative of latent state with shape (n_samples, r).
- Returns:
Tuple containing physical variables and their time derivatives.
- Return type:
tuple
- decode(z)[source]
Decode latent variable.
- Parameters:
z (array-like) – Latent variable with shape (n_samples, reduced_order).
- Returns:
x – Full state with shape (n_samples, n_features, n_dof_per_feature).
- Return type:
array-like
- encode(x)[source]
Encode full state.
- Parameters:
x (array-like) – Full state with shape (n_samples, n_features, n_dof_per_feature).
- Returns:
z – Latent variable with shape (n_samples, reduced_order).
- Return type:
array-like
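A round trip through the latent space therefore looks like decode(encode(x)). The toy class below only mimics this interface with a linear map so the snippet is self-contained; it is not APHIN itself.

```python
import numpy as np

class ToyAutoencoder:
    """Stand-in with the same encode/decode interface; a linear toy, not APHIN."""

    def __init__(self, n_features, reduced_order, seed=0):
        rng = np.random.default_rng(seed)
        # Orthonormal basis so that decode is the exact back-projection.
        self.V = np.linalg.qr(rng.standard_normal((n_features, reduced_order)))[0]

    def encode(self, x):
        return x @ self.V            # (n_samples, reduced_order)

    def decode(self, z):
        return z @ self.V.T          # (n_samples, n_features)

model = ToyAutoencoder(n_features=50, reduced_order=8)
x = np.random.default_rng(1).standard_normal((100, 50))
x_rec = model.decode(model.encode(x))   # reconstruct(x) == decode(encode(x))
print(x_rec.shape)                      # (100, 50)
```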
- get_loss_second_part(xr, dz_dxr, dxr_dt, z, u, mu)[source]
Second part of the loss calculation. The loss calculation is split into two parts because the second part differs between the autoencoder implementations, while the first part remains the same.
- Parameters:
xr (tf.Tensor) – Intermediate latent space tensor.
dz_dxr (tf.Tensor) – Jacobian of latent variables with respect to intermediate latent space.
dxr_dt (tf.Tensor) – Time derivative of intermediate latent space.
z (tf.Tensor) – Latent variables.
u (tf.Tensor) – System inputs.
mu (tf.Tensor) – System parameters.
- Returns:
Tuple containing individual losses.
- Return type:
tuple
- get_projection_properties(x=None, x_test=None, file_dir=None)[source]
Compute and save the projection and Jacobian error.
- Parameters:
x (array-like, optional) – Training data.
x_test (array-like, optional) – Test data.
file_dir (str, optional) – Directory to save the projection properties.
- Returns:
Tuple containing projection and Jacobian errors for training and test data.
- Return type:
tuple
- get_trainable_weights()[source]
Returns the trainable weights of the model.
- Returns:
List of trainable weights.
- Return type:
list
- implicit_midpoint(t0, z0, t_bound, step_size, B=None, u=None, decomp_option=1)[source]
Calculate the time integration of the linear ODE system E*dz_dt = A*z + B*u with the implicit midpoint rule (a sketch of the scheme follows this entry).
Theory: starting from the pH system E*dx_dt = (J-D)*Q*x + B*u, define A := (J-D)*Q and the right-hand side f(t, x). The slope at the midpoint gives (x(t+h) - x(t))/h = dx_dt(t+h/2) = E^-1 * f(t+h/2, x(t+h/2)). Since x(t+h/2) is unknown, it is approximated by x(t+h/2) = 1/2*(x(t) + x(t+h)). Inserting the linear system into this relation leads to x(t+h) = x(t) + h * E^-1 * (1/2*A*(x(t) + x(t+h)) + B*u(t+h/2)), which can be rearranged to (E - h/2*A)*x(t+h) = (E + h/2*A)*x(t) + h*B*u(t+h/2). This linear system is solved in every step, e.g. via LU decomposition.
- Parameters:
t0 (float) – Initial time.
z0 (array-like) – Initial state vector.
t_bound (float) – End time.
step_size (float) – Constant step width.
B (array-like, optional) – Input matrix, default is None (will be set to zero).
u (callable, optional) – Input function at time midpoints, default is None (will be set to zero).
decomp_option (int, optional) – Decomposition option (1: LU solve), default is 1.
- Returns:
z (array-like) – Integrated state vector.
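The scheme above can be sketched with NumPy/SciPy as follows; this is a minimal stand-alone illustration (matrix names, the test system, and the reuse of a single LU factorization are assumptions, not the method's actual code).

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def implicit_midpoint_sketch(t0, z0, t_bound, h, E, A, B=None, u=None):
    """Implicit midpoint rule for E*dz_dt = A*z + B*u (illustrative sketch).

    Each step solves (E - h/2*A) z_{k+1} = (E + h/2*A) z_k + h*B*u(t_k + h/2);
    the LU factorization is computed once and reused (constant step size).
    """
    n = len(z0)
    B = np.zeros((n, 1)) if B is None else B
    u = (lambda t: np.zeros(B.shape[1])) if u is None else u

    lhs = lu_factor(E - h / 2 * A)
    rhs_mat = E + h / 2 * A

    t_vals = np.arange(t0, t_bound + h / 2, h)
    z = np.empty((len(t_vals), n))
    z[0] = z0
    for k in range(len(t_vals) - 1):
        rhs = rhs_mat @ z[k] + h * B @ u(t_vals[k] + h / 2)
        z[k + 1] = lu_solve(lhs, rhs)
    return t_vals, z

# Example: undamped oscillator (energy is preserved by the midpoint rule).
E = np.eye(2)
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
t, z = implicit_midpoint_sketch(0.0, np.array([1.0, 0.0]), 10.0, 0.01, E, A)
print(z[-1])
```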
- projection_properties(x)[source]
Compute the projection and Jacobian error.
- Parameters:
x (array-like) – Input data.
- Returns:
Tuple containing projection error and Jacobian error.
- Return type:
tuple
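The exact error measures are not spelled out here; one plausible reading is that the Jacobian error quantifies how far the encoder Jacobian is from being a left inverse of the decoder Jacobian. The sketch below illustrates that idea for a linear encoder/decoder pair and is an assumption, not the implemented formula.

```python
import numpy as np

# Illustrative Jacobian check for a linear encoder/decoder pair: if the
# encoder Jacobian is a left inverse of the decoder Jacobian, their product
# equals the identity on the latent space. This is one plausible error
# measure, not necessarily the one implemented in projection_properties.
rng = np.random.default_rng(0)
n, r = 10, 3
W_dec = rng.standard_normal((n, r))   # decoder: x_rec = W_dec @ z
W_enc = np.linalg.pinv(W_dec)         # encoder: z = W_enc @ x (left inverse)

jac_err = np.linalg.norm(W_enc @ W_dec - np.eye(r))
print(f"Jacobian error: {jac_err:.2e}")   # ~0 for this exact projection
```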
- reconstruct(x, _=None)[source]
Reconstruct full state.
- Parameters:
x (array-like) – Full state with shape (n_samples, n_features, n_dof_per_feature).
- Returns:
x_rec – Reconstructed full state with shape (n_samples, n_features, n_dof_per_feature).
- Return type:
array-like
- reshape_dxr_dz(dxr_dz)[source]
Reshape data for conformity with Convolutional Autoencoder.
- Parameters:
dxr_dz (tf.Tensor) – Jacobian of reconstructed state with respect to latent variables.
- Returns:
dxr_dz – The input, returned unchanged.
- Return type:
tf.Tensor
- test_step(inputs)[source]
Perform one test step.
- Parameters:
inputs (array-like) – Input data.
- Returns:
Dictionary containing loss values.
- Return type:
dict
- train_step(inputs)[source]
Perform one training step.
- Parameters:
inputs (array-like) – Input data.
- Returns:
Dictionary containing loss values.
- Return type:
dict
- vis_modes(x, mode_ids=3, latent_ids=None, block=True)[source]
Visualize the reconstruction of the reduced coefficients of the PCA modes.
- Parameters:
x (array-like) – Original dataset.
mode_ids (int or array-like, optional) – Scalar (plots mode_ids) or array (plots modes with indices from mode_ids).
latent_ids (int or array-like, optional) – Scalar (plots latent_ids) or array (plots latent variables with indices from latent_ids).
block (bool, optional) – Whether to block the display of the plot.
- Return type:
None
aphin.identification.conv_aphin module
- class aphin.identification.conv_aphin.ConvAPHIN(*args, **kwargs)[source]
Bases:
APHIN
Convolutional autoencoder-based port-Hamiltonian Identification Network (Conv-ApHIN). Model to discover the low-dimensional dynamics of a high-dimensional system using a convolutional autoencoder and pHIN.
- build_autoencoder(x)[source]
Build the encoder and decoder of the autoencoder.
- Parameters:
x (array-like) – Input data.
- Returns:
Tuple containing the input tensor, dummy PCA tensor, encoded tensor, decoded tensor, and reconstructed tensor.
- Return type:
tuple
- build_nonlinear_autoencoder(x_input)[source]
Build the convolutional autoencoder with specified layers and filter sizes.
- Parameters:
x_input (tf.Tensor) – Input tensor to the encoder.
- Returns:
Tuple containing the encoded tensor and the decoded tensor.
- Return type:
tuple
- get_loss_second_part(xr, dz_dxr, dxr_dt, z, u, mu)[source]
Calculate the second part of the loss function. In contrast to the classic APHIN, our data (and its time derivative) are multidimensional. Consequently, we need to vectorize the data before we can calculate the loss.
- Parameters:
xr (array-like) – Reconstructed data.
dz_dxr (array-like) – Derivative of the latent variables with respect to the reconstructed data.
dxr_dt (array-like) – Time derivative of the reconstructed data.
z (array-like) – Latent variables.
u (array-like) – Control inputs.
mu (array-like) – Parameters.
- Returns:
The second part of the loss.
- Return type:
tf.Tensor
- reshape_conv_data(dz_dxr, dxr_dt)[source]
In contrast to the classic APHIN, our data (and its time derivative) are multidimensional. Consequently, we need to vectorize the data before we can calculate the loss.
- Parameters:
dz_dxr (array-like) – Derivative of the latent variables with respect to the reconstructed data.
dxr_dt (array-like) – Time derivative of the reconstructed data.
- Returns:
Tuple containing the reshaped derivatives and time derivatives.
- Return type:
tuple
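The vectorization described above can be sketched as follows: the batch Jacobian and the multidimensional time derivative are flattened so the loss can be formed on vectors. All shapes and names are illustrative assumptions, not the ConvAPHIN internals.

```python
import tensorflow as tf

# Vectorization sketch: flatten the multidimensional Jacobian and time
# derivative so the loss can be formed on vectors. All shapes are
# illustrative assumptions, not the ConvAPHIN internals.
n_samples, n_dof, n_channels, r = 4, 16, 2, 3

dz_dxr = tf.random.normal((n_samples, r, n_dof, n_channels))  # dz w.r.t. conv features
dxr_dt = tf.random.normal((n_samples, n_dof, n_channels))     # multi-dim time derivative

dz_dxr_flat = tf.reshape(dz_dxr, (n_samples, r, n_dof * n_channels))
dxr_dt_flat = tf.reshape(dxr_dt, (n_samples, n_dof * n_channels))

# Chain rule in vectorized form: dz/dt = (dz/dx_r) @ dx_r/dt
dz_dt = tf.einsum("srn,sn->sr", dz_dxr_flat, dxr_dt_flat)
print(dz_dt.shape)   # (4, 3)
```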
aphin.identification.ph_basemodel module
- class aphin.identification.ph_basemodel.PHBasemodel(*args, **kwargs)[source]
Bases:
Model, ABC
Base model for port-Hamiltonian identification networks.
- build_loss(inputs)[source]
Split input into state, its derivative, and the parameters, perform the forward pass, calculate the loss, and update the weights.
- Parameters:
inputs (list of array-like) – Input data.
- Returns:
List of loss values.
- Return type:
list
- fit(x, y=None, validation_data=None, **kwargs)[source]
Wrapper for the fit function of the Keras model to flatten the data if necessary.
- Parameters:
x (array-like) – Training data.
y (array-like, optional) – Target data, by default None.
validation_data (tuple or array-like, optional) – Data for validation, by default None.
**kwargs (dict) – Additional arguments for the fit function.
- Returns:
A History object. Its History.history attribute is a record of training loss values and metrics values at successive epochs, as well as validation loss values and validation metrics values (if applicable).
- Return type:
History
- get_system_weights()[source]
Get the weights of the system identification part of the model.
- Returns:
List of system weights.
- Return type:
list
- get_trainable_weights()[source]
Get the trainable weights of the model.
- Returns:
List of trainable weights.
- Return type:
list
- static load(ph_network, x=None, u=None, mu=None, path: str = None, kwargs_overwrite: dict = None)[source]
Load the model from the given path.
- Parameters:
ph_network (callable) – The port-Hamiltonian network to be loaded.
x (array-like, optional) – Data needed to initialize the model, by default None.
u (array-like, optional) – Control inputs, by default None.
mu (array-like, optional) – Parameters used to create the model the first time, by default None.
path (str, optional) – Path to the model, by default None.
kwargs_overwrite (dict, optional) – Additional kwargs to overwrite the config, by default None.
- Returns:
Loaded model.
- Return type:
aphin.identification.phin module
- class aphin.identification.phin.PHIN(*args, **kwargs)[source]
Bases:
PHBasemodel, ABC
Port-Hamiltonian identification network (PHIN). Model to discover the dynamics of a system using a layer for the identification of dynamical systems (see SystemLayer), e.g., a PHLayer (port-Hamiltonian).
- build_model(x, u, mu)[source]
Build the model.
- Parameters:
x (array-like) – Full state with shape (n_samples, n_features).
u (array-like, optional) – Inputs with shape (n_samples, n_inputs).
mu (array-like, optional) – Parameters with shape (n_samples, n_params).
- Return type:
None
- get_loss(x, dx_dt, u, mu=None)[source]
Calculate loss.
- Parameters:
x (array-like) – Full state with shape (n_samples, n_features).
dx_dt (array-like) – Time derivative of state with shape (n_samples, n_features).
u (array-like) – System input with shape (n_samples, n_inputs).
mu (array-like, optional) – System parameters with shape (n_samples, n_parameters), by default None.
- Returns:
Tuple containing dz_loss, reg_loss, and total loss.
- Return type:
tuple
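A rough sketch of this loss structure is given below, assuming the identified pH dynamics dz_dt = (J - D) @ Q @ z + B @ u with hard-coded matrices (in PHIN they come from the system identification layer) and a plain MSE data term; the actual weighting and assembly may differ.

```python
import tensorflow as tf

# Sketch of the loss structure: a data term (dz_loss) comparing the measured
# time derivative with the one predicted by the identified pH system
# dz_dt = (J - D) @ Q @ z + B @ u, plus regularization (reg_loss).
# Matrices are hard-coded for illustration; in PHIN they come from the
# system identification layer, and the actual assembly may differ.
n, r, n_u = 32, 4, 1
z = tf.random.normal((n, r))
dz_dt = tf.random.normal((n, r))
u = tf.random.normal((n, n_u))

J = tf.constant([[0.0, 1.0, 0.0, 0.0], [-1.0, 0.0, 0.0, 0.0],
                 [0.0, 0.0, 0.0, 1.0], [0.0, 0.0, -1.0, 0.0]])
D = 0.1 * tf.eye(r)
Q = tf.eye(r)
B = tf.ones((r, n_u))

dz_dt_pred = z @ tf.transpose((J - D) @ Q) + u @ tf.transpose(B)
dz_loss = tf.reduce_mean(tf.square(dz_dt - dz_dt_pred))
reg_loss = tf.constant(0.0)          # stand-in for the model's regularization losses
total_loss = dz_loss + reg_loss
print(float(dz_loss), float(total_loss))
```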
aphin.identification.projection_aphin module
- class aphin.identification.projection_aphin.DecoderLatentTransformation(*args, **kwargs)[source]
Bases:
Layer
Linear Transformation of the form: z to z_ with z_ = (psi^T @ phi)^-1 @ z
- class aphin.identification.projection_aphin.DecoderLinearProjection(*args, **kwargs)[source]
Bases:
Layer
Linear backprojection/decoding following dec_lin: z_ to x_lin with x_lin = phi @ z_
- class aphin.identification.projection_aphin.DecoderNonlinearProjection(*args, **kwargs)[source]
Bases:
Layer
Nonlinear projection/decoding following dec_nl: z_ to x_nl with x_nl = (I - phi @ (psi^T @ phi)^-1 @ psi^T) @ H(z_)
- class aphin.identification.projection_aphin.DecoderNonlinearTransformation(*args, **kwargs)[source]
Bases:
Layer
Nonlinear transformation in the latent space on the decoder side (must be the inverse of the corresponding encoder-side transformation): z_1 to z_2 with z_2 = (act^-1(z_1) - b) @ W^-1
- class aphin.identification.projection_aphin.EncoderNonlinearTransformation(*args, **kwargs)[source]
Bases:
Layer
Nonlinear transformation in the latent space on the encoder side following: z to z_ with z_ = act(W @ z + b)
- class aphin.identification.projection_aphin.EncoderProjection(*args, **kwargs)[source]
Bases:
Layer
Linear projection/encoding following enc: x to z with z = psi^T @ x
- class aphin.identification.projection_aphin.ProjectionAPHIN(*args, **kwargs)[source]
Bases:
APHIN
Projection-Conserving autoencoder-based port-Hamiltonian Identification Network (ApHIN). This is an implementation of an autoencoder that really is a projection, i.e. AE(x) = AE(AE(x)) with AE(x) = Dec(Enc(x))
- Encoder:
- z = h o … o h o psi^T @ x
with nonlinear transformation function h( )
- Decoder:
- (phi + (I - phi @ (psi^T @ phi)^-1 @ psi^T) H) o h^-1 o … o h^-1 o (psi^T @ phi)^-1 @ z
with the inverse h^-1( ) of the nonlinear transformation function h( ) and the nonlinear decoder H( )
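The linear part of this construction is an oblique projector P = phi @ (psi^T @ phi)^-1 @ psi^T, which is idempotent; the short NumPy check below verifies P @ P = P for random psi and phi (an illustration of the projection property, not the network code).

```python
import numpy as np

# Idempotence check for the linear part of the projection-conserving
# autoencoder: P = phi @ inv(psi^T @ phi) @ psi^T is an oblique projector,
# so applying it twice changes nothing (AE(AE(x)) = AE(x)).
rng = np.random.default_rng(0)
n, r = 30, 4
phi = rng.standard_normal((n, r))
psi = rng.standard_normal((n, r))

P = phi @ np.linalg.inv(psi.T @ phi) @ psi.T   # (n, n) oblique projector
x = rng.standard_normal(n)

x1 = P @ x
x2 = P @ x1
print(np.allclose(x1, x2))   # True: projecting twice equals projecting once
```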
- build_decoder(z)[source]
Build the decoder part of the autoencoder. Decoder: (phi + (I - phi @ (psi^T @ phi)^-1 @ psi^T) H) o h^-1 o … o h^-1 o (psi^T @ phi)^-1 @ z with the invertible nonlinear transformation function h( ) and the nonlinear decoder H( )
- Parameters:
z (array-like) – Latent variable.
- Returns:
z_dec – Decoded variable.
- Return type:
array-like
- build_encoder(z_pca)[source]
Build the encoder part of the autoencoder. z = h o … o h o psi^T @ x with nonlinear transformation function h( )
- Parameters:
z_pca (array-like) – Input to the autoencoder.
- Returns:
z – Encoded latent variable.
- Return type:
array-like
- aphin.identification.projection_aphin.activation_custom(x, alpha=0.39269908169872414, dtype=tf.float32)[source]
Invertible nonlinear activation function; see Otto, S. E. (2022), Advances in Data-Driven Modeling and Sensing for High-Dimensional Nonlinear Systems, doctoral dissertation, Princeton University, Eq. 3.71.
- Parameters:
x (array-like) – Input data.
alpha (float, optional) – Parameter for the activation function, by default np.pi / 8.
dtype (tf.DType, optional) – Data type, by default tf.float32.
- Returns:
Transformed data.
- Return type:
array-like
- aphin.identification.projection_aphin.activation_custom_inv(x, alpha=0.39269908169872414, dtype=tf.float32)[source]
Inverse of the nonlinear invertible activation function; see Otto, S. E. (2022), Advances in Data-Driven Modeling and Sensing for High-Dimensional Nonlinear Systems, doctoral dissertation, Princeton University, Eq. 3.71.
- Parameters:
x (array-like) – Input data.
alpha (float, optional) – Parameter for the activation function, by default np.pi / 8.
dtype (tf.DType, optional) – Data type, by default tf.float32.
- Returns:
Transformed data.
- Return type:
array-like
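Since the two functions are inverses of each other, a quick round-trip check can verify the property; this assumes the aphin package is importable and that the functions accept TensorFlow tensors, as their signatures suggest.

```python
import numpy as np
import tensorflow as tf
from aphin.identification.projection_aphin import (
    activation_custom,
    activation_custom_inv,
)

# Round-trip check of the invertible activation pair (requires aphin).
x = tf.linspace(-3.0, 3.0, 11)
x_back = activation_custom_inv(activation_custom(x))
print(np.allclose(np.asarray(x), np.asarray(x_back), atol=1e-5))  # expected: True
```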