NeuralROMs.CosineTransform
NeuralROMs.FourierTransform
NeuralROMs.GalerkinProjection
NeuralROMs.OpConv
NeuralROMs.OpConv
NeuralROMs.OpConvBilinear
NeuralROMs.OpConvBilinear
NeuralROMs.PeriodicLayer
NeuralROMs.SplitRows
NeuralROMs.AutoDecoder
NeuralROMs.FlatDecoder
NeuralROMs.HyperDecoder
NeuralROMs.ImplicitEncoderDecoder
NeuralROMs.OpKernel
NeuralROMs.PSNR
NeuralROMs.PSNR
NeuralROMs.PermutedBatchNorm
NeuralROMs.__opconv
NeuralROMs._ntimes
NeuralROMs.callback
NeuralROMs.codereg_autodecoder
NeuralROMs.elasticreg
NeuralROMs.forwarddiff_deriv1
NeuralROMs.fullbatch_metric
NeuralROMs.get_state
NeuralROMs.interp_cubic
NeuralROMs.linear_nonlinear
NeuralROMs.mae
NeuralROMs.mae_clamped
NeuralROMs.make_minconfig
NeuralROMs.makecallback
NeuralROMs.mse
NeuralROMs.normalize_t
NeuralROMs.normalize_u
NeuralROMs.opconv__
NeuralROMs.opconv_wt
NeuralROMs.optimize
NeuralROMs.optimize
NeuralROMs.plot_1D_surrogate_steady
NeuralROMs.pnorm
NeuralROMs.regularize_autodecoder
NeuralROMs.regularize_decoder
NeuralROMs.regularize_flatdecoder
NeuralROMs.rsquare
NeuralROMs.statistics
NeuralROMs.train_model
NeuralROMs.CosineTransform — Type

struct CosineTransform{D} <: NeuralROMs.AbstractTransform{D}
NeuralROMs.FourierTransform — Type

struct FourierTransform{D} <: NeuralROMs.AbstractTransform{D}
NeuralROMs.GalerkinProjection — Type

GalerkinProjection

original: u' = f(u, t)
ROM map : u = g(ũ)

⟹ J(ũ) * ũ' = f(g(ũ), t)

⟹ ũ' = pinv(J(ũ)) * f(g(ũ), t)

solve with timestepper ⟹ ũ' = f̃(ũ, t)

e.g. (J*u)_n+1 - (J*u)_n = Δt * (f_n + f_n-1 + ...)
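The projected right-hand side follows directly from the relations above. Below is a minimal sketch (not the package's GalerkinProjection type), using ForwardDiff for the Jacobian; g and f_full are placeholder functions chosen only for illustration.

```julia
# Minimal sketch of the Galerkin-projected dynamics; `g` (ROM map) and
# `f_full` (full-order RHS) are illustrative placeholders.
using ForwardDiff, LinearAlgebra

g(ũ) = [ũ[1], ũ[2], ũ[1] * ũ[2]]       # ROM map: latent ũ ∈ ℝ² → full state u ∈ ℝ³
f_full(u, t) = -u .+ t                  # full-order RHS u' = f(u, t)

function f_rom(ũ, t)
    J = ForwardDiff.jacobian(g, ũ)      # J(ũ) = ∂g/∂ũ
    pinv(J) * f_full(g(ũ), t)           # ũ' = pinv(J(ũ)) * f(g(ũ), t)
end

f_rom([0.5, 1.0], 0.0)
```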
NeuralROMs.OpConv — Type

Neural Operator convolution layer

TODO OpConv design considerations:

- create AbstractTransform interface
- initialize params Wre, Wimag if eltype(Transform) isn't real, so that eltype(params) is always real
NeuralROMs.OpConv — Method

OpConv(ch_in, ch_out, modes; init, transform)
NeuralROMs.OpConvBilinear — Type

Neural Operator bilinear convolution layer

NeuralROMs.OpConvBilinear — Method

Extends OpConv to accept two inputs. Like Lux.Bilinear, but in modal space.
NeuralROMs.PeriodicLayer — Type

x -> sin(π⋅x/L)

Works when the input is symmetric around 0, i.e., x ∈ [-1, 1). If working with a domain like [0, 1], use cosines instead.
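A minimal sketch of the feature map itself (not the Lux layer), assuming L = 1:

```julia
# Periodic feature map x -> sin(π x / L), applied elementwise.
periodic_feature(x; L = 1f0) = sin.(Float32(π) .* x ./ L)

x = collect(range(-1f0, 1f0; length = 5))
periodic_feature(x)      # ≈ 0 at both endpoints, reflecting the periodicity
```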
NeuralROMs.SplitRows — Type

SplitRows

Splits the rows of an ND array into a Tuple of ND arrays.
NeuralROMs.AutoDecoder — Method

AutoDecoder

Assumes the input is (xyz, idx) of sizes [in_dim, K] and [1, K], respectively.
NeuralROMs.FlatDecoder — Function

FlatDecoder

Input: (x, param) of sizes [x_dim, K] and [p_dim, K], respectively. Output: solution field u of size [out_dim, K].
NeuralROMs.HyperDecoder — Method

HyperDecoder

Assumes the input is (xyz, idx) of sizes [D, K] and [1, K], respectively.
NeuralROMs.ImplicitEncoderDecoder — Method

ImplicitEncoderDecoder

Composition of a (possibly convolutional) encoder and an implicit neural network decoder.

The input array [Nx, Ny, C, B] or [C, Nx, Ny, B] is expected to contain XYZ coordinates in the last dim entries of the channel dimension, which is dictated by channel_dim. The number of channels in the input array must match encoder_width + D, where encoder_width is the expected input width of your encoder. The encoder network is expected to work with whatever channel_dim and encoder_channels you choose.

NOTE: channel_dim is set to 1, so the assumed layout is [C, Nx, Ny, B].

The coordinates are split off and the remaining channels are passed to the encoder, which compresses each [:, :, :, 1] slice into a latent vector of length L. The output of the encoder is of size [L, B].

With a compressed representation of each image, we are ready to apply the decoder mapping. The decoder is an implicit neural network which expects as input the concatenation of the latent vector and a query point, and returns the value of the target field at that point.

The decoder is usually a deep neural network and expects the channel dimension to be the leading dimension. It expects input with a leading dimension of size L + dim, and returns an array with leading size out_dim.

Here, we feed it an array of size [L+2, Nx, Ny, B], where Npoints = (Nx, Ny) is the number of training points in each trajectory.
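A shape-only walkthrough of this composition, with toy closures standing in for the encoder and decoder networks (the real layers are Lux networks; the sizes below are purely illustrative):

```julia
# Shape-only sketch of the encoder/decoder composition. Layout: [C, Nx, Ny, B],
# with the first 2 channels holding XY coordinates.
C, Nx, Ny, B, L = 8, 16, 16, 4, 6

x = randn(Float32, C, Nx, Ny, B)
coords, fields = x[1:2, :, :, :], x[3:end, :, :, :]     # split coordinates off

encoder(u) = randn(Float32, L, size(u)[end])            # toy encoder: [C-2, Nx, Ny, B] -> [L, B]
code = encoder(fields)

# broadcast each latent code over its grid, then concatenate the query coordinates
code_grid  = reshape(code, L, 1, 1, B) .* ones(Float32, 1, Nx, Ny, 1)   # [L, Nx, Ny, B]
decoder_in = vcat(code_grid, coords)                                    # [L+2, Nx, Ny, B]

decoder(z) = sum(z; dims = 1)                           # toy implicit decoder -> [1, Nx, Ny, B]
size(decoder(decoder_in))
```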
NeuralROMs.OpKernel — Method

OpKernel(ch_in, ch_out, modes; ...)
OpKernel(
ch_in,
ch_out,
modes,
activation;
transform,
init,
use_bias
)
Accepts data in shape (C, X1, ..., Xd, B).
NeuralROMs.PSNR — Method

PSNR(y, ŷ, maxval) --> -10 * log10(mse(y, ŷ) / maxval^2)

Peak signal-to-noise ratio.
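As a sanity check of the formula above, a standalone sketch (psnr here is a local helper, not the package method):

```julia
# PSNR = -10 * log10(mse / maxval^2), with mse taken over all entries.
psnr(y, ŷ, maxval) = -10 * log10(sum(abs2, ŷ .- y) / length(y) / maxval^2)

y, ŷ = rand(Float32, 28, 28), rand(Float32, 28, 28)
psnr(y, ŷ, 1f0)
```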
NeuralROMs.PSNR — Method

PSNR(maxval)(NN, p, st, batch) --> PSNR
NeuralROMs.PermutedBatchNorm — Method

PermutedBatchNorm(c, num_dims)

Assumes the channel dimension is 1.
NeuralROMs.__opconv — Method

__opconv(x, transform, modes)

Accepts x of size [C, N1...Nd, B]. Returns x̂ of size [C, M, B] where M = prod(modes).

Operations

- apply transform to N1...Nd: [K1...Kd, C, B] <- [N1...Nd, C, B]
- truncate (discard high-freq modes): [M1...Md, C, B] <- [K1...Kd, C, B] where modes == (M1...Md)
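A rough illustration of the transform-and-truncate idea using FFTW's rfft. The internal layout, permutations, and the handling of negative-frequency corners differ in the actual implementation; this only shows keeping the first (M1, M2) modes:

```julia
# Transform the spatial dims, then keep only low-frequency modes (simplified).
using FFTW

x = randn(Float32, 8, 32, 32, 4)             # [C, N1, N2, B]
modes = (12, 12)                             # (M1, M2)

x̂ = rfft(x, 2:3)                             # real FFT over the spatial dims
x̂_tr = x̂[:, 1:modes[1], 1:modes[2], :]       # truncate: discard high-freq modes
size(x̂_tr)                                   # (8, 12, 12, 4)
```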
NeuralROMs._ntimes — Method

_ntimes(x, (Nx, Ny)): x [L, B] --> [L, Nx, Ny, B]

Makes Nx ⋅ Ny copies of the first dimension and stores them in the following dimensions. Works for any (Nx, Ny, ...).
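The same copy-and-stack behaviour can be reproduced with reshape and repeat; this is only a shape illustration, not the package implementation:

```julia
x = randn(Float32, 6, 4)                          # [L, B]
Nx, Ny = 8, 8

y = repeat(reshape(x, 6, 1, 1, 4), 1, Nx, Ny, 1)  # [L, Nx, Ny, B]
size(y)                                           # (6, 8, 8, 4)
```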
NeuralROMs.callback — Method

callback(
p,
st;
io,
_loss,
loss_,
_printstatistics,
printstatistics_,
STATS,
epoch,
nepoch,
notestdata
)
NeuralROMs.codereg_autodecoder — Method

codereg_autodecoder(lossfun, σ; property)(NN, p, st, batch) -> l, st, stats

Code-regularized loss: lossfun(..) + 1/σ² ||ũ||₂²
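A sketch of the added code-penalty term from the formula above, with ũ as a stand-in latent-code array (not the package's closure):

```julia
# Code-regularization penalty: (1/σ²) ‖ũ‖₂².
code_penalty(ũ, σ) = sum(abs2, ũ) / σ^2

ũ = randn(Float32, 16, 32)        # [code_len, K]
base_loss = 0.1f0                 # placeholder for lossfun(...)
total = base_loss + code_penalty(ũ, 1f0)
```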
NeuralROMs.elasticreg — Method

elasticreg(lossfun, λ1, λ2)(NN, p, st, batch) -> l, st, stats

Elastic regularization (L1 + L2).
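A minimal sketch of the elastic penalty added to the base loss, with a flat parameter vector standing in for the Lux parameter tree (elastic_penalty is a hypothetical helper):

```julia
# Elastic regularization: λ1 ‖p‖₁ + λ2 ‖p‖₂².
elastic_penalty(p, λ1, λ2) = λ1 * sum(abs, p) + λ2 * sum(abs2, p)

p = randn(Float32, 100)
base_loss = 0.1f0                 # placeholder for lossfun(...)
total = base_loss + elastic_penalty(p, 1f-4, 1f-4)
```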
NeuralROMs.forwarddiff_deriv1 — Method

Based on SparseDiffTools.auto_jacvec.

MWE:
f = x -> exp.(x)
f = x -> x .^ 2
x = [1.0, 2.0, 3.0, 4.0]
forwarddiff_deriv1(f, x)
forwarddiff_deriv2(f, x)
forwarddiff_deriv4(f, x)
NeuralROMs.fullbatch_metric — Method

fullbatch_metric(NN, p, st, loader, lossfun, ismean) -> l

Only for callbacks. Enforce this by setting Lux.testmode.

- NN, p, st: neural network
- loader: data loader
- lossfun: loss function with signature (x::Array, y::Array) -> l::Real
NeuralROMs.get_state — Method

Returns t, p, u, f, f̃.
NeuralROMs.interp_cubic — Method

Cubic Hermite interpolation.
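For reference, a generic cubic Hermite interpolant on a single interval; hermite_cubic is a hypothetical helper and does not reproduce interp_cubic's signature:

```julia
# Cubic Hermite interpolation on [t0, t1] from endpoint values and derivatives.
function hermite_cubic(t, t0, t1, u0, u1, du0, du1)
    h = t1 - t0
    s = (t - t0) / h                 # normalized coordinate s ∈ [0, 1]
    h00 =  2s^3 - 3s^2 + 1           # Hermite basis functions
    h10 =      s^3 - 2s^2 + s
    h01 = -2s^3 + 3s^2
    h11 =      s^3 -  s^2
    h00*u0 + h10*h*du0 + h01*u1 + h11*h*du1
end

hermite_cubic(0.5, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0)   # ≈ 0.5
```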
NeuralROMs.linear_nonlinear — Function

linear_nonlinear(split, nonlin, linear, bilinear)
linear_nonlinear(split, nonlin, linear, bilinear, project)

If you have nonlinear dependence on x1 and linear dependence on x2, then

x1 → nonlin → y1 ↘
                  bilinear → project → z
x2 → linear → y2 ↗

Arguments

- Call nonlin as nonlin(x1, p, st)
- Call linear as linear(x2, p, st)
- Call bilin as bilin((y1, y2), p, st)
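Toy closures illustrating the data flow in the diagram above; the actual nonlin, linear, and bilinear arguments are Lux layers called with (x, p, st), which is elided here:

```julia
nonlin(x1)       = tanh.(x1)          # x1 → nonlin → y1
linear(x2)       = 2f0 .* x2          # x2 → linear → y2
bilinear(y1, y2) = y1 .* y2           # (y1, y2) → bilinear → z

x1, x2 = randn(Float32, 4), randn(Float32, 4)
z = bilinear(nonlin(x1), linear(x2))
```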
NeuralROMs.mae — Method

mae(ypred, ytrue) -> l
mae(NN, p, st, batch) -> l, st, stats

Mean absolute error.
NeuralROMs.mae_clamped — Method

mae_clamped(δ)(NN, p, st, batch) -> l, st, stats

Clamped mean absolute error.
NeuralROMs.make_minconfig — Method

Early stopping based on mini-batch loss from the test set. See https://github.com/jeffheaton/appdeeplearning/blob/main/t81558class034earlystop.ipynb
NeuralROMs.makecallback — Method

makecallback(
NN,
_loader,
loader_,
lossfun;
STATS,
stats,
io,
cb_epoch,
notestdata
)
NeuralROMs.mse — Method

mse(ypred, ytrue) -> l
mse(NN, p, st, batch) -> l, st, stats

Mean squared error.
NeuralROMs.normalize_t — Method

t ∈ [0, T]. Input size [Ntime].
NeuralROMs.normalize_u — Method

Input size [out_dim, ...].
NeuralROMs.opconv__ — Method

opconv__(ŷ_tr, transform, modes, Ks, Ns)
NeuralROMs.opconv_wt — Method

opconv_wt(x, W)

Apply a pointwise linear transform in mode space, i.e. no mode-mixing; a unique linear transform for each mode.

Operations

- reshape: [Ci, M, B] <- [Ci, M1...Md, B] where M = prod(M1...Md)
- apply weight
- reshape: [Co, M1...Md, B] <- [Co, M, B]
NeuralROMs.optimize — Function

optimize(opt, NN, p, st, nepoch, _loader, loader_; ...)
optimize(
opt,
NN,
p,
st,
nepoch,
_loader,
loader_,
__loader;
lossfun,
opt_st,
cb,
io,
fullbatch_freq,
early_stopping,
patience,
schedule,
kwargs...
)
Train parameters p to minimize loss using optimization strategy opt.

Arguments

- Loss signature: loss(p, st) -> y, st
- Callback signature: cb(p, st, epoch, nepoch) -> nothing
NeuralROMs.optimize — Function

References:

- https://docs.sciml.ai/Optimization/stable/tutorials/minibatch/
- https://lux.csail.mit.edu/dev/tutorials/advanced/1_GravitationalWaveForm#training-the-neural-network
NeuralROMs.plot_1D_surrogate_steady — Method

plot_1D_surrogate_steady(
V,
_data,
data_,
NN,
p,
st;
nsamples,
dir,
format
)
NeuralROMs.pnorm — Method

pnorm(p)(y, ŷ) -> l
pnorm(p)(NN, p, st, batch) -> l, st, stats

P-norm.
NeuralROMs.regularize_autodecoder — Method

regularize_autodecoder(lossfun, σ, λ1, λ2, property)(NN, p, st, batch) -> l, st, stats

Code-regularized loss with L1/L2 and Lipschitz regularization on the decoder: lossfun(..) + 1/σ² ||ũ||₂² + L1/L2 on decoder + Lipschitz reg. on decoder
NeuralROMs.regularize_decoder — Method

regularize_decoder(lossfun, σ, λ1, λ2, property)(NN, p, st, batch) -> l, st, stats

Code-regularized loss with L1/L2 and Lipschitz regularization on the decoder: lossfun(..) + 1/σ² ||ũ||₂² + L1/L2 on decoder + Lipschitz reg. on decoder
NeuralROMs.regularize_flatdecoder — Method

regularize_flatdecoder(lossfun, σ, λ1, λ2, property)(NN, p, st, batch) -> l, st, stats

lossfun(..) + L2 (on hyper) + Lipschitz (on decoder)
NeuralROMs.rsquare — Method

rsquare(ypred, ytrue) -> 1 - MSE(ytrue, ypred) / var(ytrue)

Calculate the R² (coefficient of determination) score.
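A standalone sketch of the score as defined above (rsq is a local helper, not the package method; var comes from the Statistics standard library):

```julia
# R² = 1 - MSE(ytrue, ypred) / var(ytrue)
using Statistics
rsq(ypred, ytrue) = 1 - sum(abs2, ypred .- ytrue) / length(ytrue) / var(ytrue)

ytrue = randn(Float32, 100)
ypred = ytrue .+ 0.1f0 .* randn(Float32, 100)
rsq(ypred, ytrue)       # close to 1 for a good fit
```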
NeuralROMs.statistics — Method

statistics(NN, p, st, loader)
NeuralROMs.train_model — Method

train_model(NN, _data; ...)
train_model(
NN,
_data,
data_;
rng,
_batchsize,
batchsize_,
__batchsize,
opts,
nepochs,
schedules,
fullbatch_freq,
early_stoppings,
patience_fracs,
weight_decays,
dir,
name,
metadata,
io,
p,
st,
lossfun,
device,
cb_epoch
)
Arguments

- NN: Lux neural network
- _data: training data as (x, y). x may be an AbstractArray or a tuple of arrays
- data_: testing data (same requirement as _data)

Keyword Arguments

- rng: random number generator
- _batchsize/batchsize_: train/test batch size
- opts/nepochs: NTuple of optimizers, # epochs per optimizer
- cbstep: prompt callback function every cbstep epochs
- dir/name: directory to save model and plots, model name
- io: IO for printing stats
- p/st: initial model parameter, state. If nothing, initialized with Lux.setup(rng, NN)