Introduction
This tutorial couples a steady incompressible Navier–Stokes solver (SIMPLE algorithm) with a small neural network that predicts the coefficients of the eddy viscosity field from the reduced-order (POD) velocity coefficients. The geometry is the simpleTurbGeomClosed case, and the objective is to learn a turbulence model in a reduced-order setting.
The workflow consists of two stages:
- Offline snapshot generation using the parent class SteadyNSSimple to produce velocity, pressure and turbulent viscosity fields for a set of parameter values (e.g. angle of attack); these are written to ./ITHACAoutput/Offline/.
- Neural‑network training. The SteadyNSSimpleNN class (defined in 01simpleTurbGeomClosed.C) computes L2 projection coefficients of the snapshots onto the POD modes and trains a fully‑connected network (two hidden layers of 128 and 64 neurons) using PyTorch. The network takes the reduced coefficients (and optionally a parameter value) as input and predicts the coefficients of the eddy viscosity field.
Once trained, the network is exported in TorchScript format and can be loaded with SteadyNSSimpleNN::loadNet. The online solver then evaluates the network to supply a turbulence closure while solving the reduced system.
A detailed look into the code
Let's have a look at all the useful scripts for tutorial NN-01.
Files of interest
- 01simpleTurbGeomClosed.C – contains the SteadyNSSimpleNN definition and main routine orchestrating offline and online phases: we will comment this script, which is the core of the tutorial, in the next section.
- train.py – Python script for training the network outside the C++ code.
- simpleTurbGeomClosed.py – helper utilities for data preprocessing.
- angOff_mat.txt, angOn_mat.txt, vel.txt – example matrices and fields.
Useful libraries are first included:
#include <torch/script.h>
#include <torch/torch.h>
#include "forces.H"
#include "IOmanip.H"
#include "RBFMotionSolver.H"
The headers of the ITHACAPOD, ITHACAstream, reducedSteadyNS and steadyNS classes, together with the header of the torch2Eigen namespace, are also included.
Among these we have torch, which we will use to load the pre-trained neural network and to infer new outputs (the eddy viscosity coefficients) from new inputs (the velocity coefficients).
The statement using namespace ITHACAtorch::torch2Eigen; makes the tensor/matrix conversion utilities directly accessible. This is important because the code frequently moves data from the reduced basis representation stored as Eigen objects into Torch tensors for neural network inference, and then converts the predicted output back to Eigen form.
A class SteadyNSSimpleNN is here defined. This class extends SteadyNSSimple. Its purpose is to augment the standard steady SIMPLE full-order problem with a neural network interface. In other words, this class is responsible for everything related to the coupling between the CFD reduced-order workflow and the trained Torch model.
class SteadyNSSimpleNN : public SteadyNSSimple
{
    public:
        SteadyNSSimpleNN(int argc, char* argv[])
            :
            SteadyNSSimple(argc, argv)
        {
            Net->push_back(torch::nn::Linear(NUmodes, 128));
            Net->push_back(torch::nn::ReLU());
            Net->push_back(torch::nn::Linear(128, 64));
            Net->push_back(torch::nn::ReLU());
            Net->push_back(torch::nn::Linear(64, NNutModes));
            optimizer = new torch::optim::Adam(Net->parameters(),
                                               torch::optim::AdamOptions(2e-2));
        };
Eigen::MatrixXd bias_inp;
Eigen::MatrixXd scale_inp;
Eigen::MatrixXd bias_out;
Eigen::MatrixXd scale_out;
Eigen::MatrixXd coeffL2Nut;
Eigen::MatrixXd coeffL2U;
torch::Tensor coeffL2U_tensor;
torch::Tensor coeffL2Nut_tensor;
torch::nn::Sequential Net;
torch::optim::Optimizer* optimizer;
torch::jit::script::Module netTorchscript;
Here NUmodes and NNutModes denote the numbers of velocity and eddy-viscosity (nut) modes used for the projection; they size the input and output layers of the network.
The constructor first initializes the parent class and then defines a fully connected feedforward neural network that maps the reduced velocity coefficients to the reduced turbulent-viscosity coefficients. The architecture consists of an input linear layer from NUmodes to 128 neurons, a ReLU activation, a second linear layer from 128 to 64 neurons, another ReLU, and finally an output layer from 64 to NNutModes. This is a straightforward multilayer perceptron used as a nonlinear regression model, trained with the Adam optimizer.

The matrices bias_inp, scale_inp, bias_out, and scale_out store the normalization parameters used for input and output preprocessing. They are essential because the network is not evaluated on raw reduced coefficients: the input is normalized before inference and the output is denormalized afterward, which stabilizes training and makes the model numerically more robust. The matrices coeffL2Nut and coeffL2U, together with their tensor counterparts, store the reduced coefficients of the turbulent viscosity and velocity fields; they are part of the dataset preparation for training.

The members Net, optimizer, and netTorchscript represent the neural network in two different forms: Net is the trainable C++ Torch module, optimizer manages parameter updates during training, and netTorchscript is the serialized, pre-trained model loaded from disk for inference. In this file, the TorchScript model is the one actually used during the online phase.
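As a concrete illustration of this architecture, the following NumPy sketch (not the tutorial's code; the weights and the mode counts are placeholder assumptions) performs the same NUmodes → 128 → 64 → NNutModes forward pass:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def mlp_forward(a, params):
    """Forward pass of the NUmodes -> 128 -> 64 -> NNutModes regression net."""
    h1 = relu(params["W1"] @ a + params["b1"])   # NUmodes -> 128, ReLU
    h2 = relu(params["W2"] @ h1 + params["b2"])  # 128 -> 64, ReLU
    return params["W3"] @ h2 + params["b3"]      # 64 -> NNutModes (linear output)

# Random weights, used only to check the shapes of the pipeline
rng = np.random.default_rng(0)
NUmodes, NNutModes = 10, 6
params = {"W1": rng.standard_normal((128, NUmodes)),  "b1": np.zeros(128),
          "W2": rng.standard_normal((64, 128)),       "b2": np.zeros(64),
          "W3": rng.standard_normal((NNutModes, 64)), "b3": np.zeros(NNutModes)}
g = mlp_forward(rng.standard_normal(NUmodes), params)  # predicted nut coefficients
```

In the tutorial the same mapping is built with torch::nn::Sequential and trained with Adam; the sketch only clarifies the data flow.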
The method loadNet(word filename) loads the trained neural network and all associated normalization data. First, it checks whether the file exists and aborts with a clear error message if it does not. This is a protective step because the reduced turbulent closure cannot work unless the network has already been trained for the chosen numbers of velocity and turbulent-viscosity modes. Then the TorchScript model is loaded from disk, and the normalization arrays are read from .npy files. Finally, the model is switched to evaluation mode, which disables any training-specific behavior and ensures deterministic inference.
void loadNet(word filename)
{
    std::string Msg = filename +
                      " is not existing, please run the training stage of the net with the correct number of modes for U and Nut";
    M_Assert(check_file(filename), Msg.c_str());
    netTorchscript = torch::jit::load(filename);
    cnpy::load(bias_inp, "ITHACAoutput/NN/minAnglesInp_" + name(
                   NUmodes) + "_" + name(NNutModes) + ".npy");
    cnpy::load(scale_inp, "ITHACAoutput/NN/scaleAnglesInp_" + name(
                   NUmodes) + "_" + name(NNutModes) + ".npy");
    cnpy::load(bias_out, "ITHACAoutput/NN/minOut_" + name(NUmodes) + "_" + name(
                   NNutModes) + ".npy");
    cnpy::load(scale_out, "ITHACAoutput/NN/scaleOut_" + name(NUmodes) + "_" + name(
                   NNutModes) + ".npy");
    netTorchscript.eval();
}
The method getTurbNN() prepares the training and testing datasets for the neural network. It only runs if the folder ITHACAoutput/NN/coeffs does not already exist. The function first reads the offline snapshots for velocity, pressure, and turbulent viscosity from ./ITHACAoutput/Offline/. Note that the snapshots are read with readMiddleFields: since the case is steady, the intermediate SIMPLE states are also included in the snapshot matrix, which improves the accuracy of the ROM.
void getTurbNN()
{
    if (!ITHACAutilities::check_folder("ITHACAoutput/NN/coeffs"))
    {
        mkDir("ITHACAoutput/NN/coeffs");
        PtrList<volVectorField> UfieldTrain;
        PtrList<volScalarField> PfieldTrain;
        PtrList<volScalarField> nutFieldsTrain;
        ITHACAstream::readMiddleFields(UfieldTrain, _U(),
                                       "./ITHACAoutput/Offline/");
        ITHACAstream::readMiddleFields(PfieldTrain, _p(),
                                       "./ITHACAoutput/Offline/");
        auto nut_train = _mesh().lookupObject<volScalarField>("nut");
        ITHACAstream::readMiddleFields(nutFieldsTrain, nut_train,
                                       "./ITHACAoutput/Offline/");
It then projects these full fields onto their respective reduced bases using ITHACAutilities::getCoeffs. The result is a set of reduced coordinates for the training data. These coefficient matrices are transposed so that each row corresponds to a sample, which is the natural layout for machine learning datasets, and then saved as NumPy files.
        Eigen::MatrixXd coeffL2U_train = ITHACAutilities::getCoeffs(UfieldTrain,
                                         Umodes, 0, true);
        Info << "Computing the coefficients for p train" << endl;
        Eigen::MatrixXd coeffL2P_train = ITHACAutilities::getCoeffs(PfieldTrain,
                                         Pmodes, 0, true);
        Info << "Computing the coefficients for nuT train" << endl;
        Eigen::MatrixXd coeffL2Nut_train = ITHACAutilities::getCoeffs(nutFieldsTrain,
                                           nutModes, 0, true);
coeffL2U_train.transposeInPlace();
coeffL2P_train.transposeInPlace();
coeffL2Nut_train.transposeInPlace();
cnpy::save(coeffL2U_train, "ITHACAoutput/NN/coeffs/coeffL2UTrain.npy");
cnpy::save(coeffL2P_train, "ITHACAoutput/NN/coeffs/coeffL2PTrain.npy");
cnpy::save(coeffL2Nut_train, "ITHACAoutput/NN/coeffs/coeffL2NutTrain.npy");
PtrList<volVectorField> UfieldTest;
PtrList<volScalarField> PfieldTest;
PtrList<volScalarField> nutFieldsTest;
The second half of getTurbNN() performs the same procedure on the test snapshots stored in ./ITHACAoutput/checkOff/. This generates reduced coefficients for velocity, pressure, and turbulent viscosity on the validation set.
        ITHACAstream::readMiddleFields(UfieldTest, _U(),
                                       "./ITHACAoutput/checkOff/");
        ITHACAstream::readMiddleFields(PfieldTest, _p(),
                                       "./ITHACAoutput/checkOff/");
        auto nut_test = _mesh().lookupObject<volScalarField>("nut");
        ITHACAstream::readMiddleFields(nutFieldsTest, nut_test,
                                       "./ITHACAoutput/checkOff/");
        Eigen::MatrixXd coeffL2U_test = ITHACAutilities::getCoeffs(UfieldTest,
                                        Umodes, 0, true);
        Eigen::MatrixXd coeffL2P_test = ITHACAutilities::getCoeffs(PfieldTest,
                                        Pmodes, 0, true);
        Eigen::MatrixXd coeffL2Nut_test = ITHACAutilities::getCoeffs(nutFieldsTest,
                                          nutModes, 0, true);
coeffL2U_test.transposeInPlace();
coeffL2P_test.transposeInPlace();
coeffL2Nut_test.transposeInPlace();
cnpy::save(coeffL2U_test, "ITHACAoutput/NN/coeffs/coeffL2UTest.npy");
cnpy::save(coeffL2P_test, "ITHACAoutput/NN/coeffs/coeffL2PTest.npy");
cnpy::save(coeffL2Nut_test, "ITHACAoutput/NN/coeffs/coeffL2NutTest.npy");
}
}
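The coefficient computation performed by getCoeffs amounts to a volume-weighted L2 projection. A small NumPy sketch of the idea follows (illustrative names, not the library's implementation; using the cell volumes as quadrature weights is what the consider_volumes flag suggests):

```python
import numpy as np

def get_coeffs(snapshot, modes, volumes):
    """Volume-weighted L2 projection of one snapshot onto the POD modes.

    snapshot : (ncells,) flattened field
    modes    : (ncells, nmodes), one mode per column
    volumes  : (ncells,) cell volumes acting as quadrature weights
    """
    mass = modes.T @ (volumes[:, None] * modes)   # reduced mass matrix
    rhs = modes.T @ (volumes * snapshot)
    return np.linalg.solve(mass, rhs)             # projection coefficients

rng = np.random.default_rng(1)
modes = np.linalg.qr(rng.standard_normal((50, 3)))[0]  # orthonormal columns
vols = np.full(50, 0.02)                               # uniform cell volumes
c_true = np.array([1.0, -2.0, 0.5])
snap = modes @ c_true                                  # snapshot in span(modes)
c = get_coeffs(snap, modes, vols)
```

Stacking the coefficient vectors of all snapshots column-wise and then transposing gives the rows-as-samples layout saved to the .npy files.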
The first overload of evalNet, namely Eigen::MatrixXd evalNet(Eigen::MatrixXd a, double mu_now), evaluates the neural network when both the reduced velocity coefficients and the current parameter value are part of the input. The function creates an input vector xpred with one additional entry beyond the reduced velocity coefficients. The first element is the current parameter mu_now, while the remaining entries are the reduced velocity coefficients. After assembling the input, the code applies an affine transformation using scale_inp and bias_inp, converting the physical input into the normalized space expected by the network. The normalized input is then transposed, converted into a Torch tensor, and passed through the TorchScript network. The output tensor is converted back to an Eigen matrix, transposed into column format, and finally denormalized using bias_out and scale_out. The returned matrix therefore contains the predicted reduced coefficients for the turbulent viscosity field.
Eigen::MatrixXd evalNet(Eigen::MatrixXd a, double mu_now)
{
Eigen::MatrixXd xpred(a.rows() + 1, 1);
if (xpred.cols() != 1)
{
throw std::runtime_error("Predictions for more than one sample not supported yet.");
}
xpred.bottomRows(a.rows()) = a;
xpred(0, 0) = mu_now;
xpred = xpred.array() * scale_inp.array() + bias_inp.array() ;
xpred.transposeInPlace();
torch::Tensor xTensor = eigenMatrix2torchTensor(xpred);
torch::Tensor out;
std::vector<torch::jit::IValue> XTensorInp;
XTensorInp.push_back(xTensor);
out = netTorchscript.forward(XTensorInp).toTensor();
Eigen::MatrixXd g = torchTensor2eigenMatrix<double>(out);
g.transposeInPlace();
g = (g.array() - bias_out.array()) / scale_out.array();
return g;
}
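The affine pre/post-processing around the network can be made concrete with a min-max scaling sketch; the min*/scale* file names suggest this convention, but the exact transform is fixed by train.py, so treat this as an assumption:

```python
import numpy as np

def fit_minmax(X):
    """Per-feature min-max statistics: normalize(x) maps columns of X to [0, 1]."""
    bias = X.min(axis=0)
    scale = 1.0 / (X.max(axis=0) - bias)
    return bias, scale

def normalize(x, bias, scale):
    return (x - bias) * scale

def denormalize(xn, bias, scale):
    return xn / scale + bias           # exact inverse of normalize

# Two-feature training set: e.g. an angle column and one coefficient column
X = np.array([[0.0, 10.0],
              [2.0, 30.0],
              [4.0, 20.0]])
bias, scale = fit_minmax(X)
Xn = normalize(X, bias, scale)
```

During online evaluation the input coefficients are transformed with the training-set statistics before inference, and the network output is mapped back with the inverse transform, which is the role of bias_inp/scale_inp and bias_out/scale_out in evalNet.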
The second overload of evalNet, namely Eigen::MatrixXd evalNet(Eigen::MatrixXd a), is a simpler variant in which the network input consists only of the reduced coefficients already packed into a. It normalizes the input, performs inference, denormalizes the output, and returns the predicted coefficients. Compared with the previous overload, this version assumes that the parameter dependence has already been embedded in the provided input matrix.
Eigen::MatrixXd evalNet(Eigen::MatrixXd a)
{
netTorchscript.eval();
a.transposeInPlace();
a = (a.array() - bias_inp.array()) / scale_inp.array();
torch::Tensor aTensor = eigenMatrix2torchTensor(a);
torch::Tensor out;
std::vector<torch::jit::IValue> XTensorInp;
XTensorInp.push_back(aTensor);
out = netTorchscript.forward(XTensorInp).toTensor();
Eigen::MatrixXd g = torchTensor2eigenMatrix<double>(out);
g = g.array() * scale_out.array() + bias_out.array();
g.transposeInPlace();
return g;
}
The class reducedSimpleSteadyNN extends reducedSimpleSteadyNS. Its role is to define the online reduced-order solver that uses the neural network closure. While the previous class handles the full-order problem plus the machine learning interface, this class actually performs the reduced SIMPLE iterations in the online stage.
class reducedSimpleSteadyNN : public reducedSimpleSteadyNS
{
    public:
        reducedSimpleSteadyNN(SteadyNSSimpleNN& FOMproblem)
            :
            reducedSimpleSteadyNS(FOMproblem),
            problem(&FOMproblem)
        {}

        /// Pointer to the full-order problem
        SteadyNSSimpleNN* problem;
The core routine is solveOnline_Simple. This method performs one online reduced simulation for a given parameter value mu_now. It initializes reduced coefficients for velocity, pressure, and turbulent viscosity:
void solveOnline_Simple(scalar mu_now,
int NmodesUproj, int NmodesPproj, int NmodesNut = 0,
word Folder = "./ITHACAoutput/Reconstruct/")
{
counter++;
...
Eigen::VectorXd uresidualOld = Eigen::VectorXd::Zero(NmodesUproj);
Eigen::VectorXd presidualOld = Eigen::VectorXd::Zero(NmodesPproj);
Eigen::VectorXd uresidual = Eigen::VectorXd::Zero(NmodesUproj);
Eigen::VectorXd presidual = Eigen::VectorXd::Zero(NmodesPproj);
scalar U_norm_res(1);
scalar P_norm_res(1);
Eigen::MatrixXd a = Eigen::VectorXd::Zero(NmodesUproj);
Eigen::MatrixXd a0 = Eigen::VectorXd::Zero(NmodesUproj);
Eigen::MatrixXd b = Eigen::VectorXd::Zero(NmodesPproj);
Eigen::MatrixXd b0 = Eigen::VectorXd::Zero(NmodesPproj);
Eigen::MatrixXd bOld = b;
Eigen::MatrixXd nutCoeff = Eigen::VectorXd::Zero(NmodesNut);
Eigen::MatrixXd nutCoeffOld = Eigen::VectorXd::Zero(NmodesNut);
float residualJumpLim =
    problem->para->ITHACAdict->lookupOrDefault<float>("residualJumpLim", 1e-5);
float normalizedResidualLim =
    problem->para->ITHACAdict->lookupOrDefault<float>("normalizedResidualLim", 1e-5);
scalar residual_jump(1 + residualJumpLim);
volScalarField& P = problem->_p();
volVectorField& U = problem->_U();
volScalarField& nut = const_cast<volScalarField&>
                      (problem->_mesh().lookupObject<volScalarField>("nut"));
volVectorField u2 = U;
a(0) = a0(0);
b = b0;
fvMesh& mesh = problem->_mesh();
P.rename("p");
surfaceScalarField& phi(problem->_phi());
It then reconstructs the corresponding full fields, and then enters a SIMPLE iteration loop.
vector v(1, 0, 0);
phi = fvc::flux(U);
int iter = 0;
simpleControl& simple = problem->
_simple();
std::ofstream res_os_U;
std::ofstream res_os_P;
res_os_U.open(Folder + name(counter) + "/residualsU", std::ios_base::app);
res_os_P.open(Folder + name(counter) + "/residualsP", std::ios_base::app);
while ((residual_jump > residualJumpLim
|| std::max(U_norm_res, P_norm_res) > normalizedResidualLim) &&
iter < maxIterOn)
{
iter++;
Info << iter << endl;
#if defined(OFVER) && (OFVER == 6)
simple.loop(runTime);
#else
simple.loop();
#endif
Inside the SIMPLE loop, if the case is turbulent, the neural network is evaluated to predict the reduced turbulent-viscosity coefficients from the current reduced velocity state and parameter. These coefficients are then reconstructed into the full nut field.
if (ITHACAutilities::isTurbulent())
{
    nutCoeff = problem->evalNet(a, mu_now);
    volScalarField& nut = const_cast<volScalarField&>
                          (problem->_mesh().lookupObject<volScalarField>("nut"));
    problem->nutModes.reconstruct(nut, nutCoeff, "nut");
}
The momentum equation is assembled, projected onto the velocity POD space, and solved in reduced form.
volScalarField nueff = problem->
turbulence->nuEff();
vector v(1, 0, 0);
fvVectorMatrix UEqn
(
fvm::div(phi, U)
- fvm::laplacian(nueff, U)
- fvc::div(nueff * dev2(T(fvc::grad(U))))
);
UEqn.relax();
List<Eigen::MatrixXd> RedLinSysU = problem->Umodes.project(UEqn, NmodesUproj);
volVectorField gradPfull = -fvc::grad(P);
Eigen::MatrixXd projGrad = problem->Umodes.project(gradPfull, NmodesUproj);
RedLinSysU[1] = RedLinSysU[1] + projGrad;
a = reducedProblem::solveLinearSys(RedLinSysU, a, uresidual);
problem->Umodes.reconstruct(U, a, "U");
volScalarField rAU(1.0 / UEqn.A());
volVectorField HbyA(constrainHbyA(1.0 / UEqn.A() * UEqn.H(), U, P));
surfaceScalarField phiHbyA("phiHbyA", fvc::flux(HbyA));
adjustPhi(phiHbyA, U, P);
tmp<volScalarField> rAtU(rAU);
if (simple.consistent())
{
rAtU = 1.0 / (1.0 / rAU - UEqn.H1());
phiHbyA +=
fvc::interpolate(rAtU() - rAU) * fvc::snGrad(P) * mesh.magSf();
HbyA -= (rAU - rAtU()) * fvc::grad(P);
}
List<Eigen::MatrixXd> RedLinSysP;
bOld = b;
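The projection of the assembled momentum matrix follows the standard Galerkin pattern: with modes Φ, the full system A x = b becomes Φᵀ A Φ a = Φᵀ b, solved for the few reduced coefficients a. A NumPy sketch with illustrative names:

```python
import numpy as np

def project_linear_system(A, b, Phi):
    """Galerkin projection of the full system A x = b onto the columns of Phi."""
    return Phi.T @ A @ Phi, Phi.T @ b

rng = np.random.default_rng(2)
n, r = 40, 4
Phi = np.linalg.qr(rng.standard_normal((n, r)))[0]   # orthonormal modes
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))    # full-order operator
x_exact = Phi @ np.array([1.0, 2.0, -1.0, 0.5])      # solution lies in span(Phi)
b = A @ x_exact
Ar, br = project_linear_system(A, b, Phi)            # small r x r system
a = np.linalg.solve(Ar, br)                          # reduced solve
x_rom = Phi @ a                                      # reconstructed full field
```

When the true solution lies in span(Phi), the reduced solve recovers it exactly; in general the Galerkin solution is only an approximation in that subspace.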
Then the pressure-correction equation is assembled, projected onto the pressure POD space, and solved in reduced form through the standard SIMPLE logic.
while (simple.correctNonOrthogonal())
{
    fvScalarMatrix pEqn
    (
        fvm::laplacian(rAtU(), P) == fvc::div(phiHbyA)
    );
    RedLinSysP = problem->Pmodes.project(pEqn, NmodesPproj);
    b = reducedProblem::solveLinearSys(RedLinSysP, b, presidual);
    problem->Pmodes.reconstruct(P, b, "p");

    if (simple.finalNonOrthogonalIter())
    {
        phi = phiHbyA - pEqn.flux();
    }
}
b = bOld + mesh.fieldRelaxationFactor("p") * (b - bOld);
nutCoeffOld = nutCoeff;
U = HbyA - rAtU() * fvc::grad(P);
U.correctBoundaryConditions();
Residuals are monitored using both their normalized magnitude and their jump between consecutive iterations, and the loop stops when the chosen thresholds are satisfied or the maximum number of iterations is reached. At convergence, the reconstructed fields are exported:
uresidualOld = uresidualOld - uresidual;
presidualOld = presidualOld - presidual;
uresidualOld = uresidualOld.cwiseAbs();
presidualOld = presidualOld.cwiseAbs();
residual_jump = std::max(uresidualOld.sum(), presidualOld.sum());
uresidualOld = uresidual;
presidualOld = presidual;
uresidual = uresidual.cwiseAbs();
presidual = presidual.cwiseAbs();
U_norm_res = uresidual.sum() / (RedLinSysU[1].cwiseAbs()).sum();
P_norm_res = presidual.sum() / (RedLinSysP[1].cwiseAbs()).sum();
...
res_os_U << U_norm_res << endl;
res_os_P << P_norm_res << endl;
}
res_os_U.close();
res_os_P.close();
...
{
volScalarField& nut = const_cast<volScalarField&>
(problem->
_mesh().lookupObject<volScalarField>(
"nut"));
nut.rename("nutAux");
}
runTime.setTime(runTime.startTime(), 0);
}
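The loop's stopping test combines two criteria: the jump of the residual between consecutive iterations must fall below residualJumpLim, and the worst normalized residual below normalizedResidualLim. A plain-Python sketch of that condition (illustrative function name):

```python
def converged(residual_jump, u_norm_res, p_norm_res,
              residual_jump_lim=1e-5, normalized_residual_lim=1e-5):
    """Mirror of the reduced SIMPLE loop condition: iterate while either the
    residual jump or the worst normalized residual exceeds its limit."""
    keep_iterating = (residual_jump > residual_jump_lim
                      or max(u_norm_res, p_norm_res) > normalized_residual_lim)
    return not keep_iterating
```

In the tutorial the loop additionally stops once maxIterOn iterations are reached, even if neither criterion is met.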
The class tutorial01cl specializes the problem by adding geometric parameterization. It stores the original mesh points and the current moved coordinates, and defines how the mesh is deformed as a function of the input parameter, which in this example is an angle.
class tutorial01cl : public SteadyNSSimpleNN
{
    public:
        tutorial01cl(int argc, char* argv[])
            :
            SteadyNSSimpleNN(argc, argv)
        {
        }

        /// List to store the moved coordinates
        List<vector> curX;

        /// Initial coordinates of the grid points
        vectorField point0;
The method offlineSolve either reads previously generated snapshots or performs the full-order simulations over the training parameters.
void offlineSolve(Eigen::MatrixXd Box, List<label> patches,
word folder = "./ITHACAoutput/Offline/")
{
Vector<double> inl(0, 0, 0);
List<scalar> mu_now(1);
volVectorField& U = _U();
volScalarField& p = _p();
if (ITHACAutilities::isTurbulent())
{
    auto nut = _mesh().lookupObject<volScalarField>("nut");
}

if (offline)
{
    // snapshots already computed: they are read back from disk (elided here)
}
else if (!offline)
{
...
For each parameter value, the mesh is first reset to the reference configuration and the selected points are then moved:
for (label i = 0; i < mu.rows(); i++)
{
    _mesh().movePoints(point0);
    List<vector> points2Move;
    labelList boxIndices = ITHACAutilities::getIndicesFromBox(_mesh(), patches,
                           Box, points2Move);
    mu_now[0] = mu(i, 0);
    linearMovePts(mu_now[0], points2Move);

    for (int j = 0; j < boxIndices.size(); j++)
    {
        curX[boxIndices[j]] = points2Move[j];
    }

    Field<vector> curXV(curX);
    _mesh().movePoints(curXV);
    ITHACAstream::writePoints(_mesh().points(), folder,
                              name(i + 1) + "/polyMesh/");
The truth problem is then solved, and the resulting snapshots are stored.
    truthSolve2(mu_now, folder);
    int middleF = 1;

    while (ITHACAutilities::check_folder(folder + name(
                i + 1) + "/" + name(middleF)))
    {
        word command("ln -s $(readlink -f " + folder + name(
                         i + 1) + "/polyMesh/) " + folder + name(i + 1) + "/" + name(
                         middleF) + "/" + " >/dev/null 2>&1");
        std::cout.setstate(std::ios_base::failbit);
        system(command);
        std::cout.clear();
        middleF++;
    }

    restart();
}
}
}
The method linearMovePts defines the actual deformation law, applying a smooth position-dependent horizontal displacement controlled by the angle.
void linearMovePts(double angle, List<vector>& points2Move)
{
double sMax;
scalarList x;
scalarList y;
for (label i = 0; i < points2Move.size(); i++)
{
x.append(points2Move[i].component(0));
y.append(points2Move[i].component(1));
}
double xMin = min(x);
double xMax = max(x);
double yMin = min(y);
double yMax = max(y);
sMax = (yMax - yMin) * std::tan(M_PI * angle / 180);
for (label i = 0; i < points2Move.size(); i++)
{
points2Move[i].component(0) = points2Move[i].component(0) +
(yMax - points2Move[i].component(1)) / (yMax - yMin) * (xMax -
points2Move[i].component(0)) / (xMax - xMin) * sMax;
}
}
};
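A direct transcription of linearMovePts in Python makes its boundary behavior easy to check: points on the top edge (y = yMax) and on the right edge (x = xMax) do not move, while the bottom-left corner is sheared by sMax = (yMax − yMin)·tan(angle). The names below are illustrative, not the tutorial's:

```python
import math

def linear_move_pts(angle, pts):
    """Shear-like horizontal displacement of (x, y) points, as in linearMovePts."""
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    x_min, x_max = min(xs), max(xs)
    y_min, y_max = min(ys), max(ys)
    s_max = (y_max - y_min) * math.tan(math.pi * angle / 180.0)
    # Displacement fades linearly to zero toward y = y_max and x = x_max
    return [(x + (y_max - y) / (y_max - y_min)
               * (x_max - x) / (x_max - x_min) * s_max, y)
            for x, y in pts]

pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (0.5, 0.5)]
moved = linear_move_pts(45.0, pts)
```

At 45 degrees on a unit box, the bottom-left point is displaced by one unit, the mid point by 0.25, and the top edge stays fixed.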
In main, the tutorial object is created and the parameter values for offline and online stages are loaded from file or generated if missing.
int main(int argc, char* argv[])
{
    tutorial01cl example(argc, argv);
    ITHACAparameters* para = ITHACAparameters::getInstance(example._mesh(),
                             example._runTime());
    std::ifstream exFileOff("./angOff_mat.txt");

    if (exFileOff)
    {
        // the offline (training) angles are read from angOff_mat.txt
    }
    else
    {
        example.mu = Eigen::VectorXd::LinSpaced(50, 0, 75);
        // ... and the generated angles are exported for later reuse
    }

    Eigen::MatrixXd angOn;
    std::ifstream exFileOn("./angOn_mat.txt");

    if (exFileOn)
    {
        // the online (validation) angles are read from angOn_mat.txt
    }
    else
    {
        // otherwise a random set of online angles is generated and exported
    }
A geometric box and a set of moving patches are defined to identify the portion of the mesh that must be deformed.
Eigen::MatrixXd Box(2, 3);
Box << 1.98, 0.01, 0.11,
7.02, -0.666669, -0.01;
List<label> movPat;
movPat.append(3);
movPat.append(5);
example.maxIter = para->ITHACAdict->lookupOrDefault<int>("maxIter", 2000);
The offline phase is then executed to generate or load the training snapshots:
example.offlineSolve(Box, movPat);
List<vector> points2Move;
labelList boxIndices = ITHACAutilities::getIndicesFromBox(example._mesh(),
                       movPat, Box, points2Move);
example.linearMovePts((example.mu.maxCoeff() + example.mu.minCoeff()) / 2,
points2Move);
for (int j = 0; j < boxIndices.size(); j++)
{
example.curX[boxIndices[j]] = points2Move[j];
}
Field<vector> curXV(example.curX);
example._mesh().movePoints(curXV);
After the offline stage, POD modes are computed for velocity and pressure, and stored.
ITHACAPOD::getModes(example.Ufield, example.Umodes, example._U().name(),
                    example.podex, 0, 0, example.NUmodesOut);
ITHACAPOD::getModes(example.Pfield, example.Pmodes, example._p().name(),
                    example.podex, 0, 0, example.NPmodesOut);
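Conceptually, the POD step extracts an orthonormal basis that best represents the snapshot set. The library computes the modes with volume weighting; the following SVD-based NumPy sketch (an illustrative simplification assuming uniform cell volumes) shows the idea:

```python
import numpy as np

def pod_modes(S, nmodes):
    """POD of a snapshot matrix S (ncells x nsnapshots) via thin SVD.

    Returns the first nmodes left singular vectors (spatial modes) and the
    corresponding singular values (energy content of each mode)."""
    U, s, _ = np.linalg.svd(S, full_matrices=False)
    return U[:, :nmodes], s[:nmodes]

rng = np.random.default_rng(3)
# Synthetic rank-2 snapshot set: every column is a combination of two shapes
base = rng.standard_normal((100, 2))
coeffs = rng.standard_normal((2, 20))
S = base @ coeffs
modes, sv = pod_modes(S, 4)
```

For this rank-2 data, only the first two singular values are significant, and projecting the snapshots onto those two modes reconstructs them exactly.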
A second full-order problem is then used to generate reference solutions on the online parameter set, which will later be used for error analysis.
std::clock_t startOff;
double durationOff;
startOff = std::clock();
tutorial01cl checkOff(argc, argv);
{
    checkOff.restart();
    para = ITHACAparameters::getInstance(checkOff._mesh(), checkOff._runTime());
    checkOff.offline = false;
    checkOff.mu = angOn;
    checkOff.offlineSolve(Box, movPat, "./ITHACAoutput/checkOff/");
    checkOff.offline = true;
}
If the case is turbulent, POD modes for turbulent viscosity are also computed, and getTurbNN is called to export the reduced datasets needed for neural-network training:
if (ITHACAutilities::isTurbulent())
{
    ITHACAPOD::getModes(example.nutFields, example.nutModes, "nut",
                        example.podex, 0, 0, example.NNutModesOut);
    example.getTurbNN();
}
The code then checks that the trained network file exists, loads it, and creates the reduced-order problem.
std::string wrng =
    "The network for this problem has not been calculated yet. Perform the Python training (see train.py).";
M_Assert(ITHACAutilities::check_file("ITHACAoutput/NN/Net_" + name(
             example.NUmodes) + "_" + name(example.NNutModes) + ".pt"), wrng.c_str());
example.loadNet("ITHACAoutput/NN/Net_" + name(example.NUmodes) + "_" + name(
                    example.NNutModes) + ".pt");
reducedSimpleSteadyNN reduced(example);
example._mesh().movePoints(example.point0);
...
For each online parameter, the mesh is deformed consistently with the training stage, the reduced SIMPLE solver is executed, and the reduced solution is written to disk:
for (label k = 0; k < angOn.rows(); k++)
{
    Info << "Solving online for parameter number " + name(k + 1) << endl;
    scalar mu_now = angOn(k, 0);
    example.restart();
    reduced.vel_now = vel;  // inlet velocity datum, read earlier from vel.txt
    example._mesh().movePoints(example.point0);
    List<vector> points2Move;
    labelList boxIndices = ITHACAutilities::getIndicesFromBox(example._mesh(),
                           movPat, Box, points2Move);
    example.linearMovePts(mu_now, points2Move);

    for (int j = 0; j < boxIndices.size(); j++)
    {
        example.curX[boxIndices[j]] = points2Move[j];
    }

    Field<vector> curXV(example.curX);
    example._mesh().movePoints(curXV);
    ITHACAstream::writePoints(example._mesh().points(),
                              "./ITHACAoutput/Reconstruct_" + name(example.NUmodes) + "_" + name(
                                  example.NPmodes) + "/", name(k + 1) + "/polyMesh/");
    reduced.solveOnline_Simple(mu_now, example.NUmodes, example.NPmodes,
                               example.NNutModes, "./ITHACAoutput/Reconstruct_" + name(
                                   example.NUmodes) + "_" + name(example.NPmodes) + "/");
}
In the final section, the code reads the full-order reference fields and the reconstructed reduced fields, computes relative errors for velocity and pressure, exports these error matrices, and prints the offline and online computational times.
PtrList<volVectorField> Ufull;
PtrList<volScalarField> Pfull;
PtrList<volVectorField> Ured;
PtrList<volScalarField> Pred;
volVectorField Uaux("Uaux", checkOff._U());
volScalarField Paux("Paux", checkOff._p());
ITHACAstream::readConvergedFields(Ufull, Uaux, "./ITHACAoutput/checkOff/");
ITHACAstream::readConvergedFields(Pfull, Paux, "./ITHACAoutput/checkOff/");
ITHACAstream::read_fields(Ured, Uaux,
                          "./ITHACAoutput/Reconstruct_" + name(example.NUmodes) + "_" + name(
                              example.NPmodes) + "/");
ITHACAstream::read_fields(Pred, Paux,
                          "./ITHACAoutput/Reconstruct_" + name(example.NUmodes) + "_" + name(
                              example.NPmodes) + "/");
Eigen::MatrixXd relErrorU(Ufull.size(), 1);
Eigen::MatrixXd relErrorP(Pfull.size(), 1);
dimensionedVector U_fs("U_fs", dimVelocity, vector(1, 0, 0));
for (label k = 0; k < Ufull.size(); k++)
{
    volVectorField errorU = (Ufull[k] - Ured[k]).ref();
    volVectorField devU = (Ufull[k] - U_fs).ref();
    volScalarField errorP = (Pfull[k] - Pred[k]).ref();
    relErrorU(k, 0) = ITHACAutilities::frobNorm(errorU) / ITHACAutilities::frobNorm(
                          devU);
    relErrorP(k, 0) = ITHACAutilities::frobNorm(errorP) / ITHACAutilities::frobNorm(
                          Pfull[k]);
}
ITHACAstream::exportMatrix(relErrorU,
                           "errorU_" + name(example.NUmodes) + "_" + name(example.NPmodes) + "_" + name(
                               example.NNutModes), "python", ".");
ITHACAstream::exportMatrix(relErrorP,
                           "errorP_" + name(example.NUmodes) + "_" + name(example.NPmodes) + "_" + name(
                               example.NNutModes), "python", ".");
...
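Since the code computes devU = Ufull − U_fs, the velocity error is plausibly normalized by the deviation from the freestream rather than by the full field, which would otherwise be dominated by the uniform inflow. A NumPy sketch with hypothetical data:

```python
import numpy as np

def rel_error(full, reduced, reference):
    """Relative Frobenius-norm error ||full - reduced|| / ||full - reference||."""
    return np.linalg.norm(full - reduced) / np.linalg.norm(full - reference)

U_fs = np.array([1.0, 0.0, 0.0])              # freestream velocity
full = U_fs + np.array([[0.10, 0.0, 0.0],
                        [0.00, 0.2, 0.0]])    # two "cells", perturbed flow
reduced = U_fs + np.array([[0.11, 0.00, 0.0],
                           [0.00, 0.19, 0.0]])
err = rel_error(full, reduced, U_fs)
```

A small absolute discrepancy still yields a meaningful percentage because it is measured against the part of the flow the ROM actually has to capture.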
The plain code
The plain code is available here.