This is the official implementation of the paper "Fine-tuning Neural-Operator architectures for training and generalization".
If you want to reproduce all the results (including the baselines) shown in the paper, first set up a conda environment with all the dependencies:
conda env create -f environment.yaml
conda activate forward-operator
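For reference, an environment file for this setup might look like the sketch below. This is a hypothetical outline only; the actual environment.yaml shipped with the repository defines the authoritative dependency list and pinned versions. All package entries besides the environment name (which matches the activate command above) are assumptions.

```yaml
# Hypothetical sketch of environment.yaml; the file in the repository
# is authoritative and pins exact versions.
name: forward-operator
channels:
  - pytorch
  - conda-forge
dependencies:
  - python=3.10      # assumed Python version
  - pytorch          # assumed deep-learning backend
  - numpy
  - matplotlib
  - pyyaml           # for reading the config files
```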
The data set is provided upon request. It must be located in a directory with the following structure:
databases/acoustic/GRF_{Freq}Hz/data
databases/acoustic/GRF_{Freq}Hz/model
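The expected layout can be created up front so the data files have a place to land. A minimal sketch, assuming frequencies of 5 and 10 Hz (hypothetical stand-ins for the {Freq} placeholder; substitute the frequencies used in your experiments):

```shell
# Create the expected database layout for two example frequencies.
# The values 5 and 10 are hypothetical stand-ins for {Freq}.
for freq in 5 10; do
    mkdir -p "databases/acoustic/GRF_${freq}Hz/data"
    mkdir -p "databases/acoustic/GRF_${freq}Hz/model"
done
```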
The file-name placeholder in these directories is
dataset_time-harmonic-waves_hawen_parameters
All the architectures' parameters are located in the config/ directory:
config/acoustic/GRF_{Freq}Hz/<Architecture>.yaml
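A config file along these lines selects the architecture and its training parameters. The sketch below is purely illustrative; every key and value in it is an assumption, and the actual schema is defined by the files shipped in config/ and by how main.py reads them.

```yaml
# Hypothetical sketch of config/acoustic/GRF_{Freq}Hz/<Architecture>.yaml.
# All keys below are assumptions; see the shipped config files for the
# actual schema.
architecture: FNO                       # assumed architecture name
data_dir: databases/acoustic/GRF_5Hz    # assumed path, for 5 Hz data
training:
  epochs: 200
  batch_size: 32
  learning_rate: 1.0e-3
```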
Each architecture can be trained with:
CUDA_VISIBLE_DEVICES={k} python3 main.py -c <path_to_config_file>
We train each model multiple times in the code. The evaluation takes the number of saved model files as an argument and can be run as follows:
CUDA_VISIBLE_DEVICES={k} python3 evaluation.py -n <number_training_save_model> -c <path_to_config_file>
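The repeated-training workflow above can be scripted. A minimal dry-run sketch that only echoes the commands rather than executing them; the GPU index, config path, and number of runs are hypothetical placeholders:

```shell
# Dry-run sketch: echo the train/evaluate commands instead of running them.
# GPU index, config path, and run count are hypothetical placeholders.
GPU=0
CONFIG="config/acoustic/GRF_5Hz/FNO.yaml"
N_RUNS=3

for run in $(seq 1 "$N_RUNS"); do
    echo "CUDA_VISIBLE_DEVICES=$GPU python3 main.py -c $CONFIG"
done
# Once the N_RUNS saved models exist, evaluate them together:
echo "CUDA_VISIBLE_DEVICES=$GPU python3 evaluation.py -n $N_RUNS -c $CONFIG"
```

Remove the echo wrappers to actually launch the runs.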
The same structure applies to plotting:
python3 reconstruction_plot.py -c <path_to_config_file>
The plotting utilities are located in the visualization_code directory.