Architecture Optimization in Physics-Informed Neural Networks

Type of work by arrangement: Master's thesis, student assistant (HiWi) position, project seminar, Bachelor's thesis

Artificial Neural Networks (NNs) have provided transformative results in numerous and diverse engineering domains, e.g. image processing or pattern recognition. In recent years, NNs have also been utilized for solving Partial Differential Equations (PDEs). Among these methods, one of the most popular approaches is Physics-Informed Neural Networks (PINNs). The procedure in PINNs is the following: First, an NN is employed to approximate the solution of the PDE. Next, an optimization algorithm is used to calibrate the NN's trainable parameters such that the NN satisfies the PDE and the initial/boundary conditions.
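To make the two-step procedure concrete, the following is a minimal sketch of a PINN in PyTorch, assuming a 1D Poisson model problem u''(x) = -pi^2 sin(pi x) on [0, 1] with u(0) = u(1) = 0; the network, the collocation points, and the helper pde_residual are illustrative choices, not a prescribed implementation.

```python
import torch

torch.manual_seed(0)

# Step 1: an NN approximates the PDE solution u(x).
model = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def pde_residual(x):
    # Automatic differentiation gives u''(x) for the residual u'' + pi^2 sin(pi x).
    x = x.requires_grad_(True)
    u = model(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    return d2u + (torch.pi ** 2) * torch.sin(torch.pi * x)

# Step 2: an optimizer calibrates the trainable parameters so that the PDE
# residual and the boundary conditions are minimized in a least-squares sense.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x_pde = torch.rand(128, 1)            # collocation points in (0, 1)
x_bc = torch.tensor([[0.0], [1.0]])   # boundary points with u = 0
for step in range(5000):
    optimizer.zero_grad()
    loss = pde_residual(x_pde).pow(2).mean() + model(x_bc).pow(2).mean()
    loss.backward()
    optimizer.step()
```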

A key factor which determines the performance and expressive capabilities of an NN is its architecture, i.e. the number of layers, neurons per layer, connectivity, activation functions, etc. Nevertheless, investigations regarding efficient PINN architectures are virtually non-existent in the related literature. In an attempt to fill that gap, the task of this work will be the implementation of various NN architectures in the context of PINNs and their evaluation in terms of PDE solution quality. Possibilities include Long Short-Term Memory (LSTM) NNs, adaptive activation functions, Deep Jointly-Informed NNs (DJINNs), and others (see the sketch below).
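As one example of such an architectural variation, the sketch below shows an adaptive activation function with a single trainable slope parameter per layer, tanh(a * x); this is a simplified, assumed form meant only to illustrate how such a module could replace the fixed activations in the PINN sketch above.

```python
import torch

class AdaptiveTanh(torch.nn.Module):
    """Tanh activation with a trainable slope parameter a."""
    def __init__(self):
        super().__init__()
        # The slope is optimized jointly with the weights and biases.
        self.a = torch.nn.Parameter(torch.tensor(1.0))

    def forward(self, x):
        return torch.tanh(self.a * x)

# Drop-in replacement for the fixed Tanh activations used above.
model = torch.nn.Sequential(
    torch.nn.Linear(1, 32), AdaptiveTanh(),
    torch.nn.Linear(32, 32), AdaptiveTanh(),
    torch.nn.Linear(32, 1),
)
```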