Transfer Learning in Scalable Graph Neural Network for Improved Physical Simulation
Authors: Siqi Shen, Yu Liu, Daniel Biggs, Omar Hafez, Jiandong Yu, Wentao Zhang, Bin Cui, Jiulong Shan
In recent years, graph neural network (GNN) based models have shown promising results in simulating complex physical systems. However, training a dedicated graph network simulator can be costly, as most models are confined to fully supervised training, which requires extensive data generated by traditional simulators. It remains unexplored how transfer learning could be applied to improve model performance and training efficiency. In this work, we introduce a pretraining and transfer learning paradigm for graph network simulators. First, we propose the Scalable Graph U-Net (SGUNet). By incorporating an innovative depth-first search (DFS) pooling, SGUNet can be configured to different mesh sizes and resolutions for different simulation tasks. To enable transfer learning between differently configured SGUNets, we propose a set of mapping functions that align the parameters of the pretrained model with those of the target model. An extra normalization term is also added to the loss to constrain the similarity between the pretrained and target model weights for better generalization. Second, we create a dataset for pretraining the simulators, comprising 20,000 physical simulations with 3D shapes randomly selected from the open-source A Big CAD (ABC) dataset. We demonstrate that, with our proposed transfer learning method, a model fine-tuned on a small portion of the training data can outperform one trained from scratch. On the 2D Deformable Plate benchmark, our pretrained model fine-tuned on 1/16 of the training data achieves an 11.05% improvement over the model trained from scratch.
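The abstract does not spell out how DFS pooling operates, so the following is only a minimal sketch of one plausible reading: order the mesh nodes by a depth-first traversal and merge consecutive runs of nodes in that order into coarse nodes. The function name `dfs_pool`, the `stride` parameter, and the averaging scheme are all hypothetical, not details from the paper.

```python
import networkx as nx
import numpy as np

def dfs_pool(graph: nx.Graph, features: np.ndarray, stride: int = 2):
    """Hypothetical DFS pooling sketch: nodes are ordered by a depth-first
    traversal, and every `stride` consecutive nodes in that order are merged
    into one coarse node. Assumes a connected graph whose nodes are integers
    0..n-1 indexing rows of `features`. This is a guess at what DFS pooling
    might look like, not the paper's actual algorithm."""
    order = list(nx.dfs_preorder_nodes(graph, source=next(iter(graph))))
    # Assign each fine node to a cluster of `stride` consecutive DFS nodes.
    cluster = {node: i // stride for i, node in enumerate(order)}
    n_coarse = max(cluster.values()) + 1

    # Average the features of nodes that fall into the same cluster.
    pooled = np.zeros((n_coarse, features.shape[1]))
    counts = np.zeros(n_coarse)
    for node, c in cluster.items():
        pooled[c] += features[node]
        counts[c] += 1
    pooled /= counts[:, None]

    # Coarse graph: connect clusters whose members were adjacent originally.
    coarse = nx.Graph()
    coarse.add_nodes_from(range(n_coarse))
    for u, v in graph.edges():
        if cluster[u] != cluster[v]:
            coarse.add_edge(cluster[u], cluster[v])
    return coarse, pooled, cluster
```

Under this reading, varying `stride` (or applying the pooling recursively at each U-Net level) would let one network skeleton adapt to meshes of different sizes and resolutions, which is consistent with the configurability the abstract claims.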
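The "extra normalization term" is likewise only named in the abstract. A standard way to constrain fine-tuned weights toward pretrained ones is an L2-SP-style penalty, sketched below; the penalty form, the weight `lam`, and the assumption that the pretrained parameters have already been aligned to the target model by the paper's mapping functions are all assumptions here.

```python
import torch

def finetune_loss(pred, target, model, pretrained_params, lam=1e-4):
    """Task loss plus an L2 penalty tying the target model's weights to the
    (mapped) pretrained weights. The paper only states that a term
    constraining weight similarity is added; this specific form is assumed."""
    task_loss = torch.nn.functional.mse_loss(pred, target)
    reg = sum(
        torch.sum((p - pretrained_params[name].detach()) ** 2)
        for name, p in model.named_parameters()
        if name in pretrained_params  # penalize only aligned parameters
    )
    return task_loss + lam * reg
```

The penalty keeps fine-tuning close to the pretrained solution, which matches the abstract's stated goal of better generalization when only a small fraction of the training data is available.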