I’m excited to share our new paper: “ConFIG” https://arxiv.org/abs/2408.11104 It’s the first method for multi-task learning that truly yields conflict-free gradients. Whether you’re working on PINN training or other multi-task objectives, I can highly recommend trying it out! In our experiments it consistently beats the baseline methods in both accuracy and runtime 😃🤘 Full source code and samples are already available at: https://tum-pbs.github.io/ConFIG/
The package is now on pip: you can install it via “pip install conflictfree”. Please also try the examples, such as the classic PINN Burgers case: https://colab.research.google.com/github/tum-pbs/ConFIG/blob/main/docs/examples/pinn_burgers.ipynb With a small change (providing a list of loss terms instead of a single summed loss) you can directly decrease the loss from 0.031 to 0.0019. That’s 16x smaller! A sketch of that change is shown below.
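Here is roughly what that change looks like in a PyTorch training loop: instead of back-propagating one summed loss, you back-propagate each term separately and hand the resulting gradients to ConFIG. The module and function names below (conflictfree.grad_operator.ConFIG_update, get_gradient_vector, apply_gradient_vector) are written from memory of the project docs, and network, dataset, and loss_fns are placeholders, so treat this as a sketch and check the linked documentation for the authoritative API:

```python
# Sketch of plugging ConFIG into a PINN training loop.
# NOTE: the conflictfree API names below are taken from the project docs as I
# recall them; see https://tum-pbs.github.io/ConFIG/ for the actual interface.
import torch
from conflictfree.grad_operator import ConFIG_update
from conflictfree.utils import get_gradient_vector, apply_gradient_vector

optimizer = torch.optim.Adam(network.parameters(), lr=1e-3)

for batch in dataset:
    grads = []
    # Back-propagate each loss term separately to collect its gradient...
    for loss_fn in loss_fns:  # e.g. [pde_residual_loss, boundary_loss, initial_loss]
        optimizer.zero_grad()
        loss_fn(network, batch).backward()
        grads.append(get_gradient_vector(network))
    # ...then combine the per-loss gradients into one conflict-free update
    g_config = ConFIG_update(grads)
    apply_gradient_vector(network, g_config)
    optimizer.step()
```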
Full abstract: The loss functions of many learning problems contain multiple additive terms that can disagree and yield conflicting update directions. For Physics-Informed Neural Networks (PINNs), loss terms on initial/boundary conditions and physics equations are particularly interesting as they are well-established as highly difficult tasks. To improve learning the challenging multi-objective task posed by PINNs, we propose the ConFIG method, which provides conflict-free updates by ensuring a positive dot product between the final update and each loss-specific gradient. It also maintains consistent optimization rates for all loss terms and dynamically adjusts gradient magnitudes based on conflict levels. We additionally leverage momentum to accelerate optimizations by alternating the back-propagation of different loss terms. The proposed method is evaluated across a range of challenging PINN scenarios, consistently showing superior performance and runtime compared to baseline methods. We also test the proposed method in a classic multi-task benchmark, where ConFIG likewise exhibits highly promising performance.
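To make the core idea from the abstract concrete, here is a short, self-contained PyTorch sketch of a conflict-free update operator in the spirit of ConFIG, based on my reading of the paper: stack the normalized per-loss gradients, use a pseudoinverse to find a direction that has the same positive dot product with every one of them, and scale that direction by the summed projections of the raw gradients onto it. This is an illustrative reimplementation, not the official code from the repository:

```python
# Illustrative sketch (my reading of the paper, not the official implementation):
# find a direction with equal, positive alignment to every loss-specific
# gradient, then scale it by the sum of the gradients' projections onto it,
# so the step shrinks automatically as the gradients conflict more.
import torch

def config_update(grads, eps=1e-8):
    """grads: list of flattened per-loss gradients, each of shape (D,)."""
    G = torch.stack(grads)                            # (m, D) raw gradients
    G_hat = G / (G.norm(dim=1, keepdim=True) + eps)   # unit gradients as rows
    # Direction whose dot product with every unit gradient is equal (and positive):
    ones = torch.ones(G.shape[0], dtype=G.dtype, device=G.device)
    direction = torch.linalg.pinv(G_hat) @ ones       # (D,)
    direction = direction / (direction.norm() + eps)
    # Magnitude: sum of each raw gradient's projection onto the common direction.
    magnitude = (G @ direction).sum()
    return magnitude * direction

# Quick check with two orthogonal 2-D gradients:
g1 = torch.tensor([1.0, 0.0])
g2 = torch.tensor([0.0, 1.0])
g = config_update([g1, g2])
# g points along [1, 1]/sqrt(2) and has a positive dot product with both g1 and g2.
```

In practice you would flatten each loss term’s gradient into a single vector, call this operator, and write the result back into the model parameters before the optimizer step, as in the training-loop sketch above.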