Towards SoftAdapt loss balancing for tf.compat.v1 #1586

Draft · wants to merge 4 commits into base: master
33 changes: 32 additions & 1 deletion deepxde/callbacks.py
@@ -6,7 +6,7 @@
from . import config
from . import gradients as grad
from . import utils
-from .backend import backend_name, tf, torch, paddle
+from .backend import backend_name, tf, torch, paddle, Variable


class Callback:
@@ -571,3 +571,34 @@ def on_epoch_end(self):
            raise ValueError(
                "`num_bcs` changed! Please update the loss function by `model.compile`."
            )


class SoftAdapt(Callback):
    """Use adaptive loss balancing (SoftAdapt).

    Args:
        beta: If beta > 0, SoftAdapt pays more attention to the worst-performing
            loss component; if beta < 0, it assigns higher weights to the
            better-performing components. beta == 0 is the trivial case, in
            which every loss component keeps coefficient 1.
        epsilon: Small constant added to the weight normalization for
            numerical stability.
    """

    def __init__(self, beta=0.1, epsilon=1e-8):
        super().__init__()
        self.beta = beta
        self.epsilon = epsilon

    def on_train_begin(self):
        # Wrap the loss weights in a backend Variable so that they can be
        # updated in place during training.
        loss_weights = tf.constant(self.model.loss_weights)
        self.model.loss_weights = Variable(loss_weights, dtype=loss_weights.dtype)
        # TODO: Allow instances to be re-used.
        # TODO: Evaluate the SoftAdapt coefficients.
        # TODO: Update the weights.
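
For reference, a minimal sketch of the coefficient evaluation the TODOs point at, following the SoftAdapt paper (Heydari et al., 2019). The helper name softadapt_weights and the rescaling by the number of components (so that beta == 0 yields coefficient 1 per component, as the docstring states) are illustrative assumptions, not part of this diff:

import numpy as np

def softadapt_weights(prev_losses, curr_losses, beta=0.1, epsilon=1e-8):
    # s_i approximates the recent rate of change of loss component i.
    s = np.asarray(curr_losses) - np.asarray(prev_losses)
    # Softmax over beta * s_i; subtracting the max avoids overflow, and
    # epsilon keeps the denominator away from zero.
    exps = np.exp(beta * (s - s.max()))
    weights = exps / (exps.sum() + epsilon)
    # Rescale so the coefficients average to 1 instead of summing to 1.
    return len(s) * weights

With beta > 0, a component whose loss is decreasing more slowly (larger s_i) receives a larger weight; with beta < 0, the faster-improving components are favored; beta == 0 gives every component a weight of (roughly) 1. Assuming the usual DeepXDE workflow, the callback would then be passed to Model.train, e.g. model.train(iterations=10000, callbacks=[SoftAdapt(beta=0.1)]).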