
Feature/mfkans #201

Open · wants to merge 9 commits into base: develop
Conversation

brunopjacob (Collaborator)

Adding example of multi-fidelity KANs

Verified

This commit was signed with the committer’s verified signature.
parona-source Alfred Wingate
@drgona requested review from drgona and RBirmiwal on October 17, 2024 at 19:36
brunopjacob (Collaborator, Author) commented on Dec 4, 2024:

@drgona @RBirmiwal please take a look and let me know if this looks good! If yes, we are good to merge. Thank you!

P.S.: the main thing to review here is the p3 KAN notebook.

Diff excerpt at `@@ -250,6 +252,11 @@ def train(self)`:

```python
for node in self.model.nodes:
    alpha_loss = node.callable.get_alpha_loss()
    output[self.train_metric] += alpha_loss
```
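The trainer change above can be sketched in isolation. The names below (`Node`, `FakeKAN`, the metric dict) are simplified stand-ins for the real neuromancer objects, not the library's actual API:

```python
# Minimal sketch of the trainer-side regularization step added in this PR.
# FakeKAN stands in for a KAN component exposing an alpha regularization loss;
# Node mimics a model node wrapping a callable. Both are assumptions for
# illustration, not neuromancer classes.

class FakeKAN:
    def __init__(self, alpha_loss):
        self._alpha_loss = alpha_loss

    def get_alpha_loss(self):
        # Return this component's regularization penalty.
        return self._alpha_loss


class Node:
    def __init__(self, callable_):
        self.callable = callable_


def add_alpha_losses(nodes, output, train_metric="train_loss"):
    """Accumulate each node's alpha loss into the training metric,
    mirroring the loop added to train() in the diff above."""
    for node in nodes:
        output[train_metric] += node.callable.get_alpha_loss()
    return output


nodes = [Node(FakeKAN(0.1)), Node(FakeKAN(0.25))]
out = add_alpha_losses(nodes, {"train_loss": 1.0})
print(out["train_loss"])  # 1.35
```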
Review comment (Collaborator):

What is the motivation for modifying the trainer itself to include this KAN regularization loss?
This does not seem like a very modular solution. We could simply instantiate the regularization loss as a separate loss term during problem formulation, rather than handling it internally in the trainer.
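A hedged sketch of the reviewer's alternative: compose the objective from independent loss terms at problem-formulation time, so the trainer never needs KAN-specific logic. All names here are hypothetical illustrations, not neuromancer's actual API:

```python
# Sketch: the KAN regularizer as a pluggable loss term rather than a
# trainer-internal special case. make_objective, alpha_loss, and mse are
# hypothetical helpers invented for this example.

def make_objective(data_loss, reg_terms):
    """Compose a training objective from independent loss terms."""
    def objective(params, batch):
        total = data_loss(params, batch)
        for reg in reg_terms:
            total += reg(params)  # e.g. the KAN alpha loss plugs in here
        return total
    return objective


def alpha_loss(params, alpha=1e-3):
    """Hypothetical KAN sparsity penalty: alpha * sum(|w|)."""
    return alpha * sum(abs(w) for w in params)


def mse(params, targets):
    return sum((p - t) ** 2 for p, t in zip(params, targets)) / len(params)


objective = make_objective(mse, [alpha_loss])
print(objective([1.0, -2.0], [0.0, 0.0]))  # 2.503
```

With this shape, the trainer only ever sees one `objective` callable, and adding or removing the KAN penalty is a one-line change in the problem definition.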


3 participants