
[BUG] pt fitting has bug with float32 #3291

Closed
njzjz opened this issue Feb 18, 2024 · 1 comment · Fixed by #3314
njzjz commented Feb 18, 2024

In the pt backend, the fitting net raises a dtype mismatch when the model precision is float32:

    def forward(
        self,
        xx: torch.Tensor,
    ) -> torch.Tensor:
        """One MLP layer used by DP model.
    
        Parameters
        ----------
        xx : torch.Tensor
            The input.
    
        Returns
        -------
        yy: torch.Tensor
            The output.
        """
        yy = (
>           torch.matmul(xx, self.matrix) + self.bias
            if self.bias is not None
            else torch.matmul(xx, self.matrix)
        )
E       RuntimeError: expected mat1 and mat2 to have the same dtype, but got: double != float
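The traceback shows a float64 input being fed into a float32 layer. A minimal sketch of one possible fix, assuming the layer keeps its weights in `self.matrix`/`self.bias` as in the traceback (the cast-to-parameter-dtype approach here is hypothetical, not necessarily what the linked PR does):

```python
import torch


class MLPLayer(torch.nn.Module):
    """One MLP layer, sketched after the traceback above."""

    def __init__(self, num_in: int, num_out: int, prec=torch.float32):
        super().__init__()
        # Parameters are stored in the requested precision (float32 here).
        self.matrix = torch.nn.Parameter(torch.randn(num_in, num_out, dtype=prec))
        self.bias = torch.nn.Parameter(torch.zeros(num_out, dtype=prec))

    def forward(self, xx: torch.Tensor) -> torch.Tensor:
        # Hypothetical fix: cast the input to the parameter dtype so a
        # float64 input into a float32 net no longer triggers
        # "expected mat1 and mat2 to have the same dtype".
        xx = xx.to(self.matrix.dtype)
        yy = torch.matmul(xx, self.matrix)
        if self.bias is not None:
            yy = yy + self.bias
        return yy


layer = MLPLayer(4, 3)
# A float64 input now goes through instead of raising RuntimeError.
out = layer(torch.ones(2, 4, dtype=torch.float64))
```

An alternative design would be to cast the output back to the input dtype afterwards, if callers expect dtype to round-trip.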
anyangml (Collaborator) commented: #3314

@njzjz njzjz linked a pull request Feb 21, 2024 that will close this issue