some questions #46

Open
longzeyilang opened this issue Aug 12, 2024 · 3 comments

@longzeyilang

Thank you for the good work. I have a few questions:
(1) I am training on my own data with a raw image size of 128×128, and I modified the model to produce a larger feature map.
(2) In `self._unfold2d(x, ws=8)`, the window size 8 is hard-coded, but `keypoint_position_loss` and `coordinate_classification_loss` also use 8. Is this the same parameter, i.e. do they all have to be modified together?
(3) `generateRandomTPS` uses a control-point grid of (8, 6). What does that mean, and is it related to `ws=8` from (2)?
(4) `acc_f` is always NaN during my training. Is that normal?
Loss: 7.6677 acc_c0 0.314 acc_c1 0.156 acc_f: nan loss_c: 5.476 loss_f: 8.002 loss_kp: 0.065 #matches_c: 64 loss_kp_pos: 16.468 acc_kp_pos: 0.030

@longzeyilang
Author

I trained on my own data, but the results are quite bad, and the training results do not match the test results. Could there be a problem with the code?

@longzeyilang
Author

Could you please answer my questions?

@guipotje
Collaborator

Hi @longzeyilang,

(1) I couldn't understand what the question is; could you please reformulate?
(2) You should modify both the keypoint head and the `alike_distill_loss` (see the first sketch after this list).
(3) These are the control points for the TPS warper; we use them to apply geometric transformations that augment the images (see the TPS sketch below).
(4) Yes, this is normal at the beginning: the loss has a confidence threshold that prevents optimization on matches with too low confidence (see the last sketch below). After a few iterations the loss should become stable.
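
For (2), here is a minimal sketch of a space-to-depth style unfold and why the window size also shows up in the losses. The helper name `unfold2d`, the 64+1-channel keypoint logits, and the exact channel ordering are assumptions for illustration, not the repository code:

```python
import torch

def unfold2d(x, ws=8):
    """Rearrange each (ws x ws) spatial block into the channel dimension
    (space-to-depth): [B, C, H, W] -> [B, C*ws*ws, H//ws, W//ws]."""
    B, C, H, W = x.shape
    x = x.unfold(2, ws, ws).unfold(3, ws, ws)                  # B, C, H/ws, W/ws, ws, ws
    return x.permute(0, 1, 4, 5, 2, 3).reshape(B, C * ws * ws, H // ws, W // ws)

ws = 8
img = torch.randn(1, 1, 128, 128)
cells = unfold2d(img, ws)                                      # [1, 64, 16, 16]
# One logit per position inside each ws x ws cell, plus a "no keypoint" bin,
# so any loss decoding these logits must use the same ws:
logits = torch.cat([cells, torch.zeros(1, 1, 16, 16)], dim=1)  # [1, 65, 16, 16]
pos_idx = logits[:, :ws * ws].argmax(dim=1)                    # (y, x) = divmod(idx, ws)
```

If you change `ws` in the model, the reshapes and index ranges inside the losses must use the same value, which is why they have to be modified together.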
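For (3), the (8, 6) grid is the number of TPS control points placed over the image, and it is unrelated to `ws=8`. A minimal sketch of the idea, with hypothetical names, assuming a regular control-point grid whose target points are jittered with random noise:

```python
import torch

def generate_random_tps_points(grid=(8, 6), jitter=0.05):
    """Place TPS control points on a regular grid in normalized [-1, 1]
    coordinates and randomly displace the target points.
    Returns (source, target), each of shape [grid[0]*grid[1], 2]."""
    gh, gw = grid
    ys = torch.linspace(-1, 1, gh)
    xs = torch.linspace(-1, 1, gw)
    src = torch.stack(torch.meshgrid(ys, xs, indexing="ij"), dim=-1).reshape(-1, 2)
    dst = src + jitter * torch.randn_like(src)   # random offset per control point
    return src, dst

# Warping the image with the TPS defined by (src, dst) produces the geometrically
# augmented view used to form training pairs.
src, dst = generate_random_tps_points(grid=(8, 6))
```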
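For (4), the NaN typically comes from averaging over an empty set: if no match passes the confidence threshold yet, the fine accuracy is the mean of zero elements. A minimal sketch with hypothetical tensor names and thresholds:

```python
import torch

def fine_accuracy(pred_offsets, gt_offsets, confidence, conf_thr=0.2, px_thr=3.0):
    """Sub-cell (fine) accuracy computed only over confident matches.
    When nothing passes conf_thr, the mean over an empty tensor is NaN,
    which is what `acc_f: nan` shows early in training."""
    mask = confidence > conf_thr                          # keep confident matches only
    err = (pred_offsets[mask] - gt_offsets[mask]).norm(dim=-1)
    return (err < px_thr).float().mean()                  # NaN if the mask selects nothing

conf = torch.rand(64) * 0.1                               # early training: all low confidence
print(fine_accuracy(torch.randn(64, 2), torch.randn(64, 2), conf))  # tensor(nan)
```

Once enough matches clear the threshold, `acc_f` becomes a finite number.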
