Access to non-nda non-ip training data for retraining OIDN #245

Open
etheory opened this issue Dec 9, 2024 · 0 comments
etheory commented Dec 9, 2024

Now, I am fully aware that the training data is proprietary and a competitive advantage, but getting access to a description of what it contains (how many images, of what type of content, and with what parameter ranges) would be VERY useful for anyone wanting to retrain OIDN from scratch (as Animal Logic / Netflix Animation Studios would like to do).

This would allow us to retrain using our own pixel filter, removing the need to use filtered-importance-sampled images (which we've always found an unfortunate burden).

It would also allow us to provide more training data for hair and fine curves, which also don't denoise very well in the current OIDN implementation.

Happy to continue this discussion at length.

It would also be good to know, without giving away anything specific, what the proprietary training data contains, so we could synthesize equivalent data from scratch with our own assets: i.e. n examples of feature x given y specifics, and a list of those covering the proprietary training set, to get us at least into the ballpark re: the number of example images we'd need for a decent result. A rough coverage summary along those lines, sketched below, would already be enough to work from.
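
For illustration only, something like the following would answer that question. Every category name, image count, and parameter range here is a made-up placeholder to show the level of detail being asked for, not a description of the actual OIDN dataset:

```python
# Hypothetical coverage manifest for a denoiser training set.
# All categories, counts, and ranges below are placeholders, purely to
# illustrate the kind of summary being requested -- none of it reflects
# the real OIDN training data.
from dataclasses import dataclass


@dataclass
class Coverage:
    category: str      # type of content in the images
    num_images: int    # how many example images of this type
    spp_range: tuple   # samples-per-pixel range of the noisy inputs
    notes: str = ""


example_manifest = [
    Coverage("interior scenes, diffuse-dominant", 500, (2, 64)),
    Coverage("exterior scenes, strong sun/sky",   300, (2, 64)),
    Coverage("glossy/specular-heavy materials",   200, (4, 128)),
    Coverage("hair and fine curves",              150, (8, 256),
             notes="the area we would most like to extend"),
    Coverage("volumes / participating media",     100, (8, 256)),
]

if __name__ == "__main__":
    for c in example_manifest:
        print(f"{c.category}: {c.num_images} images, "
              f"{c.spp_range[0]}-{c.spp_range[1]} spp")
```

Even a table at that level of granularity would let us budget how much of our own data to render per category.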
