Now, I am fully aware that the training data is proprietary and a competitive advantage, but knowing what it contains (how many images, of what type of content, and over what parameter ranges) would be VERY useful for anyone wanting to retrain OIDN from scratch, as Animal Logic/Netflix Animation Studios would like to.
This would allow us to retrain using our own pixel filter, removing the need to use filtered importance-sampled images (which we've always found an unfortunate burden).
It would also allow us to provide more training data for hair/fine curves, which don't denoise very well in the current OIDN implementation either.
Happy to continue this discussion at length.
It would also be good to know, without giving away anything specific, what the proprietary training data contains, so we could synthesize an equivalent set from scratch with our own data: i.e. n example images of feature x given y specifics, plus a list of those categories covering the proprietary training data. That would get us at least into the ballpark regarding the number of example images we'd need to get a decent result.
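To make the request concrete, here is a minimal sketch of the kind of machine-readable dataset summary that would be sufficient: counts per content category plus the parameter ranges the renders covered. This is purely hypothetical; the category names, scene counts, and spp levels below are invented placeholders, not anything from the actual OIDN dataset.

```python
from dataclasses import dataclass

@dataclass
class Category:
    """One content category in a hypothetical training-set manifest."""
    name: str             # content type, e.g. "hair/fine curves"
    num_scenes: int       # distinct scenes rendered for this category
    spp_levels: list[int] # noisy inputs rendered at these samples-per-pixel
    notes: str = ""       # anything else needed to reproduce the coverage

# Placeholder numbers only; the ask is for a breakdown at this level of detail.
MANIFEST = [
    Category("interiors",        num_scenes=120, spp_levels=[8, 32, 128]),
    Category("hair/fine curves", num_scenes=40,  spp_levels=[8, 32, 128],
             notes="mix of straight/curly grooms, varied curve widths"),
    Category("volumetrics",      num_scenes=30,  spp_levels=[16, 64]),
]

def total_images(manifest: list[Category]) -> int:
    """Each scene contributes one noisy image per spp level plus one
    high-spp reference, so total = scenes * (len(spp_levels) + 1)."""
    return sum(c.num_scenes * (len(c.spp_levels) + 1) for c in manifest)

if __name__ == "__main__":
    for c in MANIFEST:
        print(f"{c.name}: {c.num_scenes} scenes at spp {c.spp_levels}")
    print("total images:", total_images(MANIFEST))
```

Even a coarse manifest like this, published alongside the releases, would let us estimate how many of our own renders we'd need per category before committing to a retraining effort.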