Secure persistence: add extensibility #141
Thinking more about this, what do you think of supporting only `…`? This would mean we don't need to do much on our side, and they would only need to implement a sensible `…`. If we manage to move all sklearn models to the same path, then we can completely remove `…`.
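As a rough illustration of this dunder-based idea (the actual method names were elided from the source above, so `__secure_getstate__` here is a purely hypothetical placeholder), the dispatch on the persistence side could stay fully generic:

```python
# Sketch only: the hook name is hypothetical, not skops API.

def get_state(obj):
    """Serialize `obj` by delegating to a hook its own library implements."""
    hook = getattr(type(obj), "__secure_getstate__", None)
    if hook is None:
        raise TypeError(f"{type(obj).__name__} does not support secure persistence")
    return {
        "module": type(obj).__module__,
        "class": type(obj).__qualname__,
        "state": hook(obj),  # plain data produced by the library itself
    }
```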
That sounds good, we can start with `…`. Still, the user should have to opt in explicitly. Do you already have an API in mind?
I'm not sure what you mean by "user" in this context. I'm thinking of the `…`. I'm also still not sure whether we should load simple objects when the user has the corresponding library installed: if they have it installed, do they trust it?
That's what I meant. What would the API for this look like?

Previously, I would have said that if you installed a malicious library, you're probably already compromised, so there is nothing to be gained from not trusting it at this point. But IIUC, Python is moving in a direction of not allowing arbitrary code to be run during installation (avoiding `setup.py`).
Packages can run arbitrary code through `__init__.py`, despite the discussions about removing `setup.py`. While saving models, we save everything; it's only while loading that security issues come into play. Therefore I think simply relying on `…` should be fine.
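For reference, the "if they have it installed, they trust it" heuristic boils down to a check like the following; this is a sketch, not anything proposed verbatim in the thread:

```python
import importlib.util

def is_library_installed(module_name: str) -> bool:
    """True if `module_name` resolves to an installed package, without importing it."""
    return importlib.util.find_spec(module_name) is not None

# e.g. only reconstruct types coming from modules the user has installed:
# is_library_installed("sklearn") -> True on a typical skops setup
```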
Oh man, it never stops, does it? Okay, so we can basically assume that if a malicious package is installed, the user is already compromised.
Okay, let's do it this way and hopefully it'll be sufficient.
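A minimal sketch of what the explicit opt-in agreed on here could look like from the user's side; the `trusted` keyword mirrors the shape `skops.io.load` eventually exposed, but the exact signature is an assumption at this point in the thread:

```python
from skops.io import load

# Types outside the default allow-list must be opted in explicitly;
# an unknown type should raise instead of being loaded silently.
model = load("model.skops", trusted=["somelib.SomeEstimator"])
```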
A mechanism should be implemented that allows library authors to add secure persistence to their library, and users to opt in to those libraries.
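A hedged sketch of one way such a mechanism could look; all names here are hypothetical, since the issue deliberately leaves the design open:

```python
# Hypothetical extension registry; none of these names are skops API.
_DISPATCH: dict = {}

def register(cls, to_state, from_state):
    """Called by a library author to make `cls` securely persistable."""
    key = f"{cls.__module__}.{cls.__qualname__}"
    _DISPATCH[key] = (to_state, from_state)

def load_state(key, state, trusted):
    """Reconstruct an object, but only for types the user opted in."""
    if key not in trusted:
        raise ValueError(f"{key} is not in the user's trusted list")
    _, from_state = _DISPATCH[key]
    return from_state(state)
```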