How to wrap things up and where to place libtorch_cpu? #232

Closed
sonovice opened this issue Jul 17, 2020 · 4 comments

sonovice commented Jul 17, 2020

I have trained a model in Python and now I'm trying to deploy it as a simple Rust executable.

Here's the short code snippet:

extern crate tch;

use std::env;
use tch::IndexOp;

fn main() {
    // Expect the TorchScript model path and the input image path as arguments.
    let args: Vec<String> = env::args().collect();
    let model_file = &args[1];
    let image_file = &args[2];

    // Load the TorchScript module exported from Python.
    let model = tch::CModule::load(model_file).unwrap();

    // Load the image and scale pixel values from [0, 255] to [0, 1].
    let image = tch::vision::image::load(image_file)
        .unwrap()
        .to_kind(tch::Kind::Float)
        / 255;

    // Add a batch dimension, run the model, and turn the logits into probabilities.
    let output = model
        .forward_ts(&[image.unsqueeze(0)])
        .unwrap()
        .softmax(1, tch::Kind::Float)
        .squeeze();

    // Take the first output channel, rescale to [0, 255], and save it as a PNG
    // in the current working directory.
    let background = (output.i(0) * 255).to_kind(tch::Kind::Uint8).unsqueeze(0);
    let background_path = format!(
        "{}/output.png",
        env::current_dir().unwrap().into_os_string().into_string().unwrap()
    );
    tch::vision::image::save(&background, background_path).unwrap();
}

However, after building it, running ./main model.pt input.png always fails with: error while loading shared libraries: libtorch_cpu.so: cannot open shared object file: No such file or directory

Being new to Rust, I have no clue how to wrap things up so I can just pass a single executable to another Linux machine. Any advice would be highly appreciated!

(Side question: Is it possible to use a statically linked version of libtorch to decrease file size? I'm aware that this would need analysis of my model as well...)

@LaurentMazare
Owner

I'm not sure how Rust-related this is. Maybe you want to tweak your LD_LIBRARY_PATH to point at the directory where libtorch_cpu.so is - there are some details on this in the main readme.
As for your side question, I think that libtorch is only packaged as a shared library on the pytorch.org website, and I wouldn't know of a way to create a statically linked binary based on this.

@sonovice
Author

Thank you for the answer. Is it maybe possible to set LD_LIBRARY_PATH to the local folder (./) from inside the Rust executable?

@LaurentMazare
Owner

I'm not sure that is very easy to achieve; the Rust build system does not let you set up arbitrary flags. You could try hacking around to get -Wl,-rpath passed to the linker, but I wouldn't see how to do it in a non-hacky way. I think either setting up the LD_LIBRARY_PATH environment variable properly or putting the library in a directory where it's discoverable (e.g. /lib or /usr/lib) is probably the easiest way to go (you could just create a script that sets the environment variable and runs the binary, and deploy that script together with the binary and the libraries).
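
For illustration, here is a minimal sketch of the -Wl,-rpath idea mentioned above, assuming a Cargo version new enough (1.56+) to support the cargo:rustc-link-arg build-script instruction; it is one possible workaround, not something tch-rs provides out of the box:

// build.rs (next to Cargo.toml) -- hypothetical sketch, not part of tch-rs
fn main() {
    // Ask rustc to pass an rpath of $ORIGIN to the linker for the final binary.
    // At run time the dynamic loader expands $ORIGIN to the directory containing
    // the executable, so libtorch_cpu.so and the other libtorch shared objects
    // can simply be shipped alongside the binary.
    println!("cargo:rustc-link-arg=-Wl,-rpath,$ORIGIN");
}

With that rpath embedded, copying the libtorch shared objects (libtorch_cpu.so, libc10.so, and their dependencies) into the same directory as the deployed binary should let it start without any LD_LIBRARY_PATH tweaking; the wrapper-script approach suggested above works just as well and needs no linker flags.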

@sonovice
Author

Alright, thank you for your answer and the suggestions. I will close this issue since it's not directly related to tch-rs.
