
Calling methods on pytorch Generator causes segfault #1259

Closed
sbrunk opened this issue Nov 6, 2022 · 9 comments

@sbrunk
Contributor

sbrunk commented Nov 6, 2022

Something seems odd with the mapping of Generator: calling seed- and state-related methods on a Generator instance triggers a segfault.

For instance running this snippet:

//> using lib "org.bytedeco:pytorch-platform:1.12.1-1.5.8"

val g = new org.bytedeco.pytorch.Generator()
g.current_seed()
#
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0x000000016bd576d2, pid=14945, tid=9731
#
# JRE version: OpenJDK Runtime Environment Temurin-17+35 (17.0+35) (build 17+35)
# Java VM: OpenJDK 64-Bit Server VM Temurin-17+35 (17+35, mixed mode, sharing, tiered, compressed oops, compressed class ptrs, g1 gc, bsd-amd64)
# Problematic frame:
# C  [libjnitorch.dylib+0x13b6d2]  Java_org_bytedeco_pytorch_Generator_current_1seed+0x42
...

Any ideas what could be wrong here?

@saudet
Member

saudet commented Nov 7, 2022

That looks like an abstract class. Did you try to create an instance of a subclass instead?

@saudet
Member

saudet commented Nov 13, 2022

It looks like the way to create a Generator is with make_generator(), so we should work on mapping that, I guess?
https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/test/cpu_rng_test.cpp

@sbrunk
Contributor Author

sbrunk commented Nov 13, 2022

Thanks for having a look and apologies, I missed your first response.

In the meantime I also realized that functions like randperm() that take an optional generator also respect setting the global RNG state via manual_seed, which unblocked me while writing deterministic tests. :)

But being able to call make_generator() to create Generator instances is still a good idea IMHO.

@sbrunk
Contributor Author

sbrunk commented May 20, 2023

I'd like to pick this up again. Generator.h is already in the presets, and the make_generator function seems to be defined here in C++:

https://github.com/pytorch/pytorch/blob/66a2600b6aadc5ee18ad4259f8a9f745dbaddc9a/aten/src/ATen/core/Generator.h#L143-L146

template<class Impl, class... Args>
Generator make_generator(Args&&... args) {
  return Generator(c10::make_intrusive<Impl>(std::forward<Args>(args)...));
}

Do we need additional mapping infos because it is a template function?

@HGuillemet
Collaborator

Yes, we need explicit infos to instantiate the function template.
In the new version of the presets I'm working on, I'll add make_generator_cuda and make_generator_cpu to map at::make_generator<at::CUDAGeneratorImpl,int8_t> and at::make_generator<at::CPUGeneratorImpl,uint64_t> respectively.
Would that be enough?

@sbrunk
Contributor Author

sbrunk commented May 21, 2023

Yes, that'd be great. In the Python implementation, the Generator constructor takes a device and then decides which make_generator instantiation to call:

  if (device.type() == at::kCPU) {
    self->cdata = make_generator<CPUGeneratorImpl>();
  }
#ifdef USE_CUDA
  else if (device.type() == at::kCUDA) {
    self->cdata = make_generator<CUDAGeneratorImpl>(device.index());
  }
#elif USE_MPS
  else if (device.type() == at::kMPS) {
    self->cdata = make_generator<MPSGeneratorImpl>();
  }
#endif

So if we have the cpu and cuda variants (and mps in the future as well once we support building against mps), it should be straightforward to do it in a similar way.

Out of curiosity, is the new version of the presets you are working on a major overhaul?

@HGuillemet
Collaborator

HGuillemet commented May 21, 2023

There will be 4 variants:

@Namespace("at") public static native @ByVal @Name("make_generator<at::CUDAGeneratorImpl>") Generator make_generator_cuda();
@Namespace("at") public static native @ByVal @Name("make_generator<at::CUDAGeneratorImpl,int8_t>") Generator make_generator_cuda(@Const byte device_index);
@Namespace("at") public static native @ByVal @Name("make_generator<at::CPUGeneratorImpl>") Generator make_generator_cpu();
@Namespace("at") public static native @ByVal @Name("make_generator<at::CPUGeneratorImpl,uint64_t>") Generator make_generator_cpu(long seed_in);

> Out of curiosity, is the new version of the presets you are working on a major overhaul?

Yes. Hopefully coming soon as a PR.
Do you have any additional wishes for this version?

@sbrunk
Contributor Author

sbrunk commented May 21, 2023

> Yes. Hopefully coming soon as a PR.

Looking forward to it.

> Do you have any additional wishes for this version?

None that I know of yet. Right now I'm just curious how this might affect the Scala bindings I'm working on.

@sbrunk
Contributor Author

sbrunk commented Aug 15, 2023

With #1360 merged, we can now use the make_generator_cpu() and make_generator_cuda() variants.


3 participants