Add API for getting a bool with chance of exactly 1-in-10 or 2-in-3 #491
Conversation
One thing that really confused me in this patch is why the existing code below compiles:

```rust
#[derive(Clone, Copy, Debug)]
pub struct Uniform<X: SampleUniform> {
    inner: X::Sampler,
}

pub trait SampleUniform: Sized {
    type Sampler: UniformSampler<X = Self>;
}

pub trait UniformSampler: Sized {
    ...
}
```

How can that derive `Copy` when nothing requires `X::Sampler` to be `Copy`? As soon as I did

```rust
#[derive(Clone, Copy, Debug)]
pub struct Bernoulli<T: SampleUniform> {
    /// Probability of success, relative to the maximal integer.
    distribution: Uniform<T>,
    limit: T,
}
```

rustc started complaining that the bounds weren't satisfied.
Bias due to FP rounding is actually very small, something like 1 in 2^53 (so the effect of the bias is only visible with more than 2^53 samples).

This isn't bias. It's supposed to round like that. It just ended up being the easiest way to get the constant we wanted.
Sorry, I think I didn't express that very clearly. What I mean is that … Isn't this why …?
Interesting! I'll see if I can take advantage of that.

Doing … just gets me: …
Ah, but it does work if I do:

```rust
#[derive(Clone, Copy, Debug)]
pub struct Bernoulli<T: SampleUniform + PartialOrd> {
    /// Probability of success, relative to the maximal integer.
    distribution: T::Sampler,
    limit: T,
}
```

No idea why, though. Happy to do that if that's preferred?
Integer sampling can have huge biases without a rejection zone when the range being sampled is within an order of magnitude of the sample space range. Granted, both integer and FP arithmetic are binary (and thus cannot represent fractions perfectly when the denominator is not a power of 2), but the biases are typically small and (especially with 64 bits) hard to observe. This isn't to say we shouldn't add exact fractional sampling, but it does significantly reduce the motivation.
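(A concrete example of that bias, for illustration: naively reducing a random u8 — 256 equally likely values — modulo a range of 200 gives each of the values 0–55 two preimages but each of 56–199 only one, so the small values come up twice as often. A rejection zone discards the 56 surplus values to restore uniformity.)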
I guess I don't understand why we then bother with rejection zones for the types u8, u16 and u32, given that most RngCore implementations generate a u64 as fast as a u32? Skipping them would save both the zone calculations and the branches for checking whether we're in the zone.

This is true, but I guess you misread the code: the …
I feel like we're talking past each other. What I'm suggesting, if we don't care about small biases, is something like this for u8/i8/u16/i16: … and for u32/i32: … Potentially the last line could be written as … Either way, this is notably faster than both the current … Based on my reading of the code, I thought the whole point of the rejection zone was to remove exactly this kind of bias. Is the purpose something else?
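(For illustration — a sketch of the general no-rejection approach under discussion here, not code from the PR; the function name is made up:)

```rust
use rand::RngCore;

/// Sample from `0..range` by widening to u64 and keeping the high 32
/// bits of the product. No rejection zone, so this carries a relative
/// bias of up to about `range / 2^32`, but it needs only one
/// multiplication and no branches.
fn sample_no_reject(rng: &mut impl RngCore, range: u32) -> u32 {
    ((rng.next_u32() as u64 * range as u64) >> 32) as u32
}
```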
You're right @sicking; this comes down to whether we care about small biases or not. To be honest I never thought seriously about sampling small integer ranges like that, which is also part of the reason. In your example, to sample … One thing to bear in mind is the limits of current computers — approximately 2^32 cycles per second with about 2^25 seconds per year, i.e. 2^57 cycles per core per year. This is far larger than 2^24, though admittedly also larger than 2^53, so I guess you are correct that we should care about that bias. As for the approach, I'm wondering why you chose to make … rather than something like:

```rust
fn gen_ratio(&mut self, num: u64, denom: u64) -> bool {
    assert!(denom >= 1);
    num > self.gen_range(0, denom)
}
```
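(To spell out why this comparison works: `gen_range(0, denom)` is uniform over the `denom` values `0, 1, …, denom - 1`, and `num > x` holds for exactly `num` of them, so the expression is `true` with probability exactly `num/denom` — as exact as `gen_range` itself, anyway.)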
I totally agree that the biases here are really small. I don't have a strong opinion about how large a bias we should worry about; I just think we should be consistent. Since we seemed to care about the bias elsewhere, I was surprised that we wouldn't for bool generation as well. But I also think there's a place for supporting slightly biased, perf-optimized sampling, as @pitdicker suggests here. I was planning on exploring that elsewhere. It's somewhat tricky, since simply adding new …

The code that you've posted is exactly what's in the PR, minus the assert, since I added …

Oh, I guess your … Supporting any …
Also, to be clear, the idea of this API wasn't just to fix bias issues. It was also to have a nice-to-use API for cases where you want specifically a bool with an N-in-M chance. I think it's common to think of an event's chance as "2 out of 3" rather than p=0.6667, especially when dealing with discrete events like dice rolls or card draws. Possibly that was why we ended up with a discrete API like … So having an API that reflects that made sense to me. But the ability to remove bias is certainly also a goal.

Actually, the bias is just under 1 in (2^32 - 2^8), i.e. about one in 4 billion.
I like the idea of an extra … But I think the idea that … Can an implementation of

```rust
pub fn new_ratio(numerator: u32, denominator: u32) -> Bernoulli;
```

…?
For small integers, and without using the modulus method, sampling from ranges with a tiny bias but better performance is for many uses a good trade-off. It still seems useful to me, but needs some careful documentation detailing when it can, and should not, be used.

I agree that detecting a bias in …
Sure, simply doing:

```rust
pub fn new_ratio(numerator: u32, denominator: u32) -> Bernoulli {
    Self::new(numerator as f64 / denominator as f64)
}
```

would work.
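(Note that this inherits the f64 rounding discussed above: 1/10 is not exactly representable as an f64, so a ratio constructed this way is only as exact as `gen_bool(0.1)`.)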
Pushed a patch which is more in line with what it sounds like people are thinking. I left `new_ratio` as:

```rust
pub fn new_ratio(numerator: u32, denominator: u32) -> Bernoulli {
    Self::new(numerator as f64 / denominator as f64)
}
```

I also changed the signature of … As for implementing …
So it seems pretty clear that we should go with the middle approach. I also looked at implementing … However it always made … For either method I'm happy to go with whatever implementation approach, though.

I don't see much point in adding a method that just does … I opened #494 to discuss this type of compromise, but I don't really have an answer.
I see it both as an ease-of-use thing and as something that gives us the flexibility to change the implementation if a better strategy becomes available. For example, if we decide that a slightly biased … I also think it's helpful in cases where developers don't realize that the implementation is as simple as it is, or where it's not obvious what the fastest implementation strategy is. You could similarly make the argument that … But adding convenience functions definitely needs to be weighed against the cost of having a larger API. I wouldn't think that cost was worth it if it weren't for the fact that I think getting a bool with an M-in-N chance is very common. The whole reason I got interested in contributing to rand is that when I started coding Rust and needed random numbers, the rand 0.3 API felt clunky enough that I just opted for writing my own rng suited to my needs. The API now is certainly a lot better than it was in 0.3, and I'm sure I was affected by being a Rust beginner not yet familiar with idiomatic Rust. However, I still feel like we can make the API easier to use. Which I know doesn't mean adding any conceivable API and hoping that it gets used (in fact, my first PR was to remove API). But I do think it'll involve adding convenient APIs for the most common operations.
We already had some long discussions on … I am tentatively in favour of the other changes (adding …).
I don't have a strong opinion. I originally used a generic implementation, but changed it based on @pitdicker's comment. In practice I would expect the most common use case will be to call with constants, like … But I'm happy to go either way.

Yes! Unless there's a strong convention to use …

I think the added usability is worth it, but it's obviously a judgement call. I don't have a strong opinion about implementation strategy unless we decide that we care about even minimal biases. I don't know that the size matters much, given how good Rust is at keeping things on the stack or inline with other objects. But the new approach is certainly both simpler and faster, just not as exact.
Well, I think we could add this, but a few changes are needed to the PR as mentioned.

Also please clean up the commits: rebase and squash (or drop) a few commits. We don't need to implement and then immediately remove some functionality again. (You may want to preserve your other implementation of `Bernoulli::new_ratio` in a separate branch, though I don't think it's likely we'll use it.)
src/distributions/bernoulli.rs (Outdated)

```rust
///
/// # Panics
///
/// If `denominator` == 0 or `numerator` > `denominator`.
```
Quotes around the whole comparison please (`denominator == 0`).
src/lib.rs (Outdated)

```rust
        let d = distributions::Bernoulli::new(p);
        self.sample(d)

    fn gen_ratio(&mut self, numerator: u32, denominator: u32) -> bool {
        self.gen_bool(numerator as f64 / denominator as f64)
```
Unless there's a good reason for this, I think we should just wrap `Bernoulli::from_ratio` instead.
src/lib.rs (Outdated)

```
@@ -533,16 +535,34 @@ pub trait Rng: RngCore {
     /// let mut rng = thread_rng();
     /// println!("{}", rng.gen_bool(1.0 / 3.0));
     /// ```
     #[inline]
     fn gen_bool(&mut self, p: f64) -> bool {
         assert!(0.0 <= p && p <= 1.0);
```
As mentioned, please don't change this function in this PR. If you really think it needs changing, make a new PR/issue, because it's a significant discussion point on its own.
src/distributions/bernoulli.rs (Outdated)

```rust
#[inline]
pub fn new_ratio(numerator: u32, denominator: u32) -> Bernoulli {
    Self::new(numerator as f64 / denominator as f64)
}
```
If you change this a bit it can improve performance:

```rust
pub fn new_ratio(numerator: u32, denominator: u32) -> Bernoulli {
    assert!(numerator <= denominator);
    if numerator == denominator {
        return Bernoulli { p_int: ::core::u64::MAX }
    }
    const SCALE: f64 = 2.0 * (1u64 << 63) as f64;
    let p_int = ((numerator as f64 / denominator as f64) * SCALE) as u64;
    Bernoulli { p_int }
}
```
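(For readers wondering about the constant: `2.0 * (1u64 << 63) as f64` evaluates exactly to 2^64, which can't be written as a single u64 literal; multiplying a probability in [0, 1) by it maps onto the full u64 range used for `p_int`.)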
Before (with `Rng::gen_ratio` changed to use `Bernoulli::new_ratio`):

```
test misc_gen_ratio_const ... bench:       3,961 ns/iter (+/- 2)
test misc_gen_ratio_var   ... bench:       8,383 ns/iter (+/- 31)
```

After:

```
test misc_gen_ratio_const ... bench:       3,980 ns/iter (+/- 9)
test misc_gen_ratio_var   ... bench:       7,351 ns/iter (+/- 84)
```
It would be nice if there were a method to calculate the 'cutoff integer' more accurately than converting to floats and back, but I can't think of anything that is not 15× slower.
benches/misc.rs (Outdated)

```rust
b.iter(|| {
    let mut accum = true;
    for i in 2..(::RAND_BENCH_N as u32 + 2) {
        accum ^= rng.gen_ratio(1, i);
```
Good benchmark 👍. In theory the multiplication by the numerator can be optimized out. What do you think of:

```rust
for i in 1..(::RAND_BENCH_N as u32 + 1) {
    accum ^= rng.gen_ratio(i, i + 1);
}
```
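(Passing a varying `i` as the numerator keeps that multiplication live, so the benchmark measures the full cost rather than a constant-folded special case.)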
Looks good!
```
@@ -401,7 +401,7 @@ pub trait Rng: RngCore {
     /// ```
     ///
     /// [`Uniform`]: distributions/uniform/struct.Uniform.html
-    fn gen_range<T: PartialOrd + SampleUniform>(&mut self, low: T, high: T) -> T {
+    fn gen_range<T: SampleUniform>(&mut self, low: T, high: T) -> T {
```
👍
src/lib.rs (Outdated)

```diff
 fn gen_weighted_bool(&mut self, n: u32) -> bool {
-    // Short-circuit after `n <= 1` to avoid panic in `gen_range`
-    n <= 1 || self.gen_range(0, n) == 0
+    // Short-circuit after `n <= 1` to avoid panic in `gen_ratio`
```
This comment is not correct with the change. But maybe it is best not to change this deprecated method (it will also be removed soon; see #499).
I don't see what's incorrect about the comment, but I'll leave this function unchanged.
A … For the rest I agree with the remaining comments of @dhardy 😄.
@dhardy I think it is nice to merge this PR first, and I'll rebase #500.

One more idea. Not really sure about it:

```rust
#[inline]
pub fn new_ratio(numerator: u32, denominator: u32) -> Bernoulli {
    #[inline]
    fn div_rem(num: u64, divisor: u64) -> (u64, u64) {
        (num / divisor, num % divisor)
    }

    assert!(numerator <= denominator);
    if numerator == denominator {
        return Bernoulli { p_int: ::core::u64::MAX }
    }
    let num = numerator as u64;
    let denom = denominator as u64;
    // Because we can't do `div = 2^64 / denom`, we do
    // `div = (2^64 - denom) / denom + 1`.
    let (div, rem) = div_rem(0u64.wrapping_sub(denom), denom);
    let p_int = (div + 1) * num + rem * num / denom;
    Bernoulli { p_int }
}
```

This makes … @sicking It seems to me …
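(A quick worked check of the arithmetic above, not from the thread: `0u64.wrapping_sub(denom)` is `2^64 - denom`, so `2^64 = denom * (div + 1) + rem`, and therefore `p_int = (div + 1) * num + rem * num / denom` equals `floor(2^64 * num / denom)` exactly, in pure integer arithmetic. For `num = 1, denom = 3` this gives `p_int = 6148914691236517205 = floor(2^64 / 3)`, the exact 1-in-3 cutoff, with no float rounding anywhere.)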
I'll have to check the math on that code in the morning. But I assume you're correct about the precision :). I don't think the added precision is worth making … But that's pretty easily fixable by using the ctor you are proposing, but making …
Okay, will read again but after the doc change I think we can finally merge this!
```
@@ -523,8 +523,6 @@ pub trait Rng: RngCore {

     /// Return a bool with a probability `p` of being true.
-    ///
-    /// This is a wrapper around [`distributions::Bernoulli`].
```
Please at least mention this in the doc; same with `gen_ratio` below.
It's a comment, so I'll add this back if really desired. However, I don't think it's a good idea to promise a particular implementation strategy. That just seems like it would complicate future arguments about whether changing the implementation is a breaking change or not.

If we want a pointer to `Bernoulli`, maybe do what we do for `gen_range` and say: See also the [`Bernoulli`] distribution type, which may be faster if sampling with the same probability repeatedly.
Sure, just a pointer to `Bernoulli` is sufficient. You're right that we don't need to specify this constraint in the doc, though I don't see it as a big deal.
The `gen_weighted_bool` function is being deprecated in 0.5 since it's not very flexible and can only generate a bool with ratios 1-in-N.

However the `gen_bool` replacement has the problem that it can't generate bools with exact ratios. So for example `rng.gen_bool(0.1)` won't return `true` exactly 10% of the time, since an f64 can't express exactly the value 0.1. Additionally, the implementation of `Bernoulli` introduces further bias in the multiplication with `u64::MAX` and the rounding to u64.

Another way to look at it is that since the implementation of `Bernoulli` generates a u64 without rejecting any values, it's impossible to generate an unbiased 1/10 distribution. Same thing with `rng.gen_bool(2/3)` or any other ratio.

This PR adds an `rng.gen_ratio(numerator, denominator)` function, which takes two numbers and generates a boolean which is `true` exactly `numerator/denominator` of the time. Or at least as exactly as our `Uniform` implementation is for the given type. So `rng.gen_ratio(2u8, 3u8)` will be unbiased, whereas `rng.gen_ratio(2f32, 3f32)` will suffer some rounding errors.

This PR also adds `Bernoulli::new_ratio`, which creates a `Distribution<bool>` with the same property.

I'm not a particularly big fan of the names `gen_ratio` and `new_ratio`. Happy for better suggestions.
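(To close, a usage sketch of the API as described in this thread; it follows the names used here, such as `new_ratio`, which may differ from what was eventually merged:)

```rust
use rand::distributions::{Bernoulli, Distribution};
use rand::{thread_rng, Rng};

fn main() {
    let mut rng = thread_rng();

    // One-off draw: `true` with a 2-in-3 chance.
    if rng.gen_ratio(2, 3) {
        println!("hit");
    }

    // Repeated draws with the same probability: build the
    // distribution once, then sample it.
    let one_in_ten = Bernoulli::new_ratio(1, 10);
    let hits = (0..1000).filter(|_| one_in_ten.sample(&mut rng)).count();
    println!("{} hits out of 1000", hits);
}
```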