Just wanted to point this out: occasionally you get the following error when using inference-benchmark:
2024-09-17 07:56 INFO User selected random dataset. Generating prompt and output lengths from distributions
Traceback (most recent call last):
File "/Users/vv/Documents/code_local/flexible-inference-bench-main/flex-env/bin/inference-benchmark", line 8, in <module>
sys.exit(main())
File "/Users/vv/Documents/code_local/flexible-inference-bench-main/flex-env/lib/python3.10/site-packages/flexible_inference_benchmark/main.py", line 240, in main
requests_prompts = generate_prompts(args, size)
File "/Users/vv/Documents/code_local/flexible-inference-bench-main/flex-env/lib/python3.10/site-packages/flexible_inference_benchmark/main.py", line 88, in generate_prompts
data = prompt_cls.generate_data(size_adjusted)
File "/Users/vv/Documents/code_local/flexible-inference-bench-main/flex-env/lib/python3.10/site-packages/flexible_inference_benchmark/engine/data.py", line 193, in generate_data
data = list(self.token_distribution.generate_distribution(lengths[i] + self.num_trials))
File "/Users/vv/Documents/code_local/flexible-inference-bench-main/flex-env/lib/python3.10/site-packages/flexible_inference_benchmark/engine/distributions.py", line 47, in generate_distribution
return [int(elem) for elem in np.random.randint(self.low, self.high, size)]
File "numpy/random/mtrand.pyx", line 798, in numpy.random.mtrand.RandomState.randint
File "numpy/random/_bounded_integers.pyx", line 1343, in numpy.random._bounded_integers._rand_int64
ValueError: negative dimensions are not allowed
It seems the random generation sometimes produces a negative number, and it truly is random, because just running the command again works perfectly. Not sure whether this is a bug or expected behavior; just wanted to bring it up.
I see. The input token length uses a normal distribution with mean 20 and std 10, so most values fall between 10 and 30, but at the lower tail it can occasionally sample zero or a negative length, which would trigger this error. I will take a look later; for now, try lower std values like 2 or 5.
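For context, the failure mode and a possible mitigation can be sketched in a few lines. This is illustrative only: the function name `sample_token_lengths` and the clamp-to-1 approach are assumptions, not the benchmark's actual code. It shows why a negative sampled length propagates into `np.random.randint` as a negative `size` and raises "negative dimensions are not allowed":

```python
import numpy as np

# Reproduce the underlying error: np.random.randint rejects a negative size.
try:
    np.random.randint(0, 100, -3)
except ValueError as e:
    print(e)  # negative dimensions are not allowed

# Hypothetical fix: clamp sampled lengths to at least 1 token before
# they are used as a size argument downstream.
def sample_token_lengths(mean: float, std: float, n: int, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    lengths = rng.normal(mean, std, n)
    # With mean 20 and std 10, roughly 2% of draws fall at or below 0;
    # clamping guarantees every length is a valid positive size.
    return np.maximum(lengths.astype(int), 1)

lengths = sample_token_lengths(20, 10, 10_000)
assert (lengths >= 1).all()
```

Clamping (or rejection-sampling until the draw is positive) keeps the distribution roughly intact while making the benchmark deterministic in the sense that it never crashes on an unlucky draw.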