It would be cool to correlate results across versions of Julia and across different systems (specific combinations of hardware (GPU, CPU, SSD, etc.) and software (OS, locally compiled libs, etc.)). To do that in a scalable fashion, you would need to harvest all the specs of the local system the benchmark runs on and upload the results to some open database.
Then people would be able to see which specific system configurations perform relatively poorly, and maybe what could be done about it. If enough people submitted, we could even say how many users are affected by relatively poor performance, based on how many share a given configuration. And if this gets really big, the test suite might attract more and more package authors as a means of testing their code on a huge variety of configurations; they could then use that feedback to adjust their code to cater to more systems.
Of course, for this to take off, it would be imperative to make submitting as easy as possible:
```julia
using SystemBenchmark
runsubmit() # runs the benchmark AND submits it online
```
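For illustration, here is a minimal sketch of what such a `runsubmit` could do under the hood, assuming HTTP.jl and JSON3.jl for the upload and a placeholder endpoint URL (none of this exists in SystemBenchmark today):

```julia
using SystemBenchmark            # provides runbenchmark()
using HTTP, JSON3, Tables        # HTTP/JSON3 and the endpoint below are assumptions

# Hypothetical sketch: run the suite, attach basic local system specs,
# and POST everything to an open results database.
function runsubmit(endpoint = "https://example.org/api/submit")  # placeholder URL
    res = runbenchmark()                       # table of benchmark timings
    payload = (
        julia_version = string(VERSION),
        os            = string(Sys.KERNEL),
        cpu           = Sys.cpu_info()[1].model,
        nthreads      = Threads.nthreads(),
        results       = Tables.rowtable(res),  # rows as NamedTuples for JSON
    )
    HTTP.post(endpoint, ["Content-Type" => "application/json"], JSON3.write(payload))
end
```

The payload above carries just enough specs to group submissions by configuration; a real implementation would want the fuller spec harvest described earlier.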
Agreed. I couldn't figure out a non-vulnerable way to set up an anonymous upload bucket, so for now I've set up #8 with a note in the readme; we can pull the data in manually as people submit it.
It'd be great to let people easily upload raw benchmark data. FWIW, https://github.com/simeonschaub/ArtifactUtils.jl has an upload_to_gist function that uploads an artifact to a gist using git and gh, which could be useful here. Maybe it's also possible to post the URL and the artifact SHA1 to #8 automatically using gh. It does require people to log in via the gh CLI, though.
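Going by the ArtifactUtils.jl README, the upload flow could look roughly like this sketch (the results directory name is made up):

```julia
using ArtifactUtils

# Pack a directory of benchmark output into an artifact tarball;
# artifact_from_directory returns its content hash (a SHA1).
artifact_id = artifact_from_directory("benchmark_results")  # hypothetical path

# Upload the tarball to a GitHub gist via the `gh` CLI
# (requires a prior `gh auth login`).
gist = upload_to_gist(artifact_id)
```

The returned gist URL and the artifact SHA1 are exactly the two pieces one would want to post back to #8.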