
alioth.debian.org #660

Closed
mictadlo opened this issue Apr 2, 2012 · 18 comments
Labels
help wanted Indicates that a maintainer wants help on an issue or pull request

Comments

@mictadlo

mictadlo commented Apr 2, 2012

Hello,
Could you write Julia versions of the example programs at http://alioth.debian.org/scm/viewvc.php/shootout/bench/?root=shootout? This would show where the problems are, and also help newbies understand the language better.

@JeffBezanson
Member

It would be great to have these, but there are a lot of benchmarks there and we're probably not going to budget time to go through and implement all of them. It will probably take a group of people over a period of time to get this done.

@StefanKarpinski
Member

@mictadlo: if you're interested in getting more familiar with Julia, implementing some of these benchmarks would be an excellent way to start. Another issue is that it may be fairly difficult to convince the shootout to include Julia.

@ViralBShah
Member

This is a good idea, but it clearly needs to be a community effort, as pointed out above. @mictadlo, would you like to anchor this: post to the mailing list, and maybe start on a couple? These can live in our performance tests repository, and once they are complete, we can see if the folks at alioth will add them. I think they will if we have a Debian package by then. All in all, it will take a while, but let's get started.

@dcjones
Contributor

dcjones commented Apr 3, 2012

To get you started, here's the "fasta" benchmark, which I had implemented to get to know Julia: https://gist.github.com/2288846

This one is informative because the performance is a bit lagging. (It's slower than the Python program, for example.) There are a lot of operations on strings, which makes me think it's related to #661.
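For anyone poking at the string overhead: fasta entries in other languages usually work on byte buffers rather than strings. A minimal sketch in current Julia (the function name `write_repeat` is illustrative, not from the gist above):

```julia
# Sketch: emit a repeated sequence in 60-column FASTA lines by cycling
# over a byte vector, avoiding per-character string concatenation.
function write_repeat(io::IO, seq::AbstractString, n::Integer; width::Integer = 60)
    bytes = codeunits(seq)   # read-only byte view of the string
    len = length(bytes)
    pos = 0
    while n > 0
        linelen = min(width, n)
        for _ in 1:linelen
            write(io, bytes[pos + 1])
            pos = (pos + 1) % len   # wrap around the sequence
        end
        write(io, '\n')
        n -= linelen
    end
end
```

For example, `write_repeat(io, "ACGT", 10; width = 4)` writes `ACGT`, `ACGT`, `AC` on three lines.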

@ViralBShah
Member

Excellent! Can you create a pull request and add it to examples?

-viral


@ViralBShah
Member

Also, I think we should open a new issue to track this specific performance problem.

-viral


@mictadlo
Author

mictadlo commented Apr 3, 2012

@JeffBezanson
Member

The global rng_state will be a problem. Are we not allowed to use the usual rand() for this? If not, putting a declaration on uses of that global might fix it completely.

@markhend

markhend commented Apr 7, 2012

rand() - no
They call out the required random generator at the bottom of the page.
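For reference, the generator the benchmarks game mandates is the linear congruential generator below (IM, IA, IC, and the seed of 42 are fixed by the rules). One way to address Jeff's point about the global state, sketched in current Julia, is to keep it in a `const` `Ref` so the compiler knows its type:

```julia
# Constants specified by the benchmarks game rules.
const IM = 139968
const IA = 3877
const IC = 29573

# A const Ref gives the state a concrete type, avoiding the
# untyped-global penalty mentioned above.
const rng_state = Ref(42)

function gen_random(max::Float64)
    rng_state[] = (rng_state[] * IA + IC) % IM
    return max * rng_state[] / IM
end
```

This is just a sketch of the required generator, not a claim about the fastest formulation; passing the state explicitly would work too.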

@dcampbell24
Contributor

The chameneos-redux and thread-ring benchmarks both require pre-emptive threads, but I do not see any documentation about using pre-emptive threads in Julia. How should I implement these? Use C calls to pthreads? I could also implement versions using tasks, which might be considered "interesting alternatives" and would help tell us how well tasks are working. Let me know, thanks.
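A cooperative, task-based thread-ring along the lines of the "interesting alternative" idea could look like the sketch below in current Julia, using `Channel`s (the names `thread_ring`, `nworkers`, and `hops` are illustrative, not from any official harness):

```julia
# nworkers tasks in a ring pass a decrementing token; the worker
# that receives the token at zero is the answer.
function thread_ring(nworkers::Int, hops::Int)
    chans = [Channel{Int}(1) for _ in 1:nworkers]
    done = Channel{Int}(1)
    for i in 1:nworkers
        nxt = chans[mod1(i + 1, nworkers)]
        @async for token in chans[i]
            if token == 0
                put!(done, i)          # this worker received the final hop
            else
                put!(nxt, token - 1)   # pass the token along the ring
            end
        end
    end
    put!(chans[1], hops)               # inject the token at worker 1
    winner = take!(done)
    foreach(close, chans)              # let the worker tasks finish
    return winner
end
```

This is cooperative scheduling, not the pre-emptive threading the rules ask for, so it would only qualify as an alternative version.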

@quinnj
Member

quinnj commented Apr 8, 2013

So I took a stab at improving the spectral-norm implementation, and I think it's a significant improvement (293s -> 10-12s).
I'd like to get some other eyeballs on this, though, to see if there's anything non-idiomatic or any other tweaks to make before opening a pull request.

https://gist.github.com/karbarcca/5340631
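For anyone comparing notes, the kernel itself is small. A straightforward, unoptimized sketch of the benchmark's definition (A(i,j) = 1/((i+j)(i+j+1)/2 + i + 1) with zero-based indices, ten rounds of power iteration as in the reference programs), written in current Julia and not taken from the gist above:

```julia
using LinearAlgebra  # for dot

# Matrix entry as specified by the benchmarks game (zero-based i, j).
A(i, j) = 1.0 / ((i + j) * (i + j + 1) ÷ 2 + i + 1)

function mul_AtA!(out, tmp, v)                 # out = AᵀA v
    n = length(v)
    for i in 0:n-1
        tmp[i+1] = sum(A(i, j) * v[j+1] for j in 0:n-1)   # tmp = A v
    end
    for i in 0:n-1
        out[i+1] = sum(A(j, i) * tmp[j+1] for j in 0:n-1) # out = Aᵀ tmp
    end
    return out
end

function spectralnorm(n::Int)
    u, v, tmp = ones(n), zeros(n), zeros(n)
    for _ in 1:10                              # power iteration
        mul_AtA!(v, tmp, u)
        mul_AtA!(u, tmp, v)
    end
    return sqrt(dot(u, v) / dot(v, v))
end
```

The published expected output for n = 100 is 1.274219991, which is a handy sanity check for any optimized version.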

@pao
Member

pao commented Apr 8, 2013

@karbarcca Please do go ahead and make it a pull request; that makes it easiest to review. Use "RFC:" at the beginning of the pull request title if you are looking for feedback.

@timholy
Member

timholy commented Apr 9, 2013

That's a nice improvement!

@quinnj
Member

quinnj commented May 28, 2013

The last two benchmarks require use of pre-emptive threads (chameneos-redux and thread-ring), which Julia, AFAIK, doesn't officially support, so we can probably close this issue.

chameneos: http://benchmarksgame.alioth.debian.org/u32/performance.php?test=chameneosredux#about
thread-ring: http://benchmarksgame.alioth.debian.org/u32/performance.php?test=threadring

@ViralBShah
Member

Should we continue having these in base and integrate them into our rudimentary perf framework, or move them into a separate package, as a first step towards encouraging more benchmarks to be written in Julia?

@quinnj
Member

quinnj commented May 29, 2013

There seem to be quite a few "code example" repos of various kinds; I wonder if it would make sense to consolidate them into a single "Code-Examples" package. A few candidates:

  • The shootout benchmarks
  • The homepage benchmarks
  • julia-tutorial repo under JuliaLang
  • The examples folder in the julia repo
  • The Rosetta-Code repo I created (we're above 70 tasks now)
  • Possibly even the RDatasets.jl package (I think the majority of R tutorials/examples I see reference one of these datasets)

This is quite a lot when listed out, so we should probably do some organizing/trimming. Maybe have a few categories like Popular Benchmarks (focus on performance), Popular Algorithms, and Common Tasks.
We would probably want a standard of sorts for how the code is formatted (generous commenting? expected results in comments or assertions?). The tutorials should probably be farmed out to the various packages, or we could take chunks and fit them into the categories I mentioned above. I think we can also advertise the Manual here, since it has a very "tutorial" feel with plenty of examples.

@ViralBShah
Member

They all serve different purposes. Package tags would help address this considerably.

ViralBShah added a commit that referenced this issue Jul 6, 2013
Run perf tests by running make in test/perf
Factor out timing code into test/perf/perfutil.jl
Micro benchmarks are now in test/perf/micro/
perf2 benchmarks are now in test/perf/kernel
shootout benchmarks now run (not all yet) as part of the perf tests (#660)
cat benchmarks now run as part of the perf tests
10 participants