
Identify computer science algorithm exercises #761

Open

kytrinyx opened this issue Apr 18, 2017 · 16 comments
Comments

@kytrinyx
Member

kytrinyx commented Apr 18, 2017

We currently have at least one, and maybe several, exercises that ask the learner to implement a specific algorithm.

We discussed whether this type of exercise is a good fit for Exercism in exercism/discussions#124 and concluded that:

> Instead of asking "Write a program that implements the Sieve of Eratosthenes", we should ask "Write a program which returns a list of prime numbers; there are a couple of efficient algorithms: the Sieve of Eratosthenes, the wheel sieve, etc.", perhaps even with links.
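
As a concrete illustration of the reframed exercise, here is a minimal Haskell sketch (the `primesUpTo` name is hypothetical): a wheel sieve, a genuine Sieve of Eratosthenes, or the naive trial-division version below would all return the same list, so all of them would pass tests written against the result.

```haskell
-- Hypothetical entry point for the reframed exercise: return every
-- prime up to the limit. Any correct algorithm passes; this is the
-- naive trial-division version.
primesUpTo :: Int -> [Int]
primesUpTo limit = [n | n <- [2 .. limit], isPrime n]
  where
    isPrime n = all (\d -> n `mod` d /= 0) [2 .. isqrt n]
    isqrt = floor . sqrt . (fromIntegral :: Int -> Double)
```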

We need to identify which exercises target an algorithm rather than targeting what the algorithm does.

The one that started the discussion is sieve, but I'm not sure that's the only one.

Once we've identified them, we need to open a new issue to implement the new exercise and deprecate the old one.

Exercises identified as too specific

  • sieve
  • binary-search
@petertseng
Member

binary-search asks for a specific algorithm.
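
For reference, this is roughly the shape the exercise prescribes; a minimal Haskell sketch (the `binarySearch` name and the use of `Data.Vector` are assumptions, not the canonical stub):

```haskell
import qualified Data.Vector as V

-- The prescribed algorithm: repeatedly halve the search window of a
-- sorted vector until the target is found or the window is empty.
binarySearch :: Ord a => V.Vector a -> a -> Maybe Int
binarySearch xs target = go 0 (V.length xs - 1)
  where
    go lo hi
      | lo > hi   = Nothing
      | otherwise =
          let mid = lo + (hi - lo) `div` 2
          in case compare (xs V.! mid) target of
               LT -> go (mid + 1) hi
               GT -> go lo (mid - 1)
               EQ -> Just mid
```

A plain linear scan returns the same answers, so the test suite alone cannot enforce the algorithm.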

@ErikSchierboom
Member

The diffie-hellman exercise is all about a specific algorithm.
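
For context, the whole exercise reduces to two formulas; a minimal Haskell sketch (function names are assumptions, not the track stubs):

```haskell
-- Diffie-Hellman in two lines: the public key is g^a mod p, and the
-- shared secret is the other party's public key raised to your own
-- private key, mod p.
publicKey :: Integer -> Integer -> Integer -> Integer
publicKey p g private = modExp g private p

secret :: Integer -> Integer -> Integer -> Integer
secret p theirPublic private = modExp theirPublic private p

-- Square-and-multiply modular exponentiation, so large keys stay fast.
modExp :: Integer -> Integer -> Integer -> Integer
modExp _ 0 _ = 1
modExp b e m
  | even e    = half * half `mod` m
  | otherwise = b * modExp b (e - 1) m `mod` m
  where half = modExp b (e `div` 2) m
```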

@tejasbubane
Member

tejasbubane commented May 8, 2017

luhn is another specific algorithm, although I am not aware of any similar alternatives as in the sieve example above. Maybe rename luhn to credit-card-validator to remove the specificity?
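
Whatever the exercise is called, the checksum itself is fixed; a minimal Haskell sketch, assuming a `valid` entry point:

```haskell
import Data.Char (digitToInt, isDigit, isSpace)

-- The Luhn check: working from the right, double every second digit,
-- subtract 9 from any double over 9, and require the total to be
-- divisible by 10. Strings of length 1 or less are invalid.
valid :: String -> Bool
valid input = length digits > 1 && total `mod` 10 == 0
  where
    stripped = filter (not . isSpace) input
    digits
      | all isDigit stripped = map digitToInt (reverse stripped)
      | otherwise            = []
    total = sum (zipWith ($) (cycle [id, double]) digits)
    double d = if d * 2 > 9 then d * 2 - 9 else d * 2
```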

@Insti
Contributor

Insti commented May 8, 2017

Are there other algorithms that will pass the tests for diffie-hellman and luhn?
With sieve and binary-search there are.

Is that the difference between a "good" and a "bad" specific algorithm question?

@ErikSchierboom
Member

@Insti I think that's it!

@petertseng
Member

Oh. In that case, it's time to suggest parallel-letter-frequency, because I imagine most tracks' parallel-letter-frequency test suites don't specifically enforce that the implementation is indeed parallel, so a serial solution (whether intentionally or unintentionally serial) could pass the tests, right?

(Or should the tests enforce parallelism, and how? Like exercism/haskell#542 ?)
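
To make the concern concrete, a minimal Haskell sketch (the `frequency` name is an assumption; this uses `parMap` from the `parallel` package and needs a threaded runtime to actually run in parallel):

```haskell
import Control.Parallel.Strategies (parMap, rdeepseq)
import Data.Char (isAlpha, toLower)
import qualified Data.Map.Strict as Map

-- Count letter frequencies across many texts, one spark per text,
-- then merge the per-text counts.
frequency :: [String] -> Map.Map Char Int
frequency texts = Map.unionsWith (+) (parMap rdeepseq countOne texts)
  where
    countOne :: String -> Map.Map Char Int
    countOne = Map.fromListWith (+) . map (\c -> (toLower c, 1)) . filter isAlpha
```

Replace `parMap rdeepseq` with plain `map` and the output is identical, so a result-only test suite cannot tell the serial version apart.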

@Insti
Contributor

Insti commented May 22, 2017

hangman?

@Insti
Contributor

Insti commented May 23, 2017

binary?
(although it is deprecated.)

@emilianbold

emilianbold commented Jul 3, 2017

The core of the matter for me is not implementing a specific algorithm. It's that the algorithm does not help me improve in that particular language, and sometimes I spend more time on the algorithm than on the language.

Also, I'm not learning how to program; I'm just practicing a new language. Do I really need that many exercises teaching me recursion in this new language? I know it helps to repeat some concepts, but at some point it's just one exercise after another teaching you nothing new.

One good exercise is "Saddle Points" on Haskell; it's pretty hard to solve with the given method signature. I took it as a challenge and solved it, and I learned a few things about the related APIs. Here the algorithm was easy; it was the hunt for the useful hooks that was educational.

"Parallel Letter Frequency" was hard because you are given no hint about what to use. I read a lot of (sometimes dubious) articles until I decided to use the simplest function I could, and it worked.

Right now, after "Robot Name", which was good because I used an MVar, I'm offered "Pig Latin", which is pointless, then "Say", which doesn't add much at this point, then "Spiral Matrix", which is just algorithm writing.

I feel like these exercises are missing a chance to teach me a new language concept or a very common language module / function / pattern.

@kytrinyx
Member Author

kytrinyx commented Jul 3, 2017

Thanks @emilianbold, this makes a lot of sense, and I think you're right.

We definitely need to retire exercises that are not teaching anything.

I don't know whether we know which ones feel like they're not teaching anything, though, and this likely differs from language to language.

/cc @iHiD @nicolechalmers not directly related to the redesign, but a question we might want to discuss in order to have some ideas about how to figure out which exercises are not helping.

@stkent
Contributor

stkent commented Jul 3, 2017

I would imagine whether or not an exercise teaches anything can also vary from person to person.

@iHiD
Member

iHiD commented Jul 3, 2017 via email

@masters3d
Contributor

To clarify: this is not saying that we cannot use traditional CS ideas to solve the problems, right?

Can we still have a custom stable-sorting problem where the user is pretty much asked to sort an array, and just suggest the many different ways this can be done?

Another example that comes to mind is priority queues. There are many different ways this can be done without having to tell the user to use a heap, etc.
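
For illustration, a behavior-only priority queue test suite could be satisfied by a binary heap, a sorted list, or this Haskell sketch built on a `Map` (the names and the `Int`-priority API are assumptions):

```haskell
import qualified Data.Map.Strict as Map

-- A Map keyed on priority, holding a list of values per priority.
newtype PQueue a = PQueue (Map.Map Int [a])

emptyQueue :: PQueue a
emptyQueue = PQueue Map.empty

push :: Int -> a -> PQueue a -> PQueue a
push prio x (PQueue m) = PQueue (Map.insertWith (++) prio [x] m)

-- Pop a value with the smallest priority, if any.
popMin :: PQueue a -> Maybe (a, PQueue a)
popMin (PQueue m) = case Map.lookupMin m of
  Nothing          -> Nothing
  Just (p, [x])    -> Just (x, PQueue (Map.delete p m))
  Just (p, x : xs) -> Just (x, PQueue (Map.insert p xs m))
  Just (_, [])     -> Nothing  -- never stored: push always inserts a non-empty list
```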

@iHiD
Member

iHiD commented Jan 26, 2018 via email

@coriolinus
Member

coriolinus commented Jan 26, 2018 via email

@kytrinyx
Member Author

> I feel the opposite: there's value in knowing various sort algorithms, learning their performance characteristics, implementing them.

Yes, there is value in this; however, that is not directly what Exercism is about.

> In my opinion, there aren't many better problems than sorting for demonstrating in a visceral way that different algorithms have different performance characteristics.

This is a good point. I would be open to a sorting-based exercise, provided that it is framed in terms of a problem to solve rather than directly saying "learn to sort", just as two-fer is not framed as "optional default values" or some other theoretical concept.

If we can have an interesting exercise that gives people a chance at a visceral experience of algorithm performance characteristics, that is fine.
