Possibility for unit tests? #28

Open · islamelnabarawy opened this issue Sep 13, 2016 · 10 comments

@islamelnabarawy (Member)

Something that keeps coming up as I try to make additions and improvements here is the fear of accidentally breaking existing functionality. Having at least some unit test coverage would be a blessing in this case.

I figured we could start a conversation about how we might do that, what would be important to consider, and so on.

@LinuxMercedes (Member)

I agree that unit tests would be good, but I don't know anything about what frameworks are good for Python unit tests.

Most of the API functionality is already split off into classes, so it shouldn't be too hard to mock most of that out. The hardest part to test will probably be the code in assigner.py, unless there's some framework that modifies how print() works so we can check results that way.
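
A minimal sketch of checking print() output with the standard library's contextlib.redirect_stdout (no extra framework needed; report() here is a made-up stand-in for an assigner.py function):

```python
import io
import unittest
from contextlib import redirect_stdout

def report(name):
    # stand-in for an assigner.py function that prints its results
    print("Assigned repo to {}".format(name))

class ReportTest(unittest.TestCase):
    def test_report_mentions_student(self):
        buf = io.StringIO()
        with redirect_stdout(buf):  # everything print() writes goes to buf
            report("alice")
        self.assertIn("alice", buf.getvalue())

if __name__ == "__main__":
    unittest.main()
```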

@michaelwisely any thoughts?

@islamelnabarawy (Member, Author)

I've used unittest successfully in the past. It's simple and efficient, and it now includes the unittest.mock library as well.

My thought is that since assigner.py seems to be getting more and more bloated, now might be a good time to break it up into individual commands. I've seen that pattern in several CLI projects before, and I think it would work great here. It would make assigner.py very short and give us a modular framework where each command is a separate file (moving common code into library functions as needed). That would let us write unit tests for each individual command in a clean manner, with the added benefit of being able to add, remove, and debug commands easily and cleanly.

A (simple) example of the philosophy I have in mind is explained here.
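
A rough sketch of the shape I mean (all names here are hypothetical, not the actual refactor):

```python
import argparse

# commands/base.py -- each command lives in its own module and subclasses this
class BaseCommand:
    name = ""
    help = ""

    def configure(self, parser):
        """Add this command's arguments to its subparser."""
        raise NotImplementedError

    def run(self, args):
        raise NotImplementedError

# commands/new.py
class NewCommand(BaseCommand):
    name = "new"
    help = "create a new assignment repo"

    def configure(self, parser):
        parser.add_argument("assignment", help="assignment name")

    def run(self, args):
        print("creating {}".format(args.assignment))

# assigner.py shrinks to wiring the commands together
def main(argv=None):
    parser = argparse.ArgumentParser(prog="assigner")
    subparsers = parser.add_subparsers(dest="command")
    for cmd in (NewCommand(),):  # one instance per command module
        sub = subparsers.add_parser(cmd.name, help=cmd.help)
        cmd.configure(sub)
        sub.set_defaults(run=cmd.run)
    args = parser.parse_args(argv)
    args.run(args)
```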

@islamelnabarawy (Member, Author)

I took a first stab at the modular CLI approach, and here's the result. This is purely refactoring; I didn't change the code for any of the commands, but I wrote a small bit of code that lists the BaseCommand subclasses and lets each one of them add its own configuration parameters.

I tried to make sure the commands are as decoupled from assigner.py as possible, and moved all the common functions to commands/common.py.

As I said, this is only a first attempt. I have a feeling there's a better way than listing all the modules in __init__.py, but the only alternative I could think of off the bat was to move the call to register() from the individual command files into BaseCommand.py, which sounds worse, since it means we'd have to change that file every time a new command is added.
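
One possible way around listing every module by hand is package auto-discovery; a sketch, assuming each command module registers itself as a side effect of being imported:

```python
# commands/__init__.py -- import every module in the package so each
# one's register() call runs; no hand-maintained list needed
import importlib
import pkgutil

for _finder, modname, _ispkg in pkgutil.iter_modules(__path__):
    if modname not in ("base", "common"):  # skip non-command modules
        importlib.import_module("{}.{}".format(__name__, modname))
```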

Please take a look at it and let me know what you think. And if you want to test it, please make sure you fetch and merge #36 as well, because that error prevents both the `new` and `open` commands from working correctly right now, on both the old and new code bases.

@islamelnabarawy (Member, Author)

Forgot to add: I'm also not super happy with the way I passed the subparsers collection to each command; that's obviously fragile for a number of reasons. I didn't want to spend more time on the refactoring than I already knew I would have to, so it's more or less a hack. Ideally it would be replaced by a clear, concise, but still dynamic way of having each command report its name, description, and parameters, with the assigner.py code then responsible for setting up the parser.
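
Something in this direction, say, where each command declares its interface as plain data and assigner.py owns all the argparse wiring (names are hypothetical):

```python
import argparse

class OpenCommand:
    name = "open"
    description = "open an assignment for student submissions"
    # (flags, kwargs) pairs handed straight to add_argument()
    arguments = [
        (["assignment"], {"help": "assignment name"}),
        (["--section"], {"default": None, "help": "restrict to one section"}),
    ]

    def run(self, args):
        ...

def build_parser(commands):
    parser = argparse.ArgumentParser(prog="assigner")
    subparsers = parser.add_subparsers(dest="command")
    for cmd in commands:
        sub = subparsers.add_parser(cmd.name, description=cmd.description)
        for flags, kwargs in cmd.arguments:
            sub.add_argument(*flags, **kwargs)
        sub.set_defaults(run=cmd.run)
    return parser
```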

@LinuxMercedes (Member)

@michaelwisely pointed out that Grader already has separate files for each command; it'd be good to mirror that architecture.

@LinuxMercedes added this to the Quality Code milestone on Jan 5, 2018
@brhoades (Member) commented Jan 19, 2018

Cool, so it looks like the modular CLI part is done. I am not personally interested in writing unit tests here, but I do see utility in setting up an integration test. I think I'll spin off another issue for some integration-style sanity tests.

Edit:
I'm not sure how easy integration tests would be here either... the remote API throws a wrench into testing things.

@islamelnabarawy (Member, Author)

@brhoades I believe you can mock the remote API.
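
For example, with unittest.mock the API class can be swapped for a fake so tests never touch the network (create_repo() here is a made-up stand-in for command code, not the actual API):

```python
import unittest
from unittest import mock

def create_repo(api_cls, name):
    # stand-in for command code that reaches GitLab through an API class
    repo = api_cls(name)
    return repo.url

class MockedApiTest(unittest.TestCase):
    def test_create_repo_never_hits_the_network(self):
        FakeRepo = mock.Mock()
        FakeRepo.return_value.url = "git@gitlab.example.com:hw01.git"

        url = create_repo(FakeRepo, "hw01")

        self.assertEqual(url, "git@gitlab.example.com:hw01.git")
        FakeRepo.assert_called_once_with("hw01")
```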

@LinuxMercedes (Member)

I think long-term I'd like to split the git and GitLab parts of BaseRepo and generally re-design that to be easier to mock.
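
Something along these lines, perhaps: local git plumbing and GitLab HTTP calls in separate classes, composed by injection so each can be mocked on its own (all names hypothetical):

```python
class GitRepo:
    """Local git operations only: clone, commit, push."""
    def clone(self, url, path):
        ...

class GitLabAPI:
    """Remote HTTP calls only: create projects, manage members."""
    def create_project(self, name):
        ...

class StudentRepo:
    """Composes the two; tests can inject a mock for either piece."""
    def __init__(self, api, git):
        self.api = api
        self.git = git
```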

@brhoades (Member)

I could mock it. Right now, this sounds like a lot of fun for some reason. I'm going to look into this.

@brhoades self-assigned this on Jan 19, 2018
@brhoades (Member)

Modules to write tests for:

  • assigner
  • baserepo
  • canvas
  • config
  • progress
  • roster_util
  • commands.archive
  • commands.assign
  • commands.canvas
  • commands.get
  • commands.import
  • commands.init
  • commands.lock
  • commands.new
  • commands.open
  • commands.protect
  • commands.roster
  • commands.set
  • commands.status
  • commands.unarchive
  • commands.unlock
  • commands.unprotect
