At theScore, we are always looking for intelligent, resourceful, full-stack developers to join our growing team. To help us evaluate new talent, we have created this take-home interview question. This question should take you no more than a few hours.
All candidates must complete this before the possibility of an in-person interview. During the in-person interview, your submitted project will be used as the base for further extensions.
In-person coding interviews can be stressful and can hide some people's full potential. A take-home gives you a chance to work in a less stressful environment and showcase your talent.
We want you to be at your best and most comfortable.
As outlined in our job description, you will come across technologies which include a server-side web framework (like Elixir/Phoenix, Ruby on Rails, or a modern JavaScript framework) and a front-end JavaScript framework (like ReactJS).
We have sets of records representing football players' rushing statistics. All records have the following attributes:
- `Player` (Player's name)
- `Team` (Player's team abbreviation)
- `Pos` (Player's position)
- `Att/G` (Rushing Attempts Per Game Average)
- `Att` (Rushing Attempts)
- `Yds` (Total Rushing Yards)
- `Avg` (Rushing Average Yards Per Attempt)
- `Yds/G` (Rushing Yards Per Game)
- `TD` (Total Rushing Touchdowns)
- `Lng` (Longest Rush -- a `T` represents a touchdown occurred)
- `1st` (Rushing First Downs)
- `1st%` (Rushing First Down Percentage)
- `20+` (Rushing 20+ Yards Each)
- `40+` (Rushing 40+ Yards Each)
- `FUM` (Rushing Fumbles)
In this repo is a sample data file, `rushing.json`.
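If you want a quick look at the data before building anything, the snippet below is a minimal sketch for loading and inspecting it in IEx. It assumes the `Jason` library (bundled with Phoenix projects by default) and that `rushing.json` is a JSON array of player records sitting at the repository root.

```elixir
# Minimal sketch: decode rushing.json and peek at its shape.
records =
  "rushing.json"
  |> File.read!()
  |> Jason.decode!()

IO.inspect(length(records), label: "record count")
IO.inspect(Map.keys(hd(records)), label: "attributes")
```

Each decoded record is a map keyed by the attribute names listed above.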
- Create a web app. This must be able to do the following steps:
  - Create a webpage which displays a table with the contents of `rushing.json`
  - The user should be able to sort the players by Total Rushing Yards, Longest Rush and Total Rushing Touchdowns (a rough sketch of the sort/filter logic follows this list)
  - The user should be able to filter by the player's name
  - The user should be able to download the sorted data as a CSV, as well as a filtered subset
- The system should be able to potentially support larger sets of data, on the order of 10k records.
- Update the section `Installation and running this solution` in the README file, explaining how to run your code
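As a rough illustration of the sorting and filtering requirements above, here is a minimal in-memory sketch. It assumes the records have already been decoded from `rushing.json` into a list of maps; the module and function names are hypothetical, and the actual solution (see the notes further below) seeds the records into a database, so its implementation will differ.

```elixir
# Hypothetical sketch of the sort/filter requirement, working on a plain
# list of maps decoded from rushing.json. The field names ("Player",
# "Yds", "TD", "Lng") come from the attribute list above; everything
# else is illustrative only.
defmodule RushingSketch do
  @sortable ["Yds", "TD", "Lng"]

  # Case-insensitive filter on the player's name.
  def filter_by_name(records, name) do
    needle = String.downcase(name)
    Enum.filter(records, &String.contains?(String.downcase(&1["Player"]), needle))
  end

  # Sort by one of the whitelisted columns, largest value first.
  def sort_by(records, field) when field in @sortable do
    Enum.sort_by(records, &numeric(&1[field]), :desc)
  end

  # Values are not always plain integers: "Lng" can carry a trailing "T"
  # for a touchdown, and yardages may arrive as formatted strings.
  defp numeric(value) when is_number(value), do: value

  defp numeric(value) when is_binary(value) do
    {number, _rest} = value |> String.replace(",", "") |> Integer.parse()
    number
  end
end
```

For example, `records |> RushingSketch.filter_by_name("smith") |> RushingSketch.sort_by("Yds")` would return the matching players ordered by Total Rushing Yards.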
- Download this repo
- Complete the problem outlined in the `Requirements` section
- On your personal public GitHub account, create a new public repo with this implementation
- Provide this link to your contact at theScore
We will evaluate you on your ability to solve the problem defined in the requirements section, as well as your choice of frameworks and general coding style.
If you have any questions regarding requirements, do not hesitate to email your contact at theScore for clarification.
You need to install Docker and docker-compose to run this solution.
To access the solution at http://localhost:4000, run:
$ git clone https://github.com/youalreadydid/nfl-rushing.git
$ cd nfl-rushing
$ docker-compose run --rm dev mix do setup, run priv/repo/seeds.exs
$ docker-compose up dev
To run the test suite, type the following command:
$ docker-compose run --rm test
To help the fellow developer who will review this project:
- This was made with LiveView (I learned it while doing this project, sorry in advance haha); you can find the main "controller" at `lib/nfl_rushing_web/live/rushing_statistics_live.ex`;
- I couldn't use LiveView to send the .csv with the statistics, though, so you can find the controller for it at `lib/nfl_rushing_web/controllers/rushing_statistics_controller.ex`;
- The script to import the statistics is integrated into the seeds and is located at `priv/scripts/01_import_rushing_data.exs`;
- The "context" is at `lib/nfl_rushing.ex`. I don't usually specialize a context until it's really needed; what do you think?
- I also don't like to write documentation beyond the README, except for very specific logic where the code alone isn't enough, but you can try to convince me I'm wrong hehehe
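Since the CSV download is the one piece that fell outside LiveView, here is a rough sketch of what that controller action could look like. The module name is inferred from the file path above, and `NflRushing.export_csv/1` is a hypothetical context function standing in for whatever the real controller calls; treat this as an outline rather than the actual code.

```elixir
defmodule NflRushingWeb.RushingStatisticsController do
  use NflRushingWeb, :controller

  # Sketch of a CSV download action. `NflRushing.export_csv/1` is a
  # hypothetical context function that would apply the same name filter
  # and sort options the LiveView uses and return the CSV as a binary.
  def index(conn, params) do
    csv = NflRushing.export_csv(params)

    send_download(conn, {:binary, csv}, filename: "rushing_statistics.csv")
  end
end
```

`send_download/3` comes from `Phoenix.Controller` and sets the `content-disposition` header so the browser saves the response as a file instead of rendering it.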