Requirements:
- python3
- Microsoft ODBC driver for SQL Server
  - macOS: https://docs.microsoft.com/en-us/sql/connect/odbc/linux-mac/install-microsoft-odbc-driver-sql-server-macos?view=sql-server-ver15
  - Linux: https://docs.microsoft.com/en-us/sql/connect/odbc/linux-mac/installing-the-microsoft-odbc-driver-for-sql-server?view=sql-server-ver15
Steps to run from the root directory:
- `python3 -m venv venv`
- `source venv/bin/activate`
- `pip install -r requirements.txt`
- `python3 api.py`
This API has Swagger support, so you can go directly to http://127.0.0.1:5000/ to test it. I have a remote MSSQL server set up on AWS; you can interact with it using any SQL Server client, e.g. Azure Data Studio.
If you encounter any problems, please contact me at [email protected].
Answers to the following questions:

How did you test that your implementation was correct?
I tested the API with the sample input as well as a variety of inputs I made on my own, and verified that the correct output (JSON or an error) was returned in each case.
If this application was destined for a production environment, what would you add or change?
For a production environment, I would add a CI/CD setup to this project and containerize it with Docker. In addition, I would add a security layer to protect the credentials. Finally, I would add automated testing to this solution.
What compromises did you have to make as a result of the time constraints of this challenge?
- For the CSV upload, if I had more time I would try to use the MSSQL bulk insert to improve the performance of the POST call.
- If I had more time, I would add Docker support for an easier environment setup.
- I didn't have enough time to set up the Swagger API models for this project; it would be nice to have them.
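On the bulk-insert point above: one direction, sketched below, would be to batch rows into multi-row `INSERT` statements instead of inserting one row per round trip. The `timesheets` table and column names here are illustrative (not the actual schema in this project), and the driver call itself is omitted so just the batching logic is shown.

```python
# Hypothetical batching helper: yields one multi-row INSERT per batch of rows,
# reducing round trips compared to a per-row execute. Table/column names are
# placeholders, not the project's real schema.
def batch_insert_statements(rows, batch_size=500):
    """Yield (sql, params) pairs, one multi-row INSERT per batch.

    rows: iterable of (date, hours_worked, employee_id, job_group) tuples.
    """
    rows = list(rows)
    for i in range(0, len(rows), batch_size):
        batch = rows[i:i + batch_size]
        placeholders = ", ".join(["(?, ?, ?, ?)"] * len(batch))
        sql = (
            "INSERT INTO timesheets "
            "(date, hours_worked, employee_id, job_group) VALUES "
            + placeholders
        )
        # Flatten the batch into a single parameter list for the driver.
        params = [value for row in batch for value in row]
        yield sql, params
```

Each `(sql, params)` pair could then be passed to a `pyodbc` cursor; for very large files, the server-side `BULK INSERT` statement would likely be faster still.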
Applicants for the Full-stack Developer role at Wave must complete the following challenge, and submit a solution prior to the onsite interview.
The purpose of this exercise is to create something that we can work on together during the onsite. We do this so that you get a chance to collaborate with Wavers during the interview in a situation where you know something better than us (it's your code, after all!)
There isn't a hard deadline for this exercise; take as long as you need to complete it. However, in terms of total time spent actively working on the challenge, we ask that you not spend more than a few hours, as we value your time and are happy to leave things open to discussion in the on-site interview.
Please use whatever programming language and framework you feel the most comfortable with.
Feel free to email [email protected] if you have any questions.
Imagine that this is the early days of Wave's history, and that we are prototyping a new payroll system API. A front end (that hasn't been developed yet, but will likely be a single page application) is going to use our API to achieve two goals:
- Upload a CSV file containing data on the number of hours worked per day per employee
- Retrieve a report detailing how much each employee should be paid in each pay period
All employees are paid by the hour (there are no salaried employees). Employees belong to one of two job groups which determine their wages; job group A is paid $20/hr, and job group B is paid $30/hr. Each employee is identified by a string called an "employee id" that is globally unique in our system.
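The rate rule above is small enough to sketch directly; the names below are illustrative, not part of the spec:

```python
# Hourly rates per job group, as stated in the spec: A = $20/hr, B = $30/hr.
HOURLY_RATES = {"A": 20.0, "B": 30.0}

def amount_paid(hours, job_group):
    """Pay for a block of hours at the employee's job-group rate."""
    return hours * HOURLY_RATES[job_group]
```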
Hours are tracked per employee, per day in comma-separated value files (CSV). Each individual CSV file is known as a "time report", and will contain:
- A header, denoting the columns in the sheet (`date`, `hours worked`, `employee id`, `job group`)
- 0 or more data rows
In addition, the file name should be of the format `time-report-x.csv`, where `x` is the ID of the time report represented as an integer. For example, `time-report-42.csv` would represent a report with an ID of 42.
You can assume that:
- Columns will always be in that order.
- There will always be data in each column and the number of hours worked will always be greater than 0.
- There will always be a well-formed header line.
- There will always be a well-formed file name.
A sample input file named `time-report-42.csv` is included in this repo.
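Given the guarantees above (well-formed file name, well-formed header, fixed column order), parsing a time report reduces to a regex on the file name and a straightforward CSV read. A minimal sketch, using only the standard library (the dict keys are my own naming):

```python
import csv
import io
import re

def report_id_from_filename(filename):
    """Extract the integer report ID from a name like time-report-42.csv.

    Assumes the spec's guarantee that the file name is well-formed.
    """
    return int(re.fullmatch(r"time-report-(\d+)\.csv", filename).group(1))

def parse_time_report(csv_text):
    """Parse the four guaranteed columns into a list of row dicts."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [
        {
            "date": row["date"],
            "hours_worked": float(row["hours worked"]),
            "employee_id": row["employee id"],
            "job_group": row["job group"],
        }
        for row in reader
    ]
```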
We've agreed to build an API with the following endpoints to serve HTTP requests:
- An endpoint for uploading a file.
  - This file will conform to the CSV specifications outlined in the previous section.
  - Upon upload, the timekeeping information within the file must be stored to a database for archival purposes.
  - If an attempt is made to upload a file with the same report ID as a previously uploaded file, this upload should fail with an error message indicating that this is not allowed.
- An endpoint for retrieving a payroll report structured in the following way:

  NOTE: It is not the responsibility of the API to return HTML, as we will delegate the visual layout and rendering to the front end. The expectation is that this API will only return JSON data.
  - Return a JSON object `payrollReport`. `payrollReport` will have a single field, `employeeReports`, containing a list of objects with fields `employeeId`, `payPeriod`, and `amountPaid`.
  - The `payPeriod` field is an object containing a date interval that is roughly biweekly. Each month has two pay periods; the first half is from the 1st to the 15th inclusive, and the second half is from the 16th to the end of the month, inclusive. `payPeriod` will have two fields to represent this interval: `startDate` and `endDate`.
  - Each employee should have a single object in `employeeReports` for each pay period that they have recorded hours worked. The `amountPaid` field should contain the sum of the hours worked in that pay period multiplied by the hourly rate for their job group.
  - If an employee was not paid in a specific pay period, there should not be an object in `employeeReports` for that employee + pay period combination.
  - The report should be sorted in some sensical order (e.g. sorted by employee id and then pay period start).
  - The report should be based on all of the data across all of the uploaded time reports, for all time.
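The pay-period rule (1st through the 15th, then the 16th through the end of the month) can be sketched with the standard library alone; the function name is my own:

```python
import calendar
from datetime import date

def pay_period(day):
    """Return (start_date, end_date) of the pay period containing `day`.

    First half of the month: the 1st through the 15th, inclusive.
    Second half: the 16th through the last day of the month, inclusive.
    """
    if day.day <= 15:
        return date(day.year, day.month, 1), date(day.year, day.month, 15)
    # monthrange returns (weekday of the 1st, number of days in the month).
    last = calendar.monthrange(day.year, day.month)[1]
    return date(day.year, day.month, 16), date(day.year, day.month, last)
```

Grouping each parsed row by `(employee_id, pay_period(row_date))` and summing hours per group then yields the `employeeReports` entries described above.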
As an example, given the upload of a sample file with the following data:
| date       | hours worked | employee id | job group |
|------------|--------------|-------------|-----------|
| 2020-01-04 | 10           | 1           | A         |
| 2020-01-14 | 5            | 1           | A         |
| 2020-01-20 | 3            | 2           | B         |
| 2020-01-20 | 4            | 1           | A         |

A request to the report endpoint should return the following JSON response:
```json
{
  "payrollReport": {
    "employeeReports": [
      {
        "employeeId": 1,
        "payPeriod": {
          "startDate": "2020-01-01",
          "endDate": "2020-01-15"
        },
        "amountPaid": "$300.00"
      },
      {
        "employeeId": 1,
        "payPeriod": {
          "startDate": "2020-01-16",
          "endDate": "2020-01-31"
        },
        "amountPaid": "$80.00"
      },
      {
        "employeeId": 2,
        "payPeriod": {
          "startDate": "2020-01-16",
          "endDate": "2020-01-31"
        },
        "amountPaid": "$90.00"
      }
    ]
  }
}
```
We consider ourselves to be language agnostic here at Wave, so feel free to use any combination of technologies you see fit to both meet the requirements and showcase your skills. We only ask that your submission:
- Is easy to set up
- Can run on either a Linux or Mac OS X developer machine
- Does not require any non open-source software
Please commit the following to this `README.md`:
- Instructions on how to build/run your application
- Answers to the following questions:
- How did you test that your implementation was correct?
- If this application was destined for a production environment, what would you add or change?
- What compromises did you have to make as a result of the time constraints of this challenge?
- Clone the repository.
- Complete your project as described above within your local repository.
- Ensure everything you want to commit is committed.
- Create a git bundle: `git bundle create your_name.bundle --all`
- Email the bundle file to [email protected] and CC the recruiter you have been in contact with.
Evaluation of your submission will be based on the following criteria.
- Did you follow the instructions for submission?
- Did you complete the steps outlined in the Documentation section?
- Were models/entities and other components easily identifiable to the reviewer?
- What design decisions did you make when designing your models/entities? Are they explained?
- Did you separate any concerns in your application? Why or why not?
- Does your solution use appropriate data types for the problem as described?