mmaher22 / grading_scripts
Machine Learning / Data Science Homework Grading Scripts @ UniTartCS
A final block could be added to write a general comment for each task/subtask for all students.
For some reason the code crashed on a student with ID B**097 for the first homework (B**097_2.ipynb); two digits of the ID are replaced with asterisks for privacy reasons. The traceback is as follows:
Traceback (most recent call last):
File "main.py", line 151, in <module>
filtered_submissions = filter_submissions(path, rerun_flag)
File "main.py", line 28, in filter_submissions
final_submissions.append(Submission(student_id, str(submissions[s]), trial_no, rerun_flag))
File "/home/novin/workspace/school/lbd/MachineLearning/Meelis Kull/Grading_Scripts/Submission.py", line 20, in __init__
data = json.dumps(json.load(f))
File "/opt/anaconda3/lib/python3.6/json/__init__.py", line 299, in load
parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)
File "/opt/anaconda3/lib/python3.6/json/__init__.py", line 354, in loads
return _default_decoder.decode(s)
File "/opt/anaconda3/lib/python3.6/json/decoder.py", line 339, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/opt/anaconda3/lib/python3.6/json/decoder.py", line 357, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
I was testing opt=3 (the time spent by students, issue #5) when I encountered this error. I just removed the student's submission to get through!
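As a sketch, the crash could be handled by catching the JSONDecodeError when the notebook is read (as in Submission.__init__) and skipping that submission instead of removing the file manually. The load_notebook helper below is illustrative, not the repo's actual code:

```python
import json

def load_notebook(path):
    """Load a .ipynb file as a dict, returning None for empty or
    otherwise invalid JSON instead of crashing the whole run."""
    try:
        with open(path, encoding="utf-8") as f:
            return json.load(f)
    except json.JSONDecodeError as err:
        # An empty or truncated file raises e.g.
        # "Expecting value: line 1 column 1 (char 0)"
        print(f"Skipping {path}: not valid JSON ({err})")
        return None
```

The caller can then collect the skipped paths and report them at the end instead of aborting mid-grading.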
Currently, the default grade is filled with 1.0, while it could be filled with 0.0 when a student has not attempted the task (this could be detected automatically by checking the cell the student had to complete). This is important for keeping the cells in the grading spreadsheet empty for tasks where the student made no submission at all, with a dedicated zero for tasks where they made an attempt but failed (the final result is the same, but visually this is more appealing when reading the sheet!).
A more advanced scenario is to have certain unit tests and grade the task completely automatically (not sure if that is needed at all?!).
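A minimal sketch of how an unattempted task could be detected automatically; the placeholder string is an assumption about how the homework cells are templated:

```python
def attempt_made(cell, placeholder="# YOUR CODE HERE"):
    """Heuristic: a cell counts as attempted only if the student
    changed the placeholder text or added any code of their own."""
    source = "".join(cell.get("source", []))
    stripped = source.replace(placeholder, "").strip()
    return bool(stripped)
```

A task whose cell fails this check could then default to an empty spreadsheet cell rather than 0.0.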
Add a note in the written Excel file for students whose timing extraction was not successful. At the moment, those who were ignored, or whose timing could not be extracted (due to a missing submission or some other issue), are only printed for inspection as the output of option 3 of the script (the timing part).
Summary:
We currently have two options when a grading notebook already exists: 1) overwrite, 2) append.
A third option, 3) skip, is needed for the case where we have already graded halfway through, there has been a change to conf.json, and we need to keep the grades up to that task (i.e. skip some notebooks) while overwriting/appending the one that was problematic.
Related issues: #3
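A hypothetical sketch of the three modes; the function and mode names are illustrative, not the script's actual API:

```python
import os

def handle_existing(path, mode):
    """Decide what to do with a (possibly existing) grading notebook."""
    if not os.path.exists(path):
        return "create"
    if mode == "overwrite":
        return "overwrite"   # regenerate from scratch, losing old grades
    if mode == "append":
        return "append"      # add new tasks, keep existing grades
    if mode == "skip":
        return "skip"        # leave the already-graded notebook untouched
    raise ValueError(f"unknown mode: {mode}")
```

With a per-notebook decision like this, a conf.json change only forces regeneration of the notebooks that actually need it.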
We need a script to gather times reported by students for each task into a single sheet.
Start / End flag strings must currently be the first line of a cell!
We should look through all of a cell's lines for those flags.
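A sketch of scanning every line of each cell for the flags instead of only the first line; the flag strings here are illustrative:

```python
# Hypothetical flag strings; the real ones come from conf.json.
START_FLAG = "### TASK START"
END_FLAG = "### TASK END"

def find_flagged_cells(cells):
    """Return the indices of the first cells whose source contains
    the start and end flags, searching all lines of each cell."""
    start = end = None
    for i, cell in enumerate(cells):
        for line in cell.get("source", []):
            if START_FLAG in line and start is None:
                start = i
            if END_FLAG in line and end is None:
                end = i
    return start, end
```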
If I have already graded some of the students' homeworks and accidentally run option 1 of the main script, it overwrites the graded notebooks and does not save the previous grading progress.
Possibilities:
Some students submit the previous homework. This creates empty cells in the grading notebooks because the begin and end flags of the current homework cannot be matched.
A better solution is to reject these submissions in the first place by checking the homework's header, and perhaps also checking whether the flags can be matched. This eases grading, as there is no need to double-check whether extraction failed or the student made a wrong submission.
This is also useful when extracting the timings. Right now, a sanity check is made on the number of fields expected for the timing, so that it catches wrong submissions or submissions that altered the timing field!
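A sketch of the proposed header check, assuming the homework title appears in the notebook's first markdown cell (the expected title string is an assumption):

```python
def is_right_homework(cells, expected="Homework 2"):
    """Check that the notebook's first markdown cell mentions the
    expected homework title; reject the submission otherwise."""
    for cell in cells:
        if cell.get("cell_type") == "markdown":
            return expected in "".join(cell.get("source", []))
    return False
```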
Since we pass an absolute path in conf.json for the homework directory, and we usually start jupyter-notebook from a directory other than the root (/), the generated URLs for the homeworks are not correct: the first few directories need to be replaced with Jupyter's working directory, and /tree/ has to be added to the path for Jupyter's sake.
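A sketch of the URL rewrite, assuming the Jupyter server was started from server_root; the port and paths are illustrative:

```python
import os

def to_jupyter_url(abs_path, server_root, base="http://localhost:8888"):
    """Rewrite an absolute homework path into a URL the running
    Jupyter server can open, inserting /tree/ as Jupyter expects."""
    rel = os.path.relpath(abs_path, server_root)
    return f"{base}/tree/{rel}"
```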
I think something could be done to make the blocks more distinguishable. The fact that the student ID is placed after his/her solution is also confusing. A separator line might help.
We may want to grade related subtasks together without having separate notebooks for each one, while each subtask still ends up with its own column and note in the results spreadsheet.
So, we need to add a feature where one notebook can have multiple grades and comments, corresponding to the number of subtasks included in that notebook.
Perhaps as a separate script, it would be good to have an automatic way of extracting the contents of zip-file submissions (mainly extracting the .ipynb).
Otherwise this must be done manually, and with many zipped submissions it becomes time-consuming.
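A sketch of such a script using Python's zipfile module, extracting only the .ipynb members; directory names are illustrative:

```python
import os
import zipfile

def extract_notebooks(zip_dir, out_dir):
    """Extract only the .ipynb members of every zip archive found
    in `zip_dir` into `out_dir`, skipping everything else."""
    os.makedirs(out_dir, exist_ok=True)
    for name in os.listdir(zip_dir):
        if not name.endswith(".zip"):
            continue
        with zipfile.ZipFile(os.path.join(zip_dir, name)) as zf:
            for member in zf.namelist():
                if member.endswith(".ipynb"):
                    zf.extract(member, out_dir)
```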
I think something has been added to option 1 that makes the script considerably slower; with or without rerun, it is still slow!
It would be beneficial if grading could be done in separate passes, for example two halves: grading some of the students before the deadline and grading the rest after it has passed.
This is mainly needed to improve practice-session quality, since grading the homework gives a rough estimate of which tasks need more discussion at the practice. Also, since the deadline is close to two of the practice sessions, it is not easy to grade everything before those practices.
At the moment, grading students in different sessions/passes is a bit of a hassle when the results need to be put together.