Make a downloadable package of all the subject information in one zip
20 subjects - 5 calibration trials each
10 subjects did a sweep calibration - half the trials head-fixed and half head-free - and at the end they did a task calibration
10 subjects did a grid calibration - at the end of each of these they did the task head-free
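One way to build the downloadable package is to walk a per-subject directory tree and write everything into a single zip. A minimal sketch, assuming one folder per subject under a common root (the layout and the function name are illustrative, not the project's actual structure):

```python
import zipfile
from pathlib import Path

def package_subjects(root: Path, out_zip: Path) -> int:
    """Bundle every file under root (one folder per subject) into one zip.

    Returns the number of files written to the archive.
    """
    count = 0
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in sorted(root.rglob("*")):
            if path.is_file():
                # Store paths relative to root so the archive unpacks cleanly.
                zf.write(path, path.relative_to(root).as_posix())
                count += 1
    return count
```

The output zip should live outside the subject root so it is not swept into itself.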
We should be able to call the evaluation routine with the model.
We need ocular proof (validation of the gaze estimates) on the actual tasks.
The minimum calibration is the head-free grid.
Train on the minimum and test on the maximum.
First question: what single 3-minute calibration best predicts task behaviour?
For each participant, what is the best model we can create for that participant?
We have 100 calibrations (20 subjects x 5 trials); which group gives us the best model if we are limited to a single calibration?
Which segment is best for training the model?
Optimize some trade-off between minimizing calibration time and maximizing predictive ability.
Minimize the number of calibration points needed for a given predictive ability.
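The single-best-calibration question can be framed as: fit one model per candidate calibration segment, score each on held-out task data, and take the one with the lowest error. A minimal sketch using a hypothetical affine least-squares gaze model (the model choice, function name, and data shapes are assumptions, not the project's actual pipeline):

```python
import numpy as np

def best_calibration(segments, task_X, task_y):
    """Fit one linear gaze model per calibration segment and rank segments
    by RMSE on held-out task data.

    segments: {name: (X, y)} with X (n, d) features and y (n, k) targets.
    Returns (best_name, {name: rmse}).
    """
    scores = {}
    for name, (X, y) in segments.items():
        # Affine least-squares fit: y ~ X @ W + b (bias folded into W).
        Xa = np.hstack([X, np.ones((len(X), 1))])
        W, *_ = np.linalg.lstsq(Xa, y, rcond=None)
        Ta = np.hstack([task_X, np.ones((len(task_X), 1))])
        pred = Ta @ W
        scores[name] = float(np.sqrt(np.mean((pred - task_y) ** 2)))
    best = min(scores, key=scores.get)
    return best, scores
```

The same loop extends to all 100 calibrations; the per-participant "best model" is just the argmin over that participant's segments.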
Questions:
Head-free vs head-fixed: the hypothesis is that head-free is better than head-fixed.
Sweep vs grid: the hypothesis is that grid is better than sweep.
Head-free / head-fixed / task - maybe there is a composite that is an optimal choice of features.
Sweep, head-free
Sweep, head-fixed
Grid, head-free
Grid, head-fixed
Task
Task vs all other methods
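The head-free vs head-fixed and sweep vs grid hypotheses can be checked with a paired per-subject comparison. A sketch, assuming one aggregate error value per subject per condition (the function name and inputs are illustrative):

```python
import numpy as np

def compare_conditions(err_a, err_b):
    """Paired per-subject comparison of two conditions.

    err_a, err_b: per-subject error values, same subject order.
    Returns (mean difference a - b, fraction of subjects where a beats b).
    A negative mean difference supports "condition A is better".
    """
    d = np.asarray(err_a, dtype=float) - np.asarray(err_b, dtype=float)
    return float(d.mean()), float((d < 0).mean())
```

With 20 subjects a formal paired test (e.g. a sign test) could replace the raw fraction; this sketch only summarizes the direction of the effect.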
Error measures:
3D distance time series
Smash the error into the XY plane (stored as a multidimensional array)
Point-by-point variance of the error
We have a valid 3D point that is taken as "true"; we have to assume that is the ground truth.
The smallest error we can get is defined relative to that ground truth.
We can compute error measures aggregated over an entire set of calibrations,
or error measures against one fixed calibration.
We can measure train-vs-test error over time,
and we can also measure the error in space.
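The first two error measures - the 3D distance time series and its XY-plane projection - can be sketched as follows, assuming gaze estimates and ground truth come as (T, 3) arrays (the names and shapes are assumptions):

```python
import numpy as np

def error_series(gaze, truth):
    """3D distance error time series and its XY-plane projection.

    gaze, truth: (T, 3) arrays; truth is treated as ground truth.
    Returns a (T, 2) multidimensional array: column 0 is the full 3D
    distance per sample, column 1 the error smashed into the XY plane.
    """
    err = gaze - truth
    dist3d = np.linalg.norm(err, axis=1)          # full 3D distance
    distxy = np.linalg.norm(err[:, :2], axis=1)   # drop z, keep XY
    return np.stack([dist3d, distxy], axis=1)
```

Aggregating over a set of calibrations is then a mean (or variance) over the first axis.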
Define a function that outputs the z-variation, y-variation, and x-variation of the error.
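A minimal sketch of such a function, assuming the error is the sample-wise difference between (T, 3) gaze and ground-truth arrays and that "variation" means per-axis variance (both assumptions):

```python
import numpy as np

def axis_variation(gaze, truth):
    """Per-axis variation of the error.

    gaze, truth: (T, 3) arrays with columns (x, y, z).
    Returns (z_var, y_var, x_var), the variance of the signed error
    along each axis across all samples.
    """
    err = gaze - truth
    x_var, y_var, z_var = err.var(axis=0)
    return float(z_var), float(y_var), float(x_var)
```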