Huge thanks for organizing this. I am a mobile ML practitioner and have experience with many mobile-friendly deep learning models. Some projects:
I wanted to know if you folks would be open to me submitting a Colab Notebook showing the model conversion and inference process in TensorFlow Lite. That way, I think the community would be able to play with the pre-trained models quickly.
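For instance, the conversion and inference part could look something like the minimal sketch below (assuming the model is exported as a TensorFlow SavedModel; the `models/punet_saved_model` path is a placeholder, not the repo's actual layout):

```python
# Minimal sketch: SavedModel -> TFLite conversion, then inference with
# the TFLite interpreter. Paths here are placeholders.
import numpy as np
import tensorflow as tf

saved_model_dir = "models/punet_saved_model"  # placeholder path

# 1. Convert the SavedModel to a TensorFlow Lite flatbuffer.
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
tflite_model = converter.convert()
with open("punet.tflite", "wb") as f:
    f.write(tflite_model)

# 2. Run inference with the TFLite interpreter on a dummy RAW input
#    that matches the model's declared shape and dtype.
interpreter = tf.lite.Interpreter(model_path="punet.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

dummy_raw = np.random.rand(*inp["shape"]).astype(inp["dtype"])
interpreter.set_tensor(inp["index"], dummy_raw)
interpreter.invoke()
rgb = interpreter.get_tensor(out["index"])
print("Output shape:", rgb.shape)
```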
Hi,
Since the MAI21 competition closed a few months ago, could you please publish the ground-truth (DSLR) data of the Validation set for pure research purposes? Currently, only the RAW images of the validation set are available to download on the competition page (https://competitions.codalab.org/competitions/28054#participate-get_data).
Also, it would be very nice if you could share the Test set (RAW + RGB) as well.
Where can I download the pre-trained model for inference? It is mentioned that we should "Download Mediatek's pre-trained PUNET model and put it into models/original/ folder", but I could not find the link. Your help would be much appreciated.
The training happens on 128x128 (reduced) RAW images, while the pre-trained model's input dimensions are different. Why is there this mismatch? I understand that the pre-trained model's input dimensions were matched to what the challenge expects, but I feel I am missing something here.
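My current guess is that the network is fully convolutional, so the weights trained on 128x128 patches can simply be loaded into a graph rebuilt with the challenge's input dimensions. A sketch of what I mean is below; `build_punet` is a hypothetical stand-in, not the repo's actual API, and the 544x960 target resolution is an assumption:

```python
# Sketch of how I assume the patch/full-resolution mismatch is handled:
# the same weights are loaded into two graphs that differ only in their
# declared input shape, which works because conv kernels do not depend
# on the spatial input size.
import tensorflow as tf

def build_punet(height, width):
    # Stand-in fully convolutional model: 4-channel packed RAW in,
    # 3-channel RGB out.
    inputs = tf.keras.Input(shape=(height, width, 4))
    x = tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
    outputs = tf.keras.layers.Conv2D(3, 3, padding="same")(x)
    return tf.keras.Model(inputs, outputs)

# Train on small patches...
train_model = build_punet(128, 128)
# train_model.fit(...)
train_model.save_weights("punet.weights.h5")

# ...then rebuild at the resolution the challenge expects and reuse
# the trained weights.
full_model = build_punet(544, 960)  # assumed target resolution
full_model.load_weights("punet.weights.h5")
```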
I am trying to get the provided code running and wanted to know how the validation split was obtained. Was it taken from the original training pairs (RAW-RGB)?
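For reference, I am currently holding out a validation subset along these lines and would like to know whether the official split was produced similarly; the directory names, seed, and 95/5 ratio below are my assumptions, not the repo's:

```python
# Hedged sketch of one way a RAW-RGB validation split could be made:
# pair files by sorted filename order, then hold out a fixed-seed
# random subset for validation.
import random
from pathlib import Path

raw_files = sorted(Path("train/raw").glob("*.png"))   # assumed layout
rgb_files = sorted(Path("train/dslr").glob("*.png"))  # assumed layout
pairs = list(zip(raw_files, rgb_files))

random.seed(0)                  # fixed seed => reproducible split
random.shuffle(pairs)
n_val = int(0.05 * len(pairs))  # assumed 5% held out
val_pairs, train_pairs = pairs[:n_val], pairs[n_val:]
print(f"{len(train_pairs)} training pairs, {len(val_pairs)} validation pairs")
```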