Dear authors,
I am currently working on reproducing the results from your paper. It seems you haven't included any code for the implementation of your loss function, so I have some questions on the matter.
From my understanding of the loss, you have modified it to account for the dynamic queue (the additional GPS embeddings).
$P$ - corresponds to the number of different views of an image in a given batch; let's take it to be 1 view for simplicity.
$V$ - is the embedded image
$L$ - is the embedded GPS coordinate
This simplifies the Loss for a single view of a single image in a batch to the following:
$$L_i = - \log \frac{ \exp(V_i \cdot L_i / \tau)}{\sum_{i = 0}^{B-1} \exp(V_i \cdot L_i / \tau) + \sum_{i = 0}^{S-1} \exp(V_i \cdot \tilde{L}_i / \tau)}$$
where, in the denominator, the first sum runs over the batch of length $B$ and the second sum runs over the dynamic queue of length $S$.
My questions are the following:
- It seems like you are using the same index $i$ for the $i^{th}$ sample of the batch, the sum over the batch, and the sum over the dynamic queue. Did you mean to take something like the loss below (index $i$ changed to $k$ in the denominator)?
$$L_i = - \log \frac{ \exp(V_i \cdot L_i / \tau)}{\sum_{k = 0}^{B-1} \exp(V_i \cdot L_k / \tau) + \sum_{k = 0}^{S-1} \exp(V_i \cdot \tilde{L}_k / \tau)}$$
  This way, each image is contrasted against all GPS coordinates (batch and queue) while the image embedding $V_i$ is kept fixed in the denominator.
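For concreteness, here is how I would implement the version with $k$ in the denominator. This is only my own NumPy sketch (shapes and names are hypothetical), not a claim about your implementation:

```python
import numpy as np

def info_nce_with_queue(V, L, L_queue, tau=0.07):
    """Loss as I understand it: image embedding V[i] is matched to its
    GPS embedding L[i]; the negatives are all batch GPS embeddings plus
    the dynamic queue. Shapes (hypothetical): V (B, D), L (B, D),
    L_queue (S, D), all rows L2-normalized."""
    logits_batch = V @ L.T / tau            # (B, B): V_i . L_k
    logits_queue = V @ L_queue.T / tau      # (B, S): V_i . L~_k
    logits = np.concatenate([logits_batch, logits_queue], axis=1)  # (B, B+S)
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(V.shape[0])
    return -log_prob[idx, idx].mean()       # positive sits at column i of row i
```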
- If it is true that each image is contrasted against all coordinates, why did you decide not to also contrast each GPS coordinate against all images? In the original CLIP paper, the cross-entropy loss is applied both horizontally and vertically over the similarity matrix, yet you seem to use only the horizontal direction. Is there a specific reason for this decision?
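For reference, this is the symmetric variant I have in mind (as in CLIP, ignoring the queue since it only stores GPS embeddings and there is no corresponding image queue). Again, just my own NumPy sketch under those assumptions:

```python
import numpy as np

def symmetric_clip_loss(V, L, tau=0.07):
    """CLIP-style symmetric loss: row i of `logits` contrasts image i
    against all GPS embeddings ("horizontal"); column k contrasts GPS k
    against all images ("vertical"). V, L: (B, D), rows L2-normalized."""
    logits = V @ L.T / tau                  # (B, B) similarity matrix
    B = V.shape[0]

    def xent_diagonal(M):
        # cross-entropy with the positive on the diagonal of each row
        M = M - M.max(axis=1, keepdims=True)
        log_prob = M - np.log(np.exp(M).sum(axis=1, keepdims=True))
        return -log_prob[np.arange(B), np.arange(B)].mean()

    loss_img_to_gps = xent_diagonal(logits)     # horizontal
    loss_gps_to_img = xent_diagonal(logits.T)   # vertical
    return 0.5 * (loss_img_to_gps + loss_gps_to_img)
```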
- Going back to the $P$ augmented views: you mention in your paper that a benefit of using a frozen CLIP backbone is that one can pre-encode all images, making training faster. Yet if you perform $P$ augmentations for each image in each batch, don't you have to re-encode the augmented views, thus losing this benefit?
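To make this concern concrete, here is a toy sketch (all names, shapes, and the stand-in encoder are hypothetical) of why augmentation seems to defeat pre-encoding:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 8))  # fixed weights: stand-in for a frozen encoder

def frozen_encoder(x):
    # deterministic given its input, like a frozen CLIP backbone
    v = np.tanh(x @ W)
    return v / np.linalg.norm(v)

images = rng.standard_normal((4, 16))   # toy "images"

# Option A: pre-encode once and cache -- valid only if inputs never change.
cache = np.stack([frozen_encoder(x) for x in images])

# Option B: P random augmentations per batch change the encoder's input,
# so every view must be re-encoded each time, forfeiting the cache.
def augment(x):
    return x + 0.1 * rng.standard_normal(x.shape)   # toy augmentation

P = 2
views = np.stack([frozen_encoder(augment(x)) for x in images for _ in range(P)])
```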
I look forward to hearing from you! Thanks.