Hi, we ran the code for 295 epochs. Below is the log from the final epoch of the run. Please let us know if we are missing something.
epoch [294/295] train_loss 0.2000 supervised_loss 0.1954 consistency_loss 0.0012 train_iou: 0.9596 - val_loss 0.5416 - val_iou 0.6689 - val_SE 0.5690 - val_PC 0.6468 - val_F1 0.5644 - val_ACC 0.7565
We made the following modification to the learning-rate schedule, based on the link you provided in the other issue, because we were encountering "RuntimeError: For non-complex input tensors, argument alpha must not be a complex number.":
def adjust_learning_rate(optimizer, i_iter, len_loader, max_epoch, power, args):
    # Polynomial decay over the total number of training iterations
    lr = lr_poly(args.base_lr, i_iter, max_epoch * len_loader, power)
    optimizer.param_groups[0]['lr'] = lr
    # If a second parameter group exists, train it with a 10x larger learning rate
    if len(optimizer.param_groups) > 1:
        optimizer.param_groups[1]['lr'] = lr * 10
    return lr

lr_ = adjust_learning_rate(optimizer, iter_num, len(trainloader), max_epoch, 0.9, args)
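For context, `lr_poly` is not shown above; the sketch below assumes the standard polynomial-decay form commonly used with this kind of schedule (as in DeepLab-style training loops). The `base_lr` and `len_loader` values here are hypothetical, chosen only to illustrate how the rate decays from `base_lr` toward zero over the run:

```python
def lr_poly(base_lr, i_iter, max_iter, power):
    # Standard polynomial decay: lr shrinks from base_lr to 0 over max_iter steps
    return base_lr * (1 - i_iter / max_iter) ** power

# Hypothetical settings: base_lr=0.01, 295 epochs, 100 iterations per epoch
base_lr, max_epoch, len_loader, power = 0.01, 295, 100, 0.9
max_iter = max_epoch * len_loader
for i_iter in (0, max_iter // 2, max_iter - 1):
    print(f"iter {i_iter}: lr = {lr_poly(base_lr, i_iter, max_iter, power):.6f}")
```

Note that `i_iter` must be the global iteration counter (not the per-epoch index); otherwise the decay never progresses past the first epoch's fraction of `max_iter`.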