Hello.
I am interested in end2end_edof_backward_tracing.py, but when I tried to run it, I encountered an HTTP 503 error while downloading
http://data.lip6.fr/cadene/pretrainedmodels/inceptionresnetv2-520b38e4.pth
Therefore, I was unable to proceed. However, I was able to download the model by running
wget --no-check-certificate http://data.lip6.fr/cadene/pretrainedmodels/inceptionresnetv2-520b38e4.pth
I placed the downloaded model at
${HOME}/.cache/torch/hub/checkpoints/inceptionresnetv2-520b38e4.pth
and tried running the script again.
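For anyone who wants to verify the checkpoint is in the directory torch actually searches: the sketch below mirrors (as an assumption, for default installs) how torch.hub resolves its cache directory, i.e. $TORCH_HOME if set, else $XDG_CACHE_HOME/torch, else ~/.cache/torch, with "hub/checkpoints" appended. The function name default_checkpoint_dir is mine, not a torch API.

```python
import os

def default_checkpoint_dir():
    # Assumed resolution order (mirrors torch.hub.get_dir() on default
    # installs): $TORCH_HOME, else $XDG_CACHE_HOME/torch, else ~/.cache/torch.
    torch_home = os.environ.get("TORCH_HOME")
    if torch_home is None:
        cache = os.environ.get("XDG_CACHE_HOME", os.path.expanduser("~/.cache"))
        torch_home = os.path.join(cache, "torch")
    # Downloaded pretrained weights land in the "hub/checkpoints" subfolder.
    return os.path.join(torch_home, "hub", "checkpoints")

print(default_checkpoint_dir())
```

If this prints a different directory than where you saved the .pth file (e.g. because TORCH_HOME is set in your environment), move the checkpoint there.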
But this time I ran into the error below. Does anyone know of a solution?
$ python end2end_edof_backward_tracing.py
ai.size = 10
Check your lens:
Lensgroup:
origin: tensor([0., 0., 0.], device='cuda:0')
shift: tensor([0., 0., 0.], device='cuda:0')
theta_x: 0.0
theta_y: 0.0
theta_z: 0.0
device: cuda
surfaces[0]: XYPolynomial:
d: 96.0
is_square: False
r: 6.35
device: cpu
J: 3
ai: tensor([-0., -0., -0., -0., -0., -0., -0., -0., -0., -0.], device='cuda:0')
b: -0.0
surfaces[1]: Aspheric:
d: 98.5
is_square: False
r: 6.35
device: cpu
c: -0.0
k: 0.0
ai: None
surfaces[2]: Aspheric:
d: 100.0
is_square: False
r: 6.35
device: cpu
c: 0.007800311781466007
k: 0.0
ai: None
surfaces[3]: Aspheric:
d: 102.5
is_square: False
r: 6.35
device: cpu
c: 0.021881837397813797
k: 0.0
ai: None
surfaces[4]: Aspheric:
d: 106.5
is_square: False
r: 6.35
device: cpu
c: -0.01592356711626053
k: 0.0
ai: None
materials[0]: Material:
name: air
A: 1.000293
B: 0.0
materials[1]: Material:
name: n-bk7
A: 1.5046606373973115
B: 4215.6909567737075
materials[2]: Material:
name: air
A: 1.000293
B: 0.0
materials[3]: Material:
name: sf5
A: 1.6412198987038364
B: 10932.236122773584
materials[4]: Material:
name: n-bk7
A: 1.5046606373973115
B: 4215.6909567737075
materials[5]: Material:
name: air
A: 1.000293
B: 0.0
pixel_size: 0.0138
film_size[0]: 128
film_size[1]: 128
to_world: Transformation:
R: tensor([[1., 0., 0.],
[0., 1., 0.],
[0., 0., 1.]], device='cuda:0')
t: tensor([0., 0., 0.], device='cuda:0')
to_object: Transformation:
R: tensor([[1., 0., 0.],
[0., 1., 0.],
[0., 0., 1.]], device='cuda:0')
t: tensor([0., 0., 0.], device='cuda:0')
mts_prepared: True
r_last: 0.8832
d_sensor: 0
aperture_ind: None
mts_Rt: Transformation:
R: tensor([[1., 0., 0.],
[0., 1., 0.],
[0., 0., 1.]], device='cuda:0')
t: tensor([0., 0., 0.], device='cuda:0')
aperture_radius: 6.35
aperture_distance: 96.0
W0705 15:52:17.955988 2101 warnings.py:109] /opt/conda/lib/python3.8/site-packages/albumentations/augmentations/dropout/cutout.py:49: FutureWarning: Cutout has been deprecated. Please use CoarseDropout
warnings.warn(
I0705 15:52:17.956725 2101 dataset.py:28] Subsampling buckets from 0 to 90.0, total buckets number is 100
I0705 15:52:17.957050 2101 dataset.py:71] Dataset has been created with 7 samples
I0705 15:52:17.958113 2101 dataset.py:28] Subsampling buckets from 90.0 to 100, total buckets number is 100
I0705 15:52:17.958303 2101 dataset.py:71] Dataset has been created with 1 samples
Initial:
Current optical parameters are:
-- lens.surfaces[0].ai: [-0. -0. -0. -0. -0. -0. -0. -0. -0. -0.]
Training starts ...
=========
Iteration = 0, z = 8000.0 [mm]:
Current optical parameters are:
-- lens.surfaces[0].ai: [-0. -0. -0. -0. -0. -0. -0. -0. -0. -0.]
=========
(1) Rendering batch images: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:01<00:00, 4.39it/s]
(2) Training network weights: 0%| | 0/200 [00:00<?, ?it/s]W0705 15:52:43.765300 2101 warnings.py:109] /opt/conda/lib/python3.8/site-packages/torch/nn/functional.py:3669: UserWarning: nn.functional.upsample is deprecated. Use nn.functional.interpolate instead.
warnings.warn("nn.functional.upsample is deprecated. Use nn.functional.interpolate instead.")
(2) Training network weights: 0%| | 0/200 [00:03<?, ?it/s]
Traceback (most recent call last):
File "end2end_edof_backward_tracing.py", line 186, in <module>
Is_output = net.run(
File "/work/shared/DiffOptics/examples/neural_networks/DeblurGANv2/train_end2end.py", line 104, in run
curr_psnr, curr_ssim, img_for_vis = self.model.get_images_and_metrics(inputs.detach(), outputs, targets)
File "/work/shared/DiffOptics/examples/neural_networks/DeblurGANv2/models/models.py", line 28, in get_images_and_metrics
ssim = SSIM(fake, real, multichannel=True)
File "/opt/conda/lib/python3.8/site-packages/skimage/metrics/_structural_similarity.py", line 178, in structural_similarity
raise ValueError(
ValueError: win_size exceeds image extent. Either ensure that your images are at least 7x7; or pass win_size explicitly in the function call, with an odd value less than or equal to the smaller side of your images. If your images are multichannel (with color channels), set channel_axis to the axis number corresponding to the channels.