vccimaging / diffoptics
This is the open source repository for our IEEE Transactions on Computational Imaging 2022 paper "dO: A differentiable engine for Deep Lens design of computational imaging systems".

Home Page: https://vccimaging.org/Publications/Wang2022DiffOptics

License: MIT License

Python 100.00%

diffoptics's People

Contributors: congliwang

diffoptics's Issues

ref2.tif not found error

Excuse me, where can I get the ref2.tif needed to run the example code? I ran into an error like this:

DiffMetrology is using: cuda:0
Figure(800x600)
Traceback (most recent call last):
  File "misalignment_point.py", line 97, in <module>
    img = imread('./data/20210304/ref2.tif') # for now we use grayscale
  File "/home/ma-user/anaconda3/envs/PyTorch-1.8/lib/python3.7/site-packages/matplotlib/image.py", line 1560, in imread
    with img_open(fname) as image:
  File "/home/ma-user/anaconda3/envs/PyTorch-1.8/lib/python3.7/site-packages/PIL/Image.py", line 3092, in open
    fp = builtins.open(filename, "rb")
FileNotFoundError: [Errno 2] No such file or directory: './data/20210304/ref2.tif'
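One workaround sketch (assumptions: ref2.tif is a captured reference frame that is simply not shipped with the repository, and the 512x512 size below is a guess, not the real capture geometry): create a placeholder grayscale TIFF under the expected path so the script can at least proceed past the imread call.

```python
import os
import numpy as np
from PIL import Image

# The expected path comes from the traceback above; the resolution is a
# placeholder guess, not the real capture geometry.
path = './data/20210304/ref2.tif'
os.makedirs(os.path.dirname(path), exist_ok=True)
if not os.path.exists(path):
    # Flat gray frame: lets the code run, but any metrology result is
    # meaningless until this is replaced by the actual reference capture.
    Image.fromarray(np.full((512, 512), 128, dtype=np.uint8)).save(path)
```

Any grayscale image saved under that name should work the same way; the real calibration capture is of course needed for meaningful results.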

end2end_edof_backward_tracing.py run error

Hello.

I am interested in end2end_edof_backward_tracing.py, but when I tried to run it, I encountered an HTTP error 503 during the download of
http://data.lip6.fr/cadene/pretrainedmodels/inceptionresnetv2-520b38e4.pth

Therefore, I was unable to proceed. However, I was able to download the model by running
wget --no-check-certificate http://data.lip6.fr/cadene/pretrainedmodels/inceptionresnetv2-520b38e4.pth

I saved the downloaded model to
${HOME}/.cache/torch/hub/checkpoints/inceptionresnetv2-520b38e4.pth
and tried running it again.
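For context on why pre-placing the file works (a sketch under the assumption of default settings, i.e. no TORCH_HOME override): torch.hub's load_state_dict_from_url checks a local checkpoints directory before attempting any download, and that directory resolves as follows.

```python
import os

# torch.hub.get_dir() defaults to $TORCH_HOME/hub; TORCH_HOME itself
# falls back to $XDG_CACHE_HOME/torch, then ~/.cache/torch. A checkpoint
# already present under <hub dir>/checkpoints is used without any
# network access, which sidesteps the 503.
torch_home = os.environ.get(
    'TORCH_HOME',
    os.path.join(os.environ.get('XDG_CACHE_HOME',
                                os.path.expanduser('~/.cache')), 'torch'))
checkpoint_dir = os.path.join(torch_home, 'hub', 'checkpoints')
print(checkpoint_dir)
```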

But then I encountered the error below. Does anyone know of a solution?


$ python end2end_edof_backward_tracing.py
ai.size = 10
Check your lens:
Lensgroup:
origin: tensor([0., 0., 0.], device='cuda:0')
shift: tensor([0., 0., 0.], device='cuda:0')
theta_x: 0.0
theta_y: 0.0
theta_z: 0.0
device: cuda
surfaces[0]: XYPolynomial:
d: 96.0
is_square: False
r: 6.35
device: cpu
J: 3
ai: tensor([-0., -0., -0., -0., -0., -0., -0., -0., -0., -0.], device='cuda:0')
b: -0.0
surfaces[1]: Aspheric:
d: 98.5
is_square: False
r: 6.35
device: cpu
c: -0.0
k: 0.0
ai: None
surfaces[2]: Aspheric:
d: 100.0
is_square: False
r: 6.35
device: cpu
c: 0.007800311781466007
k: 0.0
ai: None
surfaces[3]: Aspheric:
d: 102.5
is_square: False
r: 6.35
device: cpu
c: 0.021881837397813797
k: 0.0
ai: None
surfaces[4]: Aspheric:
d: 106.5
is_square: False
r: 6.35
device: cpu
c: -0.01592356711626053
k: 0.0
ai: None
materials[0]: Material:
name: air
A: 1.000293
B: 0.0
materials[1]: Material:
name: n-bk7
A: 1.5046606373973115
B: 4215.6909567737075
materials[2]: Material:
name: air
A: 1.000293
B: 0.0
materials[3]: Material:
name: sf5
A: 1.6412198987038364
B: 10932.236122773584
materials[4]: Material:
name: n-bk7
A: 1.5046606373973115
B: 4215.6909567737075
materials[5]: Material:
name: air
A: 1.000293
B: 0.0
pixel_size: 0.0138
film_size[0]: 128
film_size[1]: 128
to_world: Transformation:
R: tensor([[1., 0., 0.],
[0., 1., 0.],
[0., 0., 1.]], device='cuda:0')
t: tensor([0., 0., 0.], device='cuda:0')
to_object: Transformation:
R: tensor([[1., 0., 0.],
[0., 1., 0.],
[0., 0., 1.]], device='cuda:0')
t: tensor([0., 0., 0.], device='cuda:0')
mts_prepared: True
r_last: 0.8832
d_sensor: 0
aperture_ind: None
mts_Rt: Transformation:
R: tensor([[1., 0., 0.],
[0., 1., 0.],
[0., 0., 1.]], device='cuda:0')
t: tensor([0., 0., 0.], device='cuda:0')
aperture_radius: 6.35
aperture_distance: 96.0
W0705 15:52:17.955988 2101 warnings.py:109] /opt/conda/lib/python3.8/site-packages/albumentations/augmentations/dropout/cutout.py:49: FutureWarning: Cutout has been deprecated. Please use CoarseDropout
warnings.warn(

I0705 15:52:17.956725 2101 dataset.py:28] Subsampling buckets from 0 to 90.0, total buckets number is 100
I0705 15:52:17.957050 2101 dataset.py:71] Dataset has been created with 7 samples
I0705 15:52:17.958113 2101 dataset.py:28] Subsampling buckets from 90.0 to 100, total buckets number is 100
I0705 15:52:17.958303 2101 dataset.py:71] Dataset has been created with 1 samples
Initial:
Current optical parameters are:
-- lens.surfaces[0].ai: [-0. -0. -0. -0. -0. -0. -0. -0. -0. -0.]
Training starts ...

=========

Iteration = 0, z = 8000.0 [mm]:
Current optical parameters are:
-- lens.surfaces[0].ai: [-0. -0. -0. -0. -0. -0. -0. -0. -0. -0.]

=========

(1) Rendering batch images: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:01<00:00, 4.39it/s]
(2) Training network weights: 0%| | 0/200 [00:00<?, ?it/s]W0705 15:52:43.765300 2101 warnings.py:109] /opt/conda/lib/python3.8/site-packages/torch/nn/functional.py:3669: UserWarning: nn.functional.upsample is deprecated. Use nn.functional.interpolate instead.
warnings.warn("nn.functional.upsample is deprecated. Use nn.functional.interpolate instead.")

(2) Training network weights: 0%| | 0/200 [00:03<?, ?it/s]
Traceback (most recent call last):
  File "end2end_edof_backward_tracing.py", line 186, in <module>
    Is_output = net.run(
  File "/work/shared/DiffOptics/examples/neural_networks/DeblurGANv2/train_end2end.py", line 104, in run
    curr_psnr, curr_ssim, img_for_vis = self.model.get_images_and_metrics(inputs.detach(), outputs, targets)
  File "/work/shared/DiffOptics/examples/neural_networks/DeblurGANv2/models/models.py", line 28, in get_images_and_metrics
    ssim = SSIM(fake, real, multichannel=True)
  File "/opt/conda/lib/python3.8/site-packages/skimage/metrics/_structural_similarity.py", line 178, in structural_similarity
    raise ValueError(
ValueError: win_size exceeds image extent. Either ensure that your images are at least 7x7; or pass win_size explicitly in the function call, with an odd value less than or equal to the smaller side of your images. If your images are multichannel (with color channels), set channel_axis to the axis number corresponding to the channels.

About recovering the original image from the re-rendered image

Hi there,

Thank you for your contribution to the community. Appreciated~~

Now I am trying to recover the original image from the re-rendered image obtained from your demo. With the provided camera parameter file, how could I achieve this? (I am not sure whether this inverse task is doable...) Without digging into the code, the pipeline that comes to my mind is to feed the re-rendered image into the backward tracing to obtain the original image. Am I correct?

Best Regards,
S. F.

BSplines

Hi,
thank you for this nice GitHub repo. I am trying to write an exporter to the STEP CAD file format with Python OpenCASCADE (pythonOCC), and I am a bit confused by your code for the B-splines.

In the line

def _generate_knots(R, n, p=3, device=torch.device('cpu')):

here is the code with some extra comments:

    @staticmethod
    def _generate_knots(R, n, p=3, device=torch.device('cpu')):
        t = np.linspace(-R, R, n)
        step = t[1] - t[0]
        T = t[0] - 0.9 * step
        
        np.pad(t, p+1, 'constant', constant_values=step)  # no effect: np.pad returns a new array that is never assigned
        
        t = np.concatenate((np.ones(p+1)*T, t, -np.ones(p+1)*T), axis=0)
        return torch.Tensor(t).to(device)   #=>c.shape[0]==t.shape[0]+4 if size[0]==size[1]

it becomes clear that the shape of c does not match the shape of tx or ty in the respective dimensions; it is smaller by 4 in each dimension. Why is that?

Would it be safe to use the B-spline surface directly as input for a CAD file, or would it be better to fit a NURBS curve to your surface?

regards Martin
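On the off-by-four question above: for a degree-p B-spline, the number of basis functions (and hence coefficients) is len(knots) - p - 1, so with cubic splines (p = 3) the coefficient array is always 4 entries shorter than the knot vector in each dimension. A quick check with SciPy, mimicking the repo's knot construction (the values below are illustrative, not taken from the repo):

```python
import numpy as np
from scipy.interpolate import BSpline

p = 3                                   # cubic, as in _generate_knots
n = 8                                   # interior knot sample count
t = np.linspace(-1.0, 1.0, n)
step = t[1] - t[0]
T = t[0] - 0.9 * step
# clamp-style end padding analogous to _generate_knots above
t = np.concatenate((np.ones(p + 1) * T, t, -np.ones(p + 1) * T))

n_coeffs = len(t) - p - 1               # 16 - 4 = 12: four fewer than knots
spline = BSpline(t, np.zeros(n_coeffs), p)  # constructs without error
print(len(t) - n_coeffs)                # 4
```

So the mismatch is inherent to the B-spline definition, not a bug; a STEP exporter should allocate len(t) - 4 control points per cubic dimension.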

Code availability

Hello, in the README you mention that the code will be released once reviews are out. However, your paper is already published. Will you release the code soon?

License of code?

Hi!

Thanks a lot for the nice code and paper :)! Really cool work!

I was wondering whether I can use parts of the code.

Do you plan to add a license to ease code usage?

Best :)
Felix
