matteo-ronchetti / torch-radon
Computational Tomography in PyTorch
Home Page: https://torch-radon.readthedocs.io
License: GNU General Public License v3.0
Hello,
do you have a publication for this package that we could cite?
Best,
Florian
Hi! First of all, thanks for this great contribution.
I am using torch_radon on data of size 972x972. After the forward projection I expect the sinogram to have size [angles, 972*sqrt(2)], but when I run your function the second dimension is 972 instead of sqrt(2)*972. Why is that?
Is it possible to fix this?
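For context, the sinogram width is set by the detector count, which (as the Radon constructor used later in this thread suggests) defaults to the image resolution. A minimal sketch of computing a detector count that covers the full image diagonal; the `radon` call in the comment is illustrative usage, not a verified API guarantee:

```python
import math

def diagonal_det_count(resolution: int) -> int:
    """Smallest detector count that covers the full image diagonal."""
    return math.ceil(resolution * math.sqrt(2))

# For a 972x972 image, sqrt(2)*972 ~= 1374.6, so 1375 detector cells are
# needed to cover the diagonal. Passing this explicitly (instead of the
# default det_count, which equals the resolution) widens the sinogram:
#   radon = Radon(resolution=972, angles=angles,
#                 det_count=diagonal_det_count(972))   # hypothetical usage
print(diagonal_det_count(972))
```
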
Hi, after installing torch-radon successfully, I encountered a problem,
"
File "/usr/local/lib64/python3.6/site-packages/torch_radon/init.py", line 6, in
import torch_radon_cuda
ImportError: /usr/lib64/libm.so.6: version `GLIBC_2.27' not found (required by /usr/local/lib64/python3.6/site-packages/torch_radon_cuda.cpython-36m-x86_64-linux-gnu.so)
"
My env is -python3.6-cuda10.1-cudnn7.6-pytorch1.5.
I wonder if this problem is caused by the original .whl being compiled on a Linux distribution with a newer glibc.
Could you help me fix this problem? Thanks!
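The error above means the system's glibc is older than the one the wheel was built against. As a quick diagnostic, the installed glibc version can be read from Python with the standard library (a sketch; on non-glibc systems the result may be empty strings):

```python
import platform

# Report the C library the interpreter is running against.
# On glibc systems this returns e.g. ('glibc', '2.27'); the wheel in the
# traceback above requires GLIBC_2.27 or newer.
libc, version = platform.libc_ver()
print(libc, version)
```
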
Hi, I am trying to install the torch-radon library and ran into this problem, even though the installation process finished without reporting any error. I followed the author's instructions in a closed issue and tried a few things, but didn't manage to fix it.
I would very much appreciate any help or suggestions on how this library can be installed and run successfully. Thanks!
Hi!
First of all, I'd like to thank @matteo-ronchetti for the neat work he has been doing on Torch-Radon!
I wanted to report a small typo that causes a compilation error when building Torch-Radon v2.0.0. In parameter_classes.h, both the get_block_dim and get_grid_size declarations return the wrong type inside the ExecCfg class. I believe they should return vec3 instead of dim3.
Regards.
Compiling src/symbolic.cpp
g++ -std=gnu++11 -fPIC -D_GLIBCXX_USE_CXX11_ABI=0 -Wall -lcuda -lcudart -I./include -DNDEBUG -O3 -c src/symbolic.cpp -o objs/symbolic.o
In file included from ./include/symbolic.h:5,
from src/symbolic.cpp:1:
./include/parameter_classes.h:98:5: error: ‘dim3’ does not name a type
98 | dim3 get_block_dim() const;
| ^~~~
./include/parameter_classes.h:100:5: error: ‘dim3’ does not name a type
100 | dim3 get_grid_size(int x, int y = 1, int z = 1) const;
CUDA 11.0, Torch 1.7.1
The error "CUDA error: no kernel image is available for execution on the device" was reported when I ran the examples. The same examples ran successfully on my PC, but the above error appeared on the workstation.
PyTorch 1.8.1 has a problem: module 'torch' has no attribute 'rfft'.
The setup.py script leads me to believe the installation cannot be done on Windows, is that true?
Thanks
Hello, while testing the torch-radon algorithm, I found a performance-related problem.
If I use radon() -> radon.forward() -> radon.filter_sinogram() -> radon.backprojection(), the reconstructed image has a low PSNR (i.e. a large MSE) compared to the official MATLAB code with the same settings.
For example, for a CT image, the reconstruction PSNR of torch-radon is 29.39 dB, while that of MATLAB is nearly 40 dB.
Obviously, this reconstruction is not satisfactory due to the low PSNR.
Could you tell me how to improve the baseline performance of torch-radon, without adding CNN modules and training?
My goal is to obtain high-quality sinograms (projections).
Thanks! I appreciate your help!
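One thing worth checking before comparing implementations: PSNR numbers are only comparable if both pipelines use the same data range and normalization. A minimal PSNR helper in NumPy, assuming images scaled to a known peak value (this is a generic definition, not torch-radon or MATLAB code):

```python
import numpy as np

def psnr(reference: np.ndarray, estimate: np.ndarray, data_range: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB for images scaled to [0, data_range]."""
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")
    return 10.0 * np.log10(data_range ** 2 / mse)

ref = np.zeros((8, 8))
est = np.full((8, 8), 0.1)   # uniform error of 0.1 -> MSE = 0.01
print(psnr(ref, est))        # 10*log10(1 / 0.01) = 20.0 dB
```

A mismatch in data_range (e.g. [0, 1] images compared against a [0, 255] convention) shifts PSNR by a constant and can easily account for a ~10 dB gap.
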
If you're reading this, it's because you're wondering whether this package is still active. It is not: the last updates were merged in November 2020. However, I really like this package, so I will accept pull requests on my fork. If Matteo comes back, I will happily help merge my changes upstream.
https://github.com/carterbox/torch-radon
I also publish pre-compiled releases of my fork for the conda package manager on the conda-forge channel.
Good work on this interesting package. I built 306ae46 and ran the following simple test, where I expect the Radon operator to be exact at angles 0 and PI/2.
It seems that the angle specification may be wrong, or there is some other bias in the ray geometry. I was wondering whether there is an explanation for this and whether it could be fixed.
import numpy as np
import torch

from torch_radon import Radon


def test_simple_integrals(image_size=16):
    """Check that the forward radon operator works correctly at 0 and PI/2.

    When we project at angles 0 and PI/2, the forward operator should be the
    same as taking the sum over the object array along each axis.
    """
    angles = torch.tensor(
        [0.0, -np.pi / 2, np.pi, np.pi / 2],
        dtype=torch.float32,
        device='cuda',
    )
    radon = Radon(
        resolution=image_size,
        angles=angles,
        det_spacing=1.0,
        det_count=image_size,
        clip_to_circle=False,
    )
    original = torch.zeros(
        image_size,
        image_size,
        dtype=torch.float32,
        device='cuda',
    )
    original[image_size // 4, :] += 1
    data = radon.forward(original)
    data0 = torch.sum(original, axis=0)
    data1 = torch.sum(original, axis=1)
    print('\n', data[0].cpu().numpy())
    print(data0.cpu().numpy())
    print('\n', data[1].cpu().numpy())
    print(data1.cpu().numpy())
    print('\n', data[2].cpu().numpy())
    print(data0.cpu().numpy()[::-1])
    print('\n', data[3].cpu().numpy())
    print(data1.cpu().numpy()[::-1])
tests/test_parallel_beam.py::test_simple_integrals
[0.99632365 0.99632365 0.99632365 0.99632365 0.99632365 0.99632365
0.99632365 0.99632365 0.99632365 0.99632365 0.99632365 0.99632365
0.99632365 0.99632365 0.99632365 0.99632365]
[1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]
[ 0. 0. 0. 0. 16.000002 0. 0.
0. 0. 0. 0. 0. 0. 0.
0. 0. ]
[ 0. 0. 0. 0. 16. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[0.99632365 0.99632365 0.99632365 0.99632365 0.99632365 0.99632365
0.99632365 0.99632365 0.99632365 0.99632365 0.99632365 0.99632365
0.99632365 0.99632365 0.99632365 0.99632365]
[1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]
[ 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 16.000002 0. 0.
0. 0. ]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 16. 0. 0. 0. 0.]
Hi Matteo,
Nice work!
I encountered some error messages below when I tried to install your package from source. Do you know how to fix this?
Thanks,
Dongdong
---Steps:
$ git clone https://github.com/matteo-ronchetti/torch-radon.git
$ cd torch-radon
$ python setup.py install
---Errors:
Traceback (most recent call last):
File "setup.py", line 1, in
from setuptools import setup
ImportError: cannot import name 'setup'
I'd like to have the possibility to compute the Radon transform of non-square images.
This would be extremely useful in a number of imaging applications.
I have a question: when will torch-radon support the RTX 4090? The RTX 4090 seems to have compute capability 8.9.
Thanks a lot!
What are the minimum possible versions of PyTorch and cudatoolkit when building from source?
I receive many errors when building with PyTorch 1.1 and cudatoolkit 9.0, which I don't get with PyTorch 1.7 and cudatoolkit 10.
Due to hardware restrictions it is not possible for me to install newer versions. Does that mean it is hopeless?
The functions cuda_backend.rfft and cuda_backend.irfft don't implement a backward pass, so their gradient can't be computed. This makes BaseRadon.filter_sinogram unusable for network training.
Is there a reason why you don't use torch.fft.rfft and torch.fft.irfft?
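For reference, sinogram filtering is just a per-row multiplication in the frequency domain, which is why torch.fft.rfft/irfft (both differentiable in PyTorch) would suffice. A NumPy sketch of a plain ramp filter; the filter shape here is illustrative, not torch-radon's exact filter:

```python
import numpy as np

def ramp_filter_rows(sinogram: np.ndarray) -> np.ndarray:
    """Apply a plain ramp filter |f| to each detector row of a sinogram."""
    n = sinogram.shape[-1]
    freqs = np.fft.rfftfreq(n)                    # 0 .. 0.5 cycles/sample
    spectrum = np.fft.rfft(sinogram, axis=-1)     # real-to-complex FFT per row
    return np.fft.irfft(spectrum * freqs, n=n, axis=-1)

# The ramp is zero at DC, so a constant sinogram filters to (almost) zero.
sino = np.ones((4, 32))
print(np.abs(ramp_filter_rows(sino)).max())
```

The same three lines written with torch.fft.rfft/irfft would propagate gradients automatically, since torch.fft operations support autograd.
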
Hi,
The package installed correctly, but it is not working: there is a "name 'RaysCfg' is not defined" error. How can I fix it?
Thanks!
(dlenv) $ python auto_install.py
Checking requirements
Operating System: linux OK
Python version: 3.6 OK
PyTorch: 1.7 OK
CUDA: 10.1 OK
Executing: pip install --force-reinstall https://rosh-public.s3-eu-west-1.amazonaws.com/radon/cuda-10.1/torch-1.7/torch_radon-1.0.0-cp36-cp36m-linux_x86_64.whl
Collecting torch-radon==1.0.0
Using cached https://rosh-public.s3-eu-west-1.amazonaws.com/radon/cuda-10.1/torch-1.7/torch_radon-1.0.0-cp36-cp36m-linux_x86_64.whl (1.0 MB)
Collecting scipy
Using cached scipy-1.5.4-cp36-cp36m-manylinux1_x86_64.whl (25.9 MB)
...
Successfully installed Pillow-8.2.0 alpha-transform-0.0.1 astropy-4.1 cycler-0.10.0 healpy-1.14.0 kiwisolver-1.3.1 matplotlib-3.3.4 numexpr-2.7.3 numpy-1.19.5 pyfftw-0.12.0 pyparsing-2.4.7 python-dateutil-2.8.1 scipy-1.5.4 six-1.15.0 torch-radon-1.0.0 tqdm-4.60.0
NameError                                 Traceback (most recent call last)
<ipython-input> in <module>
----> 1 radon = Radon(image_size, angles, clip_to_circle=True)

~/.conda/envs/dlenv/lib/python3.6/site-packages/torch_radon/__init__.py in __init__(self, resolution, angles, det_count, det_spacing, clip_to_circle)
    167             det_count = resolution
    168
--> 169         rays_cfg = RaysCfg(resolution, resolution, det_count, det_spacing, len(angles), clip_to_circle)
    170
    171         super().__init__(angles, rays_cfg)

NameError: name 'RaysCfg' is not defined
Linux version 3.10.0-862.el7.x86_64 ([email protected]) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-28) (GCC) ) #1 SMP Fri Apr 20 16:44:24 UTC 2018
NVIDIA-SMI 418.152.00
Driver Version: 418.152.00
CUDA Version: 10.1
I applied projection, filtering, and backprojection to a 2D 512×512 image, but the reconstructed image is very different from the original. Here is the main code:
radon = RadonFanbeam(512, angles, 512, 512, det_count=739, det_spacing=1.0, clip_to_circle=False)
img = torch.FloatTensor(img).to(torch.device('cuda')).unsqueeze(0).unsqueeze(0)
our_fp = radon.forward(img)
our_bp = radon.backprojection(radon.filter_sinogram(our_fp))
What's the right way to do FBP with fan-beam geometry? Thanks a lot.
I was using the torch-radon v2 branch, installed with wget -qO- https://raw.githubusercontent.com/matteo-ronchetti/torch-radon/v2/auto_install.py | python - as provided in issue #23.
But when I try to run some test code I get the following error:
CUDA error at cudaMemcpy3D(&myparms) (src/texture.cu:167) error code: 1, error string: invalid argument
The GPU was not occupied (so it shouldn't be an OOM problem?), and my test code is the following:
import torch
import numpy as np

from torch_radon.radon import FanBeam

if __name__ == '__main__':
    import os

    img = np.random.randn(256, 256)
    os.environ['CUDA_VISIBLE_DEVICES'] = '4'
    print(torch.cuda.device_count())
    print(torch.cuda.is_available())
    angles = np.linspace(0, 2 * np.pi, 360)
    img = torch.from_numpy(img).to("cuda").float().reshape(1, 1, 256, 256)
    print(img.shape, img.min(), img.max())
    tool = FanBeam(det_count=768, angles=angles, src_dist=1075)
    print('FanBeam tool initialized')
    print(tool.forward(img, angles))
The full error messages are:
1
True
torch.Size([1, 1, 256, 256]) tensor(-4.5295, device='cuda:0') tensor(4.8204, device='cuda:0')
FanBeam tool initialized
CUDA error at cudaMemcpy3D(&myparms) (src/texture.cu:167) error code: 1, error string: invalid argument
Any advice on solving this issue? Thanks!
Why, in torch-radon/examples/visual_sample.py, is the FBP result radon.backprojection(filtered_sinogram) * np.pi / n_angles, while in torch-radon/examples/visual_sample.py the FBP result is radon.backprojection(filtered_sinogram)?
You give two different ways of handling this.
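For what it's worth, the np.pi / n_angles factor is the standard quadrature weight that appears when the FBP angular integral over [0, pi) is discretized: a sum over N equally spaced angles approximates the integral only if each term is weighted by the angular step pi/N. A small NumPy check of that rule (illustrative, not torch-radon code):

```python
import numpy as np

# FBP integrates the filtered backprojection over angles in [0, pi).
# A sum over N equally spaced angles approximates that integral only
# when each term carries the angular step pi/N as a weight.
n_angles = 360
angles = np.linspace(0, np.pi, n_angles, endpoint=False)

# Example integrand: the integral of sin(theta) over [0, pi) is exactly 2.
approx = np.sum(np.sin(angles)) * np.pi / n_angles
print(approx)  # close to 2.0
```

So a backprojection without the factor differs from the properly scaled one by a constant; whether an example applies it only changes the overall intensity scale of the reconstruction.
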
Using CUDA_HOME=C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.3
Compiling src\forward.template
C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.3/bin/nvcc -std=c++11 -ccbin=g++ -Xcompiler -fPIC -Xcompiler -static -Xcompiler -static-libgcc -Xcompiler -static-libstdc++ -I./include -gencode arch=compute_60,code=sm_60 -gencode arch=compute_70,code=sm_70 -gencode arch=compute_75,code=sm_75 -gencode arch=compute_80,code=sm_80 -gencode arch=compute_86,code=sm_86 -DNDEBUG -O3 --generate-line-info --compiler-options -Wall -c src\forward.cu -o objs/cuda/forward.o
'C:/Program' is not recognized as an internal or external command, operable program or batch file.
ERROR IN COMPILATION
As mentioned in the title, Colab updated to Python 3.10, and the notebook you linked no longer works. In particular, the auto-installer fails, since only Python versions 3.6, 3.7, and 3.8 are supported.
Are there any plans to benchmark it against pyro-nn?
Thank you for your excellent work on this project, but I ran into some problems and wanted to confirm them with you.
I have followed your documentation. After running the install commands, I can start python, import torch, and import torch_radon_cuda, and the program reports no errors.
But when I use:
import numpy as np
import torch
x = torch.randn(2, 1, 512, 512).to("cuda")
from torch_radon import Radon
angles = np.linspace(0, np.pi, 512, endpoint=False)
radon = Radon(512, angles)
pro = radon.forward(x)
I find that pro is an all-zero tensor. I also converted this tensor into a PIL image and saved it, and the picture is all black.
I want to know where the problem is.
Hoping for your reply.
Hi, I encountered the error "make: *** No rule to make target `install'. Stop." when I ran "make install".
Could you please help me solve it? Thanks a lot. Hope to receive your reply soon.
Got this error:
AttributeError: 'RadonFanbeam' object has no attribute 'noise_generator'
even though I don't call noise_generator.
Hi,
with PyTorch >= 1.8 there is no torch.rfft function anymore; can you integrate an alternative?
Thanks!
Hi there,
I encountered an "Importing exception" error after installing it via "wget -qO- https://raw.githubusercontent.com/matteo-ronchetti/torch-radon/master/auto_install.py | python -". The main problem seems to be the failing import of "torch_radon_cuda".
There is this code in torch_radon/__init__.py:
try:
    import torch_radon_cuda
    from torch_radon_cuda import RaysCfg
except Exception as e:
    print("Importing exception")
However, the compiled module "torch_radon_cuda" is completely missing.
Could you have a look at this?
Thanks!
Best,
Di
I installed torch_radon successfully through:
wget -qO- https://raw.githubusercontent.com/matteo-ronchetti/torch-radon/master/auto_install.py | python -
and wrote the code below for testing:
import numpy as np
from torch_radon import Radon, RadonFanbeam
my_radon = Radon(362, np.linspace(0, np.pi, 60, endpoint=False))
But I got:
Importing exception
Importing exception
Traceback (most recent call last):
File "model.py", line 3, in <module>
my_radon = Radon(362, np.linspace(0, np.pi, 60, endpoint=False))
File "/usr/local/lib/python3.6/dist-packages/torch_radon/__init__.py", line 169, in __init__
rays_cfg = RaysCfg(resolution, resolution, det_count, det_spacing, len(angles), clip_to_circle)
NameError: name 'RaysCfg' is not defined
Exception ignored in: <bound method BaseRadon.__del__ of <torch_radon.Radon object at 0x7f12d9f51e80>>
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch_radon/__init__.py", line 140, in __del__
self.noise_generator.free()
AttributeError: 'Radon' object has no attribute 'noise_generator'
What should I do?
I tried to reconstruct a sinogram (obtained with real equipment) using the torch-radon FBP method, but I only get the first image, which may contain a very small object. The second image was obtained with another reconstruction method.
I used torch_radon.RadonFanbeam() with these parameters:
resolution = 512
source_distance = 55 [mm]
det_distance = 148 [mm]
det_spacing = 0.276890625 [mm/pixel]
I wonder whether the parameters have to be transformed when applying torch-radon FBP to real data.
How can I solve this problem? Thanks.
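One common pitfall with real scanner data is mixing physical units (mm) with torch-radon's pixel-based geometry. A sketch of the usual conversion: project the detector pixel pitch to the isocenter via the fan-beam magnification, then express all lengths in reconstruction-pixel units. The formula is standard fan-beam geometry; pixel_size_mm below is a hypothetical value, not something from this issue:

```python
# Fan-beam magnification: a detector element of width det_spacing, seen
# from the source, subtends det_spacing / M at the isocenter, where
# M = (source_distance + det_distance) / source_distance.
source_distance = 55.0        # mm, source to rotation axis
det_distance = 148.0          # mm, rotation axis to detector
det_spacing = 0.276890625     # mm per detector pixel

magnification = (source_distance + det_distance) / source_distance
spacing_at_iso = det_spacing / magnification
print(magnification, spacing_at_iso)

# To move to pixel units, divide every length by the reconstruction
# pixel size (hypothetical value chosen for illustration):
pixel_size_mm = 0.075
print(source_distance / pixel_size_mm, spacing_at_iso / pixel_size_mm)
```

If the distances are passed in mm while det_spacing is interpreted in pixels (or vice versa), the effective fan angle is wrong and the reconstruction collapses toward a tiny or smeared object, which matches the symptom described above.
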
Hi
Thank you for publishing this code! It is truly amazing.
After I solved the torch.rfft/torch.irfft incompatibility (successfully, I want to believe) using two wrapper functions (posted below), I tried to run the visual_sample.py example, but I got the following error:
Traceback (most recent call last):
File "/home/user/torch-radon/examples/visual_sample.py", line 29, in <module>
backprojection = radon.backprojection(sinogram, extend=True)
File "/home/user/miniconda3/lib/python3.9/site-packages/torch_radon-1.0.0-py3.9-linux-x86_64.egg/torch_radon/utils.py", line 30, in wrapped
y = f(self, x, *args, **kwargs)
TypeError: backprojection() got an unexpected keyword argument 'extend'
Should I just delete the 'extend' argument?
The wrapper functions I used to solve the torch.rfft and torch.irfft incompatibility:
def rfft_wrapper(input, signal_ndim, normalized=False):
    if signal_ndim < 1 or signal_ndim > 3:
        print("Signal ndim out of range, was", signal_ndim,
              "but expected a value between 1 and 3, inclusive")
        return
    dims = (-1,)
    if signal_ndim == 2:
        dims = (-2, -1)
    if signal_ndim == 3:
        dims = (-3, -2, -1)
    norm = "backward"
    if normalized:
        norm = "ortho"
    return torch.view_as_real(torch.fft.fftn(input, dim=dims, norm=norm))


def irfft_wrapper(input, signal_ndim, normalized=False):
    if signal_ndim < 1 or signal_ndim > 3:
        print("Signal ndim out of range, was", signal_ndim,
              "but expected a value between 1 and 3, inclusive")
        return
    dims = (-1,)
    if signal_ndim == 2:
        dims = (-2, -1)
    if signal_ndim == 3:
        dims = (-3, -2, -1)
    norm = "backward"
    if normalized:
        norm = "ortho"
    return torch.view_as_real(torch.fft.ifftn(input, dim=dims, norm=norm))
Thank you for your fascinating code.
I think there may be a mistake in __init__.py (in the torch-radon/build_tools directory). When I run python setup.py install on Linux, bash raises the error "UnboundLocalError: local variable 'compute_capabilities' referenced before assignment".
I found that "compute_capabilities" is misspelled as "compute_capabilites" on two different lines in __init__.py (in the torch-radon/build_tools directory). After I fixed it, the code runs smoothly.
Looking forward to your checking this in your free time!
I am receiving the following error when running the simple FBP example, while calculating the sinogram:
RuntimeError: CUDA error: no kernel image is available for execution on the device
My setup:
Nvidia GPU A6000
PyTorch 1.11.0 py3.9 cuda11.3 cudnn8.2.0.0
Cudatoolkit 11.3.1
Cudatoolkit-dev 11.4.1 - needed to build TorchRadon from source
TorchRadon was built without issues. Simple PyTorch calculations do not produce this error.
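This error usually means the extension binary was not compiled for the running GPU's compute capability. A quick sketch for matching a GPU to the -gencode flag a CUDA build needs; the small capability table is illustrative (the listed values are the standard ones for these cards), and on a live system torch.cuda.get_device_capability() reports the value directly:

```python
# Compute capability determines which "-gencode arch=compute_XY,code=sm_XY"
# flag a CUDA build needs. On a machine with PyTorch installed, query it with:
#   import torch; torch.cuda.get_device_capability(0)
KNOWN_CAPABILITIES = {
    "RTX A6000": (8, 6),   # Ampere
    "RTX 3090": (8, 6),    # Ampere
    "RTX 4090": (8, 9),    # Ada Lovelace
    "V100": (7, 0),        # Volta
}

def gencode_flag(capability):
    """Render the nvcc flag for a (major, minor) compute capability."""
    major, minor = capability
    return f"-gencode arch=compute_{major}{minor},code=sm_{major}{minor}"

print(gencode_flag(KNOWN_CAPABILITIES["RTX A6000"]))
```

If the capability reported for the device is missing from the build's -gencode list (or the build used a different CUDA toolkit than the PyTorch install), "no kernel image" is the typical symptom.
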
Hi,
I was wondering about version 2 of this library. Do you have any plans about when you are going to release it?
Currently, I am having trouble with the torch.fft modules in version 1, but the v2 branch does not build for me.
Any hints would be appreciated. Thanks for your help!
I have installed it in my anaconda environment, and I can see "torch-radon 1.0.0 pypi_0 pypi" when I run conda list, but something goes wrong when I write "import torch-radon". How can I import it?
Hi.
I find torch-radon library extremely helpful in accelerating forward/backward projection.
It worked well on machines with CUDA 10.2. However, when I tried to run the code on machines with CUDA 11.x, I got the following error message:
Checking requirements
Operating System: linux OK
Python version: 3.7 OK
PyTorch: 1.1 ERROR
Precompiled packages are build for PyTorch 1.5 to 1.7
Consider manually compiling torch-radon
It turns out that this conflicts with the CUDA requirements of recent PyTorch (1.11+), and I am wondering whether there is a plan for an updated version of torch-radon that could solve this problem?
Thanks!
Output in shell:
(py3) C:\Users\z003zv1a\torch-radon>python setup.py install
Using CUDA_HOME=C:\CUDA\v10.1
�[32mCompiling src\forward.template�[0m
�[34mC:\CUDA\v10.1/bin/nvcc -std=c++11 -ccbin=g++ -Xcompiler -fPIC -Xcompiler -static -Xcompiler -static-libgcc -Xcompiler -static-libstdc++ -I./include -gencode arch=compute_60,code=sm_60 -gencode arch=compute_70,code=sm_70 -gencode arch=compute_75,code=sm_75 -DNDEBUG -O3 --generate-line-info --compiler-options -Wall -c src\forward.cu -o objs/cuda/forward.o�[0m
ERROR IN COMPILATION
Operating system: Windows 10
Python version: 3.7
I see that Windows isn't supported officially. Nevertheless, I'd appreciate any advice on addressing this issue.
Hi,
First of all thanks for the great library. It's very intuitive to use 👍
However, I have an issue with backward() in torch not working after the forward projection plus some permutations of the result, due to a contiguity issue.
The task I'm trying to do here deals with 3D objects, and I guess I can do that since torch-radon supports batch dimensions. If I have a 3D object of shape (batch, channel, h, w, d), where h refers to height, w to width, and d to depth, I have to permute the dimensions before computing the forward projection; after the projection, the sinogram (or the stack of 2D projections in this case) will have the shape (d, N, det), where N is the number of projection angles and det is the number of detector pixels. To view this as multiple 2D projections, I need to permute the dimensions again so that it has the shape (batch, N, det, d). I have written code which reproduces the error I am encountering.
device = 'cuda:0'
img_size = 128
n_angles = 3
geometry = 'parallel'

# Parallel beam geometry
if geometry == 'parallel':
    angles = np.linspace(0, np.pi, n_angles, endpoint=False)
    radon = Radon(img_size, angles, clip_to_circle=False)

l1_loss = nn.L1Loss(reduction='mean')

#########################################################
# Case 1: No ERROR
#########################################################
# Random 3D object in pytorch dimensions - (b x c x h x w x d)
obj = torch.randn([1, 1, 128, 128, 128], requires_grad=True).to(device)
target = torch.ones([1, 3, 128, 128]).to(device)
target = proj_bpxy_to_ypx(target)
sino = radon.forward(vol_bchwz_to_zhw(obj))
loss = l1_loss(sino, target)
loss.backward()

#########################################################
# Case 2: ERROR
#########################################################
# Random 3D object in pytorch dimensions - (b x c x h x w x d)
obj = torch.randn([1, 1, 128, 128, 128], requires_grad=True).to(device)
target = torch.ones([1, 3, 128, 128]).to(device)
sino = proj_ypx_to_bpxy(radon.forward(vol_bchwz_to_zhw(obj)))
loss = l1_loss(sino, target)
loss.backward()
The helper functions are defined as below:

def proj_bpxy_to_ypx(tensor):
    """
    Ex. 3-view
    From ~ Input projection : [3(p), 128(x), 128(y)] or [1(b), 3(p), 128(x), 128(y)]
    To ~ TorchRadon projection : [128(y), 3(p), 128(x)]
    """
    assert isinstance(tensor, torch.Tensor), 'input must be type torch tensor'
    if tensor.dim() == 4:
        tensor = tensor.squeeze()
    return tensor.permute(2, 0, 1).contiguous()


def proj_ypx_to_bpxy(tensor):
    """
    Ex. 3-view
    TorchRadon projection : [128(y), 3(p), 128(x)]
    Input projection : [1(b), 3(p), 128(x), 128(y)]
    """
    assert isinstance(tensor, torch.Tensor), 'input must be type torch tensor'
    assert tensor.dim() == 3, f'Dimension must be e.g.[128(y), 3(p), 128(x)]. Received {tensor.shape}'
    y, p, x = tensor.shape
    return tensor.permute(1, 2, 0).contiguous().view(1, p, x, y).contiguous()


def vol_zhw_to_bchwz(tensor):
    """
    Ex.
    TorchRadon BP : [z, h, w]
    Volume shape for 3D Unet Generator: [b, c, h, w, z]
    """
    assert isinstance(tensor, torch.Tensor), 'input must be type torch tensor'
    assert tensor.dim() == 3, f'Dimension must be e.g.[z, h, w]. Received {tensor.shape}'
    z, h, w = tensor.shape
    return tensor.permute(1, 2, 0).contiguous().view(1, 1, h, w, z)


def vol_bchwz_to_zhw(tensor):
    """
    Ex.
    Volume shape for 3D Unet Generator: [b, c, h, w, z]
    TorchRadon BP : [z, h, w]
    """
    assert isinstance(tensor, torch.Tensor), 'input must be type torch tensor'
    assert tensor.dim() == 5, f'Dimension must be e.g.[b, c, h, w, z]. Received {tensor.shape}'
    b, c, h, w, z = tensor.shape
    return tensor.squeeze().contiguous().view(z, h, w)
For Case 1 there is no error. However, if I run Case 2, I get the following error message:
RuntimeError: x must be contiguous
I guess this error is usually solved with contiguous() (https://stackoverflow.com/questions/48915810/pytorch-contiguous), and I've tried putting it in every possible place before and after the transpose operations, but it still produces errors. Am I doing something wrong? Could you help me fix this issue?
Thanks a lot in advance.
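For readers hitting the same error: the underlying behavior is easy to see with NumPy, where np.ascontiguousarray plays the role of torch's .contiguous(). Permuting axes yields a view with non-row-major strides, and kernels that require contiguous memory need an explicit copy made after the final permutation, not before it:

```python
import numpy as np

a = np.zeros((2, 3, 4))
b = a.transpose(2, 0, 1)            # a view: same data, permuted strides
print(b.flags['C_CONTIGUOUS'])      # False: strides are no longer row-major

c = np.ascontiguousarray(b)         # materialize a row-major copy
print(c.flags['C_CONTIGUOUS'])      # True

# The PyTorch analogue (not executed here) is:
#   t.permute(2, 0, 1).contiguous()
# Calling .contiguous() before a later permute does not help, because the
# permute re-introduces non-contiguous strides.
```
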