rogersce / cnpy
Library to read/write .npy and .npz files in C/C++
License: MIT License
Hi,
I have followed the instructions for installing cnpy and added the path to my makefile. However, I get a "No such file or directory" error on #include <zlib.h>:
zip.h: No such file or directory
Right now, if one opens an npz file in "append" mode and saves an array into it, the array is always added as a new entry, even if an entry with the same name already exists, rather than being appended to the existing array. It would be cool if cnpy could support appending to already existing arrays within an npz file (just as is currently supported for npy files).
cnpy cannot read the attached npz file (unzip it first). It was written using numpy 1.19.5.
It crashes in load_the_npy_file with num_bytes = 0 (?), and arr.data() crashes.
lin_policy_plus_best_10.zip
I am writing two npy formatted files in my C++ program as follows:
vector<vector<int>> vhist;
vector<double> vrmsd;
...
cerr << "saving " << nfeat << " x " << histsize << " features\n";
cnpy::npy_save(fout, &vhist[0], {nfeat, histsize}, "w");
cnpy::npy_save(frmsd, &vrmsd, {nfeat}, "w");
cerr << "done saving features\n";
cerr << "done saving features\n";
Where the output is:
saving 40339 x 64000 features
done saving features
There are two issues:
fout ("dock_feat.npy") contains only a header:
strings dock_feat.npy
NUMPY
{'descr': '<?24', 'fortran_order': False, 'shape': (40339, 64000), }
The descr field in both fout and frmsd is not accepted by np.load()
ValueError: descr is not a valid dtype descriptor: '<?24'
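The '<?24' descr is what cnpy emits for an unknown element type of size 24, which is sizeof(std::vector<int>) on a typical 64-bit libstdc++: &vhist[0] points at the vector objects themselves, not at contiguous int data. A minimal sketch of flattening first (flatten is a hypothetical helper, not part of cnpy):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Flatten a rectangular vector<vector<int>> into one contiguous buffer,
// the row-major layout that npy_save expects for a 2-D array.
std::vector<int> flatten(const std::vector<std::vector<int>>& rows) {
    std::vector<int> flat;
    if (rows.empty()) return flat;
    const std::size_t ncols = rows[0].size();
    flat.reserve(rows.size() * ncols);
    for (const auto& row : rows) {
        assert(row.size() == ncols); // the array must be rectangular
        flat.insert(flat.end(), row.begin(), row.end());
    }
    return flat;
}
```

The save call would then pass flat.data() with shape {nfeat, histsize}; the same applies to the second call, which should pass vrmsd.data() rather than &vrmsd.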
I wrapped CNPY into RcppCNPy, an R package to read/write NumPy files. I put one extension in: the ability to read and write gzip-compressed files (but I only do npy files, not npz files).
If there was interest, I'd be happy to provide a pull request.
My code is now in this github repo.
Dear rogersce,
thanks for the wonderful library to read .npy files. There is a small bug in the recognition of the Fortran order: in cnpy.cpp, line 72, it should be header.substr(loc1,4) instead of header.substr(loc1,5).
Best wishes
Timo
Reading an npy file with a shape dimension larger than 2147483647 (the max value of int) will crash at cnpy.cpp line 85:
shape.push_back(std::stoi(sm[0].str()));
stoi should be changed to stoll or stoull.
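The difference is easy to demonstrate standalone; the dimension string below is illustrative, and anything above INT_MAX behaves the same way:

```cpp
#include <cassert>
#include <stdexcept>
#include <string>

// What cnpy.cpp line 85 effectively does today: stoi throws
// std::out_of_range once a dimension exceeds INT_MAX.
bool stoi_overflows(const std::string& dim) {
    try { (void)std::stoi(dim); return false; }
    catch (const std::out_of_range&) { return true; }
}

// The proposed replacement: stoull parses the full 64-bit range.
unsigned long long parse_dim(const std::string& dim) {
    return std::stoull(dim);
}
```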
Hi,
I want to install cnpy on Windows 10. I followed the installation steps, but after the build step, when I type make, the following error occurs. I am running the make command in the build folder:
make: *** No targets specified and no makefile found. Stop.
Single-byte datatypes such as np.byte have '|' as their byte order character.
This wrongly triggers an assertion failure in cnpy.cpp, line 88
Suggested fix:
bool littleEndian = ((header[loc1] == '<' || header[loc1] == '|') ? true : false);
assert(littleEndian);
Somewhere in cnpy::npy_save(...):
if (fp) {
...
}
else {
fp = fopen(fname.c_str(),"wb");
true_data_shape = shape;
}
...
fseek(fp,0,SEEK_SET); // It raises a segmentation fault
If fopen fails here, fp stays NULL and the subsequent fseek dereferences it.
Hi, I am trying to install CNPY following the instructions, but I always get the following error after "make":
[ 20%] Linking CXX shared library libcnpy.dylib
Undefined symbols for architecture x86_64:
"_inflate", referenced from:
load_the_npz_array(__sFILE*, unsigned int, unsigned int) in cnpy.cpp.o
"_inflateEnd", referenced from:
load_the_npz_array(__sFILE*, unsigned int, unsigned int) in cnpy.cpp.o
"inflateInit2", referenced from:
load_the_npz_array(__sFILE*, unsigned int, unsigned int) in cnpy.cpp.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[2]: *** [libcnpy.dylib] Error 1
make[1]: *** [CMakeFiles/cnpy.dir/all] Error 2
make: *** [all] Error 2
I am not experienced and thought that it might have something to do with zlib, which I installed via Homebrew, but that didn't solve the issue. I am using macOS 10.15.6. Does anyone have a hint for the installation? I would highly appreciate any help. Thanks so much.
I have a fairly large npz file that contains multiple matrices. cnpy loads most of them, but seems to skip over three or four of them; their names are not keys in the map. I can see the matrices in Python, though, and load them normally. I can make the file available for download; it's about 600 MB.
I have installed your library in /usr/local/lib. Compiling programs that use it into executables works just fine.
Now I have some code in example.cpp that relies on #include "cnpy.h", and I want to compile it into a shared object file example.so.
So I tried:
$ g++ -c -fPIC example.cpp --std=c++11
$ g++ example.o -shared -o libexample.so -no-pie -Wl,--whole-archive /usr/local/lib/libcnpy.a -Wl,--no-whole-archive
which results in this error:
/usr/lib/gcc/x86_64-linux-gnu/5/../../../x86_64-linux-gnu/crt1.o: In function `_start':
(.text+0x20): undefined reference to `main'
/usr/local/lib/libcnpy.a(cnpy.cpp.o): In function `load_the_npz_array(_IO_FILE*, unsigned int, unsigned int)':
cnpy.cpp:(.text+0x1535): undefined reference to `inflateInit2_'
cnpy.cpp:(.text+0x1597): undefined reference to `inflate'
cnpy.cpp:(.text+0x15ac): undefined reference to `inflateEnd'
collect2: error: ld returned 1 exit status
What am I doing wrong? Any help is appreciated.
(I use Ubuntu 16.04 if that's important)
I encountered a problem when making dvs-panotracking; here is the error message:
/home/ccs/dvs-panotracking/src/common.cpp:131:63: error: no matching function for call to ‘npy_save(std::basic_string, float*, const unsigned int [2], int)’
cnpy::npy_save(filename + ".npy",in_cpu.data(),shape,2);
^
In file included from /home/ccs/dvs-panotracking/src/common.cpp:20:0:
/usr/local/include/cnpy.h:88:31: note: candidate: template void cnpy::npy_save(std::string, const T*, std::vector, std::string)
template void npy_save(std::string fname, const T* data, const std::vect
^
/usr/local/include/cnpy.h:88:31: note: template argument deduction/substitution failed:
/home/ccs/dvs-panotracking/src/common.cpp:131:63: note: cannot convert ‘shape’ (type ‘const unsigned int [2]’) to type ‘std::vector’
cnpy::npy_save(filename + ".npy",in_cpu.data(),shape,2);
^
In file included from /home/ccs/dvs-panotracking/src/common.cpp:20:0:
/usr/local/include/cnpy.h:223:31: note: candidate: template void cnpy::npy_save(std::string, std::vector, std::string)
template void npy_save(std::string fname, const std::vector data, std
^
/usr/local/include/cnpy.h:223:31: note: template argument deduction/substitution failed:
/home/ccs/dvs-panotracking/src/common.cpp:131:63: note: mismatched types ‘std::vector’ and ‘float*’
cnpy::npy_save(filename + ".npy",in_cpu.data(),shape,2);
^
/home/ccs/dvs-panotracking/src/common.cpp: In function ‘void saveState(std::string, const ImageGpu_8u_C4*)’:
/home/ccs/dvs-panotracking/src/common.cpp:148:38: warning: narrowing conversion of ‘sz.intn<2>::.special::intn<2>::width’ from ‘int’ to ‘unsigned int’ inside { } [-Wnarrowing]
const unsigned int shape[] = {sz.width,sz.height};
^
/home/ccs/dvs-panotracking/src/common.cpp:148:47: warning: narrowing conversion of ‘sz.intn<2>::.special::intn<2>::height’ from ‘int’ to ‘unsigned int’ inside { } [-Wnarrowing]
const unsigned int shape[] = {sz.width,sz.height};
^
make[2]: *** [CMakeFiles/dvs-tracking-common.dir/common.cpp.o] Error 1
make[1]: *** [CMakeFiles/dvs-tracking-common.dir/all] Error 2
make: *** [all] Error 2
Please give me some solutions, thank you very much!
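Judging from the candidate signatures in the error, the installed cnpy.h now takes the shape as a std::vector (and a mode string) instead of the old pointer-plus-rank pair. A hedged sketch of adapting the call site, with to_shape a hypothetical helper:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Convert the old-style (dims array, rank) pair into the std::vector
// shape that the newer npy_save overload expects.
std::vector<std::size_t> to_shape(const unsigned int* dims, std::size_t ndims) {
    return std::vector<std::size_t>(dims, dims + ndims);
}
```

The call would then read something like cnpy::npy_save(filename + ".npy", in_cpu.data(), to_shape(shape, 2), "w"), assuming the final argument is the write mode, as the candidate signature suggests.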
In the event that fclose() fails, npy_save() will not detect the failure. Depending on how the library is used, this can lead to serious problems. To mitigate this, an exception should be thrown if fclose() fails. The same issue exists in npz_save().
I noticed that there might be some pitfalls when trying to read data as an inappropriate type.
Not sure if this is an issue with the code or with the documentation.
Here is an example:
import numpy as np
a = np.arange(20).reshape((4, 5))
b = a * 0.5
np.savez("out.npz", a=a, b=b)
Then loading a and b in C++:
cnpy::NpyArray a = cnpy::npz_load("out.npz", "a");
cnpy::NpyArray b = cnpy::npz_load("out.npz", "b");
auto a_int = a.data<int>(); // not OK
auto a_long = a.data<long>(); // OK
auto a_float = a.data<float>(); // not OK
auto a_double = a.data<double>(); // not OK
auto b_int = b.data<int>(); // not OK
auto b_long = b.data<long>(); // not OK
auto b_float = b.data<float>(); // not OK
auto b_double = b.data<double>(); // OK
By "not OK" I mean that the output data are not correct, although the code compiles without warnings.
std::cout << "a_long:\n";
for(auto i=0; i< a.num_vals; i++){
std::cout << a_long[i] << " ";
if (i != 0 && i % (a.shape[1]) == 4 ) std::cout << "\n";
}
produces
a_int:
0 0 1 0 2
0 3 0 4 0
5 0 6 0 7
0 8 0 9 0
---------
a_long:
0 1 2 3 4
5 6 7 8 9
10 11 12 13 14
15 16 17 18 19
---------
b_float:
0 0 0 1.75 0
1.875 0 1.9375 0 2
0 2.0625 0 2.125 0
2.1875 0 2.25 0 2.28125
---------
b_double:
0 0.5 1 1.5 2
2.5 3 3.5 4 4.5
5 5.5 6 6.5 7
7.5 8 8.5 9 9.5
---------
So only long (for a) and double (for b) return the stored values.
PS: The complete code can be found here: test.py and example.cpp
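This is consistent with data<T>() reinterpreting the on-disk bytes without any conversion. A standalone sketch of the effect on a little-endian machine (the buffer stands in for an int64 array's contents; cnpy itself is not involved):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Reinterpret int64 values as int32, byte for byte, the way data<int>()
// would on an int64 array: each value shows up as its low word followed
// by the zero high word, matching the interleaved "a_int" output above.
std::vector<std::int32_t> view_as_int32(const std::vector<std::int64_t>& v) {
    std::vector<std::int32_t> out(v.size() * 2);
    std::memcpy(out.data(), v.data(), v.size() * sizeof(std::int64_t));
    return out;
}
```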
Consider the following simple program:
#include <vector>
#include "cnpy.h"
int main() {
std::vector<uint32_t> a(778126008);
std::vector<uint32_t> b(389063004);
std::string output("/tmp/tmparray");
cnpy::npz_save(output, "a", &a[0], {a.size()}, "w");
cnpy::npz_save(output, "b", &b[0], {b.size()}, "a");
}
/tmp/tmparray will have a corrupt zipfile header and numpy will not load the array. unzip outputs the following:
% unzip -v tmparray
Archive: tmparray
warning [tmparray]: 4294967296 extra bytes at beginning or within zipfile
(attempting to process anyway)
Length Method Size Cmpr Date Time CRC-32 Name
-------- ------ ------- ---- ---------- ----- -------- ----
3112504112 Stored 3112504112 0% 1980-00-00 00:00 f6b5ba4e a.npy
1556252096 Stored 1556252096 0% 1980-00-00 00:00 544711f7 b.npy
-------- ------- --- -------
4668756208 4668756208 0% 2 files
% unzip tmparray
Archive: tmparray
warning [tmparray]: 4294967296 extra bytes at beginning or within zipfile
(attempting to process anyway)
file #1: bad zipfile offset (local header sig): 4294967296
(attempting to re-compensate)
extracting: a.npy
extracting: b.npy
I guess this is due to missing Zip64 format support. I see that you currently implement Zip file support yourself. Moving to https://libzip.org/ should fix this issue. Is there any reason why you chose zlib over the libzip (which is a higher-level interface to zlib)?
Thanks for the great library. While I have been able to load a numpy file, how do I convert it to OpenCV's Mat format?
Thanks!
In BigEndianTest(), the code is vulnerable to bad optimization when gcc strict aliasing is enabled (-O2 or higher). The following is with gcc 4.4.7 on CentOS 6:
[ 9%] Building CXX object libmt/src/chart/CMakeFiles/chart.dir/CYKNBest.C.o
In file included from /ikm/77/ws/it/mnlp/libmt/src/nmt/readInput.C:2:
/ikm/77/ws/it/mnlp/libcommon/3rd_party/cnpy/cnpy.cpp: In function ‘char cnpy::BigEndianTest()’:
/ikm/77/ws/it/mnlp/libcommon/3rd_party/cnpy/cnpy.cpp:14: warning: dereferencing type-punned pointer will break strict-aliasing rules
The fix is to use a union such as:
union {
    char x[2];
    short y;
};
Write the memory probe to x and read it back through y. gcc and other compilers honor this during optimization.
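A self-contained sketch of what the repaired function could look like, keeping cnpy's convention of returning '<' for little-endian and '>' for big-endian (reading the other union member is the gcc-supported idiom described above):

```cpp
#include <cassert>

// Endianness probe through a union: write the two probe bytes via x and
// read them back as a short via y; compilers honor this under -O2.
char BigEndianTest() {
    union {
        unsigned char x[2];
        short y;
    } u;
    u.x[0] = 1;
    u.x[1] = 0;
    return (u.y == 1) ? '<' : '>';
}
```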
If the numpy array size exceeds the limits of int, saving doesn't work properly.
The following small code causes a segfault on my system:
#include "cnpy.h"
#include <cstdlib>
int main()
{
size_t sz, idx;
int *large_data;
unsigned int shape[2];
shape[0]=300;
shape[1]=20000000;
sz = (size_t)shape[0];
sz *= (size_t)shape[1];
large_data = (int *)malloc(sizeof(int)*sz);
for (idx=0; idx<sz; idx++) {
large_data[idx] = 0;
}
cnpy::npy_save("large-data.npy", large_data, shape, 2, "w");
free(large_data);
}
To compile in the cnpy directory (after cmake, make, etc.):
g++ -o write-large-file write-large-file.cpp -lcnpy -L. -I.
Looking at npy_save, lines 109-110 look to be the culprit, as the int datatype for nels is overflowing where a size_t datatype would be more appropriate.
Though I haven't tested, it looks like PR 9 "Various fixes and modernizations" would fix this issue.
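The wraparound is easy to reproduce standalone; unsigned arithmetic is used here so the overflow is at least well-defined (with a plain int, as in npy_save, it is undefined behaviour):

```cpp
#include <cassert>

// Multiply out the reported shape {300, 20000000} two ways: a 32-bit
// accumulator wraps modulo 2^32, while a 64-bit one holds the true count.
unsigned int nels_u32(const unsigned int* shape, int ndims) {
    unsigned int n = 1;
    for (int i = 0; i < ndims; ++i) n *= shape[i];
    return n;
}

unsigned long long nels_u64(const unsigned int* shape, int ndims) {
    unsigned long long n = 1;
    for (int i = 0; i < ndims; ++i) n *= shape[i];
    return n;
}
```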
Much like #52, npz_load leaks open FILE objects if an exception is thrown. Examples of locations that may throw exceptions include load_the_npy_file().
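One common remedy, sketched here rather than taken from cnpy, is to hold the FILE* in a std::unique_ptr whose deleter is fclose, so every exception path closes the handle:

```cpp
#include <cstdio>
#include <memory>
#include <stdexcept>
#include <string>

// RAII wrapper: the file is closed when the pointer goes out of scope,
// including when parsing throws further down the call chain.
using unique_file = std::unique_ptr<std::FILE, int (*)(std::FILE*)>;

unique_file open_readonly(const std::string& path) {
    unique_file fp(std::fopen(path.c_str(), "rb"), &std::fclose);
    if (!fp) throw std::runtime_error("unable to open " + path);
    return fp;
}
```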
Hello,
I would like to report a potential bug in the software: when loading a float32 numpy array, cnpy gives incorrect values. I spent a while on this bug before realizing that converting the numpy array to float64 before saving makes it work.
Best regards.
undefined reference to `cnpy::npz_load(std::__cxx11::basic_string<char, std::char_traits, std::allocator >, std::__cxx11::basic_string<char, std::char_traits, std::allocator >)'
I'm taking a project that compiles and runs on Ubuntu 16.04 to CentOS 7.4. For some reason, CentOS is not detecting the cnpy library. I've done the make/make install on the device and copied libcnpy.a/.so everywhere I can think of.
Anyone have an idea of why it is still having trouble finding the library?
Thanks
Hi,
Just came to say thank you. Super great job.
Hey! I've found a bug at line 316 of cnpy.cpp@4e8810b. The index should be 18 so that compressed files are handled correctly.
If you have an rgb.png and a depth.npy, how can you synthesize a point cloud in C++?
I met a problem when running the example program. Has anyone encountered a similar problem before? Looking forward to any comments.
The following are details:
Platform: Centos 7.2.1511
g++: 4.8.5
terminate called after throwing an instance of 'std::regex_error'
what(): regex_error
BackTrace
#0 0x00007ffff6d5e5f7 in raise () from /usr/lib64/libc.so.6
#1 0x00007ffff6d5fce8 in abort () from /usr/lib64/libc.so.6
#2 0x00007ffff76908bd in __gnu_cxx::__verbose_terminate_handler ()
at ../../.././libstdc++-v3/libsupc++/vterminate.cc:95
#3 0x00007ffff768e906 in __cxxabiv1::__terminate (handler=<optimized out>)
at ../../.././libstdc++-v3/libsupc++/eh_terminate.cc:47
#4 0x00007ffff768e951 in std::terminate () at ../../.././libstdc++-v3/libsupc++/eh_terminate.cc:57
#5 0x00007ffff768eb68 in __cxxabiv1::__cxa_throw (obj=obj@entry=0x62e120,
tinfo=0x7ffff7970b70 <typeinfo for std::regex_error>,
dest=0x7ffff76b7ca0 <std::regex_error::~regex_error()>)
at ../../.././libstdc++-v3/libsupc++/eh_throw.cc:87
#6 0x00007ffff76b6765 in std::__throw_regex_error (__ecode=std::regex_constants::_S_error_brack)
at ../../../.././libstdc++-v3/src/c++11/functexcept.cc:144
#7 0x00007ffff7bcc374 in std::__detail::_Compiler<char const*, std::regex_traits<char> >::_M_bracket_expression (this=0x7fffffffd8b0) at /usr/include/c++/4.8.2/bits/regex_compiler.h:974
#8 0x00007ffff7bcb1d1 in std::__detail::_Compiler<char const*, std::regex_traits<char> >::_M_atom (
this=0x7fffffffd8b0) at /usr/include/c++/4.8.2/bits/regex_compiler.h:960
#9 0x00007ffff7bc9cb3 in std::__detail::_Compiler<char const*, std::regex_traits<char> >::_M_term (
this=0x7fffffffd8b0) at /usr/include/c++/4.8.2/bits/regex_compiler.h:795
#10 0x00007ffff7bc823e in std::__detail::_Compiler<char const*, std::regex_traits<char> >::_M_alternative (
this=0x7fffffffd8b0) at /usr/include/c++/4.8.2/bits/regex_compiler.h:773
#11 0x00007ffff7bc6a4c in std::__detail::_Compiler<char const*, std::regex_traits<char> >::_M_disjunction (
this=0x7fffffffd8b0) at /usr/include/c++/4.8.2/bits/regex_compiler.h:758
#12 0x00007ffff7bc4c54 in std::__detail::_Compiler<char const*, std::regex_traits<char> >::_Compiler (
this=0x7fffffffd8b0, __b=@0x7fffffffd9b0: 0x7ffff7bce05f "[0-9][0-9]*",
__e=@0x7fffffffd9c8: 0x7ffff7bce06a "", __traits=..., __flags=16)
at /usr/include/c++/4.8.2/bits/regex_compiler.h:729
#13 0x00007ffff7bc2cfe in std::__detail::__compile<char const*, std::regex_traits<char> > (
__b=@0x7fffffffd9b0: 0x7ffff7bce05f "[0-9][0-9]*", __e=@0x7fffffffd9c8: 0x7ffff7bce06a "", __t=...,
__f=16) at /usr/include/c++/4.8.2/bits/regex_compiler.h:1105
#14 0x00007ffff7bc1473 in std::basic_regex<char, std::regex_traits<char> >::basic_regex (
this=0x7fffffffda20, __p=0x7ffff7bce05f "[0-9][0-9]*", __f=16)
at /usr/include/c++/4.8.2/bits/regex.h:388
#15 0x00007ffff7bbda36 in cnpy::parse_npy_header (fp=0x62dca0, word_size=@0x7fffffffdc78: 6479008,
shape=..., fortran_order=@0x7fffffffdc77: false)
at /path/to/3rdparty/cnpy/cnpy.cpp:127
#16 0x00007ffff7bbe21c in load_the_npy_file (fp=0x62dca0)
at /path/to/3rdparty/cnpy/cnpy.cpp:182
#17 0x00007ffff7bbf6cd in cnpy::npy_load (fname=...)
at /path/to/3rdparty/cnpy/cnpy.cpp:333
#18 0x000000000040d45e in main ()
at /path/to/3rdparty/cnpy/example1.cpp:24
Greetings cnpy developers and contributors,
We’re reaching out because your project is an important part of the open source ecosystem, and we’d like to invite you to integrate with our fuzzing service, OSS-Fuzz. OSS-Fuzz is a free fuzzing infrastructure you can use to identify security vulnerabilities and stability bugs in your project. OSS-Fuzz will:
Many widely used open source projects like OpenSSL, FFmpeg, LibreOffice, and ImageMagick are fuzzing via OSS-Fuzz, which helps them find and remediate critical issues.
Even though typical integrations can be done in < 100 LoC, we have a reward program in place which aims to recognize folks who are not just contributing to open source, but are also working hard to make it more secure.
We want to stress that anyone who meets the eligibility criteria and integrates a project with OSS-Fuzz is eligible for a reward.
If you're not interested in integrating with OSS-Fuzz, it would be helpful for us to understand why—lack of interest, lack of time, or something else—so we can better support projects like yours in the future.
If we’ve missed your question in our FAQ, feel free to reply or reach out to us at [email protected].
Thanks!
Tommy
OSS-Fuzz Team
A few simple changes to compile clean under MSVC 2017 64-bit. All changes in cnpy.cpp.
line 67 and line 110 , change int to size_t:
size_t loc1, loc2;
line 318, change size_t to uint32_t:
uint32_t size = *(uint32_t*) &local_header[22];
In Numpy, it's possible to create arrays of structures, e.g.:
a = np.zeros((2,),dtype=('i4,i4,f8'))
It would be great if one could read these into C/C++ with cnpy.
Hello,
I am unclear: if I have a 2D array stored in an npz file, how can I access the data and store it in a vector<vector>?
Also, if a scalar is saved to an npz from simple Python code (below), its shape[0] is not 1 (as shown in the example), but 0.
x = 123
np.savez("scalar.npz", x)
Is there a way to determine exactly the underlying data type within the npz file if all I know is that it contains floating-point values ranging from 0 to 255?
Thank you!
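On the first question: data<double>() returns one flat buffer, and for a C-ordered array (fortran_order == false) rebuilding the nested vector is plain index arithmetic. A standalone sketch, with a stand-in buffer in place of the cnpy call:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Reshape a flat row-major buffer of rows*cols doubles into a nested
// vector; with cnpy, flat would come from arr.data<double>().
std::vector<std::vector<double>> to_2d(const double* flat,
                                       std::size_t rows, std::size_t cols) {
    std::vector<std::vector<double>> out(rows, std::vector<double>(cols));
    for (std::size_t r = 0; r < rows; ++r)
        for (std::size_t c = 0; c < cols; ++c)
            out[r][c] = flat[r * cols + c];
    return out;
}
```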
hi,
Thank you for your great work! I notice your installation guideline is for Linux. Is it possible to use this lib on Windows?
best wishes,
Matthew
Hi,
I'm trying to load an npz file that has 4 numpy arrays with the following number of elements:
arr1: 125, arr2: 6250, arr3: 245000, arr4: 1000.
When trying to read the data stored in arr3, I get a segfault:
cnpy::NpyArray arr3 = cnpy::npz_load(npz_path, "arr3");
double* loaded_data = arr3.data<double>();
vector<double> vec;
for (int i = 0; i < arr3.num_vals; i++) {
    vec.push_back(loaded_data[i]);
}
Thanks
Hi,
I've tried saving the following data type:
std::vector<std::vector<double>> data;
as follows:
cnpy::npy_save(path, &data[0], {nz_, ny_, nx_}, "w");
(the outer vector is of size: nx_ * ny_ * nz_)
and while trying in python:
d = np.load('path/grid.npy')
i'm getting the following error:
Traceback (most recent call last):
File ".../lib/python3.5/site-packages/numpy/lib/format.py", line 510, in _read_array_header
dtype = numpy.dtype(d['descr'])
TypeError: data type "<?24" not understood
How (or should) I consider the size of the inner vector?
Should I load the .npy differently in Python?
Thank you for your help.
./cnpy.hh:56:35: error: a space is required between consecutive right angle brackets
(use '> >')
std::shared_ptr<std::vector<char>> data_holder;
^~
> >
./cnpy.hh:30:50: error: a space is required between consecutive right angle brackets
(use '> >')
data_holder = std::shared_ptr<std::vector<char>>(
^~
> >
./cnpy.hh:63:16: warning: alias declarations are a C++11 extension [-Wc++11-extensions]
using npz_t = std::map<std::string, NpyArray>;
^
example1.cpp:16:37: error: expected expression
cnpy::npy_save("arr1.npy",&data[0],{Nz,Ny,Nx},"w");
^
example1.cpp:25:43: error: expected expression
cnpy::npz_save("out.npz","arr1",&data[0],{Nz,Ny,Nx},"a");
It would be great if cnpy supported memory mapping of files, similar to how numpy.load() does it. (This would be useful for reading small portions of very large files.)
Thanks for your contribution, awesome work! I'm asking whether you have an Eigen API; if not, we are trying to implement one and will create a pull request later on :)
Again, thanks for your great work!
I ran into an issue building when zlib was not installed in the default system location.
This could be fixed by using find_package to discover the zlib include and library paths, and allowing an alternative ZLIB_ROOT to be supplied at build time.
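A sketch of what that could look like in CMakeLists.txt, assuming the library target is named cnpy:

```cmake
# Locate zlib wherever it is installed; users with a non-standard prefix
# can point CMake at it with -DZLIB_ROOT=/path at configure time.
find_package(ZLIB REQUIRED)
target_link_libraries(cnpy PRIVATE ZLIB::ZLIB)
```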
Hi all,
does anybody have the libcnpy.so cross-compiled for armv7 architecture?
I'm having some troubles while compiling it due to another library called libz.so.
So if anyone already compiled it for armv7 (arm-linux-gnueabi compiler) would be really nice.
Thank you.
Running this line:
cnpy::NpyArray arr = cnpy::npy_load("X.npy");
produces this error:
terminate called after throwing an instance of 'std::length_error'
what(): cannot create std::vector larger than max_size()
./cpp_model.sh: line 20: 15311 Aborted (core dumped)
with X.npy.zip attached. The array is pretty small, with shape (137, 24, 144). This is well under the int limit in #71. Loading it in Python via import numpy as np; np.load("./X.npy") works as expected. The array is all doubles.
Could you please add an optional #ifdef to avoid zlib dependency for those looking to use only uncompressed npy save/load? There is no need for zlib.h for just *.npy files.
Something like:
#define CNPY_UNCOMPRESSED_ONLY //Disable compressed archive functionality and zlib dependency
#include "cnpy.h"
Also, for MSVC users, the use of fopen and scanf/sprintf causes compiler issues. Adding #pragma warning(disable:4996) fixes the problem, but it is recommended to replace these unsafe functions with the safer versions.
Hi @rogersce
Thank you for your work.
I am working on reading data from .npy.
cnpy::NpyArray mat = cnpy::npy_load(file);
int row = mat.shape[0];
int col = mat.shape[1];
float* data = mat.data<float>();
For some npy files, I should read in col-major order to get the correct form:
for (int r = 0; r < row; ++r)
    for (int c = 0; c < col; ++c) { egn(r, c) = data[offset]; ++offset; }
And for the other npy files, I should read in row-major order:
for (int c = 0; c < col; ++c)
    for (int r = 0; r < row; ++r) { egn(r, c) = data[offset]; ++offset; }
I do not know when to use row-major and when col-major.
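The npy header actually records this: the fortran_order flag, which cnpy exposes as the NpyArray::fortran_order member. With it, one indexing helper covers both layouts (a sketch):

```cpp
#include <cassert>
#include <cstddef>

// Map (row, col) to a flat offset for either storage order;
// fortran_order comes from the npy header (false is numpy's default).
std::size_t flat_index(std::size_t r, std::size_t c,
                       std::size_t rows, std::size_t cols,
                       bool fortran_order) {
    return fortran_order ? c * rows + r   // column-major (Fortran) order
                         : r * cols + c;  // row-major (C) order
}
```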
When trying to compile a program that uses the cnpy::npz_save method I got an "undefined reference to `crc32'" error, which I did not get when using the cnpy::npy_save method.
I could resolve this by adding -lz to the g++ compilation line, so it looks like this:
g++ test.cpp -L/usr/local/lib -lcnpy -lz -o test
Just wanted to add this here in case others run into the same issue.
Hi,
I installed the code on Ubuntu 18.04 as follows:
$ cmake /home/seyyed/software/cnpy-master
-- The C compiler identification is GNU 7.3.0
-- The CXX compiler identification is GNU 7.3.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found ZLIB: /usr/lib/x86_64-linux-gnu/libz.so (found version "1.2.11")
-- Configuring done
-- Generating done
-- Build files have been written to: /home/seyyed/build
$ make
Scanning dependencies of target cnpy-static
[ 16%] Building CXX object CMakeFiles/cnpy-static.dir/cnpy.cpp.o
[ 33%] Linking CXX static library libcnpy.a
[ 33%] Built target cnpy-static
Scanning dependencies of target cnpy
[ 50%] Building CXX object CMakeFiles/cnpy.dir/cnpy.cpp.o
[ 66%] Linking CXX shared library libcnpy.so
[ 66%] Built target cnpy
Scanning dependencies of target example1
[ 83%] Building CXX object CMakeFiles/example1.dir/example1.cpp.o
[100%] Linking CXX executable example1
[100%] Built target example1
$ sudo make install
[sudo] password for seyyed:
[ 33%] Built target cnpy-static
[ 66%] Built target cnpy
[100%] Built target example1
Install the project...
-- Install configuration: ""
-- Installing: /usr/local/lib/libcnpy.so
-- Installing: /usr/local/lib/libcnpy.a
-- Installing: /usr/local/include/cnpy.h
-- Installing: /usr/local/bin/mat2npz
-- Installing: /usr/local/bin/npy2mat
-- Installing: /usr/local/bin/npz2mat
But when I test example1.cpp in the terminal, the following error is issued:
$ g++ -o example1 example1.cpp -L/usr/local/lib -lcnpy -lz --std=c++11
$ ./example1
./example1: error while loading shared libraries: libcnpy.so: cannot open shared object file: No such file or directory
So, I compiled example1.cpp in NetBeans, and the following is the error:
cd '/home/seyyed/NetBeansProjects/test_cnpy'
/usr/bin/make -f Makefile CONF=Debug
"/usr/bin/make" -f nbproject/Makefile-Debug.mk QMAKE= SUBPROJECTS= .build-conf
make[1]: Entering directory '/home/seyyed/NetBeansProjects/test_cnpy'
"/usr/bin/make" -f nbproject/Makefile-Debug.mk dist/Debug/GNU-Linux/test_cnpy
make[2]: Entering directory '/home/seyyed/NetBeansProjects/test_cnpy'
mkdir -p dist/Debug/GNU-Linux
g++ -o dist/Debug/GNU-Linux/test_cnpy build/Debug/GNU-Linux/main.o /usr/local/lib/libcnpy.a /usr/local/lib/libcnpy.so
build/Debug/GNU-Linux/main.o: In function `void cnpy::npz_save<double>(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, double const*, std::vector<unsigned long, std::allocator<unsigned long> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)':
/home/seyyed/software/cnpy-master/cnpy.h:171: undefined reference to `crc32'
/home/seyyed/software/cnpy-master/cnpy.h:172: undefined reference to `crc32'
build/Debug/GNU-Linux/main.o: In function `void cnpy::npz_save<char>(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, char const*, std::vector<unsigned long, std::allocator<unsigned long> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)':
/home/seyyed/software/cnpy-master/cnpy.h:171: undefined reference to `crc32'
/home/seyyed/software/cnpy-master/cnpy.h:172: undefined reference to `crc32'
build/Debug/GNU-Linux/main.o: In function `void cnpy::npz_save<std::complex<double> >(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::complex<double> const*, std::vector<unsigned long, std::allocator<unsigned long> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)':
/home/seyyed/software/cnpy-master/cnpy.h:171: undefined reference to `crc32'
build/Debug/GNU-Linux/main.o:/home/seyyed/software/cnpy-master/cnpy.h:172: more undefined references to `crc32' follow
/usr/local/lib/libcnpy.a(cnpy.cpp.o): In function `load_the_npz_array(_IO_FILE*, unsigned int, unsigned int)':
cnpy.cpp:(.text+0x1586): undefined reference to `inflateInit2_'
cnpy.cpp:(.text+0x15e8): undefined reference to `inflate'
cnpy.cpp:(.text+0x15fd): undefined reference to `inflateEnd'
collect2: error: ld returned 1 exit status
nbproject/Makefile-Debug.mk:66: recipe for target 'dist/Debug/GNU-Linux/test_cnpy' failed
make[2]: *** [dist/Debug/GNU-Linux/test_cnpy] Error 1
make[2]: Leaving directory '/home/seyyed/NetBeansProjects/test_cnpy'
nbproject/Makefile-Debug.mk:59: recipe for target '.build-conf' failed
make[1]: *** [.build-conf] Error 2
make[1]: Leaving directory '/home/seyyed/NetBeansProjects/test_cnpy'
nbproject/Makefile-impl.mk:39: recipe for target '.build-impl' failed
make: *** [.build-impl] Error 2
BUILD FAILED (exit value 2, total time: 278ms)
Please help me.
Thanks.
It would be great to have an uninstall target along with the install target for make. Users would be able to uninstall the installed cnpy files easily, instead of manually deleting them.
A program using cnpy aborts at the first sight of a non-standard situation. It would be easier to use in something more complex than laboratory exercises if abort() were replaced with exceptions or error codes. I can do this and create a pull request, but I wanted to know which way would be preferable. If no one answers, I'll send the exception version in a few days.
I just want to know why I cannot install cnpy according to the commands. Can anybody tell me how to install cnpy, specifically?
The current implementation of npy_load() leaks an open FILE object if the sub-method load_the_npy_file() throws an exception.
This case is not rare: it can be triggered intentionally by passing a corrupted NPY file, for example one that causes the substr calls to fail.
There also exist several other locations that could trigger exceptions, but they would be difficult to induce intentionally.