tf-encrypted / tf-seal

Bridge between TensorFlow and the Microsoft SEAL homomorphic encryption library

License: Apache License 2.0

Dockerfile 0.54% Python 35.51% Shell 9.40% C++ 48.09% Makefile 6.45%
cryptography homomorphic-encryption machine-learning tensorflow

tf-seal's Issues

Implement Sigmoid

HE approximates sigmoid much like MPC does, via a low-degree polynomial; we just need to implement the approximation.
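
As a plaintext reference only (a sketch, no tf-seal ops involved), the degree-3 coefficients used in the unit tests mentioned in the seal_poly_eval issue further down give:

# Plaintext sketch only. Coefficients are the degree-3 sigmoid approximation
# from the unit tests (0.5 + 0.197*x - 0.004*x**3); coeffs[i] multiplies x**i.
import numpy as np

COEFFS = [0.5, 0.197, 0.0, -0.004]

def sigmoid_approx(x, coeffs=COEFFS):
    return sum(c * x**i for i, c in enumerate(coeffs))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

xs = np.linspace(-5.0, 5.0, 101)
print(np.max(np.abs(sigmoid_approx(xs) - sigmoid(xs))))  # worst-case error on [-5, 5]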

Python wrappers

We need to create some Python wrappers, similar to what tf-big does.

Separate Generate Key Op and Variants

In the first PR #1, we introduced the ability to encrypt and decrypt some floating-point data. We'll want to improve this by adding a separate key-generation op and separate variants for each type of key: public key, secret key, relinearization key, and Galois key. We can probably use generics to avoid duplicating code.

Op naming

From here:

Since an op’s name uniquely identifies it, different TF packages should ensure their op names are globally unique across the entire TF ecosystem. To do so, prepend the package’s name to the op’s name and separate with a ‘>’. An op named “MatMul” inside the “tensorflow_addons” package should be named “Addons>MatMul”, for example.

can't use pip to install

When I use the command pip install tf-seal, I just get the following error:
ERROR: Could not find a version that satisfies the requirement tf-seal (from versions: none)
ERROR: No matching distribution found for tf-seal

Could not find a version that satisfies the requirement tf-nightly<2,>=1.14.0 (from tf-seal)

When I ran pip install tf-seal, the following error occurred:
ERROR: Could not find a version that satisfies the requirement tf-nightly<2,>=1.14.0 (from tf-seal) (from versions: 2.5.0.dev20210101, 2.5.0.dev20210102, 2.5.0.dev20210103, 2.5.0.dev20210104, 2.5.0.dev20210105, 2.5.0.dev20210106, 2.5.0.dev20210107, 2.5.0.dev20210108, 2.5.0.dev20210109, 2.5.0.dev20210110, 2.5.0.dev20210111, 2.5.0.dev20210112, 2.5.0.dev20210113, 2.5.0.dev20210114, 2.5.0.dev20210119, 2.5.0.dev20210121, 2.5.0.dev20210122, 2.5.0.dev20210123, 2.5.0.dev20210124, 2.5.0.dev20210125, 2.5.0.dev20210126, 2.5.0.dev20210127, 2.5.0.dev20210128, 2.5.0.dev20210129, 2.5.0.dev20210130, 2.5.0.dev20210203, 2.5.0.dev20210204, 2.5.0.dev20210208, 2.5.0.dev20210209, 2.5.0.dev20210210)
ERROR: No matching distribution found for tf-nightly<2,>=1.14.0

Development docker container

A Docker container with all the dependencies pre-installed would help with development environment setup and CircleCI build times.

pip install tf-seal fails

Running pip install tf-seal results in:

Defaulting to user installation because normal site-packages is not writeable
ERROR: Could not find a version that satisfies the requirement tf-seal (from versions: none)
ERROR: No matching distribution found for tf-seal

Ubuntu under Windows 10 WSL:

No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 18.04.4 LTS
Release:        18.04
Codename:       bionic

Fix macos CI

There's a problem with the Xcode version on CircleCI not being compatible with the TensorFlow build I did for macOS. Figure out how we can make these work together.

Build failing for custom tensorflow inside docker

I believe a patch needs to be applied to allow TensorFlow to build with newer gRPC versions, as TensorFlow only supports a single commit of gRPC.

I believe this is related to:

Error message

cc1plus: warning: command line option '-Wno-implicit-function-declaration' is valid for C/ObjC but not for C++
ERROR: /home/archie/.cache/bazel/_bazel_archie/ddef78ddb52319615715a709a4a5202c/external/grpc/BUILD:507:1: C++ compilation of rule '@grpc//:gpr_base' failed (Exit 1)
external/grpc/src/core/lib/gpr/log_linux.cc:43:13: error: ambiguating new declaration of 'long int gettid()'
   43 | static long gettid(void) { return syscall(__NR_gettid); }
      |             ^~~~~~
In file included from /usr/include/unistd.h:1170,
                 from external/grpc/src/core/lib/gpr/log_linux.cc:41:
/usr/include/bits/unistd_ext.h:34:16: note: old declaration '__pid_t gettid()'
   34 | extern __pid_t gettid (void) __THROW;
      |                ^~~~~~
external/grpc/src/core/lib/gpr/log_linux.cc:43:13: warning: 'long int gettid()' defined but not used [-Wunused-function]
   43 | static long gettid(void) { return syscall(__NR_gettid); }
      |             ^~~~~~
Target //tensorflow/tools/pip_package:build_pip_package failed to build
Use --verbose_failures to see the command lines of failed build steps.
ERROR: /home/archie/git/tf-seal/tensorflow-c-17/tensorflow/tools/pip_package/BUILD:49:1 C++ compilation of rule '@grpc//:gpr_base' failed (Exit 1)
INFO: Elapsed time: 108.783s, Critical Path: 30.85s
INFO: 682 processes: 682 local.
FAILED: Build did NOT complete successfully
make: *** [Makefile:52: tensorflow] Error 1

Reproduce

Using this Arch Dockerfile in the current directory:
https://github.com/DreamingRaven/nemesyst/blob/37b7c546f7e9c0c85f4916a8c57b07f4e2c26f90/examples/containers/tf_seal/Dockerfile

  • sudo docker build -t archer/tf_seal .

  • sudo docker run --gpus all -it archer/tf_seal bash (or, if the NVIDIA container toolkit is not available, sudo docker run -it archer/tf_seal bash) to get an interactive terminal

  • cd ~/git/tf-seal/

  • make tensorflow

TensorFlow install URL results in Error 403 (Forbidden)

Unable to set up the C++17 build of TensorFlow following the readme.md.

The wget command results in 403 (Forbidden):

$ wget https://storage.googleapis.com/tf-pips/tf-c++17-support/tf_nightly-1.14.0-cp37-cp37m-linux_x86_64.whl
Will not apply HSTS. The HSTS database must be a regular and non-world-writable file.
ERROR: could not open HSTS store at '/home/source144/.wget-hsts'. HSTS will be disabled.
--2020-07-29 12:34:11--  https://storage.googleapis.com/tf-pips/tf-c++17-support/tf_nightly-1.14.0-cp37-cp37m-linux_x86_64.whl
Resolving storage.googleapis.com (storage.googleapis.com)... 142.250.64.176, 216.58.192.48, 172.217.8.80, ...
Connecting to storage.googleapis.com (storage.googleapis.com)|142.250.64.176|:443... connected.
HTTP request sent, awaiting response... 403 Forbidden
2020-07-29 12:34:11 ERROR 403: Forbidden.

Storing tfseal objects on disk

I am trying to store the public key and secret key on disk so I can use them later. I have tried storing them with pickle and with TensorFlow's saver, but both methods give an error because these keys are tf-seal objects. Is there any way to store them? The error I get is: can't pickle _thread.RLock objects
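
Roughly what was attempted (a minimal sketch; seal_key_gen is the op used in the seal_poly_eval issue further down, and its import path here is an assumption):

# Minimal sketch of the failing attempt. The import path is an assumption
# based on the module layout seen in tracebacks in other issues here.
import pickle
from tf_seal.python.ops.seal_ops import seal_key_gen  # assumed location

pub_key, sec_key = seal_key_gen(gen_relin=True, gen_galois=True)

with open("keys.pkl", "wb") as f:
    pickle.dump((pub_key, sec_key), f)  # fails: can't pickle _thread.RLock objects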

Thanks

Implement PolyEval

Implement a generic polynomial evaluation function that takes a list of coefficients and evaluates the polynomial at some point. This can then be used to generically evaluate approximations such as sigmoid.
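
As a plaintext reference for the intended semantics (a sketch only; the real op works on ciphertexts), Horner's rule evaluates a coefficient list with one multiplication per coefficient:

# Plaintext sketch of generic polynomial evaluation via Horner's rule.
# coeffs[i] is the coefficient of x**i, matching the coeffs arrays used in
# the seal_poly_eval issue below.
def poly_eval(x, coeffs):
    result = 0.0
    for c in reversed(coeffs):  # (((c3*x + c2)*x + c1)*x + c0)
        result = result * x + c
    return result

assert poly_eval(2.0, [1.0, 0.0, 3.0]) == 13.0  # 1 + 3*x**2 at x = 2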

Illegal instruction while importing custom tensorflow

The environment is the tf-seal docker image that I have made available here.

When I run the container on play-with-docker, tf_seal works just fine, but when I run the container on my local machine I get an Illegal instruction error. The image is the same on both machines, not different builds, and both machines have x86_64 processors.

Below is a full backtrace from the python interpreter while importing tensorflow.

(gdb) backtrace 
#0  0x00007f19c9e139bf in tensorflow::UnaryVariantOpRegistry::RegisterDeviceCopyFn(tensorflow::VariantDeviceCopyDirection, std::type_index const&, std::function<tensorflow::Status (tensorflow::Variant const&, tensorflow::Variant*, std::function<tensorflow::Status (tensorflow::Tensor const&, tensorflow::Tensor*)>)> const&) () from /usr/local/lib/python3.7/dist-packages/tensorflow_core/python/../libtensorflow_framework.so.1
#1  0x00007f19c9e143ee in tensorflow::variant_op_registry_fn_registration::UnaryVariantDeviceCopyRegistration<float>::UnaryVariantDeviceCopyRegistration(tensorflow::VariantDeviceCopyDirection, std::type_index const&, std::function<tensorflow::Status (float const&, float*, std::function<tensorflow::Status (tensorflow::Tensor const&, tensorflow::Tensor*)>)> const&) ()
   from /usr/local/lib/python3.7/dist-packages/tensorflow_core/python/../libtensorflow_framework.so.1
#2  0x00007f19c9b6dadd in __static_initialization_and_destruction_0(int, int) [clone .constprop.306] ()
   from /usr/local/lib/python3.7/dist-packages/tensorflow_core/python/../libtensorflow_framework.so.1
#3  0x00007f19e88f895a in ?? () from /lib64/ld-linux-x86-64.so.2
#4  0x00007f19e88f8a59 in ?? () from /lib64/ld-linux-x86-64.so.2
#5  0x00007f19e88fccbc in ?? () from /lib64/ld-linux-x86-64.so.2
#6  0x00007f19e88555cf in __GI__dl_catch_exception (exception=<optimized out>, operate=<optimized out>, args=<optimized out>)
    at dl-error-skeleton.c:196
#7  0x00007f19e88fc2a6 in ?? () from /lib64/ld-linux-x86-64.so.2
#8  0x00007f19e86d1256 in dlopen_doit (a=a@entry=0x7fffb6461940) at dlopen.c:66
#9  0x00007f19e88555cf in __GI__dl_catch_exception (exception=exception@entry=0x7fffb64618e0, operate=<optimized out>, args=<optimized out>)
    at dl-error-skeleton.c:196
#10 0x00007f19e885565f in __GI__dl_catch_error (objname=0x157a530, errstring=0x157a538, mallocedp=0x157a528, operate=<optimized out>, 
    args=<optimized out>) at dl-error-skeleton.c:215
#11 0x00007f19e86d1a25 in _dlerror_run (operate=operate@entry=0x7f19e86d1200 <dlopen_doit>, args=args@entry=0x7fffb6461940) at dlerror.c:163
#12 0x00007f19e86d12e6 in __dlopen (file=<optimized out>, mode=<optimized out>) at dlopen.c:87
#13 0x000000000062c750 in _PyImport_FindSharedFuncptr ()
#14 0x0000000000636e58 in _PyImport_LoadDynamicModuleWithSpec ()
#15 0x000000000063713f in ?? ()
#16 0x00000000005d80e7 in _PyMethodDef_RawFastCallDict ()
#17 0x00000000005d82aa in PyCFunction_Call ()
#18 0x0000000000553064 in _PyEval_EvalFrameDefault ()
#19 0x000000000054baa2 in _PyEval_EvalCodeWithName ()
#20 0x00000000005d88c2 in _PyFunction_FastCallKeywords ()
#21 0x000000000054afe0 in ?? ()
#22 0x000000000055211b in _PyEval_EvalFrameDefault ()
#23 0x00000000005d85cc in _PyFunction_FastCallKeywords ()
#24 0x000000000054e3ac in _PyEval_EvalFrameDefault ()
#25 0x00000000005d85cc in _PyFunction_FastCallKeywords ()
#26 0x000000000054e210 in _PyEval_EvalFrameDefault ()
#27 0x00000000005d85cc in _PyFunction_FastCallKeywords ()
#28 0x000000000054e210 in _PyEval_EvalFrameDefault ()
#29 0x00000000005d85cc in _PyFunction_FastCallKeywords ()
#30 0x000000000054e210 in _PyEval_EvalFrameDefault ()
#31 0x000000000054baa2 in _PyEval_EvalCodeWithName ()
#32 0x00000000005d88c2 in _PyFunction_FastCallKeywords ()
#33 0x000000000054e210 in _PyEval_EvalFrameDefault ()
#34 0x00000000005d85cc in _PyFunction_FastCallKeywords ()
#35 0x000000000054afe0 in ?? ()
#36 0x000000000055211b in _PyEval_EvalFrameDefault ()
#37 0x00000000005d85cc in _PyFunction_FastCallKeywords ()
#38 0x000000000054e210 in _PyEval_EvalFrameDefault ()
#39 0x000000000054baa2 in _PyEval_EvalCodeWithName ()
#40 0x0000000000558a0e in ?? ()
#41 0x00000000005d80e7 in _PyMethodDef_RawFastCallDict ()
#42 0x00000000005d82aa in PyCFunction_Call ()
#43 0x0000000000553064 in _PyEval_EvalFrameDefault ()
#44 0x000000000054baa2 in _PyEval_EvalCodeWithName ()
#45 0x00000000005d88c2 in _PyFunction_FastCallKeywords ()
#46 0x000000000054afe0 in ?? ()
#47 0x000000000055211b in _PyEval_EvalFrameDefault ()
#48 0x00000000005d85cc in _PyFunction_FastCallKeywords ()
#49 0x000000000054e3ac in _PyEval_EvalFrameDefault ()
#50 0x00000000005d85cc in _PyFunction_FastCallKeywords ()
#51 0x000000000054e210 in _PyEval_EvalFrameDefault ()
#52 0x00000000005d85cc in _PyFunction_FastCallKeywords ()
#53 0x000000000054e210 in _PyEval_EvalFrameDefault ()
#54 0x00000000005d9906 in _PyFunction_FastCallDict ()
#55 0x00000000005db0f4 in ?? ()
#56 0x00000000005db55d in _PyObject_CallMethodIdObjArgs ()
#57 0x000000000053a87b in PyImport_ImportModuleLevelObject ()
#58 0x0000000000550059 in _PyEval_EvalFrameDefault ()
#59 0x000000000054baa2 in _PyEval_EvalCodeWithName ()
#60 0x0000000000558a0e in ?? ()
#61 0x00000000005d80e7 in _PyMethodDef_RawFastCallDict ()
#62 0x00000000005d82aa in PyCFunction_Call ()
#63 0x0000000000553064 in _PyEval_EvalFrameDefault ()
#64 0x000000000054baa2 in _PyEval_EvalCodeWithName ()
#65 0x00000000005d88c2 in _PyFunction_FastCallKeywords ()
#66 0x000000000054afe0 in ?? ()
#67 0x000000000055211b in _PyEval_EvalFrameDefault ()
#68 0x00000000005d85cc in _PyFunction_FastCallKeywords ()
#69 0x000000000054e3ac in _PyEval_EvalFrameDefault ()
#70 0x00000000005d85cc in _PyFunction_FastCallKeywords ()
#71 0x000000000054e210 in _PyEval_EvalFrameDefault ()
#72 0x00000000005d85cc in _PyFunction_FastCallKeywords ()
#73 0x000000000054e210 in _PyEval_EvalFrameDefault ()
#74 0x00000000005d9906 in _PyFunction_FastCallDict ()
#75 0x00000000005db0f4 in ?? ()
#76 0x00000000005db55d in _PyObject_CallMethodIdObjArgs ()
#77 0x000000000053a87b in PyImport_ImportModuleLevelObject ()
#78 0x0000000000556774 in ?? ()
#79 0x00000000005d831a in PyCFunction_Call ()
#80 0x0000000000553064 in _PyEval_EvalFrameDefault ()
#81 0x000000000054baa2 in _PyEval_EvalCodeWithName ()
#82 0x00000000005d88c2 in _PyFunction_FastCallKeywords ()
#83 0x000000000054e210 in _PyEval_EvalFrameDefault ()
#84 0x000000000054baa2 in _PyEval_EvalCodeWithName ()
#85 0x00000000005d9ace in _PyFunction_FastCallDict ()
#86 0x00000000005db0f4 in ?? ()
#87 0x00000000005db55d in _PyObject_CallMethodIdObjArgs ()
#88 0x000000000053a7a3 in PyImport_ImportModuleLevelObject ()
#89 0x0000000000550059 in _PyEval_EvalFrameDefault ()
#90 0x000000000054baa2 in _PyEval_EvalCodeWithName ()
#91 0x0000000000558a0e in ?? ()
#92 0x00000000005d80e7 in _PyMethodDef_RawFastCallDict ()
#93 0x00000000005d82aa in PyCFunction_Call ()
#94 0x0000000000553064 in _PyEval_EvalFrameDefault ()
#95 0x000000000054baa2 in _PyEval_EvalCodeWithName ()
#96 0x00000000005d88c2 in _PyFunction_FastCallKeywords ()
#97 0x000000000054afe0 in ?? ()
#98 0x000000000055211b in _PyEval_EvalFrameDefault ()
#99 0x00000000005d85cc in _PyFunction_FastCallKeywords ()
#100 0x000000000054e3ac in _PyEval_EvalFrameDefault ()
#101 0x00000000005d85cc in _PyFunction_FastCallKeywords ()
#102 0x000000000054e210 in _PyEval_EvalFrameDefault ()
#103 0x00000000005d85cc in _PyFunction_FastCallKeywords ()
#104 0x000000000054e210 in _PyEval_EvalFrameDefault ()
#105 0x00000000005d85cc in _PyFunction_FastCallKeywords ()
#106 0x000000000054e210 in _PyEval_EvalFrameDefault ()
#107 0x000000000054baa2 in _PyEval_EvalCodeWithName ()
#108 0x00000000005d88c2 in _PyFunction_FastCallKeywords ()
#109 0x000000000054afe0 in ?? ()
#110 0x000000000055211b in _PyEval_EvalFrameDefault ()
#111 0x000000000054baa2 in _PyEval_EvalCodeWithName ()
#112 0x00000000005d88c2 in _PyFunction_FastCallKeywords ()
#113 0x000000000054afe0 in ?? ()
#114 0x000000000055211b in _PyEval_EvalFrameDefault ()
#115 0x00000000005d85cc in _PyFunction_FastCallKeywords ()
#116 0x000000000054afe0 in ?? ()
#117 0x000000000055211b in _PyEval_EvalFrameDefault ()
#118 0x00000000005d9906 in _PyFunction_FastCallDict ()
#119 0x00000000004d9f62 in ?? ()
#120 0x00000000005db202 in ?? ()
#121 0x00000000005db499 in PyObject_CallFunctionObjArgs ()
#122 0x0000000000589bbb in ?? ()
#123 0x00000000005918be in ?? ()
#124 0x000000000055747e in ?? ()
#125 0x00000000005d7c03 in _PyMethodDef_RawFastCallKeywords ()
#126 0x0000000000551fc3 in _PyEval_EvalFrameDefault ()
#127 0x000000000054baa2 in _PyEval_EvalCodeWithName ()
#128 0x00000000005d9ace in _PyFunction_FastCallDict ()
#129 0x00000000005db0f4 in ?? ()
#130 0x00000000005db55d in _PyObject_CallMethodIdObjArgs ()
#131 0x000000000053a7a3 in PyImport_ImportModuleLevelObject ()
#132 0x0000000000550059 in _PyEval_EvalFrameDefault ()
#133 0x000000000054baa2 in _PyEval_EvalCodeWithName ()
#134 0x0000000000558a0e in ?? ()
#135 0x00000000005d80e7 in _PyMethodDef_RawFastCallDict ()
#136 0x00000000005d82aa in PyCFunction_Call ()
#137 0x0000000000553064 in _PyEval_EvalFrameDefault ()
#138 0x000000000054baa2 in _PyEval_EvalCodeWithName ()
#139 0x00000000005d88c2 in _PyFunction_FastCallKeywords ()
#140 0x000000000054afe0 in ?? ()
#141 0x000000000055211b in _PyEval_EvalFrameDefault ()
#142 0x00000000005d85cc in _PyFunction_FastCallKeywords ()
#143 0x000000000054e3ac in _PyEval_EvalFrameDefault ()
#144 0x00000000005d85cc in _PyFunction_FastCallKeywords ()
#145 0x000000000054e210 in _PyEval_EvalFrameDefault ()
#146 0x00000000005d85cc in _PyFunction_FastCallKeywords ()
#147 0x000000000054e210 in _PyEval_EvalFrameDefault ()
#148 0x00000000005d9906 in _PyFunction_FastCallDict ()
#149 0x00000000005db0f4 in ?? ()
#150 0x00000000005db55d in _PyObject_CallMethodIdObjArgs ()
#151 0x000000000053a87b in PyImport_ImportModuleLevelObject ()
#152 0x0000000000550059 in _PyEval_EvalFrameDefault ()
#153 0x000000000054baa2 in _PyEval_EvalCodeWithName ()
#154 0x0000000000558a0e in ?? ()
#155 0x00000000005d80e7 in _PyMethodDef_RawFastCallDict ()
#156 0x00000000005d82aa in PyCFunction_Call ()
#157 0x0000000000553064 in _PyEval_EvalFrameDefault ()
#158 0x000000000054baa2 in _PyEval_EvalCodeWithName ()
#159 0x00000000005d88c2 in _PyFunction_FastCallKeywords ()
#160 0x000000000054afe0 in ?? ()
#161 0x000000000055211b in _PyEval_EvalFrameDefault ()
#162 0x00000000005d85cc in _PyFunction_FastCallKeywords ()
#163 0x000000000054e3ac in _PyEval_EvalFrameDefault ()
#164 0x00000000005d85cc in _PyFunction_FastCallKeywords ()
#165 0x000000000054e210 in _PyEval_EvalFrameDefault ()
#166 0x00000000005d85cc in _PyFunction_FastCallKeywords ()
#167 0x000000000054e210 in _PyEval_EvalFrameDefault ()
#168 0x00000000005d9906 in _PyFunction_FastCallDict ()
#169 0x00000000005db0f4 in ?? ()
#170 0x00000000005db55d in _PyObject_CallMethodIdObjArgs ()
#171 0x000000000053a87b in PyImport_ImportModuleLevelObject ()
#172 0x0000000000550059 in _PyEval_EvalFrameDefault ()
#173 0x000000000054baa2 in _PyEval_EvalCodeWithName ()
#174 0x000000000054dd03 in PyEval_EvalCode ()
#175 0x0000000000630d02 in ?? ()
#176 0x0000000000480d74 in ?? ()
#177 0x0000000000480ef6 in PyRun_InteractiveLoopFlags ()
#178 0x0000000000631c13 in PyRun_AnyFileExFlags ()
#179 0x00000000006541ce in ?? ()
#180 0x000000000065452e in _Py_UnixMain ()
#181 0x00007f19e871db6b in __libc_start_main (main=0x4bcb00 <main>, argc=1, argv=0x7fffb6469058, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7fffb6469048) at ../csu/libc-start.c:308
#182 0x00000000005e01aa in _start ()

Circle CI is very slow

CI is taking a long time, to the point of hitting the CircleCI timeout.

One solution may be to increase the CircleCI timeout. Alternatively, or in addition, we could split testing into more tasks, such as one just for building (which takes a while on its own, especially now with protobuf) and one just for testing.

Combine all public keys into one variant

It would be helpful if all public keys lived in one tensor variant: the normal encryption public key, the relinearization key, and the Galois key.
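
For illustration only, the intended grouping seen from the Python side might look something like this (hypothetical names; the actual change is bundling the keys inside a single tensor variant on the C++ side):

# Hypothetical illustration of the grouping only; the real work is storing
# these three keys together in one TF variant in the C++ kernels.
from collections import namedtuple

PublicKeys = namedtuple("PublicKeys", ["encryption_key", "relin_key", "galois_key"])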

Reproduce iDASH 2017 example

Reproduce logistic regression on encrypted data from e.g. https://eprint.iacr.org/2018/462. Using the BFV scheme seems dated, though; it might be better to find work using HEAAN instead. This would make a killer blog post and would help guide integration with TFE (what lives where).

AttributeError: module 'tf_seal' has no attribute 'convert_to_tensor'

After running the example shown on the front page of the GitHub repo, I get this error:
AttributeError: module 'tf_seal' has no attribute 'convert_to_tensor'

I'm working with the Dockerfile and I built the Docker image correctly, but I still get this error.

I ran pip install tf-seal, but this attribute isn't there. What's happening? I assume the function should be there; in fact, you can find it in the code.

Thanks in advance

Build of custom tensorflow fails

The build of the custom TensorFlow according to the README is failing because noignite is not defined in any .rc file. If we remove --config=noignite from here, it throws another error about nokafka, and removing --config=nokafka as well lets the build continue. However, I'm not sure whether removing those config values is the best way to solve the problem.

Below is the full error from the build process.

cd tensorflow-c-17 && echo -e '\n' | TF_ENABLE_XLA=0 TF_NEED_CUDA=0 TF_SET_ANDROID_WORKSPACE=0 \
 TF_CONFIGURE_IOS=0 TF_NEED_OPENCL_SYCL=0 TF_DOWNLOAD_CLANG=0 \
 TF_NEED_ROCM=0 TF_NEED_MPI=0 ./configure
WARNING: --batch mode is deprecated. Please instead explicitly shut down your Bazel server using the command "bazel shutdown".
You have bazel 0.26.1 installed.
Please specify the location of python. [Default is /home/youben/anaconda3/envs/tfseal/bin/python]: 

Found possible Python library paths:
  /home/youben/anaconda3/envs/tfseal/lib/python3.7/site-packages
Please input the desired Python library path to use.  Default is [/home/youben/anaconda3/envs/tfseal/lib/python3.7/site-packages]
Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native -Wno-sign-compare]: 

Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See .bazelrc for more details.
	--config=mkl         	# Build with MKL support.
	--config=monolithic  	# Config for mostly static monolithic build.
	--config=ngraph      	# Build with Intel nGraph support.
	--config=numa        	# Build with NUMA support.
	--config=dynamic_kernels	# (Experimental) Build kernels into separate shared objects.
	--config=v2          	# Build TensorFlow 2.x instead of 1.x.
Preconfigured Bazel build configs to DISABLE default on features:
	--config=noaws       	# Disable AWS S3 filesystem support.
	--config=nogcp       	# Disable GCP support.
	--config=nohdfs      	# Disable HDFS support.
	--config=nonccl      	# Disable NVIDIA NCCL support.
Configuration finished
cd tensorflow-c-17 && bazel build --config=opt --config=c++17 --config=noaws --config=nogcp \
	--config=nohdfs --config=noignite --config=nokafka --config=nonccl \
	//tensorflow/tools/pip_package:build_pip_package
Starting local Bazel server and connecting to it...
INFO: Options provided by the client:
  Inherited 'common' options: --isatty=1 --terminal_columns=150
INFO: Reading rc options for 'build' from /home/youben/git-repo/tf-seal/tensorflow-c-17/.bazelrc:
  'build' options: --apple_platform_type=macos --define framework_shared_object=true --define open_source_build=true --define=use_fast_cpp_protos=true --define=allow_oversize_protos=true --spawn_strategy=standalone --strategy=Genrule=standalone -c opt --cxxopt=-std=c++14 --host_cxxopt=-std=c++14 --announce_rc --define=grpc_no_ares=true --define=PREFIX=/usr --define=LIBDIR=$(PREFIX)/lib --define=INCLUDEDIR=$(PREFIX)/include
INFO: Reading rc options for 'build' from /home/youben/git-repo/tf-seal/tensorflow-c-17/.tf_configure.bazelrc:
  'build' options: --action_env PYTHON_BIN_PATH=/home/youben/anaconda3/envs/tfseal/bin/python --action_env PYTHON_LIB_PATH=/home/youben/anaconda3/envs/tfseal/lib/python3.7/site-packages --python_path=/home/youben/anaconda3/envs/tfseal/bin/python --action_env TF_CONFIGURE_IOS=0
INFO: Found applicable config definition build:opt in file /home/youben/git-repo/tf-seal/tensorflow-c-17/.tf_configure.bazelrc: --copt=-march=native --copt=-Wno-sign-compare --host_copt=-march=native --define with_default_optimizations=true
INFO: Found applicable config definition build:c++17 in file /home/youben/git-repo/tf-seal/tensorflow-c-17/.bazelrc: --cxxopt=-std=c++17
INFO: Found applicable config definition build:noaws in file /home/youben/git-repo/tf-seal/tensorflow-c-17/.bazelrc: --define=no_aws_support=true
INFO: Found applicable config definition build:nogcp in file /home/youben/git-repo/tf-seal/tensorflow-c-17/.bazelrc: --define=no_gcp_support=true
INFO: Found applicable config definition build:nohdfs in file /home/youben/git-repo/tf-seal/tensorflow-c-17/.bazelrc: --define=no_hdfs_support=true
ERROR: Config value noignite is not defined in any .rc file
make: *** [Makefile:52: tensorflow] Error 2

Cannot download tf_nightly-1.14.0

Command "wget https://storage.googleapis.com/tf-pips/tf-c++17-support/tf_nightly-1.14.0-cp37-cp37m-linux_x86_64.whl" doesn't work now.
--2020-10-04 11:18:10--  https://storage.googleapis.com/tf-pips/tf-c++17-support/tf_nightly-1.14.0-cp37-cp37m-linux_x86_64.whl
Resolving storage.googleapis.com (storage.googleapis.com)... 34.64.4.80, 34.64.4.112, 34.64.4.16
Connecting to storage.googleapis.com (storage.googleapis.com)|34.64.4.80|:443... connected.
HTTP request sent, awaiting response... 404 Not Found
2020-10-04 11:18:13 ERROR 404: Not Found.

Can someone please help me solve this problem?

Testing is very slow

Running tests, both locally and as part of CI, is currently very slow, even when using tiny tensors. One possible reason could be that every test generates a new keypair.

One potential solution would be to generate a keypair only once and have most tests use it, or perhaps even hardcode a keypair so that no generation takes place during testing (except for tests directly related to key generation, of course).
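
For illustration, a sketch of the shared-keypair approach, assuming pytest and the seal_key_gen op that appears in the seal_poly_eval issue below (its import path here is an assumption):

# Sketch only: generate the keypair once per test session instead of per test.
# pytest is assumed, and the import path for seal_key_gen is an assumption
# based on the module layout seen in tracebacks in other issues.
import pytest
from tf_seal.python.ops.seal_ops import seal_key_gen  # assumed location

@pytest.fixture(scope="session")
def keypair():
    return seal_key_gen(gen_relin=True, gen_galois=True)

def test_roundtrip_with_shared_keys(keypair):
    pub_key, sec_key = keypair
    ...  # encrypt/decrypt small tensors against the shared keys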

Update to TF 2.0

Recently I've been trying to migrate my code from TF 1.x to 2.0. Apparently a lot changed in 2.0, e.g. there is no Session() any more. Should we convert the code to TF 2.0?

Custom TensorFlow URLs are invalid

The following URLs are invalid now:
https://storage.googleapis.com/tf-pips/tf-c++17-support/tf_nightly-1.14.0-cp37-cp37m-linux_x86_64.whl
https://storage.googleapis.com/tf-pips/tf-c++17-support/tf_nightly-1.14.0-cp37-cp37m-macosx_10_7_x86_64.whl

Run:

wget https://storage.googleapis.com/tf-pips/tf-c++17-support/tf_nightly-1.14.0-cp37-cp37m-linux_x86_64.whl

Got:

--2020-11-29 22:39:23--  https://storage.googleapis.com/tf-pips/tf-c++17-support/tf_nightly-1.14.0-cp37-cp37m-linux_x86_64.whl
Resolving storage.googleapis.com (storage.googleapis.com)... 172.217.13.240, 172.217.15.80, 172.217.7.144, ...
Connecting to storage.googleapis.com (storage.googleapis.com)|172.217.13.240|:443... connected.
HTTP request sent, awaiting response... 404 Not Found
2020-11-29 22:39:23 ERROR 404: Not Found.

Errors running the example

Hi, after running either of the two examples I get the following error:

File "logistic_regression.py", line 4, in
import tf_seal as tfs
File "/home/ubuntusystem/Descargas/tf-seal/tf_seal/init.py", line 1, in
from tf_seal.python.tensor import constant
File "/home/ubuntusystem/Descargas/tf-seal/tf_seal/python/tensor.py", line 4, in
import tf_seal.python.ops.seal_ops as ops
File "/home/ubuntusystem/Descargas/tf-seal/tf_seal/python/ops/seal_ops.py", line 5, in
seal_ops = load_library.load_op_library(op_lib_file)
File "/home/ubuntusystem/.local/lib/python3.7/site-packages/tensorflow_core/python/framework/load_library.py", line 61, in load_op_library
lib_handle = py_tf.TF_LoadLibrary(library_filename)
tensorflow.python.framework.errors_impl.NotFoundError: /home/ubuntusystem/Descargas/tf-seal/tf_seal/python/ops/_seal_ops.so: cannot open shared object file: No such file or directory

Why? I have followed every step in the README. Thanks in advance.

importing tf-seal ends with 'GLIBC_2.29' not found error

OS: Ubuntu 18.04.4
I have installed tf-seal both with the binary custom TensorFlow and with a locally compiled version, in two virtual environments, using Python's virtualenv and conda.
All scenarios installed the library successfully, but importing tf_seal throws a NotFoundError:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/pipoc/anaconda3/envs/HE-cenv2/lib/python3.7/site-packages/tf_seal/__init__.py", line 1, in <module>
    from tf_seal.python.tensor import constant
  File "/home/pipoc/anaconda3/envs/HE-cenv2/lib/python3.7/site-packages/tf_seal/python/tensor.py", line 4, in <module>
    import tf_seal.python.ops.seal_ops as ops
  File "/home/pipoc/anaconda3/envs/HE-cenv2/lib/python3.7/site-packages/tf_seal/python/ops/seal_ops.py", line 5, in <module>
    seal_ops = load_library.load_op_library(op_lib_file)
  File "/home/pipoc/anaconda3/envs/HE-cenv2/lib/python3.7/site-packages/tensorflow_core/python/framework/load_library.py", line 61, in load_op_library
    lib_handle = py_tf.TF_LoadLibrary(library_filename)
tensorflow.python.framework.errors_impl.NotFoundError: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by /home/pipoc/anaconda3/envs/HE-cenv2/lib/python3.7/site-packages/tf_seal/python/ops/_seal_ops.so)

seal_poly_eval

There is strange behavior when using seal_poly_eval with the third coefficient of coeffs different from 0 (in the unit test that value is 0):

with tf.Session() as sess:
    x = np.array([[1, 2, 3], [4, 5, 5]], np.float32)
    coeffs = np.array([0.0, 0.0, 0.1, 0.00])
    pub_key, sec_key = seal_key_gen(gen_relin=True, gen_galois=True)
    x_var = seal_encrypt(x, pub_key)
    c_var = seal_poly_eval(x_var, coeffs, pub_key)
    c = seal_decrypt(c_var, sec_key, tf.float32)
    sess.run(c)

I have the exception:
terminate called after throwing an instance of 'std::logic_error'
what(): result ciphertext is transparent
Aborted

For other coefficients, like coeffs = np.array([0.1, 0.5, 0.1, 0.00]), I get the following exception:
terminate called after throwing an instance of 'std::invalid_argument'
what(): encrypted_ntt and plain_ntt parameter mismatch
Aborted

It works when using the coefficients from the unit test (coeffs = np.array([0.5, 0.197, 0.0, -0.004])).

I installed the files given by Wixee in issue #62.

Make a pip package

We need to be able to build a pip package so that, when we're ready to open source this project, we can upload it to PyPI.

Set up circleci

Should be able to copy pretty directly what tf-big is doing.

Linting

We have a command for linting C++ code (make lint); we should extend it to Python code and then add make lint to CI.

decryption

c_encrypted = tfs.matmul(a_encrypted, b_encrypted)

Here I want to decrypt the value of c_encrypted. Is there any way I could do that?
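
One possible route, sketched from the low-level ops that appear in the seal_poly_eval issue below; the import path, and whether seal_decrypt accepts the output of tfs.matmul directly, are assumptions:

import numpy as np
import tensorflow as tf
# Import path assumed from the module layout seen in the tracebacks above:
from tf_seal.python.ops.seal_ops import seal_key_gen, seal_encrypt, seal_decrypt

with tf.Session() as sess:
    x = np.array([[1, 2, 3], [4, 5, 6]], np.float32)
    pub_key, sec_key = seal_key_gen(gen_relin=True, gen_galois=True)
    x_enc = seal_encrypt(x, pub_key)
    # seal_decrypt takes the ciphertext, the secret key, and the plaintext dtype;
    # the same call applied to c_encrypted should recover the matmul result.
    x_dec = seal_decrypt(x_enc, sec_key, tf.float32)
    print(sess.run(x_dec))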

Errors on make tensorflow

After running make tensorflow, I get errors like these:
ERROR: /home/carlos/Descargas/tf-seal/tensorflow-c-17/tensorflow/core/BUILD:2840:1: C++ compilation of rule '//tensorflow/core:framework_internal_impl' failed (Exit 1)
In file included from ./tensorflow/core/framework/register_types.h:22,
                 from tensorflow/core/util/batch_util.cc:18:
./tensorflow/core/framework/variant.h: In member function 'void tensorflow::Variant::HeapOrInline::ResetMemory()':
./tensorflow/core/framework/variant.h:653:45: warning: 'void* memset(void*, int, size_t)' clearing an object of type 'union tensorflow::Variant::HeapOrInline' with no trivial copy-assignment; use value-initialization instead [-Wclass-memaccess]
     this, 0, sizeof(Variant::HeapOrInline));
                                           ^
./tensorflow/core/framework/variant.h:496:9: note: 'union tensorflow::Variant::HeapOrInline' declared here
   union HeapOrInline {
         ^~~~~~~~~~~~
gcc: fatal error: Killed signal terminated program cc1plus
compilation terminated.
Target //tensorflow/tools/pip_package:build_pip_package failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 2076.649s, Critical Path: 795.15s
INFO: 1011 processes: 1011 local.
FAILED: Build did NOT complete successfully
make: *** [Makefile:52: tensorflow] Error 1
What happened? All I did was run make tensorflow.

Parallelize operations

Currently, we're storing a matrix of ciphertexts as a vector<Ciphertext>, where each ciphertext is a row. Since multiple numbers can be packed into each ciphertext, operations are already parallelized column-wise. We can parallelize row-wise as well by splitting the work among different threads.
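
For illustration only, the row-splitting scheme in Python (the real kernels are C++ and would shard the work on TensorFlow's thread pool; the names below are made up for the sketch):

# Illustration of the partitioning only: each "row" stands in for one
# Ciphertext, and row_op for the column-wise batched operation on that row.
from concurrent.futures import ThreadPoolExecutor

def map_rows(rows, row_op, num_threads=4):
    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        return list(pool.map(row_op, rows))

# e.g. square every entry, rows processed concurrently across the pool
print(map_rows([[1, 2], [3, 4], [5, 6]], lambda row: [v * v for v in row]))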

Trying to work with simple datasets

Hello, I am trying to work with a simple example like this one:
from __future__ import print_function
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)

learning_rate = 0.01
training_epochs = 25
batch_size = 100
display_step = 1

x = tf.placeholder(tf.float32, [None, 784])  # mnist data image of shape 28*28=784
y = tf.placeholder(tf.float32, [None, 10])   # 0-9 digits recognition => 10 classes

W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))

pred = tf.nn.softmax(tf.matmul(x, W) + b)  # Softmax
cost = tf.reduce_mean(-tf.reduce_sum(y * tf.log(pred), reduction_indices=1))
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    for epoch in range(training_epochs):
        avg_cost = 0.
        total_batch = int(mnist.train.num_examples / batch_size)
        for i in range(total_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            _, c = sess.run([optimizer, cost], feed_dict={x: batch_xs, y: batch_ys})
            avg_cost += c / total_batch

        if (epoch + 1) % display_step == 0:
            print("Epoch:", '%04d' % (epoch + 1), "cost=", "{:.9f}".format(avg_cost))

    print("Optimization Finished!")

    correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    print("Accuracy:", accuracy.eval({x: mnist.test.images, y: mnist.test.labels}))

The example is simple, but it won't let me download the dataset. Also, I'm worried because I don't know whether tf-seal supports this kind of example.
