
microsoft / aiforearth-api-development

This is an API Framework for AI models to be hosted locally or on the AI for Earth API Platform (https://github.com/microsoft/AIforEarth-API-Platform).

License: MIT License

Dockerfile 1.90% Python 19.50% R 2.33% Shell 0.05% Jupyter Notebook 76.23%


aiforearth-api-development's Issues

Better message when task ID is not found (local development)

When calling the /task endpoint with a task ID that is not valid (e.g. a task ID from a previous session after the API was restarted, or a malformed ID), the response message is malformed:

[screenshot of the malformed response]

It would be good to state in the status field that the task ID was not found, and to return a valid timestamp.
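A sketch of the suggested response shape (field names follow the /task endpoint's existing output; the helper name is hypothetical):

```python
from datetime import datetime

def task_not_found_response(task_id):
    # Hypothetical helper: return a well-formed status body instead of a
    # malformed message when the task ID is unknown.
    return {
        "uuid": task_id,
        "status": "not found",  # explicit status in the status field
        "timestamp": datetime.now().strftime("%Y-%m-%d %H:%M:%S"),
        "endpoint": "uri",
    }
```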

Issue where self.tracer wasn't defined

I ran into an issue with the following code, where self.tracer was not defined:

if not isinstance(self.log, AI4EAppInsights):
    self.tracer = self.log.tracer

Above this, I guess you could just add self.tracer = None to prevent this. Would you like me to make the PR?
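A sketch of the proposed fix (the wrapper class and stub below are hypothetical stand-ins; only the default assignment is the actual suggestion):

```python
class AI4EAppInsights:  # stub standing in for the real ai4e_app_insights class
    pass

class TaskManagerLog:  # hypothetical wrapper illustrating the fix
    def __init__(self, log):
        self.log = log
        self.tracer = None  # proposed default, so the attribute always exists
        if not isinstance(self.log, AI4EAppInsights):
            # getattr used here only to keep the sketch self-contained
            self.tracer = getattr(self.log, "tracer", None)
```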

System Information Leak: Internal

AIforEarth-API-Development-master/Containers/common/blob_mounting/blob_mounter.py, line 44

An internal information leak occurs when system data or debugging information is sent to a local file, console, or screen via printing or logging.
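A common mitigation, sketched here with hypothetical names: keep system details such as mount paths at DEBUG level (disabled in production configuration) and emit only a generic message otherwise:

```python
import logging

log = logging.getLogger("blob_mounter")

def report_mount(mount_point):
    # Internal detail (the path) only at DEBUG, which production disables.
    log.debug("mounted blob at %s", mount_point)
    # Generic message that is safe to expose in production logs.
    log.info("blob container mounted successfully")
```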

Azure Container Instances don't work with Azure blob storage

I can't seem to mount Azure blob storage on ACI using Docker; it always fails with "fuse mount failed". Locally I can work around this by passing --cap-add SYS_ADMIN --device /dev/fuse --security-opt apparmor:unconfined to docker run. Is there a way to pass these arguments to Docker when launching a container instance?

Build error when installing appinsights

Following the TensorFlow example, my Dockerfile runs the commands below and then fails when installing Application Insights. The Dockerfile:

# Pull in the AI for Earth Base Image, so we can extract necessary libraries.
FROM mcr.microsoft.com/aiforearth/base-py:latest as ai4e_base

# Use any compatible Ubuntu-based image as your selected base image.
FROM nvidia/cuda:9.0-cudnn7-runtime-ubuntu16.04
# Copy the AI4E tools and libraries to our container.
COPY --from=ai4e_base /ai4e_api_tools /ai4e_api_tools

# Add the AI4E API source directory to the PATH.
ENV PATH /usr/local/envs/ai4e_py_api/bin:$PATH
# Add the AI4E tools directory to the PYTHONPATH.
ENV PYTHONPATH="${PYTHONPATH}:/ai4e_api_tools"

# Install Miniconda, Flask, Supervisor, uwsgi
RUN ./ai4e_api_tools/requirements/install-api-hosting-reqs.sh

# Install Azure Blob SDK
RUN ./ai4e_api_tools/requirements/install-azure-blob.sh

# Install Application Insights
RUN ./ai4e_api_tools/requirements/install-appinsights.sh

Traceback:

# rave at rave-desktop in ~/CropMask_RCNN/app on git:master ✖︎ [10:02:09]
→ docker build . -t cropmask
Sending build context to Docker daemon  179.2MB
Step 1/23 : FROM mcr.microsoft.com/aiforearth/base-py:latest as ai4e_base
 ---> b96b2ebc8ea3
Step 2/23 : FROM nvidia/cuda:9.0-cudnn7-runtime-ubuntu16.04
 ---> 4ecbea4d32bd
Step 3/23 : COPY --from=ai4e_base /ai4e_api_tools /ai4e_api_tools
 ---> Using cache
 ---> 923a40ede187
Step 4/23 : ENV PATH /usr/local/envs/ai4e_py_api/bin:$PATH
 ---> Using cache
 ---> be940007fd2f
Step 5/23 : ENV PYTHONPATH="${PYTHONPATH}:/ai4e_api_tools"
 ---> Using cache
 ---> c1316f2a8527
Step 6/23 : RUN ./ai4e_api_tools/requirements/install-api-hosting-reqs.sh
 ---> Using cache
 ---> ffbf8e7fc9fa
Step 7/23 : RUN ./ai4e_api_tools/requirements/install-azure-blob.sh
 ---> Using cache
 ---> fe6d7a201594
Step 8/23 : RUN ./ai4e_api_tools/requirements/install-appinsights.sh
 ---> Running in 233013baa44c
Collecting applicationinsights
  Downloading https://files.pythonhosted.org/packages/a1/53/234c53004f71f0717d8acd37876e0b65c121181167057b9ce1b1795f96a0/applicationinsights-0.11.9-py2.py3-none-any.whl (58kB)
Installing collected packages: applicationinsights
Successfully installed applicationinsights-0.11.9
Collecting grpcio
  Downloading https://files.pythonhosted.org/packages/f2/5d/b434403adb2db8853a97828d3d19f2032e79d630e0d11a8e95d243103a11/grpcio-1.22.0-cp36-cp36m-manylinux1_x86_64.whl (2.2MB)
Collecting opencensus==0.6.0
  Downloading https://files.pythonhosted.org/packages/b8/79/466e39c5e81ec105bbbe42a5f85d5a5e27a75d629271af2dcc9408adcb12/opencensus-0.6.0-py2.py3-none-any.whl (124kB)
Collecting opencensus-ext-requests
  Downloading https://files.pythonhosted.org/packages/c7/ff/e12bdbed71ac483b70219b57af483f4783a2ab7b0cd60aea069e8c2d36a0/opencensus_ext_requests-0.1.2-py2.py3-none-any.whl
Collecting opencensus-ext-azure
  Downloading https://files.pythonhosted.org/packages/d4/87/643a1a068f066fa6a4a389526028a5a454d7c40bbdc65ea517e01014b3fa/opencensus_ext_azure-0.7.0-py2.py3-none-any.whl
Requirement already satisfied: six>=1.5.2 in /usr/local/envs/ai4e_py_api/lib/python3.6/site-packages (from grpcio) (1.12.0)
Collecting opencensus-context<1.0.0,>=0.1.1 (from opencensus==0.6.0)
  Downloading https://files.pythonhosted.org/packages/2b/b7/720d4507e97aa3916ac47054cd75490de6b6148c46d8c2c487638f16ad95/opencensus_context-0.1.1-py2.py3-none-any.whl
Collecting google-api-core<2.0.0,>=1.0.0 (from opencensus==0.6.0)
  Downloading https://files.pythonhosted.org/packages/71/e5/7059475b3013a3c75abe35015c5761735ab224eb1b129fee7c8e376e7805/google_api_core-1.14.2-py2.py3-none-any.whl (68kB)
Collecting wrapt<2.0.0,>=1.0.0 (from opencensus-ext-requests)
  Downloading https://files.pythonhosted.org/packages/23/84/323c2415280bc4fc880ac5050dddfb3c8062c2552b34c2e512eb4aa68f79/wrapt-1.11.2.tar.gz
Requirement already satisfied: requests>=2.19.0 in /usr/local/envs/ai4e_py_api/lib/python3.6/site-packages (from opencensus-ext-azure) (2.22.0)
Collecting psutil>=5.6.3 (from opencensus-ext-azure)
  Downloading https://files.pythonhosted.org/packages/1c/ca/5b8c1fe032a458c2c4bcbe509d1401dca9dda35c7fc46b36bb81c2834740/psutil-5.6.3.tar.gz (435kB)
Collecting contextvars; python_version >= "3.6" and python_version < "3.7" (from opencensus-context<1.0.0,>=0.1.1->opencensus==0.6.0)
  Downloading https://files.pythonhosted.org/packages/83/96/55b82d9f13763be9d672622e1b8106c85acb83edd7cc2fa5bc67cd9877e9/contextvars-2.4.tar.gz
Collecting protobuf>=3.4.0 (from google-api-core<2.0.0,>=1.0.0->opencensus==0.6.0)
  Downloading https://files.pythonhosted.org/packages/dc/0e/e7cdff89745986c984ba58e6ff6541bc5c388dd9ab9d7d312b3b1532584a/protobuf-3.9.0-cp36-cp36m-manylinux1_x86_64.whl (1.2MB)
Requirement already satisfied: setuptools>=34.0.0 in /usr/local/envs/ai4e_py_api/lib/python3.6/site-packages (from google-api-core<2.0.0,>=1.0.0->opencensus==0.6.0) (41.0.1)
Collecting google-auth<2.0dev,>=0.4.0 (from google-api-core<2.0.0,>=1.0.0->opencensus==0.6.0)
  Downloading https://files.pythonhosted.org/packages/c5/9b/ed0516cc1f7609fb0217e3057ff4f0f9f3e3ce79a369c6af4a6c5ca25664/google_auth-1.6.3-py2.py3-none-any.whl (73kB)
Requirement already satisfied: pytz in /usr/local/envs/ai4e_py_api/lib/python3.6/site-packages (from google-api-core<2.0.0,>=1.0.0->opencensus==0.6.0) (2019.2)
Collecting googleapis-common-protos<2.0dev,>=1.6.0 (from google-api-core<2.0.0,>=1.0.0->opencensus==0.6.0)
  Downloading https://files.pythonhosted.org/packages/eb/ee/e59e74ecac678a14d6abefb9054f0bbcb318a6452a30df3776f133886d7d/googleapis-common-protos-1.6.0.tar.gz
Requirement already satisfied: idna<2.9,>=2.5 in /usr/local/envs/ai4e_py_api/lib/python3.6/site-packages (from requests>=2.19.0->opencensus-ext-azure) (2.8)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/envs/ai4e_py_api/lib/python3.6/site-packages (from requests>=2.19.0->opencensus-ext-azure) (1.25.3)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/envs/ai4e_py_api/lib/python3.6/site-packages (from requests>=2.19.0->opencensus-ext-azure) (2019.6.16)
Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /usr/local/envs/ai4e_py_api/lib/python3.6/site-packages (from requests>=2.19.0->opencensus-ext-azure) (3.0.4)
Collecting immutables>=0.9 (from contextvars; python_version >= "3.6" and python_version < "3.7"->opencensus-context<1.0.0,>=0.1.1->opencensus==0.6.0)
  Downloading https://files.pythonhosted.org/packages/e3/91/bc4b34993ef77aabfd1546a657563576bdd437205fa24d4acaf232707452/immutables-0.9-cp36-cp36m-manylinux1_x86_64.whl (91kB)
Collecting cachetools>=2.0.0 (from google-auth<2.0dev,>=0.4.0->google-api-core<2.0.0,>=1.0.0->opencensus==0.6.0)
  Downloading https://files.pythonhosted.org/packages/2f/a6/30b0a0bef12283e83e58c1d6e7b5aabc7acfc4110df81a4471655d33e704/cachetools-3.1.1-py2.py3-none-any.whl
Collecting pyasn1-modules>=0.2.1 (from google-auth<2.0dev,>=0.4.0->google-api-core<2.0.0,>=1.0.0->opencensus==0.6.0)
  Downloading https://files.pythonhosted.org/packages/be/70/e5ea8afd6d08a4b99ebfc77bd1845248d56cfcf43d11f9dc324b9580a35c/pyasn1_modules-0.2.6-py2.py3-none-any.whl (95kB)
Collecting rsa>=3.1.4 (from google-auth<2.0dev,>=0.4.0->google-api-core<2.0.0,>=1.0.0->opencensus==0.6.0)
  Downloading https://files.pythonhosted.org/packages/02/e5/38518af393f7c214357079ce67a317307936896e961e35450b70fad2a9cf/rsa-4.0-py2.py3-none-any.whl
Collecting pyasn1<0.5.0,>=0.4.6 (from pyasn1-modules>=0.2.1->google-auth<2.0dev,>=0.4.0->google-api-core<2.0.0,>=1.0.0->opencensus==0.6.0)
  Downloading https://files.pythonhosted.org/packages/6a/6e/209351ec34b7d7807342e2bb6ff8a96eef1fd5dcac13bdbadf065c2bb55c/pyasn1-0.4.6-py2.py3-none-any.whl (75kB)
Building wheels for collected packages: wrapt, psutil, contextvars, googleapis-common-protos
  Building wheel for wrapt (setup.py): started
  Building wheel for wrapt (setup.py): finished with status 'done'
  Stored in directory: /root/.cache/pip/wheels/d7/de/2e/efa132238792efb6459a96e85916ef8597fcb3d2ae51590dfd
  Building wheel for psutil (setup.py): started
  Building wheel for psutil (setup.py): finished with status 'error'
  ERROR: Complete output from command /usr/local/envs/ai4e_py_api/bin/python -u -c 'import setuptools, tokenize;__file__='"'"'/tmp/pip-install-hl7bjjdm/psutil/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-3c3my185 --python-tag cp36:
  ERROR: running bdist_wheel
  running build
  running build_py
  creating build
  creating build/lib.linux-x86_64-3.6
  creating build/lib.linux-x86_64-3.6/psutil
  copying psutil/_psaix.py -> build/lib.linux-x86_64-3.6/psutil
  copying psutil/_pslinux.py -> build/lib.linux-x86_64-3.6/psutil
  copying psutil/_common.py -> build/lib.linux-x86_64-3.6/psutil
  copying psutil/_compat.py -> build/lib.linux-x86_64-3.6/psutil
  copying psutil/_psbsd.py -> build/lib.linux-x86_64-3.6/psutil
  copying psutil/_pswindows.py -> build/lib.linux-x86_64-3.6/psutil
  copying psutil/_pssunos.py -> build/lib.linux-x86_64-3.6/psutil
  copying psutil/__init__.py -> build/lib.linux-x86_64-3.6/psutil
  copying psutil/_psosx.py -> build/lib.linux-x86_64-3.6/psutil
  copying psutil/_psposix.py -> build/lib.linux-x86_64-3.6/psutil
  creating build/lib.linux-x86_64-3.6/psutil/tests
  copying psutil/tests/test_misc.py -> build/lib.linux-x86_64-3.6/psutil/tests
  copying psutil/tests/test_aix.py -> build/lib.linux-x86_64-3.6/psutil/tests
  copying psutil/tests/test_linux.py -> build/lib.linux-x86_64-3.6/psutil/tests
  copying psutil/tests/test_memory_leaks.py -> build/lib.linux-x86_64-3.6/psutil/tests
  copying psutil/tests/test_contracts.py -> build/lib.linux-x86_64-3.6/psutil/tests
  copying psutil/tests/__main__.py -> build/lib.linux-x86_64-3.6/psutil/tests
  copying psutil/tests/test_bsd.py -> build/lib.linux-x86_64-3.6/psutil/tests
  copying psutil/tests/test_windows.py -> build/lib.linux-x86_64-3.6/psutil/tests
  copying psutil/tests/test_unicode.py -> build/lib.linux-x86_64-3.6/psutil/tests
  copying psutil/tests/test_system.py -> build/lib.linux-x86_64-3.6/psutil/tests
  copying psutil/tests/test_connections.py -> build/lib.linux-x86_64-3.6/psutil/tests
  copying psutil/tests/runner.py -> build/lib.linux-x86_64-3.6/psutil/tests
  copying psutil/tests/test_process.py -> build/lib.linux-x86_64-3.6/psutil/tests
  copying psutil/tests/test_posix.py -> build/lib.linux-x86_64-3.6/psutil/tests
  copying psutil/tests/__init__.py -> build/lib.linux-x86_64-3.6/psutil/tests
  copying psutil/tests/test_osx.py -> build/lib.linux-x86_64-3.6/psutil/tests
  copying psutil/tests/test_sunos.py -> build/lib.linux-x86_64-3.6/psutil/tests
  running build_ext
  building 'psutil._psutil_linux' extension
  creating build/temp.linux-x86_64-3.6
  creating build/temp.linux-x86_64-3.6/psutil
  gcc -pthread -B /usr/local/envs/ai4e_py_api/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DPSUTIL_POSIX=1 -DPSUTIL_VERSION=563 -DPSUTIL_LINUX=1 -DPSUTIL_ETHTOOL_MISSING_TYPES=1 -I/usr/local/envs/ai4e_py_api/include/python3.6m -c psutil/_psutil_common.c -o build/temp.linux-x86_64-3.6/psutil/_psutil_common.o
  unable to execute 'gcc': No such file or directory
  error: command 'gcc' failed with exit status 1
  ----------------------------------------
  ERROR: Failed building wheel for psutil
  Running setup.py clean for psutil
  Building wheel for contextvars (setup.py): started
  Building wheel for contextvars (setup.py): finished with status 'done'
  Stored in directory: /root/.cache/pip/wheels/a5/7d/68/1ebae2668bda2228686e3c1cf16f2c2384cea6e9334ad5f6de
  Building wheel for googleapis-common-protos (setup.py): started
  Building wheel for googleapis-common-protos (setup.py): finished with status 'done'
  Stored in directory: /root/.cache/pip/wheels/9e/3d/a2/1bec8bb7db80ab3216dbc33092bb7ccd0debfb8ba42b5668d5
Successfully built wrapt contextvars googleapis-common-protos
Failed to build psutil
ERROR: opencensus-ext-azure 0.7.0 has requirement opencensus<1.0.0,>=0.7.0, but you'll have opencensus 0.6.0 which is incompatible.
Installing collected packages: grpcio, immutables, contextvars, opencensus-context, protobuf, cachetools, pyasn1, pyasn1-modules, rsa, google-auth, googleapis-common-protos, google-api-core, opencensus, wrapt, opencensus-ext-requests, psutil, opencensus-ext-azure
  Running setup.py install for psutil: started
    Running setup.py install for psutil: finished with status 'error'
    ERROR: Complete output from command /usr/local/envs/ai4e_py_api/bin/python -u -c 'import setuptools, tokenize;__file__='"'"'/tmp/pip-install-hl7bjjdm/psutil/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-q2u74ciq/install-record.txt --single-version-externally-managed --compile:
    ERROR: running install
    running build
    running build_py
    creating build
    creating build/lib.linux-x86_64-3.6
    creating build/lib.linux-x86_64-3.6/psutil
    copying psutil/_psaix.py -> build/lib.linux-x86_64-3.6/psutil
    copying psutil/_pslinux.py -> build/lib.linux-x86_64-3.6/psutil
    copying psutil/_common.py -> build/lib.linux-x86_64-3.6/psutil
    copying psutil/_compat.py -> build/lib.linux-x86_64-3.6/psutil
    copying psutil/_psbsd.py -> build/lib.linux-x86_64-3.6/psutil
    copying psutil/_pswindows.py -> build/lib.linux-x86_64-3.6/psutil
    copying psutil/_pssunos.py -> build/lib.linux-x86_64-3.6/psutil
    copying psutil/__init__.py -> build/lib.linux-x86_64-3.6/psutil
    copying psutil/_psosx.py -> build/lib.linux-x86_64-3.6/psutil
    copying psutil/_psposix.py -> build/lib.linux-x86_64-3.6/psutil
    creating build/lib.linux-x86_64-3.6/psutil/tests
    copying psutil/tests/test_misc.py -> build/lib.linux-x86_64-3.6/psutil/tests
    copying psutil/tests/test_aix.py -> build/lib.linux-x86_64-3.6/psutil/tests
    copying psutil/tests/test_linux.py -> build/lib.linux-x86_64-3.6/psutil/tests
    copying psutil/tests/test_memory_leaks.py -> build/lib.linux-x86_64-3.6/psutil/tests
    copying psutil/tests/test_contracts.py -> build/lib.linux-x86_64-3.6/psutil/tests
    copying psutil/tests/__main__.py -> build/lib.linux-x86_64-3.6/psutil/tests
    copying psutil/tests/test_bsd.py -> build/lib.linux-x86_64-3.6/psutil/tests
    copying psutil/tests/test_windows.py -> build/lib.linux-x86_64-3.6/psutil/tests
    copying psutil/tests/test_unicode.py -> build/lib.linux-x86_64-3.6/psutil/tests
    copying psutil/tests/test_system.py -> build/lib.linux-x86_64-3.6/psutil/tests
    copying psutil/tests/test_connections.py -> build/lib.linux-x86_64-3.6/psutil/tests
    copying psutil/tests/runner.py -> build/lib.linux-x86_64-3.6/psutil/tests
    copying psutil/tests/test_process.py -> build/lib.linux-x86_64-3.6/psutil/tests
    copying psutil/tests/test_posix.py -> build/lib.linux-x86_64-3.6/psutil/tests
    copying psutil/tests/__init__.py -> build/lib.linux-x86_64-3.6/psutil/tests
    copying psutil/tests/test_osx.py -> build/lib.linux-x86_64-3.6/psutil/tests
    copying psutil/tests/test_sunos.py -> build/lib.linux-x86_64-3.6/psutil/tests
    running build_ext
    building 'psutil._psutil_linux' extension
    creating build/temp.linux-x86_64-3.6
    creating build/temp.linux-x86_64-3.6/psutil
    gcc -pthread -B /usr/local/envs/ai4e_py_api/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DPSUTIL_POSIX=1 -DPSUTIL_VERSION=563 -DPSUTIL_LINUX=1 -DPSUTIL_ETHTOOL_MISSING_TYPES=1 -I/usr/local/envs/ai4e_py_api/include/python3.6m -c psutil/_psutil_common.c -o build/temp.linux-x86_64-3.6/psutil/_psutil_common.o
    unable to execute 'gcc': No such file or directory
    error: command 'gcc' failed with exit status 1
    ----------------------------------------
ERROR: Command "/usr/local/envs/ai4e_py_api/bin/python -u -c 'import setuptools, tokenize;__file__='"'"'/tmp/pip-install-hl7bjjdm/psutil/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-q2u74ciq/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-install-hl7bjjdm/psutil/
The command '/bin/sh -c ./ai4e_api_tools/requirements/install-appinsights.sh' returned a non-zero code: 1

Error in api_task_manager.UpdateTaskStatus when there's an exception in provided functions

There was an exception in my part of the app (an async API I'm building), but the error does not seem to be propagated: subsequent calls to the /task endpoint still show the job as "created":

{
    "uuid": 6061,
    "status": "created",
    "timestamp": "2019-08-08 01:00:46",
    "endpoint": "uri"
}

From the local log, the issue is:

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/envs/ai4e_py_api/lib/python3.6/threading.py", line 916, in _bootstrap_inner
    self.run()
  File "/usr/local/envs/ai4e_py_api/lib/python3.6/threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "/app/orchestrator_api/runserver.py", line 290, in _request_detections
    api_task_manager.UpdateTaskStatus(request_id,
UnboundLocalError: local variable 'request_id' referenced before assignment

Expected behavior: task status reflects the error in the custom code.

PIL/Pillow does not support TIFF format; can't use API with multichannel TIFF images

See python-pillow/Pillow#3984 and python-pillow/Pillow#1888

It doesn't look like the Pillow issues linked above will be fixed anytime soon, which means the API examples that use Pillow/PIL can't accept TIFF input. All the models I've trained so far have been trained on int16 TIFF RGB files from Landsat (most geospatial raster data comes in TIFF format), so I adapted the API to read the submission with a different library, rasterio:

https://github.com/ecohydro/CropMask_RCNN/blob/master/app/keras_iNat_api/keras_detector.py#L17-L37

But I can't seem to open a byte array with anything except PIL/Pillow. Running the above function, I get the following error when submitting a TIFF to the Docker server:

{
    "TaskId": 3130,
    "Status": "failed: <class 'AttributeError'>\nTraceback (most recent call last):\n  File \"/app/keras_iNat_api/runserver.py\", line 60, in detect\n    arr_for_detection, image_for_drawing = keras_detector.open_image(image_bytes)\n  File \"./keras_detector.py\", line 34, in open_image\n    arr = reshape_as_image(img.read())\nAttributeError: '_GeneratorContextManager' object has no attribute 'read'\n",
    "Timestamp": "2019-07-22 17:33:14",
    "Endpoint": "uri"
}

Any tips on how to read a multichannel TIFF byte array? I'd like my API to accept this format, since most geospatial imagery users would prefer an API that accepts the format Landsat imagery comes in (TIFF).

Feature Request: Add tips for quicker interactive app development

Hello, I'm making some updates to my application and am wondering if there are some suggestions that could be made for:

  1. Hot-reloading the application when a Python file changes.
    I need to run docker build -t pytorchapp-prod -f ./Dockerfile-prod . and then docker run -it -p 8081:80 --runtime=nvidia pytorchapp-prod to reload any changes I've made to my app. This isn't so bad, but it can get cumbersome.

    I tried mounting the app folder on my host machine into the Docker container with --mount type=bind,source="$(pwd)/pytorch_api",target=/app/pytorch_api and then adding py-autoreload=3 to supervisord.conf to monitor for changes to application files (which I make in VS Code on my host machine). However, this doesn't seem to work: either the files aren't being monitored for changes or the changes aren't being propagated to the application.

  2. (less important) Sending logs from log = AI4EAppInsights to the terminal during development, so that messages from calls to log.log_debug are visible.
    It'd be nice to do local development with the same logging statements that interface with the Application Insights service, for easier portability. Right now I'm replacing all these logging statements with print so that I can see them in the terminal logs to debug my app.
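For the second point, a small shim along these lines (hypothetical, not part of the framework) can mirror AppInsights-style calls to stdout during local development:

```python
class DevLog:
    """Hypothetical local-development stand-in exposing the same log_debug
    call sites: prints to the terminal (visible in `docker logs`) and
    optionally forwards to a real AI4EAppInsights instance. The signature
    is kept generic on purpose."""
    def __init__(self, inner=None):
        self.inner = inner

    def log_debug(self, message, *args, **kwargs):
        print("[debug]", message)
        if self.inner is not None:
            self.inner.log_debug(message, *args, **kwargs)
```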

When there is no detection above the threshold, render_bounding_boxes errors

I'm using the TensorFlow example to profile why rendering the boxes does not work on my own dataset (which I'll post about in a separate issue, in case anyone has suggestions). I ran the suggested ResNet-50 Faster R-CNN model (http://download.tensorflow.org/models/object_detection/faster_rcnn_resnet50_fgvc_2018_07_19.tar.gz) on this image: https://farm3.staticflickr.com/2248/2195772708_716d50d8e9.jpg

I get this traceback because all five scores are too low to be over the 0.5 threshold. This results in an error because the draw_bounding_boxes_on_image function expects at least one box. A simple fix would be to not call the function when no scores are above the threshold and instead return the original image.

Traceback

render_bounding_boxes(...
(0,)
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
 in 
      1 render_bounding_boxes(
----> 2             boxes, scores, clsses, image, confidence_threshold=0.5)

 in render_bounding_boxes(boxes, scores, classes, image, label_map, confidence_threshold)
    110     display_boxes = np.array(display_boxes)
    111     print(display_boxes.shape)
--> 112     draw_bounding_boxes_on_image(image, display_boxes, display_str_list_list=display_strs)
    113 
    114 # the following two functions are from https://github.com/tensorflow/models/blob/master/research/object_detection/utils/visualization_utils.py

 in draw_bounding_boxes_on_image(image, boxes, color, thickness, display_str_list_list)
    140     return
    141   if len(boxes_shape) != 2 or boxes_shape[1] != 4:
--> 142     raise ValueError('Input must be of size [N, 4]')
    143   for i in range(boxes_shape[0]):
    144     display_str_list = ()

ValueError: Input must be of size [N, 4]
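A minimal guard along the lines suggested above (a sketch; `draw` stands in for draw_bounding_boxes_on_image):

```python
import numpy as np

def render_if_any(boxes, scores, image, draw, confidence_threshold=0.5):
    # Keep only detections whose score clears the threshold.
    keep = np.array([b for b, s in zip(boxes, scores) if s > confidence_threshold])
    if keep.size == 0:
        return image  # no detection above threshold: skip drawing entirely
    draw(image, keep)  # safe: keep has shape (N, 4) with N >= 1
    return image
```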
#%%

import tensorflow as tf
import numpy as np
import PIL.Image as Image
import PIL.ImageColor as ImageColor
import PIL.ImageDraw as ImageDraw
import PIL.ImageFont as ImageFont


# Core detection functions


def load_model(checkpoint):
    """Load a detection model (i.e., create a graph) from a .pb file.

    Args:
        checkpoint: .pb file of the model.

    Returns: the loaded graph.

    """
    print('tf_detector.py: Loading graph...')
    detection_graph = tf.Graph()
    with detection_graph.as_default():
        od_graph_def = tf.GraphDef()
        with tf.gfile.GFile(checkpoint, 'rb') as fid:
            serialized_graph = fid.read()
            od_graph_def.ParseFromString(serialized_graph)
            tf.import_graph_def(od_graph_def, name='')
    print('tf_detector.py: Detection graph loaded.')

    return detection_graph


def open_image(image_bytes):
    """ Open an image in binary format using PIL.Image and convert to RGB mode
    Args:
        image_bytes: an image in binary format read from the POST request's body

    Returns:
        an PIL image object in RGB mode
    """
    image = Image.open(image_bytes)
    if image.mode not in ('RGBA', 'RGB'):
        raise AttributeError('Input image not in RGBA or RGB mode and cannot be processed.')
    if image.mode == 'RGBA':
        # Image.convert() returns a converted copy of this image
        image = image.convert(mode='RGB')
    return image


def generate_detections(detection_graph, image):
    """ Generates a set of bounding boxes with confidence and class prediction for one input image file.

    Args:
        detection_graph: an already loaded object detection inference graph.
        image: a PIL Image object

    Returns:
        boxes, scores, classes, and the input image - for one image
    """
    image_np = np.asarray(image, np.uint8)
    image_np = image_np[:, :, :3] # Remove the alpha channel

    #with detection_graph.as_default():
    with tf.Session(graph=detection_graph) as sess:
        image_np = np.expand_dims(image_np, axis=0)

        # get the operators
        image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
        box = detection_graph.get_tensor_by_name('detection_boxes:0')
        score = detection_graph.get_tensor_by_name('detection_scores:0')
        clss = detection_graph.get_tensor_by_name('detection_classes:0')
        num_detections = detection_graph.get_tensor_by_name('num_detections:0')

        # performs inference
        (box, score, clss, num_detections) = sess.run(
            [box, score, clss, num_detections],
            feed_dict={image_tensor: image_np})

    return np.squeeze(box), np.squeeze(score), np.squeeze(clss), image  # these are lists of bboxes, scores etc


# Rendering functions


def render_bounding_boxes(boxes, scores, classes, image, label_map={}, confidence_threshold=0.5):
    """Renders bounding boxes, label and confidence on an image if confidence is above the threshold.

    Args:
        boxes, scores, classes:  outputs of generate_detections.
        image: PIL.Image object, output of generate_detections.
        label_map: optional, mapping the numerical label to a string name.
        confidence_threshold: threshold above which the bounding box is rendered.

    image is modified in place!

    """
    display_boxes = []
    display_strs = []  # list of list, one list of strings for each bounding box (to accommodate multiple labels)

    for box, score, clss in zip(boxes, scores, classes):
        if score > confidence_threshold:
            print('Confidence of detection greater than threshold: ', score)
            display_boxes.append(box)
            clss = int(clss)
            label = label_map[clss] if clss in label_map else str(clss)
            displayed_label = '{}: {}%'.format(label, round(100*score))
            display_strs.append([displayed_label])

    display_boxes = np.array(display_boxes)
    print(display_boxes.shape)
    draw_bounding_boxes_on_image(image, display_boxes, display_str_list_list=display_strs)

# the following two functions are from https://github.com/tensorflow/models/blob/master/research/object_detection/utils/visualization_utils.py

def draw_bounding_boxes_on_image(image,
                                 boxes,
                                 color='LimeGreen',
                                 thickness=4,
                                 display_str_list_list=()):
  """Draws bounding boxes on image.

  Args:
    image: a PIL.Image object.
    boxes: a 2 dimensional numpy array of [N, 4]: (ymin, xmin, ymax, xmax).
           The coordinates are in normalized format between [0, 1].
    color: color to draw bounding box. Default is LimeGreen.
    thickness: line thickness. Default value is 4.
    display_str_list_list: list of list of strings.
                           a list of strings for each bounding box.
                           The reason to pass a list of strings for a
                           bounding box is that it might contain
                           multiple labels.

  Raises:
    ValueError: if boxes is not a [N, 4] array
  """
  boxes_shape = boxes.shape
  if not boxes_shape:
    return
  if len(boxes_shape) != 2 or boxes_shape[1] != 4:
    raise ValueError('Input must be of size [N, 4]')
  for i in range(boxes_shape[0]):
    display_str_list = ()
    if display_str_list_list:
      display_str_list = display_str_list_list[i]
    draw_bounding_box_on_image(image, boxes[i, 0], boxes[i, 1], boxes[i, 2],
                               boxes[i, 3], color, thickness, display_str_list)


def draw_bounding_box_on_image(image,
                               ymin,
                               xmin,
                               ymax,
                               xmax,
                               color='red',
                               thickness=4,
                               display_str_list=(),
                               use_normalized_coordinates=True):
  """Adds a bounding box to an image.

  Bounding box coordinates can be specified in either absolute (pixel) or
  normalized coordinates by setting the use_normalized_coordinates argument.

  Each string in display_str_list is displayed on a separate line above the
  bounding box in black text on a rectangle filled with the input 'color'.
  If the top of the bounding box extends to the edge of the image, the strings
  are displayed below the bounding box.

  Args:
    image: a PIL.Image object.
    ymin: ymin of bounding box.
    xmin: xmin of bounding box.
    ymax: ymax of bounding box.
    xmax: xmax of bounding box.
    color: color to draw bounding box. Default is red.
    thickness: line thickness. Default value is 4.
    display_str_list: list of strings to display in box
                      (each to be shown on its own line).
    use_normalized_coordinates: If True (default), treat coordinates
      ymin, xmin, ymax, xmax as relative to the image.  Otherwise treat
      coordinates as absolute.
  """
  draw = ImageDraw.Draw(image)
  im_width, im_height = image.size
  if use_normalized_coordinates:
    (left, right, top, bottom) = (xmin * im_width, xmax * im_width,
                                  ymin * im_height, ymax * im_height)
  else:
    (left, right, top, bottom) = (xmin, xmax, ymin, ymax)
  draw.line([(left, top), (left, bottom), (right, bottom),
             (right, top), (left, top)], width=thickness, fill=color)
  try:
    font = ImageFont.truetype('arial.ttf', 24)
  except IOError:
    font = ImageFont.load_default()

  # If the total height of the display strings added to the top of the bounding
  # box exceeds the top of the image, stack the strings below the bounding box
  # instead of above.
  display_str_heights = [font.getsize(ds)[1] for ds in display_str_list]
  # Each display_str has a top and bottom margin of 0.05x.
  total_display_str_height = (1 + 2 * 0.05) * sum(display_str_heights)

  if top > total_display_str_height:
    text_bottom = top
  else:
    text_bottom = bottom + total_display_str_height
  # Reverse list and print from bottom to top.
  for display_str in display_str_list[::-1]:
    text_width, text_height = font.getsize(display_str)
    margin = np.ceil(0.05 * text_height)
    draw.rectangle(
        [(left, text_bottom - text_height - 2 * margin), (left + text_width,
                                                          text_bottom)],
        fill=color)
    draw.text(
        (left + margin, text_bottom - text_height - margin),
        display_str,
        fill='black',
        font=font)
    text_bottom -= text_height - 2 * margin
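The normalized-to-pixel conversion used in `draw_bounding_box_on_image` is easy to check in isolation. A minimal sketch (the helper name `to_pixel_coords` is illustrative, not part of the module):

```python
def to_pixel_coords(ymin, xmin, ymax, xmax, im_width, im_height):
    """Convert normalized [0, 1] box coordinates to absolute pixel values,
    matching the (left, right, top, bottom) ordering used above."""
    return (xmin * im_width, xmax * im_width, ymin * im_height, ymax * im_height)

# A box on a 640x480 image:
left, right, top, bottom = to_pixel_coords(0.1, 0.2, 0.5, 0.8, 640, 480)
# left=128.0, right=512.0, top=48.0, bottom=240.0
```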
#%%
model = load_model("./tf_iNat_api/faster_rcnn_resnet50_fgvc_2018_07_19/frozen_inference_graph.pb")

with open("/home/rave/AIforEarth-API-Development/Examples/tensorflow/2195772708_716d50d8e9.jpg", 'rb') as f:
    image = open_image(f)

#%%
boxes, scores, classes, image = generate_detections(model, image)

#%%
render_bounding_boxes(boxes, scores, classes, image, confidence_threshold=0.5)

Make the task ID in local development mode longer and persistent

In local development mode, the task ID is currently a number of up to four digits. Could this be made a longer ID, since some APIs can stay in local mode for a long time?

It would be great if the ID still remained fairly short, e.g. using https://github.com/skorokithakis/shortuuid, if that wouldn't create a problem.
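For illustration, a shortuuid-style ID can be produced with the standard library alone; this is a sketch of the idea (encoding a UUID's 128 bits in a base-62 alphabet), not the shortuuid package itself:

```python
import string
import uuid

# 62-character alphabet: digits plus upper- and lowercase letters
ALPHABET = string.digits + string.ascii_letters

def short_task_id():
    """Encode a random UUID as a compact base-62 string (~22 characters)."""
    n = uuid.uuid4().int
    chars = []
    while n:
        n, rem = divmod(n, len(ALPHABET))
        chars.append(ALPHABET[rem])
    return ''.join(reversed(chars))

task_id = short_task_id()
```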

Update June 12
Another feature request: is it possible to use a remote, persistent database to store the task IDs even while in local mode? For some APIs that do not need to scale up, it is useful to run the container on a single VM for a long time without the task IDs resetting after each restart.
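A persistent local task store along these lines could be sketched with SQLite from the standard library; the class name and schema below are hypothetical, not part of the framework:

```python
import sqlite3

class TaskStore:
    """Persist task status on disk so task IDs survive container restarts."""

    def __init__(self, path="tasks.db"):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS tasks (id TEXT PRIMARY KEY, status TEXT)")

    def put(self, task_id, status):
        self.conn.execute(
            "INSERT OR REPLACE INTO tasks VALUES (?, ?)", (task_id, status))
        self.conn.commit()

    def get(self, task_id):
        row = self.conn.execute(
            "SELECT status FROM tasks WHERE id = ?", (task_id,)).fetchone()
        return row[0] if row else None

store = TaskStore(":memory:")  # pass a file path for real persistence
store.put("abc123", "running")
```

Pointing `path` at a mounted volume (or swapping SQLite for a managed database) would keep IDs stable across restarts of a single-VM deployment.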
