uvavision / text2scene

[CVPR 2019] Text2Scene: Generating Compositional Scenes from Textual Descriptions


text2scene's Introduction

Fuwen Tan, Song Feng, Vicente Ordonez. CVPR 2019

Overview

In this work, we propose Text2Scene, a model that generates various forms of compositional scene representations from natural language descriptions. Unlike recent works, our method does NOT use Generative Adversarial Networks (GANs). Instead, Text2Scene learns to sequentially generate objects and their attributes (location, size, appearance, etc.) at every time step by attending to different parts of the input text and to the current state of the generated scene. We show that, with minor modifications, the proposed framework can handle the generation of different forms of scene representations, including cartoon-like scenes, object layouts corresponding to real images, and synthetic images. Our method is competitive with state-of-the-art GAN-based methods on automatic metrics, superior under human judgments, and has the added advantage of producing interpretable results.
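As a rough illustration of the sequential decoding described above (a sketch only, not the authors' actual code; every name below, such as text_encoder, attend_text, and predict_object, is a hypothetical stand-in):

# Illustrative sketch only; the real model lives in lib/. All methods on
# `model` are hypothetical stand-ins for the components described above.
def generate_scene(description, model, max_steps=20):
    text_feats = model.text_encoder(description)   # encode the input text
    scene = model.empty_scene()                    # blank canvas / layout
    for t in range(max_steps):
        # attend to the text and to the current state of the scene
        txt_ctx = model.attend_text(text_feats, scene)
        scn_ctx = model.attend_scene(scene)
        # predict the next object and its attributes (location, size, ...)
        obj = model.predict_object(txt_ctx, scn_ctx)
        if obj is None:                            # end-of-scene token
            break
        attrs = model.predict_attributes(obj, txt_ctx, scn_ctx)
        scene = scene.add(obj, attrs)              # update the canvas
    return scene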

Installation

  • Set up a conda environment and install the prerequisite packages as follows:
conda create -n syn python=3.6          # Create a virtual environment
source activate syn         		# Activate virtual environment
conda install jupyter scikit-image cython opencv seaborn nltk pycairo   # Install dependencies
git clone https://github.com/cocodataset/cocoapi.git 			# Install pycocotools
cd cocoapi/PythonAPI
python setup.py build_ext install
python -m nltk.downloader all						# Install NLTK data
  • Please also install PyTorch 1.0 (or higher), torchvision, and torchtext
  • Clone and build the repo
git clone https://github.com/uvavision/Text2Scene.git
cd Text2Scene/lib
make
cd ..

Data

  • Download the Abstract Scene and COCO datasets if you have not done so
./experiments/scripts/fetch_data.sh

This will populate the Text2Scene/data folder with AbstractScenes_v1.1, coco/images, and coco/annotations. Please note that we use the coco2017 splits for layout generation, but the coco2014 splits for composite image generation, for fair comparison with prior methods. The split info can be found in Text2Scene/data/caches. A quick sanity check after fetching is sketched below.
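A minimal sketch of that sanity check (directory names taken from the paragraph above; adjust the root if you run it from elsewhere):

# Checks that fetch_data.sh populated the expected folders.
import os.path as osp

root = 'data'  # run from the Text2Scene directory
for sub in ['AbstractScenes_v1.1', 'coco/images', 'coco/annotations', 'caches']:
    path = osp.join(root, sub)
    print('%-25s %s' % (sub, 'OK' if osp.exists(path) else 'MISSING'))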

Demo

  • Download the pretrained models
./experiments/scripts/fetch_models.sh
  • For the abstract scene and layout generation tasks, simply run
./experiments/scripts/sample_abstract.sh	# Abstract Scene demo
./experiments/scripts/sample_layout.sh	# Layout demo

The scripts take the example sentences in Text2Scene/examples as input. The step-by-step generation results will appear in Text2Scene/logs. Running the scripts for the first time will be slow, as it takes time to generate cache files (in Text2Scene/data/caches) for the datasets and to download the GloVe data.
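To feed your own sentences to the abstract demo, a minimal sketch, under the assumption (not verified) that examples/abstract_samples.json is a plain JSON list of sentence strings, which is what the demo's json_load call reads (see the traceback in the issues below):

# Assumption: abstract_samples.json is a JSON list of sentence strings.
# Back up the original file before overwriting it.
import json

sentences = [
    "Mike and Jenny are playing soccer in the park.",
    "A dog is sitting under the tree.",
]
with open('examples/abstract_samples.json', 'w') as fid:
    json.dump(sentences, fid, indent=2)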

  • To run the composite and inpainting demos, you also need to download auxiliary data, including the image segment database and (optionally) the precomputed nearest-neighbor tree. Note that the auxiliary data is around 30 GB!
./experiments/scripts/fetch_aux.sh
./experiments/scripts/sample_composites.sh	 # Composites demo
./experiments/scripts/sample_inpainted.sh	 # Inpainting demo

Note that the demos run on the CPU by default. To use the GPU, simply add the --cuda flag in the scripts, like:

./tools/abstract_demo.py --cuda --pretrained=abstract_final

Training

You can run the following scripts to train the models:

./experiments/scripts/train_abstract.sh 		# Train the abstract scene model
./experiments/scripts/train_layout.sh 		# Train the coco layout model
./experiments/scripts/train_composites.sh 	# Train the composite image model

The composite image model will be trained using multiple GPUs by default. To use a single GPU, please remove the --parallel flag and modify the batch size using the --batch_size flag accordingly.
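For reference, the --parallel flag presumably toggles the usual PyTorch multi-GPU pattern; a minimal sketch of that pattern (the names here are placeholders, not the repo's actual variables):

# Sketch of the usual single- vs. multi-GPU switch in PyTorch.
import torch
import torch.nn as nn

model = nn.Linear(10, 10)  # stand-in for the composite image model
parallel = torch.cuda.device_count() > 1
if parallel:
    model = nn.DataParallel(model)   # splits each batch across all GPUs
if torch.cuda.is_available():
    model = model.cuda()
# With N GPUs, one batch is split into N per-GPU chunks, which is why
# --batch_size should shrink when --parallel is removed.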

Evaluation

You can run the following scripts to evaluate the Abstract Scene and layout models:

./experiments/scripts/eval_abstract.sh 		# Evaluate the abstract scene model
./experiments/scripts/eval_layout.sh 		# Evaluate the layout model

These scripts run on the GPU by default.

Citing

If you find our paper/code useful, please consider citing:

@InProceedings{text2scene2019, 
    author = {Tan, Fuwen and Feng, Song and Ordonez, Vicente},
    title = {Text2Scene: Generating Compositional Scenes from Textual Descriptions},
    booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
    month = {June},
    year = {2019}}

License

This project is licensed under the MIT license:

Copyright (c) 2019 University of Virginia, Fuwen Tan, Song Feng, Vicente Ordonez.

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

text2scene's People

Contributors

fwtan

text2scene's Issues

Lack of RAM

Hi, I'm running this code, but when I try to train the abstract scenes network on a Colab TPU I run out of memory. Please tell me if I'm doing something wrong, or does the project need more RAM (more than 12 GB when training)?

Split file missing

Current files in data/caches:

abstract_ckpts  composites_ckpts  composites_coco_general.pkl  composites_color_palette.txt  layout_ckpts  train_nntables_with_bg

Trying to run the composite experiment:

OSError: /auto/nlg-05/chengham/Text2Scene/data/caches/composites_train_split.txt not found.

Error while running sample_abstract.sh file

I have followed the instructions but am getting the following error. Note: I am using the PyCharm IDE on Windows.

C:\Users\Abdullah\Desktop\Text2Scene\Data\experiments\scripts>"C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Git" C:/Users/Abdullah/Desktop/Text2Scene/Data/experiments/scripts/sample_abstract.sh
'"C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Git"' is not recognized as an internal or external command,
operable program or batch file.

(base) C:\Users\Abdullah\Desktop\Text2Scene\Data\experiments\scripts>"C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Git" C:/Users/Abdullah/Desktop/Text2Scene/Data/experiments/scripts/sample_abstract.sh

stuff_train2017.json missing

I was trying to run the composite experiment and the following error occurred.

+ set -e
+ export PYTHONUNBUFFERED=True
+ PYTHONUNBUFFERED=True
++ date +%Y-%m-%d_%H-%M-%S
+ LOG=experiments/logs/sample_composites.txt.2019-10-16_08-31-33
+ exec
++ tee -a experiments/logs/sample_composites.txt.2019-10-16_08-31-33
tee: experiments/logs/sample_composites.txt.2019-10-16_08-31-33: No such file or directory
+ echo Logging output to experiments/logs/sample_composites.txt.2019-10-16_08-31-33
Logging output to experiments/logs/sample_composites.txt.2019-10-16_08-31-33
+ ./tools/composites_demo.py --cuda --for_visualization=True --use_super_category=True --use_patch_background=True --n_shape_hidden=256 --where_attn=2 --where_attn_2d=True --pretrained=composites_final
loading annotations into memory...
Done (t=19.85s)
creating index...
index created!
Traceback (most recent call last):
  File "./tools/composites_demo.py", line 97, in <module>
    composites_demo(config)
  File "./tools/composites_demo.py", line 68, in composites_demo
    traindb = composites_coco(config, 'train', '2017')
  File "/auto/nlg-05/chengham/Text2Scene/tools/../lib/datasets/composites_coco.py", line 30, in __init__
    self.get_coco_general_info()
  File "/auto/nlg-05/chengham/Text2Scene/tools/../lib/datasets/composites_coco.py", line 79, in get_coco_general_info
    self.cocoStuffAPI = COCO(self.get_ann_file('stuff'))
  File "/auto/nlg-05/chengham/Text2Scene/tools/../lib/datasets/composites_coco.py", line 418, in get_ann_file
    assert osp.exists(ann_path), 'Path does not exist: {}'.format(ann_path)
AssertionError: Path does not exist: /auto/nlg-05/chengham/Text2Scene/tools/../lib/../data/coco/annotations/stuff_train2017.json

Is there a step I am missing? I ran all of the fetch_data, fetch_models, and fetch_aux scripts.

error no 2

Traceback (most recent call last):
  File "C:\Users\asus\cocoapi\PythonAPI\Text2Scene\tools\abstract_demo.py", line 35, in <module>
    abstract_demo(config)
  File "C:\Users\asus\cocoapi\PythonAPI\Text2Scene\tools\abstract_demo.py", line 20, in abstract_demo
    input_sentences = json_load('examples/abstract_samples.json')
  File "C:\Users\asus\cocoapi\PythonAPI\Text2Scene\tools\..\lib\abstract_utils.py", line 53, in json_load
    with open(path, 'r') as fid:
FileNotFoundError: [Errno 2] No such file or directory: 'examples/abstract_samples.json'

Errors while trying ./experiments/scripts/sample_abstract.sh

I am having errors while trying ./experiments/scripts/sample_abstract.sh; the message I get is the following:

+ set -e
+ export PYTHONUNBUFFERED=True
+ PYTHONUNBUFFERED=True
++ date +%Y-%m-%d_%H-%M-%S
+ LOG=experiments/logs/sample_composites.txt.2021-04-12_17-41-19
+ exec
++ tee -a experiments/logs/sample_composites.txt.2021-04-12_17-41-19
tee: experiments/logs/sample_composites.txt.2021-04-12_17-41-19: No such file or directory
+ echo Logging output to experiments/logs/sample_composites.txt.2021-04-12_17-41-19
Logging output to experiments/logs/sample_composites.txt.2021-04-12_17-41-19
+ ./tools/abstract_demo.py --pretrained=abstract_final
Traceback (most recent call last):
  File "./tools/abstract_demo.py", line 35, in <module>
    abstract_demo(config)
  File "./tools/abstract_demo.py", line 18, in abstract_demo
    train_db = abstract_scene(config, split='val', transform=transformer)
  File "/home/arwatawfik/Text2Scene/tools/../lib/datasets/abstract_scene.py", line 42, in __init__
    scenedb = self.load_scene(self.root_dir)
  File "/home/arwatawfik/Text2Scene/tools/../lib/datasets/abstract_scene.py", line 263, in load_scene
    fp = open(osp.join(db_dir, 'Scenes_10020.txt'), 'r')
FileNotFoundError: [Errno 2] No such file or directory: '/home/arwatawfik/Text2Scene/tools/../lib/../data/AbstractScenes_v1.1/Scenes_10020.txt'

"IndexError: too many indices for array" in lib/datasets/layout_coco.py

I get the error below after epoch 000 finished training. I followed the README, but it still fails. Could it be that my library versions are different, e.g. COCOAPI, numpy, opencv?

Epoch 000, iter 0013410:
loss:  7.487740596005087 0.0 0.0
accu:  [0.53364956 0.00988664 0.3089692  0.45671326]
-------------------------
0 0
0 1
0 2
0 3
0 4
0 5
0 6
0 7
0 8
0 9
0 10
0 11
0 12
0 13
0 14
Traceback (most recent call last):
  File "./tools/train_layout.py", line 46, in <module>
    train_model(config)
  File "./tools/train_layout.py", line 31, in train_model
    trainer.train(train_db, test_db, val_db)   
  File "/home/haruka/Workspace/Text2Scene/tools/../lib/modules/layout_trainer.py", line 218, in train
    val_loss, val_accu, val_infos = self.validate_epoch(val_db)
  File "/home/haruka/Workspace/Text2Scene/tools/../lib/modules/layout_trainer.py", line 348, in validate_epoch
    for cnt, batched in enumerate(val_loader):
  File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 615, in __next__
    batch = self.collate_fn([self.dataset[i] for i in indices])
  File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 615, in <listcomp>
    batch = self.collate_fn([self.dataset[i] for i in indices])
  File "/home/haruka/Workspace/Text2Scene/tools/../lib/datasets/layout_coco.py", line 102, in __getitem__
    out_inds, out_msks = self.scene_to_output_inds(scene)  
  File "/home/haruka/Workspace/Text2Scene/tools/../lib/datasets/layout_coco.py", line 502, in scene_to_output_inds
    other_inds = self.boxes2indices(boxes)
  File "/home/haruka/Workspace/Text2Scene/tools/../lib/datasets/layout_coco.py", line 479, in boxes2indices
    coord_inds = self.loc_map.coords2indices(boxes[:,:2])
IndexError: too many indices for array

These are the library versions I used.

annoy==1.16.2
asn1crypto==0.24.0
attrs==19.3.0
backcall==0.1.0
beautifulsoup4==4.7.1
bleach==3.1.0
certifi==2019.9.11
cffi==1.12.3
chardet==3.0.4
cloudpickle==1.2.2
conda==4.7.12
conda-build==3.17.8
conda-package-handling==1.6.0
cryptography==2.6.1
cycler==0.10.0
Cython==0.29.13
cytoolz==0.10.0
dask==2.6.0
decorator==4.4.0
defusedxml==0.6.0
entrypoints==0.3
filelock==3.0.10
glob2==0.6
idna==2.8
imageio==2.6.1
importlib-metadata==0.23
ipykernel==5.1.3
ipython==7.5.0
ipython-genutils==0.2.0
ipywidgets==7.5.1
jedi==0.13.3
Jinja2==2.10.1
jsonschema==3.1.1
jupyter==1.0.0
jupyter-client==5.3.4
jupyter-console==6.0.0
jupyter-core==4.6.1
kiwisolver==1.1.0
libarchive-c==2.8
lief==0.9.0
MarkupSafe==1.1.1
matplotlib==3.1.1
mistune==0.8.4
mkl-fft==1.0.12
mkl-random==1.0.2
more-itertools==7.2.0
nbconvert==5.6.1
nbformat==4.4.0
networkx==2.4
nltk==3.4.5
notebook==6.0.1
numpy==1.16.3
oauthlib==3.1.0
olefile==0.46
pandas==0.25.2
pandocfilters==1.4.2
parso==0.4.0
patsy==0.5.1
pexpect==4.7.0
pickleshare==0.7.5
Pillow==6.0.0
pkginfo==1.5.0.1
prometheus-client==0.7.1
prompt-toolkit==2.0.9
psutil==5.6.2
ptyprocess==0.6.0
pycairo==1.18.1
pycocotools==2.0
pycosat==0.6.3
pycparser==2.19
Pygments==2.3.1
pyOpenSSL==19.0.0
pyparsing==2.4.2
pyrsistent==0.15.4
PySocks==1.6.8
python-dateutil==2.8.0
pytz==2019.1
PyWavelets==1.1.1
PyYAML==5.1
pyzmq==18.1.0
qtconsole==4.5.5
requests==2.21.0
requests-oauthlib==1.3.0
ruamel-yaml==0.15.46
scikit-image==0.15.0
scipy==1.2.1
seaborn==0.9.0
Send2Trash==1.5.0
six==1.12.0
soupsieve==1.8
statsmodels==0.10.1
terminado==0.8.2
testpath==0.4.2
toolz==0.10.0
torch==1.0.1
torchtext==0.4.0
torchvision==0.2.1
tornado==6.0.3
tqdm==4.31.1
traitlets==4.3.2
urllib3==1.24.2
wcwidth==0.1.7
webencodings==0.5.1
widgetsnbextension==3.5.1
zipp==0.6.0
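One plausible cause, judging from the traceback alone: boxes can come back as an empty 1-D array for an image with no annotated objects, and slicing a 1-D array with boxes[:,:2] raises exactly this IndexError. A defensive guard would look like this (a sketch, not the repo's actual fix):

# Reproduces the failure mode and a defensive guard: slicing an empty
# 1-D numpy array with two indices fails.
import numpy as np

boxes = np.array([])          # e.g. an image with no annotated objects
try:
    coords = boxes[:, :2]     # IndexError: too many indices for array
except IndexError:
    boxes = boxes.reshape(0, 4)   # ensure a 2-D (N, 4) box array
    coords = boxes[:, :2]         # now an empty (0, 2) slice, no error
print(coords.shape)               # (0, 2)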

error D8021 : invalid numeric argument '/Wno-cpp'

Hello,
When I run the command:
(syn) C:\Users\USER\cocoapi\PythonAPI\Text2Scene\lib>make
I get error D8021.
This is the output of make (my OS: Windows 8.0, with Visual Studio C++ 2015):

python setup.py build_ext --inplace
running build_ext
cythoning nms/cpu_nms.pyx to nms\cpu_nms.c
C:\Users\USER\anaconda3\envs\syn\lib\site-packages\Cython\Compiler\Main.py:369: FutureWarning: Cython directive 'language_level' not set, using 2 for now (Py2). This will change in a later release! File: C:\Users\USER\cocoapi\PythonAPI\Text2Scene\lib\nms\cpu_nms.pyx
  tree = Parsing.p_module(s, pxd, full_module_name)
building 'nms.cpu_nms' extension
creating build
creating build\temp.win-amd64-cpython-39
creating build\temp.win-amd64-cpython-39\Release
creating build\temp.win-amd64-cpython-39\Release\nms
"C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -IC:\Users\USER\anaconda3\envs\syn\lib\site-packages\numpy\core\include -IC:\Users\USER\anaconda3\envs\syn\include -IC:\Users\USER\anaconda3\envs\syn\Include "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\INCLUDE" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\8.1\include\shared" "-IC:\Program Files (x86)\Windows Kits\8.1\include\um" "-IC:\Program Files (x86)\Windows Kits\8.1\include\winrt" /Tcnms\cpu_nms.c /Fobuild\temp.win-amd64-cpython-39\Release\nms\cpu_nms.obj -Wno-cpp -Wno-unused-function -std=c99
cl : Command line error D8021 : invalid numeric argument '/Wno-cpp'
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio 14.0\\VC\\BIN\\x86_amd64\\cl.exe' failed with exit code 2
make: *** [all] Erreur 1
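A likely cause, though not confirmed against this repo: the extension build passes gcc-only flags (-Wno-cpp, -Wno-unused-function, -std=c99, visible at the end of the cl.exe command above), which MSVC rejects. A sketch of a platform guard one could add, assuming the flags are set via extra_compile_args in lib/setup.py:

# Hypothetical patch; assumes the gcc-only flags are set via
# extra_compile_args in lib/setup.py (not verified against the actual file).
import sys
from setuptools import Extension

GCC_FLAGS = ['-Wno-cpp', '-Wno-unused-function', '-std=c99']

ext = Extension(
    'nms.cpu_nms',
    sources=['nms/cpu_nms.pyx'],
    # MSVC parses "-W..." as "/W..." and fails with D8021, so drop the
    # gcc-only flags on Windows.
    extra_compile_args=[] if sys.platform == 'win32' else GCC_FLAGS,
)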

FileNotFoundError: [Errno 2] No such file or directory:

I followed the instructions and downloaded the required data, but I get the following error after running sample_composites.sh:

  File "/home/notebook/code/multiModal/Text2Img/cocoapi/PythonAPI/Text2Scene/tools/../lib/nntable.py", line 60, in build_nntable
    with open(feature_path, 'rb') as fid:
FileNotFoundError: [Errno 2] No such file or directory: '/home/notebook/code/multiModal/Text2Img/cocoapi/PythonAPI/Text2Scene/tools/../lib/../data/coco/patch_feature_with_bg/train2017/000000303610/000000303610_000000422547.pkl'

I used a try-except to catch the exception, and found the following errors:
person NNTable
File not found exception...
plant NNTable
File not found exception...

sample_composites.sh produces no result

After running sample_composites.sh I get the following:
dataset info loaded from /home/user/Text2Scene/data/caches/composites_coco_general.pkl
Loading color pallete.
Loading annotations into memory...
Done (t=13.99s)
creating index...
index created!
Loading annotations into memory...
Done (t=1.57s)
creating index...
index created!
Loading annotations into memory...
Done (t=31.39s)
creating index...
index created!
Killed

Afterwards, a new composite folder is created in the log directory, but it contains no files.
Please help: what could be causing the problem?

undefined symbol: _Py_ZeroStruct

After following all the steps in the README without getting any error, I finally run
./experiments/scripts/sample_composites.sh
and the output I get is an error:
ImportError: /home/user/Text2Scene/tools/../lib/nms/cpu_nms.so: undefined symbol: _Py_ZeroStruct
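_Py_ZeroStruct exists only in CPython 2, so this ImportError indicates cpu_nms.so was compiled against Python 2 and is being imported by Python 3. A quick check before deleting the stale .so and rerunning make in lib/ inside the correct environment:

# _Py_ZeroStruct is a Python-2-only symbol; confirm which interpreter is
# active, then rebuild lib/ ("make") with that same environment.
import sys
print(sys.executable)     # interpreter that will import cpu_nms.so
print(sys.version_info)   # should be (3, ...) for this repo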
