License: MIT License



brise-plandok

Information extraction from text documents of the zoning plan of the City of Vienna

Work supported by BRISE-Vienna (UIA04-081), a European Union Urban Innovative Actions project.

The asail2021 tag contains the code as presented in our 2021 ASAIL paper. Legacy code can be found in the asail folder.


Requirements

Install the brise_plandok repository:

pip install .

# Or, in editable mode, so local changes take effect without reinstalling
pip install -e .

Installing this repository will also install the tuw_nlp repository, a graph-transformation framework. To learn more, visit https://github.com/recski/tuw-nlp.

Ensure that you have Java 8 or newer installed for the alto library.

Coding guidelines

This repository uses black for code formatting and flake8 for PEP 8 compliance. To install the pre-commit hooks, run:

pre-commit install

This creates the .git/hooks/pre-commit file, which automatically reformats all modified files before each commit.
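For reference, a .pre-commit-config.yaml for black and flake8 typically has the shape below; the revisions are placeholders, so check this repository's actual file for the pinned versions:

```yaml
# Sketch of a pre-commit config running black and flake8 (revs are placeholders).
repos:
  - repo: https://github.com/psf/black
    rev: 22.3.0   # placeholder; check the repo's pinned version
    hooks:
      - id: black
  - repo: https://github.com/PyCQA/flake8
    rev: 4.0.1    # placeholder
    hooks:
      - id: flake8
```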

Run black separately

pip install black
black .

Run flake8 separately

pip install flake8
flake8 .

Annotated Data Description

See DATA.md.

Extraction service

Start service with your own data

python brise_plandok/services/full_extractor.py -d <DATA_DIR>

Example: python brise_plandok/services/full_extractor.py -d data/train

Start service from Docker

The docker image downloads the data from our cloud storage.

# Build docker image
docker build --tag brise-attr-extraction .

# Start service
docker run -p 5000:5000 brise-attr-extraction

Call service

In both cases you can now reach the service by calling curl http://localhost:5000/<endpoint>/<doc_id>. If the doc_id does not exist, Not found is returned.

brise-extract-api

curl http://localhost:5000/brise-extract-api/7377

psets

# To get minimal psets
curl http://localhost:5000/psets/7377

# To get full psets (quote the URL so the shell does not interpret "?")
curl "http://localhost:5000/psets/7377?full=true"
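The endpoint URLs above can also be built programmatically. The helper below is a hypothetical sketch (not part of the repository) that assumes the default localhost:5000 base URL from the examples:

```python
# Hypothetical helper for building psets endpoint URLs (not part of the repo).
from urllib.parse import urlencode

BASE_URL = "http://localhost:5000"  # assumed default from the README examples

def psets_url(doc_id, full=False):
    """Return the URL for the psets endpoint, optionally requesting full psets."""
    url = f"{BASE_URL}/psets/{doc_id}"
    if full:
        url += "?" + urlencode({"full": "true"})
    return url
```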

Demo for attribute names only

To run the browser-based demo described in the paper (also available online), first start the rule extraction service:

python brise_plandok/services/attribute_extractor.py

Then run the frontend with this command:

streamlit run brise_plandok/frontend/extract.py

To use the prover component of our system, also start the prover service from this repository: https://github.com/adaamko/BRISEprover. This starts a small Flask service on port 5007 that is used by the demo service.

The demo can then be accessed in your web browser at http://localhost:8501/.

Preprocessing

Input data

All steps described below can be run on the sample documents included in this repository under sample_data.

The preprocessed version of all plan documents (as of December 2020) can be downloaded as a single JSON file. If you would like to customize preprocessing, you can also download the raw text documents.

NLP Pipeline

Extract section structure from raw text and run NLP pipeline (sentence segmentation, tokenization, dependency parsing):

python brise_plandok/preproc/plandok.py sample_data/txt/*.txt > sample_data/json/sample.jsonl
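The resulting sample.jsonl stores one JSON document per line. A minimal stdlib sketch for consuming such a file (it makes no assumptions about the document schema):

```python
# Read a JSONL file: one JSON object per non-empty line.
import json

def read_jsonl(path):
    """Yield one parsed document per non-empty line of a JSONL file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)
```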

Attribute extraction task

To run the current best rule-based extraction, see here.

To run experiments with POTATO, see here.

To have a look at our baseline experiments, see here.

Annotation process

For details about the annotation process, see here.

Development

For development details read more here.

References

The rule extraction system is described in the following paper:

Gabor Recski, Björn Lellmann, Adam Kovacs, Allan Hanbury: Explainable rule extraction via semantic graphs (...)

The demo also uses the deontic logic prover described in this paper.

The preprocessing pipeline relies on the Stanza library.

brise-plandok's People

Contributors: adaamko, eszti, recski

brise-plandok's Issues

potato/create_dataset fails unexpectedly if target data files exist

This command works fine if data/gold* files don't exist:

python create_dataset.py -d ~/sandbox/brise-nlp/annotation/2021_09/full_data -g fourlang -o -n gold

But if I rerun it to regenerate the graphs, the same command fails with this error:

/home/recski/miniconda3/envs/brise/lib/python3.7/site-packages/pandas/core/indexing.py:845: SettingWithCopyWarning: 
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead

See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
  self.obj[key] = _infer_fill_value(value)
/home/recski/miniconda3/envs/brise/lib/python3.7/site-packages/pandas/core/indexing.py:966: SettingWithCopyWarning: 
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead

See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
  self.obj[item] = s

Emptying the data folder solves the problem.
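The warnings above come from pandas chained indexing. As a general illustration (not the actual create_dataset.py code), the usual fix is to take an explicit copy and assign via .loc, as the warning message suggests:

```python
# Illustration of avoiding SettingWithCopyWarning (not the repo's actual code).
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3]})
sub = df[df["a"] > 1].copy()  # explicit copy: no view-versus-copy ambiguity
sub.loc[:, "b"] = 0           # assign via .loc as the warning recommends
```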

@Eszti please have a look when you can

Install puts 'export ALTO_JAR' 3 times into .bash_profile

After executing pip install . I have the following line in my .bash_profile:

export ALTO_JAR=/home/eszter/tuw_nlp_resources/alto-2.3.6-SNAPSHOT-all.jarexport ALTO_JAR=/home/eszter/tuw_nlp_resources/alto-2.3.6-SNAPSHOT-all.jarexport ALTO_JAR=/home/eszter/tuw_nlp_resources/alto-2.3.6-SNAPSHOT-all.jar
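A defensive sketch of how an installer can avoid this bug: append the export line only if it is not already present, and always terminate it with a newline. The function name is illustrative, not the actual tuw_nlp install code:

```python
# Append an export line to a shell profile exactly once, newline-terminated.
import os

def ensure_export_line(profile_path, line):
    """Idempotently append `line` to the profile file."""
    existing = ""
    if os.path.exists(profile_path):
        with open(profile_path, encoding="utf-8") as f:
            existing = f.read()
    if line not in existing:
        with open(profile_path, "a", encoding="utf-8") as f:
            if existing and not existing.endswith("\n"):
                f.write("\n")  # never glue onto a previous line
            f.write(line + "\n")
```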

plandok.py leaves sections.sens.text null if cache is used

The following preprocessing step leaves the sections.sens.text attribute null if nlp_cache.json is used.

python brise_plandok/plandok.py sample_data/txt/*.txt > sample_data/json/sample.jsonl

If the cache file is deleted, and therefore regenerated on the next run, the json output is complete.

convert.py does not open stream for -of other than xlsx

NOTE: this issue was detected on dev

Executing

python brise_plandok/convert.py \
    -i XLSX \
    -if ~/research/data/brise/ann/$1.xlsx \
    -o JSON \
    -of ~/research/data/brise/ann/$1.json

results in

Traceback (most recent call last):
  File "/home/eszter/research/brise-plandok/brise_plandok/convert.py", line 391, in <module>
    main()
  File "/home/eszter/research/brise-plandok/brise_plandok/convert.py", line 387, in main
    converter.convert(input_stream, output_stream)
  File "/home/eszter/research/brise-plandok/brise_plandok/convert.py", line 358, in convert
    self.write(doc, output_stream)
  File "/home/eszter/research/brise-plandok/brise_plandok/convert.py", line 348, in write
    self.write_json(doc, stream)
  File "/home/eszter/research/brise-plandok/brise_plandok/convert.py", line 324, in write_json
    stream.write(json.dumps(doc))
AttributeError: 'str' object has no attribute 'write'
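The traceback shows the JSON writer receiving a file path (a str) where it expects a writable stream. A sketch of a defensive fix, with illustrative names rather than convert.py's actual API:

```python
# Accept either an open stream or a filesystem path when writing JSON.
import json

def write_json(doc, stream_or_path):
    """Write `doc` as JSON to an already-open stream or to a file path."""
    if isinstance(stream_or_path, str):
        with open(stream_or_path, "w", encoding="utf-8") as stream:
            stream.write(json.dumps(doc))
    else:
        stream_or_path.write(json.dumps(doc))
```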
