Machine Learning models on top of Abstract Syntax Trees. Formerly known as ast2vec.
Currently, the following are implemented:
- id2vec, source code identifier embeddings
- docfreq, source code identifier document frequencies (part of TF-IDF)
- nBOW, weighted bag of vectors, as in src-d/wmd-relax
- topic modeling
This project can be the foundation for machine learning on source code (MLoSC) research and development. It abstracts feature extraction and working with models, allowing you to focus on higher-level tasks.
It is written in Python3 and has been tested on Linux and macOS. sourced.ml is tightly coupled with Babelfish and delegates all the AST parsing to it.
Here is the list of projects which are built with sourced.ml:
- vecino - finding similar repositories
- tmsc - topic modeling of repositories
- role2vec - AST node embedding and correction
- snippet-ranger - topic modeling of source code snippets
```
pip3 install ast2vec
```
You need to have libxml2 installed, e.g. on Ubuntu: `apt install libxml2-dev`.
This project exposes two interfaces: API and command line. The command line is
```
ast2vec --help
```
There is an example of using Python API here.
It exposes several tools to generate the models and setup the environment.
The API is divided into two domains: models and training. The former is about using trained models, the latter about creating them. Models: `Id2Vec`, `DocumentFrequencies`, `NBOW`, `Cooccurrences`. Transformers (keras/sklearn style): `Repo2nBOWTransformer`, `Repo2CooccTransformer`, `PreprocessTransformer`, `SwivelTransformer` and `PostprocessTransformer`.
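The transformer classes follow the familiar scikit-learn `fit`/`transform` convention. As an illustration only, with a hypothetical class and toy logic rather than ast2vec's actual API, such a transformer typically looks like this:

```python
# Illustrative sketch of the sklearn-style transformer convention that the
# ast2vec transformers follow. The class name and internals here are
# hypothetical, not ast2vec's real implementation.

class IdentifierCounterTransformer:
    """Counts identifier occurrences across documents (toy example)."""

    def fit(self, documents):
        # Learn the vocabulary from the training documents.
        self.vocabulary_ = sorted({ident for doc in documents for ident in doc})
        return self  # fit() returns self so calls can be chained

    def transform(self, documents):
        # Map every document to a vector of identifier counts.
        index = {ident: i for i, ident in enumerate(self.vocabulary_)}
        result = []
        for doc in documents:
            row = [0] * len(self.vocabulary_)
            for ident in doc:
                if ident in index:
                    row[index[ident]] += 1
            result.append(row)
        return result

docs = [["foo", "bar", "foo"], ["bar", "baz"]]
model = IdentifierCounterTransformer().fit(docs)
print(model.vocabulary_)      # ['bar', 'baz', 'foo']
print(model.transform(docs))  # [[1, 0, 2], [1, 1, 0]]
```

The chainable `fit().transform()` pattern is what lets these transformers be composed into pipelines.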
First you need to run the Babelfish image and download the drivers:
```
docker run -d --privileged -p 9432:9432 --name bblfshd bblfsh/bblfshd:v2.2.0
docker exec -it bblfshd bblfshctl driver install image docker://bblfsh/python-driver:v1.2.6
docker exec -it bblfshd bblfshctl driver install image docker://bblfsh/java-driver:v1.2.2
```
You can find the full installation guide in the official getting started manual.
Then build the ast2vec image and run it:
```
docker build -t srcd/ast2vec .
docker run -it --rm srcd/ast2vec --help
```
If the first command fails with

```
Cannot connect to the Docker daemon. Is the docker daemon running on this host?
```

and you are sure that the daemon is running, then you need to add your user to the `docker` group: refer to the documentation.
You can launch our Docker image with notebook examples by running:

```
docker run --name ast2vec-jupyter -p 8888:8888 -it --rm --entrypoint jupyter srcd/ast2vec notebook --allow-root --ip=0.0.0.0
```
Go to http://localhost:8888/notebooks/how_to_use_ast2vec.ipynb to see the notebook example.
Don't forget to run the Babelfish image and build the srcd/ast2vec image as described in the previous section.
We build the source code identifier co-occurrence matrix for every repository.
- Clone or read the repository from disk.
- Classify files using enry.
- Extract UAST from each supported file.
- Split and stem all the identifiers in each tree.
- Traverse the UAST, collapse all non-identifier paths and record all identifiers on the same level as co-occurring. Additionally, connect them with their immediate parents.
- Write the individual co-occurrence matrices.
- Merge co-occurrence matrices from all repositories. Write the document frequencies model.
- Train the embeddings using Swivel running on TensorFlow. Interactively view the intermediate results in TensorBoard using `--logs`.
- Write the identifier embeddings model.
- Publish generated models to the Google Cloud Storage.
Steps 1-6 are performed with the `repo2coocc` tool / `Repo2CooccTransformer` class, 7 with `id2vec_preproc` / `id_embedding.PreprocessTransformer`, 8 with `id2vec_train` / `id_embedding.SwivelTransformer`, 9 with `id2vec_postproc` / `id_embedding.PostprocessTransformer` and 10 with `publish`.
We represent every repository as a weighted bag-of-vectors, provided that we have the document frequencies ("docfreq") and the identifier embeddings ("id2vec").
- Clone or read the repository from disk.
- Classify files using enry.
- Extract UAST from each supported file.
- Split and stem all the identifiers in each tree.
- Leave only those identifiers which are present in "docfreq" and "id2vec".
- Set the weight of each such identifier as TF-IDF.
- Set the value of each such identifier as the corresponding embedding vector.
- Write the nBOW model.
- Publish it to the Google Cloud Storage.
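The filtering and weighting in steps 5-7 above can be sketched like this. It is a toy illustration with made-up inputs and a hypothetical function name, not the actual repo2nbow code:

```python
import math

def nbow(repo_tokens, docfreq, num_docs, id2vec):
    """Build a weighted bag-of-vectors for one repository (toy sketch).

    repo_tokens: identifiers extracted from the repository
    docfreq:     identifier -> number of documents containing it
    num_docs:    total number of documents behind the docfreq model
    id2vec:      identifier -> embedding vector
    """
    counts = {}
    for tok in repo_tokens:
        # Step 5: keep only identifiers present in both models.
        if tok in docfreq and tok in id2vec:
            counts[tok] = counts.get(tok, 0) + 1
    bag = {}
    for tok, tf in counts.items():
        # Step 6: the weight is TF-IDF (term frequency times inverse
        # document frequency).
        idf = math.log(num_docs / docfreq[tok])
        # Step 7: the value is the corresponding embedding vector.
        bag[tok] = (tf * idf, id2vec[tok])
    return bag

id2vec = {"file": [0.1, 0.2], "read": [0.3, 0.4]}
docfreq = {"file": 50, "read": 10}
print(nbow(["file", "read", "file", "unknown"], docfreq, 100, id2vec))
```

Note how `"unknown"` is dropped because it appears in neither model, and rarer identifiers (higher IDF) receive larger weights.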
Steps 1-8 are performed with the `repo2nbow` tool / `Repo2nBOWTransformer` class and 9 with `publish`.
See here.
We use PEP8 with line length 99. All the tests must pass:
```
python3 -m unittest discover /path/to/ast2vec
```
Apache License Version 2.0, see LICENSE