deeprob-org / deeprob-kit
A Python Library for Deep Probabilistic Modeling
Home Page: https://deeprob-kit.readthedocs.io/en/latest/
License: MIT License
Most of the code in BinaryCLT actually works for any tree-shaped Bayesian network. Therefore, it would be better to create a superclass called TreeBN and make BinaryCLT a subclass of it.
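A possible shape for this refactor, as a minimal sketch (the attribute names `scope` and `tree` and the constructor signature are assumptions, not the actual BinaryCLT interface):

```python
class TreeBN:
    """Hypothetical superclass for tree-shaped Bayesian networks.

    Stores, for each variable i, the index of its parent in `tree`
    (-1 for the root), which is all that tree-shaped inference needs.
    """
    def __init__(self, scope: list, tree: list):
        assert len(scope) == len(tree), "one parent entry per variable"
        self.scope = scope
        self.tree = tree

    def root(self) -> int:
        """Return the index of the root variable."""
        return self.tree.index(-1)


class BinaryCLT(TreeBN):
    """Chow-Liu tree restricted to binary variables; tree-generic
    functionality is inherited from TreeBN."""
    def __init__(self, scope: list, tree: list):
        super().__init__(scope, tree)
```

Generic routines (ancestor traversal, message passing along the tree) would then live in TreeBN, while BinaryCLT keeps only the binary-specific parameter learning.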
I was trying to run plot_spn.py, but the code raises an error. Here's the output:
Plotting the dummy SPN to spn-dummy.svg ...
Traceback (most recent call last):
File ".../deeprob-kit/examples/spn_plot.py", line 25, in <module>
spn.plot_spn(root, spn_filename)
File ".../miniconda3/envs/deeprob/lib/python3.9/site-packages/deeprob/spn/structure/io.py", line 317, in plot_spn
pos = nx_pydot.graphviz_layout(graph, prog='dot')
File ".../miniconda3/envs/deeprob/lib/python3.9/site-packages/networkx/drawing/nx_pydot.py", line 357, in graphviz_layout
return pydot_layout(G=G, prog=prog, root=root)
File ".../miniconda3/envs/deeprob/lib/python3.9/site-packages/networkx/drawing/nx_pydot.py", line 406, in pydot_layout
P = to_pydot(G)
File ".../miniconda3/envs/deeprob/lib/python3.9/site-packages/networkx/drawing/nx_pydot.py", line 263, in to_pydot
raise ValueError(
ValueError: Node names and attributes should not contain ":" unless they are quoted with "". For example the string 'attribute:data1' should be written as '"attribute:data1"'. Please refer https://github.com/pydot/pydot/issues/258
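A possible workaround, pending an upstream fix, is to quote any node label containing ':' before the graph reaches graphviz_layout, following the convention the error message asks for (`quote_label` is a hypothetical helper, not part of deeprob-kit):

```python
def quote_label(label: str) -> str:
    """Quote a Graphviz node label containing ':' so pydot accepts it.

    Labels already wrapped in double quotes are left untouched.
    """
    if ":" in label and not (label.startswith('"') and label.endswith('"')):
        return f'"{label}"'
    return label
```

Applying this to every node name (e.g. via `networkx.relabel_nodes`) before calling `nx_pydot.graphviz_layout` should avoid the ValueError.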
Set up a new repository deeprob-org/deeprob-kit-docs or the special branch gh-pages containing versioned documentation.
Refer to sphinx-multiversion for building versioned documentation.
In particular, refer to a fork of sphinx-multiversion supporting sphinx-apidoc and sphinx-autodoc.
Finally, set up a GitHub Action to automatically push new documentation versions whenever a push to the main branch is made. However, this can also be done using Travis CI.
Implement the FID score for generative models. A suitable place for the function fid_score is deeprob.utils.statistics.
Moreover, include the FID score, alongside the BPP (bits-per-pixel) metric, in the results reported by the normalizing flows experiments.
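For reference, the FID is the Fréchet distance between two Gaussians fitted to feature activations. A minimal sketch of the closed form in the univariate case (the real fid_score would operate on multivariate Inception activations; this function is purely illustrative):

```python
import math

def fid_score_1d(mu1: float, var1: float, mu2: float, var2: float) -> float:
    """Fréchet distance between two univariate Gaussians:

        (mu1 - mu2)^2 + var1 + var2 - 2 * sqrt(var1 * var2)

    The multivariate version replaces the last term with the trace of
    the matrix square root of the covariance product.
    """
    return (mu1 - mu2) ** 2 + var1 + var2 - 2.0 * math.sqrt(var1 * var2)
```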
Perhaps the following link to the corresponding arXiv article can be included in the README:
Use pytest instead of unittest.
Implementation of Greedy Variable Splitting (GVS) is broken.
The method apply_forward of the class AutoregressiveLayer is not differentiable due to in-place operations. This makes training using the flow sampling direction impossible.
Forward evaluation of RealNVP with more than one sequential multiscale architecture raises a RuntimeError.
The library depends on the deprecated sklearn package and should be updated to use scikit-learn instead.
Implement are_compatible between two PCs. This would also simplify is_structured_decomposable, since we can define it as self-compatibility.
Review the implementation of MPE queries on Binary Chow-Liu Trees (CLTs) and write unit tests on the correctness of MPE queries.
A suitable multiprocessing library for this task is joblib, for which it is possible to specify 'threading' as a lightweight backend.
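For instance, a minimal joblib usage sketch with the threading backend (the function `square` is just a placeholder workload):

```python
from joblib import Parallel, delayed

def square(x: int) -> int:
    """Placeholder for the actual per-item work."""
    return x * x

# The 'threading' backend avoids process-spawning overhead,
# which suits lightweight or I/O-bound tasks.
results = Parallel(n_jobs=2, backend="threading")(
    delayed(square)(i) for i in range(5)
)
```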
When training a normalizing flow having a Standard Gaussian as base distribution (i.e. using in_base=None by default), the mean and standard deviation are not kept constant during training. The expected behavior is that they are kept constant during training.
This is probably due to a wrong initialization of mean and standard deviation parameters: https://github.com/deeprob-org/deeprob-kit/blob/main/deeprob/flows/models/base.py#L52-L53.
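One plausible fix, sketched here under the assumption that the mean and standard deviation are currently registered as trainable nn.Parameters: register them as buffers instead, so the optimizer never sees them (the class name and log_prob signature are hypothetical, not deeprob-kit's actual API):

```python
import math
import torch
import torch.nn as nn

class GaussianBase(nn.Module):
    """Sketch of a fixed Standard Gaussian base distribution.

    mean and std are registered as buffers rather than nn.Parameter,
    so they are saved with the state dict but never updated by an
    optimizer during training.
    """
    def __init__(self, dim: int):
        super().__init__()
        self.register_buffer("mean", torch.zeros(dim))
        self.register_buffer("std", torch.ones(dim))

    def log_prob(self, x: torch.Tensor) -> torch.Tensor:
        var = self.std ** 2
        log_det = torch.log(2.0 * math.pi * var)
        return (-0.5 * (((x - self.mean) ** 2) / var + log_det)).sum(dim=-1)
```

Since buffers are excluded from `module.parameters()`, they stay constant no matter which optimizer the training loop uses.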
Extend the experiments script (experiments/spn.py) to launch XPC experiments.
Split the README.md file at the root directory into multiple markdown files discussing the content (and usage) of the scripts present in the following directories:
Set up the PyLint static code analyser.
Also, set up a GitHub Action to automatically print a report about code quality.
I installed deeprob-kit using pip:
$ pip install --user deeprob-kit
Now, I want to try out the experiments to see if the code works on my system. However, I do not seem to have the experiments folder and therefore cannot run the experiments or put the datasets in place. Namely, what I have after installation is the following tree:
~/.local/lib/python3.9 $ tree -L 3
.
└── site-packages
├── deeprob
│ ├── __init__.py
│ ├── __pycache__
│ ├── context.py
│ ├── flows
│ ├── spn
│ ├── torch
│ └── utils
└── deeprob_kit-1.0.0.dist-info
├── INSTALLER
├── LICENSE
├── METADATA
├── RECORD
├── REQUESTED
├── WHEEL
└── top_level.txt
8 directories, 9 files
My impression is that the bundle on PyPI only contains deeprob-kit itself, without any of the other materials. Perhaps putting the experiments folder (and others) under deeprob may provide a solution, but I guess you chose the current structure for a reason. Or perhaps I am looking in the wrong location.
Add a string flag method to the learn_estimator function (in module deeprob.spn.learning.wrappers) that permits choosing between different SPN learning algorithms.
At the moment, the method flag must support two values: learnspn and learnxpc, corresponding to the LearnSPN and LearnXPC algorithms respectively.
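A minimal dispatch sketch for such a flag (learn_spn and learn_xpc are stubs standing in for the actual algorithm implementations; the real learn_estimator signature differs):

```python
def learn_spn(data, **kwargs):
    """Stub standing in for the actual LearnSPN implementation."""
    return ("learnspn", data)

def learn_xpc(data, **kwargs):
    """Stub standing in for the actual LearnXPC implementation."""
    return ("learnxpc", data)

def learn_estimator(data, method: str = "learnspn", **kwargs):
    """Dispatch to an SPN structure learner chosen by a string flag."""
    learners = {"learnspn": learn_spn, "learnxpc": learn_xpc}
    if method not in learners:
        raise ValueError(f"unknown learning method {method!r}")
    return learners[method](data, **kwargs)
```

A dictionary dispatch keeps the flag extensible: adding a new algorithm only requires one more entry, and unknown values fail loudly with a ValueError.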
NormalizingFlow abstract class import in flows/models/__init__.py.
RatSpn abstract class import in spn/models/__init__.py.
Fix most of the errors given by the Pylint static code analyzer.
I ran all examples. They are a nice way of testing how the code runs on one's computer and show its capabilities. Below, I provide some points of feedback/suggestions that may improve the experience people have when running the examples. Some of that feedback may pertain or be relevant to other parts of the code base as well.
…spn_latent_mnist.py does not.)
…nvp1d_moons.py) and not in an image file. That is nice. Could it be generalized to all examples, with a fallback to image file generation?
Obviously, these are mostly cosmetic suggestions, so I'd understand if you classify (parts of) this issue as 'wontfix'.