scientific-python / cookie
Scientific Python Library Development Guide and Cookiecutter
Home Page: https://learn.scientific-python.org/development
License: BSD 3-Clause "New" or "Revised" License
According to https://beta.ruff.rs/docs/settings/#target-version, the target-version
setting used by ruff can be inferred from the project.requires-python field, if it is present.
repo-review currently requires target-version to be set in
cookie/src/sp_repo_review/checks/ruff.py
Lines 35 to 50 in 884ef32
Based on the above, this check could be relaxed a little. Maybe to something like:
class RF002(Ruff):
    "Target version must be set"

    requires = {"RF001"}

    @staticmethod
    def check(pyproject: dict[str, Any]) -> bool:
        """
        Must select a minimum version to target. Affects pyupgrade,
        isort, and others. Can be inferred from project.requires-python.
        """
        match pyproject:
            case {"tool": {"ruff": {"target-version": str()}}}:
                return True
            case {"project": {"requires-python": str()}}:
                return True
            case _:
                return False
Or even something that suggests not specifying tool.ruff.target-version whenever project.requires-python is specified.
One of the suggestions is to add
addops = ["-ra", "--strict-config", "--strict-markers"]
That should be addopts (with a t). I have not found the typo in this repo, but I hope you'll know where to look ;-)
Found for https://learn.scientific-python.org/development/guides/repo-review/?repo=pydata%2Fxarray&branch=main
Hi, thank you for this great project. It is really helpful.
I discovered that the "Homepage" URL defined in [project.urls] returns 404. Maybe the GitHub Pages deployment is not used anymore?
See:
Line 50 in ed74738
Just an idea: this might be another check to include, verifying that the URLs in pyproject.toml are valid and accessible? Otherwise you cannot navigate back.
seems to remove all files. As an example, see this script I wrote, which mimics how MANIFEST.in is parsed using distutils (line-by-line):
$ cat file_list.py
from distutils.filelist import FileList

file_list = FileList()
for line in open('MANIFEST.in').readlines():
    line = line.strip()
    if not line:
        continue
    print(line)
    file_list.process_template_line(line)
    print(file_list.files, end='\n' * 2)
and the output
graft src
['src/pylibmagic/_version.pyi', 'src/pylibmagic/_version.py', 'src/pylibmagic/__init__.py', 'src/pylibmagic/py.typed', 'src/pylibmagic/__pycache__/__init__.cpython-39.pyc', 'src/pylibmagic.egg-info/PKG-INFO', 'src/pylibmagic.egg-info/not-zip-safe', 'src/pylibmagic.egg-info/SOURCES.txt', 'src/pylibmagic.egg-info/requires.txt', 'src/pylibmagic.egg-info/top_level.txt', 'src/pylibmagic.egg-info/dependency_links.txt']
graft tests
['src/pylibmagic/_version.pyi', 'src/pylibmagic/_version.py', 'src/pylibmagic/__init__.py', 'src/pylibmagic/py.typed', 'src/pylibmagic/__pycache__/__init__.cpython-39.pyc', 'src/pylibmagic.egg-info/PKG-INFO', 'src/pylibmagic.egg-info/not-zip-safe', 'src/pylibmagic.egg-info/SOURCES.txt', 'src/pylibmagic.egg-info/requires.txt', 'src/pylibmagic.egg-info/top_level.txt', 'src/pylibmagic.egg-info/dependency_links.txt', 'tests/test_package.py', 'tests/test_compiled.py', 'tests/__pycache__/test_package.cpython-39-pytest-7.1.1.pyc', 'tests/__pycache__/test_compiled.cpython-39-pytest-7.1.1.pyc']
include LICENSE README.md pyproject.toml setup.py setup.cfg
['src/pylibmagic/_version.pyi', 'src/pylibmagic/_version.py', 'src/pylibmagic/__init__.py', 'src/pylibmagic/py.typed', 'src/pylibmagic/__pycache__/__init__.cpython-39.pyc', 'src/pylibmagic.egg-info/PKG-INFO', 'src/pylibmagic.egg-info/not-zip-safe', 'src/pylibmagic.egg-info/SOURCES.txt', 'src/pylibmagic.egg-info/requires.txt', 'src/pylibmagic.egg-info/top_level.txt', 'src/pylibmagic.egg-info/dependency_links.txt', 'tests/test_package.py', 'tests/test_compiled.py', 'tests/__pycache__/test_package.cpython-39-pytest-7.1.1.pyc', 'tests/__pycache__/test_compiled.cpython-39-pytest-7.1.1.pyc', 'LICENSE', 'README.md', 'pyproject.toml', 'setup.py', 'setup.cfg']
global-exclude __pycache__ *.py[cod] .*
warning: no previously-included files matching '__pycache__' found anywhere in distribution
['src/pylibmagic.egg-info/PKG-INFO', 'src/pylibmagic.egg-info/not-zip-safe', 'LICENSE']
If instead I drop the .* pattern at the end of this line, I get
global-exclude __pycache__ *.py[cod]
warning: no previously-included files matching '__pycache__' found anywhere in distribution
['src/pylibmagic/_version.pyi', 'src/pylibmagic/_version.py', 'src/pylibmagic/__init__.py', 'src/pylibmagic/py.typed', 'src/pylibmagic.egg-info/PKG-INFO', 'src/pylibmagic.egg-info/not-zip-safe', 'src/pylibmagic.egg-info/SOURCES.txt', 'src/pylibmagic.egg-info/requires.txt', 'src/pylibmagic.egg-info/top_level.txt', 'src/pylibmagic.egg-info/dependency_links.txt', 'tests/test_package.py', 'tests/test_compiled.py', 'LICENSE', 'README.md', 'pyproject.toml', 'setup.py', 'setup.cfg']
which looks potentially better? I suspect what should have happened is a line like exclude .*, since I think the goal was to exclude (hidden) files starting with periods, but global-exclude produces a pattern that will match anywhere in the path. See some investigation I did below:
>>> files = ['src/pylibmagic/_version.pyi', 'src/pylibmagic/_version.py', 'src/pylibmagic/__init__.py', 'src/pylibmagic/py.typed', 'src/pylibmagic/__pycache__/__init__.cpython-39.pyc', 'src/pylibmagic.egg-info/PKG-INFO', 'src/pylibmagic.egg-info/not-zip-safe', 'src/pylibmagic.egg-info/SOURCES.txt', 'src/pylibmagic.egg-info/requires.txt', 'src/pylibmagic.egg-info/top_level.txt', 'src/pylibmagic.egg-info/dependency_links.txt', 'tests/test_package.py', 'tests/test_compiled.py', 'tests/__pycache__/test_package.cpython-39-pytest-7.1.1.pyc', 'tests/__pycache__/test_compiled.cpython-39-pytest-7.1.1.pyc', 'LICENSE', 'README.md', 'pyproject.toml', 'setup.py', 'setup.cfg']
>>> from distutils.filelist import translate_pattern
>>> translate_pattern(".*", 0, None, 0) # action: global-exclude
re.compile('(?s:\\.[^/]*)\\Z')
>>> translate_pattern(".*", 1, None, 0) # action: exclude
re.compile('(?s:\\A\\.[^/]*)\\Z')
>>> [f for f in files if translate_pattern(".*", 0, None, 0).search(f)]
['src/pylibmagic/_version.pyi', 'src/pylibmagic/_version.py', 'src/pylibmagic/__init__.py', 'src/pylibmagic/py.typed', 'src/pylibmagic/__pycache__/__init__.cpython-39.pyc', 'src/pylibmagic.egg-info/SOURCES.txt', 'src/pylibmagic.egg-info/requires.txt', 'src/pylibmagic.egg-info/top_level.txt', 'src/pylibmagic.egg-info/dependency_links.txt', 'tests/test_package.py', 'tests/test_compiled.py', 'tests/__pycache__/test_package.cpython-39-pytest-7.1.1.pyc', 'tests/__pycache__/test_compiled.cpython-39-pytest-7.1.1.pyc', 'README.md', 'pyproject.toml', 'setup.py', 'setup.cfg']
>>> [f for f in files if translate_pattern(".*", 1, None, 0).search(f)]
[]
which shows that the set of files matched by the global-exclude pattern is perhaps too greedy?
See https://github.com/python/cpython/blob/b3f2d4c8bab52573605c96c809a1e2162eee9d7e/Lib/distutils/filelist.py#L115 for reference (anchor=0 or anchor=1).
See below:
Lord Stark:~$ pipx install copier
installed package copier 9.1.1, installed using Python 3.8.6
These apps are now globally available
- copier
⚠️ Note: '/Users/kratsg/.local/bin' is not on your PATH environment variable. These apps will not be globally
accessible until your PATH is updated. Run `pipx ensurepath` to automatically add it, or manually modify your
PATH in your shell's config file (i.e. ~/.bashrc).
done! ✨ 🌟 ✨
Lord Stark:~$ pipx inject copier copier-templates-extensions
injected package copier-templates-extensions into venv copier
done! ✨ 🌟 ✨
Lord Stark:~$ pipx run copier copy gh:scientific-python/cookie itkdb-reports --trust
Traceback (most recent call last):
File "/Users/kratsg/.local/pipx/.cache/b3c4e793c52a986/bin/copier", line 5, in <module>
from copier.__main__ import copier_app_run
File "/Users/kratsg/.local/pipx/.cache/b3c4e793c52a986/lib/python3.8/site-packages/copier/__init__.py", line 6, in <module>
from .main import * # noqa: F401,F403
File "/Users/kratsg/.local/pipx/.cache/b3c4e793c52a986/lib/python3.8/site-packages/copier/main.py", line 46, in <module>
from .subproject import Subproject
File "/Users/kratsg/.local/pipx/.cache/b3c4e793c52a986/lib/python3.8/site-packages/copier/subproject.py", line 15, in <module>
from .template import Template
File "/Users/kratsg/.local/pipx/.cache/b3c4e793c52a986/lib/python3.8/site-packages/copier/template.py", line 20, in <module>
from yamlinclude import YamlIncludeConstructor
ModuleNotFoundError: No module named 'yamlinclude'
While #41 introduced Python 3.10 to the CI, not all pre-commit hooks support that version.
One such hook is pycln:
ERROR: Package 'pycln' requires a different Python: 3.10.0 not in '<3.10,>=3.6.2'
It is still an open issue: hadialqattan/pycln#78
In addition, even the pyproject.toml templates include lines like
# https://github.com/scikit-hep/cookie/blob/main/%7B%7Bcookiecutter.project_name%7D%7D/pyproject-flit.toml#L34
requires = [
    "typing_extensions>=3.7; python_version<'3.8'",
]
So I assume that this will also create failures.
Note that the latest black docs now recommend that users use https://github.com/psf/black-pre-commit-mirror instead of https://github.com/psf/black, because the version of black provided by the mirror is faster.
The mirror supports all black tags from 22.3.0 through the current 23.7.0 tag. Hence, I think the guide should start recommending this version of the black repo, or at the very least the sp-repo-review tool should allow the use of this URL.
I think this needs --all-files in https://github.com/scikit-hep/cookie/blob/ccba61982ea04cae0e2eefae47a86d28d73e3e55/%7B%7Bcookiecutter.project_name%7D%7D/.github/workflows/ci.yml#L24
Or perhaps something that diffs the PR branch against the base branch.
I have been using cookie for hatch projects recently, but I could not find an option to include hatch-vcs using the CLI. The setuptools backend includes setuptools_scm by default; should the hatchling backend include hatch-vcs by default too, or is it not included intentionally? Thanks!
Edit: hatch-vcs is mentioned in the guide, but not included in cookie.
Valid for Python 3.7+
Yet another PEP 517 backend is hatchling. I'm not sure if it's got anything that makes it stand out from the rest. I think it might support plug-ins. Its parent project, hatch, is an env manager similar to pdm and Poetry.
It appears that the issue reported in #283 has re-appeared in the latest version (2023.10.27), while it was fixed in the previous version (2023.09.21).
See astropy/astropy#15367; changing the cookie's pre-commit version from 2023.09.21 to 2023.10.27 reproduces this error.
Based on the UHI repo, I've noticed a few issues that could be improved:
Following up #136, investigate whether we should recommend:
- the ipython Sphinx directive
- the plot directive
I suspect that Jupytext is now the best way to do this, and more compatible with a Markdown-based approach, but I want to educate myself more.
We should put some sort of attribution page (or place on a page) giving credit to the places the content used to live & people who worked on it.
https://scikit-hep.org/developer/reporeview should maybe be renamed to repo-review.
Hello,
I receive the error below when trying to run repo-review with pandas (url).
When I tried with other popular repos (numpy, xarray), there was no problem.
Is this a bug? Thank you very much.
Running Python via Pyodide
Traceback (most recent call last):
File "/lib/python311.zip/_pyodide/_base.py", line 468, in eval_code
.run(globals, locals)
^^^^^^^^^^^^^^^^^^^^
File "/lib/python311.zip/_pyodide/_base.py", line 310, in run
coroutine = eval(self.code, globals, locals)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "", line 9, in
File "/lib/python3.11/site-packages/repo_review/processor.py", line 214, in process
result = apply_fixtures({"name": name, **fixtures}, tasks[name].check)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/lib/python3.11/site-packages/repo_review/fixtures.py", line 106, in apply_fixtures
return func(**kwargs)
^^^^^^^^^^^^^^
File "/lib/python3.11/site-packages/sp_repo_review/checks/pyproject.py", line 92, in check
return "minversion" in options and float(options["minversion"]) >= 6
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: could not convert string to float: '7.3.2'
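The underlying problem is that float() cannot parse a multi-component version string like "7.3.2". A possible fix (a sketch only, not the actual sp-repo-review patch; the function name is illustrative) is to compare just the leading numeric component:

```python
def minversion_at_least(value: str, minimum: int) -> bool:
    """Return True if the leading component of a version string is at
    least `minimum`. float("7.3.2") would raise ValueError here."""
    try:
        major = int(value.split(".")[0])
    except ValueError:
        return False
    return major >= minimum


print(minversion_at_least("7.3.2", 6))  # True
print(minversion_at_least("6", 6))      # True
print(minversion_at_least("5.0", 6))    # False
```

A released fix might instead use a proper version parser (e.g. packaging.version); the point is only that float() is too strict for dotted version strings.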
Hi,
Thanks for your great guide and these templates; they are very useful, and I discovered many things (copier, cruft, flit_scm, ...).
I am trying to compare several backends, and I have generated project templates with "Hatchling", "Flit-core", and "Setuptools with pyproject.toml".
I left the default BSD license for all.
No template has license metadata in the [project] table of the pyproject.toml file.
There is a classifier License :: OSI Approved :: BSD License.
When the wheel is built with Flit, it contains no LICENSE file.
Here is what I did for the "Hatchling" backend:
$ cookiecutter gh:scientific-python/cookie
[1/9] The name of your project (package): my_scientific_package
[2/9] The name of your (GitHub?) org (org): mylogin
[3/9] The url to your GitHub or GitLab repository (https://github.com/mylogin/my_scientific_package):
[4/9] Your name (My Name): My full name
[5/9] Your email ([email protected]): [email protected]
[6/9] A short description of your project (A great package.): Testing the scientific templates for different backends
[7/9] Select a license
1 - BSD
2 - Apache
3 - MIT
Choose from [1/2/3] (1):
[8/9] Choose a build backend
1 - Hatchling - Pure Python (recommended)
2 - Flit-core - Pure Python (minimal)
3 - PDM-backend - Pure Python
4 - Whey - Pure Python
5 - Poetry - Pure Python
6 - Setuptools with pyproject.toml - Pure Python
7 - Setuptools with setup.py - Pure Python
8 - Setuptools and pybind11 - Compiled C++
9 - Scikit-build-core - Compiled C++ (recommended)
10 - Meson-python - Compiled C++ (also good)
11 - Maturin - Compiled Rust (recommended)
Choose from [1/2/3/4/5/6/7/8/9/10/11] (1):
[9/9] Use version control for versioning [y/n] (y):
After creating a hatchling package out-of-the-box, the cd.yml file has an error in it:
GitHub Actions / CD
Invalid workflow file
The workflow is not valid. .github/workflows/cd.yml (Line: 21, Col: 3): The workflow must contain at least one job with no dependencies.
I think it's probably more common for the ruff typing-modules configuration to be {package}.typing rather than with _compat in here. So perhaps throwing in py.typed and typing.py would do the trick: https://github.com/scikit-hep/cookie/blob/df59a9bccbe1d0f52de5042fe21a29e7a6793d8a/%7B%7Bcookiecutter.project_name%7D%7D/pyproject.toml#L260
The MY101 and MY102 checks have been passing and failing (respectively) on the develop branch of PyBaMM, but it should be the other way around.
Running sp_repo_review: https://learn.scientific-python.org/development/guides/repo-review/?repo=pybamm-team%2FPyBaMM&branch=develop
Actual mypy config: https://github.com/pybamm-team/PyBaMM/blob/94aa498176d0b6bb1186aa63bebd9c85f7b74bff/pyproject.toml#L272-L282
I noticed that running the develop version of sp_repo_review and repo_review does not give these false results. Please feel free to close this if it has been fixed but not yet released. Thanks!
From #137.
See #45. Let's move the tiny pytest section in Style over to the PyTest page while we are at it (developer pages).
The guide puts __init__.py and refraction.py directly into src/ (cookie/docs/pages/tutorials/packaging.md, lines 53 to 58 in 7250955), whereas the directory structure listing has src/example/... (cookie/docs/pages/tutorials/packaging.md, lines 90 to 95 in 7250955).
Update to grouped dependabot updates (which are great, actually), and add check for groups. Would have helped with upload/download archive.
repo-review and sp-repo-review are now conda packages on conda-forge.
There is a bit of a snag with testing repo-review when sp-repo-review is provided in the tests environment. It is not critical, but it would be great if you wouldn't mind taking a look at the PR: conda-forge/repo-review-feedstock#1
This is not an issue per se; the packages are already released, and the tests are just a non-essential double-check that I prefer to define in the recipes.
Also, you are welcome to become a maintainer for that recipe; let me know.
https://scientific-python.github.io/repo-review/?repo=Quantum-Accelerators%2Fquacc&branch=main
In running the above test, GH100 gets raised and states that there are no .yml files in .github/workflows. While this is indeed true, there are .yaml files, which are equally valid. This seems like a simple parsing issue.
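A hedged sketch of the fix: collect workflow files by accepting either extension rather than globbing only *.yml (the real check lives in sp-repo-review; the function name and layout here are illustrative):

```python
import tempfile
from pathlib import Path


def workflow_files(repo_root: Path) -> list[str]:
    """Collect GitHub Actions workflow files with either extension."""
    workflows = repo_root / ".github" / "workflows"
    if not workflows.is_dir():
        return []
    return sorted(p.name for p in workflows.iterdir()
                  if p.suffix in {".yml", ".yaml"})


# Demo on a throwaway repo layout
with tempfile.TemporaryDirectory() as tmp:
    wf = Path(tmp) / ".github" / "workflows"
    wf.mkdir(parents=True)
    (wf / "ci.yaml").touch()
    (wf / "cd.yml").touch()
    (wf / "README.md").touch()
    print(workflow_files(Path(tmp)))  # ['cd.yml', 'ci.aml'... -> ['cd.yml', 'ci.yaml']
```

GitHub itself treats .yml and .yaml identically in .github/workflows, so any check keyed on the extension should accept both.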
Do you have any recommendations for handling the cases where I want some code importable only in tests? One example in open source is https://github.com/pydantic/pydantic/blob/main/tests/test_datetime.py#L20
I think this would require including a tests/__init__.py file? I noticed that you recommended against that here:
cookie/docs/pages/guides/pytest.md
Lines 112 to 114 in e88cb44
The following output is observed (GH200
is not skipped).
GitHub Actions:
├── GH100 Has GitHub Actions config ❌
│ All projects should have GitHub Actions config for this series of checks. If there are no .yml files in .github/workflows, the
│ remaining checks will be skipped.
├── GH101 Has nice names [skipped]
├── GH102 Auto-cancel on repeated PRs [skipped]
├── GH103 At least one workflow with manual dispatch trigger [skipped]
├── GH104 Use unique names for upload-artifact [skipped]
├── GH200 Maintained by Dependabot ❌
│ All projects should have a .github/dependabot.yml file to support at least GitHub Actions regular updates. Something like this:
│
│
│ version: 2
│ updates:
│ # Maintain dependencies for GitHub Actions
│ - package-ecosystem: "github-actions"
│ directory: "/"
│ schedule:
│ interval: "weekly"
│
├── GH210 Maintains the GitHub action versions with Dependabot [skipped]
├── GH211 Do not pin core actions as major versions [skipped]
└── GH212 Require GHA update grouping [skipped]
Ruff has a replacement for blacken-docs: astral.sh/blog/ruff-v0.1.8#formatting-code-snippets-in-docstrings.
In Jupyter Server we're skipping PC111
for now.
I've been looking at copier, and it looks like a much better maintained cookiecutter-like project, with some really nice features, like a much, much better CLI interface (help text! Types!). What would people think about moving to copier? (I looked at dual-supporting both for a while, but the templates are different enough that it probably wouldn't be reasonable.)
Along with that, what about a rename? Since we just moved it, it might be okay to do, and pipx run copier new gh:scientific-python/cookie seems a little odd for a project that isn't using cookiecutter; would something else be better? template, new, package, pyproject, etc.? The repo itself has three things in it now (the guide, the template, and sp-repo-review), but the only one you type on the command line is for the template generation. The old name would still work via redirect (though you'd already have to type copier new instead of cookiecutter, so I think it's not too bad even if it didn't work).
Here, it should be pipx and not pip:
My fork has diverged a bit so probably easier to just patch this from your side :)
Also, why is CONTRIBUTING.md in .github/ and not the root folder of the project?
I'm playing around with pre-commit and black together, and looked through scikit-hep/cookie for inspiration (with the now-recommended hatch backend), but when pre-commit runs, black doesn't check any files. I've gone through black's docs and looked at other projects a bit and can't seem to see how black is figuring out the files/directory to search. Is it possible that the generated pyproject.toml in this instance doesn't pass black enough information? Otherwise, is there something else I've missed?
Just wondering if it's intentional to have two sets of flake8 options, one in .flake8 and one in pyproject.toml?
There is some subtle difference between them :)
One way to pull all the configuration into a pyproject file is to use the pyproject-flake8 package, like pypa/wheel does. Something to consider; I'd take arguments for or against.
Thanks for this nice collection of wisdom!
In the Packaging tutorial here, the __init__.py and refraction.py files are placed in the wrong directory, under src instead of src/example. The tree output of the file structure is correct, however.
This is related to scikit-build option that I chose when asked by cookiecutter. I think the problem is that the MANIFEST.in explicitly includes CMakeLists.txt and LICENSE for sdist, but this really doesn't apply to bdist_wheels.
-info installed path. I was able to solve this by removing
https://github.com/scikit-hep/cookie/blob/eed6d2600ffa9d1517a09abf240b4a4b7dd38e6e/%7B%7Bcookiecutter.project_name%7D%7D/setup-skbuild.py#L16
But there's likely a more elegant solution (probably using exclude_package_data). I'm not sure if this issue was introduced by trying to satisfy installation for different implementations of Python (like PyPy).
We also should add this to cookie.
Originally posted by @henryiii in scikit-hep/scikit-hep.github.io#245 (comment)
I was looking at https://learn.scientific-python.org/development/guides/repo-review/?repo=networkx%2Fnetworkx&branch=main and I clicked on PP003, which took me here:
https://learn.scientific-python.org/development/guides/packaging-classic#PP003
I didn't try all the links, but others work fine. For example:
https://learn.scientific-python.org/development/guides/packaging-simple/#PP002
E.g. LICENCE or COPYING.
Not sure if I hit a forbidden phase-space, but it seems black-jupyter and isort disagree.
In the current config, black-jupyter wants
from . import _print_unused_id_ranges, _print_used_id_ranges, _scan_groups_and_users
but isort wants
from . import (_print_unused_id_ranges, _print_used_id_ranges,
               _scan_groups_and_users)
which results in
black-jupyter............................................................Failed
- hook id: black-jupyter
- files were modified by this hook
...
isort....................................................................Failed
- hook id: isort
- files were modified by this hook
ad infinitum.
This is documented in https://black.readthedocs.io/en/stable/guides/using_black_with_other_tools.html#isort
The simplest solution is to add
[tool.isort]
profile = "black"
to the pyproject.toml.
I ran into pypa/cibuildwheel#402 while building wheels that now incorporate numpy.
See e.g. https://github.com/cms-nanoAOD/correctionlib/runs/2521360719?check_suite_focus=true#step:3:1003
It seems the current best solution is to drop PyPy wheels for macOS?
This issue is an overview of the effort to merge the cookiecutter and guide maintained by Brookhaven National Lab NSLS-II into this repo and move the result upstream into the scientific-python GitHub organization.
This work is being started at the SciPy Dev Summit 2023.
Index:
- Who is this for?
- Tutorial
- Principles: split NSLS-II "Guiding Design Principles" into two sections, and add more later
- Guides (most scikit-hep content is this style)
- Recommended Patterns
Current link is
https://gitter.im/{{cookiecutter.url}}/community?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge
which leads to entries like
https://gitter.im/https://github.com/<account or org>/<project name>/community?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge
Instead, {{cookiecutter.url}} should probably be replaced by {{ cookiecutter.github_username }}/{{ cookiecutter.project_slug }}:
https://gitter.im/{{ cookiecutter.github_username }}/{{ cookiecutter.project_slug }}/community?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge
Does anyone know if cookiecutter.github_username also works for organizations? If yes, I can make a PR.
Ruff has the option of setting select = ["ALL"], which selects all the rule sets not in "preview". In particular, this includes
However, repo-review flags these as failures when "ALL" is set, even though they are actually being used.
maturin is a PEP 517 build backend for Rust extensions with PEP 621 support.