frequenz-floss / frequenz-repo-config-python

Frequenz repository setup tools and common configuration for Python

Home Page: https://frequenz-floss.github.io/frequenz-repo-config-python/

License: MIT License

Languages: Python 87.42%, CSS 10.75%, HTML 0.58%, Shell 1.25%
Topics: config, frequenz, grpc, lib, library, mkdocs, nox, project, protobuf, python

frequenz-repo-config-python's People

Contributors

cwasicki, dependabot[bot], leandro-lucarella-frequenz, llucax, marenz, shsms, tiyash-basu-frequenz


frequenz-repo-config-python's Issues

Add a tool command to upgrade the common-api repo dependency

From https://github.com/frequenz-floss/frequenz-api-microgrid/blob/v0.x.x/CONTRIBUTING.md:

Upgrading dependencies
======================

If you want to update the dependency `frequenz-api-common`, then you need to
* update the submodule `frequenz-api-common`
* update the version of the `frequenz-api-common` package in `pyproject.toml`
* update the version of the `frequenz-api-common` package in
`minimum-requirements-ci.txt`

The version of `frequenz-api-common` used in all three places mentioned
above should be the same.

Here is an example of upgrading the `frequenz-api-common` dependency to version
`v0.2.0`:
```sh
ver="0.2.0"
cd submodules/frequenz-api-common
git remote update
git checkout v${ver}
cd ../..
sed s/"frequenz-api-common == [0-9]\.[0-9]\.[0-9]"/"frequenz-api-common == ${ver}"/g -i pyproject.toml
sed s/"frequenz-api-common == .*"/"frequenz-api-common == ${ver}"/g -i minimum-requirements-ci.txt
```

This could be automated by adding a command like `frequenz-repo-config update-submodule frequenz-api-common v0.5.1`; a rough sketch follows.
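
A minimal sketch of what such a command could do, assuming the layout above (the function name and CLI wiring are hypothetical):

```python
# Hypothetical sketch of the proposed update-submodule command.
import pathlib
import re
import subprocess


def update_submodule(name: str, version: str) -> None:
    """Check out the given tag in the submodule and pin the version in the config files."""
    sub = pathlib.Path("submodules") / name
    subprocess.run(["git", "-C", str(sub), "remote", "update"], check=True)
    subprocess.run(["git", "-C", str(sub), "checkout", f"v{version}"], check=True)
    # Keep pyproject.toml and minimum-requirements-ci.txt in sync.
    for config in ("pyproject.toml", "minimum-requirements-ci.txt"):
        path = pathlib.Path(config)
        text = re.sub(
            rf"{re.escape(name)} == [0-9][^\s\"]*", f"{name} == {version}", path.read_text()
        )
        path.write_text(text)


update_submodule("frequenz-api-common", "0.5.1")
```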

Add release notes checking workflow

What's needed?

We need to remind contributors that pull requests need release notes updates.

Proposed solution

Use the workflow used in the SDK to check this:

Use cases

No response

Alternatives and workarounds

No response

Additional context

We need to make sure to document that a label needs to be created to manually skip the release notes check when an update is not necessary.
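
Since the SDK workflow isn't linked above, here is a minimal sketch of what such a check could look like (the `RELEASE_NOTES.md` file name and the skip label name are assumptions):

```yaml
# Hypothetical sketch of a release notes check workflow.
name: Check release notes
on:
  pull_request:
    types: [opened, synchronize, labeled, unlabeled]
jobs:
  check-release-notes:
    # An assumed label to skip the check when no update is necessary.
    if: "!contains(github.event.pull_request.labels.*.name, 'cmd:skip-release-notes')"
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0
      - name: Fail if RELEASE_NOTES.md was not updated
        run: |
          if git diff --quiet "origin/${{ github.base_ref }}" -- RELEASE_NOTES.md; then
            echo "::error::Please update RELEASE_NOTES.md (or add the skip label)."
            exit 1
          fi
```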

`-` sign in `name` should be replaced with `_` for `src/` module/folder creation

What happened?

Given the name `fcr-balancing`, the folder `src/frequenz/actor/fcr-balancing` was created.

What did you expect instead?

I believe we want to use `_` here; at least my existing folder was `fcr_balancing` (this is a migrating project).

Affected version(s)

No response

Affected part(s)

Affects only the configuration of actor repos (part:actor-only)

Extra information

If this works as intended, feel free to close.
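
For reference, the later labeler issue shows the template already has an `as_identifier` Jinja filter; a minimal sketch of the kind of normalization it would perform (the exact implementation in the template may differ):

```python
# Hypothetical sketch of normalizing a project name into a Python identifier.
import keyword
import re


def as_identifier(name: str) -> str:
    """Turn a project name like 'fcr-balancing' into 'fcr_balancing'."""
    ident = re.sub(r"[^0-9a-zA-Z_]", "_", name)
    if ident and ident[0].isdigit():  # identifiers can't start with a digit
        ident = f"_{ident}"
    if keyword.iskeyword(ident):  # avoid clashing with Python keywords
        ident += "_"
    return ident


assert as_identifier("fcr-balancing") == "fcr_balancing"
```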

Add a session to check proto files with `protolint`

protolint is a tool to lint protocol buffer files.

We should add a nox session that runs protolint. One problem with protolint is that it is not very popular yet: it doesn't have an easy-to-install package in all distributions, nor a PyPI package. Because of this, the nox session should not be included in the default sessions, but should be invoked explicitly when running the CI.

Here is how protolint is currently being used in the CI:

```yaml
jobs:
  protolint:
    runs-on: ubuntu-20.04

    steps:
      - name: Fetch sources
        uses: actions/checkout@v3
        with:
          submodules: true

      - name: Get protolint
        run: |
          # checksum taken from: https://github.com/yoheimuta/protolint/releases/download/v0.38.1/checksums.txt
          echo "c1a43f352da6f96f3e60efe33438eaca6148526e6145d9bbd77c40cebc9895a6  protolint_0.38.1_Linux_x86_64.tar.gz" > checksum.txt

          curl -LO https://github.com/yoheimuta/protolint/releases/download/v0.38.1/protolint_0.38.1_Linux_x86_64.tar.gz \
          && sha256sum --check checksum.txt \
          && tar xf protolint_0.38.1_Linux_x86_64.tar.gz \
          && chmod +x protolint \
          && sudo mv protolint /usr/local/bin/protolint

      - name: Run protolint
        run: protolint lint proto
```

There are also a few alternative GitHub actions to use out of the box:
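
Separately, a minimal sketch of such a non-default nox session (the session names in `nox.options.sessions` are illustrative):

```python
# noxfile.py sketch: protolint is defined but kept out of the default sessions,
# so the CI has to request it explicitly with `nox -e protolint`.
import nox

nox.options.sessions = ["formatting", "mypy", "pylint", "pytest"]  # assumed defaults


@nox.session
def protolint(session: nox.Session) -> None:
    """Lint protocol buffer files (assumes protolint is already installed)."""
    session.run("protolint", "lint", "proto", external=True)
```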

Wrong setuptools version when running nox

After running cookiecutter for a new actor, the suggested commands printed at the end of the cookiecutter setup

```sh
cd frequenz-actor-forecast
python3.11 -m venv .venv
. .venv/bin/activate
pip install .[dev-noxfile]
nox
```

fail with this error:

```
Traceback (most recent call last):
  File "/home/christoph/fqz/frequenz-actor-forecast/.venv/bin/nox", line 8, in <module>
    sys.exit(main())
             ^^^^^^
  File "/home/christoph/fqz/frequenz-actor-forecast/.venv/lib/python3.11/site-packages/nox/__main__.py", line 47, in main
    exit_code = workflow.execute(
                ^^^^^^^^^^^^^^^^^
  File "/home/christoph/fqz/frequenz-actor-forecast/.venv/lib/python3.11/site-packages/nox/workflow.py", line 51, in execute
    return_value = function_(*args, global_config=global_config)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/christoph/fqz/frequenz-actor-forecast/.venv/lib/python3.11/site-packages/nox/tasks.py", line 113, in load_nox_module
    return _load_and_exec_nox_module(global_config)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/christoph/fqz/frequenz-actor-forecast/.venv/lib/python3.11/site-packages/nox/tasks.py", line 63, in _load_and_exec_nox_module
    loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 940, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/home/christoph/fqz/frequenz-actor-forecast/noxfile.py", line 7, in <module>
    from frequenz.repo.config import nox
  File "/home/christoph/fqz/frequenz-actor-forecast/.venv/lib/python3.11/site-packages/frequenz/repo/config/__init__.py", line 257, in <module>
    from . import nox, setuptools
  File "/home/christoph/fqz/frequenz-actor-forecast/.venv/lib/python3.11/site-packages/frequenz/repo/config/setuptools/__init__.py", line 6, in <module>
    from . import grpc_tools
  File "/home/christoph/fqz/frequenz-actor-forecast/.venv/lib/python3.11/site-packages/frequenz/repo/config/setuptools/grpc_tools.py", line 19, in <module>
    import setuptools.command.build as _build_command
ModuleNotFoundError: No module named 'setuptools.command.build'
```

In my case, the setuptools version in use is 59.6.0.

After adding the required version to the dependencies section, the example works:

```diff
diff --git a/pyproject.toml b/pyproject.toml
index b056a9c..2fc035e 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -25,6 +25,7 @@ classifiers = [ # TODO(cookiecutter): Remove and add more if appropriate
 ]
 requires-python = ">= 3.11, < 4"
 dependencies = [ # TODO(cookiecutter): Remove and add more if appropriate
+  "setuptools == 67.7.2",
   "typing-extensions == 4.5.0",
   "frequenz-sdk == 0.20.0",
 ]
```

Specified license not considered everywhere

What happened?

The license MIT is hardcoded here:

https://github.com/frequenz-floss/frequenz-repo-config-python/blob/e1943d5b27bbddee6e4d93fdbf83155f85bcdcc8/cookiecutter/%7B%7Bcookiecutter.github_repo_name%7D%7D/pyproject.toml#L1C3-L1C3

and here:

What did you expect instead?

I expected it to be "Proprietary" as specified while creating the project.
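
A minimal sketch of the fix in the cookiecutter `pyproject.toml` template (the cookiecutter variable name is an assumption and may differ in the actual template):

```toml
# Hypothetical template fix: use the license chosen at project creation
# instead of the hardcoded "MIT".
license = { text = "{{cookiecutter.license}}" }
```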

Affected version(s)

No response

Affected part(s)

Cookiecutter template (part:cookiecutter)

Extra information

No response

Bump SDK version

What's needed?

There have been two new releases of the SDK since the cookiecutter template was created; we should encourage new projects to use the latest SDK.

Proposed solution

Bump the SDK version in the cookiecutter template.

Use cases

No response

Alternatives and workarounds

No response

Additional context

No response

Add support for coverage reporting

Coverage reporting can be added via pytest-cov and then visualized via codecov.

In the `pyproject.toml` file, we should add `pytest-cov` to the `dev-pytest` dependencies:

```toml
dev-pytest = [
  "pytest == 7.3.1",
  "pytest-cov == 4.0.0",
  "...",
]
```

Then basically just run `pytest --cov python.package.to.test` in the CI and use the `codecov/codecov-action` to upload the reports, as sketched below.
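
A minimal sketch of the CI steps (the package path, extras name, and action version are assumptions):

```yaml
# Hypothetical CI steps for coverage reporting.
- name: Run tests with coverage
  run: |
    pip install .[dev-pytest]
    pytest --cov frequenz.repo.config --cov-report=xml
- name: Upload coverage to Codecov
  uses: codecov/codecov-action@v3
  with:
    files: coverage.xml
```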

Optionally create and configure the GitHub repository

When a new repository is created, we should optionally allow the user to (see the sketch after this list):

  • Create the GitHub repository
  • Configure the GitHub repository with our common desirable configuration:
    • Set labels
    • Branch protection rules
    • Allowed merge methods
    • Merge queues
    • etc.
  • Push the created git repository to the new GitHub repository
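
A minimal sketch of what these optional steps could look like using the GitHub CLI (the org, repo, and label names are placeholders; branch protection rules would come from a local JSON file):

```sh
# Hypothetical sketch using the GitHub CLI (gh).
gh repo create frequenz-floss/my-new-repo --public --source=. --push
gh label create "part:docs" --repo frequenz-floss/my-new-repo --color 0075ca
gh api -X PUT "repos/frequenz-floss/my-new-repo/branches/v0.x.x/protection" \
  --input branch-protection.json  # rules defined locally
```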

Can't update project

What happened?

While trying to update my actor project, I got this error:

```
❰marenz❙~/frequenz/fcr(git≠sdk-update)❱✘≻ cookiecutter gh:frequenz-floss/frequenz-repo-config-python --directory=cookiecutter  --replay --replay-file .cookiecutter-replay.json --overwrite-if-exists
You've downloaded /home/marenz/.cookiecutters/frequenz-repo-config-python before. Is it okay to delete and re-download it? [yes]:
Traceback (most recent call last):
  File "/usr/bin/cookiecutter", line 33, in <module>
    sys.exit(load_entry_point('cookiecutter==2.1.1', 'console_scripts', 'cookiecutter')())
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/site-packages/click/core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/site-packages/click/core.py", line 1055, in main
    rv = self.invoke(ctx)
         ^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/site-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/site-packages/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/site-packages/cookiecutter/cli.py", line 194, in main
    cookiecutter(
  File "/usr/lib/python3.11/site-packages/cookiecutter/main.py", line 114, in cookiecutter
    result = generate_files(
             ^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/site-packages/cookiecutter/generate.py", line 373, in generate_files
    generate_file(
  File "/usr/lib/python3.11/site-packages/cookiecutter/generate.py", line 173, in generate_file
    tmpl = env.get_template(infile_fwd_slashes)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/site-packages/jinja2/environment.py", line 1010, in get_template
    return self._load_template(name, globals)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/site-packages/jinja2/environment.py", line 969, in _load_template
    template = self.loader.load(self, name, self.make_globals(globals))
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/site-packages/jinja2/loaders.py", line 138, in load
    code = environment.compile(source, name, filename)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/site-packages/jinja2/environment.py", line 768, in compile
    self.handle_exception(source=source_hint)
  File "/usr/lib/python3.11/site-packages/jinja2/environment.py", line 936, in handle_exception
    raise rewrite_traceback_stack(source=source)
  File ".github/labeler.yml", line 13, in template
    - "**/*.md"
  ^^^^^^^^^^^^^^
jinja2.exceptions.TemplateAssertionError: No filter named 'as_identifier'.
  File ".github/labeler.yml", line 13
    #   - "src/frequenz/{{cookiecutter.type}}/{{cookiecutter.name | as_identifier}}/module/**"
```

What did you expect instead?

No error.

Affected version(s)

No response

Affected part(s)

Cookiecutter template (part:cookiecutter)

Extra information

No response

Lint examples in `__init__.py` files

What happened?

Examples in `__init__.py` files are not being linted; these files are completely skipped.

What did you expect instead?

Examples in __init__.py files should be checked.

Extra information

This happens because we are explicitly excluding those files: Sybil fails to import `__init__.py` files, raising an error like:

```
self = <sybil.document.PythonDocStringDocument object at 0x7f36e8644390>
example = <Example path=/home/luca/devel/repo-config/src/frequenz/repo/config/nox/__init__.py line=12 column=1 using <bound meth...eBlockParser.evaluate of <frequenz.repo.config.pytest.examples._CustomPythonCodeBlockParser object at 0x7f36e8db7b50>>>

    def evaluator(self, example: Example) -> Optional[str]:
        """
        Imports the document's source file as a Python module when the first
        :class:`~sybil.example.Example` from it is evaluated.
        """
>       module = import_path(Path(self.path))

.nox/pytest_min/lib/python3.11/site-packages/sybil/document.py:153: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

path = PosixPath('/home/luca/devel/repo-config/src/frequenz/repo/config/nox/__init__.py')

    def import_path(path: Path):
        container = path
        while True:
            container = container.parent
            if not (container / INIT_FILE).exists():
                break
        relative = path.relative_to(container)
        if relative.name == INIT_FILE:
            parts = tuple(relative.parts)[:-1]
        else:
            parts = tuple(relative.parts)[:-1]+(relative.stem,)
        module = '.'.join(parts)
        try:
            return importlib.import_module(module)
        except ImportError as e:
>           raise ImportError(
                f'{module!r} not importable from {path} as:\n{type(e).__name__}: {e}'
            ) from None
E           ImportError: 'config.nox' not importable from /home/luca/devel/repo-config/src/frequenz/repo/config/nox/__init__.py as:
E           ModuleNotFoundError: No module named 'config'

.nox/pytest_min/lib/python3.11/site-packages/sybil/python.py:40: ImportError
```

We need to investigate this and open an issue in Sybil if appropriate.

Generated docs default to chronologically latest release, not numerical latest

What happened?

The generated docs use the chronologically latest GitHub release, not the numerically latest.

This also affects GitHub releases.

What did you expect instead?

I expected to see the highest stable version number as the default release.

Affected version(s)

No response

Affected part(s)

Documentation (part:docs)

Extra information

No response

Add CODEOWNERS file

Example:

```
# Each line is a file pattern followed by one or more owners.
# Owners will be requested for review when someone opens a pull request.

# Fallback owner.
# These are the default owners for everything in the repo, unless a later match
# takes precedence.
* @frequenz-floss/SOME-team
```

Improve the example code for an app repository

The example we are including in `src/frequenz/{type}/{name}/__init__.py` is not great, especially for apps, where we could have an example that is actually some sort of Hello World app using the SDK.

Use taplo to check and format `toml`s

Just as we automatically check and format Python files, it would be good to do the same with `pyproject.toml` (and any other TOML file), to make sure we have consistent formatting across projects and can detect errors early.

taplo is a nice tool written in Rust that can do both (and even check a TOML file semantically against a JSON schema).
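
A minimal usage sketch (subcommand names should be double-checked against the taplo docs):

```sh
# Hypothetical CI usage of taplo.
taplo fmt --check pyproject.toml  # fail if the formatting differs
taplo lint pyproject.toml         # validate, optionally against a JSON schema
```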

Add tests

We should add at least unit tests, but it would also be nice to add integration tests, including some example repository, to make sure things work as expected.

Improve the example code for an actor repository

The example we are including in `src/frequenz/{type}/{name}/__init__.py` is not great, especially for actors, where we could have an example that is actually some sort of Hello World actor using the SDK.

Add PR Labeler

Workflow

Add a `.github/workflows/labeler.yml` workflow:

```yaml
name: Pull Request Labeler

# XXX: !!! SECURITY WARNING !!!
# pull_request_target has write access to the repo, and can read secrets. We
# need to audit any external actions executed in this workflow and make sure no
# checked out code is run (not even installing dependencies, as installing
# dependencies usually can execute pre/post-install scripts). We should also
# only use hashes to pick the action to execute (instead of tags or branches).
# For more details read:
# https://securitylab.github.com/research/github-actions-preventing-pwn-requests/
on: [pull_request_target]

jobs:
  Label:
    permissions:
      contents: read
      pull-requests: write
    runs-on: ubuntu-latest
    steps:
      - name: Labeler
        # Only use hashes, see the security comment above
        uses: actions/labeler@e54e5b338fbd6e6cdb5d60f51c22335fc57c401e  # 4.0.1
        with:
          repo-token: "${{ secrets.GITHUB_TOKEN }}"
```

Configuration

Configure the workflow by creating a `.github/labeler.yml` file.

For example:

```yaml
# Configuration for the Labeler GitHub action, executed by
# .github/workflows/labeler.yml.
#
# The basic syntax is [label]: [path patterns].
#
# For more details on the configuration please see:
# https://github.com/marketplace/actions/labeler

"part:docs":
  - "**/*.md"
  - LICENSE

"part:tests":
  - "tests/**"

"part:tooling":
  - ".git*"
  - ".git*/**"
  - "**/*.toml"
  - "**/*.ini"
  - CODEOWNERS
  - MANIFEST.in
  - "*requirements*.txt"
  - setup.py
  - setup.cfg

"part:BLAH":
  - "src/frequenz/BLAH/**"
```

  • Ideally make sure all the part:xxx labels defined in the repo are present.

SDK Example:

Add keyword labeler app

Add the KeywordLabeler app (https://github.com/ZeWaka/KeywordLabeler) configuration in `.github/keylabeler.yml`.

For example:

```yaml
# KeywordLabeler app configuration. For more information check:
# https://github.com/ZeWaka/KeywordLabeler#readme

# Determines if we search the title (optional). Defaults to true.
matchTitle: true

# Determines if we search the body (optional). Defaults to true.
matchBody: true

# Determines if label matching is case sensitive (optional). Defaults to true.
caseSensitive: true

# Explicit keyword mappings to labels. Form of match:label. Required.
labelMappings:
  "part:docs": "part:docs"
  "part:tests": "part:tests"
  "part:tooling": "part:tooling"
  "part:❓": "part:❓"
```

  • Make sure all the part:xxx labels defined in the repo are present.
  • Ideally sort the labelMappings.

SDK example: https://github.com/frequenz-floss/frequenz-sdk-python/blob/v0.x.x/.github/keylabeler.yml

Disable `pylint`'s `unsubscriptable-object` check

pylint's `unsubscriptable-object` check is unreliable and has false positives. For example:

"""Test pylint."""


class AClass:
    """Test pylint."""

    a_map: dict[str, int] | None = None

    def __str__(self) -> str:
        """Test pylint."""
        return "None" if not self.a_map else self.a_map["a"]


class BClass(AClass):
    """Test pylint."""

    a_map = {"a": 1}

This produces the error `E1136: Value 'a_map' is unsubscriptable (unsubscriptable-object)` for the `self.a_map["a"]` expression, which is wrong.

This check is also redundant, as it is a type check, which is already done (better) by mypy.

We should add this to the cookiecutter pyproject.toml template:

```toml
[tool.pylint.messages_control]
disable = [
    # ...
    "unsubscriptable-object",
]
```

The SDK is already using this ignore:

Add golden tests for cookiecutter

A very easy way to make sure templates don't introduce regressions is golden testing.

This basically consists of testing the generated files manually and, once they are verified to work well, storing them as the "golden copy". Golden tests then just compare newly generated files with the golden copies.

This is an easy way to test complex outputs (like in this case, where testing involves running GitHub Actions, checking that owners are assigned to PRs, etc.) and to make sure regressions are not introduced, for example during refactoring.

There are 2 main golden tests libraries in Python:

Both compare similarly in terms of usage and sustainability according to Snyk, so we should compare the features and general feel to select one.

Update: It looks like pytest-goldie might be a fake. The source is not available on PyPI, and the home page gives a 404.
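
Even without a library, a hand-rolled golden test is small; a minimal sketch (the `run_cookiecutter` helper and the golden directory layout are hypothetical):

```python
# Hypothetical golden test: compare freshly generated files to stored copies.
from pathlib import Path

GOLDEN = Path(__file__).parent / "golden" / "actor"  # assumed location


def test_actor_template_matches_golden(tmp_path: Path) -> None:
    run_cookiecutter(output_dir=tmp_path, type="actor")  # hypothetical helper
    for golden_file in (p for p in GOLDEN.rglob("*") if p.is_file()):
        generated = tmp_path / golden_file.relative_to(GOLDEN)
        assert generated.read_text() == golden_file.read_text()
```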

Automatically add `.pyi` files to manifest

When using the `setuptools.grpc` tools, the user needs to specify `recursive-include py *.pyi` in the `MANIFEST.in` file; otherwise setuptools (or setuptools-scm) doesn't seem to realize that the files should be included (it's not clear why it does realize the `.py` files should be).

There is supposed to be a way to include them automatically if the sub-command `build_py` reports the new `*.pyi` files:

This is probably also needed to support editable installs.

Generate docs using mkdocs

We should also include any utilities that other projects need to generate their docs.

SDK example:

API (protobuf) docs generation example:

This will generate the docs as a markdown file and put it in the GitHub job step summary, but we can put it in the mkdocs website instead.

```yaml
  gen-docs:
    runs-on: ubuntu-20.04

    steps:
      - name: Fetch sources
        uses: actions/checkout@v3
        with:
          submodules: true

      - name: Generate documentation
        run: |
          docker run --rm -v $PWD:$PWD pseudomuto/protoc-gen-doc \
            -I $PWD/proto \
            -I $PWD/submodules/api-common-protos \
            -I $PWD/submodules/frequenz-api-common/proto \
            --doc_opt=markdown,gen-docs.md --doc_out=$PWD \
            $(find $PWD/proto -name '*.proto')

      - name: Write summary
        run: cat gen-docs.md > $GITHUB_STEP_SUMMARY
```

Add a protolint checker to the CI

Right now we are using a custom command to download the protolint tool:

```yaml
jobs:
  protolint:
    runs-on: ubuntu-20.04

    steps:
      - name: Fetch sources
        uses: actions/checkout@v3
        with:
          submodules: true

      - name: Get protolint
        run: |
          # checksum taken from: https://github.com/yoheimuta/protolint/releases/download/v0.44.0/checksums.txt
          echo "33627c1fd4392edc9363b414651f60692286c27e54424fc535ebb373a47a3004  protolint_0.44.0_Linux_x86_64.tar.gz" > checksum.txt
          curl -LO https://github.com/yoheimuta/protolint/releases/download/v0.44.0/protolint_0.44.0_Linux_x86_64.tar.gz \
          && sha256sum --check checksum.txt \
          && tar xf protolint_0.44.0_Linux_x86_64.tar.gz \
          && chmod +x protolint \
          && sudo mv protolint /usr/local/bin/protolint
      - name: Run protolint
        run: protolint lint proto
```

There are two main problems with this approach:

  1. dependabot can't check for updates for us.
  2. We don't get nice comments in PRs with the issues it finds.

There are (at least) two GitHub Actions that we can look at to see if they solve these problems more nicely:

If these are not good enough, there is also a PyPI package for protolint that could be used, at least to get automatic updates:

Add cookiecutter variables description

What's needed?

When creating a new project using cookiecutter, it is not at all obvious what some variables are used for, what the input format should be, etc.

Proposed solution

Ideally, show a help message before the variables are prompted (it might be doable via the pre-generation hook), but also show this information on the documentation website. We can create one markdown-formatted file with this information, display it in the hook, and embed the same file (as a snippet) in the documentation, so there is only one source of truth.

Use cases

No response

Alternatives and workarounds

No response

Additional context

No response

Add git submodules dependency checks

API projects use Git submodules to manage dependencies for protobuf files.

We use dependabot to keep dependencies up to date, and while dependabot does support git submodules as a dependency management system, its support is pretty basic: submodules are always updated to the latest default head, and there is no way to configure it to update only to tags, much less to follow semver.

The current alternative (as some people did in that issue) is to implement our own workflow to check this (or send a PR to dependabot to improve support).

The most basic approach would be to schedule a workflow that runs periodically, just looking for new tags in the submodules (possibly following semver) and failing the job if there are updates, so at least we get notified when something needs updating (see the sketch below). The update process would still be manual.

Please note that this is only needed (for now) for the google common proto files, as for internal dependencies we'll still get notified about new releases by the Python ecosystem via dependabot.
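
A minimal sketch of the periodic check (assumes submodules live under `submodules/` and are tagged `vX.Y.Z`):

```sh
# Hypothetical check: fail if any submodule has a newer tag than the one in use.
set -e
for sub in submodules/*; do
  git -C "$sub" fetch --tags --quiet
  current=$(git -C "$sub" describe --tags --abbrev=0 2>/dev/null || echo "none")
  latest=$(git -C "$sub" tag --sort=-v:refname | head -n1)
  if [ "$current" != "$latest" ]; then
    echo "::warning::$sub is at $current but $latest is available"
    exit 1
  fi
done
```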

Improve the instructions to generate a project using cookiecutter

What's needed?

Many users reported problems while trying to generate a new project. The main mistake was trying to modify the cookiecutter command, which should be typed literally: `cookiecutter gh:frequenz-floss/frequenz-repo-config-python --directory=cookiecutter`.

  • Some users tried to put the name of their project in `gh:frequenz-floss/my-project`
  • Other users wanted to put the directory they wanted to generate in `--directory=my-project`

Proposed solution

We should make it clearer that this command needs to be typed as is, and that all information about the project to be generated will be prompted for.

Use cases

No response

Alternatives and workarounds

No response

Additional context

No response

Lint and test examples in `examples` and docstrings automatically

Projects normally have examples in the examples/ directory and in docstring documentation. The ones in examples/ are currently being linted but never run. The ones in docstrings are neither linted nor run. All of this is very likely to break when there are updates.

There are mainly two types of examples in documentation:

  • doctest: Usually any block starting with >>> in a docstring
  • Markdown examples. Could be in:
    • docstrings (Python files)
    • Other high-level documentation in docs/.

We mainly use markdown examples, so we should focus on that first.

There are several tools to actually test those examples and make sure they work, so we should pick one:
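
As one option, a minimal sketch of a Sybil-based `conftest.py` that collects Markdown code blocks as tests (parser module paths vary between Sybil versions, so treat this as an assumption):

```python
# Hypothetical conftest.py using Sybil to run Markdown examples as tests.
from sybil import Sybil
from sybil.parsers.markdown import PythonCodeBlockParser

pytest_collect_file = Sybil(
    parsers=[PythonCodeBlockParser()],
    patterns=["*.py", "*.md"],
).pytest()
```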

Improve the example code for a model repository

The example we are including in `src/frequenz/{type}/{name}/__init__.py` is not great, especially for models, where we could have an example that is actually some sort of Hello World model using the SDK.

Replace `nox`

What's needed?

We need tests to be fast.

Proposed solution

Don't use nox anymore. Instead, run each tool individually, or ship our own tool to run all the linting and tests if necessary (for example: `frequenz-repo-config lint` and `frequenz-repo-config test`).

Use cases

All projects, but especially this one, which in integration tests needs to run nox for 5 different generated projects on top of running nox for itself. All of that is also done twice: once for the minimum supported dependencies and once for the currently available dependencies. So it adds up.

Alternatives and workarounds

pre-commit might be an alternative to have a standardized way to invoke the tools, and to define which tools should be used to test changes before creating a PR. We don't necessarily need to plug in the pre-commit hook; pre-commit can also be invoked manually, and we'd take advantage of a well-known and widely adopted tool. A sketch follows.
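
A minimal sketch of such a `.pre-commit-config.yaml` (hook choices and versions are illustrative, not a decided configuration):

```yaml
# Hypothetical pre-commit configuration running the usual linters.
repos:
  - repo: https://github.com/psf/black
    rev: 23.3.0
    hooks:
      - id: black
  - repo: https://github.com/pycqa/isort
    rev: 5.12.0
    hooks:
      - id: isort
  - repo: https://github.com/pycqa/flake8
    rev: 6.0.0
    hooks:
      - id: flake8
```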

Additional context

Tests for all repos are very slow with nox, because it needs to create a new virtual environment for each session and, inside it, install all dependencies over and over again. Right now we have 5 sessions, and we need to add more (pyupgrade, protolint, etc.).

Also, for our use case nox brings practically zero advantages. nox is supposed to be useful for testing with multiple Python or library versions, where having a separate virtual environment is really needed, but we don't use that: we rely on the CI to test with different Python versions, and we have separate sessions for the min/max dependency tests.

Also, for the CI there is absolutely no need to create a venv at all, as it starts with a clean environment where we can pip install directly, adding more unnecessary time to the mix.

The tests for this repository are currently taking ~50 minutes to complete 🐢

Labeler Template: Add negative example to exclude files

What's needed?

While updating the .github/labeler.yml file I found myself needing to specify a pattern that excludes a path.

Proposed solution

Adding an example that shows negative matching to the existing two in the file, e.g. taken from their website:

```yaml
# Add the 'frontend' label to any change to *.js files as long as `main.js` hasn't changed
frontend:
- any: ['src/**/*.js']
  all: ['!src/main.js']
```

Use cases

No response

Alternatives and workarounds

No response

Additional context

No response
