django-asv's Introduction

Django ASV

This repository contains the benchmarks for measuring Django's performance over time.

The benchmarking process is carried out by the benchmarking tool airspeed velocity (asv), and the results can be viewed here.

Running the benchmarks


If you have installed Anaconda or miniconda

Conda is used to run the benchmarks against different versions of Python.

If you already have conda or miniconda installed, you can run the benchmarks with the commands

pip install asv
asv run

to run the benchmarks against the latest commit.
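
To iterate on a single benchmark or group of benchmarks, asv run also accepts a --bench option taking a regular expression matched against benchmark names (flag name per the asv documentation); for example, to run only the template benchmarks:

asv run --bench template_benchmarks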

If you have not installed Anaconda or miniconda

If you do not have conda or miniconda installed, change the contents of the file asv.conf.json as follows so that virtualenv is used to run the benchmarks:

{
    "version": 1,
    "project": "Django",
    "project_url": "https://www.djangoproject.com/",
    "repo": "https://github.com/django/django.git",
    "branches": ["main"],
    "environment_type": "virtualenv",
    "show_commit_url": "http://github.com/django/django/commit/",
}
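
If you want the virtualenv runs to cover specific Python versions, asv.conf.json also supports a pythons key alongside the settings above; a minimal sketch (the version numbers are only an illustration, and each listed interpreter must be available on your PATH):

    "pythons": ["3.10", "3.12"]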

and run the benchmarks using the commands

pip install asv
asv run

Note: ASV prompts you to set a machine name on the first run. Please do not set it to 'ubuntu-22.04', 'windows-2022', or 'macos-12', as the results for machines with these names are currently stored in the repository.
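
If you would rather configure the machine non-interactively, asv's machine command can record a name up front. The --machine and --yes options below are taken from the asv documentation and should be double-checked with asv machine --help; choose any name other than the reserved ones above:

asv machine --machine my-benchmark-box --yes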

Comparing Benchmark Results Of Different Commits Or Branches


Benchmark results of different commits or branches can be compared using the following commands:

asv run <commit1 SHA or branch1 name>
asv run <commit2 SHA or branch2 name>
asv compare <commit1 SHA or branch1 name> <commit2 SHA or branch2 name>
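
For example, to compare a hypothetical feature branch against main (the branch name here is made up, and the ^! suffix is git rev-list syntax that restricts each run to that single commit):

asv run main^!
asv run my-feature-branch^!
asv compare main my-feature-branch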

Writing New Benchmarks And Contributing


  • Fork this repository and create a new branch

  • Install pre-commit and run pre-commit install to install pre-commit hooks which will be used to format the code

  • Create a new directory with the name benchmark_name under the appropriate category of benchmarks

  • Add the files __init__.py and benchmark.py to the directory

  • Add the directory to the list of INSTALLED_APPS in settings.py

  • Use the following format to write your benchmark in the file benchmark.py (a filled-in example follows this list)

        from ...utils import bench_setup

        class BenchmarkClass:

            def setup(self):
                bench_setup()
                # if your benchmark makes use of models, use
                # bench_setup(migrate=True)
                ...

            def time_benchmark_name(self):
                ...
  • Commit changes and create a pull request
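
As a filled-in illustration of the format above, a hypothetical benchmark that exercises a model might look like the following; the Book model, its fields, and the data volume are invented for the example:

from ...utils import bench_setup

from .models import Book


class QuerysetCount:
    def setup(self):
        # the benchmark uses models, so apply migrations for this app
        bench_setup(migrate=True)
        Book.objects.bulk_create(Book(title=f"Book {i}") for i in range(100))

    def time_queryset_count(self):
        Book.objects.count()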

django-asv's People

Contributors

adamchainz, carltongibson, deepakdinesh1123, pre-commit-ci[bot], sarahboyce, smithdc1


django-asv's Issues

Add CI

Hi @deepakdinesh1123 👋

I think it would be useful to add a few items for CI which run on each pull request and push to main.

  • lint (black, isort, flake8)

  • asv run: to run the benchmarks against a single commit. This will help to show the benchmark suite runs at this point in time. It will also help with PR reviews.
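
For example, those two jobs could boil down to commands along these lines (exact flags should be checked against each tool's documentation; asv's --quick option runs each benchmark only once, which is enough for a smoke test):

black --check .
isort --check-only .
flake8
asv run HEAD^! --quick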

What do you think?

Moving repo to django org

I have added the workflow to run the benchmarks when a pull request is labeled in the django/django repo here. Mariusz Felisiak mentioned in a comment that moving this repo to the django organization would be better, as the workflow then wouldn't depend on any user repositories.

Should the repo be moved? Are there any changes that I need to make before moving?

Migrating benchmarks from djangobench which use run_comparison_benchmark

I was migrating some of the benchmarks from djangobench and noticed that the benchmarks default_middleware and multi_value_dict use the utils.run_comparison_benchmark method to compare two benchmarks. ASV does not support a direct comparison between different benchmark methods, so how should I implement this?

Azure pipelines setup

Over the past few days, I set up an Azure pipeline to run the benchmarks in the benchmark repo when a pull request is made in the main repo (a comment trigger can also be added). Since both the Django and djangobench repositories belong to the organization, creating an access token would not be required. Can I use this method?

Note: it would require adding an azure-pipelines.yaml file to the Django repo.

Expand README.

So one checks out the repo, installs the requirements, runs asv run, ... then what?

  • We should probably point to the ASV docs, and mention publish/preview.
  • Plus maybe how to generate a few runs for (e.g.) daily commits for last week, weekly commits for last month? (So there's something to see).
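
For reference, the two asv commands mentioned in the first point are:

asv publish   # collate the collected results into a static HTML report
asv preview   # serve the generated report locally in a browser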

Plus, also, a little about the project, and how to join in.

Related is the discussion on the forum about the results folder: https://forum.djangoproject.com/t/django-benchmarking-project/13887/11 — Can we commit this back to the repo (and accept PRs maybe) so that we can build up the data over time?

I would think that running once a day (or even once a week, once we've got some data) is more than enough to spot regressions, no?

(Thoughts? 🤔 — We're working out the answers.)

Adding request response benchmarks

In the TODO file in djangobench, it was mentioned that a running test server might be required for the benchmark, so I tried to do this with a sample Django project and the subprocess module:

import subprocess

class Benchmark:
    def setup(self):
        self.process = subprocess.Popen(["python", "manage.py", "runserver"])

    def teardown(self):
        self.process.kill()

    def time_response(self):
        # benchmark...
        ...

But when the benchmarks ran and tried to access the manage.py file in the Django project, I got the error LookupError: No installed app with label 'admin'.

After this I tried to set it up with Docker, but could not find a way to provide the commit hash so that a particular commit of Django gets installed. It could be implemented in the workflow by using the Python Docker SDK, or a shell script that passes the commit to the Dockerfile, to build images pinned to particular commit hashes of Django, and then using a script to start them and benchmark them with ASV. Shall I go with this method?

Are there any other ways in which this can be done? Should I also try options other than ASV?

Prevent `RuntimeWarning`s about naive datetimes

Running all benchmarks locally with asv’s --show-stderr option, I saw the below warning repeated many times from query_benchmarks.queryset_filter_chain.benchmark.FilterChain.time_filter_chain:

/.../python3.12/site-packages/django/db/models/fields/__init__.py:1669: RuntimeWarning: DateTimeField Book.date_published received a naive datetime (2024-02-23 15:00:07.507742) while time zone support is active.
  warnings.warn(

It was repeated for both date_created and date_published.

This should be fixed to avoid the possible output spam and ensure the benchmark simulates a typical situation.
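
A likely fix, assuming the benchmark populates these fields directly (the title field below is invented for the example), is to create timezone-aware values with django.utils.timezone instead of naive datetime objects:

from django.utils import timezone

Book.objects.create(
    title="Example",
    date_created=timezone.now(),     # aware datetime, no RuntimeWarning
    date_published=timezone.now(),
)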

Repeatable benchmarks

https://smithdc1.github.io/django-asv/#template_benchmarks.template_compilation.benchmark.TemplateCompile.time_template_compile

So we are starting to build up some history each day now, which is great.

This chart shows two things:

  • The long tail of history, which was run on a single server. This had repeatable results.

  • The more recent daily commits, which are run on different machines and are much noisier.

@deepakdinesh1123 any views on making this more repeatable?

@carltongibson any news from the ops team?
