rust-lang / rustc-perf

Website for graphing performance of rustc

Home Page: https://perf.rust-lang.org


rustc-perf's Introduction

Rust Compiler Performance Monitoring & Benchmarking

This repository contains two primary crates:

  • collector: gathers data for each bors commit
  • site: displays the data and provides a GitHub bot for on-demand benchmarking

Additional documentation on running and setting up the frontend and backend can be found in the README files in the collector and site directories.

Additional documentation on the benchmark programs can be found in the README file in the collector/compile-benchmarks and collector/runtime-benchmarks directories.

rustc-perf's People

Contributors

alexcrichton, arielb1, blyxyas, chengr4, dependabot-preview[bot], dependabot-support, ecstatic-morse, erikdesjardins, eth3lbert, gnzlbg, heroickatora, homersimpsons, ishitatsuyuki, jyn514, kennytm, klensy, kobzol, lqd, mark-simulacrum, michaelwoerister, miwig, nnethercote, nrc, pnkfelix, ralfjung, rylev, s7tya, simonsapin, tgnottingham, wesleywiser


rustc-perf's Issues

Display absolute times, not just percentages, on the front page

Percentages are useful but also kind of ... surprising. For example, a 1000% regression is "undone" by a 90% improvement. It'd be nice to see absolute times as well (also -- a percentage can look huge, but actually represent a small amount of time). It might be that we want to display this data only on hover or something.
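The asymmetry above is easy to verify with a line of arithmetic. A minimal sketch (the helper name is hypothetical) showing that a +1000% regression needs roughly a 90.9% improvement, not 90%, to be undone:

```rust
// Apply a percentage change to a time: +1000% multiplies by 11, -90.9% by ~0.091.
fn apply_pct(time: f64, pct: f64) -> f64 {
    time * (1.0 + pct / 100.0)
}
```

With a 1.0 s baseline, `apply_pct(1.0, 1000.0)` gives 11.0 s, and undoing that requires a change of −(10/11)·100 ≈ −90.9%, which is why raw percentages can mislead on the front page.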

Investigate cause of empty files

These are the files I found to be empty:

regex.0.1.30--2016-06-21-04-21-09.json
issue-32062-equality-relations-complexity--2016-06-21-04-21-09.json
helloworld--2016-06-21-04-21-09.json
piston-image-0.3.11--2016-06-21-04-21-09.json
regex-macros.0.1.30--2016-06-21-04-21-09.json
inflate-0.1.0--2016-06-21-04-21-09.json
rust-encoding.0.2.32--2016-06-21-04-21-09.json
hyper.0.5.0--2016-06-21-04-21-09.json
jld-day15-parser--2016-06-21-04-21-09.json
issue-32278-big-array-of-strings--2016-06-21-04-21-09.json
rustc--2016-06-21-04-21-09.json
html5ever-2015-05-15--2016-06-21-04-21-09.json

Rewrite to split out the total meta-phase from the codebase

Not necessarily required, and may not even be a good idea, but a thought.

Rewrite the make_times function to return a Timings struct that keeps the total phase separate, since it is a meta-phase. This is a large change, since it affects most of the codebase; it would also disrupt the phase-iteration code, which probably expects the total phase to exist.

Use a database instead of always loading from directory

From IRC:

@nrc:
Long-term, I think that we should have a permanent DB with the processed data in, so startup is just booting up the DB and we only process any dataset once
That means we need to see the diff in a commit, rather than read everything in
but that shouldn't be too hard

Loading from a diff/commit is also useful for the GitHub integration (since we could then fairly quickly just restart the server). cc #29.

Display sub-pass timings

I think this has two fixes:

  • (short term) to link to the raw logs in some manner for easy inspection
  • (long term) modify the processed data to include the sub-passes

From IRC:

21:09 simulacrum: cc nrc: It looks like the new sub-passes added by nagisa recently aren't available for inspection on perf.rlo, is that because the scripts erase them? I suspect that's the case, but I'm not sure...
21:13 nrc: yeah, it ignores sub-passes, only handles top level passes
21:18 simulacrum: Hmm, some way to show those would be good
21:18 simulacrum: Since right now they aren't doing much good unless someone wants to drill down
21:18 simulacrum: but I can't think of an easy way to do it
21:18 simulacrum: I'll file an issue

Automatic update upon receipt of a GitHub webhook

  • Add code to pull from GitHub
  • Add a /on_push path to server.rs (unknown if GET or POST) which will update the state.
    • For now, we can reinitialize the entire state. Eventually, we'll want incremental updates.
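The routing described above can be sketched in a few lines; this is not the actual server.rs wiring, and the handler and path name are taken from the bullet list, with the reload left as a placeholder:

```rust
// Sketch: dispatch an incoming request path; "/on_push" triggers a full
// state reinitialization for now (incremental updates would come later).
fn handle(path: &str) -> &'static str {
    match path {
        "/on_push" => {
            // reload_all_state(); // hypothetical: re-pull and reprocess everything
            "state reloaded"
        }
        _ => "not found",
    }
}
```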

Ability to run offline

I really want the ability to run these benchmarks (or some subset) offline. This would complement #1 -- basically I could check a local build, save its results, and compare them against master (again running locally).

Show when last update of data occurred

Include a link to the data repository as a [manual] way to check whether there have been updates since the page was loaded / since the server's last known update.

runtime benchmarks

We're using this issue to track work on building a benchmark suite for tracking the efficiency of generated code (as opposed to the efficiency of compilation itself).

TODO: It might be good to copy over the list of items from the description of rust-lang/rust#31265, or figure out some other way to encode/curate them as a todo list of investigation points.

(original description follows)


It'd be great if there were an option to add a make bench target to makefiles that would run runtime benchmarks. We could scrape the output from cargo bench, for example, and include those as data points.

I could then merge the entries from https://github.com/nikomatsakis/rust-runtime-benchmarks and use those.
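Scraping `cargo bench` output, as suggested above, mostly means pulling the name and ns/iter figure out of each result line. A std-only sketch, assuming the usual libtest line shape `test NAME ... bench: 1,234 ns/iter (+/- 56)`:

```rust
// Extract (benchmark name, ns/iter) from one line of `cargo bench` output.
// Returns None for lines that don't look like a bench result.
fn parse_bench_line(line: &str) -> Option<(String, u64)> {
    let rest = line.strip_prefix("test ")?;
    let (name, tail) = rest.split_once(" ... bench:")?;
    // The number may contain thousands separators, e.g. "1,234".
    let num = tail.trim().split_whitespace().next()?.replace(',', "");
    Some((name.trim().to_string(), num.parse().ok()?))
}
```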

Hyper Error

The log contained ERROR:hyper::server: request error = Version. Nothing more is known.

Factor out title and summary HTML into one place

The overall decision is that moving towards generating shared HTML in shared.js and inserting it into the relevant page is the best approach.


Currently, the menu and the settings HTML are mostly duplicated from page to page. This makes editing them difficult; I propose that we move to generating the HTML from a template. This can be done at runtime in Rust, at runtime in JS, or statically.

If we decide on at-runtime, I think doing it in JS makes more sense. Something like React or Vue.js may be worth looking into, though they may be overly complicated.

@dikaiosune: Can you provide any recommendations based on your work with the dashboard?

cc @nrc

Move integration tests' expected outputs out into files

Edit: Integration and unit tests are now added for at least some of the functionality; the main priority now is to move the integration tests' expected outputs into their own files. (This will make it easier to verify that what we think changed actually changed.)


Extracted conversation; primary point being that we'd like unit tests, but for them to be possible, modularization of the code is necessary.


Mark-Simulacrum:

I'd like to add tests, but I've been struggling to come up with good isolated portions of the code which can be tested. We can definitely do some form of overall diff-like testing, though, I think (create an InputData with predetermined state and call the server functions to generate output). That shouldn't be too difficult, as the server functions don't depend on a request being passed, just the input data and [optionally] a body.

nrc:

Yeah, how we test this is a big question for me. I think in the medium term we should refactor to make portions testable, one piece at a time, and add unit tests - test coverage is a big goal for me for the rewrite.
In the short-term, that is probably too much to do. Some diff testing with real data from the timing repo would be good to do, if it is not too complex.

Summary page inconsistency between Rust and JS versions

#31 still doesn't bring exact compatibility with the JS in terms of the results of the summary page.

I'm not sure why this is (I've tried changing the percent logic to what I thought it was before, and a few more things), but nothing seems to bring us back even close to the old JS numbers.

Also, comparing the JS results and the Rust results makes me think a few more of these bugs are hiding from us: while the graph of compile times has dropped significantly, the summary page from the Rust version insists that compile times have gone up by ~13% (last I checked), while the JS page displays a ~55% drop, which feels more realistic when looking at the graphs.

Make it easy to go from front page to detailed info

When you see a regression, it'd be great to be able to click on the test name and see more detailed data for just that test -- for example, the graph for total time on that benchmark and also the graphs for all individual compiler passes on that benchmark. This would probably be a simple step that would address #56 in an easier way. That is, if we could just click on the regression, we'd easily see if one particular compiler pass seems to have changed dramatically.

Remove group by selector

The group by selector can be replaced with some JS logic that groups by phase if any phases other than total/maximum are selected, otherwise grouping by crate.
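The proposed replacement logic is small enough to state directly. A sketch (enum and function names hypothetical; the "total"/"maximum" phase names come from the issue text):

```rust
// Group by phase when any phase other than "total"/"maximum" is selected;
// otherwise group by crate.
#[derive(Debug, PartialEq)]
enum GroupBy {
    Crate,
    Phase,
}

fn choose_group_by(selected_phases: &[&str]) -> GroupBy {
    if selected_phases.iter().any(|p| *p != "total" && *p != "maximum") {
        GroupBy::Phase
    } else {
        GroupBy::Crate
    }
}
```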

Fix CI

Currently can't run Rustfmt, which means all tests are failing.

Stop merging multiple runs with the same datetime into one

When loading runs from disk, we can find more than one run has occurred on a given date. When this happens, we ideally want to keep all of the data. I propose that we continue to merge data, but instead of doing so by date, we should do so by the associated commit. The commit is what gives each run "uniqueness"; the date is associated metadata. We should still sort both data_ vectors by the date, in order to ease display/manipulation and allow binary search functionality.

See #50 and #28 for previous discussion.
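The proposal above, keyed on commit rather than datetime, can be sketched as follows (the Run struct and field types are hypothetical, not the actual loader types):

```rust
use std::collections::HashMap;

struct Run {
    commit: String,
    date: u64, // epoch seconds; associated metadata, not the merge key
    data: Vec<f64>,
}

// Merge runs sharing a commit SHA, then sort by date so display code and
// binary search over dates keep working.
fn merge_by_commit(runs: Vec<Run>) -> Vec<Run> {
    let mut by_commit: HashMap<String, Run> = HashMap::new();
    for run in runs {
        match by_commit.get_mut(&run.commit) {
            Some(existing) => existing.data.extend(run.data),
            None => {
                by_commit.insert(run.commit.clone(), run);
            }
        }
    }
    let mut merged: Vec<Run> = by_commit.into_values().collect();
    merged.sort_by_key(|r| r.date);
    merged
}
```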

Align table cells with numbers to the right

Requested on IRC, and generally a good idea.

lqd: simulacrum nrc: numbers in table are generally aligned right rather than centered, to allow for easier vertical scanning and comparisons between rows :)

Error handling on the backend

Potentially should be merged with #6.

Currently, we use error-chain, but we don't use its full potential. Do we need error-chain?

Unknown intended purpose of overwriting, data loss

I'm not sure what the original intended purpose of the code involving c_benchmarks_add in load.rs was. However, I noticed that this means we overwrite existing data in these cases. None of these are recent, so it may not be too big a deal, but wanted to make sure we were aware of this, since it might lead to data loss in the future as well.

hyper.0.5.0 in hyper.0.5.0--2015-05-25-14-31-35.json
helloworld in helloworld--2015-05-25-14-31-35.json
html5ever-2015-05-15 in html5ever-2015-05-15--2015-05-25-14-31-35.json
regex.0.1.30 in regex.0.1.30--2015-05-25-14-31-35.json
rust-encoding.0.2.32 in rust-encoding.0.2.32--2015-05-25-15-45-57.json
regex-macros.0.1.30 in regex-macros.0.1.30--2015-05-25-14-31-35.json

@nrc did not recall either:

Hmm, I don't recall either. I wonder if it is meant to coalesce data rather than overwrite it?

Potentially incorrect summary page median functionality

I believe the intent with the summarise_data function in the backend was to compute median times in the three weeks around the last week in the pulled data.

However, the current code doesn't do this; instead it uses only one week (probably the week preceding the last week being examined, if I'm reading the code correctly).

Is this intended?

(I noticed this while trying to understand what is happening in the current code while working on my Rust version of the code).
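For reference, the median over whatever window is chosen (one week or three) reduces to the same small helper either way; a std-only sketch, not the actual summarise_data code:

```rust
// Median of a set of per-run times: sort, then take the middle element,
// averaging the two middle elements when the count is even.
fn median(mut xs: Vec<f64>) -> Option<f64> {
    if xs.is_empty() {
        return None;
    }
    xs.sort_by(|a, b| a.partial_cmp(b).unwrap());
    let mid = xs.len() / 2;
    Some(if xs.len() % 2 == 0 {
        (xs[mid - 1] + xs[mid]) / 2.0
    } else {
        xs[mid]
    })
}
```

The open question in the issue is only which runs feed into that vector: the runs from the single preceding week, or from the full three-week window.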

Auto-deploy website

  • Build and copy the built binary from Travis to the server
    • After copying, restart the server. Momentary (<1 minute) downtime is probably fine.
  • git pull on the server when the repository is updated to keep the JS/HTML/CSS up-to-date.

Track down a good way of parsing/serializing dates passed from/to the frontend

Chrono struggles with parsing semi-arbitrary data.

Extracted comment:

Also, I've been looking at this existing URL pointing at perf.rust-lang.org, and upon testing it with the current code, things break. I don't know of a good way to "best effort parse this date we're throwing at you" in Rust (and even in JS, truth be told). Without that, we'll either need to try and list all the cases we can think of (attempting parsing with each, failing, and moving on to the next one), or think of something else. Perhaps input sanitization? Regex-based search and recombine the input into something more readily parseable?

Let me know if you think we shouldn't care about backwards compatibility with URLs (I think we should). I'm also not 100% certain how to generate the URL in that issue, my attempts don't seem to produce the HH:MM:SS GMT+0000 (UTC) part.

Error(Msg("while parsing Mon Jun 13 2016 19:51:42 GMT+0000 (UTC)"), (Some(ParseError(TooLong)), stack backtrace:
...
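One option hinted at above is input sanitization: normalize the JS `Date#toString` form into an unambiguous shape before handing it to a single strict parser (chrono or otherwise). A std-only sketch, assuming the input looks like the failing example; the function name is hypothetical:

```rust
// Normalize "Mon Jun 13 2016 19:51:42 GMT+0000 (UTC)" into
// "2016-06-13T19:51:42", dropping the timezone tail that trips up parsers.
fn normalize_js_date(s: &str) -> Option<String> {
    let months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
                  "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"];
    let parts: Vec<&str> = s.split_whitespace().collect();
    // Expect at least: weekday, month, day, year, time.
    if parts.len() < 5 {
        return None;
    }
    let month = months.iter().position(|m| *m == parts[1])? + 1;
    let day: u32 = parts[2].parse().ok()?;
    Some(format!("{}-{:02}-{:02}T{}", parts[3], month, day, parts[4]))
}
```

A fallback chain (try ISO 8601 first, then this normalization) would preserve backwards compatibility with old URLs without enumerating every format up front.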

Expose SHA1 hashes

Currently, if you hover on a data point, you get the date and time, but not the SHA1 hash of the commit. The SHA1 hash would be much more precise, and make it easy to determine if (e.g.) a given change is also on beta. For bonus points, it could be linked to the "diff" on GH versus the last bullet point, so one can easily browse the set of changes that took place in between.
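The "bonus points" link is straightforward once both SHAs are exposed; a sketch, assuming rustc data points map to rust-lang/rust commits (the function name and argument values are hypothetical):

```rust
// Build a GitHub compare URL between the previous data point's commit and
// the hovered one, so the set of in-between changes is one click away.
fn compare_url(prev_sha: &str, cur_sha: &str) -> String {
    format!("https://github.com/rust-lang/rust/compare/{}...{}", prev_sha, cur_sha)
}
```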

Change to calculation of compile time percent increase/decrease

Currently, ((previous - current) / current) * 100.0 is the calculation used to derive the percent change between the previous median time and the current median time for the summary page. This page on Wikipedia implies that the formula should instead be ((current - previous) / previous) * 100.0.

Wanted to open this for discussion, since this changes the results in my current branch from an 11.4% increase in compile times to a 9.4% decrease.
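The two formulas differ in both magnitude and sign, which explains the flip. Side by side (function names hypothetical; the expressions are copied from the issue):

```rust
// Current code: change relative to the *current* median.
fn pct_current_base(previous: f64, current: f64) -> f64 {
    ((previous - current) / current) * 100.0
}

// Proposed (standard percent-change): change relative to the *previous* median.
fn pct_previous_base(previous: f64, current: f64) -> f64 {
    ((current - previous) / previous) * 100.0
}
```

For a slowdown from 100 to 110, the current-base formula reports about −9.09% while the previous-base formula reports +10%, so the two disagree on both the size of the change and whether it reads as an increase or a decrease.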
