paulirish / pwmetrics

Progressive web metrics at your fingertipz

License: Apache License 2.0

JavaScript 33.20% TypeScript 66.42% Shell 0.39%
metrics performance webperf lighthouse

pwmetrics's Introduction

Deprecated

In favour of the better support and many cool features of:

  • Lighthouse CI - a suite of tools that makes continuously running, saving, retrieving, and asserting against Lighthouse results as easy as possible.
  • Lighthouse CI Action - integrates Lighthouse CI with the GitHub Actions environment, making it simple to see failed tests, upload results, run jobs in parallel, store secrets, and interpolate environment variables.
  • Treo.sh - Page speed monitoring made simple.

PWMetrics

Progressive web metrics at your fingertipz. πŸ’…

A CLI tool and library for gathering performance metrics via Lighthouse.


Documentation on these metrics is in the works. If you hit bugs in metrics collection, report them in the Lighthouse issue tracker. See also the How to use article.

Install the pwmetrics NPM package

$ yarn global add pwmetrics
# or
$ yarn add --dev pwmetrics

CLI Usage

$ pwmetrics <url> <flags>

pwmetrics http://example.com/

# --runs=n     Does n runs (e.g. 3, 5) and reports the median run's numbers.
#              The median run is the one with the median TTI.
pwmetrics http://example.com/ --runs=3

# --json       Reports json details to stdout.
pwmetrics http://example.com/ --json

# returns...
# {runs: [{
#   "timings": [
#     {
#       "name": "First Contentful Paint",
#       "value": 289.642
#     },
#     {
#       "name": "Largest Contentful Paint",
#       "value": 292
#     },
#     ...

# --output-path       File path to save results.
pwmetrics http://example.com/ --output-path='pathToFile/file.json'

# --config        Provide configuration (defaults to `package.json`). See _Defining config_ below.
pwmetrics --config=pwmetrics-config.js

# --submit       Submit results to Google Sheets. See _Defining submit_ below.
pwmetrics --submit

# --upload       Upload Lighthouse traces to Google Drive. See _Defining upload_ below.
pwmetrics --upload

# --view       View Lighthouse traces, which were uploaded to Google Drive, in DevTools. See _Defining view_ below.
pwmetrics --view

##
## CLI options useful for CI
##

# --expectations  Assert metrics results against provided values. See _Defining expectations_ below.
pwmetrics --expectations

# --fail-on-error  Exit PWMetrics with an error status code after the first unmet expectation.
pwmetrics --fail-on-error
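
For example, a CI step might combine these flags so the build fails when a performance budget is exceeded (the URL and config file name here are placeholders):

# Hypothetical CI step: three runs, assert expectations, fail the build on error.
pwmetrics http://example.com/ --runs=3 --expectations --fail-on-error --config=pwmetrics-config.js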

Defining config

# run pwmetrics with config in package.json
pwmetrics --config

package.json

...
  "pwmetrics": {
    "url": "http://example.com/",
    // other configuration options
  }
...
# run pwmetrics with config in pwmetrics-config.js
pwmetrics --config=pwmetrics-config.js

pwmetrics-config.js

module.exports = {
  url: 'http://example.com/',
  // other configuration options. Read _All available configuration options_ below
}

All available configuration options

pwmetrics-config.js

const METRICS = require('pwmetrics/lib/metrics');

module.exports = {
  url: 'http://example.com/',
  flags: { // AKA feature flags
    runs: 3, // number of runs
    submit: true, // turn on submitting to Google Sheets
    upload: true, // turn on uploading to Google Drive
    view: true, // open uploaded traces to Google Drive in DevTools
    expectations: true, // turn on asserting metrics results against provided values
    json: true, // not required, set to true if you want json output
    outputPath: 'stdout', // not required, only needed if you have specified json output, can be "stdout" or a path
    chromePath: '/Applications/Google\ Chrome\ Canary.app/Contents/MacOS/Google\ Chrome\ Canary', //optional path to specific Chrome location
    chromeFlags: '', // custom flags to pass to Chrome. For a full list of flags, see http://peter.sh/experiments/chromium-command-line-switches/.
    // Note: pwmetrics supports all flags from Lighthouse
    showOutput: true, // not required, set to false to prevent pwmetrics from printing any console.log messages
    failOnError: false // not required, set to true if you want to fail the process on expectations errors
  },
  expectations: {
    // these expectation values are examples; set your own for your case
    // it's not required to use all metrics, you can use just a few of them
    // Read _Available metrics_ below, where all keys are defined
    [METRICS.TTFCP]: {
      warn: '>=1500',
      error: '>=2000'
    },
    [METRICS.TTLCP]: {
      warn: '>=2000',
      error: '>=3000'
    },
    [METRICS.TTI]: {
      ...
    },
    [METRICS.TBT]: {
      ...
    },
    [METRICS.SI]: {
      ...
    },
  },
  sheets: {
    type: 'GOOGLE_SHEETS', // sheets service type. Available types: GOOGLE_SHEETS
    options: {
      spreadsheetId: 'sheet_id',
      tableName: 'data',
      uploadMedian: false // not required, set to true if you want to upload only the median run
    }
  },
  clientSecret: {
    // Data object. Can be obtained either by following everything in step 1 here:
    // https://developers.google.com/sheets/api/quickstart/nodejs#step_1_turn_on_the_api_name
    //
    // example format:
    //
    // installed: {
    //   client_id: "sample_client_id",
    //   project_id: "sample_project_id",
    //   auth_uri: "https://accounts.google.com/o/oauth2/auth",
    //   token_uri: "https://accounts.google.com/o/oauth2/token",
    //   auth_provider_x509_cert_url: "https://www.googleapis.com/oauth2/v1/certs",
    //   client_secret: "sample_client_secret",
    //   redirect_uris: [
    //     "url",
    //     "http://localhost"
    //   ]
    // }
    //
    // or
    // by following everything in step 1 here: https://developers.google.com/drive/v3/web/quickstart/nodejs
  }
}

Defining expectations

Recipes for using pwmetrics with CI

# run pwmetrics with config in package.json
pwmetrics --expectations

package.json

...
  "pwmetrics": {
    "url": "http://example.com/",
    "expectations": {
      ...
    }
  }
...
# run pwmetrics with config in pwmetrics-config.js
pwmetrics --expectations --config=pwmetrics-config.js
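
For illustration, a minimal pwmetrics-config.js with expectations might look like this (the threshold values are made-up examples, not recommendations):

const METRICS = require('pwmetrics/lib/metrics');

module.exports = {
  url: 'http://example.com/',
  flags: {
    runs: 3,
    expectations: true
  },
  expectations: {
    // example thresholds only; tune them for your page
    [METRICS.TTI]: {
      warn: '>=4000',
      error: '>=6000'
    }
  }
};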

Defining submit

Submit results to Google Sheets

Instructions:

  • Copy this spreadsheet.
  • Copy the ID of the spreadsheet into the config as the value of the sheets.options.spreadsheetId property.
  • Set up a Google Developer project and get credentials (everything in step 1 here).
  • Take the client_secret and put it into the config as the value of the clientSecret property.
# run pwmetrics with config in package.json
pwmetrics --submit
# run pwmetrics with config in pwmetrics-config.js
pwmetrics --submit --config=pwmetrics-config.js

pwmetrics-config.js

module.exports = {
  'url': 'http://example.com/',
  'sheets': {
    ...
  },
  'clientSecret': {
    ...
  }
}
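
Filled in, the same config might look like this (the spreadsheet ID is a placeholder; the clientSecret contents come from the credentials file generated in step 1):

module.exports = {
  url: 'http://example.com/',
  sheets: {
    type: 'GOOGLE_SHEETS',
    options: {
      spreadsheetId: 'your_spreadsheet_id', // ID copied from the spreadsheet URL
      tableName: 'data'
    }
  },
  clientSecret: {
    // paste the contents of the client_secret JSON generated in step 1
  }
};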

Defining upload

Upload Lighthouse traces to Google Drive

Instructions:

  • Set up a Google Developer project and get credentials (everything in step 1 here).
  • Take the client_secret and put it into the config as the value of the clientSecret property.
# run pwmetrics with config in package.json
pwmetrics --upload
# run pwmetrics with config in pwmetrics-config.js
pwmetrics --upload --config=pwmetrics-config.js

pwmetrics-config.js

module.exports = {
  'url': 'http://example.com/',
  'clientSecret': {
    ...
  }
}

View Lighthouse traces in timeline-viewer

Show Lighthouse traces in timeline-viewer.

Requires the upload flag.

timeline-viewer - Shareable URLs for your Chrome DevTools Timeline traces.

# run pwmetrics with config in package.json
pwmetrics --upload --view
# run pwmetrics with config in your-own-file.js
pwmetrics --upload --view --config=your-own-file.js

pwmetrics-config.js

module.exports = {
  'url': 'http://example.com/',
  'clientSecret': {
    ...
  }
}

Available metrics:

All metrics are now stored in a separate constant object located in pwmetrics/lib/metrics/metrics:

// lib/metrics/metrics.ts
{
  METRICS: {
    TTFCP: 'firstContentfulPaint',
    TTLCP: 'largestContentfulPaint',
    TBT: 'totalBlockingTime',
    TTI: 'interactive',
    SI: 'speedIndex'
  }
}

Read the article Performance metrics. What’s this all about?, which explains these metrics.

API

const PWMetrics = require('pwmetrics');

const options = {
  flags: {
    runs: 3, // number of runs
    submit: true, // turn on submitting to Google Sheets
    upload: true, // turn on uploading to Google Drive
    view: true, // open uploaded traces to Google Drive in DevTools
    expectations: true, // turn on asserting metrics results against provided values
    chromeFlags: '--headless' // run in headless Chrome
  }
};

const pwMetrics = new PWMetrics('http://example.com/', options); // _All available configuration options_ can be used as `options`
pwMetrics.start(); // returns Promise
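
A minimal sketch of consuming the returned Promise; the shape of the resolved data is an assumption based on the --json output shown earlier:

pwMetrics.start()
  .then(data => {
    // data shape assumed from the --json example above
    data.runs.forEach((run, i) => {
      console.log(`Run ${i + 1}:`);
      run.timings.forEach(t => console.log(`  ${t.name}: ${Math.round(t.value)} ms`));
    });
  })
  .catch(err => {
    console.error(err);
    process.exit(1);
  });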

Options

  • flags* (Object). Default:

    {
      runs: 1,
      submit: false,
      upload: false,
      view: false,
      expectations: false,
      disableCpuThrottling: false,
      chromeFlags: ''
    }

    Feature flags.

  • expectations (Object). Default: {}. See Defining expectations above.
  • sheets (Object). Default: {}. See Defining submit above.
  • clientSecret (Object). Default: {}. Client secret data generated by the Google API console. To set up a Google Developer project and get credentials, apply everything in step 1 here.

*pwmetrics supports all flags from Lighthouse. See here for the complete list.

Recipes

License

Apache 2.0. Google Inc.

pwmetrics's People

Contributors

andresz1, bcabanes, bubbit, denar90, exogen, ge11ert, gribnoysup, guillaumeamat, jooj123, mickeykay, natedanner, paulirish, pedro93, radum, samccone, santoshjoseph99, staabm, sunnibfgi, vinhlh, willmendesneto, young


pwmetrics's Issues

Error: No Chrome Installations Found

Hi Paul, I'm facing an issue during initialization.
I have Chrome and Lighthouse installed.

franklin in ~ Ξ» pwmetrics https://www.belezanaweb.com.br
Launching Chrome...
created /var/folders/vy/zs7j58x177781vgtdpdct0840000gn/T/lighthouse.XXXXXXX.96kRSm4v
[Error: No Chrome Installations Found]

Measure bunch of pages

I think it would be useful to be able to measure a bunch of pages with one command.
Right now we have to do

pwmetrics http://example.com --runs=3
pwmetrics http://example.com/page1 --runs=3
pwmetrics http://example.com/page2 --runs=3

I think we should do something like
pwmetrics http://example.com http://example.com/page2 --runs=3

or use some config property

module.exports = {
  urls: ['http://example.com', 'http://example.com/page1', 'http://example.com/page2']
}
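
Until something like that lands, a shell loop is a possible workaround:

# Possible workaround: loop over the URLs in the shell.
for url in http://example.com http://example.com/page1 http://example.com/page2; do
  pwmetrics "$url" --runs=3
done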

Entire trace was found to be busy. Audit key: time-to-interactive.

This is a great tool! I hope it will grow in functionality!

Meanwhile, I'm getting this error on my website; I guess it has something to do with a YouTube iframe.

pwmetrics http://cssing.org.ua                                   2.1m
Launching Chrome
  ✘ Error: Unable to complete run 1 of 1 due to Audit error: Entire trace was found to be busy. Audit key: time-to-interactive.

Hope this will help to improve the tool!
Thank you.

Cannot find module 'lighthouse/lighthouse-core/config/perf.example-custom.json'

$ node --version
v6.5.0

$ npm --version
3.10.3

$ pwmetrics https://airhorner.com

    module.js:457
    throw err;
    ^

Error: Cannot find module 'lighthouse/lighthouse-core/config/perf.example-custom.json'
    at Function.Module._resolveFilename (module.js:455:15)
    at Function.Module._load (module.js:403:25)
    at Module.require (module.js:483:17)
    at require (internal/module.js:20:19)
    at Object.<anonymous> (/usr/local/lib/node_modules/pwmetrics/index.js:11:20)
    at Module._compile (module.js:556:32)
    at Object.Module._extensions..js (module.js:565:10)
    at Module.load (module.js:473:32)
    at tryModuleLoad (module.js:432:12)
    at Function.Module._load (module.js:424:3)

Pwmetrics fails with an even number of runs

When asking for an average of n runs (where n is even) the program will crash because of the calculations done in the findMedianRun function.

Specifically, the program cannot find a TTI timing that is the exact median value of the runs, so it returns undefined as the medianRun, which generates the error seen in the screenshot below. By definition, the median of an even count averages the middle two results.

If we merge #71 I also propose we change the way this function works to drop the TTI dependency, which a user might not specify and which will crash the program.

(screenshot: error output)
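
One possible shape for such a fix, as a rough sketch (this is not the project's actual findMedianRun; the name and signature are assumed for illustration): sort the runs by the chosen metric and pick the lower-middle one, so an even run count can no longer yield undefined.

// Rough sketch only: tolerate an even number of runs by sorting on the
// chosen metric and picking the lower-middle run instead of requiring an
// exact median match.
function pickMedianRun(runs, getMetric) {
  const sorted = runs.slice().sort((a, b) => getMetric(a) - getMetric(b));
  return sorted[Math.floor((sorted.length - 1) / 2)];
}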

Required OptimizedImages gatherer encountered an error

using latest version

❯ pwmetrics https://bloomberg.com
Launching Chrome
✘ Error: Unable to complete run 1 of 1 due to Audit error: Required OptimizedImages gatherer encountered an error: not opened Audit key: uses-optimized-images.

@benschwarz mentioned bloomberg is a tricky page and it always errors, though this error is different from what he was seeing:

✘ Error: Sorry, Visually Complete 85% metric is unavailable.
✘ Error: Sorry, First Contentful Paint metric is unavailable.

anyway, it looks like grabbing the Lighthouse perf config might not be what we want for pwmetrics, and we could afford to skip the OptimizedImages bit. :)

CI issue after updating lighthouse version

We have an issue on CI after finishing measurements.
https://travis-ci.org/paulirish/pwmetrics/jobs/204950872#L722-L735

So I found that it's related to killing the launched Chrome.
Everything is fine if we don't kill it.

Here is my commit with the changes.
I've removed all logic related to charts and stuff.

Diff - master...denar90:dont-kill-chrome
CI - https://travis-ci.org/denar90/pwmetrics/jobs/205395193

Then I downgraded the version to 1.4.1 and it works perfectly.

Diff - master...denar90:test-ci-with-1.4.1
CI - https://travis-ci.org/denar90/pwmetrics/jobs/205392241

I suppose something has changed in chrome-launcher in 1.5.x...

cc @paulirish @samccone

TracingStartedInPage event not found.

Just noticed:

$ pwmetrics https://jsfeatures.in
Event RendererMain.START_MSG_LOOP has already been started
Event RendererMain.START_MSG_LOOP has already been started
TracingStartedInPage event not found.
Sorry, Visually Complete 85% metric is unavailable

Looks like it's something to do with user-timings

InvalidProtoExpectation

davidosomething@potatopro:~                                                                                                                                               [js:v6.9.1|py:3.6.0|rb:2.4.0]
 I  % pwmetrics http://elitedaily.com/?omgplzno=11
Launching Chrome
Invalid ProtoExpectation: ignored(2560 2560 undefined) File a bug with this trace!
Invalid ProtoExpectation: ignored(2560 2560 undefined) File a bug with this trace!
Invalid ProtoExpectation: ignored(2560 2560 undefined) File a bug with this trace!
Invalid ProtoExpectation: ignored(2560 2560 undefined) File a bug with this trace!

                       |
First Contentful Paint |                                                               2,349
                       |
First Meaningful Paint |                                                                    2,557
                       |
Perceptual Speed Index |                                                                                                                   4,317
                       |
   Time to Interactive |                                                                                                                                            5,286
                       --------------------------------------------------------------------------------------------------------------------------------------------------------------
                        0                                                              Time (ms) since navigation start                                                               6000

and again 2nd run

pwmetrics "http://elitedaily.com/?omgplzno=11"
Launching Chrome
Invalid ProtoExpectation: ignored(1819 1819 undefined) File a bug with this trace!
Invalid ProtoExpectation: ignored(1819 1819 undefined) File a bug with this trace!
Invalid ProtoExpectation: ignored(1819 1819 undefined) File a bug with this trace!
Invalid ProtoExpectation: ignored(1819 1819 undefined) File a bug with this trace!

                       |
First Contentful Paint |                                                    1,607
                       |
First Meaningful Paint |                                                          1,814
                       |
Perceptual Speed Index |                                                                                                                  3,583
                       |
   Time to Interactive |                                                                                                                                                 4,561
                       --------------------------------------------------------------------------------------------------------------------------------------------------------------
                        0                                                              Time (ms) since navigation start                                                               5000

Add a more descriptive output for the runs.

At the moment, if you run an audit with more than one run, you only get the generic line Launching Chrome:

(screenshot)

In my opinion it would be more descriptive and helpful if the output said something like Run 1 out of 5 finished successfully, and so on. That way you would also have the opportunity to surface certain exceptions in the microcopy if something went wrong, like Unable to complete run 3 due to .....

Remove duplicated pwmetrics object properties and merge yargs support

Whilst reading through the project I noticed the pwmetrics object contains a property opts which is not used anywhere else, and which in fact duplicates the object's size in memory.

If you agree, @paulirish, I would like to do a code cleanup for this and some other minor fixes across files, including adding the yargs PR, improving upon it, and potentially adding support for #48.

Should this all be done as a pull request for you to later review?
@denar90 do you agree with this?

Getting a "TypeError: Cannot read property 'length' of undefined" in pwmetrics 2.0.0

Hi, I updated to pwmetrics 2.0.0 a few minutes ago. I wanted to test things afterwards but ran straight into the following error:

$> pwmetrics https://bloomberg.com
/Users/myuser/.npm_global/lib/node_modules/pwmetrics/node_modules/lighthouse/lighthouse-core/lib/log.js:111
    const maxLength = columns - data.method.length - prefix.length - loggingBufferColumns;
                                           ^

TypeError: Cannot read property 'length' of undefined
    at Function.formatProtocol (/Users/myuser/.npm_global/lib/node_modules/pwmetrics/node_modules/lighthouse/lighthouse-core/lib/log.js:111:44)
    at CriConnection.handleRawMessage (/Users/myuser/.npm_global/lib/node_modules/pwmetrics/node_modules/lighthouse/lighthouse-core/gather/connections/connection.js:111:9)
    at WebSocket.ws.on.data (/Users/myuser/.npm_global/lib/node_modules/pwmetrics/node_modules/lighthouse/lighthouse-core/gather/connections/cri.js:78:37)
    at emitTwo (events.js:106:13)
    at WebSocket.emit (events.js:194:7)
    at Receiver.ontext (/Users/myuser/.npm_global/lib/node_modules/pwmetrics/node_modules/ws/lib/WebSocket.js:841:10)
    at /Users/myuser/.npm_global/lib/node_modules/pwmetrics/node_modules/ws/lib/Receiver.js:536:18
    at Receiver.applyExtensions (/Users/myuser/.npm_global/lib/node_modules/pwmetrics/node_modules/ws/lib/Receiver.js:371:5)
    at /Users/myuser/.npm_global/lib/node_modules/pwmetrics/node_modules/ws/lib/Receiver.js:508:14
    at Receiver.flush (/Users/myuser/.npm_global/lib/node_modules/pwmetrics/node_modules/ws/lib/Receiver.js:347:3)

I noticed afterwards that the install recommendation changed from npm to yarn. I installed version 2.0.0 with npm, but thought it wouldn't make a difference which tool you use to install a module. So is it recommended to uninstall pwmetrics and reinstall with yarn? I am running npm 4.2.0 and node 7.9.0 on OS X 10.11.6 and Canary Version 60.0.3076.0. Cheers, r.

Store JSON results in a file

Right now we show the JSON results directly in the console. It would be nice to save them to a file too.
Something like pwmetrics http://example.com --json=./pwmetrics-results.json
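
Note that the flags documented above can already be combined to this effect:

pwmetrics http://example.com --json --output-path=./pwmetrics-results.json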

Add the possibility to use it with CI

I would really like to have some option like maxTimeSinceNavStart, whose value expresses a speed expectation.
Using this option, pwmetrics could throw an error like Time (ms) since navigation start is more than you expected, so improve your page perf.
This approach could be used for CI.

Add tests

It would be nice to have some tests. It's a bit hard to check everything after making changes... I think mocha could be used.

No actual value of each bar in the graph is painted

Hi, I ran into a possible issue. I am running node 7.4.0 installed with homebrew on OS X. I've installed [email protected] and [email protected] globally with npm install -g. When I run pwmetrics, with or without the --runs=3 option, I get output like this.

(screenshot)

I get a rough idea of the dimension from the 10,000 value on the x-axis, but the specific values are missing, unlike in the example screenshot in the readme.md file. If you need any further information, let me know. Best regards, r.

Local server running feature

Hey guys, it's me again 😃 To use pwmetrics with CI we need to run some server and then run metrics against the current URL (e.g. localhost:8080/index.html). What do you think about adding the possibility to run a server for a specific file using the CLI? I propose something like
pwmetrics public/index.html --local
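
As a workaround today, any static file server can fill that role; a rough sketch (http-server is just one example of such a tool):

# Rough sketch: serve the directory with a static server, measure, then stop it.
npx http-server public -p 8080 &
SERVER_PID=$!
pwmetrics http://localhost:8080/index.html
kill $SERVER_PID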

Median Run output inconsistent with normal output

This is a minor issue but it might create some confusion for some users.
When running pwmetrics with the runs flag, the final median output's labels and data are not in the same order as the rest:

(screenshot)

Notice how the last graph's labels are:

  • Time to Interactive
  • Perceptual Speed Index
  • First Meaningful Paint
  • First Contentful Paint

And the run graphs (which match the standard $ pwmetrics {url} output):

  • First Contentful Paint
  • First Meaningful Paint
  • Perceptual Speed Index
  • Time to Interactive

Merely changing this line to:

let timings = data.timings;

(Basically, remove the .reverse().)
This appears to fix the issue.

Metrics are not normalized

Metrics are normalized when running pwmetrics through the CLI; however, they are not when pwmetrics is required as a module. The get_config() method in expectations.js makes the following two calls:

validateMetrics(expectations.metrics);
normalizeMetrics(expectations.metrics);

This turns metrics like >=500 (this is what is documented in the readme, both for package.json and for directly passing options to the constructor) into basic integers by stripping the >=.

These methods are not applied, however, when invoking pwmetrics as an included module. I assume this is an unintentional omission, as it prevents fully consistent behavior between CLI/package.json and module configurations.

The easy fix is adding the above-listed validation and normalization methods to the PWMetrics constructor.
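
For reference, a rough sketch of what that normalization does (the real helpers live in expectations.js; this standalone version is for illustration only):

// Illustration only: strip the comparison prefix so '>=500' becomes 500.
function normalizeExpectationValue(value) {
  return typeof value === 'string' ? Number(value.replace(/^[><=]+/, '')) : value;
}

normalizeExpectationValue('>=500'); // 500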

Thoughts/questions about throttling

I've been thinking: since on localhost we don't have any roundtrips or other delays that can extend resource loading time, should we add some default throttling, e.g. 3G, for measuring?

  • It would make measurements more realistic, wouldn't it?
  • Can we use some devtools API for that?
  • Should it be done as part of lighthouse?

First Contentful Paint: NaN

With [email protected]:

$ pwmetrics https://mathiasbynens.be/
Launching Chrome
  ✘ Error: Sorry, Visually Complete 85% metric is unavailable.
  ✘ Error: Sorry, First Contentful Paint metric is unavailable.

                       |
First Contentful Paint | NaN
                       |
First Meaningful Paint | -1
                       |
Perceptual Speed Index |                                               1,055
                       |
   Time to Interactive | -1
                       -----------------------------------------------
                        0      Time (ms) since navigation start       1055

I haven’t looked into what’s causing this, but it seems to be an issue with pwmetrics itself, since lighthouse handles the same URL just fine.

$ lighthouse https://mathiasbynens.be/
  Lighthouse CLI Launching Chrome... +0ms
  ChromeLauncher Waiting for browser. +386ms
  ChromeLauncher Waiting for browser... +1ms
  ChromeLauncher Waiting for browser..... +504ms
  ChromeLauncher Waiting for browser....... +505ms
  ChromeLauncher Waiting for browser.......βœ“ +5ms
  status Initializing… +2s
  status Loading page & waiting for onload +382ms URL, HTTPS, Viewport, ThemeColor, Manifest, Accessibility, ImageUsage, ContentWidth
  statusEnd Loading page & waiting for onload +2s
  status Retrieving trace +2ms
  status Retrieving network records +608ms
  status Retrieving: URL +3ms
  status Retrieving: HTTPS +0ms
  status Retrieving: Viewport +2ms
  status Retrieving: ThemeColor +7ms
  status Retrieving: Manifest +2ms
  status Retrieving: Accessibility +224ms
  status Retrieving: ImageUsage +127ms
  status Retrieving: ContentWidth +3ms
  status Loading page & waiting for onload +347ms ServiceWorker, Offline
  statusEnd Loading page & waiting for onload +655ms
  status Retrieving network records +0ms
  status Retrieving: ServiceWorker +2ms
  status Retrieving: Offline +0ms
  status Loading page & waiting for onload +334ms HTTPRedirect, HTMLWithoutJavaScript
  statusEnd Loading page & waiting for onload +925ms
  status Retrieving network records +0ms
  status Retrieving: HTTPRedirect +2ms
  status Retrieving: HTMLWithoutJavaScript +2ms
  status Loading page & waiting for onload +356ms ChromeConsoleMessages, Styles, CSSUsage, EventListeners, AnchorsWithNoRelNoopener, AppCacheManifest, ConsoleTimeUsage, DateNowUse, DocWriteUse, GeolocationOnStart, NotificationOnStart, DOMStats, OptimizedImages, TagsBlockingFirstPaint, WebSQL
  statusEnd Loading page & waiting for onload +716ms
  status Retrieving network records +1ms
  status Retrieving: ChromeConsoleMessages +1ms
  status Retrieving: Styles +4ms
  status Retrieving: CSSUsage +156ms
  status Retrieving: EventListeners +6ms
  status Retrieving: AnchorsWithNoRelNoopener +90ms
  status Retrieving: AppCacheManifest +3ms
  status Retrieving: ConsoleTimeUsage +3ms
  status Retrieving: DateNowUse +2ms
  status Retrieving: DocWriteUse +3ms
  status Retrieving: GeolocationOnStart +2ms
  status Retrieving: NotificationOnStart +16ms
  status Retrieving: DOMStats +11ms
  status Retrieving: OptimizedImages +4ms
  status Retrieving: TagsBlockingFirstPaint +7ms
  status Retrieving: WebSQL +5ms
  status Disconnecting from browser... +505ms
  status Evaluating: Uses HTTPS +31ms
  status Evaluating: Redirects HTTP traffic to HTTPS +0ms
  status Evaluating: Registers a Service Worker +1ms
  status Evaluating: Responds with a 200 when offline +0ms
  status Evaluating: Has a `<meta name="viewport">` tag with `width` or `initial-scale` +2ms
  status Evaluating: Manifest's `display` property is set +1ms
  status Evaluating: Contains some content when JavaScript is not available +0ms
  status Evaluating: First meaningful paint +0ms
  status Evaluating: Perceptual Speed Index +23ms
  status Evaluating: Estimated Input Latency +377ms
  status Evaluating: Time To Interactive (alpha) +245ms
  status Evaluating: User Timing marks and measures +1ms
  status Evaluating: Screenshots of all captured frames +1ms
  status Evaluating: Critical Request Chains +97ms
  status Evaluating: Manifest exists +1ms
  status Evaluating: Manifest contains `background_color` +1ms
  status Evaluating: Manifest contains `theme_color` +0ms
  status Evaluating: Manifest contains icons at least 192px +0ms
  status Evaluating: Manifest contains icons at least 144px +1ms
  status Evaluating: Manifest contains `name` +1ms
  status Evaluating: Manifest contains `short_name` +0ms
  status Evaluating: Manifest's `short_name` won't be truncated when displayed on homescreen +0ms
  status Evaluating: Manifest contains `start_url` +0ms
  status Evaluating: Has a `<meta name="theme-color">` tag +0ms
  status Evaluating: Content is sized correctly for the viewport +1ms
  status Evaluating: Avoids deprecated APIs +0ms
  status Evaluating: Element aria-* attributes are allowed for this role +0ms
  status Evaluating: Elements with ARIA roles have the required aria-* attributes +0ms
  status Evaluating: Element aria-* attributes have valid values +1ms
  status Evaluating: Element aria-* attributes are valid and not misspelled or non-existent. +0ms
  status Evaluating: Background and foreground colors have a sufficient contrast ratio +0ms
  status Evaluating: Every image element has an alt attribute +0ms
  status Evaluating: Every form element has a label +9ms
  status Evaluating: No element has a `tabindex` attribute greater than 0 +0ms
  status Evaluating: Avoids enormous network payloads +0ms
  status Evaluating: Unused CSS rules +5ms
  status Evaluating: Unoptimized images +2ms
  status Evaluating: Oversized Images +2ms
  status Evaluating: Avoids Application Cache +6ms
  status Evaluating: Avoids an excessive DOM size +0ms
  status Evaluating: Opens external anchors using rel="noopener" +1ms
  status Evaluating: Avoids requesting the geolocation permission on page load +0ms
  status Evaluating: Render-blocking Stylesheets +1ms
  status Evaluating: Avoids `console.time()` in its own scripts +0ms
  status Evaluating: Avoids `Date.now()` in its own scripts +1ms
  status Evaluating: Avoids `document.write()` +3ms
  status Evaluating: Avoids Mutation Events in its own scripts +0ms
  status Evaluating: Avoids old CSS flexbox +3ms
  status Evaluating: Avoids WebSQL DB +2ms
  status Evaluating: Avoids requesting the notification permission on page load +0ms
  status Evaluating: Render-blocking scripts +0ms
  status Evaluating: Uses HTTP/2 for its own resources +0ms
  status Evaluating: Uses passive listeners to improve scrolling performance +3ms
  Printer html output written to /private/tmp/mathiasbynens.be_2017-04-19_17-22-39.report.html +322ms
  CLI:warn Report output no longer defaults to stdout. Use `--output=pretty` to re-enable. +1ms
  CLI Protip: Run lighthouse with `--view` to immediately open the HTML report in your browser +1ms

  ChromeLauncher Killing all Chrome Instances +57ms

Release 2.0

Anything to add? Thoughts?

Timings are undefined when multiple runs are performed.

Just installed it on macOS Sierra with npm on Node.js 7.4.0: npm install -g pwmetrics.

Performed multiple runs (pwmetrics http://example.com/ --runs=3) on various websites and got the following error:

TypeError: Cannot read property 'timings' of undefined
    at results.map.r (/usr/local/lib/node_modules/pwmetrics/lib/index.js:127:41)
    at Array.map (native)
    at PWMetrics.findMedianRun (/usr/local/lib/node_modules/pwmetrics/lib/index.js:127:31)
    at PWMetrics.allDone.then._ (/usr/local/lib/node_modules/pwmetrics/lib/index.js:40:27)

Cleaner error messaging when using invalid parameters

Just a note: when running the CLI tool with invalid parameters, a cleaner output should be shown:

(screenshot)

This happens because we have no try/catch block to handle this exception. We can fix this in the cli.js file by doing the following:

if (!options.url || !options.url.length) {
  console.error(getMessage('NO_URL'));
  process.exit(-1);
}

Also, I would prefer the error message output from getMessageWithPrefix; it's a little more visual for the user.

Reduce output when doing runs

Consider only outputting the median graph data after all runs conclude.
Reason: less information output to the console and less cognitive load for the user.

Personally, if I ask pwmetrics to execute a number of times, I'm more interested in seeing the median values than any one execution.

Instead of displaying the chart for every run, pwmetrics could generate a file (JSON, CSV, whatever) with the data of every run but output to the console only the chart for the median values. I understand there is a "sheets" configuration option, but that requires configuring a Google Sheet and time to set up. I propose a more "local" approach.

I could make a PR for this if the maintainers of this repository find this feature worthwhile. To simply reduce the terminal output, the changes would occur here:
https://github.com/paulirish/pwmetrics/blob/master/lib/index.js#L84
by changing the if statement to:

if (!this.flags.submit && this.runs <= 1) {
    this.displayOutput(data);
}

Thoughts? @denar90 @paulirish

Feature request: ability to display chart AND metrics errors/warnings

First off, fantastic project - I love it! Thanks a ton for working on this, and in general for all the great FE work you do.

Secondly, I have a feature request. At present, pwmetrics allows you to either 1. display a performance chart or 2. include expectation metrics and display errors/warnings; however, you can't do both: https://github.com/paulirish/pwmetrics/blob/master/lib/index.js#L75. According to the logic there, a chart will only ever be output if no expectations are set.

My request is to add the ability to display the chart and any expectation errors/warnings.
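
Conceptually the change is small; a rough sketch of the desired control flow (the function names here are assumptions, not the project's actual API):

// Rough sketch: always render the chart, and additionally check
// expectations whenever they are configured.
if (this.flags.expectations) {
  checkExpectations(data.timings, this.expectations);
}
this.displayOutput(data);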

Feature request: ability to filter hiddenMetrics

At present it doesn't look like there's any way to filter/customize the metrics.hiddenMetrics array, which is used here to determine which metrics are output in the chart: https://github.com/paulirish/pwmetrics/blob/master/lib/index.ts#L214

Unless I'm mistaken, there is no way to customize the chart output with these hidden metrics, correct? If that is the case, I'd like to request the ability to do so. Seems like there are a few ways to do this, however if you want to summarize your preferred approach I'd be happy to take a pass at a PR if that's helpful. Thanks!
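
For discussion, a hypothetical config shape for this (the option does not exist today; the key name is invented):

module.exports = {
  url: 'http://example.com/',
  // hypothetical, not an existing option: metrics to omit from the chart
  hiddenMetrics: ['speedIndex']
};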

SyntaxError: Unexpected token {

amit@amit:~$ sudo npm install --global pwmetrics
[sudo] password for amit: 
/usr/local/bin/pwmetrics -> /usr/local/lib/node_modules/pwmetrics/bin/cli.js
/usr/local/lib
(npm dependency tree output omitted; package names and versions were redacted in this scrape)

amit@amit:~$ pwmetrics http://www.shweiki.com/blog/category/business/ --runs=3
/usr/local/lib/node_modules/pwmetrics/bin/cli.js:6
const { getConfig } = require('../lib/expectations');
      ^

SyntaxError: Unexpected token {
    at exports.runInThisContext (vm.js:53:16)
    at Module._compile (module.js:387:25)
    at Object.Module._extensions..js (module.js:422:10)
    at Module.load (module.js:357:32)
    at Function.Module._load (module.js:314:12)
    at Function.Module.runMain (module.js:447:10)
    at startup (node.js:146:18)
    at node.js:404:3
amit@amit:~$ npm -v
3.8.3

Median values not precise

This may be due to JS's math quirks with doubles, and it might just be me nitpicking (if so, I apologize; do close the issue), but I ran pwmetrics on a webpage with 5 runs, and the output Time to Interactive metrics were:

  • 2,038
  • 2,026
  • 2,037
  • 2,043
  • 2,037

And a median Time to Interactive of: 2,037

With very simple math:
2,038 + 2,026 + 2,037 + 2,043 + 2,037 = 10,181
10,181 / 5 = 2,036.2

The other metrics were similarly slightly off.
I know rounding may be applied only at the final stage, but the user has no indication of that.

If this is not a bug, could there be an output message (or a modification of the Time (ms) since navigation start label) stating that values may be off by a given interval (+- 10 ms)?

Config file not working

I am trying to create a configuration file to use with pwmetrics; however, it is not being picked up by the library. I have a file performance-test.config.js that looks like this:

module.exports = {
    runs: 5,
    flags: {
        expectations: true,
    },
    sheets: {
        // sheets configuration
    },
    expectations: {
        ttfmp: {
            warn: '>=3000',
            error: '>=5000',
        },
        tti: {
            warn: '>=5000',
            error: '>=15000',
        },
        ttfcp: {
            warn: '>=1500',
            error: '>=3000',
        },
        psi: {
            warn: '>=3000',
            error: '>=6000',
        },
    },
};

I'm executing pwmetrics like this:

const path = require('path');
const cp = require('child_process');

function runPerf(url, cb){
    const pwmetrics = path.join(__dirname, '../node_modules/pwmetrics/bin/cli.js');
    const perfConfig = require('../config/performance-test.config.js');
    const child = cp.exec('node ' + pwmetrics + ' ' + url + ' --config=' + perfConfig);
    child.stderr.pipe(process.stderr);
    child.stdout.pipe(process.stdout);
    child.on('close', (code) => {
        cb(code);
    });
}

I think the issue is with how the CLI reads the options from the command line. Is there an option to override every command-line option with data from a JSON file?
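
A likely fix sketch: --config expects a file path, but the snippet above interpolates the required config object (which stringifies to [object Object]); passing the path through instead should work:

// Pass the config file's path to the CLI instead of the required object.
const configPath = path.join(__dirname, '../config/performance-test.config.js');
const child = cp.exec('node ' + pwmetrics + ' ' + url + ' --config=' + configPath);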

pwmetrics closes if run fails

When setting pwmetrics to test a URL n times, if any run fails the process will shut down if it can; otherwise it will hang and not progress any further.

Software:
Chrome Canary: 59.0.3054.0 64bits
Node: v7.7.3 64 bits
OS: Mac OS X: 10.11.6
Lighthouse: 1.6.0 (as defined in the package.json)

To reproduce:
I'm using pwmetrics from the master branch

$ node bin/cli.js http://www.bbc.com/weather/ --json --runs=10

Output:
(screenshot)

It also causes Chrome to stay open (pwmetrics opened it). <- Is this intended?

Edit: the cause is https://github.com/paulirish/pwmetrics/blob/master/lib/index.js#L58; this thrown exception is not handled anywhere.

@paulirish @denar90 I can easily fix this (in the desiredMetrics PR I did it like this: store the error so it doesn't break the sequential loop, and discard it from the final results). Suggestions welcome.
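
That approach (store the error, keep looping, discard failures) might look roughly like this; runOnce and the surrounding names are assumptions for illustration:

// Rough sketch: one failed run no longer aborts the sequence; failures are
// logged and simply absent from the final results.
async function doRuns(count, runOnce) {
  const results = [];
  for (let i = 0; i < count; i++) {
    try {
      results.push(await runOnce(i));
    } catch (err) {
      console.error(`Unable to complete run ${i + 1} of ${count}: ${err.message}`);
    }
  }
  return results;
}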

pwmetrics section in package.json ignored

Running with the script command pwmetrics --config should, according to the docs, use the pwmetrics config in package.json, so a section such as:

"pwmetrics": {
    "url": "http://www.blah.com",
    "flags": {
      "runs": "2",
      "expectations": true
    },
    "expectations": {
      "ttfcp": {
        "warn": ">=1500",
        "error": ">=2000"
      }
    }
  },

is ignored, and pwmetrics dumps the error message "✘ Error: No url entered.."

Having a separate JS config file like pwmetrics --config=pwmetrics-config.js works fine, on the other hand.

Provide a more meaningful error message in case a tested site is too slow instead of "metric unavailable"

It is probably a naming issue. I've installed version 1.2.4 and afterwards ran into something I haven't seen before. I now occasionally get this kind of error on slower sites I test against:

(screenshots)

But if I observe the Chrome window alongside, it is actually painting something; I guess I am going past a certain time threshold while benchmarking, so pwmetrics terminates the benchmark after a certain time? I wouldn't call the metric unavailable then, but rather something like took too long and aborted, or provide some override to go until the bitter end? ;) cheers r.

First contentful paint question/issue

Re: the following run:
(screenshot of the run)

Based on what I've read, it seems like First Contentful Paint should always come before Visually Complete 100%, or is that incorrect? Either way, I'm curious what might lead to such disparate timings between these two metrics. Thanks!
