
nemo's Introduction

nemo


Node.js solution for running mocha-based selenium-webdriver tests.

Getting started

Install nemo

npm install --save-dev nemo@^4

Use the scaffold feature

$ npx nemo -X test/functional

  DONE!

  Next steps:
  1. Make sure you have latest chrome/chromedriver installed (https://sites.google.com/a/chromium.org/chromedriver/getting-started)
     - The binary should be in your PATH
  2. Run nemo! "npx nemo"
  3. Look at nemo.config.js and test/functional/nemo.test.js
  4. Learn more: http://nemo.js.org

$

For a more complex, fully-featured suite:

$ ./node_modules/.bin/nemo -Z test/functional

CLI arguments

$ ./bin/nemo --help

  Usage: _nemo [options]

  Options:

    -V, --version                  output the version number
    -B, --base-directory <path>    parent directory for config/ and spec/ (or other test file) directories, relative to cwd
    -C, --config-file <path>       config file; can be JS or JSON
    -P, --profile [profile]        which profile(s) to run, out of the configuration
    -G, --grep <pattern>           only run tests matching <pattern>
    -F, --file                     run parallel by file
    -D, --data                     run parallel by data
    -S, --server                   run the nemo web server
    -L, --logging                  info level logging (errors log by default)
    -X, --scaffold <path>          inject an example nemo suite under <path>
    -Z, --scaffold-complex <path>  inject a full-featured (complex) example nemo suite under <path>
    -U, --allow-unknown-args       allow command line arguments not specified by Nemo
    -E, --exit                     force shutdown of the event loop after test run: nemo will call process.exit
    --debug-brk                    enable node's debugger, breaking on the first line
    --inspect                      activate devtools in chrome
    --no-timeouts                  remove timeouts in debug/inspect use case
    -h, --help                     output usage information

Configuration

You may either use the confit- and shortstop-powered, environment-aware configuration engine, or a plain JavaScript/JSON file.

Use the "complex scaffold" feature (-Z) to create a suite with this option.

Plain JS/JSON

If using a plain JS/JSON file, you can add it as nemo.config.json or nemo.config.js in the directory you run nemo from. Then you can run nemo simply as ./node_modules/.bin/nemo. Nemo will find your configuration file automatically.

You can also specify a differently named or placed file using the -C option as ./node_modules/.bin/nemo -C path/to/config/config.js.

Use the "basic scaffold" feature (-X) to create a suite with this option.

Profile options

output

output.reports <optional>

This convenience setting creates a timestamped, tag-based directory structure for reports and screenshots when you use the mochawesome or xunit reporters. When you use it, you can omit the specific directory/filename settings for those reporters; nemo takes care of that for you.

It is recommended to set this to path:report, which will create a report directory beneath your base directory. See Reporting below.

output.storage <optional>

You can provide an InfluxDB endpoint and store test results in it. E.g.

"storage": {
    "server": "localhost",
    "database": "nemo"
}


Currently, you will get two measurements from running tests, test and lifecycle:

schema: [{
  measurement: 'test',
  fields: {
    result: Influx.FieldType.STRING,
    error: Influx.FieldType.STRING,
    stack: Influx.FieldType.STRING,
    fullTitle: Influx.FieldType.STRING,
    duration: Influx.FieldType.INTEGER,
    threadID: Influx.FieldType.STRING,
    masterID: Influx.FieldType.STRING
  },
  tags: [
    'title',
    'profile',
    'dkey',
    'file',
    'grep'
  ]
},
{
  measurement: 'lifecycle',
  fields: {
    event: Influx.FieldType.STRING,
    threadID: Influx.FieldType.STRING,
    masterID: Influx.FieldType.STRING,
    duration: Influx.FieldType.INTEGER
  },
  tags: [
    'profile',
    'dkey',
    'grep'
  ]
}]

output.listeners <optional>

The output.listeners property can resolve to a function, an Object, or an Array (or an Array of Arrays) of functions/objects.

The function form:

module.exports = function (emitter) {
  emitter.on('test', (context, event) => {
    console.log(`another testlistener ${event.test.title} status: ${event.test.state}`);
  });
};

The Object form:

{
  type: 'pass',
  listener: (context, event) => {
    console.log(`user event listener: test passed ${JSON.stringify(event.tags)}`);
  }
}
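The Array form can combine both (a sketch reusing the two listeners shown above):

module.exports = [
    function (emitter) {
        emitter.on('test', (context, event) => {
            console.log(`another testlistener ${event.test.title} status: ${event.test.state}`);
        });
    },
    {
        type: 'pass',
        listener: (context, event) => {
            console.log(`user event listener: test passed ${JSON.stringify(event.tags)}`);
        }
    }
];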

Please see the "Events" section for more details.

base

base is the main profile configuration; other profiles merge into it.

base.tests

An absolute-path-based glob pattern, e.g. "tests": "path:spec/!(wdb)*.js".

base.parallel

only valid for 'base'.

  • if set to 'file' it will create a child process for each mocha file (alternative to -F CLI arg)
  • if set to 'data' it will create a child process for each object key under base.data (alternative to the -D CLI arg)

base.mocha

Mocha options; see the "Mocha options" section below.

base.env

Any environment variables you want set in the test process.

base.zeroExitCode

  • if set to true, nemo will always exit with a zero exit code
  • if set to false, or not set at all, the exit code is Math.min(output.totals.fail, 255)

NOTES:

  • currently base.env is only honored if nemo is launching parallel nemo instances (each as its own process). If nemo launches a single nemo instance in the main process, these are ignored.
  • any env variables in your nemo process will be merged into the env for the parallel processes (along with whatever is set under base.env)

base.maxConcurrent

A number representing the maximum number of suites nemo will execute concurrently. If not provided, there is no limit.
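Putting these base properties together, a sketch of a profiles block (all values are illustrative):

"profiles": {
    "base": {
        "tests": "path:spec/!(wdb)*.js",
        "parallel": "file",
        "env": {
            "SELENIUM_PROMISE_MANAGER": 0
        },
        "zeroExitCode": false,
        "maxConcurrent": 4,
        "mocha": {
            "timeout": 180000
        }
    }
}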

Reporting

If you use either of the built-in reporters (xunit or mochawesome), nemo will generate timestamped directories for each run. The reports will be further separated based on the parallel options. E.g.

[image: timestamped report directory structure]

In the above example, parallel options were "profile", "file", and "data".

A summary for all parallel instances can be found at summary.json

Screenshots

You can call nemo.snap() at any point in a test to grab a screenshot. Screenshots are named after the respective test, with a counter for the number of screenshots taken via nemo.snap(). E.g.

  • my awesome test.1.png
  • my awesome test.2.png
  • my awesome test.3.png

If you use the mochawesome reporter, you will see these screenshots in the Additional Context section of the html report.

If you are using mochawesome or xunit along with the output.reports setting, screenshots will be placed in the appropriate output directory based on the instance details of the test which generated them.
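For example (a sketch; it assumes nemo.snap() returns a promise that resolves once the screenshot is saved):

it('my awesome test', async function () {
    const nemo = this.nemo;
    await nemo.driver.get(nemo.data.url);
    await nemo.snap(); // saved as "my awesome test.1.png"
    await nemo.snap(); // saved as "my awesome test.2.png"
});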

Adding Nemo into the mocha context and vice versa

nemo injects a nemo instance into the Mocha context (the it, before, after, etc. functions), which can be accessed as this.nemo within the test suites.

nemo also adds the current test's context to nemo.mocha. That can be useful if you want to access or modify the test's context from within a nemo plugin.
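For example (a sketch; the suite name and the data.url property are illustrative):

describe('@mySuite@', function () {
    before(function () {
        // the injected nemo instance is available in hooks...
        return this.nemo.driver.get(this.nemo.data.url);
    });

    it('should also have nemo in the test context', function () {
        // ...and inside tests (note: arrow functions would not bind this)
        return this.nemo.driver.getTitle();
    });
});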

Parallel functionality

nemo will execute, in parallel, one mocha instance per -P (profile) x -G (grep) combination. For example, using "browser" as the profile dimension and suite name as the "grep" dimension gives 2x2=4 parallel executions.

In addition to profile and grep, there are the dimensions file and data.

Parallel by file

file will multiply the existing number of instances by the number of files selected by your configuration.

Parallel by data

data will multiply the existing number of instances by the number of keys found under profiles.base.data. It can also be overridden per profile. It also replaces nemo.data with the value of each keyed object. In other words, you can use this for parallel, data-driven testing.

If you have the following base profile configuration:

  "profiles": {
    "base": {
      "data": {
        "US": {"url": "http://www.paypal.com"},
        "FR": {"url": "http://www.paypal.fr"}
      },
      "parallel": "data",
      "tests": "path:spec/test-spec.js",
      "mocha": {
        //...
      }
    }
  }

Then the following test will run twice (in parallel) with corresponding values of nemo.data.url:

it('@loadHome@', function () {
    var nemo = this.nemo;
    return nemo.driver.get(nemo.data.url);//runs once with paypal.com, once with paypal.fr
});

Parallel reporting

Using a reporter which gives file output will be the most beneficial. Out of the box, nemo is ready to use mochawesome or xunit for outputting a report per parallel instance.

Mocha options

The properties passed in to the "mocha" property of config.json will be applied to the mocha instances that are created. In general, these properties correlate with the mocha command line arguments. E.g. if you want this:

mocha --timeout 180000

You should add this to the "mocha" property within "profiles" of config.json:

"profile": {
	...other stuff,
	"mocha": {
		"timeout": 180000
	}
}

nemo creates mocha instances programmatically. Unfortunately, not all mocha command line options are available when instantiating it this way. One argument that is not supported is the --require flag, which is useful if you want to require a module, e.g. babel-register (for Babel v6) or @babel/register (for Babel v7) for transpilation. Thus, we added a "require" property to the profile/base/mocha block of nemo.config.json, which takes either a single npm module name as a string or an array of npm module names. If it is an array, nemo will require each one before instantiating the mocha instances.
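For example, to transpile specs with Babel v7 (a sketch; it assumes @babel/polyfill and @babel/register are installed as dev dependencies):

"profiles": {
    "base": {
        "mocha": {
            "require": ["@babel/polyfill", "@babel/register"]
        }
    }
}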

Events

Nemo publishes lifecycle events which can help to monitor progress.

instance:start

Published when an instance starts. The event is an object.

Property   Type           Description
tags       Tags{object}

instance:end

Published when an instance ends. The event is an InstanceResult object.

master:end

This event is published when all instances are completed. The event is an array of InstanceResult objects.

root:before

This event is published when root suite execution starts.

suite:before

This event is published when a suite starts.

suite

This event is published when a suite finishes.

test:before

This event is published when test execution starts. The event is an object. You can use "uid" to correlate this event with other test events from the same instance.

Property   Type           Description
tags       Tags{object}
test       TestResult     modified Mocha test object, without all the values present on the test event (see below)

test

This event is published at the conclusion of a test. The event is an object. You can use "uid" to correlate this event with other test events from the same instance.

Property   Type           Description
tags       Tags{object}
test       TestResult     modified Mocha test object (see below)
duration   ms{number}     run duration for this test

<custom events>

You can publish custom events from within your tests using nemo.runner.emit(EventType{string}[, EventPayload{object}])

Nemo will publish this on the main event listener as an object with the following properties:

Property   Type                   Description
tags       Tags{object}
payload    EventPayload{object}   user defined, or an empty object
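A sketch of emitting and receiving a custom event (the event name and payload are illustrative; the listener uses the output.listeners function form shown earlier):

// in a test
it('@checkout@', function () {
    this.nemo.runner.emit('checkoutStarted', { cart: 'abc123' });
    return this.nemo.driver.get(this.nemo.data.url);
});

// in an output.listeners module
module.exports = function (emitter) {
    emitter.on('checkoutStarted', (context, event) => {
        console.log(`custom event from instance ${event.tags.uid}:`, event.payload);
    });
};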

Common event objects

InstanceResult

Property      Type           Description
tags          Tags{object}
testResults   TestResult[]
duration      ms{number}     run duration for this instance

TestResult

Modified Mocha test object

Property          Type         Description
file              {string}     path to the file containing this test
fullTitleString   {string}     suite and test title concatenated
state             {string}     passed or failed
duration          ms{number}   run duration for this test

There are many other properties; inspect in a debugger for more information.

Tags

Property                Type       Description
profile                 {string}   the profile which spawned this instance
uid                     {string}   unique identifier for this instance
reportFile (optional)   {string}   path to the report file for this instance (if generated)
grep (optional)         {string}   grep string, if provided
data (optional)         {string}   data key for this instance, if parallel by data is being used

Webdriver lifecycle options

<profile>.driverPerTest

Leave this unset or set it to false for one webdriver per suite; set it to true for one webdriver per test.

Example (find this in the test configuration):

...
"driverPerSuite": {
    "tests": "path:./lifecycle.js",
    "driverPerTest": false,
    "mocha": {
        "grep": "@entireSuite@"
    }
},
"driverPerTest": {
    "tests": "path:./lifecycle.js",
    "driverPerTest": true,
    "mocha": {
        "grep": "@eachTest@"
    }
}
...

When driverPerTest is true, the global beforeEach hook will have a nemo instance injected; when driverPerTest is false, it will not.

Please note: When using the driverPerTest option, there will be no reliable nemo instance in the before/after lifecycle context.

Custom CLI Options (feature incomplete)

By default, Nemo will not accept CLI arguments that are not listed under "CLI arguments" above.

Custom arguments can be useful for programmatically customizing Nemo configuration.

Use -U or --allow-unknown-args to prevent Nemo from validating CLI arguments

$ ./bin/nemo -U --myCustomArg myValue --anotherArg

Further enhancement must be made in order to take advantage of custom arguments when running in parallel mode. Please see #21
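Until then, one way to consume custom arguments in a single-process run is to parse process.argv from a JS configuration file, as the profiles.js example quoted in the issues below does. A minimal sketch (minimist is an assumed dependency; the argument names match the command above):

const minimist = require('minimist');

module.exports = function () {
    const argv = minimist(process.argv.slice(2));
    // with the command above: argv.myCustomArg === 'myValue', argv.anotherArg === true
    return {
        base: {
            env: {
                myCustomArg: argv.myCustomArg
            }
        }
    };
};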

nemo's People

Contributors

bijoyv, dependabot[bot], ethangodt, grawk, greenkeeper[bot], limengyang990726, lucasmaupin, nikulkarni, noyabronok, sainianubhav


nemo's Issues

Parallel by data: nemo.data merges data from top-level keys

If you run nemo with parallel:data using a data source where a sub-level key is the same as one of the other top-level keys, then nemo.data merges the identical keys, even though they're on separate levels:

Example

data source:

{
    "A": {
        "dataset": 1,
        "B": {
            "This should be the only property A can access under B": 1
        }
    },
    "B": {
        "dataset": 2
    }
}

minimal test code:

console.log("dataset:", nemo.data.dataset);
console.log(nemo.data);

output:

// A case
dataset: 1
{ A:
   { dataset: 1,
     B: { 'This should be the only property A can access under B': 1 } },
  B:
   { dataset: 2,
     'This should be the only property A can access under B': 1 },
  dataset: 1 }

// B case
dataset: 2

{ A:
   { dataset: 1,
     B: { 'This should be the only property A can access under B': 1 } },
  B: { dataset: 2 },
  dataset: 2 }

In both cases, the data from inside A or B is available at the top level of nemo.data, but the A and B keys are also there. Since A has its own B property, it gets merged with the top-level B data.

When running parallel by data, I would expect nemo.data to only contain the data from each top-level key at a time, namely:

// A case

{
  "dataset": 1,
  "B": {
    "This should be the only property A can access under B": 1
  }
}

// B case
{
  "dataset": 2
}

differentiate nemo stdout summary table

The nemo stdout summary table doesn't differentiate itself very well. If a parent process calls nemo, the summary isn't easily identifiable as "nemo" output.

Add a header that says "Nemo Run Summary"

What do you mean parallel?

What do you mean parallel? Is it really parallel, as in two threads, how? Or just asynchronous?

    -F, --file                   run parallel by file
    -D, --data                   run parallel by data

Can we have a switch for zero exit code when tests fail

Can we have a switch for a non-zero exit code for failing tests? This causes the Jenkins job to fail; for me, I only need to view test run info in the automation report, so I want the Jenkins job to always succeed.

An in-range update of eslint is breaking the build 🚨

The devDependency eslint was updated from 6.7.2 to 6.8.0.

🚨 View failing branch.

This version is covered by your current version range and after updating it in your project the build failed.

eslint is a devDependency of this project. It might not break your production code or affect downstream projects, but probably breaks your build or test tools, which may prevent deploying or publishing.

Status Details
  • continuous-integration/travis-ci/push: The Travis CI build failed (Details).

Release Notes for v6.8.0
  • c5c7086 Fix: ignore aligning single line in key-spacing (fixes #11414) (#12652) (YeonJuan)
  • 9986d9e Chore: add object option test cases in yield-star-spacing (#12679) (YeonJuan)
  • 1713d07 New: Add no-error-on-unmatched-pattern flag (fixes #10587) (#12377) (ncraley)
  • 5c25a26 Update: autofix bug in lines-between-class-members (fixes #12391) (#12632) (YeonJuan)
  • 4b3cc5c Chore: enable prefer-regex-literals in eslint codebase (#12268) (薛定谔的猫)
  • 05faebb Update: improve suggestion testing experience (#12602) (Brad Zacher)
  • 05f7dd5 Update: Add suggestions for no-unsafe-negation (fixes #12591) (#12609) (Milos Djermanovic)
  • d3e43f1 Docs: Update no-multi-assign explanation (#12615) (Yuping Zuo)
  • 272e4db Fix: no-multiple-empty-lines: Adjust reported loc (#12594) (Tobias Bieniek)
  • a258039 Fix: no-restricted-imports schema allows multiple paths/patterns objects (#12639) (Milos Djermanovic)
  • 51f9620 Fix: improve report location for array-bracket-spacing (#12653) (Milos Djermanovic)
  • 45364af Fix: prefer-numeric-literals doesn't check types of literal arguments (#12655) (Milos Djermanovic)
  • e3c570e Docs: Add example for expression option (#12694) (Arnaud Barré)
  • 6b774ef Docs: Add spacing in comments for no-console rule (#12696) (Nikki Nikkhoui)
  • 7171fca Chore: refactor regex in config comment parser (#12662) (Milos Djermanovic)
  • 1600648 Update: Allow $schema in config (#12612) (Yordis Prieto)
  • acc0e47 Update: support .eslintrc.cjs (refs eslint/rfcs#43) (#12321) (Evan Plaice)
  • 49c1658 Chore: remove bundling of ESLint during release (#12676) (Kai Cataldo)
  • 257f3d6 Chore: complete to move to GitHub Actions (#12625) (Toru Nagashima)
  • ab912f0 Docs: 1tbs with allowSingleLine edge cases (refs #12284) (#12314) (Ari Kardasis)
  • dd1c30e Sponsors: Sync README with website (ESLint Jenkins)
  • a230f84 Update: include node version in cache (#12582) (Eric Wang)
  • 8b65f17 Chore: remove references to parser demo (#12644) (Kai Cataldo)
  • e9cef99 Docs: wrap {{}} in raw liquid tags to prevent interpolation (#12643) (Kai Cataldo)
  • e707453 Docs: Fix configuration example in no-restricted-imports (fixes #11717) (#12638) (Milos Djermanovic)
  • 19194ce Chore: Add tests to cover default object options in comma-dangle (#12627) (YeonJuan)
  • 6e36d12 Update: do not recommend require-atomic-updates (refs #11899) (#12599) (Kai Cataldo)
Commits

The new version differs by 29 commits.

  • 9738f8c 6.8.0
  • ba59cbf Build: changelog update for 6.8.0
  • c5c7086 Fix: ignore aligning single line in key-spacing (fixes #11414) (#12652)
  • 9986d9e Chore: add object option test cases in yield-star-spacing (#12679)
  • 1713d07 New: Add no-error-on-unmatched-pattern flag (fixes #10587) (#12377)
  • 5c25a26 Update: autofix bug in lines-between-class-members (fixes #12391) (#12632)
  • 4b3cc5c Chore: enable prefer-regex-literals in eslint codebase (#12268)
  • 05faebb Update: improve suggestion testing experience (#12602)
  • 05f7dd5 Update: Add suggestions for no-unsafe-negation (fixes #12591) (#12609)
  • d3e43f1 Docs: Update no-multi-assign explanation (#12615)
  • 272e4db Fix: no-multiple-empty-lines: Adjust reported loc (#12594)
  • a258039 Fix: no-restricted-imports schema allows multiple paths/patterns objects (#12639)
  • 51f9620 Fix: improve report location for array-bracket-spacing (#12653)
  • 45364af Fix: prefer-numeric-literals doesn't check types of literal arguments (#12655)
  • e3c570e Docs: Add example for expression option (#12694)

There are 29 commits in total.

See the full diff

FAQ and help

There is a collection of frequently asked questions. If those don’t help, you can always ask the humans behind Greenkeeper.


Your Greenkeeper Bot 🌴

An in-range update of minimist is breaking the build 🚨


☝️ Important announcement: Greenkeeper will be saying goodbye 👋 and passing the torch to Snyk on June 3rd, 2020! Find out how to migrate to Snyk and more at greenkeeper.io


The dependency minimist was updated from 1.2.0 to 1.2.1.

🚨 View failing branch.

This version is covered by your current version range and after updating it in your project the build failed.

minimist is a direct dependency of this project, and it is very likely causing it to break. If other packages depend on yours, this update is probably also breaking those in turn.

Status Details
  • continuous-integration/travis-ci/push: The Travis CI build failed (Details).

Commits

The new version differs by 5 commits.

  • 29783cd 1.2.1
  • 6be5dae add test
  • ac3fc79 fix bad boolean regexp
  • 4cf45a2 Merge pull request #63 from lydell/dash-dash-docs-fix
  • 5fa440e move the opts['--'] example back where it belongs

See the full diff

FAQ and help

There is a collection of frequently asked questions. If those don’t help, you can always ask the humans behind Greenkeeper.


Your Greenkeeper Bot 🌴

express uninstalled if --save=dev

This may be an npm bug, but if you install a project where:

  • nemo is a devDep
  • express is a dep
  • use --save=dev npm option

express is pruned out and it causes an error when running nemo

Improvements (suggestions)

Hi,
I've been working with Nemo for a while and really enjoy it. Here is some feedback:

  • JavaScript configuration over JSON, so it will be easier to do things dynamically + avoid repetition in JSON config files
  • textEquals assertion doesn't show actual and expected text (only one of these), so it's harder to debug
  • When I run tests in parallel: "file" and one test has it.only, I would only like to run that one test. Currently it runs all files
  • When running tests in parallel, it creates a report file for each test file. It would be easier to just open a single report file with all results
  • I use a lot of async / await and if a test fails, I usually don't see in a stack trace where it happened, which is much harder to debug. I typically see this:
    [screenshot of stack trace omitted]

describe.skip is not working as expected

When describe.skip is used in mocha tests, tests fail with the error Uncaught TypeError: Cannot convert undefined or null to object.

Context

When describe.skip is used, mocha's Suite._beforeAll array will be empty. In the beforeSuite event handler, we try to add nemo's beforeAll to the front of the actual mocha beforeAll array here: https://github.com/krakenjs/nemo/blob/master/lib/runner/mocha.js#L89. When Suite._beforeAll is empty and we do Suite._beforeAll.unshift(Suite._beforeAll.pop()), we add undefined to the Suite._beforeAll array. When mocha then tries to clean references by deleting the test function reference, as shown here: https://github.com/mochajs/mocha/blob/master/lib/suite.js#L527, it causes the above error.

Solution

Adding a check for an empty array would solve the problem.

Load plugins in an earlier phase.

We plan to build a Data Provider plugin for nemo:

"Data-Provider": {
    "module": "Data-Provider",
    "register": true,
    "arguments": [
        "path:...",
        [""]
    ]
}

But the problem now is that nemo reads all data from the config.json file and then loads plugins. The data generated by the data provider plugin will not be loaded by nemo anymore...

Is this a known feature or a bug?

I can invoke tests npm run nemo -- -G '#phone #add #US' and it runs two tests only for US.

I can invoke tests npm run nemo -- -G '#phone #add #US #HOME' and it runs just one test only for US and HOME.


['AU', 'BR', 'ES', 'FR', 'GB', 'GB', 'IN', 'MX', 'US'].forEach(CC => {
	['HOME', 'WORK'].forEach(t => {
		describe(`#phone #add #${CC} #${t}`, function () {
			it(`should test adding of phone of a specific type`, async function () {
				await add.default(this.nemo, phoneInfo.types[CC][t], CC);
			});
		});
	});
});

I want to know whether (A) this is a known feature that will stay, or (B) it's a bug and tests shouldn't be written like this.

I think that when nemo goes to build metadata about the tests, it somehow parses all the specs and just doesn't run them. Is that so? It runs them later, after reading the command line arguments.

Orphaned *driver processes

Finding that geckodriver, chromedriver, etc. processes are getting orphaned. Need to understand if that is an artifact of nemo or selenium-webdriver. Current workaround is to periodically killall those processes when running in a persistent environment (like on a local development computer).

@LucasMaupin is this the issue you tried to fix with #57 ?

destroyNemo not properly called in Suite.afterAll in mocha.js

I am currently trying out your framework and enjoying it so far! However, when I test in parallel, the drivers never quit. I am testing using BrowserStack and it can be very problematic. I changed:

Suite.afterAll(function () {
    destroyNemo.call(this);
});

to

Suite.afterAll(destroyNemo);

in mocha.js and it solved the issue.

Some profiles missed when launching multiple cases in one nemo command

When I executed the following command, sometimes only the first profile "AddRemove*******kConfirmation" was executed; the other profiles were ignored, with no report or logs for the missed cases.
I did some debugging and found that the "glob" method in flow.js only added 2 instances (the first and base) and then went to the next step. Please take a look.

node_modules/.bin/nemo -B newtests/functional/ -P AddRemovekConfirmation,AddRemoveSGkHappyFlow,Walletds,WalletCard,Withdraw*******Bank,base -D

custom CLI args not available in parallel mode

Because the parent process.argv object is not available in a child process, any custom CLI args will not be available in the mocha tests. We need to add a new property to the nemo object (nemo.argv) in order to pass the parent process.argv into the tests.

Things that I think we can improve upon in code.

  1. Capitalize required names only when new is required, e.g. Glob in flow.js should be glob.
  2. If a name clashes with an exported function (e.g. the required glob library clashes with glob()), we should name the imported variable xlib or xLib, e.g. globlib or globLib.
  3. Exports should be consistent. There are at least three ways we are exporting functions. Suggestion below.
  4. Convert promise-based code to async/await.
  5. Stop using the const x = function x() {} structure. I am not sure it helps. If anything, the ESLint rules need tweaking to allow const to return a value. Not every function needs to be function () {}.

Consistent Exports

function x() {

}

const y = Math.PI * 22/7;

module.exports = {
    x,
    y
};

Is there a way to know how many tests are going to run before tests start running?

There are multiple things that can be done.

Knowing how many tests are going to run helps in gauging how much time it might take; I can plan my coffee breaks accordingly.

  1. We can tell users how many tests are going to run. Just the number. This lets the dev know whether the grep query is accurate.
  2. We can let users dry-run tests; this will list all the tests that are going to run. This also helps with grep query accuracy.

I think that nemo or mocha might know which tests will run beforehand, as #37 happens.

Whaddayya think?

[Feature] Default logging behavior

Will there be a default logging behavior? I found that, while writing and testing tests, I would get zero feedback from nemo/nemo-runner. A default logger would be nice. If you made it lightly configurable, if only just to surface errors from nemo, and not necessarily from what you are testing, that would be very helpful. If someone wanted to, they could easily turn it off completely with debugger: false.

Plain JS config no longer merges top-level data

With the JSON config, a top-level data section would be merged with nemo.data at runtime, which was a neat way to set default data, even when running parallel by data:

{
    "plugins": {
        "view": {
            "module": "nemo-view",
            "arguments": [
                "path:locator"
            ]
        }
    },
    "output": {
        "reports": "path:report"
    },
    "data": {
        "THIS_WILL_SHOW_UP_IN_NEMO_DATA": 1
    },
    "profiles": {
// etc.

With the plain JS config, top-level data doesn't get merged with the runtime nemo.data:

const path = require("path");

module.exports = {
    plugins: {
        view: {
            module: "nemo-view"
        }
    },
    output: {
        reports: path.resolve("test/nemo2", "report")
    },
    data: {
        THIS_NEVER_SHOWS_UP_IN_NEMO_DATA: 1
    },
    profiles: {
        base: {
            tests: path.resolve("test/nemo2", "*test.js"),
            driver: {
                browser: "chrome"
            },
            data: {
                baseUrl: "https://www.google.com"
            },
            mocha: {
                timeout: 180000,
                reporter: "mochawesome",
                reporterOptions: {
                    quiet: true
                }
            }
        }
    }
};

Screenshot 'on exception' doesn't work on nemo@^4

with config for nemo@4:

"screenshot": {
            "module": "nemo-screenshot",
            "register": true,
            "arguments": [
                "path:report/screenshots", [ "exception"]] }

there is no screenshot on exception.
But screenshots do work on click, and via nemo.snap().

Dynamically generating tests

We have a situation where we need to perform an asynchronous operation to fetch data and then generate tests for each item of the data. Any thoughts?

const tests = asyncGetDataFromDB();
describe('generate tests dynamically', function() {
  tests.forEach(function(test) {
    it(test.desc, async function() {
      // make assertions
    });
  });
});

option to quit driver after every test - it()

Nemo runner should allow users to quit the driver after every test if they want, for example in an afterEach hook. Currently, deleting cookies in afterEach does not solve this problem; the driver still uses the previous login information.

Plain JS config: How to load plugins?

The plain JS config example creates the plugin section as follows:

module.exports = {
    plugins: {
        view: {
            module: "nemo-view"
        }
    },

But nemo.view is undefined:

describe('@firstTest@', function () {
  it('should load a website', async function () {
    console.log('NEMO VIEW', this.nemo.view);
    await this.nemo.driver.get(this.nemo.data.baseUrl);
  });
});

Output:

NEMO VIEW undefined

I've tried both module: require.resolve("nemo-view") and module: require("nemo-view") but neither gets nemo-view to load.

Enabling DEBUG=nemo-core:log only shows snap being loaded:

nemo-core:log register plugin snap +0ms
  nemo-core:log plugin.registration fulfill with 1 plugins. +1ms

Regression in 4.9.1 with merged profiles?

I seem to be experiencing a new error with nemo 4.9.1 that I do not see in 4.6.0 or 4.9.0 for that matter.

The error I get is given below. It seems to stem from having a mocks profile defined that has different data and env fields than the base profile defines. Particular variables are not being merged into the map appropriately after the upgrade, or that's what it looks like anyway. In my case specifically, this causes a call to nemo.data.baseUrl in a spec to return undefined, where in previous versions it seemed to contain a proper merge; maybe it worked before by coincidence?

Output Error

{ WebDriverError: unknown error: unhandled inspector error: {"code":-32000,"message":"Cannot navigate to invalid URL"}
  (Session info: headless chrome=70.0.3538.110)
  (Driver info: chromedriver=2.45.615355 (d5698f682d8b2742017df6c81e0bd8e6a3063189),platform=Mac OS X 10.13.6 x86_64)
URL when there was an error: data:,
    at Object.checkLegacyResponse (node_modules/selenium-webdriver/lib/error.js:546:15)
    at parseHttpResponse (node_modules/selenium-webdriver/lib/http.js:509:13)
    at doSend.then.response (node_modules/selenium-webdriver/lib/http.js:441:30)
    at propagateAslWrapper (node_modules/async-listener/index.js:504:23)
    at node_modules/async-listener/glue.js:188:31
    at node_modules/async-listener/index.js:541:70
    at node_modules/async-listener/glue.js:188:31
    at <anonymous>
    at process._tickDomainCallback (internal/process/next_tick.js:228:7)
    at process.fallback (node_modules/async-listener/index.js:565:15) name: 'WebDriverError', remoteStacktrace: '' }

config.json

{
  "plugins": {
    "view": {
      "module": "nemo-view",
      "arguments": ["path:locators"]
    },
    "jaws": {
      "module": "nemo-jaws",
      "arguments": ...
    },
    "assert": {
      "module": "path:./plugins/nemo-assert"
    },
    "cookie": {
      "module": "path:./plugins/nemo-cookie"
    },
    "context": {
      "module": "path:./plugins/nemo-context"
    },
    "window": {
      "module": "nemo-window2"
    },
    "browser": {
      "module": "path:./plugins/nemo-browser"
    }
  },
  "data": {
    "baseUrl": "env:baseUrl",
    "targetHost": "env:targetHost",
    "testDataFilePath": "path:./testData/",
    "emailPrefix": "env:emailPrefix"
  },
  "output": {
    "reports": "path:reports"
  },
  "profiles": "exec:./config/profiles"
}

profiles.js

Some vars removed due to sensitivity

const path = require('path');
const minimist = require('minimist');

module.exports = function() {
  const argv = minimist(process.argv.slice(2));
  const defaultHost = '<host-here>';
  const driverPerTest = argv.driverPerTest || false;

  // Check for overrides in environment variables and command line args
  const targetHostOverride = argv.targetHost || process.env.targetHost;
  const baseUrlOverride = argv.baseUrl || process.env.baseUrl;
  const targetServerOverride = argv.targetServer || process.env.targetServer;

  const env = {
    SELENIUM_PROMISE_MANAGER: 0,
    targetHost: targetHostOverride || defaultHost,
    baseUrl: baseUrlOverride || `<base-url-here>`,
    // other args
  };

  return {
    base: {
      tests: argv.testsPath
        ? path.resolve(__dirname, `../${argv.testsPath}/**/*.js`)
        : path.resolve(__dirname, '../spec/**/*.js'),
      driverPerTest,
      env,
      mocha: {
        timeout: argv.mochaTimeout || 600000,
        require: ['@babel/polyfill', '@babel/register'],
        grep: argv.grep,
        reporter: argv.reporter || 'mochawesome',
        reporterOptions: { enableCode: false },
      },
      driver: {
        browser: argv.targetBrowser || 'chrome',
      },
      maxConcurrent: argv.maxConcurrent || 1,
    },
    local: {
      mocha: {
        reporterOptions: { autoOpen: true },
      },
    },
    mocks: {
      mocha: {
        timeout: 600000,
        require: [
          '@babel/polyfill',
          '@babel/register',
          path.resolve(__dirname, '../config/mock-setup.js'),
        ],
        reporter: 'xunit',
        delay: true,
        grep: argv.grep,
      },
      tests: path.resolve(__dirname, '../spec/mocks/**/*.js'),
      env: {
        SELENIUM_PROMISE_MANAGER: 0,
        targetHost: targetHostOverride || 'localhost',
        baseUrl: baseUrlOverride || 'http://localhost:9000',
        data: process.env.data || '{}',
      },
      driver: {
        browser: 'chrome',
        serverCaps: {
          chromeOptions: {
            args: ['headless', 'no-sandbox', 'disable-gpu', 'window-size=1200,800'],
          },
        },
        server: targetServerOverride,
      },
    },
  };
};

Able to support multiple reporters

Right now, only one reporter supported to be used at same time.
If we configure the reporter like the following, only the xunit report gets generated:

"mocha": {
    "timeout": 180000,
    "reporter": "mochawesome",
    "reporter": "xunit",
    "retries": 0,
    "require": "babel-register",
    "grep": "argv:grep"
},
The xunit report is for publishing the result to Jenkins; the JSON report is easy to feed into a database.

reference link: https://github.com/mochajs/mocha/pull/2184/files

Multiple driver instances are not stable

We were instantiating a driver per test in parallel, having 20 tests. Oftentimes we'd have failures requiring us to rerun the tests. Once we configured all the tests under a single describe, with the default driver-per-suite behavior, our tests ended up much more stable.

An in-range update of flatted is breaking the build 🚨


☝️ Important announcement: Greenkeeper will be saying goodbye 👋 and passing the torch to Snyk on June 3rd, 2020! Find out how to migrate to Snyk and more at greenkeeper.io


The dependency flatted was updated from 2.0.1 to 2.0.2.

🚨 View failing branch.

This version is covered by your current version range and after updating it in your project the build failed.

flatted is a direct dependency of this project, and it is very likely causing it to break. If other packages depend on yours, this update is probably also breaking those in turn.

Status Details
  • continuous-integration/travis-ci/push: The Travis CI build failed (Details).

Commits

The new version differs by 11 commits.

See the full diff

FAQ and help

There is a collection of frequently asked questions. If those don’t help, you can always ask the humans behind Greenkeeper.


Your Greenkeeper Bot 🌴

An in-range update of graphql is breaking the build 🚨

The dependency graphql was updated from 14.5.8 to 14.6.0.

🚨 View failing branch.

This version is covered by your current version range and after updating it in your project the build failed.

graphql is a direct dependency of this project, and it is very likely causing it to break. If other packages depend on yours, this update is probably also breaking those in turn.

Status Details
  • continuous-integration/travis-ci/push: The Travis CI build failed (Details).

FAQ and help

There is a collection of frequently asked questions. If those don’t help, you can always ask the humans behind Greenkeeper.


Your Greenkeeper Bot 🌴

An in-range update of async is breaking the build 🚨

The dependency async was updated from 3.1.0 to 3.1.1.

🚨 View failing branch.

This version is covered by your current version range and after updating it in your project the build failed.

async is a direct dependency of this project, and it is very likely causing it to break. If other packages depend on yours, this update is probably also breaking those in turn.

Status Details
  • continuous-integration/travis-ci/push: The Travis CI build failed (Details).

FAQ and help

There is a collection of frequently asked questions. If those don’t help, you can always ask the humans behind Greenkeeper.


Your Greenkeeper Bot 🌴

duration is null in summary.json file

{
	"totals": {
		"total": 12,
		"pass": 8,
		"fail": 4
	},
	"instances": [
		{
			"tags": {
				"profile": "chrome",
				"grep": "@p2",
				"key": "bing",
				"reportFile": "/03-26-2018/13-58-09/profile!chrome!grep!@p2!key!bing/nemo-report.html",
				"uid": "39746095-3cec-4cb0-b3bb-89655cfab133"
			},
			"duration": null,
			"summary": {
				"total": 1,
				"pass": 1,
				"fail": 0
			}
		},
		{
...

Support CLI option `exit` to force exit after mocha done running the tests

In mocha versions <4, mocha kills itself after it is done running all the tests. From mocha versions >=4, mocha stays alive while any active handles remain on the event loop, and the process will just hang. If we run mocha through the command line, there is an option to handle this by passing --exit to mocha. Since we use mocha programmatically in Nemo, we cannot pass it as a mocha option in the config. So it would be great to have a CLI option we can pass to Nemo, something like nemo --exit, to force exit by shutting down the event loop. We can keep the default no-exit behavior when it is not passed as a command line argument.
