
nemo's Issues

duration is null in summary.json file

{
	"totals": {
		"total": 12,
		"pass": 8,
		"fail": 4
	},
	"instances": [
		{
			"tags": {
				"profile": "chrome",
				"grep": "@p2",
				"key": "bing",
				"reportFile": "/03-26-2018/13-58-09/profile!chrome!grep!@p2!key!bing/nemo-report.html",
				"uid": "39746095-3cec-4cb0-b3bb-89655cfab133"
			},
			"duration": null,
			"summary": {
				"total": 1,
				"pass": 1,
				"fail": 0
			}
		},
		{
...

An in-range update of eslint is breaking the build 🚨

The devDependency eslint was updated from 6.7.2 to 6.8.0.

🚨 View failing branch.

This version is covered by your current version range and after updating it in your project the build failed.

eslint is a devDependency of this project. It might not break your production code or affect downstream projects, but probably breaks your build or test tools, which may prevent deploying or publishing.

Status Details
  • โŒ continuous-integration/travis-ci/push: The Travis CI build failed (Details).

Release Notes for v6.8.0
  • c5c7086 Fix: ignore aligning single line in key-spacing (fixes #11414) (#12652) (YeonJuan)
  • 9986d9e Chore: add object option test cases in yield-star-spacing (#12679) (YeonJuan)
  • 1713d07 New: Add no-error-on-unmatched-pattern flag (fixes #10587) (#12377) (ncraley)
  • 5c25a26 Update: autofix bug in lines-between-class-members (fixes #12391) (#12632) (YeonJuan)
  • 4b3cc5c Chore: enable prefer-regex-literals in eslint codebase (#12268) (薛定谔的猫)
  • 05faebb Update: improve suggestion testing experience (#12602) (Brad Zacher)
  • 05f7dd5 Update: Add suggestions for no-unsafe-negation (fixes #12591) (#12609) (Milos Djermanovic)
  • d3e43f1 Docs: Update no-multi-assign explanation (#12615) (Yuping Zuo)
  • 272e4db Fix: no-multiple-empty-lines: Adjust reported loc (#12594) (Tobias Bieniek)
  • a258039 Fix: no-restricted-imports schema allows multiple paths/patterns objects (#12639) (Milos Djermanovic)
  • 51f9620 Fix: improve report location for array-bracket-spacing (#12653) (Milos Djermanovic)
  • 45364af Fix: prefer-numeric-literals doesn't check types of literal arguments (#12655) (Milos Djermanovic)
  • e3c570e Docs: Add example for expression option (#12694) (Arnaud Barré)
  • 6b774ef Docs: Add spacing in comments for no-console rule (#12696) (Nikki Nikkhoui)
  • 7171fca Chore: refactor regex in config comment parser (#12662) (Milos Djermanovic)
  • 1600648 Update: Allow $schema in config (#12612) (Yordis Prieto)
  • acc0e47 Update: support .eslintrc.cjs (refs eslint/rfcs#43) (#12321) (Evan Plaice)
  • 49c1658 Chore: remove bundling of ESLint during release (#12676) (Kai Cataldo)
  • 257f3d6 Chore: complete to move to GitHub Actions (#12625) (Toru Nagashima)
  • ab912f0 Docs: 1tbs with allowSingleLine edge cases (refs #12284) (#12314) (Ari Kardasis)
  • dd1c30e Sponsors: Sync README with website (ESLint Jenkins)
  • a230f84 Update: include node version in cache (#12582) (Eric Wang)
  • 8b65f17 Chore: remove references to parser demo (#12644) (Kai Cataldo)
  • e9cef99 Docs: wrap {{}} in raw liquid tags to prevent interpolation (#12643) (Kai Cataldo)
  • e707453 Docs: Fix configuration example in no-restricted-imports (fixes #11717) (#12638) (Milos Djermanovic)
  • 19194ce Chore: Add tests to cover default object options in comma-dangle (#12627) (YeonJuan)
  • 6e36d12 Update: do not recommend require-atomic-updates (refs #11899) (#12599) (Kai Cataldo)
Commits

The new version differs by 29 commits.

  • 9738f8c 6.8.0
  • ba59cbf Build: changelog update for 6.8.0
  • c5c7086 Fix: ignore aligning single line in key-spacing (fixes #11414) (#12652)
  • 9986d9e Chore: add object option test cases in yield-star-spacing (#12679)
  • 1713d07 New: Add no-error-on-unmatched-pattern flag (fixes #10587) (#12377)
  • 5c25a26 Update: autofix bug in lines-between-class-members (fixes #12391) (#12632)
  • 4b3cc5c Chore: enable prefer-regex-literals in eslint codebase (#12268)
  • 05faebb Update: improve suggestion testing experience (#12602)
  • 05f7dd5 Update: Add suggestions for no-unsafe-negation (fixes #12591) (#12609)
  • d3e43f1 Docs: Update no-multi-assign explanation (#12615)
  • 272e4db Fix: no-multiple-empty-lines: Adjust reported loc (#12594)
  • a258039 Fix: no-restricted-imports schema allows multiple paths/patterns objects (#12639)
  • 51f9620 Fix: improve report location for array-bracket-spacing (#12653)
  • 45364af Fix: prefer-numeric-literals doesn't check types of literal arguments (#12655)
  • e3c570e Docs: Add example for expression option (#12694)

There are 29 commits in total.

See the full diff

FAQ and help

There is a collection of frequently asked questions. If those don't help, you can always ask the humans behind Greenkeeper.


Your Greenkeeper Bot 🌴

differentiate nemo stdout summary table

The nemo stdout summary table doesn't differentiate itself very well. If a parent process calls nemo, the summary isn't easily identifiable as "nemo" output.

Add a header that says "Nemo Run Summary"

An in-range update of flatted is breaking the build 🚨


โ˜๏ธ Important announcement: Greenkeeper will be saying goodbye ๐Ÿ‘‹ and passing the torch to Snyk on June 3rd, 2020! Find out how to migrate to Snyk and more at greenkeeper.io


The dependency flatted was updated from 2.0.1 to 2.0.2.

🚨 View failing branch.

This version is covered by your current version range and after updating it in your project the build failed.

flatted is a direct dependency of this project, and it is very likely causing it to break. If other packages depend on yours, this update is probably also breaking those in turn.

Status Details
  • โŒ continuous-integration/travis-ci/push: The Travis CI build failed (Details).

Commits

The new version differs by 11 commits.

See the full diff

FAQ and help

There is a collection of frequently asked questions. If those don't help, you can always ask the humans behind Greenkeeper.


Your Greenkeeper Bot 🌴

Can we have a switch for zero exit code when tests fail

Can we have a switch to force a zero exit code even when tests fail? The non-zero exit causes the Jenkins job to fail, and in my case I only need to view the test run info in the automation report, so I want the Jenkins job to always succeed.

destroyNemo not properly called in Suite.afterAll in mocha.js

I am currently trying out your framework and enjoying it so far! However, when I test in parallel the drivers never quit. I am testing using BrowserStack and it can be very problematic. I changed:

    Suite.afterAll(function () {
      destroyNemo.call(this);
    });

to

    Suite.afterAll(destroyNemo);

in mocha.js and it solved the issue.

Things that I think we can improve upon in code.

  1. Capitalize required (or other) names only when new is required; e.g. Glob in flow.js should be glob.
  2. If a name clashes with an exported function (e.g. glob, the required library, clashing with glob()), we should name the imported variable xlib or xLib, e.g. globlib or globLib.
  3. Exports should be consistent. There are at least three ways we are exporting functions. Suggestion below.
  4. Convert promise-based code to async/await.
  5. Stop using the const x = function x() {} structure. I am not sure it helps. If anything, the ESLint rules need tweaking to allow const to return a value. Not every function needs to be function () {}.

Consistent Exports

function x() {

}

const y = Math.PI * 22/7;

module.exports = {
    x,
    y
};

Load plugins in earlier phase.

We plan to build a Data Provider plugin for nemo:

    "Data-Provider": {
        "module": "Data-Provider",
        "register": true,
        "arguments": [
            "path:...",
            [""]
        ]
    }

But the problem is that nemo reads all data from the config.json file first and then loads the plugins. The data generated by the data-provider plugin will therefore never be loaded by nemo...

Plain JS config no longer merges top-level data

With the JSON config, a top-level data section would be merged with nemo.data at runtime, which was a neat way to set default data, even when running parallel by data:

{
    "plugins": {
        "view": {
            "module": "nemo-view",
            "arguments": [
                "path:locator"
            ]
        }
    },
    "output": {
        "reports": "path:report"
    },
    "data": {
        "THIS_WILL_SHOW_UP_IN_NEMO_DATA": 1
    },
    "profiles": {
// etc.

With the plain JS config, top-level data doesn't get merged with the runtime nemo.data:

const path = require("path");

module.exports = {
    plugins: {
        view: {
            module: "nemo-view"
        }
    },
    output: {
        reports: path.resolve("test/nemo2", "report")
    },
    data: {
        THIS_NEVER_SHOWS_UP_IN_NEMO_DATA: 1
    },
    profiles: {
        base: {
            tests: path.resolve("test/nemo2", "*test.js"),
            driver: {
                browser: "chrome"
            },
            data: {
                baseUrl: "https://www.google.com"
            },
            mocha: {
                timeout: 180000,
                reporter: "mochawesome",
                reporterOptions: {
                    quiet: true
                }
            }
        }
    }
};
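For comparison, a minimal sketch of the merge the JSON path appears to perform (key names are borrowed from the configs above; the merge order is an assumption based on the observed behavior, not confirmed from nemo's source):

```javascript
// Top-level "data" acts as defaults; the profile-level data wins on conflict.
const topLevelData = { THIS_WILL_SHOW_UP_IN_NEMO_DATA: 1 };
const profileData = { baseUrl: "https://www.google.com" };

// Shallow merge, profile last so it overrides the top-level defaults.
const nemoData = Object.assign({}, topLevelData, profileData);
```

With the plain JS config, the result behaves as if only `profileData` were assigned.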

express uninstalled if --save=dev

this may be an npm bug. but if you install a project where:

  • nemo is a devDep
  • express is a dep
  • use --save=dev npm option

express is pruned out and it causes an error when running nemo

Able to support multiple reporters

Right now, only one reporter is supported at a time. If we configure the reporter like the following, only the xunit report gets generated:

    "mocha": {
        "timeout": 180000,
        "reporter": "mochawesome",
        "reporter": "xunit",
        "retries": 0,
        "require": "babel-register",
        "grep": "argv:grep"
    },

The xunit report is for publishing the results to Jenkins; the json report is easy to feed into a database.

reference link: https://github.com/mochajs/mocha/pull/2184/files

Is this a known feature or a bug?

I can invoke tests npm run nemo -- -G '#phone #add #US' and it runs two tests only for US.

I can invoke tests npm run nemo -- -G '#phone #add #US #HOME' and it runs just one test only for US and HOME.


['AU', 'BR', 'ES', 'FR', 'GB', 'GB', 'IN', 'MX', 'US'].forEach(CC => {
	['HOME', 'WORK'].forEach(t => {
		describe(`#phone #add #${CC} #${t}`, function () {
			it(`should test adding of phone of a specific type`, async function () {
				await add.default(this.nemo, phoneInfo.types[CC][t], CC);
			});
		});
	});
});

I want to know A) if this is a known feature that will stay, or B) if it's a bug and tests shouldn't be written like this.

I think that when nemo builds metadata about the tests, it somehow parses all the specs without running them. Is that so? They run later, after the command line arguments are read.
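A sketch of one plausible explanation (an assumption: mocha turns the -G string into a RegExp and tests it against each full suite/test title, so a shorter grep string matches every title that contains it):

```javascript
// Build the same kind of titles the describe loop above generates, then grep them.
const titles = [];
['US', 'GB'].forEach(CC => {
  ['HOME', 'WORK'].forEach(t => {
    titles.push(`#phone #add #${CC} #${t}`);
  });
});

// '#phone #add #US' matches both US titles as a substring...
const broad = titles.filter(t => new RegExp('#phone #add #US').test(t));

// ...while '#phone #add #US #HOME' narrows it down to one.
const narrow = titles.filter(t => new RegExp('#phone #add #US #HOME').test(t));
```

Under that assumption the behavior is by design rather than a bug: the grep only filters which already-registered tests get run.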

Orphaned *driver processes

Finding that geckodriver, chromedriver, etc. processes are getting orphaned. Need to understand if that is an artifact of nemo or selenium-webdriver. Current workaround is to periodically killall those processes when running in a persistent environment (like on a local development computer).

@LucasMaupin is this the issue you tried to fix with #57 ?

Dynamically generating tests

We have a situation, where we need to perform an asynchronous operation to fetch data and generate tests for each of the data. Any thoughts?

const tests = asyncGetDataFromDB();
describe('generate tests dynamically', function() {
  tests.forEach(function(test) {
    it(test.desc, async function() {
      // make assertions
    });
  });
});
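One likely cause (an assumption, not confirmed from the snippet): asyncGetDataFromDB() returns a Promise, which has no forEach, so no tests get registered. With mocha's --delay flag you can resolve the data first, define the suites, and only then call run(). The helper below is a hypothetical sketch of that pattern, with the registration call passed in as a parameter so the sketch stays self-contained:

```javascript
// Hypothetical helper: resolve the data before registering any tests.
// With mocha's --delay flag, the global run() would be called at the end.
async function defineTests(fetchData, registerTest) {
  const tests = await fetchData();                // wait for the DB data
  tests.forEach(test => registerTest(test.desc)); // then register each test
  // run(); // mocha --delay: start the suite only after registration
}
```

In a real spec file, registerTest would wrap describe/it and fetchData would be the DB call.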

An in-range update of graphql is breaking the build 🚨

The dependency graphql was updated from 14.5.8 to 14.6.0.

🚨 View failing branch.

This version is covered by your current version range and after updating it in your project the build failed.

graphql is a direct dependency of this project, and it is very likely causing it to break. If other packages depend on yours, this update is probably also breaking those in turn.

Status Details
  • โŒ continuous-integration/travis-ci/push: The Travis CI build failed (Details).

FAQ and help

There is a collection of frequently asked questions. If those don't help, you can always ask the humans behind Greenkeeper.


Your Greenkeeper Bot 🌴

Is there a way to know how many tests are going to run before tests start running?

There are multiple things that can be done.

Knowing how many tests are going to run helps in gauging how much time it might take; I can plan my coffee breaks accordingly.

  1. We can tell users how many tests are going to run. Just the number. This lets the dev know whether the grep query is accurate.
  2. We can let users dry-run tests; this will list all the tests that are going to run. This also helps with grep query accuracy.

I think that nemo or mocha might know which tests will run beforehand, since #37 happens.

Whaddayya think?

custom CLI args not available in parallel mode

because the parent process.argv object is not available in a child process, any custom CLI args will not be available in the mocha tests. Need to add a new property to the nemo object (nemo.argv) in order to pass the parent process.argv into the tests.

Regression in 4.9.1 with merged profiles?

I seem to be experiencing a new error with nemo 4.9.1 that I do not see in 4.6.0 or 4.9.0 for that matter.

The error I get is given below. It seems to stem from having a mocks profile that defines different data and env fields than the base profile. Particular variables are not being merged into the map appropriately after the upgrade, or that's what it looks like, anyway. In my case specifically, this causes a call to nemo.data.baseUrl in a spec to return undefined, whereas previous versions seemed to perform a proper merge; maybe it only worked before by coincidence?

Output Error

{ WebDriverError: unknown error: unhandled inspector error: {"code":-32000,"message":"Cannot navigate to invalid URL"}
  (Session info: headless chrome=70.0.3538.110)
  (Driver info: chromedriver=2.45.615355 (d5698f682d8b2742017df6c81e0bd8e6a3063189),platform=Mac OS X 10.13.6 x86_64)
URL when there was an error: data:,
    at Object.checkLegacyResponse (node_modules/selenium-webdriver/lib/error.js:546:15)
    at parseHttpResponse (node_modules/selenium-webdriver/lib/http.js:509:13)
    at doSend.then.response (node_modules/selenium-webdriver/lib/http.js:441:30)
    at propagateAslWrapper (node_modules/async-listener/index.js:504:23)
    at node_modules/async-listener/glue.js:188:31
    at node_modules/async-listener/index.js:541:70
    at node_modules/async-listener/glue.js:188:31
    at <anonymous>
    at process._tickDomainCallback (internal/process/next_tick.js:228:7)
    at process.fallback (node_modules/async-listener/index.js:565:15) name: 'WebDriverError', remoteStacktrace: '' }

config.json

{
  "plugins": {
    "view": {
      "module": "nemo-view",
      "arguments": ["path:locators"]
    },
    "jaws": {
      "module": "nemo-jaws",
      "arguments": ...
    },
    "assert": {
      "module": "path:./plugins/nemo-assert"
    },
    "cookie": {
      "module": "path:./plugins/nemo-cookie"
    },
    "context": {
      "module": "path:./plugins/nemo-context"
    },
    "window": {
      "module": "nemo-window2"
    },
    "browser": {
      "module": "path:./plugins/nemo-browser"
    }
  },
  "data": {
    "baseUrl": "env:baseUrl",
    "targetHost": "env:targetHost",
    "testDataFilePath": "path:./testData/",
    "emailPrefix": "env:emailPrefix"
  },
  "output": {
    "reports": "path:reports"
  },
  "profiles": "exec:./config/profiles"
}

profiles.js

Some vars removed due to sensitivity

const path = require('path');
const minimist = require('minimist');

module.exports = function() {
  const argv = minimist(process.argv.slice(2));
  const defaultHost = '<host-here>';
  const driverPerTest = argv.driverPerTest || false;

  // Check for overrides in environment variables and command line args
  const targetHostOverride = argv.targetHost || process.env.targetHost;
  const baseUrlOverride = argv.baseUrl || process.env.baseUrl;
  const targetServerOverride = argv.targetServer || process.env.targetServer;

  const env = {
    SELENIUM_PROMISE_MANAGER: 0,
    targetHost: targetHostOverride || defaultHost,
    baseUrl: baseUrlOverride || `<base-url-here>`,
    // other args
  };

  return {
    base: {
      tests: argv.testsPath
        ? path.resolve(__dirname, `../${argv.testsPath}/**/*.js`)
        : path.resolve(__dirname, '../spec/**/*.js'),
      driverPerTest,
      env,
      mocha: {
        timeout: argv.mochaTimeout || 600000,
        require: ['@babel/polyfill', '@babel/register'],
        grep: argv.grep,
        reporter: argv.reporter || 'mochawesome',
        reporterOptions: { enableCode: false },
      },
      driver: {
        browser: argv.targetBrowser || 'chrome',
      },
      maxConcurrent: argv.maxConcurrent || 1,
    },
    local: {
      mocha: {
        reporterOptions: { autoOpen: true },
      },
    },
    mocks: {
      mocha: {
        timeout: 600000,
        require: [
          '@babel/polyfill',
          '@babel/register',
          path.resolve(__dirname, '../config/mock-setup.js'),
        ],
        reporter: 'xunit',
        delay: true,
        grep: argv.grep,
      },
      tests: path.resolve(__dirname, '../spec/mocks/**/*.js'),
      env: {
        SELENIUM_PROMISE_MANAGER: 0,
        targetHost: targetHostOverride || 'localhost',
        baseUrl: baseUrlOverride || 'http://localhost:9000',
        data: process.env.data || '{}',
      },
      driver: {
        browser: 'chrome',
        serverCaps: {
          chromeOptions: {
            args: ['headless', 'no-sandbox', 'disable-gpu', 'window-size=1200,800'],
          },
        },
        server: targetServerOverride,
      },
    },
  };
};

Some profiles missed when launch multiple cases in one nemo command

When I executed the following command, sometimes only the first profile ("AddRemove*******kConfirmation") was executed; the other profiles were ignored, with no report or logs for the missed cases. I did some debugging and found that the "glob" method in flow.js only added 2 instances (the first and base) before going to the next step. Please take a look.

node_modules/.bin/nemo -B newtests/functional/ -P AddRemovekConfirmation,AddRemoveSGkHappyFlow,Walletds,WalletCard,Withdraw*******Bank,base -D

[Feature] Default logging behavior

Will there be a default logging behavior? I found that, while writing and testing tests, I would get zero feedback from nemo/nemo-runner. A default logger would be nice. If you made it lightly configurable, if only just to surface errors from nemo, and not necessarily from what you are testing, that would be very helpful. If someone wanted to, they could easily turn it off completely with debugger: false.

Multiple driver instances are not stable

We were instantiating a driver per test in parallel, having 20 tests. Often we'd have failures requiring us to rerun the tests. Once we configured all the tests under a single describe, with the driverPerSuite=true default, our tests ended up much more stable.

An in-range update of minimist is breaking the build 🚨


โ˜๏ธ Important announcement: Greenkeeper will be saying goodbye ๐Ÿ‘‹ and passing the torch to Snyk on June 3rd, 2020! Find out how to migrate to Snyk and more at greenkeeper.io


The dependency minimist was updated from 1.2.0 to 1.2.1.

🚨 View failing branch.

This version is covered by your current version range and after updating it in your project the build failed.

minimist is a direct dependency of this project, and it is very likely causing it to break. If other packages depend on yours, this update is probably also breaking those in turn.

Status Details
  • โŒ continuous-integration/travis-ci/push: The Travis CI build failed (Details).

Commits

The new version differs by 5 commits.

  • 29783cd 1.2.1
  • 6be5dae add test
  • ac3fc79 fix bad boolean regexp
  • 4cf45a2 Merge pull request #63 from lydell/dash-dash-docs-fix
  • 5fa440e move the opts['--'] example back where it belongs

See the full diff

FAQ and help

There is a collection of frequently asked questions. If those don't help, you can always ask the humans behind Greenkeeper.


Your Greenkeeper Bot 🌴

Screenshot 'on exception' doesn't work on nemo@^4

with this config for nemo@4:

    "screenshot": {
        "module": "nemo-screenshot",
        "register": true,
        "arguments": [
            "path:report/screenshots",
            ["exception"]
        ]
    }

there is no screenshot on exception, but screenshots do work on click and via nemo.snap().

describe.skip is not working as expected

When describe.skip is used in mocha tests, tests fail with the error Uncaught TypeError: Cannot convert undefined or null to object

Context

When describe.skip is used, mocha's Suite._beforeAll array will be empty. In the beforeSuite event handler, we try to add nemo's beforeAll to the front of the actual mocha beforeAll array here: https://github.com/krakenjs/nemo/blob/master/lib/runner/mocha.js#L89. When Suite._beforeAll is empty and we do Suite._beforeAll.unshift(Suite._beforeAll.pop()), we add undefined to the Suite._beforeAll array. When mocha then tries to clean references by deleting the referenced test function, as shown here: https://github.com/mochajs/mocha/blob/master/lib/suite.js#L527, it causes the above error.

Solution

Adding a check for an empty array would solve the problem.
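A hedged sketch of the proposed guard (moveLastHookToFront is a hypothetical name; the real change would wrap the unshift/pop move in lib/runner/mocha.js):

```javascript
// Hypothetical helper mirroring the unshift(pop()) move described above.
// On an empty hooks array, pop() returns undefined, and unshift(undefined)
// would plant a bogus entry -- the crash described in this issue.
function moveLastHookToFront(hooks) {
  if (hooks.length === 0) {
    return hooks; // nothing to move; avoids inserting undefined
  }
  hooks.unshift(hooks.pop());
  return hooks;
}
```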

Plain JS config: How to load plugins?

The plain JS config example creates the plugin section as follows:

module.exports = {
    plugins: {
        view: {
            module: "nemo-view"
        }
    },

But nemo.view is undefined:

describe('@firstTest@', function () {
  it('should load a website', async function () {
    console.log('NEMO VIEW', this.nemo.view);
    await this.nemo.driver.get(this.nemo.data.baseUrl);
  });
});

Output:

NEMO VIEW undefined

I've tried both module: require.resolve("nemo-view") and module: require("nemo-view") but neither gets nemo-view to load.

Enabling DEBUG=nemo-core:log only shows snap being loaded:

nemo-core:log register plugin snap +0ms
  nemo-core:log plugin.registration fulfill with 1 plugins. +1ms

Support CLI option `exit` to force exit after mocha is done running the tests

In mocha versions <4, mocha kills itself after it is done running all the tests. From mocha 4 onward, mocha stays alive while any active handles remain on the event loop, so the process just hangs. When running mocha from the command line, there is an --exit option to handle this. Since we use mocha programmatically in Nemo, we cannot pass it as a mocha option in the config. So it would be great to have a CLI option we can pass to Nemo, something like nemo --exit, to force exit by shutting down the event loop. We can keep the default no-exit behavior when the flag is not passed as a command line argument.

Parallel by data: nemo.data merges data from top-level keys

If you run nemo with parallel:data using a data source where a sub-level key is the same as one of the other top-level keys, then nemo.data merges the identical keys, even though they're on separate levels:

Example

data source:

{
    "A": {
        "dataset": 1,
        "B": {
            "This should be the only property A can access under B": 1
        }
    },
    "B": {
        "dataset": 2
    }
}

minimal test code:

console.log("dataset:", nemo.data.dataset);
console.log(nemo.data);

output:

// A case
dataset: 1
{ A:
   { dataset: 1,
     B: { 'This should be the only property A can access under B': 1 } },
  B:
   { dataset: 2,
     'This should be the only property A can access under B': 1 },
  dataset: 1 }

// B case
dataset: 2

{ A:
   { dataset: 1,
     B: { 'This should be the only property A can access under B': 1 } },
  B: { dataset: 2 },
  dataset: 2 }

In both cases, the data from inside A or B is available at the top level of nemo.data, but the A and B keys are also there. Since A has its own B property, it gets merged with the top-level B data.

When running parallel by data, I would expect nemo.data to only contain the data from each top-level key at a time, namely:

// A case

{
  "dataset": 1,
  "B": {
    "This should be the only property A can access under B": 1
  }
}

// B case
{
  "dataset": 2
}
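The collision can be reproduced with a small deep merge (deepMerge here is a stand-in for whatever nemo uses internally, which I have not confirmed; "note" stands in for the long key above):

```javascript
// Tiny deep merge, a guess at the kind of merge nemo performs internally.
function deepMerge(target, extra) {
  for (const key of Object.keys(extra)) {
    if (target[key] && typeof target[key] === 'object' &&
        extra[key] && typeof extra[key] === 'object') {
      deepMerge(target[key], extra[key]); // recurse into shared object keys
    } else {
      target[key] = extra[key];
    }
  }
  return target;
}

const source = {
  A: { dataset: 1, B: { note: 1 } }, // "note" stands in for the long key
  B: { dataset: 2 }
};

// Expected for the A child: a plain selection of its own slice.
const expected = source.A;

// Deep-merging A's slice onto the whole object reproduces the collision:
// A's nested "B" key leaks into the sibling top-level "B".
const observed = deepMerge(JSON.parse(JSON.stringify(source)), source.A);
```

Selecting the slice directly, instead of merging it onto the full data object, would give the expected per-key output.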

option to quit driver after every test - it()

Nemo runner should allow users to quit the driver after every test if they want, for example in an afterEach hook. Currently, deleting cookies in afterEach does not solve this problem; the previous login information is still used.

What do you mean parallel?

What do you mean by parallel? Is it really parallel, as in two threads? How? Or just asynchronous?

    -F, --file    run parallel by file
    -D, --data    run parallel by data

Improvements (suggestions)

Hi,
I've been working with Nemo for a while and really enjoy it. Here is some feedback:

  • JavaScript configuration over JSON, so it is easier to do things dynamically and avoid repetition in JSON config files
  • The textEquals assertion doesn't show both the actual and expected text (only one of these), so it's harder to debug
  • When I run tests in parallel: "file" and one test has it.only, I would like to run only that one test. Currently it runs all files
  • When running tests in parallel, it creates a report file for each test file. It would be easier to open a single report file with all results
  • I use a lot of async/await, and if a test fails I usually don't see in the stack trace where it happened, which makes debugging much harder. (A screenshot of the truncated stack trace was attached to the original issue.)

An in-range update of async is breaking the build 🚨

The dependency async was updated from 3.1.0 to 3.1.1.

🚨 View failing branch.

This version is covered by your current version range and after updating it in your project the build failed.

async is a direct dependency of this project, and it is very likely causing it to break. If other packages depend on yours, this update is probably also breaking those in turn.

Status Details
  • โŒ continuous-integration/travis-ci/push: The Travis CI build failed (Details).

FAQ and help

There is a collection of frequently asked questions. If those don't help, you can always ask the humans behind Greenkeeper.


Your Greenkeeper Bot 🌴
