nimasoroush / differencify
Differencify is a library for visual regression testing
License: MIT License
It looks like when freezing an SVG image, the newly generated canvas image is not 100% consistent on the same frame, which causes a slightly different image every time.
Why is it so slow in Jest? You can see it in the examples too, where I had to increase the test timeout to be able to finish. Totally unusable :(
I tried to pinpoint it, and it looks like fs.readStream is the culprit. I will investigate further later on; I'm just opening this issue as a reference.
When in Jest mode, if any of the steps errors, the test will pass because we are not evaluating the returned value.
I was trying to do the following:
const fs = require('fs')
// a differencify instance is assumed to exist, e.g.:
// const differencify = new Differencify()

const target = differencify.init({
  chain: false
})
await target.launch({ headless: true })
const page = await target.newPage()
await page.setViewport({
  width: 1366,
  height: 766
})
await page.goto(key, { waitUntil: 'networkidle0' })
await page.content()
const bodyHandle = await page.$('body')
const { width, height } = await bodyHandle.boundingBox()
console.log(width, height)
const imag = fs.readFileSync(
  require.resolve('../../snapshots/' + value + '_screenshot.png')
)
const result = await target.toMatchSnapshot(imag)
Here, imag is a local screenshot. When I try to match the reference screenshot against imag, the same local screenshot taken earlier is saved as the current reference snapshot.
I'm having trouble getting Puppeteer to work; can anyone spot a problem with this test?
Thanks
import Differencify from '../index';

const differencify = new Differencify({ debug: true });

describe('Differencify', () => {
  beforeAll(async () => {
    await differencify.launchBrowser({ args: ['--no-sandbox', '--disable-setuid-sandbox'], headless: false });
  });
  afterAll(async () => {
    await differencify.cleanup();
  });
  it.only('simple', async () => {
    await differencify
      .init()
      .newPage()
      .setViewport({ width: 1024, height: 1024 })
      .goto('https://facebook.github.io/jest/docs/en/api.html')
      .page // tried without also
      .waitForSelector('footer')
      .focus('footer')
      .screenshot()
      .toMatchSnapshot()
      .close()
      .end();
  }, 20000);
The web page reference image snapshot is being saved as undefined.snap.png.
Can we have user-defined names for the reference screenshots? Is that possible?
Currently differencify proceeds with steps and screenshots even if the URL returns a 404. It should fail fast in the goto step if the URL is not accessible.
Which image comparison algorithm does differencify use? Can you elaborate?
Currently, Differencify outputs all images and the diffs, if any exist, to a folder. It could be made to produce a report as an HTML page that loads on test failure which shows what tests have failed, and the reference, output, and diff images.
I want to name the reference screenshots. How can I do that?
I am not able to save the reference screenshots in a particular directory.
I want to save the screenshots in the 320x480 folder.
How can I achieve that?
import Differencify from 'differencify'

const differencify = new Differencify({
  debug: true,
  imageSnapshotPath: './differencify_reports/320x480/',
})

export const liveScreenshot = (key, value, width1, height1) => {
  return (async function takeLiveWebsiteSnapshot() {
    try {
      const target = differencify.init({
        testName: value,
        chain: false
      })
      await target.launch({ headless: true })
      const page = await target.newPage()
      await page.setViewport({
        width: width1,
        height: height1
      })
      await page.goto(key, { waitUntil: 'networkidle0' })
      await page.content()
      const bodyHandle = await page.$('body')
      const { width, height } = await bodyHandle.boundingBox()
      const screenshot = await page.screenshot({
        clip: {
          x: 0,
          y: 0,
          width,
          height
        }
      })
      const result = await target.toMatchSnapshot(screenshot)
      await bodyHandle.dispose()
      expect(result).toEqual(true)
      console.log(result)
      await target.close()
    } catch (error) {
      console.log(error)
    }
  })()
}
Based on jest-puppeteer-example, it would be great to extend NodeEnvironment, similar to this. For example, this would avoid the differencify.launchBrowser() and differencify.cleanup() calls in this example:
(async () => {
  await differencify.launchBrowser();
  const target = differencify.init({ testName: 'Differencify simple unchained', chain: false });
  const page = await target.newPage();
  await page.goto('https://github.com/NimaSoroush/differencify');
  await page.setViewport({ width: 1600, height: 1200 });
  await page.waitFor(1000);
  const image = await page.screenshot();
  const result = await target.toMatchSnapshot(image);
  await page.close();
  console.log(result); // true or false
  await differencify.cleanup();
})();
@xfumihiro: Anything to add?
It would be great if differencify could upload HTML report files into cloud storage, so they would be accessible online after each test run.
Google Drive or Dropbox could be good options.
This library helps to ensure you have the correct node & npm versions installed in your environment.
https://github.com/Skyscanner/ensure-node-env
@NimaSoroush It appears as though the 1.4.0 package doesn't have the latest build artifacts. Is it possible that it was published without npm run build occurring beforehand?
Notice the differing function signatures between my build (left) and the tarball retrieved from npm (right).
If this was a publishing mistake and not something on my end, I would recommend configuring a prepare npm script such as:
"prepare": "npm run build"
This runs before publishing as well as after an install from GitHub, ensuring that no matter how the package is pulled into a project, it has freshly built artifacts.
I'm trying to take a screenshot of a lazy-loaded page. To achieve that, I calculate the length of the page, scroll to that length, and then have to add a wait before taking the screenshot.
But it is not working.
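Without seeing the exact code it's hard to say what fails, but a minimal sketch of the scroll-then-wait approach might look like this. scrollOffsets is a hypothetical pure helper (not part of differencify), and captureLazyPage assumes a Puppeteer page that has already navigated to the lazy-loaded URL.

```javascript
// Hypothetical helper (not differencify API): compute the scroll
// positions needed to cover a page of totalHeight px in `step` px hops.
function scrollOffsets(totalHeight, step) {
  const offsets = [];
  for (let y = step; y < totalHeight + step; y += step) {
    offsets.push(Math.min(y, totalHeight));
  }
  return offsets;
}

// Sketch of the lazy-load workaround: scroll in steps, pausing after
// each step so lazy content can load, then take a full-page screenshot.
async function captureLazyPage(page, step = 250, delayMs = 200) {
  const totalHeight = await page.evaluate(() => document.body.scrollHeight);
  for (const y of scrollOffsets(totalHeight, step)) {
    await page.evaluate((top) => window.scrollTo(0, top), y);
    await page.waitFor(delayMs); // settle time for lazy content
  }
  return page.screenshot({ fullPage: true });
}
```

The key detail is pausing between scroll steps rather than only once at the end, so each batch of lazy content has a chance to load before the next scroll.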
I'd like to compare my homepage in my test environment directly against the live environment
Is that possible with the current API?
e.g. something like:
it('ensures TEST vs PROD is expected', async () => {
  const testHomepage = await page.goto('http://test.domain.com').screenshot();
  const prodHomepage = await page.goto('http://www.domain.com').screenshot();
  expect(
    await target.toMatchImageSnapshot(testHomepage, prodHomepage)
  ).toEqual(true);
});
Note: I realise this isn't strictly what differencify is designed for i.e. snapshot testing. I'm trying to replicate something like wraith "capture" mode.
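Since toMatchSnapshot compares against a stored snapshot rather than a second live page, a TEST-vs-PROD comparison would need a raw image diff. A minimal sketch, assuming the two screenshots have been decoded to raw RGBA pixel data (e.g. with pngjs); countDifferingPixels is a hypothetical stand-in for a real perceptual diff such as Pixelmatch:

```javascript
// Hypothetical pixel diff: count pixels whose summed RGB channel
// difference exceeds `tolerance`, across two equal-sized RGBA buffers.
function countDifferingPixels(a, b, tolerance = 0) {
  if (a.length !== b.length) throw new Error('buffers must be the same size');
  let diff = 0;
  for (let i = 0; i < a.length; i += 4) {
    const delta =
      Math.abs(a[i] - b[i]) +
      Math.abs(a[i + 1] - b[i + 1]) +
      Math.abs(a[i + 2] - b[i + 2]);
    if (delta > tolerance) diff += 1;
  }
  return diff;
}

// Assumed wraith-style "capture" flow (decode() stands in for a PNG
// decoder that yields { data, width, height }):
// await page.goto('http://test.domain.com'); const testShot = await page.screenshot();
// await page.goto('http://www.domain.com'); const prodShot = await page.screenshot();
// expect(countDifferingPixels(decode(testShot).data, decode(prodShot).data)).toEqual(0);
```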
It would be great if differencified photos could be uploaded to cloud providers like Google Photos. This would help people access their reports in CI mode.
There is a nice https://github.com/google/google-api-nodejs-client library which facilitates this
It turns out I have to add these two methods to the test so the Jest test can run successfully:
.launch()
.newPage()
otherwise I get this error:
TypeError: Cannot read property 'goto' of null
    at Target._handleFunc (/Users/jayhe/Desktop/jestDemo/node_modules/differencify/dist/target.js:111:50)
The successful jest test I'm using:
import Differencify from 'differencify';
const differencify = new Differencify({ debug: true });
describe('tests differencify', () => {
  it('validate github page appear correctly', async () => {
    await differencify
      .init()
      .launch()
      .newPage()
      .goto('https://github.com/NimaSoroush/differencify')
      .screenshot()
      .toMatchSnapshot()
      .close()
      .end();
  });
});
We currently default to screenshotSelector, and this breaks with problematic selectors. It should default to a normal screenshot instead.
Hey there!
First off, this project is awesome!
I've been exploring routes for building a screenshot/diff utility to provide interactive feedback based on mismatched screenshots. This was the first project I could see using, based on flexibility and overall performance.
In my implementation, it would be really helpful if the result returned following a call to toMatchSnapshot contained a path to the diff image. I forked and tested the change for my use case but wanted to share it here before proposing a PR, as I'm not sure if it impedes other use cases.
Below is my branch; I'm happy to incorporate feedback if this seems like a reasonable change.
https://github.com/Swingline0/differencify/tree/result-data (Diff)
Instead of a boolean, my change returns the actual result object so it looks more like this:
"result": {
  "diffPath": "[...]/Homepage 1.differencified.png",
  "matched": false
}
Thanks again for this awesome and inspiring work!
While running comparisons locally and in Docker and Drone, I am seeing an animated SVG always differing to a varying degree. Upping the threshold gets round the problem but is not ideal.
We ran locally with headless disabled to check whether the image was actually frozen, so I doubt that is the cause in Docker/Drone. It seems to be an issue of the pixels rendering a little differently each time.
Add documentation to use it with Nock
Upgrading differencify to support Jest snapshot testing:

differencify
  .init()
  .goto('https://github.com/NimaSoroush/differencify')
  .capture()
  .toMatchSnapshot()
  .close()
  .end();

toMatchSnapshot() would store and update the captured image snapshot just as Jest does.
As differencify and some underlying packages use async/await natively, support for older versions of Node, and for building the package with older Babel setups, will go away.
We need something similar to freezeImage for general dynamic content, e.g. dates, timeAgo, ...
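One way to sketch this for dates, assuming Puppeteer's page.evaluateOnNewDocument is used to inject a script before any page code runs. freezeTime is a hypothetical helper, not existing differencify API:

```javascript
// Hypothetical helper: replace the global Date so every new Date()
// and Date.now() reports the same fixed instant, making timeAgo-style
// labels render identically between runs.
function freezeTime(fixedMs) {
  const RealDate = Date;
  Date = class extends RealDate {
    constructor(...args) {
      // No-arg construction returns the frozen instant; explicit
      // arguments (e.g. new Date(0)) still behave normally.
      super(...(args.length ? args : [fixedMs]));
    }
    static now() {
      return fixedMs;
    }
  };
}

// Assumed usage: inject before navigation so the app only ever sees
// the frozen clock.
// await page.evaluateOnNewDocument(freezeTime, 1500000000000);
// await page.goto('https://example.com');
```

Other dynamic sources (Math.random, animations) would need similar shims, which is why a general freezeImage-style hook would be valuable.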
Differencify currently takes a screenshot of the whole page. It should support elementHandle.screenshot().
Currently, if an interruption happens during test/update execution, differencify won't be able to close all opened browsers. The cleanup functionality should do that.
It would be useful to pass options from the differencify goto method through to Puppeteer's page.goto method.
A very useful option is waitUntil: 'networkidle'.
In my particular case, it looks like Chrome is not treating a link tag in the body as blocking, so my visual test images are not being generated correctly. Using waitUntil: 'networkidle' fixes this.
Consider replacing Jimp and Pixelmatch
As of now, with differencify, we generate diff screenshots in the "differencifiedoutput" folder.
If possible, we should introduce a flag, say saveScreenshots, which would save the captured screenshots on each run. These saved screenshots would help to manually inspect the diff clearly, if needed, in case of failure. Sometimes, when the mismatchThreshold is low, it's difficult to make out the actual diff from the screenshot produced in the "differencifiedoutput" folder. The default value of the saveScreenshots flag could be false.
Context:
From ms88privat's comment:
Goal:
@NimaSoroush have you considered Puppeteer?
It has a couple of cool things:
We should leverage Jest mocks for testing.
I'm getting test failures with
Error: Failed to launch a browser.
at Chromy.start$ (/node_modules/chromy/dist/index.js:147:21)
at tryCatch (/node_modules/regenerator-runtime/runtime.js:65:40)
at Generator.invoke [as _invoke] (/node_modules/regenerator-runtime/runtime.js:303:22)
at Generator.prototype.(anonymous function) [as next] (/node_modules/regenerator-runtime/runtime.js:117:21)
at tryCatch (/node_modules/regenerator-runtime/runtime.js:65:40)
at invoke (/node_modules/regenerator-runtime/runtime.js:155:20)
at /node_modules/regenerator-runtime/runtime.js:165:13
whenever I run the tests with an already running instance of headless Chrome. Killing that instance fixes the test.
Adding checks for free ports and fixing #9 will fix this.
In standard jest tests we can also do multiple snapshots.
Currently, if Chrome fails to capture a screenshot (for any reason), it returns null and falls through to the comparison step. We should throw when screenshot capture fails.
If I specify a selector for screenshot capturing, and I add

.SHIELD-UP-HIDE-BODY-OVERFLOW {
  overflow: visible;
}

to my body element, an error is thrown which says:
Error: extract_area: parameter height not set
Please fix.
Investigate Selenium integration and real device coverage, i.e. Safari, iOS, Android, ...
Is there any way to store snapshots in 3rd-party cloud storage?
I think it would be nice to have a way of direct communication? For example, what is the reason for this LOC: https://github.com/NimaSoroush/differencify/blob/match-puppeteer-api/src/page.js#L103 ?
This says campare (a typo for compare):
differencify/src/chromyRunner.js
Line 64 in 9cbf174
Current behavior:
Currently, when in Jest mode, --updatesnapshots will update all tests.
Expected behavior:
Update should only update failed snapshots
Hi, is there any concrete example of how to get the window object? I need it for mocking, but it is not working.
File paths are generated using string concatenation in createDir, compareImage, and saveImage. Probably worth changing to use the path module for better cross-OS support.