
Comments (7)

sol commented on June 10, 2024

@codygman this is fixed in hspec-2.10.10. Make sure to update your code to use hspecWith.

from hspec.

codygman commented on June 10, 2024

My team is actually affected by this, because we delete some database data in these hooks. So tests currently fail, but after this is fixed they would pass.


sol commented on June 10, 2024

@codygman hey there 👋

What workflow do you use during development? Are you using hspec/sensei or just stack test or something? Basically, what exact command do you use to run your tests?

The reason I'm asking is that I'm not yet sure how easy it will be to fix this in runSpecForest (or anywhere in hspec-core, for that matter). I'm wondering whether we could get rid of this feature in hspec-core entirely and implement it in sensei or some wrapper script instead.


codygman commented on June 10, 2024

Hi @sol!

Sorry for taking a bit to reply.

So this case works for us:

cabal test # using hspec-discover
cabal test # 2nd run, using hspec-discover

However, we typically use ghcid with something like the setup below, which re-runs the tests; we also modify it to "focus" only the failing test, then re-run all tests once the failing one passes.

Here are, hopefully, all of the pieces relevant to the information you need:

ghcid \
  --command="hpack && cabal repl" \
  --no-height-limit \
  --restart=cabal.project \
  --restart=package.yaml \
  --reload=settings.env \
  --run=Ghcid.run

We define Ghcid.run ourselves and do quite a few environment-specific things there, but most relevant here is that we call:

TestSuiteMain.main

There we then call:

IO.withShownDuration "suite" . runHspecWithConfig Runner.defaultConfig $
      Test.parallel
        Spec.spec


runHspecWithConfig :: Runner.Config -> Hspec.Spec -> IO ()
runHspecWithConfig config_ =
  Runner.evalSpec config_ >=> \(config, spec) ->
    Environment.getArgs
      >>= Runner.readConfig config
      >>= Environment.withArgs [] . Runner.runSpecForest spec
      >>= Runner.evaluateResult

We also have a .hspec file with:

--failure-report .hspec-failures
--format failed-examples
--rerun
--randomize
--rerun-all-on-success
--print-slow-items=100

The problem we noticed shows up specifically when .hspec-failures is non-empty and --rerun-all-on-success is triggered.

I think the original test run passes, meaning the hook is presumably run there, but then when --rerun-all-on-success kicks in, the hooks aren't run.

And in this case, the failing test in question inserted some data into our test database, and a before hook guarded that test to ensure the rows it is expected to insert do not already exist.

I'll be available to answer more questions of course, since I'm sure the above will lead to more clarification needed 😄


sol commented on June 10, 2024

@codygman can you give #797 a try? You will have to use hspecWith instead of your own runHspecWithConfig.

With how beforeAll and friends are implemented, it's not possible to fix this issue in runSpecForest. When we run the spec for a second time on --rerun-all-on-success we need to rerun evalSpec first so that MVars that are used by beforeAll and friends are correctly reinitialized.

For that reason #797 moves handling of --rerun-all-on-success from runSpecForest to hspecWithResult. Hence, --rerun-all-on-success will only work if you use hspec, hspecWith, hspecResult or hspecWithResult.
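A base-only sketch (not hspec's actual implementation) can illustrate why the MVars matter: beforeAll memoizes its setup action behind an MVar, so once a first run has filled the cache, a second pass over the same spec tree never reruns the action unless the MVar is reinitialized.

```haskell
import Control.Concurrent.MVar

-- Memoize an IO action behind an MVar: the action runs at most once,
-- and all later calls return the cached result. This mirrors the
-- pattern beforeAll relies on (names here are invented for the sketch).
memoize :: IO a -> IO (IO a)
memoize action = do
  cache <- newMVar Nothing
  pure $ modifyMVar cache $ \cached -> case cached of
    Just x  -> pure (Just x, x)
    Nothing -> do
      x <- action
      pure (Just x, x)

main :: IO ()
main = do
  counter <- newMVar (0 :: Int)
  -- A stand-in for an expensive setup action; it counts its invocations.
  let setup = modifyMVar counter (\n -> pure (n + 1, n + 1))
  cached <- memoize setup
  _ <- cached
  _ <- cached          -- a "second run" reuses the cached value
  runs <- readMVar counter
  print runs           -- the setup action ran only once
```

Rerunning the spec without going back through evalSpec is like calling `cached` a second time here: the old MVar still holds the first run's result, so the setup never fires again.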


sol commented on June 10, 2024

Two more things:

  1. --failure-report shouldn't be necessary with ghcid; if you don't specify it, then Hspec stores the failure report in the process environment.
  2. You probably want to apply parallel in a spec hook (like so: https://github.com/hspec/sensei/blob/main/test/SpecHook.hs).
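For the second point, a minimal SpecHook module (in the spirit of the linked sensei file; hspec-discover picks up a test/SpecHook.hs automatically and applies its hook to the whole generated spec) could look like:

```haskell
module SpecHook where

import Test.Hspec

-- Applied by hspec-discover to the entire discovered spec,
-- so every item runs in parallel by default.
hook :: Spec -> Spec
hook = parallel
```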


codygman commented on June 10, 2024

> @codygman can you give #797 a try? You will have to use hspecWith instead of your own runHspecWithConfig.
>
> With how beforeAll and friends are implemented, it's not possible to fix this issue in runSpecForest. When we run the spec for a second time on --rerun-all-on-success we need to rerun evalSpec first so that MVars that are used by beforeAll and friends are correctly reinitialized.
>
> For that reason #797 moves handling of --rerun-all-on-success from runSpecForest to hspecWithResult. Hence, --rerun-all-on-success will only work if you use hspec, hspecWith, hspecResult or hspecWithResult.

Yes, I'll give that a try and keep the two things from your later comment in mind.
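For reference, the suggested change could look roughly like this; a minimal sketch assuming a recent hspec, with a tiny inline spec standing in for the project's real Spec.spec. Since hspecWith itself reads the command-line arguments and the .hspec file, the custom runHspecWithConfig wrapper is no longer needed:

```haskell
module Main (main) where

import Test.Hspec
import Test.Hspec.Runner (hspecWith, defaultConfig)

main :: IO ()
main = hspecWith defaultConfig (parallel spec)

-- A stand-in for the project's real (discovered) spec:
spec :: Spec
spec = describe "example" $
  it "adds" $ (1 + 1) `shouldBe` (2 :: Int)
```

(Per the earlier comment, `parallel` may be better applied in a spec hook than at this call site.)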

