
christetreault commented on August 16, 2024

Today, I did some informal testing. I ran all the files in the liquidhaskell test suite through time, both in serial and parallel. Results that I found noteworthy can be found at: https://drive.google.com/file/d/0Bx6BpheCXcBWR3dnVS1BYk9uQzg/view?usp=sharing

Some particularly interesting results (parallel solver on the left, serial on the right):

./pos/RBTree-color.hs:real 118.07 user 302.39 sys 45.61       | ./pos/RBTree-color.hs:real 9.57 user 7.35 sys 0.61

parallel slower: 234 FInfos

./pos/RBTree-height.hs:real 90.52 user 229.64 sys 35.69       | ./pos/RBTree-height.hs:real 10.85 user 6.55 sys 0.66

parallel slower: 263 FInfos

./pos/RBTree-col-height.hs:real 239.64 user 616.80 sys 90.52  | ./pos/RBTree-col-height.hs:real 21.78 user 12.22 sys 1.22

parallel slower: 404 FInfos

These red black tree files all generated 200+ FInfos, with col-height generating 404. For all of these, the parallel solver was unusably slow.

./pos/BST.hs:real 15.33 user 15.40 sys 1.96           | ./pos/BST.hs:real 21.40 user 12.44 sys 0.97

parallel faster: 5 FInfos

./pos/Map.hs:real 45.55 user 40.94 sys 3.38           | ./pos/Map.hs:real 63.54 user 39.23 sys 1.89

parallel faster: 2 FInfos

./pos/Map0.hs:real 44.21 user 39.10 sys 3.80              | ./pos/Map0.hs:real 63.86 user 39.40 sys 2.04

parallel faster: 2 FInfos

./pos/Map2.hs:real 45.69 user 39.65 sys 3.60              | ./pos/Map2.hs:real 53.16 user 38.27 sys 1.74

parallel faster: 2 FInfos

At first it seemed like a pattern was forming here: the fewer FInfos (above one), the better the parallel solver was at solving them. Unfortunately, this doesn't hold up under repeated testing. Additionally, for all of these, only one of the partitions did a significant amount of work.

Command exited with non-zero status 1               Command exited with non-zero status 1
./neg/StateConstraints0.hs:real 21.53 user 46.39 sys 8.10     | ./neg/StateConstraints0.hs:real 40.72 user 26.97 sys 2.01

parallel faster: 18 FInfos

./pos/StateConstraints0.hs:real 15.06 user 26.30 sys 6.23     | ./pos/StateConstraints0.hs:real 19.20 user 13.99 sys 1.20

parallel faster: 19 FInfos

Something about these files plays nicely with the parallel solver. They have smallish numbers of FInfos, but even under repeated testing they are faster than the serial implementation. I plan to study them further to figure out what it is about them that makes them parallelize well.

My suspicion is that a small number of FInfos (close to the number of cores being used), with the work distributed evenly between them, is the sweet spot here. FInfo has a Monoid instance, so it should be easy to turn 400 of them into 4. However, making sure they have equal amounts of work is likely to be a lot harder.

from liquid-fixpoint.

gridaphobe commented on August 16, 2024

Hmm, since pretty much everything happens in OCaml, if you split the constraints into n disjoint sets and try to solve them in parallel, you'll end up with n instances of OCaml vying for CPU and memory resources. Note that this totally defeats the benefits of the lightweight threads that forkFinally creates.

Instead of trying to divide the constraints into a smaller number of sets containing an equal amount of work (which will be very hard), why not use the natural division that liquid-fixpoint already computes and just use a semaphore to ensure that only a small number of OCaml processes are running simultaneously?
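
A bounded-parallelism wrapper along these lines is easy to sketch. The following is a hypothetical illustration, not liquid-fixpoint's actual code: `boundedParallel` is a made-up helper, and each action would stand for one invocation of the external `fixpoint.native` solver, with a `QSem` capping how many run at once.

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
import Control.Concurrent.QSem (newQSem, signalQSem, waitQSem)
import Control.Exception (bracket_)

-- Run one IO action per partition, but allow at most n of them to be
-- in flight at once. The semaphore keeps us from launching hundreds
-- of OCaml processes simultaneously. (Sketch only: a real version
-- would also propagate exceptions from the worker threads.)
boundedParallel :: Int -> [IO a] -> IO [a]
boundedParallel n actions = do
  sem  <- newQSem n
  vars <- mapM (spawn sem) actions
  mapM takeMVar vars            -- collect results in input order
  where
    spawn sem act = do
      v <- newEmptyMVar
      _ <- forkIO (bracket_ (waitQSem sem) (signalQSem sem) act >>= putMVar v)
      pure v
```

Results come back in the order the partitions were submitted, regardless of which finished first, which keeps the caller's bookkeeping simple.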

christetreault commented on August 16, 2024

As it stands, there will only be X instances of fixpoint.native running, where X is the number passed to the --cores option. However, we're still firing up hundreds of separate processes and eating the associated overhead, just not at the same time.

Does fixpoint.native have an interactive mode where it listens on stdin for input, and produces output as it gets it? We could fire up X instances of fixpoint.native, and just pipe input in until it's all solved, rather than repeatedly starting and stopping, similar to the difference between classic CGI vs FastCGI.

ranjitjhala commented on August 16, 2024

Unfortunately there is no such mode, BUT how about we do this:

1. Associate a weight with each partition, e.g. the number of constraints or kvars (something easy to compute).

2. Modify partition to merge tiny partitions so that each partition is suitably large.

E.g. suppose we have raw partitions of sizes

6, 3, 1, 1, 1, 1, 1

We can glue them into

6, 4, 4

Or some such. I.e., given a target size, keep greedily unioning partitions until they reach that size?

I can probably modify the partition function to do this, but @christetreault, it may be a good thing for you to try?


christetreault commented on August 16, 2024

Sounds good. I'll start working on that tomorrow. I'll make a partitionN :: Int -> F.FInfo a -> [F.FInfo a] in Partition.hs that produces N or fewer partitions based on the number of nodes.

This should also allow me to simplify inParallelUsing, which would be good.
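
A greedy version of that merge can be sketched generically. This is a hypothetical illustration, not the real Partition.hs code: the `size` parameter stands in for whatever cheap weight gets chosen (e.g. number of constraints), and the `Monoid` constraint plays the role of FInfo's Monoid instance doing the gluing.

```haskell
import Data.List (sortBy)
import Data.Ord (Down (..), comparing)

-- Merge many pieces into at most n buckets: place each piece
-- (largest first) into the currently lightest bucket, then mconcat
-- each bucket. With sizes [6,3,1,1,1,1,1] and n = 3 this yields
-- buckets of total weight 6, 4 and 4, as in the example above.
partitionN :: Monoid a => Int -> (a -> Int) -> [a] -> [a]
partitionN n size parts = map mconcat (foldl place start sorted)
  where
    k      = max 1 (min n (length parts))
    start  = replicate k []
    sorted = sortBy (comparing (Down . size)) parts
    weight = sum . map size
    place buckets p =
      -- index of the lightest bucket (ties broken by lowest index)
      let i = snd (minimum (zip (map weight buckets) [0 :: Int ..]))
      in [ if j == i then p : b else b | (j, b) <- zip [0 ..] buckets ]
```

Sorting largest-first before placing is the usual greedy trick for balancing bucket weights; it is not optimal in general, but it is cheap and good enough for load balancing solver partitions.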

christetreault commented on August 16, 2024

So, as a quick update: after implementing the changes from 2077da3, we're seeing much better performance from the RBTree-*.hs tests. The parallel times are in line with the serial times, but not significantly faster.

time liquid +RTS -N4 -RTS --cores=4 tests/pos/RBTree-color.hs

real    0m7.994s
user    0m12.224s
sys     0m3.456s

time liquid tests/pos/RBTree-color.hs

real    0m8.358s
user    0m7.284s
sys     0m0.560s

time liquid +RTS -N4 -RTS --cores=4 tests/pos/RBTree-height.hs

real    0m6.891s
user    0m10.276s
sys     0m2.328s

time liquid tests/pos/RBTree-height.hs

real    0m7.339s
user    0m6.236s
sys     0m0.496s

time liquid +RTS -N4 -RTS --cores=4 tests/pos/RBTree-col-height.hs

real    0m13.583s
user    0m17.760s
sys     0m4.436s

time liquid tests/pos/RBTree-col-height.hs

real    0m14.224s
user    0m11.864s
sys     0m0.932s

Unfortunately, the StateConstraints0.hs tests are now much closer to the serial times than they were before.

time liquid +RTS -N4 -RTS --cores=4 tests/neg/StateConstraints0.hs

real    0m27.020s
user    0m35.124s
sys     0m11.540s

time liquid tests/neg/StateConstraints0.hs

real    0m32.003s
user    0m26.012s
sys     0m1.808s

time liquid +RTS -N4 -RTS --cores=4 tests/pos/StateConstraints0.hs

real    0m16.418s
user    0m17.916s
sys     0m7.028s

time liquid tests/pos/StateConstraints0.hs

real    0m20.299s
user    0m16.684s
sys     0m1.192s

...though still a bit faster. I suppose I'll need to get profiling working in order to get to the bottom of this.

ranjitjhala commented on August 16, 2024

Is this the case for all benchmarks? (Are there any benchmarks where the
parallel stuff is helping?)


christetreault commented on August 16, 2024

I haven't run a full test yet. I plan to work on that today, and I'll let you know before the end of the day.

christetreault commented on August 16, 2024

I finally got a chance to re-run the tests this morning. When testing using the parallel and serial solver, all times are within a small margin of each other. I'm sure this is because of a problem with my parallelization code; I'll take a look at it and see if I can't figure out the problem.

christetreault commented on August 16, 2024

Per @ranjitjhala's suggestion that I profile using fixpoint directly, I have attempted to do so. Unfortunately, it seems that the parallel implementation no longer works when invoked directly via fixpoint.

Given the following input:

fixpoint +RTS -N4 -RTS --cores=4 ../liquidhaskell/tests/pos/.liquid/RBTree-col-height.hs.fq

A massive error is returned. Using the serial implementation works as advertised, as does the parallel implementation when called via liquid.

Clearly there is a difference between how liquid calls solvePar, and how fixpoint calls it that I'll have to look into. If anybody has an insight into what the problem may be, I'd appreciate any thoughts.

gridaphobe commented on August 16, 2024

Well, the important piece of the error is

WARNING: Malformed Lhs Pred ((len([VV#9652]) >= 0))
VV#9652 :: FAppTy (List ) (GHC.Types.Char )
len :: func(2, [FAppTy (FAppTy (FAppTy ) @(0)) @(1) ; int])
(len([VV#9652]) >= 0)

which says there's something bogus about len([VV#9652]) >= 0, and indeed, the given type for len makes no sense to me.

len :: func(2, [FAppTy (FAppTy (FAppTy ) @(0)) @(1) ; int])

fixpoint has a quirky syntax for types ("sorts" in fixpoint parlance), here's how to read it:

  • func(...) is a function
  • func(2, ...) is a function with two quantified type variables (I believe)
  • [FAppTy (FAppTy (FAppTy ) @(0)) @(1) ; int] is the type of the function, read ; as ->
  • FAppTy is a type application, i.e. FAppTy List Int can be read as List Int
  • @(0) is a type variable, the 0 identifies the variable.

So this type should read something like a b -> Int, except there are two extra FAppTys, so the input type doesn't look syntactically valid to me. Clearly, the printing function is hiding information.

(This may well not be the actual issue, since there are a number of other types with excess FAppTys in the environment, but cleaning up the printing routine would be good anyway to make things more legible.)
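
For readers following along, the sort grammar described above can be modeled with a tiny datatype and a more conventional printer. This is a hypothetical toy rendering for illustration only (the real sort type lives in Language.Fixpoint.Types, and its constructors differ from this sketch):

```haskell
import Data.List (intercalate)

-- A toy model of fixpoint's sort syntax as described above.
data Sort
  = FInt              -- int
  | FVar Int          -- @(0), @(1), ...
  | FAppTy Sort Sort  -- type application, e.g. FAppTy List Int ~ List Int
  | FFunc Int [Sort]  -- func(n, [s1; ...; sk]), with ';' read as '->'

-- Render a sort in conventional arrow notation, so malformed
-- applications are easier to spot than in the raw printer output.
pretty :: Sort -> String
pretty FInt         = "int"
pretty (FVar i)     = [toEnum (fromEnum 'a' + i)]  -- @(0) -> "a", @(1) -> "b"
pretty (FAppTy f x) = "(" ++ pretty f ++ " " ++ pretty x ++ ")"
pretty (FFunc _ ss) = intercalate " -> " (map pretty ss)
```

Under this reading, a well-formed func(2, [...]) whose body is `FAppTy (FVar 0) (FVar 1)` followed by `int` prints as `(a b) -> int`, roughly the `a b -> Int` mentioned above; the extra FAppTys in the warning have no second argument to apply, which is why the printed type looks syntactically invalid.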

christetreault commented on August 16, 2024

Is it possible that the .fq file is being parsed differently, and as a result, is being partitioned incorrectly?

I noticed that the function that initially produces an FInfo in fixpoint has a different type than the one in liquid:

Language.Fixpoint.Parse.rr' :: Inputable a => String -> String -> FInfo ()

Language.Haskell.Liquid.Constraint.ToFixpoint.cgInfoFInfo :: GhcInfo -> CGInfo -> FilePath -> IO (F.FInfo Cinfo)

Is it possible that some info is being lost by not having that Cinfo in fixpoint?

gridaphobe commented on August 16, 2024

It's possible, since the Haskell parser for .fq files is not really used much (apart from the experimental native solver). But still, it seems unlikely to me.

You said that the Haskell fixpoint binary works correctly until you add the --cores flag, right? That suggests to me that the partition function is at fault. A good test would be to partition the FInfo and just dump the individual .fq files, then run fixpoint on each in single-core mode.

christetreault commented on August 16, 2024

Sounds like a plan. I'll give that a shot, thanks!

christetreault commented on August 16, 2024

When liquid partitions the .fq file, all partition files work. When fixpoint partitions the .fq file, we get this error.

Diffing the corresponding partition files shows that they are not the same.

I suppose the next question is: what could make partition produce different outputs?

gridaphobe commented on August 16, 2024

partition' is a pure function, so if the resulting FInfos from liquid-partition and fixpoint-partition are different, the input FInfo must be different as well. Perhaps the parser is to blame then, liquid certainly doesn't use it.

christetreault commented on August 16, 2024

I just checked, and the changes to partition' from issue #91 don't seem to be the cause of this.

ranjitjhala commented on August 16, 2024

Folks, the error is undoubtedly because the fixpoint parser is producing the wrong result (that parser is NOT used when run via liquid, hence there are no problems there).

Chris: can you send me a link to the SMALLEST file that causes the problem, both the haskell source and the generated fq?

(I'm hoping this is already fixed in the cut solver branch and just a matter of pulling from there...)


christetreault commented on August 16, 2024

I've found: test/pos/vecloop.hs : vecloop.hs.fq

I'll see if I can find a simpler one.

christetreault commented on August 16, 2024

The smallest positive test source file that has this issue: lit.hs : lit.hs.fq

christetreault commented on August 16, 2024

By the way: I have some experience with Parsec. If somebody would be willing to explain to me the syntax rules of the .fq file, I would be willing to attempt to fix the parser.

christetreault commented on August 16, 2024

Finally, some good news: it occurred to me that I may not be seeing any improvements in runtime with the parallel solver because I wasn't doing any work that was worth parallelizing. Sure, the graphs being solved might be big, but if they can't be meaningfully split into multiple chunks, there's no point in bothering with a multithreaded implementation.

With that in mind, I combined a few of the larger tests into one mega test, which is a combination of: tests/pos/WBL0.hs, tests/pos/AVLRJ.hs, and tests/pos/RBTree.hs.

Times for the combined test:

Single threaded:

real    1m25.645s
user    1m11.648s
sys     0m3.376s

Parallel:

real    0m50.604s
user    1m27.616s
sys     0m8.936s

...which I consider a good improvement. This testing is via liquid, due to the parser issue discussed upthread. I can only assume that much of this time is spent in the other work liquid does, and more testing needs to be done once that issue is resolved.

I am working on a threshold for the parallel implementation that will prevent threading when there isn't enough work to do. Currently I'm using the size of the map in the cm field of FInfo as my unit of work, but there may be a better metric.
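
The metric and threshold can be sketched as follows. These are hypothetical stand-ins for illustration: only the cm field of the real FInfo is modeled, `worthParallelizing` is a made-up helper, and the real computation lives in Language.Fixpoint.Partition.cpartSize.

```haskell
import qualified Data.Map as M

-- A stripped-down stand-in for FInfo: just the constraint map.
newtype FInfo a = FInfo { cm :: M.Map Integer a }

-- Rudimentary unit of work: the number of constraints in a partition.
cpartSize :: FInfo a -> Int
cpartSize = M.size . cm

-- Only spin up a separate solver process for partitions that meet
-- the minimum size threshold (the idea behind a --minpartsize flag).
worthParallelizing :: Int -> FInfo a -> Bool
worthParallelizing minPartSize fi = cpartSize fi >= minPartSize
```

Counting constraints ignores how hard each constraint is to solve, which is presumably why some large partitions still dominate the wall-clock time; a weight that accounted for kvars or dependencies might balance better.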

christetreault commented on August 16, 2024

I've opened issue #101 for the parser issue, as it is not actually a problem with the parallel solver.

christetreault commented on August 16, 2024

@ranjitjhala, @gridaphobe, @nikivazou:

Where we stand on this issue:

I have completed the implementation of all the machinery required for solvePar. I've added two command-line switches to liquid and fixpoint:

--cores=n: The number of cores to use.
--minpartsize=n: The minimum "size" that is considered worth parallelizing.

The size argument defaults to 1, which prevents empty FInfos from spinning up a fixpoint instance for no reason; however, my testing shows a performance gain starting around 500. This means that almost all of the test suite files aren't worth parallelizing, but my omnitest.hs file meets this requirement and sees a 1m 25s -> 50s improvement when solved on 4 cores.

The size is calculated by Language.Fixpoint.Partition.cpartSize, but this function surely has a better implementation than the one I've provided.

Outstanding issues:

Issue #88 is currently a showstopper, as it prevents the test suite from passing. However, I just pushed a potential fix to this.

Issue #101 is a very serious problem that prevents the parallel solver from being used with a previously dumped .fq file. However, since it has been decided that this issue probably lies with the parser, I see no reason we need to block on it. Additionally, the parallel solver works great when called via liquid.

Room for improvement:

cpartSize: This function is very rudimentary. It serves its purpose, but improvements to this would likely produce better performance.


Pending the resolution of issue #88, I propose we begin the process of getting the cores branches of liquid and fixpoint merged with master.

gridaphobe commented on August 16, 2024

Thanks for the summary!

What partition size did you use with omnitest.hs? Is it the 500 number you mentioned above? We have a bunch of larger files in benchmarks/ that would probably be worth testing the parallel solver on too (in particular, Data.Text and Data.ByteString both take upwards of 3 minutes to check).

Also, IIUC the default minimum size of 1 will show a performance loss because the cost of spinning up fixpoint starts to dominate, is that correct? If so, could we set the default to 500, where there's at least usually a performance gain?

christetreault commented on August 16, 2024

500 is where I'm seeing performance gains; I agree that 1 is bad in all circumstances, but the performance loss there is minimal.

However, the liquid Config Monoid instance takes the larger of the two values, so if we set the default to 500, it's not possible to set it below that value. Ordinarily that wouldn't be an issue, except that the test suite would never attempt to parallelize anything, and any parallelism bugs would be hidden.

I suppose we could try to override that in the test suite somehow...

I'll run some of the benchmarks and see how they go!

christetreault commented on August 16, 2024

time liquid +RTS -N4 -RTS --cores=4 --minpartsize=500 Data/Text.hs

...

real    1m42.620s
user    1m36.204s
sys     0m17.236s

time liquid Data/Text.hs

...

real    2m15.939s
user    1m16.360s
sys     0m4.672s

I'm unable to run the bytestring test. It tells me:

 Data/ByteString/Lazy/Internal.hs:50:8-20: Error: GHC Error
     Could not find module ‘Data.Generics’

...however syb is installed. I'm guessing this is cabal's fault.

ranjitjhala commented on August 16, 2024

What's the single core time for these benchmarks?


christetreault commented on August 16, 2024

single core omnitest.hs:

real    1m25.645s
user    1m11.648s
sys     0m3.376s

single core Text.hs:

real    2m15.939s
user    1m16.360s
sys     0m4.672s

Both see a ~30 second improvement when going multithreaded.

gridaphobe commented on August 16, 2024

  1. I'm not convinced that's the right behavior for the monoid instance, it might make more sense for mappend to choose the right minPartSize. Or even better, we should just drop the monoid instance altogether. AFAIK we only use it to merge configuration options from different sources (e.g. command-line, environment, pragmas in the file). But it's incorrect to think of any of these sources as defining a complete Config, rather they define a transformation on Configs (i.e. a Config -> Config). If we restructure the config parser to emit these transformer functions, we'll be able to set a default minPartSize of 500, while still allowing users to raise or lower it to their preferences.
  2. Do you have syb installed in a sandbox or globally? LH is probably not smart enough to sniff out a sandboxed syb (unless, maybe, you use --cabaldir).
  3. Just an FYI, I think we can use https://hackage.haskell.org/package/base-4.8.1.0/docs/Control-Concurrent.html#v:setNumCapabilities to make the --cores flag imply the -RTS -N flag.
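
A minimal sketch of item 3, assuming `cores` is the already-parsed flag value and `applyCores` is a hypothetical helper name (setNumCapabilities needs GHC's threaded runtime to actually add capabilities):

```haskell
import Control.Concurrent (getNumCapabilities, setNumCapabilities)

-- Make --cores=n imply +RTS -Nn: bump the RTS capability count at
-- startup instead of making the user pass RTS options by hand.
-- Guarded so that --cores=1 (or an unthreaded build) stays a no-op.
applyCores :: Int -> IO ()
applyCores cores = do
  caps <- getNumCapabilities
  if cores > 1 && cores /= caps
    then setNumCapabilities cores
    else pure ()
```

This would be called once, right after argument parsing, before any solver threads are forked.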

ranjitjhala commented on August 16, 2024

@gridaphobe: did you mean "imply" ====> "simply" ?


christetreault commented on August 16, 2024

@gridaphobe:

  1. I think that command-line arguments passed to liquid should override ones stored in the environment. I was very confused when they didn't; I thought fixpoint hadn't built the first time I encountered the issue.

  2. Everything is sandboxed. I could try cabal exec bash, which seems to help when I run into these sorts of issues.

  3. I became convinced that get/setNumCapabilities was GHC-only somehow (I think Real World Haskell says it is), but reading the haddocks now, it doesn't actually say that. My original proof of concept used getNumCapabilities and it was fine. That said, I'm not sure what the best way to incorporate all of that would be.

christetreault commented on August 16, 2024

OK, cabal exec bash seems to have fixed the benchmark issue.

Single-threaded Data/ByteString.hs:

real    5m41.442s
user    5m8.368s
sys     0m4.312s

Multithreaded Data/ByteString.hs:

real    2m23.215s
user    5m49.596s
sys     0m22.408s

... which is a speedup of more than three minutes.

gridaphobe commented on August 16, 2024

I do mean 'imply', i.e. we'd take the argument of --cores and pass it to setNumCapabilities. I think this would be a slightly nicer UI than using getNumCapabilities. They're probably both GHC-only, but that's not a problem; we depend heavily on GHC itself.

(I think the standard priority for overriding configs is env < file < cmdline.)

gridaphobe commented on August 16, 2024

Wow! That's quite impressive re ByteString :)

christetreault commented on August 16, 2024

Should be simple to implement. Plus, presumably, if some compiler doesn't support it, it'll (hopefully) be a no-op and users can still pass RTS options.

Do we care if the user specifies some number greater than the available cores? I don't see any functions to check the number of cores.

ranjitjhala commented on August 16, 2024

Yes, that is pretty cool! Nice! (But I do wonder why we aren't getting
nearer 4x speedups, I think it points to other bottlenecks that should be
found and fixed.)


christetreault commented on August 16, 2024

Just watching the program go, it seems to do a lot of work before and after calling solve.

christetreault commented on August 16, 2024

I just pushed a change that uses setNumCapabilities. It seems to be working fine.

ranjitjhala commented on August 16, 2024

Do we care if the user specifies some number greater than the available
cores? I don't see any functions to check the number of cores.

I don't think we need worry about this...

christetreault commented on August 16, 2024

Given that the cores branches are merged, I'm going to close this issue.
