kurbo's Introduction

kurbo, a Rust 2D curves library

The kurbo library contains data structures and algorithms for curves and vector paths. It is probably most appropriate for creative tools, but is general enough that it might be useful for other applications.

The name "kurbo" is Esperanto for "curve".

There is a focus on accuracy and good performance in high-accuracy conditions. Thus, the library might be useful in engineering and science contexts as well, as opposed to visual arts where rough approximations are often sufficient. Many approximate functions come with an accuracy parameter, and analytical solutions are used where they are practical. An example is area calculation, which is done using Green's theorem.

The library is still in fairly early development stages. There are traits intended to be useful for general curves (not just Béziers), but these will probably be reorganized.

Minimum supported Rust version

Since version 0.9, kurbo makes use of generic associated types and thus requires rustc version 1.65 or greater.

Similar crates

Here we mention a few other curves libraries and touch on some of the decisions made differently here.

  • lyon_geom has a lot of very good vector algorithms. It's most focused on rendering.

  • flo_curves has good Bézier primitives, and seems tuned for animation. It's generic on the coordinate type, while we use f64 for everything.

  • vek has both 2D and 3D Béziers among other things, and is tuned for game engines.

Some code has been copied from lyon_geom with adaptation, thus the author of lyon_geom, Nicolas Silva, is credited in the AUTHORS file.

More info

To learn more about Bézier curves, A Primer on Bézier Curves by Pomax is indispensable.

Contributing

Contributions are welcome. The Rust Code of Conduct applies. Please document any changes in CHANGELOG.md as part of your PR, and feel free to add your name to the AUTHORS file in any substantive pull request.

kurbo's People

Contributors

anthrotype, chubei-oppen, cmyr, derekdreery, derlando, dfrg, djmcnab, halfvoxel, jaicewizard, jatentaki, jneem, laurmaedje, matthiasbeyer, michael-f-bryan, mlwilkerson, msiglreith, mwcampbell, notgull, platlas, profaurore, rahix, raphlinus, ratmice, razrfalcon, rsheeter, secondflight, simoncozens, vanshoe, waywardmonkeys, xstrom

kurbo's Issues

Add Vec2::normalize

I've needed this in a few places, and though it's a one-liner I think it would be worth having as a method.
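For concreteness, a minimal user-side sketch of the behavior in question (the zero-length case, noted below, is the main design decision):

use kurbo::Vec2;

/// User-side stand-in until (or unless) this lands as a method.
fn normalize(v: Vec2) -> Vec2 {
    // Produces NaN components when the input has zero length; a method on
    // Vec2 would need to decide how to handle that case.
    v / v.hypot()
}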

Affine * Rect ?

Hey there — I noticed in the code comments that a TranslateScale via Mul is the recommended (only?) way to transform a Rect right now. For my use-case I'd like to apply an arbitrary Affine transform to Rects, e.g. including rotation.

Is TranslateScale-only for Rect an intentional design decision, has support simply not been implemented, or is something else afoot? If it's a matter of "not-yet-implemented," could you give a brain-dump of constraints/TODOs to help a would-be contributor?

I dug through the code, docs, and GH issues but haven't found any clues yet. Thanks in advance! Impressed by both kurbo and piet from my tire-kicking.
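In the meantime, one possible workaround (a sketch, not necessarily the direction the maintainers would take) is to map the rectangle's corners through the affine and take the bounding box of the result, since a rotated rectangle is generally not representable as a Rect:

use kurbo::{Affine, Point, Rect};

/// Transform a Rect by an arbitrary Affine and return the axis-aligned
/// bounding box of the transformed corners. Information is lost here: a
/// rotated rectangle is no longer a Rect.
fn affine_times_rect(a: Affine, r: Rect) -> Rect {
    let corners = [
        Point::new(r.x0, r.y0),
        Point::new(r.x1, r.y0),
        Point::new(r.x1, r.y1),
        Point::new(r.x0, r.y1),
    ];
    let mut pts = corners.iter().map(|&p| a * p);
    let first = pts.next().unwrap();
    let mut out = Rect::new(first.x, first.y, first.x, first.y);
    for p in pts {
        out.x0 = out.x0.min(p.x);
        out.y0 = out.y0.min(p.y);
        out.x1 = out.x1.max(p.x);
        out.y1 = out.y1.max(p.y);
    }
    out
}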

Add a FuzzyEq trait

We should have a floating-point aware equality comparison. There are crates out there that provide this logic, and I think it's tricky enough that it's worth a dependency if somebody has done it well.

I suspect the best way to do this would be via some sort of FuzzyEq trait. The float-cmp crate looks like a likely candidate, but I would ideally like something that could, say, infer a good epsilon value for you based on the input values, although maybe this is a bit optimistic.
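For concreteness, one possible shape for the trait, with an explicit epsilon (the name, and whether comparison is absolute, relative, or ULP-based, are all open questions):

/// Hypothetical trait; the real design might infer epsilon from the inputs.
trait FuzzyEq<Rhs = Self> {
    fn fuzzy_eq(&self, other: &Rhs, epsilon: f64) -> bool;
}

impl FuzzyEq for f64 {
    fn fuzzy_eq(&self, other: &f64, epsilon: f64) -> bool {
        // Absolute comparison; a relative or ULP-based test may be preferable.
        (self - other).abs() <= epsilon
    }
}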

QuadBez::nearest may return `NaN`

When working on curve fitting with quadratic Bezier curves, I ran into an issue which can be briefly described as the following test failing:

#[test]
fn quadbez_nearest_large_numbers() {
    let quad = QuadBez {
        p0: (0.0, 0.0).into(),
        p1: (32.5, 20.0).into(),
        p2: (65.0, 40.0).into(),
    };

    let p: Point = (90., 30.).into();

    let (t, d) = quad.nearest(p, 1e-6);

    assert!(!t.is_nan());
    assert!(!d.is_nan());
}

I will try to understand and maybe fix this issue myself, but I would expect the author of the implementation to have an easier time doing so :)

Document `Affine::rotate` units

The documentation string for this function should clearly state the unit used, which I assume is radians? It might also be worth naming the argument in such a way that it communicates the unit.

Add Rect::ZERO

Not having this is an oversight; it's useful in various UI boilerplate.

Document `tolerance` argument

Shape::to_bez_path and Shape::into_bez_path both take a tolerance argument, but nowhere is it mentioned how the user might select an appropriate value, or what a reasonable choice might be in a given scenario.

Fitting of a curve by multiple cubic bezier segments

I recently published a blog post on fitting cubic beziers. For some applications, such as merging two beziers into one, it is sufficient. For other applications, the end goal is representation of the source curve by a sequence of bezier segments.

A standard approach to this problem is to attempt to fit the curve with a single bezier, then measure the error. If that is below the threshold, we're done. If it's above, then split in half (where "half" can be defined by the original parameterization, or by arc length, or something else; usually the exact detail doesn't matter much) and apply the algorithm recursively. In general this produces somewhere around 1.5× as many segments as necessary. To pick a simple example, assume that the source curve can be rendered using 5 bezier segments. The recursive subdivision approach will generally create 8. That said, the recursive subdivision approach is simple and fast, so it will likely be useful in some contexts.
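For reference, the recursive subdivision baseline is only a few lines; here is a sketch, with fit_single_cubic and fit_error as hypothetical stand-ins for the single-segment fit and the error measurement from the blog post:

use kurbo::CubicBez;

// Hypothetical stand-ins; signatures are for illustration only.
fn fit_single_cubic(_t0: f64, _t1: f64) -> CubicBez { unimplemented!() }
fn fit_error(_c: &CubicBez, _t0: f64, _t1: f64) -> f64 { unimplemented!() }

/// Recursive fit-or-split over a parameter range [t0, t1] of the source curve.
fn fit_recursive(t0: f64, t1: f64, threshold: f64, out: &mut Vec<CubicBez>) {
    let candidate = fit_single_cubic(t0, t1);
    if fit_error(&candidate, t0, t1) <= threshold {
        out.push(candidate);
    } else {
        // Split at the parameter midpoint; splitting by arc length or some
        // other criterion works too, as noted above.
        let tm = 0.5 * (t0 + t1);
        fit_recursive(t0, tm, threshold, out);
        fit_recursive(tm, t1, threshold, out);
    }
}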

This issue discusses how to produce a globally optimum bezier representation, subject to the error threshold.

Definition of the problem

We'll pose a slightly tricky statement of the problem; it is carefully engineered to make the problem more tractable.

Generate a sequence of beziers that approximates the source curve. The number of segments should be minimal, ie there is no sequence with fewer segments for which the L2 error of each segment is below the error threshold. Within that constraint, minimize the maximum L2 error across the segments. Each segment is required to be G1-consistent with the source curve and also match its area exactly.

Note that this is very similar to minimizing Frechet distance, but subtly different. If there is an application for which Frechet minimization is absolutely required, I think it's likely we could use this solution as a starting point, then fine-tune, for example using a gradient descent mechanism. I find it hard to imagine a use case where that would actually be important.

A few assumptions. The L2 error is assumed to be monotonic when one endpoint is fixed and the other is varied. (The intuition is that if this were not true, then a de Casteljau subdivision of the lower-error fit for the longer segment would have less error. However, making that argument mathematically rigorous is tricky, as the subdivided bezier may not be G1-consistent with the source curve. Thus, numerical techniques should be chosen to be robust against failure of strict monotonicity.)

Thus, a minimax optimum is when all beziers have the same L2 error wrt the source curve. Any perturbation would reduce one error but increase another, thus increasing the maximum.

Outline of the solution

The solution should be both fast and accurate. Thus, it breaks down into three major components. First, an intermediate representation of a (close) approximation of the source curve, designed for efficient query operations (there will be a lot of queries to that curve). Second, a doubly nested solver to determine the optimum value of the subdivision points. And third, a method to determine the optimum bezier (and calculate its error) for each segment; that's already explained in the blog post.

Query structure

The intermediate curve representation supports two queries. One resolves a position, parameterized by arc length, to a point and tangent. The second resolves an interval, parameterized by arc length of each endpoint, to signed area and raw first moments. A central part of this design is a clever approach to serve both queries in effectively O(1) time, regardless of the complexity of the source curve, at the cost of some preprocessing. I'll also consider different points in the tradeoff space that make queries more expensive but reduce preprocessing cost.

The simpler approach first. Essentially the curve is an array of segments, each of which has arc length of total arc length / n. Each segment is a G1 cubic Hermite interpolation of the arc length parameterization of the source curve. Another way of saying that is that the first derivative at the endpoints has the same tangent direction as the source curve, and its magnitude is the length of the segment, ie arc length / n. I expect error to scale as O(n^4), but with a constant factor somewhat better than, eg, piecewise quadratic beziers or piecewise Euler spirals. Obviously this is something to check empirically.

Then, querying point and tangent from arc length is simple: multiply the query arc length by n. The integer part is an index into the array of segments, and the fractional part is the t parameter for the resulting cubic bezier.

A clever aspect to this design is the range query: in addition to the bezier, also store the prefix sum of the area and first moments. Then, to resolve a range query, compute the area and moments of the fractional pieces at each end, and add the difference of those values sampled at the integer values. This allows resolution of all queries in O(1) time.
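A sketch of the data layout and the point/tangent query described above (field names and the moment representation are placeholders):

use kurbo::{CubicBez, ParamCurve, ParamCurveDeriv, Point, Vec2};

/// Approximation of the source curve: n segments of equal arc length, plus
/// prefix sums so that area/moment range queries are O(1).
struct ArcLenApprox {
    total_arc_len: f64,
    segments: Vec<CubicBez>,
    /// prefix[i] is the accumulated (signed area, x moment, y moment) of
    /// segments[0..i]; a range query combines the two fractional end pieces
    /// with a difference of these sums.
    prefix: Vec<(f64, f64, f64)>,
}

impl ArcLenApprox {
    /// Resolve an arc length position to a point and (unnormalized) tangent.
    fn sample(&self, s: f64) -> (Point, Vec2) {
        let n = self.segments.len() as f64;
        // Multiply by n; the integer part selects a segment, the fractional
        // part is the t parameter within it.
        let x = (s / self.total_arc_len * n).clamp(0.0, n - 1e-9);
        let seg = self.segments[x.floor() as usize];
        let t = x.fract();
        (seg.eval(t), seg.deriv().eval(t).to_vec2())
    }
}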

A concern is the number of subdivisions required if the source curve has a cusp or region of huge curvature variation - all segments are the same length, therefore the segment length must be chosen to satisfy the error threshold for the worst case. Below are two alternatives to address this concern, though the simpler approach is probably good enough for many applications, and the worst thing that can happen is excessive time spent on preprocessing.

Nested solver for segment endpoints

The determination of segment endpoints is a doubly nested loop of bisection-style solvers. We'll consider the inner loop first as it's fairly straightforward. Given a starting position and an error target, determine the end position. Given the assumption that error is monotonic, that can readily be solved by bisection (or similar but better methods, see below).

The outer loop has an initial phase of determining n and a lower bound on the target error. That walks the curve from the beginning to the end, using the inner loop with the target error. All segments but the last will have that error, and the last will have some smaller error. This is then a solution with the minimal number of segments that meets the error threshold, but the fact that the last segment is short is unsatisfying. The error of the last segment is a lower bound.

Next is a bisection to find the actual error. Each iteration consists of a curve walk as above with n - 1 segments, then the error is computed for the last segment. The "value" for the bisection algorithm is the error of the last segment minus the error of all previous segments.

I said bisection here, but the ITP method will likely yield significantly faster results. Another trick is to consistently use the sixth root of error in place of raw error, as we expect O(n^6) scaling of the bezier fit, so this should make the function being solved closer to linear.
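A sketch of the inner loop as plain bisection on the segment end (ITP would slot in as a drop-in improvement); fit_error_for_range is a hypothetical stand-in for fitting a bezier over an arc length range of the source curve and measuring its L2 error:

// Hypothetical stand-in; for illustration only.
fn fit_error_for_range(_s0: f64, _s1: f64) -> f64 { unimplemented!() }

/// Given a start position s0 and an error target, find the end position such
/// that the fitted segment hits the target error, assuming the error is
/// monotonic in segment length.
fn find_segment_end(s0: f64, s_max: f64, target: f64, tolerance: f64) -> f64 {
    if fit_error_for_range(s0, s_max) <= target {
        return s_max; // the rest of the curve fits in one segment
    }
    let (mut lo, mut hi) = (s0, s_max);
    while hi - lo > tolerance {
        let mid = 0.5 * (lo + hi);
        if fit_error_for_range(s0, mid) <= target {
            lo = mid;
        } else {
            hi = mid;
        }
    }
    0.5 * (lo + hi)
}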

Error calculation

I already do something like this in the notebook (not yet published) for the curve fitting post, but will describe it here. Compute the L2 error of a bezier wrt the source curve as follows. The goal is to compute the integral of (error vector)^2 over a normalized arc length parameterization of both curves. To do that the straightforward way requires solving an inverse arc length parameterization of the bezier, which is potentially slow.

Instead, compute a Legendre-Gauss quadrature, using the t parameter of the bezier as the integration parameter. For each t compute the corresponding s by doing a forward arc length evaluation on the bezier. Then find the point on the bezier (query by t) and the point on the source curve (query by s) and just evaluate Δx^2 + Δy^2. Then multiply by ds/dt, which is the norm of the derivative of the bezier. Do the dot product of those values with the w parameters from LGQ, et voilà, a good, reasonably cheap numerical integration.

This is the inner loop of a bunch of nested solver loops, so it's important to be fast. As a result, query of the source curve needs to be fast, as does arc length calculation on the bezier. Fortunately, we have good approaches for both.
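A sketch of that integral using a 3-point Legendre-Gauss rule mapped from [-1, 1] to t in [0, 1]; source_sample is a hypothetical stand-in for the query structure described earlier, and a higher-order rule would likely be used in practice:

use kurbo::{CubicBez, ParamCurve, ParamCurveArclen, ParamCurveDeriv, Point};

// Hypothetical: point on the source curve at arc length s.
fn source_sample(_s: f64) -> Point { unimplemented!() }

/// Approximate the integral of |error vector|^2 ds between a candidate bezier
/// and the source curve, using the bezier's t as the integration variable.
fn l2_error(bez: &CubicBez, accuracy: f64) -> f64 {
    // 3-point Legendre-Gauss nodes and weights on [-1, 1].
    const NODES: [f64; 3] = [-0.7745966692414834, 0.0, 0.7745966692414834];
    const WEIGHTS: [f64; 3] = [5.0 / 9.0, 8.0 / 9.0, 5.0 / 9.0];
    let deriv = bez.deriv();
    let mut sum = 0.0;
    for (xi, wi) in NODES.iter().zip(WEIGHTS.iter()) {
        let t = 0.5 * (xi + 1.0);
        // Forward arc length evaluation: the s corresponding to this t.
        let s = bez.subsegment(0.0..t).arclen(accuracy);
        let err = bez.eval(t) - source_sample(s);
        // Multiply by ds/dt, the norm of the bezier's derivative.
        sum += wi * err.hypot2() * deriv.eval(t).to_vec2().hypot();
    }
    // Factor 0.5 is the Jacobian of mapping [-1, 1] onto [0, 1].
    0.5 * sum
}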

Alternative query structures

The fact that all segments are equal length is not great when there's a cusp - the segment near the cusp needs to be short, therefore all segments need to be short.

One approach is a variable size segment, and binary search of cumulative arc length to find the right segment. Each segment could be the same as above. A somewhat more exotic approach is to store a reference to the source curve and parameters for a function to solve the inverse arc length problem (ie map s within the segment to t on the source curve). One approach is to model the arc length as piecewise quadratic, in particular the integral of a piecewise linear representation of the magnitude of the first derivative of the source curve, and use the quadratic formula to solve for t on each query. This will generally model cusps well, so shouldn't require very many segments. Also, because it references the source curve, the query of position and tangent will always produce a value on the curve (though it's not clear this is an important distinction, as errors in measuring arc length probably have a very similar effect).

Another approach is one reminiscent of a radix tree. At each level in the tree, each node represents a constant amount of arc length. The node can either be a cubic segment, or an array similar to what was described above. Thus, query time is proportional to tree depth, but the constant factors should be good.

My main concern is complexity, so it's not clear which approach is best. A nontrivial part of that complexity is determining the error of the C1 cubic Hermite approximation relative to the source curve, and thus the appropriate value for n. The radix tree is more forgiving in that regard; one can be fairly sloppy in the choice of the radix, and use a simple "does it meet the error threshold" test to decide whether to subdivide.

Last thoughts

This all assumes the curve is smooth. If it contains corners, those should be determined before starting. The same goes for cusps, though in theory the bezier curve fitting should be able to match a curve containing a cusp in the middle.

For font applications, it's generally a good idea to subdivide at horizontal and vertical extrema, as well. Thus, each curve segment goes through 90 degrees of angular deviation at most; two or at most three beziers should be sufficient to represent that for most fonts.

The doubly nested loop solution is fairly similar to what I described in section 9.6.3 of my thesis. I do think the use of the ITP method is an improvement, as I'd be worried about robustness of the secant method as recommended in the thesis; one of the great advantages of ITP is that its worst-case behavior is no worse than bisection.

impl Shape for Arc

I'm wanting to use Arcs for a UI and I notice that it doesn't implement Shape, so I did the following to get what I wanted:

fn arc(arc: Arc) -> BezPath {
    // this is hard-coded for my choice of x_rotation
    let start = Point {
        x: (1.0 - arc.start_angle.sin()) * arc.center.x,
        y: (2.0 + (arc.start_angle.cos() - 1.0)) * arc.center.y,
    };

    let mut out = BezPath::new();
    out.move_to(start);
    for el in arc.append_iter(1.0) {
        out.push(el);
    }
    out
}

Is there anything preventing an implementation of Shape for Arc?

inv_arclen broken/changed in 0.8.1

let c = kurbo::CubicBez::from_points(0.2, 0.73, 0.35, 1.08, 0.85, 1.08, 1.0, 0.73);
println!("{}", c.inv_arclen(0.5, 0.5));
  • 0.8.0 returns 0.4926965024447632
  • 0.8.1 returns 0.25

Converting iterator of path elements to segments

There is currently no built-in way to transform an iterator of path elements to an iterator of path segments. My use case is to take an argument of type impl Shape and to iterate over its segments. Shape::to_bez_path only returns an iterator over path elements. I could, of course, collect into a BezPath and the call segments() on that, but I'd like to avoid the intermediate allocation.

Looking into the source code, there is the private function BezPath::segments_of_slice, which has a todo that it should maybe be public, and there is the underlying BezPathSegs iterator. Unfortunately, segments_of_slice still wouldn't work with general iterators.

To fix this, how about adding a freestanding function like this:

pub fn segments(path: impl IntoIterator<Item = PathEl>) -> impl Iterator<Item = PathSeg> { ... }

This function would be nicely parallel to flatten and could similarly link to BezPath::segments, which would then be implemented in terms of it. segments_of_slice could then be removed, I think.
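For reference, a rough sketch of how such a freestanding function could look (a small state machine over the subpath start and the last point; this is an illustration, not necessarily how the existing private iterator works):

use kurbo::{CubicBez, Line, PathEl, PathSeg, Point, QuadBez};

pub fn segments(path: impl IntoIterator<Item = PathEl>) -> impl Iterator<Item = PathSeg> {
    let mut start: Option<Point> = None;
    let mut last = Point::ORIGIN;
    path.into_iter().filter_map(move |el| match el {
        PathEl::MoveTo(p) => {
            start = Some(p);
            last = p;
            None
        }
        PathEl::LineTo(p) => {
            let seg = PathSeg::Line(Line::new(last, p));
            last = p;
            Some(seg)
        }
        PathEl::QuadTo(p1, p2) => {
            let seg = PathSeg::Quad(QuadBez::new(last, p1, p2));
            last = p2;
            Some(seg)
        }
        PathEl::CurveTo(p1, p2, p3) => {
            let seg = PathSeg::Cubic(CubicBez::new(last, p1, p2, p3));
            last = p3;
            Some(seg)
        }
        // A real implementation would probably skip the degenerate closing
        // line when last already equals the subpath start.
        PathEl::ClosePath => start.map(|s| {
            let seg = PathSeg::Line(Line::new(last, s));
            last = s;
            seg
        }),
    })
}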

If you think this sounds like a good way to do it, I'd be happy to open a PR!

(An alternative, also stated in segments_of_slice's comment, would be to create a trait such that iter.segments() works on any iterator. Personally, I think this wouldn't be worth it and the parallelity with flatten would also be lost.)

Overflow in mindist

I've done something bad here:

Cubic(CubicBez { p0: (232.0, 126.0), p1: (134.0, 126.0), p2: (139.0, 232.0), p3: (141.0, 301.0) }) <-> Line(Line { p0: (359.0, 416.0), p1: (367.0, 755.0) })
thread '<unnamed>' panicked at 'attempt to subtract with overflow', /Users/simon/.cargo/git/checkouts/kurbo-aa548d4979964553/e964d99/src/mindist.rs:206:49

choose(n, i) as f64 * (1.0 - u as f64).powi((n - i) as i32) * u.powi(i as i32)

serde support?

Is there any specific reason that this crate does not have optional (feature-gated) serde serialization support? By the way, great work.

Segment/segment intersections

We currently have a line/segment intersection method, which is great, but I think a generic segment/segment intersection trait would be very useful: line/line, line/curve, curve/curve. I have this code in beziers.py and not having it in kurbo is a blocker for me in moving some projects to kurbo.

The only problem is that segment/segment intersections have many gnarly edge cases, and I'm not 100% convinced that my code handles all of them.

Is something... really wrong with solve_cubic?

I called solve_cubic with a = -20.0 b = 30.0 c = 0.0 d = -5.0.
I expected [1.36603, -0.36603, 0.5].
I got [-2.732050807568877, 0.7320508075688771, 2.0000000000000004].

Taking the x=2.0 case gives us -20*2**3 + 30*2**2 + 0*2 - 5 = -45 - which is pretty far from zero. Is something broken or is it my expectations?

Ambiguous license

The top level of the repository has files for both the MIT and Apache licenses, suggesting dual licensing. But src/lib.rs indicates only the Apache license. I would prefer dual licensing, since I can't use any dependency that's solely under the Apache license in the core of AccessKit.

Figure out sign conventions

One of the current points of confusion is whether the coordinate system is y-up or y-down. The former is more conventional in math, the latter completely standard in graphics these days (though this was less true historically; PostScript was y-up, as are fonts to this day).

Related issues are conventions around whether a positive area shape is clockwise or withershins, and whether a clockwise circle has positive or negative signed curvature. The winding number should match area - it should be positive if the point is inside a positive area shape. Either convention will do, but it should be consistent.

Another thing that falls in the same category is the convention of Vec2::cross. The right hand rule says right ^ up = 1, but is up (0, 1) or (0, -1)? For this, I think we pick one and document it.

I do think kurbo is interesting in both math and graphics contexts. For methods like Vec2::from_angle and Affine::rotate it's hard to just pick a convention, so I'm inclined to split them into _y_up and _y_down variants, so that the user makes an explicit choice. For the signed area stuff, I'm not sure there's a strong convention; e^{iπt} is anticlockwise in a y-up coordinate space as t increases. Do people know of other conventions we can follow? Something along the lines of JuliaLang/julia#8750 which surveys rounding conventions of many languages might be a model.

Tagging @behdad @jrus @Pomax @nical @Connicpu @kvark as they might have horses in this race.

Add Rect::with_origin and Rect::with_size

Two simple convenience methods that return a new rect with the current rect's size (or origin) and a new origin (or size).

Approximately:

// impl Rect {

pub fn with_origin(self, origin: Point) -> Rect {
    Rect::from_origin_size(origin, self.size())
}
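The with_size counterpart would presumably mirror this, reusing the current origin (sketch only; the exact parameter types, e.g. impl Into<Size>, are up for discussion):

pub fn with_size(self, size: Size) -> Rect {
    Rect::from_origin_size(self.origin(), size)
}

// }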

Some desired methods

impl PathSeg {
    fn start(&self) -> Point;
    fn end(&self) -> Point;
    // Because pattern matching just to get start/end points is annoying, and using to_cubic is hacky.
}

impl Point {
    fn square_distance(&self, other: Point) -> f64;
    fn unitize(&self) -> Point;
    fn angle(&self) -> f64;
    fn dot(&self, other: Point) -> f64;
    fn cross(&self, other: Point) -> f64;
}

Probably more as I think of them. I don't know if you want to turn every struct into a Ruby-like grab-bag of methods, but these are things I end up reaching for frequently.

Possibility of using the `Float` trait instead of `f64`?

This relates to linebender/norad#108.

The num_traits crate declares a trait called Float which implements Neg, Add, Mul, Rem, Div, etc etc. It then implements this trait for f32 and f64.

pub trait Float: Num + Copy + NumCast + PartialOrd + Neg<Output = Self> {...}

Many libraries which work on numbers use this crate to be generic over floating point types.

I wonder if Kurbo could too?

Edited to add: I think this issue is fundamentally different from, but similar to, #159, because it basically lets the API consumer decide whether or not they are willing to take the risks of f32 lower precision.

Interpolatable cubic-to-quadratic

One facility which would be very useful (all right, indispensable) for font work is the ability to convert multiple cubic curves to quadratics together, with the guarantee that the returned curves have the same structure (same number of quadratics and, ideally, the same elidable "implied oncurves").

fontmake's cu2qu has an algorithm with these guarantees; we could steal it, or do something clever ourselves.

Enable `RoundedRect` to change the radius on all four corners

Currently, to do this, you'd have to do the same thing as the Shape implementation for RoundedRect does, with the exception that each corner could have a different radius (or none). It would also make it much easier to implement styling of all corners in druid.

The overhead on RoundedRect would be minimal. This could easily be implemented by making RoundedRectPathIter::arcs Option<T>s and adding three more f64 for each corner in RoundedRect. I'm not sure how a non-rounded corner (ie. zero radius) should be specified. Two possibilities are making the radius variables in RoundedRect also Option or when creating the RoundedRectPathIter comparing the radius to some minimum value.

I will implement this if I get the Ok and some feedback on the problem I mentioned.

kurbo v0.5.10 breaks compat with druid

I'm considering this an issue in kurbo, since according to semver a patch shouldn't break compatibility. There are nine places where this occurs, all with the same error:

error[E0277]: the trait bound `kurbo::size::Size: std::convert::From<kurbo::vec2::Vec2>` is not satisfied
   --> /Users/code/.cargo/git/checkouts/druid-1a6b6c4d20db75fe/4728004/druid/src/widget/checkbox.rs:101:13
    |
101 |             RoundedRect::from_origin_size(Point::ORIGIN, Size::new(size, size).to_vec2(), 2.);
    |             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ the trait `std::convert::From<kurbo::vec2::Vec2>` is not implemented for `kurbo::size::Size`
    |
   ::: /Users/code/.cargo/registry/src/github.com-1ecc6299db9ec823/kurbo-0.5.10/src/rounded_rect.rs:67:20
    |
67  |         size: impl Into<Size>,
    |                    ---------- required by this bound in `kurbo::rounded_rect::RoundedRect::from_origin_size`
    |
    = help: the following implementations were found:
              <kurbo::size::Size as std::convert::From<(f64, f64)>>
    = note: required because of the requirements on the impl of `std::convert::Into<kurbo::size::Size>` for `kurbo::vec2::Vec2`

Add contains(Point) fn to shape, with default impl

This would be 100% about ergonomics.

Motivated by linebender/druid#231. Checking whether a point is inside some shape is going to be a very common task when doing UI development, and the current way to do this is not very discoverable; winding is a term that many people (myself included) may not be aware of or notice when they're thinking about hit-testing.

My idea would be to add a method to shape, with the signature

trait Shape {
    fn contains(&self, point: Point) -> bool {
        let winding = self.winding(point);
        winding != 0 && winding.signum() == self.area().signum() as i32
    }
}

I suspect there may be reasons I haven't thought of that make this not a good idea, in which case an alternative would be to just add a contains() method to a few important shapes like Rect and Circle. (edit: I see this already exists on Rect...)

Support f32?

I'm currently porting font-rs to use kurbo (for some experimental purposes) and I found that kurbo is using f64 everywhere, which seems to be an unnatural choice.

I think f64 is mostly something to avoid in compute-intensive scenarios; f64 values consume more memory bandwidth and are slower on GPUs. So can we have kurbo support f32 via generics, or simply use f32 everywhere (although that would be a breaking change)?

Add non-uniform scale `Affine`

Hello, the (quite new) arcs crate is using kurbo as a dependency. I want to implement basic transformations based on Affine. What seems to be missing is a non-uniform-scale constructor.

#[inline]
pub const fn scale_non_uniform(s_x: f64, s_y: f64) -> Affine {
    Affine([s_x, 0.0, 0.0, s_y, 0.0, 0.0])
}

It would be nice to be able to call this "natively" from kurbo instead of doing it in the arcs crate.

Is this something you would add to kurbo? I can also issue a pull request, but I'd like some feedback on naming and implementation first, as I'm not too familiar with kurbo.

Add EdgeInsets type

This would be a type that could be used to describe the distance between the edges of two rectangles.

struct EdgeInsets {
    top: f64,
    right: f64,
    bottom: f64,
    left: f64,
}

Operators: rect ± insets => rect, and possibly rect - rect => insets.

From/Into: From<f64> is uniform insets; From<(f64, f64)> is uniform top/bottom and left/right; From<(f64, f64, f64, f64)> is as expected.
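For concreteness, a sketch of what the rect + insets operator could look like; the sign convention (whether positive insets grow or shrink the rect) is exactly the kind of thing to settle first, and here positive values grow it outward:

use std::ops::Add;
use kurbo::Rect;

struct EdgeInsets {
    top: f64,
    right: f64,
    bottom: f64,
    left: f64,
}

impl Add<EdgeInsets> for Rect {
    type Output = Rect;

    fn add(self, insets: EdgeInsets) -> Rect {
        // Positive insets expand the rect outward on each edge (y-down).
        Rect::new(
            self.x0 - insets.left,
            self.y0 - insets.top,
            self.x1 + insets.right,
            self.y1 + insets.bottom,
        )
    }
}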

Add Size::clamp

Looking through druid, this is one we will definitely want.
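Presumably something along these lines (a sketch; naming and argument order to be decided):

use kurbo::Size;

/// Clamp each dimension of `size` to lie within [min, max].
fn clamp_size(size: Size, min: Size, max: Size) -> Size {
    Size::new(
        size.width.max(min.width).min(max.width),
        size.height.max(min.height).min(max.height),
    )
}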

Policy on NaN in vectors?

What is the policy on NaN values?

At the moment, Vec2::new() will happily accept NaN, -0, and infinity as parameters even though they tend to "contaminate" any math done with them and result in garbage values.

I normally treat constructing a Vec2 with non-finite values as a programming error (e.g. as a result of divide-by-zero) and programming errors should show themselves quickly and loudly, so using that logic there should be some sort of debug_assert!(x.is_finite()) in the Vec2::new() constructor. Possibly with a Vec2::new_unchecked() which skips the checks and is const fn.

I'm doing a review with cargo-crev and the "This produces NaN values when the magnitude is 0" comment above Vec2::normalize() made me curious about the crate's policy.
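For concreteness, the kind of check being proposed, shown here as a user-side wrapper rather than a change to Vec2::new itself:

use kurbo::Vec2;

/// Debug-checked construction: panics in debug builds if either component is
/// NaN or infinite; compiles down to a plain constructor call in release.
fn vec2_checked(x: f64, y: f64) -> Vec2 {
    debug_assert!(x.is_finite() && y.is_finite(), "non-finite Vec2 components");
    Vec2::new(x, y)
}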

Use `sin_cos` for slightly faster computation

let s = th.sin();

sin and cos are calculated using some expansion or other (I assume Taylor series?). There exists a method f64::sin_cos that calculates both at the same time, saving a few numeric operations. Using this function here and anywhere else you need both results will result in a slight speedup.
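That is, at call sites where both values are needed (reusing the th from the snippet above):

let (s, c) = th.sin_cos(); // one call computes both sin and cos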

ParamCurveArclen::inv_arclen accuracy bug

Hi guys, I ran into a small bug in ParamCurveArclen::inv_arclen. The number of iterations was calculated as -log2(accuracy), which doesn't work when the whole arc length is much greater than 1.

I tried a fix in #152. Please tell me what you think.

Add missing (commutative) Mul impls

There are a bunch of places where we have code like,

impl Mul<Line> for Affine {
    type Output = Line;

    #[inline]
    fn mul(self, other: Line) -> Line {
        Line {
            p0: self * other.p0,
            p1: self * other.p1,
        }
    }
}

Without also having impl Mul<Affine> for Line. This means that affine * line works but line * affine doesn't, which is a papercut.
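Presumably the missing mirror impl, as it would be written inside kurbo, is just a thin delegation (sketch; whether postmultiplication should mean anything different is part of the decision):

impl Mul<Affine> for Line {
    type Output = Line;

    #[inline]
    fn mul(self, other: Affine) -> Line {
        // Delegate to the existing Affine * Line impl above.
        other * self
    }
}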

Positive/Negative Insets

Positive Insets in kurbo make a rectangle larger and negative ones smaller. I find this pretty confusing because I think of insetting something as moving it inwards. From a very quick search, this is also how it's done in UIKit, Android Graphics and in Java AWT. Maybe there's a good reason why it's different here (and it's also a somewhat bug-prone change to make now), but I still wanted to point it out because I think it's kind of confusing.

Concerns around impl From<((f64, f64), (f64, f64))> for Rect

This is currently implemented, and I think that it is likely to be misused.

A rect can be constructed from a pair of points or from a point and a size. There is no way to know from this implementation which behaviour to expect.

I think it would be reasonable to have a conversion between Rect and (f64, f64, f64, f64), corresponding to (minx, miny, maxx, maxy). I also think that ((f64, f64), (f64, f64)), if implemented, should default to (origin, size), but the fact that there's any room for confusion makes me feel that it should be removed.

BezPath::contains returns incorrect results

Description

The current implementation of BezPath::contains gives incorrect results in many cases.

Environment

  • Rustc - 1.52.0
  • Kurbo - 0.8.1 (latest)

Steps to Reproduce

Compile and run the example below as main.rs. This compares against a different check of points being inside the triangle, though based on the note on PathSeg::intersect_line, this is not a reliable check.

use kurbo::{BezPath, Line, Point, Shape};

fn contains_count_intersections(path: &BezPath, point: Point) -> bool {
    let bbox = path.bounding_box();
    let outside_point = kurbo::Point::new(bbox.min_x() - 1.0, bbox.min_y());
    let line = Line::new(point, outside_point);
    let num_intersections = path
        .segments()
        .map(|seg| seg.intersect_line(line).len())
        .sum::<usize>();

    num_intersections % 2 != 0
}

fn main() {
    let mut path = BezPath::new();
    path.move_to((0.0, 0.0));
    path.line_to((1.0, 1.0));
    path.line_to((2.0, 0.0));
    path.close_path();

    let test_point = Point::new(1.0, 0.5);

    println!(
        "path.contains({}) = {}",
        test_point,
        path.contains(test_point),
    );
    println!(
        "contains_count_intersections({}) = {}",
        test_point,
        contains_count_intersections(&path, test_point),
    );
}

Expected behavior

Program runs and outputs

path.contains((1, 0.5)) = true
contains_count_intersections((1, 0.5)) = true

Observed behavior

Program runs and outputs

path.contains((1, 0.5)) = false
contains_count_intersections((1, 0.5)) = true

Rethink ParamCurve trait hierarchy

Right now, there are a lot of essentially one-method traits in the ParamCurve hierarchy. The main motivation is to encourage implementations of parametrized curves, and not all of the traits are easy to implement, so implementers might want to pick and choose. Another strong motivation is to allow finite numbers of derivatives. (Infinite differentiation is of course totally reasonable for parametric polynomials, and another way to get there more generally is to implement symbolic math, but going finite is a tradeoff I'm quite willing to make). But having tons of fine-grained traits is annoying for users, and ultimately I think not all that useful.

Here's my current thinking. Have three main traits: ParamCurveRaw (which is basically the current ParamCurve), ParamCurveDeriv (basically the same as now), and ParamCurve, where the last one is "batteries included." The first two exist to support differentiation: the deriv method of ParamCurveDeriv outputs a ParamCurveRaw which can either impl ParamCurveDeriv or not. And ParamCurve will probably have a bound of first and second derivatives.

To ParamCurve we add the rather powerful to_bez_path which takes a tolerance. This is the gateway to Shape, allowing all curves to be rendered very easily in piet. (For coherence reasons, I think that'll be a newtype rather than a blanket impl, but that doesn't feel bad to me). More important, it lets us add new methods to ParamCurve as long as there's a default impl in terms of the Bézier path; I'm particularly thinking of things like area, which can get tricky for general curves, and maybe not that useful.

Many users will just be able to import ParamCurve. If they want derivatives, they'll need to import ParamCurveDeriv, and if they want to evaluate or subdivide, they'll need to import ParamCurveRaw as well. This seems like a reasonable balance.
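A rough sketch of the shape of that hierarchy; names, methods, and bounds are placeholders for discussion, not a finished design:

use kurbo::{BezPath, Point};

/// The minimal "evaluate and subdivide" trait (roughly today's ParamCurve).
trait ParamCurveRaw {
    fn eval(&self, t: f64) -> Point;
    fn subsegment(&self, start: f64, end: f64) -> Self
    where
        Self: Sized;
}

/// Differentiation: the derivative is only required to be a raw curve, which
/// is how the hierarchy stays finite.
trait ParamCurveDeriv: ParamCurveRaw {
    type Deriv: ParamCurveRaw;
    fn deriv(&self) -> Self::Deriv;
}

/// "Batteries included": bounded by two derivatives and able to render
/// itself, which enables default impls of methods like area.
trait ParamCurve: ParamCurveDeriv
where
    Self::Deriv: ParamCurveDeriv,
{
    fn to_bez_path(&self, tolerance: f64) -> BezPath;
}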

Here are some remaining issues to sort out:

The type of extrema should probably not be ArrayVec<[f64; 4]>, as in general a curve can have more than 4 extrema. My current thinking is a SmallVec so it's efficient in the common case, but fully general. I'm open to ideas.

(Going to post this now, but will probably think of other issues, expect edits or followups)
