
Issue: Collision detection (time-steward), open, 11 comments

elidupree commented on July 19, 2024

Comments (11)

elidupree commented on July 19, 2024

Basic goal: Spatial-tree-based, bounding-box-based collision detection that reports when objects start or stop being "nearby" each other, and is very fast in normal cases (O(1) overhead per movement when there are no nearby boxes, rather than O(log n)). Since the tree nodes are stationary, it's an (acceptable) flaw that moving objects have a cost of O(distance moved/size).

The API for users of this would be something like:

```rust
trait BoundingBoxCollisionDetectable {
  type Object: StewardData;
  // A way to distinguish if the same object can be put in multiple collision
  // detectors of the same type. Can easily be () if not.
  type Space: StewardData;
  // An Object generally has to store some opaque data for the collision detector.
  // It would normally include a DataHandle to a tree node.
  // The first two methods below are getter and setter methods for that data.
  type Data: StewardData;

  fn get_collision_detector_data(accessor: &EventAccessor, object: &DataHandle<Self::Object>, space: &Self::Space) -> Self::Data; // possibly different with query-by-reference
  fn set_collision_detector_data(accessor: &EventAccessor, object: &DataHandle<Self::Object>, space: &Self::Space, data: Self::Data);

  fn calculate_current_bounding_box(accessor: &EventAccessor, object: &DataHandle<Self::Object>, space: &Self::Space) -> BoundingBox;
  fn when_escapes(accessor: &EventAccessor, object: &DataHandle<Self::Object>, space: &Self::Space, bounds: BoundingBox) -> Time;
  fn becomes_nearby(accessor: &EventAccessor, object: &DataHandle<Self::Object>, space: &Self::Space);
  fn stops_being_nearby(accessor: &EventAccessor, object: &DataHandle<Self::Object>, space: &Self::Space);
}
```

...

```rust
impl<B: BoundingBoxCollisionDetectable> BoundingBoxCollisionDetector<B> {
  fn insert(accessor: &EventAccessor, object: &DataHandle<B::Object>, space: &B::Space, location_hint: Option<&DataHandle<B::Object>>);
  fn remove(accessor: &EventAccessor, object: &DataHandle<B::Object>, space: &B::Space);
  fn neighbors(accessor: &EventAccessor, object: &DataHandle<B::Object>, space: &B::Space) -> Iter;
}
```
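To make the role of `when_escapes` concrete: given an object's motion and a bounding box, it reports the first time the object's box leaves those bounds, which is when the collision detector must re-examine it. Here is a minimal sketch for linear integer motion; all types are stand-ins, not the real TimeSteward API.

```rust
// Hypothetical sketch of a when_escapes computation for linear motion.
// BoundingBox and LinearMotion are stand-in types for illustration.
#[derive(Clone, Copy)]
struct BoundingBox {
    min: [i64; 2],
    max: [i64; 2],
}

#[derive(Clone, Copy)]
struct LinearMotion {
    position: [i64; 2], // position at time 0
    velocity: [i64; 2], // units per tick
}

/// Earliest time >= 0 at which any coordinate leaves [min, max],
/// or None if the object never escapes.
fn when_escapes(motion: LinearMotion, bounds: BoundingBox) -> Option<i64> {
    let mut earliest: Option<i64> = None;
    for d in 0..2 {
        let (p, v) = (motion.position[d], motion.velocity[d]);
        let t = if v > 0 {
            Some((bounds.max[d] - p) / v + 1) // first tick strictly past max
        } else if v < 0 {
            Some((p - bounds.min[d]) / (-v) + 1) // first tick strictly below min
        } else {
            None // stationary in this dimension: never escapes this way
        };
        if let Some(t) = t {
            earliest = Some(earliest.map_or(t, |e| e.min(t)));
        }
    }
    earliest
}
```

The real version would presumably work on whatever trajectory representation the simulation uses; the point is only that escape times are exactly predictable, which is what lets the detector schedule events instead of polling.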

elidupree commented on July 19, 2024

A few particular issues complicate making this efficient in common cases.

  1. If we use a simple Z-order-based tree, then objects that move across large-power-of-2 boundaries have to follow O(log n) parent pointers to get to the nodes they're going towards. We need neighbor pointers of some sort.
  2. When a small object moves its bounding box, how does it know whether it would start neighboring a large object? The normal way would be to record the large object only in a large node, but then the small object would have to follow parent pointers all the way up the tree on every movement to make sure no such object existed.
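Issue 1 can be seen concretely with Morton codes: stepping one cell across a large power-of-2 boundary flips many high bits of the key, so the old and new cells share an ancestor only near the root. A small illustrative sketch (not part of the proposed design):

```rust
// Illustration of issue 1: crossing x = 8 (i.e. moving from cell 7 to
// cell 8) changes four bits of the x coordinate, so in a Z-order tree
// the two cells only meet four levels up.

/// Interleave the low 16 bits of x and y into a 2-D Morton code.
fn morton_2d(x: u32, y: u32) -> u64 {
    let mut code = 0u64;
    for bit in 0..16 {
        code |= (((x >> bit) & 1) as u64) << (2 * bit);
        code |= (((y >> bit) & 1) as u64) << (2 * bit + 1);
    }
    code
}

/// How many tree levels you must ascend from each cell to reach a common
/// ancestor (each level consumes 2 bits of the code in 2-D).
fn levels_to_common_ancestor(a: u64, b: u64) -> u32 {
    if a == b { 0 } else { (64 - (a ^ b).leading_zeros() + 1) / 2 }
}
```

Moving 6 → 7 only needs one level, but 7 → 8 needs four; with larger coordinates the gap grows to O(log n), which is what the neighbor pointers are meant to avoid.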

elidupree commented on July 19, 2024

So, here's my clever math for dealing with the issue:

Think of a complete tree of power-of-2-aligned boxes. Now, double the size of every box. But don't scale them up around a corner – scale them up around their centers.

The original tree had a problem: at high powers of 2, there were single boundaries that were shared between nodes of many different tree levels. But the Double Tree distributes the boundaries evenly over the integers in each dimension. This gives it some very nice properties. In particular:

  1. Because the nodes at each level overlap, any object can be completely contained in a single node of no more than twice its width in its largest dimension.
  2. Every node overlaps the boundaries of at most 2^dimensions larger nodes in the entire tree. We call those nodes its larger cousins. (We'll call any node that partially overlaps a "cousin". Also, each node has 1 "parent" twice its width that completely contains it, and 2^dimensions "children" half its width that are completely contained inside it.)
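The boundary-distribution property can be checked directly. A level-L box of size 2^L, doubled around its center, spans [(i − 1/2)·2^L, (i + 3/2)·2^L], so its boundaries are odd multiples of 2^(L−1); hence every nonzero integer coordinate is a boundary at exactly one level. A small verification sketch:

```rust
// In the original tree, e.g. the coordinate 8 is a boundary at levels
// 1, 2, and 3 simultaneously. In the Double Tree, level-L boundaries
// are the odd multiples of 2^(L-1), so each nonzero integer coordinate
// belongs to exactly one level.

/// The unique level whose doubled boxes have a boundary at `coord`
/// (requires coord != 0): coord = odd * 2^(L-1), so L = trailing zeros + 1.
fn boundary_level(coord: i64) -> u32 {
    assert!(coord != 0);
    coord.unsigned_abs().trailing_zeros() + 1
}
```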

Therefore, here is how the actual data structure can be arranged:
A. Every object has mutual references with the smallest node that contains it. We call nodes that actually contain objects "live nodes".
B. Every live node has mutual references with every other live node that overlaps it. (This may use O(n^2) space, but that's mostly unavoidable when N objects are overlapping each other anyway.) Between this and A, any object can easily find any other object that overlaps it.
C. Every node (except the root that contains the whole space) has a reference to its direct parent node. (i.e. this is like a simple trie, not a radix trie.)
D. Every node has a reference to each of its direct children that currently exist, but not ones that aren't forced to exist by the other rules.
E. Every node has references to all its larger cousins. (Or maybe there's an optimization where we could get away with storing only half of them? Never mind that for now.)
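Rules A through E suggest roughly the following per-node bookkeeping; this is a sketch with arena indices standing in for DataHandles, and all names are illustrative:

```rust
// Rough per-node state implied by invariants A-E. Arena indices stand
// in for TimeSteward DataHandles.
const DIMENSIONS: usize = 2;

type NodeId = usize;
type ObjectId = usize;

struct Node {
    // C: every non-root node points at its direct parent.
    parent: Option<NodeId>,
    // D: direct children that currently exist (up to 2^dimensions).
    children: Vec<NodeId>,
    // E: the at most 2^dimensions larger nodes whose boundaries this
    // node overlaps ("larger cousins").
    larger_cousins: Vec<NodeId>,
    // A: objects whose smallest containing node is this one; a node
    // with a nonempty list here is "live".
    objects: Vec<ObjectId>,
    // B: mutual references with every other live node that overlaps us.
    overlapping_live_nodes: Vec<NodeId>,
}

impl Node {
    fn is_live(&self) -> bool {
        !self.objects.is_empty()
    }
}
```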

I aim to show that you can move an object from one node to a parent, child, or same-size cousin, without knowing in advance whether that node exists, in O(1) operations (plus the acceptable expense of O(number of objects in the nodes that overlap the object's node)). But I'll do that in another comment because this one is long already.

elidupree commented on July 19, 2024

Proof that we can maintain C, D, and E in O(1) time while moving an object in one of those ways:

  • If moving from node N to N's parent: trivial; all we need to do is drop N if it no longer contains any objects.
  • If moving from N to N's child: If the child currently exists, N knows it and the operation is trivial. Otherwise, you create the child with N as its parent, and no children. The child's larger cousins are all either the parent's larger cousins, which we know, or the parent's same-size cousins, which we will cover how to find/construct next.
  • If moving from N to N's same-size cousin Q: Q is always the direct child of either N's parent or one of N's larger cousins. So if it already exists, we can find it. If it doesn't exist, we can construct it. Q's larger cousins can always be found among N's larger cousins, Q's parent's larger cousins, and N's parent. (EDIT: I neglected the case where Q has a larger cousin the same size as N's parent which is 1 step away from Q's parent, 2 steps away from N's parent. This isn't a larger cousin of N, so it doesn't necessarily exist at the moment. I'll have to think more…) (Observe that any larger cousin bigger than your parent must also be a larger cousin of your parent, because it can't intersect you without intersecting your parent, and it can't completely contain your parent without also completely containing you. And among larger cousins the same size as Q's parent, Q can't have traveled far enough away from N to touch any nodes of that size except for N's parent and its own.)

Proof that we can maintain B in (almost) O(number of overlaps) time while moving an object in one of these ways:

  • When moving a single step like this, the only larger nodes you start intersecting are the larger cousins of your new node, and the only ones you stop intersecting are the larger cousins of your old node. So you can find all of those nodes in O(1) time.
  • The only same-size nodes you start/stop intersecting are the children of your parent and larger cousins.
  • As for smaller nodes, you just iterate the whole subtree, because it only has O(number of objects in the subtree) nodes overall. Wait, not quite. It actually can have O(number of objects in the subtree * log(ratio of your size to the smallest object in the subtree / number of objects in the subtree)) nodes because it's not a radix tree. Well, that's still a pretty good bound, and you don't REALLY want to have a large object that moves close to lots of tiny objects anyway. (The nicest performance bounds are still maintained if the large object never moves, only has tiny objects bounce off of it.) Or maybe we could make it into a radix tree somehow? I guess the larger-cousin references could point to ancestors of the logical larger-cousin if it doesn't exist in the tree. But that would also mean that creating a new child potentially has to update a lot of larger-cousin pointers instead of just one parent pointer like a regular radix tree.

elidupree commented on July 19, 2024

I've given this a bunch more thought, and the EDIT above points to what is probably an intractable problem with the design, at least with the exact invariants given above. It's hard to visualize, though!

My current idea is to expand each node by two-thirds instead of 100%. This makes the tree structure much simpler: if D is a descendant of A, then if D is in A's left subtree, it may overlap the A-sized node one step to A's left but never the one to A's right; and if D is in A's right subtree, it may overlap the A-sized node one step to A's right but never the one to A's left. This differs from the current situation where, even considering only one dimension, a node can overlap 3 nodes of the same larger size.
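This containment claim can be spot-checked numerically. In the sketch below, coordinates are scaled by 6 so that thirds and halves of power-of-2 cell sizes stay integral: a level-l cell has size 6·2^l in scaled units and is expanded by one third of its size on each side.

```rust
// Numeric check of the two-thirds-expansion claim, in one dimension:
// a descendant in A's left subtree, after expansion, can touch but
// never overlap A's right neighbor, while it may overlap A's left
// neighbor. Verification sketch, not the real structure.

/// Expanded interval of the level-`level` cell with index `i`:
/// the cell [i*s, (i+1)*s] grown by s/3 on each side, where s = 6 << level.
fn expanded(level: u32, i: i64) -> (i64, i64) {
    let s = 6i64 << level;
    (i * s - s / 3, (i + 1) * s + s / 3)
}

/// Do two intervals overlap in their interiors?
fn overlaps(a: (i64, i64), b: (i64, i64)) -> bool {
    a.0 < b.1 && b.0 < a.1
}
```

Taking A to be the level-3 cell with index 0, its left half is exactly the level-2 cell with index 0; that cell's expanded interval ends right where the right neighbor's expanded interval begins, so they touch without overlapping, and every smaller descendant in the left half stays strictly clear.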

Also, I don't think it makes sense to keep all larger cousins of larger cousins - I think that'll always cascade to creating log n things at a time. I'm thinking we need to keep only the larger nodes that overlap live nodes...

elidupree commented on July 19, 2024

Okay, here's my current model. Unfortunately I'm not up to making it fully rigorous today, but here goes:

Nodes are expanded by two-thirds, as noted above.

A "neighbor" of a node A is a node at the same level as A that is exactly one step away from A in one or more dimensions (and zero steps away in the others).

All existing nodes maintain pointers to their parent, children that exist, and neighbors that exist.

WLOG regarding directions:

  1. Whenever a node A has a live node D in its left subtree, A's left neighbor exists. Whenever a node A has a live node D in its upper left quadrant, its upper left neighbor exists, etc. This guarantees that for any D, all nodes larger than D which overlap with D exist (in fact, it's slightly stricter than that, which makes the rule simpler).
  2. Each existing node D keeps a pointer to the unique ancestor A for which D is a leftmost descendant (leftmost among nodes at D's level) of a right child of A. (D being a direct right child of A counts.) This is the unique A which might need to gain left neighbors according to (1) if an entity living in D moves left from D.
  3. If any of those neighbors don't exist, we can create them. Either A is a right child, meaning the new neighbors are children of A's parent, or A is a left child, meaning the new neighbors are children of A's parent's left neighbor, which must exist because D is in the left subtree of A's parent. (That wording is wrong about diagonals, but I'm pretty sure it generalizes.)
  4. No other new nodes need to be created, keeping this to O(1) (well, O(2^dimensions)).
  5. Moving an entity to a parent or child of its current node (rather than to a neighbor) is even easier, because that doesn't change what subtree it's in for any larger nodes.

elidupree commented on July 19, 2024

Oh, here's a little complication: how do we know when we are allowed to delete nodes? Perhaps we need to have each node maintain a count of the entities which are currently enforcing its existence (which gets updated at the same time the node is potentially created, and then updated again using the same pointers when an object moves back in the other direction).
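That idea amounts to a small reference count per node. A minimal sketch, with illustrative names:

```rust
// Sketch of the proposed deletion rule: each node tracks how many
// entities are currently forcing it to exist, and may be dropped when
// that count returns to zero.
struct ForcedNode {
    forcing_entities: u32,
}

impl ForcedNode {
    fn force(&mut self) {
        self.forcing_entities += 1;
    }

    /// Returns true if the node may now be deleted.
    fn unforce(&mut self) -> bool {
        self.forcing_entities -= 1;
        self.forcing_entities == 0
    }
}
```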

elidupree commented on July 19, 2024

The trouble with keeping a count is that when you create a new small entity next to an existing small entity, the count has to be updated in (log n) nodes.

Take #N+1: The data structure is a radix tree of live nodes. Thus, there isn't automatically a (log n) cost to creating a new small entity in the middle of nowhere, or having an entity cross over a large boundary.

As an aside, one downside of the radix tree model is this: suppose there are 2 tiny entities far away from each other, and nothing else. When these entities move, they might have to check their closest-existing-ancestor pointer, which points to the shared ancestor, and modify its children; this entangles the events unnecessarily. But maybe we could model the radix tree in a way where each subtree of the shared ancestor is a separate entity, so the shared ancestor doesn't need to be touched most of the time. Decisions about the structure of the data structure can be partially separated from decisions about how it maps to entities.

Just like my last model, nodes are expanded by two-thirds, and a node in a particular quadrant of an ancestor is "potentially overlapping" the in-the-direction-of-that-quadrant neighbors of that ancestor. Live nodes keep mutual references with all other live nodes they are "potentially overlapping". So next, we must address discoverability of newly overlapped live nodes:

Similar to (2) from my last model, each live node or branching node keeps a pointer to any ancestor it's touching a midline of; those nodes are required to be explicitly represented in the tree (so that those pointers never have to be updated), but they do NOT require such midline-ancestor pointers themselves, unless they are also live-or-branching (otherwise they would recursively require the whole path to exist), and they keep an explicit count of how many live-or-branching nodes are requiring them to exist in this way (this is trivial to maintain).

If I've analyzed things right, this rule will create only O(1) non-live, non-branching nodes along the link between any two live-or-branching nodes, meaning it doesn't incur any additional asymptotic cost. Thus, even these nodes can find their midline-ancestors (by looking up the tree until they find a real node). Also, a pragmatic optimization might be to only keep such pointers for midline-ancestors that are at least (some constant, maybe 3) levels up.

(EDIT: we also have to worry about branches induced by these nodes - those shouldn't count as "branching" for the purposes of the above rule; "branching" should only mean branches between live nodes.)

Next: how shall neighbor pointers be maintained? Gonna post this and then continue in another post...

elidupree commented on July 19, 2024

So, you're a node N, and you want to find your left neighbor (or its nearest existing ancestor).

N can find its "midline ancestor for the direction left" in O(1), as discussed. The node we're looking for is in one of that midline ancestor's left quadrants – the one next to N. If that quadrant is empty, we are done. But if it has branches, it might cost us (log n) to follow them back down to N's level – that's no good. It follows that we need to have preemptively "moved the neighbor information downwards" when a branch occurs.

Let's see: can a branch node that's on the midline of a (some constant number of levels)-larger node force the existence of, and mutual links with, its orthogonal neighbor node across the midline? If yes, then the procedure to find your orthogonal neighbor is just "follow parent pointers until you find a branch, follow neighbor pointer, follow child pointer" (because your branch forces the existence of the branch on the other side, and if there's a branch on the other side first, it forces the existence of a node on your side that can discover it). This does seem like it would force a rather large number of nodes. Can we make it so that the neighbors are only forced if they actually have live descendants? That makes it trickier to find the nearest ancestor of a nonexistent neighbor (that node might be in the middle of the log n levels between ourself and our midline ancestor, and our side might be densely packed with branches).

Would those nodes interfere with the process above for finding your midline ancestor in O(1)? Let's see… I think we could maintain pointers that allow us to navigate the "live tree" (the part that doesn't include the extra nodes) – the only reason you couldn't do that is if you needed extra branches, but having extra tree-augmentation-children doesn't change the number of live-tree children in a subtree, so each node can just maintain two sets of descendant pointers. (EDIT: No, there's a problem - if you create a new live child of a leaf augmentation node, how does it find its live parent in O(1)? The augmentation nodes can't maintain pointers to their nearest live ancestor, because it could change for a lot of them at a time...)

As an aside... trivially, nodes only really need to keep midline-ancestor pointers to ancestors that are beyond their direct ancestor within the tree, which could reduce the number of midline-ancestor nodes that are forced to exist?

elidupree commented on July 19, 2024

Wild idea: If these paired neighbors have to exist at the same time, can we make them be the "same node" in one or more senses? (Remember that there's not always just a pair - at corners, you have up to 2^dimensions nodes that are all linked)

elidupree commented on July 19, 2024

Let's see... the extreme form of that would be that ALL nodes are (dimensions)-way boundary nodes, so each node has 3^dimensions "children" - the middle one is its own, and the others are shared with its neighbors. So each node also has up to 2^dimensions "parents". And any neighbor of a node N is always a "child" of one of N's "parents". Nice!

Also, we might not even need to do the "expanded node" thing - each level's nodes can simply partition the space, because an entity can only be overlapping (dimensions+1) split-surfaces at levels bigger than it, so there's always a node that will contain it while being no more than (2^(dimensions+1)) times bigger than it. Nice-ish! (We could also get away with making up to (2^dimensions) entries for the entity to keep it within an area 2-times-bigger than the entity, like we did with the old Z-order system from the Lasercake prototype).

And the equivalent of "midpoint ancestor for a dimension" would be the unique ancestor-level where your node is split between nodes on that level along that dimension. So as an entity moves to an orthogonal neighbor, you just follow the before-and-after "split-level pointers" to find the nodes you have begun and ceased overlapping. Nice!

Remains to be seen whether this can support radix-tree-like optimizations to avoid costing (log n) when creating and deleting entities...
