hawkw / sharded-slab
a lock-free concurrent slab (experimental)
License: MIT License
we should look into replacing the vector of pages with a growable mmap'd region. Then, when we need more pages, rather than hitting malloc to make a big vec, we can increase the size of the memory map. Growing a shard could potentially be much faster.
The downside is that this inherently means a bunch more unsafe code. It could be feature-flagged, but I'm not sure if it would be worth maintaining two impls.
Currently, the `Guard` and `PoolGuard` types borrow the `Slab`/`Pool` and return a guard with the `'a` lifetime, where `'a` is the lifetime of the `Slab` or `Pool`. This ensures that references to entries don't outlive the `Slab`/`Pool` that owns them.
However, the generic lifetime means that these guards may only be stored on the stack. They cannot be placed in any type which must have a `'static` lifetime, or live on the heap for an unknown amount of time.
We might want to consider adding a way to check out an owned reference when the `Slab`/`Pool` is in an `Arc`. Since `&Arc<Self>` is a valid receiver type, we can do this. In this case, we would return a different type of guard, which clones the `Arc`'d `Pool`/`Slab` into the guard. This would ensure that the reference keeps alive the slab that the referenced item lives in.
In this case, we would have to use a `*const T` (probably should actually be a `NonNull`) for the actual reference to the object. But this is fine, since the `Arc` keeps the owner alive; we just can't prove this to the compiler.
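A standalone sketch of that shape, with hypothetical `Storage` and `OwnedGuard` types standing in for the crate's (a real guard would also manage the slot's reference count):

```rust
use std::ptr::NonNull;
use std::sync::Arc;

// Hypothetical stand-in for the Slab/Pool shard storage.
struct Storage {
    items: Vec<String>,
}

// The owned guard clones the Arc'd owner into itself, so the entry it
// points to cannot be freed while the guard is alive.
struct OwnedGuard {
    _owner: Arc<Storage>,  // keeps the storage (and the item) alive
    item: NonNull<String>, // raw pointer; validity guaranteed by _owner
}

impl OwnedGuard {
    fn get(storage: &Arc<Storage>, index: usize) -> Option<OwnedGuard> {
        let item = NonNull::from(storage.items.get(index)?);
        Some(OwnedGuard {
            _owner: Arc::clone(storage),
            item,
        })
    }

    fn value(&self) -> &str {
        // Safety: `_owner` keeps the allocation alive for as long as
        // `self` exists; we just can't prove that to the compiler.
        unsafe { self.item.as_ref().as_str() }
    }
}

fn main() {
    let storage = Arc::new(Storage {
        items: vec!["hello".to_string()],
    });
    let guard = OwnedGuard::get(&storage, 0).unwrap();
    drop(storage); // the guard's Arc clone still keeps the storage alive
    assert_eq!(guard.value(), "hello");
}
```

This relies on the item's address being stable while the `Arc` is shared, which holds here because nothing mutates the storage after construction.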
I had to read the source code to find out that `UniqueIter` yields references. It's also unclear what happens if new items are pushed while the iterator is alive.
I'm maintaining the packages for this crate for Fedora Linux, and it appears that the update to v0.1.5 introduces issues that prevent its tests from compiling on 32-bit x86 (i.e. i686-unknown-linux-gnu), in what looks like a compile-time evaluation issue:
error[E0080]: evaluation of `<page::slot::Generation<tests::custom_config::CustomConfig> as Pack<tests::custom_config::CustomConfig>>::LEN` failed
--> src/page/slot.rs:59:24
|
59 | const LEN: usize = (cfg::WIDTH - C::RESERVED_BITS) - Self::SHIFT;
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ attempt to compute `8_usize - 30_usize`, which would overflow
note: erroneous constant used
--> src/lib.rs:1030:27
|
1030 | let shift = 1 << (Self::LEN - 1);
| ^^^^^^^^^
note: erroneous constant used
--> src/lib.rs:1043:25
|
1043 | const MASK: usize = Self::BITS << Self::SHIFT;
| ^^^^^^^^^^
note: the above error was encountered while instantiating `fn Slab::<u64, tests::custom_config::CustomConfig>::insert::{closure#0}`
--> /builddir/build/BUILD/rustc-1.72.0-src/library/core/src/ops/function.rs:305:13
note: erroneous constant used
--> src/page/slot.rs:688:60
|
688 | const LEN: usize = cfg::WIDTH - (Lifecycle::<C>::LEN + Generation::<C>::LEN);
| ^^^^^^^^^^^^^^^^^^^^
note: erroneous constant used
--> src/lib.rs:1038:46
|
1038 | const SHIFT: usize = Self::Prev::SHIFT + Self::Prev::LEN;
| ^^^^^^^^^^^^^^^
note: erroneous constant used
--> src/lib.rs:1043:39
|
1043 | const MASK: usize = Self::BITS << Self::SHIFT;
| ^^^^^^^^^^^
note: erroneous constant used
--> src/page/slot.rs:760:24
|
760 | const LEN: usize = Generation::<C>::LEN;
| ^^^^^^^^^^^^^^^^^^^^
note: erroneous constant used
--> src/cfg.rs:43:30
|
43 | const USED_BITS: usize = Generation::<Self>::LEN + Generation::<Self>::SHIFT;
| ^^^^^^^^^^^^^^^^^^^^^^^
note: erroneous constant used
--> src/page/slot.rs:705:35
|
705 | pub(crate) const MAX: usize = Self::BITS - 1;
| ^^^^^^^^^^
note: erroneous constant used
--> src/cfg.rs:90:13
|
90 | RefCount::<Self>::MAX,
| ^^^^^^^^^^^^^^^^^^^^^
note: erroneous constant used
--> src/cfg.rs:160:34
|
160 | .field("used_bits", &C::USED_BITS)
| ^^^^^^^^^^^^
note: erroneous constant used
--> src/cfg.rs:163:50
|
163 | .field("max_concurrent_references", &RefCount::<C>::MAX)
| ^^^^^^^^^^^^^^^^^^
note: erroneous constant used
--> src/page/slot.rs:710:61
|
710 | test_println!("-> get: {}; MAX={}", self.value, RefCount::<C>::MAX);
| ^^^^^^^^^^^^^^^^^^
For more information about this error, try `rustc --explain E0080`.
error: could not compile `sharded-slab` (lib test) due to previous error
(filing this issue for posterity, much discussion has happened on Discord already: https://discord.com/channels/500028886025895936/627696030334582784/865274661260886086)
MVCE is here: https://github.com/tobz/sharded-slab-memory-explosion
Essentially, when using `Pool::create_owned` to get an owned handle to a pool entry, the pooling behavior when the handle is manually cleared or dropped does not seem to behave correctly, leading to unbounded memory growth.
Instead of silently wrapping, the refcount increment should panic if it overflows.
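A minimal sketch of the desired behavior using `checked_add` (the `incr` name is illustrative, not the crate's internals):

```rust
// Increment a reference count, panicking on overflow instead of
// silently wrapping back to zero (which would corrupt the count).
fn incr(count: usize) -> usize {
    count
        .checked_add(1)
        .expect("reference count overflowed")
}

fn main() {
    assert_eq!(incr(41), 42);
    // incr(usize::MAX) would panic with a clear message rather than
    // wrapping to 0.
}
```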
We should update our tests to use `loom` 0.3. The new version makes some significant performance improvements, as well as making the model more accurate. This is a breaking change, but beyond renaming `CausalCell` to `UnsafeCell`, the impact should hopefully be pretty small.
I'm not sure if it is possible, but currently we cannot simply store an async `Mutex` in a `Pool`, since the lifetime of `PoolGuard` will cross an `await` point.
Example:
use async_std::{sync::Mutex, task::spawn};
use sharded_slab::{Clear, Pool};

#[derive(Default)]
struct Data(Mutex<()>);

impl Clear for Data {
    fn clear(&mut self) {}
}

#[async_std::main]
async fn main() {
    let pool: Pool<Data> = Default::default();
    let key = pool.create(|_| {}).unwrap();
    spawn(async move {
        let guard = pool.get(key).unwrap();
        *guard.0.lock().await = (); // <-- error: `guard` is not `Send`
        // -- `guard` is dropped here, crossing `await`
    });
}
It'd be nice to be able to have an `Option<Key>` which didn't take up more space in memory than `Key`.
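The standard trick, sketched here with std types rather than the crate's `Key`, is to give the key a non-zero internal representation so that `Option` can use the niche:

```rust
use std::mem::size_of;
use std::num::NonZeroUsize;

fn main() {
    // A key backed by NonZeroUsize has a niche (the zero bit pattern),
    // so Option costs nothing extra:
    assert_eq!(size_of::<Option<NonZeroUsize>>(), size_of::<usize>());
    // A plain usize key has no niche, so Option must add a discriminant:
    assert!(size_of::<Option<usize>>() > size_of::<usize>());
}
```

Since slab indices start at zero, this would mean storing the index offset by one internally.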
Currently `Entry: !Send`; thus, it cannot cross an `.await` point in multithreaded runtimes like tokio.
I don't see anything in the code that relies on these restrictions, so the main question here is whether `Entry` can be an instance of `Send` or not. If not, can you recommend something for the case below?
I use `sharded-slab` as actor storage in elfo, and it has been working fine for years in production, so thanks for the crate. The next version will use `cordyceps` as part of the mailbox implementation, so thanks for that implementation, too.
In the simplest case, when some actor wants to send a message to another one with a known address and wants async backpressure, it gets a target actor by its address in the slab. In the simplified form, it looks like the following code:
async fn send_to<M: Message>(&self, recipient: usize, message: M) -> Result<(), SendError<M>> {
    let Some(actor) = self.slab.get_owned(recipient) else {
        return Err(SendError(message));
    };
    actor.mailbox.send(message).await
}
The problem is that `get_owned` clones the `Arc` every time, creating massive contention on the counter that kills performance on multicore (especially NUMA) machines. The implementation of the sync `try_send_to` (which uses `slab.get()` inside) doesn't exhibit similar behavior.
I use `get_owned` here instead of `get` only because `Entry` cannot cross the `.await` point. Logically, the future cannot outlive the slab.
Due to the `expect` in `Slab::unique_iter`, the function panics when used on an empty slab.
Trying to use this as a way to speed up a fork of `async_executor`, which currently uses `Mutex<slab::Slab<Waker>>`, but a correct `Drop` impl requires at minimum an iterator over the entire slab, and preferably a draining iterator.
Context:
I am implementing an SFTP client in Rust, which requires me to provide a unique `u32` request id for each request.
Currently `Slab::get` returns the `Guard` type while `Pool::get` returns a `PoolGuard`. But looking at the contents of these two types, we can see that they carry the same information. It would be a great win for users if we were able to unify these types.
The reason for the difference between the two types comes down to their `Drop` implementations:
`Drop` impl for `Guard`: Lines 499 to 511 in ffa5c7a
`Drop` impl for `PoolGuard`: Lines 232 to 249 in ffa5c7a
Both these impls, while calling different methods, call into the same "mark for release" chain. The final guard being dropped is responsible for actually clearing the storage. The calls eventually lead to `Slot::try_remove_value` for `Guard`'s drop impl and `Slot::try_clear_storage` for `PoolGuard`'s drop impl. The sole difference comes down to the `mutator` parameter being passed to the `release_with` method by the `Slab` and `Pool` impls.
The mutator closure in `Pool`'s case calls `Clear::clear` on `T`, therefore we need the `T: Clear` bound to hold. In the `Slab`'s case we need the type to be an `Option<T>` in order to call `Option::take`.
If we can find a way to scope those two pieces of functionality to impl blocks bounded by the specific generic types, we will be able to expose a unified guard type. Not only that, but we will also be able to share a lot more code throughout the crate, as many of the functions in this particular code path are repetitive.
Hello,
I am getting a build error when using sharded-slab v0.1.2. It was caused by "hint::spin_loop()" on line 909 in src/page/slot.rs. The error was:
error[E0658]: use of unstable library feature 'renamed_spin_loop'
--> ...../sharded-slab-0.1.2/src/page/slot.rs:909:9
note: see issue #55002 rust-lang/rust#55002 for more information
error: aborting due to previous error
Please could you fix the cause of this error, or let me know if you think this was caused by a mistake on my end? Thank you!
Environment info:
rustup unchanged - 1.24.3
info: latest update on 2021-07-29, rust version 1.54.0 (a178d0322 2021-07-26)
It should be possible to clone a guard fairly trivially, by incrementing the ref count on the accessed entry.
Hi,
Do you know if you plan to support no_std?
It would be really helpful, because I'm looking for precisely this: a lock-free, no_std slab storage.
Looking at your crate, it does not seem like you use many std-only features. I would venture that no_std + alloc alone would be sufficient, and it would not be much overhead going no_std (I'm just not sure about the `std::thread::panicking` call in `panic_in_drop`).
I can possibly help with a commit to support no_std.
What do you think?
In some cases, entries will not be accessed concurrently. It might be nice to have an API for mutably accessing an entry by index if it is not being concurrently accessed. This would need to ensure that the reference count is 0, and (probably) set a bit in the slot's lifecycle that prevents immutable access while the slot is accessed mutably.
If the slot is already mutably accessed, we would want to return `None` or an `AlreadyAccessed` error of some kind (so that users can distinguish between cases where a slot cannot be referenced mutably, and cases where the slot does not exist). We probably don't want to have a method that spins until the slot can be accessed mutably; if users want to block the calling thread, they'd be better off using a read-write lock.
use sharded_slab::Slab;

fn main() {
    let slab = Slab::new();
    let key = slab.insert("hello world").unwrap();
    println!("Key : {}!", key);
    let key = slab.insert("hello world").unwrap();
    println!("Key : {}!", key); // value is present, return already present key instead of new insert result
}
Or `size()` or `len()` or `capacity()` or `unused_len()` or something.
Use case: we're using this to allocate disk buffers, and have to make sure we don't run out of memory. We can track how many items we currently have in use in the pool, but there's no API I can see to query how many of these equal-sized objects there are in the sharded pool that aren't currently used.
Or, possibly better still, how many there are currently in the pool in total, whether in use or pending use.
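Until such an API exists, one workaround is to count checkouts and returns in an external atomic counter; a minimal sketch with hypothetical names (`Tracked` is not part of the crate):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Side-counter a caller can maintain alongside the pool to track how
// many entries are currently checked out.
struct Tracked {
    in_use: AtomicUsize,
}

impl Tracked {
    fn checkout(&self) {
        self.in_use.fetch_add(1, Ordering::Relaxed);
    }
    fn release(&self) {
        self.in_use.fetch_sub(1, Ordering::Relaxed);
    }
    fn in_use(&self) -> usize {
        self.in_use.load(Ordering::Relaxed)
    }
}

fn main() {
    let t = Tracked { in_use: AtomicUsize::new(0) };
    t.checkout();
    t.checkout();
    t.release();
    assert_eq!(t.in_use(), 1);
}
```

This only tracks live checkouts, though; it cannot observe how much idle capacity the pool itself is holding, which is exactly what the requested API would expose.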
This should be exposed by a (default off?) compat feature flag.
I'm seeing unbounded memory growth when I set RESERVED_BITS to anything other than 0.
This is somewhat similar to #56 but the repro is less complex. Running on my Mac this is OOM killed after the 25th iteration.
Gist with repro and full outputs: https://gist.github.com/lrowe/56bc03b32e9d4bdabdd7d9f28f64af8d
use sharded_slab::Config;
use sharded_slab::Slab;
use memory_stats::memory_stats;

struct CustomConfig;

impl Config for CustomConfig {
    const RESERVED_BITS: usize = 1; // This is the cause.
}

fn main() {
    let slab = Slab::new_with_config::<CustomConfig>();
    for n in 0..10000 {
        let key = slab.insert(0).unwrap();
        slab.remove(key);
        let usage = memory_stats().unwrap();
        println!("n:{}\tkey:{}\trss:{}\tvs:{}", n, key, usage.physical_mem, usage.virtual_mem);
    }
}
n:0 key:0 rss:1523712 vs:418300739584
n:1 key:32 rss:1540096 vs:418300739584
n:2 key:96 rss:1540096 vs:418300739584
n:3 key:224 rss:1540096 vs:418300739584
n:4 key:480 rss:1556480 vs:418300739584
n:5 key:992 rss:1589248 vs:418300739584
n:6 key:2016 rss:1671168 vs:418434957312
n:7 key:4064 rss:1769472 vs:418434957312
n:8 key:8160 rss:1966080 vs:418434957312
n:9 key:16352 rss:2359296 vs:418434957312
n:10 key:32736 rss:3145728 vs:418434957312
n:11 key:65504 rss:4718592 vs:418434957312
n:12 key:131040 rss:7864320 vs:418434957312
n:13 key:262112 rss:14155776 vs:418434957312
n:14 key:524256 rss:26755072 vs:418447556608
n:15 key:1048544 rss:51920896 vs:418472722432
n:16 key:2097120 rss:102252544 vs:418523054080
n:17 key:4194272 rss:202915840 vs:418623717376
n:18 key:8388576 rss:404242432 vs:418825043968
n:19 key:16777184 rss:806895616 vs:419227697152
n:20 key:33554400 rss:1612201984 vs:420033003520
n:21 key:67108832 rss:3222814720 vs:421643616256
n:22 key:134217696 rss:6444040192 vs:424864841728
n:23 key:268435424 rss:6939295744 vs:431307292672
n:24 key:536870880 rss:4106715136 vs:444192194560
n:25 key:1073741792 rss:1419673600 vs:469961998336
zsh: killed cargo run
The virtual size grows absolutely huge, seemingly much bigger than that reported by the memory-stats crate.
==== Summary for process 81937
ReadOnly portion of Libraries: Total=809.2M resident=30.2M(4%) swapped_out_or_unallocated=779.1M(96%)
Writable regions: Total=96.6G written=58.2G(60%) resident=1.0G(1%) swapped_out=57.2G(59%) unallocated=38.3G(40%)
VIRTUAL RESIDENT DIRTY SWAPPED VOLATILE NONVOL EMPTY REGION
REGION TYPE SIZE SIZE SIZE SIZE SIZE SIZE SIZE COUNT (non-coalesced)
=========== ======= ======== ===== ======= ======== ====== ===== =======
Kernel Alloc Once 32K 0K 0K 16K 0K 0K 0K 1
MALLOC guard page 96K 0K 0K 0K 0K 0K 0K 5
MALLOC metadata 160K 160K 160K 0K 0K 0K 0K 8
MALLOC_LARGE 58.4G 1.0G 1.0G 57.2G 0K 0K 0K 470 see MALLOC ZONE table below
MALLOC_LARGE (reserved) 37.6G 0K 0K 0K 0K 0K 0K 1 see MALLOC ZONE table below
MALLOC_LARGE metadata 16K 16K 16K 0K 0K 0K 0K 1 see MALLOC ZONE table below
MALLOC_MEDIUM 16.0M 144K 144K 11.8M 0K 0K 0K 2 see MALLOC ZONE table below
MALLOC_MEDIUM (empty) 112.0M 0K 0K 0K 0K 0K 0K 14 see MALLOC ZONE table below
MALLOC_NANO 128.0M 96K 96K 96K 0K 0K 0K 1 see MALLOC ZONE table below
MALLOC_NANO (empty) 384.0M 0K 0K 0K 0K 0K 0K 3 see MALLOC ZONE table below
MALLOC_SMALL 8192K 64K 64K 48K 0K 0K 0K 1 see MALLOC ZONE table below
MALLOC_TINY 1024K 32K 32K 0K 0K 0K 0K 1 see MALLOC ZONE table below
Stack 8160K 48K 48K 0K 0K 0K 0K 1
Stack Guard 56.0M 0K 0K 0K 0K 0K 0K 1
VM_ALLOCATE 32K 0K 0K 0K 0K 0K 0K 2
VM_ALLOCATE (reserved) 128K 0K 0K 0K 0K 0K 0K 1 reserved VM address space (unallocated)
__AUTH 46K 43K 0K 2608 0K 0K 0K 11
__AUTH_CONST 77K 77K 0K 0K 0K 0K 0K 40
__DATA 193K 105K 32K 87K 0K 0K 0K 39
__DATA_CONST 215K 167K 0K 0K 0K 0K 0K 42
__DATA_DIRTY 78K 36K 35K 43K 0K 0K 0K 22
__LINKEDIT 802.6M 24.8M 0K 0K 0K 0K 0K 2
__OBJC_RO 66.4M 45.1M 0K 0K 0K 0K 0K 1
__OBJC_RW 2012K 1456K 0K 28K 0K 0K 0K 1
__TEXT 6780K 5468K 0K 0K 0K 0K 0K 44
dyld private memory 272K 32K 32K 16K 0K 0K 0K 2
shared memory 16K 16K 16K 0K 0K 0K 0K 1
unused but dirty shlib __DATA 76K 29K 29K 47K 0K 0K 0K 24
=========== ======= ======== ===== ======= ======== ====== ===== =======
TOTAL 97.5G 1.1G 1.0G 57.2G 0K 0K 0K 742
TOTAL, minus reserved VM space 97.5G 1.1G 1.0G 57.2G 0K 0K 0K 742
VIRTUAL RESIDENT DIRTY SWAPPED ALLOCATION BYTES DIRTY+SWAP REGION
MALLOC ZONE SIZE SIZE SIZE SIZE COUNT ALLOCATED FRAG SIZE % FRAG COUNT
=========== ======= ========= ========= ========= ========= ========= ========= ====== ======
MallocHelperZone_0x10426c000 58.5G 1.0G 1.0G 57.2G 33 96.0G 0K 0% 489
DefaultMallocZone_0x1042a4000 128.0M 96K 96K 96K 225 10K 182K 95% 1
=========== ======= ========= ========= ========= ========= ========= ========= ====== ======
TOTAL 58.6G 1.0G 1.0G 57.2G 258 96.0G 0K 0% 490
In this code:
Lines 292 to 294 in 40579b9
`Tid::current().as_usize()` may return `usize::MAX` if the thread was poisoned. This makes it hard to debug, since the error seems to indicate the user spawned too many threads, which is not the case.
error[E0195]: lifetime parameters or bounds on type `Target` do not match the trait declaration
--> /Users/altanis/.cargo/registry/src/index.crates.io-6f17d22bba15001f/sharded-slab-0.1.7/src/tid.rs:32:1
|
32 | / lazy_static! {
33 | | static ref REGISTRY: Registry = Registry {
34 | | next: AtomicUsize::new(0),
35 | | free: Mutex::new(VecDeque::new()),
36 | | };
37 | | }
| |_^ lifetimes do not match type in trait
|
= note: this error originates in the macro `__lazy_static_internal` which comes from the expansion of the macro `lazy_static` (in Nightly builds, run with -Z macro-backtrace for more info)
Compiling thread_local v1.1.7
error[E0609]: no field `free` on type `REGISTRY`
--> /Users/altanis/.cargo/registry/src/index.crates.io-6f17d22bba15001f/sharded-slab-0.1.7/src/tid.rs:150:14
|
150 | .free
| ^^^^ unknown field
|
= note: available fields are: __private_field
error[E0609]: no field `next` on type `REGISTRY`
--> /Users/altanis/.cargo/registry/src/index.crates.io-6f17d22bba15001f/sharded-slab-0.1.7/src/tid.rs:161:35
|
161 | let id = REGISTRY.next.fetch_add(1, Ordering::AcqRel);
| ^^^^ unknown field
|
= note: available fields are: __private_field
error[E0609]: no field `free` on type `REGISTRY`
--> /Users/altanis/.cargo/registry/src/index.crates.io-6f17d22bba15001f/sharded-slab-0.1.7/src/tid.rs:191:42
|
191 | let mut free_list = REGISTRY.free.lock().unwrap_or_else(PoisonError::into_inner);
| ^^^^ unknown field
|
= note: available fields are: __private_field
Some errors have detailed explanations: E0195, E0609.
For more information about an error, try `rustc --explain E0195`.
error: could not compile `sharded-slab` (lib) due to 4 previous errors
warning: build failed, waiting for other jobs to finish...
Is it possible to predefine a key, so that it works like a concurrent map?
slab.insert("key", "hello world").unwrap();
assert_eq!(slab.get("key").unwrap(), "hello world");
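The slab assigns numeric keys itself, so map-like lookup would need a side index from user keys to slab keys. A sketch of that pattern, with a `Vec` standing in for the slab:

```rust
use std::collections::HashMap;

fn main() {
    // Stand-in for the slab: it hands out numeric keys on insert.
    let mut slab: Vec<&str> = Vec::new();
    // Side index from the caller's key to the slab's key.
    let mut index: HashMap<&str, usize> = HashMap::new();

    let key = slab.len();
    slab.push("hello world");
    index.insert("key", key);

    assert_eq!(slab[index["key"]], "hello world");
}
```

For the concurrent case the index itself would need to be a concurrent map, which is arguably the data structure being asked for in the first place.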
Since a guard has a reference to the slab and knows the index of the entry, it should be possible to add a method to mark the entry to be cleared. This would drop the guard, since the entry may no longer be accessible. This could be a little more ergonomic than having to call the `clear`/`remove` method on the `Slab`/`Pool` in some cases.
The docs.rs page references the `Slab` type at https://docs.rs/sharded-slab/0.1.7/src/sharded_slab/lib.rs.html#19 but this actually brings me to the `loom` crate (I suspect due to `slab` being linked to that crate at https://docs.rs/sharded-slab/0.1.7/src/sharded_slab/lib.rs.html#137).
It would be nice not to have to move data on the stack to insert into the slab, especially since the intention is to provide preallocated storage. Currently, we use `Option<T>` for our slots, and move an item into the `Option` (making it `Some`) when inserting. This means copying data onto the stack, which is a shame if the slab is being used to store large objects.
We might want to consider using `MaybeUninit` instead, so we can have an API like
pub fn insert_ptr(&self) -> Option<(usize, &mut MaybeUninit<T>)>
with the `MaybeUninit` reference being used as an out-param.
It might be better for this to be more like
pub fn insert_with(&self, f: impl FnOnce(&mut MaybeUninit<T>)) -> Option<usize>
since this would play nicer with `loom`'s `CausalCell::with`, and because users wouldn't have to unpack the tuple.
This has the disadvantage of forcing users to deal with `MaybeUninit`, which currently requires unsafe code and is significantly less ergonomic. However, we can continue providing the current API as well, implemented on top of a `MaybeUninit`-based API.
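As a standalone illustration of the out-param pattern (not the crate's API), a caller can construct a value directly inside preallocated `MaybeUninit` storage instead of moving a fully built value across the stack:

```rust
use std::mem::MaybeUninit;

fn main() {
    // Preallocated, uninitialized storage, as a page slot would be.
    let mut slot: MaybeUninit<[u8; 1024]> = MaybeUninit::uninit();
    // The caller writes the value in place through the out-param
    // reference rather than returning a constructed value by move.
    slot.write([0u8; 1024]);
    // Safety: the slot was fully initialized by the write above.
    let buf = unsafe { slot.assume_init() };
    assert_eq!(buf.len(), 1024);
}
```

The unsafety is confined to the final `assume_init`, which is exactly the contract an `insert_with`-style API would have to document.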
`lazy_static` involves macros and takes a bit longer to compile, while `once_cell` provides a more flexible API and is on the way to being added to `std`. Replace all usages of `lazy_static` with `static VAR: Lazy<T>`.
One potential drawback is that `lazy_static` supports spin-waiting on the result, while `Lazy` doesn't. I don't think any of the crate's usages are perf-bound by this, though.
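A sketch of the proposed shape using `std::sync::OnceLock` (the std equivalent that `once_cell` paved the way for); the `Registry` fields mirror the crate's but are illustrative:

```rust
use std::collections::VecDeque;
use std::sync::{Mutex, OnceLock};

// Illustrative stand-in for the crate's thread-ID registry.
struct Registry {
    free: Mutex<VecDeque<usize>>,
}

// Lazily initialized global, replacing the lazy_static! macro block.
fn registry() -> &'static Registry {
    static REGISTRY: OnceLock<Registry> = OnceLock::new();
    REGISTRY.get_or_init(|| Registry {
        free: Mutex::new(VecDeque::new()),
    })
}

fn main() {
    registry().free.lock().unwrap().push_back(1);
    assert_eq!(registry().free.lock().unwrap().pop_front(), Some(1));
}
```

With `once_cell` itself, `static REGISTRY: Lazy<Registry> = Lazy::new(|| …)` at module scope would read almost identically.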
When the thread ID limit defined by the `Config` is reached, we currently panic. This isn't great.
As I said in tokio-rs/tracing#1485:
I do think `sharded-slab` could handle this better. We could potentially reserve the last thread ID (in this case, 4096) for a special shard that allows concurrent mutation by multiple threads, with a mutex, and use that as a fallback when the thread ID cap is reached. That way, rather than panicking, we'd degrade to contending a mutex for threads over the max, which might be a better solution. However, that would be a bit of work to implement... but I'll open a ticket for it upstream.
This seems to be happening at least on docs.rs https://docs.rs/sharded-slab/0.1.4/sharded_slab/.
Maybe a mistake was made with markdown links that caused this?
Currently, looking up an item in the slab requires dereferencing three to four `Box`es: the box around the shard slice:
Line 202 in b160433
Lines 244 to 249 in b160433
Line 73 in b160433
It would be good to reduce the number of dereferences. Ideally, we could use fixed-size arrays, since the slab is configured at compile time; however, doing this nicely would require const generics. Alternatively, we could consider using DSTs for at least some of our structs. However, this would require manually managing our allocations, and would still require some boxing; it would take careful design to actually reduce derefs with DSTs.
Since a slab is (almost) always in an `Arc` in most use cases, we might want to make the entire slab a DST, removing at least one box... this could have ergonomic disadvantages, though.
In `tid.rs`, `LEN` is defined as:
Line 47 in 8ebe120
which gives, for the default config (`MAX_THREADS = 4096`), a 13-bit space instead of 12:
assert_eq!(Tid::<crate::cfg::DefaultConfig>::LEN, 13);
Meanwhile, we have two different checks for TID overflow.
Have I missed something?
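One way to check the suspected off-by-one, as a standalone sketch (not the crate's code): the largest ID for `MAX_THREADS = 4096` is 4095, which fits in 12 bits, while computing the width from `MAX_THREADS` itself yields 13:

```rust
fn main() {
    const MAX_THREADS: usize = 4096;
    // Thread IDs range over 0..MAX_THREADS, so the largest ID is 4095,
    // which needs exactly 12 bits.
    let bits_needed = usize::BITS - (MAX_THREADS - 1).leading_zeros();
    assert_eq!(bits_needed, 12);
    // Sizing the field to hold MAX_THREADS itself needs 13 bits,
    // wasting one bit of the packed word.
    let bits_from_max = usize::BITS - MAX_THREADS.leading_zeros();
    assert_eq!(bits_from_max, 13);
}
```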
tracing: 6d8b995c99f1d7b8758da687e5b7df25456f9559
This commit has `tracing-subscriber` depending on `sharded-slab` 0.0.6.
When building `tracing-subscriber` for a 32-bit target, e.g. `cargo build --target i686-linux-android`, this results in build errors in `sharded-slab` code.
Compiling sharded-slab v0.0.6
...
Compiling tracing-subscriber v0.2.0-alpha.1 (/Users/tk/repos/tracing/tracing-subscriber)
error[E0080]: could not evaluate constant
--> /Users/tk/.cargo/registry/src/github.com-1ecc6299db9ec823/sharded-slab-0.0.6/src/page/slot.rs:54:24
|
54 | const LEN: usize = (cfg::WIDTH - C::RESERVED_BITS) - Self::SHIFT;
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ attempt to subtract with overflow
error[E0080]: could not evaluate constant
--> /Users/tk/.cargo/registry/src/github.com-1ecc6299db9ec823/sharded-slab-0.0.6/src/lib.rs:691:26
|
691 | let shift = 1 << (Self::LEN - 1);
| ^^^^^^^^^^^^^^^ referenced constant has errors
error[E0080]: could not evaluate constant
--> /Users/tk/.cargo/registry/src/github.com-1ecc6299db9ec823/sharded-slab-0.0.6/src/cfg.rs:40:30
|
40 | const USED_BITS: usize = Generation::<Self>::LEN + Generation::<Self>::SHIFT;
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ referenced constant has errors
error[E0080]: could not evaluate constant
--> /Users/tk/.cargo/registry/src/github.com-1ecc6299db9ec823/sharded-slab-0.0.6/src/cfg.rs:142:34
|
142 | .field("used_bits", &C::USED_BITS)
| ^^^^^^^^^^^^ referenced constant has errors
error[E0080]: could not evaluate constant
--> /Users/tk/.cargo/registry/src/github.com-1ecc6299db9ec823/sharded-slab-0.0.6/src/lib.rs:704:25
|
704 | const MASK: usize = Self::BITS << Self::SHIFT;
| ^^^^^^^^^^^^^^^^^^^^^^^^^ referenced constant has errors
error[E0080]: could not evaluate constant
--> /Users/tk/.cargo/registry/src/github.com-1ecc6299db9ec823/sharded-slab-0.0.6/src/page/slot.rs:560:24
|
560 | const LEN: usize = Generation::<C>::LEN;
| ^^^^^^^^^^^^^^^^^^^^ referenced constant has errors
error[E0080]: could not evaluate constant
--> /Users/tk/.cargo/registry/src/github.com-1ecc6299db9ec823/sharded-slab-0.0.6/src/page/slot.rs:495:37
|
495 | const LEN: usize = cfg::WIDTH - (Lifecycle::<C>::LEN + Generation::<C>::LEN);
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ referenced constant has errors
error[E0080]: could not evaluate constant
--> /Users/tk/.cargo/registry/src/github.com-1ecc6299db9ec823/sharded-slab-0.0.6/src/lib.rs:699:26
|
699 | const SHIFT: usize = Self::Prev::SHIFT + Self::Prev::LEN;
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ referenced constant has errors
error: aborting due to 8 previous errors
For more information about this error, try `rustc --explain E0080`.
error: could not compile `tracing-subscriber`.
It's probably a bad idea to try to run the `loom` tests in Miri, since they are quite long, and running code in Miri is significantly slower than running actual compiled code. But then we would need a good way to ensure that at least the likeliest concurrent permutations are exercised in Miri...
Spinning has different performance/energy characteristics than mutexes/condition variables, and it would be good to document that this is what `take` does.
This is from a unit test in rustup. I can push the branch up easily enough, but basically: create several `Arc<Pool>`s, add a `Vec` to one, downgrade, hand to a worker thread, and panic in the main thread when the test fails.
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
thread 'diskio::test::test_complete_file_threaded' panicked at 'attempt to subtract with overflow', C:\Users\robertc\.cargo\registry\src\github.com-1ecc6299db9ec823\sharded-slab-0.1.1\src\page\slot.rs:718:26
stack backtrace:
0: 0x7ff6bd1c9c5e - std::backtrace_rs::backtrace::dbghelp::trace
at /rustc/6a1835ad74247c069b0d24703c8267818487d7f5\/library\std\src\..\..\backtrace\src\backtrace\dbghelp.rs:98
1: 0x7ff6bd1c9c5e - std::backtrace_rs::backtrace::trace_unsynchronized
at /rustc/6a1835ad74247c069b0d24703c8267818487d7f5\/library\std\src\..\..\backtrace\src\backtrace\mod.rs:66
2: 0x7ff6bd1c9c5e - std::sys_common::backtrace::_print_fmt
at /rustc/6a1835ad74247c069b0d24703c8267818487d7f5\/library\std\src\sys_common\backtrace.rs:67
3: 0x7ff6bd1c9c5e - std::sys_common::backtrace::_print::{{impl}}::fmt
at /rustc/6a1835ad74247c069b0d24703c8267818487d7f5\/library\std\src\sys_common\backtrace.rs:46
4: 0x7ff6bd1e790b - core::fmt::write
at /rustc/6a1835ad74247c069b0d24703c8267818487d7f5\/library\core\src\fmt\mod.rs:1096
5: 0x7ff6bd1c3f88 - std::io::Write::write_fmt<std::sys::windows::stdio::Stderr>
at /rustc/6a1835ad74247c069b0d24703c8267818487d7f5\/library\std\src\io\mod.rs:1568
6: 0x7ff6bd1cd13d - std::sys_common::backtrace::_print
at /rustc/6a1835ad74247c069b0d24703c8267818487d7f5\/library\std\src\sys_common\backtrace.rs:49
7: 0x7ff6bd1cd13d - std::sys_common::backtrace::print
at /rustc/6a1835ad74247c069b0d24703c8267818487d7f5\/library\std\src\sys_common\backtrace.rs:36
8: 0x7ff6bd1cd13d - std::panicking::default_hook::{{closure}}
at /rustc/6a1835ad74247c069b0d24703c8267818487d7f5\/library\std\src\panicking.rs:208
9: 0x7ff6bd1ccc09 - std::panicking::default_hook
at /rustc/6a1835ad74247c069b0d24703c8267818487d7f5\/library\std\src\panicking.rs:225
10: 0x7ff6bcc35b9d - alloc::boxed::{{impl}}::call<tuple<core::panic::PanicInfo*>,Fn<tuple<core::panic::PanicInfo*>>,alloc::alloc::Global>
at C:\Users\robertc\.rustup\toolchains\beta-x86_64-pc-windows-msvc\lib\rustlib\src\rust\library\alloc\src\boxed.rs:1535
11: 0x7ff6bcb3f549 - rustup::currentprocess::with::{{closure}}::{{closure}}<closure-0,core::result::Result<tuple<>, anyhow::Error>>
at C:\Users\robertc\Documents\src\rustup.rs\src\currentprocess.rs:145
12: 0x7ff6bd1cda32 - std::panicking::rust_panic_with_hook
at /rustc/6a1835ad74247c069b0d24703c8267818487d7f5\/library\std\src\panicking.rs:595
13: 0x7ff6bd1cd4f3 - std::panicking::begin_panic_handler::{{closure}}
at /rustc/6a1835ad74247c069b0d24703c8267818487d7f5\/library\std\src\panicking.rs:495
14: 0x7ff6bd1ca5bf - std::sys_common::backtrace::__rust_end_short_backtrace<closure-0,!>
at /rustc/6a1835ad74247c069b0d24703c8267818487d7f5\/library\std\src\sys_common\backtrace.rs:141
15: 0x7ff6bd1cd479 - std::panicking::begin_panic_handler
at /rustc/6a1835ad74247c069b0d24703c8267818487d7f5\/library\std\src\panicking.rs:493
16: 0x7ff6bd1e5ed0 - core::panicking::panic_fmt
at /rustc/6a1835ad74247c069b0d24703c8267818487d7f5\/library\core\src\panicking.rs:92
17: 0x7ff6bd1e5e1c - core::panicking::panic
at /rustc/6a1835ad74247c069b0d24703c8267818487d7f5\/library\core\src\panicking.rs:50
18: 0x7ff6bcb396ed - sharded_slab::page::slot::RefCount<sharded_slab::cfg::DefaultConfig>::decr<sharded_slab::cfg::DefaultConfig>
at C:\Users\robertc\.cargo\registry\src\github.com-1ecc6299db9ec823\sharded-slab-0.1.1\src\page\slot.rs:718
19: 0x7ff6bcb3aba0 - sharded_slab::page::slot::Slot<alloc::vec::Vec<u8, alloc::alloc::Global>, sharded_slab::cfg::DefaultConfig>::release<alloc::vec::Vec<u8, alloc::alloc::Global>,sharded_slab::cfg::DefaultConfig>
at C:\Users\robertc\.cargo\registry\src\github.com-1ecc6299db9ec823\sharded-slab-0.1.1\src\page\slot.rs:506
20: 0x7ff6bcb3ad4d - sharded_slab::page::slot::Guard<alloc::vec::Vec<u8, alloc::alloc::Global>, sharded_slab::cfg::DefaultConfig>::release<alloc::vec::Vec<u8, alloc::alloc::Global>,sharded_slab::cfg::DefaultConfig>
at C:\Users\robertc\.cargo\registry\src\github.com-1ecc6299db9ec823\sharded-slab-0.1.1\src\page\slot.rs:604
21: 0x7ff6bcbfde5c - sharded_slab::pool::{{impl}}::drop<alloc::vec::Vec<u8, alloc::alloc::Global>,sharded_slab::cfg::DefaultConfig>
at C:\Users\robertc\.cargo\registry\src\github.com-1ecc6299db9ec823\sharded-slab-0.1.1\src\pool.rs:1131
22: 0x7ff6bcbf7fcf - core::ptr::drop_in_place<sharded_slab::pool::OwnedRef<alloc::vec::Vec<u8, alloc::alloc::Global>, sharded_slab::cfg::DefaultConfig>>
at C:\Users\robertc\.rustup\toolchains\beta-x86_64-pc-windows-msvc\lib\rustlib\src\rust\library\core\src\ptr\mod.rs:179
23: 0x7ff6bcbf654c - core::ptr::drop_in_place<rustup::diskio::threaded::PoolReference>
at C:\Users\robertc\.rustup\toolchains\beta-x86_64-pc-windows-msvc\lib\rustlib\src\rust\library\core\src\ptr\mod.rs:179
24: 0x7ff6bcbf46c3 - core::ptr::drop_in_place<rustup::diskio::FileBuffer>
at C:\Users\robertc\.rustup\toolchains\beta-x86_64-pc-windows-msvc\lib\rustlib\src\rust\library\core\src\ptr\mod.rs:179
25: 0x7ff6bcbf4302 - core::ptr::drop_in_place<rustup::diskio::Kind>
at C:\Users\robertc\.rustup\toolchains\beta-x86_64-pc-windows-msvc\lib\rustlib\src\rust\library\core\src\ptr\mod.rs:179
26: 0x7ff6bcbf425a - core::ptr::drop_in_place<rustup::diskio::Item>
at C:\Users\robertc\.rustup\toolchains\beta-x86_64-pc-windows-msvc\lib\rustlib\src\rust\library\core\src\ptr\mod.rs:179
27: 0x7ff6bcb50a18 - rustup::diskio::test::test_complete_file::{{closure}}
at C:\Users\robertc\Documents\src\rustup.rs\src\diskio\test.rs:137
Issues like #9 could be avoided if we had a CI job that tried to cross-compile to a 32-bit platform.
It would be great if we can add Serde support.
My personal use case is to use sharded slab in place of a vector which I can access without using locks, however, I will lose the ability to serialize/deserialize the data.
I'm not very knowledgeable about lock-free data structures; would it be possible to provide a draining iterator that yields all items that exist at the time it is constructed?
Here's the API in question. Is there anything stopping `sharded-slab` from exposing something similar?