nim-works / loony
A high throughput MPMC lock-free queue based on a paper by Giersch, Nolte et al., implemented in pure Nim.
Home Page: https://nim-works.github.io/loony/
License: MIT License
Hi, I'm quite new to Nim and I work on it occasionally during weekends.
I'm testing loony for a simple two-thread producer/consumer paradigm. Here's the code:
import std/[locks, os]
import loony

type
  Message = object
    value: string

let fifo: LoonyQueue[Message] = newLoonyQueue[Message]()
var L: Lock
var producing = true

proc producer() {.thread.} =
  for i in 1..10:
    let msg = Message(value: "Message " & $i)
    acquire(L)
    fifo.push msg
    release(L)
    echo "Produced: ", msg
    sleep(100)

proc consumer() {.thread.} =
  while producing:
    acquire(L)
    let item = fifo.pop
    if item.value != "":
      echo "Consumed: ", item
    release(L)
    sleep(10)

# Create worker threads
var producerThread, consumerThread: Thread[void]
initLock(L)
# Start worker threads
createThread(producerThread, producer)
createThread(consumerThread, consumer)
joinThread(producerThread)
producing = false
joinThread(consumerThread)
deinitLock(L)
It's a simple, documentation-style piece of code, but when I run it I get this output:
Produced: (value: "Message 1")
Produced: (value: "Message 2")
Produced: (value: "Message 3")
Consumed: (value: "��\x0Ew\x14�\x03@")
Produced: (value: "Message 4")
Produced: (value: "Message 5")
Consumed: (value: "��\x0Ew\x14�\x03@")
Produced: (value: "Message 6")
Produced: (value: "Message 7")
Consumed: (value: "��\x0Ew\x14�\x03@")
Produced: (value: "Message 8")
Produced: (value: "Message 9")
Consumed: (value: "��\x0Ew\x14�\x03@")
Produced: (value: "Message 10")
Consumed: (value: "��\x0Ew\x14�\x03@")
I'd like to ask about the weird bytes I get from the popped Messages' values: is there something I don't know about strings that explains this?
I also tested the same code using a LoonyQueue[string] directly, and in that case the code didn't compile at all.
Using macOS Catalina 10.15.7, Nim 1.6.12, latest loony.
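The garbled bytes look like the string payload not surviving the hand-off between threads. The other examples in this thread use ref payloads, and pushing a ref moves ownership of the allocation through the queue. Below is a minimal sketch of that variant; it assumes the same push/pop API as above and that pop returns nil on an empty queue, and it drops the external lock since the queue is lock-free (this is a guess at the cause, not a confirmed loony requirement):

```nim
import std/os
import loony

type
  Message = ref object
    value: string

let fifo = newLoonyQueue[Message]()

proc producer() {.thread.} =
  for i in 1 .. 10:
    # allocate a ref; ownership moves into the queue on push
    fifo.push Message(value: "Message " & $i)
    sleep(100)

proc consumer() {.thread.} =
  var seen = 0
  while seen < 10:
    let item = fifo.pop   # assumed to return nil when empty
    if not item.isNil:
      echo "Consumed: ", item.value
      inc seen

var p, c: Thread[void]
createThread(p, producer)
createThread(c, consumer)
joinThread(p)
joinThread(c)
```

This sketch needs the loony package to compile; the point is only the ref-object payload and the absence of the lock.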
Hi, I tried to use the example code from the readme (https://github.com/nim-works/loony#how-to-use) but I got some errors
Hint: used config file '/home/---/.choosenim/toolchains/nim-#devel/config/nim.cfg' [Conf]
Hint: used config file '/home/---/.choosenim/toolchains/nim-#devel/config/config.nims' [Conf]
................................................................
/home/---/Desktop/Nim/test.nim(10, 11) template/generic instantiation of `push` from here
/home/---/.nimble/pkgs/loony-0.1.1/loony.nim(177, 27) template/generic instantiation of `fetchIncTail` from here
/home/---/.nimble/pkgs/loony-0.1.1/loony.nim(71, 26) Error: attempting to call undeclared routine: 'fetchAdd'
This happens on both 0.1.0 and 0.1.1.
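For what it's worth, loony is meant to be built with threads enabled and ARC/ORC memory management; when those flags are missing, atomic routines can fail to resolve. This is a guess at the cause, not a confirmed fix:

```shell
# assumed invocation: threads on, ARC memory management
nim c --threads:on --mm:arc test.nim
```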
Make tests with balls.
ico@workdoos:/tmp/loony(main)$ nim r --passC:-fsanitize=thread --passL:-fsanitize=thread --threads:on tests/test.nim
[...]
🟢 creation and initialization of the queue
FATAL: ThreadSanitizer CHECK failed: ../../../../src/libsanitizer/tsan/tsan_interface_atomic.cpp:223 "((IsLoadOrder(mo))) != (0)" (0x0, 0x0)
#0 __tsan::TsanCheckFailed(char const*, int, char const*, unsigned long long, unsigned long long) ../../../../src/libsanitizer/tsan/tsan_rtl_report.cpp:47 (libtsan.so.0+0x972d2)
#1 __sanitizer::CheckFailed(char const*, int, char const*, unsigned long long, unsigned long long) ../../../../src/libsanitizer/sanitizer_common/sanitizer_termination.cpp:78 (libtsan.so.0+0xb513a)
#2 AtomicLoad<long long unsigned int> ../../../../src/libsanitizer/tsan/tsan_interface_atomic.cpp:223 (libtsan.so.0+0x762e4)
#3 __tsan_atomic64_load ../../../../src/libsanitizer/tsan/tsan_interface_atomic.cpp:539 (libtsan.so.0+0x762e4)
#4 load_OOZloonyZnode_116 <null> (test_EF21B045952947A3759AD00DE01B49BBBC3590C4+0x6baa1)
#5 advTail_test_334 <null> (test_EF21B045952947A3759AD00DE01B49BBBC3590C4+0x72289)
#6 push_test_306 <null> (test_EF21B045952947A3759AD00DE01B49BBBC3590C4+0x731f4)
#7 enqueue_test_303 <null> (test_EF21B045952947A3759AD00DE01B49BBBC3590C4+0x73378)
#8 NimMainModule <null> (test_EF21B045952947A3759AD00DE01B49BBBC3590C4+0x819d5)
#9 NimMainInner <null> (test_EF21B045952947A3759AD00DE01B49BBBC3590C4+0x797f3)
#10 NimMain <null> (test_EF21B045952947A3759AD00DE01B49BBBC3590C4+0x79849)
#11 main <null> (test_EF21B045952947A3759AD00DE01B49BBBC3590C4+0x798b9)
#12 __libc_start_main ../csu/libc-start.c:308 (libc.so.6+0x26d09)
#13 _start <null> (test_EF21B045952947A3759AD00DE01B49BBBC3590C4+0x94d9)
Error: execution of an external program failed: '/home/ico/.cache/nim/test_d/test_EF21B045952947A3759AD00DE01B49BBBC3590C4 '
I'd like to have some controls on the queue:
- pause it, blocking pop clients (so they receive nil).
- clear the entire queue, discarding the contents.
- a pop that works while the queue is paused, so that I can, for example, tear down items in the queue with extra logic.
Now, maybe the right way to implement this is with a queue manager that sits outside the queue itself, but in any event, these are pretty minor features that we should be able to support for programmer comfort.
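A sketch of such an external manager, using a locked std/deques as a stand-in for the inner queue (the same shape would wrap a LoonyQueue); the names ManagedQueue, pause, drain, and clear are this request's wishlist, not an existing loony API:

```nim
import std/[deques, locks, options]

type
  ManagedQueue[T] = object
    inner: Deque[T]     # stand-in; a LoonyQueue[T] would go here
    lock: Lock
    paused: bool

proc init[T](q: var ManagedQueue[T]) =
  initLock(q.lock)

proc push[T](q: var ManagedQueue[T]; item: T) =
  withLock q.lock:
    q.inner.addLast item

proc pop[T](q: var ManagedQueue[T]): Option[T] =
  ## ordinary clients receive none() while the queue is paused
  withLock q.lock:
    if not q.paused and q.inner.len > 0:
      result = some(q.inner.popFirst)

proc pause[T](q: var ManagedQueue[T]; on: bool) =
  withLock q.lock:
    q.paused = on

proc drain[T](q: var ManagedQueue[T]): Option[T] =
  ## privileged pop that works while paused, for teardown logic
  withLock q.lock:
    if q.inner.len > 0:
      result = some(q.inner.popFirst)

proc clear[T](q: var ManagedQueue[T]) =
  ## discard the entire contents
  withLock q.lock:
    q.inner.clear()
```

The lock here only protects the manager's own state; it is not a statement about how loony itself would implement pausing.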
Currently I use the Linux futex and Windows WaitOnAddress to manage backpressure when using Wards and Loony as a threadpool queue.
Mac/Darwin does not have similar primitives or APIs.
At the moment, I can only think of using condition variables and locks.
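The condition-variable fallback mentioned above can be built from std/locks. A minimal sketch of a futex-like "wait until signalled" counter; the Signal type and the notify/waitOne names are mine, not Wards/Loony API:

```nim
import std/locks

type
  Signal = object
    lock: Lock
    cond: Cond
    pending: int

proc init(s: var Signal) =
  initLock(s.lock)
  initCond(s.cond)

proc notify(s: var Signal) =
  ## producer side: record one unit of work and wake a waiter
  withLock s.lock:
    inc s.pending
    signal(s.cond)

proc waitOne(s: var Signal) =
  ## consumer side: block until at least one unit is pending
  withLock s.lock:
    while s.pending == 0:
      wait(s.cond, s.lock)
    dec s.pending
```

This is heavier than a futex (every notify takes the lock), but it is portable to Darwin.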
The thread analyzer determines that the deallocShared proc called by a thread on completion is a data race.
At the moment, the only solution I can come up with involves changing, or providing a modified version of, the stdlib threads module to correct this behaviour. The safest methodology would be to have the pointer that deallocShared acts on be atomically stored and loaded before destruction. I haven't looked too deeply into it yet.
Nodes aren't hitting their memory reclamation operations
a) Nim's Weave, which offers different queues and a work-stealing scheduler with some twists, but most importantly excellent performance: I don't know of any other solution this generic that scales nearly linearly and uses such a high percentage of computing power per CPU.
Refactor all these alias types, like TagPtr, to use a distinct type with converters, avoiding boilerplate spaghetti, as per disruptek.
Using slightly modified example code with more consumer threads, my system throws a SIGSEGV.
import std/[locks, os, atomics]
import loony

type
  Message = ref object
    value: string

let fifo = newLoonyQueue[Message]()
var terminate: Atomic[bool]

proc producer() {.thread.} =
  var n: int = 0
  for i in 1..1000000:
    n.inc()
    let msg = Message(value: "Message " & $i)
    if n > 10000:
      echo "Producing ", repr(msg)
      n = 0
    fifo.push msg
    #sleep(0)
  terminate.store(true)

proc consumer() {.thread.} =
  var n: int = 0
  while true:
    let item = fifo.pop
    if not item.isNil:
      n.inc()
      if n > 10000:
        echo "Consumed: ", repr(item)
        n = 0
    else:
      if terminate.load:
        break
      #sleep(0)

# Create worker threads
var producerThread: Thread[void]
var consumerthreads: array[512, Thread[void]]
const THREADS: int = 14

# Start worker threads
createThread(producerThread, producer)
for t in 0..<THREADS:
  echo "create thread ", t
  consumerthreads[t].createThread(consumer)
joinThread(producerThread)
for t in 0..<THREADS:
  joinThread(consumerthreads[t])
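One way to avoid the shared terminate flag in a shutdown like the one above is a per-consumer sentinel ("poison pill"): the producer pushes one recognisable message per consumer when it finishes, and each consumer exits on receiving its own. A sketch against the same loony API as above (not runnable without the loony package; the done field and Consumers constant are mine):

```nim
import loony

type
  Message = ref object
    value: string
    done: bool              # sentinel marker

const Consumers = 14
let fifo = newLoonyQueue[Message]()

proc producer() {.thread.} =
  for i in 1 .. 1_000_000:
    fifo.push Message(value: "Message " & $i)
  # one poison pill per consumer; each consumer stops at its own
  for _ in 1 .. Consumers:
    fifo.push Message(done: true)

proc consumer() {.thread.} =
  while true:
    let item = fifo.pop
    if item.isNil: continue   # queue empty; spin or sleep here
    if item.done: break       # sentinel: exit cleanly
    # ... process item.value ...

var p: Thread[void]
var threads: array[Consumers, Thread[void]]
createThread(p, producer)
for i in 0 ..< Consumers:
  createThread(threads[i], consumer)
joinThread(p)
for i in 0 ..< Consumers:
  joinThread(threads[i])
```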
Hi,
I see a SIGSEGV on calls to queue.isEmpty. The queue is initialized using newLoonyQueue and has not received any items yet. I tried if- and while-loops; it makes no difference. I tried std/atomics and threading/atomics. It feels a bit like a timing/init problem, though I never managed to make it work reliably. It goes like this: the whole app runs through and finishes; then on the next run it breaks early; on the following start it might run and finish, or break in between.
I use the main thread to fill the queue and one separate thread to read and process the items from the queue. A queue item contains a seq[int], an int, and a string, so nothing special here.
I'm on Nim 1.6.6 / macOS.
This is my thread proc:
proc warden[K, V](map: Map[K, V]) {.thread, nimcall.} =
  echo "maintanance:: started"
  while true:
    while map.workQ.isEmpty:
      sleep 500
      continue
    var elem = map.workQ.pop
    #if elem.isNil: break
    echo &"ward :: ", elem.task, " :: ", elem.parent, " - ", elem.slot
    if elem.slot == -1:
      echo "\nmaintanance:: shuts down..\n"
      return
Traceback for:
nim r -d:b32 --gc:arc --threads:on -d:useMalloc -d:debug -d:nimOldCaseObjects Mapi.nim
/Users/asc/sw/nim/ctrie/champ/Mapi.nim(30) warden
/Users/asc/.nimble/pkgs/loony-0.1.12/loony.nim(68) isEmpty
/Users/asc/.choosenim/toolchains/nim-1.6.6/lib/pure/concurrency/atomics.nim(336) load
SIGSEGV: Illegal storage access. (Attempt to read from nil?)
Error: execution of an external program failed: '/Users/asc/.cache/nim/Mapi_d/Mapi_558E24AF8FBA3F576C55E835091AA4F6A61AFFB3 '
Using some kind of flag like -d:loonyDebug, I want to be able to count and relay the number of enqueues and dequeues performed on a queue; allocations and deallocations of nodes; index tracking and alerts when reaching overflow; etc. I don't mind if it affects performance.
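A sketch of what such instrumentation could look like, using compile-time gating and atomic counters. The flag name and the counter set are this request's wishlist, not an existing loony API; the templates compile away to nothing when the flag is absent:

```nim
import std/atomics

when defined(loonyDebug):
  var
    enqueues: Atomic[int]
    dequeues: Atomic[int]
    nodeAllocs: Atomic[int]
    nodeFrees: Atomic[int]

template countEnqueue*() =
  ## call at the end of a successful push
  when defined(loonyDebug):
    discard enqueues.fetchAdd(1, moRelaxed)

template countDequeue*() =
  ## call at the end of a successful pop
  when defined(loonyDebug):
    discard dequeues.fetchAdd(1, moRelaxed)

proc debugReport*() =
  ## dump the counters; a no-op unless compiled with -d:loonyDebug
  when defined(loonyDebug):
    echo "enqueues: ", enqueues.load,
         " dequeues: ", dequeues.load,
         " nodes live: ", nodeAllocs.load - nodeFrees.load
```

Relaxed ordering keeps the counters cheap; they only need to be totalled, not to order other memory operations.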
As with all great things, somewhere along the way we screwed up. The node memory reclamation algorithm is not correct after some changes that were made while refactoring.
Currently the leak has been corrected to a degree of roughly 90% (previous test node leaks of about 29 are now down to 2 ± 2). I'm on the last stretch of correcting the remainder.
See #15 for progress.
ORC crashes when a Continuation (or any ref object) passed through the queue reaches the end of its lifespan and the final =destroy is run.
This is evident from incrementing the ref count of an object by 1 before the end of its lifespan and comparing the behaviour before and after.
ARC does not share this issue; the final =destroy can be called without issue and the memory is collected appropriately.
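The ref-count probe described above can be sketched with the stdlib's manual ref-count hooks, GC_ref and GC_unref. Payload and probe are hypothetical names for illustration; the push/pop through the queue is elided:

```nim
type
  Payload = ref object
    value: int

proc probe() =
  var p = Payload(value: 42)
  # bump the count by 1: if the crash at the final =destroy
  # disappears under ORC, that points at an over-decrement
  # somewhere in the queue's ownership transfer
  GC_ref(p)
  # ... push p through the queue, pop it, let it die ...
  GC_unref(p)   # restore the balance once diagnosed

probe()
```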