real-logic / agrona

High Performance data structures and utility methods for Java

License: Apache License 2.0


agrona's Introduction

Agrona


Agrona provides a library of data structures and utility methods that are a common need when building high-performance applications in Java. Many of these utilities are used in Aeron, an efficient, reliable UDP unicast, multicast, and IPC message transport, and Agrona provides the high-performance buffer implementations that support the Simple Binary Encoding (SBE) message codec.

For the latest version information and changes see the Change Log.

The latest release and downloads can be found in Maven Central.

Utilities include (a short usage sketch follows this list):

  • Buffers - Thread safe direct and atomic buffers for working with on and off heap memory with memory ordering semantics.
  • Lists - Array backed lists of int/long primitives to avoid boxing.
  • Maps - Open addressing and linear probing with int/long primitive keys to object reference values.
  • Maps - Open addressing and linear probing with int/long primitive keys to int/long values.
  • Sets - Open addressing and linear probing for int/long primitives and object references.
  • Cache - Set Associative with int/long primitive keys to object reference values.
  • Clocks - Clock implementations to abstract system clocks, allow caching, and enable testing.
  • Queues - Lock-less implementations for low-latency applications.
  • Ring/Broadcast Buffers - implemented off-heap for IPC communication.
  • Simple Agent framework for concurrent services.
  • Signal handling to support "Ctrl + c" in a server application.
  • Scalable Timer Wheel - For scheduling timers at a given deadline with O(1) register and cancel time.
  • Code generation from annotated implementations specialised for primitive types.
  • Off-heap counters implementation for application telemetry, position tracking, and coordination.
  • Implementations of InputStream and OutputStream that can wrap direct buffers.
  • DistinctErrorLog - A log of distinct errors to avoid filling disks with existing logging approaches.
  • IdGenerator - Concurrent and distributed unique id generator employing a lock-less implementation of the Twitter Snowflake algorithm.
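A minimal sketch of two of the utilities above (an atomic buffer and a primitive-keyed map), assuming only that the org.agrona artifact is on the classpath:

import org.agrona.collections.Int2ObjectHashMap;
import org.agrona.concurrent.UnsafeBuffer;

import java.nio.ByteBuffer;

public class AgronaQuickStart
{
    public static void main(final String[] args)
    {
        // Atomic buffer view over off-heap memory (a direct ByteBuffer here).
        final UnsafeBuffer buffer = new UnsafeBuffer(ByteBuffer.allocateDirect(64));
        buffer.putLong(0, 42L);
        System.out.println(buffer.getLong(0));

        // Open-addressing map with primitive int keys, so lookups do not box.
        final Int2ObjectHashMap<String> map = new Int2ObjectHashMap<>();
        map.put(7, "seven");
        System.out.println(map.get(7));
    }
}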

Build

Java Build

Build the project with Gradle using this build.gradle file.

You require the following to build Agrona:

  • The latest release of Java 8. Agrona is tested with Java 8, 17, and 21.

Full clean and build:

$ ./gradlew

License (See LICENSE file for full license)

Copyright 2014-2024 Real Logic Limited.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

https://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.


agrona's Issues

Behavior of getOrDefault in Agrona Maps is not consistent with behavior from standard JDK Maps

With the new additions to the Map interface in Java 8, computeIfAbsent looks up the key and, if it is absent, puts the computed value in the map before returning it, whereas getOrDefault never modifies the map (it simply returns the default value without performing a put). However, the Agrona implementations of the different Map classes all provide a getOrDefault method that is not consistent with the JDK behavior, because they also put the default value in the map before returning it.

Java 8

default V getOrDefault(Object key, V defaultValue) {
    V v;
    return (((v = get(key)) != null) || containsKey(key))
        ? v
        : defaultValue;
}

Agrona

public V getOrDefault(final long key, final Supplier<V> supplier)
{
    V value = get(key);
    if (value == null)
    {
        value = supplier.get();
        put(key, value);
    }
    return value;
}

This tripped me up when looking at the Aeron code (ActiveSubscriptions.java, for example) because it wasn't apparent how the value returned from getOrDefault was being added to the Agrona map. Although I somewhat prefer the Agrona behavior over the Java 8 behavior, I think the method name in the Agrona classes should be changed to prevent future confusion, since the behavior is not consistent with the existing Java 8 Map contract. Something like getOrPutDefault, getOrDefaultAndPut, or really anything other than getOrDefault would be ideal.
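For contrast, a minimal sketch of the JDK contract being described: getOrDefault never writes, while computeIfAbsent is the JDK method that stores the computed value (which is the behavior the Agrona method quoted above actually has):

import java.util.HashMap;
import java.util.Map;

public class GetOrDefaultSemantics
{
    public static void main(final String[] args)
    {
        final Map<Long, String> jdkMap = new HashMap<>();

        // JDK getOrDefault: returns the fallback but never modifies the map.
        System.out.println(jdkMap.getOrDefault(1L, "fallback") + " size=" + jdkMap.size());

        // JDK computeIfAbsent: computes, stores, and returns the value.
        System.out.println(jdkMap.computeIfAbsent(1L, k -> "computed") + " size=" + jdkMap.size());
    }
}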

ClassCastException in Unsafe buffer wrapping a HeapByteBufferR

Hi,

I'm getting a ClassCastException when using an UnsafeBuffer. It looks like HeapByteBufferR cannot be cast to sun.nio.ch.DirectBuffer, which is causing the problem. Here are the pertinent stack trace fragments:

java.lang.ClassCastException: java.nio.HeapByteBufferR cannot be cast to sun.nio.ch.DirectBuffer
    at org.agrona.concurrent.UnsafeBuffer.putBytes(UnsafeBuffer.java:891)
    at org.agrona.concurrent.UnsafeBuffer.putBytes(UnsafeBuffer.java:869)

Thanks,
Robert

UnsafeBuffer - Slice

Ability to get a sliced UnsafeBuffer.

E.g.

UnsafeBuffer slice = buff.slice(index, length)
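Until a dedicated slice method exists, one way to approximate it with the current API is to wrap a view over the desired range; a sketch (the wrapping constructor shares the underlying memory, so writes through the view are visible in the original):

import org.agrona.concurrent.UnsafeBuffer;

public class SliceSketch
{
    public static void main(final String[] args)
    {
        final UnsafeBuffer buff = new UnsafeBuffer(new byte[128]);

        // A "slice" as a view over [index, index + length) of the original buffer.
        final int index = 16;
        final int length = 32;
        final UnsafeBuffer slice = new UnsafeBuffer(buff, index, length);

        slice.putInt(0, 7);
        System.out.println(buff.getInt(index)); // 7, since the memory is shared
    }
}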

UnsafeBuffer may not take offset into account

When calling

public void wrap(final ByteBuffer buffer, final int offset, final int length)

and the ByteBuffer is not a heap buffer, the offset is ignored:

else
{
    byteArray = null;
    addressOffset = BufferUtil.address(buffer);
}
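The fix implied by the report is to add the caller-supplied offset to the base address (i.e. addressOffset = BufferUtil.address(buffer) + offset). Until then, the same effect can be had by wrapping the raw address directly; a sketch:

import org.agrona.BufferUtil;
import org.agrona.concurrent.UnsafeBuffer;

import java.nio.ByteBuffer;

public class DirectWrapWithOffset
{
    public static void main(final String[] args)
    {
        final ByteBuffer direct = ByteBuffer.allocateDirect(128);
        final int offset = 32;
        final int length = 64;

        // Wrap the direct buffer's base address plus the offset so the view starts where intended.
        final UnsafeBuffer view = new UnsafeBuffer(BufferUtil.address(direct) + offset, length);
        view.putInt(0, 1);
        System.out.println(view.getInt(0));
    }
}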

DirectBuffer InputStream

Hi,

I have a use-case where I need to use an InputStream to access data using another library - Jackson in this case. The data I have is wrapped by a DirectBuffer, and it would be awesome if this could be exposed as an InputStream.

Thanks,
Robert
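A sketch of what this could look like; org.agrona.io.DirectBufferInputStream (added in later releases) covers this use case, assuming it is available:

import org.agrona.concurrent.UnsafeBuffer;
import org.agrona.io.DirectBufferInputStream;

import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class BufferAsInputStream
{
    public static void main(final String[] args) throws IOException
    {
        final UnsafeBuffer buffer = new UnsafeBuffer("{\"x\":1}".getBytes(StandardCharsets.US_ASCII));

        // Expose the DirectBuffer contents as a java.io.InputStream,
        // e.g. to hand to Jackson's ObjectMapper for parsing.
        try (InputStream in = new DirectBufferInputStream(buffer, 0, buffer.capacity()))
        {
            System.out.println(in.available());
        }
    }
}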

MPSC/SPSC queues drain implementations delay the release of queue space

This is problematic when, for whatever reason, a large amount of work has built up in the queues.
Producers then face the worst case of waiting for the consumer to work through the full queue capacity before being allowed a slot. The saving we get from delaying the availability of space in the queue is (arguably) marginal, as we are only sparing ourselves an ordered write of the head.
E.g. for SPSC we have:

    try
    {
        do
        {
            final long elementOffset = sequenceToBufferOffset(nextSequence, mask);
            final E item = (E)UNSAFE.getObjectVolatile(buffer, elementOffset);
            if (null == item)
            {
                break;
            }

            UNSAFE.putOrderedObject(buffer, elementOffset, null);
            nextSequence++;
            elementHandler.accept(item);
        }
        while (true);
    }
    finally
    {
        UNSAFE.putOrderedLong(this, HEAD_OFFSET, nextSequence);
    }

Releasing the space in the queue as we go would look like:

    do
    {
        final long elementOffset = sequenceToBufferOffset(nextSequence, mask);
        final E item = (E)UNSAFE.getObjectVolatile(buffer, elementOffset);
        if (null == item)
        {
            break;
        }

        UNSAFE.putOrderedObject(buffer, elementOffset, null);
        nextSequence++;
        UNSAFE.putOrderedLong(this, HEAD_OFFSET, nextSequence);
        elementHandler.accept(item);
    }
    while (true);

This variant has the theoretical downside of never returning from drain; in JCTools this was resolved by imposing a limit on the number of elements drained.
Note that the original problem has been concretely observed in systems under load and is not hypothetical.
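A consumer-side sketch of the bounded approach, assuming the queue exposes a drain overload that takes a limit (as the JCTools-style fix suggests); bounding each call means the head is published, and space released, at least every limit elements:

import org.agrona.concurrent.OneToOneConcurrentArrayQueue;

public class BoundedDrain
{
    public static void main(final String[] args)
    {
        final OneToOneConcurrentArrayQueue<String> queue = new OneToOneConcurrentArrayQueue<>(1024);
        queue.offer("a");
        queue.offer("b");

        // Drain at most 'limit' elements per call so the producer sees freed slots sooner.
        final int limit = 64;
        final int drained = queue.drain(System.out::println, limit);
        System.out.println("drained=" + drained);
    }
}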

ManyToManyConcurrentArrayQueue with capacity = 1

Hi,

I was playing with ManyToManyConcurrentArrayQueue and got confused by the behaviour of the queue when I set its capacity to 1.

My agrona version is 0.4.8 and here is my test:

import org.junit.Test;

import static org.junit.Assert.assertFalse;
import uk.co.real_logic.agrona.concurrent.ManyToManyConcurrentArrayQueue;

public class ManyToManyTest
{

    private ManyToManyConcurrentArrayQueue<Integer> queue = new ManyToManyConcurrentArrayQueue<>( 1 );

    @Test
    public void shouldNotOfferSecondInteger() {
        queue.offer( 1 );
        assertFalse(queue.offer( 2 ));
    }

}

queue.offer(2) overwrites the first element and returns true. Is a ManyToMany queue with capacity = 1 not supported (I checked the source but couldn't see anything related to it), or am I missing something?

Thanks in advance

put operation on Long2ObjectHashMap will run into infinite loop if the loadFactor is 1

resizeThreshold = capacity * loadFactor and the data structure is resized when size > resizeThreshold.
So when the capacity has been reached, the values array will be full and the loop in the put method will run forever.

We may conservatively set resizeThreshold to (capacity * loadFactor) - 1 or change the resize condition to size >= resizeThreshold.
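An illustrative sketch of the two proposed guards (variable names are mine, not Agrona internals):

public class ResizeThresholdSketch
{
    public static void main(final String[] args)
    {
        final int capacity = 16;
        final double loadFactor = 1.0;

        // With loadFactor == 1.0 the threshold equals capacity, so "size > resizeThreshold"
        // never fires and the open-addressed table can fill completely, leaving put() probing forever.
        final int resizeThreshold = (int)(capacity * loadFactor);

        // Guard 1: clamp the threshold so at least one slot always stays free.
        final int clampedThreshold = Math.min(resizeThreshold, capacity - 1);

        // Guard 2: keep the threshold but resize when size >= resizeThreshold rather than size > resizeThreshold.
        System.out.println("resizeThreshold=" + resizeThreshold + " clampedThreshold=" + clampedThreshold);
    }
}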

Discussion: Map/Set load factor

I was wondering about the following two aspects concerning load factor:

  • Why is it a double? Is such precision really needed? I'm looking at it more from the space perspective: I don't think the extra 4 bytes are needed, even for comparison's sake (comparing floats vs doubles), especially if you are serializing that Set or Map, where the 4 extra bytes are more of a burden than a help.
  • A 1.0 load factor: it sounds silly to allow 1.0 as a load factor, but sometimes a Set is used only for wire transport, where hash collisions, and their performance implications, matter less than the memory occupied by that particular Set or Map. Also, a user who cares about space rather than load factor will simply work around a 1.0 restriction by setting it to 0.99, so in the end I don't think the restriction will have the effect you intended.

ManyToOneRingBuffer header is on the end of the buffer preventing use of constant offset counter fields

At the moment the header is put at the end of the buffer, so we have:

    tailCounterIndex = capacity + RingBufferDescriptor.TAIL_COUNTER_OFFSET;
    headCounterIndex = capacity + RingBufferDescriptor.HEAD_COUNTER_OFFSET;

If the header were at the beginning of the buffer we could save ourselves the offset loads. E.g. instead of loading the tail counter like this:

load this.tailCounterIndex
load this.addressOffset
offset = addressOffset + tailCounterIndex
load offset

we could use a constant and eliminate the first load.

ManyToOneRingBuffer read is only releasing full buffer at the end of read

This is similar in spirit to #36, but the complexity here lies in the difference in the release mechanism.
Whereas for the array queues the element-release overhead is fixed and already inside the loop, for the ring buffer the cost of clearing read elements is two-fold: incrementing the head and filling the buffer with zeros. It is beneficial to zero the buffer in bulk (arguably it is also beneficial to null the array in bulk, but let's punt on that point), so it is perhaps unhelpful to clear one message at a time.
We could clear messages on either a fixed message count or a read byte count to avoid this issue. The performance downside is less clear here, so this needs further experimentation and testing of alternatives.

Change load factor from double to float

From the discussion at #63, the load factor can be a float; there is no need for such high precision, especially for a number between 0 and 1.

There is also a saving of 4 bytes (useful when serializing) and possibly (I haven't tested) a slight performance gain when comparing floats rather than doubles.

Corrupted ManyToOneRingBuffer

Hi,

We are using Agrona in our project, specifically ManyToOneRingBuffer, and today we encountered an issue that we can't explain.

We are writing into the buffer from multiple threads and reading the data from a single thread. At one point the reader thread got stuck at .waitForMsgLengthVolatile(ManyToOneRingBuffer.java:312) inside the read method and it is no longer possible to read data from the buffer.

Environment:
  • agrona version: 0.3.1
  • the size of the buffer is set to: 16777216

When we debug the read method on the given buffer we see the following:
  • record index: 16777208
  • tail: 184549360
  • head: 167772152
  • msgType at the given index: -1
  • msgLength: 0

We see that in version 0.4.3, which we use on trunk, the read method is different (specifically, the while loop was removed and replaced with a break). Do you know whether there was a bug in version 0.3.1 which could cause the "corrupted" buffer we are seeing?

Please let us know in case you want us to upload the buffer somewhere - we are happy to do it.

Thank you.

Petr

UnsafeBuffer does not respect wrapped buffer ordering

An UnsafeBuffer can wrap, or be constructed using, an existing ByteBuffer. A ByteBuffer has an ordering (or rather a 'bigEndian' flag) but an UnsafeBuffer does not. This can yield some surprises should a ByteBuffer set up with a particular ordering (via ByteBuffer.order(ByteOrder)) be wrapped by an UnsafeBuffer.
UnsafeBuffer currently handles ordering as a JVM-wide choice (via the NATIVE_BYTE_ORDER constant). We should either document this discrepancy (I think sprinkling some javadocs would be sufficient) or accommodate an ordering choice per buffer.
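Until per-buffer ordering is supported (or documented), the accessor overloads that take an explicit ByteOrder sidestep the surprise; a sketch:

import org.agrona.concurrent.UnsafeBuffer;

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class ExplicitByteOrder
{
    public static void main(final String[] args)
    {
        final ByteBuffer bb = ByteBuffer.allocate(8).order(ByteOrder.BIG_ENDIAN);
        final UnsafeBuffer buffer = new UnsafeBuffer(bb);

        // UnsafeBuffer does not inherit the ByteBuffer's order, so pass the
        // desired ByteOrder explicitly on each access where it matters.
        buffer.putInt(0, 42, ByteOrder.BIG_ENDIAN);
        System.out.println(buffer.getInt(0, ByteOrder.BIG_ENDIAN));
    }
}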

Issue with head cache usage in ManyToOneRingBuffer

Please look at the test below. The claim method should recheck the head when there is insufficient contiguous space.

@Test
public void shouldInsertPaddingAndWriteToBuffer() {
    int padding = 200;
    int messageLength = 400;

    // buffer is free
    long tail = 2 * CAPACITY - padding;
    long head = tail;

    // free space is (200 + 300) more than message length (400) but contiguous space (300) is less than message length (400)
    long headCache = CAPACITY + 300;

    buffer.putLong(TAIL_COUNTER_INDEX, tail);
    buffer.putLong(HEAD_COUNTER_INDEX, head);
    buffer.putLong(HEAD_COUNTER_CACHE_INDEX, headCache);

    // without rechecking the head, all subsequent calls can't write the message
    UnsafeBuffer srcBuffer = new UnsafeBuffer(new byte[messageLength]);
    assertTrue(ringBuffer.write(MSG_TYPE_ID, srcBuffer, 0, messageLength));
}

Will IntArrayList be added in the future?

I was wondering if primitive support for List is going to be added. It might not be very common, and you could argue that a Set would do, but sometimes you are just transferring a list of IDs that you know are unique.

I was using Agrona, but because it was missing IntList support I had to switch to FastUtil v7.x, even though it is too big for my needs.
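For reference, later Agrona releases added IntArrayList (and LongArrayList) in org.agrona.collections; a sketch assuming one of those versions:

import org.agrona.collections.IntArrayList;

public class PrimitiveListSketch
{
    public static void main(final String[] args)
    {
        final IntArrayList ids = new IntArrayList();
        ids.addInt(10);
        ids.addInt(20);

        for (int i = 0, size = ids.size(); i < size; i++)
        {
            System.out.println(ids.getInt(i)); // primitive access, no boxing
        }
    }
}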

Int2IntHashmap incorrect size() after rehash

@Test
public void correctSizeAfterRehash() throws Exception
{
    Long2LongHashMap map = new Long2LongHashMap(16, 0.6D, -1);

    IntStream.range(1, 17).forEach(i -> map.put(i, i));

    assertEquals(16, map.size());

    ArrayList<Long> keys = new ArrayList<>();
    map.forEach((k, v) -> keys.add(k));
    keys.forEach(map::remove);

    assertTrue(map.isEmpty());
}

P.S. There is still a typo in the IllegalArgumentException message in the constructor:

if (loadFactor >= 1.0)
{
    throw new IllegalArgumentException("Load factor must be <= 1.0");
}

Since the check asserts >=, the message should state that the load factor must be < 1.0.

Enhancement to expose AtomicArray's internal volatile array reference to derived classes

In several cases where I've used AtomicArray, I've needed to do something slightly different from what the existing doAction or forEach allows. For example, I might want to perform an action on each element that also passes an argument to each element (in a way that doesn't capture), or call each element with an arg and then return the resulting total (as an int or long). Ideally I could create a derived class and add the specific types of iteration there, but unfortunately there is no way to get at the internal array reference.

If the AtomicArray's internal arrayRef were exposed via a protected scope function:

    protected T[] arrayRef()
    {
        return (T[])arrayRef;
    }

then I could implement a derived class in whatever way I needed such as:

public class DerivedAtomicArray<T, A> extends AtomicArray<T>
{
    public long doActionWithArg(final ToLongBiFunction<? super T, A> action, final A arg)
    {
        final T[] array = super.arrayRef();
        long total = 0;
        for(int i = 0; i < array.length; i++)
        {
            total += action.applyAsLong(array[i], arg);
        }
        return total;
    }
}

and the usage of that new class might look something like:

DerivedAtomicArray<String, String> myArray = new DerivedAtomicArray<String, String>();
myArray.add("Hello World");
myArray.add("For");
myArray.add("Bar");
long result = myArray.doActionWithArg(countAction, "or");

where I could send in an argument without being forced to capture and return whatever result I wanted.

I will plan to open a pull request with the enhancement.

Endless Loop in Int2IntHashMap.get(), uk.co.real-logic:Agrona:0.4.2

Hello @RichardWarburton

I wanted to use the map as a size-bounded cache. I set the capacity to a power of 2 and the load factor to 1 because I don't want to trigger a resize operation.

Unfortunately, when the load factor is >= 1 there is an endless loop in get:

@Test
public void testSanity() throws Exception
{
    Long2LongHashMap map = new Long2LongHashMap(4, 1, 0);
    map.put(1, 1);
    map.put(2, 1);
    map.put(3, 1);
    map.put(4, 1);

    map.get(5);
}

boundsCheck in UnsafeBuffer

Would it be better to keep the SHOULD_BOUNDS_CHECK condition outside the boundsCheck method?
This might help inlining; the boundsCheck method seems to be around 60 bytes.

ManyToOneRingBuffer should verify/enforce buffer alignment to avoid unaligned and therefore not atomic reads/writes of counters

Given that the requirement on the supplied AtomicBuffer is:
The underlying buffer must be a power of 2 in size plus sufficient space for the TRAILER_LENGTH.
And given that AtomicBuffer makes no alignment guarantee, there is nothing stopping the counters from straddling a cache line. E.g.:
new ManyToOneRingBuffer(new UnsafeBuffer(new byte[32 * 1024 + TRAILER_LENGTH + 1], 1, 32 * 1024 + TRAILER_LENGTH));
The ring buffer will have a capacity of 32K plus the trailer, but the start address is not aligned to anything. With slightly more effort you can construct an UnsafeBuffer such that the counters are guaranteed to straddle a cache line.
One solution is to raise the capacity requirement by one cache line and force the ring buffer start position to align to a cache line.
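A sketch of the aligned-allocation approach, assuming BufferUtil.allocateDirectAligned and BitUtil.CACHE_LINE_LENGTH are available:

import org.agrona.BitUtil;
import org.agrona.BufferUtil;
import org.agrona.concurrent.UnsafeBuffer;
import org.agrona.concurrent.ringbuffer.ManyToOneRingBuffer;
import org.agrona.concurrent.ringbuffer.RingBufferDescriptor;

public class AlignedRingBuffer
{
    public static void main(final String[] args)
    {
        final int capacity = 32 * 1024;

        // Allocate the backing memory aligned to the cache line so the trailer
        // counters cannot straddle a cache-line boundary.
        final UnsafeBuffer buffer = new UnsafeBuffer(BufferUtil.allocateDirectAligned(
            capacity + RingBufferDescriptor.TRAILER_LENGTH, BitUtil.CACHE_LINE_LENGTH));

        final ManyToOneRingBuffer ringBuffer = new ManyToOneRingBuffer(buffer);
        System.out.println(ringBuffer.capacity());
    }
}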

ExpandableArrayBuffer

Add an expandable array buffer that implements MutableDirectBuffer which can expand an array backing store.
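Agrona now ships org.agrona.ExpandableArrayBuffer along these lines; a minimal sketch assuming it is present:

import org.agrona.ExpandableArrayBuffer;
import org.agrona.MutableDirectBuffer;

public class ExpandableBufferSketch
{
    public static void main(final String[] args)
    {
        // The backing byte[] grows automatically when a write moves past the current capacity.
        final MutableDirectBuffer buffer = new ExpandableArrayBuffer(16);
        buffer.putLong(1024, 42L); // expands well beyond the initial 16 bytes
        System.out.println(buffer.capacity());
    }
}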

Aeron RateSubscriber hangs on shutdown

Not sure if this should be an Aeron bug or an Agrona bug, but I suspect Agrona.

After the changes made in Agrona commit a3affda, the Aeron RateSubscriber sample app no longer shuts down gracefully for me. Instead, I get this message endlessly, once a second:

timeout await for agent. Retrying...
timeout await for agent. Retrying...
timeout await for agent. Retrying...
timeout await for agent. Retrying...
timeout await for agent. Retrying...

If I instead use the head version of Agrona, but with the older version of AgentRunner.java swapped in, it once again appears to shut down cleanly.

Infinite loop in IntHashSet when reached capacity

Whenever an IntHashSet/LongHashSet reaches the requested capacity, it enters an infinite loop searching for an empty slot (confirmed for capacities of 1 and 2). I would propose the following:

  1. the set should keep operating correctly when it has reached the requested capacity;
  2. it should either document its unusual failure mode or change it to an exception.

Revise the documentation of Int/LongHashSet

The docs on Int/LongHashSet have become stale:

Fixed-size, garbage and allocation free long element specific hash set.

As of late the sets are no longer fixed-size and, as a consequence, no longer garbage- and allocation-free.

Another minor issue in the generated LongIterator:

An iterator for a sequence of primitive longegers.

Bug in UnsafeBuffer?

UnsafeBuffer.putBytes(int, ByteBuffer, int, int) may have a bug:

in case the srcBuffer has an array, the srcIndex is added twice to the offset:

Once in line 610:
srcBaseOffset = ARRAY_BASE_OFFSET + srcBuffer.arrayOffset() + srcIndex;
And again in the final line 618:
UNSAFE.copyMemory(srcByteArray, srcBaseOffset + srcIndex, ...

This looks wrong. Am I missing something?

Unit tests for Agrona are failing on Java 8 on 32bit architecture

Description:
Some unit tests in Agrona are failing on a 32-bit architecture because memory alignment on a 32-bit architecture is 4 bytes, instead of the 8 bytes used on a 64-bit architecture.

Environment:
Java 8 on 32bit architecture

Steps to reproduce:
Try to perform a build of Agrona using Gradle.

Logs:
This is the output generated by the unit tests:

uk.co.real_logic.agrona.concurrent.AtomicBufferTest > shouldVerifyBufferAlignment FAILED
    org.junit.experimental.theories.internal.ParameterizedAssertionError
        Caused by: java.lang.AssertionError at AtomicBufferTest.java:86

uk.co.real_logic.agrona.concurrent.CountersManagerTest > managerShouldNotOverAllocateCounters FAILED
    java.lang.IllegalStateException at CountersManagerTest.java:40

uk.co.real_logic.agrona.concurrent.CountersManagerTest > allocatedCountersCanBeMapped FAILED
    java.lang.IllegalStateException at CountersManagerTest.java:40

uk.co.real_logic.agrona.concurrent.CountersManagerTest > managerShouldStoreLabels FAILED
    java.lang.IllegalStateException at CountersManagerTest.java:40

uk.co.real_logic.agrona.concurrent.CountersManagerTest > managerShouldStoreMultipleLabels FAILED
    java.lang.IllegalStateException at CountersManagerTest.java:40

uk.co.real_logic.agrona.concurrent.CountersManagerTest > shouldFreeAndReuseCounters FAILED
    java.lang.IllegalStateException at CountersManagerTest.java:40

This is the stack trace for a failing test:

    org.junit.experimental.theories.internal.ParameterizedAssertionError: shouldVerifyBufferAlignment("uk.co.real_logic.agrona.concurrent.UnsafeBuffer@1833882" <from BYTE_ARRAY_BACKED>)
    at org.junit.experimental.theories.Theories$TheoryAnchor.reportParameterizedError(Theories.java:288)
    at org.junit.experimental.theories.Theories$TheoryAnchor$1$1.evaluate(Theories.java:237)
    at org.junit.experimental.theories.Theories$TheoryAnchor.runWithCompleteAssignment(Theories.java:218)
    at org.junit.experimental.theories.Theories$TheoryAnchor.runWithAssignment(Theories.java:204)
    at org.junit.experimental.theories.Theories$TheoryAnchor.runWithIncompleteAssignment(Theories.java:212)
    at org.junit.experimental.theories.Theories$TheoryAnchor.runWithAssignment(Theories.java:202)
    at org.junit.experimental.theories.Theories$TheoryAnchor.evaluate(Theories.java:187)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
    at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
    at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:459)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:675)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:382)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:192)
Caused by: java.lang.AssertionError: All buffers should be aligned java.lang.IllegalStateException: AtomicBuffer is not correctly aligned: addressOffset=12 in not divisible by 8
    at org.junit.Assert.fail(Assert.java:88)
    at uk.co.real_logic.agrona.concurrent.AtomicBufferTest.shouldVerifyBufferAlignment(AtomicBufferTest.java:86)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.lang.reflect.Method.invoke(Unknown Source)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
    at org.junit.experimental.theories.Theories$TheoryAnchor$2.evaluate(Theories.java:274)
    at org.junit.experimental.theories.Theories$TheoryAnchor$1$1.evaluate(Theories.java:232)
    ... 20 more

Note:
To retrieve the JVM architecture we can call System.getProperty("sun.arch.data.model") (http://docs.oracle.com/javame/config/cdc/cdc-opt-impl/ojmeec/1.1/architecture/html/properties.htm).

Extend Collections

Wondering if you can expand the collections so that they can be used as a general-purpose library.

LongIterator.nextValue breaks if not preceded by hasNext

This is what the nextValue() method does currently (https://github.com/real-logic/Agrona/blob/master/src/main/java/uk/co/real_logic/agrona/collections/Long2LongHashMap.java#L426):

public long nextValue()
{
    final long entry = entries[index];
    nextIndex();
    return entry;
}

It returns whatever is at the current index, then moves to the next index. If this method is called in a loop, all map slots (used or not) will be returned, and the loop will be infinite, wrapping around the slot array.

This probably makes sense for highly optimised usage where the API contract is deliberately disobeyed, but perhaps it doesn't suit a public library as much. At least not without severe warnings in the docs :)

Agrona:0.4.8:uk.co.real_logic.agrona.collections missing 'L' sources.

The lib jar and the sources jar are not in sync. For example, Agrona-0.4.8-sources.jar is missing all of the source files under the collections package beginning with the letter 'L', respectively:
Long2LongHashMap.java
Long2ObjectCache.java
Long2ObjectHashMap.java
LongHashSet.java
LongIterator.java
LongLongConsumer.java
LongLruCache.java

ManyToOneRingBuffer.read zeroing buffer after read, can that be made redundant?

Currently a read may result in many messages being handled, followed by a U.setMemory that erases the contents of those messages. The motivation, I assume, is security; otherwise the contents are of no consequence, as the producer copies data in rather than reusing the chunks of memory it gets allocated.
The only hole in this theory is that we expect the 'length' field in the (last + 1) message to be 0. This is a problem in particular because the MTORB allows variable-length messages (so having the consumer zero just the length is not possible). A potential solution is to separate the message-length array from the data stream.

Long2LongHashMap.keySet().iterator().next() returns self

FindBugs has a dedicated check for this pattern because it was employed in OpenJDK and emulated in many other places (it has since been fixed in OpenJDK). The trick, which avoids emitting garbage, also breaks code such as new ArrayList<>(map.entrySet()). AFAIK Java 8 has significantly improved escape analysis, so a safe implementation should still be garbage-free in most usages.

https://github.com/real-logic/Agrona/blob/master/src/main/java/uk/co/real_logic/agrona/collections/Long2LongHashMap.java#L470

https://github.com/real-logic/Agrona/blob/master/src/main/java/uk/co/real_logic/agrona/collections/Long2ObjectHashMap.java#L693

Allow Long2XX maps to return efficient sets for keySet()/valueSet()

It would be useful to be able to do:

Long2LongHashMap map = new Long2LongHashMap(0);
LongHashSet keys = map.longKeySet();

So that I don't need to auto unbox when iterating over the keys of a map. Similarly for Ints on the Int2XX maps and for values on the Long2Long/Int2Int maps.

Impossible cast in IntHashSet

The IntHashSet#toArray(T[]) method makes an unchecked cast of an int[] into T[], but T cannot possibly be int. This means there is no argument which can be passed to the method and have it succeed.

I propose either throwing UnsupportedOperationException (if this method is not of real interest) or supplying the following impl:

@SuppressWarnings("unchecked")
public <T> T[] toArray(final T[] into)
{
    final Class<?> aryType = into.getClass().getComponentType();
    if (!aryType.isAssignableFrom(Integer.class))
    {
        throw new ClassCastException("Cannot store Integers in array of type " + aryType);
    }

    final int[] values = this.values;
    final Object[] ret = into.length >= this.size ? into : (T[])Array.newInstance(aryType, this.size);

    int to = 0;
    for (int from = 0; from < values.length; from++)
    {
        final int val = values[from];
        if (val != missingValue)
        {
            ret[to++] = val;
        }
    }

    // Per the Collection.toArray(T[]) contract, null-terminate when the target array has spare room.
    if (ret.length > to)
    {
        ret[to] = null;
    }

    return (T[])ret;
}

long index and length overloads

The underlying Unsafe uses long indexes, hence it makes sense to have long index overloads rather than just the int indexing and lengths you have now.

For this to be really useful you will also need a way to get an UnsafeBuffer to wrap memory allocated through allocateMemory. Also, for better safety, a factory method that allocates the memory and returns an UnsafeBuffer would help.
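A sketch of wrapping raw off-heap memory with the current int-based API (using a direct ByteBuffer as the allocation so the example avoids Unsafe.allocateMemory); long index and length overloads would extend the same idea beyond 2 GB:

import org.agrona.BufferUtil;
import org.agrona.concurrent.UnsafeBuffer;

import java.nio.ByteBuffer;

public class WrapRawAddress
{
    public static void main(final String[] args)
    {
        final ByteBuffer direct = ByteBuffer.allocateDirect(4096);
        final long address = BufferUtil.address(direct);

        // Wrap the raw address and an int length; the direct buffer must stay reachable
        // so its memory is not freed while the view is in use.
        final UnsafeBuffer buffer = new UnsafeBuffer(address, direct.capacity());
        buffer.putInt(0, 7);
        System.out.println(buffer.getInt(0));
    }
}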
