
Hazelcast is a unified real-time data platform combining stream processing with a fast data store, allowing customers to act instantly on data-in-motion for real-time insights.

Home Page: https://www.hazelcast.com

License: Other



Hazelcast



What is Hazelcast

The world’s leading companies trust Hazelcast to modernize applications and take instant action on data in motion to create new revenue streams, mitigate risk, and operate more efficiently. Businesses use Hazelcast’s unified real-time data platform to process streaming data, enrich it with historical context and take instant action with standard or ML/AI-driven automation - before it is stored in a database or data lake.

Hazelcast is named in the Gartner Market Guide to Event Stream Processing and is a Leader in the GigaOm Radar Report for Streaming Data Platforms. To join our community of CXOs, architects, and developers at brands such as Lowe’s, HSBC, JPMorgan Chase, Volvo, New York Life, and others, visit hazelcast.com.

When to use Hazelcast

Hazelcast provides a platform that can handle multiple types of workloads for building real-time applications.

  • Stateful data processing over streaming data or data at rest
  • Querying streaming and batch data sources directly using SQL
  • Ingesting data through a library of connectors and serving it using low-latency SQL queries
  • Pushing updates to applications on events
  • Low-latency queue-based or pub-sub messaging
  • Fast access to contextual and transactional data via caching patterns such as read/write-through and write-behind
  • Distributed coordination for microservices
  • Replicating data from one region to another or between data centers in the same region

Key Features

  • Stateful and fault-tolerant data processing and querying over data streams and data at rest using SQL or a dataflow API
  • A comprehensive library of connectors such as Kafka, Hadoop, S3, RDBMS, JMS and many more
  • Distributed messaging using pub-sub and queues
  • Distributed, partitioned, queryable key-value store with event listeners, which can also be used to store contextual data for enriching event streams with low latency
  • A production-ready Raft implementation which provides linearizable (CP) concurrency primitives such as distributed locks (see the sketch after this list)
  • Tight integration for deploying machine learning models with Python to a data processing pipeline
  • Cloud-native, run everywhere architecture
  • Zero-downtime operations with rolling upgrades
  • At-least-once and exactly-once processing guarantees for stream processing pipelines
  • Data replication between data centers and geographic regions using WAN
  • Microsecond performance for key-value point lookups and pub-sub
  • A unique data processing architecture that delivers 99.99th-percentile latency under 10 ms for streaming queries at millions of events per second
  • Client libraries in Java, Python, Node.js, .NET, C++ and Go
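
As an example of the CP subsystem in action, here is a minimal sketch of acquiring a linearizable distributed lock (the instance setup and lock name are illustrative):

var hz = Hazelcast.newHazelcastInstance();
FencedLock lock = hz.getCPSubsystem().getLock("order-lock");
lock.lock();
try {
    // the critical section runs under a cluster-wide, linearizable lock
} finally {
    lock.unlock();
}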

Operational Data Store

Hazelcast provides distributed in-memory data structures which are partitioned, replicated and queryable. One of the main use cases for Hazelcast is for storing a working set of data for fast querying and access.

The main data structure underlying Hazelcast, called IMap, is a key-value store with a rich set of features.

Hazelcast stores data in partitions, which are distributed across all the nodes. You can increase the storage capacity by adding additional nodes, and if one of the nodes goes down, the data is restored automatically from the backup replicas.

You can interact with maps using SQL or a programming language client of your choice. You can create and interact with a map as follows:

CREATE MAPPING myMap (name varchar EXTERNAL NAME "__key", age INT EXTERNAL NAME "this") 
TYPE IMap
OPTIONS ('keyFormat'='varchar','valueFormat'='int');
INSERT INTO myMap VALUES('Jake', 29);
SELECT * FROM myMap;

The same can be done programmatically using one of the supported programming languages. Here are some examples in Java and Python:

var hz = HazelcastClient.newHazelcastClient();
IMap<String, Integer> map = hz.getMap("myMap");
map.set("Alice", 25);
import hazelcast

client = hazelcast.HazelcastClient()
my_map = client.get_map("myMap")
age = my_map.get("Alice").result()

Other programming languages supported are C#, C++, Node.js and Go.

Alternatively, you can ingest data directly from the many sources supported using SQL:

CREATE MAPPING csv_ages (name VARCHAR, age INT)
TYPE File
OPTIONS ('format'='csv',
    'path'='/data', 'glob'='data.csv');
SINK INTO myMap
SELECT name, age FROM csv_ages;

Hazelcast also provides additional data structures such as ReplicatedMap, Set, MultiMap and List. For a full list, refer to the distributed data structures section of the docs.
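
As an illustration, a MultiMap associates multiple values with a single key; a minimal sketch (the map name and entries are illustrative):

var hz = Hazelcast.newHazelcastInstance();
MultiMap<String, String> tags = hz.getMultiMap("article-tags");
tags.put("article-1", "java");        // one key can hold several values
tags.put("article-1", "distributed");
Collection<String> values = tags.get("article-1"); // ["java", "distributed"]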

Stateful Data Processing

Hazelcast has a built-in data processing engine called Jet. Jet can be used to build both streaming and batch data pipelines that are elastic. You can use it to process large volumes of real-time events or huge batches of static datasets. To give a sense of scale, a single Hazelcast node has been proven to aggregate 10 million events per second with latency under 10 milliseconds, and a cluster of Hazelcast nodes can process billions of events per second.

An application which aggregates millions of sensor readings per second with 10-millisecond resolution from Kafka looks like the following:

var hz = Hazelcast.bootstrappedInstance();

var p = Pipeline.create();

p.readFrom(KafkaSources.<String, Reading>kafka(kafkaProperties, "sensors"))
 .withTimestamps(event -> event.getValue().timestamp(), 10) // use event timestamp, allowed lag in ms
 .groupingKey(reading -> reading.sensorId())
 .window(sliding(1_000, 10)) // sliding window of 1s by 10ms
 .aggregate(averagingDouble(reading -> reading.temperature()))
 .writeTo(Sinks.logger());

hz.getJet().newJob(p).join();

Use the following command to deploy the application to the server:

bin/hazelcast submit analyze-sensors.jar

Jet supports advanced streaming features such as exactly-once processing and watermarks.
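
The guarantee is selected per job; a minimal sketch building on the pipeline above (the snapshot interval is an illustrative value):

var jobConfig = new JobConfig()
        .setProcessingGuarantee(ProcessingGuarantee.EXACTLY_ONCE)
        .setSnapshotIntervalMillis(1_000); // take a state snapshot every second

hz.getJet().newJob(p, jobConfig).join();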

Data Processing using SQL

Jet also powers the SQL engine in Hazelcast which can execute both streaming and batch queries. Internally, all SQL queries are converted to Jet jobs.

CREATE MAPPING trades (
    id BIGINT,
    ticker VARCHAR,
    price DECIMAL,
    amount BIGINT)
TYPE Kafka
OPTIONS (
    'valueFormat' = 'json',
    'bootstrap.servers' = 'kafka:9092'
);
SELECT ticker, ROUND(price * 100) AS price_cents, amount
  FROM trades
  WHERE price * amount > 100;
+------------+----------------------+-------------------+
|ticker      |           price_cents|             amount|
+------------+----------------------+-------------------+
|EFGH        |                  1400|                 20|

Messaging

Hazelcast provides lightweight options for adding messaging to your application. The two main constructs for messaging are topics and queues.

Topics

Topics provide a publish-subscribe pattern where each message is fanned out to multiple subscribers. See the examples below in Java and Python:

var hz = Hazelcast.bootstrappedInstance();
ITopic<String> topic = hz.getTopic("my_topic");
topic.addMessageListener(msg -> System.out.println(msg));
topic.publish("message");
topic = client.get_topic("my_topic")

def handle_message(msg):
    print("Received message %s"  % msg.message)
topic.add_listener(on_message=handle_message)
topic.publish("my-message")

For examples in other languages, please refer to the docs.

Queues

Queues provide FIFO semantics, and you can add items from one client and remove them from another. See the examples below in Java and Python:

var client = Hazelcast.newHazelcastClient();
IQueue<String> queue = client.getQueue("my_queue");
queue.put("new-item")
import hazelcast

client = hazelcast.HazelcastClient()
q = client.get_queue("my_queue")
my_item = q.take().result()
print("Received item %s" % my_item)

For examples in other languages, please refer to the docs.

Get Started

Follow the Getting Started Guide to install and start using Hazelcast.

Documentation

Read the documentation for in-depth details about how to install Hazelcast and an overview of the features.

Get Help

You can use Slack to get help with Hazelcast.

How to Contribute

Thanks for your interest in contributing! The easiest way is to just send a pull request. Have a look at the issues marked as good first issue for some guidance.

Building From Source

Building Hazelcast requires at minimum JDK 17. Pull the latest source from the repository and use Maven install (or package) to build:

$ git pull origin master
$ ./mvnw clean package -DskipTests

It is recommended to use the included Maven wrapper script. It is also possible to use a local Maven distribution with the same version that is used in the Maven wrapper script.

Additionally, there is a quick build, activated by setting the -Dquick system property, that skips validation tasks (e.g. tests, checkstyle validation, javadoc, source plugins) for faster local builds and does not build the extensions and distribution modules.

Testing

Take into account that the default build executes thousands of tests which may take a considerable amount of time. Hazelcast has 3 testing profiles:

  • Default:
    ./mvnw test

to run quick/integration tests (these can be run in parallel without using the network by using the -P parallelTest profile).

  • Slow Tests:
    ./mvnw test -P nightly-build

to run tests that are either slow or cannot be run in parallel.

  • All Tests:
    ./mvnw test -P all-tests

to run all tests serially using the network.

Some tests require Docker to run. Set the -Dhazelcast.disable.docker.tests system property to skip them.

When developing a PR it is sufficient to run your new tests and some related subset of tests locally. Our PR builder will take care of running the full test suite.

Trigger Phrases in the Pull Request Conversation

When you create a pull request (PR), it must pass a build-and-test procedure. Maintainers will be notified about your PR, and they can trigger the build using special comments. These are the phrases you may see used in the comments on your PR:

  • run-lab-run - run the default PR builder
  • run-lts-compilers - compiles the sources with JDK 17 and JDK 21 (without running tests)
  • run-ee-compile - compile hazelcast-enterprise with this PR
  • run-ee-tests - run tests from hazelcast-enterprise with this PR
  • run-windows - run the tests on a Windows machine (HighFive is not supported here)
  • run-with-ibm-jdk-8 - run the tests with IBM JDK 8
  • run-cdc-debezium-tests - run all tests in the extensions/cdc-debezium module
  • run-cdc-mysql-tests - run all tests in the extensions/cdc-mysql module
  • run-cdc-postgres-tests - run all tests in the extensions/cdc-postgres module
  • run-mongodb-tests - run all tests in the extensions/mongodb module
  • run-s3-tests - run all tests in the extensions/s3 module
  • run-nightly-tests - run nightly (slow) tests. WARNING: Use with care as this is a resource consuming task.
  • run-ee-nightly-tests - run nightly (slow) tests from hazelcast-enterprise. WARNING: Use with care as this is a resource consuming task.
  • run-sql-only - run default tests in hazelcast-sql, hazelcast-distribution, and extensions/mapstore modules
  • run-docs-only - do not run any tests, check that only files with .md, .adoc or .txt suffix are added in the PR
  • run-sonar - run SonarCloud analysis
  • run-arm64 - run the tests on arm64 machine

Where not indicated, the builds run on a Linux machine with Oracle JDK 17.

Creating PRs for Hazelcast SQL

When creating a PR with changes located in the hazelcast-sql module and nowhere else, you can label your PR with SQL-only. This will change the standard PR builder to one that will only run tests related to SQL (see run-sql-only above), which will significantly shorten the build time vs. the default PR builder. NOTE: this job will fail if you've made changes anywhere other than hazelcast-sql.

Creating PRs which contain only documentation

When creating a PR which changes only documentation (files with the suffix .md or .adoc), it makes no sense to run tests. In that case, the docs-only label can be used. The job will fail if you've made changes to anything other than .md, .adoc or .txt files.

License

Source code in this repository is covered by one of two licenses:

  1. Apache License 2.0
  2. Hazelcast Community License

The default license throughout the repository is Apache License 2.0 unless the header specifies another license.

Acknowledgments

Thanks to YourKit for supporting open source software by providing us with a free license for their Java profiler.

We owe (the good parts of) our CLI tool's user experience to picocli.

Copyright

Copyright (c) 2008-2024, Hazelcast, Inc. All Rights Reserved.

Visit www.hazelcast.com for more info.


hazelcast's Issues

ExecutorService queue size limit and monitoring (enhancement request)

It would be nice to be able to configure a maximum queue size for an ExecutorService such that if there are more tasks submitted than available execution threads and queue slots, then newly submitted tasks will be immediately rejected.

A possible configuration variation would be to discard the oldest task instead of the newest task. (this variation would be comparable to Java's 'DiscardOldestPolicy' RejectedExecutionHandler http://download.oracle.com/javase/6/docs/api/index.html?java/util/concurrent/ThreadPoolExecutor.DiscardOldestPolicy.html )

It would also be nice for the current queue size to be exposed on the ExecutorService API, so that it can be monitored (adding it to the JMX attributes and the monitor tool would be handy as well).
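
For illustration, the requested semantics mirror what plain java.util.concurrent already provides on a single JVM; a minimal, non-distributed sketch:

var executor = new ThreadPoolExecutor(
        4, 4,                                  // fixed pool of 4 threads
        0L, TimeUnit.MILLISECONDS,
        new ArrayBlockingQueue<>(100),         // at most 100 queued tasks
        new ThreadPoolExecutor.DiscardOldestPolicy()); // drop the oldest task when full

int queued = executor.getQueue().size();       // the queue size is observable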

Note, this is a follow up to the forum thread:
https://groups.google.com/d/topic/hazelcast/ybl9paUGUDQ/discussion

Migrated from http://code.google.com/p/hazelcast/issues/detail?id=543

Hazelcast resource adapter doesn't work on WebSphere 6.1/7

Hi,

I have tried to use the Hazelcast-ra.rar version 1.9.1 on WebSphere both version 6.1 and 7.0, but I can't get it working on any of them.
I have installed the ressource adapter and configured it.

I have used the following jsp code:

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"><%@page
language="java" contentType="text/html; charset=ISO-8859-1"
pageEncoding="ISO-8859-1"%>
<html>
<head>
<title>test</title>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
</head>
<body>

<%@page import="javax.resource.ResourceException" %>
<%@page import="javax.transaction." %>
<%@page import="javax.naming.
" %>
<%@page import="javax.resource.cci." %>
<%@page import="java.util.
" %>
<%@page import="com.hazelcast.core.Hazelcast" %>

<%@page import="javax.resource.ResourceException" %>
<%@page import="javax.transaction." %>
<%@page import="javax.naming.
" %>
<%@page import="javax.resource.cci." %>
<%@page import="java.util.
" %>
<%@page import="com.hazelcast.core.Hazelcast" %>

<%
UserTransaction txn = null;
Connection conn = null;
Queue queue = Hazelcast.getQueue ("default");
Map map = Hazelcast.getMap ("default");
Set set = Hazelcast.getSet ("default");
List list = Hazelcast.getList ("default");

try {
Context context = new InitialContext();
txn = (UserTransaction) context.lookup("java:comp/UserTransaction");
txn.begin();

ConnectionFactory cf = (ConnectionFactory) context.lookup("hazel/test");
conn = cf.getConnection();

queue.offer("newitem");
map.put("1", "value1");
set.add("item1");
list.add("listitem1");

txn.commit(); 

} catch (Throwable e) {
if (txn != null) {
try{
txn.rollback();
}catch (Exception ix) {ix.printStackTrace();};
}
e.printStackTrace();
} finally {
if (conn != null) {
try{
conn.close();
}catch (Exception ignored) {};
}
}
%>

</body>
</html>

If I debug the code I can see that when the server runs the line cf.getConnection() I get the following exception:

[20-01-11 10:36:32:396 CET] 0000001c RegisteredRes E WTRN0078E: An attempt by the transaction manager to call start on a transactional resource has resulted in an error. The error code was XAER_RMERR. The exception stack trace follows: javax.transaction.xa.XAException: XAResource threw an unchecked exception
at com.ibm.ws.Transaction.JTA.JTAResourceBase.processThrowable(JTAResourceBase.java:367)
at com.ibm.ws.Transaction.JTA.JTAResourceBase.start(JTAResourceBase.java:165)
at com.ibm.tx.jta.RegisteredResources.startRes(RegisteredResources.java:988)
at com.ibm.ws.tx.jta.RegisteredResources.enlistResource(RegisteredResources.java:877)
at com.ibm.ws.tx.jta.TransactionImpl.enlistResource(TransactionImpl.java:1718)
at com.ibm.ws.tx.jta.TranManagerSet.enlistOnePhase(TranManagerSet.java:608)
at com.ibm.ejs.j2c.LocalTransactionWrapper.enlist(LocalTransactionWrapper.java:587)
at com.ibm.ejs.j2c.ConnectionManager.initializeForUOW(ConnectionManager.java:1567)
at com.ibm.ejs.j2c.ConnectionManager.involveMCInTran(ConnectionManager.java:1194)
at com.ibm.ejs.j2c.ConnectionManager.allocateConnection(ConnectionManager.java:698)
at com.hazelcast.jca.ConnectionFactoryImpl.getConnection(ConnectionFactoryImpl.java:43)
at com.ibm._jsp._test._jspService(_test.java:127)
at com.ibm.ws.jsp.runtime.HttpJspBase.service(HttpJspBase.java:98)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:831)
at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1530)
at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:829)
at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:458)
at com.ibm.ws.webcontainer.servlet.ServletWrapperImpl.handleRequest(ServletWrapperImpl.java:175)
at com.ibm.wsspi.webcontainer.servlet.GenericServletWrapper.handleRequest(GenericServletWrapper.java:121)
at com.ibm.ws.jsp.webcontainerext.AbstractJSPExtensionServletWrapper.handleRequest(AbstractJSPExtensionServletWrapper.java:239)
at com.ibm.ws.webcontainer.webapp.WebApp.handleRequest(WebApp.java:3742)
at com.ibm.ws.webcontainer.webapp.WebGroup.handleRequest(WebGroup.java:276)
at com.ibm.ws.webcontainer.WebContainer.handleRequest(WebContainer.java:929)
at com.ibm.ws.webcontainer.WSWebContainer.handleRequest(WSWebContainer.java:1583)
at com.ibm.ws.webcontainer.channel.WCChannelLink.ready(WCChannelLink.java:178)
at com.ibm.ws.http.channel.inbound.impl.HttpInboundLink.handleDiscrimination(HttpInboundLink.java:455)
at com.ibm.ws.http.channel.inbound.impl.HttpInboundLink.handleNewInformation(HttpInboundLink.java:384)
at com.ibm.ws.http.channel.inbound.impl.HttpInboundLink.ready(HttpInboundLink.java:272)
at com.ibm.ws.tcp.channel.impl.NewConnectionInitialReadCallback.sendToDiscriminators(NewConnectionInitialReadCallback.java:214)
at com.ibm.ws.tcp.channel.impl.NewConnectionInitialReadCallback.complete(NewConnectionInitialReadCallback.java:113)
at com.ibm.ws.tcp.channel.impl.AioReadCompletionListener.futureCompleted(AioReadCompletionListener.java:165)
at com.ibm.io.async.AbstractAsyncFuture.invokeCallback(AbstractAsyncFuture.java:217)
at com.ibm.io.async.AsyncChannelFuture.fireCompletionActions(AsyncChannelFuture.java:161)
at com.ibm.io.async.AsyncFuture.completed(AsyncFuture.java:138)
at com.ibm.io.async.ResultHandler.complete(ResultHandler.java:204)
at com.ibm.io.async.ResultHandler.runEventProcessingLoop(ResultHandler.java:775)
at com.ibm.io.async.ResultHandler$2.run(ResultHandler.java:905)
at com.ibm.ws.util.ThreadPool$Worker.run(ThreadPool.java:1550)

[20-01-11 10:36:32:396 CET] 0000001c LocalTransact E J2CA0030E: Method enlist caught javax.transaction.RollbackException: XAResource start association error:XAER_RMERR
at com.ibm.ws.tx.jta.TransactionImpl.enlistResource(TransactionImpl.java:1741)
at com.ibm.ws.tx.jta.TranManagerSet.enlistOnePhase(TranManagerSet.java:608)
at com.ibm.ejs.j2c.LocalTransactionWrapper.enlist(LocalTransactionWrapper.java:587)
at com.ibm.ejs.j2c.ConnectionManager.initializeForUOW(ConnectionManager.java:1567)
at com.ibm.ejs.j2c.ConnectionManager.involveMCInTran(ConnectionManager.java:1194)
at com.ibm.ejs.j2c.ConnectionManager.allocateConnection(ConnectionManager.java:698)
at com.hazelcast.jca.ConnectionFactoryImpl.getConnection(ConnectionFactoryImpl.java:43)
at com.ibm._jsp._test._jspService(_test.java:127)
at com.ibm.ws.jsp.runtime.HttpJspBase.service(HttpJspBase.java:98)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:831)
at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1530)
at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:829)
at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:458)
at com.ibm.ws.webcontainer.servlet.ServletWrapperImpl.handleRequest(ServletWrapperImpl.java:175)
at com.ibm.wsspi.webcontainer.servlet.GenericServletWrapper.handleRequest(GenericServletWrapper.java:121)
at com.ibm.ws.jsp.webcontainerext.AbstractJSPExtensionServletWrapper.handleRequest(AbstractJSPExtensionServletWrapper.java:239)
at com.ibm.ws.webcontainer.webapp.WebApp.handleRequest(WebApp.java:3742)
at com.ibm.ws.webcontainer.webapp.WebGroup.handleRequest(WebGroup.java:276)
at com.ibm.ws.webcontainer.WebContainer.handleRequest(WebContainer.java:929)
at com.ibm.ws.webcontainer.WSWebContainer.handleRequest(WSWebContainer.java:1583)
at com.ibm.ws.webcontainer.channel.WCChannelLink.ready(WCChannelLink.java:178)
at com.ibm.ws.http.channel.inbound.impl.HttpInboundLink.handleDiscrimination(HttpInboundLink.java:455)
at com.ibm.ws.http.channel.inbound.impl.HttpInboundLink.handleNewInformation(HttpInboundLink.java:384)
at com.ibm.ws.http.channel.inbound.impl.HttpInboundLink.ready(HttpInboundLink.java:272)
at com.ibm.ws.tcp.channel.impl.NewConnectionInitialReadCallback.sendToDiscriminators(NewConnectionInitialReadCallback.java:214)
at com.ibm.ws.tcp.channel.impl.NewConnectionInitialReadCallback.complete(NewConnectionInitialReadCallback.java:113)
at com.ibm.ws.tcp.channel.impl.AioReadCompletionListener.futureCompleted(AioReadCompletionListener.java:165)
at com.ibm.io.async.AbstractAsyncFuture.invokeCallback(AbstractAsyncFuture.java:217)
at com.ibm.io.async.AsyncChannelFuture.fireCompletionActions(AsyncChannelFuture.java:161)
at com.ibm.io.async.AsyncFuture.completed(AsyncFuture.java:138)
at com.ibm.io.async.ResultHandler.complete(ResultHandler.java:204)
at com.ibm.io.async.ResultHandler.runEventProcessingLoop(ResultHandler.java:775)
at com.ibm.io.async.ResultHandler$2.run(ResultHandler.java:905)
at com.ibm.ws.util.ThreadPool$Worker.run(ThreadPool.java:1550)
Caused by: javax.transaction.SystemException: XAResource start association error:XAER_RMERR
at com.ibm.tx.jta.RegisteredResources.startRes(RegisteredResources.java:1039)
at com.ibm.ws.tx.jta.RegisteredResources.enlistResource(RegisteredResources.java:877)
at com.ibm.ws.tx.jta.TransactionImpl.enlistResource(TransactionImpl.java:1718)
... 33 more

Migrated from http://code.google.com/p/hazelcast/issues/detail?id=488

eviction-policy and time-to-live and idle-time for multimap

What steps will reproduce the problem?

  1. use multimap and try to use "eviction-policy" or "max-idle" or "time-to-live"
  2. these settings are not being acted upon.

What is the expected output? What do you see instead?
These settings are supported for maps, and since multimap is based on map, the behavior should be consistent.

What version of the product are you using? On what operating system?
1.9.3, all supported OS's.

Migrated from http://code.google.com/p/hazelcast/issues/detail?id=577


earlier comments

lasagni said, at 2011-11-30T08:56:08.000Z:

It is not trivial what the implementation should be.

For my use case I would like every single entry to have its own time to live (or idle time). To me this looks like an intuitive solution.

Another possible implementation could be to associate the time to live (or idle time) with the key. In this case all entries with this key would be removed when the time to live of the oldest entry with the same key expires.

dousti said, at 2011-12-01T11:34:38.000Z:

The crux of the matter is consistency. "eviction-policy" and "max-idle" and "time-to-live" with the same (consistent) interpretation for both maps and multimaps. We are not talking about specific use cases. The TTL in a map applies uniformly to all the entries; the same should be supported for multimap, ie, the TTL expires and that particular entry would be removed from the multimap, not all the entries with the same key.

lasagni said, at 2011-12-03T21:49:31.000Z:

The question is: is it a map of bags or just a bag with an index. Anyway having the configurable settings have any effect would be fine ;-)

Allow for greater configuration/customisation of the parallel executor behind executor service instances

Background discussion in https://groups.google.com/forum/?fromgroups#!topic/hazelcast/3OZOYQV8diU

Currently (in 1.9.2.2), ParallelExecutorService has three different ParallelExecutor implementations that are used to serve the needs of various Hazelcast internal operations, as well as the named, configurable executor services available via the Hazelcast.getExecutorService factory.

As discussed in the aforementioned thread, ParallelExecutorImpl (which we are forced to use if getting an executor service via Hazelcast.getExecutorService) satisfies the requirement for certain Hazelcast internal operations to be executed in a strict order, but at the expense of effectively pre-allocating tasks to threads, which can result in threads being left idle when there are unclaimed tasks to be run, irrespective of whether those tasks care about the order in which they're executed.

One way of looking at this is to say that ParallelExecutorImpl ought to be improved to eliminate this inefficiency without compromising any contract it fulfils around task ordering. This may or may not be easy to do, however. Another approach is to consider that any given executor service instance is going to be used for a certain type of task processing - e.g. we may require strictly ordered tasks or we may not care about ordering at all (beyond the use of a queue to deliver tasks to the executors that is).

It would also be remiss not to mention the potential inefficiency that is inherent in pushing tasks out to nodes, rather than having them pulled from a distributed queue. There are cases where pushing is necessary, because you want a task to be executed on a specific subset of nodes (like an event notification), or if you follow the load-balancing approach whereby the pushing node decides which target node is least loaded. There are also many cases where pulling is more efficient, as it guarantees that no executor thread is idle while there are unclaimed tasks to be processed... and tasks on a distributed queue could potentially survive node failures by virtue of persistence. Each has its own pros and cons. I mention this because the implementation detail is probably quite different in each case (though with some important commonalities of course, like callbacks)... and the best approach may be to allow for either mechanism to be used, rather than going for one-mechanism-fits-all.

Ideally, the executor service config should allow us to make whatever customisations are necessary to cover the different use cases, whether interface implementations are provided with Hazelcast or can be plugged in. Being able to supply my own ParallelExecutor implementation for a specific executor service instance would itself be a coup, as I could then do whatever I like with it, and not impact the Hazelcast internal operations in the process, which are hardwired to the likes of ParallelExecutorImpl. I may even want to manage my own ThreadPoolExecutor, rather than use Hazelcast's internally shared one (meaning I don't have to throttle concurrency through intermediate Runnables like ParallelExecutorImpl.ExecutionSegment).

I would also consider what flexibility might be added on the task submission side... e.g. as alluded to above, if I didn't want a MemberCall wrapping a DistributedTask to be pushed out to a node, but rather placed on a distributed queue, with a polling thread sitting on each node to consume/delegate tasks if and only if there is local executor thread capacity. I expect this would be a common and intuitive use case.

Migrated from http://code.google.com/p/hazelcast/issues/detail?id=545

How to detect data loss?

What steps will reproduce the problem?
Consider this scenario, based on the "TestApp" example:

  1. you create a cluster with 3 members
  2. you populate a Set with some 10 values
  3. you kill 2 out of three members
    having configured the parameter: <backup-count>1</backup-count>
    I did expect data loss BUT I also expected that the remaining member
    could receive a notification about the data loss, to be able to start
    a recovery procedure or a cluster shutdown procedure!
    On the contrary, what happens is that the remaining member goes on as
    if nothing had happened, even after having lost data.

What is the expected output? What do you see instead?
Again: the problem is not in the data loss per se. The problem is that
I still haven't found a way to be warned (or even be aware) that there
has been data loss.

What version of the product are you using? On what operating system?
1.8.3

Please provide any additional information below.
Neither the MembershipListener nor the MigrationListener help.

Migrated from http://code.google.com/p/hazelcast/issues/detail?id=260


earlier comments

lajos.kesik said, at 2010-12-29T18:07:21.000Z:

There should be a possibility to detect it. It is a mandatory feature for production usage.

twal7ers said, at 2011-11-30T17:13:46.000Z:

Was this before the instance listener?

http://www.hazelcast.com/javadoc/com/hazelcast/core/Hazelcast.html#addInstanceListener(com.hazelcast.core.InstanceListener)
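
For reference, later Hazelcast releases addressed this gap with partition-lost listeners; a minimal sketch against that API (the handler body is illustrative):

hz.getPartitionService().addPartitionLostListener(event ->
        System.err.println("Lost partition " + event.getPartitionId()
                + ", lost backup count: " + event.getLostBackupCount()));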

Support indexing within other collection types. Ex: "addIndex" on ISet

I noticed that the addIndex method only exists for IMap. Can it be added
to the other collections?

I've got a list of unique objects, and don't have any problems adding them
to a map using any of various key methods (the object's hashCode method for
example) but it seems redundant.

I'd much rather have a Set of unique objects, not use a Key at all, have
the Set complain if I try to insert a duplicate, yet not lose the addIndex
optimizations.

Or is it that way because IMap is different?
http://groups.google.com/group/hazelcast/browse_thread/thread/78f75a392bec787d#

Migrated from http://code.google.com/p/hazelcast/issues/detail?id=250


earlier comments

fuadmalik said, at 2012-03-02T09:12:34.000Z:

Issue 678 has been merged into this issue.

IPv6 support

What steps will reproduce the problem?

Hazelcast assumes IPv4 by default, e.g. for parsing and storing of IP
addresses. Try using Hazelcast with IPv6 addresses.

What is the expected output? What do you see instead?

Hazelcast should support both IPv4 and IPv6.

Migrated from http://code.google.com/p/hazelcast/issues/detail?id=209

Support for request response communication between members

Hazelcast does not currently provide an API to allow request/response communication between members.
communication between members.

With the current api the only ways I can see to achieve this are

  1. Using an executor - but this only allows access to static state in the
    receiver
  2. By creating per member or temporary queues which is a little messy and
    inconvenient.

Migrated from http://code.google.com/p/hazelcast/issues/detail?id=287


earlier comments

fuadmalik said, at 2011-05-03T10:51:20.000Z:

What type of API do you think we should have? What about each Member having its own topic? Anyone can send a message to that topic, and a response can be sent to the requester's topic.

noctariushtc said, at 2011-08-19T10:50:06.000Z:

I guess he's thinking of some kind of clusterwide notification system to notify all members about something (in kind of events or messages).

henry.coles said, at 2011-08-19T18:46:40.000Z:

For my particular requirement I need a mechanism to make a call from one node to another and receive a reply synchronously.

I could use the per member topic suggested by fuadmalik to achieve this. It would however be nice it hazelcast could provide a synchronous API, perhaps built ontop of per member topics?

noctariushtc said, at 2011-08-19T19:11:05.000Z:

What about using JMS (for example ActiveMQ) for communication between servers? By using the nodes from Hazelcast you can connect to them. JMS supports both asynchronous and synchronous communication.
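
For reference, later Hazelcast versions cover the synchronous request/response case through the distributed executor service, which can target a specific member and return a Future; a minimal sketch (the executor name and task are illustrative):

IExecutorService executor = hz.getExecutorService("rpc");
Member target = hz.getCluster().getMembers().iterator().next(); // pick the receiving member
// the task must be serializable so it can travel to the target member
Future<String> reply = executor.submitToMember(
        (Callable<String> & Serializable) () -> "pong", target);
System.out.println(reply.get()); // blocks until the target member responds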

Pluggable Serializer (Feature request)

It would be an enhancement if the client of Hazelcast could plug in a
custom serializer. Sometimes the client knows best how to serialize objects
as fast as possible, and some libraries out there claim to be faster than
the default Java framework.
the default java framework.

It could be implemented by adding a serializer interface to the API, and
clients could set a serializer object in the configuration (preferred),
or the configuration could take a class name and create one lazily (XML
configuration).

For example

interface HzSerializer {

    public void serialize(Object object, OutputStream stream);

    public Object deserialize(InputStream stream);

}

or something like that.

Migrated from http://code.google.com/p/hazelcast/issues/detail?id=153


earlier comments

constantin.rack said, at 2010-02-20T15:27:37.000Z:

Issue 219 has been merged into this issue.

constantin.rack said, at 2010-02-23T08:14:21.000Z:

Suggestion from mailing list to use Kryo, may be worth a look: http://code.google.com/p/kryo/

dtravin said, at 2010-04-04T12:59:27.000Z:

I am looking forward to implement this myself.

Extracting an interface from the class com.hazelcast.nio.Serializer is not a complex
thing to do.
But what is the way to inject that interface into the ThreadContext class?

ian.phillips said, at 2010-04-04T16:47:28.000Z:

Well, there is an interface already: Serializer$TypeSerializer, it just needs to be made public (and probably moved to its own top-level file rather than being a nested interface).

The question is how to handle registration of new TypeSerializers and how to tag the data when serialised. As I
see it there are 2 options: require the user to handle this manually (e.g. a registerSerializer(int tag,
TypeSerializer serializer) method) or to try to handle this automatically (e.g. a registerSerializer(TypeSerializer
serializer) method). The latter option could be accomplished either by using a distributed map or even just a
simple counter (which would need to be protected with a distributed lock).

I think that I prefer having the user handle tagging manually as it is much simpler, and the other options
could always be implemented on top of this in user code, so they're something that could be added at a later
date if there was enough interest.

dtravin said, at 2010-04-05T19:55:21.000Z:

I do not get your point and my question was how to inject a serializer interface into ThreadContext. At this moment serializer is instantiated once as a final field of ThreadContext in a static method get() and there are 56 usages of that method in code.

Config c = new Config();
HazelcastSerializer myCustomWhatEverSerializer = ......
c.getSerializerConfig().setSerializer(myCustomWhatEverSerializer);
HazelcastInstance hz = Hazelcast.newHazelcastInstance(c);

That is how I see the usage.

Please explain your vision in a bit more detail.

ian.phillips said, at 2010-04-08T15:07:19.000Z:

First off some background thinking; as I see it there are a number of ways of allowing custom serializers and 2 main things that are affected:

(a) how the data will be tagged on the wire; and

(b) how the serializers will be incorporated into the current API.

Let's look at (b) first as it's by far the thornier issue (I know, I've had a go at implementing this and it is
tricky!). First off: I don't think that Serializer should be part of the ThreadContext class. If we assume that a
given JVM can have multiple HazelcastInstance objects (which it can) and that these can be connected to
different clusters then holding serializers in thread local storage isn't going to work.

Another problem lies with the fact that currently much of the Serializer code is static and this also won't work
in the presence of multiple Hazelcast instances.

So, some design decisions need to be made:

  1. all of the Serializer code needs to be made non-static;

  2. the Serializer instances need to be stored in the HazelcastInstance (FactoryImpl or HazelcastClient),
    presumably adding a getSerializer() method to the HazelcastInstance interface, this will probably still need to
    be handled in a thread local manner;

  3. the factory needs to create & inject the Serializer instance based on the configuration provided, I don't think
    that it is an unreasonable expectation that all members of a cluster use the same serializer configuration, but
    there are probably also ways around this if need be.

The serializer could be handed down to where it is needed probably by attaching it to the Node instance that
the factory creates, although I haven't checked this in detail.

One huge glaring open question: how will clients learn about custom serializers given that they do not
currently have access to the config data? One option could be to make the SerializationConfig class
serializable and then load it from whichever cluster member the client connects to.

OK, I've gone on for quite a while now, so I'll give you a chance to air your thoughts on the matter ;-)

ian.phillips said, at 2010-04-08T15:10:24.000Z:

By the way: if oztalip or fuad have any comments on my proposed approach I'd be interested to hear them:

does this sounds reasonable to you guys?

does my primitive approach to client handling sound suitable for a first draft?

can you think of any issues that I'm missing?

and, I guess, given that this is sounding like a fairly intrusive change would you still be interested in receiving
patches for this? one big patch or break it up into smaller ones?

dtravin said, at 2010-04-13T07:54:52.000Z:

Hi, Ian

I have started a refactoring just to move serializer to FactoryImpl and met some
obstacles.

  1. The use of factory is almost everywhere done by accessing public field in Node.
    To my mind node.factory.getSerializer() is a bit ugly.
  2. import static IOUtils.toData and toObject is used in many places
    I had to change signatures of those methods to accept serializer and to adapt the all
    places where it is used.
    I can make it work, but this looks weird.

Talip and Fuad, do you have any comments?

ian.phillips said, at 2010-04-13T14:42:01.000Z:

I took a slightly different approach - I added Serializer as a field on the Node class, and added corresponding toData and toObject methods. As you spotted, it turns out that in most of the places where serialization is used there is a node instance handy, and node.toData/node.toObject looks just fine.

I deleted the static methods on IOUtils, and also the Serializer from ThreadContext.

I've got a couple of outstanding issues to resolve then I'll post a patch here, hopefully this evening sometime.

ian.phillips said, at 2010-04-15T13:03:45.000Z:

Hi dtravin,

Hmm, this is getting complicated!

I'm not sure how to go about implementing client serializers right now. The issue is that the client code does
not reference the core hazelcast module, as I see it there are 2 options, neither trivial:

a) separate out some of the I/O code into a new module which would be used by both the hazelcast and
hazelcast-client modules; or

b) force users to write 2 versions of each custom TypeSerializer (this could be simplified if the user was
prepared to depend on the hazelcast module from their custom serializer).

I'd be reasonably happy with (a), the new module could also hold the test support classes, then hazelcast-client
would not need to depend on hazelcast at all, which seems like a nice bonus.

I'm going to attach a partial patch here to illustrate where I'm going with my changes - I haven't included all of
the files here but rather that subset which I think illustrates the relevant changes.

As well as this there are a number of changes to other files to use the HazelcastClient/Node serializer rather
than the static one which no longer exists, and some updates to the test suite - these aren't included here as it
makes the patch a bit too big to scan easily.

Talip and Fuad, still interested in hearing your thoughts on the matter.

Cheers,
Ian.

oztalip said, at 2010-04-15T13:06:42.000Z:

Sorry for not being responsive. I started looking at it. I will get back to you with details very soon.

-talip

oztalip said, at 2010-04-15T13:38:45.000Z:

Ian,

Quick note: In your implementation (patch) each Hazelcast instance has its own Serializer and all user threads are
actually using the same Serializer instance, but the default Serializer is not thread-safe; it is using the same
non-thread-safe FastByteArrayOutputStream instance, for example.

ian.phillips said, at 2010-04-15T13:40:34.000Z:

Hi Talip,

No problem at all - I just find myself with some unexpected free time today due to a flight being cancelled.

Going back to my shared module approach (option a from my last comment) the attached file is a first stab at
what would need to be separated out into a common module, it may be possible to reduce this list of files
with some closer analysis - this is just a naive approach based on moving files until the module builds.

If you do want to take that approach it should be possible to unify the client and cluster
Serializer/TypeSerializer implementations, possibly changing the interface from this:

public interface TypeSerializer<T> {
    boolean isSerializable(Object object);
    void write(FastByteArrayOutputStream bbos, T obj) throws Exception;
    T read(FastByteArrayInputStream bbis) throws Exception;
}

to this:

public interface TypeSerializer<T> {
    boolean isSerializable(Object object);
    void write(FastByteArrayOutputStream bbos, T obj, boolean client) throws Exception;
    T read(FastByteArrayInputStream bbis, boolean client) throws Exception;
}

Anyway, just jotting down some thoughts for you at this stage.

Cheers,
Ian.

ian.phillips said, at 2010-04-15T14:30:20.000Z:

Fixed, it's ThreadLocal on Node now. I'm also making some more changes to my version and will upload a new patch later today.

Cheers,
Ian.

dtravin said, at 2010-04-24T18:58:25.000Z:

Hey, Ian

Where is your final patch?
I want to see it in action

Daniel

drew.botwinick said, at 2010-04-27T20:12:29.000Z:

I'm new to this project (and just recently wrote this to the group thread discussing this issue), but based on reading the comments in this issue, this is really becoming messy. I know java serialization is unpopular, but by using readResolve() and writeReplace(), you can substitute a different "container" object that itself can be Externalizable and make a much simpler interface that works on top of java serialization. It's really easy.

public interface SerializableData extends Serializable {
    public Object writeReplace() throws ObjectStreamException;
}

public interface SerializedDataContainer extends Externalizable {
    public Object readResolve() throws ObjectStreamException;

    @Override
    public void writeExternal(ObjectOutput out) throws IOException;

    @Override
    public void readExternal(ObjectInput in) throws IOException, ClassNotFoundException;
}

You can use this approach with Java serialization backed by any serialization
"engine" the user chooses. You get 99% of the performance of the back-end serialization
engine and barely more overhead than the back-end serialization engine. More
importantly, it's ridiculously simple and integrates with anything that uses Java
serialization. That means it's portable AND doesn't require any special
considerations on the part of the library (i.e. hazelcast, in this case).

-Drew Botwinick

ian.phillips said, at 2010-04-28T12:51:15.000Z:

Hi Drew,

Sure, the writeReplace/readResolve mechanism is really useful, but… one of the issues with Java serialization
that I've been thinking about (and Daniel, this is the main reason I've not uploaded the full patch) is that it's
Java specific, and one of the stated goals for Hazelcast is to support non-Java clients. Using Java serialization
as a portable object format (POF) strikes me as a little clunky.

I'm currently thinking of this as the interface into the POF system: we have a POFService (which fulfils basically
the same rôle as the current Serializer class) and a POFContext which holds all of the thread local buffers and
has methods to read and write data, like so

interface POFService {
Data toData(Object object); // resets buffers, writes tag
Object toObject(Data data);
}

interface POFContext {
// avoids broken UTF implementation in DataInput/DataOutput
// good for the .NET and other clients
void write(Object object);

<T> T read(Class<T> type);

// ... methods for primitive types ...

void writeAll(Iterable iterable);
void writeAll(Map map);
void writeAll(Object[] array);

<T, C extends Collection> C readAll(Class<T> type, C into, boolean includeTags);
<T, M extends Map> M readAll(Class<T> type, M into, boolean includeTags);
<T> T[] readAll(Class<T> type, T[] into, boolean includeTags);

}

and an implementation of these, similar to the Serializer impl in my previous patch

POFServiceImpl implements POFService, POFContext {
ThreadLocal contexts = … ;
Map<POFSerializer, Integer> typeToIdMap = … ;
Map<Integer, POFSerializer> idToTypeMap = … ;
}

and we still need a TypeSerializer (renamed for consistency):

interface POFSerializer<T> {
    void write(T object, POFContext context);
    T read(POFContext context);
}

then a class which uses this would be defined like so

public class Employee {
private final int employeeNumber;
private String name;
private int age;
private double salary;
private Address address;

// no need for a default constructor
public Employee(int employeeNumber) {
    this.employeeNumber = employeeNumber;
}

// getters, setters …

public static class Serializer implements POFSerializer<Employee> {
    public void write(Employee e, POFContext context) {
        context.write(e.firstName);
        context.write(e.lastName);
        context.write(e.age);
        context.write(e.salary);
        context.write(e.address);
    }
    public Employee read(POFContext context) {
        int employeeNumber = context.read(Integer.class);
        Employee e = new Employee(employeeNumber);
        e.name = context.read(String.class);
        e.age = context.read(Integer.class);
        e.salary = context.read(Double.class);
        e.address = context.read(Address.class);
    }
}

}

and could be configured like so

com.example.Foo.Serializer   com.example.io.FooSerializer
com.example.Bar.Serializer   com.example.io.BarSerializer

I'll probably have a crack at coding this up over the coming long weekend (when I'll be away with spotty
internet cover, so it'll give me something to do :-)

I'm interested to hear what people's thoughts are w.r.t. the cross-platform/language possibilities.

/Ian.

drew.botwinick said, at 2010-04-28T18:14:44.000Z:

Hi Ian,

I forgot to consider cross-language compatibility... It is certainly true that it'd
be better to avoid some of java serialization's quirks for a "POF". (It would be
messy and ridiculous, although I suppose somewhat useful, to have a "java
serialization interpreter" for .net.) With that in mind, I like your proposal.

I also like the idea of using an integer to tag the class/serializer (but unlike
Kryo, defining the integer in config so that it is more portable). This essentially
amounts to your own serialization mechanism, but I think that might be necessary for
a solid cross-language mechanism.

This approach would require implementing the POF system on every target language, but
that'd probably be the most consistent solution. I'm sold.

Good work! :-)

-Drew

P.S.>> You forgot to return the new Employee in Employee.Serializer.read(...) :-P

j.gonon said, at 2011-02-15T15:26:17.000Z:

Hi, I'm new here and like to help with serialization. What I'm currently using is an interface looking like "POFContext" but "InputStream" and "OutputStream" objects are passed as parameters. I don't understand the use of "POFService".

I agree on the fact that "id <-> type" should be found in a configuration file.

noctariushtc said, at 2011-05-29T14:55:30.000Z:

Maybe that patch could be an idea how to do it (sorry just missed that issue when I initially opened the new one): http://code.google.com/p/hazelcast/issues/detail?id=571

paul.woodward said, at 2011-10-26T08:19:45.000Z:

This issue has been open for 2 years now, what (if any) are the plans for support for custom serializers?

fuadmalik said, at 2011-10-26T17:02:40.000Z:

We have made some changes to support custom serialization. Even with some tweaks I was able to plug the Protobuf. But still we need to work on and shape the API.

mehmetdoghan said, at 2011-11-22T08:14:37.000Z:

Issue 571 has been merged into this issue.

noctariushtc said, at 2011-12-07T15:33:04.000Z:

Just want to point at the possible patch posted in issue 571.
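
For reference, the API that eventually shipped lets you register a custom serializer per type; a minimal sketch against the later StreamSerializer/SerializerConfig API (Person and PersonSerializer are hypothetical):

class PersonSerializer implements StreamSerializer<Person> {
    public int getTypeId() { return 1; } // user type IDs must be positive
    public void write(ObjectDataOutput out, Person p) throws IOException {
        out.writeString(p.getName());
    }
    public Person read(ObjectDataInput in) throws IOException {
        return new Person(in.readString());
    }
}

Config config = new Config();
config.getSerializationConfig().addSerializerConfig(
        new SerializerConfig()
                .setTypeClass(Person.class)
                .setImplementation(new PersonSerializer()));
HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);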

Allow deleting an entire map/namespace with a single call via memcached api

Subject feature request. Here's a suggested implementation:

[adhir@dante:~/hazelcast-read-only] svn diff

Index: hazelcast/src/main/java/com/hazelcast/impl/ascii/memcache/DeleteCommandProcessor.java

--- hazelcast/src/main/java/com/hazelcast/impl/ascii/memcache/DeleteCommandProcessor.java (revision 1918)
+++ hazelcast/src/main/java/com/hazelcast/impl/ascii/memcache/DeleteCommandProcessor.java (working copy)
@@ -51,10 +51,14 @@
             textCommandService.sendResponse(command);
         }
         textCommandService.incrementDeleteCount();
-        textCommandService.delete(mapName, key);
+        if (key.isEmpty()) {
+            textCommandService.delete(mapName);
+        } else {
+            textCommandService.delete(mapName, key);
+        }
     }

     public void handleRejection(DeleteCommand command) {
         handle(command);
     }
-}
\ No newline at end of file
+}

Index: hazelcast/src/main/java/com/hazelcast/impl/ascii/TextCommandService.java

--- hazelcast/src/main/java/com/hazelcast/impl/ascii/TextCommandService.java (revision 1918)
+++ hazelcast/src/main/java/com/hazelcast/impl/ascii/TextCommandService.java (working copy)
@@ -52,6 +52,8 @@

     Object delete(String mapName, String key);

+    Object delete(String mapName);
+
     Stats getStats();

     Node getNode();

Index: hazelcast/src/main/java/com/hazelcast/impl/ascii/TextCommandServiceImpl.java

--- hazelcast/src/main/java/com/hazelcast/impl/ascii/TextCommandServiceImpl.java (revision 1918)
+++ hazelcast/src/main/java/com/hazelcast/impl/ascii/TextCommandServiceImpl.java (working copy)
@@ -164,6 +164,11 @@
         return hazelcast.getMap(mapName).remove(key);
     }

+    public Object delete(String mapName) {
+        hazelcast.getMap(mapName).clear(); // clear() returns void, so return null
+        return null;
+    }
+
     public boolean offer(String queueName, Object value) {
         return hazelcast.getQueue(queueName).offer(value);
     }

Migrated from http://code.google.com/p/hazelcast/issues/detail?id=610


earlier comments

mehmetdoghan said, at 2011-07-26T08:03:04.000Z:

Issue 606 has been merged into this issue.

Support PriorityBlockingQueue

http://java.sun.com/j2se/1.5.0/docs/api/java/util/concurrent/PriorityBlockingQueue.html

i.e. add prioritization support to the clustered blocking queue.

Migrated from http://code.google.com/p/hazelcast/issues/detail?id=72


earlier comments

mingfai.ma said, at 2009-05-22T07:06:56.000Z:

I meant PriorityQueue, not PriorityBlockingQueue: http://java.sun.com/j2se/1.5.0/docs/api/java/util/PriorityQueue.html

it would be great if there is a non-FIFO queue.

kevin.witten222 said, at 2010-12-07T15:49:46.000Z:

Also implement PriorityBlockingQueue; so far I like Hazelcast, but this would be a requirement for us to use it.

jatinksharma said, at 2011-11-03T21:31:15.000Z:

This would be really useful.

[email protected] said, at 2011-12-20T22:08:24.000Z:

agreed - i would really use this. in fact, i am trying to combine the concept of a multimap with a priority queue - or even a multimap with an ordered collection for each key...

More EntryListener interfaces

There are cases where HC members are not interested in all events specified in the EntryListener interface (for example, my app is using entryAdded only). Please supply more interfaces or some flag to indicate to HC which events are desired.

What version of the product are you using? On what operating system?
1.8.4

Please provide any additional information below.

Migrated from http://code.google.com/p/hazelcast/issues/detail?id=294


earlier comments

joan.balaguero said, at 2010-08-06T12:28:39.000Z:

In addition to this point, I think there is another interesting point to improve.

In my case, I use an Imap to store a cache of xml documents. Each object in cache has a pointer to disk, where the physical document is.

I have a listener attached to this IMap because it's the only way to take action in case an entry is evicted. In this case, the entryEvicted method is executed and I remove the xml document from disk. But my storage is a shared storage, common to all cluster nodes. So I want entryEvicted executed only on the member that owns the key.

What about to add an additional 'int' parameter to 'addEntryListener' method with these values:

0: ALL_MEMBERS --> the events are sent to all members (I think this is the current behavior).
1: OWNER_MEMBER --> the events are only sent to that member owner of the key that is currently being added/removed/evicted.
2: LOCAL_MEMBER --> the events are only sent to the same member where the get/put happened.
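
For reference, later Hazelcast versions split EntryListener into single-method interfaces such as EntryAddedListener, so a listener can subscribe to just the events it needs; a minimal sketch (the map name is illustrative):

IMap<String, String> map = hz.getMap("documents");
// subscribe to added events only; true = deliver the value with the event
map.addEntryListener(
        (EntryAddedListener<String, String>) event ->
                System.out.println("added: " + event.getKey()),
        true);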

Scalable way to wait for incoming elements from thousands of queues

q.take() is a blocking operation, as you say. What if I need to wait for inputs from thousands of queues? I guess for such scenarios (which are not as seldom as one may think), the q.take() approach does not scale.

There are two alternatives:

  • use asynchronous notifications (as it is the case with topic listeners now)
  • introduce a UNIX select-like functionality, i.e. blocking wait for an input from one of the provided sources.
    This can be either an API call or can be eventually modeled as a "virtual" queue or topic that in fact collects from multiple sources (I think it would be rather elegant to implement it as a virtual queue or topic).

For discussion of relevant scenarios and this proposal, please see this thread:
http://groups.google.com/group/hazelcast/browse_thread/thread/17348f035fc965b9/2155d4497deba8c6#2155d4497deba8c6
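A hedged sketch of the "virtual queue" idea against the 1.x listener API: register an ItemListener on each source queue and fan the notifications into one local queue, so a single thread can block on thousands of sources. Names are illustrative, and a woken consumer must still poll the named queue, which may already have been drained by another member.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.IQueue;
import com.hazelcast.core.ItemListener;

public class QueueSelect {
    // names of queues that became non-empty, in arrival order
    private final BlockingQueue<String> ready = new LinkedBlockingQueue<String>();

    public void watch(final String queueName) {
        IQueue<Object> queue = Hazelcast.getQueue(queueName);
        queue.addItemListener(new ItemListener<Object>() {
            public void itemAdded(Object item) { ready.offer(queueName); }
            public void itemRemoved(Object item) { /* not relevant for select() */ }
        }, false); // false: we only need the signal, not the value
    }

    // Blocks like UNIX select(): returns the name of a queue that has input.
    public String select() throws InterruptedException {
        return ready.take();
    }
}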

Migrated from http://code.google.com/p/hazelcast/issues/detail?id=408

Create a well defined native client Protocol

We have implemented Java Client. However it would be very useful to define the protocol used by the client. Later on with the given protocol everyone can implement clients in different languages.

Migrated from http://code.google.com/p/hazelcast/issues/detail?id=302


earlier comments

fuadmalik said, at 2010-11-04T11:40:42.000Z:

We are planning to implement it in the middle term. Here are the steps we plan to take; for all the items below, the Java client will be used:

  1. Identify the operations that will be part of the protocol
  2. Define the protocol for each operation
  3. Implement the server side and the Java-client side of it
  4. Implement each protocol for the C# client; this will validate whether the protocol is language-agnostic
  5. Document the protocol

By the end of these actions, a protocol will be defined together with a new C# client.

Client cannot connect to symmetric encrypted cluster

When a client (1.9) tries to connect to a symmetrically encrypted cluster, I get many looping messages:

WARNING: Connection to Connection [147] [localhost/127.0.0.1:5701] is lost
Dec 10, 2010 4:04:13 PM com.hazelcast.nio.SocketPacketWriter
INFO: [group] Writer started with SymmetricEncryption
Dec 10, 2010 4:04:13 PM com.hazelcast.nio.SocketPacketReader
INFO: [group] Reader started with SymmetricEncryption

See attached file for test

Migrated from http://code.google.com/p/hazelcast/issues/detail?id=450


earlier comments

fuadmalik said, at 2010-12-15T08:13:50.000Z:

Yes, connecting to an encrypted cluster is not implemented yet. We will implement it, but it is not a very trivial thing.

Switch MapStores from sync to async and back

I'd like to be able to switch the sync/async property of MapStores while Hazelcast is running, so that in case of a power outage, the UPS can send a signal to my JVM to turn persistence from async to sync, just to be on the safe side. If and when power comes back up before the UPS runs out of gas, it should be able to send the signal to switch it back to async.

In the described scenario, I would need to control this JVM-wide, rather than globally for a map (spanning all machines that host that map).

Migrated from http://code.google.com/p/hazelcast/issues/detail?id=406

poor-man's cyclic barrier functionality, implemented in Hazelcast

Hello:

This is not a bug per se, but more of a poor man's implementation of a cyclic barrier with Hazelcast. There are three files included here:

  1. an interface (GenericRunnableJob) that implements Runnable and Serializable;
  2. CyclicBarrierHazelcast, which extends HazelcastInstanceAwareObject;
  3. a simple JUnit3 test case that demonstrates the cyclic barrier functionality.

Only the await() functionality of the cyclic barrier has been tested. My hackish code requires N separate distributed locks for N separate distributed threads, because, to the best of my knowledge, Hazelcast 1.9.x does not implement distributed lock Conditions (Java's CyclicBarrier appears to employ these).

Similar work can be done for other mutexes that have been implemented. I am sure I have made many errors stemming from my ignorance in Hazelcast programming. One obvious point of improvement would be in the proper choice of canonical keys to Hazelcast's distributed locks and AtomicNumber. Another may be the use of a proper distributed object reference to the cyclic barrier, perhaps through an IMap key/value entry.

Tanim Islam

Migrated from http://code.google.com/p/hazelcast/issues/detail?id=435


earlier comments

tanim.islam said, at 2010-12-04T02:14:07.000Z:

sorry, not a defect, an enhancement request
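A minimal sketch of the approach described above, assuming Hazelcast 1.9's AtomicNumber API: a single-use barrier rather than a true cyclic one, since without distributed Conditions there is no clean way to reset a generation. The name is illustrative.

import com.hazelcast.core.AtomicNumber;
import com.hazelcast.core.Hazelcast;

public class SimpleDistributedBarrier {
    private final AtomicNumber arrivals;
    private final int parties;

    public SimpleDistributedBarrier(String name, int parties) {
        this.arrivals = Hazelcast.getAtomicNumber(name); // e.g. "barrier-arrivals"
        this.parties = parties;
    }

    public void await() throws InterruptedException {
        arrivals.incrementAndGet();        // register this node's arrival
        while (arrivals.get() < parties) { // spin until everyone has arrived
            Thread.sleep(100);             // coarse backoff instead of a Condition
        }
    }
}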

Add JavaDoc

What steps will reproduce the problem?

  1. Most classes and their members do not have JavaDocs.

What is the expected output? What do you see instead?
Each class and its public members should be documented.

Please use labels and text to provide additional information.
For Eclipse, there is the JAutoDoc plugin, which helps to fill in JavaDoc
templates very easily.

Migrated from http://code.google.com/p/hazelcast/issues/detail?id=80

Selecting cache members for remote execution

Please provide easier way to select members where particular distributed task can be executed.

Currently members can be selected only by addresses/ports, which is "too dynamic" to configure externally.

Adding an additional field to the Member class (a description field, something similar to isSuperClient()) would make filtering easier. Another option would be to enable/disable particular execution services in the member configuration.

Migrated from http://code.google.com/p/hazelcast/issues/detail?id=396


earlier comments

[email protected] said, at 2011-01-03T19:41:47.000Z:

I second that! Near term, you could allow applications to assign an arbitrary name to a node (as part of node configuration maybe) and make the name visible to other grid nodes via Member.getName(). This would make it easier for applications to select specific nodes for task execution.

support eviction in distributed list/set

It would be a nice enhancement to support eviction in lists/sets. You may ask: where would you possibly use this? Here's an example:

In the ws-security standard, while using digest authentication, the client consuming a webservice must provide a 'salt' when digesting a password. This salt obfuscates the actual password. The salt is transmitted in plaintext so the server can do the same calculation.

Once a salt is used, it's important that the server never accepts the same salt twice within the salt expiration window, or a hacker could merely 'replay' the HTTP conversation and gain access to the system.

In a clustered environment, it's important that all of your cluster members share a list of used salts.
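A hedged workaround for this salt-replay scenario today: use an IMap with a per-entry TTL as a cluster-wide "set with eviction". This assumes the TTL overload of putIfAbsent; the map name and window are illustrative.

import java.util.concurrent.TimeUnit;

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.IMap;

public class SaltRegistry {
    private final IMap<String, Boolean> usedSalts = Hazelcast.getMap("used-salts");

    // Returns true if the salt has not been seen within the expiration window.
    public boolean accept(String salt, long windowSeconds) {
        Boolean previous =
                usedSalts.putIfAbsent(salt, Boolean.TRUE, windowSeconds, TimeUnit.SECONDS);
        return previous == null; // non-null means a replayed salt: reject
    }
}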

Migrated from http://code.google.com/p/hazelcast/issues/detail?id=310

ManagedConnectionFactoryImpl::createConnectionFactory()::getConnection() always throws a null exception

Hi,
I'm trying to use Hazelcast in an OpenEJB stateful EJB.

Why does ManagedConnectionFactoryImpl::createConnectionFactory() return createConnectionFactory(null) (instead of throwing an exception)?

The connection factory will always throw a null exception on getConnection() calls because the connection manager is null.

Is there another solution to set the connection manager?

Migrated from http://code.google.com/p/hazelcast/issues/detail?id=319


earlier comments

[email protected] said, at 2010-07-23T11:14:33.000Z:

When the connection is allocated, it is cast to ConnectionImpl; is it possible to cast it to Connection (the interface) instead?

Using the interface avoids a class cast exception if the container uses a proxy.

If it is just to verify that the connection is a ConnectionImpl, instanceof should be enough (I think).

oztalip said, at 2010-08-18T21:08:40.000Z:

How can I reproduce the error? Can you post your code and explain to me how I can reproduce it?

[email protected] said, at 2010-08-19T07:02:31.000Z:

here is the code http://openejb.979440.n4.nabble.com/file/n2300346/hazelcast-openejb.zip

In this project, I recompiled the Hazelcast RA, removing the cast and replacing it with a cast to the interface, and printing a message.

I also added a wrapper to create the connection factory.

To reproduce the error, use the Hazelcast RAR instead of mine.

Refactor code in order to separate all classes into single files instead of using nested classes.

What steps will reproduce the problem?

  1. Many classes used in Hazelcast are nested in other classes.

What is the expected output? What do you see instead?
Each class should have its own file. If classes are related to one super
class, create a new sub-package.

Please use labels and text to provide additional information.
This is only for coding style, but makes the source files much smaller and
easier to read.

Migrated from http://code.google.com/p/hazelcast/issues/detail?id=79


earlier comments

john.channing said, at 2009-06-05T21:41:00.000Z:

I agree. Currently there are some *very* large classes which are making the code difficult to understand and test.

oztalip said, at 2009-12-03T11:30:58.000Z:

We are not there yet but we are certainly making progress as classes like CMap, Record, MapMigrator are taken out of ConcurrentMapManager. I agree with the fact that ConcurrentMapManager is still huge.

[email protected] said, at 2011-12-20T10:41:28.000Z:

I am not sure that readability is that much of an issue. Testing, however, is a big one. The better a project supports testing (TDD), the better I like it :)

Add destroy intention (soft destroy) for ILock

Feature request:
I want to destroy an ILock only if it is not in use at that moment. For example:

ILock lock = Hazelcast.getLock("test");
lock.lock();
try {
    // ...
} finally {
    lock.unlock();
    lock.intentDestroy(); // should run async and destroy the lock only if no other node is using it
    // or even something like lock.unlockWithDestroyIntention(), if there is no pending intention to acquire this lock
}

Or add any other possibility to clear stale unused clustered locks.

Migrated from http://code.google.com/p/hazelcast/issues/detail?id=517


earlier comments

abracham.mitchell said, at 2011-08-26T17:30:52.000Z:

I want the same function too!

ORDER feature in Predicate.

What steps will reproduce the problem?
  1. Ordering should be available in Predicate.

E.g.: put many items into the distributed map - how is it possible to get the last x items? It is possible to filter items by time with a predicate, but not to retrieve the last x items. And since IdGenerator restarts from 0 when the cluster restarts, it is impossible to save items with sequential unique ids 1, 2, 3, 4, ... because the value is forgotten across restarts.

So it seems almost impossible to get the last x items efficiently.

The only way I can see now is to get all the items from the map, iterate over them, and pick the last x created items.
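A hedged sketch of that client-side workaround: filter recent entries with a predicate, then sort and trim in the client. Item is a hypothetical value type with an indexed createdAt field; all names are illustrative.

import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.IMap;
import com.hazelcast.query.SqlPredicate;

public class LastItemsQuery {
    // Item (hypothetical) exposes getCreatedAt(), returning a long timestamp.
    public List<Item> lastItems(int x, long cutoff) {
        IMap<Long, Item> map = Hazelcast.getMap("items");
        List<Item> recent = new ArrayList<Item>(
                map.values(new SqlPredicate("createdAt > " + cutoff)));
        Collections.sort(recent, new Comparator<Item>() {
            public int compare(Item a, Item b) { // newest first
                return Long.valueOf(b.getCreatedAt()).compareTo(Long.valueOf(a.getCreatedAt()));
            }
        });
        return recent.subList(0, Math.min(x, recent.size()));
    }
}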

Migrated from http://code.google.com/p/hazelcast/issues/detail?id=449


earlier comments

lajos.kesik said, at 2010-12-10T17:45:23.000Z:

It is an enhancement of course

Logging Configuration through hazelcast.xml

Currently, logging configuration is possible through the hazelcast.logging.type parameter, e.g. by passing -Dhazelcast.logging.type=log4j at the command line.

It would make configuration much more flexible if the hazelcast.xml configuration file also supported logging configuration.

Migrated from http://code.google.com/p/hazelcast/issues/detail?id=252


earlier comments

jefftrimm said, at 2011-02-08T18:13:08.000Z:

FWIW, this is problematic for us right now. Namely, the user story is that we have a mixed-application environment where there is a strong desire not to manage application-specific settings in the JVM startup script of the webapp container (which is running multiple other non-Hazelcast applications).
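For comparison, a sketch of the programmatic route, assuming the Config API: the same property the -D flag sets can be owned by application configuration instead of the JVM startup script (hazelcast.logging.type is the real property name; the class is illustrative).

import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class ConfiguredStartup {
    public static void main(String[] args) {
        Config config = new Config();
        // same property the -D flag sets, but kept with the application's configuration
        config.setProperty("hazelcast.logging.type", "log4j");
        HazelcastInstance instance = Hazelcast.newHazelcastInstance(config);
    }
}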

possibility to initialize the id-generator with a start-id

The docs say that the cluster-wide id-generator starts at 0 again if the whole cluster is restarted.

I miss the possibility to set the starting id (other than 0). I use an IMap with a MapLoader/MapStore impl. and store the entries in a MySQL DB. With a write-behind cache I can't rely on the DB's auto-generated IDs, so I was glad when I saw your id-generator. But I need to make sure I won't get duplicate IDs after all cluster nodes have gone down, which could happen during a server update/downtime etc. My idea was to get the last used ID from the DB and tell the Hazelcast generator to start from there.

Migrated from http://code.google.com/p/hazelcast/issues/detail?id=256


earlier comments

lajos.kesik said, at 2010-12-29T18:04:34.000Z:

It could be an automatic mechanism. At cluster start, Hazelcast should read the last value from a place where it is stored; e.g., the last value can be stored in a database. Storage could happen on every MapStore operation, so there is no overhead at increment time. E.g., MapStore/MapLoader could have an additional method to store and read the last id value; the load method can be used at cluster start to get the last value of the id.

[email protected] said, at 2011-12-29T03:20:20.000Z:

I implemented a method to initialize the values of IdGenerator. This method receives as a parameter a map with the initial value per range, and can be invoked in the loadAllKeys method of MapLoader. I.e., it retrieves the values to put in the map with the query SELECT (ID / 1000000) AS RANGE_ID, MAX(ID) AS LAST_ID FROM ENTITY GROUP BY (ID / 1000000), then creates the map and initializes the IdGenerator.
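A hedged sketch of the offset idea from the original report: read the last persisted id once at startup (e.g. SELECT MAX(id) FROM entity) and add it as a base to the volatile IdGenerator sequence. Names are illustrative.

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.IdGenerator;

public class OffsetIdGenerator {
    private final long baseOffset; // e.g. SELECT MAX(id) FROM entity, read at startup
    private final IdGenerator generator = Hazelcast.getIdGenerator("entity-ids");

    public OffsetIdGenerator(long lastPersistedId) {
        this.baseOffset = lastPersistedId;
    }

    public long nextId() {
        // unique across restarts as long as the offset is re-read from durable storage
        return baseOffset + 1 + generator.newId();
    }
}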

Too big memory overhead

What steps will reproduce the problem?
Run the following program:

// RECORDS and RECORD_SIZE are constants defined elsewhere in the reporter's
// test (the quoted 952 B/record result is for RECORD_SIZE = 0).
IMap<Integer, byte[]> map = Hazelcast.getMap("test");
System.gc();
long initialMemoryUsage =
        Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory();
System.out.println(initialMemoryUsage);

for (int i = 0; i < RECORDS; i++) {
    map.put(i, new byte[RECORD_SIZE]);
}

System.gc();
long memoryUsage = Runtime.getRuntime().totalMemory()
        - Runtime.getRuntime().freeMemory() - initialMemoryUsage;
System.out.println(memoryUsage);

long usagePerRecord = memoryUsage / RECORDS;
System.out.println("Memory usage per record is " + usagePerRecord + " bytes");

What is the expected output? What do you see instead?
For RECORD_SIZE=0 the output is "Memory usage per record is 952 bytes". That is too big a value. There should be a possibility to define a map whose memory usage doesn't exceed 100 B/record.

What version of the product are you using? On what operating system?
1.8.4-SNAPSHOT (14.04.2010)
Linux x86_64
Java Sun 1.6.0_19 x86_64

Please provide any additional information below.
After commenting out line 1009 in CMap.java (turning off value indexing):
//updateIndexes(record);
memory usage falls to about 680 B/record, but that's still too much.

Migrated from http://code.google.com/p/hazelcast/issues/detail?id=255


earlier comments

[email protected] said, at 2010-04-15T09:36:44.000Z:

For a test I created a map wrapper which groups map entries into buckets. For 700,000 records and 10,000 buckets, memory usage falls to about 60 B/record. But sadly, put and remove operations are much slower because of transactions (bucket locking).

oztalip said, at 2010-05-01T23:25:02.000Z:

With the latest updates we are now down from 952 to 411 bytes - not enough though. We will keep working on the memory cost.

Add putMulti and removeMulti to MultiMap and removeMulti to IMap

I noticed that there are no MultiMap.putAll, MultiMap.removeAll, or IMap.removeAll in the API. It would be great to have these abilities to increase the performance of 'multi' operations (i.e. do them in bulk). We really need them (especially for MultiMap).

Migrated from http://code.google.com/p/hazelcast/issues/detail?id=553


earlier comments

borislav.andruschuk said, at 2011-05-13T09:51:30.000Z:

and the same IMap/MultiMap -> evictAll please.

borislav.andruschuk said, at 2011-05-14T09:23:44.000Z:

I've noticed that IMap.localKeySet already returns entries - the list of local map entries. I think it would be quite good to have IMap.getLocalEntries() in the API. Actually, I'm seeking a good filtering API for local entries without predicate usage: could you please add IMap.getAllLocalValues(Set keys) or IMap.getAllLocalEntries(Set keys), because you can efficiently filter local entries in CMap.

hugo.zwaal said, at 2011-05-17T14:25:52.000Z:

I would also like to see a MultiMap.putMulti(K key, V values...) and a removeMulti()
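Pending native support, the requested bulk operations reduce to client-side loops like the helper below; a true server-side implementation would avoid one network round trip per value. A minimal sketch:

import java.util.Collection;

import com.hazelcast.core.MultiMap;

public final class MultiMapBulk {
    // Client-side stand-in for the requested putMulti.
    public static <K, V> void putMulti(MultiMap<K, V> multiMap, K key, Collection<V> values) {
        for (V value : values) {
            multiMap.put(key, value); // one remote call per value today
        }
    }
}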

Support Hazelcast update in running cluster

Hazelcast does not let different versions of Hazelcast participate in a cluster. This puts us in a difficult situation: we can't afford to restart the entire cluster when we want to update the Hazelcast jar in an already running cluster.

As discussed on the hazelcast mailing list, since wire protocols do not change very often, the version check could be skipped after checking that the wire protocol did not actually change.

A complete fix would be to allow some kind of incremental upgrade of the cluster.

Migrated from http://code.google.com/p/hazelcast/issues/detail?id=270


earlier comments

erlendbi said, at 2011-09-29T07:15:55.000Z:

Has anything been done in this area? Is there still no way to upgrade a running cluster?

Using DataInputStream/DataOutputStream

Would it be possible to allow using DataInputStream/DataOutputStream for serialization instead of DataInput/DataOutput?

We are storing ready-to-serialize protobuf messages which can serialize to/from OutputStream/InputStream. With DataInput/DataOutput it is necessary to create a temporary byte array and store its size. It is possible to create InputStream/OutputStream wrappers over DataInput/DataOutput for less overhead, but it is still necessary to store the size of the data.

Is there any other way to simplify serialization for ready-to-serialize objects?

Migrated from http://code.google.com/p/hazelcast/issues/detail?id=353


earlier comments

oztalip said, at 2011-05-02T19:13:25.000Z:

Can you post a sample protobuf serialization that will be using Data[Input|Output]Stream? I just want to see how it will help...

vladislav.tsybulnik said, at 2011-07-11T14:20:01.000Z:

My code looks like:

class WrapProtoObj implements com.hazelcast.nio.DataSerializable {
    ProtoObj obj;

    public void writeData(java.io.DataOutput out) throws java.io.IOException {
        byte[] b = obj.toByteArray();
        out.writeInt(b.length);
        out.write(b);
    }

    public void readData(java.io.DataInput in) throws java.io.IOException {
        int len = in.readInt();
        byte[] b = new byte[len];
        in.readFully(b);
        obj = ProtoObj.parseFrom(b);
    }
}

I want something like:

class WrapProtoObj implements ? {
    ProtoObj obj;

    public void write?(OutputStream out) throws java.io.IOException {
        obj.writeTo(out);
    }

    public void read?(InputStream in) throws java.io.IOException {
        obj = ProtoObj.parseFrom(in);
    }
}
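A sketch of the wrapper the reporter mentions as a stopgap: exposing a DataOutput as an OutputStream lets protobuf stream directly, though readData() would still need a length prefix to know how much to consume. The class name is illustrative.

import java.io.DataOutput;
import java.io.IOException;
import java.io.OutputStream;

class DataOutputStreamAdapter extends OutputStream {
    private final DataOutput out;

    DataOutputStreamAdapter(DataOutput out) {
        this.out = out;
    }

    @Override
    public void write(int b) throws IOException {
        out.write(b);
    }

    @Override
    public void write(byte[] b, int off, int len) throws IOException {
        out.write(b, off, len); // bulk writes avoid byte-at-a-time overhead
    }
}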

Add possibility to check that distributed Lock is acquired

Feature request:

Add the possibility to check that the lock is acquired by someone, i.e. just a method ILock.isLocked(), to make it possible to implement a spin lock / wait for a lock release and then run non-blocking code, i.e. to cover the following case:

ILock lock = Hazelcast.getLock("lock");
if (lock.tryLock()) {
    try {
        // single thread model
    } finally {
        lock.unlock();
    }
} else {
    // spin here until the lock is released
    while (lock.isLocked()) {
        LockSupport.parkUntil(System.currentTimeMillis() + 2000);
    }
    // do something _without blocking_
}

Migrated from http://code.google.com/p/hazelcast/issues/detail?id=510

Hazelcast as JPA provider

Hi,

would it be possible to have a JPA provider for Hazelcast?

It could be very useful to be able to use Hazelcast as a datasource and, why not, save it automatically to a real database (with the persistence features).

Migrated from http://code.google.com/p/hazelcast/issues/detail?id=498


earlier comments

jeanouii said, at 2011-07-06T14:10:44.000Z:

Any information on that feature? Are you confident on the way to implement it?

Jean-Louis

rmannibucau said, at 2011-08-30T09:36:54.000Z:

Hi,

any news about it?

It could be a really useful feature.

i started a hazelcast jdbc driver here: http://code.google.com/p/rmannibucau/source/browse/#hg%2Fhazelcast%2Fhazelcast-jdbc

the idea is "once the jdbc driver is here, the jpa layer will be easy to do"

Can we have some feedback about this feature? Will Hazelcast provide it (maybe for Hazelcast 2.0 ;)) or is it clearly something Hazelcast will not take into account?

New load balancer?

Hi,

is it possible to provide one's own load balancer, or to use something other than round robin (something like priority, average CPU, ...)?

Migrated from http://code.google.com/p/hazelcast/issues/detail?id=308


earlier comments

oztalip said, at 2010-07-07T08:38:31.000Z:

are you asking this for distributed executor service?

[email protected] said, at 2010-07-07T08:46:58.000Z:

Yes, I know I can set my member when I create a DistributedTask, but since there is a load balancer in the distributed execution service, I was wondering if it was possible to replace it.

oztalip said, at 2010-07-07T09:25:34.000Z:

I see. Possible, but not yet implemented - and we should.

[email protected] said, at 2010-07-07T10:04:14.000Z:

ok thanks

david.koch444 said, at 2012-02-02T09:29:23.000Z:

Would it also make sense, in the scope of this enhancement, to make the ExecutorService implementation on the executing member exchangeable or interceptable? Then I could know the real load on the executing node. This could then be published and used to decide about load balancing.
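A hypothetical shape for the pluggable load balancer discussed in this thread; neither the interface nor any registration hook exists in Hazelcast at this point:

import java.util.Set;

import com.hazelcast.core.Member;

// Hypothetical SPI: implementations could pick by round robin, priority,
// reported CPU load, etc.
public interface ExecutionLoadBalancer {
    Member selectMember(Set<Member> members);
}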

Overall maximum size for distributed queues.

A feature suggestion: you could add the option - in the distributed queue implementation - of having an overall (across the entire cluster) maximum size, as well as the currently implemented per-JVM max capacity.

This would be very useful for people like me - I am using Hazelcast's distributed blocking queue to implement distributed throughput control.

I am submitting this issue per Talip Ozturk's request.

Migrated from http://code.google.com/p/hazelcast/issues/detail?id=350


earlier comments

igor.ribeiro said, at 2010-08-30T22:21:47.000Z:

Just a note: this is not a defect - but a feature request instead.

[email protected] said, at 2011-01-10T20:14:09.000Z:

To give a few more details and my rationale for requesting this feature, this is how I am implementing distributed throughput control using your blocking queue:

  • I have a single producer in the entire cluster. It produces a number of permits (an empty object, really) each second on an array blocking queue (for which the maximum size is 20).
  • I have multiple consumers. A consumer starts asking for permits (until it has 20, for instance) and then uses them to send a batch of JMS messages.

This guarantees that my target throughput is on average respected.

I would like to be able to add new consumers to my cluster at any given time. However, when I do that, the maximum size of the array blocking queue currently adjusts (the capacity is per-JVM, so the overall capacity grows with each new member). I have to account for that in the application, which proved surprisingly hard to do properly.

If it were possible to set an overall maximum size for the distributed queue, this would be unnecessary.
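The permit pattern described above, sketched with a Hazelcast IQueue (names and sizes illustrative). The cluster-wide bound requested in this issue would replace the per-JVM capacity configured for the permits queue.

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.IQueue;

public class ThroughputControl {
    // capacity is per-JVM today; this issue asks for a cluster-wide bound
    private final IQueue<Object> permits = Hazelcast.getQueue("permits");

    // Single producer, called once per second.
    public void producePermit() {
        permits.offer(new Object()); // offer() simply fails once the queue is full
    }

    // Consumers block here before sending a batch of JMS messages.
    public void acquirePermit() throws InterruptedException {
        permits.take();
    }
}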

fuadmalik said, at 2011-05-03T12:20:10.000Z:

Issue 500 has been merged into this issue.

BlockingDeque

What steps will reproduce the problem?

  1. There is no implementation of BlockingDeque, which is part of JDK 6.

What is the expected output? What do you see instead?
It would be great if we could have the BlockingDeque implementation.

What version of the product are you using? On what operating system?
1.8.4 on Linux.

Please provide any additional information below.
I am trying to move a local cache which uses LinkedBlockingDeque from JDK 6 to a distributed cache and would really like to use Hazelcast to get it done. Thanks.

Migrated from http://code.google.com/p/hazelcast/issues/detail?id=349


earlier comments

ryiu1029 said, at 2010-08-29T06:30:23.000Z:

Please change it to an Enhancement request which was my initial intention. Thanks.

oztalip said, at 2010-08-29T06:58:43.000Z:

Sounds good!

indexes for base types

Description of the new feature

It would be nice to have the possibility of indexes for maps like Map<Long, String>, so that it works like an index-based containsValue.

Migrated from http://code.google.com/p/hazelcast/issues/detail?id=400

Filter by values in a Map

I have values in a Map in my Hazelcast cluster and I have to index and filter by these values.

To implement this, I had to make some modifications to the Hazelcast source code. I created an interface, ExternalValueSelector, and made some changes in com.hazelcast.query.Predicates.GetExpressionImpl.doGetValue(Object obj).

public interface ExternalValueSelector {
    Object getValueByPath(String name);
}

    private Object doGetValue(Object obj) {
        if (obj instanceof MapEntry) {
            obj = ((MapEntry) obj).getValue();
        }
        if (obj == null) return null;
        try {
            if (getter == null) {
                if (obj instanceof ExternalValueSelector) {
                    getter = new ExternalValueGetter(input);
                } else {
                    ...


    class ExternalValueGetter extends Getter {

        private final String getterInput;

        public ExternalValueGetter(String input) {
            super(null);
            this.getterInput = input;
        }

        Object getValue(Object obj) throws Exception {
            return ((ExternalValueSelector)obj).getValueByPath(this.getterInput);
        }

        @Override
        Class getReturnType() {
            return Object.class;
        }
    }

With these changes, if an element implements this interface, doGetValue calls ExternalValueSelector.getValueByPath(String name) and skips the reflection getters.

I checked out the source code from http://hazelcast.googlecode.com/svn/trunk and made a patch. I will attach the patch. Please apply the code if you like it.

Migrated from http://code.google.com/p/hazelcast/issues/detail?id=522

Improve transaction isolation

What steps will reproduce the problem?

  1. The following pseudocode shows two threads:
    In Thread X:
    tx.begin()
    modify(A)
    modify(B)
    tx.commit

In Thread Y
lockAndRead(A)
lockAndRead(B)

  2. In a highly concurrent environment, thread Y may read inconsistent versions of A and B, since in thread X the transaction first commits the TransactionRecord for A and then the one for B. There is a possibility that in thread X the change is committed and the lock is released for A, but not yet for B, so thread Y may read an old version of B and a newer version of A.

What is the expected output? What do you see instead?

The expected behaviour is that during the transaction commit phase it commits the changes for all transaction records, and only then releases all the locks on the transaction records. This is not as optimized as the concurrent behaviour, so it would be even better to introduce a configuration option to control this.

What version of the product are you using? On what operating system?

1.9.3 linux
Please provide any additional information below.

Migrated from http://code.google.com/p/hazelcast/issues/detail?id=588


earlier comments

Li.ChangGeng said, at 2011-06-22T06:59:48.000Z:

Actually I meant that in thread Y we do: lockAndRead(A); read(B) // without a lock - this may return a version from before the modification in thread X.

fuadmalik said, at 2011-07-26T12:03:22.000Z:

What you describe here is two-phase commit behavior. Hazelcast doesn't support it.

ReentrantReadWriteLock with distributed Map

What steps will reproduce the problem?

  1. Set a ReentrantReadWriteLock into a distributed Map
  2. Thread A gets the write lock and sleeps for a while
  3. While Thread A is sleeping, thread B gets the write lock

What is the expected output? What do you see instead?
Thread B should wait for the write lock until thread A releases it. Instead, both thread A and thread B can obtain the write lock at the same time.

What version of the product are you using? On what operating system?
Hazelcast 1.8.5. Java 1.5 on Win XP SP2.

Please provide any additional information below.
I know that Hazelcast supports distributed locks. What about a distributed read-write lock? Any workaround for this, or any chance of adding it in the near future? By the way, I would also like to know whether the distributed lock supports the fairness option (i.e. new ReentrantLock(true); in Java 1.5). Thanks.

Migrated from http://code.google.com/p/hazelcast/issues/detail?id=329


earlier comments

byzhang said, at 2011-10-04T04:25:34.000Z:

Is this feature on the plan?
