Clustering currently works only with the JDBC-JobStore, and every node in the cluster shares the same database. Load-balancing occurs automatically: each node fires jobs as quickly as it can. When a trigger's firing time arrives, the first node to acquire it (by placing a lock on it) fires it. For example, if a job has a trigger that repeats every 10 seconds, at 1:00:00 one node runs the job and at 1:00:10 one node runs it again; the second node may or may not be the same as the first. Fail-over occurs when a node fails: the other nodes detect the condition, identify in the database the jobs that were in progress on the failed node, and fire them again.
The ClusterManager is responsible for managing cluster fail-over. Each instance writes a health check (check-in) to the SCHEDULER_STATE table at a configurable interval. If an instance stops checking in, another instance first acquires the STATE_ACCESS lock and then recovers the failed instance's in-progress triggers.
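Clustering is enabled through the JDBC job store's configuration. A minimal `quartz.properties` sketch is shown below; the datasource name `myDS` and the 20-second check-in interval are illustrative values, not requirements:

```properties
# Use the JDBC job store; all cluster nodes must point at the same database
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.dataSource = myDS

# Turn on clustering and check in every 20 seconds (interval is in ms)
org.quartz.jobStore.isClustered = true
org.quartz.jobStore.clusterCheckinInterval = 20000

# Each node needs a unique instance id; AUTO generates one
org.quartz.scheduler.instanceId = AUTO
```

The `clusterCheckinInterval` here is what drives the health check described above: a node that misses its check-in by a wide enough margin is considered failed by its peers.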
Queues are straightforward: messages are stored in first-in, first-out (FIFO) order. See the figure below for a depiction of this. A message is dispatched to only a single consumer at a time, and only after that message has been consumed and acknowledged can it be deleted from the broker's message store. If a consumer fails to ACK a message, it will be redelivered to another consumer. A queue can have multiple consumers, but each message is consumed by only one of them.
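The dispatch-and-acknowledge cycle can be sketched with a small in-memory model. This illustrates the contract only, not ActiveMQ's actual implementation; the class and method names are invented:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy model of queue semantics: a message is removed from the store only
// when the consumer that received it acknowledges it; an unacknowledged
// message goes back to the head of the queue and is redelivered.
public class QueueModel {
    private final Deque<String> store = new ArrayDeque<>(); // FIFO message store
    private String inFlight; // at most one unacknowledged message in this toy model

    public void send(String msg) { store.addLast(msg); }

    // Dispatch the head of the queue to a single consumer.
    public String dispatch() {
        inFlight = store.pollFirst();
        return inFlight;
    }

    // Consumer acknowledged: the message may now be deleted from the store.
    public void ack() { inFlight = null; }

    // Consumer failed without acking: put the message back for redelivery.
    public void nack() {
        if (inFlight != null) { store.addFirst(inFlight); inFlight = null; }
    }

    public int depth() { return store.size(); }
}
```

A failed consumer (one that never calls `ack`) leaves the message redeliverable, so the next `dispatch` hands the same message to another consumer, matching the redelivery behavior described above.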
With a topic, one message can be consumed by multiple consumers. For durable subscribers to a topic, a durable-subscriber object in the store maintains a pointer to its next stored message and dispatches a copy of it to its consumer, as shown in the figure below. The message store is implemented this way because each durable subscriber may be consuming messages at a different rate, and they may not all be running at the same time. Also, because every message can potentially have many consumers, a message cannot be deleted from the store until it has been successfully delivered to every interested durable subscriber.
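The per-subscriber pointer bookkeeping can be sketched as follows. Again this is a simplified model of the idea, not ActiveMQ's store; all names are invented:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of durable-subscriber bookkeeping: the store keeps one copy of
// each message plus, per durable subscriber, a pointer to its next message.
// A leading message can be purged only once every subscriber's pointer
// has moved past it.
public class TopicStoreModel {
    private final List<String> messages = new ArrayList<>();
    private final Map<String, Integer> nextPointer = new HashMap<>(); // subscriber -> index

    public void subscribe(String name) { nextPointer.put(name, messages.size()); }

    public void publish(String msg) { messages.add(msg); }

    // Dispatch a copy of the subscriber's next message and advance its pointer.
    public String dispatchNext(String name) {
        int i = nextPointer.get(name);
        if (i >= messages.size()) return null; // nothing new for this subscriber
        nextPointer.put(name, i + 1);
        return messages.get(i);
    }

    // Number of leading messages every subscriber has consumed (safe to delete).
    public int purgeableCount() {
        return nextPointer.values().stream().mapToInt(Integer::intValue)
                .min().orElse(messages.size());
    }
}
```

Because `purgeableCount` takes the minimum over all pointers, a slow or offline durable subscriber pins messages in the store, which is exactly why a message cannot be deleted until every interested subscriber has received it.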
Since ActiveMQ 5.4, KahaDB has been the default message store. It has three parts: the data logs, the BTree indexes, and a cache.
- Data logs (the journal): these hold the broker's messages. The journal consists of a rolling log of messages and commands stored in data files of a certain length.
- Cache: holds messages in memory for fast retrieval after they have been written to the journal, and periodically updates the reference store with the current message IDs and the messages' locations in the journal.
- BTree indexes: KahaDB uses only one index file for all destinations. All index-file updates are also recorded in a redo log, which ensures that the indexes can be restored to a consistent state.
The directory structure of KahaDB:
- db-x.log: the journal data files themselves.
- db.data: the BTree indexes, recording the indexes of all destinations.
- db.redo: all index-file updates are also recorded in this redo log, which ensures that the indexes can be restored to a consistent state if the broker shuts down uncleanly.
- lock: ensures that only one broker can access the data at any given time; it is used in hot-standby setups where more than one broker runs with the same name.
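The location of this directory (and the journal data-file length) is set on the broker's persistence adapter. A minimal `activemq.xml` fragment is sketched below; the directory path and the 32 MB journal length are illustrative:

```xml
<broker xmlns="http://activemq.apache.org/schema/core">
  <persistenceAdapter>
    <!-- this directory will contain db-*.log, db.data, db.redo and the lock file -->
    <kahaDB directory="${activemq.data}/kahadb" journalMaxFileLength="32mb"/>
  </persistenceAdapter>
</broker>
```

Pointing two brokers with the same name at the same `directory` is what produces the hot-standby behavior described above: the `lock` file lets only one of them serve clients at a time.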
The consumer code has three main parts: reading messages from the transport, an in-memory queue that stores prefetched messages, and a dispatcher that delivers messages to the consumers, which then process them. Queue and topic consumers follow the same process flow.
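That three-stage flow can be sketched with plain Java concurrency primitives. This is a simplified model, not the ActiveMQ client code; the class and method names are invented:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Consumer;

// Toy model of the consumer pipeline: the transport thread puts messages it
// reads off the wire into a bounded in-memory queue; a dispatcher thread
// takes them out and hands each one to the registered message listener.
public class ConsumerPipeline {
    private final BlockingQueue<String> prefetch = new ArrayBlockingQueue<>(100);

    // Stage 1: the transport delivers a message read off the wire.
    public void onTransportMessage(String msg) throws InterruptedException {
        prefetch.put(msg); // blocks if the prefetch buffer is full
    }

    // Stages 2 and 3: the dispatcher drains the queue and invokes the listener.
    public Thread startDispatcher(Consumer<String> listener) {
        Thread t = new Thread(() -> {
            try {
                while (true) listener.accept(prefetch.take());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // shutdown signal
            }
        });
        t.setDaemon(true);
        t.start();
        return t;
    }
}
```

A single dispatcher thread preserves delivery order, and the bounded queue gives natural back-pressure: when the listener falls behind, the transport's `put` blocks instead of buffering messages without limit.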