Memphis is an intelligent, frictionless message broker, made to enable developers to build real-time and streaming apps fast.
Memphis.dev is more than a broker. It's a new streaming stack.
It accelerates the development of real-time applications that require high throughput, low latency, a small footprint, and multiple protocols, with minimal platform operations and all the observability you can think of.
It is highly resilient, with a distributed, cloud-native architecture, and runs on any Kubernetes, on any cloud, without ZooKeeper, BookKeeper, or a JVM.
$ npm install memphis-dev
Notice: you may receive an error about the "murmurhash3" package. To solve it, please install g++:
$ sudo yum install -y /usr/bin/g++
For JavaScript, you can use either the import or require keyword. This library exports a singleton instance of memphis
with which you can consume and produce messages.
const { memphis } = require('memphis-dev');
For TypeScript, use the import keyword. You can also import the Memphis type for type-checking assistance.
import { memphis, Memphis } from 'memphis-dev';
To leverage the NestJS dependency injection feature:
import { Module } from '@nestjs/common';
import { Memphis, MemphisModule, MemphisService } from 'memphis-dev';
First, we need to connect to Memphis by using memphis.connect.
/* Javascript and typescript project */
await memphis.connect({
host: "<memphis-host>",
port: <port>, // defaults to 6666
username: "<username>", // (root/application type user)
accountId: <accountId>, // You can find it on the profile page in the Memphis UI. This field should be sent only on the cloud version of Memphis, otherwise it will be ignored
connectionToken: "<broker-token>", // you will get it on application type user creation
password: "<string>", // depends on how Memphis deployed - default is connection token-based authentication
reconnect: true, // defaults to true
maxReconnect: 10, // defaults to 10
reconnectIntervalMs: 1500, // defaults to 1500
timeoutMs: 15000, // defaults to 15000
// for TLS connection:
keyFile: '<key-client.pem>',
certFile: '<cert-client.pem>',
caFile: '<rootCA.pem>',
suppressLogs: false // defaults to false - indicates whether to suppress logs or not
});
NestJS injection:
@Module({
imports: [MemphisModule.register()],
})
class ConsumerModule {
constructor(private memphis: MemphisService) {}
startConnection() {
(async () => {
let memphisConnection: Memphis;
try {
memphisConnection = await this.memphis.connect({
host: "<memphis-host>",
username: "<application type username>",
connectionToken: "<broker-token>",
});
} catch (ex) {
console.log(ex);
if (memphisConnection) memphisConnection.close();
}
})();
}
}
Once connected, all of the functionality offered by Memphis is available.
To disconnect from Memphis, call close() on the memphis object.
memphisConnection.close();
Stations that do not exist will be created automatically by the SDK on the first producer/consumer connection, with default values.
If a station already exists, nothing happens; the new configuration will not be applied.
const station = await memphis.station({
name: '<station-name>',
schemaName: '<schema-name>',
retentionType: memphis.retentionTypes.MAX_MESSAGE_AGE_SECONDS, // defaults to memphis.retentionTypes.MAX_MESSAGE_AGE_SECONDS
retentionValue: 604800, // defaults to 604800
storageType: memphis.storageTypes.DISK, // defaults to memphis.storageTypes.DISK
replicas: 1, // defaults to 1
idempotencyWindowMs: 120000, // defaults to 120000
sendPoisonMsgToDls: true, // defaults to true
sendSchemaFailedMsgToDls: true, // defaults to true
tieredStorageEnabled: false, // defaults to false
partitionsNumber: 1, // defaults to 1
dlsStation: '<station-name>' // defaults to "" (no DLS station) - if set, DLS events will be sent to that station as well
});
Creating a station with NestJS dependency injection:
@Module({
imports: [MemphisModule.register()],
})
class StationModule {
constructor(private memphis: MemphisService) { }
createStation() {
(async () => {
const station = await this.memphis.station({
name: "<station-name>",
schemaName: "<schema-name>",
retentionType: memphis.retentionTypes.MAX_MESSAGE_AGE_SECONDS, // defaults to memphis.retentionTypes.MAX_MESSAGE_AGE_SECONDS
retentionValue: 604800, // defaults to 604800
storageType: memphis.storageTypes.DISK, // defaults to memphis.storageTypes.DISK
replicas: 1, // defaults to 1
idempotencyWindowMs: 120000, // defaults to 120000
sendPoisonMsgToDls: true, // defaults to true
sendSchemaFailedMsgToDls: true, // defaults to true
tieredStorageEnabled: false, // defaults to false
dlsStation: '<station-name>' // defaults to "" (no DLS station) - if set, DLS events will be sent to that station as well
});
})();
}
}
Memphis currently supports the following types of retention:
memphis.retentionTypes.MAX_MESSAGE_AGE_SECONDS;
Means that every message persists for the duration set in the retention value field (in seconds).
memphis.retentionTypes.MESSAGES;
Means that after the maximum number of saved messages (set in the retention value) is reached, the oldest messages will be deleted.
memphis.retentionTypes.BYTES;
Means that after the maximum number of saved bytes (set in the retention value) is reached, the oldest messages will be deleted.
memphis.retentionTypes.ACK_BASED; // for cloud users only
Means that after a message is acked by all interested consumer groups, it will be deleted from the station.
The retention values are directly related to the retention types mentioned above; the meaning of the value varies according to the chosen retention type. All retention values are of type int but with different representations: memphis.retentionTypes.MAX_MESSAGE_AGE_SECONDS is expressed in seconds, memphis.retentionTypes.MESSAGES in a number of messages, memphis.retentionTypes.BYTES in a number of bytes, and memphis.retentionTypes.ACK_BASED does not use the retentionValue param at all. Once these limits are reached, the oldest messages are deleted.
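As a sketch, a station retained by message count rather than by age could be created like this (the station name and limit are hypothetical):

```javascript
// Sketch: create a station that keeps at most 1000 messages.
// `memphis` is the connected singleton from require('memphis-dev');
// it is passed in here so the helper stays self-contained.
async function createCountLimitedStation(memphis) {
    return memphis.station({
        name: 'orders',                                  // hypothetical station name
        retentionType: memphis.retentionTypes.MESSAGES,  // count-based retention
        retentionValue: 1000                             // oldest messages deleted past 1000
    });
}
```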
Memphis currently supports the following types of messages storage:
memphis.storageTypes.DISK;
Means that messages persist on disk
memphis.storageTypes.MEMORY;
Means that messages persist in main memory
Destroying a station removes all of its resources (producers/consumers):
await station.destroy();
await memphisConnection.createSchema({schemaName: "<schema-name>", schemaType: "<schema-type>", schemaFilePath: "<schema-file-path>" });
await memphisConnection.enforceSchema({ name: '<schema-name>', stationName: '<station-name>' });
await memphisConnection.attachSchema({ name: '<schema-name>', stationName: '<station-name>' }); // deprecated - use enforceSchema instead
await memphisConnection.detachSchema({ stationName: '<station-name>' });
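The calls above are typically used together: create a schema once, enforce it on a station, and detach it when it is no longer needed. A minimal sketch, with hypothetical schema, station, and file names:

```javascript
// Sketch of a schema lifecycle against a live connection.
// `conn` is the object returned by memphis.connect(...).
async function applyUserSchema(conn) {
    // Register the schema from a local file (hypothetical path/names).
    await conn.createSchema({
        schemaName: 'user-schema',
        schemaType: 'json',
        schemaFilePath: './schemas/user.json'
    });
    // Enforce it on the station so produced messages are validated against it.
    await conn.enforceSchema({ name: 'user-schema', stationName: 'users' });
}

async function removeUserSchema(conn) {
    // Stop validating messages on the station.
    await conn.detachSchema({ stationName: 'users' });
}
```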
The most common client operations are produce, to send messages, and consume, to receive messages.
Messages are published to a station and consumed from it by creating a consumer. Consumers are pull-based and consume all the messages in a station, unless you are using a consumer group, in which case messages are spread across all members of the group.
Memphis messages are payload agnostic. Payloads are Uint8Arrays.
To stop receiving messages, you have to call consumer.destroy(). Destroy will terminate regardless of whether there are messages in flight for the client.
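To spread messages across several workers, give the consumers the same consumerGroup. A sketch with hypothetical station and group names; each message is delivered to only one member of the group:

```javascript
// Sketch: two consumers sharing one group split the station's messages.
// `conn` is a live Memphis connection.
async function startWorkers(conn) {
    const workers = [];
    for (const name of ['worker-1', 'worker-2']) {
        const consumer = await conn.consumer({
            stationName: 'tasks',         // hypothetical station
            consumerName: name,
            consumerGroup: 'task-workers' // same group => load is shared
        });
        consumer.on('message', (message) => {
            console.log(`${name} got`, message.getData());
            message.ack();
        });
        workers.push(consumer);
    }
    return workers;
}
```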
const producer = await memphisConnection.producer({
stationName: '<station-name>',
producerName: '<producer-name>',
});
Creating producers with NestJS dependency injection:
@Module({
imports: [MemphisModule.register()],
})
class ProducerModule {
constructor(private memphis: MemphisService) { }
createProducer() {
(async () => {
const producer = await this.memphis.producer({
stationName: "<station-name>",
producerName: "<producer-name>"
});
})();
}
}
Producing without creating a producer first is possible. In cases where extra performance is needed, the recommended way is to create a producer first and produce messages using its produce function.
await memphisConnection.produce({
stationName: '<station-name>',
producerName: '<producer-name>',
message: 'Uint8Arrays/object/string/DocumentNode graphql', // Uint8Arrays/object (schema validated station - protobuf) or Uint8Arrays/object (schema validated station - json schema) or Uint8Arrays/string/DocumentNode graphql (schema validated station - graphql schema) or Uint8Arrays/object (schema validated station - avro schema)
ackWaitSec: 15, // defaults to 15
asyncProduce: true, // defaults to true. For better performance. The client won't block requests while waiting for an acknowledgment.
headers: headers, // defaults to empty
msgId: 'id', // defaults to null
producerPartitionKey: "key", // produce to specific partition. defaults to null
producerPartitionNumber: -1 // produce to specific partition number. defaults to -1
});
Creating a producer first:
await producer.produce({
message: 'Uint8Arrays/object/string/DocumentNode graphql', // Uint8Arrays/object (schema validated station - protobuf) or Uint8Arrays/object (schema validated station - json schema) or Uint8Arrays/string/DocumentNode graphql (schema validated station - graphql schema) or Uint8Arrays/object (schema validated station - avro schema)
ackWaitSec: 15, // defaults to 15,
producerPartitionKey: "key", // produce to specific partition. defaults to null
producerPartitionNumber: -1 // produce to specific partition number. defaults to -1
});
Note: When producing to a station with more than one partition, the producer will produce messages in a Round Robin fashion between the different partitions.
const headers = memphis.headers();
headers.add('<key>', '<value>');
await producer.produce({
message: 'Uint8Arrays/object/string/DocumentNode graphql', // Uint8Arrays/object (schema validated station - protobuf) or Uint8Arrays/object (schema validated station - json schema) or Uint8Arrays/string/DocumentNode graphql (schema validated station - graphql schema)
headers: headers // defaults to empty
});
or
const headers = { key: 'value' };
await producer.produce({
message: 'Uint8Arrays/object/string/DocumentNode graphql', // Uint8Arrays/object (schema validated station - protobuf) or Uint8Arrays/object (schema validated station - json schema) or Uint8Arrays/string/DocumentNode graphql (schema validated station - graphql schema) or Uint8Arrays/object (schema validated station - avro schema)
headers: headers
});
Async produce: for better performance, the client won't block requests while waiting for an acknowledgment. Defaults to true.
await producer.produce({
message: 'Uint8Arrays/object/string/DocumentNode graphql', // Uint8Arrays/object (schema validated station - protobuf) or Uint8Arrays/object (schema validated station - json schema) or Uint8Arrays/string/DocumentNode graphql (schema validated station - graphql schema) or Uint8Arrays/object (schema validated station - avro schema)
ackWaitSec: 15, // defaults to 15
asyncProduce: true, // defaults to true. For better performance. The client won't block requests while waiting for an acknowledgment
producerPartitionKey: "key", // produce to specific partition. defaults to null
producerPartitionNumber: -1 // produce to specific partition number. defaults to -1
});
Stations are idempotent by default for 2 minutes (configurable). Idempotency is achieved by adding a message-id:
await producer.produce({
message: 'Uint8Arrays/object/string/DocumentNode graphql', // Uint8Arrays/object (schema validated station - protobuf) or Uint8Arrays/object (schema validated station - json schema) or Uint8Arrays/string/DocumentNode graphql (schema validated station - graphql schema) or Uint8Arrays/object (schema validated station - avro schema)
ackWaitSec: 15, // defaults to 15
msgId: 'id' // defaults to null
});
await producer.destroy();
const consumer = await memphisConnection.consumer({
stationName: '<station-name>',
consumerName: '<consumer-name>',
consumerGroup: '<group-name>', // defaults to the consumer name.
pullIntervalMs: 1000, // defaults to 1000
batchSize: 10, // defaults to 10
batchMaxTimeToWaitMs: 5000, // defaults to 5000
maxAckTimeMs: 30000, // defaults to 30000
maxMsgDeliveries: 10, // defaults to 10
startConsumeFromSequence: 1, // start consuming from a specific sequence. defaults to 1
lastMessages: -1, // consume the last N messages, defaults to -1 (all messages in the station)
consumerPartitionKey: "key", // consume by specific partition key. Defaults to null
consumerPartitionNumber: -1 // consume by specific partition number. Defaults to -1
});
Note: When consuming from a station with more than one partition, the consumer will consume messages in Round Robin fashion from the different partitions.
consumer.setContext({ key: 'value' });
consumer.on('message', (message, context) => {
// processing
console.log(message.getData());
message.ack();
});
const msgs = await memphis.fetchMessages({
stationName: '<station-name>',
consumerName: '<consumer-name>',
consumerGroup: '<group-name>', // defaults to the consumer name.
batchSize: 10, // defaults to 10
batchMaxTimeToWaitMs: 5000, // defaults to 5000
maxAckTimeMs: 30000, // defaults to 30000
maxMsgDeliveries: 10, // defaults to 10
startConsumeFromSequence: 1, // start consuming from a specific sequence. defaults to 1
lastMessages: -1, // consume the last N messages, defaults to -1 (all messages in the station)
consumerPartitionKey: "key", // consume by specific partition key. Defaults to null
consumerPartitionNumber: -1 // consume by specific partition number. Defaults to -1
});
const msgs = await consumer.fetch({
batchSize: 10, // defaults to 10
consumerPartitionKey: "key", // fetch by specific partition key. Defaults to null
consumerPartitionNumber: -1 // fetch by specific partition number. Defaults to -1
});
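fetch can be called in a loop to drain a station in batches. A sketch, assuming a live consumer; it stops once a fetch returns an empty batch:

```javascript
// Sketch: repeatedly fetch batches from a consumer and ack each message,
// stopping when a batch comes back empty.
async function drain(consumer) {
    let total = 0;
    while (true) {
        const msgs = await consumer.fetch({ batchSize: 10 });
        if (!msgs || msgs.length === 0) break; // nothing left to consume
        for (const message of msgs) {
            message.ack();                     // acknowledge so it isn't redelivered
            total += 1;
        }
    }
    return total;
}
```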
To set up a connection in NestJS:
import { NestFactory } from '@nestjs/core';
import { MicroserviceOptions } from '@nestjs/microservices';
import { MemphisServer } from 'memphis-dev';
async function bootstrap() {
const app = await NestFactory.createMicroservice<MicroserviceOptions>(
AppModule,
{
strategy: new MemphisServer({
host: '<memphis-host>',
username: '<application type username>',
connectionToken: '<broker-token>'
}),
},
);
await app.listen();
}
bootstrap();
To consume messages in NestJS
import { MemphisConsume, Message } from 'memphis-dev';

export class Controller {
@MemphisConsume({
stationName: '<station-name>',
consumerName: '<consumer-name>',
consumerGroup: ''
})
async messageHandler(message: Message) {
console.log(message.getData().toString());
message.ack();
}
}
Acknowledging a message indicates to the Memphis server that it should not re-send the same message to the same consumer / consumer group:
message.ack();
Delaying a message tells the Memphis server to re-send it to the same consumer group. The message will be redelivered only if Consumer.maxMsgDeliveries has not been reached yet.
message.delay(delayInMilliseconds);
As Uint8Array
msg = message.getData();
As JSON
msg = message.getDataAsJson();
Get headers per message
headers = message.getHeaders();
Get message sequence number
sequenceNumber = message.getSequenceNumber();
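The accessors above are usually combined in a single handler. A minimal sketch; it works with any message object exposing these methods:

```javascript
// Sketch: process one message end-to-end using the accessors above.
function handleMessage(message) {
    const payload = message.getDataAsJson();   // parsed JSON body
    const headers = message.getHeaders();      // per-message headers
    const seq = message.getSequenceNumber();   // position in the station
    console.log(`message #${seq}`, payload, headers);
    message.ack();                             // prevent redelivery
    return payload;
}
```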
consumer.on('error', (error) => {
// error handling
});
Stopping a consumer simply stops it from consuming messages in the code.
If you no longer want a consumer's listeners to receive messages (even if messages are still being produced to its station), stop the consumer and that's it.
await consumer.stop();
This is different from stopping a consumer. Destroying a consumer removes it from the station and the broker itself; it will no longer exist.
await consumer.destroy();
memphisConnection.isConnected();
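isConnected can be used as a guard, for example before producing. A sketch, assuming a live connection and producer (the names are hypothetical):

```javascript
// Sketch: only produce when the underlying connection is still alive.
async function produceIfConnected(conn, producer, payload) {
    if (!conn.isConnected()) {
        console.warn('not connected; dropping message');
        return false;
    }
    await producer.produce({ message: payload });
    return true;
}
```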