Node.js client for Memphis. Memphis is an event processing platform

Home Page: https://www.npmjs.com/package/memphis-dev

License: Apache License 2.0


memphis.js's Introduction


Memphis is an intelligent, frictionless message broker.
Made to enable developers to build real-time and streaming apps fast.

CNCF Silver Member

Docs - Twitter - YouTube


Memphis.dev is more than a broker. It's a new streaming stack.

It accelerates the development of real-time applications that require
high throughput, low latency, small footprint, and multiple protocols,
with minimum platform operations, and all the observability you can think of.

Highly resilient, with a distributed, cloud-native architecture that runs on any Kubernetes, on any cloud, without ZooKeeper, BookKeeper, or a JVM.

Installation

$ npm install memphis-dev

Notice: you may receive an error about the "murmurhash3" package. To solve it, please install g++:

$ sudo yum install -y /usr/bin/g++

Importing

For JavaScript, you can choose to use either the import or the require keyword. This library exports a singleton instance of memphis with which you can consume and produce messages.

const { memphis } = require('memphis-dev');

For TypeScript, use the import keyword. You can also import the Memphis type for type-checking assistance.

import { memphis, Memphis } from 'memphis-dev';

To leverage the NestJS dependency injection feature:

import { Module } from '@nestjs/common';
import { Memphis, MemphisModule, MemphisService } from 'memphis-dev';

Connecting to Memphis

First, we need to connect with Memphis by using memphis.connect.

/* JavaScript and TypeScript projects */
await memphis.connect({
            host: "<memphis-host>",
            port: <port>, // defaults to 6666
            username: "<username>", // (root/application type user)
            accountId: <accountId>, // You can find it on the profile page in the Memphis UI. This field should be sent only on the cloud version of Memphis; otherwise it will be ignored
            connectionToken: "<broker-token>", // you will get it on application type user creation
            password: "<string>", // depends on how Memphis is deployed - the default is connection-token-based authentication
            reconnect: true, // defaults to true
            maxReconnect: 10, // defaults to 10
            reconnectIntervalMs: 1500, // defaults to 1500
            timeoutMs: 15000, // defaults to 15000
            // for TLS connection:
            keyFile: '<key-client.pem>',
            certFile: '<cert-client.pem>',
            caFile: '<rootCA.pem>',
            suppressLogs: false // defaults to false - indicates whether to suppress logs or not
      });

Nest injection

@Module({
    imports: [MemphisModule.register()],
})

class ConsumerModule {
    constructor(private memphis: MemphisService) {}

    startConnection() {
        (async () => { // arrow function keeps `this` bound to the class instance
            let memphisConnection: Memphis;

            try {
               memphisConnection = await this.memphis.connect({
                    host: "<memphis-host>",
                    username: "<application type username>",
                    connectionToken: "<broker-token>",
                });
            } catch (ex) {
                console.log(ex);
                memphisConnection?.close(); // the connection may be undefined if connect() threw
            }
        })();
    }
}

Once connected, all of the functionality offered by Memphis is available.

Disconnecting from Memphis

To disconnect from Memphis, call close() on the memphis object.

memphisConnection.close();

Creating a Station

Stations that do not exist will be created automatically by the SDK on the first producer/consumer connection, with default values.

If a station already exists, nothing happens and the new configuration will not be applied.

const station = await memphis.station({
    name: '<station-name>',
    schemaName: '<schema-name>',
    retentionType: memphis.retentionTypes.MAX_MESSAGE_AGE_SECONDS, // defaults to memphis.retentionTypes.MAX_MESSAGE_AGE_SECONDS
    retentionValue: 604800, // defaults to 604800
    storageType: memphis.storageTypes.DISK, // defaults to memphis.storageTypes.DISK
    replicas: 1, // defaults to 1
    idempotencyWindowMs: 0, // defaults to 120000
    sendPoisonMsgToDls: true, // defaults to true
    sendSchemaFailedMsgToDls: true, // defaults to true
    tieredStorageEnabled: false, // defaults to false
    partitionsNumber: 1, // defaults to 1
    dlsStation: '<station-name>' // defaults to "" (no DLS station) - if set, DLS events will be sent to this station as well
});

Creating a station with NestJS dependency injection

@Module({
    imports: [MemphisModule.register()],
})

class StationModule {
    constructor(private memphis: MemphisService) { }

    createStation() {
        (async () => { // arrow function keeps `this` bound to the class instance
                  const station = await this.memphis.station({
                        name: "<station-name>",
                        schemaName: "<schema-name>",
                        retentionType: memphis.retentionTypes.MAX_MESSAGE_AGE_SECONDS, // defaults to memphis.retentionTypes.MAX_MESSAGE_AGE_SECONDS
                        retentionValue: 604800, // defaults to 604800
                        storageType: memphis.storageTypes.DISK, // defaults to memphis.storageTypes.DISK
                        replicas: 1, // defaults to 1
                        idempotencyWindowMs: 0, // defaults to 120000
                        sendPoisonMsgToDls: true, // defaults to true
                        sendSchemaFailedMsgToDls: true, // defaults to true
                        tieredStorageEnabled: false, // defaults to false
                        dlsStation: '<station-name>' // defaults to "" (no DLS station) - if set, DLS events will be sent to this station as well
                  });
        })();
    }
}

Retention types

Memphis currently supports the following types of retention:

memphis.retentionTypes.MAX_MESSAGE_AGE_SECONDS;

Means that every message persists for the value set in the retention value field (in seconds).

memphis.retentionTypes.MESSAGES;

Means that once the maximum number of saved messages (set in the retention value) is reached, the oldest messages will be deleted.

memphis.retentionTypes.BYTES;

Means that once the maximum number of saved bytes (set in the retention value) is reached, the oldest messages will be deleted.

memphis.retentionTypes.ACK_BASED; // for cloud users only

Means that after a message is acked by all interested consumer groups, it will be deleted from the station.

Retention Values

The retention values are directly related to the retention types mentioned above, where the values vary according to the type of retention chosen.

All retention values are of type int but with different representations as follows:

memphis.retentionTypes.MAX_MESSAGE_AGE_SECONDS is represented in seconds, memphis.retentionTypes.MESSAGES in a number of messages, memphis.retentionTypes.BYTES in a number of bytes, and memphis.retentionTypes.ACK_BASED does not use the retentionValue param at all.

After these limits are reached, the oldest messages will be deleted.
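
For example, a minimal sketch (the station name and value are illustrative) of a station that keeps at most the last 1,000 messages:

// Sketch: keep at most 1,000 messages; once the limit is reached, the oldest are deleted.
const station = await memphis.station({
    name: 'orders-station', // illustrative name
    retentionType: memphis.retentionTypes.MESSAGES,
    retentionValue: 1000
});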

Storage types

Memphis currently supports the following types of message storage:

memphis.storageTypes.DISK;

Means that messages persist on disk

memphis.storageTypes.MEMORY;

Means that messages persist in main memory
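
For example, a minimal sketch (the station name is illustrative) of a station whose messages are kept in memory rather than on disk:

// Sketch: memory-backed station; messages persist in main memory.
const station = await memphis.station({
    name: 'metrics-station', // illustrative name
    storageType: memphis.storageTypes.MEMORY
});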

Destroying a Station

Destroying a station will remove all its resources (producers/consumers)

await station.destroy();

Creating a new schema

await memphisConnection.createSchema({schemaName: "<schema-name>", schemaType: "<schema-type>", schemaFilePath: "<schema-file-path>" });
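
As a minimal sketch, assuming a JSON Schema file on disk and that 'json' is the type identifier for JSON Schema validation (the schema name and file path are illustrative):

// Sketch: register a JSON Schema stored in ./users-schema.json (illustrative path).
await memphisConnection.createSchema({
    schemaName: 'users-schema',           // illustrative name
    schemaType: 'json',                   // assumption: 'json' selects JSON Schema validation
    schemaFilePath: './users-schema.json' // illustrative path
});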

Enforcing a schema on an existing Station

await memphisConnection.enforceSchema({ name: '<schema-name>', stationName: '<station-name>' });

Deprecated - Use enforceSchema instead

await memphisConnection.attachSchema({ name: '<schema-name>', stationName: '<station-name>' });

Detaching a schema from Station

await memphisConnection.detachSchema({ stationName: '<station-name>' });

Produce and Consume messages

The most common client operations are produce to send messages and consume to receive messages.

Messages are published to a station and consumed from it by creating a consumer. Consumers are pull-based and consume all the messages in a station, unless you are using a consumer group, in which case messages are spread across all members of the group.

Memphis messages are payload agnostic. Payloads are Uint8Arrays.
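
Since payloads travel as Uint8Arrays, an object or string can be encoded before producing; a minimal sketch using Node's built-in TextEncoder (the payload content is illustrative):

// Sketch: encode a JSON object into a Uint8Array payload before producing.
const payload = new TextEncoder().encode(JSON.stringify({ orderId: 42 })); // illustrative payload
await producer.produce({ message: payload });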

In order to stop getting messages, you have to call consumer.destroy(). Destroy will terminate regardless of whether there are messages in flight for the client.

Creating a Producer

const producer = await memphisConnection.producer({
    stationName: '<station-name>',
    producerName: '<producer-name>',
});

Creating producers with NestJS dependency injection

@Module({
    imports: [MemphisModule.register()],
})

class ProducerModule {
    constructor(private memphis: MemphisService) { }

    createProducer() {
        (async () => { // arrow function keeps `this` bound to the class instance
                const producer = await this.memphis.producer({
                    stationName: "<station-name>",
                    producerName: "<producer-name>"
                });
        })();
    }
}

Producing a message

Producing without creating a producer first. In cases where extra performance is needed, the recommended way is to create a producer first and produce messages using its produce function.

await memphisConnection.produce({
        stationName: '<station-name>',
        producerName: '<producer-name>',
        message: 'Uint8Arrays/object/string/DocumentNode graphql', // Uint8Arrays/object (schema validated station - protobuf) or Uint8Arrays/object (schema validated station - json schema) or Uint8Arrays/string/DocumentNode graphql (schema validated station - graphql schema) or Uint8Arrays/object (schema validated station - avro schema)
        ackWaitSec: 15, // defaults to 15
        asyncProduce: true, // defaults to true. For better performance. The client won't block requests while waiting for an acknowledgment.
        headers: headers, // defaults to empty
        msgId: 'id', // defaults to null
        producerPartitionKey: "key", // produce to specific partition. defaults to null
        producerPartitionNumber: -1 // produce to specific partition number. defaults to -1
});

Creating a producer first

await producer.produce({
    message: 'Uint8Arrays/object/string/DocumentNode graphql', // Uint8Arrays/object (schema validated station - protobuf) or Uint8Arrays/object (schema validated station - json schema) or Uint8Arrays/string/DocumentNode graphql (schema validated station - graphql schema) or Uint8Arrays/object (schema validated station - avro schema)
    ackWaitSec: 15, // defaults to 15,
    producerPartitionKey: "key", // produce to specific partition. defaults to null
    producerPartitionNumber: -1 // produce to specific partition number. defaults to -1
});

Note: When producing to a station with more than one partition, the producer will produce messages in a Round Robin fashion between the different partitions.

Add Headers

const headers = memphis.headers();
headers.add('<key>', '<value>');
await producer.produce({
    message: 'Uint8Arrays/object/string/DocumentNode graphql', // Uint8Arrays/object (schema validated station - protobuf) or Uint8Arrays/object (schema validated station - json schema) or Uint8Arrays/string/DocumentNode graphql (schema validated station - graphql schema)
    headers: headers // defaults to empty
});

or

const headers = { key: 'value' };
await producer.produce({
    message: 'Uint8Arrays/object/string/DocumentNode graphql', // Uint8Arrays/object (schema validated station - protobuf) or Uint8Arrays/object (schema validated station - json schema) or Uint8Arrays/string/DocumentNode graphql (schema validated station - graphql schema) or Uint8Arrays/object (schema validated station - avro schema)
    headers: headers
});

Async produce

For better performance, the client won't block requests while waiting for an acknowledgment. Defaults to true.

await producer.produce({
    message: 'Uint8Arrays/object/string/DocumentNode graphql', // Uint8Arrays/object (schema validated station - protobuf) or Uint8Arrays/object (schema validated station - json schema) or Uint8Arrays/string/DocumentNode graphql (schema validated station - graphql schema) or Uint8Arrays/object (schema validated station - avro schema)
    ackWaitSec: 15, // defaults to 15
    asyncProduce: true, // defaults to true. For better performance. The client won't block requests while waiting for an acknowledgment
    producerPartitionKey: "key", // produce to specific partition. defaults to null
    producerPartitionNumber: -1 // produce to specific partition number. defaults to -1
});

Message ID

Stations are idempotent by default for 2 minutes (configurable). Idempotency is achieved by adding a message ID:

await producer.produce({
    message: 'Uint8Arrays/object/string/DocumentNode graphql', // Uint8Arrays/object (schema validated station - protobuf) or Uint8Arrays/object (schema validated station - json schema) or Uint8Arrays/string/DocumentNode graphql (schema validated station - graphql schema) or Uint8Arrays/object (schema validated station - avro schema)
    ackWaitSec: 15, // defaults to 15
    msgId: 'id' // defaults to null
});

Destroying a Producer

await producer.destroy();

Creating a Consumer

const consumer = await memphisConnection.consumer({
    stationName: '<station-name>',
    consumerName: '<consumer-name>',
    consumerGroup: '<group-name>', // defaults to the consumer name.
    pullIntervalMs: 1000, // defaults to 1000
    batchSize: 10, // defaults to 10
    batchMaxTimeToWaitMs: 5000, // defaults to 5000
    maxAckTimeMs: 30000, // defaults to 30000
    maxMsgDeliveries: 10, // defaults to 10
    startConsumeFromSequence: 1, // start consuming from a specific sequence. defaults to 1
    lastMessages: -1, // consume the last N messages, defaults to -1 (all messages in the station)
    consumerPartitionKey: "key", // consume by specific partition key. Defaults to null
    consumerPartitionNumber: -1 // consume by specific partition number. Defaults to -1
});

Note: When consuming from a station with more than one partition, the consumer will consume messages in Round Robin fashion from the different partitions.

Passing context to message handlers

consumer.setContext({ key: 'value' });

Processing messages

consumer.on('message', (message, context) => {
    // processing
    console.log(message.getData());
    message.ack();
});

Fetch a single batch of messages

const msgs = await memphis.fetchMessages({
    stationName: '<station-name>',
    consumerName: '<consumer-name>',
    consumerGroup: '<group-name>', // defaults to the consumer name.
    batchSize: 10, // defaults to 10
    batchMaxTimeToWaitMs: 5000, // defaults to 5000
    maxAckTimeMs: 30000, // defaults to 30000
    maxMsgDeliveries: 10, // defaults to 10
    startConsumeFromSequence: 1, // start consuming from a specific sequence. defaults to 1
    lastMessages: -1, // consume the last N messages, defaults to -1 (all messages in the station)
    consumerPartitionKey: "key", // consume by specific partition key. Defaults to null
    consumerPartitionNumber: -1 // consume by specific partition number. Defaults to -1
});

Fetch a single batch of messages after creating a consumer

const msgs = await consumer.fetch({
    batchSize: 10, // defaults to 10
    consumerPartitionKey: "key", // fetch by specific partition key. Defaults to null
    consumerPartitionNumber: -1 // fetch by specific partition number. Defaults to -1
});

To set up a connection in NestJS

import { NestFactory } from '@nestjs/core';
import { MicroserviceOptions } from '@nestjs/microservices';
import { MemphisServer } from 'memphis-dev';
// AppModule below refers to your application's root module (its import path is project-specific)

async function bootstrap() {
  const app = await NestFactory.createMicroservice<MicroserviceOptions>(
    AppModule,
    {
      strategy: new MemphisServer({
        host: '<memphis-host>',
        username: '<application type username>',
        connectionToken: '<broker-token>'
      }),
    },
  );

  await app.listen();
}
bootstrap();

To consume messages in NestJS

import { MemphisConsume, Message } from 'memphis-dev';

export class Controller {

    @MemphisConsume({
        stationName: '<station-name>',
        consumerName: '<consumer-name>',
        consumerGroup: ''
    })
    async messageHandler(message: Message) {
        console.log(message.getData().toString());
        message.ack();
    }
}

Acknowledge a message

Acknowledging a message indicates to the Memphis server that it should not re-send the same message to the same consumer / consumer group.

message.ack();

Delay and resend the message after a given duration

Delay the message and tell the Memphis server to re-send the same message to the same consumer group. The message will be redelivered only if Consumer.maxMsgDeliveries has not been reached yet.

message.delay(delayInMilliseconds);

Get message payload

As Uint8Array

msg = message.getData();

As JSON

msg = message.getDataAsJson();

Get headers

Get headers per message

headers = message.getHeaders();

Get message sequence number

Get the sequence number of a message

sequenceNumber = message.getSequenceNumber();

Catching async errors

consumer.on('error', (error) => {
    // error handling
});

Stopping a Consumer

Stopping a consumer simply stops it from consuming messages in the code.

If you don't want a consumer's listeners to receive messages anymore (even if messages are still being produced to its station), simply stop the consumer.

await consumer.stop();

Destroying a Consumer

This is different from stopping a consumer. Destroying a consumer removes it from the station and the broker itself. It will no longer exist.

await consumer.destroy();

Check if the broker is connected

memphisConnection.isConnected();

