ioredis

A robust, performance-focused and full-featured Redis client for Node.js.

Supports Redis >= 2.6.12. Completely compatible with Redis 7.x.

Features

ioredis is a robust, full-featured Redis client used by Alibaba, one of the world's biggest online commerce companies, and by many other awesome companies.

  1. Full-featured. It supports Cluster, Sentinel, Streams, Pipelining, and of course Lua scripting, Redis Functions, and Pub/Sub (with support for binary messages).
  2. High performance 🚀.
  3. Delightful API 😄. It works with Node callbacks and native promises.
  4. Transformation of command arguments and replies.
  5. Transparent key prefixing.
  6. Abstraction for Lua scripting, allowing you to define custom commands.
  7. Supports binary data.
  8. Supports TLS 🔒.
  9. Supports offline queue and ready checking.
  10. Supports ES6 types, such as Map and Set.
  11. Supports GEO commands 📍.
  12. Supports Redis ACL.
  13. Sophisticated error handling strategy.
  14. Supports NAT mapping.
  15. Supports autopipelining.

ioredis is written in 100% TypeScript, and official type declarations are provided:

TypeScript Screenshot

Versions

Version | Branch | Node.js Version | Redis Version
--- | --- | --- | ---
5.x.x (latest) | main | >= 12 | 2.6.12 ~ latest
4.x.x | v4 | >= 6 | 2.6.12 ~ 7

Refer to CHANGELOG.md for features and bug fixes introduced in v5.

🚀 Upgrading from v4 to v5

Links


Quick Start

Install

npm install ioredis

In a TypeScript project, you may want to add TypeScript declarations for Node.js:

npm install --save-dev @types/node

Basic Usage

// Import ioredis.
// You can also use `import { Redis } from "ioredis"`
// if your project is a TypeScript project.
// Note that `import Redis from "ioredis"` is still supported,
// but will be deprecated in the next major version.
const Redis = require("ioredis");

// Create a Redis instance.
// By default, it will connect to localhost:6379.
// We are going to cover how to specify connection options soon.
const redis = new Redis();

redis.set("mykey", "value"); // Returns a promise which resolves to "OK" when the command succeeds.

// ioredis supports the node.js callback style
redis.get("mykey", (err, result) => {
  if (err) {
    console.error(err);
  } else {
    console.log(result); // Prints "value"
  }
});

// Or ioredis returns a promise if the last argument isn't a function
redis.get("mykey").then((result) => {
  console.log(result); // Prints "value"
});

redis.zadd("sortedSet", 1, "one", 2, "dos", 4, "quatro", 3, "three");
redis.zrange("sortedSet", 0, 2, "WITHSCORES").then((elements) => {
  // ["one", "1", "dos", "2", "three", "3"] as if the command was `redis> ZRANGE sortedSet 0 2 WITHSCORES`
  console.log(elements);
});

// All arguments are passed directly to the redis server,
// so technically ioredis supports all Redis commands.
// The format is: redis[SOME_REDIS_COMMAND_IN_LOWERCASE](ARGUMENTS_ARE_JOINED_INTO_COMMAND_STRING)
// so the following statement is equivalent to the CLI: `redis> SET mykey hello EX 10`
redis.set("mykey", "hello", "EX", 10);

See the examples/ folder for more examples.

All Redis commands are supported. See the documentation for details.

Connect to Redis

When a new Redis instance is created, a connection to Redis will be created at the same time. You can specify which Redis to connect to by:

new Redis(); // Connect to 127.0.0.1:6379
new Redis(6380); // 127.0.0.1:6380
new Redis(6379, "192.168.1.1"); // 192.168.1.1:6379
new Redis("/tmp/redis.sock");
new Redis({
  port: 6379, // Redis port
  host: "127.0.0.1", // Redis host
  username: "default", // needs Redis >= 6
  password: "my-top-secret",
  db: 0, // Defaults to 0
});

You can also specify connection options as a redis:// URL or rediss:// URL when using TLS encryption:

// Connect to 127.0.0.1:6380, db 4, using password "authpassword":
new Redis("redis://:[email protected]:6380/4");

// Username can also be passed via URI.
new Redis("redis://username:[email protected]:6380/4");

See API Documentation for all available options.

Pub/Sub

Redis provides several commands for developers to implement the Publish–subscribe pattern. There are two roles in this pattern: publisher and subscriber. Publishers are not programmed to send their messages to specific subscribers. Rather, published messages are characterized into channels, without knowledge of what (if any) subscribers there may be.

By leveraging Node.js's built-in events module, ioredis makes pub/sub very straightforward to use. Below is a simple example that consists of two files, one is publisher.js that publishes messages to a channel, the other is subscriber.js that listens for messages on specific channels.

// publisher.js

const Redis = require("ioredis");
const redis = new Redis();

setInterval(() => {
  const message = { foo: Math.random() };
  // Publish to my-channel-1 or my-channel-2 randomly.
  const channel = `my-channel-${1 + Math.round(Math.random())}`;

  // Message can be either a string or a buffer
  redis.publish(channel, JSON.stringify(message));
  console.log("Published %s to %s", message, channel);
}, 1000);
// subscriber.js

const Redis = require("ioredis");
const redis = new Redis();

redis.subscribe("my-channel-1", "my-channel-2", (err, count) => {
  if (err) {
    // Just like other commands, subscribe() can fail,
    // e.g. due to network issues.
    console.error("Failed to subscribe: %s", err.message);
  } else {
    // `count` represents the number of channels this client is currently subscribed to.
    console.log(
      `Subscribed successfully! This client is currently subscribed to ${count} channels.`
    );
  }
});

redis.on("message", (channel, message) => {
  console.log(`Received ${message} from ${channel}`);
});

// There's also an event called 'messageBuffer', which is the same as 'message' except
// it returns buffers instead of strings.
// It's useful when the messages are binary data.
redis.on("messageBuffer", (channel, message) => {
  // Both `channel` and `message` are buffers.
  console.log(channel, message);
});

It's worth noting that a connection (aka a Redis instance) can't play both roles at the same time. More specifically, when a client issues subscribe() or psubscribe(), it enters the "subscriber" mode. From that point, only commands that modify the subscription set are valid. Namely, they are: subscribe, psubscribe, unsubscribe, punsubscribe, ping, and quit. When the subscription set is empty (via unsubscribe/punsubscribe), the connection is put back into the regular mode.

If you want to do pub/sub in the same file/process, you should create a separate connection:

const Redis = require("ioredis");
const sub = new Redis();
const pub = new Redis();

sub.subscribe(/* ... */); // From now, `sub` enters the subscriber mode.
sub.on("message" /* ... */);

setInterval(() => {
  // `pub` can be used to publish messages, or send other regular commands (e.g. `hgetall`)
  // because it's not in the subscriber mode.
  pub.publish(/* ... */);
}, 1000);

PSUBSCRIBE is also supported in a similar way when you want to subscribe to all channels whose names match a pattern:

redis.psubscribe("pat?ern", (err, count) => {});

// Event names are "pmessage"/"pmessageBuffer" instead of "message/messageBuffer".
redis.on("pmessage", (pattern, channel, message) => {});
redis.on("pmessageBuffer", (pattern, channel, message) => {});

Streams

Redis v5 introduces a new data type called streams. It doubles as a communication channel for building streaming architectures and as a log-like data structure for persisting data. With ioredis, the usage is straightforward. Say we have a producer that publishes messages to a stream with redis.xadd("mystream", "*", "randomValue", Math.random()) (the official documentation of Streams is a good starting point for understanding the parameters used). To consume the messages, we'll have a consumer with the following code:

const Redis = require("ioredis");
const redis = new Redis();

const processMessage = (message) => {
  console.log("Id: %s. Data: %O", message[0], message[1]);
};

async function listenForMessage(lastId = "$") {
  // `results` is an array, each element of which corresponds to a key.
  // Because we only listen to one key (mystream) here, `results` only contains
  // a single element. See more: https://redis.io/commands/xread#return-value
  const results = await redis.xread("block", 0, "STREAMS", "mystream", lastId);
  const [key, messages] = results[0]; // `key` equals to "mystream"

  messages.forEach(processMessage);

  // Pass the last id of the results to the next round.
  await listenForMessage(messages[messages.length - 1][0]);
}

listenForMessage();
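
For completeness, here is a minimal producer sketch matching the xadd call described above (the one-second interval and the field name are only illustrative):

// producer.js (a minimal sketch)
const Redis = require("ioredis");
const redis = new Redis();

setInterval(() => {
  // "*" asks Redis to auto-generate the entry id; the entry carries a single field.
  redis.xadd("mystream", "*", "randomValue", Math.random());
}, 1000);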

Expiration

Redis can set a timeout to expire your key; after the timeout has expired, the key will be automatically deleted. (See the official EXPIRE documentation to better understand the available parameters.) To set your key to expire in 60 seconds, we can write:

redis.set("key", "data", "EX", 60);
// Equivalent to the redis command "SET key data EX 60", because with the ioredis set method,
// all arguments are passed directly to the redis server.
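
Alternatively, you can set the key first and attach a timeout afterwards with EXPIRE, then inspect the remaining time with TTL. A small sketch:

redis.set("key", "data");
redis.expire("key", 60); // the key now expires in 60 seconds
redis.ttl("key").then((seconds) => {
  // `seconds` is the remaining time to live, e.g. 60 (-1 means no timeout is set)
});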

Handle Binary Data

Binary data support is out of the box. Pass buffers to send binary data:

redis.set("foo", Buffer.from([0x62, 0x75, 0x66]));

Every command that returns a bulk string has a variant command with a Buffer suffix. The variant command returns a buffer instead of a UTF-8 string:

const result = await redis.getBuffer("foo");
// result is `<Buffer 62 75 66>`

It's worth noting that you don't need the Buffer-suffixed variant in order to send binary data. That means in most cases you should just use redis.set() instead of redis.setBuffer(), unless you want to get the old value with the GET parameter:

const result = await redis.setBuffer("foo", "new value", "GET");
// result is `<Buffer 62 75 66>` as `GET` indicates returning the old value.

Pipelining

If you want to send a batch of commands (e.g. > 5), you can use pipelining to queue the commands in memory and then send them to Redis all at once. This way the performance improves by 50%~300% (See benchmark section).

redis.pipeline() creates a Pipeline instance. You can call any Redis commands on it just like the Redis instance. The commands are queued in memory and flushed to Redis by calling the exec method:

const pipeline = redis.pipeline();
pipeline.set("foo", "bar");
pipeline.del("cc");
pipeline.exec((err, results) => {
  // `err` is always null, and `results` is an array of responses
  // corresponding to the sequence of queued commands.
  // Each response follows the format `[err, result]`.
});

// You can even chain the commands:
redis
  .pipeline()
  .set("foo", "bar")
  .del("cc")
  .exec((err, results) => {});

// `exec` also returns a Promise:
const promise = redis.pipeline().set("foo", "bar").get("foo").exec();
promise.then((result) => {
  // result === [[null, 'OK'], [null, 'bar']]
});

Each chained command can also have a callback, which will be invoked when the command gets a reply:

redis
  .pipeline()
  .set("foo", "bar")
  .get("foo", (err, result) => {
    // result === 'bar'
  })
  .exec((err, result) => {
    // result[1][1] === 'bar'
  });

In addition to adding commands to the pipeline queue individually, you can also pass an array of commands and arguments to the constructor:

redis
  .pipeline([
    ["set", "foo", "bar"],
    ["get", "foo"],
  ])
  .exec(() => {
    /* ... */
  });

The #length property shows how many commands are in the pipeline:

const length = redis.pipeline().set("foo", "bar").get("foo").length;
// length === 2

Transaction

Most of the time, the transaction commands multi & exec are used together with pipeline. Therefore, when multi is called, a Pipeline instance is created automatically by default, so you can use multi just like pipeline:

redis
  .multi()
  .set("foo", "bar")
  .get("foo")
  .exec((err, results) => {
    // results === [[null, 'OK'], [null, 'bar']]
  });

If there's a syntax error in the transaction's command chain (e.g. wrong number of arguments, wrong command name, etc), then none of the commands would be executed, and an error is returned:

redis
  .multi()
  .set("foo")
  .set("foo", "new value")
  .exec((err, results) => {
    // err:
    //  { [ReplyError: EXECABORT Transaction discarded because of previous errors.]
    //    name: 'ReplyError',
    //    message: 'EXECABORT Transaction discarded because of previous errors.',
    //    command: { name: 'exec', args: [] },
    //    previousErrors:
    //     [ { [ReplyError: ERR wrong number of arguments for 'set' command]
    //         name: 'ReplyError',
    //         message: 'ERR wrong number of arguments for \'set\' command',
    //         command: [Object] } ] }
  });

In terms of the interface, multi differs from pipeline in that when specifying a callback to each chained command, the queueing state is passed to the callback instead of the result of the command:

redis
  .multi()
  .set("foo", "bar", (err, result) => {
    // result === 'QUEUED'
  })
  .exec(/* ... */);

If you want to use transaction without pipeline, pass { pipeline: false } to multi, and every command will be sent to Redis immediately without waiting for an exec invocation:

redis.multi({ pipeline: false });
redis.set("foo", "bar");
redis.get("foo");
redis.exec((err, result) => {
  // result === [[null, 'OK'], [null, 'bar']]
});

The constructor of multi also accepts a batch of commands:

redis
  .multi([
    ["set", "foo", "bar"],
    ["get", "foo"],
  ])
  .exec(() => {
    /* ... */
  });

Inline transactions are supported by pipeline, which means you can group a subset of commands in the pipeline into a transaction:

redis
  .pipeline()
  .get("foo")
  .multi()
  .set("foo", "bar")
  .get("foo")
  .exec()
  .get("foo")
  .exec();

Lua Scripting

ioredis supports all of the scripting commands such as EVAL, EVALSHA and SCRIPT. However, they are tedious to use in real-world scenarios since developers have to take care of script caching and decide when to use EVAL and when to use EVALSHA. ioredis exposes a defineCommand method to make scripting much easier to use:

const redis = new Redis();

// This will define a command myecho:
redis.defineCommand("myecho", {
  numberOfKeys: 2,
  lua: "return {KEYS[1],KEYS[2],ARGV[1],ARGV[2]}",
});

// Now `myecho` can be used just like any other ordinary command,
// and ioredis will try to use `EVALSHA` internally when possible for better performance.
redis.myecho("k1", "k2", "a1", "a2", (err, result) => {
  // result === ['k1', 'k2', 'a1', 'a2']
});

// `myechoBuffer` is also defined automatically to return buffers instead of strings:
redis.myechoBuffer("k1", "k2", "a1", "a2", (err, result) => {
  // result[0] equals to Buffer.from('k1');
});

// And of course it works with pipeline:
redis.pipeline().set("foo", "bar").myecho("k1", "k2", "a1", "a2").exec();

Dynamic Keys

If the number of keys can't be determined when defining a command, you can omit the numberOfKeys property and pass the number of keys as the first argument when you call the command:

redis.defineCommand("echoDynamicKeyNumber", {
  lua: "return {KEYS[1],KEYS[2],ARGV[1],ARGV[2]}",
});

// Now you have to pass the number of keys as the first argument every time
// you invoke the `echoDynamicKeyNumber` command:
redis.echoDynamicKeyNumber(2, "k1", "k2", "a1", "a2", (err, result) => {
  // result === ['k1', 'k2', 'a1', 'a2']
});

As Constructor Options

Besides defineCommand(), you can also define custom commands with the scripts constructor option:

const redis = new Redis({
  scripts: {
    myecho: {
      numberOfKeys: 2,
      lua: "return {KEYS[1],KEYS[2],ARGV[1],ARGV[2]}",
    },
  },
});

TypeScript Usages

You can refer to the example for how to declare your custom commands.

Transparent Key Prefixing

This feature allows you to specify a string that will automatically be prepended to all the keys in a command, which makes it easier to manage your key namespaces.

Warning: This feature won't apply to commands like KEYS and SCAN that take patterns rather than actual keys (#239), and this feature also won't apply to the replies of commands even if they are key names (#325).

const fooRedis = new Redis({ keyPrefix: "foo:" });
fooRedis.set("bar", "baz"); // Actually sends SET foo:bar baz

fooRedis.defineCommand("myecho", {
  numberOfKeys: 2,
  lua: "return {KEYS[1],KEYS[2],ARGV[1],ARGV[2]}",
});

// Works well with pipelining/transaction
fooRedis
  .pipeline()
  // Sends SORT foo:list BY foo:weight_*->fieldname
  .sort("list", "BY", "weight_*->fieldname")
  // Supports custom commands
  // Sends EVALSHA xxx foo:k1 foo:k2 a1 a2
  .myecho("k1", "k2", "a1", "a2")
  .exec();

Transforming Arguments & Replies

Most Redis commands take one or more Strings as arguments, and replies are sent back as a single String or an Array of Strings. However, sometimes you may want something different. For instance, it would be more convenient if the HGETALL command returned a hash (e.g. { key1: val1, key2: val2 }) rather than an array of key values (e.g. [key1, val1, key2, val2]).

ioredis has a flexible system for transforming arguments and replies. There are two types of transformers, argument transformer and reply transformer:

const Redis = require("ioredis");

// Here's the built-in argument transformer converting
// hmset('key', { k1: 'v1', k2: 'v2' })
// or
// hmset('key', new Map([['k1', 'v1'], ['k2', 'v2']]))
// into
// hmset('key', 'k1', 'v1', 'k2', 'v2')
Redis.Command.setArgumentTransformer("hmset", (args) => {
  if (args.length === 2) {
    if (args[1] instanceof Map) {
      // utils is an internal module of ioredis
      return [args[0], ...utils.convertMapToArray(args[1])];
    }
    if (typeof args[1] === "object" && args[1] !== null) {
      return [args[0], ...utils.convertObjectToArray(args[1])];
    }
  }
  return args;
});

// Here's the built-in reply transformer converting the HGETALL reply
// ['k1', 'v1', 'k2', 'v2']
// into
// { k1: 'v1', 'k2': 'v2' }
Redis.Command.setReplyTransformer("hgetall", (result) => {
  if (Array.isArray(result)) {
    const obj = {};
    for (let i = 0; i < result.length; i += 2) {
      obj[result[i]] = result[i + 1];
    }
    return obj;
  }
  return result;
});

There are three built-in transformers, two argument transformers for hmset & mset and a reply transformer for hgetall. Transformers for hmset and hgetall were mentioned above, and the transformer for mset is similar to the one for hmset:

redis.mset({ k1: "v1", k2: "v2" });
redis.get("k1", (err, result) => {
  // result === 'v1';
});

redis.mset(
  new Map([
    ["k3", "v3"],
    ["k4", "v4"],
  ])
);
redis.get("k3", (err, result) => {
  // result === 'v3';
});

Another useful example of a reply transformer is one that changes hgetall to return an array of arrays instead of an object, which avoids an unwanted conversion of hash keys to strings when dealing with binary hash keys:

Redis.Command.setReplyTransformer("hgetall", (result) => {
  const arr = [];
  for (let i = 0; i < result.length; i += 2) {
    arr.push([result[i], result[i + 1]]);
  }
  return arr;
});
redis.hset("h1", Buffer.from([0x01]), Buffer.from([0x02]));
redis.hset("h1", Buffer.from([0x03]), Buffer.from([0x04]));
redis.hgetallBuffer("h1", (err, result) => {
  // result === [ [ <Buffer 01>, <Buffer 02> ], [ <Buffer 03>, <Buffer 04> ] ];
});

Monitor

Redis supports the MONITOR command, which lets you see all commands received by the Redis server across all client connections, including from other client libraries and other computers.

The monitor method returns a monitor instance. After you send the MONITOR command, no other commands are valid on that connection. ioredis will emit a monitor event for every new monitor message that comes across. The callback for the monitor event takes a timestamp from the Redis server and an array of command arguments.

Here is a simple example:

redis.monitor((err, monitor) => {
  monitor.on("monitor", (time, args, source, database) => {});
});

Here is another example illustrating an async function and monitor.disconnect():

async () => {
  const monitor = await redis.monitor();
  monitor.on("monitor", console.log);
  // Any other tasks
  monitor.disconnect();
};

Streamify Scanning

Redis 2.8 added the SCAN command to incrementally iterate through the keys in the database. It's different from KEYS in that SCAN only returns a small number of elements each call, so it can be used in production without the downside of blocking the server for a long time. However, it requires recording the cursor on the client side each time the SCAN command is called in order to iterate through all the keys correctly. Since it's a relatively common use case, ioredis provides a streaming interface for the SCAN command to make things much easier. A readable stream can be created by calling scanStream:

const redis = new Redis();
// Create a readable stream (object mode)
const stream = redis.scanStream();
stream.on("data", (resultKeys) => {
  // `resultKeys` is an array of strings representing key names.
  // Note that resultKeys may contain 0 keys, and that it will sometimes
  // contain duplicates due to SCAN's implementation in Redis.
  for (let i = 0; i < resultKeys.length; i++) {
    console.log(resultKeys[i]);
  }
});
stream.on("end", () => {
  console.log("all keys have been visited");
});

scanStream accepts an options object, with which you can specify the MATCH pattern, the TYPE filter, and the COUNT argument:

const stream = redis.scanStream({
  // only returns keys following the pattern of `user:*`
  match: "user:*",
  // only return objects that match a given type,
  // (requires Redis >= 6.0)
  type: "zset",
  // returns approximately 100 elements per call
  count: 100,
});

Just like other commands, scanStream has a binary version, scanBufferStream, which returns an array of buffers. It's useful when the key names are not UTF-8 strings.
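
A small sketch (the match pattern is only illustrative):

const bufferStream = redis.scanBufferStream({ match: "user:*" });
bufferStream.on("data", (resultKeys) => {
  // Each element is a Buffer containing a key name.
  resultKeys.forEach((key) => console.log(key));
});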

There are also hscanStream, zscanStream and sscanStream to iterate through elements in a hash, zset and set. The interface of each is similar to scanStream except the first argument is the key name:

const stream = redis.hscanStream("myhash", {
  match: "age:??",
});

You can learn more from the Redis documentation.

Useful Tips: It's pretty common to perform an async task in the data handler, and we'd like the scanning process to be paused until the async task has finished. Stream#pause() and Stream#resume() do the trick. For example, if we want to migrate data from Redis to MySQL:

const stream = redis.scanStream();
stream.on("data", (resultKeys) => {
  // Pause the stream from scanning more keys until we've migrated the current keys.
  stream.pause();

  Promise.all(resultKeys.map(migrateKeyToMySQL)).then(() => {
    // Resume the stream here.
    stream.resume();
  });
});

stream.on("end", () => {
  console.log("done migration");
});

Auto-reconnect

By default, ioredis will try to reconnect when the connection to Redis is lost except when the connection is closed manually by redis.disconnect() or redis.quit().

You can flexibly control how long to wait before reconnecting after a disconnection using the retryStrategy option:

const redis = new Redis({
  // This is the default value of `retryStrategy`
  retryStrategy(times) {
    const delay = Math.min(times * 50, 2000);
    return delay;
  },
});

retryStrategy is a function that will be called when the connection is lost. The argument times means this is the nth reconnection being made and the return value represents how long (in ms) to wait to reconnect. When the return value isn't a number, ioredis will stop trying to reconnect, and the connection will be lost forever if the user doesn't call redis.connect() manually.
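
For instance, here is a sketch of a strategy that gives up after a fixed number of attempts (the threshold of 10 is arbitrary):

const redis = new Redis({
  retryStrategy(times) {
    if (times > 10) {
      // Returning a non-number stops the retries; call redis.connect() later to reconnect manually.
      return null;
    }
    return Math.min(times * 50, 2000);
  },
});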

When reconnected, the client will auto subscribe to channels that the previous connection subscribed to. This behavior can be disabled by setting the autoResubscribe option to false.

And if the previous connection has some unfulfilled commands (most likely blocking commands such as brpop and blpop), the client will resend them when reconnected. This behavior can be disabled by setting the autoResendUnfulfilledCommands option to false.
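
If you prefer to handle resubscription and resending yourself, both behaviors can be turned off. A minimal sketch:

const redis = new Redis({
  autoResubscribe: false, // don't resubscribe to channels after reconnecting
  autoResendUnfulfilledCommands: false, // don't resend unfulfilled commands after reconnecting
});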

By default, all pending commands will be flushed with an error every 20 retry attempts. That makes sure commands won't wait forever when the connection is down. You can change this behavior by setting maxRetriesPerRequest:

const redis = new Redis({
  maxRetriesPerRequest: 1,
});

Set maxRetriesPerRequest to null to disable this behavior, and every command will wait forever until the connection is alive again (which is the default behavior before ioredis v4).

Reconnect on Error

Besides auto-reconnect when the connection is closed, ioredis supports reconnecting on certain Redis errors using the reconnectOnError option. Here's an example that will reconnect when receiving READONLY error:

const redis = new Redis({
  reconnectOnError(err) {
    const targetError = "READONLY";
    if (err.message.includes(targetError)) {
      // Only reconnect when the error contains "READONLY"
      return true; // or `return 1;`
    }
  },
});

This feature is useful when using Amazon ElastiCache instances with Auto-failover disabled. On these instances, you can test your reconnectOnError handler by manually promoting the replica node to the primary role using the AWS console. Subsequent writes will then fail with the error READONLY. Using reconnectOnError, we can force the connection to reconnect on this error in order to connect to the new master. Furthermore, if reconnectOnError returns 2, ioredis will resend the failed command after reconnecting.
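
As a sketch, returning 2 instead of true tells ioredis to also resend the failed command once reconnected:

const redis = new Redis({
  reconnectOnError(err) {
    if (err.message.includes("READONLY")) {
      // 1 (or true) triggers a reconnect; 2 reconnects and resends the failed command.
      return 2;
    }
    return false;
  },
});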

On ElastiCache instances with Auto-failover enabled, reconnectOnError does not execute. Instead of returning a Redis error, AWS closes all connections to the master endpoint until the new primary node is ready. ioredis reconnects via retryStrategy instead of reconnectOnError after about a minute. On ElastiCache instances with Auto-failover enabled, test failover events with the Failover primary option in the AWS console.

Connection Events

The Redis instance will emit some events about the state of the connection to the Redis server.

Event | Description
--- | ---
connect | emits when a connection is established to the Redis server.
ready | If enableReadyCheck is true, the client will emit ready when the server reports that it is ready to receive commands (e.g. has finished loading data from disk). Otherwise, ready will be emitted immediately after the connect event.
error | emits when an error occurs while connecting. However, ioredis emits all error events silently (only emitting when there's at least one listener) so that your application won't crash if you're not listening to the error event.
close | emits when an established Redis server connection has closed.
reconnecting | emits after close when a reconnection will be made. The argument of the event is the time (in ms) before reconnecting.
end | emits after close when no more reconnections will be made, or the connection fails to be established.
wait | emits when lazyConnect is set and will wait for the first command to be called before connecting.

You can also check out the Redis#status property to get the current connection status.

Besides the above connection events, there are several other custom events:

Event | Description
--- | ---
select | emits when the database is changed. The argument is the new db number.
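
As a small illustration, here is a sketch that listens to a few of these events and inspects the status property:

const redis = new Redis();

redis.on("connect", () => console.log("connected"));
redis.on("ready", () => console.log("ready, status =", redis.status));
redis.on("error", (err) => console.error("connection error", err));
redis.on("reconnecting", (delay) => console.log(`reconnecting in ${delay} ms`));
redis.on("end", () => console.log("no more reconnections will be made"));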

Offline Queue

When a command can't be processed by Redis (being sent before the ready event), by default, it's added to the offline queue and will be executed when it can be processed. You can disable this feature by setting the enableOfflineQueue option to false:

const redis = new Redis({ enableOfflineQueue: false });

TLS Options

Redis doesn't support TLS natively, however if the redis server you want to connect to is hosted behind a TLS proxy (e.g. stunnel) or is offered by a PaaS service that supports TLS connection (e.g. Redis.com), you can set the tls option:

const redis = new Redis({
  host: "localhost",
  tls: {
    // Refer to `tls.connect()` section in
    // https://nodejs.org/api/tls.html
    // for all supported options
    ca: fs.readFileSync("cert.pem"),
  },
});

Alternatively, specify the connection through a rediss:// URL.

const redis = new Redis("rediss://redis.my-service.com");

If you do not want to use a connection string, you can also specify an empty tls: {} object:

const redis = new Redis({
  host: "redis.my-service.com",
  tls: {},
});

TLS Profiles

Warning TLS profiles described in this section are going to be deprecated in the next major version. Please provide TLS options explicitly.

To make configuration easier, we provide a few pre-configured TLS profiles. They can be used by setting the tls option to the profile's name, or by specifying a tls.profile option in case you need to customize some values of the profile.

Profiles:

  • RedisCloudFixed: Contains the CA for Redis.com Cloud fixed subscriptions
  • RedisCloudFlexible: Contains the CA for Redis.com Cloud flexible subscriptions

const redis = new Redis({
  host: "localhost",
  tls: "RedisCloudFixed",
});

const redisWithClientCertificate = new Redis({
  host: "localhost",
  tls: {
    profile: "RedisCloudFixed",
    key: "123",
  },
});

Sentinel

ioredis supports Sentinel out of the box. It works transparently as all features that work when you connect to a single node also work when you connect to a sentinel group. Make sure to run Redis >= 2.8.12 if you want to use this feature. Sentinels have a default port of 26379.

To connect using Sentinel, use:

const redis = new Redis({
  sentinels: [
    { host: "localhost", port: 26379 },
    { host: "localhost", port: 26380 },
  ],
  name: "mymaster",
});

redis.set("foo", "bar");

The arguments passed to the constructor are different from the ones you use to connect to a single node, where:

  • name identifies a group of Redis instances composed of a master and one or more slaves (mymaster in the example);
  • sentinelPassword (optional) password for Sentinel instances.
  • sentinels are a list of sentinels to connect to. The list does not need to enumerate all your sentinel instances, but a few so that if one is down the client will try the next one.
  • role (optional) with a value of slave will return a random slave from the Sentinel group.
  • preferredSlaves (optional) can be used to prefer a particular slave or set of slaves based on priority. It accepts a function or array.
  • enableTLSForSentinelMode (optional) set to true if connecting to sentinel instances that are encrypted

ioredis guarantees that the node you connected to is always a master even after a failover. When a failover happens, instead of trying to reconnect to the failed node (which will be demoted to slave when it's available again), ioredis will ask sentinels for the new master node and connect to it. All commands sent during the failover are queued and will be executed when the new connection is established so that none of the commands will be lost.

It's possible to connect to a slave instead of a master by specifying the option role with the value of slave and ioredis will try to connect to a random slave of the specified master, with the guarantee that the connected node is always a slave. If the current node is promoted to master due to a failover, ioredis will disconnect from it and ask the sentinels for another slave node to connect to.

If you specify the option preferredSlaves along with role: 'slave', ioredis will attempt to use this value when selecting the slave from the pool of available slaves. The value of preferredSlaves should either be a function that accepts an array of available slaves and returns a single result, or an array of slave objects prioritized from the lowest prio value first (the default prio value is 1).

// available slaves format
const availableSlaves = [{ ip: "127.0.0.1", port: "31231", flags: "slave" }];

// preferredSlaves array format
let preferredSlaves = [
  { ip: "127.0.0.1", port: "31231", prio: 1 },
  { ip: "127.0.0.1", port: "31232", prio: 2 },
];

// preferredSlaves function format
preferredSlaves = function (availableSlaves) {
  for (let i = 0; i < availableSlaves.length; i++) {
    const slave = availableSlaves[i];
    if (slave.ip === "127.0.0.1") {
      if (slave.port === "31234") {
        return slave;
      }
    }
  }
  // if no preferred slaves are available a random one is used
  return false;
};

const redis = new Redis({
  sentinels: [
    { host: "127.0.0.1", port: 26379 },
    { host: "127.0.0.1", port: 26380 },
  ],
  name: "mymaster",
  role: "slave",
  preferredSlaves: preferredSlaves,
});

Besides the retryStrategy option, there's also a sentinelRetryStrategy in Sentinel mode which will be invoked when all the sentinel nodes are unreachable during connecting. If sentinelRetryStrategy returns a valid delay time, ioredis will try to reconnect from scratch. The default value of sentinelRetryStrategy is:

function (times) {
  const delay = Math.min(times * 10, 1000);
  return delay;
}

Cluster

Redis Cluster provides a way to run a Redis installation where data is automatically sharded across multiple Redis nodes. You can connect to a Redis Cluster like this:

const Redis = require("ioredis");

const cluster = new Redis.Cluster([
  {
    port: 6380,
    host: "127.0.0.1",
  },
  {
    port: 6381,
    host: "127.0.0.1",
  },
]);

cluster.set("foo", "bar");
cluster.get("foo", (err, res) => {
  // res === 'bar'
});

The Cluster constructor accepts two arguments, where:

  1. The first argument is a list of nodes of the cluster you want to connect to. Just like Sentinel, the list does not need to enumerate all your cluster nodes, but a few so that if one is unreachable the client will try the next one, and the client will discover other nodes automatically when at least one node is connected.

  2. The second argument is the options (a combined sketch using several of these options follows this list), where:

    • clusterRetryStrategy: When none of the startup nodes are reachable, clusterRetryStrategy will be invoked. When a number is returned, ioredis will try to reconnect to the startup nodes from scratch after the specified delay (in ms). Otherwise, an error of "None of startup nodes is available" will be returned. The default value of this option is:

      function (times) {
        const delay = Math.min(100 + times * 2, 2000);
        return delay;
      }

      It's possible to modify the startupNodes property in order to switch to another set of nodes here:

      function (times) {
        this.startupNodes = [{ port: 6790, host: '127.0.0.1' }];
        return Math.min(100 + times * 2, 2000);
      }
    • dnsLookup: Alternative DNS lookup function (dns.lookup() is used by default). It may be useful to override this in special cases, such as when AWS ElastiCache is used with TLS enabled.

    • enableOfflineQueue: Similar to the enableOfflineQueue option of Redis class.

    • enableReadyCheck: When enabled, the "ready" event will only be emitted when the CLUSTER INFO command reports that the cluster is ready to handle commands. Otherwise, it will be emitted immediately after "connect" is emitted.

    • scaleReads: Configures where to send read queries. See below for more details.

    • maxRedirections: When a cluster related error (e.g. MOVED, ASK and CLUSTERDOWN etc.) is received, the client will redirect the command to another node. This option limits the max redirections allowed when sending a command. The default value is 16.

    • retryDelayOnFailover: If the target node is disconnected when sending a command, ioredis will retry after the specified delay. The default value is 100. You should make sure retryDelayOnFailover * maxRedirections > cluster-node-timeout to ensure that no command will fail during a failover.

    • retryDelayOnClusterDown: When a cluster is down, all commands will be rejected with the error of CLUSTERDOWN. If this option is a number (by default, it is 100), the client will resend the commands after the specified time (in ms).

    • retryDelayOnTryAgain: If this option is a number (by default, it is 100), the client will resend the commands rejected with TRYAGAIN error after the specified time (in ms).

    • retryDelayOnMoved: By default, this value is 0 (in ms), which means that when a MOVED error is received, the client will resend the command instantly to the node indicated by the MOVED error. However, sometimes it takes time for a cluster to stabilize after a failover, so adding a delay before resending can prevent a ping-pong effect.

    • redisOptions: Default options passed to the constructor of Redis when connecting to a node.

    • slotsRefreshTimeout: Milliseconds before a timeout occurs while refreshing slots from the cluster (default 1000).

    • slotsRefreshInterval: Milliseconds between every automatic slots refresh (by default, it is disabled).
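
Putting a few of these options together, a minimal sketch (all values are only illustrative):

const cluster = new Redis.Cluster(
  [{ host: "127.0.0.1", port: 6380 }],
  {
    scaleReads: "slave",
    maxRedirections: 16,
    retryDelayOnFailover: 100,
    slotsRefreshTimeout: 2000, // illustrative; default is 1000
    slotsRefreshInterval: 60000, // illustrative; disabled by default
    redisOptions: { password: "your-cluster-password" },
  }
);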

Read-Write Splitting

A typical redis cluster contains three or more masters and several slaves for each master. It's possible to scale out a redis cluster by sending read queries to slaves and write queries to masters, by setting the scaleReads option.

scaleReads is "master" by default, which means ioredis will never send any queries to slaves. There are three other available options:

  1. "all": Send write queries to masters and read queries to masters or slaves randomly.
  2. "slave": Send write queries to masters and read queries to slaves.
  3. a custom function(nodes, command): ioredis will call this function to decide which node to send read queries to (write queries are still sent to the master). The first node in nodes is always the master serving the relevant slots. If the function returns an array of nodes, a random node from that list will be selected. A sketch of this form follows the example below.

For example:

const cluster = new Redis.Cluster(
  [
    /* nodes */
  ],
  {
    scaleReads: "slave",
  }
);
cluster.set("foo", "bar"); // This query will be sent to one of the masters.
cluster.get("foo", (err, res) => {
  // This query will be sent to one of the slaves.
});

NB In the code snippet above, the res may not be equal to "bar" because of the lag of replication between the master and slaves.
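
And here is a minimal sketch of the custom-function form described above (it assumes, per that description, that nodes[0] is the master and that write queries keep going to the master regardless of what the function returns):

const cluster = new Redis.Cluster(
  [
    /* nodes */
  ],
  {
    scaleReads(nodes, command) {
      // Prefer replicas for reads when any exist; otherwise fall back to the master.
      return nodes.length > 1 ? nodes.slice(1) : nodes[0];
    },
  }
);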

Running Commands to Multiple Nodes

Every command will be sent to exactly one node. For commands containing keys (e.g. GET, SET and HGETALL), ioredis sends them to the node serving the keys, and for commands not containing keys (e.g. INFO, KEYS and FLUSHDB), ioredis sends them to a random node.

Sometimes you may want to send a command to multiple nodes (masters or slaves) of the cluster. In that case, you can get the nodes via the Cluster#nodes() method.

Cluster#nodes() accepts a parameter role, which can be "master", "slave" or "all" (default), and returns an array of Redis instances. For example:

// Send `FLUSHDB` command to all slaves:
const slaves = cluster.nodes("slave");
Promise.all(slaves.map((node) => node.flushdb()));

// Get keys of all the masters:
const masters = cluster.nodes("master");
Promise.all(masters.map((node) => node.keys("*"))).then((keys) => {
  // keys: [['key1', 'key2'], ['key3', 'key4']]
});

NAT Mapping

Sometimes the cluster is hosted within an internal network that can only be accessed via a NAT (Network Address Translation) instance. See Accessing ElastiCache from outside AWS as an example.

You can specify NAT mapping rules via the natMap option:

const cluster = new Redis.Cluster(
  [
    {
      host: "203.0.113.73",
      port: 30001,
    },
  ],
  {
    natMap: {
      "10.0.1.230:30001": { host: "203.0.113.73", port: 30001 },
      "10.0.1.231:30001": { host: "203.0.113.73", port: 30002 },
      "10.0.1.232:30001": { host: "203.0.113.73", port: 30003 },
    },
  }
);

This option is also useful when the cluster is running inside a Docker container.

Transaction and Pipeline in Cluster Mode

Almost all features that are supported by Redis are also supported by Redis.Cluster, e.g. custom commands, transactions and pipelines. However, there are some differences when using transactions and pipelines in Cluster mode:

  1. All keys in a pipeline should belong to slots served by the same node, since ioredis sends all commands in a pipeline to the same node. Redis hash tags are the usual way to guarantee this; see the sketch after this list.
  2. You can't use multi without pipeline (aka cluster.multi({ pipeline: false })). This is because when you call cluster.multi({ pipeline: false }), ioredis doesn't know which node the multi command should be sent to.
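
A minimal sketch using a Redis hash tag (the tag {user:42} and the field values are only illustrative) to keep all keys of a transaction in the same slot:

// Keys sharing the same hash tag ("{user:42}") always hash to the same slot,
// so they can safely be combined in one pipeline or transaction.
cluster
  .multi()
  .set("{user:42}:name", "Alice")
  .set("{user:42}:age", "30")
  .exec((err, results) => {
    // results === [[null, 'OK'], [null, 'OK']]
  });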

When any command in a pipeline receives a MOVED or ASK error, ioredis will resend the whole pipeline to the specified node automatically if all of the following conditions are satisfied:

  1. All errors received in the pipeline are the same. For example, we won't resend the pipeline if we got two MOVED errors pointing to different nodes.
  2. All commands executed successfully are readonly commands. This makes sure that resending the pipeline won't have side effects.

Pub/Sub

Pub/Sub in cluster mode works exactly the same as in standalone mode. Internally, when a node of the cluster receives a message, it will broadcast the message to the other nodes. ioredis makes sure that each message will only be received once by strictly subscribing to one node at a time.

const nodes = [
  /* nodes */
];
const pub = new Redis.Cluster(nodes);
const sub = new Redis.Cluster(nodes);
sub.on("message", (channel, message) => {
  console.log(channel, message);
});

sub.subscribe("news", () => {
  pub.publish("news", "highlights");
});

Events

Event | Description
--- | ---
connect | emits when a connection is established to the Redis server.
ready | emits when CLUSTER INFO reports that the cluster is able to receive commands (if enableReadyCheck is true), or immediately after the connect event (if enableReadyCheck is false).
error | emits when an error occurs while connecting, with a lastNodeError property representing the last node error received. This event is emitted silently (only emitting if there's at least one listener).
close | emits when an established Redis server connection has closed.
reconnecting | emits after close when a reconnection will be made. The argument of the event is the time (in ms) before reconnecting.
end | emits after close when no more reconnections will be made.
+node | emits when a new node is connected.
-node | emits when a node is disconnected.
node error | emits when an error occurs while connecting to a node. The second argument indicates the address of the node.

Password

Set the password option to access password-protected clusters:

const Redis = require("ioredis");
const cluster = new Redis.Cluster(nodes, {
  redisOptions: {
    password: "your-cluster-password",
  },
});

If some of the nodes in the cluster use a different password, you should specify them in the first parameter:

const Redis = require("ioredis");
const cluster = new Redis.Cluster(
  [
    // Use password "password-for-30001" for 30001
    { port: 30001, password: "password-for-30001" },
    // Don't use password when accessing 30002
    { port: 30002, password: null },
    // Other nodes will use "fallback-password"
  ],
  {
    redisOptions: {
      password: "fallback-password",
    },
  }
);

Special Note: AWS ElastiCache Clusters with TLS

AWS ElastiCache for Redis (Clustered Mode) supports TLS encryption. If you use this, you may encounter errors with invalid certificates. To resolve this issue, construct the Cluster with the dnsLookup option as follows:

const cluster = new Redis.Cluster(
  [
    {
      host: "clustercfg.myCluster.abcdefg.xyz.cache.amazonaws.com",
      port: 6379,
    },
  ],
  {
    dnsLookup: (address, callback) => callback(null, address),
    redisOptions: {
      tls: {},
    },
  }
);

Autopipelining

In standard mode, when you issue multiple commands, ioredis sends them to the server one by one. As described in the Redis pipeline documentation, this is a suboptimal use of the network link, especially when the link is not very performant.

The TCP and network overhead negatively affects performance. Commands are stuck in the send queue until the previous ones are correctly delivered to the server. This is a problem known as Head-Of-Line blocking (HOL).

ioredis supports a feature called “auto pipelining”. It can be enabled by setting the option enableAutoPipelining to true. No other code change is necessary.

In auto pipelining mode, all commands issued during an event loop iteration are enqueued in a pipeline automatically managed by ioredis. At the end of the iteration, the pipeline is executed and thus all commands are sent to the server at the same time.

This feature can dramatically improve throughput and avoids HOL blocking. In our benchmarks, the improvement was between 35% and 50%.

While an automatic pipeline is executing, all new commands will be enqueued in a new pipeline which will be executed as soon as the previous finishes.

When using Redis Cluster, one pipeline per node is created. Commands are assigned to pipelines according to which node serves the slot.

A pipeline will thus contain commands using different slots but that ultimately are assigned to the same node.

Note that the same slot limitation within a single command still holds, as it is a Redis limitation.
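
The option is set the same way on a cluster connection; a minimal sketch:

const cluster = new Redis.Cluster(
  [{ host: "127.0.0.1", port: 6380 }],
  { enableAutoPipelining: true }
);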

Example of Automatic Pipeline Enqueuing

This sample code uses ioredis with automatic pipeline enabled.

const Redis = require("./built");
const http = require("http");

const db = new Redis({ enableAutoPipelining: true });

const server = http.createServer((request, response) => {
  const key = new URL(request.url, "https://localhost:3000/").searchParams.get(
    "key"
  );

  db.get(key, (err, value) => {
    response.writeHead(200, { "Content-Type": "text/plain" });
    response.end(value);
  });
});

server.listen(3000);

When Node receives requests, it schedules them to be processed in one or more iterations of the event loop.

All commands issued while processing requests during one iteration of the loop will be wrapped in a pipeline automatically created by ioredis.

In the example above, the pipeline will have the following contents:

GET key1
GET key2
GET key3
...
GET keyN

When all events in the current loop have been processed, the pipeline is executed and thus all commands are sent to the server at the same time.

While waiting for the pipeline response from Redis, Node will still be able to process requests. All commands issued by request handlers will be enqueued in a new automatically created pipeline. This pipeline will not be sent to the server yet.

As soon as the previous automatic pipeline has received all responses from the server, the new pipeline is immediately sent without waiting for the event loop iteration to finish.

This approach increases the utilization of the network link, reduces the TCP overhead and idle times and therefore improves throughput.

Benchmarks

Here are some of the results of our tests for a single node.

Each iteration of the test runs 1000 random commands on the server.

Configuration | Samples | Result | Tolerance
--- | --- | --- | ---
default | 1000 | 174.62 op/sec | ± 0.45 %
enableAutoPipelining=true | 1500 | 233.33 op/sec | ± 0.88 %

And here's the same test for a cluster of 3 masters and 3 replicas:

Configuration | Samples | Result | Tolerance
--- | --- | --- | ---
default | 1000 | 164.05 op/sec | ± 0.42 %
enableAutoPipelining=true | 3000 | 235.31 op/sec | ± 0.94 %

Error Handling

All the errors returned by the Redis server are instances of ReplyError, which can be accessed via the Redis export (Redis.ReplyError):

const Redis = require("ioredis");
const redis = new Redis();
// This command causes a reply error since the SET command requires two arguments.
redis.set("foo", (err) => {
  err instanceof Redis.ReplyError;
});

This is the error stack of the ReplyError:

ReplyError: ERR wrong number of arguments for 'set' command
    at ReplyParser._parseResult (/app/node_modules/ioredis/lib/parsers/javascript.js:60:14)
    at ReplyParser.execute (/app/node_modules/ioredis/lib/parsers/javascript.js:178:20)
    at Socket.<anonymous> (/app/node_modules/ioredis/lib/redis/event_handler.js:99:22)
    at Socket.emit (events.js:97:17)
    at readableAddChunk (_stream_readable.js:143:16)
    at Socket.Readable.push (_stream_readable.js:106:10)
    at TCP.onread (net.js:509:20)

By default, the error stack isn't very useful because the whole stack trace sits inside the ioredis module itself, not in your code, so it's not easy to find out where the error happened in your code. ioredis provides an option, showFriendlyErrorStack, to solve the problem. When you enable showFriendlyErrorStack, ioredis will optimize the error stack for you:

const Redis = require("ioredis");
const redis = new Redis({ showFriendlyErrorStack: true });
redis.set("foo");

And the output will be:

ReplyError: ERR wrong number of arguments for 'set' command
    at Object.<anonymous> (/app/index.js:3:7)
    at Module._compile (module.js:446:26)
    at Object.Module._extensions..js (module.js:464:10)
    at Module.load (module.js:341:32)
    at Function.Module._load (module.js:296:12)
    at Function.Module.runMain (module.js:487:10)
    at startup (node.js:111:16)
    at node.js:799:3

This time the stack tells you that the error happened on the third line of your code. Pretty sweet! However, optimizing the error stack decreases performance significantly. So by default this option is disabled and should only be used for debugging purposes. You shouldn't use this feature in a production environment.

Running tests

Start a Redis server on 127.0.0.1:6379, and then:

npm test

FLUSHALL will be invoked after each test, so make sure there's no valuable data in the database before running tests.

If your testing environment does not let you spin up a Redis server, ioredis-mock is a drop-in replacement you can use in your tests. It aims to behave identically to ioredis connected to a Redis server so that your integration tests are easier to write and of better quality.

Debug

You can set the DEBUG environment variable to ioredis:* to print debug info:

$ DEBUG=ioredis:* node app.js

Join in!

I'm happy to receive bug reports, fixes, documentation enhancements, and any other improvements.

And since I'm not a native English speaker, if you find any grammar mistakes in the documentation, please also let me know. :)

Contributors

This project exists thanks to all the people who contribute:

License

MIT


ioredis's Issues

[cluster mode] inconsistent errors

Cluster is created and then destroyed before and after the tests. So test runs are idempotent.

For some reason, sometimes the redis cluster client instance doesn't have sendCommand on it and sometimes it does. I assume that the context can be rewritten on retry or something and become 'undefined'.

Vitalys-MacBook-Pro:mservice-users vitaly$ npm test

> [email protected] test /Users/vitaly/projects/mservice-users
> BLUEBIRD_DEBUG=1 multi='list=- xunit=test-report.xml' ./node_modules/.bin/mocha -R mocha-multi --timeout 10000


20:06:56 users: connected to rabbit@Vitalys-MacBook-Pro v3.5.2
20:06:56 users: queue "amq.gen-HS1YKx5rwxfoVL6TOyEKiA" created
20:06:56 users: queue "amq.gen-HS1YKx5rwxfoVL6TOyEKiA" binded to exchange "test-mocha" on route "test.users.*"
  ․ actions#register() should fail on incorrect payload: 5ms
  ․ actions#register() should create user, activate it, no captcha specified: 51ms
  ․ actions#register() should reject creating user, because it already exists: 59ms
  ․ actions#register() should fail when invalid captcha is passed: 36ms

  4 passing (4s)

Vitalys-MacBook-Pro:mservice-users vitaly$ npm test

> [email protected] test /Users/vitaly/projects/mservice-users
> BLUEBIRD_DEBUG=1 multi='list=- xunit=test-report.xml' ./node_modules/.bin/mocha -R mocha-multi --timeout 10000


20:07:11 users: connected to rabbit@Vitalys-MacBook-Pro v3.5.2
20:07:11 users: queue "amq.gen-57pK5eobgqjeLt8s_IU7Cw" created
20:07:11 users: queue "amq.gen-57pK5eobgqjeLt8s_IU7Cw" binded to exchange "test-mocha" on route "test.users.*"
  ․ actions#register() should fail on incorrect payload: 6ms
  1) actions#register() should create user, activate it, no captcha specified
  2) actions#register() should reject creating user, because it already exists
  3) actions#register() should fail when invalid captcha is passed

  1 passing (22s)
  3 failing

  1) actions#register() should create user, activate it, no captcha specified:
     Uncaught TypeError: Cannot call method 'sendCommand' of undefined
  ---------------------------------------------
  ---------------------------------------------
  ---------------------------------------------
      at module.exports (lib/connectors/redis.js:12:21)
      at module.exports (lib/service.js:106:9)
      at Context.<anonymous> (test/register.js:56:52)
      at exithandler (child_process.js:656:7)
      at maybeClose (child_process.js:766:16)
      at Socket.<anonymous> (child_process.js:979:11)
      at Pipe.close (net.js:466:12)

  2) actions#register() should reject creating user, because it already exists:
     Error: timeout of 10000ms exceeded. Ensure the done() callback is being called in this test.


  3) actions#register() should fail when invalid captcha is passed:
     Error: timeout of 10000ms exceeded. Ensure the done() callback is being called in this test.
// connectors/redis.js
'use strict';

var redis = require('ioredis');
var instance;

module.exports = function (config) {
    if (instance) {
        return instance;
    }

    instance = new redis.Cluster(config.redis.hosts, config.redis.options || {});
    // forked version used with lazyConnect enabled
    return instance.connect();
};

Support for node_redis arrays of commands

We've looked at this project as a replacement for our current library for node redis support, as it supports cluster, but we heavily use passing arrays of flattened commands to multi() and cannot see how to do this with ioredis:
https://github.com/mranney/node_redis#multiexec-callback-
Specifically:

client.multi([
        ["mget", "multifoo", "multibar", redis.print],
        ["incr", "multifoo"],
        ["incr", "multibar"]
    ]).exec(function (err, replies) {
        console.log(replies);
    });

Is it possible?

Thanks

John

question: using Cluster ctor and specifying non-cluster enabled node

Let's assume the opts are like this:

    "redisCluster": {
        "hosts": [
            { "host": "localhost", "port": 6379 }
        ],
        "options": {

        }
    },

Then I create cluster connection:

var clusterClient = new Redis.Cluster(config.redisCluster.hosts, config.redisCluster.options);

Redis is 3.x, but cluster mode is disabled for this instance on my local machine. The resulting behaviour is the following:

22:09:53 ng.ark.com: Connected to AMQP
(node) warning: possible EventEmitter memory leak detected. 11 listeners added. Use emitter.setMaxListeners() to increase limit.
Trace
  at Cluster.addListener (events.js:160:15)
  at Cluster.once (events.js:185:8)
  at Cluster.<anonymous> (/Users/vitaly/projects/api_v2/node_modules/ioredis/lib/cluster.js:87:10)
  at tryCatcher (/Users/vitaly/projects/api_v2/node_modules/bluebird/js/main/util.js:24:31)
  at Promise._resolveFromResolver (/Users/vitaly/projects/api_v2/node_modules/bluebird/js/main/promise.js:427:31)
  at new Promise (/Users/vitaly/projects/api_v2/node_modules/bluebird/js/main/promise.js:53:37)
  at Cluster.connect (/Users/vitaly/projects/api_v2/node_modules/ioredis/lib/cluster.js:65:10)
  at Cluster.<anonymous> (/Users/vitaly/projects/api_v2/node_modules/ioredis/lib/cluster.js:99:16)
  at Timer.listOnTimeout [as ontimeout] (timers.js:112:15)

(node) warning: possible EventEmitter memory leak detected. 11 listeners added. Use emitter.setMaxListeners() to increase limit.
Trace
  at Cluster.addListener (events.js:160:15)
  at Cluster.once (events.js:185:8)
  at Cluster.<anonymous> (/Users/vitaly/projects/api_v2/node_modules/ioredis/lib/cluster.js:88:10)
  at tryCatcher (/Users/vitaly/projects/api_v2/node_modules/bluebird/js/main/util.js:24:31)
  at Promise._resolveFromResolver (/Users/vitaly/projects/api_v2/node_modules/bluebird/js/main/promise.js:427:31)
  at new Promise (/Users/vitaly/projects/api_v2/node_modules/bluebird/js/main/promise.js:53:37)
  at Cluster.connect (/Users/vitaly/projects/api_v2/node_modules/ioredis/lib/cluster.js:65:10)
  at Cluster.<anonymous> (/Users/vitaly/projects/api_v2/node_modules/ioredis/lib/cluster.js:99:16)
  at Timer.listOnTimeout [as ontimeout] (timers.js:112:15)

I haven't really dug into the code, but it seems like .connect() is called multiple times and the previous event listeners are not removed. Maybe this is specific to the wrong configuration (using the Cluster constructor on a non-cluster node), but maybe not. The error handling here should probably be looked at.

[Cluster] Failed to load script by sha1

require("co")(function *() {
  try
  {    
    var sha1 = yield cluster.script("load","return {KEYS[1],ARGV[1],ARGV[2]}  ");
    console.log(sha1);
    var res = yield cluster.evalsha(sha1, 1, "hello","world","1234");
    console.log(res);

  }catch(e)
  {
    console.error(e)  
  }

})

result

6c93d86d0ba4c64778fe11439a0aa890a1425097 //sha1


{ [ReplyError: NOSCRIPT No matching script. Please use EVAL.]
  name: 'ReplyError',
  message: 'NOSCRIPT No matching script. Please use EVAL.',
  command:
   { name: 'evalsha',
     args:
      [ '6c93d86d0ba4c64778fe11439a0aa890a1425097',
        '1',
        'hello',
        'world',
        '1234' ] } }

It doesn't always produce this error; it depends on the content of the script, so I suspect it depends on whether the script's SHA1 has already been loaded on the particular instance the command ends up being routed to.
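
If that suspicion is right, one possible workaround is to load the script on every master node before calling EVALSHA, so the digest is known wherever the key hashes to. A sketch, assuming the Cluster#nodes('master') API is available and that `cluster` is the Redis.Cluster instance from above:

require("co")(function *() {
  var source = "return {KEYS[1],ARGV[1],ARGV[2]}";
  var masters = cluster.nodes("master"); // every master node as a Redis instance
  var sha1;
  for (var i = 0; i < masters.length; i++) {
    // SCRIPT LOAD is idempotent, so loading on every master is safe
    sha1 = yield masters[i].script("load", source);
  }
  var res = yield cluster.evalsha(sha1, 1, "hello", "world", "1234");
  console.log(res);
});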

Monitor listener not present after reconnect

I have my Redis master set up on a private DNS name with a TTL of 30. The average reconnection time is obviously around that once everything propagates.

One thing I've noticed is that any monitor that is running on the client is not present after a reconnect.

To see if I could recreate it on a reconnect I first called monitor.off('monitor'); and then attached the listener again once redis.status was ready.

It's not an important part of my server so I wasn't worried about it not working, but one thing that happened was this lovely error:

[2015-05-29 20:30:50.073] [ERROR] [Server]-> Caught exception: Error: Command queue state error. If you can reproduce this, please report it.
    at Redis.exports.returnReply (/home/compilr/node_modules/ioredis/lib/redis/prototype/parser.js:125:33)
    at HiredisReplyParser.<anonymous> (/home/compilr/node_modules/ioredis/lib/redis/prototype/parser.js:27:10)
    at HiredisReplyParser.emit (events.js:107:17)
    at HiredisReplyParser.execute (/home/compilr/node_modules/ioredis/lib/parsers/hiredis.js:42:12)
    at Socket.<anonymous> (/home/compilr/node_modules/ioredis/lib/redis/event_handler.js:86:22)
    at Socket.emit (events.js:107:17)
    at readableAddChunk (_stream_readable.js:163:16)
    at Socket.Readable.push (_stream_readable.js:126:10)
    at TCP.onread (net.js:538:20)

So I'm reporting it just like you asked. Looks to be a hiredis issue and not yours perhaps.

If you need more info let me know!

Not available on npm registry yet?

Getting the following error when trying to install.

$ npm install ioredis --save
npm ERR! fetch failed https://registry.npmjs.org/ioredis/-/ioredis-1.0.5.tgz
npm WARN retry will retry, error on last attempt: Error: fetch failed with status code 404

Is having no keys for a Lua script supported?

Hi, I just wanted to check if this issue is an actual problem with the code. So, I define the command below.

redis.defineCommand('randomKeyValues', {
    numberOfKeys: 0,
    lua: "return ARGV[1]"
});

Then, I set up the following call.

redis.randomKeyValues("25",function(err,result) {
        if(err){
            console.log(err);
        }else{
            console.log(result);
            res.send(JSON.stringify(result));
        }
    });

Upon execution, I receive the following error message.

{ [ReplyError: ERR Number of keys can't be greater than number of args]
  name: 'ReplyError',
  message: 'ERR Number of keys can\'t be greater than number of args',
  command: 
   { name: 'evalsha',
     args: [ '8d962c3bde43d4d0c42bde6c2148c3c626a4541d', '25' ] } }

I worked around this issue by defining the command to have a single key and then ignoring the key in my Lua script, but I would like to know if I made some kind of mistake.

Lazily connect to the Redis

Currently, when a Redis instance is created, a connection to the Redis server is established at the same time. Let's add an option to enable lazy connecting, so that the connection isn't established until a command is invoked.
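
A sketch of how the proposed option might look from the caller's side (the option name lazyConnect is an assumption here):

var Redis = require("ioredis");

// With the proposed option, no connection would be made here...
var redis = new Redis({ lazyConnect: true });

// ...it would only be established when the first command is sent.
redis.get("foo", function (err, result) {
  console.log(err, result);
});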

delete vs null + hashmode keys

I had an idea about a small optimization in the lib. I'm not sure it will affect performance a lot, since this is only used in the node selection process in the cluster mode, but here are the thoughts:

  1. keys must not force the Object into hash (dictionary) mode, since that slows down lookups. Therefore, keys should be something like node_10_10_100_01_3123 instead of 10.10.100.03:3123 - that way the object is kept in fast mode
  2. do not use delete. Instead, set obj[key] to undefined or null. In cases where we need to completely recreate the map of hosts, simply create a new object

I'll look for a few proof-of-concept articles, but as far as I know that's how it is.
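
A small sketch of the idea in plain JavaScript (not tied to any particular part of ioredis):

var nodes = {};
nodes["node_10_10_100_01_3123"] = { host: "10.10.100.1", port: 3123 };

// Instead of `delete nodes[key]`, which can push the object into V8's
// slower dictionary mode, mark the slot as empty and keep the shape stable:
nodes["node_10_10_100_01_3123"] = null;

// And when the whole host map must be recreated, simply build a new object:
nodes = {};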

support for lua script keys/args as array

My script supports a dynamic number of keys/args, so I want to be able to do something like this:

redis.defineCommand('echo', {
  lua: ''
});

var KEYS = ['k1', 'k2'];
var ARGS = ['a1', 'a2', 'a3'];

redis.echo(KEYS, ARGS, function (err, result) {
});

KEYS = ['k1', 'k2', 'k3'];
ARGS = ['a1', 'a2', 'a3', 'a4'];

redis.echo(KEYS, ARGS, function (err, result) {
});

any chance?
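
One possible workaround in the meantime: define the command without numberOfKeys (ioredis then expects the key count as the first call argument) and flatten the arrays in a small helper. A sketch, where callEcho and the Lua body are only illustrative:

redis.defineCommand('echo', {
  lua: 'return {KEYS, ARGV}'
});

// Hypothetical helper: flattens KEYS/ARGS arrays into the
// (numKeys, ...keys, ...args) argument list the command expects.
function callEcho(keys, args, callback) {
  var flat = [keys.length].concat(keys, args, [callback]);
  redis.echo.apply(redis, flat);
}

callEcho(['k1', 'k2'], ['a1', 'a2', 'a3'], function (err, result) {
  console.log(err, result);
});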

Unfulfilled commands sent on wrong database after reconnection

Using sentinel, unfulfilled commands are resent before the database has been reselected, causing them to run against the wrong database. This is reproducible with the following code, killing the master before the command has completed:

var Redis = require("ioredis");
var redis = new Redis({
    sentinels: [...],
    name: "mastername",
    db: 1
});

redis.blpop("some-absent-key", 30, function(){});

The output of MONITOR shows the following:

On master:
1432220467.655639 [0 xxx:39015] "info"
1432220467.659202 [0 xxx:39015] "select" "1"
1432220522.605756 [1 xxx:39015] "blpop" "some-absent-key" "30"

... master killed ...

On new master:
1432220532.258167 [0 xxx:60414] "info"
1432220532.258962 [0 xxx:60414] "blpop" "some-absent-key" "30"
1432220563.055939 [0 xxx:60414] "select" "1"

The unfulfilled BLPOP command is executed before the correct database is reselected. Expected behavior is for the correct database to be selected before running the command.

The code responsible for this behavior appears to be in event_handler.js. The correct database is selected for items in offlineQueue (when necessary), but not for items in prevCommandQueue.

Missing hiredis module causes hang on ioredis module load

I have an error that crops up from time to time where a missing hiredis module results in my node program hanging. I traced it down to the require('hiredis') line in the ioredis/lib/parsers/hiredis.js file.

When wrapped in a try...catch block, an error is thrown and the program exits; otherwise, the program hangs with no additional output.
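
The guard described above might look roughly like this (the fallback path to a bundled JavaScript parser is hypothetical):

var Parser;
try {
  Parser = require('hiredis');
} catch (e) {
  // hiredis is an optional native dependency; fall back to the JS parser
  console.warn('hiredis not available, falling back to the JavaScript parser');
  Parser = require('./javascript'); // hypothetical path to the bundled parser
}
module.exports = Parser;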

Running a transaction on empty lists is extremely slow

The command is this:

redis.multi().lrange(redis_key, 0, 99).ltrim(redis_key, 100, -1).exec()

Strangely, it takes seconds to return a result from a local Redis cluster (3 masters, 3 slaves, exactly like the official tutorial) on empty lists. I've tested the same command inside redis-cli and, as expected, it returns the result instantly.

Using promisifyAll might be easier

var Redis = require('ioredis');

var Promise = require('bluebird');
Promise.promisifyAll(Redis);
Promise.promisifyAll(Redis.prototype);

var redis = new Redis();

DEBUG messages

This is just a question, not a request. Why are you using the debug module? IMO this is a really BAD practice. I have my own debugging system using the DEBUG env variable, and I don't want another module (debug) that I cannot control cluttering stdout with messages I have not asked for. Have you considered removing it?

"message" Event is not fired in Cluster mode

Problem should be here:

https://github.com/luin/ioredis/blob/master/lib/redis/prototype/parser.js#L78

this.listeners('message').length is 0

I tried same piece of code in single redis mode and it worked.

Here is my snippet:

var cluster = new Redis.Cluster([
  {
    port: 30001,
    host: '127.0.0.1'
  }
]);

cluster.subscribe('news', function (err, count) {
  // this is called
});

cluster.on('message', function (channel, message) {
  // this is not fired
  console.log('Receive message %s from channel %s', message, channel);
});

I publish a message via redis-cli:

PUBLISH news something

incorrect benchmark ?

Hi,

I have run the benchmarks on my local machine with Node.js 0.12.2, node_redis 0.12.1 and the master branch of ioredis, and I got the following results:

$ npm run bench
> [email protected] bench /tmp/ioredis
> matcha benchmarks/*.js

child_process: customFds option is deprecated, use stdio instead.

                  simple set
     109,742 op/s » ioredis
     105,823 op/s » node_redis

                  simple get
      88,683 op/s » ioredis
     102,495 op/s » node_redis

                  simple get with pipeline
      10,600 op/s » ioredis
      13,012 op/s » node_redis

                  lrange 100
      68,533 op/s » ioredis
      72,331 op/s » node_redis

 Suites:  4
 Benches: 8
 Elapsed: 53,973.10 ms

I have run the benchmark with and without hiredis, with similar performance in both cases.

Those results show node_redis performing as well as or better than ioredis. However, in the documentation, ioredis performs quite a bit better than node_redis. Any idea?

Multiple transactions with .multi()

I may just be missing something obvious, but I'm having trouble performing multiple .multi() calls.

return redis.multi()
    .sismember('test-set', someID)
    .sismember('another-test-set', someID)
    .exec(function (err, results) {
        var exists = results.reduce(function (prev, curr, i) {
            return (curr[1] === 1 || prev)
        }, false)

        if (exists === false) {
            return redis.multi()
                .sadd('test-set', someID)
                .sadd('another-test-set', someID)
                .exec()
        }
    })

Given this snippet (ignoring the uselessness of the actual inserts, obviously redis won't allow duplicate entries), I would expect the behavior of the redis.multi() to chain the promises such that the first .exec() invocation doesn't resolve/reject anything until after the second call to redis.multi() has finished its exec().

That is not the case, however. I currently have to wrap the whole block in my own new Promise(function (resolve, reject) { ... }) and invoke resolve() manually in the second .exec() callback.

return new Promise(function (resolve, reject) {
    return redis.multi()
        .sismember('test-set', someID)
        .sismember('another-test-set', someID)
        .exec(function (err, results) {
            var exists = results.reduce(function (prev, curr, i) {
                return (curr[1] === 1 || prev)
            }, false)

            if (exists === false) {
                return redis.multi()
                    .sadd('test-set', someID)
                    .sadd('another-test-set', someID)
                    .exec(function (err, results) {
                        resolve()
                    })
            }
            resolve()
        })
})

Is this intentional behavior? Perhaps this is just a won't-fix issue we could leave around for others' benefit?
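
For what it's worth, since .exec() itself returns a promise, the same flow can probably be expressed by returning and chaining that promise rather than wrapping everything in a new Promise; a sketch:

return redis.multi()
    .sismember('test-set', someID)
    .sismember('another-test-set', someID)
    .exec()
    .then(function (results) {
        // each entry of `results` is an [err, value] pair
        var exists = results.reduce(function (prev, curr) {
            return (curr[1] === 1 || prev)
        }, false)

        if (exists === false) {
            // returning the second exec() promise keeps the chain open
            // until the second transaction has finished
            return redis.multi()
                .sadd('test-set', someID)
                .sadd('another-test-set', someID)
                .exec()
        }
    })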

Redis#connect should return promise

I want to handle Redis connection errors. It would be very useful:

var redisClient = new Redis({ lazyConnect: true });
redisClient.connect().then(function() {
    console.log('Yehhhuuu!');
}, function(error) {
    console.error(error);
});

Repeated code in lib/commander, non-recommended use of arguments

There are code parts like this https://github.com/luin/ioredis/blob/master/lib/commander.js#L43-L53

      var args = _.toArray(arguments);
      var callback;

      if (typeof args[args.length - 1] === 'function') {
        callback = args.pop();
      }

      var options = { replyEncoding: 'utf8' };
      if (this.options.showFriendlyErrorStack) {
        options.errorStack = new Error().stack;
      }

The part about arguments: https://github.com/petkaantonov/bluebird/wiki/Optimization-killers#32-leaking-arguments

I understand that this means a lot of boilerplate code, but it is supposed to speed up the module quite a bit:

smth: function() {
   var length = arguments.length;
   var lastArgIndex = length - 1;
   var callback = arguments[lastArgIndex];
   if (typeof callback !== 'function') {
      callback = undefined;
   } else {
      length = lastArgIndex;
   }
   var args = new Array(length);
   for (var i = 0; i < length; i++) {
      args[i] = arguments[i];
   }

   // .... options part

   var command = new Command(commandName, args, options, callback);
   return this.sendCommand(command);
}

Connecting to redis sentinel troubleshooting

I'm trying to use this configuration for a sentinel connection:

var REDIS_CONFIG = {
    sentinels: [{ host: 'redis-1', port: 26379},
                { host: 'redis-2', port: 26379},
                { host: 'redis-3', port: 26379}],
    name: 'redismastert',
    db: 0
};

var REDIS_CLIENT = new Redis(REDIS_CONFIG);

But the connection fails to establish due to:

info: connected to sentinel redis-1:26379 successfully, but got a invalid reply: null
info: connected to sentinel redis-2:26379 successfully, but got a invalid reply: null
info: connected to sentinel redis-3:26379 successfully, but got a invalid reply: null

What is this invalid reply and how can I solve it?

thanks luin, love this library

LUA eval not recognizing key slot

I am using the cluster access capabilities of ioredis. Running scripts that SET data works, but when trying to run a simple GET:
"return redis.call('GET', 'foo')"
the client is not able to recognize where the key "foo" is stored in the cluster.

Is there a way to do this besides parsing CLUSTER SLOTS to find the keyslot for foo manually?
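
One way around this, assuming the cluster client computes the slot from the declared key arguments: pass the key via KEYS[] instead of hard-coding it in the script body. A sketch:

// Declare the key explicitly so the client can route the call to the
// node that owns the slot for "foo".
cluster.eval("return redis.call('GET', KEYS[1])", 1, "foo", function (err, result) {
  console.log(err, result);
});

// Or, with defineCommand:
cluster.defineCommand("getKey", {
  numberOfKeys: 1,
  lua: "return redis.call('GET', KEYS[1])"
});
cluster.getKey("foo", function (err, result) {
  console.log(err, result);
});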

Client not brpoping after lost/re-establishing connection

I noticed that after I restarted redis-server, none of my services (all listening on lists) were reading from their lists. I was able to duplicate the problem this morning with this code:

var Redis = require('ioredis');
var client = new Redis();

function ListenToRedis() {
    client.brpop('testlist', 0, function (listname, item) {
        console.log(item);
        process.nextTick(function () {
            ListenToRedis();
        });
    });
}
ListenToRedis();

If I restart my redis server (even very quickly - sudo service redis-server restart), items begin building up in the list and the above script is no longer BRPOPing them. How can I solve this?

Thanks!

How to handle errors?

Like these:

Unhandled rejection ReplyError: CLUSTERDOWN The cluster is down
Unhandled rejection ReplyError: UNBLOCKED force unblock from blocking operation, instance state changed (master -> slave?)

Unref new connections

The Node process stays alive as long as there are open connections, so when running a short-lived process, the Redis connection prevents the process from exiting when there's no more work to be done.

Right now we use this:

  client.on('connect', function() {
    client.connector.stream.unref();
  });

A proper API, connection option, or default behavior would help here.

consider using SRV records for redis & sentinel hosts

In an ever-changing world of Docker & microservices, the IP addresses and ports of hosts change.
Traditionally, only DNS takes care of the hostname->IP mapping.

  • Ideally, on connection failure the client would look up the DNS name & port again.
  • On a redis-sentinel topology change it could look up the master & slave DNS names & ports again.

We could leverage an SRV record to find the current Redis host & port instead of doing a regular DNS lookup.

This ticket is to ask whether you would find this useful and would be willing to accept a pull request implementing it.
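
To make the proposal concrete, a sketch of resolving an SRV record with Node's dns module and feeding the result into ioredis (the record name is only an example):

var dns = require('dns');
var Redis = require('ioredis');

dns.resolveSrv('_redis._tcp.example.internal', function (err, records) {
  if (err) throw err;
  // records are { name, port, priority, weight }; pick the first for simplicity
  var target = records[0];
  var redis = new Redis({ host: target.name, port: target.port });
  redis.ping(function (err, pong) {
    console.log(pong);
  });
});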

Question: speed improvements over node_redis

I've been following this project with a lot of interest and just saw your benchmarks against node_redis. Can you please explain what causes this improvement? I always had the impression that node_redis focused a lot on speed as well. Thanks!

rfc - a preferred slave list in a sentinel setup

We want to run a sentinel setup:

  • each webserver has its own redis slave (which will NOT become master - using slave-priority = 0)
  • a set of sentinels running to protect us from partitioning and to handle the quorum
  • a master and a few redis slaves for master failover

In the current implementation, the slaves can be selected using the slave role, but it picks a rather arbitrary slave. We propose a way to indicate a list of preferred slaves for a client.

This is useful in multi-AZ AWS setups, where we want to keep the app<->slave latency low.
A new syntax could look like:

var redis = new Redis({
  sentinels: [{ host: 'localhost', port: 26379 }, { host: 'localhost', port: 26380 }],
  preferred_slaves: [{ prio: 10, host: 'localhost', port: 6381 }],
  name: 'mymaster'
});

Change of redis cluster master does not update in the node application using ioredis

I have a cluster of 3 Redis instances and 3 sentinels. To keep the test simple, I have installed all of them on the same machine on the following ports:

Redis 1,2,3 : localhost 6379, 6380, 6381
Sentinel 1,2,3: localhost 26379, 26380, 26381

Initially, the Redis master is configured at port 6379 in the redis and sentinel conf files.

To test master failover to another slave instance, say 6380, I have configured the new master from the Redis client with the command below:

SLAVEOF localhost 6380

I have configured my app's ioredis setup as below. After the SLAVEOF change, the Node application does not automatically switch to the new master, although I expect this to be transparent to the application.

var options = {
    sentinels: [
        { host: 'localhost', port: 26379 },
        { host: 'localhost', port: 26380 },
        { host: 'localhost', port: 26381 }
    ],
    name: 'mymaster'
};

// required for passport sessions
app.use(session({
    store: new RedisStore({ client: new Redis(options) }),
    secret: sessionConf.secret
}));

From the Redis client I can observe that the master has now switched to 6380. But in the application logs I can see a write error, because the app is still trying to write to the slave (in this case at port 6379). I am not sure if this is a problem with ioredis or if I am configuring Redis incorrectly.

I am using the redis and sentinel confs in this location => https://github.com/saltukalakus/xuser/tree/master/infra/redis

Retries per request limit

I'm migrating to ioredis from node_redis and cannot find an analog of the max_attempts feature; am I just bad at searching?
It's a limit on the number of retries per request, after which the request just fails.

I can implement it on my own as a wrapper, but I would also need a way to stop requests so that they don't keep retrying, and that feature is missing as well.

How to avoid CLUSTERDOWN and UNBLOCKED?

Revisiting this problem, I asked about it on the Redis Google group, and here is what @antirez said:

Hello,

CLUSTERDOWN is a transient error that happens when at least one master
node is down. If you want the partial portion of the cluster which is
still up to run regardless of a set of hash slots not covered, there
is an option inside the example "redis.conf" file, doing exactly this.

UNBLOCKED is unavoidable since it is delivered to clients that are
blocked into lists that are moved from a different master because of
resharding. We don't want them to wait forever for something that will
never happen, since those lists are not moved into a different master.

So CLUSTERDOWN should be handled by the application and or at client
level directly by retrying. UNBLOCKED should be handled rescanning the
config with CLUSTER SLOTS and connecting to the right node.

Cheers,
Salvatore

Configuration-wise, I've set my cluster's "cluster-require-full-coverage" option to "no", so I just need to know how to handle this on the application side.
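
On the application side, one option in the spirit of the advice above is a small retry wrapper for commands that fail with CLUSTERDOWN; a sketch (the delay and attempt count are arbitrary):

// Retry a command a few times while the cluster reports CLUSTERDOWN.
function retryOnClusterDown(run, attempts) {
  return run().catch(function (err) {
    if (attempts > 0 && /CLUSTERDOWN/.test(err.message)) {
      return new Promise(function (resolve) {
        setTimeout(resolve, 500); // arbitrary back-off
      }).then(function () {
        return retryOnClusterDown(run, attempts - 1);
      });
    }
    throw err;
  });
}

// Usage:
retryOnClusterDown(function () {
  return cluster.get('foo');
}, 3).then(console.log, console.error);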

Question: is an error thrown at startup if you cannot connect to Redis?

I am using this for session storage, and in development I sometimes forget to start Redis first. I'd like this to throw some type of error if it can't connect to Redis. Any advice is welcome.

// Redis for session storage
var redis = new Redis(process.env.REDIS_URI, {
  showFriendlyErrorStack: true,
  retryStrategy: function (times) {
    var delay = Math.min(times * 2, 2000);
    return delay;
  }
});

mySession.store = new RedisStore({
  client: redis
});
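
One approach, assuming the lazyConnect option is available: connect explicitly at startup and fail fast if that promise rejects, while still listening for 'error' events afterwards. A sketch:

var redis = new Redis(process.env.REDIS_URI, {
  showFriendlyErrorStack: true,
  lazyConnect: true,
  retryStrategy: function (times) {
    if (times > 3) {
      return null; // stop retrying so an unreachable server surfaces as an error
    }
    return Math.min(times * 2, 2000);
  }
});

// Fail fast at startup if Redis is unreachable.
redis.connect().catch(function (err) {
  console.error('Could not connect to Redis at startup:', err);
  process.exit(1);
});

// Later connection problems are emitted as 'error' events.
redis.on('error', function (err) {
  console.error('Redis error:', err);
});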
