
rabbitmq / discussions


Please use the RabbitMQ mailing list for questions. Issues that are questions, discussions, or lack the details necessary to investigate them are moved to this repository.

discussions's People

Contributors

deadtrickster, johanrhodin, michaelklishin


discussions's Issues

How do I find out which client consumes a queue

Hi, can anyone help me figure out which client (consumer) is consuming a queue?
Is there something like a SignalR connectionId for each client? The reason I ask is that I need to send messages to a specific client on a particular queue. How do I achieve this? I believe there should be a connectionId for each client. My application is a C# .NET Core project.
This is my architecture: (architecture diagram omitted)

However, my task is to find out who sent the message and respond to that client.
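As far as AMQP 0-9-1 goes, there is no built-in per-client address visible to other clients; the usual approach to "respond to the sender" is the request/reply (RPC) pattern: the requester sets a correlationId and a replyTo queue on each message, and the consumer publishes its answer back to that queue with the same correlationId. A minimal pure-Java sketch of the client-side correlation bookkeeping (no broker I/O; class and payload names are illustrative):

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the requester-side bookkeeping used in the RPC pattern:
// each outgoing request gets a unique correlationId, and the reply
// consumer matches responses back to the original request.
public class RpcCorrelation {
    private final Map<String, String> pending = new ConcurrentHashMap<>();

    // Called before publishing: remember which request this id belongs to.
    public String register(String requestPayload) {
        String corrId = UUID.randomUUID().toString();
        pending.put(corrId, requestPayload);
        return corrId;
    }

    // Called from the reply consumer: look up and complete the request.
    public String complete(String corrId) {
        return pending.remove(corrId);
    }

    public static void main(String[] args) {
        RpcCorrelation rpc = new RpcCorrelation();
        String id = rpc.register("get-user-42");
        System.out.println(rpc.complete(id)); // prints "get-user-42"
    }
}
```

With the real Java client, the requester would set these fields via `new AMQP.BasicProperties.Builder().correlationId(corrId).replyTo(replyQueue).build()` when publishing, and the consumer would read them from the delivery's properties.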

Backup/restore definitions to different nodes of cluster

Currently, when I export the definitions of a RabbitMQ cluster and import them into another cluster, all the queues end up on the same node after importing. I assume this is by design? But it seems a bit unreasonable, especially when the original queues were balanced across the cluster:

(screenshot of the balanced original cluster omitted)

In the restored cluster (screenshot omitted), all the queues are concentrated on the same node.

Issue reaching port 15670

I'm having trouble reaching the examples port on Rancher in combination with RabbitMQ HA.

Has anyone done this?

MQTT: client ID Raft machine fails with a "missing_segment_header" during plugin activation

Hi,

With the latest update the service doesn't start. I did some tests, and the problem seems related to the MQTT plugin. If I disable all plugins, the service starts flawlessly:

2019-11-11 13:59:19.424 [info] <0.8.0> Log file opened with Lager
2019-11-11 13:59:24.922 [info] <0.8.0> Feature flags: list of feature flags found:
2019-11-11 13:59:24.922 [info] <0.8.0> Feature flags:   [ ] implicit_default_bindings
2019-11-11 13:59:24.922 [info] <0.8.0> Feature flags:   [ ] quorum_queue
2019-11-11 13:59:24.923 [info] <0.8.0> Feature flags:   [ ] virtual_host_metadata
2019-11-11 13:59:24.923 [info] <0.8.0> Feature flags: feature flag states written to disk: yes
2019-11-11 13:59:31.195 [info] <0.296.0> ra: meta data store initialised. 1 record(s) recovered
2019-11-11 13:59:31.196 [info] <0.301.0> WAL: recovering ["/var/lib/rabbitmq/mnesia/rabbit@ip-172-31-10-228/quorum/rabbit@ip-172-31-10-228/00000108.wal"]
2019-11-11 13:59:31.200 [info] <0.305.0> 
 Starting RabbitMQ 3.8.1 on Erlang 22.1.7
 Copyright (c) 2007-2019 Pivotal Software, Inc.
 Licensed under the MPL 1.1. Website: https://rabbitmq.com
2019-11-11 13:59:31.203 [info] <0.305.0> 
 node           : rabbit@ip-172-31-10-228
 home dir       : /var/lib/rabbitmq
 config file(s) : /etc/rabbitmq/rabbitmq.conf
 cookie hash    : VcOVvu2ZTjHbYbayr0NOdg==
 log(s)         : /var/log/rabbitmq/[email protected]
                : /var/log/rabbitmq/rabbit@ip-172-31-10-228_upgrade.log
 database dir   : /var/lib/rabbitmq/mnesia/rabbit@ip-172-31-10-228
2019-11-11 13:59:31.232 [info] <0.305.0> Running boot step pre_boot defined by app rabbit
2019-11-11 13:59:31.232 [info] <0.305.0> Running boot step rabbit_core_metrics defined by app rabbit
2019-11-11 13:59:31.233 [info] <0.305.0> Running boot step rabbit_alarm defined by app rabbit
2019-11-11 13:59:31.250 [info] <0.320.0> Memory high watermark set to 782 MiB (820319027 bytes) of 1955 MiB (2050797568 bytes) total
2019-11-11 13:59:31.279 [info] <0.336.0> Enabling free disk space monitoring
2019-11-11 13:59:31.280 [info] <0.336.0> Disk free limit set to 50MB
2019-11-11 13:59:31.285 [info] <0.305.0> Running boot step code_server_cache defined by app rabbit
2019-11-11 13:59:31.285 [info] <0.305.0> Running boot step file_handle_cache defined by app rabbit
2019-11-11 13:59:31.286 [info] <0.347.0> FHC read buffering:  OFF
2019-11-11 13:59:31.286 [info] <0.347.0> FHC write buffering: ON
2019-11-11 13:59:31.286 [info] <0.346.0> Limiting to approx 32671 file handles (29401 sockets)
2019-11-11 13:59:31.293 [info] <0.305.0> Running boot step worker_pool defined by app rabbit
2019-11-11 13:59:31.294 [info] <0.306.0> Will use 2 processes for default worker pool
2019-11-11 13:59:31.294 [info] <0.306.0> Starting worker pool 'worker_pool' with 2 processes in it
2019-11-11 13:59:31.294 [info] <0.305.0> Running boot step database defined by app rabbit
2019-11-11 13:59:37.296 [info] <0.305.0> Waiting for Mnesia tables for 30000 ms, 9 retries left
2019-11-11 13:59:37.296 [info] <0.305.0> Waiting for Mnesia tables for 30000 ms, 9 retries left
2019-11-11 13:59:37.320 [info] <0.305.0> Waiting for Mnesia tables for 30000 ms, 9 retries left
2019-11-11 13:59:37.321 [info] <0.305.0> Peer discovery backend rabbit_peer_discovery_classic_config does not support registration, skipping registration.
2019-11-11 13:59:37.321 [info] <0.305.0> Running boot step database_sync defined by app rabbit
2019-11-11 13:59:37.321 [info] <0.305.0> Running boot step feature_flags defined by app rabbit
2019-11-11 13:59:37.321 [info] <0.305.0> Running boot step codec_correctness_check defined by app rabbit
2019-11-11 13:59:37.321 [info] <0.305.0> Running boot step external_infrastructure defined by app rabbit
2019-11-11 13:59:37.321 [info] <0.305.0> Running boot step rabbit_registry defined by app rabbit
2019-11-11 13:59:37.321 [info] <0.305.0> Running boot step rabbit_auth_mechanism_cr_demo defined by app rabbit
2019-11-11 13:59:37.321 [info] <0.305.0> Running boot step rabbit_queue_location_random defined by app rabbit
2019-11-11 13:59:37.321 [info] <0.305.0> Running boot step rabbit_event defined by app rabbit
2019-11-11 13:59:37.321 [info] <0.305.0> Running boot step rabbit_auth_mechanism_amqplain defined by app rabbit
2019-11-11 13:59:37.322 [info] <0.305.0> Running boot step rabbit_auth_mechanism_plain defined by app rabbit
2019-11-11 13:59:37.322 [info] <0.305.0> Running boot step rabbit_exchange_type_direct defined by app rabbit
2019-11-11 13:59:37.322 [info] <0.305.0> Running boot step rabbit_exchange_type_fanout defined by app rabbit
2019-11-11 13:59:37.322 [info] <0.305.0> Running boot step rabbit_exchange_type_headers defined by app rabbit
2019-11-11 13:59:37.322 [info] <0.305.0> Running boot step rabbit_exchange_type_topic defined by app rabbit
2019-11-11 13:59:37.322 [info] <0.305.0> Running boot step rabbit_mirror_queue_mode_all defined by app rabbit
2019-11-11 13:59:37.322 [info] <0.305.0> Running boot step rabbit_mirror_queue_mode_exactly defined by app rabbit
2019-11-11 13:59:37.322 [info] <0.305.0> Running boot step rabbit_mirror_queue_mode_nodes defined by app rabbit
2019-11-11 13:59:37.322 [info] <0.305.0> Running boot step rabbit_priority_queue defined by app rabbit
2019-11-11 13:59:37.322 [info] <0.305.0> Priority queues enabled, real BQ is rabbit_variable_queue
2019-11-11 13:59:37.322 [info] <0.305.0> Running boot step rabbit_queue_location_client_local defined by app rabbit
2019-11-11 13:59:37.323 [info] <0.305.0> Running boot step rabbit_queue_location_min_masters defined by app rabbit
2019-11-11 13:59:37.323 [info] <0.305.0> Running boot step kernel_ready defined by app rabbit
2019-11-11 13:59:37.323 [info] <0.305.0> Running boot step rabbit_sysmon_minder defined by app rabbit
2019-11-11 13:59:37.323 [info] <0.305.0> Running boot step rabbit_epmd_monitor defined by app rabbit
2019-11-11 13:59:37.327 [info] <0.378.0> epmd monitor knows us, inter-node communication (distribution) port: 25672
2019-11-11 13:59:37.328 [info] <0.305.0> Running boot step guid_generator defined by app rabbit
2019-11-11 13:59:37.333 [info] <0.305.0> Running boot step rabbit_node_monitor defined by app rabbit
2019-11-11 13:59:37.333 [info] <0.382.0> Starting rabbit_node_monitor
2019-11-11 13:59:37.334 [info] <0.305.0> Running boot step delegate_sup defined by app rabbit
2019-11-11 13:59:37.335 [info] <0.305.0> Running boot step rabbit_memory_monitor defined by app rabbit
2019-11-11 13:59:37.335 [info] <0.305.0> Running boot step core_initialized defined by app rabbit
2019-11-11 13:59:37.335 [info] <0.305.0> Running boot step upgrade_queues defined by app rabbit
2019-11-11 13:59:37.358 [info] <0.305.0> Running boot step rabbit_connection_tracking defined by app rabbit
2019-11-11 13:59:37.358 [info] <0.305.0> Running boot step rabbit_connection_tracking_handler defined by app rabbit
2019-11-11 13:59:37.358 [info] <0.305.0> Running boot step rabbit_exchange_parameters defined by app rabbit
2019-11-11 13:59:37.358 [info] <0.305.0> Running boot step rabbit_mirror_queue_misc defined by app rabbit
2019-11-11 13:59:37.359 [info] <0.305.0> Running boot step rabbit_policies defined by app rabbit
2019-11-11 13:59:37.360 [info] <0.305.0> Running boot step rabbit_policy defined by app rabbit
2019-11-11 13:59:37.360 [info] <0.305.0> Running boot step rabbit_queue_location_validator defined by app rabbit
2019-11-11 13:59:37.360 [info] <0.305.0> Running boot step rabbit_quorum_memory_manager defined by app rabbit
2019-11-11 13:59:37.360 [info] <0.305.0> Running boot step rabbit_vhost_limit defined by app rabbit
2019-11-11 13:59:37.360 [info] <0.305.0> Running boot step recovery defined by app rabbit
2019-11-11 13:59:37.364 [info] <0.419.0> Making sure data directory '/var/lib/rabbitmq/mnesia/rabbit@ip-172-31-10-228/msg_stores/vhosts/6L19J0ZI9130CN2MP4A516L7S' for vhost '/vanilla' exists
2019-11-11 13:59:37.370 [info] <0.419.0> Starting message stores for vhost '/vanilla'
2019-11-11 13:59:37.371 [info] <0.423.0> Message store "6L19J0ZI9130CN2MP4A516L7S/msg_store_transient": using rabbit_msg_store_ets_index to provide index
2019-11-11 13:59:37.376 [info] <0.419.0> Started message store of type transient for vhost '/vanilla'
2019-11-11 13:59:37.377 [info] <0.426.0> Message store "6L19J0ZI9130CN2MP4A516L7S/msg_store_persistent": using rabbit_msg_store_ets_index to provide index
2019-11-11 13:59:37.381 [info] <0.419.0> Started message store of type persistent for vhost '/vanilla'
2019-11-11 13:59:37.409 [info] <0.485.0> Making sure data directory '/var/lib/rabbitmq/mnesia/rabbit@ip-172-31-10-228/msg_stores/vhosts/628WB79CIFDYO9LJI6DKMI09L' for vhost '/' exists
2019-11-11 13:59:37.413 [info] <0.485.0> Starting message stores for vhost '/'
2019-11-11 13:59:37.414 [info] <0.489.0> Message store "628WB79CIFDYO9LJI6DKMI09L/msg_store_transient": using rabbit_msg_store_ets_index to provide index
2019-11-11 13:59:37.419 [info] <0.485.0> Started message store of type transient for vhost '/'
2019-11-11 13:59:37.420 [info] <0.492.0> Message store "628WB79CIFDYO9LJI6DKMI09L/msg_store_persistent": using rabbit_msg_store_ets_index to provide index
2019-11-11 13:59:37.424 [info] <0.485.0> Started message store of type persistent for vhost '/'
2019-11-11 13:59:37.663 [info] <0.305.0> Running boot step empty_db_check defined by app rabbit
2019-11-11 13:59:37.663 [info] <0.305.0> Running boot step rabbit_looking_glass defined by app rabbit
2019-11-11 13:59:37.663 [info] <0.305.0> Running boot step rabbit_core_metrics_gc defined by app rabbit
2019-11-11 13:59:37.663 [info] <0.305.0> Running boot step background_gc defined by app rabbit
2019-11-11 13:59:37.663 [info] <0.305.0> Running boot step connection_tracking defined by app rabbit
2019-11-11 13:59:37.663 [info] <0.305.0> Setting up a table for connection tracking on this node: 'tracked_connection_on_node_rabbit@ip-172-31-10-228'
2019-11-11 13:59:37.664 [info] <0.305.0> Setting up a table for per-vhost connection counting on this node: 'tracked_connection_per_vhost_on_node_rabbit@ip-172-31-10-228'
2019-11-11 13:59:37.664 [info] <0.305.0> Running boot step routing_ready defined by app rabbit
2019-11-11 13:59:37.664 [info] <0.305.0> Running boot step pre_flight defined by app rabbit
2019-11-11 13:59:37.664 [info] <0.305.0> Running boot step notify_cluster defined by app rabbit
2019-11-11 13:59:37.664 [info] <0.305.0> Running boot step networking defined by app rabbit
2019-11-11 13:59:37.671 [info] <0.569.0> started TCP listener on [::]:5672
2019-11-11 13:59:37.671 [info] <0.305.0> Running boot step cluster_name defined by app rabbit
2019-11-11 13:59:37.671 [info] <0.305.0> Running boot step direct_client defined by app rabbit
2019-11-11 13:59:38.772 [notice] <0.107.0> Changed loghwm of /var/log/rabbitmq/[email protected] to 50
2019-11-11 13:59:39.327 [info] <0.8.0> Server startup complete; 0 plugins started.

But once I enable the mqtt plugin it crashes:

2019-11-11 14:03:43.328 [info] <0.839.0> MQTT retained message store: rabbit_mqtt_retained_msg_store_dets
2019-11-11 14:03:43.344 [info] <0.860.0> started MQTT TCP listener on [::]:1883
2019-11-11 14:03:59.394 [error] <0.866.0> ** State machine mqtt_node terminating
** When server state  = {undefined,"ra_server_proc:format_status/2 crashed"}
** Reason for termination = error:{badmatch,{error,missing_segment_header}}
** Callback mode = undefined
** Stacktrace =
**  [{ra_log,'-recover_range/2-fun-1-',2,[{file,"src/ra_log.erl"},{line,1034}]},{ra_log,'-recover_range/2-lists^foldl/2-1-',3,[{file,"src/ra_log.erl"},{line,1032}]},{ra_log,recover_range,2,[{file,"src/ra_log.erl"},{line,1032}]},{ra_log,init,1,[{file,"src/ra_log.erl"},{line,143}]},{ra_server,init,1,[{file,"src/ra_server.erl"},{line,234}]},{ra_server_proc,init,1,[{file,"src/ra_server_proc.erl"},{line,237}]},{gen_statem,init_it,6,[{file,"gen_statem.erl"},{line,714}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,249}]}]
2019-11-11 14:03:59.395 [error] <0.866.0> CRASH REPORT Process <0.866.0> with 0 neighbours crashed with reason: no match of right hand value {error,missing_segment_header} in ra_log:'-recover_range/2-fun-1-'/2 line 1034
2019-11-11 14:03:59.395 [error] <0.865.0> Supervisor {<0.865.0>,ra_server_sup} had child mqtt_node started with ra_server_proc:start_link(#{await_condition_timeout => 30000,broadcast_time => 100,cluster_name => mqtt_node,friendly_name => ...,...}) at undefined exit with reason {badmatch,{error,missing_segment_header}} in context start_error
2019-11-11 14:03:59.396 [error] <0.836.0> CRASH REPORT Process <0.836.0> with 0 neighbours exited with reason: no match of right hand value {error,{shutdown,{failed_to_start_child,mqtt_node,{badmatch,{error,missing_segment_header}}}}} in rabbit_mqtt:start/2 line 29 in application_master:init/4 line 138
2019-11-11 14:03:59.396 [info] <0.43.0> Application rabbitmq_mqtt exited with reason: no match of right hand value {error,{shutdown,{failed_to_start_child,mqtt_node,{badmatch,{error,missing_segment_header}}}}} in rabbit_mqtt:start/2 line 29
2019-11-11 14:03:59.396 [info] <0.860.0> stopped MQTT TCP listener on [::]:1883
2019-11-11 14:03:59.396 [info] <0.43.0> Application amqp_client exited with reason: stopped

To be sure that it's not a configuration problem, I commented out all the MQTT configuration in the rabbitmq.conf file.

Any solution? Thanks

Cannot declare a delayed message exchange with 3.8.1

Environment: Windows 10, RabbitMQ 3.8.1
F:\rabbitmq\rabbitmq_server-3.8.1\sbin>rabbitmq-plugins enable rabbitmq_delayed_message_exchange
Enabling plugins on node rabbit@USERCHI-1H0BT4J:
rabbitmq_delayed_message_exchange
The following plugins have been configured:
rabbitmq_delayed_message_exchange
rabbitmq_management
rabbitmq_management_agent
rabbitmq_web_dispatch
Applying plugin configuration to rabbit@USERCHI-1H0BT4J...
Plugin configuration unchanged.

But when I add an exchange in the web management UI, I get this error:
Invalid argument, 'x-delayed-type' must be an existing exchange type
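For reference, the x-delayed-message exchange type provided by this plugin requires an x-delayed-type argument naming an existing exchange type (e.g. direct, topic, fanout, headers); the error above typically appears when that argument is missing or names a type that doesn't exist. A sketch of building the declaration arguments (the exchange name in the comment is illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class DelayedExchangeArgs {
    // Arguments for declaring a delayed-message exchange: the plugin
    // wraps an ordinary exchange whose type is given by "x-delayed-type".
    public static Map<String, Object> build(String underlyingType) {
        Map<String, Object> args = new HashMap<>();
        args.put("x-delayed-type", underlyingType); // e.g. "direct", "topic"
        return args;
    }

    public static void main(String[] args) {
        // With the real client this would be roughly:
        // channel.exchangeDeclare("my-delayed", "x-delayed-message",
        //                         true, false, build("direct"));
        System.out.println(build("direct").get("x-delayed-type")); // prints "direct"
    }
}
```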

Block processing messages when receiving a low-level error

RabbitMQ 3.6.12, Erlang 20.3
amqp-client: 5.7.3

When I consume a broken message from a queue I get an error: "com.rabbitmq.client.MalformedFrameException: Unrecognised type in table". The only way to handle this situation is to break the connection, after which the message is returned to the queue. There is currently no way to configure the behavior, e.g. to route such a message to a dead letter exchange or simply drop it from the queue. This blocks further message processing until we manually delete the broken message.
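A dead-letter exchange does not help with a frame-level parse error like this one (the connection fails before the application ever sees the delivery), but for messages the application can receive and then reject, the standard mechanism for shunting them aside is to declare the queue with dead-letter arguments and reject bad deliveries with requeue=false. A sketch of the queue arguments (exchange and routing-key names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class DeadLetterArgs {
    // Queue arguments: messages rejected with requeue=false (or expired)
    // are republished to the configured dead-letter exchange.
    public static Map<String, Object> build(String dlx, String dlRoutingKey) {
        Map<String, Object> args = new HashMap<>();
        args.put("x-dead-letter-exchange", dlx);
        args.put("x-dead-letter-routing-key", dlRoutingKey);
        return args;
    }

    public static void main(String[] args) {
        // With the real client this would be roughly:
        // channel.queueDeclare("work", true, false, false, build("dlx", "failed"));
        // and in the consumer, for an unprocessable message:
        // channel.basicNack(deliveryTag, false, false); // requeue=false -> dead-lettered
        System.out.println(build("dlx", "failed"));
    }
}
```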

Java tutorial two: acknowledgement does not have the desired effect?

I used Java to test a connection to RabbitMQ with two consumers. One of the consumers received a message but did not successfully consume it (it never acknowledged). I expected RabbitMQ to redeliver this failed message to the other consumer.

See the official documentation on message acknowledgment: https://www.rabbitmq.com/tutorials/tutorial-two-java.html

I ran the example, but it did not produce the expected results.

The code is shown below:

  • worker1
package com.gp.rabbitmq;

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;

/**
 * @author gao peng
 * @date 2019/11/15 15:08
 */
public class WorkerDemo {

  private static final String TASK_QUEUE_NAME = "task_queue";

  public static void main(String[] argv) throws Exception {
    ConnectionFactory factory = new ConnectionFactory();
    factory.setHost(GPHost.ip);
    final Connection connection = factory.newConnection();
    final Channel channel = connection.createChannel();

    channel.queueDeclare(TASK_QUEUE_NAME, true, false, false, null);
    System.out.println(" [*] Waiting for messages. To exit press CTRL+C");

//    channel.basicQos(1);

    DeliverCallback deliverCallback = (consumerTag, delivery) -> {
      String message = new String(delivery.getBody(), "UTF-8");

      System.out.println(" [x] Received '" + message + "'");
      try {
        doWork(message);
      } finally {
//        System.out.println(" [x] Done");
//        channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
      }
    };
    channel.basicConsume(TASK_QUEUE_NAME, false, deliverCallback, consumerTag -> {});
  }

  private static void doWork(String task) {
    for (char ch : task.toCharArray()) {
      if (ch == '.') {
        try {
          Thread.sleep(3000);
        } catch (InterruptedException _ignored) {
          Thread.currentThread().interrupt();
        }
      }
    }
  }
}
  • worker2
package com.gp.rabbitmq;

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;

/**
 * @author gao peng
 * @date 2019/11/15 15:08
 */
public class WorkerDemo2 {

  private static final String TASK_QUEUE_NAME = "task_queue";

  public static void main(String[] argv) throws Exception {
    ConnectionFactory factory = new ConnectionFactory();
    factory.setHost(GPHost.ip);
    final Connection connection = factory.newConnection();
    final Channel channel = connection.createChannel();

    channel.queueDeclare(TASK_QUEUE_NAME, true, false, false, null);
    System.out.println(" [*] Waiting for messages. To exit press CTRL+C");

//    channel.basicQos(1);

    DeliverCallback deliverCallback = (consumerTag, delivery) -> {
      String message = new String(delivery.getBody(), "UTF-8");

      System.out.println(" [x] Received '" + message + "'");
      try {
        doWork(message);
      } finally {
        System.out.println(" [x] Done");
        channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
      }
    };
    channel.basicConsume(TASK_QUEUE_NAME, false, deliverCallback, consumerTag -> {});
  }

  private static void doWork(String task) {
    for (char ch : task.toCharArray()) {
      if (ch == '.') {
        try {
          Thread.sleep(3000);
        } catch (InterruptedException _ignored) {
          Thread.currentThread().interrupt();
        }
      }
    }
  }
}
  • sender
package com.gp.rabbitmq;

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

import java.nio.charset.StandardCharsets;

/**
 * @author gao peng
 * @date 2019/11/15 14:49
 */
public class SendingDemo {

  private final static String QUEUE_NAME = "task_queue";

  public static void main(String[] argv) throws Exception {
    ConnectionFactory factory = new ConnectionFactory();
    factory.setHost("192.168.9.214");

    Connection connection = factory.newConnection();
    Channel channel = connection.createChannel();

    channel.queueDeclare(QUEUE_NAME, true, false, false, null);

    String message = "Hello World BEijing!1";
    channel.basicPublish("", QUEUE_NAME, null, message.getBytes(StandardCharsets.UTF_8));
    String message1 = "Hello World BEijing!2";
    channel.basicPublish("", QUEUE_NAME, null, message1.getBytes(StandardCharsets.UTF_8));
    String message2 = "Hello World BEijing!3";
    channel.basicPublish("", QUEUE_NAME, null, message2.getBytes(StandardCharsets.UTF_8));

    channel.close();
    connection.close();
  }
}

Output result:

  • worker1
 [x] Received 'Hello World BEijing!2'
  • worker2
 [x] Received 'Hello World BEijing!1'
 [x] Done
 [x] Received 'Hello World BEijing!3'
 [x] Done

worker1 never calls channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false), yet RabbitMQ does not redeliver its message to the other worker. (Note that unacknowledged deliveries are only requeued and redelivered once the consumer's channel or connection closes, not while the consumer process is still running.)

Can regular expressions be supported, such as a|b?

Thank you for using RabbitMQ.

STOP NOW AND READ THIS BEFORE OPENING A NEW ISSUE ON GITHUB

Unless you are CERTAIN you have found a reproducible problem in RabbitMQ or
have a specific, actionable suggestion for our team, you must first ask
your question or discuss your suspected issue on the mailing list:

https://groups.google.com/forum/#!forum/rabbitmq-users

Team RabbitMQ does not use GitHub issues for discussions, investigations, root
cause analysis and so on.

Please take the time to read the CONTRIBUTING.md document for instructions on
how to effectively ask a question or report a suspected issue:

https://github.com/rabbitmq/rabbitmq-server/blob/master/CONTRIBUTING.md#github-issues

Following these rules will save time for both you and RabbitMQ's maintainers.

Thank you.

MQTT: error in the log

I have an error in RabbitMQ.

OS: Windows Server 2019
RabbitMQ version: 3.7.23
Erlang version: 22.0
RabbitMQ server and client application log:

2019-12-11 17:35:15 =ERROR REPORT====
** Generic server <0.6073.243> terminating
** Last message in was {tcp,#Port<0.742653>,<<16,114,0,4,77,81,84,84,4,192,1,44,0,17,68,95,51,53,56,57,56,49,49,48,48,48,51,49,48,53,50,0,17,68,95,51,53,56,57,56,49,49,48,48,48,51,49,48,53,50,0,64,49,103,112,49,49,54,122,56,50,81,43,81,74,48,110,77,97,68,48,71,85,81,55,55,98,79,122,81,86,110,70,120,78,112,116,49,84,80,68,107,76,47,121,70,119,82,70,69,68,102,112,116,53,111,77,115,48,97,69,56,114,80,113,106>>}
** When Server state == {state,#Port<0.742653>,"xxx.58.47.9:60857 -> xxx.31.19.xxx:1883",true,undefined,false,running,{none,none},<0.30039.247>,false,none,{proc_state,#Port<0.742653>,#{},{undefined,undefined},{0,nil},{0,nil},undefined,1,undefined,undefined,undefined,{undefined,undefined},undefined,<<"amq.topic">>,{amqp_adapter_info,{172,31,19,134},1883,{172,58,47,9},60857,<<"172.58.47.9:60857 -> 172.31.19.134:1883">>,{'MQTT',"N/A"},[{channels,1},{channel_max,1},{frame_max,0},{client_properties,[{<<"product">>,longstr,<<"MQTT client">>}]},{ssl,false}]},none,undefined,undefined,#Fun<rabbit_mqtt_processor.0.18620547>,{172,58,47,9},#Fun<rabbit_mqtt_util.4.62058906>,#Fun<rabbit_mqtt_util.5.62058906>},undefined,{state,fine,5000,undefined}}
** Reason for termination == 
** {{badmatch,{error,einval}},[{rabbit_mqtt_processor,process_login,4,[{file,"src/rabbit_mqtt_processor.erl"},{line,533}]},{rabbit_mqtt_processor,process_request,3,[{file,"src/rabbit_mqtt_processor.erl"},{line,137}]},{rabbit_mqtt_processor,process_frame,2,[{file,"src/rabbit_mqtt_processor.erl"},{line,77}]},{rabbit_mqtt_reader,process_received_bytes,2,[{file,"src/rabbit_mqtt_reader.erl"},{line,286}]},{gen_server2,handle_msg,2,[{file,"src/gen_server2.erl"},{line,1067}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,249}]}]}
2019-12-11 17:35:15 =CRASH REPORT====
  crasher:
    initial call: rabbit_mqtt_reader:init/1
    pid: <0.6073.243>
    registered_name: []
    exception exit: {{{badmatch,{error,einval}},[{rabbit_mqtt_processor,process_login,4,[{file,"src/rabbit_mqtt_processor.erl"},{line,533}]},{rabbit_mqtt_processor,process_request,3,[{file,"src/rabbit_mqtt_processor.erl"},{line,137}]},{rabbit_mqtt_processor,process_frame,2,[{file,"src/rabbit_mqtt_processor.erl"},{line,77}]},{rabbit_mqtt_reader,process_received_bytes,2,[{file,"src/rabbit_mqtt_reader.erl"},{line,286}]},{gen_server2,handle_msg,2,[{file,"src/gen_server2.erl"},{line,1067}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,249}]}]},[{gen_server2,terminate,3,[{file,"src/gen_server2.erl"},{line,1183}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,249}]}]}
    ancestors: [<0.16309.235>,<0.29707.6>,<0.29705.6>,<0.29703.6>,rabbit_mqtt_sup,<0.29676.6>]
    message_queue_len: 1
    messages: [{tcp_closed,#Port<0.742653>}]
    links: [<0.16309.235>]
    dictionary: [{rand_seed,{#{jump => #Fun<rand.24.53802439>,max => 288230376151711743,next => #Fun<rand.23.53802439>,type => exsplus},[185523379786012086|162008706961358891]}}]
    trap_exit: true
    status: running
    heap_size: 1598
    stack_size: 27
    reductions: 5392
  neighbours:

rabbitmq-plugins list:
[ ] rabbitmq_amqp1_0 3.7.23
[E*] rabbitmq_auth_backend_cache 3.7.23
[E*] rabbitmq_auth_backend_http 3.7.23
[ ] rabbitmq_auth_backend_ldap 3.7.23
[ ] rabbitmq_auth_mechanism_ssl 3.7.23
[ ] rabbitmq_consistent_hash_exchange 3.7.23
[E*] rabbitmq_event_exchange 3.7.23
[ ] rabbitmq_federation 3.7.23
[ ] rabbitmq_federation_management 3.7.23
[ ] rabbitmq_jms_topic_exchange 3.7.23
[E*] rabbitmq_management 3.7.23
[e*] rabbitmq_management_agent 3.7.23
[E*] rabbitmq_mqtt 3.7.23
[ ] rabbitmq_peer_discovery_aws 3.7.23
[ ] rabbitmq_peer_discovery_common 3.7.23
[ ] rabbitmq_peer_discovery_consul 3.7.23
[ ] rabbitmq_peer_discovery_etcd 3.7.23
[ ] rabbitmq_peer_discovery_k8s 3.7.23
[ ] rabbitmq_random_exchange 3.7.23
[ ] rabbitmq_recent_history_exchange 3.7.23
[ ] rabbitmq_sharding 3.7.23
[ ] rabbitmq_shovel 3.7.23
[ ] rabbitmq_shovel_management 3.7.23
[ ] rabbitmq_stomp 3.7.23
[ ] rabbitmq_top 3.7.23
[ ] rabbitmq_tracing 3.7.23
[ ] rabbitmq_trust_store 3.7.23
[e*] rabbitmq_web_dispatch 3.7.23
[E*] rabbitmq_web_mqtt 3.7.23
[ ] rabbitmq_web_mqtt_examples 3.7.23
[ ] rabbitmq_web_stomp 3.7.23
[ ] rabbitmq_web_stomp_examples 3.7.23

Client library:
Java, C#, Node, and more.


Sharding plugin: sharding only to local shards?

Is it possible to configure this plugin to only route messages to local bound shards?

For an example:
For example: I have 2 shards per node with 3 nodes, so the following shards will be created:

  • sharding: shard.test- rabbit@rabbit-01 - 0
  • sharding: shard.test- rabbit@rabbit-01 - 1
  • sharding: shard.test- rabbit@rabbit-02 - 0
  • sharding: shard.test- rabbit@rabbit-02 - 1
  • sharding: shard.test- rabbit@rabbit-03 - 0
  • sharding: shard.test- rabbit@rabbit-03 - 1

Is it possible to have this plugin shard the message only to the locally bound queues?

If I publish a message while connected to rabbit-01, I would like that message to be sharded to either sharding: shard.test- rabbit@rabbit-01 - 0 or sharding: shard.test- rabbit@rabbit-01 - 1, because the contents of those queues would be local to my connection, so there would be no need for additional network communication in the cluster.
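I'm not aware of a plugin option for local-only routing. As a publisher-side workaround, since the shard queues follow a predictable naming pattern, one could bypass the sharded exchange and publish directly to a locally named shard queue. A pure-Java sketch of picking a local shard by routing-key hash (the name format and separator spacing are illustrative, not the plugin's API):

```java
public class LocalShardPicker {
    // Illustrative only: choose one of this node's local shard queues
    // for a message, distributing by the routing key's hash. Queue names
    // mimic the plugin's "sharding: <name> - <node> - <index>" pattern.
    public static String pick(String shardName, String node,
                              int shardsPerNode, String routingKey) {
        // floorMod keeps the index non-negative for any hashCode value
        int index = Math.floorMod(routingKey.hashCode(), shardsPerNode);
        return "sharding: " + shardName + " - " + node + " - " + index;
    }

    public static void main(String[] args) {
        // Publish to this queue name via the default exchange to keep
        // the message on the node the publisher is connected to.
        System.out.println(pick("shard.test", "rabbit@rabbit-01", 2, "order-123"));
    }
}
```

The trade-off is that the publisher must know which node it is connected to, and rebalancing no longer happens automatically if the shard count changes.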

Support for SAC definitions in policy

Hi,

Are there plans to support the single-active-consumer flag of a queue in a policy? This does not seem to be possible in 3.8.2 (via admin UI or API). I cannot see a reason not to allow it to be defined in a policy, but maybe I am overlooking something? FWIW, I couldn't find anything related to this in rabbitmq/rabbitmq-server#1802 and related issues.

Status error in my running mirror cluster with two nodes


MQTT due to an internal error or unavailable component

The cluster has 6 nodes. If 1 or 2 nodes are down, MQTT clients can still connect. When 3 or more nodes are down, MQTT clients can no longer connect. With AMQP, however, clients can connect even when only one node survives. We expected MQTT to also accept connections as long as any node is alive. The server-side error is as below:

2019-11-12 15:36:33.698 [error] <0.15599.2> MQTT cannot accept a connection: client ID registration timed out
2019-11-12 15:36:33.698 [error] <0.15599.2> MQTT cannot accept connection x.x.x.x:50528 -> x.x.x.x:1883 due to an internal error or unavailable component

RabbitMQ version:3.8.0
Erlang version:Erlang 22.1.4
I want to know why MQTT clients can't connect when 3 or more nodes are down. Looking forward to your suggestions. Thank you very much.

Peer discovery fails to contact a node

2017-12-18 12:23:02.480 [info] <0.184.0> Peer discovery backend rabbit_peer_discovery_dns does not support registration, skipping randomized startup delay.
2017-12-18 12:23:02.481 [info] <0.184.0> Addresses discovered via A records of rabbitmq: 10.0.1.22, 10.0.1.7, 10.0.1.12
2017-12-18 12:23:02.484 [info] <0.184.0> Addresses discovered via AAAA records of rabbitmq:
2017-12-18 12:23:02.484 [info] <0.184.0> All discovered existing cluster peers: rabbit@bridge-apps-test_rabbitmq.3.txyqdhq5tag4ypvqxmrrv7xux.bridge-apps-test_bridgeoverlay, [email protected]_bridgeoverlay, rabbit@b133becbbdce
2017-12-18 12:23:02.484 [info] <0.184.0> Peer nodes we can cluster with: rabbit@bridge-apps-test_rabbitmq.3.txyqdhq5tag4ypvqxmrrv7xux.bridge-apps-test_bridgeoverlay, [email protected]_bridgeoverlay
2017-12-18 12:23:02.490 [warning] <0.184.0> Could not auto-cluster with node rabbit@bridge-apps-test_rabbitmq.3.txyqdhq5tag4ypvqxmrrv7xux.bridge-apps-test_bridgeoverlay: {badrpc,nodedown}
2017-12-18 12:23:02.496 [warning] <0.184.0> Could not auto-cluster with node [email protected]_bridgeoverlay: {badrpc,nodedown}
2017-12-18 12:23:02.496 [warning] <0.184.0> Could not successfully contact any node of: rabbit@bridge-apps-test_rabbitmq.3.txyqdhq5tag4ypvqxmrrv7xux.bridge-apps-test_bridgeoverlay,[email protected]_bridgeoverlay (as in Erlang distribution). Starting as a blank standalone node...

These are the relevant log lines. Peer discovery does succeed, but cluster formation doesn't.

Originally posted by @michaelklishin in rabbitmq/rabbitmq-server#1454 (comment)

MQTT: QoS 1 publishers are not being notified in case of unroutable publishes

Summary

Our team faced an issue in our production environment: after RabbitMQ maintenance, all queues were deleted, yet our client kept publishing messages without any error.
We've prepared a short Pub/Sub demo reproducing it (https://github.com/EdwardSkrobala/MqttIssue).

The publisher and subscriber connect to the broker via WebSockets; the publisher sends a message every 5 seconds with QoS 1 set.
We need the publisher to be notified when it publishes a message that cannot be delivered to a subscriber.

Environment:

Steps to reproduce:

  • Run docker-compose -f docker-compose-brokers.yml up from root folder
  • Run dotnet run from project folder (src/Test/)
  • Trigger management UI - http://localhost:15672/#/queues (username="test", pass="test")
  • Delete queue mqtt-subscription-subscriberqos1 to simulate maintenance on the server.

Code lines showing issue

Message configuration
var message = new MqttApplicationMessageBuilder()
    .WithTopic("MyTopic")
    .WithPayload($"Hello World [{DateTime.Now.ToLongTimeString()}]")
    .WithQualityOfServiceLevel(MQTTnet.Protocol.MqttQualityOfServiceLevel.AtLeastOnce)
    .WithRetainFlag(false)
    .Build();
PublishAsync returns MqttClientPublishResult containing MqttClientPublishReasonCode (https://github.com/chkr1011/MQTTnet/blob/master/Source/MQTTnet/Client/Publishing/MqttClientPublishResult.cs)
var publishResult = await publisherClient.PublishAsync(message)

Expected result:
Exception or MqttClientPublishResult.ReasonCode = MqttClientPublishReasonCode.NoMatchingSubscribers

Actual result:
MqttClientPublishResult.ReasonCode = Success

Thanks in advance for the assistance

Spring Boot: SocketException (Socket closed) after running for a while

Caused by: java.net.SocketException: Socket closed
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:116)
at java.net.SocketOutputStream.write(SocketOutputStream.java:153)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.write(BufferedOutputStream.java:95)
at java.io.DataOutputStream.writeByte(DataOutputStream.java:153)
at com.rabbitmq.client.impl.Frame.writeTo(Frame.java:185)
at com.rabbitmq.client.impl.SocketFrameHandler.writeFrame(SocketFrameHandler.java:171)
at com.rabbitmq.client.impl.AMQConnection.writeFrame(AMQConnection.java:562)
at com.rabbitmq.client.impl.AMQCommand.transmit(AMQCommand.java:117)
at com.rabbitmq.client.impl.AMQChannel.quiescingTransmit(AMQChannel.java:447)
at com.rabbitmq.client.impl.AMQChannel.transmit(AMQChannel.java:423)
at com.rabbitmq.client.impl.ChannelN.basicPublish(ChannelN.java:704)
at com.rabbitmq.client.impl.ChannelN.basicPublish(ChannelN.java:679)
at org.springframework.amqp.rabbit.connection.PublisherCallbackChannelImpl.basicPublish(PublisherCallbackChannelImpl.java:239)
at sun.reflect.GeneratedMethodAccessor334.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory$CachedChannelInvocationHandler.invoke(CachingConnectionFactory.java:1071)
at com.sun.proxy.$Proxy187.basicPublish(Unknown Source)
at org.springframework.amqp.rabbit.core.RabbitTemplate.sendToRabbit(RabbitTemplate.java:2045)
at org.springframework.amqp.rabbit.core.RabbitTemplate.doSend(RabbitTemplate.java:2033)
at org.springframework.amqp.rabbit.core.RabbitTemplate.lambda$send$3(RabbitTemplate.java:876)
at org.springframework.amqp.rabbit.core.RabbitTemplate.doExecute(RabbitTemplate.java:1868)
... 125 more

Java client: TLS support on Android 5.1

  • RabbitMQ version = 3.7.7
  • Erlang version = 20.2.2
  • Client library version = implementation 'com.rabbitmq:amqp-client:4.11.3'

I am trying to use this client library on Android 5.1 and getting a java.lang.NoSuchMethodError when the Android API level is less than 24. Here's the stack trace:

Fatal Exception: java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)
	at com.rabbitmq.client.SocketConfigurators.enableHostnameVerification(SocketConfigurators.java:72)
	at com.rabbitmq.client.SocketConfigurators$2.configure(SocketConfigurators.java:60)
	at com.rabbitmq.client.SocketConfigurators$AbstractSocketConfigurator$1.configure(SocketConfigurators.java:135)
	at com.rabbitmq.client.impl.SocketFrameHandlerFactory.create(SocketFrameHandlerFactory.java:56)
	at com.rabbitmq.client.impl.recovery.RecoveryAwareAMQConnectionFactory.newConnection(RecoveryAwareAMQConnectionFactory.java:61)
	at com.rabbitmq.client.impl.recovery.AutorecoveringConnection.init(AutorecoveringConnection.java:177)
	at com.rabbitmq.client.ConnectionFactory.newConnection(ConnectionFactory.java:1161)
	at com.rabbitmq.client.ConnectionFactory.newConnection(ConnectionFactory.java:1118)
	at com.rabbitmq.client.ConnectionFactory.newConnection(ConnectionFactory.java:1076)
	at com.rabbitmq.client.ConnectionFactory.newConnection(ConnectionFactory.java:1236)
	at com.incidentclear.driver.lib.networking.Rabbit.connect(Rabbit.java:163)
	at com.incidentclear.driver.lib.services.DataService.tick(DataService.java:322)
	at com.incidentclear.driver.lib.services.DataService.access$500(DataService.java:53)
	at com.incidentclear.driver.lib.services.DataService$2.run(DataService.java:212)
	at android.os.Handler.handleCallback(Handler.java:739)
	at android.os.Handler.dispatchMessage(Handler.java:95)
	at android.os.Looper.loop(Looper.java:135)
	at android.os.HandlerThread.run(HandlerThread.java:61)

Looks like SSLParameters#setEndpointIdentificationAlgorithm isn't available on Android 5.1 devices, which you've got on line 58 of /src/main/java/com/rabbitmq/client/SocketConfigurators.java:

sslParameters.setEndpointIdentificationAlgorithm("HTTPS");
           // ^ issue here

Here is my connection factory:

ConnectionFactory connFactory = new ConnectionFactory();
connFactory.setHost(host);
connFactory.setPort(port);
connFactory.setUsername(user);
connFactory.setPassword(pass);
connFactory.setVirtualHost("/");
connFactory.enableHostnameVerification(); // <-- exception thrown here
connFactory.useSslProtocol();
connFactory.setAutomaticRecoveryEnabled(true);
connFactory.setNetworkRecoveryInterval(5000); // retry every 5s

I was able to temporarily suppress the exception with:

...
if ( Build.VERSION.SDK_INT >= Build.VERSION_CODES.N ) {
	connFactory.enableHostnameVerification();
}
...

But I just wanted to bring this to your attention. I was looking for any documentation concerning Android compatibility, without any luck. Maybe I'm just blind.

Anyway, perhaps some API-level documentation for Android could be added, or workarounds developed?
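For what it's worth, a build-flag-free alternative to the Build.VERSION check above is to probe for the method via reflection before enabling hostname verification (a sketch, not an officially supported API-level check):

```java
import javax.net.ssl.SSLParameters;

class HostnameVerificationProbe {
    // True when SSLParameters#setEndpointIdentificationAlgorithm exists on this
    // runtime, i.e. when ConnectionFactory#enableHostnameVerification() is safe
    // to call. On Android the method only appears at API level 24+.
    static boolean hostnameVerificationAvailable() {
        try {
            SSLParameters.class.getMethod("setEndpointIdentificationAlgorithm", String.class);
            return true;
        } catch (NoSuchMethodException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(hostnameVerificationAvailable()); // true on a desktop JDK
    }
}
```

If the probe returns false, skip enableHostnameVerification() (accepting weaker verification on those old devices) rather than crash.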

I should add that I am setting a default SSLContext in other parts of my app for other SSL related things:

String protocol = "TLSv1.2";
SSLContext ctx = SSLContext.getDefault();
String sslProtocol = ctx.getProtocol();
if ( !sslProtocol.equalsIgnoreCase(protocol) ) {
	ctx = SSLContext.getInstance(protocol);
	ctx.init(null, null, null);
	SSLContext.setDefault(ctx);
}

Would that affect rabbit's connection initialization?

Restart RAM node without disc node in RabbitMQ

In RabbitMQ document, Restarting Cluster Nodes, it says

A stopping node picks an online cluster member (only disc nodes will be considered) to sync with after restart. Upon restart the node will try to contact that peer 10 times by default, with 30 second response timeouts. In case the peer becomes available in that time interval, the node successfully starts, syncs what it needs from the peer and keeps going. If the peer does not become available, the restarted node will give up and voluntarily stop.

I think it says "You cannot stop and restart a RAM node if there are no available disk nodes in the cluster".

This is because restarting the RAM node requires one or more disk nodes to sync. If there are no disk nodes in the cluster, you cannot sync the RAM nodes, so it gives up and voluntarily stops. (That's what the document says)

But the result I tried was different from what the document says. Suppose that there are three nodes in the cluster. One disk node and two RAM nodes. Let's say each node is 'disk1', 'ram1' and 'ram2'.

I thought the process would look like this:

  1. stop ram1
  2. stop disk1 -- at this point the cluster is RAM only cluster. ('ram2' node is the only node that's alive)
  3. start ram1 -- it should not be able to start because there are no disk nodes to sync with.

But the result was different than I thought. I was able to start the RAM node on the RAM only cluster without any disk nodes.

Did I misunderstand something?

.NET client: more than one Consumer in the same Channel from the same Queue in parallel

Hi Folks

I am using RabbitMQ in C# (.NET),
and I am not able to have more than one consumer on the same channel consuming from the same queue in parallel.

            for (int x = 0; x < 5; x++)
            {
                Task.Factory.StartNew(() =>
                {
                    lock (channel)
                    {
                        var consumer = new EventingBasicConsumer(channel);
                        consumer.Received += (ch, ea) =>
                        {
                            var body = ea.Body;
                            string message = Encoding.UTF8.GetString(ea.Body);
                            // ... process the message
                            channel.BasicAck(ea.DeliveryTag, true);
                        };
                        channel.BasicQos(0, 50, true);
                        channel.BasicConsume(util.queueName, false, consumer);
                    }
                });
            }

Even when I have multiple consumers, only one of them is processing a message at a time; in other words,

I get 1 active consumer and 4 others idle.

all of them are linked to the same Channel


NOTE: When I am using one Channel per Consumer I do achieve the parallelism.

Can I achieve the same behavior using only one channel for all of them?

regards

Álvaro.

Provide an option to purge n messages.

As of now, the purge option in the Management UI deletes all the messages in the queue. Is it possible to support an option to delete n messages from the queue (from the head or tail)?

Exception in the log when using HTTP-Auth-Backend

Dear all,

I am getting the below crash report when using the rabbitmq-auth-backend-http plugin (along with the rabbitmq-auth-backend-cache plugin).

2019-11-28 17:54:40.056 [error] <0.754.0> STOMP error frame sent:
Message: "Processing error"                                                                                                                                                                                             
Detail: "Processing error"                                                                                                                                                                                              
Server private detail: {{function_clause,[{amqp_gen_connection,terminate,[{{case_clause,{badrpc,{'EXIT',{function_clause,[{rabbit_http_util,quote_plus,[{error,einval},[]],[{file,"src/rabbit_http_util.erl"},{line,183}
]},{rabbit_auth_backend_http,escape,2,[{file,"src/rabbit_auth_backend_http.erl"},{line,163}]},{rabbit_auth_backend_http,'-q/1-lc$^0/1-0-',1,[{file,"src/rabbit_auth_backend_http.erl"},{line,157}]},{rabbit_auth_backend_http,'-q/1-lc$^0/1-0-'
,1,[{file,"src/rabbit_auth_backend_http.erl"},{line,157}]},{rabbit_auth_backend_http,q,1,[{file,"src/rabbit_auth_backend_http.erl"},{line,157}]},{rabbit_auth_backend_http,bool_req,2,[{file,"src/rabbit_auth_backend_http.erl"},{line,100}]},{
rabbit_auth_backend_cache,with_cache,3,[{file,"src/rabbit_auth_backend_cache.erl"},{line,86}]},{rabbit_access_control,check_access,5,[{file,"src/rabbit_access_control.erl"},{line,183}]}]}}}},[{amqp_direct_connection,connect,4,[{file,"src/a
mqp_direct_connection.erl"},{line,152}]},{amqp_gen_connection,handle_call,3,[{file,"src/amqp_gen_connection.erl"},{line,171}]},{gen_server,try_handle_call,4,[{file,"gen_server.erl"},{line,661}]},{gen_server,handle_msg,6,[{file,"gen_server.
erl"},{line,690}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,249}]}]},{<0.757.0>,{amqp_params_direct,<<"skata">>,<<"DC/5cwTWEQ0mneKhGuq2xDkjNX/yt/3c5mv9yyjWzdzGNWrwxPwgDTN1u0uV2Zpr">>,<<"/">>,'rabbitmq@rabbitmq-broker',{amqp
_adapter_info,<<"42575dc3d654">>,15674,<<"milarakis.ee.auth.gr">>,59268,<<"milarakis.ee.auth.gr:59268 -> 42575dc3d654:15674">>,{'Web STOMP',"1.2"},[{channels,1},{channel_max,1},{frame_max,0},{client_properties,[{<<"product">>,longstr,<<"ST
OMP client">>}]},{state,running},{ssl,false}]},[]}}],[{file,"src/amqp_gen_connection.erl"},{line,239}]},{gen_server,try_terminate,3,[{file,"gen_server.erl"},{line,673}]},{gen_server,terminate,10,[{file,"gen_server.erl"},{line,858}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,249}]}]},{gen_server,call,[<0.758.0>,connect,60000]}}
2019-11-28 17:54:40.056 [error] <0.758.0> ** Generic server <0.758.0> terminating 
** Last message in was connect
** When Server state == {<0.757.0>,{amqp_params_direct,<<"skata">>,<<"DC/5cwTWEQ0mneKhGuq2xDkjNX/yt/3c5mv9yyjWzdzGNWrwxPwgDTN1u0uV2Zpr">>,<<"/">>,'rabbitmq@rabbitmq-broker',{amqp_adapter_info,<<"42575dc3d654">>,15674
,<<"milarakis.ee.auth.gr">>,59268,<<"milarakis.ee.auth.gr:59268 -> 42575dc3d654:15674">>,{'Web STOMP',"1.2"},[{channels,1},{channel_max,1},{frame_max,0},{client_properties,[{<<"product">>,longstr,<<"STOMP client">>}]},{state,running},{ssl,
false}]},[]}}
** Reason for termination ==
** {function_clause,[{amqp_gen_connection,terminate,[{{case_clause,{badrpc,{'EXIT',{function_clause,[{rabbit_http_util,quote_plus,[{error,einval},[]],[{file,"src/rabbit_http_util.erl"},{line,183}]},{rabbit_auth_backe
nd_http,escape,2,[{file,"src/rabbit_auth_backend_http.erl"},{line,163}]},{rabbit_auth_backend_http,'-q/1-lc$^0/1-0-',1,[{file,"src/rabbit_auth_backend_http.erl"},{line,157}]},{rabbit_auth_backend_http,'-q/1-lc$^0/1-0-',1,[{file,"src/rabbit_auth_backend_http.erl"},{line,157}]},{rabbit_auth_backend_http,q,1,[{file,"src/rabbit_auth_backend_http.erl"},{line,157}]},{rabbit_auth_backend_http,bool_req,2,[{file,"src/rabbit_auth_backend_http.erl"},{line,100}]},{rabbit_auth_backend_c
ache,with_cache,3,[{file,"src/rabbit_auth_backend_cache.erl"},{line,86}]},{rabbit_access_control,check_access,5,[{file,"src/rabbit_access_control.erl"},{line,183}]}]}}}},[{amqp_direct_connection,connect,4,[{file,"src/amqp_direct_connection
.erl"},{line,152}]},{amqp_gen_connection,handle_call,3,[{file,"src/amqp_gen_connection.erl"},{line,171}]},{gen_server,try_handle_call,4,[{file,"gen_server.erl"},{line,661}]},{gen_server,...},...]},...],...},...]}
** Client <0.754.0> stacktrace
** [{gen,do_call,4,[{file,"gen.erl"},{line,167}]},{gen_server,call,3,[{file,"gen_server.erl"},{line,219}]},{rabbit_stomp_processor,start_connection,3,[{file,"src/rabbit_stomp_processor.erl"},{line,642}]},{rabbit_stom
p_processor,do_login,7,[{file,"src/rabbit_stomp_processor.erl"},{line,591}]},{rabbit_stomp_processor,'-process_connect/3-fun-0-',6,[{file,"src/rabbit_stomp_processor.erl"},{line,281}]},{rabbit_stomp_processor,process_request,3,[{file,"src/
rabbit_stomp_processor.erl"},{line,238}]},{rabbit_web_stomp_handler,handle_data,2,[{file,"src/rabbit_web_stomp_handler.erl"},{line,237}]},{cowboy_websocket,handler_call,6,[{file,"src/cowboy_websocket.erl"},{line,471}]}]
2019-11-28 17:54:40.056 [error] <0.758.0> CRASH REPORT Process <0.758.0> with 0 neighbours crashed with reason: no function clause matching amqp_gen_connection:terminate({{case_clause,{badrpc,{'EXIT',{function_clause,[{rabbit_http_util,quote_plus,[{error,einval},[]],...},...]}}}},...}, {<0.757.0>,{amqp_params_direct,<<"skata">>,<<"DC/5cwTWEQ0mneKhGuq2xDkjNX/yt/3c5mv9yyjWzdzGNWrwxPwg...">>,...}}) line 239
2019-11-28 17:54:40.056 [error] <0.756.0> Supervisor {<0.756.0>,amqp_connection_sup} had child connection started with amqp_gen_connection:start_link(<0.757.0>, {amqp_params_direct,<<"skata">>,<<"DC/5cwTWEQ0mneKhGuq2xDkjNX/yt/3c5mv9yyjWzdzGNWrwxPwgDTN1u0uV2Zp...">>,...}) at <0.758.0> exit with reason no function clause matching amqp_gen_connection:terminate({{case_clause,{badrpc,{'EXIT',{function_clause,[{rabbit_http_util,quote_plus,[{error,einval},[]],...},...]}}}},...}, {<0.757.0>,{amqp_params_direct,<<"skata">>,<<"DC/5cwTWEQ0mneKhGuq2xDkjNX/yt/3c5mv9yyjWzdzGNWrwxPwg...">>,...}}) line 239 in context child_terminated
2019-11-28 17:54:40.056 [error] <0.756.0> Supervisor {<0.756.0>,amqp_connection_sup} had child connection started with amqp_gen_connection:start_link(<0.757.0>, {amqp_params_direct,<<"skata">>,<<"DC/5cwTWEQ0mneKhGuq2xDkjNX/yt/3c5mv9yyjWzdzGNWrwxPwgDTN1u0uV2Zp...">>,...}) at <0.758.0> exit with reason reached_max_restart_intensity in context shutdown

This crash only reports when using the http-auth-backend and does not exist when using the rabbit_auth_backend_internal.

Any clues about what might cause this crash? I am using RabbitMQ 3.7.21

How to declare a queue of type quorum via HTTP API

Hello,

Is there a way to specify the type of the queue being created via the HTTP API?

  • RabbitMQ version: 3.8.1
  • Erlang version: 21.3.8.10
  • RabbitMQ plugin information via rabbitmq-plugins list:
    [E*] rabbitmq_management 3.8.1
    [E*] rabbitmq_management_agent 3.8.1
    [e*] rabbitmq_peer_discovery_common 3.8.1
    [E*] rabbitmq_peer_discovery_k8s 3.8.1
    [e*] rabbitmq_web_dispatch 3.8.1
  • Client library version (for all libraries used): http api included in rabbitmq_management
  • Operating system, version, and patch level: Ubuntu 18.04.3 LTS

If your issue involves RabbitMQ management UI or HTTP API, please also provide
the following:

  • How the HTTP API requests performed can be reproduced with curl:
    curl -X PUT -i -u USER:PASS HOST/api/queues/%2F/test

Creating queue 'test' with the command above creates a queue of type "classic".
The API docs do not mention how to define the queue type, so how do I create a queue of type "quorum"?

https://rawcdn.githack.com/rabbitmq/rabbitmq-management/v3.8.1/priv/www/api/index.html

/api/queues/vhost/name
An individual queue. To PUT a queue, you will need a body looking something like this: {"auto_delete":false,"durable":true,"arguments":{},"node":"rabbit@smacmullen"} All keys are optional.

Thanks,
Rgds,
Adrian
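For what it's worth: queue type is selected with the x-queue-type queue argument, and the PUT body accepts the same arguments map as an AMQP queue declaration. A small sketch that builds such a request body (the curl shape mirrors the one above; USER, PASS, and HOST are placeholders):

```java
class QuorumQueueBody {
    // JSON body for: curl -u USER:PASS -H 'content-type:application/json' \
    //   -X PUT HOST/api/queues/%2F/test -d '<body>'
    // "x-queue-type": "quorum" selects the quorum queue type; quorum queues
    // must be durable, so "durable" is set to true.
    static String body() {
        return "{\"auto_delete\":false,\"durable\":true,"
             + "\"arguments\":{\"x-queue-type\":\"quorum\"}}";
    }

    public static void main(String[] args) {
        System.out.println(body());
    }
}
```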

Java client: Android 6 support

Crash below Android 6.0:

java.lang.NoClassDefFoundError: com.rabbitmq.client.impl.nio.-$$Lambda$NioParams$NrSUEb8m8wLfH2ztzTBNKyBN8fA
at com.rabbitmq.client.impl.nio.NioParams.(NioParams.java:37)
at com.rabbitmq.client.ConnectionFactory.(ConnectionFactory.java:153)

  • Client library (amqp-client) version 5.7.0
    gradle dependencies {
    // RabbitMQ message push
    implementation 'com.rabbitmq:amqp-client:5.7.0'
    }

get API definitions as unprivileged user

I want to provide all definitions (vhosts, exchanges, queues+bindings) to developers. So I made a script which exports the definitions as tsv.

The problem is: it only works for an admin user with management privileges. I don't see the need for that. I want the most minimal privileges possible for the endpoint /api/definitions, like read-only access to the definitions.

Am I overlooking something? Or is it just not possible?

BAD RPC Response code

I'm trying to get this oauth2 backend working since it's the perfect solution for our system.

But I'm always running into a badrpc error when a user connects:
{function_clause,[{amqp_gen_connection,terminate,[{{case_clause,{badrpc,{'EXIT',{undef,[{rabbitmq_auth_backend_oauth2,user_login_authentication,

Stack trace :
2019-12-23 20:39:51.723 [info] <0.3803.0> accepting Web MQTT connection <0.3803.0> (172.21.0.2:37696 -> 172.21.0.4:15675)
2019-12-23 20:39:51.897 [info] <0.3803.0> MQTT vhost picked using plugin configuration or default
2019-12-23 20:39:51.923 [info] <0.3803.0> closing Web MQTT connection <0.3803.0> (172.21.0.2:37696 -> 172.21.0.4:15675)
2019-12-23 20:39:51.926 [error] <0.3807.0> ** Generic server <0.3807.0> terminating
** Last message in was connect
** When Server state == {<0.3806.0>,{amqp_params_direct,<<"1576168428531">>,<<"d+oDSwuHKU2hRJJx1awrImQB618GliRIqCb2cKOkE+LL2TLZH/6BBenNLZCGDnFSoBx6dOwQAwtgcRHoChAhysWl84kn6dR18RVDXjNa/jvB4uM0JGogojx2c+mIuZh7yk5Y6vcJcLOEtLRW9l4ILHO4AZuoKDlnnKSXpHfRGP+pXs13fs6iMoQhY4mLaHoUx5ujLOROhreqqAoWM1uyRscx7Ddywoe4dFvT5MdnUiNGsu+MVnT8UY8eem7sapeyySFceTzCYUeQlooIq1L9jPXzQ4lkGNcg4UziIEx9SjNJVjcfUIqkga33nkV7Qc4UOanL29mnXb7AnlfH4cjm7qqHi/9KbluAfWlJnYxhk/YlWhE93vqtrD3KDNJ+bDGZh4gi0/HqWTg0w0hlferP6CP52LFLGYIZMNEXOuKACFUg2pvvqk1XBgTcEhe0sdw3j58TthqUOz46HCBcpVyfVYtqIfz5cE0K2fHzAF4ljiHM7k0an+x1GyO+OMfRxaLeFMVSYfQ6wyXlo0QDVX+3FXgA2w2mOkLU2bBtDzzofYDhTQ5yKT7x4Uw35kKeC+frXRDrB2B6GeEVu+l3Zg9Rr8kuIAnckTnl9O+WCucRyTTS8SjCjvqj7soGYHDMxr3pXn/4og0T5YZSomK75a1HEqvQm94sMTYvdu4u2lhsOcpGmjDSiXrBR4BDDeBXknyeBzDsM3//mUZv6bWuXklAK+qppjufXxyWxoSM7r37GY44GxQNykJGovJ49eJ4O4wEygH2j4yopsPrBNQs24pMHwwWKQGziEHyamzpubvj/c8hQuGvia7bp6AmmprLKQjQcfGlYhOaeSL7qvHAcBLwMAyovD21xl/IKxES+5ByjGFmdovZmhyQt2JuD/j16...">>,...}}
** Reason for termination ==
** {function_clause,[{amqp_gen_connection,terminate,[{{case_clause,{badrpc,{'EXIT',{undef,[{rabbitmq_auth_backend_oauth2,user_login_authentication,[<<"1576168428531">>,[{password,<<"eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCIsImtpZCI6InRhZGFhbV90ZW1wa2V5In0.eyJjdXN0b21lcklkIjoiMTU3NjE2ODQyODUzMSIsInN1YiI6IjE1NzYxNjg0Mjg1MzEiLCJ1c2VyX2lkIjoiMTU3NjE2ODQyODUzMSIsImNsaWVudF9pZCI6InJhYmJpdF9jbGllbnQiLCJjaWQiOiJyYWJiaXRfY2xpZW50Iiwicm9sZSI6ImN1c3RvbWVyIiwidXNlcm5hbWUiOiJ0YWRhYW0ucXVpY2tjaGVja180NTE5NkBnbWFpbC5jb20iLCJzY29wZSI6WyJyYWJiaXRtcS5yZWFkOiovKiIsInJhYmJpdG1xLndyaXRlOiovKiJdLCJhdWQiOlsicmFiYml0bXEiLCJyYWJiaXRfY2xpZW50Il0sImV4cCI6MTU3ODkzMzE1MCwiaWF0IjoxNTc3MTMzMTQ5fQ.V_q64BvfXdGFBSScHO_mm-xeS_5syjjHZ72s7KY2VQ3iSn0H9YcTmcP1sXKAMvISJTUzo0r9KndHmWW3hSI9y9jsBbBdG5694_UOPdCjf4sa-Af2wZh12HoMxfn486GRDV229RLcCjh5eFlhsJ9mlvPBJtMxbIcb92JWuU1Or9WKpb0R6p1NyrsN_ecrRVM8QYTS3lAQ9PzfkKa544_x448WFxdrJmNHr0coZ4A-1lcSjaZJ144f_gVnp6pceDeqFhvledgmurSG6WpJ7k...">>},...]],...},...]}}}},...},...],...},...]}

Maybe I'm missing some part in my token, but I can't find a complete example.
Does anyone have a valid token that works?

about consumer latency

print msg :
id: test-030739-419, time: 420.052s, sent: 2102 msg/s, received: 10863 msg/s, min/median/75th/95th/99th consumer latency: 8141676/8715782/8769170/8826041/8833042 µs
Q1 :
"µs" — s for second or ms? :)

(Static) Shovels fail with a socket_closed_unexpectedly and are never reconnected

System and Crash Info
  • RabbitMQ version: 3.8.1

  • Erlang version: 22.1.8

  • RabbitMQ server and client application log files: [edited out]

  • RabbitMQ plugin information via rabbitmq-plugins list

Listing plugins with pattern ".*" ...
 Configured: E = explicitly enabled; e = implicitly enabled
 | Status: * = running on rabbit@Encode-1
 |/
[  ] rabbitmq_amqp1_0                  3.8.1
[  ] rabbitmq_auth_backend_cache       3.8.1
[  ] rabbitmq_auth_backend_http        3.8.1
[  ] rabbitmq_auth_backend_ldap        3.8.1
[  ] rabbitmq_auth_backend_oauth2      3.8.1
[  ] rabbitmq_auth_mechanism_ssl       3.8.1
[  ] rabbitmq_consistent_hash_exchange 3.8.1
[  ] rabbitmq_event_exchange           3.8.1
[  ] rabbitmq_federation               3.8.1
[  ] rabbitmq_federation_management    3.8.1
[  ] rabbitmq_jms_topic_exchange       3.8.1
[E*] rabbitmq_management               3.8.1
[e*] rabbitmq_management_agent         3.8.1
[  ] rabbitmq_mqtt                     3.8.1
[  ] rabbitmq_peer_discovery_aws       3.8.1
[  ] rabbitmq_peer_discovery_common    3.8.1
[  ] rabbitmq_peer_discovery_consul    3.8.1
[  ] rabbitmq_peer_discovery_etcd      3.8.1
[  ] rabbitmq_peer_discovery_k8s       3.8.1
[  ] rabbitmq_prometheus               3.8.1
[  ] rabbitmq_random_exchange          3.8.1
[  ] rabbitmq_recent_history_exchange  3.8.1
[  ] rabbitmq_sharding                 3.8.1
[E*] rabbitmq_shovel                   3.8.1
[E*] rabbitmq_shovel_management        3.8.1
[  ] rabbitmq_stomp                    3.8.1
[  ] rabbitmq_top                      3.8.1
[  ] rabbitmq_tracing                  3.8.1
[  ] rabbitmq_trust_store              3.8.1
[e*] rabbitmq_web_dispatch             3.8.1
[  ] rabbitmq_web_mqtt                 3.8.1
[  ] rabbitmq_web_mqtt_examples        3.8.1
[  ] rabbitmq_web_stomp                3.8.1
[  ] rabbitmq_web_stomp_examples       3.8.1
  • Operating system, version, and patch level: Ubuntu 18.04, fully up to date.

We are running a setup with one local and one global rabbitmq server. The global one gets messages from a bunch of Web-Services, while the local one is connected to local on-location services.
To shovel messages between them, the local instance, which is the one described above, has a bunch of static shovels configured.
This works great, until at some seemingly random point in time, the shovels break. See the paste above for the crash log.
The shovels are all configured with a reconnect_delay of 5.0, but it seems like the max limit is reached immediately, and no reconnect attempt is ever made.
What's worse is that both the WebUI and rabbitmqctl shovel_status keep showing all the shovels as running, even though they clearly are not.

.NET client reports connection.start was never received when HAproxy upstreams change dynamically

Hey guys, this problem only happens when I use HAProxy to forward the client's TCP connections. The following is part of my haproxy.cfg:

listen rabbitmq_cluster
    bind *:5672
    mode tcp
    balance roundrobin
    server k8s-cluster-5672 192.168.199.17:5672 check inter 2000 rise 2 fall 3 weight 1
    server docker01 192.168.199.16:35672 check inter 2000 rise 2 fall 3 weight 1
    server ryan-5672 192.168.198.173:5672 check inter 2000 rise 2 fall 3 weight 1

When I try to add a server node to the cluster, or remove one, my client throws an exception:

connection.start was never received, likely due to a network timeout

Actually, I found some code in your source:

[screenshot of the relevant .NET client source code]

and it doesn't wait anymore; it gives me null immediately.

Can you help me find out why? My feeling is that perhaps I can make some modifications to my haproxy.cfg to resolve this problem.

thank you!
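Not a confirmed root cause, but worth checking: HAProxy's default timeouts are tuned for HTTP, and a tcp-mode listener for AMQP is usually given long client/server timeouts plus TCP keep-alives so long-lived, mostly idle connections are not cut mid-handshake. A sketch (the 3h value is a commonly used example, not a requirement):

```
listen rabbitmq_cluster
    bind *:5672
    mode tcp
    balance roundrobin
    timeout client 3h    # AMQP connections are long-lived
    timeout server 3h
    option clitcpka      # TCP keep-alives towards clients
    server k8s-cluster-5672 192.168.199.17:5672 check inter 2000 rise 2 fall 3 weight 1
```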

A problem with consumer blocking in production

When using RabbitMQ, I have encountered a problem many times: a new consumer becomes blocked after consuming a few messages, and after the consumer quits, the number of consumers shown in the management UI does not decrease. This happens occasionally when there is a lot of data in the queue. I learned on the official website that when there is a lot of data in the queue (memory usage exceeds the threshold), producers will be blocked, but I have not seen anything about consumers being blocked. Can you help explain why? Thanks.

version: RabbitMQ 3.6.5, Erlang 17.5

Durability conflicts when publishing messages

I am using the SmallRye Reactive Messaging AMQP connector to connect with RabbitMQ. When not using "durable", things work just fine. Listening to a "durable" queue also works fine. However, when publishing messages to "durable" queues things go wrong, and I end up with errors like this:

{'v1_0.error',{symbol,<<"amqp:precondition-failed">>},{utf8,<<"PRECONDITION_FAILED - inequivalent arg 'durable' for queue 'prices' in vhost '/': received 'true' but current is 'false'">>},undefined}

This issue has already been reported to Smallrye messaging by someone else in ticket #174. However as an outsider to me it is not very clear where this issue should be fixed, especially when reading this comment. By creating this issue on RabbitMQ side of the fence I just want to make sure both communities are aware of it, in the hope to get some guidance/help in resolving this issue.

Some contextual information: if I am correct, the SmallRye Messaging AMQP connector is built on top of Vert.x AMQP, which uses Apache Qpid Proton-J.

After using publisher confirms, 'AlreadyClosedException' or 'ShutdownSignalException' are encountered in a heavy messaging environment

Hello,
In my publisher app, if I don't use publisher confirms, everything works fine. But I need to use publisher confirms to avoid losing messages if the connection goes down. When I use publisher confirms in a heavy-load environment where 100 threads were trying to send messages continuously for 2 minutes using a single shared channel, then even though my RabbitMQ node is running, after sending a few messages all 100 of my threads died with an 'AlreadyClosedException' or 'ShutdownSignalException'. I had also registered a 'ShutdownListener' but it was never notified. My worry is how to handle a heavy messaging environment with the publisher confirm approach. The following is sample code I executed to simulate this scenario:
============================Sample Code===============================================
public static void main(String[] args) throws IOException, TimeoutException {
    ConnectionFactory cf = new ConnectionFactory();
    Address haAddr = new Address("192.168.X.X", 5672);
    cf.setUsername("test25");
    cf.setPassword("test25");
    Connection con = cf.newConnection(new Address[]{haAddr});
    con.addShutdownListener(new ConnectionShutdownListener());
    Channel channel = con.createChannel();
    channel.confirmSelect();
    CPublisherConfirm pub = new CPublisherConfirm();
    for (int index = 0; index < 100; index++) {
        MessageSender mSender = pub.new MessageSender(channel);
        mSender.start();
    }

    System.out.println("done");
}

private class MessageSender extends Thread {

    Channel channel = null;

    public MessageSender(Channel ch) {
        channel = ch;
    }

    @Override
    public void run() {
        long curTime = System.currentTimeMillis();
        long twoMin = 1000 * 60 * 2 + curTime; // run for 2 minutes
        int index = 0;
        try {
            while (System.currentTimeMillis() < twoMin) {
                sendMessage(channel, "msg" + index++);
            }
        } catch (Exception e) {
            System.out.println(e.getClass().getName());
        }
        System.out.println("Total: " + index);
    }
}

private static void sendMessage(Channel channel, String msg) {
    try {
        channel.basicPublish("", QUEUE_NAME, true, MessageProperties.PERSISTENT_TEXT_PLAIN, msg.getBytes());
        try {
            channel.waitForConfirmsOrDie(10000);
        } catch (InterruptedException | TimeoutException e) {
            System.out.println("Exception:");
        }
    } catch (IOException e) {
        e.printStackTrace();
    } catch (Exception e) {
        System.out.println("Throwe");
        throw e;
    }
}

private static class ConnectionShutdownListener implements ShutdownListener {

    @Override
    public void shutdownCompleted(ShutdownSignalException cause) {
        System.out.println("ShutDown#############################################");
    }
}

rabbitmqctl.bat does not work when the install path contains a space and a similarly named file exists

Thank you for an awesome product, RabbitMQ.

I use the Windows version, and I found an issue with install paths that include a space character.
In this situation, rabbitmqctl.bat does not work.

  • A file C:\test already exists.
  • C:\test (x86)\erlang\erts-9.2\bin\epmd.exe -daemon does not run.
  • RabbitMQ is installed to C:\test (x86)\rabbitmq\

An error occurs:
C:\test (x86)\rabbitmq\sbin>rabbitmqctl.bat status
Distribution failed: {{:shutdown, {:failed_to_start_child, :net_kernel, {:EXIT,
:nodistribution}}}, {:child, :undefined, :net_sup_dynamic, {:erl_distribution, :
start_link, [[:rabbitmqcli27, :shortnames], false]}, :permanent, 1000, :supervis
or, [:erl_distribution]}}

C:\test (x86)\rabbitmq\sbin>echo %errorlevel%
78

I think this Erlang issue is similar:
https://bugs.erlang.org/browse/ERL-1115

RabbitMQ abnormal exit

A cluster of three nodes, one of which exits unexpectedly.
Error log:

2019-11-20 00:23:47.464 [error] <0.27695.4781> ** Generic server pg_local terminating 
2019-11-20 00:44:01.570 [error] <0.31917.4810> ** Generic server pg_local terminating 
2019-11-20 00:46:25.962 [error] <0.25100.4842> ** Generic server pg_local terminating 
2019-11-20 01:01:51.524 [error] <0.13056.4831> ** Generic server pg_local terminating 
2019-11-20 01:37:43.171 [error] <0.7674.4850> ** Generic server pg_local terminating 
2019-11-20 01:42:08.368 [error] <0.20005.4859> ** Generic server pg_local terminating 
2019-11-20 01:42:08.642 [error] <0.32242.2291> ** Generic server <0.32242.2291> terminating
2019-11-20 01:42:08.644 [error] <0.3829.2291> ** Generic server <0.3829.2291> terminating
2019-11-20 01:42:08.644 [error] <0.32242.2291> CRASH REPORT Process <0.32242.2291> with 1 neighbours exited with reason: no such process or port in call to gen_server2:call(rabbit_memory_monitor, {report_ram_duration,<0.32242.2291>,0.0}, infinity) in gen_server2:terminate/3 line 1166
2019-11-20 01:42:08.644 [error] <0.3829.2291> CRASH REPORT Process <0.3829.2291> with 1 neighbours exited with reason: no such process or port in call to gen_server2:call(rabbit_memory_monitor, {report_ram_duration,<0.3829.2291>,0.0}, infinity) in gen_server2:terminate/3 line 1166
2019-11-20 01:42:08.644 [error] <0.31336.2291> Supervisor {<0.31336.2291>,rabbit_amqqueue_sup} had child rabbit_amqqueue started with rabbit_prequeue:start_link({amqqueue,{resource,<<"vhost.common">>,queue,<<"q.kepler.repay-plan-changed.ufin">>},true,false,...}, slave, <0.32409.2291>) at <0.32242.2291> exit with reason no such process or port in call to gen_server2:call(rabbit_memory_monitor, {report_ram_duration,<0.32242.2291>,0.0}, infinity) in context child_terminated
2019-11-20 01:42:08.645 [error] <0.3006.2291> Supervisor {<0.3006.2291>,rabbit_amqqueue_sup} had child rabbit_amqqueue started with rabbit_prequeue:start_link({amqqueue,{resource,<<"vhost.common">>,queue,<<"q.kepler.plan-changed-gtfee.canal">>},true,false,...}, slave, <0.3369.2291>) at <0.3829.2291> exit with reason no such process or port in call to gen_server2:call(rabbit_memory_monitor, {report_ram_duration,<0.3829.2291>,0.0}, infinity) in context child_terminated
2019-11-20 01:42:08.646 [error] <0.4125.2291> ** Generic server <0.4125.2291> terminating
2019-11-20 01:42:08.646 [error] <0.4125.2291> CRASH REPORT Process <0.4125.2291> with 1 neighbours exited with reason: no such process or port in call to gen_server2:call(rabbit_memory_monitor, {report_ram_duration,<0.4125.2291>,0.0}, infinity) in gen_server2:terminate/3 line 1166
2019-11-20 01:42:08.646 [error] <0.4027.2291> Supervisor {<0.4027.2291>,rabbit_amqqueue_sup} had child rabbit_amqqueue started with rabbit_prequeue:start_link({amqqueue,{resource,<<"vhost.common">>,queue,<<"q.kepler.plan-changed.canal">>},true,false,none,...}, slave, <0.4079.2291>) at <0.4125.2291> exit with reason no such process or port in call to gen_server2:call(rabbit_memory_monitor, {report_ram_duration,<0.4125.2291>,0.0}, infinity) in context child_terminated
2019-11-20 01:42:08.707 [error] <0.20816.3102> ** Generic server <0.20816.3102> terminating
2019-11-20 01:42:08.708 [error] <0.20816.3102> CRASH REPORT Process <0.20816.3102> with 1 neighbours exited with reason: no such process or port in call to gen_server2:call(rabbit_memory_monitor, {report_ram_duration,<0.20816.3102>,infinity}, infinity) in gen_server2:terminate/3 line 1166
2019-11-20 01:42:08.708 [error] <0.14907.3100> Supervisor {<0.14907.3100>,rabbit_amqqueue_sup} had child rabbit_amqqueue started with rabbit_prequeue:start_link({amqqueue,{resource,<<"vhost.common">>,queue,<<"q.oasis.sync-org.oasis">>},true,false,none,[],<9546.22931.2032>,...}, slave, <0.1994.3087>) at <0.20816.3102> exit with reason no such process or port in call to gen_server2:call(rabbit_memory_monitor, {report_ram_duration,<0.20816.3102>,infinity}, infinity) in context child_terminated
2019-11-20 01:42:08.875 [error] <0.19389.3102> ** Generic server <0.19389.3102> terminating
2019-11-20 01:42:08.875 [error] <0.19389.3102> CRASH REPORT Process <0.19389.3102> with 1 neighbours exited with reason: no such process or port in call to gen_server2:call(rabbit_memory_monitor, {report_ram_duration,<0.19389.3102>,infinity}, infinity) in gen_server2:terminate/3 line 1166
2019-11-20 01:42:08.875 [error] <0.32049.2896> Supervisor {<0.32049.2896>,rabbit_amqqueue_sup} had child rabbit_amqqueue started with rabbit_prequeue:start_link({amqqueue,{resource,<<"vhost.common">>,queue,<<"q.oasis.sync-user.oasis">>},true,false,none,[],<9546.11768.2021>,...}, slave, <0.3952.3049>) at <0.19389.3102> exit with reason no such process or port in call to gen_server2:call(rabbit_memory_monitor, {report_ram_duration,<0.19389.3102>,infinity}, infinity) in context child_terminated
2019-11-20 01:42:09.123 [error] <0.8399.2969> ** Generic server <0.8399.2969> terminating
2019-11-20 01:42:09.123 [error] <0.8399.2969> CRASH REPORT Process <0.8399.2969> with 1 neighbours exited with reason: no such process or port in call to gen_server2:call(rabbit_memory_monitor, {report_ram_duration,<0.8399.2969>,infinity}, infinity) in gen_server2:terminate/3 line 1166
2019-11-20 01:42:09.124 [error] <0.9683.2969> Supervisor {<0.9683.2969>,rabbit_amqqueue_sup} had child rabbit_amqqueue started with rabbit_prequeue:start_link({amqqueue,{resource,<<"vhost.common">>,queue,<<"q.oasis.umas-sync-user-helena.ufin">>},true,false,...}, slave, <0.7852.2969>) at <0.8399.2969> exit with reason no such process or port in call to gen_server2:call(rabbit_memory_monitor, {report_ram_duration,<0.8399.2969>,infinity}, infinity) in context child_terminated
2019-11-20 01:42:09.225 [error] <0.4049.2291> ** Generic server <0.4049.2291> terminating
2019-11-20 01:42:09.226 [error] <0.4049.2291> CRASH REPORT Process <0.4049.2291> with 1 neighbours exited with reason: no such process or port in call to gen_server2:call(rabbit_memory_monitor, {report_ram_duration,<0.4049.2291>,infinity}, infinity) in gen_server2:terminate/3 line 1166
2019-11-20 01:42:09.226 [error] <0.3708.2291> Supervisor {<0.3708.2291>,rabbit_amqqueue_sup} had child rabbit_amqqueue started with rabbit_prequeue:start_link({amqqueue,{resource,<<"vhost.common">>,queue,<<"q.kepler.bank-compens.canal">>},true,false,none,...}, slave, <0.3584.2291>) at <0.4049.2291> exit with reason no such process or port in call to gen_server2:call(rabbit_memory_monitor, {report_ram_duration,<0.4049.2291>,infinity}, infinity) in context child_terminated
2019-11-20 01:42:09.370 [error] <0.59.0> Supervisor kernel_safe_sup had child dets_sup started with dets_sup:start_link() at <0.179.0> exit with reason killed in context shutdown_error
2019-11-20 01:42:09.371 [error] <0.37.0> Supervisor kernel_sup had child kernel_safe_sup started with supervisor:start_link({local,kernel_safe_sup}, kernel, safe) at <0.59.0> exit with reason shutdown in context child_terminated
2019-11-20 01:42:09.371 [error] <0.37.0> Supervisor kernel_sup had child kernel_safe_sup started with supervisor:start_link({local,kernel_safe_sup}, kernel, safe) at <0.59.0> exit with reason reached_max_restart_intensity in context shutdown
2019-11-20 01:42:09.371 [error] <0.19020.4859> CRASH REPORT Process <0.19020.4859> with 0 neighbours exited with reason: no such process or port in call to gen_server:call(kernel_safe_sup, {start_child,{pg_local,{pg_local,start_link,[]},permanent,4294967295,worker,[pg_local]}}, infinity) in gen_server:call/3 line 214
2019-11-20 01:42:09.371 [error] <0.15591.4854> CRASH REPORT Process <0.15591.4854> with 0 neighbours exited with reason: no such process or port in call to gen_server:call(kernel_safe_sup, {start_child,{pg_local,{pg_local,start_link,[]},permanent,4294967295,worker,[pg_local]}}, infinity) in gen_server:call/3 line 214
2019-11-20 01:42:09.372 [error] <0.32694.4833> Supervisor {<0.32694.4833>,rabbit_connection_sup} had child reader started with rabbit_reader:start_link(<0.31357.4564>, {acceptor,{0,0,0,0},5672}, #Port<0.29764696>) at <0.15591.4854> exit with reason no such process or port in call to gen_server:call(kernel_safe_sup, {start_child,{pg_local,{pg_local,start_link,[]},permanent,4294967295,worker,[pg_local]}}, infinity) in context child_terminated
2019-11-20 01:42:09.372 [error] <0.10049.4861> Supervisor {<0.10049.4861>,rabbit_connection_sup} had child reader started with rabbit_reader:start_link(<0.24637.4858>, {acceptor,{0,0,0,0},5672}, #Port<0.29747031>) at <0.19020.4859> exit with reason no such process or port in call to gen_server:call(kernel_safe_sup, {start_child,{pg_local,{pg_local,start_link,[]},permanent,4294967295,worker,[pg_local]}}, infinity) in context child_terminated
2019-11-20 01:42:09.372 [error] <0.32694.4833> Supervisor {<0.32694.4833>,rabbit_connection_sup} had child reader started with rabbit_reader:start_link(<0.31357.4564>, {acceptor,{0,0,0,0},5672}, #Port<0.29764696>) at <0.15591.4854> exit with reason reached_max_restart_intensity in context shutdown
2019-11-20 01:42:09.372 [error] <0.10049.4861> Supervisor {<0.10049.4861>,rabbit_connection_sup} had child reader started with rabbit_reader:start_link(<0.24637.4858>, {acceptor,{0,0,0,0},5672}, #Port<0.29747031>) at <0.19020.4859> exit with reason reached_max_restart_intensity in context shutdown
2019-11-20 01:42:09.372 [error] <0.20601.2088> CRASH REPORT Process <0.20601.2088> with 0 neighbours exited with reason: no such process or port in call to gen_server:call(kernel_safe_sup, {start_child,{pg_local,{pg_local,start_link,[]},permanent,4294967295,worker,[pg_local]}}, infinity) in gen_server:call/3 line 214
2019-11-20 01:42:09.373 [error] <0.24874.2075> Supervisor {<0.24874.2075>,rabbit_connection_sup} had child reader started with rabbit_reader:start_link(<0.25247.2085>, {acceptor,{0,0,0,0},5672}, #Port<0.5672001>) at <0.20601.2088> exit with reason no such process or port in call to gen_server:call(kernel_safe_sup, {start_child,{pg_local,{pg_local,start_link,[]},permanent,4294967295,worker,[pg_local]}}, infinity) in context child_terminated
2019-11-20 01:42:09.373 [error] <0.24874.2075> Supervisor {<0.24874.2075>,rabbit_connection_sup} had child reader started with rabbit_reader:start_link(<0.25247.2085>, {acceptor,{0,0,0,0},5672}, #Port<0.5672001>) at <0.20601.2088> exit with reason reached_max_restart_intensity in context shutdown
2019-11-20 01:42:09.373 [error] <0.2503.4864> ** Generic server <0.2503.4864> terminating
2019-11-20 01:42:09.373 [error] <0.2503.4864> CRASH REPORT Process <0.2503.4864> with 0 neighbours exited with reason: no such process or port in call to gen_server:call(kernel_safe_sup, {start_child,{pg_local,{pg_local,start_link,[]},permanent,4294967295,worker,[pg_local]}}, infinity) in gen_server2:terminate/3 line 1155
2019-11-20 01:42:09.374 [error] <0.11834.4860> ** Generic server <0.11834.4860> terminating
2019-11-20 01:42:09.374 [error] <0.24069.4861> ** Generic server <0.24069.4861> terminating
2019-11-20 01:42:09.374 [error] <0.11834.4860> CRASH REPORT Process <0.11834.4860> with 0 neighbours exited with reason: no such process or port in call to gen_server:call(kernel_safe_sup, {start_child,{pg_local,{pg_local,start_link,[]},permanent,4294967295,worker,[pg_local]}}, infinity) in gen_server2:terminate/3 line 1155
2019-11-20 01:42:09.375 [error] <0.24069.4861> CRASH REPORT Process <0.24069.4861> with 0 neighbours exited with reason: no such process or port in call to gen_server:call(kernel_safe_sup, {start_child,{pg_local,{pg_local,start_link,[]},permanent,4294967295,worker,[pg_local]}}, infinity) in gen_server2:terminate/3 line 1155
2019-11-20 01:42:09.375 [error] <0.389.0> ** Generic server mnesia_sync terminating 
** {{badmatch,{error,no_such_log}},[{mnesia_sync,handle_info,2,[{file,"src/mnesia_sync.erl"},{line,63}]},{gen_server,try_dispatch,4,[{file,"gen_server.erl"},{line,616}]},{gen_server,handle_msg,6,[{file,"gen_server.erl"},{line,686}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,247}]}]}
2019-11-20 01:42:09.375 [error] <0.389.0> CRASH REPORT Process mnesia_sync with 0 neighbours crashed with reason: no match of right hand value {error,no_such_log} in mnesia_sync:handle_info/2 line 63
2019-11-20 01:42:09.375 [error] <0.32551.4855> Error on AMQP connection <0.32551.4855> (192.168.0.98:56647 -> 192.168.0.125:5672, vhost: 'vhost.common', user: 'bigdata', state: running), channel 0:
2019-11-20 01:42:09.375 [error] <0.3081.4851> ** Generic server <0.3081.4851> terminating
2019-11-20 01:42:09.375 [error] <0.8801.4857> Error on AMQP connection <0.8801.4857> (192.168.0.98:56787 -> 192.168.0.125:5672, vhost: 'vhost.common', user: 'bigdata', state: running), channel 0:
2019-11-20 01:42:09.375 [error] <0.26920.3054> Error on AMQP connection <0.26920.3054> (192.168.0.98:25977 -> 192.168.0.125:5672 - rabbitConnectionFactory4CommonListener#4285c905:0, vhost: 'vhost.common', user: 'ufin', state: running), channel 0:
2019-11-20 01:42:09.375 [error] <0.28769.4839> Error on AMQP connection <0.28769.4839> (192.168.0.98:56727 -> 192.168.0.125:5672, vhost: 'vhost.common', user: 'bigdata', state: running), channel 0:
2019-11-20 01:42:09.375 [error] <0.9594.4176> Error on AMQP connection <0.9594.4176> (192.168.0.98:12877 -> 192.168.0.125:5672 - factorySelf#4529e005:0, vhost: 'vhost.infra', user: 'infra', state: running), channel 0:
2019-11-20 01:42:09.375 [error] <0.23965.931> Error on AMQP connection <0.23965.931> (192.168.0.98:29904 -> 192.168.0.125:5672 - rabbitConnectionFactory4UfinListener#4ba07fae:0, vhost: 'vhost.ufin', user: 'ufin', state: running), channel 0:
2019-11-20 01:42:09.375 [error] <0.758.4832> Error on AMQP connection <0.758.4832> (192.168.0.98:56777 -> 192.168.0.125:5672, vhost: 'vhost.common', user: 'bigdata', state: running), channel 0:
2019-11-20 01:42:09.375 [error] <0.31060.4854> Error on AMQP connection <0.31060.4854> (192.168.0.98:56881 -> 192.168.0.125:5672, vhost: 'vhost.common', user: 'bigdata', state: running), channel 0:
2019-11-20 01:42:09.375 [error] <0.11085.2779> Error on AMQP connection <0.11085.2779> (192.168.0.98:26505 -> 192.168.0.125:5672 - SpringAMQP#74b9a39e:0, vhost: 'vhost.common', user: 'ufin', state: running), channel 0:
2019-11-20 01:42:09.375 [error] <0.13654.3083> Error on AMQP connection <0.13654.3083> (192.168.0.98:27157 -> 192.168.0.125:5672 - SpringAMQP#5cadd1d5:0, vhost: 'vhost.common', user: 'ufin', state: running), channel 0:
2019-11-20 01:42:09.375 [error] <0.13802.4841> Error on AMQP connection <0.13802.4841> (192.168.0.98:56643 -> 192.168.0.125:5672, vhost: 'vhost.common', user: 'bigdata', state: running), channel 0:
2019-11-20 01:42:09.375 [error] <0.15929.4861> Error on AMQP connection <0.15929.4861> (192.168.0.98:56677 -> 192.168.0.125:5672, vhost: 'vhost.common', user: 'bigdata', state: running), channel 0:
2019-11-20 01:42:09.375 [error] <0.3392.4860> Error on AMQP connection <0.3392.4860> (192.168.0.98:56711 -> 192.168.0.125:5672, vhost: 'vhost.common', user: 'bigdata', state: running), channel 0:
2019-11-20 01:42:09.375 [error] <0.1697.4860> Error on AMQP connection <0.1697.4860> (192.168.0.98:56963 -> 192.168.0.125:5672, vhost: 'vhost.common', user: 'bigdata', state: running), channel 0:
2019-11-20 01:42:09.375 [error] <0.24716.4823> Error on AMQP connection <0.24716.4823> (192.168.0.98:56959 -> 192.168.0.125:5672, vhost: 'vhost.common', user: 'bigdata', state: running), channel 0:
2019-11-20 01:42:09.376 [error] <0.255.0> Supervisor rabbit_sup had child mnesia_sync started with mnesia_sync:start_link() at <0.389.0> exit with reason no match of right hand value {error,no_such_log} in mnesia_sync:handle_info/2 line 63 in context child_terminated
2019-11-20 01:42:09.375 [error] <0.32230.4475> Error on AMQP connection <0.32230.4475> (192.168.0.98:56743 -> 192.168.0.125:5672, vhost: 'vhost.common', user: 'bigdata', state: running), channel 0:
2019-11-20 01:42:09.375 [error] <0.784.4785> Error on AMQP connection <0.784.4785> (192.168.0.98:12873 -> 192.168.0.125:5672 - rabbitConnectionFactory#5d5e288a:0, vhost: 'vhost.common', user: 'infra', state: running), channel 0:
2019-11-20 01:42:09.375 [error] <0.30485.2514> Error on AMQP connection <0.30485.2514> (192.168.0.98:56772 -> 192.168.0.125:5672, vhost: 'vhost.common', user: 'bigdata', state: running), channel 0:
2019-11-20 01:42:09.375 [error] <0.24924.4842> Error on AMQP connection <0.24924.4842> (192.168.0.98:56931 -> 192.168.0.125:5672, vhost: 'vhost.common', user: 'bigdata', state: running), channel 0:
2019-11-20 01:42:09.375 [error] <0.27596.4053> Error on AMQP connection <0.27596.4053> (192.168.0.98:50361 -> 192.168.0.125:5672 - SpringAMQP#507d4ed6:0, vhost: 'vhost.common', user: 'infra', state: running), channel 0:
2019-11-20 01:42:09.375 [error] <0.16278.4860> Error on AMQP connection <0.16278.4860> (192.168.0.98:56733 -> 192.168.0.125:5672, vhost: 'vhost.common', user: 'bigdata', state: running), channel 0:
2019-11-20 01:42:09.375 [error] <0.14948.3079> Error on AMQP connection <0.14948.3079> (192.168.0.98:43478 -> 192.168.0.125:5672 - connectionFactory#28311128:0, vhost: 'vhost.common', user: 'infra', state: running), channel 0:
2019-11-20 01:42:09.375 [error] <0.29472.4782> Error on AMQP connection <0.29472.4782> (192.168.0.98:62642 -> 192.168.0.125:5672 - SpringAMQP#19a143a9:0, vhost: 'vhost_thread', user: 'thread', state: running), channel 0:
2019-11-20 01:42:09.376 [error] <0.255.0> Supervisor rabbit_sup had child mnesia_sync started with mnesia_sync:start_link() at <0.389.0> exit with reason reached_max_restart_intensity in context shutdown
2019-11-20 01:42:09.375 [error] <0.21194.4858> Error on AMQP connection <0.21194.4858> (192.168.0.98:56755 -> 192.168.0.125:5672, vhost: 'vhost.common', user: 'bigdata', state: running), channel 0:
2019-11-20 01:42:09.375 [error] <0.18380.4861> Error on AMQP connection <0.18380.4861> (192.168.0.98:56817 -> 192.168.0.125:5672, vhost: 'vhost.common', user: 'bigdata', state: running), channel 0:
2019-11-20 01:42:09.389 [error] <0.3081.4851> CRASH REPORT Process <0.3081.4851> with 1 neighbours exited with reason: no such process or port in call to gen_server:call(mnesia_sync, sync, infinity) in gen_server2:terminate/3 line 1166
2019-11-20 01:42:09.375 [error] <0.7070.4744> Error on AMQP connection <0.7070.4744> (192.168.0.98:60820 -> 192.168.0.125:5672 - SpringAMQP#4d4ef564:0, vhost: 'vhost.common', user: 'thread', state: running), channel 0:
2019-11-20 01:42:09.375 [error] <0.27314.4779> Error on AMQP connection <0.27314.4779> (192.168.0.98:59248 -> 192.168.0.125:5672 - SpringAMQP#7a1e7e97:0, vhost: 'vhost.common', user: 'thread', state: running), channel 0:
2019-11-20 01:42:09.375 [error] <0.13846.7830> Error on AMQP connection <0.13846.7830> (192.168.0.98:14388 -> 192.168.0.125:5672 - factorySelf#a696796:1, vhost: 'vhost.infra', user: 'infra', state: running), channel 0:
2019-11-20 01:42:09.375 [error] <0.25931.4860> Error on AMQP connection <0.25931.4860> (192.168.0.98:56829 -> 192.168.0.125:5672, vhost: 'vhost.common', user: 'bigdata', state: running), channel 0:
2019-11-20 01:42:09.375 [error] <0.31112.4857> Error on AMQP connection <0.31112.4857> (192.168.0.98:56639 -> 192.168.0.125:5672, vhost: 'vhost.common', user: 'bigdata', state: running), channel 0:
2019-11-20 01:42:09.375 [error] <0.28951.3079> Error on AMQP connection <0.28951.3079> (192.168.0.98:26629 -> 192.168.0.125:5672 - rabbitConnectionFactory4UfinListener#1d2fd4b8:0, vhost: 'vhost.ufin', user: 'ufin', state: running), channel 0:
2019-11-20 01:42:09.375 [error] <0.4383.3616> Error on AMQP connection <0.4383.3616> (192.168.0.98:50563 -> 192.168.0.125:5672, vhost: 'vhost.common', user: 'thread', state: running), channel 0:
2019-11-20 01:42:09.375 [error] <0.27409.4854> Error on AMQP connection <0.27409.4854> (192.168.0.98:56935 -> 192.168.0.125:5672, vhost: 'vhost.common', user: 'bigdata', state: running), channel 0:
2019-11-20 01:42:09.375 [error] <0.5892.2106> Error on AMQP connection <0.5892.2106> (192.168.0.98:30539 -> 192.168.0.125:5672 - oasisRabbitConnectionFactory#48a16458:0, vhost: 'vhost.oasis', user: 'oasis', state: running), channel 0:
2019-11-20 01:42:09.375 [error] <0.27083.3603> Error on AMQP connection <0.27083.3603> (192.168.0.98:40359 -> 192.168.0.125:5672 - commonRabbitConnectionFactory#698ac135:0, vhost: 'vhost.common', user: 'oasis', state: running), channel 0:
2019-11-20 01:42:09.375 [error] <0.21755.1458> Error on AMQP connection <0.21755.1458> (192.168.0.98:26627 -> 192.168.0.125:5672 - rabbitConnectionFactory4CommonListener#7982add8:0, vhost: 'vhost.common', user: 'ufin', state: running), channel 0:
2019-11-20 01:42:09.375 [error] <0.19067.4853> Error on AMQP connection <0.19067.4853> (192.168.0.98:56717 -> 192.168.0.125:5672, vhost: 'vhost.common', user: 'bigdata', state: running), channel 0:
2019-11-20 01:42:09.375 [error] <0.18853.2983> Error on AMQP connection <0.18853.2983> (192.168.0.98:58553 -> 192.168.0.125:5672 - infraConnectionFactory#789bbb6b:0, vhost: 'vhost.infra', user: 'infra', state: running), channel 0:
2019-11-20 01:42:09.375 [error] <0.24010.7987> Error on AMQP connection <0.24010.7987> (192.168.0.98:14478 -> 192.168.0.125:5672 - commonRabbitConnectionFactory#154997a5:1, vhost: 'vhost.common', user: 'oasis', state: running), channel 0:
2019-11-20 01:42:09.375 [error] <0.10783.4857> Error on AMQP connection <0.10783.4857> (192.168.0.98:56649 -> 192.168.0.125:5672, vhost: 'vhost.common', user: 'bigdata', state: running), channel 0:
2019-11-20 01:42:09.375 [error] <0.9150.4832> Error on AMQP connection <0.9150.4832> (192.168.0.98:56915 -> 192.168.0.125:5672, vhost: 'vhost.common', user: 'bigdata', state: running), channel 0:
2019-11-20 01:42:09.375 [error] <0.31198.4864> Error on AMQP connection <0.31198.4864> (192.168.0.98:56859 -> 192.168.0.125:5672, vhost: 'vhost.common', user: 'bigdata', state: running), channel 0:
2019-11-20 01:42:09.375 [error] <0.19351.2095> Error on AMQP connection <0.19351.2095> (192.168.0.98:30521 -> 192.168.0.125:5672 - rabbitConnectionFactory#364c9ae4:0, vhost: 'vhost.common', user: 'infra', state: running), channel 0:
2019-11-20 01:42:09.375 [error] <0.14722.4864> Error on AMQP connection <0.14722.4864> (192.168.0.98:56635 -> 192.168.0.125:5672, vhost: 'vhost.common', user: 'bigdata', state: running), channel 0:
2019-11-20 01:42:09.375 [error] <0.31639.4768> Error on AMQP connection <0.31639.4768> (192.168.0.98:62477 -> 192.168.0.125:5672 - rabbitConnectionFactory#63de4390:0, vhost: 'vhost.common', user: 'bigdata', state: running), channel 0:
2019-11-20 01:42:09.375 [error] <0.10040.4781> Error on AMQP connection <0.10040.4781> (192.168.0.98:59154 -> 192.168.0.125:5672 - SpringAMQP#734274aa:0, vhost: 'vhost.common', user: 'thread', state: running), channel 0:
2019-11-20 01:42:09.375 [error] <0.12034.4839> Error on AMQP connection <0.12034.4839> (192.168.0.98:56941 -> 192.168.0.125:5672, vhost: 'vhost.common', user: 'bigdata', state: running), channel 0:
2019-11-20 01:42:09.375 [error] <0.11164.4858> Error on AMQP connection <0.11164.4858> (192.168.0.98:56661 -> 192.168.0.125:5672, vhost: 'vhost.common', user: 'bigdata', state: running), channel 0:
2019-11-20 01:42:09.375 [error] <0.6899.4788> Error on AMQP connection <0.6899.4788> (192.168.0.98:56799 -> 192.168.0.125:5672 - rabbitConnectionFactory#78c8b8af:73700, vhost: 'vhost.common', user: 'oasis', state: running), channel 0:
2019-11-20 01:42:09.375 [error] <0.30979.6230> Error on AMQP connection <0.30979.6230> (192.168.0.98:58822 -> 192.168.0.125:5672 - infraConnectionFactory#4bf20e2d:0, vhost: 'vhost.infra', user: 'infra', state: running), channel 0:
2019-11-20 01:42:09.375 [error] <0.1344.3079> Error on AMQP connection <0.1344.3079> (192.168.0.98:43166 -> 192.168.0.125:5672 - connectionFactory#7d0fb1d2:0, vhost: 'vhost.common', user: 'infra', state: running), channel 0:
2019-11-20 01:42:09.375 [error] <0.25053.4852> Error on AMQP connection <0.25053.4852> (192.168.0.98:56861 -> 192.168.0.125:5672, vhost: 'vhost.common', user: 'bigdata', state: running), channel 0:
2019-11-20 01:42:09.375 [error] <0.17131.297> Error on AMQP connection <0.17131.297> (192.168.0.98:40693 -> 192.168.0.125:5672 - commonConnectionFactory#64677323:0, vhost: 'vhost.common', user: 'thread', state: running), channel 0:
2019-11-20 01:42:09.376 [error] <0.10419.4859> Error on AMQP connection <0.10419.4859> (192.168.0.98:56893 -> 192.168.0.125:5672, vhost: 'vhost.common', user: 'bigdata', state: running), channel 0:
2019-11-20 01:42:09.376 [error] <0.10860.4860> Error on AMQP connection <0.10860.4860> (192.168.0.98:56773 -> 192.168.0.125:5672, vhost: 'vhost.common', user: 'bigdata', state: running), channel 0:
2019-11-20 01:42:09.376 [error] <0.23153.4785> Error on AMQP connection <0.23153.4785> (192.168.0.98:59408 -> 192.168.0.125:5672 - SpringAMQP#3f409a8c:0, vhost: 'vhost_thread', user: 'thread', state: running), channel 0:
2019-11-20 01:42:09.376 [error] <0.15157.4856> Error on AMQP connection <0.15157.4856> (192.168.0.98:56891 -> 192.168.0.125:5672, vhost: 'vhost.common', user: 'bigdata', state: running), channel 0:
2019-11-20 01:42:09.376 [error] <0.2610.2565> Error on AMQP connection <0.2610.2565> (192.168.0.98:60636 -> 192.168.0.125:5672 - SpringAMQP#72519c23:0, vhost: 'vhost.common', user: 'ufin', state: running), channel 0:
2019-11-20 01:42:09.376 [error] <0.22673.4846> Error on AMQP connection <0.22673.4846> (192.168.0.98:56691 -> 192.168.0.125:5672, vhost: 'vhost.common', user: 'bigdata', state: running), channel 0:
2019-11-20 01:42:09.376 [error] <0.15836.3349> Error on AMQP connection <0.15836.3349> (192.168.0.98:60060 -> 192.168.0.125:5672 - rabbitConnectionFactory#7fb26506:0, vhost: 'vhost_infra_sales', user: 'thread', state: running), channel 0:
2019-11-20 01:42:09.376 [error] <0.6333.4603> Error on AMQP connection <0.6333.4603> (192.168.0.98:59472 -> 192.168.0.125:5672 - SpringAMQP#6e2601f4:0, vhost: 'vhost.common', user: 'thread', state: running), channel 0:
2019-11-20 01:42:10.000 [error] <0.366.7605> ** Generic server <0.366.7605> terminating
2019-11-20 01:42:10.001 [error] <0.4506.4784> CRASH REPORT Process <0.4506.4784> with 0 neighbours exited with reason: no such process or port in call to gen_server:call(kernel_safe_sup, {start_child,{pg_local,{pg_local,start_link,[]},permanent,4294967295,worker,[pg_local]}}, infinity) in gen_server2:terminate/3 line 1155
2019-11-20 01:42:10.001 [error] <0.1485.4205> CRASH REPORT Process <0.1485.4205> with 0 neighbours exited with reason: no such process or port in call to gen_server:call(kernel_safe_sup, {start_child,{pg_local,{pg_local,start_link,[]},permanent,4294967295,worker,[pg_local]}}, infinity) in gen_server2:terminate/3 line 1155
2019-11-20 01:42:10.001 [error] <0.29965.4613> CRASH REPORT Process <0.29965.4613> with 0 neighbours exited with reason: no such process or port in call to gen_server:call(kernel_safe_sup, {start_child,{pg_local,{pg_local,start_link,[]},permanent,4294967295,worker,[pg_local]}}, infinity) in gen_server2:terminate/3 line 1155
2019-11-20 01:42:10.002 [error] <0.5702.4772> ** Generic server <0.5702.4772> terminating
2019-11-20 01:42:10.002 [error] <0.27190.4859> ** Generic server <0.27190.4859> terminating
2019-11-20 01:42:10.003 [error] <0.30204.3432> ** Generic server <0.30204.3432> terminating
2019-11-20 01:42:10.003 [error] <0.11238.4782> ** Generic server <0.11238.4782> terminating
2019-11-20 01:42:10.004 [error] <0.11369.7988> CRASH REPORT Process <0.11369.7988> with 0 neighbours exited with reason: no such process or port in call to gen_server:call(kernel_safe_sup, {start_child,{pg_local,{pg_local,start_link,[]},permanent,4294967295,worker,[pg_local]}}, infinity) in gen_server2:terminate/3 line 1155
2019-11-20 01:42:10.004 [error] <0.10441.2530> ** Generic server <0.10441.2530> terminating
2019-11-20 01:42:10.005 [error] <0.24810.4789> ** Generic server <0.24810.4789> terminating
2019-11-20 01:42:10.005 [error] <0.1721.3603> CRASH REPORT Process <0.1721.3603> with 0 neighbours exited with reason: no such process or port in call to gen_server:call(kernel_safe_sup, {start_child,{pg_local,{pg_local,start_link,[]},permanent,4294967295,worker,[pg_local]}}, infinity) in gen_server2:terminate/3 line 1155
2019-11-20 01:42:10.005 [error] <0.1319.4784> ** Generic server <0.1319.4784> terminating
2019-11-20 01:42:10.007 [error] <0.4810.4786> ** Generic server <0.4810.4786> terminating
2019-11-20 01:42:10.008 [error] <0.5702.4772> CRASH REPORT Process <0.5702.4772> with 0 neighbours exited with reason: no such process or port in call to gen_server:call(kernel_safe_sup, {start_child,{pg_local,{pg_local,start_link,[]},permanent,4294967295,worker,[pg_local]}}, infinity) in gen_server2:terminate/3 line 1155
2019-11-20 01:42:10.008 [error] <0.30204.3432> CRASH REPORT Process <0.30204.3432> with 0 neighbours exited with reason: no such process or port in call to gen_server:call(kernel_safe_sup, {start_child,{pg_local,{pg_local,start_link,[]},permanent,4294967295,worker,[pg_local]}}, infinity) in gen_server2:terminate/3 line 1155
2019-11-20 01:42:10.009 [error] <0.2188.3429> ** Generic server <0.2188.3429> terminating
2019-11-20 01:42:10.009 [error] <0.32048.4785> ** Generic server <0.32048.4785> terminating
2019-11-20 01:42:10.010 [error] <0.22484.4860> CRASH REPORT Process <0.22484.4860> with 0 neighbours exited with reason: no such process or port in call to gen_server:call(kernel_safe_sup, {start_child,{pg_local,{pg_local,start_link,[]},permanent,4294967295,worker,[pg_local]}}, infinity) in gen_server2:terminate/3 line 1155
2019-11-20 01:42:10.010 [error] <0.24810.4789> CRASH REPORT Process <0.24810.4789> with 0 neighbours exited with reason: no such process or port in call to gen_server:call(kernel_safe_sup, {start_child,{pg_local,{pg_local,start_link,[]},permanent,4294967295,worker,[pg_local]}}, infinity) in gen_server2:terminate/3 line 1155
2019-11-20 01:42:10.010 [error] <0.29165.2546> ** Generic server <0.29165.2546> terminating
2019-11-20 01:42:10.011 [error] <0.30015.4144> ** Generic server <0.30015.4144> terminating
2019-11-20 01:42:10.012 [error] <0.4810.4786> CRASH REPORT Process <0.4810.4786> with 0 neighbours exited with reason: no such process or port in call to gen_server:call(kernel_safe_sup, {start_child,{pg_local,{pg_local,start_link,[]},permanent,4294967295,worker,[pg_local]}}, infinity) in gen_server2:terminate/3 line 1155
2019-11-20 01:42:10.012 [error] <0.26310.4777> CRASH REPORT Process <0.26310.4777> with 0 neighbours exited with reason: no such process or port in call to gen_server:call(kernel_safe_sup, {start_child,{pg_local,{pg_local,start_link,[]},permanent,4294967295,worker,[pg_local]}}, infinity) in gen_server2:terminate/3 line 1155
2019-11-20 01:42:10.013 [error] <0.1319.4784> CRASH REPORT Process <0.1319.4784> with 0 neighbours exited with reason: no such process or port in call to gen_server:call(kernel_safe_sup, {start_child,{pg_local,{pg_local,start_link,[]},permanent,4294967295,worker,[pg_local]}}, infinity) in gen_server2:terminate/3 line 1155
2019-11-20 01:42:10.013 [error] <0.11238.4782> CRASH REPORT Process <0.11238.4782> with 0 neighbours exited with reason: no such process or port in call to gen_server:call(kernel_safe_sup, {start_child,{pg_local,{pg_local,start_link,[]},permanent,4294967295,worker,[pg_local]}}, infinity) in gen_server2:terminate/3 line 1155
2019-11-20 01:42:10.013 [error] <0.2188.3429> CRASH REPORT Process <0.2188.3429> with 0 neighbours exited with reason: no such process or port in call to gen_server:call(kernel_safe_sup, {start_child,{pg_local,{pg_local,start_link,[]},permanent,4294967295,worker,[pg_local]}}, infinity) in gen_server2:terminate/3 line 1155
2019-11-20 01:42:10.014 [error] <0.30015.4144> CRASH REPORT Process <0.30015.4144> with 0 neighbours exited with reason: no such process or port in call to gen_server:call(kernel_safe_sup, {start_child,{pg_local,{pg_local,start_link,[]},permanent,4294967295,worker,[pg_local]}}, infinity) in gen_server2:terminate/3 line 1155

Federation links disappear on RabbitMQ restart

Hi,
We have two RabbitMQ clusters with 3 nodes each. A few of the queue/exchange federation links go missing every time we restart RabbitMQ nodes as part of maintenance. Please help us resolve this issue, and let me know if you need more information.

rabbitmqctl --version
3.7.12
Erlang/OTP 21 [erts-10.2.4]

OS: RHEL 7.6
URI Configuration:
"uri": ["amqp://username:password@node1_ip/vhost","amqp://username:password@node2_ip/vhost","amqp://username:password@node3_ip/vhost"]

Kubernetes peer discovery

I'm not sure what the question is.

At the end of the log a peer joins the cluster:

2019-09-29 02:30:14.075 [info] <0.452.0> node '[email protected]' up
2019-09-29 02:30:14.424 [info] <0.452.0> rabbit on node '[email protected]' up

If you want to see what Kubernetes API endpoint responses return, set the log level to debug.
Previously initialised nodes (that is, nodes with an existing data directory) must be reset between clustering attempts, or they will behave as "rejoining nodes", which the docs cover.
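The reset-before-rejoin step described above can be sketched as the following command sequence (the seed node name `rabbit@seed-node` is a placeholder):

```shell
# Sketch: reset a previously initialised node, then join it to the cluster.
# "rabbit@seed-node" is a placeholder for an existing cluster member.
rabbitmqctl stop_app
rabbitmqctl reset          # wipes this node's data directory and identity
rabbitmqctl join_cluster rabbit@seed-node
rabbitmqctl start_app
```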

For our team, GitHub is not a support forum, so I am locking this issue. Please use the mailing list in the future.

Originally posted by @michaelklishin in rabbitmq/rabbitmq-peer-discovery-k8s#52 (comment)

PerfTest: cannot start more than 740 producers

My PerfTest version is 2.10.0.RC1.

My run options:
./runjava com.rabbitmq.perf.PerfTest --producers 1000 --publishing-interval 1 --producer-scheduler-threads 10 --size 1500 --use-millis

And the result:

Main thread caught exception: java.util.concurrent.TimeoutException
11:20:16.319 [main] ERROR com.rabbitmq.perf.PerfTest - Main thread caught exception
java.util.concurrent.TimeoutException: null
	at com.rabbitmq.utility.BlockingCell.get(BlockingCell.java:77)
	at com.rabbitmq.utility.BlockingCell.uninterruptibleGet(BlockingCell.java:120)
	at com.rabbitmq.utility.BlockingValueOrException.uninterruptibleGetValue(BlockingValueOrException.java:36)
	at com.rabbitmq.client.impl.AMQChannel$BlockingRpcContinuation.getReply(AMQChannel.java:502)
	at com.rabbitmq.client.impl.AMQConnection.start(AMQConnection.java:325)
	at com.rabbitmq.client.impl.recovery.RecoveryAwareAMQConnectionFactory.newConnection(RecoveryAwareAMQConnectionFactory.java:64)
	at com.rabbitmq.client.impl.recovery.AutorecoveringConnection.init(AutorecoveringConnection.java:156)
	at com.rabbitmq.client.ConnectionFactory.newConnection(ConnectionFactory.java:1110)
	at com.rabbitmq.client.ConnectionFactory.newConnection(ConnectionFactory.java:1067)
	at com.rabbitmq.client.ConnectionFactory.newConnection(ConnectionFactory.java:965)
	at com.rabbitmq.perf.MulticastSet$ConnectionCreator.createConnection(MulticastSet.java:730)
	at com.rabbitmq.perf.MulticastSet.createConnection(MulticastSet.java:267)
	at com.rabbitmq.perf.MulticastSet.createProducers(MulticastSet.java:324)
	at com.rabbitmq.perf.MulticastSet.run(MulticastSet.java:173)
	at com.rabbitmq.perf.PerfTest.main(PerfTest.java:325)
	at com.rabbitmq.perf.PerfTest.main(PerfTest.java:445)

MQTT "due to an internal error or unavailable component"

The cluster has 6 nodes. If 1 or 2 nodes are down, MQTT clients can still connect. When 3 or more nodes are down, MQTT clients cannot connect. However, with AMQP, clients can connect even when only one node survives. We expected MQTT clients to also be able to connect as long as any node is alive. The server-side error is as below:

2019-11-12 15:36:33.698 [error] <0.15599.2> MQTT cannot accept a connection: client ID registration timed out
2019-11-12 15:36:33.698 [error] <0.15599.2> MQTT cannot accept connection x.x.x.x:50528 -> x.x.x.x:1883 due to an internal error or unavailable component

  • RabbitMQ version:3.8.0
  • Erlang version:Erlang 22.1.4

I want to know why MQTT clients cannot connect when 3 or more nodes are down. Looking forward to your suggestions. Thank you very much.

Can regular expressions be supported in routingKey ?

I want to use regular expressions in routing keys, such as a|b.
For example, I publish two types of messages (t1 and t2) to an exchange,
and I want to create three queues bound to this exchange:
1. one that receives messages of type t1;
2. one that receives messages of type t2;
3. one that receives messages of both types t1 and t2.
How can I solve this problem?
Thanks
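RabbitMQ routing keys do not support regular expressions, but a topic exchange with `*`/`#` wildcards plus multiple bindings per queue covers the cases above: bind queue 1 with key `t1`, queue 2 with `t2`, and queue 3 twice, once with each key. A minimal Python sketch of topic-matching semantics, to illustrate why two bindings behave like `t1|t2` (the queue names `q1`..`q3` are hypothetical; this is not the broker's implementation):

```python
def topic_matches(binding_key: str, routing_key: str) -> bool:
    """AMQP topic-exchange semantics: '*' matches exactly one
    dot-separated word, '#' matches zero or more words."""
    def match(pattern, words):
        if not pattern:
            return not words
        head, rest = pattern[0], pattern[1:]
        if head == '#':
            # '#' may consume zero or more of the remaining words
            return any(match(rest, words[i:]) for i in range(len(words) + 1))
        return bool(words) and head in ('*', words[0]) and match(rest, words[1:])
    return match(binding_key.split('.'), routing_key.split('.'))

# A queue may hold several bindings; q3 is bound with both keys,
# which is the "t1 OR t2" case without any regex support.
bindings = {
    'q1': ['t1'],
    'q2': ['t2'],
    'q3': ['t1', 't2'],
}

def queues_for(routing_key):
    return sorted(q for q, keys in bindings.items()
                  if any(topic_matches(k, routing_key) for k in keys))
```

With these bindings, a `t1` message is delivered to q1 and q3, and a `t2` message to q2 and q3.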

Management UI starts responding with 500s after 1-2 hours of operation

Hi,

I'm having an issue with RabbitMQ: everything is fine on my machine, but not on others (e.g. a VM).
After installation it works for about an hour, then it crashes, and I need to reinstall to get it back.

When I open the management UI (rabbitmq_management_agent) I get Error 500.

C:\Program Files\RabbitMQ Server\rabbitmq_server-3.8.1\sbin>rabbitmqctl status
Status of node rabbit@IE-DEV-CNTRCT02 ...
Runtime
OS PID: 1956
OS: Windows
Uptime (seconds): 55473
RabbitMQ version: 3.8.1
Node name: rabbit@IE-DEV-CNTRCT02
Erlang configuration: Erlang/OTP 22 [erts-10.5] [64-bit] [smp:1:1] [ds:1:1:10] [async-threads:64]
Erlang processes: 402 used, 1048576 limit
Scheduler run queue: 1
Cluster heartbeat timeout (net_ticktime): 60

Plugins

Enabled plugin file: C:/Users/XXXXX/AppData/Roaming/RabbitMQ/enabled_plugins
Enabled plugins:

 * rabbitmq_management
 * rabbitmq_web_dispatch
 * rabbitmq_management_agent
 * cowboy
 * cowlib
 * amqp_client

Data directory

Node data directory: c:/Users/XXXXXXX/AppData/Roaming/RabbitMQ/db/rabbit@IE-DEV-CNTRCT02-mnesia

Config files

 * c:/Users/XXXXX/AppData/Roaming/RabbitMQ/advanced.config

Log file(s)

 * C:/Users/XXXXXXX/AppData/Roaming/RabbitMQ/log/[email protected]
 * C:/Users/XXXXXXX/AppData/Roaming/RabbitMQ/log/rabbit@IE-DEV-CNTRCT02_upgrade.log

Alarms

(none)

Memory

Calculation strategy: rss
Memory high watermark setting: 0.4 of available memory, computed to: 3.4358 gb
other_proc: 0.0328 gb (37.76 %)
code: 0.0297 gb (34.24 %)
other_system: 0.0112 gb (12.93 %)
allocated_unused: 0.0074 gb (8.49 %)
other_ets: 0.0033 gb (3.75 %)
atom: 0.0015 gb (1.75 %)
plugins: 0.0003 gb (0.35 %)
metrics: 0.0002 gb (0.24 %)
binary: 0.0002 gb (0.23 %)
mnesia: 0.0001 gb (0.09 %)
quorum_ets: 0.0 gb (0.05 %)
mgmt_db: 0.0 gb (0.04 %)
msg_index: 0.0 gb (0.04 %)
queue_procs: 0.0 gb (0.03 %)
connection_other: 0.0 gb (0.0 %)
connection_channels: 0.0 gb (0.0 %)
connection_readers: 0.0 gb (0.0 %)
connection_writers: 0.0 gb (0.0 %)
queue_slave_procs: 0.0 gb (0.0 %)
quorum_queue_procs: 0.0 gb (0.0 %)
reserved_unallocated: 0.0 gb (0.0 %)

File Descriptors

Total: 2, limit: 65439
Sockets: 0, limit: 58893

Free Disk Space

Low free disk space watermark: 0.05 gb
Free disk space: 20.163 gb

Totals

Connection count: 0
Queue count: 1
Virtual host count: 1

Listeners

Interface: [::], port: 25672, protocol: clustering, purpose: inter-node and CLI tool communication
Interface: [::], port: 5672, protocol: amqp, purpose: AMQP 0-9-1 and AMQP 1.0
Interface: 0.0.0.0, port: 5672, protocol: amqp, purpose: AMQP 0-9-1 and AMQP 1.0
Interface: [::], port: 15672, protocol: http, purpose: HTTP API
Interface: 0.0.0.0, port: 15672, protocol: http, purpose: HTTP API

then stop the server by

> C:\Program Files\RabbitMQ Server\rabbitmq_server-3.8.1\sbin>rabbitmqctl stop_app
> .Stopping rabbit application on node rabbit@IE-DEV-CNTRCT02 ...

start the server again

C:\Program Files\RabbitMQ Server\rabbitmq_server-3.8.1\sbin>rabbitmqctl start_app
Starting node rabbit@IE-DEV-CNTRCT02 ...
Error:
{:could_not_start, :rabbitmq_management_agent, {:rabbitmq_management_agent, {{:shutdown, {:failed_to_start_child, :rabbit_mgmt_agent_sup, {:shutdown, {:failed_to_start_child, :rabbit_mgmt_external_stats, {:badarg, [{:erlang, :port_command, [#Port<10510.10551>, []], [file: 'erlang.erl', line: 3143]}, {:os, :cmd, 2, [file: 'os.erl', line: 278]}, {:rabbit_mgmt_external_stats, :get_used_fd, 1, [file: 'src/rabbit_mgmt_external_stats.erl', line: 137]}, {:rabbit_mgmt_external_stats, :get_used_fd, 0, [file: 'src/rabbit_mgmt_external_stats.erl', line: 65]}, {:rabbit_mgmt_external_stats, :"-infos/2-lc$^0/1-0-", 2, [file: 'src/rabbit_mgmt_external_stats.erl', line: 181]}, {:rabbit_mgmt_external_stats, :emit_update, 1, [file: 'src/rabbit_mgmt_external_stats.erl', line: 385]}, {:rabbit_mgmt_external_stats, :init, 1, [file: 'src/rabbit_mgmt_external_stats.erl', line: 363]}, {:gen_server, :init_it, 2, [file: 'gen_server.erl', line: 374]}]}}}}}, {:rabbit_mgmt_agent_app, :start, [:normal, []]}}}}

Has anyone seen this issue before?

I'm running RabbitMQ on a VM.

Debug log:

2019-11-15 09:43:57.649 [info] <0.28747.1> Stopping message store for directory 'c:/Users/xxxxxxxxx/AppData/Roaming/RabbitMQ/db/rabbit@IE-DEV-CNTRCT02-mnesia/msg_stores/vhosts/628WB79CIFDYO9LJI6DKMI09L/msg_store_persistent'
2019-11-15 09:43:57.707 [info] <0.28747.1> Message store for directory 'c:/Users/xxxxxxxxx/AppData/Roaming/RabbitMQ/db/rabbit@IE-DEV-CNTRCT02-mnesia/msg_stores/vhosts/628WB79CIFDYO9LJI6DKMI09L/msg_store_persistent' is stopped
2019-11-15 09:43:57.707 [info] <0.28744.1> Stopping message store for directory 'c:/Users/xxxxxxxxx/AppData/Roaming/RabbitMQ/db/rabbit@IE-DEV-CNTRCT02-mnesia/msg_stores/vhosts/628WB79CIFDYO9LJI6DKMI09L/msg_store_transient'
2019-11-15 09:43:57.751 [info] <0.28744.1> Message store for directory 'c:/Users/xxxxxxxxx/AppData/Roaming/RabbitMQ/db/rabbit@IE-DEV-CNTRCT02-mnesia/msg_stores/vhosts/628WB79CIFDYO9LJI6DKMI09L/msg_store_transient' is stopped
2019-11-15 09:43:57.773 [info] <0.43.0> Application rabbit exited with reason: stopped
2019-11-15 09:43:57.787 [info] <0.43.0> Application ra exited with reason: stopped
2019-11-15 09:43:57.797 [info] <0.43.0> Application cowboy exited with reason: stopped
2019-11-15 09:43:57.797 [info] <0.43.0> Application cowlib exited with reason: stopped
2019-11-15 09:43:57.806 [info] <0.43.0> Application amqp_client exited with reason: stopped
2019-11-15 09:43:57.806 [info] <0.43.0> Application rabbit_common exited with reason: stopped
2019-11-15 09:43:57.817 [info] <0.43.0> Application os_mon exited with reason: stopped
2019-11-15 09:43:57.828 [info] <0.43.0> Application sysmon_handler exited with reason: stopped
2019-11-15 09:43:57.847 [info] <0.43.0> Application mnesia exited with reason: stopped
2019-11-15 09:43:57.848 [error] <0.28496.1> 
Error description:
    rpc:'-handle_call_call/6-fun-0-'/5 line 197
    rabbit:start_it/1 line 465
    rabbit:broker_start/1 line 341
    rabbit:start_loaded_apps/2 line 591
    app_utils:manage_applications/6 line 126
    lists:foldl/3 line 1263
    rabbit:'-handle_app_error/1-fun-0-'/3 line 714
throw:{could_not_start,rabbitmq_management_agent,
       {rabbitmq_management_agent,
        {{shutdown,
          {failed_to_start_child,rabbit_mgmt_agent_sup,
           {shutdown,
            {failed_to_start_child,rabbit_mgmt_external_stats,
             {badarg,
              [{erlang,port_command,
                [#Port<0.10551>,[]],
                [{file,"erlang.erl"},{line,3143}]},
               {os,cmd,2,[{file,"os.erl"},{line,278}]},
               {rabbit_mgmt_external_stats,get_used_fd,1,
                [{file,"src/rabbit_mgmt_external_stats.erl"},{line,137}]},
               {rabbit_mgmt_external_stats,get_used_fd,0,
                [{file,"src/rabbit_mgmt_external_stats.erl"},{line,65}]},
               {rabbit_mgmt_external_stats,'-infos/2-lc$^0/1-0-',2,
                [{file,"src/rabbit_mgmt_external_stats.erl"},{line,181}]},
               {rabbit_mgmt_external_stats,emit_update,1,
                [{file,"src/rabbit_mgmt_external_stats.erl"},{line,385}]},
               {rabbit_mgmt_external_stats,in...
