
kevinkreiser / prime_server


non-blocking (web)server API for distributed computing and SOA based on zeromq

License: Other

Shell 0.46% Python 1.84% C++ 71.60% Makefile 1.36% M4 21.78% CMake 2.96%

prime_server's People

Contributors

bentley, danpaz, fsaric, irees, jengelh, kevinkreiser, lagrunge, matteblair, mookerji, nilsnolde, noblige, payasr, puretryout, texitoi, zerebubuth


prime_server's Issues

Valhalla OPTIONS request gets 405 - Method not allowed

I think it's a prime server issue, but happy to be told otherwise.

On browser preflight for a Valhalla POST request, it fails on the OPTIONS request with a 405 - Method Not Allowed. Of course it works easily with GET, but I do cringe a little when forcing Valhalla's request into GET 😉

Interestingly, I couldn't find any issue concerning preflight/OPTIONS/CORS on Valhalla or prime server. If you find the time, a nudge in the right direction would be appreciated.

Empty HTTP Body

If you do a work function like this:

worker_t::result_t result{false};                   // not intermediate: goes straight back to the client
http_response_t response(200, "OK", std::string()); // 200 with an empty body
response.from_info(info);                           // attach the request info so the server can route it
result.messages.emplace_back(response.to_string());
return result;

That is, if you reply to an http request by sending a response with an empty body, firefox at least will choke on parsing it. It will expect more of the response even when it isn't coming. So something is weird when the content length is 0, or somewhere in the serialization of http responses..

logging submodule is missing

I’m trying to build valhalla on OpenBSD. prime_server appears to be a dependency of loki, but it can’t be built without the logging submodule. The repository listed in .gitmodules 404s.

Am I missing something obvious here?

Remove autoconf based build system

Currently this repository includes two separate build systems that have to be kept in sync with every modification. The readme still suggests using the autoconf version, which is broken for me with glibc >= 2.2.5.

To simplify the setup I would recommend removing the outdated autoconf build system and updating the readme accordingly.

./configure needs to be told to link 'pthread' under arch linux

I recently installed Arch Linux and wanted to rebuild Valhalla and all its dependencies.

For prime-server, I got stuck following the documentation. When running make test:

  CXX      src/libprime_server_la-prime_server.lo
  CXX      src/libprime_server_la-netstring_protocol.lo
  CXX      src/libprime_server_la-http_util.lo
  CXX      src/libprime_server_la-http_protocol.lo
  CXX      src/prime_httpd-prime_httpd.o
  CXX      src/prime_proxyd-prime_proxyd.o
  CXX      src/prime_workerd-prime_workerd.o
  CXX      src/prime_echod-prime_echod.o
  CXX      src/prime_filed-prime_filed.o
  CXXLD    libprime_server.la
  CXXLD    prime_serverd
/usr/bin/ld: src/prime_serverd-prime_serverd.o: undefined reference to symbol 'pthread_create@@GLIBC_2.2.5'
/usr/bin/ld: /usr/lib/libpthread.so.0: error adding symbols: DSO missing from command line
collect2: error: ld returned 1 exit status
make: *** [Makefile:890: prime_serverd] Error 1
make: *** Waiting for unfinished jobs....

However, re-running ./configure LDFLAGS='-pthread' seemed to solve it. I guess this could be handled somewhere in the CMakeLists.txt. If not, happy to PR updated documentation.

Memory not being reclaimed

Hello,
I am running a daemon process which parses a request and either

  1. instantly evaluates it, or
  2. sends it to a computation layer for further processing

To each of these layers I send pointers which are captured inside the lambda work function for each layer. I have written a check program which uses all the functions I am using inside the daemon and ran memcheck on it; there are no memory leaks. However, any variables that were used inside the work function still claim memory even after the thread ends. I am checking memory usage through Gnome System Info.

For instance, when the function find_trip does some heavy processing and all the variables inside it (R-tree, hash map) take 300 MB of space, these variables are not freed even though they were not allocated through new. They somehow keep living inside the process, and the memory usage never goes lower even if I run a less resource-intensive request afterwards.

I am attaching my daemon code below. Can you please take a look and help me figure out what I am doing wrong?

int main(int argc, char *argv[])
{
    if (argc < 2)
    {
        cout << "Usage " << argv[0] << " input_file" << endl;
        return -1;
    }
    string inputFileName(argv[1]);
    string nodes = inputFileName + ".nodes";         // node is actually Edge
    string lmdb_file = inputFileName + ".nodeIndex"; // LMDBStore keeps information node
    // coordinates, poi data and poi geometry
    string processed_edges = inputFileName + ".ch"; // edges from optimalized graph created by contraction hierarchies
    string highwayData = inputFileName + ".highwaydata";
    string highwayRtree = inputFileName + ".highwayRTree";
    string poiRtree = inputFileName + ".poiRTree";

    Graph *graph = new Graph();
    graph->loadNodes(nodes);
    graph->loadEdges(processed_edges);
    graph->loadLMDBStore(lmdb_file);
    graph->loadHighwayData(highwayData);

    RStarIndex *rstar_index = new RStarIndex(graph, highwayRtree, poiRtree);
    tbb::task_scheduler_init init;

    // number of workers to use at each stage
    auto worker_concurrency = std::thread::hardware_concurrency();
    logging::INFO("Number of threads per stage is " + to_string(worker_concurrency));

    // change these to tcp://known.ip.address.with:port if you want to do this
    // across machines
    zmq::context_t context;
    string server_endpoint = "tcp://*:8002";
    std::string result_endpoint = "ipc:///tmp/result_endpoint";
    std::string request_interrupt = "ipc:///tmp/request_interrupt";
    std::string parse_proxy_endpoint = "ipc:///tmp/parse_proxy_endpoint";
    std::string compute_proxy_endpoint = "ipc:///tmp/compute_proxy_endpoint";
    bool default_logging = false; //enable/disable default request/response logging

    // server
    std::thread server_thread = std::thread(std::bind(&http_server_t::serve,
                                                      http_server_t(context,
                                                                    server_endpoint,
                                                                    parse_proxy_endpoint + "_upstream",
                                                                    result_endpoint,
                                                                    request_interrupt,
                                                                    default_logging)));
    // load balancer for parsing
    std::thread parse_proxy(std::bind(&proxy_t::forward, proxy_t(context,
                                                                 parse_proxy_endpoint + "_upstream",
                                                                 parse_proxy_endpoint + "_downstream")));
    parse_proxy.detach();

    //request parsers
    std::list<std::thread> parse_worker_threads;
    for (size_t i = 0; i < worker_concurrency; ++i)
    {
        parse_worker_threads.emplace_back(std::bind(
            &worker_t::work,
            worker_t(
                context,
                parse_proxy_endpoint + "_downstream",
                compute_proxy_endpoint + "_upstream",
                result_endpoint,
                request_interrupt,
                [&graph, &rstar_index](const std::list<zmq::message_t> &job, void *request_info,
                                       worker_t::interrupt_function_t &) {
                    try
                    {
                        http_request_t request = http_request_t::from_string(static_cast<const char *>(job.front().data()), job.front().size());
                        http_request_info_t info = *static_cast<http_request_info_t *>(request_info);
                        int is_get_request = (request.method == method_t::GET);

                        //we accept only GET and POST requests
                        if (request.method != method_t::POST && request.method != method_t::GET)
                        {
                            return jsonify_error(405, runtime_error("Only GET and POST request are allowed"), is_get_request, info);
                        }

                        //is the request path action in the action set?
                        auto action = PATH_TO_ACTION.find(request.path);
                        if (action == PATH_TO_ACTION.cend())
                        {
                            return jsonify_error(404, runtime_error("Invalid route"), is_get_request, info);
                        }

                        worker_t::result_t result{false};
                        //do request specific processing
                        switch (action->second)
                        {
                        case ADDRESS:
                            result = find_address(request, info, graph);
                            break;
                        case NEAREST:
                            result = find_nearest(request, info, graph, rstar_index);
                            break;
                        case ROUTE:
                            result = parse_shortest_query(request, info);
                            break;
                        case TRIP:
                            result = parse_trip_query(request, info);
                            break;
                        default:
                            //apparently you wanted something that we figured we'd support but havent written yet
                            return jsonify_error(501, runtime_error("Feature is not implemented yet"), is_get_request, info);
                        }

                        return result;
                    }
                    catch (const std::exception &e)
                    {
                        http_request_t request = http_request_t::from_string(static_cast<const char *>(job.front().data()), job.front().size());
                        int is_get_request = (request.method == method_t::GET);
                        http_request_info_t info = *static_cast<http_request_info_t *>(request_info);
                        return jsonify_error(400, e, is_get_request, info);
                    }
                })));
        parse_worker_threads.back().detach();
    }

    //load balancer for shortest path computation
    std::thread compute_proxy(std::bind(&proxy_t::forward, proxy_t(context,
                                                                   compute_proxy_endpoint + "_upstream",
                                                                   compute_proxy_endpoint + "_downstream")));
    compute_proxy.detach();

    //shortest path computation
    std::list<std::thread> compute_worker_threads;
    vector<ContractionHierarchyQuery> query_objects;

    for (size_t i = 0; i < worker_concurrency; ++i)
    {
        //we need separate query object for each thread
        query_objects.push_back(graph->getQueryObject());

        compute_worker_threads.emplace_back(std::bind(
            &worker_t::work,
            worker_t(context,
                     compute_proxy_endpoint + "_downstream",
                     "ipc:///dev/null",
                     result_endpoint,
                     request_interrupt,
                     [&graph, &rstar_index, &query_objects, i](const std::list<zmq::message_t> &job, void *request_info, worker_t::interrupt_function_t &) {

                         string query(static_cast<const char *>(job.front().data()), job.front().size());

                         //parse request
                         json arr = json::parse(query);
                         int is_get_request = arr[0];
                         int accepts_gzip = arr[1];
                         int find_shortest = arr[2];
                         uint32_t start = arr[3];
                         uint32_t destination = arr[4];
                         vector<double> coordinates = arr[5].get<vector<double>>();
                         vector<int> categories;
                         if (!find_shortest)
                         {
                             categories = arr[6].get<vector<int>>();
                         }
                         try
                         {
                             http_request_info_t info = *static_cast<http_request_info_t *>(request_info);
                             worker_t::result_t result{false};
                             if (find_shortest)
                             {
                                 result = find_shortest_path(is_get_request, accepts_gzip, start, destination, coordinates, info, graph, query_objects[i]);
                             }
                             else
                             {
                                 result = find_trip(is_get_request, accepts_gzip, start, destination, coordinates, categories, info, graph, rstar_index, query_objects[i]);
                             }
                             return result;
                         }
                         catch (const std::exception &e)
                         {
                             http_request_info_t info = *static_cast<http_request_info_t *>(request_info);
                             return jsonify_error(400, e, is_get_request, info);
                         }
                     })));
        compute_worker_threads.back().detach();
    }

    //listen for SIGINT and terminate if we hear it
    std::signal(SIGINT, [](int) {
        std::this_thread::sleep_for(std::chrono::seconds(1));
        exit(1);
    });
    server_thread.join();

    delete graph;
    delete rstar_index;

    return 0;
}

MOAR Config

We have to bite the bullet at some point and allow for a standard config.. At this point the api is configurable in a few ways, but we really should make a config object (that probably serializes from yml or json) which can be passed around to the different constructors that need the options.

Right now a few things aren't even configurable, so we should make them so, specifically CORS headers and others. Perhaps we can just have a section of key value pairs of all headers to add to responses?
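Purely as illustration, a hypothetical sketch of what such a config object could look like (none of these names exist in the library; the deserializer is left out):

#include <cstddef>
#include <string>
#include <unordered_map>

struct config_t {
  std::string server_endpoint = "tcp://*:8002";
  size_t max_request_size = 1024 * 1024;
  bool log_requests = true;
  // a free-form section of headers appended to every response,
  // e.g. {"Access-Control-Allow-Origin", "*"} for CORS
  std::unordered_map<std::string, std::string> response_headers;
  // static config_t from_json(const std::string&); // deserializer omitted
};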

let the response determine whether or not to close the connection to the client

right now we have the server hardcoded to always close the connection after the response is sent. from stack overflow:

In HTTP 0.9, the server always closes the connection after sending the response. The client must close its end of the connection after receiving the response.

In HTTP 1.0, the server always closes the connection after sending the response UNLESS the client sent a Connection: keep-alive request header and the server sent a Connection: keep-alive response header. If no such response header exists, the client must close its end of the connection after receiving the response.

In HTTP 1.1, the server does not close the connection after sending the response UNLESS the client sent a Connection: close request header, or the server sent a Connection: close response header. If such a response header exists, the client must close its end of the connection after receiving the response.

so apparently we are 0.9 compliant, but that's just not going to fly. we should fix that
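a minimal sketch of the decision the server would need to make per the rules quoted above (the header lookup and exact-match comparison are simplifications):

#include <string>
#include <unordered_map>

// true if the server should close the connection after responding
bool should_close(const std::string& version,
                  const std::unordered_map<std::string, std::string>& headers) {
  auto connection = headers.find("Connection");
  if (version == "HTTP/1.1") // 1.1 defaults to keep-alive
    return connection != headers.end() && connection->second == "close";
  // 1.0 and below default to close unless keep-alive was negotiated
  return connection == headers.end() || connection->second != "keep-alive";
}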

refactor protocol

it seems the simple nature of the protocol definitions is great for netstrings but completely insufficient for http (which is a very interesting use case). we should refactor that to be something more like:

request - an object that the server pushes down into the proxy/worker layers
response - an object that a worker pushes back up to the server when finished with its work
stream_request - used by the server to collect bytes from a client and can break itself into multiple request objects
stream_response - used by the client to collect bytes from a server and can break itself into multiple response objects
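a hypothetical sketch of those four objects as C++ interfaces (none of these names exist in the library yet; the boundary detection is protocol specific and elided):

#include <cstddef>
#include <list>
#include <string>

struct request_t  { std::string data; };  // pushed down from server to proxy/worker
struct response_t { std::string data; };  // pushed back up from worker to server

// collects bytes from a client and breaks itself into complete requests
struct stream_request_t {
  std::string buffer;
  std::list<request_t> collect(const char* bytes, size_t length) {
    buffer.append(bytes, length);
    std::list<request_t> complete;
    // ... split buffer at protocol-specific message boundaries ...
    return complete;
  }
};

// the mirror image: a client collecting bytes from a server into responses
struct stream_response_t {
  std::string buffer;
  std::list<response_t> collect(const char* bytes, size_t length) {
    buffer.append(bytes, length);
    return {}; // same idea as above, yielding response_t objects
  }
};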

add proper http response parsing

it would be nice, though completely not critical, to have actual well defined http response objects that, given an http protocol response string, would populate themselves with: code, message, headers, body, etc

server becomes unresponsive

the lapse project, which makes use of prime_server as an image file server, has a button for cycling through pictures. if you highlight the button in a web browser and just hold down the enter key, you'll make a lot of requests in short succession. after a short time of this the server stops noticing requests and completely stops answering them as well.

this has been also confirmed by @slvlirnoff via issue valhalla/valhalla#798

Compilation error: ‘ZMQ_STREAM_NOTIFY’ was not declared in this scope

I am trying to build it on Ubuntu 16.04 Beta (to answer your Q in AU). I followed the steps from the read-me file.

make test raises the following error:

$ make test
  CXX      src/libprime_server_la-prime_server.lo
src/prime_server.cpp: In constructor ‘prime_server::client_t::client_t(zmq::context_t&, const string&, const request_function_t&, const collect_function_t&, size_t)’:
src/prime_server.cpp:23:23: error: ‘ZMQ_STREAM_NOTIFY’ was not declared in this scope
     server.setsockopt(ZMQ_STREAM_NOTIFY, &enabled, sizeof(enabled));
                       ^
src/prime_server.cpp: In constructor ‘prime_server::server_t<request_container_t, request_info_t>::server_t(zmq::context_t&, const string&, const string&, const string&, bool, size_t)’:
src/prime_server.cpp:91:23: error: ‘ZMQ_STREAM_NOTIFY’ was not declared in this scope
     client.setsockopt(ZMQ_STREAM_NOTIFY, &enabled, sizeof(enabled));
                       ^
Makefile:916: recipe for target 'src/libprime_server_la-prime_server.lo' failed
make: *** [src/libprime_server_la-prime_server.lo] Error 1

Current dependencies installed:

$ apt-cache policy  libcurl4-openssl-dev libzmq3-dev
libcurl4-openssl-dev:
  Installed: 7.47.0-1ubuntu2
  Candidate: 7.47.0-1ubuntu2
  Version table:
 *** 7.47.0-1ubuntu2 500
        500 http://dz.archive.ubuntu.com/ubuntu xenial/main i386 Packages
        100 /var/lib/dpkg/status
libzmq3-dev:
  Installed: 4.1.4-7
  Candidate: 4.1.4-7
  Version table:
 *** 4.1.4-7 500
        500 http://dz.archive.ubuntu.com/ubuntu xenial/universe i386 Packages
        100 /var/lib/dpkg/status
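For what it's worth, libzmq 4.1.4 appears to predate the ZMQ_STREAM_NOTIFY option. A compile-time guard like the following sketch (an assumption about a possible fix, not the project's actual one) would let the code build against both old and new libzmq:

#include <zmq.h>

// only set the option when the installed libzmq defines it (zmq's
// notification behavior has changed across versions, as noted elsewhere
// on this page)
void enable_stream_notify(void* socket) {
#ifdef ZMQ_STREAM_NOTIFY
  int enabled = 1;
  zmq_setsockopt(socket, ZMQ_STREAM_NOTIFY, &enabled, sizeof(enabled));
#endif
}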

large request response

currently large requests garner a close of the connection. perhaps, at least for http, we might want to return a 413 instead.

heartbeats carry useful information

currently the proxy chooses a worker to send work to based on only one criterion: whether or not the worker is already working. it would be great if we could add a bit more semantics to this interface. specifically, in the future workers could advertise within their heartbeats what types of work they are most likely to do a good job at. the application can decide whether this is a hard and fast 'i only do this type of work' or a softer 'i prefer this type of work'.

we'll make this generic by simply heartbeating a set of bytes from the worker, which the application can interpret however it likes. to make this interpreting easier we'll break out the bit in the proxy that chooses a worker and does the heartbeat bookkeeping.

we'll take care to make sure the defaults work the same as they do now, without any intervention.
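a hypothetical sketch of the broken-out chooser (names and signatures are illustrative, not the library's):

#include <string>
#include <unordered_map>

using worker_id_t = std::string; // the worker's zmq identity frame
using heartbeat_t = std::string; // opaque advertisement bytes, app-interpreted

// pick an idle worker whose advertisement matches the job's hint;
// returning null means 'no preference', i.e. the current first-available rule
const worker_id_t* choose_worker(const heartbeat_t& job_hint,
                                 const std::unordered_map<worker_id_t, heartbeat_t>& idle) {
  for (const auto& kv : idle)
    if (kv.second == job_hint) // app-specific matching, exact equality here
      return &kv.first;
  return nullptr;
}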

refactor netstring protocol for requests

the netstring protocol should mirror the http protocol in terms of request parsing and serializing. this could simplify testing and make the pattern more obvious for others who want to develop their own protocol

-lpthread not linked against

on some systems autotools doesn't throw -lpthread in there and you end up getting link errors. @ptpt describes this here: #35

hopefully there is a way to automatically link against pthreads via autotools detection mechanisms (the AX_PTHREAD macro from the autoconf archive is one such mechanism). it would be a shame to have to hardcode that

http pedantic

there are lots of things we just aren't pedantic about... some of these we could do, but at the annoyance of added complication. some of them we purposely leave up to the application for flexibility's sake..

here's a small start of a list, there are many more to cross off.. we should start a branch and slowly chip these off. it's important that we merge master into this branch frequently..

  • http 1.1 clients who don't specify a Host: header should get 400s
  • no Date: header is returned
  • no cache headers at all..

close connection on malformed request

right now we throw an exception and log malformed requests. we should indeed close the connection to these peers. there's really no point in keeping them around, as they basically can't recover from failing to parse whatever protocol they were using anyway.

mac build fails now

@patriciogonzalezvivo found that on darwin:

./prime_server/zmq_helpers.hpp:73:45: error: unknown template name '__hash_base'
 struct hash<zmq::message_t> : public std::__hash_base<size_t, zmq::message_t> {
                                           ^
./prime_server/zmq_helpers.hpp:76:19: error: no member named '_Hash_impl' in namespace 'std'
     return std::_Hash_impl::hash(__m.data(), __m.size());
            ~~~~~^
In file included from src/netstring_protocol.cpp:1:
In file included from ./prime_server/netstring_protocol.hpp:4:
In file included from ./prime_server/prime_server.hpp:18:
./prime_server/zmq_helpers.hpp:73:45: error: unknown template name '__hash_base'
 struct hash<zmq::message_t> : public std::__hash_base<size_t, zmq::message_t> {
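a sketch of a portable replacement for that specialization in prime_server/zmq_helpers.hpp, using only standard facilities (std::__hash_base and std::_Hash_impl are libstdc++ internals that libc++ on darwin doesn't have); it assumes message_t exposes data() and size() on const objects:

#include <cstddef>
#include <functional>
#include <string>

namespace std {
template <>
struct hash<zmq::message_t> {
  size_t operator()(const zmq::message_t& m) const noexcept {
    // hash the raw bytes by viewing them as a string
    return hash<std::string>{}(
        std::string(static_cast<const char*>(m.data()), m.size()));
  }
};
} // namespace std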

Job Cancellation

When workers are performing relatively long tasks and the client no longer needs the result of such a task (maybe signaled via client disconnect?), we should be able to tell a given worker to abort its work. There are a few questions here though.

  1. Who controls whether or not the work is aborted? In the most rigorous sense you'd like to be sure the work is ended as soon as possible. The problem, though, is that the thread or process that knows about the cancellation isn't the same one doing the work, so that option is out. Which leaves us with: the worker has to decide when it's done. The issue here is that the worker doesn't even control its work; that functionality is injected into the worker's work function. So the trick will be finding a way to give the work function access to information that the worker is getting.
  2. How do we notify the worker's work function about aborting? Let's assume the worker itself knows it should abort (we'll talk about how in another bullet point). How does the worker communicate this information to the work function? The obvious way would be to have the work function poll this state directly. To do this we'll need to expand the prototype of the work function to include a polling function, so the work function can hear about its job being aborted. We can have the polling function throw a custom exception when an abort matches the current job id. This can be caught outside of the work function and logged, and then the worker can tell the proxy it's ready for more work again. Note that this requires the implementer of the work function to actually call the polling function periodically; they have the option not to. A sketch of this polling approach follows the list.
  3. How do we tell a given worker that its current job is to be cancelled? The issue here is one of synchronization. Sure, the server at the top knows that the job is outstanding, but it knows neither which stage of the pipeline it's on nor which worker has it. Indeed, a job could be returned before the abort made it to the right place. So this all boils down to two options: either we do a bunch of bookkeeping (we might need at least some of this for timeouts etc. anyway) or we broadcast the abort message and assume it happened. I personally like the latter. Even if it's udp-style broadcasting, where delivery isn't guaranteed, it should be good enough 99% of the time. The upside of this approach is that we wouldn't have to shuffle messages through the pipeline (essentially flood-filling the tree) to be sure we found the right one. Even with perfect bookkeeping you couldn't know exactly where to send the abort, unless every proxy remembered every request it forwarded for a decently long amount of time. That sounds kludgy. So yeah, either flood-fill or broadcast. The broadcasting might also be easier once we merge the outstanding zbeacon pr.
  4. What triggers an abort? First, the job took longer than the configured time limit, in which case the server replies to the client that it timed out and tells the worker to abort. The second option is that the client makes a request but then disconnects, in which case no response is needed, but we would like to not waste time completing a job we aren't sending anywhere. You could also allow a specific request to abort another request: since the server knows who the client is, it could abort any requests that client has outstanding. There are some questions about running this behind other load balancers, and whether or not keepalive might make it impossible for the client to correctly talk to the server servicing the request.
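a sketch of the polling approach from point 2, matching the work function prototype seen in the daemon example earlier on this page; TOTAL_STEPS and do_one_step are hypothetical stand-ins for the actual long-running work:

#include <prime_server/prime_server.hpp>
#include <prime_server/zmq_helpers.hpp>

#include <cstddef>
#include <list>

using namespace prime_server;

worker_t::result_t long_work(const std::list<zmq::message_t>& job,
                             void* request_info,
                             worker_t::interrupt_function_t& interrupt) {
  worker_t::result_t result{false};
  for (size_t step = 0; step < TOTAL_STEPS; ++step) {
    interrupt();       // throws if an abort matched this job; the worker
                       // catches it outside the work function and logs it
    do_one_step(step); // keep chunks small so aborts stay responsive
  }
  // ... serialize the finished response into result.messages ...
  return result;
}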

test core api directly

different pieces of the core api could be tested directly. there are already practical tests using the supplied protocols, but a little more work would be nice, to make it easier to fix anything that crops up in the core api

header parsing to more closely follow rfc

so the vast majority of things making http requests out there seem to do so with header fields that conform to some rules: Caps-With-Dashes-For-The-Fields, then a colon, a single space and then whatever for the value

Apparently though, the field names are case insensitive and you can have any amount of white space between the colon and the value... So we should fix this... Specifically, both ab and siege fail to properly POST because their content length header doesn't follow the rule above. Yuck!

@dgearhart
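a minimal sketch of the more lenient parsing the rfc calls for (assuming one well-formed header line; this is illustrative, not the library's parser):

#include <algorithm>
#include <cctype>
#include <string>
#include <utility>

// "Content-length:   42 " -> {"content-length", "42"}
std::pair<std::string, std::string> parse_header(const std::string& line) {
  auto colon = line.find(':');
  // lowercase the field name so lookups become case insensitive
  std::string name = line.substr(0, colon);
  std::transform(name.begin(), name.end(), name.begin(),
                 [](unsigned char c) { return std::tolower(c); });
  // allow any amount of whitespace between the colon and the value
  auto begin = line.find_first_not_of(" \t", colon + 1);
  auto end = line.find_last_not_of(" \t\r");
  std::string value =
      begin == std::string::npos ? "" : line.substr(begin, end - begin + 1);
  return {name, value};
}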

Interrupt test failing on Mac OS X Sierra 10.12.6

Versions:
commit 9a8ac0e (HEAD -> master, origin/master, origin/HEAD)
Homebrew 1.7.1-242-g0d81ad0
autoconf (GNU Autoconf) 2.69
automake (GNU automake) 1.16.1
glibtool (GNU libtool) 2.4.6

libtool -V: Apple Inc. version cctools-900
GNU Make 3.81

gcc -v:
Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/usr/include/c++/4.2.1
Apple LLVM version 9.0.0 (clang-900.0.39.2)
Target: x86_64-apple-darwin16.7.0

make test -j8
CXX src/prime_serverd-prime_serverd.o
CXX src/libprime_server_la-prime_server.lo
CXX src/libprime_server_la-netstring_protocol.lo
CXX src/libprime_server_la-zmq_helpers.lo
CXX src/libprime_server_la-http_util.lo
CXX src/libprime_server_la-http_protocol.lo
CXX src/prime_httpd-prime_httpd.o
CXX src/prime_proxyd-prime_proxyd.o
CXX src/prime_workerd-prime_workerd.o
CXX src/prime_echod-prime_echod.o
CXX src/prime_filed-prime_filed.o
CXXLD libprime_server.la
CXXLD prime_serverd
CXXLD prime_workerd
CXXLD prime_filed
CXXLD prime_echod
CXXLD prime_proxyd
CXXLD prime_httpd
/Applications/Xcode.app/Contents/Developer/usr/bin/make test/zmq test/netstring test/http test/shaping test/interrupt
CXX test/zmq-zmq.o
CXX test/netstring-netstring.o
CXX test/shaping-shaping.o
CXX test/http-http.o
CXX test/interrupt-interrupt.o
CXXLD test/zmq
CXXLD test/shaping
CXXLD test/interrupt
CXXLD test/netstring
CXXLD test/http
/Applications/Xcode.app/Contents/Developer/usr/bin/make check-TESTS
./test-driver: line 107: 29761 Bus error: 10 "$@" > $log_file 2>&1
FAIL: test/interrupt
PASS: test/zmq
PASS: test/shaping
PASS: test/netstring
PASS: test/http

============================================================================
Testsuite summary for prime_server 0.6.3
============================================================================
# TOTAL: 5
# PASS:  4
# SKIP:  0
# XFAIL: 0
# FAIL:  1
# XPASS: 0
# ERROR: 0
============================================================================
See ./test-suite.log
Please report to https://github.com/kevinkreiser/prime_server/issues
============================================================================

make[2]: *** [test-suite.log] Error 1
make[1]: *** [check-TESTS] Error 2
make: *** [check-am] Error 2

Contents of test-suite.log:

==========================================
   prime_server 0.6.3: ./test-suite.log
==========================================

# TOTAL: 5
# PASS:  4
# SKIP:  0
# XFAIL: 0
# FAIL:  1
# XPASS: 0
# ERROR: 0

.. contents:: :depth: 2

FAIL: test/interrupt
====================

=== Testing interrupt ===
                      test_early [PASS]
                       test_loopFAIL test/interrupt (exit status: 138)

Tests fail on Ubuntu 18.04.2

make test is currently failing on Ubuntu 18.04.2 LTS (GNU/Linux 4.15.0-45-generic x86_64). All packages up to date as of the time of this report.

Here are the test-suite.log contents:

==========================================
   prime_server 0.6.4: ./test-suite.log
==========================================

# TOTAL: 5
# PASS:  3
# SKIP:  0
# XFAIL: 0
# FAIL:  2
# XPASS: 0
# ERROR: 0

.. contents:: :depth: 2

FAIL: test/netstring
====================

=== Testing netstring ===
           test_streaming_client [PASS]
           test_streaming_server [PASS]
                     test_entity [PASS]
           test_parallel_clientsFAIL test/netstring (exit status: 142)

FAIL: test/http
===============

=== Testing http ===
           test_streaming_client [PASS]
           test_streaming_server [PASS]
                    test_request [PASS]
            test_request_parsing [PASS]
              test_query_parsing [PASS]
                   test_response [PASS]
           test_response_parsing [PASS]
           test_chunked_encoding [PASS]
           test_parallel_clientsFAIL test/http (exit status: 142)

Unable to install on Ubuntu 18.04 via PPA

Hi Kevin! I got a problem while installing prime_server on Ubuntu 18.04

Steps to reproduce:

$ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=18.04
DISTRIB_CODENAME=bionic
DISTRIB_DESCRIPTION="Ubuntu 18.04.3 LTS"

$ sudo add-apt-repository ppa:kevinkreiser/prime-server
$ sudo apt-get update
$ sudo apt-get install libprime-server0
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 libprime-server0 : Depends: libczmq3 but it is not installable
E: Unable to correct problems, you have held broken packages.

It seems that libczmq3 has been removed from 18.04 https://packages.ubuntu.com/search?keywords=libczmq3

large messages

zmq has internal limits on the size of messages. if you hit those limits, your messages will be broken up into multiple messages. at multiple stages of the stack it is assumed that there will be a fixed number of messages. this is a problem. we need to know how many messages to expect and when the api should mash the bytes back together..

server -- this isn't all that problematic for the server. all it has to do is let the protocol stream-parse each message in succession.

proxy -- no problem at all, we forward all messages without touching them anyway

worker -- this could be tricky. say the person using the api wants to send 1 message that is larger than the batch size. it gets split into two.

if it was returning that to the client, the server would want to mash the pieces together before putting them on the socket. maybe it can achieve this by just sending all the messages it gets? although i think the docs say those are dropped?

if it was proxying it on to another worker, that worker would have to detect that it got split messages, instead of what it currently does (knowing something about which messages are which)

tests fail with libzmq 4.0.5 (libzmq3-dev)

the packaged version of libzmq fails the zmq helper tests. the reason is that the helper tests don't properly handle the sending (or not) of the disconnect and connect messages. in fact zmq has changed the behavior several times with respect to this feature. at any rate, the desired effect can be had with a slight tweak of the tests.

we'll need to not expect the connection messages as the tests currently do. shouldn't be too much work

file server example

it would be nice to show a simple implementation of a file serving application. basically a binary that you could, say, drop in a directory, or point at a directory, and it would serve up the stuff in that directory. it doesn't have to be fancy: it could read the file from disk every time, or it could cache the files in memory.. it doesn't have to do chunked encoding because we don't yet support it..
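a sketch of such a work function, reusing the prime_server types shown elsewhere on this page (no caching, no path sanitizing, reads from disk on every request):

#include <prime_server/http_protocol.hpp>
#include <prime_server/prime_server.hpp>

#include <fstream>
#include <iterator>
#include <list>
#include <string>

using namespace prime_server;

worker_t::result_t serve_file(const std::list<zmq::message_t>& job,
                              void* request_info,
                              worker_t::interrupt_function_t&) {
  auto request = http_request_t::from_string(
      static_cast<const char*>(job.front().data()), job.front().size());
  auto& info = *static_cast<http_request_info_t*>(request_info);
  worker_t::result_t result{false};
  std::ifstream file("." + request.path, std::ios::binary);
  if (file) {
    std::string body((std::istreambuf_iterator<char>(file)),
                     std::istreambuf_iterator<char>());
    http_response_t response(200, "OK", body);
    response.from_info(info);
    result.messages.emplace_back(response.to_string());
  } else {
    http_response_t response(404, "Not Found", "no such file");
    response.from_info(info);
    result.messages.emplace_back(response.to_string());
  }
  return result;
}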

allow worker to do post response cleanup

sometimes workers need to do a long cleanup after their work, and it would be nice to send on the result before the cleanup happens. let the workers optionally bind a cleanup method so they can return their result but not signal that they are idle until cleanup is done

why do we even need the cpp zmq bindings?

it's annoying to have to grab cpp bindings that aren't keeping up to date with zmq, not to mention there are multiple flavors of them. with a tiny bit of boilerplate we can completely remove the need for having them. so let's!

timeout long requests

erroneous requests may end up in the queue and just take up space indefinitely, or they could go into the worker backends and never come out (worker crash, for example). so we need a configurable timeout that will purge long-standing requests.

Http Parsing

It seems that, depending on where a key ends up in the URL, a key with no value (i.e. key=) may or may not end up in http_request_t::query.
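a sketch of query parsing that treats key= consistently, keeping the key with one empty value wherever it appears in the string; the query_t alias matches the one shown in http_protocol.hpp later on this page:

#include <cstddef>
#include <list>
#include <string>
#include <unordered_map>

using query_t = std::unordered_map<std::string, std::list<std::string>>;

query_t parse_query(const std::string& qs) {
  query_t query;
  size_t begin = 0;
  while (begin <= qs.size()) {
    auto end = qs.find('&', begin);
    if (end == std::string::npos) end = qs.size();
    auto eq = qs.find('=', begin);
    if (eq != std::string::npos && eq < end)  // "key=value" or "key="
      query[qs.substr(begin, eq - begin)].push_back(qs.substr(eq + 1, end - eq - 1));
    else if (end > begin)                     // bare "key" with no '=' at all
      query[qs.substr(begin, end - begin)];
    begin = end + 1;
  }
  return query;
}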

Use Docker-Based Travis Builds

So many advantages, but a few small catches. For one, we'd need to change the install of various packages to something like:

http://docs.travis-ci.com/user/migrating-from-legacy/?utm_source=legacy-notice&utm_medium=banner&utm_campaign=legacy-upgrade#How-do-I-install-APT-sources-and-packages%3F

We'd need to install libzmq into a non-standard directory and add that to our paths; also no problem.

The only remaining issue would be forcing the use of the compiler without using update-alternatives, or maybe you can update alternatives for just one user? That seems to be the only thing left to tackle.

Export targets for CMake

It is currently quite awkward for CMake-based projects to consume prime_server, especially as a public dependency. Right now, such projects must somehow create an imported target themselves by writing their own logic to find prime_server, and potentially must somehow export that target for their own consumers.

It would be much, MUCH easier if prime_server would instead export CMake targets.

Compilation on Amazon Linux failed due to pthread

cd /tmp
git clone --recurse-submodules https://github.com/kevinkreiser/prime_server.git
cd prime_server
./autogen.sh
./configure
make test -j$(nproc)
make install
Cloning into 'prime_server'...
Submodule 'logging' (https://gist.github.com/39f2e39273c625d96790.git) registered for path 'prime_server/logging'
Submodule 'testing' (https://gist.github.com/ada902fb51a1ad1e5a27.git) registered for path 'test/testing'
Cloning into '/tmp/prime_server/prime_server/logging'...
Cloning into '/tmp/prime_server/test/testing'...
Submodule path 'prime_server/logging': checked out 'c5b9b40d1e47cd190d78582993fd07e4478540af'
Submodule path 'test/testing': checked out '1c45352fa2c4e25a748cf1221b93a9aed4dbc34c'
libtoolize: putting auxiliary files in AC_CONFIG_AUX_DIR, `.'.
libtoolize: copying file `./ltmain.sh'
libtoolize: putting macros in AC_CONFIG_MACRO_DIR, `m4'.
libtoolize: copying file `m4/libtool.m4'
libtoolize: copying file `m4/ltoptions.m4'
libtoolize: copying file `m4/ltsugar.m4'
libtoolize: copying file `m4/ltversion.m4'
libtoolize: copying file `m4/lt~obsolete.m4'
configure.ac:8: installing './config.guess'
configure.ac:8: installing './config.sub'
configure.ac:7: installing './install-sh'
configure.ac:7: installing './missing'
Makefile.am: installing './INSTALL'
Makefile.am: installing './depcomp'
parallel-tests: installing './test-driver'
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... /bin/mkdir -p
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking whether make supports nested variables... yes
checking build system type... x86_64-unknown-linux-gnu
checking host system type... x86_64-unknown-linux-gnu
checking how to print strings... printf
checking for style of include used by make... GNU
checking for gcc... gcc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables... 
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking dependency style of gcc... gcc3
checking for a sed that does not truncate output... /bin/sed
checking for grep that handles long lines and -e... /bin/grep
checking for egrep... /bin/grep -E
checking for fgrep... /bin/grep -F
checking for ld used by gcc... /usr/bin/ld
checking if the linker (/usr/bin/ld) is GNU ld... yes
checking for BSD- or MS-compatible name lister (nm)... /usr/bin/nm -B
checking the name lister (/usr/bin/nm -B) interface... BSD nm
checking whether ln -s works... yes
checking the maximum length of command line arguments... 1572864
checking whether the shell understands some XSI constructs... yes
checking whether the shell understands "+="... yes
checking how to convert x86_64-unknown-linux-gnu file names to x86_64-unknown-linux-gnu format... func_convert_file_noop
checking how to convert x86_64-unknown-linux-gnu file names to toolchain format... func_convert_file_noop
checking for /usr/bin/ld option to reload object files... -r
checking for objdump... objdump
checking how to recognize dependent libraries... pass_all
checking for dlltool... no
checking how to associate runtime and link libraries... printf %s\n
checking for ar... ar
checking for archiver @FILE support... @
checking for strip... strip
checking for ranlib... ranlib
checking command to parse /usr/bin/nm -B output from gcc object... ok
checking for sysroot... no
checking for mt... no
checking if : is a manifest tool... no
checking how to run the C preprocessor... gcc -E
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking for dlfcn.h... yes
checking for objdir... .libs
checking if gcc supports -fno-rtti -fno-exceptions... no
checking for gcc option to produce PIC... -fPIC -DPIC
checking if gcc PIC flag -fPIC -DPIC works... yes
checking if gcc static flag -static works... no
checking if gcc supports -c -o file.o... yes
checking if gcc supports -c -o file.o... (cached) yes
checking whether the gcc linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes
checking whether -lc should be explicitly linked in... no
checking dynamic linker characteristics... GNU/Linux ld.so
checking how to hardcode library paths into programs... immediate
checking whether stripping libraries is possible... yes
checking if libtool supports shared libraries... yes
checking whether to build shared libraries... yes
checking whether to build static libraries... yes
checking whether make supports nested variables... (cached) yes
checking for g++... g++
checking whether we are using the GNU C++ compiler... yes
checking whether g++ accepts -g... yes
checking dependency style of g++... gcc3
checking how to run the C++ preprocessor... g++ -E
checking for ld used by g++... /usr/bin/ld -m elf_x86_64
checking if the linker (/usr/bin/ld -m elf_x86_64) is GNU ld... yes
checking whether the g++ linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes
checking for g++ option to produce PIC... -fPIC -DPIC
checking if g++ PIC flag -fPIC -DPIC works... yes
checking if g++ static flag -static works... no
checking if g++ supports -c -o file.o... yes
checking if g++ supports -c -o file.o... (cached) yes
checking whether the g++ linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes
checking dynamic linker characteristics... (cached) GNU/Linux ld.so
checking how to hardcode library paths into programs... immediate
checking whether make sets $(MAKE)... (cached) yes
checking for ANSI C header files... (cached) yes
checking whether g++ supports C++11 features by default... yes
checking for pkg-config... /usr/bin/pkg-config
checking pkg-config is at least version 0.9.0... yes
checking for DEPS... yes
checking whether gcc is Clang... no
checking whether pthreads work with -pthread... yes
checking for joinable pthread attribute... PTHREAD_CREATE_JOINABLE
checking whether more special flags are required for pthreads... no
checking for PTHREAD_PRIO_INHERIT... yes
checking that generated files are newer than configure... done
configure: creating ./config.status
config.status: creating Makefile
config.status: creating libprime_server.pc
config.status: creating prime_server/config.h
config.status: executing depfiles commands
config.status: executing libtool commands
  CXX      src/libprime_server_la-prime_server.lo
  CXX      src/libprime_server_la-zmq_helpers.lo
  CXX      src/libprime_server_la-http_util.lo
  CXX      src/libprime_server_la-netstring_protocol.lo
  CXX      src/libprime_server_la-http_protocol.lo
  CXX      src/prime_serverd-prime_serverd.o
  CXX      src/prime_httpd-prime_httpd.o
  CXX      src/prime_proxyd-prime_proxyd.o
  CXX      src/prime_workerd-prime_workerd.o
  CXX      src/prime_echod-prime_echod.o
  CXX      src/prime_filed-prime_filed.o
  CXXLD    libprime_server.la
  CXXLD    prime_serverd
  CXXLD    prime_httpd
  CXXLD    prime_proxyd
  CXXLD    prime_workerd
  CXXLD    prime_echod
  CXXLD    prime_filed
/usr/bin/ld: src/prime_echod-prime_echod.o: undefined reference to symbol 'pthread_create@@GLIBC_2.2.5'
/lib64/libpthread.so.0: error adding symbols: DSO missing from command line
collect2: error: ld returned 1 exit status
make: *** [prime_echod] Error 1
make: *** Waiting for unfinished jobs....
/usr/bin/ld: src/prime_serverd-prime_serverd.o: undefined reference to symbol 'pthread_create@@GLIBC_2.2.5'
/lib64/libpthread.so.0: error adding symbols: DSO missing from command line
collect2: error: ld returned 1 exit status
make: *** [prime_serverd] Error 1
/usr/bin/ld: src/prime_filed-prime_filed.o: undefined reference to symbol 'pthread_create@@GLIBC_2.2.5'
/lib64/libpthread.so.0: error adding symbols: DSO missing from command line
collect2: error: ld returned 1 exit status
make: *** [prime_filed] Error 1
  CXXLD    prime_serverd
/usr/bin/ld: src/prime_serverd-prime_serverd.o: undefined reference to symbol 'pthread_create@@GLIBC_2.2.5'
/lib64/libpthread.so.0: error adding symbols: DSO missing from command line
collect2: error: ld returned 1 exit status
make: *** [prime_serverd] Error 1

On the other hand, compilation with cmake works.

Test failure on Ubuntu 18.04 running on WSL

An interest in routing has brought me back to linux after a very long break, so I'm somewhat blindly following this guide to get Valhalla running:
https://gis-ops.com/valhalla-how-to-build-and-install-on-ubuntu-18-04/

When I make test by running
make -j$(nproc) -k test

I get the following two failures.

Any suggestions would be appreciated.

==========================================
   prime_server 0.6.4: ./test-suite.log
==========================================

# TOTAL: 5
# PASS:  3
# SKIP:  0
# XFAIL: 0
# FAIL:  2
# XPASS: 0
# ERROR: 0

.. contents:: :depth: 2

FAIL: test/netstring
====================

=== Testing netstring ===
           test_streaming_client [PASS]
           test_streaming_server [PASS]
                     test_entity [PASS]
           test_parallel_clients [PASS]
                  test_malformed [PASS]
                  test_too_large [PASS]
              test_large_requestFAIL test/netstring (exit status: 142)

FAIL: test/http
===============

=== Testing http ===
           test_streaming_client [PASS]
           test_streaming_server [PASS]
                    test_request [PASS]
            test_request_parsing [PASS]
              test_query_parsing [PASS]
                   test_response [PASS]
           test_response_parsing [PASS]
           test_chunked_encoding [PASS]
           test_parallel_clients [PASS]
                  test_malformed [PASS]
                  test_too_large [PASS]
              test_large_requestFAIL test/http (exit status: 142)

add headers to the responses

currently there is no way to specify headers on a given response. this will be needed for gzip encoded responses for example

prime_server license clarification

I'm trying to package Valhalla for Fedora and I've started by packaging prime_server, as Valhalla depends on it. While I've managed to build the package fine in COPR (a PPA-like service for Fedora), I have not been able to find an authoritative license for the project:

  • the LICENSE.md file in the root of the project does not point to a license
  • the Ubuntu PPA has a License: BSD-2-clause specified in its metadata

So I would like to ask: what is the intended license for prime_server? Ideally it would be good if the license could be specified via a valid LICENSE file.

Otherwise without a clear license I would most likely not be able to get the package through package review for the main Fedora repositories.

Thanks in advance for any information! :)

add proper http request parsing

right now there is a streaming state type thing that doesn't really care about most of the request; it only really cares about what marks the end of one request and the beginning of another, so it can forward it into the system. to make this system useful, and to avoid making users write their own, we need to provide something that the first layer of workers can use to decode the request string into a proper http request object. start out with supporting get and post i guess
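a sketch of the first bit of that decoding: pull the method, path and version out of the request line, rejecting anything beyond the get/post suggested above (names here are illustrative, not the library's):

#include <sstream>
#include <stdexcept>
#include <string>

struct request_line_t { std::string method, path, version; };

request_line_t parse_request_line(const std::string& line) {
  std::istringstream stream(line);
  request_line_t r;
  if (!(stream >> r.method >> r.path >> r.version))
    throw std::runtime_error("malformed request line");
  if (r.method != "GET" && r.method != "POST")
    throw std::runtime_error("unsupported method");
  return r;
}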

darwin builds break on interrupt test with -O2

for some reason we need to set osx builds to -O0, otherwise we get a bus error: 10. we need to get on an osx machine, try this out with debug symbols, and have gdb spit out info on where it's happening. because everything is wrapped in libtool, and because libtool on osx isn't the same thing... we need to disable shared libs so we can just use gdb directly.

Initialization order fiasco in some data

Running the valhalla node bindings with address sanitizer enabled and linked with prime_server crashes due to incorrect initialization order of static structures:

Thread 1 "node" received signal SIGSEGV, Segmentation fault.
0x00007fffe6cdde95 in std::_Hashtable<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, prime_server::method_t>, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, prime_server::method_t> >, std::__detail::_Select1st, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::__detail::_Mod_range_hashing, std::__detail::_Default_ranged_hash, std::__detail::_Prime_rehash_policy, std::__detail::_Hashtable_traits<true, false, true> >::_Hashtable<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, prime_server::method_t> const*> (this=0x7fffe4c2fb60 <prime_server::STRING_TO_METHOD>, __f=0x7fffffffa6f0, __l=0x7fffffffa830, __bucket_hint=0, __h1=..., __h2=..., __h=..., __eq=..., __exk=..., __a=...) at /usr/include/c++/7/bits/hashtable.h:955
955	      _Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
(gdb) bt
#0  0x00007fffe6cdde95 in std::_Hashtable<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, prime_server::method_t>, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, prime_server::method_t> >, std::__detail::_Select1st, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::__detail::_Mod_range_hashing, std::__detail::_Default_ranged_hash, std::__detail::_Prime_rehash_policy, std::__detail::_Hashtable_traits<true, false, true> >::_Hashtable<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, prime_server::method_t> const*> (this=0x7fffe4c2fb60 <prime_server::STRING_TO_METHOD>, __f=0x7fffffffa6f0, __l=0x7fffffffa830, __bucket_hint=0, __h1=..., __h2=..., __h=..., __eq=..., __exk=..., __a=...) at /usr/include/c++/7/bits/hashtable.h:955
#1  0x00007fffe4a0460f in std::_Hashtable<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, prime_server::method_t>, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, prime_server::method_t> >, std::__detail::_Select1st, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::__detail::_Mod_range_hashing, std::__detail::_Default_ranged_hash, std::__detail::_Prime_rehash_policy, std::__detail::_Hashtable_traits<true, false, true> >::_Hashtable (__a=..., __eql=..., __hf=..., __n=0, __l=..., this=0x7fffe4c2fb60 <prime_server::STRING_TO_METHOD>) at /usr/include/c++/7/bits/hashtable.h:453
#2  std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, prime_server::method_t, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, prime_server::method_t> > >::unordered_map (__a=..., __eql=..., __hf=..., __n=0, __l=..., this=0x7fffe4c2fb60 <prime_server::STRING_TO_METHOD>) at /usr/include/c++/7/bits/unordered_map.h:230
#3  __static_initialization_and_destruction_0 (__priority=65535, __initialize_p=1) at ./prime_server/http_protocol.hpp:57
#4  0x00007ffff7de5733 in call_init (env=0x7fffffffdef0, argv=0x7fffffffded8, argc=2, l=<optimized out>) at dl-init.c:72
#5  _dl_init (main_map=main_map@entry=0x226e090, argc=2, argv=0x7fffffffded8, env=0x7fffffffdef0) at dl-init.c:119
#6  0x00007ffff7dea1ff in dl_open_worker (a=a@entry=0x7fffffffacb0) at dl-open.c:522
#7  0x00007ffff6bdc2df in __GI__dl_catch_exception (exception=0x7fffffffac90, operate=0x7ffff7de9dc0 <dl_open_worker>, args=0x7fffffffacb0) at dl-error-skeleton.c:196
#8  0x00007ffff7de97ca in _dl_open (file=0x7fffffffb398 "/home/miha/valhalla/lib/binding/node_valhalla.node", mode=-2147483647, caller_dlopen=0x1440a27 <uv_dlopen+39>, nsid=<optimized out>, argc=2, argv=<optimized out>, env=0x7fffffffdef0) at dl-open.c:605
#9  0x00007ffff7bd1f96 in dlopen_doit (a=a@entry=0x7fffffffaee0) at dlopen.c:66
#10 0x00007ffff6bdc2df in __GI__dl_catch_exception (exception=exception@entry=0x7fffffffae80, operate=0x7ffff7bd1f40 <dlopen_doit>, args=0x7fffffffaee0) at dl-error-skeleton.c:196
#11 0x00007ffff6bdc36f in __GI__dl_catch_error (objname=0x22658d0, errstring=0x22658d8, mallocedp=0x22658c8, operate=<optimized out>, args=<optimized out>) at dl-error-skeleton.c:215
#12 0x00007ffff7bd2735 in _dlerror_run (operate=operate@entry=0x7ffff7bd1f40 <dlopen_doit>, args=args@entry=0x7fffffffaee0) at dlerror.c:162
#13 0x00007ffff7bd2051 in __dlopen (file=file@entry=0x7fffffffb398 "/home/miha/valhalla/lib/binding/node_valhalla.node", mode=mode@entry=1) at dlopen.c:87
#14 0x0000000001440a27 in uv_dlopen (filename=0x7fffffffb398 "/home/miha/valhalla/lib/binding/node_valhalla.node", lib=0x7fffffffaf50) at ../deps/uv/src/unix/dl.c:36
#15 0x000000000121f245 in node::DLOpen(v8::FunctionCallbackInfo<v8::Value> const&) ()
#16 0x0000000000b1a673 in v8::internal::FunctionCallbackArguments::Call(void (*)(v8::FunctionCallbackInfo<v8::Value> const&)) ()
#17 0x0000000000b9043c in v8::internal::MaybeHandle<v8::internal::Object> v8::internal::(anonymous namespace)::HandleApiCallHelper<false>(v8::internal::Isolate*, v8::internal::Handle<v8::internal::HeapObject>, v8::internal::Handle<v8::internal::HeapObject>, v8::internal::Handle<v8::internal::FunctionTemplateInfo>, v8::internal::Handle<v8::internal::Object>, v8::internal::BuiltinArguments) ()
#18 0x0000000000b9108f in v8::internal::Builtin_HandleApiCall(int, v8::internal::Object**, v8::internal::Isolate*) ()

and

#0  0x00007fffe6cf96a9 in __gnu_cxx::__to_xstring<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, char> (__convf=0x83bbd0 <vsnprintf@plt>, __n=32, __fmt=0x7fffe4a6ca44 "%lu") at /usr/include/c++/7/ext/string_conversions.h:99
#1  0x00007fffe4a5d582 in std:: (__val=<optimized out>) at /usr/include/c++/7/bits/basic_string.h:6421
#2  prime_server::netstring_entity_t::to_string (body="BAD_REQUEST: Non-numeric length") at src/netstring_protocol.cpp:36
#3  0x00007fffe4a5d789 in prime_server::netstring_entity_t::request_exception_t::request_exception_t (this=<optimized out>, response=...) at src/netstring_protocol.cpp:135
#4  0x00007fffe4a4aa59 in __static_initialization_and_destruction_0 (__priority=65535, __initialize_p=1) at src/netstring_protocol.cpp:12
#5  0x00007ffff7de5733 in call_init (env=0x7fffffffdef0, argv=0x7fffffffded8, argc=2, l=<optimized out>) at dl-init.c:72
#6  _dl_init (main_map=main_map@entry=0x226e090, argc=2, argv=0x7fffffffded8, env=0x7fffffffdef0) at dl-init.c:119
#7  0x00007ffff7dea1ff in dl_open_worker (a=a@entry=0x7fffffffacb0) at dl-open.c:522
#8  0x00007ffff6bdc2df in __GI__dl_catch_exception (exception=0x7fffffffac90, operate=0x7ffff7de9dc0 <dl_open_worker>, args=0x7fffffffacb0) at dl-error-skeleton.c:196
#9  0x00007ffff7de97ca in _dl_open (file=0x7fffffffb398 "/home/miha/valhalla/lib/binding/node_valhalla.node", mode=-2147483647, caller_dlopen=0x1440a27 <uv_dlopen+39>, nsid=<optimized out>, argc=2, argv=<optimized out>, env=0x7fffffffdef0) at dl-open.c:605
#10 0x00007ffff7bd1f96 in dlopen_doit (a=a@entry=0x7fffffffaee0) at dlopen.c:66
#11 0x00007ffff6bdc2df in __GI__dl_catch_exception (exception=exception@entry=0x7fffffffae80, operate=0x7ffff7bd1f40 <dlopen_doit>, args=0x7fffffffaee0) at dl-error-skeleton.c:196
#12 0x00007ffff6bdc36f in __GI__dl_catch_error (objname=0x226fd90, errstring=0x226fd98, mallocedp=0x226fd88, operate=<optimized out>, args=<optimized out>) at dl-error-skeleton.c:215
#13 0x00007ffff7bd2735 in _dlerror_run (operate=operate@entry=0x7ffff7bd1f40 <dlopen_doit>, args=args@entry=0x7fffffffaee0) at dlerror.c:162
#14 0x00007ffff7bd2051 in __dlopen (file=file@entry=0x7fffffffb398 "/home/miha/valhalla/lib/binding/node_valhalla.node", mode=mode@entry=1) at dlopen.c:87
#15 0x0000000001440a27 in uv_dlopen (filename=0x7fffffffb398 "/home/miha/valhalla/lib/binding/node_valhalla.node", lib=0x7fffffffaf50) at ../deps/uv/src/unix/dl.c:36
#16 0x000000000121f245 in node::DLOpen(v8::FunctionCallbackInfo<v8::Value> const&) ()
#17 0x0000000000b1a673 in v8::internal::FunctionCallbackArguments::Call(void (*)(v8::FunctionCallbackInfo<v8::Value> const&)) ()
#18 0x0000000000b9043c in v8::internal::MaybeHandle<v8::internal::Object> v8::internal::(anonymous namespace)::HandleApiCallHelper<false>(v8::internal::Isolate*, v8::internal::Handle<v8::internal::HeapObject>, v8::internal::Handle<v8::internal::HeapObject>, v8::internal::Handle<v8::internal::FunctionTemplateInfo>, v8::internal::Handle<v8::internal::Object>, v8::internal::BuiltinArguments) ()
#19 0x0000000000b9108f in v8::internal::Builtin_HandleApiCall(int, v8::internal::Object**, v8::internal::Isolate*) ()

The first crash is fixed by this patch:

fix-init-order master 952adbe4201f9b166f812d8c2d1a13ff3f4137b7
Author:     Michael Krasnyk <[email protected]>
AuthorDate: Wed Jun 13 01:11:13 2018 +0200
Commit:     Michael Krasnyk <[email protected]>
CommitDate: Wed Jun 13 01:11:13 2018 +0200

Parent:     9324d4f cat the right things on failure, include the right headers for installation
Merged:     fix-init-order master
Containing: fix-init-order master
Follows:    0.6.3 (44)

Fix initialization order fiasco of unordered maps

2 files changed, 31 insertions(+), 20 deletions(-)
prime_server/http_protocol.hpp | 11 -----------
src/http_protocol.cpp          | 40 +++++++++++++++++++++++++++++++---------

modified   prime_server/http_protocol.hpp
@@ -51,17 +51,6 @@ namespace prime_server {
   using headers_t = std::unordered_map<std::string, std::string, caseless_predicates_t, caseless_predicates_t>;
   using query_t = std::unordered_map<std::string, std::list<std::string> >;
   enum method_t { OPTIONS, GET, HEAD, POST, PUT, DELETE, TRACE, CONNECT };
-  const std::unordered_map<std::string, method_t> STRING_TO_METHOD {
-    {"OPTIONS", method_t::OPTIONS}, {"GET", method_t::GET}, {"HEAD", method_t::HEAD}, {"POST", method_t::POST},
-    {"PUT", method_t::PUT}, {"DELETE", method_t::DELETE}, {"TRACE", method_t::TRACE}, {"CONNECT", method_t::CONNECT}
-  };
-  const std::unordered_map<method_t, std::string, std::hash<int> > METHOD_TO_STRING{
-    {method_t::OPTIONS, "OPTIONS"}, {method_t::GET, "GET"}, {method_t::HEAD, "HEAD"}, {method_t::POST, "POST"},
-    {method_t::PUT, "PUT"}, {method_t::DELETE, "DELETE"}, {method_t::TRACE, "TRACE"}, {method_t::CONNECT, "CONNECT"}
-  };
-  const std::unordered_map<std::string, bool> SUPPORTED_VERSIONS {
-    {"HTTP/1.0", true}, {"HTTP/1.1", true}
-  };
 
   struct http_entity_t {
     std::string version;
modified   src/http_protocol.cpp
@@ -39,6 +39,28 @@ namespace {
   const http_request_t::request_exception_t RESPONSE_504(http_response_t(504, "Gateway Time-out", "The server didn't respond in time", {CORS}));
   const http_request_t::request_exception_t RESPONSE_505(http_response_t(505, "HTTP Version Not Supported", "The HTTP request version is not supported", {CORS}));
 
+
+  const std::unordered_map<std::string, method_t>& string_to_method() {
+    static const std::unordered_map<std::string, method_t> data {
+       {"OPTIONS", method_t::OPTIONS}, {"GET", method_t::GET}, {"HEAD", method_t::HEAD}, {"POST", method_t::POST},
+       {"PUT", method_t::PUT}, {"DELETE", method_t::DELETE}, {"TRACE", method_t::TRACE}, {"CONNECT", method_t::CONNECT}
+    };
+    return data;
+  }
+  const std::unordered_map<method_t, std::string, std::hash<int> >& method_to_string() {
+    static const std::unordered_map<method_t, std::string, std::hash<int> > data {
+      {method_t::OPTIONS, "OPTIONS"}, {method_t::GET, "GET"}, {method_t::HEAD, "HEAD"}, {method_t::POST, "POST"},
+      {method_t::PUT, "PUT"}, {method_t::DELETE, "DELETE"}, {method_t::TRACE, "TRACE"}, {method_t::CONNECT, "CONNECT"}
+    };
+    return data;
+  }
+  const std::unordered_map<std::string, bool>& supported_versions() {
+    static const std::unordered_map<std::string, bool> data {
+      {"HTTP/1.0", true}, {"HTTP/1.1", true}
+    };
+    return data;
+  }
+
   template <class T>
   size_t name_max(const std::unordered_map<std::string, T>& methods) {
     size_t i = 0;
@@ -46,8 +68,8 @@ namespace {
       i = std::max(i, kv.first.size());
     return i;
   };
-  const size_t METHOD_MAX_SIZE = name_max(prime_server::STRING_TO_METHOD) + 1;
-  const size_t VERSION_MAX_SIZE = name_max(prime_server::SUPPORTED_VERSIONS) + 2;
+  const size_t METHOD_MAX_SIZE = name_max(string_to_method()) + 1;
+  const size_t VERSION_MAX_SIZE = name_max(supported_versions()) + 2;
 }
 
 namespace prime_server {
@@ -258,8 +280,8 @@ namespace prime_server {
   std::string http_request_t::to_string(const method_t& method, const std::string& path, const std::string& body, const query_t& query,
                                const headers_t& headers, const std::string& version) {
     //get the method on there
-    auto itr = METHOD_TO_STRING.find(method);
-    if(itr == METHOD_TO_STRING.end())
+    auto itr = method_to_string().find(method);
+    if(itr == method_to_string().end())
       throw std::runtime_error("Unsupported http request method");
     std::string request;
     request.reserve(16 + path.size() + headers.size() * 32 + body.size());
@@ -399,8 +421,8 @@ namespace prime_server {
         case CODE:
           throw RESPONSE_500;
         case METHOD: {
-          auto itr = STRING_TO_METHOD.find(partial_buffer);
-          if(itr == STRING_TO_METHOD.end())
+          auto itr = string_to_method().find(partial_buffer);
+          if(itr == string_to_method().end())
             throw RESPONSE_501;
           log_line = partial_buffer + delimiter;
           method = itr->second;
@@ -415,8 +437,8 @@ namespace prime_server {
           break;
         }
         case VERSION: {
-          auto itr = SUPPORTED_VERSIONS.find(partial_buffer);
-          if(itr == SUPPORTED_VERSIONS.end())
+          auto itr = supported_versions().find(partial_buffer);
+          if(itr == supported_versions().end())
             throw RESPONSE_505;
           log_line += partial_buffer;
           version.swap(partial_buffer);
@@ -599,7 +621,7 @@ namespace prime_server {
           break;
         }
         case VERSION: {
-          if(SUPPORTED_VERSIONS.find(partial_buffer) == SUPPORTED_VERSIONS.end())
+          if(supported_versions().find(partial_buffer) == supported_versions().end())
             throw std::runtime_error("Unknown http version");
           //log_line += partial_buffer;
           version.swap(partial_buffer);

The second one is tricky, because the static exception objects can be constructed before vsnprintf's internal data is initialized.
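
One possible workaround, sketched under the assumption that the exception constants (e.g. the BAD_REQUEST object constructed at netstring_protocol.cpp:12 in the trace) can be hidden behind accessors in the same way; request_exception_t below is a simplified stand-in, not the library's actual type:

#include <stdexcept>
#include <string>

// simplified stand-in for netstring_entity_t::request_exception_t; the real
// constructor ends up calling std::to_string (frame #2 of the trace above)
struct request_exception_t : std::runtime_error {
  explicit request_exception_t(const std::string& body)
      : std::runtime_error(std::to_string(body.size()) + ':' + body + ',') {}
};

// construct on first use: the exception is built on the first throw, long
// after dynamic loading has finished, so std::to_string (and through it
// vsnprintf) is never called during _dl_init
const request_exception_t& bad_request() {
  static const request_exception_t e("BAD_REQUEST: Non-numeric length");
  return e;
}

// call sites would change from `throw BAD_REQUEST;` to `throw bad_request();`

Since C++11 the initialization of a function-local static is also thread-safe, so the accessors in the patch and in this sketch need no extra locking.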
