jinganix / enif_protobuf

A Google Protobuf implementation with enif (Erlang NIF)

enif_protobuf's Issues

enum default value is being set to undefined in proto3

  • In proto3, the default value of an enum field must be the first value in the list of possible values.
  • gpb actually does this correctly, but the NIF code sets the value to undefined.

Any suggestions on how to fix it?
I found the function to modify in ep_decoder.c, but could not figure out the exact changes to make.
Any pointers would be greatly appreciated.
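For context, proto3 requires the first declared enum value to have number 0, and that value is the field's default. A minimal sketch of how a decoder could resolve the default instead of returning undefined; the `ep_enum_value_t` struct and function name are illustrative, not enif_protobuf's actual types:

```c
#include <stddef.h>

/* Hypothetical enum descriptor: pairs of (atom name, wire number).
 * In proto3 the first declared value must have number 0. */
typedef struct {
    const char *name;
    int         number;
} ep_enum_value_t;

/* Return the default enum value: the entry with number 0, which proto3
 * guarantees is the first one. Fall back to the first entry for tables
 * generated from proto2 files, where 0 is not required. */
static const char *
enum_default_name(const ep_enum_value_t *values, size_t count)
{
    size_t i;
    if (count == 0) return NULL;
    for (i = 0; i < count; i++) {
        if (values[i].number == 0) return values[i].name;
    }
    return values[0].name;
}
```

In the real fix, the resolved name would be turned into an atom term for the decoded record instead of the undefined atom.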

load_cache crash if there are other fields after oneof

Hi,

it seems that load_cache crashes if there are other fields after the oneof one.
Below is an example proto that fails; if you comment out proxy_session_id it works.

syntax = "proto2";
package one.clproto;


message ServerMessage {
    oneof message_body { 
        int32 file_children = 3;
        bytes xattr = 4;
    }

    optional bytes proxy_session_id                        = 21;
}

{error, tid_not_found} when encoding/decoding

Hi,

sometimes I get tid_not_found during either encoding or decoding. After ruling out the possibility that the messages were invalid, I took a look at the enif_protobuf code. If I understand it correctly, some space is allocated in the state (exactly erlang:system_info(logical_processors) slots) so that each thread can keep its data there. When a new thread arrives it simply takes one of the empty slots. If no slot is available, {error, tid_not_found} is returned.
Is it possible that there can be more threads than logical_processors, or that threads die and are respawned with different tids?
Either way, would it be possible to clear state->tdata so it can be populated again instead of returning an error, or to add a function to do so (e.g. purge_tdata) callable from Erlang?
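The slot scheme described above, together with the proposed purge, can be sketched as follows. All names (`tdata_t`, `tdata_find_or_claim`, `tdata_purge`) and the fixed slot count are illustrative assumptions, not enif_protobuf's actual API:

```c
#include <stddef.h>

#define NSLOTS 4  /* stands in for erlang:system_info(logical_processors) */

typedef struct {
    unsigned long tid;   /* owning thread id */
    int           used;  /* 0 means the slot is free */
} tdata_t;

/* Find the calling thread's slot, claiming a free one if needed.
 * Returns NULL when every slot is held by another tid, which is
 * where {error, tid_not_found} would come from. */
static tdata_t *
tdata_find_or_claim(tdata_t *slots, unsigned long tid)
{
    size_t i;
    for (i = 0; i < NSLOTS; i++)
        if (slots[i].used && slots[i].tid == tid) return &slots[i];
    for (i = 0; i < NSLOTS; i++)
        if (!slots[i].used) {
            slots[i].used = 1;
            slots[i].tid  = tid;
            return &slots[i];
        }
    return NULL;
}

/* The purge_tdata idea: clear all slots so entries left behind by dead
 * threads can be reclaimed instead of exhausting the table forever. */
static void
tdata_purge(tdata_t *slots)
{
    size_t i;
    for (i = 0; i < NSLOTS; i++) slots[i].used = 0;
}
```

A real purge would need locking, since live scheduler threads may still hold pointers into the table while another thread clears it.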

Combining two protobuf 3 defs: only one {proto3_msgs, [...]} takes effect; the other is still packed like protobuf 2, with default values present in the binary

%% message with gpb 4.18.0
%% -- test_proto3_default.proto
%% protobuf v3
%% syntax = "proto3";
%% message default_uint32_msg {
%% uint32 count_num = 1;
%% }
%%
%% -- test_proto3_default02.proto
%% syntax = "proto3";
%% message default_string_mix_msg {
%% uint32 count_num = 1;
%% string rname = 2;
%% uint32 py_id = 3;
%% uint32 py_num = 4;
%% }
-module(test_protobuf_encode).
-export([
    combine_two_defs/0
]).

-include_lib("gpb.hrl").

-record(default_uint32_msg, {
    count_num = 0 :: non_neg_integer() | undefined % = 1, optional, 32 bits
}).

-record(default_string_mix_msg, {
    count_num = 0 :: non_neg_integer() | undefined,     % = 1, optional, 32 bits
    rname = [] :: unicode:chardata() | undefined,       % = 2, optional
    py_id = 0 :: non_neg_integer() | undefined,         % = 3, optional, 32 bits
    status_arr = [] :: [non_neg_integer()] | undefined, % = 4, repeated, 32 bits
    py_num = 0 :: non_neg_integer() | undefined,        % = 5, optional, 32 bits
    gname = [] :: unicode:chardata() | undefined,       % = 6, optional
    gnum = 0 :: non_neg_integer() | undefined           % = 7, optional, 32 bits
}).

get_proto_defs() ->
    [%{proto_defs_version, 1},
     %{file, {"test_proto3_default", "test_proto3_default.proto"}},
     %{{msg_containment, "test_proto3_default"}, [default_uint32_msg]},
     %{{enum_containment, "test_proto3_default"}, []},
     {syntax, "proto3"},
     {{msg, default_uint32_msg},
      [#field{name = count_num, fnum = 1, rnum = 2, type = uint32, occurrence = optional, opts = []}]},
     {proto3_msgs, [default_uint32_msg]}].

get_proto_defs_02() ->
    [%{proto_defs_version, 1},
     %{file, {"test_proto3_default02", "test_proto3_default02.proto"}},
     %{{msg_containment, "test_proto3_default02"}, [default_string_mix_msg]},
     %{{enum_containment, "test_proto3_default02"}, []},
     {syntax, "proto3"},
     {{msg, default_string_mix_msg},
      [#field{name = count_num, fnum = 1, rnum = 2, type = uint32, occurrence = optional, opts = []},
       #field{name = rname, fnum = 2, rnum = 3, type = string, occurrence = optional, opts = []},
       #field{name = py_id, fnum = 3, rnum = 4, type = uint32, occurrence = optional, opts = []},
       #field{name = status_arr, fnum = 4, rnum = 5, type = uint32, occurrence = repeated, opts = [packed]},
       #field{name = py_num, fnum = 5, rnum = 6, type = uint32, occurrence = optional, opts = []},
       #field{name = gname, fnum = 11, rnum = 7, type = string, occurrence = optional, opts = []},
       #field{name = gnum, fnum = 12, rnum = 8, type = uint32, occurrence = optional, opts = []}]},
     {proto3_msgs, [default_string_mix_msg]}].

%% @doc
combine_two_defs() ->
    io:format("diff combine_two_defs...~n", []),
    enif_protobuf:load_cache(get_proto_defs_02() ++ get_proto_defs()),
    %% proto3 default value not serialized: ok
    Defaultuint32 = #default_uint32_msg{},
    EnifPacked = enif_protobuf:encode(Defaultuint32),
    GpbPacked = test_proto3_default:encode_msg(Defaultuint32),
    io:format("default value not packed ...~n", []),
    io:format("enif_protobuf:encode/1 packed, byte_size:~w, binary:~w ~n", [erlang:byte_size(EnifPacked), EnifPacked]),
    io:format("test_proto3_default:encode_msg/1 packed, byte_size:~w, binary:~w ~n", [erlang:byte_size(GpbPacked), GpbPacked]),
    %% proto3 default value was present in the binary
    DataRd = #default_string_mix_msg{count_num = 99, rname = <<"test">>, py_id = 0, status_arr = [], gname = <<>>},
    EnifPacked02 = enif_protobuf:encode(DataRd),
    GpbPacked02 = test_proto3_default02:encode_msg(DataRd),
    io:format("default value packed ...~n", []),
    io:format("enif_protobuf:encode/1 packed, byte_size:~w, binary:~w ~n", [erlang:byte_size(EnifPacked02), EnifPacked02]),
    io:format("test_proto3_default02:encode_msg/1 packed, byte_size:~w, binary:~w ~n", [erlang:byte_size(GpbPacked02), GpbPacked02]),
    ok.

The code above reproduces the issue.

proto3: field specific custom options

Hi, I am using gpb to create the message definitions and then loading them with enif_protobuf for the codec.
I need to load a field-specific option and then update the encoder and decoder for that field. Specifically, I want to achieve the following:

In the definition below I have added a custom option to the integer field indicating binary = true. This means that the integer value, when decoded, should be returned as a binary value, and during encoding the given binary should be converted to an integer and then encoded. For the second integer field, without the custom option, the usual logic can run.

syntax = "proto3";
message One {
	int32 id = 1 [binary = true];
        int32 j = 2;
}

How do I achieve this?
I am able to get this from the message definitions in gpb, so I need to be able to store it in the cache and then update the encoder and decoder for it.
Any pointers would be greatly appreciated.

I am trying to add it as an option on the field, similar to how 'packed' is used. Would that be okay?
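Carrying the custom option the same way 'packed' is carried could look like the sketch below. `EP_OPT_BINARY`, the struct, and the helper are hypothetical names for this proposal, not existing enif_protobuf definitions:

```c
#include <stdint.h>

/* Illustrative field-option bitmask, mirroring how a 'packed' flag
 * travels with a field descriptor. EP_OPT_BINARY is the hypothetical
 * new flag for [binary = true] on an integer field. */
enum {
    EP_OPT_PACKED = 1 << 0,
    EP_OPT_BINARY = 1 << 1
};

typedef struct {
    int      fnum;   /* field number */
    uint32_t opts;   /* option flags loaded into the cache */
} ep_field_opt_t;

/* During decode, a field carrying the flag would be returned as a
 * binary term; during encode, the binary would be converted back to
 * an integer before writing the varint. */
static int
field_wants_binary(const ep_field_opt_t *f)
{
    return (f->opts & EP_OPT_BINARY) != 0;
}
```

The flag would be set while load_cache parses the field's opts list, so both codec paths can branch on it without re-inspecting the gpb definitions.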

building for macos m1

Describe: the current version doesn't build for the arm architecture on macOS. May I submit a PR to solve the problem?

load_cache/1 segfaults on non-trivial maps

Minimal failing proto:

syntax = "proto3";

message a_message {
    map<string, non_trivial_item> non_trivial_map = 1;
}

message non_trivial_item {
    int64 item = 1;
}

Running enif_protobuf compiled with ASan (see bottom for the Makefile modifications):

enif_protobuf (master *=) $ protoc-erl fail.proto
enif_protobuf (master *=) $ erlc -I/home/jay/repos/gpb/include fail.erl
~/repos/enif_protobuf (master *=) $ ASAN_OPTIONS=detect_leaks=0 LD_PRELOAD=$(gcc -print-file-name=libasan.so) rebar3 shell
===> Verifying dependencies...
make: Entering directory 'enif_protobuf/c_src'
make: 'enif_protobuf/c_src/../priv/enif_protobuf.so' is up to date.
make: Leaving directory 'enif_protobuf/c_src'
===> Analyzing applications...
===> Compiling enif_protobuf
Erlang/OTP 23 [erts-11.2.2.7] [source] [64-bit] [smp:4:4] [ds:4:4:10] [async-threads:1] [hipe]

Eshell V11.2.2.7  (abort with ^G)
1> fail:get_msg_defs().
[{{msg,a_message},
  [{field,non_trivial_map,1,2,
          {map,string,{msg,non_trivial_item}},
          repeated,[]}]},
 {{msg,non_trivial_item},
  [{field,item,1,2,int64,optional,[]}]}]
2> enif_protobuf:load_cache(fail:get_msg_defs()).

enif_protobuf/c_src/ep_node.c:714:44: runtime error: member access within null pointer of type 'struct ep_node_t'
ASAN:DEADLYSIGNAL
=================================================================
==21520==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000018 (pc 0x7ff516041014 bp 0x7ff51827e9f0 sp 0x7ff51827e910 T4)
==21520==The signal is caused by a READ memory access.
==21520==Hint: address points to the zero page.
#0 0x7ff516041013 in stack_ensure_all /home/jay/repos/enif_protobuf/c_src/ep_node.c:714
#1 0x7ff5160370c3 in load_cache_1 /home/jay/repos/enif_protobuf/c_src/enif_protobuf.c:261
#2 0x564925a8e3c4 in process_main x86_64-unknown-linux-gnu/opt/smp/beam_cold.h:184
#3 0x564925aa8e39 in sched_thread_func beam/erl_process.c:8560
#4 0x564925ca726b in thr_wrapper pthread/ethread.c:122
#5 0x7ff55e0196da in start_thread (/lib/x86_64-linux-gnu/libpthread.so.0+0x76da)
#6 0x7ff55db3aa3e in __clone (/lib/x86_64-linux-gnu/libc.so.6+0x121a3e)
AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV /home/jay/repos/enif_protobuf/c_src/ep_node.c:714 in stack_ensure_all
Thread T4 (1_scheduler) created by T0 here:
#0 0x7ff55ec51d2f in __interceptor_pthread_create (/usr/lib/gcc/x86_64-linux-gnu/7/libasan.so+0x37d2f)
#1 0x564925ca760f in ethr_thr_create pthread/ethread.c:419
==21520==ABORTING

Compiling enif_protobuf with ASan

diff --git a/c_src/Makefile b/c_src/Makefile
index 8c8fc3d..063d782 100644
--- a/c_src/Makefile
+++ b/c_src/Makefile
@@ -32,10 +32,13 @@ else ifeq ($(UNAME_SYS), Linux)
 endif

 CFLAGS += -fPIC -I $(ERTS_INCLUDE_DIR) -I $(ERL_INTERFACE_INCLUDE_DIR)
+CFLAGS +=  -ggdb3 -fno-omit-frame-pointer -Og -fsanitize=address,undefined
 CXXFLAGS += -fPIC -I $(ERTS_INCLUDE_DIR) -I $(ERL_INTERFACE_INCLUDE_DIR)
+CXXFLAGS +=  -ggdb3 -fno-omit-frame-pointer -Og -fsanitize=address,undefined

 LDLIBS += -L $(ERL_INTERFACE_LIB_DIR) -lei -lpthread
 LDFLAGS += -shared
+LDFLAGS += -fsanitize=address,undefined

 # Verbosity.

Does not skip encoding "empty" fields

Using the x.proto example file, converted to proto3:

syntax = "proto3";

message Person {
    string name = 1;
    int32 id = 2;
    string email = 3;
}

blank, default value fields still get encoded:

4> enif_protobuf:encode(#'Person'{}).
<<10,0,16,0,26,0>>
5> x:encode_msg(#'Person'{}).
<<>>

This is allowed by the spec, but it's not binary-compatible with gpb or protoc.
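gpb's `<<>>` output above follows proto3's implicit-presence rule: a scalar field equal to its default (0 for int32, empty for string) is simply omitted. A minimal sketch of that rule for the Person message; the buffer layout is real protobuf wire format, but the function and its single-byte length/varint shortcuts are simplifying assumptions for illustration only:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Encode Person{name=1, id=2, email=3}, skipping fields that hold
 * their proto3 default. Assumes lengths and id fit in one byte. */
static size_t
encode_person(uint8_t *out, int32_t id,
              const char *name, const char *email)
{
    size_t n = 0, len;

    len = strlen(name);
    if (len > 0) {                    /* field 1, wire type 2 */
        out[n++] = (1 << 3) | 2;
        out[n++] = (uint8_t) len;
        memcpy(out + n, name, len); n += len;
    }
    if (id != 0) {                    /* field 2, varint */
        out[n++] = (2 << 3) | 0;
        out[n++] = (uint8_t) id;
    }
    len = strlen(email);
    if (len > 0) {                    /* field 3, wire type 2 */
        out[n++] = (3 << 3) | 2;
        out[n++] = (uint8_t) len;
        memcpy(out + n, email, len); n += len;
    }
    return n;   /* all defaults => 0 bytes, matching gpb's <<>> */
}
```

The enif_protobuf output `<<10,0,16,0,26,0>>` instead writes all three tags with zero/empty payloads, which is what this check would suppress.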

Recursive message references trigger an infinite loop during load

message p_v {
    optional int64 int_v =1;
    optional string str_v =2;
    optional float float_v =3;
    repeated p_v list =4;
}

When the gpb-generated descriptors for a self-referential message like this are loaded into enif_protobuf, loading enters an infinite loop and eventually leads to OOM.
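One way load could terminate on such a definition is to mark each message type as it is visited while walking the reference graph. A sketch under assumed names (`msg_def_t` is illustrative, not the library's `ep_node_t`):

```c
#include <stddef.h>

/* Illustrative message-graph node: each message lists the sub-message
 * types its fields refer to. p_v above refers to itself via 'list'. */
typedef struct msg_def {
    struct msg_def **refs;   /* referenced sub-message types */
    size_t           nrefs;
    int              visited;
} msg_def_t;

/* Count reachable messages, stopping at already-visited nodes so a
 * self-reference terminates instead of recursing forever. */
static size_t
count_reachable(msg_def_t *m)
{
    size_t i, n;
    if (m == NULL || m->visited) return 0;  /* cycle or repeat: stop */
    m->visited = 1;
    n = 1;
    for (i = 0; i < m->nrefs; i++)
        n += count_reachable(m->refs[i]);
    return n;
}
```

The same visited-marking idea applies whether the loader expands the graph recursively or iteratively with an explicit stack.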

Issue with oneof fields

Hi, I am using gpb to create the message definitions and then loading them with enif_protobuf for the codec.
The issue I face is the following:

For definition like this:

syntax = "proto3";

message One {
	int32 id = 1;
}

message Two {
	string s = 1;
}

message Three {
    oneof payload {
    	One o = 1;
    	Two t = 2;
    }
}

Final erlang records for Three would look something like this:

#Three{
    payload = {o, #One{id = 1}}
}

In my case, it is guaranteed that each of these oneof fields is a different record type. In these cases,
I would like to get rid of the additional tuple and make it look something like this:

#Three{
    payload = #One{id = 1}
}

I figured out the changes to make in enif_protobuf: HalloAppInc@01b8c60
However, the issue is that when encoding we search for the field number using the field name [specifically, the bsearch function call on lines 622-626].
I want to avoid that and do the search using the types of the fields. Is there a way for me to do that?
Any pointers would really be appreciated.

If you'd like this change, I can add it as an option for enif_protobuf and send out a PR.
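Since each oneof branch here carries a distinct record type, the encoder could resolve the field number from the record tag instead of the oneof field name. A sketch with hypothetical names (`oneof_branch_t` is not the library's actual descriptor):

```c
#include <stddef.h>
#include <string.h>

/* Illustrative oneof descriptor: each branch maps a sub-message type
 * name to its field number, e.g. One -> 1, Two -> 2 for Three.payload. */
typedef struct {
    const char *msg_type;
    int         fnum;
} oneof_branch_t;

/* Find the branch whose message type matches the record tag, so the
 * Erlang side can pass #One{...} directly instead of {o, #One{...}}.
 * Linear scan is fine, oneofs are small; returns -1 on no match. */
static int
oneof_fnum_by_type(const oneof_branch_t *branches, size_t n,
                   const char *record_tag)
{
    size_t i;
    for (i = 0; i < n; i++)
        if (strcmp(branches[i].msg_type, record_tag) == 0)
            return branches[i].fnum;
    return -1;
}
```

This only works under the stated guarantee that no two branches share a message type; with duplicates the lookup would be ambiguous, which is why it makes sense as an opt-in option rather than the default.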

load_cache_1: random Segmentation faults

Describe the bug

I have built a debug ERTS on macOS, and sometimes I get a random Segmentation fault on startup of the app when a certain NIF lib (enif_protobuf) is loaded.

To Reproduce
I work with enif_protobuf in a company project, so I think this will be hard to reproduce on other machines.

What I am doing is simply running Elixir's mix test inside LLDB to catch the Segmentation fault.

$ cerl -debug -lldb -pa /Users/y/.asdf/installs/elixir/1.12.0-otp-24/bin/../lib/eex/ebin /Users/y/.asdf/installs/elixir/1.12.0-otp-24/bin/../lib/elixir/ebin /Users/y/.asdf/installs/elixir/1.12.0-otp-24/bin/../lib/ex_unit/ebin /Users/y/.asdf/installs/elixir/1.12.0-otp-24/bin/../lib/iex/ebin /Users/y/.asdf/installs/elixir/1.12.0-otp-24/bin/../lib/logger/ebin /Users/y/.asdf/installs/elixir/1.12.0-otp-24/bin/../lib/mix/ebin -elixir ansi_enabled true -noshell -s elixir start_cli -extra /Users/y/.asdf/installs/elixir/1.12.0-otp-24/bin/mix test
(lldb) target create "beam.debug.smp"
Current executable set to 'beam.debug.smp' (x86_64).
(lldb) settings set -- target.run-args  "--" "-root" "/Users/y/.asdf/plugins/erlang/kerl-home/builds/asdf_24.0.1/otp_src_24.0.1" "-progname" "/Users/y/.asdf/plugins/erlang/kerl-home/builds/asdf_24.0.1/otp_src_24.0.1/bin/cerl" "-debug" "--" "-home" "/Users/y" "--" "-pa" "/Users/y/.asdf/installs/elixir/1.12.0-otp-24/bin/../lib/eex/ebin" "/Users/y/.asdf/installs/elixir/1.12.0-otp-24/bin/../lib/elixir/ebin" "/Users/y/.asdf/installs/elixir/1.12.0-otp-24/bin/../lib/ex_unit/ebin" "/Users/y/.asdf/installs/elixir/1.12.0-otp-24/bin/../lib/iex/ebin" "/Users/y/.asdf/installs/elixir/1.12.0-otp-24/bin/../lib/logger/ebin" "/Users/y/.asdf/installs/elixir/1.12.0-otp-24/bin/../lib/mix/ebin" "-elixir" "ansi_enabled" "true" "-noshell" "-s" "elixir" "start_cli" "-extra" "/Users/y/.asdf/installs/elixir/1.12.0-otp-24/bin/mix" "test"
(lldb) command source -s 0 '/tmp/.cerllldb.89302'
Executing commands in '/tmp/.cerllldb.89302'.
(lldb) env TERM=dumb
(lldb) command script import /Users/y/.asdf/plugins/erlang/kerl-home/builds/asdf_24.0.1/otp_src_24.0.1/erts/etc/unix/etp.py
(lldb) run
Process 89458 launched: '/Users/y/.asdf/plugins/erlang/kerl-home/builds/asdf_24.0.1/otp_src_24.0.1/bin/x86_64-apple-darwin20.2.0/beam.debug.smp' (x86_64)
librdkafka fork already exist. delete deps/librdkafka for a fresh checkout ...
concurrentqueue fork already exist. delete deps/concurrentqueue for a fresh checkout ...
make[1]: `/Users/y/sportening/superbet_erlkaf/c_src/../priv/erlkaf_nif.so' is up to date.
===> Analyzing applications...
===> Compiling erlkaf
Loading library: "/Users/y/sportening/we-api-user-account/_build/test/lib/erlkaf/priv/erlkaf_nif" 

15:54:27.885 [debug] :metrics_ex enabled=false, port=8088}

15:54:28.260 [info]  persistent queue path: "/Users/y/sportening/we-api-user-account/_build/test/lib/erlkaf/priv/client"

15:54:28.260 [warn]  rdkafka#producer-1 CONFWARN [thrd:app]: Configuration property enable.auto.commit is a consumer property and will be ignored by this producer instance

15:54:28.260 [warn]  rdkafka#producer-1 CONFWARN [thrd:app]: Configuration property enable.auto.offset.store is a consumer property and will be ignored by this producer instance

15:54:28.261 [warn]  rdkafka#producer-1 CONFWARN [thrd:app]: Configuration property enable.partition.eof is a consumer property and will be ignored by this producer instance

15:54:28.262 [info]  Producer client created with config: [bootstrap_servers: "kafka:19092", delivery_report_only_error: true, delivery_report_callback: &PrettyKafkaClient.Producer.delivery_report/2, message_max_bytes: 52428800, socket_timeout_ms: 120000, queue_buffering_max_ms: 1, queue_buffering_overflow_strategy: :block_calling_process]
Process 89458 stopped
* thread #6, name = '2_scheduler', stop reason = EXC_BAD_ACCESS (code=1, address=0x142651038)
    frame #0: 0x0000000148033271 enif_protobuf.so`stack_ensure_all(env=0x0000700001003ca0, cache=0x00000001426483a0) at ep_node.c:762:39
   759 	                for (j = spot->pos; j < (size_t) (spot->node->size); j++) {
   760 	                    spot->pos = j + 1;
   761 	                    field = ((ep_field_t *) (spot->node->fields)) + j;
-> 762 	                    if (field->o_type == occurrence_repeated) {
   763 	                        if (field->type == field_msg || field->type == field_map) {
   764 	                            spot++;
   765 	                            stack_ensure(env, stack, &spot);
Target 0: (beam.debug.smp) stopped.
(lldb) bt
* thread #6, name = '2_scheduler', stop reason = EXC_BAD_ACCESS (code=1, address=0x142651038)
  * frame #0: 0x0000000148033271 enif_protobuf.so`stack_ensure_all(env=0x0000700001003ca0, cache=0x00000001426483a0) at ep_node.c:762:39
    frame #1: 0x000000014803b002 enif_protobuf.so`load_cache_1(env=0x0000700001003ca0, argc=1, argv=0x0000700001003dc0) at enif_protobuf.c:252:5
    frame #2: 0x0000000100031c8a beam.debug.smp`beam_jit_call_nif(c_p=0x00000001426f0638, I=0x000000014991d0c0, reg=0x0000700001003dc0, fp=(enif_protobuf.so`load_cache_1 at enif_protobuf.c:154), NifMod=0x000000014263c308) at beam_jit_common.c:117:26

Expected behavior
No random Segmentation faults on ERTS startup.

Affected versions
$ cerl
Erlang/OTP 24 [erts-12.0.1] [source] [64-bit] [smp:8:8] [ds:8:8:10] [async-threads:1] [jit]

Additional context
$ uname -v
Darwin Kernel Version 20.2.0: Wed Dec 2 20:39:59 PST 2020; root:xnu-7195.60.75~1/RELEASE_X86_64

decoding of some msg fails

Hi,

while testing enif_protobuf I have stumbled upon cases where decoding an encoded msg returns an incorrect record.

For example, having such proto definition:

syntax = "proto2";
package one.clproto;


message FuseResponse {
    oneof fuse_response {
        FileChildren file_children = 3;
        bytes xattr = 13;
    }
}

message ChildLink {
    required bytes name = 2;
}

message FileChildren {
    repeated ChildLink child_links = 1;
}

message ServerMessage {
    oneof message_body {
        FuseResponse             fuse_response             = 15;
    }
}

after encoding the following record:

{'ServerMessage',
    {fuse_response,
        {'FuseResponse',
            {file_children,{'FileChildren',[{'ChildLink',<<"1">>}]}}}}}

decoding it returned the record shown below, where the inner {file_children, ...} oneof tag is missing:

{'ServerMessage',
    {fuse_response,{'FuseResponse',{'FileChildren',[{'ChildLink',<<"1">>}]}}}}

Compilation Failed with OTP-23 and rebar 3.11.1

Hello Devs,

Compilation failed on CentOS 7 with OTP 23 and rebar3 3.11.1.

Here is the log:

make: Entering directory `/home/user/git/myProj/_build/default/lib/enif_protobuf'
Uncaught error in rebar_core: {'EXIT',
                               {undef,
                                [{rebar_utils,get_cwd,[],[]},
                                 {rebar_config,new,0,[]},
                                 {rebar,init_config,1,[]},
                                 {rebar,run,1,[]},
                                 {rebar,main,1,[]},
                                 {escript,run,2,
                                  [{file,"escript.erl"},{line,758}]},
                                 {escript,start,1,
                                  [{file,"escript.erl"},{line,277}]},
                                 {init,start_em,1,[]}]}}
=ERROR REPORT==== 3-Sep-2020::12:05:20.089932 ===
Loading of /home/user/git/myProj/_build/default/lib/enif_protobuf/rebar/rebar/ebin/rebar_utils.beam failed: badfile

=ERROR REPORT==== 3-Sep-2020::12:05:20.089905 ===
beam/beam_load.c(1624): Error loading module rebar_utils:
  please re-compile this module with an 23 compiler (old-style fun with indices: 3/6)


=ERROR REPORT==== 3-Sep-2020::12:05:20.106527 ===
beam/beam_load.c(1624): Error loading module rebar_utils:
  please re-compile this module with an 23 compiler (old-style fun with indices: 3/6)


=ERROR REPORT==== 3-Sep-2020::12:05:20.106573 ===
Loading of /home/user/git/myProj/_build/default/lib/enif_protobuf/rebar/rebar/ebin/rebar_utils.beam failed: badfile

escript: exception error: undefined function rebar_utils:delayed_halt/1
  in function  escript:run/2 (escript.erl, line 758)
  in call from escript:start/1 (escript.erl, line 277)
  in call from init:start_em/1 
  in call from init:do_boot/3 
make: *** [get-deps] Error 127
make: Leaving directory `/home/user/git/myProj/_build/default/lib/enif_protobuf'

The issue is with the deprecated rebar 2; migrating to rebar3 will solve it.

Does that make sense to you? Happy to do the migration.

/Prakash

rebar compile issue

collect2: error: ld returned 1 exit status
ERROR: sh(cc c_src/enif_protobuf.o c_src/ep_cache.o c_src/ep_decoder.o c_src/ep_encoder.o c_src/ep_node.o $LDFLAGS -shared -L"/usr/lib64/erlang/lib/erl_interface-4.0.2/lib" -lerl_interface -lei -o priv/enif_protobuf_drv.so)
failed with return code 1 and the following output:
/usr/bin/ld: cannot find -lerl_interface
collect2: error: ld returned 1 exit status

os version = centos 7.9
erlang otp version = Erlang/OTP 23 [erts-11.1.7]
gcc-4.8.5-44.el7.x86_64
libgcc-4.8.5-44.el7.x86_64
gcc-c++-4.8.5-44.el7.x86_64
Under the path "/usr/lib64/erlang/lib/erl_interface-4.0.2/lib" there are only two files: libei_st.a and libei.a.

make failed on OTP 25 project

=ERROR REPORT==== 17-Jun-2022::19:06:09.059862 ===
beam/beam_load.c(148): Error loading module enc:
please re-compile this module with an Erlang/OTP 25 compiler

escript: exception error: undefined function enc:main/1
in function escript:run/2 (escript.erl, line 750)
in call from escript:start/1 (escript.erl, line 277)
in call from init:start_em/1
in call from init:do_boot/3
===> Hook for compile failed!

Build error on macOS

Environment:
macOS 10.12.6
Erlang 18.3

The error output is very long, so I won't paste it all; the main problems are:

  1. the stack_t type conflicts with a built-in macOS type
  2. 64-bit integer type compatibility issues (for example, ErlNifSInt64 and uint64_t are not fully compatible)

uint64 decode fail

static inline ERL_NIF_TERM
unpack_uint64(ErlNifEnv *env, ep_dec_t *dec, ERL_NIF_TERM *term)
{
    uint32_t    shift = 0, left = 10;
    uint64_t    val = 0;
    printf("uint64_t %d, \r\n", sizeof(val));

    while (left && dec->p < dec->end) {

        val |= (((uint64_t) (*(dec->p) & 0x7f)) << shift);
        printf("unpack_uint64 v:%d, %d, %lu, \r\n", (*(dec->p) & 0x7f), shift, (unsigned long) val);
        if ((*(dec->p)++ & 0x80) == 0) {

            *term = enif_make_ulong(env, (ErlNifUInt64) val);
            return RET_OK;
        }
        shift += 7;
        left--;
    }

    return_error(env, dec->term);
}

1> enif_protobuf:decode(<<8,181,207,209,168,154,47>>, gateway_s_heart).
uint64_t 8,
unpack_uint64 v:53, 0, 53,
unpack_uint64 v:79, 7, 10165,
unpack_uint64 v:81, 14, 1337269,
unpack_uint64 v:40, 21, 85223349,
unpack_uint64 v:26, 28, 2769577909,
unpack_uint64 v:47, 35, 2769577909,
{gateway_s_heart,2769577909}

2>gateway_12_pb:decode_msg(<<8,181,207,209,168,154,47>>, gateway_s_heart).
====={53,0,53}
====={79,7,10165}
====={81,14,1337269}
====={40,21,85223349}
====={26,28,7064545205}
====={47,35,1621972248501}
{gateway_s_heart,1621972248501}

In the enif decode, ((uint64_t) (*(dec->p) & 0x7f)) << shift seems to work like a 32-bit int32 << n,
but printing sizeof(uint64_t) gives 8.
When I use gpb (gateway_12_pb) to decode, I get the right result.
I have no idea what is going on.....

some info:
Windows 10
vcvarsall.bat x64, vs set compile env
rebar3_gpb_plugin ~ "2.15.0"
gpb ~ "4.17.3"
=================== gateway_12.proto ==================
syntax = "proto3";

message gateway_s_heart {
uint64 time = 1;
}
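The printed values are consistent with 32-bit truncation rather than a bad shift: 7064545205 mod 2^32 = 2769577909, and the final 1621972248501 also reduces to 2769577909 mod 2^32. On 64-bit Windows, unsigned long is only 32 bits, so both the (unsigned long) cast in the debug printf and the enif_make_ulong call would truncate; the likely fix is enif_make_uint64 (which takes an ErlNifUInt64) plus a 64-bit printf format. The shift arithmetic itself is fine when the value stays in uint64_t, as this standalone sketch of the same varint loop (plain C, no NIF environment) shows:

```c
#include <stdint.h>
#include <stddef.h>

/* Decode one base-128 varint, keeping the accumulator in uint64_t the
 * whole way. Returns the number of bytes consumed, or 0 on error. */
static size_t
decode_varint64(const uint8_t *p, size_t len, uint64_t *out)
{
    uint64_t val = 0;
    uint32_t shift = 0;
    size_t   i;

    for (i = 0; i < len && shift < 64; i++) {
        val |= ((uint64_t) (p[i] & 0x7f)) << shift;
        if ((p[i] & 0x80) == 0) {
            *out = val;        /* returned as uint64_t, never long */
            return i + 1;
        }
        shift += 7;
    }
    return 0;
}
```

Feeding it the payload bytes from the example above (181,207,209,168,154,47, i.e. everything after the field-1 tag byte 8) yields 1621972248501, matching gpb's result.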
