rust-lang / socket2

Advanced configuration options for sockets.
Home Page: https://docs.rs/socket2
License: Apache License 2.0
CI support, following the same tiers as Rust (https://doc.rust-lang.org/nightly/rustc/platform-support.html):
- Tier 1: tests must pass.
- Tier 2: must build.
- Tier 3: best effort.

Old description:
Currently only Linux, macOS and Windows are tested on the CI, but we support (to some degree) many more OSes. We should add more of these OSes to the CI, at least running cargo check. For (Rust) tier 1 architecture/OS combos this should be easy, but for other tiers rustup might not be supported.
The documentation for SockAddr::as_inet6() has an error:
pub fn as_inet6(&self) -> Option<SocketAddrV6>
Returns this address as a SocketAddrV4 if it is in the AF_INET6 family.
Presumably, it should say:
Returns this address as a SocketAddrV6 if it is in the AF_INET6 family.
Cheers.
PR #107 added vectored I/O; it currently returns a boolean indicating whether the buffer was truncated. But the flags field on msghdr gives us more information. This issue is about exposing that information nicely (and cross-platform).

Some ideas:
- A msghdr-like structure that can be checked after a call to send/recv. Downside: in/out arguments aren't very "rustic".
- The flags field wrapped in a new type. Such a new type could be SendFlags, with functions such as is_truncated(&self) -> bool.

/cc @de-vri-es
Per the WSASend docs (https://docs.microsoft.com/en-us/windows/win32/api/winsock2/nf-winsock2-wsasend):

For a Winsock application, once the WSASend function is called, the system owns these buffers and the application may not access them.

But we use &[IoSlice<'_>], meaning that we don't own the buffers or even have unique access to them. We need to use &mut [IoSlice<'_>], but this conflicts with the io::Write::write_vectored function signature.
Relevant code: https://github.com/rust-lang/socket2-rs/blob/dc82e662cda027f69abb75917777943fbffb8463/src/sys/windows.rs#L499-L516
It would be useful to be able to set the IP_FREEBIND socket option that is available on Linux.
See also rust-lang/libc#1835 for exposing this symbol from libc.
When using an AF_INET STREAM socket, obtaining the TCP_NODELAY option using Socket::nodelay() fails with an error like so:
thread 'primary 1' panicked at 'assertion failed: `(left == right)`
left: `1`,
right: `4`', C:\Users\appveyor\.cargo\registry\src\github.com-1ecc6299db9ec823\socket2-0.3.8\src\sys\windows.rs:714:13
note: Run with `RUST_BACKTRACE=1` environment variable to display a backtrace.
This is because of the following code:
As far as I know, no additional options need to be set before being able to obtain the option. The Rust standard library uses a BYTE type instead of c_int. Perhaps the sizes of these two types are different?
Hi there @alexcrichton,
Thanks a lot for the project. It has helped me get started with a multicast-based project, and it was amazing. I was able to send messages and find peers, until I couldn't anymore. I've dug deep into the depths of the network stack, to find, with amusement, that networking is chaos.

One of my computers could find the other's announcements, but the other computer would not find the first one. After some analysis, I realized this is related to having multiple interfaces, which is very common on laptops (wifi and cable) and Windows machines with HyperV (and WSL2). To my surprise, I could listen to multicast packets on any of the interfaces, but it would only send on one specific interface instead of broadcasting to all.
socket2-rs already supports most of the calls needed for "single socket, multi-interface listeners". If we bind to 0.0.0.0, we can use Socket::join_multicast_v4 for each interface IP (enumerating them is outside the package's scope).

As we bind to 0.0.0.0, selecting the interface used for sending packets is delegated to the OS, and it might end up selecting an interface that does not talk to the other computers, leading to partial discovery, something I see very often with my dual-network computer.

I found this StackOverflow answer showing that it would be possible to use the already available Socket::set_multicast_if_v4 call to select which interface to use before sending a packet, so we can hop around and respond on the same interface that received the packet.
As you may already know, the default socket API does not return the origin interface of a packet, which made me despair of finding alternatives and question my own sanity about why I'm shaving this yak. I was surprised to find some incantations using IP_PKTINFO and/or (I'm not sure anymore) IP_RECVIF to load the information from a "magic" place of the network stack, using CMSG_FIRSTHDR, CMSG_NXTHDR, CMSG_DATA, and WSARecvMsg on Windows.

It looks like this is how Avahi already does the interface hopping, in order to use a single socket to broadcast messages on all interfaces and locate peers happily until the next network topology breaks it again.

My question, TL;DR: would it make sense for the socket2-rs project to expose this extra information from CMSG_DATA and WSARecvMsg?

I do think I could make something as a separate crate, but it would also benefit a lot from the Windows/Unix infrastructure already available in the sys.rs files. If you do think it fits this project, I could try to get something running, but I would also be perfectly OK with having this in another crate.

The alternative seems to be creating a socket per interface and a MultiSocket struct grouping them, which would avoid this whole side-quest, but it is not resource-efficient, and I'm afraid I would displease Ferris by opening too many sockets.

If you have any other suggestions from your experience, I would love to hear them as well! This is way out of my depth hehehe

Thank you a lot for your attention and time to read this :)

(PS: I've tried to make this issue funny to read, and I'm sorry ahead of time if it ended up too verbose or came off weird. This hobby project is leading to so many rabbit holes, but I'm learning a lot!)
Just reporting a test failure on Haiku. Overall it's pretty close!
master as of 32969e5
/Data/socket2-rs> cargo test
Downloaded remove_dir_all v0.5.3
Downloaded rand v0.4.6
Downloaded tempdir v0.3.7
Downloaded 3 crates (97.1 KB) in 0.93s
Compiling remove_dir_all v0.5.3
Compiling rand v0.4.6
Compiling tempdir v0.3.7
Compiling socket2 v0.3.12 (/Data/socket2-rs)
Finished test [unoptimized + debuginfo] target(s) in 5.27s
Running target/debug/deps/socket2-d90163a26b657973
running 13 tests
test socket::test::connect_timeout_unbound ... FAILED
test socket::test::connect_timeout_unrouteable ... ok
test socket::test::connect_timeout_valid ... ok
test socket::test::keepalive ... FAILED
test socket::test::nodelay ... ok
test socket::test::tcp ... ok
test sys::test_ip ... ok
test tests::domain_fmt_debug ... ok
test tests::domain_for_address ... ok
test tests::protocol_fmt_debug ... ok
test tests::socket_address_ipv4 ... ok
test tests::socket_address_ipv6 ... ok
test tests::type_fmt_debug ... ok
failures:
---- socket::test::connect_timeout_unbound stdout ----
thread 'main' panicked at 'unexpected success', src/socket.rs:925:22
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
---- socket::test::keepalive stdout ----
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Os { code: -2147483643, kind: InvalidInput, message: "Invalid Argument" }', src/socket.rs:985:9
failures:
socket::test::connect_timeout_unbound
socket::test::keepalive
test result: FAILED. 11 passed; 2 failed; 0 ignored; 0 measured; 0 filtered out
error: test failed, to rerun pass '--lib'
Recently Mio got a very nice API for dealing with all things keepalive: tokio-rs/mio#1385. socket2 should adopt the same API.
I struggled a bit with this one over the last 2 days, and I'm finally making some progress, to the point where I think there might be an issue in this crate.
Initially, I wrote the following code in Python and it worked:
import socket as s
import struct
sock = s.socket(s.AF_INET, s.SOCK_DGRAM, s.IPPROTO_UDP)
sock.setsockopt(s.IPPROTO_IP, s.IP_ADD_MEMBERSHIP, struct.pack("4sL", s.inet_aton("239.255.255.250"), s.INADDR_ANY))
sock.sendto("M-SEARCH * HTTP/1.1\r\nHOST: 239.255.255.250:1982\r\nMAN: \"ssdp:discover\"\r\nST: wifi_bulb\r\n".encode('UTF-8'), ("239.255.255.250", 1982))
sock.recv(4096)
I then wrote the following code in Rust but it does not want to work:
#[macro_use] extern crate log;
use socket2::{Socket, Domain, Type, Protocol, SockAddr};
use std::net::{SocketAddrV4, Ipv4Addr};
fn main() {
env_logger::init();
let yeelight_ip = "239.255.255.250".parse().unwrap();
let yeelight_port: u16 = 1982;
println!("Yeelight multicast address: {}:{}", yeelight_ip, yeelight_port);
println!("Requesting YeeLight devices");
let socket = Socket::new(
Domain::ipv4(),
Type::dgram(),
Some(Protocol::udp())
).expect("Failed to create socket!");
info!("Created IPv4 DGRAM UDP socket");
socket.join_multicast_v4(
&yeelight_ip,
&Ipv4Addr::UNSPECIFIED
).expect("Unable to join multicast broadcast!");
info!("Joined IPv4 multicast on address {}, interface {}", yeelight_ip, Ipv4Addr::UNSPECIFIED);
let msg = "M-SEARCH * HTTP/1.1\r\nHOST: 239.255.255.250:1982\r\nMAN: \"ssdp:discover\"\r\nST: wifi_bulb\r\n";
let msg_cstr = std::ffi::CString::new(msg).unwrap();
match socket.send_to(
msg_cstr.as_bytes(),
&SockAddr::from(
SocketAddrV4::new(
yeelight_ip,
yeelight_port
)
)
) {
Ok(bytes_sent) => {
let dash_repeat_cnt = 60;
info!("--[ {: >3} bytes sent ]{}", bytes_sent, "-".repeat(dash_repeat_cnt - 20));
for line in msg.lines() {
info!("{}", line);
}
info!("{}", "-".repeat(dash_repeat_cnt));
},
Err(_) => eprintln!("Error broadcasting request for identification!")
};
println!();
println!("Listen and print YeeLight devices");
loop {
let mut buffer = Vec::with_capacity(1024 * 1024);
let received_bytes = socket
.recv(&mut buffer)
.expect("Unable to receive message!");
debug!("Received {} bytes", received_bytes);
}
}
The output of the code is this:
Yeelight multicast address: 239.255.255.250:1982
Requesting YeeLight devices
[2020-07-14T23:58:34Z INFO yeelight_test] Created IPv4 DGRAM UDP socket
[2020-07-14T23:58:34Z INFO yeelight_test] Joined IPv4 multicast on address 239.255.255.250, interface 0.0.0.0
[2020-07-14T23:58:34Z INFO yeelight_test] --[ 86 bytes sent ]----------------------------------------
[2020-07-14T23:58:34Z INFO yeelight_test] M-SEARCH * HTTP/1.1
[2020-07-14T23:58:34Z INFO yeelight_test] HOST: 239.255.255.250:1982
[2020-07-14T23:58:34Z INFO yeelight_test] MAN: "ssdp:discover"
[2020-07-14T23:58:34Z INFO yeelight_test] ST: wifi_bulb
[2020-07-14T23:58:34Z INFO yeelight_test] ------------------------------------------------------------
Listen and print YeeLight devices
[2020-07-14T23:58:34Z DEBUG yeelight_test] Received 0 bytes
[2020-07-14T23:58:34Z DEBUG yeelight_test] Received 0 bytes
^C

After the two "Received 0 bytes" lines, the program freezes.
I then wrote the following C++ code and, after correctly calling htons(yeelight_port), which I initially forgot, it works:
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <iostream>
#include <string>
#include <array>
const std::string msg = "M-SEARCH * HTTP/1.1\r\nHOST: 239.255.255.250:1982\r\nMAN: \"ssdp:discover\"\r\nST: wifi_bulb\r\n";
int main() {
auto yeelight_ip = in_addr();
inet_aton("239.255.255.250", &yeelight_ip);
unsigned int yeelight_port = 1982;
auto sock = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
std::cerr << "Created socket!" << std::endl;
auto mreq = ip_mreq();
mreq.imr_interface.s_addr = INADDR_ANY;
mreq.imr_multiaddr.s_addr = yeelight_ip.s_addr;
if (setsockopt(sock, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq)) != 0) {
std::cerr << "Error joining multicast!" << std::endl;
exit(errno);
}
else {
std::cerr << "Joined multicast!" << std::endl;
}
auto addr = sockaddr_in();
addr.sin_port = htons(yeelight_port);
for (int i = 0; i < 8; i++) addr.sin_zero[i] = 0;
addr.sin_family = AF_INET;
addr.sin_addr = yeelight_ip;
int ret = sendto(sock, msg.c_str(), msg.length(), 0, (sockaddr*)(&addr), sizeof(addr));
if (ret == -1) {
std::cerr << "Error broadcasting request for identification!" << std::endl;
std::cerr << errno << std::endl;
}
else {
std::cerr << "Sent broadcast (" << ret << " bytes) request for idetification!" << std::endl;
}
while (true) {
std::array<char, 1024 * 1024> buffer;
auto bytes_read = recv(sock, buffer.data(), buffer.size(), 0);
if (bytes_read == -1) {
std::cerr << "Unable to receive message!" << std::endl;
std::cerr << errno << std::endl;
}
else {
std::cerr << "Read " << bytes_read << " bytes..." << std::endl;
std::cout << std::string(buffer.begin(), buffer.begin() + bytes_read) << std::endl;
}
}
return 0;
}
At this point I'm not sure anymore why the Rust code doesn't work.
I'm trying to build using wasm-pack build and it's giving me this error:
Compiling socket2 v0.3.9
error[E0583]: file not found for module `sys`
--> /home/justin/.cargo/registry/src/github.com-1ecc6299db9ec823/socket2-0.3.9/src/lib.rs:72:5
|
72 | mod sys;
| ^^^
|
= help: name the file either sys.rs or sys/mod.rs inside the directory "/home/justin/.cargo/registry/src/github.com-1ecc6299db9ec823/socket2-0.3.9/src"
I'm very new at this, but it seems like it's pulling down the latest version of the code in this repo and failing to compile.
Any help would be greatly appreciated.
There is apparently no way of reading these values.
The support for keepalive is pretty limited right now. I'm looking to write a test that wants to trigger connection closure in response to unacked timeouts, and without being able to set these, it's impossible for that test to complete in a reasonable amount of time.
Moved from deprecrated/net2-rs#90.
Hello,
As part of packaging crates as RPMs we run tests. And recently I found out that if there are no network adapters, one of the tests is failing:
[ 129s] failures:
[ 129s]
[ 129s] ---- socket::test::connect_timeout_unrouteable stdout ----
[ 129s] thread 'socket::test::connect_timeout_unrouteable' panicked at 'unexpected error Network is unreachable (os error 101)', src/socket.rs:902:23
[ 129s] note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace.
[ 129s]
[ 129s]
[ 129s] failures:
[ 129s] socket::test::connect_timeout_unrouteable
I would appreciate it if you could look into this issue.
Hi there,

The latest published version of socket2 (0.3.12) works with the following test program, but when upgrading to the latest commit on this repo it fails to associate the socket with a multicast address, both on Windows and Linux.
use socket2::{Domain, Protocol, Socket, Type};
use std::net::Ipv4Addr;
fn main() {
let socket = Socket::new(Domain::IPV4, Type::DGRAM, Some(Protocol::UDP))
.expect("could not create socket");
socket
.join_multicast_v4(&Ipv4Addr::new(224, 0, 0, 251), &Ipv4Addr::UNSPECIFIED)
.expect("could not join multicast group");
}
On Windows it crashes as:
Os { code: 10049, kind: AddrNotAvailable, message: "The requested address is not valid in its context." }
On Linux it crashes as:
Os { code: 22, kind: InvalidInput, message: "Invalid argument" }
Running the test program with git bisect led me to the commit in #68; maybe there are some conversion errors when passing IP addresses to the system calls.

I will try to take a look there at some point, but I thought I'd warn you, to avoid a new release before this is fixed.
Currently there is no explicit documentation on what versions of OSes are supported. We should create such documentation and keep it up to date.
It divides the interval seconds by 1000, even though Linux expects that value to be a count in seconds.
@Thomasdezeeuw @stjepang y'all have been doing a great job maintaining this crate, and I clearly don't have time for it, so I was wondering if y'all would prefer it live somewhere else? I don't think it's helping much living under my name right now!
PR #110 changed Socket::new to be a simple call to socket(2), without setting common flags like CLOEXEC. For convenience we should add back a function which copies the old behaviour. For that function we need a good name. Some possibilities:
- Socket::new_with_common_flags: too long.
- Socket::like_std: create a socket like the standard library would?

It seems that the last version bump shouldn't have been a patch version.
Projects depending on hyper (reqwest in my case) seem to be broken now.
Output:
cargo test
Compiling hyper v0.13.7
Compiling matrix-sdk-crypto v0.1.0 (/home/poljar/werk/matrix/nio-rust/matrix_sdk_crypto)
Compiling matrix-sdk-base v0.1.0 (/home/poljar/werk/matrix/nio-rust/matrix_sdk_base)
error[E0599]: no function or associated item named `ipv4` found for struct `socket2::Domain` in the current scope
--> /home/poljar/local/data/cargo/registry/src/github.com-1ecc6299db9ec823/hyper-0.13.6/src/client/connect/http.rs:556:38
|
556 | SocketAddr::V4(_) => Domain::ipv4(),
| ^^^^ function or associated item not found in `socket2::Domain`
error[E0599]: no function or associated item named `ipv6` found for struct `socket2::Domain` in the current scope
--> /home/poljar/local/data/cargo/registry/src/github.com-1ecc6299db9ec823/hyper-0.13.6/src/client/connect/http.rs:557:38
|
557 | SocketAddr::V6(_) => Domain::ipv6(),
| ^^^^ function or associated item not found in `socket2::Domain`
error[E0599]: no function or associated item named `stream` found for struct `socket2::Type` in the current scope
--> /home/poljar/local/data/cargo/registry/src/github.com-1ecc6299db9ec823/hyper-0.13.6/src/client/connect/http.rs:559:44
|
559 | let socket = Socket::new(domain, Type::stream(), Some(Protocol::tcp()))?;
| ^^^^^^ function or associated item not found in `socket2::Type`
error[E0599]: no function or associated item named `tcp` found for struct `socket2::Protocol` in the current scope
--> /home/poljar/local/data/cargo/registry/src/github.com-1ecc6299db9ec823/hyper-0.13.6/src/client/connect/http.rs:559:69
|
559 | let socket = Socket::new(domain, Type::stream(), Some(Protocol::tcp()))?;
| ^^^ function or associated item not found in `socket2::Protocol`
error: aborting due to 4 previous errors
For more information about this error, try `rustc --explain E0599`.
error: could not compile `hyper`.
I am experiencing trouble with SIGPIPEs using socket2 on iOS. The code already sets SO_NOSIGPIPE by default for macOS; is there a reason this is not done for iOS as well?
Current list left to test:
Socket::peer_addr
Socket::try_clone
Socket::shutdown
Socket::recv_out_of_band
Socket::peek
Socket::recv_from
Socket::peek_from
Socket::send_out_of_band
Socket::take_error
Socket::write_timeout
Socket::set_linger(None)
Socket::join_multicast_v4
Socket::leave_multicast_v4
Socket::multicast_if_v4
Socket::set_multicast_if_v4
Socket::multicast_loop_v4
Socket::set_multicast_loop_v4
Socket::multicast_ttl_v4
Socket::set_multicast_ttl_v4
Socket::join_multicast_v6
Socket::leave_multicast_v6
Socket::multicast_hops_v6
Socket::set_multicast_hops_v6
Socket::multicast_if_v6
Socket::set_multicast_if_v6
Socket::multicast_loop_v6
Socket::set_multicast_loop_v6
Socket::read_vectored
&Socket::read
&Socket::read_vectored
Socket::write_vectored
Socket::flush
&Socket::write
&Socket::write_vectored
&Socket::flush
Socket::debug
SockAddr::debug
TcpListener
All of Linux, OS X and Windows are the same: call setsockopt with the TCP_FASTOPEN option.
// For Linux, value is the queue length of pending packets
int opt = 5;
// For the others, just a boolean value for enable and disable
int opt = 1;
// Call it before listen()
int ret = setsockopt(socket, IPPROTO_TCP, TCP_FASTOPEN, &opt, sizeof(opt));
TcpStream
This is a bit more complicated than TcpListener, because we have to send the SYN with the first data packet to make TFO work. APIs are completely different on different platforms:
Before 4.11, Linux uses the MSG_FASTOPEN flag for sendto() and doesn't need to call connect():
// Send SYN with data in buf
ssize_t n = sendto(socket, buf, buf_length, MSG_FASTOPEN, saddr, saddr_len);
After 4.11, Linux provides a TCP_FASTOPEN_CONNECT option:
int enable = 1;
// Enable it before connect()
int ret = setsockopt(socket, IPPROTO_TCP, TCP_FASTOPEN_CONNECT, &enable, sizeof(enable));
// Call connect as usual
int ret = connect(socket, saddr, saddr_len);
Uses sendto() as on Linux, but without the MSG_FASTOPEN flag:
// Set before sendto()
int ret = setsockopt(socket, IPPROTO_TCP, TCP_FASTOPEN, &opt, sizeof(opt));
// Call sendto() without flags
ssize_t n = sendto(socket, buf, buf_length, 0, saddr, saddr_len);
Darwin supports TFO since OS X 10.11, iOS 9.0, tvOS 9.0 and watchOS 2.0. It supports it with a new syscall, connectx:
sa_endpoints_t endpoints;
memset(&endpoints, 0, sizeof(endpoints));
endpoints.sae_dstaddr = &saddr;
endpoints.sae_dstaddrlen = saddr_len;
int ret = connectx(socket,
&endpoints,
SAE_ASSOCID_ANY,
CONNECT_DATA_IDEMPOTENT | CONNECT_RESUME_ON_READ_WRITE,
NULL,
0,
NULL,
NULL);
The SYN will be sent with the first write or send.
Windows supports it with ConnectEx, since Windows 10: https://docs.microsoft.com/en-us/windows/win32/api/mswsock/nc-mswsock-lpfn_connectex
// TCP_FASTOPEN is required
int ret = setsockopt(socket, IPPROTO_TCP, TCP_FASTOPEN, &opt, sizeof(opt));
// Get LPFN_CONNECTEX
LPFN_CONNECTEX ConnectEx = ...;
// Call it with the first packet of data, SYN will send with this packet's data
LPDWORD bytesSent = 0;
int ret = ConnectEx(socket, saddr, saddr_len, buf, buf_length, &bytesSent, &overlapped);
As we can see from those OSes' APIs (except OS X and Linux > 4.10), connections are made with the first write(), which means that if we want to support TFO, we have to put that code in the first call of std::io::Write::write or tokio::AsyncWrite::poll_write.

There is no other way except building a custom TcpStream from socket() and calling different APIs on different OSes while sending the first data packet.
I want to open a discussion here on how to support TFO gracefully in Rust's world.

- connectx for OS X: rust-lang/libc#1635
- TCP_FASTOPEN_CONNECT for Linux: rust-lang/libc#1634
- TCP_FASTOPEN for Windows: retep998/winapi-rs#856

Most types have function constructors, such as Domain::ipv4. Should we make these functions const functions? Or alternatively make them associated constants?
impl Domain {
/// Domain for IPv4 communication, corresponding to `AF_INET`.
pub fn ipv4() -> Domain {
Domain(sys::AF_INET)
}
/// Domain for IPv4 communication, corresponding to `AF_INET`.
pub const IPV4: Domain = Domain(sys::AF_INET);
}
Personally I feel like associated constants would be more appropriate, as they are constants wrapped in a type. But I don't feel strongly either way.
/cc @stjepang
Hi! I transitively use socket2 as part of a Rust project built with Bazel. I'm noting that socket2 builds successfully for me at v0.3.15, but not at version v0.3.16 (released today) unless I specify the extra flag --cfg=libc_align. I'm a bit hesitant to do so because it feels low-level and platform-specific, so I wanted to check and see whether this is expected or whether there's a better solution?

Here is a repro repo. You do need to have Bazel to repro, but you don't need any additional setup. The error is:
INFO: From Compiling Rust lib socket2 v0.3.16 (7 files):
error[E0063]: missing field `__align` in initializer of `libc::in6_addr`
--> external/raze__socket2__0_3_16/src/sockaddr.rs:243:25
|
243 | let sin6_addr = in6_addr {
| ^^^^^^^^ missing `__align`
error: aborting due to previous error
For more information about this error, try `rustc --explain E0063`.
ERROR: /HOMEDIR/.cache/bazel/_bazel_wchargin/e6b2d93af64d3456b38c9c9d2131e939/external/raze__socket2__0_3_16/BUILD.bazel:33:13: output 'external/raze__socket2__0_3_16/libsocket2--1560485693.rlib' was not created
ERROR: /HOMEDIR/.cache/bazel/_bazel_wchargin/e6b2d93af64d3456b38c9c9d2131e939/external/raze__socket2__0_3_16/BUILD.bazel:33:13: not all outputs were created or valid
Target @raze__socket2__0_3_16//:socket2 failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 0.707s, Critical Path: 0.20s
INFO: 2 processes: 1 internal, 1 linux-sandbox.
FAILED: Build did NOT complete successfully
I dug far enough into the libc structure to see the align/no_align modules and guess that this cfg flag would probably fix it; it does.

The failing Bazel action, with directory structure and command line invocation, is here (in a Gist because it has some long paths): https://gist.github.com/wchargin/4da947cf43d2a241cfffb89873788a82
In all cases, the libc dependency is at version 0.2.80. But I note that socket2 v0.3.15 depends on libc with default features, whereas v0.3.16 depends on the align feature:
$ cargo tree -e features --manifest-path socket2-0.3.15/Cargo.toml
socket2-libc-align-demo v0.0.0 (/HOMEDIR/git/socket2-libc-align-demo/socket2-0.3.15)
└── socket2 feature "default"
    └── socket2 v0.3.15
        ├── cfg-if feature "default"
        │   └── cfg-if v0.1.10
        └── libc feature "default"
            └── libc v0.2.80
                └── libc feature "std"
                    └── libc v0.2.80
$ cargo tree -e features --manifest-path socket2-0.3.16-default/Cargo.toml
socket2-libc-align-demo v0.0.0 (/HOMEDIR/git/socket2-libc-align-demo/socket2-0.3.16-default)
└── socket2 feature "default"
    └── socket2 v0.3.16
        ├── cfg-if feature "default"
        │   └── cfg-if v0.1.10
        ├── libc feature "align"
        │   └── libc v0.2.80
        └── libc feature "default"
            └── libc v0.2.80
                └── libc feature "std"
                    └── libc v0.2.80
$ cargo tree -e features --manifest-path socket2-0.3.16-libc-align/Cargo.toml
socket2-libc-align-demo v0.0.0 (/HOMEDIR/git/socket2-libc-align-demo/socket2-0.3.16-libc-align)
└── socket2 feature "default"
    └── socket2 v0.3.16
        ├── cfg-if feature "default"
        │   └── cfg-if v0.1.10
        ├── libc feature "align"
        │   └── libc v0.2.80
        └── libc feature "default"
            └── libc v0.2.80
                └── libc feature "std"
                    └── libc v0.2.80
So I'm hoping that you can help me understand what caused this change, why it requires me to build with --cfg=libc_align, (maybe) why it's needed with Bazel but not with Cargo (though I understand if that's outside your purview/expertise), and whether I should feel comfortable doing so? I want to build for non-Linux platforms, too, eventually.

I'm on rustc 1.47.0 (18bf6b4f0 2020-10-07) on Debian-like Linux.

Thanks much!
I am using this crate over the standard library one mainly because of the Read implementation.

I have two binary crates acting as server/client, both of which use serde_json to serialize data.

After creating a unix-domain stream socket and connecting it to the server's address, my client successfully uses socket.send() and reports back the number of bytes sent.
However on server side the following never returns:
let (sock, addr) = server_socket.accept().unwrap();
let item: MyStruct = serde_json::from_reader(sock).unwrap();
The only way I can force from_reader to finish deserialization is to drop the socket on the client side, which brings me to my question.

Is it possible to establish a socket connection between server and client and communicate multiple messages back and forth between them while taking advantage of the Read implementation in this crate? (I don't want to drop my client socket and re-create it every time I have to send a message across!)

The confusing part for me is that socket.send() on the client side returns, but it seems the server doesn't get the memo that the payload is delivered and it's time to finish the deserialization operation.

How do you go about doing this type of communication? Do I need to somehow attach an EOF to the serialized data on the client side to signal the server?
Any input is appreciated.
Build error; detailed message:
error[E0599]: no function or associated item named `ipv4` found for struct `socket2::Domain` in the current scope
--> /Users/baoyachi/.cargo/registry/src/github.com-1ecc6299db9ec823/hyper-0.13.7/src/client/connect/http.rs:556:38
|
556 | SocketAddr::V4(_) => Domain::ipv4(),
| ^^^^ function or associated item not found in `socket2::Domain`
error[E0599]: no function or associated item named `ipv6` found for struct `socket2::Domain` in the current scope
--> /Users/baoyachi/.cargo/registry/src/github.com-1ecc6299db9ec823/hyper-0.13.7/src/client/connect/http.rs:557:38
|
557 | SocketAddr::V6(_) => Domain::ipv6(),
| ^^^^ function or associated item not found in `socket2::Domain`
error[E0599]: no function or associated item named `stream` found for struct `socket2::Type` in the current scope
--> /Users/baoyachi/.cargo/registry/src/github.com-1ecc6299db9ec823/hyper-0.13.7/src/client/connect/http.rs:559:44
|
559 | let socket = Socket::new(domain, Type::stream(), Some(Protocol::tcp()))?;
| ^^^^^^ function or associated item not found in `socket2::Type`
error[E0599]: no function or associated item named `tcp` found for struct `socket2::Protocol` in the current scope
--> /Users/baoyachi/.cargo/registry/src/github.com-1ecc6299db9ec823/hyper-0.13.7/src/client/connect/http.rs:559:69
|
559 | let socket = Socket::new(domain, Type::stream(), Some(Protocol::tcp()))?;
| ^^^ function or associated item not found in `socket2::Protocol`
error: aborting due to 4 previous errors
For more information about this error, try `rustc --explain E0599`.
error: could not compile `hyper`.
In order to be pulled in as a dependency for rust-lang/rust#55553, could you please bump the minor version and publish it?
Thanks for your help.
Thanks for your work. I've got a question:

How can I use socket2-rs to send a raw packet (IP packet or Ethernet frame) to a network interface device?
In C, you use recvmsg() to get the incoming interface. It's a bit complicated because of differences between Unix variants and Linux. Not sure about Windows. Sample C code is in this gist: https://gist.github.com/pusateri/58d4ea943cfbfb7e2b3af955293227fe
recvfrom() won't give you the incoming interface, just the source address.
I have been working on a cross-platform Bluetooth implementation, and have my own cross-platform Socket implementation that is very similar to yours. To reduce duplication, I'd like to add Bluetooth support to your repository and use it as a dependency.
Socket::accept segfaults within accept4 in statically linked musl (1.1.19) binaries on i386 Linux.
Starting with version 0.3.10 of socket2, it appears that calling accept on a Unix socket no longer works. Consider the following server code:
use socket2::{Domain, SockAddr, Socket, Type};
fn main() {
let listener = Socket::new(Domain::unix(), Type::stream(), None).unwrap();
let addr = SockAddr::unix("\0dummy").unwrap();
listener.bind(&addr).expect("Failed in bind(2)");
listener.listen(65_536).expect("Failed in listen(2)");
listener.accept().expect("Failed in accept(2)");
}
This uses the Linux abstract address "dummy", but any other socket path (e.g. /tmp/test.sock) will produce the same problem.
We use the following client program (using Ruby):
ruby -r socket -e 'UNIXSocket.new("\0dummy")'
On versions prior to 0.3.10, all is well: the client connects, and the accept() call on the server returns an Ok value. Starting with 0.3.10, the server instead produces an Err from the accept call. In the above example, this leads to the following panic:
thread 'main' panicked at 'Failed in accept(2): Os { code: 14, kind: Other, message: "Bad address" }', src/libcore/result.rs:997:5
stack backtrace:
0: 0x5648ea97ab53 - std::sys::unix::backtrace::tracing::imp::unwind_backtrace::h50ebfb8734a81144
at src/libstd/sys/unix/backtrace/tracing/gcc_s.rs:39
1: 0x5648ea9767fb - std::sys_common::backtrace::_print::hc7fdae4fb6b58d2d
at src/libstd/sys_common/backtrace.rs:71
2: 0x5648ea9799e6 - std::panicking::default_hook::{{closure}}::hc55d0892611a29ff
at src/libstd/sys_common/backtrace.rs:59
at src/libstd/panicking.rs:197
3: 0x5648ea979779 - std::panicking::default_hook::h3c8a3df5d3469668
at src/libstd/panicking.rs:211
4: 0x5648ea97a03f - std::panicking::rust_panic_with_hook::h24c9a1c35b1f49cc
at src/libstd/panicking.rs:474
5: 0x5648ea979bc1 - std::panicking::continue_panic_fmt::h8ed9632bdd4b9299
at src/libstd/panicking.rs:381
6: 0x5648ea979aa5 - rust_begin_unwind
at src/libstd/panicking.rs:308
7: 0x5648ea989b7c - core::panicking::panic_fmt::h0d6d5c8b201e3246
at src/libcore/panicking.rs:85
8: 0x5648ea97059d - core::result::unwrap_failed::hd7d397710ebf1ef1
9: 0x5648ea970426 - playground::main::ha4f78b9fc613ce66
10: 0x5648ea970892 - std::rt::lang_start::{{closure}}::h7cd1d2b3088a513f
11: 0x5648ea979a92 - std::panicking::try::do_call::h3310fc9f30c8962f
at src/libstd/rt.rs:49
at src/libstd/panicking.rs:293
12: 0x5648ea97b779 - __rust_maybe_catch_panic
at src/libpanic_unwind/lib.rs:87
13: 0x5648ea97a54c - std::rt::lang_start_internal::h66306a4a4a80131b
at src/libstd/panicking.rs:272
at src/libstd/panic.rs:388
at src/libstd/rt.rs:48
14: 0x5648ea970491 - main
15: 0x7f5dddfcdee2 - __libc_start_main
16: 0x5648ea97016d - _start
17: 0x0 - <unknown>
I suspect this may have been caused by commit 763b8a5, as it's the only commit I could find that changes accept.
Would it be possible to add support for Source Specific Multicast?
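To make the request concrete, here is a rough sketch of the data a source-specific IP_ADD_SOURCE_MEMBERSHIP join needs. The struct below is my own mirror of the Linux ip_mreq_source layout for illustration (Windows orders the last two fields differently), so real code must take the definition from the libc crate:

```rust
use std::net::Ipv4Addr;

// Sketch of the data an IP_ADD_SOURCE_MEMBERSHIP join needs. Field
// order follows the Linux `ip_mreq_source`; other platforms differ,
// so real code should use libc's definition instead of this mirror.
#[repr(C)]
struct IpMreqSource {
    imr_multiaddr: u32,  // multicast group, network byte order
    imr_interface: u32,  // local interface, network byte order
    imr_sourceaddr: u32, // permitted source, network byte order
}

fn ssm_join_request(group: Ipv4Addr, iface: Ipv4Addr, source: Ipv4Addr) -> IpMreqSource {
    IpMreqSource {
        imr_multiaddr: u32::from(group).to_be(),
        imr_interface: u32::from(iface).to_be(),
        imr_sourceaddr: u32::from(source).to_be(),
    }
}

fn main() {
    let req = ssm_join_request(
        Ipv4Addr::new(232, 1, 1, 1), // SSM groups live in 232.0.0.0/8
        Ipv4Addr::UNSPECIFIED,       // let the kernel pick the interface
        Ipv4Addr::new(10, 0, 0, 1),  // only accept traffic from this source
    );
    println!("group (network order): {:#010x}", req.imr_multiaddr);
}
```

A socket2 API for this would presumably accept the group, interface, and source addresses and perform the setsockopt call itself.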
Hi,
I'm not sure if this is an issue with me or with the socket2 library, but it seemed to appear out of the blue. That is, there is no clear event between before (when my project was building) and after (when my project was failing to build) to which I can attribute this breakage. socket2 is buried deep within my dependency tree; it is not in my project's Cargo.toml.
error[E0583]: file not found for module `sys`
--> /home/me/.cargo/registry/src/github.com-1ecc6299db9ec823/socket2-0.3.8/src/lib.rs:72:5
|
72 | mod sys;
| ^^^
|
= help: name the file either sys.rs or sys/mod.rs inside the directory "/home/me/.cargo/registry/src/github.com-1ecc6299db9ec823/socket2-0.3.8/src"
error: aborting due to previous error
rustc 1.30.0-nightly (3bc2ca7e4 2018-09-20)
actix v0.7.4 -> trust-dns-resolver v0.9.1 -> trust-dns-proto v0.4.0 -> socket2 v0.3.8
If this is not actually a problem with the library, could someone please explain to me how to resolve it on my end? Thank you.
I didn't know this project had anything to do with Serde.
The documentation shows this crate being used to bind a TCP socket to two addresses. This doesn't work on any OS that I'm familiar with: the second bind returns EINVAL since the socket is already bound. I suggest providing a different example, as the current one led me to believe that this crate had higher-level functionality that would abstract the use of multiple sockets to listen on multiple addresses.
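A sketch of what the example presumably intended, using only std types (the function name is my own): listening on several addresses requires one socket per address.

```rust
use std::net::TcpListener;

// Sketch only: listening on several addresses needs one socket per
// address; binding a single socket a second time fails with EINVAL.
fn listen_all(addrs: &[&str]) -> std::io::Result<Vec<TcpListener>> {
    addrs.iter().map(|a| TcpListener::bind(a)).collect()
}

fn main() -> std::io::Result<()> {
    // Port 0 asks the OS for an ephemeral port on each listener.
    let listeners = listen_all(&["127.0.0.1:0", "127.0.0.1:0"])?;
    for l in &listeners {
        println!("listening on {}", l.local_addr()?);
    }
    Ok(())
}
```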
https://docs.rs/socket2/0.3.12/socket2/struct.Domain.html
https://docs.rs/socket2/0.3.13/socket2/struct.Domain.html
Functions missing between these two versions:
socket2::Domain::ipv4()
socket2::Domain::ipv6()
I'm trying to use socket2 to shut down a socket in an accept loop from another thread:
use std::net::{TcpListener, Shutdown};
use socket2::Socket;
fn main() -> Result<(), Box<dyn std::error::Error>> {
let listener = TcpListener::bind("localhost:0")?;
println!("listening on {}", listener.local_addr()?);
let socket: Socket = listener.try_clone()?.into();
let handle = std::thread::spawn(move || {
for conn in listener.incoming() {
if conn.is_err() {
break;
}
println!("got connection");
}
println!("socket shut down");
});
socket.shutdown(Shutdown::Read)?;
drop(socket);
handle.join().unwrap();
Ok(())
}
The code works as expected on Linux (playground). On Windows, I get the following error message from the call to shutdown:
Error: Os { code: 10057, kind: NotConnected, message: "A request to send or receive data was disallowed because the socket is not connected and (when sending on a datagram socket using a sendto call) no address was supplied." }
socket shut down
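A portable workaround (my sketch, not an official recommendation): rather than calling shutdown on a listening socket, which Windows rejects with WSAENOTCONN because a listener is never "connected", set a stop flag and wake the blocked accept loop with a throwaway local connection:

```rust
use std::net::{TcpListener, TcpStream};
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;

// Stop a blocking accept loop without shutting the listener down:
// set a flag, then make one throwaway local connection to wake accept().
fn run() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:0")?;
    let addr = listener.local_addr()?;
    let stop = Arc::new(AtomicBool::new(false));
    let stop_seen = Arc::clone(&stop);
    let handle = thread::spawn(move || {
        for conn in listener.incoming() {
            // A real server would handle `conn`; here we only watch the flag.
            if stop_seen.load(Ordering::SeqCst) || conn.is_err() {
                break;
            }
        }
    });
    stop.store(true, Ordering::SeqCst);
    let _wake = TcpStream::connect(addr)?; // unblocks the accept loop
    handle.join().unwrap();
    Ok(())
}

fn main() -> std::io::Result<()> {
    run()
}
```

This avoids the platform difference entirely at the cost of one extra local connection.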
Due to #120. If the standard library ever does change the memory layout of SocketAddr, versions of this crate before 0.3.16 will have undefined behaviour and can segfault. And even if the standard library never changes the layout of those types, those older versions of this crate still make assumptions they should not.
This is exactly what yanking is for: marking a release as "should not be used". I think all versions of socket2 from 0.3.0 through 0.3.15 should be yanked from crates.io.
Related issues on other crates having the same problem: deprecrated/net2-rs#107, tokio-rs/mio#1412
Hello, I'm the maintainer of Mio and I want to support some additional API around sockets. To not duplicate the effort around raw socket handling, I think socket2 would be a good fit.
However, I do have some questions. What is the status of maintenance of this crate? I've read @alexcrichton's message some time ago that he'll be taking a step back from Rust (sorry to see you go, and thanks for all you've done!). I'm willing to help maintain this crate if it's a good fit for Mio.
I'm also looking to change the API a bit. I've been doing some experiments here: https://github.com/Thomasdezeeuw/socket2, diff: master...Thomasdezeeuw:master. The basic idea follows the same philosophy as Mio's OS-specific API: make setters only available on OSes that support them, and getters available on all platforms (returning a negative/false value if not supported). For example, mio::Interest::LIO is only available on FreeBSD, but its getter is available on all platforms.
Furthermore, I would like to structure the code a bit differently: moving the struct definitions into the same file as the commonly supported API, while moving OS-specific additional API to the sys/ dir. For example, the common API for Domain: https://github.com/Thomasdezeeuw/socket2/blob/279c289e0af232167c7b67560b6c64e60ef0069c/src/lib.rs#L63-L112. And the Unix-specific API is located in sys/unix.rs: https://github.com/Thomasdezeeuw/socket2/blob/279c289e0af232167c7b67560b6c64e60ef0069c/src/sys/unix.rs#L32-L44.
Let me know what you guys think.
This library casts std::net::SocketAddrV4 (and V6) into libc::sockaddr: https://github.com/rust-lang/socket2-rs/blob/b0f77842b3357dd571bb86b30b354bc55215f693/src/sockaddr.rs#L95-L115
As far as I can tell there are no guarantees from std about the layout of SocketAddrV{4,6}, and this code could silently compile and cause UB elsewhere if the representation changes.
This internals forum thread is where this discussion started: https://internals.rust-lang.org/t/why-are-socketaddrv4-socketaddrv6-based-on-low-level-sockaddr-in-6/13321
mio does the same kind of invalid casting: tokio-rs/mio#1386
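One direction a fix could take (a sketch under my own assumptions; the struct below merely mirrors the common C sockaddr_in layout for illustration, and real code should use libc::sockaddr_in and libc::AF_INET): build the C struct field by field from SocketAddrV4's public accessors instead of pointer-casting the std type and relying on its layout.

```rust
use std::net::{Ipv4Addr, SocketAddrV4};

// Illustration only: a repr(C) mirror of the common `sockaddr_in`
// layout. Real code should use `libc::sockaddr_in`; the field names
// and AF_INET value below are assumptions for this sketch.
#[repr(C)]
struct CSockaddrIn {
    sin_family: u16,
    sin_port: u16, // network byte order
    sin_addr: u32, // network byte order
    sin_zero: [u8; 8],
}

// Build the C struct field by field from the public accessors instead
// of pointer-casting &SocketAddrV4 and hoping the layouts match.
fn to_c(addr: &SocketAddrV4) -> CSockaddrIn {
    CSockaddrIn {
        sin_family: 2, // AF_INET on Linux; use the libc constant in real code
        sin_port: addr.port().to_be(),
        sin_addr: u32::from(*addr.ip()).to_be(),
        sin_zero: [0; 8],
    }
}

fn main() {
    let addr = SocketAddrV4::new(Ipv4Addr::new(127, 0, 0, 1), 8080);
    let c = to_c(&addr);
    println!("port in network order: {:#06x}", c.sin_port);
}
```

The cost is a copy per conversion, but the conversion stays sound even if std reorders or repacks its fields.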
I'm finding a curious issue when running 'cargo build' on my application with socket2-rs v0.3.10. When compiling my app, at the point where cargo builds the socket2-rs dependency, cargo comes back with:
Compiling socket2 v0.3.10
error: cannot find macro `__cfg_if_items!` in this scope
--> /Users/akilroy/.cargo/registry/src/github.com-1ecc6299db9ec823/socket2-0.3.10/src/sys/unix.rs:27:1
|
27 | / cfg_if::cfg_if! {
28 | | if #[cfg(any(target_os = "dragonfly", target_os = "freebsd",
29 | | target_os = "ios", target_os = "macos",
30 | | target_os = "openbsd", target_os = "netbsd",
... |
37 | | }
38 | | }
| |_^
|
= note: this error originates in a macro outside of the current crate (in Nightly builds, run with -Z external-macro-backtrace for more info)
[snipped, more errors below this]
The dependencies in my Cargo.toml:
[dependencies]
clap = "2.33"
socket2 = { version = "0.3.10", features = ["reuseport"] }
base64 = "0.10"
failure = "0.1"
exitfailure = "0.5"
structopt = "0.2"
When cloning the socket2-rs repository, checking out 0.3.10 and running 'cargo build --features reuseport' there is no problem:
$ cargo clean ; cargo build
Compiling libc v0.2.60
Compiling cfg-if v0.1.9
Compiling socket2 v0.3.10 (/Users/akilroy/workspace/dev/rust/socket2-rs)
Finished dev [unoptimized + debuginfo] target(s) in 2.54s
In alexcrichton/mio-uds, all socket types are constructed from references to Path. This is to match the API of std's unix socket types.
It'd be great if socket2 would expose extra methods to bind and connect on Path types when the unix feature is enabled. This would save some unsafe code and boilerplate for consumers implementing unix socket types.
Thanks heaps!
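For comparison, this is the std API the issue wants socket2 to match: bind taking a path directly, with no manual sockaddr_un construction (the path below is an arbitrary demo path, and this sketch is Unix-only).

```rust
use std::fs;
use std::os::unix::net::UnixListener;
use std::path::Path;

fn main() -> std::io::Result<()> {
    // std's unix sockets bind straight from a path; the request is for
    // socket2 to offer equivalent bind/connect methods on Path types.
    let path = Path::new("/tmp/socket2-path-demo.sock"); // arbitrary demo path
    let _ = fs::remove_file(path); // ignore "not found" from earlier runs
    let listener = UnixListener::bind(path)?;
    println!("bound: {:?}", listener.local_addr()?);
    drop(listener);
    fs::remove_file(path)
}
```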
Currently, Unix/Windows-specific API is added in the sys/*.rs files. However, the Socket API has recently been split into groups, e.g. options in SOL_SOCKET, IPPROTO_IP, etc. I think it would be better to just put the API in src/socket.rs, where most (potential) contributors would expect it.