containers / aardvark-dns
Authoritative DNS server for A/AAAA container records. Forwards other requests to the host's /etc/resolv.conf.
License: Apache License 2.0
Test is failing on this branch daily. Example log:
not ok 1 basic container - dns itself
# (from function `die' in file test/helpers.bash, line 99,
# from function `setup_slirp4netns' in file test/helpers.bash, line 517,
# in test file test/100-basic-name-resolution.bats, line 9)
# `setup_slirp4netns' failed
# /usr/bin/slirp4netns
# nsenter -m -n -t 1175 mount --bind /tmp/aardvark_bats.SEDh6J/resolv.conf /etc/resolv.conf
# nsenter -m -n -t 1175 ip addr
# 1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
# link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
# nsenter -m -n -t 1175 ip addr
# 1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
# link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
# nsenter -m -n -t 1175 ip addr
# 1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
# link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
# nsenter -m -n -t 1175 ip addr
# 1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
# link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
# nsenter -m -n -t 1175 ip addr
# 1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
# link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
# sent tapfd=7 for tap0
# received tapfd=7
# #/vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
# #| FAIL: Timed out waiting for slirp4netns to start
# #\^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# 1175
This is a copy of an issue from containers/netavark#247 because it also happens when building aardvark-dns.
Building from the archives provided on the Releases page fails due to the lack of repository-related information. More specifically, vergen fails to embed the git commit checksum since no git repository exists.
make (or cargo build):
[...]
Compiling git2 v0.13.25
Compiling aardvark-dns v1.0.1 (/aardvark-dns-1.0.1)
error: failed to run custom build command for `aardvark-dns v1.0.1 (/aardvark-dns-1.0.1)`
Caused by:
process didn't exit successfully: `/aardvark-dns-1.0.1/targets/release/build/aardvark-dns-55f410ec0c6374ff/build-script-build` (exit status: 1)
--- stderr
Error: could not find repository from '/aardvark-dns-1.0.1'; class=Repository (6); code=NotFound (-3)
make: *** [Makefile:47: build] Error 101
By removing "git" from the list of vergen features in Cargo.toml and manually setting the VERGEN_GIT_SHA environment variable, I was able to build it successfully.
$ sed -i 's/, "git"//' Cargo.toml
$ env VERGEN_GIT_SHA="" make
Result:
$ ./bin/aardvark-dns version
{
  "version": "1.0.1",
  "commit": "",
  "build_time": "2022-02-25T21:29:25.854856292+00:00",
  "target": "x86_64-alpine-linux-musl"
}
Name : aardvark-dns
Version : 1.0.2
Release : 1.el8
Architecture: x86_64
{
  "name": "dual",
  "id": "2697203bf4180da9e7a6d074e38cbafb2fad4c8a3436522bde4ac573c059caa6",
  "driver": "bridge",
  "network_interface": "podman1",
  "created": "2022-08-24T04:03:37.236675178-05:00",
  "subnets": [
    {
      "subnet": "192.168.227.0/24",
      "gateway": "192.168.227.1"
    },
    {
      "subnet": "fdf8:192:168:227::/120",
      "gateway": "fdf8:192:168:227::1"
    }
  ],
  "ipv6_enabled": true,
  "internal": false,
  "dns_enabled": true,
  "ipam_options": {
    "driver": "host-local"
  }
}
[root@foo /]# cat /etc/resolv.conf
search dns.podman
nameserver 192.168.227.1
nameserver fdf8:192:168:227::1
nslookup complains "Got recursion not available from 192.168.227.1, trying next server":
[root@foo /]# nslookup bar
;; Got recursion not available from 192.168.227.1, trying next server
;; connection timed out; no servers could be reached
[root@foo /]#
dig also complains "WARNING: recursion requested but not available":
[root@foo /]# dig bar
; <<>> DiG 9.11.36-RedHat-9.11.36-3.el8 <<>> bar
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 13400
;; flags: qr rd ad; QUERY: 1, ANSWER: 4, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: b8dbf9748e7ba467 (echoed)
;; QUESTION SECTION:
;bar. IN A
;; ANSWER SECTION:
bar. 86400 IN A 192.168.227.9
bar. 86400 IN AAAA fdf8:192:168:227::9
bar. 86400 IN A 192.168.227.9
bar. 86400 IN AAAA fdf8:192:168:227::9
;; Query time: 0 msec
;; SERVER: 192.168.227.1#53(192.168.227.1)
;; WHEN: Fri Aug 26 10:23:45 UTC 2022
;; MSG SIZE rcvd: 132
[root@foo /]#
There are way too many uses of .unwrap() here:
$ grep -R "unwrap()" src/
src/backend/mod.rs: if name.len() > 0 && name.chars().last().unwrap() == '.' {
src/config/mod.rs: if cfg.path().file_name().unwrap() == constants::AARDVARK_PID_FILE {
src/config/mod.rs: new_ctr_ips.push(IpAddr::V4(entry.v4.unwrap()));
src/config/mod.rs: new_ctr_ips.push(IpAddr::V6(entry.v6.unwrap()));
src/dns/coredns.rs: let name: Name = Name::parse(name, None).unwrap();
src/dns/coredns.rs: let origin: Name = Name::from_str_relaxed(name).unwrap();
src/dns/coredns.rs: match v.unwrap() {
src/dns/coredns.rs: let (name, record_type, mut req) = parse_dns_msg(msg).unwrap();
src/dns/coredns.rs: self.kill_switch.lock().unwrap()
src/dns/coredns.rs: request_name = request_name.strip_suffix(&self.filter_search_domain).unwrap().to_string();
src/dns/coredns.rs: request_name = request_name.strip_suffix(&filter_domain_ndots_complete).unwrap().to_string();
src/dns/coredns.rs: let record_name: Name = Name::from_str_relaxed(name.as_str()).unwrap();
src/dns/coredns.rs: reply(sender.clone(), src_address, &nx_message).unwrap();
src/dns/coredns.rs: let record_name: Name = Name::from_str_relaxed(name.as_str()).unwrap();
src/server/serve.rs: let mut switch = kill_switch.lock().unwrap();
src/server/serve.rs: let _ = handle.join().unwrap();
src/server/serve.rs: tx.broadcast(true).await.unwrap();
src/server/serve.rs: let address = address_string.parse().unwrap();
src/server/serve.rs: let conn = UdpClientConnection::with_timeout(address, Duration::from_millis(5)).unwrap();
src/server/serve.rs: let name = Name::from_str("anything.").unwrap();
We should always log a useful error and only exit when absolutely necessary. Aardvark is a daemon; it should never die because of a small error.
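As a sketch of the direction this could take (names are illustrative, not the actual aardvark-dns API), the `strip_suffix(...).unwrap()` call sites in coredns.rs could log and fall back instead of panicking:

```rust
/// Illustrative sketch only: mirrors the `strip_suffix(...).unwrap()`
/// call sites in coredns.rs, but logs and keeps serving on a mismatch
/// instead of panicking the whole daemon.
fn strip_search_domain(request_name: &str, filter: &str) -> String {
    match request_name.strip_suffix(filter) {
        Some(stripped) => stripped.to_string(),
        None => {
            // A real implementation would use the `log` crate here.
            eprintln!(
                "expected '{}' to end with '{}', leaving the name unchanged",
                request_name, filter
            );
            request_name.to_string()
        }
    }
}
```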
Reproducer:
$ podman system reset --force
$ podman network create
$ podman run -dt --rm --name baude --network podman1 alpine top
$ podman run -it --rm --network podman1 ping baude
Ctrl-C after it pings, then wait.
strace shows:
futex(0x7f99f87ac910, FUTEX_WAIT_BITSET|FUTEX_CLOCK_REALTIME, 130307, NULL, FUTEX_BITSET_MATCH_ANY
We would like to get rid of the rust-webpki package in Debian as it's abandoned upstream and has a security issue. One of the blockers for doing so is updating the trust-dns-* crates.
Your package uses some of the trust-dns crates, though it doesn't appear to use the features that depend on webpki. Still, it would be preferable to get it up to date. I see a bot has already bumped the dependency on trust-dns-server, but has left the dependencies on trust-dns-client and trust-dns-proto alone. This seems suboptimal to say the least, as it means your builds now include two different versions of trust-dns-proto.
After bumping the dependency I get a bunch of errors like
error[E0308]: mismatched types
--> src/dns/coredns.rs:224:83
|
224 | ... .set_data(Some(RData::PTR(answer)))
| ---------- ^^^^^^ expected struct `trust_dns_client::rr::rdata::PTR`, found struct `Name`
| |
| arguments to this enum variant are incorrect
|
note: tuple variant defined here
--> /<redacted>/.cargo/registry/src/github.com-1ecc6299db9ec823/trust-dns-proto-0.23.0/src/rr/record_data.rs:481:5
|
481 | PTR(PTR),
| ^^^
help: try wrapping the expression in `trust_dns_client::rr::rdata::PTR`
|
224 | .set_data(Some(RData::PTR(trust_dns_client::rr::rdata::PTR(answer))))
| +++++++++++++++++++++++++++++++++ +
These can be fixed by following the suggestion in the error message.
However, I also get another error:
error[E0596]: cannot borrow data in dereference of `DnsResponse` as mutable
--> src/dns/coredns.rs:431:13
|
431 | response.set_id(id);
| ^^^^^^^^^^^^^^^^^^^ cannot borrow as mutable
|
= help: trait `DerefMut` is required to modify through a dereference, but it is not implemented for `DnsResponse`
Which I have no clue how to fix.
aardvark-dns should be improved as the title mentions; more details can be found in containers/netavark#855.
When I run reverse lookups inside podman in docker compose with aardvark-dns, resolving A records works, but reverse lookups do not with hickory DNS.
I think the issue is that in the answer section the domain is just "." instead of the queried domain, but my knowledge of the inner workings of DNS is limited, so I might be wrong here.
root@1e714d3fa668:/# dig -x 10.89.0.63
; <<>> DiG 9.18.24-1-Debian <<>> -x 10.89.0.63
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 48549
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 4, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
; COOKIE: 966cea63d68458dc (echoed)
;; QUESTION SECTION:
;63.0.89.10.in-addr.arpa. IN PTR
;; ANSWER SECTION:
. 60 IN PTR project-container-1.
. 60 IN PTR project-container-1.
. 60 IN PTR container.
. 60 IN PTR 190053e6f903.
;; Query time: 0 msec
;; SERVER: 10.89.0.1#53(10.89.0.1) (UDP)
;; WHEN: Tue Feb 27 16:39:01 UTC 2024
;; MSG SIZE rcvd: 160
When I run the same with docker, the answer section looks like this:
;; ANSWER SECTION:
8.0.20.172.in-addr.arpa. 600 IN PTR project-container-1.project-network.
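If the owner name in the answer really is the problem, the fix would be to echo the queried reverse name back. A minimal sketch of building that name from an IPv4 address (illustrative only, not the aardvark-dns code):

```rust
use std::net::Ipv4Addr;

/// Build the in-addr.arpa owner name a PTR answer for `ip` should carry,
/// i.e. what the ANSWER section should show instead of ".".
fn ptr_owner(ip: Ipv4Addr) -> String {
    let o = ip.octets();
    // Reverse lookups use the octets in reverse order.
    format!("{}.{}.{}.{}.in-addr.arpa.", o[3], o[2], o[1], o[0])
}
```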
Hello!
I've been playing with Aardvark (1.0.2) with Podman (4.0.3) and I couldn't discover why Nginx's DNS client on a custom bridge network was failing to resolve DNS names. Nginx was complaining about an "unexpected A record"; upon digging, it appears Aardvark may be returning duplicate records that Nginx couldn't handle.
Here's an example - here "cyberchef" is another container on the custom bridge network:
# dig cyberchef
; <<>> DiG 9.16.27-Debian <<>> cyberchef
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 31292
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;cyberchef. IN A
;; ANSWER SECTION:
cyberchef. 600 IN A 172.18.0.3
cyberchef. 600 IN A 172.18.0.3
;; Query time: 0 msec
;; SERVER: 10.89.0.1#53(10.89.0.1)
;; WHEN: Wed Apr 20 15:26:43 UTC 2022
;; MSG SIZE rcvd: 52
The same behavior appears regardless of whether the shortname ("cyberchef") or an FQDN ("cyberchef.dns.podman") is used.
This behavior, while not deal-breaking (some clients handle it gracefully), can cause unexpected behavior (as evidenced by my nginx example). It's also different from the default Docker behavior, where only a single record is returned.
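One plausible mitigation, sketched under the assumption that the duplicates are the same address recorded twice rather than two distinct entries, is to deduplicate addresses before building the reply (illustrative code, not the aardvark-dns implementation):

```rust
use std::collections::HashSet;
use std::net::IpAddr;

/// Drop duplicate addresses before building the DNS reply,
/// preserving the original order of first appearance.
fn dedup_addresses(ips: Vec<IpAddr>) -> Vec<IpAddr> {
    let mut seen = HashSet::new();
    // `insert` returns false for addresses already seen, filtering them out.
    ips.into_iter().filter(|ip| seen.insert(*ip)).collect()
}
```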
I'm happy to provide more information as needed.
The last packit task to update the Fedora package repo didn't quite work as expected. Specifically, rpm/update-spec-provides.sh didn't seem to have generated the expected output. See: https://dashboard.packit.dev/results/propose-downstream/2821 . Packit folks have made some updates to the packit service, and to try this out I need an issue. So here's the issue, filed.
It needs to use the dns-options, dns nameservers and the dns-searches.
If you set invalid options in /etc/resolv.conf, aardvark-dns becomes unresponsive. It keeps running, but gives no error even with RUST_LOG=trace.
Note: these options are from Oracle Solaris; setting them on a RHEL-based OS will not prevent DNS requests.
This is an example file: /etc/resolv.conf
search this.is.dumb dont.do.this unless.you.want your.queries.to.fail like.this
options retrans:3 retry:1
nameserver 8.8.8.8
aardvark-dns starts:
RUST_LOG=trace /usr/libexec/podman/aardvark-dns --config /run/containers/networks/custom-dns -p 4343 run
ps aux
root 6587 0.0 0.0 276552 220 ? Ssl 17:25 0:00 /usr/libexec/podman/aardvark-dns --config /run/containers/networks/backup-dns -p 4343 run
However, it does not respond to any queries:
dig @127.0.0.1 -p 4343 google.com
; <<>> DiG 9.16.23-RH <<>> @127.0.0.1 -p 4343 google.com
; (1 server found)
;; global options: +cmd
;; connection timed out; no servers could be reached
No errors or anything about it found in syslog:
cat /var/log/messages | grep dns
But shows other logs from previous testing, for example:
aardvark-dns[5506]: Unable to start server unable to start CoreDns server: Cannot assign requested address (os error 99)
aardvark-dns[5758]: Unable to start server unable to start CoreDns server: Address already in use (os error 98)
If you remove the bogus options from the /etc/resolv.conf file it works again. E.g. with this /etc/resolv.conf it will respond to queries as expected:
search this.is.dumb dont.do.this unless.you.want your.queries.to.fail like.this
nameserver 8.8.8.8
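Whatever the root cause, a forwarder should arguably collect the nameserver entries and skip directives it does not understand rather than stall. A minimal sketch of that tolerance (illustrative only; not how aardvark-dns actually parses resolv.conf):

```rust
use std::net::IpAddr;

/// Collect upstream nameservers from resolv.conf content, ignoring
/// `search`, `options`, comments, and anything else we don't understand.
fn upstream_servers(conf: &str) -> Vec<IpAddr> {
    conf.lines()
        .filter_map(|line| {
            let mut fields = line.split_whitespace();
            match fields.next() {
                // Only `nameserver <addr>` lines contribute servers.
                Some("nameserver") => fields.next().and_then(|a| a.parse().ok()),
                // Unknown directives must not stall the server.
                _ => None,
            }
        })
        .collect()
}
```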
Tested:
aardvark-dns 1.7.0 (Podman package RHEL-based)
aardvark-dns 1.9.0 (GitHub Releases)
This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
Cargo.toml
clap ~4.4.10
syslog ^6.1.1
log 0.4.21
hickory-server 0.24.1
hickory-proto 0.24.1
hickory-client 0.24.1
anyhow 1.0.86
futures-util 0.3.30
signal-hook 0.3.17
tokio 1.38.0
resolv-conf 0.7.0
nix 0.29.0
libc 0.2.154
arc-swap 1.7.1
flume 0.11.0
chrono 0.4.38
.github/workflows/check_cirrus_cron.yml
.github/workflows/rerun_cirrus_cron.yml
.cirrus.yml
containers/automation_images 20240529t141726z-f40f39d13
We have a requirement to have both static and dynamic IP addresses when our containers are run. We'd like to limit the dynamic IP addresses to a range so the static IP addresses aren't assigned to the dynamic IP address containers if the dynamic containers are run before the static ones.
In #312 the response TTL was reduced drastically, but even 60 seconds is too long if it's being used for load balancing. I would like to be able to configure the TTL to something more like 5 seconds or even turn it off entirely.
The inbuilt resolver returns IPv4 and IPv6 correctly for A or AAAA requests, but it should return both when the request type is ANY.
An ANY request is not common, but certain libraries and tools use it to get both v6 and v4 records, so aardvark-dns should support it.
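The desired selection logic could look roughly like this (hypothetical types; the real server works on trust-dns/hickory record types, so this is only a sketch of the intent):

```rust
use std::net::{IpAddr, Ipv4Addr, Ipv6Addr};

/// Hypothetical query types; stand-ins for the real record-type enum.
enum QueryType {
    A,
    Aaaa,
    Any,
}

/// For ANY, return both address families instead of just one.
fn select_answers(qtype: &QueryType, v4: &[Ipv4Addr], v6: &[Ipv6Addr]) -> Vec<IpAddr> {
    let v4_iter = v4.iter().copied().map(IpAddr::V4);
    let v6_iter = v6.iter().copied().map(IpAddr::V6);
    match qtype {
        QueryType::A => v4_iter.collect(),
        QueryType::Aaaa => v6_iter.collect(),
        QueryType::Any => v4_iter.chain(v6_iter).collect(),
    }
}
```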
Hi! Similar to containers/netavark#231 it would be great to get a statement in regards to the future use of PGP signatures and maintaining a chain of trust for it, so that downstreams may rely on it. Thanks! :)
not ok 10 three networks with a connect
# (from function `assert' in file test/helpers.bash, line 197,
# in test file test/300-three-networks.bats, line 50)
# `assert "$a2_ip"' failed
# nsenter -m -n -t 19461 /usr/libexec/podman/netavark --config /tmp/aardvark_bats.nnp08X -a /usr/libexec/podman/aardvark-dns setup /proc/19494/ns/net
# {"podman1":{"dns_search_domains":["dns.podman"],"dns_server_ips":["10.223.153.1"],"interfaces":{"eth0":{"mac_address":"72:33:b6:e6:c3:6f","subnets":[{"gateway":"10.223.153.1","ipnet":"10.223.153.129/24"}]}}}}
# nsenter -m -n -t 19461 /usr/libexec/podman/netavark --config /tmp/aardvark_bats.nnp08X -a /usr/libexec/podman/aardvark-dns setup /proc/19581/ns/net
# {"podman2":{"dns_search_domains":["dns.podman"],"dns_server_ips":["10.25.37.1"],"interfaces":{"eth0":{"mac_address":"32:87:6e:38:57:0b","subnets":[{"gateway":"10.25.37.1","ipnet":"10.25.37.210/24"}]}}}}
# nsenter -m -n -t 19461 /usr/libexec/podman/netavark --config /tmp/aardvark_bats.nnp08X -a /usr/libexec/podman/aardvark-dns setup /proc/19658/ns/net
# {"podman1":{"dns_search_domains":["dns.podman"],"dns_server_ips":["10.223.153.1"],"interfaces":{"eth0":{"mac_address":"52:a5:e6:d9:25:bf","subnets":[{"gateway":"10.223.153.1","ipnet":"10.223.153.129/24"}]}}},"podman2":{"dns_search_domains":["dns.podman"],"dns_server_ips":["10.25.37.1"],"interfaces":{"eth1":{"mac_address":"12:42:27:d8:e3:4e","subnets":[{"gateway":"10.25.37.1","ipnet":"10.25.37.62/24"}]}}}}
# nsenter -n -t 19494 dig +short abtwo @10.223.153.1
# 10.223.153.129
# 10.223.153.129
# 10.25.37.62
# #/vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
# #| FAIL: [no test name given]
# #| expected: '10.223.153.129'
# #| actual: '10.223.153.129'
# #| > '10.223.153.129'
# #| > '10.25.37.62'
# #\^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Seems to fail in CI, but it also reproduces locally. Not sure why this is racy here.
I am using podman as a docker replacement on our gitlab-runner host. I have a 40-container concurrency limit, and when I start my tests, I get DNS resolution errors.
Testing environment:
While running tests, I get random dns resolution fail errors inside containers (actual host replaced with host.example.tld):
Example 1:
Cloning into 'spec/fixtures/modules/yumrepo_core'...
ssh: Could not resolve hostname host.example.tld: Temporary failure in name resolution
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
Example 2:
$ bundle install -j $(nproc)
Fetching gem metadata from https://host.example.tld/nexus/repository/GroupRubyGems/..
Fetching gem metadata from https://host.example.tld/nexus/repository/GroupRubyGems/..
Could not find gem 'beaker (~> 5)' in any of the gem sources listed in your
Example 3:
Initialized empty Git repository in /builds/puppet/freeradius/.git/
Created fresh repository.
fatal: unable to access 'https://host.example.tld/puppet/freeradius.git/': Could not resolve host: host.example.tld
Cleaning up project directory and file based variables
This does not happen in every container; it's sporadic and random. If I switch back to the cni backend, it works without errors.
I tried running up to 8 containers and flooding the dns server with dns lookups, but I could not get a DNS resolution error. Will try to ramp that up to 30-40 and see if I can reproduce.
If anyone has an idea how to debug this, I will gladly look into it as far as my knowledge allows.
I've set up a nextcloud deployment with podman in combination with docker-compose. It's working fine mostly and dns is also generally working. But around every 30s I see connection issues between containers. I've created a test script which queries the database every second, and I often see connection errors (SQLSTATE[HY000] [2002] php_network_getaddresses: getaddrinfo failed: Try again). journalctl shows the following output at these times:
aardvark-dns[70887]: Failed while parsing message: unexpected end of input reached
aardvark-dns[70887]: None received while parsing dns message, this is not expected server will ignore this message
Any ideas?
System details
podman
systemctl --user enable podman.socket
systemctl --user start podman.socket
sudo dnf install -y podman podman-plugins
podman network create nginx-proxy
docker-compose up
docker-compose.yml (it's app which sometimes cannot connect to db and redis):
version: '3'
services:
  db:
    image: docker.io/library/mariadb:10.5
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
    restart: always
    volumes:
      - db:/var/lib/mysql
    env_file:
      - db.env
  redis:
    image: docker.io/library/redis:7.0-alpine
    restart: always
  app:
    image: docker.io/library/nextcloud:24-fpm-alpine
    restart: always
    security_opt:
      - label=disable
    volumes:
      - nextcloud:/var/www/html
    environment:
      - MYSQL_HOST=db
      - REDIS_HOST=redis
    env_file:
      - db.env
    depends_on:
      - db
      - redis
  web:
    build: ./web
    restart: always
    security_opt:
      - label=disable
    volumes:
      - nextcloud:/var/www/html:ro
    depends_on:
      - app
    ports:
      - "8082:8080"
    environment:
      VIRTUAL_HOST: REDACTED
      LETSENCRYPT_HOST: REDACTED
      LETSENCRYPT_EMAIL: REDACTED
  cron:
    image: docker.io/library/nextcloud:24-fpm-alpine
    restart: always
    security_opt:
      - label=disable
    volumes:
      - nextcloud:/var/www/html
    entrypoint: /cron.sh
    depends_on:
      - db
      - redis
volumes:
  nextcloud:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: "${PWD}/volumes/nextcloud"
  db:
networks:
  default:
    name: nginx-proxy
    external: true
Host: VPS running Debian Sid
Container: NextCloud:stable (from docker.io, but also affects all other containers on same host)
I did use docker initially, but migrated to podman.
I am using podman-compose up -d to run the containers, and they start on boot-up of the host.
aardvark-dns is running on 10.8.1.1 and, as shown below, is working correctly to resolve DNS requests from the host.
I have one container where I have managed to get DNS working (I can't remember exactly how I got /etc/resolv.conf 'locked in' inside that container, but it's a custom docker container I have built), but I would like to resolve why my other containers can't.
None of the containers have a 'network' stanza in the docker-compose.yml.
I have tried podman network update --dns-add, amongst numerous other things which I'm too tired right now to recall (it's 03:25 where I am).
I'm running out of things I can think of to 'google' for.
$ dig apps.nextcloud.com @10.8.1.1
; <<>> DiG 9.19.21-1+b1-Debian <<>> apps.nextcloud.com @10.8.1.1
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 62660
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1410
;; QUESTION SECTION:
;apps.nextcloud.com. IN A
;; ANSWER SECTION:
apps.nextcloud.com. 2033 IN A 176.9.217.53
;; Query time: 24 msec
;; SERVER: 10.8.1.1#53(10.8.1.1) (UDP)
;; WHEN: Tue Apr 23 16:22:06 UTC 2024
;; MSG SIZE rcvd: 63
$ curl https://apps.nextcloud.com
<Lots of output>
$ podman exec -it cloud cat /etc/resolv.conf
search dns.podman
nameserver 10.8.1.1
$ podman exec -it cloud curl https://apps.nextcloud.com
curl: (6) Could not resolve host: apps.nextcloud.com
curl ...
Apr 23 17:16:20 <REDACTED> podman[323035]: 2024-04-23 17:16:20.930295082 +0000 UTC m=+0.707901763 container exec 7b69bae96e2676244174cb39dc62bd2945690cadd0f4c89c1a22a56f8ae48941 (image=docker.io/library/nextcloud:stable, name=cloud, [email protected], com.docker.compose.project=docker, io.podman.compose.project=docker, com.docker.compose.container-number=1, com.docker.compose.project.config_files=docker-compose.yml, io.podman.compose.config-hash=f82ac3baaa4440712fe5b223698bc15986b2531b21b4e3917da914b72df39c1a, io.podman.compose.version=1.0.6, com.docker.compose.project.working_dir=/docker, com.docker.compose.service=nextcloud)
Apr 23 17:16:40 <REDACTED> podman[323062]: 2024-04-23 17:16:40.952167881 +0000 UTC m=+0.064408201 container exec_died 7b69bae96e2676244174cb39dc62bd2945690cadd0f4c89c1a22a56f8ae48941 (image=docker.io/library/nextcloud:stable, name=cloud, com.docker.compose.service=nextcloud, com.docker.compose.container-number=1, io.podman.compose.version=1.0.6, com.docker.compose.project.working_dir=/docker, io.podman.compose.config-hash=f82ac3baaa4440712fe5b223698bc15986b2531b21b4e3917da914b72df39c1a, io.podman.compose.project=docker, com.docker.compose.project.config_files=docker-compose.yml, [email protected], com.docker.compose.project=docker)
Apr 23 17:16:41 <REDACTED> podman[323035]: 2024-04-23 17:16:41.248378177 +0000 UTC m=+21.025984868 container exec_died 7b69bae96e2676244174cb39dc62bd2945690cadd0f4c89c1a22a56f8ae48941 (image=docker.io/library/nextcloud:stable, name=cloud, [email protected], com.docker.compose.project=docker, io.podman.compose.config-hash=f82ac3baaa4440712fe5b223698bc15986b2531b21b4e3917da914b72df39c1a, com.docker.compose.project.config_files=docker-compose.yml, io.podman.compose.project=docker, io.podman.compose.version=1.0.6, com.docker.compose.container-number=1, com.docker.compose.project.working_dir=/docker, com.docker.compose.service=nextcloud)
Any, and all, assistance would be greatly appreciated.
Hello,
I noticed that your project has an ASL2 license as per Cargo.toml, although there is no LICENSE file in the repository.
There also doesn't appear to be a code of conduct, or other standard documentation.
Thank you,
-steve
This is kind of a requirement.
From the host's POV, it usually looks up /etc/hosts first, then sends requests to the servers in /etc/resolv.conf.
Similarly, from the container's POV, it looks up /etc/hosts first, then sends requests to aardvark-dns.
Ideally, aardvark-dns should look up its own config file first, then the host's /etc/hosts, and then forward the request to the host's /etc/resolv.conf.
This matters especially for non-DNS users, who don't configure /etc/resolv.conf but just add some entries to /etc/hosts on the host.
Adding such logic of looking up the host's /etc/hosts before forwarding requests to the host's /etc/resolv.conf would benefit those non-DNS users.
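The /etc/hosts step of that lookup chain is straightforward to sketch (illustrative only; not the aardvark-dns implementation):

```rust
use std::collections::HashMap;
use std::net::IpAddr;

/// Parse /etc/hosts-style content into a name -> addresses map.
fn parse_hosts(content: &str) -> HashMap<String, Vec<IpAddr>> {
    let mut map: HashMap<String, Vec<IpAddr>> = HashMap::new();
    for line in content.lines() {
        // Strip trailing comments and skip blank lines.
        let line = line.split('#').next().unwrap_or("").trim();
        if line.is_empty() {
            continue;
        }
        let mut fields = line.split_whitespace();
        let addr = match fields.next().and_then(|a| a.parse::<IpAddr>().ok()) {
            Some(a) => a,
            None => continue, // malformed line: skip, don't fail
        };
        // Every remaining field is a hostname or alias for this address.
        for name in fields {
            map.entry(name.to_string()).or_default().push(addr);
        }
    }
    map
}
```

A resolver would consult this map before forwarding the query upstream.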
I'm using arch linux, so the packages should have the newest version.
I'm using firewallD and rootless podman with netavark and aardvark-dns.
I understand that rootless podman with netavark won't manage my firewallD, but I would like to know which rules I need to activate to avoid the spam in my journal, and whether the rule needs to be on my loopback or network interface. (Also whether it is enough to allow communication with the host instead of having an open port to the internet.)
My dns resolver is systemd-resolved
$ ls -lha /etc/resolv.conf
lrwxrwxrwx 1 root root 39 31. Okt 10:22 /etc/resolv.conf -> ../run/systemd/resolve/stub-resolv.conf
My journal spam:
aardvark-dns[6156]: 21433 dns request failed: request timed out
The rootless container itself can ping google.com. I didn't test if they can ping a container dns name.
Packit failed on creating pull-requests in dist-git (https://src.fedoraproject.org/rpms/aardvark-dns.git):
| dist-git branch | error |
|---|---|
| f40 | See https://dashboard.packit.dev/results/propose-downstream/8821 |
You can retrigger the update by adding a comment (/packit propose-downstream) to this issue.
Title says it all. We need to fix the flake.
Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
Description
When a container is connected to multiple networks, only one network can resolve containers by FQDN.
Steps to reproduce the issue:
podman network create --subnet 192.168.55.0/24 network1
podman network create --subnet 192.168.56.0/24 network2
podman run --detach --rm -ti --name container1 --network network1 alpine sleep 9000
podman run --rm -ti --name container2 --network network1,network2 alpine sh -c "cat /etc/resolv.conf; apk add bind-tools > /dev/null; echo '<<<<<<<<<<< network1 dns test'; dig container1.dns.podman @192.168.55.1; echo '<<<<<<<<<<< network2 dns test'; dig container1.dns.podman @192.168.56.1"
podman run --rm -ti --name container2 --network network1,network2 alpine sh -c "cat /etc/resolv.conf; apk add bind-tools > /dev/null; echo '<<<<<<<<<<< network1 dns test'; dig container1 @192.168.55.1; echo '<<<<<<<<<<< network2 dns test'; dig container1 @192.168.56.1"
Describe the results you received:
When resolving the FQDN of container1, only one name server responds correctly:
search dns.podman dns.podman
nameserver 192.168.55.1
nameserver 192.168.56.1
nameserver 192.168.121.1
<<<<<<<<<<< network1 dns test
... (clipped for clarity)
;; QUESTION SECTION:
;container1.dns.podman. IN A
;; ANSWER SECTION:
container1.dns.podman. 86400 IN A 192.168.55.2
;; Query time: 1 msec
;; SERVER: 192.168.55.1#53(192.168.55.1)
;; WHEN: Tue May 24 14:37:03 UTC 2022
;; MSG SIZE rcvd: 78
<<<<<<<<<<< network2 dns test
... (clipped for clarity)
;; QUESTION SECTION:
;container1.dns.podman. IN A
;; Query time: 3 msec
;; SERVER: 192.168.56.1#53(192.168.56.1)
;; WHEN: Tue May 24 14:37:03 UTC 2022
;; MSG SIZE rcvd: 62
When resolving the short name of the container, both name servers respond correctly:
search dns.podman dns.podman
nameserver 192.168.56.1
nameserver 192.168.55.1
nameserver 192.168.121.1
<<<<<<<<<<< network1 dns test
... (clipped for clarity)
;; QUESTION SECTION:
;container1. IN A
;; ANSWER SECTION:
container1. 86400 IN A 192.168.55.2
;; Query time: 2 msec
;; SERVER: 192.168.55.1#53(192.168.55.1)
;; WHEN: Tue May 24 14:38:01 UTC 2022
;; MSG SIZE rcvd: 67
<<<<<<<<<<< network2 dns test
... (clipped for clarity)
;; QUESTION SECTION:
;container1. IN A
;; ANSWER SECTION:
container1. 86400 IN A 192.168.55.2
;; Query time: 3 msec
;; SERVER: 192.168.56.1#53(192.168.56.1)
;; WHEN: Tue May 24 14:38:01 UTC 2022
;; MSG SIZE rcvd: 67
Describe the results you expected:
Both name servers should respond to both the shortname and fqdn queries.
Additional information you deem important (e.g. issue happens only occasionally):
Output of podman version:
# podman version
Client: Podman Engine
Version: 4.1.0
API Version: 4.1.0
Go Version: go1.18
Built: Fri May 6 16:15:54 2022
OS/Arch: linux/amd64
Output of podman info --debug:
# podman info --debug
host:
  arch: amd64
  buildahVersion: 1.26.1
  cgroupControllers:
  - cpuset
  - cpu
  - io
  - memory
  - hugetlb
  - pids
  - misc
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.0-2.fc36.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.0, commit: '
  cpuUtilization:
    idlePercent: 97.24
    systemPercent: 0.93
    userPercent: 1.83
  cpus: 2
  distribution:
    distribution: fedora
    variant: cloud
    version: "36"
  eventLogger: journald
  hostname: container.redacted
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 5.17.5-300.fc36.x86_64
  linkmode: dynamic
  logDriver: journald
  memFree: 148381696
  memTotal: 6217089024
  networkBackend: netavark
  ociRuntime:
    name: crun
    package: crun-1.4.4-1.fc36.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.4.4
      commit: 6521fcc5806f20f6187eb933f9f45130c86da230
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  remoteSocket:
    path: /run/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: false
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.0-0.2.beta.0.fc36.x86_64
    version: |-
      slirp4netns version 1.2.0-beta.0
      commit: 477db14a24ff1a3de3a705e51ca2c4c1fe3dda64
      libslirp: 4.6.1
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.3
  swapFree: 5851836416
  swapTotal: 6217003008
  uptime: 15h 16m 15.62s (Approximately 0.62 days)
plugins:
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  volume:
  - local
registries:
  search:
  - docker.io
store:
  configFile: /usr/share/containers/storage.conf
  containerStore:
    number: 19
    paused: 0
    running: 19
    stopped: 0
  graphDriverName: overlay
  graphOptions:
    overlay.mountopt: nodev,metacopy=on
  graphRoot: /var/lib/containers/storage
  graphRootAllocated: 41788899328
  graphRootUsed: 9318744064
  graphStatus:
    Backing Filesystem: btrfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "true"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 67
  runRoot: /run/containers/storage
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 4.1.0
  Built: 1651853754
  BuiltTime: Fri May 6 16:15:54 2022
  GitCommit: ""
  GoVersion: go1.18
  Os: linux
  OsArch: linux/amd64
  Version: 4.1.0
Package info (e.g. output of rpm -q podman or apt list podman):
# rpm -q netavark podman
netavark-1.0.3-3.fc36.x86_64
podman-4.1.0-1.fc36.x86_64
Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/main/troubleshooting.md)
Yes
Additional environment details (AWS, VirtualBox, physical, etc.):
Libvirt VM
Is this a BUG REPORT or FEATURE REQUEST?
/kind bug
Description
When using docker-compose and podman, podman fails to bring up containers trying to map ports below 60. Additionally, when trying to map port 53 on the host, it seems to conflict with the dnsmasq process podman spawns.
Steps to reproduce the issue:
Parsing Error
Install podman 3.0 as root to utilize docker-compose features
Make sure to disable any DNS (port 53) service running on the OS
Using the docker-compose.yml file below, issue: docker-compose up
Port 53 Conflict
Install podman 3.0 as root to utilize docker-compose features
Make sure to disable any DNS (port 53) service running on the OS
Edit the docker-compose.yml file and change - 53:53 to - 53:XXXX, where XXXX is anything above 59. Example: - 53:60
Then issue the following: docker-compose up
Describe the results you received:
Using the unmodified docker-compose.yml file below generates the parsing error:
root@vm-307:/home/crowley# docker-compose up
Creating network "crowley_default" with the default driver
Creating crowley_admin_1 ...
Creating crowley_pdns_1 ... error
Creating crowley_admin_1 ... done
ERROR: for crowley_pdns_1 Cannot create container for service pdns: make cli opts(): strconv.Atoi: parsing "": invalid syntax
From my testing, if I change the container side of the port mapping - 53:53 to anything above 59, it passes the parsing error.
Changing the port mapping to - 53:60 allows docker-compose up to continue, but it fails with this error message:
root@vm-307:/home/crowley# docker-compose up
Creating network "crowley_default" with the default driver
Creating crowley_admin_1 ...
Creating crowley_pdns_1 ... error
Creating crowley_admin_1 ... done
ERROR: for crowley_pdns_1 error preparing container ac8f5caddef9e28d43fd2f8b41d0c96845765c623b1f7fe0fef3b6692efa5910 for attach: cannot listen on the TCP port: listen tcp4 :53: bind: address already in use
ERROR: for pdns error preparing container ac8f5caddef9e28d43fd2f8b41d0c96845765c623b1f7fe0fef3b6692efa5910 for attach: cannot listen on the TCP port: listen tcp4 :53: bind: address already in use
ERROR: Encountered errors while bringing up the project.
Just to make sure I am not crazy, I bring down the containers with docker-compose down, then check my ports using sudo lsof -i -P -n, which results in:
root@vm-307:/home/crowley# sudo lsof -i -P -n
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
sshd 630 root 3u IPv4 32734 0t0 TCP *:22 (LISTEN)
sshd 630 root 4u IPv6 32736 0t0 TCP *:22 (LISTEN)
sshd 668 root 4u IPv4 32763 0t0 TCP X.X.X.X:22->X.X.X.X:55832 (ESTABLISHED)
sshd 695 crowley 4u IPv4 32763 0t0 TCP X.X.X.X:22->X.X.X.X:55832 (ESTABLISHED)
Please note X.X.X.X is just me censoring my IPs. As you can see, I do not have any services listening on port 53.
Next I issue docker-compose up again and see the same port conflict issue. Before bringing down the containers, I issue sudo lsof -i -P -n again to check my services:
root@vm-307:/home/crowley# sudo lsof -i -P -n
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
sshd 630 root 3u IPv4 32734 0t0 TCP *:22 (LISTEN)
sshd 630 root 4u IPv6 32736 0t0 TCP *:22 (LISTEN)
sshd 668 root 4u IPv4 32763 0t0 TCP X.X.X.X->X.X.X.X:55832 (ESTABLISHED)
sshd 695 crowley 4u IPv4 32763 0t0 TCP X.X.X.X:22->X.X.X.X:55832 (ESTABLISHED)
dnsmasq 16060 root 4u IPv4 112910 0t0 UDP 10.89.0.1:53
dnsmasq 16060 root 5u IPv4 112911 0t0 TCP 10.89.0.1:53 (LISTEN)
dnsmasq 16060 root 10u IPv6 116160 0t0 UDP [fe80::9cc6:14ff:fe16:3953]:53
dnsmasq 16060 root 11u IPv6 116161 0t0 TCP [fe80::9cc6:14ff:fe16:3953]:53 (LISTEN)
conmon 16062 root 5u IPv4 111869 0t0 TCP *:9191 (LISTEN)
As you can see, podman has spawned a dnsmasq process. I think this is to allow DNS between the containers, but it conflicts if you want to run or port map port 53.
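As a possible workaround (an untested sketch, not from this report; 192.0.2.10 is a placeholder host address), docker-compose allows publishing a port on one specific host address instead of all interfaces, which may sidestep a clash with a resolver bound to a different address:

```yaml
services:
  pdns:
    image: powerdns/pdns-auth-master:latest
    ports:
      # Publish DNS only on one host address rather than 0.0.0.0,
      # so a resolver bound to another interface can keep port 53.
      - "192.0.2.10:53:53"
      - "8081:8081"
```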
Describe the results you expected:
I expect not to hit that parsing error. I am not sure why podman/docker-compose hits it; when running the exact same docker-compose.yml via docker I have no issues.
I also expect not to hit port 53 conflicts. I am not sure how podman handles DNS between the containers, but the implementation limits users' ability to host different services.
Additional information you deem important (e.g. issue happens only occasionally):
N/A
Output of podman version:
podman version 3.0.0
Output of podman info --debug:
host:
  arch: amd64
  buildahVersion: 1.19.2
  cgroupManager: systemd
  cgroupVersion: v1
  conmon:
    package: 'conmon: /usr/libexec/podman/conmon'
    path: /usr/libexec/podman/conmon
    version: 'conmon version 2.0.26, commit: '
  cpus: 8
  distribution:
    distribution: ubuntu
    version: "20.04"
  eventLogger: journald
  hostname: vm-307
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 5.4.0-28-generic
  linkmode: dynamic
  memFree: 15873085440
  memTotal: 16762957824
  ociRuntime:
    name: crun
    package: 'crun: /usr/bin/crun'
    path: /usr/bin/crun
    version: |-
      crun version 0.17.6-58ef-dirty
      commit: fd582c529489c0738e7039cbc036781d1d039014
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  os: linux
  remoteSocket:
    path: /run/podman/podman.sock
  security:
    apparmorEnabled: true
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: false
    seccompEnabled: true
    selinuxEnabled: false
  slirp4netns:
    executable: ""
    package: ""
    version: ""
  swapFree: 1023406080
  swapTotal: 1023406080
  uptime: 1h 11m 7.15s (Approximately 0.04 days)
registries:
  search:
  - docker.io
  - quay.io
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 4
    paused: 0
    running: 0
    stopped: 4
  graphDriverName: overlay
  graphOptions:
    overlay.mountopt: nodev,metacopy=on
  graphRoot: /var/lib/containers/storage
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "true"
  imageStore:
    number: 4
  runRoot: /run/containers/storage
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 3.0.0
  Built: 0
  BuiltTime: Wed Dec 31 19:00:00 1969
  GitCommit: ""
  GoVersion: go1.15.2
  OsArch: linux/amd64
  Version: 3.0.0
Package info (e.g. output of rpm -q podman or apt list podman):
Listing... Done
podman/unknown,now 100:3.0.0-4 amd64 [installed]
podman/unknown 100:3.0.0-4 arm64
podman/unknown 100:3.0.0-4 armhf
podman/unknown 100:3.0.0-4 s390x
Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide?
Yes
Additional environment details (AWS, VirtualBox, physical, etc.):
Running on amd64 hardware. The server is a VM inside of VMware, running Ubuntu 20.04.
docker-compose.yml:
version: "3"
services:
  pdns:
    image: powerdns/pdns-auth-master:latest
    ports:
    - 53:53
    - 8081:8081
  admin:
    image: ngoduykhanh/powerdns-admin:latest
    ports:
    - 9191:80
Name : aardvark-dns
Version : 1.0.2
Release : 1.el8
Architecture: x86_64
{
  "name": "dual",
  "id": "2697203bf4180da9e7a6d074e38cbafb2fad4c8a3436522bde4ac573c059caa6",
  "driver": "bridge",
  "network_interface": "podman1",
  "created": "2022-08-24T04:03:37.236675178-05:00",
  "subnets": [
    {
      "subnet": "192.168.227.0/24",
      "gateway": "192.168.227.1"
    },
    {
      "subnet": "fdf8:192:168:227::/120",
      "gateway": "fdf8:192:168:227::1"
    }
  ],
  "ipv6_enabled": true,
  "internal": false,
  "dns_enabled": true,
  "ipam_options": {
    "driver": "host-local"
  }
}
location /bar {
    resolver 192.168.227.1;
    set $upstream bar.dns.podman;
    proxy_pass http://$upstream;
}
[root@foo /]# curl -vvv http://localhost/bar
* Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 80 (#0)
> GET /bar HTTP/1.1
> Host: localhost
> User-Agent: curl/7.61.1
> Accept: */*
>
< HTTP/1.1 502 Bad Gateway
We can see that Nginx's error.log fills up with plenty of the following errors:
2022/08/26 09:54:58 [error] 88#0: unexpected AAAA record in DNS response
2022/08/26 09:54:58 [error] 88#0: unexpected A record in DNS response
aardvark-dns always returns both A and AAAA records, no matter what QTYPE is specified in the DNS request:
[root@foo /]# nslookup -type=A bar 192.168.227.1
Server: 192.168.227.1
Address: 192.168.227.1#53
Non-authoritative answer:
Name: bar.dns.podman
Address: 192.168.227.5
Name: bar.dns.podman
Address: fdf8:192:168:227::5
[root@foo /]# nslookup -type=AAAA bar 192.168.227.1
Server: 192.168.227.1
Address: 192.168.227.1#53
Non-authoritative answer:
Name: bar.dns.podman
Address: 192.168.227.5
Name: bar.dns.podman
Address: fdf8:192:168:227::5
[root@foo /]# nslookup bar 192.168.227.1
Server: 192.168.227.1
Address: 192.168.227.1#53
Non-authoritative answer:
Name: bar.dns.podman
Address: 192.168.227.5
Name: bar.dns.podman
Address: fdf8:192:168:227::5
Name: bar.dns.podman
Address: 192.168.227.5
Name: bar.dns.podman
Address: fdf8:192:168:227::5
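The fix amounts to filtering the answer set by the question's QTYPE before responding. A minimal sketch of that logic in Python (the record model here is illustrative; aardvark-dns's real Rust types differ):

```python
import ipaddress

def filter_answers(qtype, records):
    """Keep only the answer records matching the question's QTYPE.

    qtype: "A", "AAAA", or "ANY"; records: list of (rtype, address) pairs.
    """
    if qtype == "ANY":
        return list(records)
    return [(rtype, addr) for rtype, addr in records if rtype == qtype]

records = [
    ("A", ipaddress.ip_address("192.168.227.5")),
    ("AAAA", ipaddress.ip_address("fdf8:192:168:227::5")),
]
# An A query should yield only the A record; AAAA only the AAAA record.
assert filter_answers("A", records) == [records[0]]
assert filter_answers("AAAA", records) == [records[1]]
```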
This is a request for enhancement.
Currently aardvark-dns resolves container names and, for anything it cannot resolve on its own, forwards to the resolvers configured on the host. The requirement: a way to tell aardvark-dns to forward to a particular DNS server rather than the host's configured DNS, because I need the host and the containers to use separate DNS servers.
I tried doing this by bind mounting an alternate resolv.conf from the host into the container. It has two entries: the first is aardvark-dns, and the second is my alternate DNS server (say DNS1). Note that my host has DNS2 in its resolv.conf.
Expected behavior:
For FQDNs that aardvark-dns cannot resolve, the forwarded request should go to DNS1.
Observed behavior:
Instead, it goes to DNS2.
Kindly help me understand whether this is a valid requirement. Also, since this does not work currently, is there a workaround?
Thank you
With #270 merged, both renovate and dependabot are operating on this repo. Assuming things go well (they both find the same set of dependency updates), once this issue goes stale, dependabot may be disabled. That includes BOTH the settings and any .github/dependabot.yml configuration file.
Packit failed on creating pull-requests in dist-git (https://src.fedoraproject.org/rpms/aardvark-dns.git):

dist-git branch | error
---|---
f38 | See https://dashboard.packit.dev/results/propose-downstream/2893

You can retrigger the update by adding a comment (/packit propose-downstream) into this issue.
Somehow the validate task dropped the make validate step. make validate yields a ton of things to correct. Once corrected, edit .cirrus.yml and add make validate back as a step of the validate task.
Hi,
I think that adding the host.containers.internal entry in aardvark-dns would be more consistent and quite handy in some cases.
No response
No response
When using podman as backend for a k8s kind cluster, host.containers.internal is not resolvable because k8s's internal coredns forwards requests directly to aardvark-dns and /etc/hosts is not propagated to pods.
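One possible direction (an untested sketch; the address and zone handling are assumptions, not a recommendation from this issue): kind lets you edit the cluster's coredns ConfigMap, and CoreDNS's hosts plugin can serve a static entry for host.containers.internal before forwarding everything else:

```
.:53 {
    hosts {
        # 192.0.2.1 is a placeholder for the host's address on the
        # podman network; look it up on the actual host.
        192.0.2.1 host.containers.internal
        fallthrough
    }
    forward . /etc/resolv.conf
}
```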
100% reproducible for me, x86_64-linux (NixOS).
failures:
---- test::test::tests::test_backend_network_scoped_custom_dns_server stdout ----
thread 'test::test::tests::test_backend_network_scoped_custom_dns_server' panicked at 'assertion failed: `(left == right)`
left: `["127.0.0.1", "::0.0.0.2"]`,
right: `["127.0.0.1", "::2"]`', src/test/test.rs:110:21
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
failures:
test::test::tests::test_backend_network_scoped_custom_dns_server
test result: FAILED. 23 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.01s
On a very quick glance, this is probably triggered by the rustc update 1.71.1 -> 1.72.0. I can bisect if that would be useful.
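For context, the two strings in the failing assertion denote the same IPv6 address; only the textual rendering differs, and Rust's formatting of such addresses has changed across releases. A quick Python illustration of the equivalence:

```python
import ipaddress

# "::0.0.0.2" writes the low 32 bits as an IPv4-style dotted quad;
# "::2" is the compressed hexadecimal spelling of the same address.
mixed = ipaddress.IPv6Address("::0.0.0.2")
compressed = ipaddress.IPv6Address("::2")

assert mixed == compressed  # identical 128-bit value, different spellings
```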
Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind feature
Description
I'm curious whether the new Podman 4 network stack components can be leveraged to enable (rootful) host-to-container/pod name resolution.
Most of my projects are rootful and single host (and run at boot by systemd), so being able to resolve the name to an IP as soon as the container starts would be so helpful.
I'm dreaming of:
% sudo podman run --detach --name myredis redis
% ping myredis
PING 10.88.0.25 (10.88.0.25) 56(84) bytes of data.
64 bytes from 10.88.0.25: icmp_seq=1 ttl=64 time=0.066 ms
or:
% sudo podman pod create --name mypod
% sudo podman run --detach --name myredis --pod mypod redis
% ping mypod
PING 10.88.0.25 (10.88.0.25) 56(84) bytes of data.
64 bytes from 10.88.0.25: icmp_seq=1 ttl=64 time=0.066 ms
Currently I achieve this by creating a custom network and using --ip to assign each pod/container a static IP that I add to /etc/hosts:
% sudo podman container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}\t{{.Name}}" myredis >> /etc/hosts
Perhaps this functionality is venturing into the world of "service discovery" so I may be asking too much of podman. But since most of my projects are single host and have simple needs, expanding my footprint to include minikube/k3s/etc. feels like overkill.
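The /etc/hosts workaround above can be scripted. A small sketch (the name-to-IP mapping would come from podman container inspect in practice; format_hosts_entries is a made-up helper name):

```python
def format_hosts_entries(name_to_ip):
    """Render /etc/hosts lines from a {container_name: ip} mapping."""
    return "\n".join(f"{ip}\t{name}" for name, ip in sorted(name_to_ip.items()))

# In practice the mapping is gathered from `podman container inspect`.
entries = format_hosts_entries({"myredis": "10.88.0.25", "mypod": "10.88.0.26"})
print(entries)
```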
Right now our DNS startup is super flaky, causing many flakes in CI that are only solved by using retries. This is bad, and retrying is often not what users are doing. Sending signals is just not reliable: netavark sends the signal on an update but never waits for aardvark-dns to actually update the names and be ready to respond to the new name. The same goes for error handling: aardvark-dns logs its errors to journald, but there is absolutely no way right now to get these errors back to netavark and thus podman. A common problem is that port 53 is already bound, leaving aardvark-dns up and running but unable to serve any DNS.
There are a lot of DNS-related issues on the podman issue tracker, most not really possible to debug. IMO we have to address this situation.
Of course, one important caveat is that we must stay backwards compatible. I am creating this issue to have a discussion so we can find a good solution.
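One direction for the missing handshake is for the caller to poll until the new name actually resolves, with a deadline, rather than signal and hope. A sketch with an injected probe function (wait_until_ready and the probe are illustrative names, not netavark's API):

```python
import time

def wait_until_ready(probe, timeout=5.0, interval=0.05):
    """Poll `probe` until it returns True or the deadline passes.

    `probe` should return True once the DNS server answers correctly
    for the newly added name.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if probe():
            return True
        time.sleep(interval)
    return False

# Example: a probe that only succeeds on the third attempt,
# mimicking a server that needs a moment to reload its records.
attempts = {"n": 0}
def flaky_probe():
    attempts["n"] += 1
    return attempts["n"] >= 3

assert wait_until_ready(flaky_probe, timeout=1.0, interval=0.01)
```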