nlnetlabs / unbound

Unbound is a validating, recursive, and caching DNS resolver.

Home Page: https://nlnetlabs.nl/unbound

License: BSD 3-Clause "New" or "Revised" License

dns dnssec resolver dns-privacy recursor

unbound's Introduction

Unbound


Unbound is a validating, recursive, caching DNS resolver. It is designed to be fast and lean, and incorporates modern features based on open standards. If you have any feedback, we would love to hear from you. Don't hesitate to create an issue on GitHub or post a message on the Unbound mailing list. You can learn more about Unbound by reading our documentation.

Compiling

Make sure you have the C toolchain, OpenSSL and its include files, and libexpat installed. If building from the repository source you also need flex and bison installed. Unbound can be compiled and installed using:

./configure && make && make install

Optionally, Unbound can be compiled with libevent. libevent is useful when using many (e.g. 10,000) outgoing ports; by default at most 256 ports are opened at the same time, and the built-in alternative is equally capable and a little faster.

Use the --with-libevent configure option to compile Unbound with libevent support.

Unbound configuration

All of Unbound's configuration options are described in the man pages, which will be installed and are available on the Unbound documentation page.

An example configuration file is located in doc/example.conf.
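Beyond the shipped example, a minimal working server configuration (a hypothetical sketch with loopback-only service, not the contents of doc/example.conf) might look like:

```
server:
  verbosity: 1
  interface: 127.0.0.1
  access-control: 127.0.0.0/8 allow
```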

unbound's People

Contributors

alexanderband, cgallred, countsudoku, dyunwei, eaglegai, edmonds, episource, fgasper, fhriley, fobser, gthess, k9982874, kimheino, maryse47, mibere, noloader, orbea, pemensik, philip-nlnetlabs, pmunch, ralphdolmans, rcmcdonald91, shchelk, talkabout, tcy16, vvfedorenko, wcawijngaards, wtoorop, xiaoxiaoafeifei, ziollek


unbound's Issues

Unbound returns additional records on NODATA response

When Unbound gets a response from an authoritative server without an answer section
(NODATA) but with a populated additional section (and no referral NS records), it
returns the additional-section records to the client.

This can be misused to tunnel data through innocuous-looking queries (such as A/AAAA)
and is a potential security risk.

Other DNS resolvers (BIND 9, PowerDNS-Recursor, Knot-Resolver) do not forward
the additional section to the client.

Directly sending & receiving in and from cache

Let Unbound keep caches hot by sending everything that enters the cache as a UDP DNS message to a configured list of IP addresses (with port and TSIG secret, like the notify statement in NSD). I.e.:

  • send-new-cache-entries: <ip-address> <key-name | NOKEY>
    Sends data that enters the cache as a DNS message to <ip-address>, signed with TSIG <key-name>.
    A port number can be added using a suffix of @number, for example 1.2.3.4@5300. The specified key is used to sign the DNS message.

  • accept-new-cache-entries: <ip-spec> <key-name | NOKEY | BLOCKED>
    Accepts data from <ip-spec> directly into the cache.

    The ip-spec is either a plain IP address (IPv4 or IPv6), a subnet of the form 1.2.3.4/24, masked like 1.2.3.4&255.255.255.0, or a range of the form 1.2.3.4-1.2.3.25. A port number can be added using a suffix of @number, for example 1.2.3.4@5300 or 1.2.3.4/24@5300 for port 5300. Note that ip-spec ranges do not use spaces around the /, &, @ and - symbols.
    Received new cache entries will be sent out on the addresses specified with send-new-cache-entries:, but only if the entry was not already in cache (to prevent loops).
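Taken together, the proposed options might be used like this on a sending and a receiving resolver (the option names and addresses are hypothetical; none of this syntax exists yet):

```
# sender: push everything entering the cache to a peer (proposed syntax)
server:
  send-new-cache-entries: 192.0.2.10@5300 NOKEY

# receiver: accept cache entries pushed from the sender's subnet (proposed syntax)
server:
  accept-new-cache-entries: 192.0.2.0/24 NOKEY
```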

Leftover logfile is close()d … unpredictably?

#include <stdio.h>
#include <unbound.h>

int main(void)
{
    FILE *tfh = tmpfile();
    if (!tfh)
        return 1;
    setvbuf(tfh, NULL, _IONBF, 0);

    struct ub_ctx* ctx;

    for (int i = 0; i < 2; i++) {
        ctx = ub_ctx_create();

        ub_ctx_debuglevel(ctx, 5);
        ub_ctx_debugout(ctx, tfh);  /* log to our temp file */

        ub_ctx_delete(ctx);

        /* log state is global, so the second pass ends up fclose()ing
         * the leftover tfh, and this fseek() fails */
        if (-1 == fseek(tfh, 0, SEEK_SET)) {
            perror("fseek()");
        }
    }
    return 0;
}

The above code illustrates a problem where a second context's creation clobbers a leftover logfile from a former context.

It seems like the ideal fix would be to put the log state variables into the context struct rather than having them be globals?

(I haven’t tested it, but it looks like, given the globals, it would be problematic, for example, to have two concurrent Unbound objects in the same process with different log outputs?)

Failing that change … is there any way to fix this, short of removing that fclose() in log.c (and potentially breaking behavior that implementations may have been expecting now for over 10 years)?

Regardless, it seems like this issue should at least be documented? The existing libunbound(3) documentation for ub_ctx_debugout() gives the impression that log state is contained entirely within the log object, which of course isn’t the case.

The workaround seems to be to set the log handle to stderr before deleting the context … does that seem reasonable?

Thank you!

outgoing interface on forward-zones

I have a requirement to forward lookups for specific domains to resolvers that are reachable over an IPsec tunnel, which requires the request to originate from a specific IP range.

It would be great if we could specify a source interface/address per forward-host entry, similar to dnsmasq. :)
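Today the closest thing appears to be the global outgoing-interface server option, which fixes the source address for all upstream queries; a per-zone variant like the one requested does not exist (the commented-out option name inside the forward-zone below is hypothetical, and all addresses are examples):

```
server:
  # existing: source address used for all upstream queries
  outgoing-interface: 192.0.2.7

forward-zone:
  name: "corp.example."
  forward-addr: 10.0.0.53
  # hypothetical per-zone option, as requested in this issue:
  # forward-outgoing-interface: 192.0.2.7
```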

Solaris 11.3 and missing symbols be64toh, htobe64

Results from building the Unbound 1.9.1 tarball on Solaris 11.3 i86pc. The compile itself succeeds on Solaris, but it looks like a couple of symbols are missing at link time on non-GNU systems.

libtool: link: gcc -I. -I/usr/local/include -DNDEBUG -I/usr/local/include -I/usr/local/include -I/usr/local/include -DSRCDIR=. -g2 -O2 -march=native -fPIC -pthread -std=c99 -D_REENTRANT -pthreads -Wl,-R -Wl,/usr/local/lib -o unbound-host .libs/unbound-host.o .libs/keyraw.o .libs/sbuffer.o .libs/wire2str.o .libs/parse.o .libs/parseutil.o .libs/rrdef.o .libs/str2wire.o .libs/explicit_bzero.o  -L/usr/local/lib -L. -L.libs /export/home/build/unbound-1.9.1/.libs/libunbound.a -lssl -lsocket -lnsl -ldl -lpthread -lcrypto -lhiredis -pthreads -pthread  -R/usr/local/lib
Undefined                       first referenced
 symbol                             in file
be64toh                             /export/home/build/unbound-1.9.1/.libs/libunbound.a(cachedb.o)
htobe64                             /export/home/build/unbound-1.9.1/.libs/libunbound.a(cachedb.o)
ld: fatal: symbol referencing errors
collect2: error: ld returned 1 exit status
gmake: *** [unbound-host] Error 1
gmake: *** Waiting for unfinished jobs....
libtool: link: gcc -I. -I/usr/local/include -DNDEBUG -I/usr/local/include -I/usr/local/include -I/usr/local/include -DSRCDIR=. -g2 -O2 -march=native -fPIC -pthread -std=c99 -D_REENTRANT -pthreads -Wl,-R -Wl,/usr/local/lib -o unbound-checkconf .libs/unbound-checkconf.o .libs/worker_cb.o .libs/dns.o .libs/infra.o .libs/rrset.o .libs/dname.o .libs/msgencode.o .libs/as112.o .libs/msgparse.o .libs/msgreply.o .libs/packed_rrset.o .libs/iterator.o .libs/iter_delegpt.o .libs/iter_donotq.o .libs/iter_fwd.o .libs/iter_hints.o .libs/iter_priv.o .libs/iter_resptype.o .libs/iter_scrub.o .libs/iter_utils.o .libs/localzone.o .libs/mesh.o .libs/modstack.o .libs/view.o .libs/outbound_list.o .libs/alloc.o .libs/config_file.o .libs/configlexer.o .libs/configparser.o .libs/fptr_wlist.o .libs/edns.o .libs/locks.o .libs/log.o .libs/mini_event.o .libs/module.o .libs/net_help.o .libs/random.o .libs/rbtree.o .libs/regional.o .libs/rtt.o .libs/dnstree.o .libs/lookup3.o .libs/lruhash.o .libs/slabhash.o .libs/tcp_conn_limit.o .libs/timehist.o .libs/tube.o .libs/winsock_event.o .libs/autotrust.o .libs/val_anchor.o .libs/validator.o .libs/val_kcache.o .libs/val_kentry.o .libs/val_neg.o .libs/val_nsec3.o .libs/val_nsec.o .libs/val_secalgo.o .libs/val_sigcrypt.o .libs/val_utils.o .libs/dns64.o .libs/cachedb.o .libs/redis.o .libs/authzone.o .libs/respip.o .libs/netevent.o .libs/listen_dnsport.o .libs/outside_network.o .libs/ub_event.o .libs/keyraw.o .libs/sbuffer.o .libs/wire2str.o .libs/parse.o .libs/parseutil.o .libs/rrdef.o .libs/str2wire.o .libs/explicit_bzero.o .libs/reallocarray.o .libs/arc4random.o .libs/arc4random_uniform.o .libs/arc4_lock.o  -L/usr/local/lib -lssl -lsocket -lnsl -ldl -lpthread -lcrypto -lhiredis -pthreads -pthread  -R/usr/local/lib
libtool: link: gcc -I. -I/usr/local/include -DNDEBUG -I/usr/local/include -I/usr/local/include -I/usr/local/include -DSRCDIR=. -g2 -O2 -march=native -fPIC -pthread -std=c99 -D_REENTRANT -pthreads -Wl,-R -Wl,/usr/local/lib -o unbound-control .libs/unbound-control.o .libs/worker_cb.o .libs/dns.o .libs/infra.o .libs/rrset.o .libs/dname.o .libs/msgencode.o .libs/as112.o .libs/msgparse.o .libs/msgreply.o .libs/packed_rrset.o .libs/iterator.o .libs/iter_delegpt.o .libs/iter_donotq.o .libs/iter_fwd.o .libs/iter_hints.o .libs/iter_priv.o .libs/iter_resptype.o .libs/iter_scrub.o .libs/iter_utils.o .libs/localzone.o .libs/mesh.o .libs/modstack.o .libs/view.o .libs/outbound_list.o .libs/alloc.o .libs/config_file.o .libs/configlexer.o .libs/configparser.o .libs/fptr_wlist.o .libs/edns.o .libs/locks.o .libs/log.o .libs/mini_event.o .libs/module.o .libs/net_help.o .libs/random.o .libs/rbtree.o .libs/regional.o .libs/rtt.o .libs/dnstree.o .libs/lookup3.o .libs/lruhash.o .libs/slabhash.o .libs/tcp_conn_limit.o .libs/timehist.o .libs/tube.o .libs/winsock_event.o .libs/autotrust.o .libs/val_anchor.o .libs/validator.o .libs/val_kcache.o .libs/val_kentry.o .libs/val_neg.o .libs/val_nsec3.o .libs/val_nsec.o .libs/val_secalgo.o .libs/val_sigcrypt.o .libs/val_utils.o .libs/dns64.o .libs/cachedb.o .libs/redis.o .libs/authzone.o .libs/respip.o .libs/netevent.o .libs/listen_dnsport.o .libs/outside_network.o .libs/ub_event.o .libs/keyraw.o .libs/sbuffer.o .libs/wire2str.o .libs/parse.o .libs/parseutil.o .libs/rrdef.o .libs/str2wire.o .libs/explicit_bzero.o .libs/reallocarray.o .libs/arc4random.o .libs/arc4random_uniform.o .libs/arc4_lock.o  -L/usr/local/lib -lssl -lsocket -lnsl -ldl -lpthread -lcrypto -lhiredis -pthreads -pthread  -R/usr/local/lib
libtool: link: gcc -I. -I/usr/local/include -DNDEBUG -I/usr/local/include -I/usr/local/include -I/usr/local/include -DSRCDIR=. -g2 -O2 -march=native -fPIC -pthread -std=c99 -D_REENTRANT -pthreads -Wl,-R -Wl,/usr/local/lib -o unbound .libs/acl_list.o .libs/cachedump.o .libs/daemon.o .libs/shm_main.o .libs/remote.o .libs/stats.o .libs/unbound.o .libs/worker.o .libs/dns.o .libs/infra.o .libs/rrset.o .libs/dname.o .libs/msgencode.o .libs/as112.o .libs/msgparse.o .libs/msgreply.o .libs/packed_rrset.o .libs/iterator.o .libs/iter_delegpt.o .libs/iter_donotq.o .libs/iter_fwd.o .libs/iter_hints.o .libs/iter_priv.o .libs/iter_resptype.o .libs/iter_scrub.o .libs/iter_utils.o .libs/localzone.o .libs/mesh.o .libs/modstack.o .libs/view.o .libs/outbound_list.o .libs/alloc.o .libs/config_file.o .libs/configlexer.o .libs/configparser.o .libs/fptr_wlist.o .libs/edns.o .libs/locks.o .libs/log.o .libs/mini_event.o .libs/module.o .libs/net_help.o .libs/random.o .libs/rbtree.o .libs/regional.o .libs/rtt.o .libs/dnstree.o .libs/lookup3.o .libs/lruhash.o .libs/slabhash.o .libs/tcp_conn_limit.o .libs/timehist.o .libs/tube.o .libs/winsock_event.o .libs/autotrust.o .libs/val_anchor.o .libs/validator.o .libs/val_kcache.o .libs/val_kentry.o .libs/val_neg.o .libs/val_nsec3.o .libs/val_nsec.o .libs/val_secalgo.o .libs/val_sigcrypt.o .libs/val_utils.o .libs/dns64.o .libs/cachedb.o .libs/redis.o .libs/authzone.o .libs/respip.o .libs/netevent.o .libs/listen_dnsport.o .libs/outside_network.o .libs/ub_event.o .libs/keyraw.o .libs/sbuffer.o .libs/wire2str.o .libs/parse.o .libs/parseutil.o .libs/rrdef.o .libs/str2wire.o .libs/explicit_bzero.o .libs/reallocarray.o .libs/arc4random.o .libs/arc4random_uniform.o .libs/arc4_lock.o  -L/usr/local/lib -lssl -lsocket -lnsl -ldl -lpthread -lcrypto -lhiredis -pthreads -pthread  -R/usr/local/lib
Undefined                       first referenced
 symbol                             in file
be64toh                             .libs/cachedb.o
htobe64                             .libs/cachedb.o
ld: fatal: symbol referencing errors
collect2: error: ld returned 1 exit status
gmake: *** [unbound-checkconf] Error 1
Undefined                       first referenced
 symbol                             in file
be64toh                             .libs/cachedb.o
htobe64                             .libs/cachedb.o
ld: fatal: symbol referencing errors
collect2: error: ld returned 1 exit status
gmake: *** [unbound-control] Error 1
Undefined                       first referenced
 symbol                             in file
be64toh                             .libs/cachedb.o
htobe64                             .libs/cachedb.o
ld: fatal: symbol referencing errors
collect2: error: ld returned 1 exit status
gmake: *** [unbound] Error 1
Failed to build Unbound

More graceful forward-zone

I have lots of forward-zones with the same multi forward-addr like this:

forward-zone:
  name: "zzzzaaaa.com."
  forward-addr: 119.29.29.29
  forward-addr: 114.114.114.114
  forward-addr: 223.5.5.5

forward-zone:
  name: "zzzzhong.com."
  forward-addr: 119.29.29.29
  forward-addr: 114.114.114.114
  forward-addr: 223.5.5.5

forward-zone:
  name: "zzzzmall.com."
  forward-addr: 119.29.29.29
  forward-addr: 114.114.114.114
  forward-addr: 223.5.5.5

Too many repeated settings make the config file large. How about this idea?

forward-group:
  name: "default"
  forward-addr: 119.29.29.29
  forward-addr: 114.114.114.114
  forward-addr: 223.5.5.5

forward-zone:
  name: "zzzzaaaa.com."
  group: "default"

forward-zone:
  name: "zzzzmall.com."
  group: "default"

or a single setting with a list of names:

forward-zone:
  name-list: "filename_with_domain_list_only_forward_to_the_same_addr"
  forward-addr: 119.29.29.29
  forward-addr: 114.114.114.114
  forward-addr: 223.5.5.5
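Until a grouping option exists, one workaround that should already work is Unbound's textual include: directive, which splices a file in at the point where it appears (the filename here is hypothetical):

```
# /etc/unbound/common-forwarders.conf contains only:
#   forward-addr: 119.29.29.29
#   forward-addr: 114.114.114.114
#   forward-addr: 223.5.5.5

forward-zone:
  name: "zzzzaaaa.com."
  include: "/etc/unbound/common-forwarders.conf"

forward-zone:
  name: "zzzzmall.com."
  include: "/etc/unbound/common-forwarders.conf"
```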

calc_hash() function

Hello, Wouter, once again!
First, a big thank you for helping me find my way around your great code ;)

Second, I have a specific question for you:
I am testing your latest Unbound 1.9.2 in conjunction with the Redis in-memory database. The general purpose of my project (as a first step) is to write a simple Redis client that connects to the Redis DB and reads/modifies some DNS data, for example the TTL values of RRs. I wrote such a client using your sources redis.c (and the other necessary C and header files) and testcode/pktview.c to decode the binary data stored in Redis.
So now I have the task of processing the KEYs stored in Redis. Currently they look like:

  1. "8C5B5634EA9E8C973A14ED5E4DEA9FD2F278E04504C73203BFFEFC96E5764C9A"
  2. "34F89932F644AFA26BDD8C73565ECE26040908606E9879C1703649EA26B3AE9F"
  3. "6BCB8FB9523A1C43ABEE55C9D1346840B7C1A85943CC4BFE3C0DDE92157AEFC5"
  4. "E6EA9BA0D7586174E7D34E86CB08EF44B185E6BD60FE01E31C0A7DED22145BFF"
  5. "www.google.com"
  6. "5771013AE51CE03274B0CAB3BAF6A760B55BE6E022C71BD77E5D41594D6464AD"
  7. "896B4381F0310B8C7B7DE656210C9079C18B09D6742B4C6FF438AF955B1E6104"
  8. "053CBEEE46F43E5F33FF34A464D5E68A439A23BD0B4DD98CCDF9DFA31418B0DA"
  9. "5991C8E781FEB0DB5D5113A3E66E6E5F202044BE2985675841B764067408BBC7"
  10. "7FEF3AA1B0C4DF1D0558D02A8A14E3E202C2782F2A59B3D4D4E1391D26EB988C"
  11. "3D77C6793AA9D3660A647041F4F93EE529D4287A879DFE9085CFE4C1D3D2F54F"
  12. "2B07AFD3B97277D95126F897D4E34D1CA950F27A9F41442255672D19753489AC"
  13. "8EACF8C917CF06D4471847DAEA3C2FAF058138AF22A947E15618C608F907634A"
  14. "C5A8720EE5B1FE59AE245EBBCED277FA7C77ED9D6BD9AC0DDA3B80E931AD457C"
  15. "www.ru"
  16. "50E28602A91A9895B5798AB5E1C3423C04E7F75F01021498166AFAE8760A43A3"
  17. "A7865343C6CC77353BCB66FFFA04AC8FD0C4253E95A44DB55D0DD06B7C7FCD75"
  18. "C4696EEC08F5B1641B6CA7CD32580D281B85DD002A86CA3EC77C171A18339795"
  • These are the hashes generated by your calc_hash() function in cachedb.c. My question is:
    may I rewrite the function to operate without hashes? As the Redis KEYs I would like to use readable names such as www.ru (the 15th KEY in the example above; of course this is a simplified form just for testing right now). At first sight I could use a combination of structure elements such as qstate->qinfo.qname, qstate->qinfo.qtype and qstate->qinfo.qclass for uniqueness of the Redis keys. Is this approach proper and implementable?

Thank you in advance!

Response refused from ARGUS

Hi! I am using "argus" for system monitoring. With BIND, the DNS check works OK,
but with Unbound I get REFUSED; see the following messages:
12:46:50.881654 IP (tos 0x0, ttl 64, id 53425, offset 0, flags [DF], proto UDP (17), length 55)
X.X.X.2.56121 > X.X.X.60.53: [udp sum ok] 12024 A? cisco.com. (27)
12:46:50.882005 IP (tos 0x0, ttl 64, id 47423, offset 0, flags [none], proto UDP (17), length 40)
X.X.X.60.53 > X.X.X.2.56121: [bad udp cksum 0x98a1 -> 0xdcdd!] 12024 Refused- [0q] 0/0/0 (12)

dig, on the other hand, works OK:
13:50:40.211764 IP (tos 0x0, ttl 64, id 33609, offset 0, flags [none], proto UDP (17), length 55)
X.X.X.2.48985 > X.X.X.60.53: [udp sum ok] 18953+ A? cisco.com. (27)
13:50:40.469015 IP (tos 0x0, ttl 64, id 19361, offset 0, flags [none], proto UDP (17), length 71)
X.X.X.60.53 > X.X.X.2.48985: [bad udp cksum 0x98c0 -> 0x15c0!] 18953 q: A? cisco.com. 1/0/0 cisco.com. [1h] A 72.163.4.185 (43)

Add OCSP stapling support

Can we add OCSP stapling support in the response, to avoid a chicken-and-egg deadlock when running a DNS-over-TLS-only Unbound server?

It's currently not supported. I see NSD recently got OCSP stapling support, so presumably it would be trivial.

OCSP stapling test output:

$ openssl s_client -connect my-dns-server:853 -tls1_3  -tlsextdebug  -status
CONNECTED(00000003)
...
OCSP response: no response sent
...

stub-first fails with stub-tls-upstream

Hello.

When both the stub-first and stub-tls-upstream options are enabled, Unbound falls back to the parent name servers, but it uses TLS with them while contacting them on the default port (53). I think it should either not use TLS for the parent name servers in this case, make it configurable, use tls-port, or document that this use case will fail.
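A minimal configuration that should reproduce this (the zone name and address are hypothetical):

```
stub-zone:
  name: "example.internal."
  stub-addr: 192.0.2.53
  stub-first: yes
  stub-tls-upstream: yes
  # when 192.0.2.53 fails and resolution falls back to the parent name
  # servers, TLS is attempted on port 53, which cannot work
```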

No listen port if IPv6 address ends with :ffff:ffff:ffff:ffff:ffff

Hello,

I have configured the address fd00:aaaa:255:ffff:ffff:ffff:ffff:ffff/128 on a dummy0 interface. I restart Unbound and expect to see (with e.g. the lsof command) Unbound listening on the address in question. No dice.

I have tried different /48 prefixes but kept the last five groups as :ffff:ffff:ffff:ffff:ffff. Same experience each time. => Not working.
The output of lsof -Pni :53 lists all other addresses, just not the one ending in :ffff:ffff:ffff:ffff:ffff. :(

(This is the case regardless of the address being configured on interface lo or interface dummy0.)


Interfaces

$ ip addr show lo
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 fd00:aaaa:255:42::87/128 scope global 
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever

$ ip addr show dummy0
11: dummy0: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether c2:2d:1d:e8:a5:13 brd ff:ff:ff:ff:ff:ff
    inet6 fd00:aaaa:255:ffff:ffff:ffff:ffff:ffff/128 scope global 
       valid_lft forever preferred_lft forever

Unbound IPs to listen on

$ sudo cat /etc/unbound/local.conf
#/etc/unbound/local.conf

server:
  interface: fd00:aaaa:255:42::87
  interface: 127.0.0.1
  interface: ::1
  interface: fd00:aaaa:255:ffff:ffff:ffff:ffff:ffff

Interfaces Unbound listens on after a restart of the service

$ sudo lsof -Pni :53
COMMAND   PID    USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
unbound 12390 unbound   11u  IPv4 934541      0t0  UDP 127.0.0.1:53 
unbound 12390 unbound   12u  IPv4 934542      0t0  TCP 127.0.0.1:53 (LISTEN)
unbound 12390 unbound   13u  IPv6 934543      0t0  UDP [::1]:53 
unbound 12390 unbound   14u  IPv6 934544      0t0  TCP [::1]:53 (LISTEN)
unbound 12514 unbound    3u  IPv6 936850      0t0  UDP [fd00:aaaa:255:42::87]:53 
unbound 12514 unbound    4u  IPv6 936860      0t0  TCP [fd00:aaaa:255:42::87]:53 (LISTEN)

Some system details

$ uname -a
Linux targe 4.19.0-5-amd64 #1 SMP Debian 4.19.37-5+deb10u2 (2019-08-08) x86_64 GNU/Linux

$ lsb_release -a
Distributor ID: Debian
Description:    Debian GNU/Linux 10 (buster)
Release:        10
Codename:       buster

$ dpkg -l | grep unbound
ii  libunbound2:amd64                    1.6.0-3+deb9u2                                 amd64        library implementing DNS resolution and validation
ii  libunbound8:amd64                    1.9.0-2                                        amd64        library implementing DNS resolution and validation
ii  unbound                              1.9.0-2                                        amd64        validating, recursive, caching DNS resolver
ii  unbound-anchor                       1.9.0-2                                        amd64        utility to securely fetch the root DNS trust anchor
ii  unbound-host                         1.9.0-2                                        amd64        reimplementation of the 'host' command

End note

I have also tried setting verbose: 0 under server: and looking in the log files after a service restart.

$ systemctl stop unbound && sleep 2s && \
systemctl start unbound && sleep 2s && \
egrep "ffff:ffff:ffff:ffff:ffff" /var/log/syslog /var/log/daemon.log /var/log/unbound.log /var/log/debug /var/log/kern.log /var/log/messages /var/log/syslog

/var/log/syslog:Aug 21 22:47:30 targe unbound[13017]: [1566427650] unbound[13017:0] debug: creating udp6 socket fd00:aaaa:255:ffff:ffff:ffff:ffff:ffff 53
/var/log/syslog:Aug 21 22:47:30 targe unbound[13017]: [1566427650] unbound[13017:0] debug: creating tcp6 socket fd00:aaaa:255:ffff:ffff:ffff:ffff:ffff 53

Mostly uninteresting information, but I did get this much:
The socket is created. 🆗
But when checking for the listening port with lsof, it is nowhere to be found. 😕

Problem with ECS and cache

I'm using Unbound 1.6.7, and recently encountered a problem.
For example, authoritative nameserver A returns an A record (11.22.33.44) for a query for a.testa.com with subnet=1.2.3.4/32/0:
unbound_1

.. and returns a CNAME record (a.testb.com) for subnet=5.6.7.8/32/0:
unbound_2

.. and authoritative nameserver B doesn't support ECS.
Then, within the TTL (60 seconds), a client sends a query (a.testa.com, subnet=1.2.3.4/32/0) to Unbound. The correct response is 11.22.33.44, but Unbound replies with the CNAME and A 55.66.77.88 from the cache:
unbound_3

Occasional SERVFAIL for ocsp.int-x3.letsencrypt.org. (hosted by akamai)

I'm running Unbound 1.8.1 (Debian version 1.8.1-1+b1) on my Debian server.
When I first noticed the issue I was running Unbound 1.6.0-3+deb9u2, but upgrading to 1.8.1 did not help.
I'm running Unbound in single-threaded mode to eliminate possible threading issues.

Occasionally Unbound returns SERVFAIL for queries for ocsp.int-x3.letsencrypt.org., which is hosted by Akamai.
I don't have any statistics, but I managed to get hold of such a query by running dig every minute (which produced a SERVFAIL response after about 3-4 days):

; <<>> DiG 9.10.3-P4-Debian <<>> +multiline ocsp.int-x3.letsencrypt.org
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 27104
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1472
;; QUESTION SECTION:
;ocsp.int-x3.letsencrypt.org. IN        A

;; Query time: 343 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Mon Apr 22 18:15:33 CEST 2019
;; MSG SIZE  rcvd: 56

I also configured Unbound with verbosity 2.
If you need it, I can give you the full log for the corresponding timeframe, but some
log lines that seem suspicious to me are:

Apr 22 18:15:33 master unbound[3307]: [3307:0] info: response for ocsp.int-x3.letsencrypt.org. A IN
Apr 22 18:15:33 master unbound[3307]: [3307:0] info: reply from <akamai.net.> 184.26.160.193#53
Apr 22 18:15:33 master unbound[3307]: [3307:0] info: query response was nodata ANSWER

followed later by:

Apr 22 18:15:33 master unbound[3307]: [3307:0] info: response for ocsp.int-x3.letsencrypt.org. A IN
Apr 22 18:15:33 master unbound[3307]: [3307:0] info: reply from <akamai.net.> 2600:1480:1::c1#53
Apr 22 18:15:33 master unbound[3307]: [3307:0] info: Capsforid: reply is equal. go to next fallback
Apr 22 18:15:33 master unbound[3307]: [3307:0] info: response for ocsp.int-x3.letsencrypt.org. A IN
Apr 22 18:15:33 master unbound[3307]: [3307:0] info: reply from <akamai.net.> 23.74.25.192#53
Apr 22 18:15:33 master unbound[3307]: [3307:0] info: Capsforid fallback: getting different replies, failed
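The "Capsforid" lines refer to Unbound's 0x20 query-name capitalisation randomisation (use-caps-for-id). If Akamai's servers answer inconsistently under that probing, one workaround (at the cost of the anti-spoofing benefit) is to turn it off, or, where supported, to exempt only the problematic zone; the caps-exempt line below is a newer option and may not exist in all versions:

```
server:
  # disable 0x20 capitalisation randomisation entirely ...
  use-caps-for-id: no
  # ... or, where supported, exempt only the problematic domain:
  # caps-exempt: "akamai.net"
```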

unbound crashing with SIGABRT

Jun 06 07:46:51 mx3-fi1 unbound[20686]: *** stack smashing detected ***: /usr/sbin/unbound terminated
Jun 06 07:46:51 mx3-fi1 unbound[20686]: ======= Backtrace: =========
Jun 06 07:46:51 mx3-fi1 unbound[20686]: /lib64/libc.so.6(__fortify_fail+0x37)[0x7fcfe5cf0b67]
Jun 06 07:46:51 mx3-fi1 unbound[20686]: /lib64/libc.so.6(+0x117b22)[0x7fcfe5cf0b22]
Jun 06 07:46:51 mx3-fi1 unbound[20686]: /usr/sbin/unbound(+0x2f351)[0x55cc1f6e1351]
Jun 06 07:46:51 mx3-fi1 unbound[20686]: /usr/sbin/unbound(+0x36822)[0x55cc1f6e8822]
Jun 06 07:46:51 mx3-fi1 unbound[20686]: /usr/sbin/unbound(+0x2e703)[0x55cc1f6e0703]
Jun 06 07:46:51 mx3-fi1 unbound[20686]: /usr/sbin/unbound(+0x2a23e)[0x55cc1f6dc23e]
Jun 06 07:46:51 mx3-fi1 unbound[20686]: /usr/sbin/unbound(+0x3cdd8)[0x55cc1f6eedd8]
Jun 06 07:46:51 mx3-fi1 unbound[20686]: /usr/sbin/unbound(iter_operate+0x335)[0x55cc1f6efb05]
Jun 06 07:46:51 mx3-fi1 unbound[20686]: /usr/sbin/unbound(+0x4d782)[0x55cc1f6ff782]
Jun 06 07:46:51 mx3-fi1 unbound[20686]: /usr/sbin/unbound(worker_handle_request+0x1b45)[0x55cc1f6da055]
Jun 06 07:46:51 mx3-fi1 unbound[20686]: /usr/sbin/unbound(+0xc91db)[0x55cc1f77b1db]
Jun 06 07:46:51 mx3-fi1 unbound[20686]: /usr/sbin/unbound(+0xc5868)[0x55cc1f777868]
Jun 06 07:46:51 mx3-fi1 unbound[20686]: /usr/sbin/unbound(comm_point_tcp_handle_callback+0xf4)[0x55cc1f777ba4]
Jun 06 07:46:51 mx3-fi1 unbound[20686]: /lib64/libevent-2.0.so.5(event_base_loop+0x774)[0x7fcfe6a00a14]
Jun 06 07:46:51 mx3-fi1 unbound[20686]: /usr/sbin/unbound(+0xc2eac)[0x55cc1f774eac]
Jun 06 07:46:51 mx3-fi1 unbound[20686]: /usr/sbin/unbound(+0x1ddf1)[0x55cc1f6cfdf1]
Jun 06 07:46:51 mx3-fi1 unbound[20686]: /usr/sbin/unbound(+0x1995f)[0x55cc1f6cb95f]
Jun 06 07:46:51 mx3-fi1 unbound[20686]: /lib64/libc.so.6(__libc_start_main+0xf5)[0x7fcfe5bfb495]
Jun 06 07:46:51 mx3-fi1 unbound[20686]: /usr/sbin/unbound(+0x1a542)[0x55cc1f6cc542]

This is a new issue: it did not happen with svn trunk@5175, but it happens with git a4f4d7b.

Occasional abort related to dnssec-trigger

It seems this issue is related to some action done by dnssec-trigger, which we are using. It happens sometimes on Fedora, bug #1667387. We were not able to find a reason for it; maybe you could help us?

(gdb) where
#0  0x00007f8f19bb053f in raise () from /lib64/libc.so.6
#1  0x00007f8f19b9a895 in abort () from /lib64/libc.so.6
#2  0x00007f8f19bf3927 in __libc_message () from /lib64/libc.so.6
#3  0x00007f8f19c870f5 in __fortify_fail_abort () from /lib64/libc.so.6
#4  0x00007f8f19c87127 in __fortify_fail () from /lib64/libc.so.6
#5  0x00007f8f19c850e6 in __chk_fail () from /lib64/libc.so.6
#6  0x00005583269dca0d in memmove (__len=<optimized out>, __src=<optimized out>, __dest=0x7fffad5ea930)
    at /usr/include/bits/string_fortified.h:40
#7  xfr_probe_send_probe (xfr=xfr@entry=0x5583287edea0, env=env@entry=0x5583288151c8, timeout=timeout@entry=100)
    at services/authzone.c:5793
#8  0x00005583269dd760 in xfr_probe_send_or_end (xfr=xfr@entry=0x5583287edea0, env=env@entry=0x5583288151c8)
    at services/authzone.c:6074
#9  0x00005583269ddf6d in xfr_probe_send_or_end (env=0x5583288151c8, xfr=0x5583287edea0) at services/authzone.c:6147
#10 auth_xfer_probe_lookup_callback (arg=0x5583287edea0, rcode=<optimized out>, buf=0x0, sec=<optimized out>, 
    why_bogus=<optimized out>, was_ratelimited=<optimized out>) at services/authzone.c:6147
#11 0x000055832699b70e in mesh_state_cleanup (mstate=0x558328d644c0) at services/mesh.c:744
#12 0x000055832699d0d3 in mesh_state_delete (qstate=<optimized out>) at services/mesh.c:795
#13 mesh_state_delete (qstate=<optimized out>) at services/mesh.c:761
#14 0x000055832699d219 in mesh_delete_helper (n=<optimized out>) at services/mesh.c:293
#15 mesh_delete_all (mesh=0x558328e60860) at services/mesh.c:293
#16 0x0000558326973ce6 in do_flush_requestlist (worker=0x558328813d10, ssl=0x7fffad5eb260) at daemon/remote.c:2947
#17 execute_cmd (rc=rc@entry=0x5583286efa40, ssl=ssl@entry=0x7fffad5eb260, 
    cmd=cmd@entry=0x7fffad5eae20 " flush_requestlist", worker=0x558328813d10) at daemon/remote.c:2947
#18 0x0000558326974b6f in handle_req (rc=rc@entry=0x5583286efa40, res=res@entry=0x7fffad5eb260, s=<optimized out>)
    at daemon/remote.c:3095
#19 0x0000558326974c72 in remote_control_callback (c=0x5583287e0b40, arg=arg@entry=0x558328eb9e20, err=err@entry=0, 
    rep=rep@entry=0x0) at daemon/remote.c:3177
#20 0x0000558326974f45 in remote_accept_callback (err=0, rep=<optimized out>, arg=0x5583286efa40, c=<optimized out>)
    at daemon/remote.c:521
#21 remote_accept_callback (c=<optimized out>, arg=0x5583286efa40, err=<optimized out>, rep=<optimized out>)
    at daemon/remote.c:443
#22 0x00007f8f1a3d6031 in event_process_active_single_queue () from /lib64/libevent-2.1.so.6
#23 0x00007f8f1a3d67c7 in event_base_loop () from /lib64/libevent-2.1.so.6
#24 0x0000558326a14530 in comm_base_dispatch (b=<optimized out>) at util/netevent.c:244
#25 0x00005583269796ed in worker_work (worker=<optimized out>) at daemon/worker.c:1884
#26 0x000055832696d26b in daemon_fork (daemon=0x5583286bb260) at daemon/daemon.c:666
#27 0x0000558326968c6f in run_daemon (need_pidfile=1, log_default_identity=0x7fffad5ecdf3 "unbound", debug_mode=1, 
    cmdline_verbose=0, cfgfile=0x558326a2fe1f "/etc/unbound/unbound.conf") at daemon/unbound.c:652
#28 main (argc=<optimized out>, argv=<optimized out>) at daemon/unbound.c:749
(gdb) frame 7
#7  xfr_probe_send_probe (xfr=xfr@entry=0x5583287edea0, env=env@entry=0x5583288151c8, timeout=timeout@entry=100)
    at services/authzone.c:5793
5793			memmove(&addr, &xfr->task_probe->scan_addr->addr, addrlen);
(gdb) p *xfr->task_probe->scan_addr
$1 = {next = 0x558328f599b0, addr = {ss_family = 161, 
    __ss_padding = "\000\000\000\000\000\000\360\\\361(\203U\000\000 \022~(\203U\000\000\200\000\000\000\000\000\000\000\340@\241&\203U\000\000`\205\353(\203U\000\000\377\377\377\377", '\000' <repeats 12 times>, "\377\377\377\377\000\000\000\000\360\364\200(\203U", '\000' <repeats 34 times>, "\001\000\000\000\000\000\000", __ss_align = 56728}, 
  addrlen = 151524}
(gdb) p *xfr->task_probe->scan_addr->next
$2 = {next = 0x0, addr = {ss_family = 177, 
    __ss_padding = "\000\000\000\000\000\000`\215\323\031\217\177\000\000pT\365(\203U\000\000\200\000\000\000\000\000\000\000\340@\241&\203U\000\000p\261\355(\203U\000\000\377\377\377\377", '\000' <repeats 12 times>, "\377\377\377\377\000\000\000\000\360\364\200(\203U", '\000' <repeats 34 times>, "\001\000\000\000\000\000\000", __ss_align = 56727}, 
  addrlen = 862559}
(gdb) p *xfr->task_probe->scan_target
$3 = {next = 0x5583287ee120, host = 0x5583287ee100 "k.root-servers.net", file = 0x0, http = 0, ixfr = 1, 
  allow_notify = 0, ssl = 0, port = 0, list = 0x0}
(gdb)

This happens often in version 1.8.3, and I think it also happens occasionally in 1.9.x.

I don't know why forwarding doesn't work.

I only made the changes below to unbound.conf. I want to forward all my DNS queries to port 9999, but it doesn't work:
interface:127.0.0.1@8888
forward-zone
name: "."
forward-addr: 127.0.0.1@9999
Can you help to explain?
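For reference, a minimal forward-everything configuration looks like the sketch below. Note the colon after forward-zone: and the clause layout, which the snippet above is missing (without the colon, unbound rejects the config file at startup). Also, unbound by default refuses to send queries to localhost upstreams, so forwarding to 127.0.0.1@9999 additionally needs do-not-query-localhost disabled:

```
server:
    interface: 127.0.0.1@8888
    # needed because the forward target is on localhost; unbound refuses
    # to query 127.0.0.1 upstreams by default
    do-not-query-localhost: no

forward-zone:
    name: "."
    forward-addr: 127.0.0.1@9999
```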

rfc:rpz (draft) for having full support of rpz zones and always_nxdomain

intro:

I'm building a project to host publicly owned RPZ zones.

The issue is the way unbound handles always_nxdomain, as demonstrated in this thread.

Unbound is not only applying NXDOMAIN from second-level domains on down the ladder, it is also doing so in the opposite direction, from fourth- or fifth-level domains up to the second level.

Now the question/feature request:

Is there currently a workaround for this issue so that always_nxdomain can be used, or actual full support for real RPZ zones, rather than having to build very big files with:

local-zone: "%%b" redirect
local-data: "%%b A !TO_BLACKHOLE!"

Outro

The main idea of my project is to fully implement NXDOMAIN answers, as they are smarter and faster: the client gets a straight-up answer, wildcards are fully supported, and DNS queries don't have to wait for any redirection timeouts, e.g. to 127.0.0.1 or 0.0.0.0.

I'm asking you here because, apart from my personal Linux machines, Unbound is the only reliable recursor I have found that can be easily installed on Windows 10.Home.xxxxxxxxxxxxxxx.what.ever.hosts.reset.version.tracking.optimization.version

Ping for inclusion

@ScriptTiger

AddressSanitizer finding in lookup3.c

Hi Everyone,

I'm building Unbound 1.9.1 from the tarball on Fedora 29, x86_64. It has a modern version of GCC with sanitizers. The build adds CFLAGS+=-fsanitize=address and CXXFLAGS+=-fsanitize=address (and LDFLAGS+=-fsanitize=address as needed). Then I run make check to see what happens.

make check
...
libtool: link: gcc -I. -I/var/sanitize/include -DNDEBUG -I/var/sanitize/include -I/var/sanitize/include -I/var/sanitize/include -DSRCDIR=. -g2 -O2 -fsanitize=address -march=native -fPIC -pthread -fsanitize=address -Wl,-R -Wl,/var/sanitize/lib64 -Wl,--enable-new-dtags -o testbound .libs/testbound.o .libs/replay.o .libs/fake_event.o .libs/testpkts.o .libs/worker.o .libs/acl_list.o .libs/daemon.o .libs/stats.o .libs/shm_main.o .libs/dns.o .libs/infra.o .libs/rrset.o .libs/dname.o .libs/msgencode.o .libs/as112.o .libs/msgparse.o .libs/msgreply.o .libs/packed_rrset.o .libs/iterator.o .libs/iter_delegpt.o .libs/iter_donotq.o .libs/iter_fwd.o .libs/iter_hints.o .libs/iter_priv.o .libs/iter_resptype.o .libs/iter_scrub.o .libs/iter_utils.o .libs/localzone.o .libs/mesh.o .libs/modstack.o .libs/view.o .libs/outbound_list.o .libs/alloc.o .libs/config_file.o .libs/configlexer.o .libs/configparser.o .libs/fptr_wlist.o .libs/edns.o .libs/locks.o .libs/log.o .libs/mini_event.o .libs/module.o .libs/net_help.o .libs/random.o .libs/rbtree.o .libs/regional.o .libs/rtt.o .libs/dnstree.o .libs/lookup3.o .libs/lruhash.o .libs/slabhash.o .libs/tcp_conn_limit.o .libs/timehist.o .libs/tube.o .libs/winsock_event.o .libs/autotrust.o .libs/val_anchor.o .libs/validator.o .libs/val_kcache.o .libs/val_kentry.o .libs/val_neg.o .libs/val_nsec3.o .libs/val_nsec.o .libs/val_secalgo.o .libs/val_sigcrypt.o .libs/val_utils.o .libs/dns64.o .libs/cachedb.o .libs/redis.o .libs/authzone.o .libs/respip.o .libs/ub_event.o .libs/keyraw.o .libs/sbuffer.o .libs/wire2str.o .libs/parse.o .libs/parseutil.o .libs/rrdef.o .libs/str2wire.o .libs/strlcat.o .libs/strlcpy.o .libs/arc4random.o .libs/arc4random_uniform.o .libs/arc4_lock.o  -L/var/sanitize/lib64 -L/var/sanitize/lib -lssl -ldl -lpthread -lcrypto -lhiredis -pthread  -Wl,-rpath -Wl,/var/sanitize/lib
./unittest
Start of unbound 1.9.1 unit test.
test authzone functions
=================================================================
==23867==ERROR: AddressSanitizer: stack-buffer-overflow on address 0x7fff1d847e80 at pc 0x0000004e0f5f bp 0x7fff1d847e30 sp 0x7fff1d847e20
READ of size 4 at 0x7fff1d847e80 thread T0
    #0 0x4e0f5e in hashlittle util/storage/lookup3.c:378
    #1 0x45ccac in rrset_key_hash util/data/packed_rrset.c:170
    #2 0x543ca5 in auth_packed_rrset_copy_region services/authzone.c:179
    #3 0x545b7e in msg_add_rrset_an services/authzone.c:234
    #4 0x545b7e in msg_add_rrset_an services/authzone.c:219
    #5 0x547621 in az_generate_positive_answer services/authzone.c:2811
    #6 0x547621 in az_generate_answer_with_node services/authzone.c:3062
    #7 0x547621 in auth_zone_generate_answer services/authzone.c:3157
    #8 0x557363 in auth_zones_lookup services/authzone.c:3194
    #9 0x4382fc in q_ans_query testcode/unitauth.c:775
    #10 0x4382fc in check_az_q_ans testcode/unitauth.c:830
    #11 0x4382fc in check_queries testcode/unitauth.c:852
    #12 0x4382fc in authzone_query_test testcode/unitauth.c:888
    #13 0x4382fc in authzone_test testcode/unitauth.c:899
    #14 0x406e2c in main testcode/unitmain.c:888
    #15 0x7fa0a952a412 in __libc_start_main ../csu/libc-start.c:308
    #16 0x40a54d in _start (/home/jwalton/Build-Scripts/unbound-1.9.1/unittest+0x40a54d)

Address 0x7fff1d847e80 is located in stack of thread T0 at offset 32 in frame
    #0 0x45cbcf in rrset_key_hash util/data/packed_rrset.c:163

  This frame has 1 object(s):
    [32, 34) 't' <== Memory access at offset 32 partially overflows this variable
HINT: this may be a false positive if your program uses some custom stack unwind mechanism or swapcontext
      (longjmp and C++ exceptions *are* supported)
SUMMARY: AddressSanitizer: stack-buffer-overflow util/storage/lookup3.c:378 in hashlittle
Shadow bytes around the buggy address:
  0x100063b00f80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x100063b00f90: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x100063b00fa0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x100063b00fb0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x100063b00fc0: 00 00 00 00 00 00 00 00 00 00 00 00 f1 f1 f1 f1
=>0x100063b00fd0:[02]f2 f2 f2 f3 f3 f3 f3 00 00 00 00 00 00 00 00
  0x100063b00fe0: 00 00 f1 f1 f1 f1 00 00 00 00 00 00 00 00 00 00
  0x100063b00ff0: 00 00 00 00 00 00 00 f2 f2 f2 f3 f3 f3 f3 00 00
  0x100063b01000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x100063b01010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x100063b01020: 00 00 00 00 f1 f1 f1 f1 04 f2 f2 f2 f2 f2 f2 f2
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07
  Heap left redzone:       fa
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
==23867==ABORTING
gmake: *** [Makefile:311: test] Error 1
Failed to test Unbound
Failed to build Unbound

Reduce number of TLS connections to forwarder (DoT) when using "forward-tls-upstream"

Hi.
First of all, I want to thank you very much for your work on unbound!!
I mainly use unbound as a forwarder to Cloudflare (1.1.1.1 and 1.0.0.1). So unbound listens on my Linux box on port 53 and forwards DNS queries to 1.1.1.1:853 via TLS (DNS over TLS). I use the "forward-tls-upstream" option.
I have noticed that unbound creates a new connection to 1.1.1.1 on port 853 for each new lookup. This was also verified with the netstat command.
From a performance point of view, I expect that if unbound established only one TLS connection to 1.1.1.1:853 (or at least far fewer) and left it up, DNS queries would be processed faster, because unbound wouldn't need to establish a new TLS connection to 1.1.1.1/1.0.0.1 each time.
Is there a possibility to configure unbound to always use one and the same connection to 1.1.1.1:853 for all DNS over TLS queries?
If not, is there a plan to add such an option to unbound?
Many thanks for your time.

This is the output of the "netstat" command that shows how many connections unbound makes to the DoT forwarder (Cloudflare in this case). In this example only one client is connected to the unbound DNS server (port 53):

root@server:~# netstat -anp | grep -i 53
tcp        0      0 0.0.0.0:53              0.0.0.0:*               LISTEN     620/unbound
tcp        0      0 192.168.1.10:40436     1.0.0.1:853             TIME_WAIT   -
tcp        0      0 192.168.1.10:48402     1.1.1.1:853             TIME_WAIT   -
tcp        0      0 192.168.1.10:48422     1.1.1.1:853             TIME_WAIT   -
tcp        0      0 192.168.1.10:40402     1.0.0.1:853             TIME_WAIT   -
tcp        0      0 192.168.1.10:48432     1.1.1.1:853             TIME_WAIT   -
tcp        0      0 192.168.1.10:48428     1.1.1.1:853             TIME_WAIT   -
tcp        0      0 192.168.1.10:40408     1.0.0.1:853             TIME_WAIT   -
tcp      225    137 192.168.1.10:48444     1.1.1.1:853             ESTABLISHED 620/unbound
tcp      152      0 192.168.1.10:40456     1.0.0.1:853             ESTABLISHED 620/unbound
tcp        0      0 192.168.1.10:48426     1.1.1.1:853             TIME_WAIT   -
tcp        0      0 192.168.1.10:40410     1.0.0.1:853             TIME_WAIT   -
tcp        0      0 192.168.1.10:48400     1.1.1.1:853             TIME_WAIT   -
tcp        0      0 192.168.1.10:40434     1.0.0.1:853             TIME_WAIT   -
tcp        0      0 192.168.1.10:40404     1.0.0.1:853             TIME_WAIT   -
tcp        0      0 192.168.1.10:40412     1.0.0.1:853             TIME_WAIT   -
tcp        0      0 192.168.1.10:48396     1.1.1.1:853             TIME_WAIT   -
tcp        0      0 192.168.1.10:48430     1.1.1.1:853             TIME_WAIT   -
tcp        0      0 192.168.1.10:48410     1.1.1.1:853             TIME_WAIT   -
tcp        0      0 192.168.1.10:40470     1.0.0.1:853             ESTABLISHED 620/unbound
tcp      377      0 192.168.1.10:48442     1.1.1.1:853             ESTABLISHED 620/unbound
tcp        0      0 192.168.1.10:48420     1.1.1.1:853             TIME_WAIT   -
tcp        0      0 192.168.1.10:40414     1.0.0.1:853             TIME_WAIT   -
tcp        0      0 192.168.1.10:40472     1.0.0.1:853             ESTABLISHED 620/unbound
tcp        0      0 192.168.1.10:40446     1.0.0.1:853             TIME_WAIT   -
tcp        0      0 192.168.1.10:48418     1.1.1.1:853             TIME_WAIT   -
tcp        0      0 192.168.1.10:48416     1.1.1.1:853             TIME_WAIT   -
tcp      385      0 192.168.1.10:48446     1.1.1.1:853             ESTABLISHED 620/unbound
tcp      152      0 192.168.1.10:40460     1.0.0.1:853             ESTABLISHED 620/unbound
tcp        0      0 192.168.1.10:40426     1.0.0.1:853             TIME_WAIT   -
tcp        0      0 192.168.1.10:40420     1.0.0.1:853             TIME_WAIT   -
tcp        0      0 192.168.1.10:48436     1.1.1.1:853             TIME_WAIT   -
tcp      377      0 192.168.1.10:48440     1.1.1.1:853             ESTABLISHED 620/unbound
tcp        0      0 192.168.1.10:48408     1.1.1.1:853             TIME_WAIT   -
tcp        0      0 192.168.1.10:40428     1.0.0.1:853             TIME_WAIT   -
tcp        0      0 192.168.1.10:40416     1.0.0.1:853             TIME_WAIT   -
tcp        0      0 192.168.1.10:40406     1.0.0.1:853             TIME_WAIT   -
udp     6016      0 0.0.0.0:53              0.0.0.0:*                          620/unbound

unbound-host hangs on “mail.cpanelssltest.org”

unbound-host mail.cpanelssltest.org

This query takes about 45 seconds before unbound-host prints the result—but then the command still doesn’t exit.

dig +trace executes the same query almost instantaneously.

Seen in v1.9.1 on macOS.

Suspicious (?) delay in resolution

#include <stdio.h>
#include <unbound.h>

#define MX 15
#define TXT 16

int main(void) {
    struct ub_ctx* ub = ub_ctx_create();
    struct ub_result* result;

    // Comment the following line out to see the MX query go much faster:
    if (ub_resolve(ub, "onlysslcertificates.cpanelssltest.org", TXT, 1, &result) == 0)
        ub_resolve_free(result);  // free the first result before reusing the pointer

    printf("after TXT\n");

    // The following takes a very long time, but only if the TXT query
    // precedes it:
    if (ub_resolve(ub, "onlysslcertificates.cpanelssltest.org", MX, 1, &result) == 0)
        ub_resolve_free(result);

    printf("after MX\n");

    ub_ctx_delete(ub);

    return 0;
}

The above demonstrates a strangely long delay in resolving the MX query—but, as the comment notes, only when there’s a TXT query before it. (An A query in lieu of the TXT does not produce the same delay.)

Strangely, when I try the same thing using Net::DNS::Resolver::Recurse, the delay isn’t there:

> perl -MNet::DNS::Resolver::Recurse -e'my $dns = Net::DNS::Resolver::Recurse->new(); $dns->query("onlysslcertificates.cpanelssltest.org", "TXT"); print "after TXT\n"; $dns->query("onlysslcertificates.cpanelssltest.org", "MX"); print "after MX\n"'

The DNS configuration for this domain is admittedly a bit wonky, but the long delay from libunbound still seems suspicious, particularly given that the Perl resolver doesn’t have the delay.

Thank you!

problem with ssl-upstream and vpn interface

Hi,

Since I haven't been able to post anything through the mailing list to get some help, I'm posting here instead.

I recently wanted to set up unbound in place of dnscrypt to resolve queries with my Pi-hole on my Raspberry Pi.

The version of unbound available in Raspbian is currently 1.6.0.

When activating the options


    ssl-upstream: yes
    ssl-service-key: "/etc/ssl/certs/ca-certificates.crt"

unbound stopped working, and I see something like this in the logs:

[1556709926] unbound[4394:0] info: server stats for thread 0: 23 queries, 7 answers from cache, 16 recursions, 0 prefetch
[1556709926] unbound[4394:0] info: server stats for thread 0: requestlist max 13 avg 1.875 exceeded 0 jostled 0
[1556709926] unbound[4394:0] info: mesh has 0 recursion states (0 with reply, 0 detached), 0 waiting replies, 16 recursion replies sent, 0 replies dropped, 0 states jostled out
[1556709926] unbound[4394:0] info: average recursion processing time 0.948223 sec
[1556709926] unbound[4394:0] info: histogram of recursion processing times
[1556709926] unbound[4394:0] info: [25%]=0.32768 median[50%]=0.603573 [75%]=0.920715
[1556709926] unbound[4394:0] info: lower(secs) upper(secs) recursions
[1556709926] unbound[4394:0] info:    0.000000    0.000001 1
[1556709926] unbound[4394:0] info:    0.008192    0.016384 1
[1556709926] unbound[4394:0] info:    0.016384    0.032768 1
[1556709926] unbound[4394:0] info:    0.262144    0.524288 4
[1556709926] unbound[4394:0] info:    0.524288    1.000000 6
[1556709926] unbound[4394:0] info:    1.000000    2.000000 1
[1556709926] unbound[4394:0] info:    2.000000    4.000000 2
[1556709926] unbound[4394:0] debug: cache memory msg=33040 rrset=33040 infra=17292 val=40931
[1556709926] unbound[4394:0] debug: switching log to stderr

I also tried to set up unbound to send queries through a VPN connection on the Pi itself,
but apparently I can't resolve through the VPN connection.
I tried setting it up by hardcoding the IP address of the VPN connection, with the same result. I tried UDP and TCP separately, same result.

Am I missing something? I have connectivity through my VPN, so apparently that's not the problem. And the problem disappears as soon as I deactivate the VPN connection.
Or is all of that supposed to happen in 1.6?

Does anyone have an idea about this?

Thanks in advance.
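One thing worth checking (an assumption about the setup, not a confirmed fix): unbound picks its upstream source address via normal routing, so with a policy-routed VPN you may need to pin outgoing queries to the tunnel address using the outgoing-interface option. In the sketch below, 10.8.0.2 is a placeholder for the actual tunnel address:

```
server:
    # send upstream queries from the VPN tunnel address (placeholder)
    outgoing-interface: 10.8.0.2
```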

Unbound accepts client-initiated renegotiation

pip3 install sslyze
sslyze --reneg $HOSTNAME:853

Result:

$ sslyze example.com:853 --reneg



 AVAILABLE PLUGINS
 -----------------

  CompressionPlugin
  RobotPlugin
  CertificateInfoPlugin
  FallbackScsvPlugin
  SessionRenegotiationPlugin
  SessionResumptionPlugin
  HttpHeadersPlugin
  EarlyDataPlugin
  OpenSslCipherSuitesPlugin
  OpenSslCcsInjectionPlugin
  HeartbleedPlugin



 CHECKING HOST(S) AVAILABILITY
 -----------------------------

   example.com:853                       => XXXXX




 SCAN RESULTS FOR example.com:XXXXX
 ----------------------------------------------------------------------------

 * Session Renegotiation:
       Client-initiated Renegotiation:    VULNERABLE - Server honors client-initiated renegotiations
       Secure Renegotiation:              OK - Supported


 SCAN COMPLETED IN 0.18 S
 ------------------------

FIN without SERVFAIL on TCP lookup to failing hostnames

Version: 1.9.2

Short description: Sporadically, TCP-based lookups for records that result in SERVFAIL will result in a FIN being sent early to the client instead of a SERVFAIL and then a FIN. Subsequent queries for the same record then result in the expected SERVFAIL response code and normal TCP flow until the cache entry times out.

Steps to reproduce (unbound on port 5000):

[root@test log]# kdig @127.0.0.1 -p 5000 +tcp +retry=1 rhybar.cz
;; WARNING: can't receive reply from 127.0.0.1@5000(TCP)
;; WARNING: failed to query server 127.0.0.1@5000(TCP)
[root@test log]#
[root@test log]# kdig @127.0.0.1 -p 5000 +tcp +retry=1 rhybar.cz
;; ->>HEADER<<- opcode: QUERY; status: SERVFAIL; id: 22768
;; Flags: qr rd ra; QUERY: 1; ANSWER: 0; AUTHORITY: 0; ADDITIONAL: 0

;; QUESTION SECTION:
;; rhybar.cz. IN A

;; Received 27 B
;; Time 2019-08-23 20:14:34 UTC
;; From 127.0.0.1@5000(TCP) in 0.1 ms
[root@test log]#

From unbound.log:
Aug 23 20:14:30 res310 unbound: [10242:1] info: validation failure rhybar.cz. A IN
Aug 23 20:14:30 res310 unbound: [10242:1] info: validation failure rhybar.cz. NS IN

Long description:
This has been observed with many other SERVFAIL responses, though rhybar.cz does tend to fail much more predictably and has a validation failure. Another randomly picked SERVFAIL test name that has been used, with less consistently reproducible results: ns.hesjptt.net.cn

Packet captures were done on these transactions, and there is a FIN sent by unbound around 200ms after the receipt of the query on TCP sessions.

If this is a configuration issue, advice would be welcome on what we should look at.

swig 4.0 and python module

The latest unbound as of the time of reporting this issue (i.e. 1.9.1) is not compatible with swig 4.0, and hence the python module fails to build.

Sept 3, 2019 commit compilation error

error log:
In file included from util/netevent.c:58:
util/netevent.c: In function ‘squelch_err_ssl_handshake’:
util/netevent.c:1068:32: error: ‘SSL_F_TLS_POST_PROCESS_CLIENT_HELLO’ undeclared (first use in this function)
1068 | err == ERR_PACK(ERR_LIB_SSL, SSL_F_TLS_POST_PROCESS_CLIENT_HELLO, SSL_R_NO_SHARED_CIPHER) ||
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/media/wang-zhikuan/System_D/Debian-Stretch/myscript/ipkg/unbound/opt/include/openssl/err.h:243:51: note: in definition of macro ‘ERR_PACK’
243 | ((((unsigned long)f)&0xfffL)*0x1000)|
| ^
util/netevent.c:1068:32: note: each undeclared identifier is reported only once for each function it appears in
1068 | err == ERR_PACK(ERR_LIB_SSL, SSL_F_TLS_POST_PROCESS_CLIENT_HELLO, SSL_R_NO_SHARED_CIPHER) ||
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/media/wang-zhikuan/System_D/Debian-Stretch/myscript/ipkg/unbound/opt/include/openssl/err.h:243:51: note: in definition of macro ‘ERR_PACK’
243 | ((((unsigned long)f)&0xfffL)*0x1000)|
| ^
util/netevent.c:1069:32: error: ‘SSL_F_TLS_EARLY_POST_PROCESS_CLIENT_HELLO’ undeclared (first use in this function)
1069 | err == ERR_PACK(ERR_LIB_SSL, SSL_F_TLS_EARLY_POST_PROCESS_CLIENT_HELLO, SSL_R_UNKNOWN_PROTOCOL) ||
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/media/wang-zhikuan/System_D/Debian-Stretch/myscript/ipkg/unbound/opt/include/openssl/err.h:243:51: note: in definition of macro ‘ERR_PACK’
243 | ((((unsigned long)f)&0xfffL)*0x1000)|
| ^
util/netevent.c:1071:75: error: ‘SSL_R_VERSION_TOO_LOW’ undeclared (first use in this function); did you mean ‘SSL_R_MESSAGE_TOO_LONG’?
1071 | err == ERR_PACK(ERR_LIB_SSL, SSL_F_TLS_EARLY_POST_PROCESS_CLIENT_HELLO, SSL_R_VERSION_TOO_LOW))
| ^~~~~~~~~~~~~~~~~~~~~
/media/wang-zhikuan/System_D/Debian-Stretch/myscript/ipkg/unbound/opt/include/openssl/err.h:244:51: note: in definition of macro ‘ERR_PACK’
244 | ((((unsigned long)r)&0xfffL)))
| ^
make: *** [Makefile:288: netevent.lo] Error 1
make: *** Waiting for unfinished jobs....

unbound-control error if unbound is started with custom config filename

When unbound is started with a custom config file name:
./unbound -c custom.conf

custom.conf:

remote-control:
    control-enable: yes
    control-interface: 127.0.0.1
    control-port: 8953

Running the following unbound-control command will result in the error:

linux:/usr/local/etc/unbound# unbound-control dump_cache -s 127.0.0.1
[1561341472] unbound-control[19082:0] error: Could not open /usr/local/etc/unbound/unbound.conf: No such file or directory
[1561341472] unbound-control[19082:0] fatal error: could not read config file

I don't know whether it is a bug or my misunderstanding of the documentation.
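A workaround that applies here: unbound-control has its own -c option and otherwise falls back to the compiled-in default path, so point it at the same custom file the daemon was started with:

```
unbound-control -c custom.conf -s 127.0.0.1 dump_cache
```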

Add non-recursive option

We would like to add a non-recursive option and are wondering if such a pull request would be accepted.

Our use case requires us to use unbound as a resolver for local domains with no need for recursion. This would allow us to disable the recursive root server lookups. We would make this an option that has to be enabled and let the default behavior be the same as it is today.

Currently even if you disable DNSSEC with:
module-config: "iterator"
(Setting this to "iterator" will result in a non-validating server.)
domain-insecure: "."
(Sets domain name to be insecure, DNSSEC chain of trust is ignored towards the domain name.)

The root server lookups still happen.

wishlist: reduce memory usage when using many identical local-data records

I hit something like https://nlnetlabs.nl/bugs-script/show_bug.cgi?id=743 in that I use unbound to make malicious/unwanted domains unreachable and memory usage is very high.

I have about 250k entries like this:

local-zone: "example.com" redirect
local-data: "example.com A 0.0.0.0"

Many of the domain names are a lot longer, but the local-data is identical for all.

With that set, unbound uses about 1GB of memory (on a 64bit system).

If I just use local-zone: "example.com" static and no local-data, memory usage drops from 1GB to less than 100MB, which is much more realistic.

I'm not yet certain whether returning NXDOMAIN instead of 0.0.0.0 for the malicious domains has any drawbacks, so this may not matter for my use-case; but on the whole, it seems to me that the tenfold increase in memory usage is unnecessary.

Without looking at the code, I guess that with the local-data entries present, unbound is allocating 250k instances of a local-data data structure, each taking slightly less than 4k of RAM. Since these records are completely static, wouldn't it be sufficient to allocate only one instance of each and just use pointers in place of new copies? (Also, ~4k seems a bit much for such data.)

Perhaps instead of just redirect, there could be a local-zone: ... sinkhole keyword which does the above but uses less memory?

option for safe logging

We run public DNS resolvers that aim to protect users' privacy by offering transport-layer encryption and by not logging query names. But when using unbound, we are faced with logs that contain more data than our privacy policy allows, even though we are running with verbosity: 0:

unbound: info: validator: error. failed to classify response message:  ;; ->>HEADER<<- opcode: QUERY, rcode: NOERROR, id: 0#012;; flags: qr ra ; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0 #012;; QUESTION SECTION:#-scrubbed-#011IN#011NS#012#012;; ANSWER SECTION:#-scrubbed-#0113548#011IN#011A#-scrubbed-#012#012;; AUTHORITY SECTION:#012#012;; ADDITIONAL SECTION:#012;; MSG SIZE  rcvd: 80

and we currently do not see any way to avoid this via an unbound configuration option.

We would suggest to add an option that allows privacy focused DNS resolver operators to use unbound without having that kind of data in their logs.

It could be something like:

safelogging: 1

which would result in safer logs where sensitive data is masked, such as:

  • IP addresses
  • query names

Memory leak in outside_network.c

In the function outnet_serviced_query(), the serviced_query object is first created at:

	/* make new serviced query entry */
	sq = serviced_create(outnet, buff, dnssec, want_dnssec, nocaps,
		tcp_upstream, ssl_upstream, tls_auth_name, addr,
		addrlen, zone, zonelen, (int)qinfo->qtype,
		qstate->edns_opts_back_out);

which potentially allocates memory for the fields qbuf, zone, tls_auth_name and opt_list.

Later, under some conditions, this object can be destroyed, but only the qbuf and zone fields are released, at:

	if(!serviced_udp_send(sq, buff)) {
		(void)rbtree_delete(outnet->serviced, sq);
		free(sq->qbuf);
		free(sq->zone);
		free(sq);

and at:

	if(!serviced_tcp_send(sq, buff)) {
		(void)rbtree_delete(outnet->serviced, sq);
		free(sq->qbuf);
		free(sq->zone);
		free(sq);

Probably the function serviced_delete() or serviced_node_del() should be used instead to destroy the serviced_query object.

Service Failed with 'timeout'

I'm seeing serious issues with unbound where the service simply restarts itself intermittently. I say intermittently, but I'm beginning to think it's tied to the service start timeout plus the restart delay.

When I attempt to start the service normally via systemctl start unbound or systemctl restart unbound the prompt hangs and waits. Eventually there's an error message along the lines of:

Job for unbound.service failed because a timeout was exceeded. See "systemctl status unbound.service" and "journalctl -xe" for details.

In the unbound log I can see that it reaches a status of info: start of service (unbound 1.9.0). within milliseconds of the service start, though.

tls-upstream is set to yes, and looking at the more verbose output, requests are getting resolved. However, when the service restarts the cache is dropped, which is one of the main reasons for using unbound on this LAN (along with it being part of a wider security & privacy design).

I'm using DNS-over-TLS upstreams only, configured as per this example:

forward-zone:
        name: "."
        forward-tls-upstream: yes

        ### straight connect instead of via stubby
        ## https://ripe78.ripe.net/on-site/tech-info/dns-over-tls-resolvers/
        forward-addr:   193.0.31.237@853#nscache.ripemtg.ripe.net
        forward-addr:   193.0.31.238@853#nscache.ripemtg.ripe.net

I know this generally works because if I start the service directly (stopping the systemd service) via /usr/sbin/unbound -dvv, I see it working as expected. No restarts or drop-outs.

It's almost like the systemd service thinks the startup hasn't completed and assumes a timeout, forcing the restart based on the unbound.service Restart=on-failure setting. If I remove the Restart=... line from the service file, I see the following in the systemctl status:

unbound.service - Unbound DNS server
Loaded: loaded (/lib/systemd/system/unbound.service; enabled; vendor preset: enabled)
Active: failed (Result: timeout) since Wed 2019-08-07 18:21:22 CEST; 38s ago
Docs: man:unbound(8)
Process: 6990 ExecStartPre=/usr/lib/unbound/package-helper chroot_setup (code=exited, status=0/SUCCESS)
Process: 6993 ExecStartPre=/usr/lib/unbound/package-helper root_trust_anchor_update (code=exited, status=0/SUCCESS)
Process: 6997 ExecStart=/usr/sbin/unbound -d $DAEMON_OPTS (code=exited, status=0/SUCCESS)
Main PID: 6997 (code=exited, status=0/SUCCESS)

Aug 07 18:19:51 ServerName systemd[1]: Starting Unbound DNS server...
Aug 07 18:19:51 ServerName package-helper[6993]: /var/lib/unbound/root.key has content
Aug 07 18:19:51 ServerName package-helper[6993]: success: the anchor is ok
Aug 07 18:21:21 ServerName systemd[1]: unbound.service: Start operation timed out. Terminating.
Aug 07 18:21:22 ServerName systemd[1]: unbound.service: Failed with result 'timeout'.
Aug 07 18:21:22 ServerName systemd[1]: Failed to start Unbound DNS server.

The pre-reqs for chroot and trust anchors appear to operate as expected; it's the actual unbound service which seems to time out as a systemd unit.
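One hypothesis worth testing (an assumption on my part, since the packaged unit file isn't shown; many distro units use Type=notify): if the unit declares Type=notify but this unbound build lacks systemd readiness notification support, the daemon runs fine yet never signals readiness, so systemd declares a start timeout and kills it. A drop-in override (via systemctl edit unbound) can confirm the diagnosis:

```
[Service]
Type=simple
```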

The server in question is a new build to replace an older server, which runs another Debian derivative with unbound 1.6.0 and uses Stubby and DnsCrypt as its forwarders. I've tried swapping out the DoT configuration for a straight UDP forwarder with a known-good DNS server (Cloudflare in the test), and that exhibits the same symptoms.

Version in use: 1.9.0.2
OS: Debian Buster (stable) arm64

I enabled a higher verbosity and captured part of the log for reference - it shows the brief time unbound was actually up, through to the timeout 'happening' and the service dropping. Just after the systemctl start unbound I hit it with a dig google.com +dnssec:

[1565196026] unbound[7414:0] debug: validator[module 1] operate: extstate:module_wait_subquery event:module_event_pass
[1565196026] unbound[7414:0] info: validator operate: query google.com. A IN
[1565196026] unbound[7414:0] debug: val handle processing q with state VAL_FINDKEY_STATE
[1565196026] unbound[7414:0] info: validator: FindKey google.com. A IN
[1565196026] unbound[7414:0] info: current keyname com. DNSKEY IN
[1565196026] unbound[7414:0] info: target keyname google.com. DNSKEY IN
[1565196026] unbound[7414:0] debug: striplab 0
[1565196026] unbound[7414:0] info: next keyname google.com. DNSKEY IN
[1565196026] unbound[7414:0] info: DS RRset com. DS IN
[1565196026] unbound[7414:0] info: generate request google.com. DS IN
[1565196026] unbound[7414:0] debug: mesh_run: validator module exit state is module_wait_subquery
[1565196026] unbound[7414:0] debug: subnet[module 0] operate: extstate:module_state_initial event:module_event_pass
[1565196026] unbound[7414:0] info: subnet operate: query google.com. DS IN
[1565196026] unbound[7414:0] debug: subnet: pass to next module
[1565196026] unbound[7414:0] debug: mesh_run: subnet module exit state is module_wait_module
[1565196026] unbound[7414:0] debug: validator[module 1] operate: extstate:module_state_initial event:module_event_pass
[1565196026] unbound[7414:0] info: validator operate: query google.com. DS IN
[1565196026] unbound[7414:0] debug: validator: pass to next module
[1565196026] unbound[7414:0] debug: mesh_run: validator module exit state is module_wait_module
[1565196026] unbound[7414:0] debug: iterator[module 2] operate: extstate:module_state_initial event:module_event_pass
[1565196026] unbound[7414:0] debug: process_request: new external request event
[1565196026] unbound[7414:0] debug: iter_handle processing q with state INIT REQUEST STATE
[1565196026] unbound[7414:0] info: resolving google.com. DS IN
[1565196026] unbound[7414:0] debug: request has dependency depth of 0
[1565196026] unbound[7414:0] debug: forwarding request
[1565196026] unbound[7414:0] debug: iter_handle processing q with state QUERY TARGETS STATE
[1565196026] unbound[7414:0] info: processQueryTargets: google.com. DS IN
[1565196026] unbound[7414:0] debug: processQueryTargets: targetqueries 0, currentqueries 0 sentcount 0
[1565196026] unbound[7414:0] info: DelegationPoint<.>: 0 names (0 missing), 7 addrs (0 result, 7 avail) parentNS
[1565196026] unbound[7414:0] debug: [dns.larsdebruin.net] ip4 51.15.70.167 port 853 (len 16)
[1565196026] unbound[7414:0] debug: [dnsovertls.sinodun.com] ip4 145.100.185.15 port 853 (len 16)
[1565196026] unbound[7414:0] debug: [dns.cmrg.net] ip4 199.58.81.218 port 443 (len 16)
[1565196026] unbound[7414:0] debug: [dnsovertls.sinodun.com] ip4 145.100.185.15 port 443 (len 16)
[1565196026] unbound[7414:0] debug: [getdnsapi.net] ip4 185.49.141.37 port 853 (len 16)
[1565196026] unbound[7414:0] debug: [nscache.ripemtg.ripe.net] ip4 193.0.31.238 port 853 (len 16)
[1565196026] unbound[7414:0] debug: [nscache.ripemtg.ripe.net] ip4 193.0.31.237 port 853 (len 16)
[1565196026] unbound[7414:0] debug: attempt to get extra 3 targets
[1565196026] unbound[7414:0] debug: servselect ip4 193.0.31.237 port 853 (len 16)
[1565196026] unbound[7414:0] debug: rtt=1504
[1565196026] unbound[7414:0] debug: servselect ip4 193.0.31.238 port 853 (len 16)
[1565196026] unbound[7414:0] debug: rtt=1504
[1565196026] unbound[7414:0] debug: servselect ip4 185.49.141.37 port 853 (len 16)
[1565196026] unbound[7414:0] debug: rtt=649
[1565196026] unbound[7414:0] debug: servselect ip4 145.100.185.15 port 443 (len 16)
[1565196026] unbound[7414:0] debug: rtt=569
[1565196026] unbound[7414:0] debug: servselect ip4 199.58.81.218 port 443 (len 16)
[1565196026] unbound[7414:0] debug: rtt=1147
[1565196026] unbound[7414:0] debug: servselect ip4 145.100.185.15 port 853 (len 16)
[1565196026] unbound[7414:0] debug: rtt=598
[1565196026] unbound[7414:0] debug: selrtt 376
[1565196026] unbound[7414:0] info: sending query: google.com. DS IN
[1565196026] unbound[7414:0] debug: sending to target: <.> 145.100.185.15#443
[1565196026] unbound[7414:0] debug: dnssec status: not expected
[1565196026] unbound[7414:0] debug: qname perturbed to GOogLE.cOM.
[1565196026] unbound[7414:0] debug: comm point start listening 39
[1565196026] unbound[7414:0] debug: mesh_run: iterator module exit state is module_wait_reply
[1565196026] unbound[7414:0] info: mesh_run: end 2 recursion states (1 with reply, 0 detached), 1 waiting replies, 1 recursion replies sent, 0 replies dropped, 0 states jostled out
[1565196026] unbound[7414:0] info: average recursion processing time 4.325272 sec
[1565196026] unbound[7414:0] info: histogram of recursion processing times
[1565196026] unbound[7414:0] info: [25%]=0 median[50%]=0 [75%]=0
[1565196026] unbound[7414:0] info: lower(secs) upper(secs) recursions
[1565196026] unbound[7414:0] info: 4.000000 8.000000 1
[1565196026] unbound[7414:0] info: 0vRDCD mod2 google.com. DS IN
[1565196026] unbound[7414:0] info: 1RDdc mod1 rep google.com. A IN
[1565196026] unbound[7414:0] debug: cache memory msg=68665 rrset=73891 infra=9590 val=69820 subnet=74504
[1565196026] unbound[7414:0] debug: svcd callbacks end
[1565196026] unbound[7414:0] debug: close fd 38
[1565196026] unbound[7414:0] debug: comm point listen_for_rw 39 0
[1565196026] unbound[7414:0] debug: peer certificate:
Issuer: C=US, O=Let's Encrypt, CN=Let's Encrypt Authority X3
Validity
Not Before: Jul 8 10:15:11 2019 GMT
Not After : Oct 6 10:15:11 2019 GMT
Subject: CN=dnsovertls.sinodun.com
X509v3 extensions:
X509v3 Key Usage: critical
Digital Signature, Key Encipherment
X509v3 Extended Key Usage:
TLS Web Server Authentication, TLS Web Client Authentication
X509v3 Basic Constraints: critical
CA:FALSE
X509v3 Subject Key Identifier:
05:3C:F3:1F:64:B2:4C:57:F0:79:8B:0D:55:7B:33:7B:EF:B6:CA:1F
X509v3 Authority Key Identifier:
keyid:A8:4A:6A:63:04:7D:DD:BA:E6:D1:39:B7:A6:45:65:EF:F3:A8:EC:A1

        Authority Information Access: 
            OCSP - URI:http://ocsp.int-x3.letsencrypt.org
            CA Issuers - URI:http://cert.int-x3.letsencrypt.org/

        X509v3 Subject Alternative Name: 
            DNS:dnsovertls.sinodun.com
        X509v3 Certificate Policies: 
            Policy: 2.23.140.1.2.1
            Policy: 1.3.6.1.4.1.44947.1.1.1
              CPS: http://cps.letsencrypt.org

        CT Precertificate SCTs: 
            Signed Certificate Timestamp:
                Version   : v1 (0x0)
                Log ID    : 74:7E:DA:83:31:AD:33:10:91:21:9C:CE:25:4F:42:70:
                            C2:BF:FD:5E:42:20:08:C6:37:35:79:E6:10:7B:CC:56
                Timestamp : Jul  8 11:15:11.999 2019 GMT
                Extensions: none
                Signature : ecdsa-with-SHA256
                            30:44:02:20:4C:D9:6C:B3:34:4B:7E:D9:E7:90:A4:D5:
                            AB:1D:13:B0:3F:6C:16:CA:93:3C:F6:AC:1D:62:3F:96:
                            5B:E3:F0:1B:02:20:59:0F:B5:B1:8F:FB:74:4F:CE:56:
                            D5:E1:27:C4:C7:98:99:20:2C:12:DF:94:1C:09:50:FD:
                            88:B4:03:E1:65:40
            Signed Certificate Timestamp:
                Version   : v1 (0x0)
                Log ID    : 29:3C:51:96:54:C8:39:65:BA:AA:50:FC:58:07:D4:B7:
                            6F:BF:58:7A:29:72:DC:A4:C3:0C:F4:E5:45:47:F4:78
                Timestamp : Jul  8 11:15:12.030 2019 GMT
                Extensions: none
                Signature : ecdsa-with-SHA256
                            30:46:02:21:00:B8:23:CD:36:89:77:64:0A:88:1B:68:
                            10:F9:D2:8A:27:28:75:F9:13:AF:58:5F:A6:40:F6:CD:
                            41:92:59:70:34:02:21:00:8B:71:0F:CD:DE:E7:9A:54:
                            6D:05:BD:7D:95:DE:A4:7C:ED:1E:65:3C:5F:BC:48:F6:
                            91:67:50:5A:AD:97:57:CB

[1565196026] unbound[7414:0] debug: SSL connection to dnsovertls.sinodun.com authenticated ip4 145.100.185.15 port 443 (len 16)
[1565196026] unbound[7414:0] debug: comm point listen_for_rw 39 1
[1565196026] unbound[7414:0] debug: comm point stop listening 39
[1565196026] unbound[7414:0] debug: comm point start listening 39
[1565196026] unbound[7414:0] debug: Reading ssl tcp query of length 766
[1565196026] unbound[7414:0] debug: comm point stop listening 39
[1565196026] unbound[7414:0] debug: outnettcp cb
[1565196026] unbound[7414:0] debug: measured TCP-time at 207 msec
[1565196026] unbound[7414:0] debug: svcd callbacks start
[1565196026] unbound[7414:0] debug: good 0x20-ID in reply qname
[1565196026] unbound[7414:0] debug: worker svcd callback for qstate 0xaaab0d9901b0
[1565196026] unbound[7414:0] debug: mesh_run: start
[1565196026] unbound[7414:0] debug: iterator[module 2] operate: extstate:module_wait_reply event:module_event_reply
[1565196026] unbound[7414:0] info: iterator operate: query google.com. DS IN
[1565196026] unbound[7414:0] debug: process_response: new external response event
[1565196026] unbound[7414:0] info: scrub for . NS IN
[1565196026] unbound[7414:0] info: response for google.com. DS IN
[1565196026] unbound[7414:0] info: reply from <.> 145.100.185.15#443
[1565196026] unbound[7414:0] info: incoming scrubbed packet: ;; ->>HEADER<<- opcode: QUERY, rcode: NOERROR, id: 0
;; flags: qr rd ra ; QUERY: 1, ANSWER: 0, AUTHORITY: 6, ADDITIONAL: 0
;; QUESTION SECTION:
google.com. IN DS

;; ANSWER SECTION:

;; AUTHORITY SECTION:
com. 1200 IN SOA a.gtld-servers.net. nstld.verisign-grs.com. 1565195898 1800 900 604800 86400
com. 1200 IN RRSIG SOA 8 1 900 20190814163818 20190807152818 17708 com. b/iaRYjfVFnFfMeWeTvihmSBwHG0FxpjtSHMjwYxoj0H4zzga/JM8wzO3Bj4/XXaJZA6wKWvR58znpfhUkqS/U3jbsPwRTH6Zavl/0IZ08b4iTwGUFmiMOAnddkzpjqNaIxfnC/py5k3Nd4iNY0Pz/PydcQhL4viSS1bg33JVS4= ;{id = 17708}
CK0POJMG874LJREF7EFN8430QVIT8BSM.com. 1200 IN NSEC3 1 1 0 - ck0q1gin43n1arrc9osm6qpqr81h5m9a NS SOA RRSIG DNSKEY NSEC3PARAM ;{flags: optout}
CK0POJMG874LJREF7EFN8430QVIT8BSM.com. 1200 IN RRSIG NSEC3 8 2 86400 20190811044603 20190804033603 17708 com. DFIdCVmw76xkgFouDSnCX8/TUMjZOkvQ/VmNcdSW3tE2rTDsD6VszVwNp5vV0iBTyXFk1lM3F4edA+zWR+fyCMwFfQs0vGdL4PHYbYMfxlY9cswx36W5suvp6oZGEMapfhDFRCcQG88kmIYjGKLjo9QAeTQKEStghE7DWD5SO3M= ;{id = 17708}
S84BDVKNH5AGDSI7F5J0O3NPRHU0G7JQ.com. 1200 IN NSEC3 1 1 0 - s84cfh3a62n0fjpc5d9ij2vjr71oglv5 NS DS RRSIG ;{flags: optout}
S84BDVKNH5AGDSI7F5J0O3NPRHU0G7JQ.com. 1200 IN RRSIG NSEC3 8 2 86400 20190812045756 20190805034756 17708 com. tCi2xBJY/XO0kpl2Sj23eyW/dFAgmeY11F+zcaeuLdCrwHzP4EM3yLq/XgbQV3L+9icbvcpqCHPv6B3461NGQ5HRvq4wrBgtoFMi0oFp2hZRHaVIMrJqTyerTH+dnoQioY+62bLf8tJ7A/BZVfpe28fg5JNHe3XbgAFVrVgKyWE= ;{id = 17708}

;; ADDITIONAL SECTION:
;; MSG SIZE rcvd: 749

[1565196026] unbound[7414:0] debug: iter_handle processing q with state QUERY RESPONSE STATE
[1565196026] unbound[7414:0] info: query response was nodata ANSWER
[1565196026] unbound[7414:0] debug: iter_handle processing q with state FINISHED RESPONSE STATE
[1565196026] unbound[7414:0] info: finishing processing for google.com. DS IN
[1565196026] unbound[7414:0] debug: mesh_run: iterator module exit state is module_finished
[1565196026] unbound[7414:0] debug: validator[module 1] operate: extstate:module_wait_module event:module_event_moddone
[1565196026] unbound[7414:0] info: validator operate: query google.com. DS IN
[1565196026] unbound[7414:0] debug: validator: nextmodule returned
[1565196026] unbound[7414:0] debug: not validating response, is valrec(validation recursion lookup)
[1565196026] unbound[7414:0] debug: mesh_run: validator module exit state is module_finished
[1565196026] unbound[7414:0] debug: subnet[module 0] operate: extstate:module_wait_module event:module_event_moddone
[1565196026] unbound[7414:0] info: subnet operate: query google.com. DS IN
[1565196026] unbound[7414:0] debug: mesh_run: subnet module exit state is module_finished
[1565196026] unbound[7414:0] info: validator: inform_super, sub is google.com. DS IN
[1565196026] unbound[7414:0] info: super is google.com. A IN
[1565196026] unbound[7414:0] info: verify rrset CK0POJMG874LJREF7EFN8430QVIT8BSM.com. NSEC3 IN
[1565196026] unbound[7414:0] debug: verify sig 17708 8
[1565196026] unbound[7414:0] debug: verify result: sec_status_secure
[1565196026] unbound[7414:0] info: verify rrset S84BDVKNH5AGDSI7F5J0O3NPRHU0G7JQ.com. NSEC3 IN
[1565196026] unbound[7414:0] debug: verify sig 17708 8
[1565196026] unbound[7414:0] debug: verify result: sec_status_secure
[1565196026] unbound[7414:0] debug: nsec3: keysize 1024 bits, max iterations 150
[1565196026] unbound[7414:0] info: ce candidate com. TYPE0 CLASS0
[1565196026] unbound[7414:0] info: NSEC3s for the referral proved no DS.
[1565196026] unbound[7414:0] debug: validator[module 1] operate: extstate:module_wait_subquery event:module_event_pass
[1565196026] unbound[7414:0] info: validator operate: query google.com. A IN
[1565196026] unbound[7414:0] debug: val handle processing q with state VAL_VALIDATE_STATE
[1565196026] unbound[7414:0] info: Verified that unsigned response is INSECURE
[1565196026] unbound[7414:0] debug: val handle processing q with state VAL_FINISHED_STATE
[1565196026] unbound[7414:0] debug: mesh_run: validator module exit state is module_finished
[1565196026] unbound[7414:0] debug: subnet[module 0] operate: extstate:module_wait_module event:module_event_moddone
[1565196026] unbound[7414:0] info: subnet operate: query google.com. A IN
[1565196026] unbound[7414:0] debug: mesh_run: subnet module exit state is module_finished
[1565196026] unbound[7414:0] debug: query took 0.794191 sec
[1565196026] unbound[7414:0] info: mesh_run: end 0 recursion states (0 with reply, 0 detached), 0 waiting replies, 2 recursion replies sent, 0 replies dropped, 0 states jostled out
[1565196026] unbound[7414:0] info: average recursion processing time 2.559731 sec
[1565196026] unbound[7414:0] info: histogram of recursion processing times
[1565196026] unbound[7414:0] info: [25%]=0 median[50%]=0 [75%]=0
[1565196026] unbound[7414:0] info: lower(secs) upper(secs) recursions
[1565196026] unbound[7414:0] info: 0.524288 1.000000 1
[1565196026] unbound[7414:0] info: 4.000000 8.000000 1
[1565196026] unbound[7414:0] debug: cache memory msg=68965 rrset=75290 infra=9590 val=70000 subnet=74504
[1565196026] unbound[7414:0] debug: svcd callbacks end
[1565196026] unbound[7414:0] debug: close fd 39
[1565196053] unbound[7414:0] info: service stopped (unbound 1.9.0).
[1565196053] unbound[7414:0] debug: stop threads
[1565196053] unbound[7414:0] debug: join 1
[1565196053] unbound[7414:1] debug: got control cmd quit
[1565196053] unbound[7414:0] debug: join success 1
[1565196053] unbound[7414:0] debug: join 2
[1565196053] unbound[7414:2] debug: got control cmd quit
[1565196053] unbound[7414:0] debug: join success 2
[1565196053] unbound[7414:0] debug: cleanup.

==> /var/log/syslog <==
Aug 7 18:40:53 localhost systemd[1]: unbound.service: Start operation timed out. Terminating.

==> /etc/unbound/logs/default.log <==
[1565196053] unbound[7414:0] info: server stats for thread 0: 2 queries, 0 answers from cache, 2 recursions, 0 prefetch, 0 rejected by ip ratelimiting
[1565196053] unbound[7414:0] info: server stats for thread 0: requestlist max 0 avg 0 exceeded 0 jostled 0
[1565196053] unbound[7414:0] info: mesh has 0 recursion states (0 with reply, 0 detached), 0 waiting replies, 2 recursion replies sent, 0 replies dropped, 0 states jostled out
[1565196053] unbound[7414:0] info: average recursion processing time 2.559731 sec
[1565196053] unbound[7414:0] info: histogram of recursion processing times
[1565196053] unbound[7414:0] info: [25%]=0 median[50%]=0 [75%]=0
[1565196053] unbound[7414:0] info: lower(secs) upper(secs) recursions
[1565196053] unbound[7414:0] info: 0.524288 1.000000 1
[1565196053] unbound[7414:0] info: 4.000000 8.000000 1
[1565196053] unbound[7414:0] debug: cache memory msg=66072 rrset=66072 infra=9590 val=70000 subnet=74504
[1565196054] unbound[7414:0] info: server stats for thread 1: 1 queries, 0 answers from cache, 1 recursions, 0 prefetch, 0 rejected by ip ratelimiting
[1565196054] unbound[7414:0] info: server stats for thread 1: requestlist max 0 avg 0 exceeded 0 jostled 0
[1565196054] unbound[7414:0] info: mesh has 0 recursion states (0 with reply, 0 detached), 0 waiting replies, 1 recursion replies sent, 0 replies dropped, 0 states jostled out
[1565196054] unbound[7414:0] info: average recursion processing time 9.752778 sec
[1565196054] unbound[7414:0] info: histogram of recursion processing times
[1565196054] unbound[7414:0] info: [25%]=0 median[50%]=0 [75%]=0
[1565196054] unbound[7414:0] info: lower(secs) upper(secs) recursions
[1565196054] unbound[7414:0] info: 8.000000 16.000000 1
[1565196054] unbound[7414:0] debug: cache memory msg=66072 rrset=66072 infra=9590 val=70000 subnet=74504
[1565196054] unbound[7414:0] info: server stats for thread 2: 0 queries, 0 answers from cache, 0 recursions, 0 prefetch, 0 rejected by ip ratelimiting
[1565196054] unbound[7414:0] info: server stats for thread 2: requestlist max 0 avg 0 exceeded 0 jostled 0
[1565196054] unbound[7414:0] info: mesh has 0 recursion states (0 with reply, 0 detached), 0 waiting replies, 0 recursion replies sent, 0 replies dropped, 0 states jostled out
[1565196054] unbound[7414:0] debug: cache memory msg=66072 rrset=66072 infra=9590 val=70000 subnet=74504
[1565196054] unbound[7414:0] debug: Exit cleanup.
[1565196054] unbound[7414:0] debug: switching log to stderr

==> /var/log/syslog <==
Aug 7 18:40:54 localhost systemd[1]: unbound.service: Failed with result 'timeout'.
Aug 7 18:40:54 localhost systemd[1]: Failed to start Unbound DNS server.

Support for regexp filters

It is widely known that Unbound can be used to block ads and trackers (by creating a list of void zones), which is a great feature, but unfortunately it does not support regular expressions in domain names.

For example, I think it would be nice to be able to use rules like these:

local-zone: "ads[0-9]*" static
local-zone: "ad?.*" static
local-zone: "*analytics*" static
local-zone: "=youtube.com" static
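
For comparison, what Unbound supports today is suffix matching only: a local-zone entry covers the given name and everything below it, so blocking without regexps means listing each domain explicitly (the zone names below are illustrative):

```
local-zone: "ads.example.com" always_nxdomain
local-zone: "metrics.example.net" always_nxdomain
```

A static zone answers from configured local data and denies the rest of the zone, while always_nxdomain unconditionally denies the name and all of its subdomains.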

error: outgoing tcp: connect: Invalid argument for

When forwarding queries to a link-local IPv6 address, e.g.:

forward-zone:
  name: "."
  forward-addr: fe80::?:?:?:?

I am seeing this many times in the log:

Aug 23 01:04:33 instance-1 unbound: [619:0] error: outgoing tcp: connect: Invalid argument for fe80::?:?:?:? port 53

The question marks were added to the link-local address to obscure the real address.

The configuration works, but the errors do make a mess in syslog.

Add cache handling to libunbound

Would there be receptivity if I submitted PRs to augment libunbound with the ability to clear a specific query’s result from the caches?

I see logic like this in daemon/remote.c’s do_cache_remove() function.

What I’d like is to make a query to check whether a DNS propagation has taken effect, then redo that query a few seconds later if it hasn’t. Right now the only viable approach is recreating the libunbound context, which carries undesirable overhead. Ideally, I’d be able to tell libunbound to clear just that specific query’s result, then redo the query.

systemd is not notified when chroot is in place

When Unbound is run as a systemd service and is also configured to chroot, it may be that Unbound cannot communicate with systemd through the NOTIFY_SOCKET, which probably lives outside the chroot.
This leads systemd to think the unbound service is unresponsive.

Try to check if that is the case and warn the user.

Add support for multiple background threads/forks in libunbound

In the unbound daemon, we can configure multiple background threads with the

num-threads: <number>

config option. The library accepts the same option, but it has no effect: the ub_resolve_async call always creates exactly one background thread or fork. So the only way to get multiple background threads with the library is to create multiple context objects and call ub_resolve_async on each of them, with the downside of a non-shared cache.

So please add the same feature of multiple background threads from the daemon to the library too.

[feature suggestion] differentiate ipv4/6 traffic up/downstream

In the current implementation (1.9.1), IPv4/IPv6 traffic can only be toggled globally; there is no option to selectively enable IPv4/IPv6 per upstream or downstream.

It would be appreciated if such an option could be provided, improving granularity between upstream and downstream.

Need a way to attempt forward lookup if recursive lookup fails

I need a way to specify that queries should be sent to a forwarder for additional lookup if and only if the recursive lookup fails. I maintain a separate DNS server that does recursive lookups to the Internet (for testing and other diagnostic reasons, among other things), except for a few specific internal zones. However, it appears that AT&T's DNS servers don't accept queries from just anyone, so any lookup of a domain for which their servers are authoritative fails with a timeout. I therefore want the DNS server to forward requests for those domains (and any others that behave the same way) to the standard corporate DNS servers, which can resolve them. I know I can manually add domains as I find them, but that's a bit of a pain and not something I'd like to do if avoidable.

Basically, I'm looking for something like a "forward-last" option, as opposed to the existing "forward-first" one: forwarding would be used as a last resort before returning a failure. Is that something that can be done already, and if so, in what version (I'm currently using 1.6.6)? If not, is it something that could be added?
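
For reference, the existing option covers only the opposite order: in unbound.conf, forward-first tries the forwarder first and falls back to full recursion on failure (the zone name and address below are illustrative), whereas the request here is for a forwarder tried last:

```
forward-zone:
    name: "example.com"
    forward-addr: 192.0.2.1
    forward-first: yes
```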

use-caps-for-id: try

Add a weaker form of 0x20 that registers, on first contact, whether an upstream authoritative server can handle DNS-0x20.
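
Today use-caps-for-id is a plain yes/no; the closest existing mitigation for broken upstreams is exempting them by hand (the domain below is illustrative):

```
server:
    use-caps-for-id: yes
    caps-whitelist: "example.org"
```

A "try" mode as suggested would effectively automate building that exemption list on first contact.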

[bug] no fallback to ipv4 for auth-zone if node's ipv6 upstream connectivity is not available

1.9.1 on OpenWrt 18.06 trunk


Normally, when the node's IPv6 upstream connectivity/link is unavailable, Unbound's upstream query traffic falls back to IPv4 and DNS queries are resolved.

However, for some inexplicable reason that is not working for traffic specified via:

auth-zone:
  name: "."
  for-downstream: no
  fallback-enabled: yes
  zonefile: "/srv/unbound/zone files/root"
  master: e.root-servers.net
  master: d.root-servers.net
  master: c.root-servers.net
  master: b.root-servers.net
  master: k.root-servers.net
  master: g.root-servers.net
  master: f.root-servers.net

The Unbound log shows the apparent attempts to contact the root zone servers via IPv6, which fail as expected while the node's IPv6 upstream connectivity/link is unavailable. However, it would then be expected to fall back to IPv4, the same as Unbound does for upstream query traffic that is not specified in auth-zone:

info: resolving f.root-servers.net. A IN
info: resolving f.root-servers.net. AAAA IN
info: resolving g.root-servers.net. A IN
info: resolving g.root-servers.net. AAAA IN
info: resolving k.root-servers.net. A IN
info: resolving k.root-servers.net. AAAA IN
info: resolving b.root-servers.net. A IN
info: resolving b.root-servers.net. AAAA IN
info: resolving c.root-servers.net. A IN
info: resolving c.root-servers.net. AAAA IN
info: resolving d.root-servers.net. A IN
info: resolving d.root-servers.net. AAAA IN
info: resolving e.root-servers.net. A IN
info: resolving e.root-servers.net. AAAA IN
notice: sendto failed: Permission denied
notice: remote address is 2001:500:2f::f port 53
notice: sendto failed: Permission denied
notice: remote address is 2001:500:12::d0d port 53
notice: sendto failed: Permission denied
notice: remote address is 2001:7fd::1 port 53
notice: sendto failed: Permission denied
notice: remote address is 2001:500:200::b port 53
notice: sendto failed: Permission denied
notice: remote address is 2001:500:2::c port 53
notice: sendto failed: Permission denied
notice: remote address is 2001:500:2d::d port 53
notice: sendto failed: Permission denied
notice: remote address is 2001:500:a8::e port 53


It seems that in this case Unbound does not attempt IPv4 connectivity and just gives up, even with the global server setting prefer-ip6: no.
The only way to get it working appears to be setting do-ip6: no.

regexps with libpcre-8.43 in Unbound

Hello, Wouter!

I remember my earlier patch related to the calc_hash() function, but now I'm focused on regexps in the best and fastest resolver in the world ;) So, what I need:

  1. I would like to filter incoming DNS queries (answering NXDOMAIN) using a set of regular expressions like these (the example set has 6 rules):
    "^[a-z,0-9,-](.)?[a-z,0-9,-](xxx2)[a-z,0-9,-].(ripn)[a-z,0-9,-.](.)?$"
    "^[a-z,0-9,-](.)?[a-z,0-9,-](xxx3)[a-z,0-9,-].(ripn)[a-z,0-9,-.](.)?$"
    "^[a-z,0-9,-](.)?[a-z,0-9,-](xxx4)[a-z,0-9,-].(ripn)[a-z,0-9,-.](.)?$"
    "^[a-z,0-9,-](.)?[a-z,0-9,-](xxx5)[a-z,0-9,-].(ripn)[a-z,0-9,-.](.)?$"
    "^[a-z,0-9,-](.)?[a-z,0-9,-](xxx6)[a-z,0-9,-].(ripn)[a-z,0-9,-.](.)?$"
    "^[a-z,0-9,-](.)?[a-z,0-9,-](xxx7)[a-z,0-9,-].(ripn)[a-z,0-9,-.](.)?$"
  2. I want Unbound's performance to be unaffected by filtering incoming queries with these rules
  3. All the rules need to be loaded/reloaded from unbound.conf

This feature can be implemented with Python (using the python module in Unbound), but the performance in that case is too poor (15000 replies per second), and the bottleneck is the invalidateQueryInCache() call from the Python script.

And here is what I have done so far:

  1. I wrote a simple C file (with its own header file) that calls into libpcre-8.43 (PCRE1). I won't give a detailed description of the functions; I think you will see everything for yourself:
  • the header fastregexp/fastregexp.h:

#include <pcre.h>

struct my_regex {
	pcre *my_reCompiled;
	pcre_extra *my_pcreExtra;
	pcre_jit_stack *my_jit_stack;
	struct my_regex *next;
};

void cleanup_fast_regexp(struct my_regex *my_regex);
int do_fast_regexp(struct my_regex *my_regex, char *testString);
struct my_regex *study_fast_regexp(struct my_regex *my_regex);
struct my_regex *compile_fast_regexp(struct my_regex *my_regex, char *aRegexStrV[], int num_aRegexStrV);

  • C-file fastregexp/fastregexp.c:
    #include "config.h"
    #include "util/log.h"
    #include "fastregexp/fastregexp.h"
    #include <pcre.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

void cleanup_fast_regexp(struct my_regex *my_regex)
{
	struct my_regex *my_regex_next;

	log_err("Cleaning up all regex structures");

	/* walk the chain, saving the next pointer before freeing the node */
	while(my_regex != NULL) {
		my_regex_next = my_regex->next;
		free(my_regex);
		my_regex = my_regex_next;
	}
}

int do_fast_regexp(struct my_regex *my_regex, char *testString)
{
	int subStrVec[30];

	while(my_regex != NULL) {
		int pcreExecRet = pcre_jit_exec(my_regex->my_reCompiled,
			my_regex->my_pcreExtra,
			testString,
			strlen(testString),
			0,
			0,
			subStrVec,
			30,
			my_regex->my_jit_stack);

		if(pcreExecRet >= 0)
			return 1;
		my_regex = my_regex->next;
	} /* end of while */

	return 0;
}
struct my_regex *study_fast_regexp(struct my_regex *my_regex)
{
	pcre_extra *pcreExtra;
	const char *pcreErrorStr;
	struct my_regex *my_regex_start = my_regex;
	pcre_jit_stack *jit_stack;

	while(my_regex != NULL) {
		pcreExtra = pcre_study(my_regex->my_reCompiled, PCRE_STUDY_JIT_COMPILE, &pcreErrorStr);
		/* pcre_study() returns NULL both on error and when it cannot
		 * optimize the regex; the last argument is how errors are
		 * detected (it is NULL on success, and points to an error
		 * string otherwise). */
		if(pcreErrorStr != NULL) {
			log_err("fastregexp: JIT optimization error: %s. Cleaning up all regex structures", pcreErrorStr);
			cleanup_fast_regexp(my_regex_start);
			return NULL;
		}

		jit_stack = pcre_jit_stack_alloc(32*1024, 1024*1024);
		pcre_assign_jit_stack(pcreExtra, NULL, jit_stack);

		my_regex->my_pcreExtra = pcreExtra;
		my_regex->my_jit_stack = jit_stack;
		my_regex = my_regex->next;
	} /* end of while */

	return my_regex_start;
}
struct my_regex *compile_fast_regexp(struct my_regex *my_regex, char *aRegexStrV[], int num_aRegexStrV)
{
	pcre *reCompiled;
	const char *pcreErrorStr;
	int pcreErrorOffset;

	struct my_regex *my_regex_prev = NULL;
	struct my_regex *my_regex_start = NULL;

	for(int i=0; i<num_aRegexStrV; i++) {
		//log_err("the regex is: %s", aRegexStrV[i]);
		if((my_regex = (struct my_regex*) malloc(sizeof(struct my_regex))) == NULL) {
			log_err("fastregexp: general memory allocation error");
			return NULL;
		}

		if(my_regex_prev != NULL)
			my_regex_prev->next = my_regex;
		else
			my_regex_start = my_regex;

		reCompiled = pcre_compile(aRegexStrV[i], 0, &pcreErrorStr, &pcreErrorOffset, NULL);
		if(reCompiled == NULL) {
			log_err("fastregexp: could not compile regex %s: %s. Cleaning up all regex structures", aRegexStrV[i], pcreErrorStr);
			cleanup_fast_regexp(my_regex_start);
			return NULL;
		}

		my_regex->my_reCompiled = reCompiled;
		my_regex->next = NULL;
		my_regex_prev = my_regex;
	} /* end of for */

	return my_regex_start;
}

Next, I patched the following of your source files:
--- unbound-1.9.2.orig/util/module.h 2019-06-17 11:50:16.000000000 +0300
+++ unbound-1.9.2/util/module.h 2019-09-16 11:54:20.302813000 +0300
@@ -156,6 +156,8 @@
#include "util/storage/lruhash.h"
#include "util/data/msgreply.h"
#include "util/data/msgparse.h"
+//igorr
+#include "fastregexp/fastregexp.h"
struct sldns_buffer;
struct alloc_cache;
struct rrset_cache;
@@ -512,6 +514,10 @@

    /* Make every mesh state unique, do not aggregate mesh states. */
    int unique_mesh;
+	//igorr
+	/* pointer to the my_regex structure to perform fast PCRE regexp's */
+	struct my_regex *my_fast_regexp;

};

/**

--- unbound-1.9.2.orig/daemon/worker.c 2019-06-17 11:50:16.000000000 +0300
+++ unbound-1.9.2/daemon/worker.c 2019-09-17 13:00:20.176700000 +0300
@@ -1892,6 +1892,11 @@
worker->env.cfg->stat_interval);
worker_restart_timer(worker);
}
+

+	//igorr
+	if((worker->env.my_fast_regexp = compile_fast_regexp(worker->env.my_fast_regexp, worker->env.cfg->regexstrv, worker->env.cfg->num_regexstrv)) != NULL)
+		worker->env.my_fast_regexp = study_fast_regexp(worker->env.my_fast_regexp);
+	return 1;

}

@@ -1933,6 +1938,8 @@
alloc_clear(&worker->alloc);
regional_destroy(worker->env.scratch);
regional_destroy(worker->scratchpad);

+	//igorr
+	cleanup_fast_regexp(worker->env.my_fast_regexp);
 	free(worker);
    

}

--- unbound-1.9.2.orig/iterator/iterator.c 2019-06-17 11:50:16.000000000 +0300
+++ unbound-1.9.2/iterator/iterator.c 2019-09-16 12:34:32.062665000 +0300
@@ -160,6 +160,7 @@
outbound_list_init(&iq->outlist);
iq->minimise_count = 0;
iq->minimise_timeout_count = 0;
+
if (qstate->env->cfg->qname_minimisation)
iq->minimisation_state = INIT_MINIMISE_STATE;
else
@@ -2576,6 +2577,23 @@
enum response_type type;
iq->num_current_queries--;

+	//igorr
+	if(qstate->env->my_fast_regexp != NULL) {
+		char my_qname[1024];
+		char *my_qname_p = my_qname;
+		char qdn_buf[1024];
+		char *qdn_buf_p = qdn_buf;
+		size_t qdn_buf_len = sizeof(qdn_buf);
+		strcpy(my_qname_p, (char*)qstate->qinfo.qname);
+		size_t my_qname_len = qstate->qinfo.qname_len;
+		sldns_wire2str_dname_scan(&my_qname_p, &my_qname_len, &qdn_buf_p, &qdn_buf_len, NULL, 0);
+		if(do_fast_regexp(qstate->env->my_fast_regexp, qdn_buf) == 1) {
+			log_warn("blacklisting the domain name: %s", qdn_buf);
+			return error_response_cache(qstate, id, LDNS_RCODE_NXDOMAIN);
+		}
+	}
 	if(!inplace_cb_query_response_call(qstate->env, qstate, iq->response))
 		log_err("unable to call query_response callback");
    

--- unbound-1.9.2.orig/util/config_file.h 2019-06-17 11:50:16.000000000 +0300
+++ unbound-1.9.2/util/config_file.h 2019-09-16 13:07:10.312655000 +0300
@@ -575,6 +575,10 @@
int redis_timeout;
#endif
#endif

+	//igorr
+	/* fastregexp regexp descriptions */
+	int num_regexstrv;
+	char **regexstrv;

};

/** from cfg username, after daemonize setup performed */

--- unbound-1.9.2.orig/util/config_file.c 2019-06-17 11:50:16.000000000 +0300
+++ unbound-1.9.2/util/config_file.c 2019-09-16 17:28:00.678244000 +0300
@@ -327,6 +327,9 @@
cfg->cachedb_backend = NULL;
cfg->cachedb_secret = NULL;
#endif

+	//igorr
+	cfg->num_regexstrv = 0;
+	cfg->regexstrv = NULL;
 	return cfg;
    

error_exit:
config_delete(cfg);
@@ -1092,6 +1095,8 @@
else O_STR(opt, "backend", cachedb_backend)
else O_STR(opt, "secret-seed", cachedb_secret)
#endif

+	//igorr
+	else O_IFC(opt, "pattern", num_regexstrv, regexstrv)
 	/* not here:
 	 * outgoing-permit, outgoing-avoid - have list of ports
 	 * local-zone - zones and nodefault variables
    

@@ -1428,6 +1433,8 @@
free(cfg->cachedb_backend);
free(cfg->cachedb_secret);
#endif

+	//igorr
+	config_del_strarray(cfg->regexstrv, cfg->num_regexstrv);
 	free(cfg);
    

}

--- unbound-1.9.2.orig/util/configparser.y 2019-06-17 11:50:16.000000000 +0300
+++ unbound-1.9.2/util/configparser.y 2019-09-16 17:27:35.678485000 +0300
@@ -158,6 +158,7 @@
%token VAR_IPSECMOD_MAX_TTL VAR_IPSECMOD_WHITELIST VAR_IPSECMOD_STRICT
%token VAR_CACHEDB VAR_CACHEDB_BACKEND VAR_CACHEDB_SECRETSEED
%token VAR_CACHEDB_REDISHOST VAR_CACHEDB_REDISPORT VAR_CACHEDB_REDISTIMEOUT
+%token VAR_REGEXP VAR_REGEXP_PATTERN
%token VAR_UDP_UPSTREAM_WITHOUT_DOWNSTREAM VAR_FOR_UPSTREAM
%token VAR_AUTH_ZONE VAR_ZONEFILE VAR_MASTER VAR_URL VAR_FOR_DOWNSTREAM
%token VAR_FALLBACK_ENABLED VAR_TLS_ADDITIONAL_PORT VAR_LOW_RTT VAR_LOW_RTT_PERMIL
@@ -174,7 +175,7 @@
 	forwardstart contents_forward | pythonstart contents_py |
 	rcstart contents_rc | dtstart contents_dt | viewstart contents_view |
 	dnscstart contents_dnsc | cachedbstart contents_cachedb |
-	authstart contents_auth
+	authstart contents_auth | regexpstart contents_regexp
 	;

/* server: declaration */
@@ -2959,6 +2960,28 @@
 	}
 }
 ;
+regexpstart: VAR_REGEXP
+	{
+		OUTYY(("\nP(regexp:)\n"));
+	}
+	;
+contents_regexp: contents_regexp content_regexp
+	| ;
+content_regexp: regexp_pattern
+	;
+regexp_pattern: VAR_REGEXP_PATTERN STRING_ARG
+	{
+		OUTYY(("P(regexp_pattern:%s)\n", $2));
+		if(cfg_parser->cfg->num_regexstrv == 0)
+			cfg_parser->cfg->regexstrv = calloc(1, sizeof(char*));
+		else	cfg_parser->cfg->regexstrv = realloc(cfg_parser->cfg->regexstrv,
+				(cfg_parser->cfg->num_regexstrv+1)*sizeof(char*));
+		if(!cfg_parser->cfg->regexstrv)
+			yyerror("out of memory");
+		else
+			cfg_parser->cfg->regexstrv[cfg_parser->cfg->num_regexstrv++] = $2;
+	}
+	;

%%

/* parse helper routines could be here */

--- unbound-1.9.2.orig/util/configlexer.lex 2019-06-17 11:50:16.000000000 +0300
+++ unbound-1.9.2/util/configlexer.lex 2019-09-16 15:04:30.764354000 +0300
@@ -483,6 +483,8 @@
redis-server-host{COLON} { YDVAR(1, VAR_CACHEDB_REDISHOST) }
redis-server-port{COLON} { YDVAR(1, VAR_CACHEDB_REDISPORT) }
redis-timeout{COLON} { YDVAR(1, VAR_CACHEDB_REDISTIMEOUT) }
+regexp{COLON} { YDVAR(0, VAR_REGEXP) }
+pattern{COLON} { YDVAR(1, VAR_REGEXP_PATTERN) }
udp-upstream-without-downstream{COLON} { YDVAR(1, VAR_UDP_UPSTREAM_WITHOUT_DOWNSTREAM) }
tcp-connection-limit{COLON} { YDVAR(2, VAR_TCP_CONNECTION_LIMIT) }
<INITIAL,val>{NEWLINE} { LEXOUT(("NL\n")); cfg_parser->line++; }

--- unbound-1.9.2.orig/Makefile 2019-09-17 13:38:35.414726000 +0300
+++ unbound-1.9.2/Makefile 2019-09-16 12:31:51.334154000 +0300
@@ -59,14 +59,14 @@
PYTHON_CPPFLAGS=-I. -I/usr/local/include/python2.7
CFLAGS=-DSRCDIR=$(srcdir) -g -O2 -D_THREAD_SAFE -pthread
LDFLAGS=-L/usr/local/lib -L/usr/local/lib -L/usr/local/lib
-LIBS=-lutil -levent -L/usr/local/lib -L/usr/local/lib/python2.7 -L. -lpython2.7 -lcrypto -lhiredis
+LIBS=-lutil -levent -L/usr/local/lib -L/usr/local/lib/python2.7 -L. -lpython2.7 -lcrypto -lhiredis -lpcre
LIBOBJS= ${LIBOBJDIR}explicit_bzero$U.o ${LIBOBJDIR}reallocarray$U.o

# filter out ctime_r from compat obj.

LIBOBJ_WITHOUT_CTIME= explicit_bzero.o reallocarray.o
LIBOBJ_WITHOUT_CTIMEARC4= explicit_bzero.o
RUNTIME_PATH= -R/usr/local/lib
DEPFLAG=-MM
-DATE=20190917
+DATE=20190912
LIBTOOL=$(libtool)
BUILD=build/
UBSYMS=-export-symbols $(srcdir)/libunbound/ubsyms.def
@@ -126,7 +126,8 @@
 edns-subnet/edns-subnet.c edns-subnet/subnetmod.c \
 edns-subnet/addrtree.c edns-subnet/subnet-whitelist.c \
 cachedb/cachedb.c cachedb/redis.c respip/respip.c $(CHECKLOCK_SRC) \
-$(DNSTAP_SRC) $(DNSCRYPT_SRC) $(IPSECMOD_SRC)
+$(DNSTAP_SRC) $(DNSCRYPT_SRC) $(IPSECMOD_SRC) \
+fastregexp/fastregexp.c
COMMON_OBJ_WITHOUT_NETCALL=dns.lo infra.lo rrset.lo dname.lo msgencode.lo
as112.lo msgparse.lo msgreply.lo packed_rrset.lo iterator.lo iter_delegpt.lo
iter_donotq.lo iter_fwd.lo iter_hints.lo iter_priv.lo iter_resptype.lo
@@ -139,7 +140,7 @@
 validator.lo val_kcache.lo val_kentry.lo val_neg.lo val_nsec3.lo val_nsec.lo \
 val_secalgo.lo val_sigcrypt.lo val_utils.lo dns64.lo cachedb.lo redis.lo authzone.lo \
 $(SUBNET_OBJ) $(PYTHONMOD_OBJ) $(CHECKLOCK_OBJ) $(DNSTAP_OBJ) $(DNSCRYPT_OBJ) \
-$(IPSECMOD_OBJ) respip.lo
+$(IPSECMOD_OBJ) respip.lo fastregexp.lo
COMMON_OBJ_WITHOUT_UB_EVENT=$(COMMON_OBJ_WITHOUT_NETCALL) netevent.lo listen_dnsport.lo
outside_network.lo
COMMON_OBJ=$(COMMON_OBJ_WITHOUT_UB_EVENT) ub_event.lo
@@ -692,7 +693,7 @@
$(srcdir)/services/modstack.h $(srcdir)/util/net_help.h $(srcdir)/util/regional.h $(srcdir)/util/data/dname.h
$(srcdir)/util/data/msgencode.h $(srcdir)/util/fptr_wlist.h $(srcdir)/util/tube.h $(srcdir)/util/config_file.h
$(srcdir)/util/random.h $(srcdir)/sldns/wire2str.h $(srcdir)/sldns/str2wire.h $(srcdir)/sldns/parseutil.h \

-$(srcdir)/sldns/sbuffer.h
+$(srcdir)/sldns/sbuffer.h $(srcdir)/fastregexp/fastregexp.h
    iter_delegpt.lo iter_delegpt.o: $(srcdir)/iterator/iter_delegpt.c config.h $(srcdir)/iterator/iter_delegpt.h
    $(srcdir)/util/log.h $(srcdir)/services/cache/dns.h $(srcdir)/util/storage/lruhash.h $(srcdir)/util/locks.h
    $(srcdir)/util/data/msgreply.h $(srcdir)/util/data/packed_rrset.h $(srcdir)/util/regional.h
    @@ -1214,7 +1215,7 @@
    $(srcdir)/util/fptr_wlist.h $(srcdir)/util/tube.h $(srcdir)/util/edns.h $(srcdir)/iterator/iter_fwd.h
    $(srcdir)/iterator/iter_hints.h $(srcdir)/validator/autotrust.h $(srcdir)/validator/val_anchor.h
    $(srcdir)/respip/respip.h $(srcdir)/libunbound/context.h $(srcdir)/libunbound/unbound-event.h \
-$(srcdir)/libunbound/libworker.h $(srcdir)/sldns/wire2str.h $(srcdir)/util/shm_side/shm_main.h
+$(srcdir)/libunbound/libworker.h $(srcdir)/sldns/wire2str.h $(srcdir)/util/shm_side/shm_main.h $(srcdir)/fastregexp/fastregexp.h
    testbound.lo testbound.o: $(srcdir)/testcode/testbound.c config.h $(srcdir)/testcode/testpkts.h
    $(srcdir)/testcode/replay.h $(srcdir)/util/netevent.h $(srcdir)/dnscrypt/dnscrypt.h
    $(srcdir)/util/rbtree.h $(srcdir)/testcode/fake_event.h
    @@ -1247,7 +1248,7 @@
    $(srcdir)/util/fptr_wlist.h $(srcdir)/util/tube.h $(srcdir)/util/edns.h $(srcdir)/iterator/iter_fwd.h
    $(srcdir)/iterator/iter_hints.h $(srcdir)/validator/autotrust.h $(srcdir)/validator/val_anchor.h
    $(srcdir)/respip/respip.h $(srcdir)/libunbound/context.h $(srcdir)/libunbound/unbound-event.h \
-$(srcdir)/libunbound/libworker.h $(srcdir)/sldns/wire2str.h $(srcdir)/util/shm_side/shm_main.h
+$(srcdir)/libunbound/libworker.h $(srcdir)/sldns/wire2str.h $(srcdir)/util/shm_side/shm_main.h $(srcdir)/fastregexp/fastregexp.h
    acl_list.lo acl_list.o: $(srcdir)/daemon/acl_list.c config.h $(srcdir)/daemon/acl_list.h
    $(srcdir)/util/storage/dnstree.h $(srcdir)/util/rbtree.h $(srcdir)/services/view.h $(srcdir)/util/locks.h
    $(srcdir)/util/log.h $(srcdir)/util/regional.h $(srcdir)/util/config_file.h $(srcdir)/util/net_help.h
    @@ -1462,3 +1463,4 @@
    reallocarray.lo reallocarray.o: $(srcdir)/compat/reallocarray.c config.h
    isblank.lo isblank.o: $(srcdir)/compat/isblank.c config.h
    strsep.lo strsep.o: $(srcdir)/compat/strsep.c config.h
    +fastregexp.lo fastregexp.o: $(srcdir)/fastregexp/fastregexp.c config.h $(srcdir)/fastregexp/fastregexp.h $(srcdir)/util/log.h

That's all, if I didn't forget anything. About the Makefile: I know the right way is to patch Makefile.in, but for now I'm only interested in the end result in terms of stability and performance. As for the yacc/lex sources, I tried to add my two options (regexp: and pattern:) by following the existing declarations of config options, and it was quite hard for me ;)

Now, what I have:

  •   Performance: 110000-120000 replies per second (my sandbox is a KVM virtual machine)
  •   I can add/remove regexps in unbound.conf. The structure of this config section is the following:
    regexp:
    pattern: "^[a-z,0-9,-](.)?[a-z,0-9,-](xxx2)[a-z,0-9,-].(ripn)[a-z,0-9,-.](.)?$"
    pattern: "^[a-z,0-9,-](.)?[a-z,0-9,-](xxx3)[a-z,0-9,-].(ripn)[a-z,0-9,-.](.)?$"
    pattern: "^[a-z,0-9,-](.)?[a-z,0-9,-](xxx4)[a-z,0-9,-].(ripn)[a-z,0-9,-.](.)?$"
    pattern: "^[a-z,0-9,-](.)?[a-z,0-9,-](xxx5)[a-z,0-9,-].(ripn)[a-z,0-9,-.](.)?$"
    pattern: "^[a-z,0-9,-](.)?[a-z,0-9,-](xxx6)[a-z,0-9,-].(ripn)[a-z,0-9,-.](.)?$"
    pattern: "^[a-z,0-9,-](.)?[a-z,0-9,-](xxx7)[a-z,0-9,-].(ripn)[a-z,0-9,-.](.)?$"

But I have several issues:

  •   a memory leak after "unbound-control reload" when I add or remove regexp rules
  •   if I use the redis cachedb and add regexp rules that should filter queries which were previously serviced, such queries are still resolved from redis
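On the reload leak, a likely cause is that the compiled pattern set is rebuilt on reload without freeing the previous one. A sketch of a teardown helper, under the assumption that the patterns are stored in an array of compiled regexes (the struct, field, and function names below are hypothetical, since fastregexp.c is not shown; it uses POSIX regex.h rather than PCRE for illustration):

```c
#include <regex.h>
#include <stdlib.h>

/* Hypothetical container for the compiled pattern set; names are
 * illustrative, not taken from the patch. */
struct fast_regexp {
	regex_t *res;	/* compiled patterns */
	int num;	/* number of patterns */
};

/* Free the whole set. Call this on the old set during reload before
 * compiling the new patterns; otherwise every reload leaks one
 * compiled regex per pattern. Safe to call with NULL. */
static void fast_regexp_delete(struct fast_regexp *fr)
{
	int i;
	if(!fr)
		return;
	for(i = 0; i < fr->num; i++)
		regfree(&fr->res[i]);
	free(fr->res);
	free(fr);
}
```

On the redis point: presumably the cachedb module answers from the shared cache before the iterator runs, so a filter applied inside the iterator never sees queries that are served from redis; flushing the relevant cache entries after adding patterns, or filtering earlier in the module chain, would likely be needed.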

What I would like now is your authoritative opinion on whether everything I did is right, or whether (and this is most likely) I went wrong somewhere in my code. Could you please review my patches and tell me what else I have to do?

Big thank you in advance!
