ci's Introduction

CoreDNS

CoreDNS is a DNS server/forwarder, written in Go, that chains plugins. Each plugin performs a (DNS) function.

CoreDNS is a Cloud Native Computing Foundation graduated project.

CoreDNS is a fast and flexible DNS server. The key word here is flexible: with CoreDNS you are able to do what you want with your DNS data by utilizing plugins. If some functionality is not provided out of the box you can add it by writing a plugin.

CoreDNS can listen for DNS requests coming in over:

  • UDP/TCP (plain old DNS).
  • TLS (DoT).
  • DNS over HTTP/2 (DoH).
  • DNS over QUIC (DoQ).
  • gRPC.

Currently CoreDNS is able to:

  • Serve zone data from a file; both DNSSEC (NSEC only) and plain DNS are supported (file and auto).
  • Retrieve zone data from primaries, i.e., act as a secondary server (AXFR only) (secondary).
  • Sign zone data on-the-fly (dnssec).
  • Load balance responses (loadbalance).
  • Allow zone transfers, i.e., act as a primary server (file + transfer).
  • Automatically load zone files from disk (auto).
  • Cache DNS responses (cache).
  • Use etcd as a backend (replacing SkyDNS) (etcd).
  • Use k8s (kubernetes) as a backend (kubernetes).
  • Serve as a proxy to forward queries to some other (recursive) nameserver (forward).
  • Provide metrics (by using Prometheus) (prometheus).
  • Provide query logging (log) and error logging (errors).
  • Integrate with cloud providers (route53).
  • Support the CH class: version.bind and friends (chaos).
  • Support the RFC 5001 DNS name server identifier (NSID) option (nsid).
  • Support profiling (pprof).
  • Rewrite queries (qtype, qclass and qname) (rewrite and template).
  • Block ANY queries (any).
  • Provide DNS64 IPv6 translation (dns64).

And more. Each of the plugins is documented. See coredns.io/plugins for all in-tree plugins, and coredns.io/explugins for all out-of-tree plugins.
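To give a concrete sense of how plugins chain, here is a sketch of a Corefile (the Corefile format is explained in the Examples section below) that combines several of the plugins listed above: cache responses for up to 30 seconds, forward everything upstream, export Prometheus metrics, and log errors and queries:

.:53 {
    cache 30
    forward . 8.8.8.8:53
    prometheus
    errors
    log
}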

Compilation from Source

To compile CoreDNS, we assume you have a working Go setup. See various tutorials if you don’t have that already configured.

First, make sure your Go version is 1.21 or higher, as Go module support and other newer APIs are required. See here for Go module details. Then, check out the project and run make to compile the binary:

$ git clone https://github.com/coredns/coredns
$ cd coredns
$ make

This should yield a coredns binary.
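A quick sanity check of the resulting binary is to print its version and the plugins compiled into it:

$ ./coredns -version
$ ./coredns -plugins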

Compilation with Docker

CoreDNS requires Go to compile. However, if you already have Docker installed and prefer not to set up a Go environment, you can build CoreDNS easily:

docker run --rm -i -t \
    -v $PWD:/go/src/github.com/coredns/coredns -w /go/src/github.com/coredns/coredns \
        golang:1.21 sh -c 'GOFLAGS="-buildvcs=false" make gen && GOFLAGS="-buildvcs=false" make'

The above command alone will produce the coredns binary.
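If you just want to run CoreDNS rather than build it, there is also an official image on Docker Hub; a minimal sketch that runs it with a Corefile from the current directory (the paths and port mapping here are illustrative):

docker run --rm -p 1053:53/udp -v $PWD/Corefile:/Corefile \
    coredns/coredns -conf /Corefile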

Examples

When starting CoreDNS without any configuration, it loads the whoami and log plugins and starts listening on port 53 (override with -dns.port). It should show the following:

.:53
CoreDNS-1.6.6
linux/amd64, go1.16.10, aa8c32

You can then query the running CoreDNS server with:

dig @127.0.0.1 -p 53 www.example.com

Any query sent to port 53 should return some information: your sending address, port, and the protocol used. The query should also be logged to standard output.

The configuration of CoreDNS is done through a file named Corefile. When CoreDNS starts, it looks for a Corefile in the current working directory. A Corefile for a CoreDNS server that listens on port 53 and enables the whoami plugin is:

.:53 {
    whoami
}

Sometimes port 53 is already occupied by system processes. In that case you can modify the Corefile as shown below so that the CoreDNS server starts on port 1053 instead.

.:1053 {
    whoami
}

If your Corefile does not specify a port number, CoreDNS will use port 53 by default, but you can override the port with the -dns.port flag: coredns -dns.port 1053 runs the server on port 1053.

You may import other text files into the Corefile using the import directive. You can use globs to match multiple files with a single import directive.

.:53 {
    import example1.txt
}
import example2.txt

You can use environment variables in the Corefile with {$VARIABLE}. Note that each environment variable is inserted into the Corefile as a single token. For example, an environment variable with a space in it will be treated as a single token, not as two separate tokens.

.:53 {
    {$ENV_VAR}
}
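For example, with a hypothetical value that names a plugin, the variable expands to a single whoami directive when CoreDNS starts:

$ ENV_VAR=whoami ./coredns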

A Corefile for a CoreDNS server that forwards any queries to an upstream DNS server (e.g., 8.8.8.8) is as follows:

.:53 {
    forward . 8.8.8.8:53
    log
}

Start CoreDNS and then query on that port (53). The query should be forwarded to 8.8.8.8 and the response will be returned. Each query should also show up in the log which is printed on standard output.

To serve the DNSSEC-signed (NSEC) example.org on port 1053, with errors and query logging sent to standard output, allow zone transfers to everybody, but specifically mention one IP address so that CoreDNS can send notifies to it:

example.org:1053 {
    file /var/lib/coredns/example.org.signed
    transfer {
        to * 2001:500:8f::53
    }
    errors
    log
}
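You can check the transfer behavior with dig by requesting a full zone transfer (AXFR) from the running server:

dig @127.0.0.1 -p 1053 example.org AXFR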

Serve example.org on port 1053, but forward everything that does not match example.org to a recursive nameserver and rewrite ANY queries to HINFO.

example.org:1053 {
    file /var/lib/coredns/example.org.signed
    transfer {
        to * 2001:500:8f::53
    }
    errors
    log
}

. {
    any
    forward . 8.8.8.8:53
    errors
    log
}

IP addresses are also allowed. They are automatically converted to reverse zones:

10.0.0.0/24 {
    whoami
}

This means you are authoritative for 0.0.10.in-addr.arpa.

This also works for IPv6 addresses. If for some reason you want to serve a zone named 10.0.0.0/24, add the closing dot, 10.0.0.0/24., as this stops the conversion.

This even works for CIDR (see RFC 1518 and 1519) addressing, e.g. 10.0.0.0/25; CoreDNS will then check whether the in-addr request falls within the correct range.
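For example, with the 10.0.0.0/24 block above, a reverse lookup such as:

dig @127.0.0.1 -x 10.0.0.17

is handled by that server block; dig's -x flag constructs the PTR query for 17.0.0.10.in-addr.arpa for you.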

Listening on TLS (DoT) and for gRPC? Use:

tls://example.org grpc://example.org {
    whoami
}
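As in the QUIC and HTTPS examples below, you will normally also configure a certificate and key with the tls directive; a sketch with placeholder paths:

tls://example.org grpc://example.org {
    tls mycert mykey
    whoami
}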

Similarly, for QUIC (DoQ):

quic://example.org {
    whoami
    tls mycert mykey
}

And for DNS over HTTP/2 (DoH) use:

https://example.org {
    whoami
    tls mycert mykey
}
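Assuming your dig is built with DoH support (BIND 9.18 or later), such a server can be queried directly:

dig +https @127.0.0.1 www.example.com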

In this setup, CoreDNS is responsible for TLS termination.

You can also start a DNS server serving DoH without TLS termination (plain HTTP). Beware that in such a scenario there has to be a TLS-terminating proxy in front of the CoreDNS instance that forwards the DNS requests; otherwise clients will not be able to communicate with the server via DoH:

https://example.org {
    whoami
}

Specifying ports works in the same way:

grpc://example.org:1443 https://example.org:1444 {
    # ...
}

When no transport protocol is specified the default dns:// is assumed.
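For example, the following two server blocks are equivalent ways of writing the same thing:

dns://example.org:1053 {
    whoami
}

example.org:1053 {
    whoami
}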

Community

We're most active on GitHub (https://github.com/coredns/coredns) and on Slack (#coredns on CNCF Slack).

More resources can be found on the CoreDNS website, coredns.io.

Contribution guidelines

If you want to contribute to CoreDNS, be sure to review the contribution guidelines.

Deployment

Examples for deployment via systemd and other use cases can be found in the deployment repository.

Deprecation Policy

When there is a backwards incompatible change in CoreDNS the following process is followed:

  • Release x.y.z: Announce that in the next release we will make backward incompatible changes.
  • Release x.y+1.0: Increase the minor version and set the patch version to 0. Make the changes, but allow the old configuration to be parsed. I.e. CoreDNS will start from an unchanged Corefile.
  • Release x.y+1.1: Increase the patch version to 1. Remove the lenient parsing, so CoreDNS will not start if those features are still used.

E.g., 1.3.1 announces a change; 1.4.0 is a new release with the change but a backward-compatible config; and finally 1.4.1 removes the config workarounds.

Security

Security Audits

Third-party security audits have been performed by Cure53 and Trail of Bits.

Reporting security vulnerabilities

If you find a security vulnerability or any security-related issue, please DO NOT file a public issue; instead, send your report privately to [email protected]. Security reports are greatly appreciated and we will publicly thank you for them.

Please consult the security vulnerability disclosures and the security fix and release process document.

ci's People

Contributors

aojea, chrisohaver, johnbelamaric, miekg, nyodas, rajansandeep, stuartnelson3, xuanwo, yongtang

ci's Issues

kubernetes: TestKubernetesAPIFallthrough has confusing output

The TestKubernetesAPIFallthrough test expects to see connection errors, but those errors are printed to the log. We should hide them, because they look like test failures:

=== RUN   TestKubernetesAPIFallthrough
E0208 23:21:07.036014    4158 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:268: Failed to list *v1.Endpoints: an error on the server ("Unable to establish connection to upstream tcp://invalidip:8080: dial tcp: lookup invalidip on 147.75.207.207:53: no such host") has prevented the request from succeeding (get endpoints)
E0208 23:21:07.036036    4158 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:267: Failed to list *v1.Service: an error on the server ("Unable to establish connection to upstream tcp://invalidip:8080: dial tcp: lookup invalidip on 147.75.207.207:53: no such host") has prevented the request from succeeding (get services)
--- PASS: TestKubernetesAPIFallthrough (4.20s)

error message in log

The integration test has a "not found" error in its output. The [[ construct used in the script below is a bash-ism, so it fails when the script is run with plain sh:

export KUBECONFIG=$HOME/.kube/config
if [[ -z ${K8S_VERSION} ]]; then
  minikube start --vm-driver=none
else
  minikube start --vm-driver=none --kubernetes-version=${K8S_VERSION}
fi
./build/kubernetes/minikube_setup.sh: 17: ./build/kubernetes/minikube_setup.sh: [[: not found
Starting local Kubernetes v1.7.5 cluster...

All kubernetes CI jobs fail after the minikube upgrade.

This is mostly caused by minikube's migration from localkube to kubeadm.
The infrastructure needs to be adapted.

minikube start --vm-driver=none --kubernetes-version=v1.10.0
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
E0606 16:09:18.307854    9085 start.go:276] Error starting cluster:  kubeadm init error sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI  running command: : running command: sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI 
 output: [init] Using Kubernetes version: v1.10.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
	[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.03.1-ce. Max validated version: 17.03
	[WARNING Hostname]: hostname "minikube" could not be reached
	[WARNING Hostname]: hostname "minikube" lookup minikube on 147.75.207.207:53: no such host
	[WARNING FileExisting-ebtables]: ebtables not found in system path
	[WARNING FileExisting-ethtool]: ethtool not found in system path
	[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
Flag --admission-control has been deprecated, Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed in a future version.
[preflight] Some fatal errors occurred:
	[ERROR Port-10250]: Port 10250 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
: running command: sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI 
.: exit status 2

CI tests not triggering with new PRs

CI tests are not triggering for new PRs, but they do trigger when new commits are pushed to a PR.
We currently only process "synchronized"-type events from PRs.
We probably need to also process the "opened" event from PRs.

docker on CI server gets sick every so often

Every other month or so, Docker on the CI server (drone.coredns.io) gets "sick" and cannot spin up new containers, causing all CI tests to fail. Manually restarting the Docker service fixes this (systemctl restart docker.service).

Either resolve the root cause (i.e., prevent Docker from getting sick), or automatically remediate in the test scripts.

kubernetes: Add test for services with PublishNotReadyAddresses

For a service with the v1.Service.PublishNotReadyAddresses field set (or the deprecated service.alpha.kubernetes.io/tolerate-unready-endpoints), all endpoints of the service should be considered "Ready".

Currently the k8s API honors the field, and therefore CoreDNS doesn't need to check for it.

However, there is an open discussion (kubernetes/kubernetes#49239) on whether the PublishNotReadyAddresses field should not be checked by the API, and instead be checked by DNS implementations. So it may be a good idea to add a check for this case to the integration tests, to capture any change in the API behavior.

The test would be to create a perpetually unready endpoint for a headless service with the v1.Service.PublishNotReadyAddresses field set, and verify that we create a DNS record for the unready endpoints. A way of creating a perpetually unready endpoint is to add the following to the container spec in the deployment definition:

readinessProbe:
  exec:
    command:
    - /bin/false

Add Corefile reload test

Add a Corefile reload test to catch bugs like coredns/coredns#4880 ...

  • validate that the configuration is reloaded (e.g. via the md5 metric, or functionally, e.g. via an entry added to the hosts plugin).
  • check the log and assert that no panics are present (important, since a panic will result in the new config being loaded, which would pass the test above).
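For reference, a minimal sketch of a Corefile such a test might reload against, using the reload plugin with a short (illustrative) check interval:

.:1053 {
    reload 2s
    whoami
}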

TTL 303 -> 5

All those 303-second TTLs have masked TTL issues on more than one occasion. These should probably all use the default of 5s.

Integrate ci with dreck

The release script is working (almost 100%; the next release will pinpoint the remaining failure). The good thing is that, except for caddy's config, everything is just scripts that come from GitHub.

We should move ci over to this as well, which means we can:

  • remove a bunch of shell scripting, as dreck can handle some of this, i.e. updating the PR status, etc. (not 100% sure about the details link though)
  • remove the webhook and its config

Don't use statically defined Endpoints in tests

Instead, let K8s create the Endpoints (EndpointSlices) per Deployment.

For headless services, this requires adjusting tests to account for dynamically assigned IPs.

This further complicates testing IPv6 and dual-stack cases.
For now, just fall back to testing IPv4, and leave IPv6 and dual-stack tests to be handled later.

This may be an opportunity to re-implement the integration tests entirely (e.g. moving away from CircleCI), perhaps moving them into coredns/coredns (which would greatly facilitate coordinating test and function changes).
