Knot DNS Resolver daemon

The server is in the daemon directory and works out of the box without any configuration.

$ kresd -h # Get help
$ kresd -a ::1

Enabling DNSSEC

The resolver supports DNSSEC, including RFC 5011 automated trust anchor updates and RFC 7646 negative trust anchors. To enable it, you need to provide trusted root keys. Bootstrapping of the keys is automated: kresd fetches the root trust anchor set over a secure channel from IANA and from there can perform RFC 5011 automatic updates for you.


Automatic bootstrap requires the luasocket and luasec packages to be installed.

$ kresd -k root.keys # File for root keys
[ ta ] bootstrapped root anchor "19036 8 2 49AAC11D7B6F6446702E54A1607371607A1A41855200FD2CE1CDDE32F24E8FB5"
[ ta ] warning: you SHOULD check the key manually, see:
[ ta ] key: 19036 state: Valid
[ ta ] next refresh: 86400000

Alternatively, you can set it in the configuration file with trust_anchors.file = 'root.keys'. If the file doesn’t exist, it will be automatically populated with root keys validated using root anchors retrieved over HTTPS.

This is equivalent to using unbound-anchor:

$ unbound-anchor -a "root.keys" || echo "warning: check the key at this point"
$ echo "auto-trust-anchor-file: \"root.keys\"" >> unbound.conf
$ unbound -c unbound.conf


Bootstrapping of the root trust anchors is automatic; you are however encouraged to check the key over a secure channel, as specified in DNSSEC Trust Anchor Publication for the Root Zone. This is a critical step: if the anchors are compromised, so is the whole chain of trust. You will be warned about it in the server log.

Manually providing root anchors

The root anchor bootstrap may fail for various reasons; in that case you need to provide the IANA root anchors (or alternative ones) manually. The keyfile format is the same as for Unbound or BIND and contains DS/DNSKEY records.

  1. Check the current TA published on the IANA website
  2. Fetch current keys (DNSKEY), verify digests
  3. Deploy them
$ kdig DNSKEY . +noall +answer | grep "DNSKEY[[:space:]]257" > root.keys
$ ldns-key2ds -n root.keys # Only print to stdout
... verify that digest matches TA published by IANA ...
$ kresd -k root.keys

You’ve just enabled DNSSEC!

CLI interface

The daemon features a CLI interface; type help() to see the list of available commands.

$ kresd /var/run/knot-resolver
[system] started in interactive mode, type 'help()'
> cache.count()

Verbose output

If debug logging is compiled in, you can turn on verbose tracing of server operation with the -v option. You can also toggle it at runtime with the verbose(true|false) command.

$ kresd -v

Scaling out

The server can clone itself into multiple processes upon startup; this enables you to scale it across multiple cores. Multiple processes can serve different addresses, but still share the same working directory and cache. You can also start and stop processes at runtime based on the load.

$ kresd -f 4 rundir > kresd.log &
$ kresd -f 2 rundir > kresd_2.log & # Extra instances
$ pstree $$ -g
           │              ├─kresd(19212)
           │              └─kresd(19212)
$ kill 19399 # Kill group 2; the former group continues to run
           │              ├─kresd(19212)
           │              └─kresd(19212)


On recent Linux kernels supporting SO_REUSEPORT (since 3.9, backported to the RHEL 2.6.32 kernel) it is also able to bind to the same endpoint and distribute the load between the forked processes. If the kernel doesn’t support it, you can still fork multiple processes on different ports and do load balancing externally (on a firewall or with dnsdist).

Notice the absence of an interactive CLI. You can attach to the consoles of each process; they are in rundir/tty/PID.

$ nc -U rundir/tty/3008 # or socat - UNIX-CONNECT:rundir/tty/3008
> cache.count()

The direct output of the CLI command is captured and sent over the socket, while also printed to the daemon standard outputs (for accountability). This gives you an immediate response on the outcome of your command. Error or debug logs aren’t captured, but you can find them in the daemon standard outputs.

This is also a way to enumerate and test running instances: the list of files in tty corresponds to the list of running processes, and you can test a process for liveness by connecting to its UNIX socket.


This is a very basic way to orchestrate multi-core deployments and doesn’t scale to multi-node clusters. Keep an eye on the prepared hive module that is going to automate everything from service discovery to deployment and consistent configuration.

Running supervised

Knot Resolver can run under a supervisor to allow for graceful restarts, a watchdog process and socket activation. This way the supervisor binds to the sockets and lends them to the resolver daemon; if the resolver terminates or is killed, the sockets stay active and no queries are dropped.

The watchdog process must notify kresd about the active file descriptors; kresd will automatically determine the socket type and bound address, so it will appear like any other address. There’s a tiny supervisor script for convenience, but you should have a look at real process managers.

$ python scripts/ ./daemon/kresd
$ [system] interactive mode
> quit()
> [2016-03-28 16:06:36.795879] process finished, pid = 99342, status = 0, uptime = 0:00:01.720612
[system] interactive mode

The daemon also supports systemd socket activation; it is automatically detected and requires no configuration on the user’s side.


In its simplest form, the daemon requires just a working directory in which it can set up persistent files like the cache and the process state. If you don’t provide the working directory as a parameter, it is going to make itself comfortable in the current working directory.

$ kresd /var/run/kresd

And you’re good to go for most use cases! If you want to use modules or configure daemon behavior, read on.

There are several ways to configure the daemon: an RPC interface, a CLI, and a configuration file. Fortunately all of them share a common syntax and are transparent to each other.

Configuration example

-- interfaces
net = { '', '::1' }
-- load some modules
modules = { 'policy' }
-- 10MB cache
cache.size = 10*MB


There are more configuration examples in the etc/ directory for personal, ISP, company-internal and resolver-cluster use cases.

Configuration syntax

The configuration is kept in the config file in the daemon working directory and is loaded automatically. If there isn’t one, the daemon starts with sane defaults, listening on localhost. The syntax for options is as follows: group.option = value or group.action(parameters). You can also write comments using the -- prefix.
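For instance, both option forms and comments can be mixed freely in one config file. A minimal sketch (the cache size and listen port are illustrative):

```lua
-- group.option = value form
cache.size = 50*MB
-- group.action(parameters) form
net.listen('::1', 5353)
```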

A simple example would be to load static hints.

modules = {
        'hints' -- no configuration
}
If the module accepts configuration, you can call module.config({...}) or provide an options table. The syntax for a table is { key1 = value, key2 = value }, and it represents the unpacked JSON-encoded string that the modules use as their input configuration.

modules = {
        hints = '/etc/hosts'
}


Modules, including their configuration, may not load in exactly the same order as specified.

Modules are inherently ordered by their declaration. Some modules are built-in, so it would be normally impossible to place for example hints before rrcache. You can enforce specific order by precedence operators > and <.

modules = {
   'hints  > iterate', -- Hints AFTER iterate
   'policy > hints',   -- Policy AFTER hints
   'view   < rrcache'  -- View BEFORE rrcache
}
modules.list() -- Check module call order

This is useful if you’re writing a module with a layer that, for example, evaluates an answer before it is written into the cache.


The configuration and CLI syntax is the Lua language, with which you may already be familiar. If not, you can read Learn Lua in 15 minutes for a syntax overview. Spending just a few minutes will allow you to break away from static configuration, write more efficient configuration with iteration, and leverage events and hooks. Lua is heavily used for scripting in applications ranging from embedded systems to game engines, and in the DNS world notably in PowerDNS Recursor. Knot DNS Resolver does not simply use Lua modules: Lua is the heart of the daemon for everything from configuration to internal events and user interaction.

Dynamic configuration

Knowing that the configuration is Lua in disguise enables you to write dynamic rules. It also helps you avoid the repetitive templating that is unavoidable with static configuration.

if hostname() == 'hidden' then
        net.listen(net.eth0, 5353)
        net = { '', net.eth1.addr[1] }
end

Another example shows how to bind to all interfaces, using iteration.

for name, addr_list in pairs(net.interfaces()) do
        net.listen(addr_list)
end

You can also use third-party packages (available for example through LuaRocks), as in this example that downloads the cache from a parent resolver to avoid a cold-cache start.

local http = require('socket.http')
local ltn12 = require('ltn12')

if cache.count() == 0 then
        -- download cache from parent
        http.request {
                url = 'http://parent/cache.mdb',
                sink = ltn12.sink.file('cache.mdb', 'w'))
        }
        -- reopen cache with 100M limit
        cache.size = 100*MB
end

Events and services

Lua supports a concept called closures; this is extremely useful for scripting actions upon various events, for example pruning the cache a minute after loading, publishing statistics every 5 minutes, and so on. Here’s an example of an anonymous function with event.recurrent():

-- every 5 minutes
event.recurrent(5 * minute, function()
        cache.prune()
end)

Note that each scheduled event is identified by a number valid for the duration of the event; you may cancel it at any time. You can do this with an anonymous function if you accept the event as a parameter, but on its own that isn’t very useful, as you have no non-global way to keep persistent variables. A closure solves this:

-- make a closure, encapsulating counter
function pruner()
        local i = 0
        -- pruning function
        return function(e)
                cache.prune()
                -- cancel event on 5th attempt
                i = i + 1
                if i == 5 then
                        event.cancel(e)
                end
        end
end

-- make recurrent event that will cancel after 5 times
event.recurrent(5 * minute, pruner())

Another type of actionable event is activity on a file descriptor. This allows you to embed other event loops or monitor open files, and fire a callback when activity is detected. You can use it to build persistent services like HTTP servers or monitoring probes that cooperate well with the daemon’s internal operations.

For example a simple web server that doesn’t block:

local server, headers = require 'http.server', require 'http.headers'
local cqueues = require 'cqueues'
-- Start socket server
local s = server.listen {
        host = 'localhost',
        port = 8080,
        onstream = function (myserver, stream)
                -- Create response headers
                local headers =
                headers:append(':status', '200')
                headers:append('connection', 'close')
                -- Send response and close connection
                assert(stream:write_headers(headers, false))
                assert(stream:write_chunk('OK', true))
        end,
}
-- Compose per-request coroutine
local cq =
cq:wrap(function () assert(s:loop()) end)
-- Hook to socket watcher
event.socket(cq:pollfd(), function (ev, status, events)
        cq:step(0)
end)
File watchers


Work in progress, come back later!

Configuration reference

This is a reference for variables and functions available to both configuration file and CLI.


env (table)

Return environment variable.

env.USER -- equivalent to $USER in shell
Returns: Machine hostname.
verbose(true | false)
Returns:Toggle verbose logging.
mode('strict' | 'normal' | 'permissive')
Returns:Change resolver strictness checking level.

By default, the resolver runs in normal mode. There are possibly many small adjustments hidden behind the mode settings, but the main idea is that in permissive mode the resolver tries to resolve a name with as few lookups as possible, while in strict mode it spends much more effort resolving and checking the referral path. However, if the majority of the traffic is covered by DNSSEC, some of the strict checking actions are counter-productive.

Action                  Modes
Use mandatory glue      strict, normal, permissive
Use in-bailiwick glue   normal, permissive
Use any glue records    permissive
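For example, to trade some of the strict checking for fewer lookups, you can switch modes in the configuration file or at runtime:

```lua
-- relax referral-path checking
mode('permissive')
-- and back to the default
mode('normal')
```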
user(name, [group])
  • name (string) – user name
  • group (string) – group name (optional)


Drop privileges and run as given user (and group, if provided).


Note that you should bind to the required network addresses before changing the user. At the same time, you should open the cache AFTER you change the user (so it remains accessible). A good practice is to divide the configuration into two parts:

-- privileged
net = { '', '::1' }
-- unprivileged
user('kresd', 'netgrp')
cache.size = 100*MB
trust_anchors.file = 'root.key'

Example output:

> user('baduser')
invalid user name
> user('kresd', 'netgrp')
> user('root')
Operation not permitted
resolve(qname, qtype[, qclass = kres.class.IN, options = 0, callback = nil])
  • qname (string) – Query name (e.g. ‘com.’)
  • qtype (number) – Query type (e.g. kres.type.NS)
  • qclass (number) – Query class (optional) (e.g. kres.class.IN)
  • options (number) – Resolution options (see query flags)
  • callback (function) – Callback to be executed when resolution completes (e.g. function cb (pkt, req) end). The callback gets a packet containing the final answer and doesn’t have to return anything.



-- Send query for root DNSKEY, ignore cache
resolve('.', kres.type.DNSKEY, kres.class.IN, kres.query.NO_CACHE)

-- Query for AAAA record
resolve('', kres.type.AAAA, kres.class.IN, 0,
function (answer, req)
   -- Check answer RCODE
   local pkt = kres.pkt_t(answer)
   if pkt:rcode() == kres.rcode.NOERROR then
      -- Print matching records
      local records = pkt:section(kres.section.ANSWER)
      for i = 1, #records do
         local rr = records[i]
         if rr.type == kres.type.AAAA then
            print ('record:', kres.rr2str(rr))
         end
      end
   else
      print ('rcode: ', pkt:rcode())
   end
end)

Network configuration

For when listening on localhost just doesn’t cut it.


Use the declarative interface for network configuration.

net = { '', net.eth0, net.eth1.addr[1] }
net.ipv4 = false
net.ipv6 = true|false
Return:boolean (default: true)

Enable/disable using IPv6 for recursion.

net.ipv4 = true|false
Return:boolean (default: true)

Enable/disable using IPv4 for recursion.

net.listen(address, [port = 53])

Listen on address, port is optional.

net.listen({address1, ...}, [port = 53])

Listen on list of addresses.

net.listen(interface, [port = 53])

Listen on all addresses belonging to an interface.


net.listen(net.eth0) -- listen on eth0
net.close(address, [port = 53])

Close opened address/port pair, noop if not listening.

net.list()

Returns: Table of bound interfaces.

Example output:

[] => {
    [port] => 53
    [tcp] => true
    [udp] => true
}

Returns: Table of available interfaces and their addresses.

Example output:

[lo0] => {
    [addr] => {
        [1] => ::1
        [2] =>
    }
    [mac] => 00:00:00:00:00:00
}
[eth0] => {
    [addr] => {
        [1] =>
    }
    [mac] => de:ad:be:ef:aa:bb
}


You can use net.<iface> as a shortcut for a specific interface, e.g. net.eth0


net.bufsize([udp_bufsize])

Get/set the maximum EDNS payload size. The default is 1452 (the maximum unfragmented datagram size). You cannot set less than 1220 (the minimum size for DNSSEC) or more than 65535 octets.

Example output:

> net.bufsize(4096)
> net.bufsize()
4096

net.tcp_pipeline([len])

Get/set the per-client TCP pipeline limit (the number of outstanding queries that a single client connection can make in parallel). Default is 50.

Example output:

> net.tcp_pipeline()
50
> net.tcp_pipeline(100)

Trust anchors and DNSSEC

trust_anchors.hold_down_time = 30 * day
Return:int (default: 30 * day)

Modify RFC5011 hold-down timer to given value. Example: 30 * sec

trust_anchors.refresh_time = nil
Return:int (default: nil)

Modify the RFC 5011 refresh timer to a given value (not set by default); this forces trust anchors to be updated every N seconds instead of relying on RFC 5011 logic and TTLs. Example: 10 * sec

trust_anchors.keep_removed = 0
Return:int (default: 0)

How many Removed keys should be held in history (and in the key file) before being purged. Note: all Removed keys will be purged from the key file after restarting the process.
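Putting the RFC 5011 knobs together, a configuration sketch (the values are illustrative, not recommendations):

```lua
trust_anchors.hold_down_time = 30 * day -- keep the default hold-down
trust_anchors.refresh_time = 12 * hour  -- force a periodic refresh
trust_anchors.keep_removed = 2          -- remember two Removed keys
```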

trust_anchors.config(keyfile)
  • keyfile (string) – File containing DNSKEY records, should be writeable.

You can use only DNSKEY records in managed mode. This is equivalent to the CLI parameter -k <keyfile> or trust_anchors.file = keyfile.

Example output:

> trust_anchors.config('root.keys')
[trust_anchors] key: 19036 state: Valid
trust_anchors.set_insecure(nta_set)
  • nta_set (table) – List of domain names (text format) representing NTAs.

When you use a domain name as an NTA, DNSSEC validation will be turned off at/below these names. Each function call replaces the previous NTA set. You can find the currently active set in the trust_anchors.insecure variable.


Use the trust_anchors.negative = {} alias for easier configuration.

Example output:

> trust_anchors.negative = { 'bad.boy', '' }
> trust_anchors.insecure
[1] => bad.boy
[2] =>
trust_anchors.add(rr_string)
  • rr_string (string) – DS/DNSKEY records in presentation format (e.g. . 3600 IN DS 19036 8 2 49AAC11...)

Inserts DS/DNSKEY record(s) into the current keyset. These will not be managed or updated; use this only for testing or if you have a specific use case for not using a keyfile.

Example output:

> trust_anchors.add('. 3600 IN DS 19036 8 2 49AAC11...')

Modules configuration

The daemon provides an interface for dynamic loading of daemon modules.


Use the declarative interface for module loading.

modules = {
        hints = {file = '/etc/hosts'}
}

This is equivalent to:

hints.config({file = '/etc/hosts'})
Returns: List of loaded modules.

modules.load(name)
  • name (string) – Module name, e.g. “hints”


Load a module by name.

modules.unload(name)
  • name (string) – Module name


Unload a module by name.

Cache configuration

The cache in Knot DNS Resolver is persistent with an LMDB backend; this means the daemon doesn’t lose the cached data on restart or crash, avoiding cold starts. The cache may be reused between cache daemons or manipulated from other processes, making for example synchronised load-balanced recursors possible.

cache.size (number)

Get/set the cache maximum size in bytes. Note that this is only a hint to the backend, which may or may not respect it. See

cache.size = 100 * MB -- equivalent to ` * MB)` (string)

Get or change the cache storage backend configuration, see cache.backends() for more information. If the new storage configuration is invalid, it is not set.

 = 'lmdb://.'

Returns: map of backends

The cache supports runtime-changeable backends, using an optional RFC 3986 URI, where the scheme represents the backend protocol and the rest of the URI the backend-specific configuration. By default, it is the LMDB backend in the working directory, i.e. lmdb://.

Example output:

[lmdb://] => true
 of cache counters

The cache collects counters on various operations (hits, misses, transactions, ...). This function call returns a table of cache counters that can be used for calculating statistics.[, config_uri])
  • max_size (number) – Maximum cache size in bytes.


Open the cache with a size limit. The cache will be reopened if it is already open. Note that max_size cannot be lowered, only increased, due to how the cache is implemented.


Use kB, MB, GB constants as a multiplier, e.g. 100*MB.

The cache supports runtime-changeable backends; see cache.backends() for more information and the default. Refer to the documentation of a specific backend for its configuration string syntax.

  • lmdb://

As of now it only allows you to change the cache directory, e.g. lmdb:///tmp/cachedir.
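As a sketch, combining the size limit and a backend URI in one call (the directory is illustrative):

```lua
-- open a 100 MB cache in an alternative directory * MB, 'lmdb:///tmp/cachedir')
```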

Returns: Number of entries in the cache.

Close the cache.


This may or may not clear the cache, depending on the used backend. See cache.clear().


Return a table of statistics. Note that this tracks all operations over the cache, not just whether queries were answered from it.


print('Insertions:', cache.stats().insert)
cache.prune([max_count])
  • max_count (number) – maximum number of items to be pruned at once (default: 65536)

Returns: { pruned: int }

Prune expired/invalid records.

Returns: list of matching records in cache

Fetches matching records from cache. The domain can either be:

  • a domain name (e.g. "")
  • a wildcard (e.g. "*")

The domain name fetches all records matching this name, while the wildcard matches all records at or below that name.

You can also use a special namespace "P" to purge NODATA/NXDOMAIN matching this name (e.g. " P").


This is equivalent to cache['domain'] getter.


-- Query cache for ''
cache['']
-- Query cache for all records at/below ''
cache['*']

cache.clear([domain])

Purge cache records. If the domain isn’t provided, the whole cache is purged. See the cache.get() documentation for the subtree matching policy.


-- Clear records at/below ''
cache.clear('')
-- Clear packet cache
cache.clear('*. P')
-- Clear whole cache
cache.clear()

Timers and events

The timer represents exactly what the examples describe: it allows you to execute closures after a specified time, or even recurrent events. Time is always given in milliseconds, but there are convenient constants you can use: sec, minute, hour. For example, 5 * hour represents five hours, or 5*60*60*1000 milliseconds.
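Since the constants are plain millisecond multipliers, they compose with ordinary arithmetic:

```lua
print(5 * hour)          -- 18000000 (milliseconds)
print(90 * sec + minute) -- 150000
```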

event.after(time, function)
Returns:event id

Execute function after the specified time has passed. The first parameter of the callback is the event itself.


event.after(1 * minute, function() print('Hi!') end)
event.recurrent(interval, function)
Returns:event id

Similar to event.after(), periodically execute function after interval passes.


msg_count = 0
event.recurrent(5 * sec, function(e)
        msg_count = msg_count + 1
        print('Hi #'..msg_count)
end)

event.cancel(event_id)

Cancel a running event; it has no effect on already canceled events. New events may reuse the event_id, so the behaviour is undefined if the function is called after another event has started.


e = event.after(1 * minute, function() print('Hi!') end)
event.cancel(e)

Watch for file descriptor activity. This allows embedding other event loops or simply firing events when a pipe endpoint becomes active. In other words: asynchronous notifications for the daemon.

event.socket(fd, cb)
  • fd (number) – file descriptor to watch
  • cb – closure or callback to execute when fd becomes active

Returns: event id

Execute the function when there is activity on the file descriptor. The closure is called with the event id as the first parameter, the status as the second and the number of events as the third.


e = event.socket(0, function(e, status, nevents)
        print('activity detected')
end)
event.cancel(e)

Scripting worker

The worker is a service on top of the event loop that tracks and schedules outstanding queries; you can inspect its statistics or schedule new queries. It also contains information about the configured worker count and process rank.


worker.count

Return the current total worker count (e.g. 1 for a single process).


Return the current worker ID (from 0 up to worker.count - 1).


worker.stats()

Return a table of statistics:

  • udp - number of outbound queries over UDP
  • tcp - number of outbound queries over TCP
  • ipv6 - number of outbound queries over IPv6
  • ipv4 - number of outbound queries over IPv4
  • timeout - number of timed-out outbound queries
  • concurrent - number of concurrent queries at the moment
  • queries - number of inbound queries
  • dropped - number of dropped inbound queries



Using CLI tools

  • kresd-host.lua - a drop-in replacement for host(1) utility

Queries the DNS for information. The hostname is looked up for IPv4 addresses, IPv6 addresses and mail records.


$ kresd-host.lua -f root.key -v has address (secure) has IPv6 address 2001:1488:0:3::2 (secure) mail is handled by 10 (secure) mail is handled by 20 (secure) mail is handled by 30 (secure)
  • kresd-query.lua - run the daemon in zero-configuration mode, perform a query and execute a given callback.

This is useful for executing one-shot queries and hooking into the processing of the result, for example to check if a domain is managed by a certain registrar or if it’s signed.


$ kresd-query.lua 'assert(kres.dname2str(req:resolved()) == "")' && echo "yes"
$ kresd-query.lua -C 'trust_anchors.config("root.keys")' 'assert(req:resolved():hasflag(kres.query.DNSSEC_WANT))'
$ echo $?