Knot DNS Resolver modules

Static hints

This is a module providing static hints from a /etc/hosts-like file for forward records (A/AAAA) and reverse records (PTR). You can also use it to change the root hints, which are used as a safety belt or when the root NS drops out of cache.


-- Load hints after iterator
modules = { 'hints > iterate' }
-- Load hints before rrcache, custom hosts file
modules = { ['hints < rrcache'] = 'hosts.custom' }
-- Add root hints
hints.root({
  [''] = { '2001:503:c27::2:30', '' }
})
-- Set custom hint
hints['localhost'] = ''


  • path (string) – path to hosts file, default: "/etc/hosts"

{ result: bool }

Load specified hosts file.

  • hostname (string) – i.e. "localhost"

{ result: [address1, address2, ...] }

Return a list of address records matching the given name.

  • pair (string) – hostname and address separated by a space, i.e. "localhost"

{ result: bool }

Set a hostname-address pair hint.
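A minimal sketch using the table syntax from the earlier example (the hostname and address here are illustrative):

```lua
-- Add a static hint for a local name (hypothetical name/address)
hints['gw.example.com'] = '192.0.2.1'
```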

Returns: { [''] = { '', '', ...}, ... }


If no parameters are passed, returns current root hints set.

  • root_hints (table) – new set of root hints i.e. {['name'] = 'addr', ...}

{ [''] = { '', '', ...}, ... }

Replace the current root hints and return the new table of root hints.


> hints.root({
        [''] = '',
        [''] = ''
})
[] => {
    [1] =>
}
[] => {
    [1] =>
}


A good rule of thumb is to select only a few of the fastest root hints. The server learns RTT and NS quality over time and eventually tries all available servers; you can help it by preselecting the candidates.

Statistics collector

This module gathers various counters from the query resolution and server internals, and offers them as a key-value storage. Any module may update the metrics or simply hook in new ones.

-- Enumerate metrics
> stats.list()
[answer.cached] => 486178
[iterator.tcp] => 490
[answer.noerror] => 507367
[] => 618631
[iterator.udp] => 102408
[query.concurrent] => 149

-- Query metrics by prefix
> stats.list('iter')
[iterator.udp] => 105104
[iterator.tcp] => 490

-- Set custom metrics from modules
> stats['filter.match'] = 5
> stats['filter.match']
5

-- Fetch most common queries
> stats.frequent()
[1] => {
        [type] => 2
        [count] => 4
        [name] => cz.
}

-- Fetch most common queries (sorted by frequency)
> table.sort(stats.frequent(), function (a, b) return a.count > b.count end)
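Building on the sort above, a short sketch that prints the top entries (using the name and count fields shown in the sample output):

```lua
-- Print the five most frequent query names, most frequent first
local top = stats.frequent()
table.sort(top, function (a, b) return a.count > b.count end)
for i = 1, math.min(5, #top) do
        print(top[i].name, top[i].count)
end
```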


  • key (string) – i.e. ""


Return nominal value of given metric.

stats.set(key, val)
  • key (string) – i.e. ""
  • val (number) – i.e. 5

Set nominal value of given metric.

  • prefix (string) – optional metric prefix, i.e. "answer" shows only metrics beginning with “answer”

Outputs collected metrics as a JSON dictionary.


Outputs a list of the most frequent iterative queries as a JSON array. The queries are sampled probabilistically and include subrequests. The list holds at most 5000 entries; make diffs if you want to track it over time.


Clear the list of most frequent iterative queries.


Outputs a list of soon-to-expire records as a JSON array. The list holds at most 5000 entries; make diffs if you want to track it over time.


Clear the list of soon-to-expire records.

Built-in statistics

  • - total number of answered queries
  • answer.cached - number of queries answered from cache
  • answer.noerror - number of NOERROR answers
  • answer.nodata - number of NOERROR, but empty answers
  • answer.nxdomain - number of NXDOMAIN answers
  • answer.servfail - number of SERVFAIL answers
  • answer.10ms - number of answers completed within 10ms
  • answer.100ms - number of answers completed within 100ms
  • answer.1000ms - number of answers completed within 1000ms
  • answer.slow - number of answers that took more than 1000ms
  • query.edns - number of queries with EDNS
  • query.dnssec - number of queries with DNSSEC DO=1
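For a quick overview of these counters, a sketch using the prefix filter described above (assuming stats.list() returns a plain Lua table):

```lua
-- Dump all answer-related metrics
for k, v in pairs(stats.list('answer')) do
        print(k, v)
end
```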

Query policies

This module can block, rewrite, or alter queries based on user-defined policies. By default, it blocks queries for reverse lookups in private subnets as per RFC 1918, RFC 5735 and RFC 5737. You can, however, extend it to deflect slow-drip DNS attacks, for example, or grey-list resolution of misbehaving zones.

There are several policies implemented:

  • pattern - applies action if QNAME matches regular expression
  • suffix - applies action if the QNAME suffix matches a given list of suffixes (useful for "is domain in zone" rules); uses the Aho-Corasick string-matching algorithm implemented by @jgrahamc (CloudFlare, Inc.) (BSD 3-clause)
  • rpz - implements a subset of the RPZ format. Currently it can be used with a zone file; binary database support is on the way. The binary database can be updated on the fly by an external process.
  • custom filter function

There are several defined actions:

  • PASS - let the query pass through
  • DENY - return NXDOMAIN answer
  • DROP - terminate query resolution and return SERVFAIL to the requestor
  • TC - set TC=1 if the request came through UDP, forcing the client to retry over TCP
  • FORWARD(ip) - forward the query to the given IP and proxy the response back (stub mode)


The module (and kres) expects domain names in wire format, not textual representation, so each label in the name is prefixed with its length; e.g. "" equals "\7example\3com". You can use the convenience function todname('') for automatic conversion.

Example configuration

-- Load default policies
modules = { 'policy' }
-- Whitelist 'www[0-9]'
policy:add(policy.pattern(policy.PASS, '\4www[0-9]\6badboy\2cz'))
-- Block all names below
policy:add(policy.suffix(policy.DENY, {'\6badboy\2cz'}))
-- Custom rule
policy:add(function (req, query)
        if query:qname():find('%d.%d.%d.224\7in-addr\4arpa') then
                return policy.DENY
        end
end)
-- Disallow ANY queries
policy:add(function (req, query)
        if query.type == kres.type.ANY then
                return policy.DROP
        end
end)
-- Enforce local RPZ
policy:add(policy.rpz(policy.DENY, 'blacklist.rpz'))
-- Forward all queries below '' to given resolver
policy:add(policy.suffix(policy.FORWARD(''), {'\7company\2se'}))
-- Forward all queries matching pattern
policy:add(policy.pattern(policy.FORWARD('2001:DB8::1'), '\4bad[0-9]\2cz'))
-- Forward all queries (complete stub mode)
policy:add(policy.all(policy.FORWARD('2001:DB8::1')))



Pass-through all queries matching the rule.


Respond with NXDOMAIN to all queries matching the rule.


Drop all queries matching the rule.


Respond with empty answer with TC bit set (if the query came through UDP).

policy.FORWARD(address)

Forward query to given IP address.

policy:add(rule)
  • rule – added rule, i.e. policy.pattern(policy.DENY, '[0-9]+\2cz')

Add a new policy rule.

policy.all(action)
  • action – executed action for all queries

Perform action for all queries (no filtering).
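For instance, a one-line sketch that lets every query pass through:

```lua
-- Apply an action unconditionally, here a global whitelist
policy:add(policy.all(policy.PASS))
```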

policy.pattern(action, pattern)
  • action – action if the pattern matches QNAME
  • pattern – regular expression

Policy to block queries based on the QNAME regex matching.

policy.suffix(action, suffix_table)
  • action – action if the pattern matches QNAME
  • suffix_table – table of valid suffixes

Policy to block queries based on the QNAME suffix match.

policy.suffix_common(action, suffix_table[, common_suffix])
  • action – action if the pattern matches QNAME
  • suffix_table – table of valid suffixes
  • common_suffix – common suffix of entries in suffix_table

Like the suffix match, but you can also provide a common suffix of all matches for faster processing (nil otherwise). This function is faster for small suffix tables (on the order of hundreds of entries).
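A hedged sketch (the domain names are hypothetical) that denies a set of names sharing a common suffix:

```lua
-- Deny both names; the shared suffix speeds up the match
policy:add(policy.suffix_common(policy.DENY,
        policy.todnames({'a.example.com', 'b.example.com'}),
        todname('example.com')))
```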

policy.rpz(action, path[, format])
  • action – the default action for match in the zone (e.g. RH-value .)
  • path – path to zone file | database
  • format – set to ‘lmdb’ for binary DB, currently NYI

Enforce RPZ rules. This can be used in conjunction with published blocklist feeds. The RPZ operation is well described in this post by Jan-Piet Mens, or in the Pro DNS and BIND book. Here's a compatibility table:

Policy Action   RH Value        Support
NODATA          *.              partial, implemented as NXDOMAIN
Unchanged       rpz-passthru.   yes
Nothing         rpz-drop.       yes
Truncated       rpz-tcp-only.   yes
Modified        anything        no

Policy Trigger  Support
CLIENT-IP       partial, may be done with views
IP              no
NS-IP           no
policy.todnames({name, ...})
  • names (table) – domain names in textual format

Returns a table of domain names in wire format converted from strings.

-- Convert single name
assert(todname('') == '\7example\3com\0')
-- Convert table of names
policy.todnames({'', ''})
{ '\7example\3com\0', '\2me\2cz\0' }

Views and ACLs

The policy module implements policies for global query matching, i.e. it answers "how to react to a certain query". This module combines that with query source matching, i.e. "who asked the query". This allows you to create personalized blacklists, filters and ACLs, somewhat like ISC BIND views.

There are two identification mechanisms:

  • subnet - identifies the client based on its subnet
  • tsig - identifies the client based on a TSIG key

You can combine this information with policy rules.

view:addr('', policy.suffix(policy.TC, {'\7example\3com'}))

This will force the given client subnet to TCP for names under the listed suffix. You can combine view selectors with RPZ to create personalized filters, for example.

Example configuration

-- Load modules
modules = { 'policy', 'view' }
-- Whitelist queries identified by TSIG key
view:tsig('\5mykey', function (req, qry) return policy.PASS end)
-- Block local clients (ACL like)
view:addr('', function (req, qry) return policy.DENY end)
-- Drop queries with suffix match for remote client
view:addr('', policy.suffix(policy.DROP, {'\3xxx'}))
-- RPZ for subset of clients
view:addr('', policy.rpz(policy.PASS, 'whitelist.rpz'))
-- Forward all queries from given subnet to proxy
view:addr('', policy.all(policy.FORWARD('2001:DB8::1')))


view:addr(subnet, rule)
  • subnet – client subnet, i.e.
  • rule – added rule, i.e. policy.pattern(policy.DENY, '[0-9]+\2cz')

Apply rule to clients in given subnet.

view:tsig(key, rule)
  • key – client TSIG key domain name, i.e. \5mykey
  • rule – added rule, i.e. policy.pattern(policy.DENY, '[0-9]+\2cz')

Apply rule to clients with given TSIG key.


This only selects a rule based on the key name; it doesn't verify the key or signature yet.

Prefetching records

The module tracks expiring records (those with less than 5% of their original TTL remaining) and batches them for prefetch. This improves latency for frequently used records, as they are fetched in advance.

It is also able to learn usage patterns and repetitive queries that the server makes. For example, if it makes a query every day at 18:00, the resolver expects that it will be needed by that time and prefetches it ahead of time. This helps to minimize perceived latency and keeps the cache hot.


The tracking window and period length determine memory requirements. If your server has a relatively fast query turnover, keep the period low (an hour, for a start) and the tracking window shorter (5 minutes). For a slower personal resolver, keep the tracking window longer (i.e. 30 minutes) and the period longer (a day), as habitual queries occur daily. Experiment to get the best results.

Example configuration


This module requires the 'stats' module to be present and loaded.

modules = {
        predict = {
                window = 15, -- 15 minutes sampling window
                period = 6*(60/15) -- track last 6 hours
        }
}

Defaults are 15 minutes window, 6 hours period.


Use period 0 to turn off prediction and just prefetch expiring records.

Exported metrics

To visualize the efficiency of the predictions, the module exports following statistics.

  • predict.epoch - current prediction epoch (based on time of day and sampling window)
  • predict.queue - number of queued queries in current window
  • predict.learned - number of learned queries in current window


predict.config({ window = 15, period = 24})

Reconfigure the predictor's tracking window and period length. Both parameters are optional. Window length is in minutes; the period is the number of windows that can be kept in memory, e.g. if a window is 15 minutes, a period of 24 means 6 hours.
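For example, following the tuning advice above for a slower personal resolver (30-minute windows kept for a day):

```lua
-- 24 hours / 30 minutes = 48 windows
predict.config({ window = 30, period = 48 })
```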

Graphite module

The module sends statistics over the Graphite protocol to either Graphite, Metronome, InfluxDB or any compatible storage. This allows powerful visualization over metrics collected by Knot DNS Resolver.


The Graphite server is challenging to get up and running; InfluxDB combined with Grafana is much easier and provides a richer set of options and available front-ends. Alternatively, Metronome by PowerDNS provides a mini-graphite server for much simpler setups.

Example configuration

Only the host parameter is mandatory.

By default the module uses UDP, so it doesn't guarantee delivery; set tcp = true to enable Graphite over TCP. If the TCP consumer goes down or the connection with Graphite is lost, the resolver will periodically attempt to reconnect.

modules = {
        graphite = {
                prefix = hostname(), -- optional metric prefix
                host = '',  -- graphite server address
                port = 2003,         -- graphite server port
                interval = 5 * sec,  -- publish interval
                tcp = false          -- set to true for TCP mode
        }
}

The module supports sending data to multiple servers at once.

modules = {
        graphite = {
                host = { '', '', '::1' },
        }
}


  • luasocket available in LuaRocks

    $ luarocks install luasocket

Memcached cache storage

A module providing a cache storage backend for memcached, which is a good fit for a shared cache between resolvers.

After loading, you can see the storage backend registered and usable.

> modules.load 'kmemcached'
> cache.backends()
[memcached://] => true

And you can use it right away; see the libmemcached configuration reference for configuration string options. The most essential ones are --SERVER and --SOCKET. Here's an example of connecting to a UNIX socket.

> = 'memcached://--SOCKET="/var/sock/memcached"'


The memcached instance MUST support binary protocol, in order to make it work with binary keys. You can pass other options to the configuration string for performance tuning.


The memcached server is responsible for evicting entries from the cache; the pruning function is not implemented, and neither is aborting write transactions.

Build resolver shared cache

memcached takes care of data replication and failover; you can add multiple servers at once.

> = 'memcached://--SOCKET="/var/sock/memcached" --SERVER= --SERVER=cache2.domain'


Depends on the libmemcached library.

Redis cache storage

This module provides a Redis backend for cache storage. Redis is a BSD-licensed key-value cache and storage server. Like the memcached backend, Redis provides master-server replication, but also weak-consistency clustering.

After loading, you can see the storage backend registered and usable.

> modules.load 'redis'
> cache.backends()
[redis://] => true

The Redis client supports TCP and UNIX sockets.

> = 'redis://'
> = 'redis://'
> = 'redis:///tmp/redis.sock'

It also supports indexed databases if you prefix the configuration string with DBID@.

> = 'redis://9@'


The Redis client doesn't really support transactions or pruning. The cache eviction policy should be left to the Redis server; see Using Redis as an LRU cache.

Build distributed cache

See Redis Cluster tutorial.


Depends on the hiredis library, which is usually available in packages/ports, or you can install it from sources.

Etcd module

The module connects to Etcd peers and watches for configuration changes. By default, the module watches the subtree under the /kresd directory, but you can change this in the configuration.

The subtree structure corresponds to the configuration variables in the declarative style.

$ etcdctl set /kresd/net/ 53
$ etcdctl set /kresd/cache/size 10000000

This configures all listening nodes to the following configuration:

net = { '' }
cache.size = 10000000

Example configuration

modules = {
        ketcd = {
                prefix = '/kresd',
                peer = ''
        }
}


Work in progress!


  • lua-etcd available in LuaRocks

    $ luarocks install etcd --from=

Web interface

This module provides an embedded web interface for the resolver. It plots current performance in real time, including a feed of recent iterative queries. It also includes bindings to MaxMind GeoIP and presents a world map coloured by query frequency, so you can see where your queries go.

The stats module is required for plotting query rate. By default, it listens on localhost:8053.


-- Load web interface
modules = { 'tinyweb' }
-- Listen on specific address/port
modules = {
  tinyweb = {
    addr = 'localhost:8080', -- Custom address
    geoip = '/usr/local/var/GeoIP' -- Different path to GeoIP DB
  }
}


It depends on Go 1.5+ and the following package.

$ <install> libgeoip
$ go get


DNS64

This module implements RFC 6147 DNS64 AAAA-from-A record synthesis; it is used to enable client-server communication between an IPv6-only client and an IPv4-only server. See the well-written introduction in the PowerDNS documentation.


The A record sub-requests will be DNSSEC secured, but the synthetic AAAA records can’t be. Make sure the last mile between stub and resolver is secure to avoid spoofing.

Example configuration

-- Load the module with a NAT64 address
modules = { dns64 = 'fe80::21b:77ff:0:0' }
-- Reconfigure later
dns64.config('fe80::21b:77ff:0:0')

Renumbering

The module renumbers addresses in answers to a different address space, e.g. you can redirect malicious addresses to a blackhole or use private address ranges in local zones that will be remapped to real addresses by the resolver.


While requests are still validated using DNSSEC, the signatures are stripped from the final answer, because the address synthesis breaks them. You can see whether an answer was valid based on the AD flag.

Example configuration

modules = {
        renumber = {
                -- Source subnet, destination subnet
                {'', ''},
                -- Remap /16 block to localhost address range
                {'', ''}
        }
}