Knot DNS Resolver modules

Static hints

This module provides static hints from a /etc/hosts-like file for forward records (A/AAAA) and reverse records (PTR). You can also use it to change the root hints, which are used as a safety belt or when the root NS drops out of cache.

Examples

-- Load hints after iterator
modules = { 'hints > iterate' }
-- Load hints before rrcache, custom hosts file
modules = { ['hints < rrcache'] = 'hosts.custom' }
-- Add root hints
hints.root({
  ['j.root-servers.net.'] = { '2001:503:c27::2:30', '192.58.128.30' }
})
-- Set custom hint
hints['localhost'] = '127.0.0.1'

Properties

hints.config([path])
Parameters:
  • path (string) – path to hosts file, default: "/etc/hosts"
Returns:

{ result: bool }

Load the specified hosts file.
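
For example, reloading hints from a custom file (the path is illustrative):

-- Load a custom hosts file
> hints.config('/etc/hosts.custom')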

hints.get(hostname)
Parameters:
  • hostname (string) – e.g. "localhost"
Returns:

{ result: [address1, address2, ...] }

Return a list of address records matching the given name.
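
For example (output omitted; it follows the { result: [...] } shape above):

-- Look up addresses configured for a name
> hints.get('localhost')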

hints.set(pair)
Parameters:
  • pair (string) – hostname and address, e.g. "localhost 127.0.0.1"
Returns:

{ result: bool }

Set a hostname-address pair hint.
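
For example (the name and address are illustrative):

-- Add a hint pair
> hints.set('mail.local 192.168.1.10')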

hints.root()
Returns:

{ ['a.root-servers.net'] = { '1.2.3.4', '5.6.7.8', ...}, ... }

Tip

If no parameters are passed, returns current root hints set.

hints.root(root_hints)
Parameters:
  • root_hints (table) – new set of root hints, e.g. {['name'] = 'addr', ...}
Returns:

{ ['a.root-servers.net'] = { '1.2.3.4', '5.6.7.8', ...}, ... }

Replace the current root hints and return the resulting table of root hints.

Example:

> hints.root({
        ['l.root-servers.net.'] = '199.7.83.42',
        ['m.root-servers.net.'] = '202.12.27.33'
})
[l.root-servers.net.] => {
    [1] => 199.7.83.42
}
[m.root-servers.net.] => {
    [1] => 202.12.27.33
}

Tip

A good rule of thumb is to select only a few of the fastest root hints. The server learns RTT and NS quality over time and thus tries all available servers; you can help it by preselecting the candidates.

Statistics collector

This module gathers various counters from query resolution and server internals, and offers them as key-value storage. Any module may update the metrics or simply hook in new ones.

-- Enumerate metrics
> stats.list()
[answer.cached] => 486178
[iterator.tcp] => 490
[answer.noerror] => 507367
[answer.total] => 618631
[iterator.udp] => 102408
[query.concurrent] => 149

-- Query metrics by prefix
> stats.list('iter')
[iterator.udp] => 105104
[iterator.tcp] => 490

-- Set custom metrics from modules
> stats['filter.match'] = 5
> stats['filter.match']
5

-- Fetch most common queries
> stats.frequent()
[1] => {
        [type] => 2
        [count] => 4
        [name] => cz.
}

-- Fetch most common queries (sorted by frequency)
> table.sort(stats.frequent(), function (a, b) return a.count > b.count end)

Properties

stats.get(key)
Parameters:
  • key (string) – e.g. "answer.total"
Returns:

number

Return the nominal value of the given metric.
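
For example, using one of the metrics from the listing above:

> stats.get('answer.total')
618631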

stats.set(key, val)
Parameters:
  • key (string) – e.g. "answer.total"
  • val (number) – e.g. 5

Set the nominal value of the given metric.
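
For example, setting a custom metric (equivalent to the table-style assignment shown earlier):

> stats.set('filter.match', 5)
> stats.get('filter.match')
5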

stats.list([prefix])
Parameters:
  • prefix (string) – optional metric prefix, e.g. "answer" shows only metrics beginning with “answer”

Outputs collected metrics as a JSON dictionary.

stats.frequent()

Outputs a list of the most frequent iterative queries as a JSON array. The queries are sampled probabilistically and include subrequests. The list holds at most 5000 entries; make diffs if you want to track it over time.
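
For instance, a minimal sketch of tracking it over time, assuming the daemon's event.recurrent() timer and the sec constant (as used in the Graphite example below); the interval and print format are illustrative:

-- Sample the frequent-query list every minute, print the top entries,
-- then clear it so the next sample covers only the last interval
event.recurrent(60 * sec, function ()
        local top = stats.frequent()
        table.sort(top, function (a, b) return a.count > b.count end)
        for i = 1, math.min(#top, 10) do
                print(top[i].name, top[i].type, top[i].count)
        end
        stats.clear_frequent()
end)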

stats.clear_frequent()

Clear the list of most frequent iterative queries.

stats.expiring()

Outputs a list of soon-to-expire records as a JSON array. The list holds at most 5000 entries; make diffs if you want to track it over time.

stats.clear_expiring()

Clear the list of soon-to-expire records.

Built-in statistics

  • answer.total - total number of answered queries
  • answer.cached - number of queries answered from cache
  • answer.noerror - number of NOERROR answers
  • answer.nodata - number of NOERROR, but empty answers
  • answer.nxdomain - number of NXDOMAIN answers
  • answer.servfail - number of SERVFAIL answers
  • answer.10ms - number of answers completed in 10ms
  • answer.100ms - number of answers completed in 100ms
  • answer.1000ms - number of answers completed in 1000ms
  • answer.slow - number of answers that took more than 1000ms
  • query.edns - number of queries with EDNS
  • query.dnssec - number of queries with DNSSEC DO=1
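
For illustration, a cache hit rate can be derived from these counters (a minimal sketch; the print format is illustrative):

-- Compute the cache hit rate from the built-in counters
local total = stats.get('answer.total')
local cached = stats.get('answer.cached')
if total > 0 then
        print(string.format('cache hit rate: %.1f %%', 100 * cached / total))
end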

Query policies

This module can block, rewrite, or alter queries based on user-defined policies. By default, it blocks reverse lookups in private subnets, as per RFC 1918, RFC 5735 and RFC 5737. You can, however, extend it to deflect slow-drip DNS attacks, for example, or gray-list resolution of misbehaving zones.

There are several policies implemented:

  • pattern - applies action if QNAME matches regular expression
  • suffix - applies action if QNAME suffix matches given list of suffixes (useful for “is domain in zone” rules), uses the Aho-Corasick string-matching algorithm implemented by @jgrahamc (CloudFlare, Inc.) (BSD 3-clause)
  • rpz - implements a subset of the RPZ format. Currently it can be used with a zone file; binary database support is on the way. The binary database can be updated by an external process on the fly.
  • custom filter function

There are several defined actions:

  • PASS - let the query pass through
  • DENY - return NXDOMAIN answer
  • DROP - terminate query resolution and return SERVFAIL to the requestor
  • TC - set TC=1 if the request came through UDP, forcing the client to retry over TCP
  • FORWARD(ip) - forward the query to the given IP and proxy back the response (stub mode)

Note

The module (and kres) expects domain names in wire format, not textual representation, so each label in a name is prefixed with its length, e.g. “example.com” equals "\7example\3com". You can use the convenience function todname('example.com') for automatic conversion.

Example configuration

-- Load default policies
modules = { 'policy' }
-- Whitelist 'www[0-9].badboy.cz'
policy:add(policy.pattern(policy.PASS, '\4www[0-9]\6badboy\2cz'))
-- Block all names below badboy.cz
policy:add(policy.suffix(policy.DENY, {'\6badboy\2cz'}))
-- Custom rule
policy:add(function (req, query)
        if query:qname():find('%d.%d.%d.224\7in-addr\4arpa') then
                return policy.DENY
        end
end)
-- Disallow ANY queries
policy:add(function (req, query)
        if query.type == kres.type.ANY then
                return policy.DROP
        end
end)
-- Enforce local RPZ
policy:add(policy.rpz(policy.DENY, 'blacklist.rpz'))
-- Forward all queries below 'company.se' to given resolver
policy:add(policy.suffix(policy.FORWARD('192.168.1.1'), {'\7company\2se'}))
-- Forward all queries matching pattern
policy:add(policy.pattern(policy.FORWARD('2001:DB8::1'), '\4bad[0-9]\2cz'))
-- Forward all queries (complete stub mode)
policy:add(policy.all(policy.FORWARD('2001:DB8::1')))

Properties

policy.PASS

Pass-through all queries matching the rule.

policy.DENY

Respond with NXDOMAIN to all queries matching the rule.

policy.DROP

Drop all queries matching the rule.

policy.TC

Respond with an empty answer with the TC bit set (if the query came through UDP).

policy.FORWARD (address)

Forward query to given IP address.

policy:add(rule)
Parameters:
  • rule – added rule, e.g. policy.pattern(policy.DENY, '[0-9]+\2cz')

Add a new policy rule that is applied to each query.

policy.all(action)
Parameters:
  • action – executed action for all queries

Perform action for all queries (no filtering).

policy.pattern(action, pattern)
Parameters:
  • action – action if the pattern matches QNAME
  • pattern – regular expression

Policy to block queries based on the QNAME regex matching.

policy.suffix(action, suffix_table)
Parameters:
  • action – action if QNAME matches one of the suffixes
  • suffix_table – table of valid suffixes

Policy to block queries based on the QNAME suffix match.

policy.suffix_common(action, suffix_table[, common_suffix])
Parameters:
  • action – action if QNAME matches one of the suffixes
  • suffix_table – table of valid suffixes
  • common_suffix – common suffix of entries in suffix_table

Like suffix match, but you can also provide a common suffix of all matches for faster processing (nil otherwise). This function is faster for small suffix tables (in the order of “hundreds”).
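
For example, a sketch with hypothetical names that share the common suffix example.com:

-- Deny a few names sharing the common suffix 'example.com' (names are illustrative)
policy:add(policy.suffix_common(policy.DENY,
        policy.todnames({'bad.example.com', 'worse.example.com'}),
        todname('example.com')))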

policy.rpz(action, path[, format])
Parameters:
  • action – the default action for a match in the zone (e.g. RH value “.”)
  • path – path to the zone file or database
  • format – set to ‘lmdb’ for a binary database (not yet implemented)

Enforce RPZ rules. This can be used in conjunction with published blocklist feeds. The RPZ operation is well described in this post by Jan-Piet Mens, or in the Pro DNS and BIND book. Here’s a compatibility table:

Policy Action   RH Value        Support
NXDOMAIN        .               yes
NODATA          *.              partial, implemented as NXDOMAIN
Unchanged       rpz-passthru.   yes
Nothing         rpz-drop.       yes
Truncated       rpz-tcp-only.   yes
Modified        anything        no

Policy Trigger  Support
QNAME           yes
CLIENT-IP       partial, may be done with views
IP              no
NSDNAME         no
NS-IP           no

policy.todnames({name, ...})
Parameters:
  • names (table) – domain names in textual format

Returns a table of domain names in wire format converted from strings.

-- Convert single name
assert(todname('example.com') == '\7example\3com\0')
-- Convert table of names
policy.todnames({'example.com', 'me.cz'})
{ '\7example\3com\0', '\2me\2cz\0' }

Views and ACLs

The policy module implements policies for global query matching, i.e. it decides “how to react to a certain query”. This module combines it with query source matching, i.e. “who asked the query”. This allows you to create personalized blacklists, filters and ACLs, similar to ISC BIND views.

There are two identification mechanisms:

  • subnet - identifies the client based on its subnet
  • tsig - identifies the client based on a TSIG key

You can combine this information with policy rules.

view:addr('10.0.0.1', policy.suffix(policy.TC, {'\7example\3com'}))

This will force the given client subnet to TCP for names in example.com. You can combine view selectors with RPZ to create personalized filters, for example.

Example configuration

-- Load modules
modules = { 'policy', 'view' }
-- Whitelist queries identified by TSIG key
view:tsig('\5mykey', function (req, qry) return policy.PASS end)
-- Block local clients (ACL like)
view:addr('127.0.0.1', function (req, qry) return policy.DENY end)
-- Drop queries with suffix match for remote client
view:addr('10.0.0.0/8', policy.suffix(policy.DROP, {'\3xxx'}))
-- RPZ for subset of clients
view:addr('192.168.1.0/24', policy.rpz(policy.PASS, 'whitelist.rpz'))
-- Forward all queries from given subnet to proxy
view:addr('10.0.0.0/8', policy.all(policy.FORWARD('2001:DB8::1')))

Properties

view:addr(subnet, rule)
Parameters:
  • subnet – client subnet, e.g. 10.0.0.1
  • rule – added rule, e.g. policy.pattern(policy.DENY, '[0-9]+\2cz')

Apply rule to clients in given subnet.

view:tsig(key, rule)
Parameters:
  • key – client TSIG key domain name, e.g. \5mykey
  • rule – added rule, e.g. policy.pattern(policy.DENY, '[0-9]+\2cz')

Apply rule to clients with given TSIG key.

Warning

This only selects the rule based on the key name; it doesn’t verify the key or signature yet.

Prefetching records

The module tracks expiring records (those with less than 5% of their original TTL remaining) and batches them for prefetching. This improves latency for frequently used records, as they are fetched in advance.

It is also able to learn usage patterns and repetitive queries that the server makes. For example, if a query is made every day at 18:00, the resolver expects it to be needed by that time and prefetches it ahead of time. This helps minimize the perceived latency and keeps the cache hot.

Tip

The tracking window and period length determine memory requirements. If you have a server with relatively fast query turnover, keep the period short (an hour to start) and the tracking window shorter (5 minutes). For a personal, slower resolver, keep the tracking window longer (e.g. 30 minutes) and the period longer (a day), as habitual queries occur daily. Experiment to get the best results.

Example configuration

Warning

This module requires ‘stats’ module to be present and loaded.

modules = {
        predict = {
                window = 15, -- 15 minutes sampling window
                period = 6*(60/15) -- track last 6 hours
        }
}

The defaults are a 15-minute window and a 6-hour period.

Tip

Use period 0 to turn off prediction and just do prefetching of expiring records.

Exported metrics

To visualize the efficiency of the predictions, the module exports the following statistics.

  • predict.epoch - current prediction epoch (based on time of day and sampling window)
  • predict.queue - number of queued queries in current window
  • predict.learned - number of learned queries in current window

Properties

predict.config({ window = 15, period = 24})

Reconfigure the predictor’s tracking window and period length. Both parameters are optional. The window length is in minutes; the period is the number of windows that can be kept in memory, e.g. if the window is 15 minutes, a period of 24 means 6 hours.
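
For example (values follow the tuning tip above; a sketch):

-- A slower personal resolver: 30-minute window tracked for a day (48 windows)
predict.config({ window = 30, period = 48 })
-- Prefetch expiring records only, without prediction
predict.config({ period = 0 })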

Graphite module

The module sends statistics over the Graphite protocol to Graphite, Metronome, InfluxDB, or any compatible storage. This allows powerful visualization of the metrics collected by Knot DNS Resolver.

Tip

The Graphite server is challenging to get up and running; InfluxDB combined with Grafana is much easier and provides a richer set of options and available front-ends. Alternatively, Metronome by PowerDNS provides a mini-Graphite server for much simpler setups.

Example configuration

Only the host parameter is mandatory.

By default the module uses UDP, so it doesn’t guarantee delivery; set tcp = true to enable Graphite over TCP. If the TCP consumer goes down or the connection with Graphite is lost, the resolver will periodically attempt to reconnect.

modules = {
        graphite = {
                prefix = hostname(), -- optional metric prefix
                host = '127.0.0.1',  -- graphite server address
                port = 2003,         -- graphite server port
                interval = 5 * sec,  -- publish interval
                tcp = false          -- set to true if want TCP mode
        }
}

The module supports sending data to multiple servers at once.

modules = {
        graphite = {
                host = { '127.0.0.1', '1.2.3.4', '::1' },
        }
}

Dependencies

  • luasocket available in LuaRocks

    $ luarocks install luasocket

Memcached cache storage

This module provides a cache storage backend for memcached, which is a good fit for a shared cache between resolvers.

After loading you can see the storage backend registered and usable.

> modules.load 'kmemcached'
> cache.backends()
[memcached://] => true

You can use it right away; see the libmemcached configuration reference for configuration string options, the most essential being --SERVER and --SOCKET. Here’s an example of connecting to a UNIX socket.

> cache.storage = 'memcached://--SOCKET="/var/sock/memcached"'

Note

The memcached instance MUST support the binary protocol in order to work with binary keys. You can pass other options in the configuration string for performance tuning.
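
For instance, a sketch with additional libmemcached options (the exact option set is an assumption; consult the libmemcached configuration reference for your version):

> cache.storage = 'memcached://--SERVER=127.0.0.1 --BINARY-PROTOCOL --POOL-MIN=4 --POOL-MAX=16'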

Warning

The memcached server is responsible for evicting entries from the cache; the pruning function is not implemented, and neither is aborting write transactions.

Build resolver shared cache

memcached takes care of data replication and failover; you can add multiple servers at once.

> cache.storage = 'memcached://--SOCKET="/var/sock/memcached" --SERVER=192.168.1.1 --SERVER=cache2.domain'

Dependencies

Depends on the libmemcached library.

Redis cache storage

This module provides a Redis backend for cache storage. Redis is a BSD-licensed key-value cache and storage server. Like the memcached backend, Redis provides master-server replication, but also weak-consistency clustering.

After loading you can see the storage backend registered and usable.

> modules.load 'redis'
> cache.backends()
[redis://] => true

The Redis client supports TCP and UNIX sockets.

> cache.storage = 'redis://127.0.0.1'
> cache.storage = 'redis://127.0.0.1:6398'
> cache.storage = 'redis:///tmp/redis.sock'

It also supports indexed databases if you prefix the configuration string with DBID@.

> cache.storage = 'redis://9@127.0.0.1'

Warning

The Redis client doesn’t really support transactions or pruning. The cache eviction policy should be left to the Redis server; see Using Redis as an LRU cache.

Build distributed cache

See Redis Cluster tutorial.

Dependencies

Depends on the hiredis library, which is usually available in packages/ports, or you can install it from source.

Etcd module

The module connects to Etcd peers and watches for configuration changes. By default, the module looks for the subtree under the /kresd directory, but you can change this in the configuration.

The subtree structure corresponds to the configuration variables in the declarative style.

$ etcdctl set /kresd/net/127.0.0.1 53
$ etcdctl set /kresd/cache/size 10000000

This configures all listening nodes to the following configuration:

net = { '127.0.0.1' }
cache.size = 10000000

Example configuration

modules = {
        ketcd = {
                prefix = '/kresd',
                peer = 'http://127.0.0.1:7001'
        }
}

Warning

Work in progress!

Dependencies

  • lua-etcd available in LuaRocks

    $ luarocks install etcd --from=http://mah0x211.github.io/rocks/

Web interface

This module provides an embedded web interface for the resolver. It plots current performance in real time, including a feed of recent iterative queries. It also includes bindings to MaxMind GeoIP and presents a world map coloured by query frequency, so you can see where your queries go.

The stats module is required for plotting the query rate. By default, the web interface listens on localhost:8053.

Examples

-- Load web interface
modules = { 'tinyweb' }
-- Listen on specific address/port
modules = {
  tinyweb = {
    addr = 'localhost:8080', -- Custom address
    geoip = '/usr/local/var/GeoIP' -- Different path to GeoIP DB
  }
}

Dependencies

It depends on Go 1.5+ and the github.com/abh/geoip package.

$ <install> libgeoip
$ go get github.com/abh/geoip

DNS64

This module performs RFC 6147 DNS64 AAAA-from-A record synthesis; it is used to enable client-server communication between an IPv6-only client and an IPv4-only server. See the well-written introduction in the PowerDNS documentation.

Tip

The A record sub-requests will be DNSSEC-secured, but the synthetic AAAA records can’t be. Make sure the last mile between the stub and the resolver is secure to avoid spoofing.

Example configuration

-- Load the module with a NAT64 address
modules = { dns64 = 'fe80::21b:77ff:0:0' }
-- Reconfigure later
dns64.config('fe80::21b:aabb:0:0')

Renumber

The module renumbers addresses in answers to a different address space, e.g. you can redirect malicious addresses to a blackhole, or use private address ranges in local zones that will be remapped to real addresses by the resolver.

Warning

While requests are still validated using DNSSEC, the signatures are stripped from the final answer, because the address synthesis breaks them. You can see whether an answer was valid based on the AD flag.

Example configuration

modules = {
        renumber = {
                -- Source subnet, destination subnet
                {'10.10.10.0/24', '192.168.1.0'},
                -- Remap /16 block to localhost address range
                {'166.66.0.0/16', '127.0.0.0'}
        }
}