NS1's Kris Beevers Discusses DevOps Need for Dedicated DNS
Wednesday, March 30, 2016
Richard Harris
We spoke with Kris Beevers, CEO of NS1, to learn more about how DevOps is changing the way applications are developed and how DNS technology is evolving to keep up with today’s application delivery challenges. These developments form the background for the introduction of NS1’s new offering, Dedicated DNS.
Dedicated DNS is a managed DNS service for enterprises, service providers and developers based on NS1’s intelligent authoritative and recursive DNS technology. The managed offering provides a solution to a number of challenges network and application teams face in delivering reliable and responsive application and service connectivity for both internal and external users.
ADM: What, in your view, is the problem with DNS technology today?
Beevers: DNS is a foundational protocol of the Internet, and just about every online application depends on it both externally - enabling users to type “example.com” instead of remembering a clunky IP address - and internally, certainly to solve the same naming problem for intranet and corporate users but also to enable much more complex application-centric use cases, like discovery of microservices (“what database should this application server connect to?”), internal traffic management (“which origin datacenter will get this content to our CDN fastest?”) and so on.
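To make the internal service-discovery case concrete, here is a minimal sketch using Python's dnspython library. The service name and record layout are hypothetical examples, not anything specific to NS1: the point is simply that an application can ask DNS "what should I connect to?" via an SRV lookup.

```python
# Minimal sketch of DNS-based service discovery (hypothetical names).
# An application server asks DNS "what database should I connect to?"
# by looking up an SRV record, which carries a target host and a port.
import dns.resolver

answers = dns.resolver.resolve("_postgres._tcp.db.internal", "SRV")

# Pick the highest-priority answer (lowest priority number wins).
best = min(answers, key=lambda rr: (rr.priority, -rr.weight))
print(f"connect to {best.target.to_text().rstrip('.')}:{best.port}")
```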
The challenge with existing DNS technology - mostly exemplified by open source software, hardware appliances and some long-in-the-tooth managed services vendors - is that none of it was designed with today’s dynamic, distributed, DevOps-driven application environments in mind. DNS was always meant to be a highly reliable distributed interface to a not-too-dynamic database, mapping names to service information. That not-too-dynamic assumption no longer holds.
ADM: How does your Dedicated DNS address this challenge?
Beevers: For some time now, NS1’s been solving problems for companies addressing global application and content delivery challenges with our next-gen Managed DNS platform. There, our focus is on providing highly dynamic, data-driven traffic management via our globally distributed intelligent DNS network.
But it turns out there are a lot of needs for the same technology inside our customers’ environments. The API-addressability of our platform makes it well suited to automated, configuration managed applications. As a simple example, if you’re using a container management system to move Web servers around in an internal compute cluster, you need to propagate the details of that to your DNS quickly so the rest of your microservices can find the Web servers.
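As a rough illustration of that workflow, the sketch below shows a scheduler hook pushing a rescheduled Web server's new address into DNS over a provider's REST API. The endpoint, payload and credential are illustrative of an NS1-style API, not an exact specification.

```python
# Sketch: after a container scheduler reschedules a web server, push its new
# IP to DNS over the provider's REST API so other microservices can find it.
# The endpoint and payload below are illustrative, not exact.
import requests

API_KEY = "YOUR-API-KEY"                      # hypothetical credential
ZONE, RECORD = "internal.example", "web.internal.example"
new_ip = "10.0.1.42"                          # address assigned by the scheduler

resp = requests.post(
    f"https://api.nsone.net/v1/zones/{ZONE}/{RECORD}/A",
    headers={"X-NSONE-Key": API_KEY},
    json={"zone": ZONE, "domain": RECORD, "type": "A",
          "answers": [{"answer": [new_ip]}], "ttl": 30},
    timeout=10,
)
resp.raise_for_status()
```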
At the same time, we’re also finding a number of internal or on-prem use cases that look like those we address with our external Managed DNS. For example, if you’ve deployed an edge network to cache content close to a global user base but you need to make real-time decisions about which core storage facility to go fetch uncached content from, it’s useful to take a modern traffic management platform and embed it at your edge to make those decisions at high frequency, with low latency. There are probably a lot of use cases yet to be uncovered.
In a nutshell, we’re taking a modern DNS and traffic management engine, which natively takes care of synchronizing very dynamic configuration changes and data across a distributed set of nodes, and can be easily automated against with a full-featured API – and letting our customers put the delivery nodes wherever makes sense for their application: inside their environments, in the cloud, behind the firewall, Internet-facing, wherever.
And we’re absolving customers of the problem of managing the technology and keeping it up to date, since Dedicated DNS is fully managed and synchronized with our control plane.
ADM: Open source DNS has been in use for years – why should that change?
Beevers: Open source DNS systems are awesome. BIND is the canonical implementation of the DNS protocol, and many managed service providers in the space are built atop it. PowerDNS, nsd, knot – all these (and many others) are important projects upon which the community relies, and we support and are involved in some of them. There will always be a place for open source DNS servers.
Today, most open source DNS servers are designed around the more traditional idea of what DNS is: they’re configured with zone files or maybe a backing database and built to serve up static mappings of names to IP addresses and other service information reliably and fast. They are great at that.
But in general they aren’t built from the ground up to be deployed as part of a highly distributed, highly dynamic application ecosystem where global synchronization of a rapidly changing DNS dataset is critical, and certainly they aren’t built to enable some of the super-advanced traffic management that we’re doing today. Nor should they be: DNS is the substrate for delivery of our technology, in the same way HTTP is the substrate for a global content delivery network – you wouldn’t expect nginx to implement global content synchronization, purging and the rest of what makes up a modern CDN, and neither should you expect BIND to implement global real-time DNS configuration and metadata synchronization, telemetry-based routing, global real-time reporting or the rest of what goes into a modern managed and dedicated DNS platform like NS1’s.
ADM: Talk about the issue of redundancy in DNS.
Beevers: If DNS breaks, then usually, so does your application. This is true especially of public-facing DNS, because DNS lookup is the entrypoint to the application; when you type “example.com” into your browser and that fails, the rest of the crazy-redundant infrastructure you’ve built to serve your users doesn’t matter.
The tools we use to ensure DNS availability have changed a lot over the years. First, operators leveraged zone transfers and slaving to replicate mostly-static DNS data across a few servers. Later, BGP anycasting cropped up, and today, if you’re hyper-sensitive to DNS downtime, you should be working with a provider like NS1 that operates a globally anycasted delivery network.
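For reference, the traditional replication model Beevers mentions looks roughly like the following sketch, where a secondary server pulls a full copy of a zone from the primary via AXFR (a zone transfer). The primary's address and the zone name are placeholders.

```python
# Sketch of the traditional model: a secondary server replicates a
# mostly-static zone from the primary via AXFR (full zone transfer).
# The primary nameserver address and zone name are placeholders.
import dns.query
import dns.zone

PRIMARY = "192.0.2.1"          # hypothetical primary nameserver
zone = dns.zone.from_xfr(dns.query.xfr(PRIMARY, "example.com"))

# Print the replicated records; a real secondary would serve these answers.
for name, node in zone.nodes.items():
    for rdataset in node.rdatasets:
        print(name, rdataset)
```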
For the biggest, most advanced applications on the Internet today, even that isn’t enough: any one network could have a hiccup, even when it’s heavily overbuilt, internally redundant and connected to the Internet via an array of upstream carriers. In the last year, we’ve seen outages across some of the biggest names on the Internet because of DNS issues. But using more than one DNS network is hard once you start relying on the complex traffic management features and other tools of a modern platform, because every platform is different and it’s hard to translate config across providers.
ADM: How does your solution address this?
Beevers: One of the fascinating use cases we’ve seen for our new Dedicated DNS offering is to deploy redundant DNS delivery networks - on isolated hardware and with independent connectivity - that share the same delivery engine and are automagically kept in sync by our cloud control plane.
This is the holy grail of DNS reliability: a modern platform with advanced traffic management features and API-addressability, in a dual-network configuration to add redundancy, without any of the synchronization headaches you run into when using independent providers with incompatible configurations.
ADM: In what way is quality of experience (QoE) affected by typical DNS routing?
Beevers: In the context of an online application, QoE means a lot of things, but most importantly it’s about ensuring the user is able to interact with the application - fetching static content, taking dynamic actions and seeing results, etc. - in a reliable and performant way.
As audiences become more global and more demanding, and as applications become more dynamic, maintaining QoE is a challenge. Necessarily, applications are becoming more distributed, both locally (e.g., with automated horizontal scaling to manage QoE in a traffic spike) and globally (e.g., by pushing application servers out to the “edges” of the Internet to provide quick service to nearby users).
Typical DNS technology isn’t really “QoE-aware” – it’s meant to statically answer DNS queries reliably and fast, but not to have a deeper impact on how a user subsequently interacts with a distributed infrastructure. Modern platforms like NS1’s are different: they take in telemetry from the infrastructure and the Internet to make smart decisions about how users should connect to an application.
Dedicated DNS extends this capability into the application environments themselves. So, for example, when an application server needs to make a fully automated, QoE-aware decision about which backend service to interact with, especially when the backend service itself is distributed, a local DNS lookup to a Dedicated DNS node results in a smarter decision based on real-time data on the performance of the backend service.
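From the application server's side, that lookup might look like the following sketch: the server queries a local Dedicated DNS node directly rather than its default resolver, and the answer it gets back reflects real-time data about the backend service. The resolver address and service name are placeholders.

```python
# Sketch of the lookup described above: the application server asks a local
# Dedicated DNS node (rather than its default resolver) which backend to use.
# The resolver IP and service name are hypothetical placeholders.
import dns.resolver

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["10.0.0.53"]      # local Dedicated DNS node

answer = resolver.resolve("storage.backend.internal", "A")
backend_ip = answer[0].to_text()
print(f"routing this request to backend {backend_ip}")
```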
ADM: Give us the nuts and bolts of how Dedicated DNS works.
Beevers: Dedicated DNS is an on-prem SaaS solution, not a software solution. That means operationally, Dedicated DNS nodes are just more nodes in our global intelligent DNS network, fully managed by our engineers, updated with our latest capabilities, monitored and synchronized – but they’re dedicated to a specific customer and deployable in very flexible setups, essentially just like server software.
In a typical deployment, we’ll work with the DevOps team of our customer to recommend a setup that is right-sized and meets the reliability and performance needs of the application, and once server and network resources are provisioned, we’ll take over from there and turn the resources into a Dedicated DNS network.
Using Dedicated DNS is no different from using our Managed DNS platform: customers use our APIs and portal to manage DNS zones and records, get visibility into the behavior of their DNS and configure advanced traffic management. The interfaces are unified, and most of our Dedicated DNS customers are also using our Managed DNS platform for their public-facing domains.
ADM: How is this different from what other managed providers are offering?
Beevers: No other managed DNS provider offers an “as-a-Service” model for DNS technology deployed on customer-owned infrastructure, whether inside an enterprise’s network or in the cloud. Until now, the only real solution for the challenges solved by Dedicated DNS was to build and maintain your own DNS infrastructure with open source software, or to buy expensive appliances.
None of those approaches are really suitable for today’s application environments in all their dynamism, and they require a lot of in-house expertise. Modern DNS use cases are complex enough that for most companies, it makes a lot of sense to offload management of the DNS infrastructure, both on-network and public-facing, to experts. Doing so frees up DevOps bandwidth and capital for work that’s actually core to the business.
Today, we’re working with enterprises spanning a diverse set of industries, from online media to advertising tech, consumer SaaS and even global manufacturing and financial services companies, to deliver dedicated, fully managed solutions to their DNS challenges.
Read more: http://www.ns1.com/dedicated-dns