Configure your multicast WAN for OTV

It is easy to find design and configuration guides about OTV implementations on Nexus 7000 switches and ASR and CSR routers. But it is much harder to find information about the requirements for your WAN.
Please read my previous blog posts about OTV here, here, here and here. I cover the OTV device configurations in those posts. But for now, let's start with the DCI WAN for OTV.
First of all, there are two OTV deployment options:

  • Unicast mode
  • Multicast mode

The WAN requirements in unicast mode are simple: deliver unicast connectivity between the join interfaces of all OTV edge devices. This is a simple, straightforward configuration, so I will not cover it in this blog post.
The multicast deployment is a bit harder to configure and its requirements are harder to find. This blog post covers the required WAN configuration in a multicast deployment. In this particular scenario, we use dark fiber / DWDM connections as DCI to get a clearer understanding of the requirements and configuration.
First, a drawing to visualize this deployment scenario:

OTV WAN multicast layout


This blog post will show you the easiest way to get your OTV multicast deployment up and running. There are more fine-tuning options available, but those will not be covered in this post.
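To give a first impression of what that configuration looks like, here is a minimal sketch for a WAN router in the multicast path. The interface names, IP addresses, RP address and group ranges are example values I picked for illustration, not values from the actual deployment:

  ip multicast-routing distributed
  ip pim ssm default                   ! OTV data-groups live in the SSM range (232.0.0.0/8)
  ip pim rp-address 10.0.0.1           ! static RP for the ASM control-group
  !
  interface GigabitEthernet0/0/0
   description Towards OTV join interface
   ip address 10.1.1.1 255.255.255.252
   ip pim sparse-mode                  ! PIM on every interface in the multicast path
   ip igmp version 3                   ! OTV edge devices signal data-groups with IGMPv3

The key points are that the control-group needs ASM (so an RP), the data-groups need SSM, and the interfaces facing the OTV join interfaces need IGMPv3.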
Read more

LISP Mobility with OTV

In previous posts we talked about implementing OTV with ASR routers. OTV is an overlay technology that provides end-to-end layer 2 connections over a layer 3 (WAN) network. In most implementations, FHRP (First Hop Redundancy Protocol, like HSRP/VRRP) filtering is needed. These filters are needed to keep routing in the same datacenter where the traffic originates.
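On the ASR, for example, this filtering comes down to a single command on the overlay interface (as far as I know it is even enabled by default there; on the Nexus 7000 you have to build the MAC ACLs yourself):

  interface Overlay1
   otv filter-fhrp      ! drop HSRP/VRRP hellos so the VIP stays active in both DCs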
Let’s take another look at the high level design:
OTV Network layout
When FHRP filtering is active, the virtual IP (a.k.a. the default gateway for clients) is active in both datacenters. This means that a packet flow from a server in DC1 is routed on the core switch/router in DC1. If you move (vMotion / live migrate) that server to DC2, the packet flow is routed on the switch/router in DC2.
If you think this through, the outgoing datacenter traffic flows are efficient: routing is done on the nearest router. But... incoming traffic from branch offices is still not efficient: the WAN network does not know where the VM is hosted, so the packets are routed by the normal routing protocols. This can result in inefficient routing: if the IP range is routed to DC1 on the WAN while the VM is hosted in DC2, the Datacenter Interconnect (OTV) will be used to get the packets to the VM.
This is where LISP mobility comes in.
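To give an idea of what LISP mobility looks like on an IOS-XE router, here is a minimal sketch. The dynamic-EID name, prefixes and RLOC address are made-up example values, and the map-server/map-resolver configuration is omitted:

  router lisp
   eid-table default instance-id 0
    dynamic-eid ROAMING-VMS
     database-mapping 10.10.10.0/24 10.255.0.1 priority 1 weight 100
     map-notify-group 239.255.0.1    ! tell the other xTRs about detected moves
  !
  interface GigabitEthernet0/0/1.100
   encapsulation dot1Q 100
   ip address 10.10.10.2 255.255.255.0
   lisp mobility ROAMING-VMS         ! detect hosts from this subnet moving in

The xTR that detects a moved VM registers the /32 of that VM, so branch traffic gets tunneled to the datacenter where the VM actually lives.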
Read more

Configuring OTV on a Cisco ASR

During a project I've been working on, we needed to configure OTV on a Cisco ASR. I wrote a blog post about configuring OTV on a Nexus 7000 before (click here), but the configuration on a Cisco ASR router is a bit different. The technologies and basic configuration steps are the same, but the syntax differs for a few configuration steps.
Unfortunately, the documentation is not as good as for the Nexus 7000. I've found one good configuration guide (here), but it doesn't cover everything. So it's a good reason to write a blog post about the basic OTV configuration on a Cisco ASR router.
For more information about OTV, check this website.
First, the network layout for this OTV network.
OTV Network layout
 
As you can see in the diagram, the ASR routers are connected back-to-back. There is no guideline on how to connect these routers, as long as there is IP connectivity between them with multicast capabilities and an MTU of at least 1542 bytes.
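To show where this post ends up, here is a minimal sketch of the OTV part on one of the ASR routers. VLAN 100, the site identifier, the multicast groups and the interface names are example values:

  ip multicast-routing distributed
  otv site bridge-domain 10            ! site VLAN, detects other local edge devices
  otv site-identifier 0000.0000.0001
  !
  interface GigabitEthernet0/0/0       ! join interface towards the other site
   ip address 10.1.1.2 255.255.255.252
   ip pim passive
   ip igmp version 3
  !
  interface Overlay1
   otv control-group 239.1.1.1
   otv data-group 232.1.1.0/28
   otv join-interface GigabitEthernet0/0/0
   service instance 100 ethernet       ! extend VLAN 100 over the overlay
    encapsulation dot1q 100
    bridge-domain 100
  !
  interface GigabitEthernet0/0/1       ! internal, site-facing interface
   service instance 100 ethernet
    encapsulation dot1q 100
    bridge-domain 100

Note how the ASR uses service instances and bridge-domains to tie the site-facing VLAN to the overlay, where the Nexus 7000 simply lists the extended VLANs on the overlay interface.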
Read more