Configure your multicast WAN for OTV

It is easy to find design and configuration guides about OTV implementations on Nexus 7000 switches and ASR and CSR routers. But it is much harder to find information about the requirements for your WAN.
Please read my previous blog posts about OTV here, here, here and here; I cover the OTV device configurations in those posts. For now, let’s start with the DCI WAN for OTV.
First of all, there are two OTV deployment options:

  • Unicast mode
  • Multicast mode

The WAN requirement in unicast mode is simple: deliver unicast connectivity between the join interfaces of all OTV edge devices. This is a simple, straightforward configuration, so I will not cover it in this blog post.
The multicast deployment is a bit harder to configure, and its requirements are harder to find. This blog post will cover the required WAN configuration in a multicast deployment. In this particular scenario, we use dark fiber / DWDM connections as DCI to get a clearer understanding of the requirements and configuration.
First, a diagram to give an overview of this deployment scenario:

OTV WAN multicast layout


This blog post will show you the easiest way to get your OTV multicast deployment up and running. There are more fine-tuning options available, but those will not be covered in this post.
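To give an idea of the WAN side, here is a minimal sketch of the multicast configuration on a WAN router that sits between the OTV join interfaces. This is IOS-XE syntax, and all interface names, IP addresses and the RP address are made-up examples, not taken from the actual deployment:

ip multicast-routing distributed
!
interface TenGigabitEthernet0/0/0
 description Link towards the OTV join interface in DC1
 ip address 10.0.1.1 255.255.255.252
 ip pim sparse-mode
 ! OTV join interfaces signal group membership with IGMPv3
 ip igmp version 3
!
interface TenGigabitEthernet0/1/0
 description Dark fiber / DWDM link towards DC2
 ip address 10.0.12.1 255.255.255.252
 ip pim sparse-mode
!
! ASM for the OTV control group: a static RP is the simplest option
ip pim rp-address 10.255.255.1
! SSM for the OTV data groups (default range 232.0.0.0/8)
ip pim ssm default

In short: PIM sparse mode end-to-end, IGMPv3 towards the join interfaces, an RP for the control group and SSM for the data groups.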
Read more

LISP Mobility with OTV

In previous posts we talked about implementing OTV with ASR routers. OTV is an overlay technology that provides end-to-end layer 2 connections over a layer 3 (WAN) network. In most implementations, FHRP (First Hop Redundancy Protocol, like HSRP/VRRP) filtering is needed. These filters keep routing in the same datacenter where the traffic originates.
Let’s take another look at the high level design:
OTV Network layout
When FHRP filtering is active, the virtual IP (i.e. the default gateway for clients) is active in both datacenters. This means a packet flow from a server in DC1 is routed on the core switch/router in DC1. If you move (vMotion / live migrate) that server to DC2, the packet flow is routed on the switch/router in DC2.
If you think this through, the outgoing datacenter traffic flows are efficient: routing is done on the nearest router. But… incoming traffic from branch offices is still not efficient: the WAN network does not know where the VM is hosted, so the packets are routed by the normal routing protocols. This can result in inefficient routing: if the IP range is routed to DC1 on the WAN and the VM is hosted in DC2, the datacenter interconnect (OTV) will be used to get the packets to the VM.
This is where LISP mobility comes in.
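To give a feel for what this looks like, here is a rough sketch of a LISP host-mobility (dynamic-EID) configuration in IOS-XE syntax, on the device that acts as default gateway for the extended subnet. All names, prefixes, locators and keys are made-up examples, and a map-server/map-resolver is assumed to exist in the WAN:

router lisp
 locator-set DC1
  ! Locator (RLOC) of this xTR; the DC2 device uses its own locator-set
  10.1.1.1 priority 1 weight 100
 exit
 !
 eid-table default instance-id 0
  ! Track individual hosts in the extended subnet as dynamic EIDs
  dynamic-eid VLAN10-SERVERS
   database-mapping 192.168.10.0/24 locator-set DC1
  exit
 exit
 !
 ipv4 itr map-resolver 10.0.0.100
 ipv4 etr map-server 10.0.0.100 key LISP-KEY
 ipv4 itr
 ipv4 etr
!
interface Vlan10
 description Gateway for the OTV-extended VLAN
 ip address 192.168.10.2 255.255.255.0
 ! Detect hosts that show up locally and register them as dynamic EIDs
 lisp mobility VLAN10-SERVERS

When a VM moves to DC2, the device in DC2 detects it on the extended VLAN, registers it with the map-server, and ingress routers start tunnelling traffic for that host directly to DC2.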
Read more

OTV FHRP filtering on an ASR router

We configured an OTV DCI in my previous post and it was working as expected and by design. But while testing all the VLANs, I discovered a problem with HSRP over OTV, but only for one specific VLAN. The test results:

  • A ping from a host in DC1 in VLAN 10 to the HSRP address shows random drops
  • A ping from a host in DC1 in any other VLAN to the HSRP address works without any problems
  • After shutting down the SVI of VLAN 10 in DC2, a ping from a host in DC1 in VLAN 10 to the HSRP address works without any problems
  • With VLAN 10 still disabled in DC2, a host in DC2 can still ping the HSRP address in DC1. This should be impossible because of the FHRP filtering
  • Changing the standby group number (they are the same in DC1 and DC2 to keep the same MAC address) partially solved the problem, but some hosts in DC1 got the HSRP MAC of DC2 in their ARP tables. This is not what we want
  • After moving the SVI from a 6500 switch to a 3750 switch in DC1, none of the above problems occur

I still have no idea why this problem only exists for VLAN 10; all other VLANs work as expected. But I’ve found a good workaround in the configuration guide:
http://www.cisco.com/c/en/us/td/docs/ios-xml/ios/wan/command/wan-cr-book/wan-m1.html#wp3953249580
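I won’t spoil the exact workaround here, but for context: on IOS-XE, FHRP filtering lives on the overlay interface itself and is controlled with a single command. A minimal sketch (the interface number is just an example):

interface Overlay1
 ! Filter FHRP (HSRP/VRRP) hellos from being sent across the overlay
 otv filter-fhrp

Compare that with the Nexus 7000, where the same result takes a MAC ACL plus a route-map on the OTV IS-IS process.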
Read more

Configuring OTV on a Cisco ASR

During a project I’ve been working on, we needed to configure OTV on a Cisco ASR. I wrote a blog post about configuring OTV on a Nexus 7000 before (click here), but the configuration on a Cisco ASR router is a bit different. The underlying technologies and basic configuration steps are the same, but the syntax differs for a few configuration steps.
Unfortunately, the documentation is not as good as it is for the Nexus 7000. I’ve found one good configuration guide (here), but it doesn’t cover everything. So it’s a good reason to write a blog post about the basic OTV configuration on a Cisco ASR router.
For more information about OTV, check this website.
First, the network layout for this OTV deployment:
OTV Network layout
 
As you can see in the diagram, the ASR routers are connected back-to-back. There is no guideline for how to connect these routers, as long as there is IP connectivity between them, with multicast capabilities and an MTU of at least 1542 bytes.
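To make this a bit more concrete, here is a minimal sketch of the join interface and overlay configuration on one of the ASR routers. All interface names, addresses, VLANs and multicast groups are made-up examples:

! Site settings: the site bridge-domain and site-identifier must be
! unique per datacenter
otv site bridge-domain 99
otv site-identifier 0000.0000.0001
!
ip multicast-routing distributed
!
interface GigabitEthernet0/0/0
 description OTV join interface (back-to-back link to the other ASR)
 mtu 1542
 ip address 10.0.0.1 255.255.255.252
 ip pim passive
 ip igmp version 3
!
interface GigabitEthernet0/0/1
 description OTV internal interface towards the LAN
 service instance 10 ethernet
  encapsulation dot1q 10
  bridge-domain 10
!
interface Overlay1
 no ip address
 otv control-group 239.1.1.1
 otv data-group 232.1.1.0/28
 otv join-interface GigabitEthernet0/0/0
 no shutdown
 service instance 10 ethernet
  encapsulation dot1q 10
  bridge-domain 10

The service instances tie a VLAN to a bridge-domain on both the internal interface and the overlay interface; that bridge-domain is what gets extended between the sites.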
Read more

Cisco Nexus 7000 OTV configuration

Another post, this time about the basic OTV configuration on a Nexus 7000.
The OTV configuration has to be done on a separate switch (or VDC) where no SVIs are configured for the VLANs you want to extend to the other site.
First of all some terminology:
  • Edge device: This device performs layer 2 functions towards the internal network and OTV transport to the other site(s).
  • Transport network: This is the network (can be layer 3) that connects all the sites. This is your WAN connection, possibly managed by your service provider.
  • Join interface: This is the uplink interface on the edge device that is connected to the transport network.
  • Internal interface: This is the interface on the edge device that is connected to the internal network.
  • Overlay interface: This is a logical multi-access, multicast-capable interface. It encapsulates layer 2 frames in IP headers (also called ‘MAC routing’).
  • Overlay network: A logical network that connects all sites together and uses MAC routing for interconnecting the sites.
  • Site: Your (layer 2) network on a location. In most cases, this is one of your datacenters.
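
To tie the terminology together, here is a minimal sketch of how these pieces map to a Nexus 7000 configuration in the OTV VDC. VLANs, groups and addresses are made-up examples:

feature otv
!
! Site settings: the site VLAN and site-identifier are local to each site
otv site-vlan 99
otv site-identifier 0x1
!
interface Ethernet1/1
  description Join interface towards the transport network
  ip address 10.0.0.1/30
  ip igmp version 3
  no shutdown
!
interface Ethernet1/2
  description Internal interface towards the internal network
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 10,20,99
  no shutdown
!
interface Overlay1
  description Overlay network towards the other site(s)
  otv join-interface Ethernet1/1
  otv control-group 239.1.1.1
  otv data-group 232.1.1.0/28
  otv extend-vlan 10,20
  no shutdown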