Configuring OTV on a Cisco ASR
During a project I’ve been working on, we needed to configure OTV on a Cisco ASR. I wrote a blog post about configuring OTV on a Nexus 7000 before (click here), but the configuration on a Cisco ASR router is a bit different. The underlying technologies and basic configuration steps are the same, but the syntax differs for a few steps.
Unfortunately, the documentation is not as good as it is for the Nexus 7000. I’ve found one good configuration guide (here), but it doesn’t cover everything. So that’s a good reason to write a blog post about the basic OTV configuration on a Cisco ASR router.
For more information about OTV, check this website.
First, the network layout for this OTV network.
As you can see in the diagram, the ASR routers are connected back-to-back. There is no guideline for how to connect these routers, as long as there is IP connectivity between them with multicast capabilities and an MTU of at least 1542 bytes.
OTV configuration on an ASR router REQUIRES at least two physical interfaces. You cannot get OTV working with a one-interface configuration. The reason is simple: the ASR is a router and is therefore unaware of VLANs. Also, the ‘trunk’ configuration on the ASR does not allow you to use subinterfaces.
– Step 1: Join interface configuration on both routers
DC1
interface TenGigabitEthernet0/1/0
 mtu 1542
 ip address 1.1.1.1 255.255.255.252
 ip pim passive
 ip igmp version 3
end
DC2
interface TenGigabitEthernet0/1/0
 mtu 1542
 ip address 1.1.1.2 255.255.255.252
 ip pim passive
 ip igmp version 3
end
The interface configuration is pretty basic. PIM and IGMP are used for OTV’s multicast traffic. In our configuration, multicast is used for the MAC advertisements, so your WAN network has to be multicast enabled. Unicast-only operation is possible, but you’ll need an adjacency server for that.
Another important configuration step is changing the MTU. OTV adds a header of 42 bytes, which means the WAN network has to carry packets of at least 1542 bytes. Make sure your WAN network has an MTU of at least 1542 bytes!
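You can verify the WAN path before turning up OTV by sending a full-size ping with the DF bit set between the join interfaces (addresses taken from step 1; the device prompt is illustrative). If this ping fails, the path MTU is too small for OTV:

```
DC1-ASR#ping 1.1.1.2 size 1542 df-bit
```

A successful ping (!!!!!) at size 1542 with the DF bit set indicates the WAN can carry OTV-encapsulated frames without fragmentation.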
– Step 2: Global configuration
DC1
otv site bridge-domain 11
otv site-identifier 0000.0000.0011
DC2
otv site bridge-domain 12
otv site-identifier 0000.0000.0012
The “site bridge-domain” defines the site VLAN. This VLAN is unique per site and cannot be stretched to other datacenters!
The site-identifier is a unique ID for a site (i.e. a datacenter).
– Step 3: Overlay configuration
DC1
interface Overlay1
 no ip address
 otv control-group 239.2.3.4
 otv data-group 232.1.1.0/24
 otv join-interface TenGigabitEthernet0/1/0
 no otv suppress arp-nd
 service instance 10 ethernet
  encapsulation dot1q 10
  bridge-domain 10
 !
 service instance 20 ethernet
  encapsulation dot1q 20
  bridge-domain 20
 !
 service instance 30 ethernet
  encapsulation dot1q 30
  bridge-domain 30
 !
 service instance 40 ethernet
  encapsulation dot1q 40
  bridge-domain 40
 !
 service instance 50 ethernet
  encapsulation dot1q 50
  bridge-domain 50
 !
 service instance 60 ethernet
  encapsulation dot1q 60
  bridge-domain 60
DC2
interface Overlay1
 no ip address
 otv control-group 239.2.3.4
 otv data-group 232.1.1.0/24
 otv join-interface TenGigabitEthernet0/1/0
 no otv suppress arp-nd
 service instance 10 ethernet
  encapsulation dot1q 10
  bridge-domain 10
 !
 service instance 20 ethernet
  encapsulation dot1q 20
  bridge-domain 20
 !
 service instance 30 ethernet
  encapsulation dot1q 30
  bridge-domain 30
 !
 service instance 40 ethernet
  encapsulation dot1q 40
  bridge-domain 40
 !
 service instance 50 ethernet
  encapsulation dot1q 50
  bridge-domain 50
 !
 service instance 60 ethernet
  encapsulation dot1q 60
  bridge-domain 60
This is another configuration difference from the Nexus 7000: the service instance. This configuration defines the OTV-enabled VLANs. In our case, VLANs 10, 20, 30, 40, 50 and 60 are stretched between the datacenters.
– Step 4: Connection to datacenter LAN
DC1
interface TenGigabitEthernet0/2/0
 no ip address
 service instance 10 ethernet
  encapsulation dot1q 10
  bridge-domain 10
 !
 service instance 11 ethernet
  encapsulation dot1q 11
  bridge-domain 11
 !
 service instance 20 ethernet
  encapsulation dot1q 20
  bridge-domain 20
 !
 service instance 30 ethernet
  encapsulation dot1q 30
  bridge-domain 30
 !
 service instance 40 ethernet
  encapsulation dot1q 40
  bridge-domain 40
 !
 service instance 50 ethernet
  encapsulation dot1q 50
  bridge-domain 50
 !
 service instance 60 ethernet
  encapsulation dot1q 60
  bridge-domain 60
DC2
interface TenGigabitEthernet0/2/0
 no ip address
 service instance 10 ethernet
  encapsulation dot1q 10
  bridge-domain 10
 !
 service instance 12 ethernet
  encapsulation dot1q 12
  bridge-domain 12
 !
 service instance 20 ethernet
  encapsulation dot1q 20
  bridge-domain 20
 !
 service instance 30 ethernet
  encapsulation dot1q 30
  bridge-domain 30
 !
 service instance 40 ethernet
  encapsulation dot1q 40
  bridge-domain 40
 !
 service instance 50 ethernet
  encapsulation dot1q 50
  bridge-domain 50
 !
 service instance 60 ethernet
  encapsulation dot1q 60
  bridge-domain 60
As we know, the ASR is a router and therefore unaware of VLANs. To get a (required) layer 2 connection to the datacenter LAN, we need to configure service instances on the LAN-facing interface. Note: you cannot share this interface with the WAN (join) interface!
That’s the whole OTV configuration. It’s straightforward.
To verify the OTV configuration, use the following commands:
show otv

Overlay Interface Overlay1
 VPN name                 : None
 VPN ID                   : 1
 State                    : UP
 AED Capable              : Yes
 IPv4 control group       : 239.2.3.4
 Mcast data group range(s): 232.1.1.0/23
 Join interface(s)        : TenGigabitEthernet0/1/0
 Join IPv4 address        : 1.1.1.1
 Tunnel interface(s)      : Tunnel0
 Encapsulation format     : GRE/IPv4
 Site Bridge-Domain       : 11
 Capability               : Multicast-reachable
 Is Adjacency Server      : No
 Adj Server Configured    : No
 Prim/Sec Adj Svr(s)      : None
show otv route

Codes: BD - Bridge-Domain, AD - Admin-Distance, SI - Service Instance,
       * - Backup Route

OTV Unicast MAC Routing Table for Overlay1

 Inst VLAN BD     MAC Address    AD    Owner  Next Hops(s)
----------------------------------------------------------
 0    20   20     0050.56aa.217a 40    BD Eng Te0/2/0:SI99
 0    20   20     0050.56aa.38f1 40    BD Eng Te0/2/0:SI99
 0    20   20     0050.56aa.4fcb 40    BD Eng Te0/2/0:SI99
 0    20   20     0050.56ca.65df 40    BD Eng Te0/2/0:SI99
 0    20   20     0050.5b39.1533 40    BD Eng Te0/2/0:SI99
 0    20   20     00a0.8e42.1d49 40    BD Eng Te0/2/0:SI99
 0    20   20     c08c.6428.4b94 50    ISIS   hostname
 0    10   10     0010.daff.601b 40    BD Eng Te0/2/0:SI406
show otv summary

OTV Configuration Information, Site Bridge-Domain: 11

Overlay  VPN Name  Control Group  Data Group(s)  Join Interface  State
1        None      239.2.3.4      232.1.1.0/23   Te0/1/0         UP

Total Overlay(s): 1
That’s it!
The Te0/2/0 connection to your DC LAN switch: is the switchport configured as a typical dot1q trunk with the allowed VLANs? Also, would we need a separate L3 link from the ASR to the DC switch for routed traffic that’s not part of the OTV VLANs? For example, let’s say VLAN 500 is on the LAN switch at DC B with a VLAN 500 SVI. When a packet from VLAN 500 hits the SVI to get routed to the rest of the network at DC A (the main datacenter), how does it get to the ASR so it can then be routed across the dark fiber link back to the other DC?
The switchport connected to Te0/2/0 is indeed a regular dot1q trunk with an allowed list.
In the config I used for this blog, the ASR routers are connected back-to-back. All you need to connect the ASR routers is a layer 3 connection. You can use (dark) fiber, MPLS, or some ISP layer 2 or 3 WAN, as long as you have a connection with an MTU of at least 1542.
If you use the same DC LAN switch for the WAN connection, make sure you don’t mix up internal and WAN routing.
Hey Rob,
I think Dan might have been asking about the link between the ASR and the DC switch where the L3 termination point for all the local VLANs is, not necessarily between ASRs. So I actually have the same question: is Te0/2/0 configured with an IP on one of the SIs, or did you use a separate layer 3 link between your ASR and datacenter switch for routed traffic?
Great article though.
Hi Ryan,
Great question, you had me thinking for a few minutes!
From a configuration standpoint, it’s simply not possible to configure IP addresses on Service Instances (SIs). The ASR/OTV configuration only extends your layer 2 domain between both DCs. If you need one or more routed connections, I’d suggest creating a new VLAN on the switches in both DCs, extending that VLAN over OTV, and using it as a transit network for your routed connections.
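That transit-VLAN idea could look something like this sketch (VLAN 900, the service instance number and all addressing are invented for illustration):

```
! DC1 core switch
vlan 900
 name OTV-TRANSIT
interface Vlan900
 ip address 10.0.90.1 255.255.255.252

! DC2 core switch
vlan 900
 name OTV-TRANSIT
interface Vlan900
 ip address 10.0.90.2 255.255.255.252

! On both ASRs: add VLAN 900 to the overlay and the internal interface
service instance 90 ethernet
 encapsulation dot1q 900
 bridge-domain 900
```

The routing protocol adjacency (OSPF, BGP, etc.) then runs between the two SVIs over the stretched VLAN.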
Great blog Rob.
I’m currently working on an OTV implementation between two datacenters running over two dark fiber VPL links. I am a bit confused about the otv site bridge-domain and would really appreciate it if you could clear up my confusion. In your example above, are “otv site bridge-domain 11” and “otv site bridge-domain 12” active VLANs on both sides of the datacenters? If not, can I just use any VLANs that I don’t want to extend between datacenters? I know you mentioned not to extend these VLANs.
Hi Jepoy,
There is only one site VLAN (bridge-domain) per datacenter. You can use the same VLAN ID for both datacenters, but you really should not extend this VLAN. To make it more future-proof (maybe someone will, by mistake, extend that specific VLAN in the future), I’d recommend using different VLAN IDs for each datacenter. The VLAN ID itself is your choice; it can be any regular VLAN ID.
Thanks Rob. So in essence I can just create VLAN 500 on DC1 and VLAN 501 on DC2, use them only for the purpose of the bridge-domain, and not use these VLANs for anything else. Is that right?
That is correct!
Thanks for clarifying that. My other question if I may.
1) A client PC is talking to a SQL server that’s on VLAN 20, and the SQL server is local to DC1. If we vMotion the SQL server to DC2 and VLAN 20 is extended between the DCs via OTV, how will the PC know to route the traffic over the OTV, given that the LAN interface in your example doesn’t have a layer 3 adjacency with the OTV router?
The PC will never have any knowledge of the vMotion: the MAC address of the VM will not change.
After the vMotion, OTV will learn about the MAC address in DC2 (from a broadcast frame sent by the VM), update its OTV route table and advertise that specific MAC address to all other OTV devices.
So in a few simple words: all CAM tables in DC1 will point to the OTV device and OTV will extend the frame to the other datacenter.
Because OTV has to learn and advertise the movement of the MAC address, a vMotion between datacenters can result in a downtime of a few seconds. In the real-world implementations I’ve seen, that is less than 2 seconds in most cases.
Thank you Rob. Really appreciate your input and great blog.
-Jepoy
Great start, thanks, but what puzzles me: how would OTV and the ASRs be set up in a redundant way between 2 DC sites? Two dark fibers, each with an ASR at both ends, is the hardware, but then the multicast domain is split in two? Any suggestions or alternatives? (Or where should I raise the case?)
I haven’t tested that scenario, but in theory it should technically work. However, if one ASR router fails, you only have one dark fiber active, so there is less redundancy in that scenario. A better option would be a routed WAN: get some routers (or switches) and create a routed WAN ring.
I have to extend a LAN switch that only contains VLAN 1/default VLAN. Is it possible for me to configure two interfaces on the ASR, one to connect to the switch with non-default VLANs, such as 500-510 and one to the switch that doesn’t have any VLAN configured?
Thanks
That is possible! Configure the router port for the switch with the 500-510 VLANs as described in this blog. There are two options for the VLAN 1 switch:
Option 1: configure a second router port on the ASR with the following config:
service instance 1 ethernet
 encapsulation dot1q 1
 bridge-domain 1
And configure a trunk with a (dummy) native VLAN on the VLAN 1 switch to connect to the ASR.
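The switch side of that dummy-native-VLAN trunk could look like this sketch (the interface name and dummy VLAN 999 are assumptions):

```
! Switch that only carries VLAN 1
vlan 999
 name DUMMY-NATIVE
interface GigabitEthernet0/1
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport trunk native vlan 999
 switchport trunk allowed vlan 1,999
```

With the dummy native VLAN in place, VLAN 1 frames leave the switch tagged, matching the `encapsulation dot1q 1` service instance on the ASR.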
Option 2:
The ASR port config will be:
service instance 1 ethernet
 encapsulation untagged
 bridge-domain 1
Don’t forget to add this bridge domain to the overlay interface, regardless of which option you choose.
I would recommend option 1, with the dummy native VLAN.
Thanks, I will give that a try.
I have the two ASRs and the two Cisco 3560Gs configured on both ends as you suggested, but I am not able to ping across the two switches.
Any thoughts?
Thanks
Use “show otv” to check whether the overlay is up.
Can you see (at least) both MAC addresses of your hosts in the “show otv route” table?
Also, double-check the trunk configuration of the switches and the OTV routers!
If you can’t get it running, please post the (partial) interface configs of the switches and routers to get a clearer view.
show otv
Overlay Interface Overlay1
VPN name : dp
VPN ID : 1
State : UP
Fwd-capable : No
Fwd-ready : No
AED-Server : No
Backup AED-Server : No
AED Capable : No, overlay DIS not elected
IPv4 control group : 239.2.3.4
Mcast data group range(s): 232.1.1.0/24
Join interface(s) : TenGigabitEthernet0/0/1
Join IPv4 address : 172.31.2.1
Tunnel interface(s) : Tunnel0
Encapsulation format : GRE/IPv4
Site Bridge-Domain : 101
Capability : Multicast-reachable
Is Adjacency Server : No
Adj Server Configured : No
Prim/Sec Adj Svr(s) : None
DPASR#show run int g0/0/4
interface GigabitEthernet0/0/4
no ip address
speed 1000
no negotiation auto
ntp disable
cdp enable
service instance 1 ethernet
description Site VLAN – Not Extended
encapsulation untagged
bridge-domain 101
!
service instance 2 ethernet
encapsulation dot1q 1
bridge-domain 1
DPASR#show run int ov1
interface Overlay1
no ip address
otv control-group 239.2.3.4
otv data-group 232.1.1.0/24
otv join-interface TenGigabitEthernet0/0/1
otv vpn-name dp
no otv suppress arp-nd
service instance 2 ethernet
encapsulation dot1q 1
bridge-domain 1
!
service instance 500 ethernet
encapsulation dot1q 500
bridge-domain 500
!
service instance 501 ethernet
encapsulation dot1q 501
bridge-domain 501
!
service instance 502 ethernet
encapsulation dot1q 502
bridge-domain 502
!
service instance 503 ethernet
encapsulation dot1q 503
bridge-domain 503
!
service instance 504 ethernet
encapsulation dot1q 504
bridge-domain 504
!
service instance 505 ethernet
encapsulation dot1q 505
bridge-domain 505
!
service instance 506 ethernet
encapsulation dot1q 506
bridge-domain 506
!
service instance 507 ethernet
encapsulation dot1q 507
bridge-domain 507
!
service instance 508 ethernet
encapsulation dot1q 508
bridge-domain 508
!
service instance 509 ethernet
encapsulation dot1q 509
bridge-domain 509
DPASR#show otv vlan
Key: SI – Service Instance, NA – Non AED, NFC – Not Forward Capable.
Overlay 1 VLAN Configuration Information
Inst VLAN BD Auth ED State Site If(s)
0 1 1 – inactive Gi0/0/4:SI2
0 500 500 – inactive(NFC) Gi0/0/5:SI500
0 501 501 – inactive(NFC) Gi0/0/5:SI501
0 502 502 – inactive(NFC) Gi0/0/5:SI502
0 503 503 – inactive(NFC) Gi0/0/5:SI503
0 504 504 – inactive(NFC) Gi0/0/5:SI504
0 505 505 – inactive(NFC) Gi0/0/5:SI505
0 506 506 – inactive(NFC) Gi0/0/5:SI506
0 507 507 – inactive(NFC) Gi0/0/5:SI507
0 508 508 – inactive(NFC) Gi0/0/5:SI508
0 509 509 – inactive(NFC) Gi0/0/5:SI509
Total VLAN(s): 11
Note the VLAN is shown as inactive.
Switch:
show run int g0/1
switchport trunk encapsulation dot1q
switchport mode trunk
switchport trunk allowed vlan 1
switchport trunk native vlan 1
Thank you.
I can suggest three things about the configuration:
1)
AED Capable : No, overlay DIS not elected
This usually means there are multicast errors in the WAN. Are the routers connected back-to-back? Make sure multicast is configured in the WAN and on the join interface!
2) The site VLAN has to be tagged! It cannot be untagged on the inside interface (also, make sure this VLAN is not in the overlay).
3) On interface g0/0/4 service instance 2 you’ve configured VLAN 1 as tagged, but on the switch g0/1 VLAN 1 is untagged (native).
1. AED is now shown as Capable
DPASR#show otv
Overlay Interface Overlay1
VPN name : dp
VPN ID : 1
State : UP
Fwd-capable : Yes
Fwd-ready : Yes
AED-Server : Yes
Backup AED-Server : No
AED Capable : Yes
IPv4 control group : 239.2.3.4
Mcast data group range(s): 232.1.1.0/24
Join interface(s) : TenGigabitEthernet0/0/1
Join IPv4 address : 172.31.2.1
Tunnel interface(s) : Tunnel0
Encapsulation format : GRE/IPv4
Site Bridge-Domain : 101
Capability : Multicast-reachable
Is Adjacency Server : No
Adj Server Configured : No
Prim/Sec Adj Svr(s) : None
2. I have enabled tagging for the site VLAN:
service instance 1 ethernet
description Site VLAN – Not Extended
encapsulation dot1q 101
bridge-domain 101
3. I have removed “switchport trunk native vlan 1” so VLAN 1 will be passed as tagged.
When I do “show otv vlan”, I see VLAN 1 as active.
I am still not able to ping across the two switches.
Thanks
Configure a dummy native VLAN on the switchport. Cisco uses VLAN 1 as the default native VLAN. Configure something like:
switchport trunk native vlan 99
And check whether you can see MAC addresses in “show otv route”.
That was it. After creating a new VLAN on the 3560G and making it the native VLAN, I am now able to ping across the OTV.
Thanks
Another thing I am concerned with now is the HSRP groups on the routers that we are extending across this OTV. Does OTV do anything with HSRP, or will it pass all the HSRP traffic?
Thanks
OTV filters all HSRP/VRRP/GLBP traffic. This is needed to achieve an active/active datacenter: routing is always done in the local datacenter.
Now, when I do “show otv”, it is showing:
Overlay Interface Overlay1
VPN name : dp
VPN ID : 1
State : DOWN(Cleanup in Progress)
Reason : Admin Down
Fwd-capable : No
Fwd-ready : No
AED-Server : No
Backup AED-Server : No
AED Capable : No, overlay DIS not elected
IPv4 control group : 239.2.3.4
Mcast data group range(s): 232.1.1.0/24
Join interface(s) : TenGigabitEthernet0/0/1
Join IPv4 address : 172.31.2.1
Tunnel interface(s) : Tunnel0
Encapsulation format : GRE/IPv4
Site Bridge-Domain : 101
Capability : Multicast-reachable
Is Adjacency Server : No
Adj Server Configured : No
Prim/Sec Adj Svr(s) : None
What does DOWN(Cleanup in progress) mean? It has been like that since yesterday.
Thanks
I can only see “Reason : Admin Down”. Did you shut down the Overlay1 interface?
On an ASR1001, FHRP/HSRP filtering does not function properly, neither with OTV’s built-in filtering nor with an ACL and MAC filtering. The easiest workaround is to use HSRP authentication (you’re doing this anyway, right?!) with a different MD5 key or password for the pairs at each site.
Hi,
This is a great explanation. Do the gateways for the VLANs reside on the downstream switch at each site? I will be using OTV to extend my VLANs between two DCs across an MPLS network with other sites. If there are no IP addresses on the physical interface of the ASR connecting to the LAN, how do I route to the other sites out of the join interface through the MPLS network? Thanks.
Hi, there is no need to configure any routing on your internal network. The MAC address tables (with a little help from ARP) will take care of the layer 2 “routing” between your datacenters.
What IP address would be the gateway for users sitting in DC1 and DC2 on extended VLAN 10? Do I need to create an SVI for VLAN 10 on the aggregation switches on both sides, or is just creating the VLAN enough so that users in DC2 can communicate with the gateway SVI configured in DC1?
It depends on the protocol you use: you need to create a gateway SVI in each DC if you use HSRP/VRRP/GLBP. If you’re not using one of those protocols, one SVI in one DC will work.
But besides that, I would not recommend one SVI in one DC from an architecture point of view.
Hi
Great Explanation, Best i’ve come across.
Regarding the site VLAN: I have two ASR routers at a site connecting to my internal switches. For both to see each other on the site, will the following config do, assuming I am using, say, VLAN 50 as the site VLAN?
ASR config on both ASRs:
otv site bridge-domain 50
otv site-identifier 0000.0000.0050
interface TenGigabitEthernet0/2/0
no ip address
service instance 50 ethernet
encapsulation dot1q 50
bridge-domain 50
Then on the internal switches, create VLAN 50 and allow it on the trunks to the OTV routers.
Thanks
Hi, Thanks!
Your configuration is correct. Reminder: Do NOT stretch the site VLAN over the overlay to other datacenters!
Also, your site identifier is 50. It could be any number and is not related to the bridge domain (aka the site VLAN), as long as it is unique in your OTV network.
Thanks Rob
Just to clarify: although the site identifier can be a unique number different from the bridge domain, do all edge devices at one site have to have the same site identifier and the same bridge domain? My guess is yes, but just making sure.
Is that also the case with the service instance number? Is that just an instance number and not a VLAN number, although in the example it is the same as the VLAN number?
Thanks
Hi,
An OTV-on-a-stick setup? You really have to use two physical interfaces on the ASR to set up OTV.
The control group is the multicast address used for OTV control traffic over the WAN to the other datacenter(s). All OTV edge devices in the same overlay have to use the same control group IP.
The data group subnet is used to map internal multicast streams to a data group IP so they can be transported over OTV. Make sure this subnet has enough IPs for all your multicast streams.
Thanks, and yes, I’m using two interfaces: one routed for the join interface and an internal interface.
Hi
Another question: the bridge domain (aka site VLAN) for DC1 is 11 in the example. Shouldn’t the bridge domain for all the service instances in DC1 also be 11?
Thanks
The bridge domain is an internal per-VLAN mapping and should be unique for every VLAN or service instance.
Hi Rob
When extending a VLAN between two DCs, for example VLAN 10: is a VLAN interface required at each DC on the local core switch, and how is this addressed in terms of IP addresses? Do they share a default gateway at each site somehow?
Thanks
That all depends on your design. OTV gives you the option to do active/active routing (with FHRP filtering; check this post: http://www.infraworld.eu/otv-fhrp-filtering-asr-router/).
If you use FHRP filtering, your HSRP/VRRP IP is the same in both datacenters to achieve an active/active state, but the SVI IPs have to be unique in the network.
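As an illustration of that split between shared virtual IP and unique SVI IPs (all addresses and the HSRP group number here are invented), an active/active HSRP setup for stretched VLAN 10 could look like:

```
! DC1 core switch
interface Vlan10
 ip address 10.0.10.2 255.255.255.0
 standby 10 ip 10.0.10.1

! DC2 core switch - same virtual IP, unique SVI IP
interface Vlan10
 ip address 10.0.10.3 255.255.255.0
 standby 10 ip 10.0.10.1
```

With FHRP filtering in place, hosts in each DC resolve 10.0.10.1 to their local HSRP gateway, so traffic never hairpins across the DCI for first-hop routing.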
Hi,
OTV requires a multicast-enabled transport network, and I have some queries regarding this.
1) Does this restrict me from connecting datacenters over the Internet?
2) What kind of connectivity should I get from a telecom service provider for connecting the datacenters: MPLS L2 VPN, MPLS L3 VPN, VPLS, dark fiber, or a point-to-point link?
3) Do I have to ask the telecom service provider to enable multicast on the Internet or VPN connection?
Hi Ranjeet,
It is not possible to implement OTV over the Internet for various reasons. The most important ones are multicast and the higher MTU of at least 1542.
All your suggestions work for OTV. As long as you have IP reachability with an MTU of 1542, you’re good to go. Multicast is not a hard requirement: if your service provider can’t enable it, you can use OTV unicast mode (which uses a central “adjacency server” to help with the MAC advertisements). The higher MTU is still a requirement.
In my opinion: if multicast is available, use it.
See my other blog post about multicast WAN for some more details: http://www.infraworld.eu/configure-multicast-wan-otv/
Hi Rob,
Thanks for your reply , Appreciate it.
What I take from your reply is: 1) you need a PE-CE WAN link supporting an MTU of 1542, and 2) multicast is required only for the non-unicast mode of OTV; otherwise it is not required.
I read that OTV is an “Ethernet over MPLS over GRE” tunnel. How does multicast fit into this with the control and data groups? Why do we use multicast, since it complicates the control/data plane?
Hi Ranjeet,
The control group multicast address is used to advertise MAC addresses to the other OTV edge devices. If you use unicast mode, this is not needed.
The data group multicast range is used to map internal multicast streams to a data group IP, so those streams are available in the other datacenters.
OTV is indeed an EoMPLSoGRE protocol deep down, and multicast is used to find out where the GRE tunnels between the datacenters are needed. But this is out of your control; OTV takes care of all of it. The configuration in this post is all you need to get it running in multicast mode (which is recommended).
OTV is possible over the Internet and does not strictly require an MTU of 1542. On the ASR platform only, you can allow fragmentation for OTV; the Nexus family does not support this feature. In your topology, you would use the following global command:
otv fragmentation join-interface TenGigabitEthernet0/1/0
Good evening; this configuration does not work. Could you tell me what to do?
This should be a working configuration. Can you give some more details about the things you’ve done?
Good evening; this configuration does not work. Could you tell me what to do:
Hi, try using different service instance numbers.
Rob, do you know of any reference material that lists the OTV capacity of the ASR 1000, specifically the maximum number of VLANs supported? For example, the N7K has a maximum of 1500 OTV VLANs (assuming NX-OS 6.x or later). Is the ASR the same, assuming the requisite IOS?
I checked, but I couldn’t find any Cisco documents related to OTV capacity on the ASR.
The ASR 1000 supports 4000 VLANs per overlay as of IOS XE 3.14, and with a total of 50 OTV overlays, that adds up to a very big number 🙂
Hi Rob,
I am procuring an ASR 1002 for the OTV configuration and our core switches are 3850s. As long as I can trunk the VLANs from the core switches to the ASR edge router, will that be enough to establish OTV?
Hi Nak,
That is correct! You only need a trunk from your core switches to the ASR, which is the OTV edge device. You can use any type of switch as the core, as long as you can connect it with a trunk.
Good Evening.
First of all I want to thank you for the great config example. I need to build OTV in unicast mode. Are there any differences in the design?
Thanks Luca!
The configuration is almost identical, except for the multicast part. To use unicast mode, an adjacency server is required. This is a role you configure on one of your ASRs. See this guide for the required configuration: http://www.cisco.com/c/dam/en/us/td/docs/solutions/Enterprise/Data_Center/DCI/5-0/OTVunicast.pdf
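For reference, a minimal unicast-mode sketch, reusing the join-interface addressing from the example in this post (verify the exact commands against the linked guide for your IOS XE release). The multicast control-group and data-group lines are omitted in unicast mode:

```
! DC1 - acts as the adjacency server
interface Overlay1
 no ip address
 otv join-interface TenGigabitEthernet0/1/0
 otv adjacency-server unicast-only

! DC2 - points at the adjacency server in DC1
interface Overlay1
 no ip address
 otv join-interface TenGigabitEthernet0/1/0
 otv use-adjacency-server 1.1.1.1 unicast-only
```

For redundancy, a secondary adjacency server can be configured and listed after the primary in the `otv use-adjacency-server` command.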
Many thanks Rob!
Keep up the good work
Just a final question: I need to use MPLS to connect the two DCs. I use BGP between the PE and the ASR 1000, and I connect a PC to two Nexus 5Ks. Do I need to use IS-IS in this deployment?
Thanks in advance.
BGP between PE and ASR is fine, as long as the WAN network supports the higher MTU. I haven’t used IS-IS over OTV yet, but I can confirm that OSPF and BGP work perfectly. I don’t see any reason why IS-IS should not work over OTV.
Thank you.
Hi Rob,
Great blog!
Is it possible to aggregate two physical interfaces into a port-channel as the internal interface?
For example:
An ASR 1001-HX with two ports aggregated as the join interface (Port-channel1, L3) and two ports aggregated as the internal interface, with the service instances on Port-channel2.
Thanks and Regards
Hi,
You can indeed use a port channel as the internal interface. Only one internal interface is supported, so make sure you stay at one port channel.
One small remark: I doubt you need a port channel bandwidth-wise. I can see use cases for a port channel as a redundancy solution, but I still advise two OTV edge devices per site, which gives you about the same level of redundancy and is even better from an OTV perspective.
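If you do go the port-channel route, the internal interface could be sketched as follows (interface numbers, the channel-group number and LACP mode are assumptions for illustration):

```
! LACP port channel as the internal (LAN-facing) interface
interface GigabitEthernet0/0/2
 channel-group 2 mode active
interface GigabitEthernet0/0/3
 channel-group 2 mode active

interface Port-channel2
 no ip address
 service instance 10 ethernet
  encapsulation dot1q 10
  bridge-domain 10
```

The service instances move from the physical interface to the port channel; the overlay configuration itself is unchanged.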