MAC addresses in a VSS cluster

As you might know, creating a VSS on Cisco Catalyst 4500-X switches is pretty easy, and there are many guides describing how to do it. I think THIS guide is one of the best.
However, there is one additional point that is not mentioned in that blog (or others) if you are planning to use multiple Catalyst 4500-X VSS clusters: the switch MAC address.
By default, all MAC addresses used by a Catalyst 4500-X VSS cluster are generated automatically and are based on the VSS domain ID. But what does this mean?
If you plan to run multiple VSS clusters with the same domain ID in the same network and in the same VLAN(s), you’ll end up with duplicate MAC addresses. I’m sure I don’t have to tell you that this is something you don’t want: it breaks a lot of things in your network.
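As a minimal sketch (the syntax is from memory, so treat it as an assumption and verify it against your IOS XE release): give every VSS pair its own virtual switch domain ID, or pin the router MAC address manually, so the auto-generated MAC addresses can never collide.

  ! VSS cluster A: its own domain ID, MAC addresses derived from it
  switch virtual domain 100
   switch 1 priority 110
   switch 2 priority 100

  ! VSS cluster B: a different domain ID, so different auto-generated MACs
  switch virtual domain 101
   switch 1 priority 110
   switch 2 priority 100
   ! or pin the MAC explicitly instead of relying on the domain ID
   ! (example value only):
   ! mac-address 0011.2233.4455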
Read more

Cisco Nexus 9000 update 2016

This blog post is about recent updates to the Cisco Nexus 9000 series datacenter switches and describes my view on the switches and the technologies.
The Nexus 9000 series is currently Cisco’s flagship for datacenter networking, today and for the foreseeable future. Mounting the Nexus 9000 switches should be the last manual physical and configuration work you do in the (future?) datacenter. All following tasks (i.e. configuring and provisioning the switches) should be done by an automation tool. From today on, we have to move from our traditional networking tools to automation and orchestration tools; nobody wants, or has time, to configure all these switches by hand like in the old days.
The Nexus 9000 series switches are ready for this, with many on-box features such as automation with PoAP, REST calls through NX-API, and a Unix-like way of management, all meant to program and configure the network / fabric.
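As a small illustration of how little is needed on the switch side before an external tool can take over (a sketch, and the syntax may vary slightly per NX-OS release): NX-API just has to be enabled, after which configuration and show commands can be sent as REST calls over HTTP(S).

  ! Enable NX-API on the Nexus 9000 and serve it over HTTPS
  configure terminal
   feature nxapi
   nxapi https port 443
  end
  ! Verify with: show nxapi

Depending on the release, the NX-API Developer Sandbox on the management address is then a handy place to convert CLI commands into REST payloads for your automation tool.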
Read more

Cisco Tetration Analytics introduction

Cisco introduced a new datacenter product yesterday (June 15, 2016): Tetration Analytics.

Cisco Tetration Analytics: monitoring everything, analyzing in real time, actionable insights.

Tetration Analytics is a solution for monitoring, analyzing, and replaying datacenter traffic. If you were hit by an attack a few weeks ago and a fix is available now, the traffic of that attack can be replayed to verify that the fix actually works (awesome!). A few slides to introduce the product:
Read more

Cisco ACI Naming convention thoughts

As you might know, Cisco ACI is an object-based product. Every object you create has to be given a unique name so it can be identified later. Because of the simple fact that you cannot rename objects (it’s not implemented yet), it’s highly recommended to think of a good naming convention before you create the first one.
If you really want to rename an object you created earlier, you have to remove and recreate it, and then link it again to all other related objects.
To give you a head start on the naming convention, you have to think about the following objects:

  • Fabric naming
  • SPINE / LEAF switch naming
  • APIC naming
  • VLAN-pools
  • Domains
  • Attachable Access Entity Profile
  • Link Level Policy
  • Interface policy group
  • Interface Selector
  • Switch Selector
  • Switch Profile

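To make this concrete, a purely hypothetical convention (the names below are illustrative examples, not the convention from this post) could look like this:

  Leaf switches            LEAF-101, LEAF-102
  Spine switches           SPINE-201, SPINE-202
  APICs                    APIC-1, APIC-2, APIC-3
  VLAN pool                VLANPOOL-PROD-STATIC
  Physical domain          PHYSDOM-PROD
  AAEP                     AAEP-PROD
  Link level policy        LLP-10G
  Interface policy group   IPG-VPC-ESX-HOST01
  Interface selector       INTSEL-ESX-HOST01
  Switch selector/profile  SWPROF-LEAF-101-102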
Creating a naming convention is network specific, but try to take the following tips into consideration:
Read more

Cisco ACI & Microsoft Hyper-V & L4 – L7 integration

There are options to integrate L4 – L7 devices, like firewalls or load balancers (Cisco ASA, F5, Citrix NetScaler, etc.), into Cisco ACI. These integrations can be done in managed mode, with a device package, or in unmanaged mode. Both modes are available if you are using Cisco ACI with VMware vCenter integration.
When you are using Cisco ACI with Microsoft Hyper-V, you cannot integrate any L4 – L7 device yet (Q1 2016). The options to integrate these devices are not available if you select an SCVMM domain.
More to come..
My thoughts
Cisco ACI is a great product, which I’ve already implemented at several customers. I’ve seen the product grow over the last year from something “not production ready” to a stable product that can be used in production environments. But like all new products, there are still some limitations that can be a struggle during implementations. The VMware integration into ACI is done and complete; the Hyper-V integration is still pretty new and some features are missing. I’m sure the Hyper-V integration will be more complete in the next major ACI release, but at this point in time you need to know about the limitations that are still around.

Cisco ACI Initial APIC configuration

There are a lot of blog posts around about the Cisco ACI technology and design tips and tricks. If you want to know more about ACI, please read the Cisco ACI Fundamentals first.
This post describes the first steps to create and install an ACI fabric. Our example design looks like this:
ACI network layout
Our network exists in a single datacenter with two spine switches, two leaf switches, and two APIC controllers. The spine and leaf switches are connected with 40 Gb/s links; the APIC controllers are multihomed with 1 Gb/s links.
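When the first APIC boots, it walks you through an initial setup dialog on the console. The prompts below are paraphrased from memory and the values are only examples for this design, so expect small differences per APIC release:

  Fabric name [ACI Fabric1]: DC1-FABRIC
  Number of active controllers in the fabric [3]: 2
  Controller ID [1]: 1
  Controller name [apic1]: APIC-1
  Address pool for TEP addresses [10.0.0.0/16]: 10.0.0.0/16
  VLAN ID for infra network: 3967
  Address pool for BD multicast addresses (GIPO) [225.0.0.0/15]: 225.0.0.0/15
  Out-of-band management IP address: 192.168.1.11/24
  Out-of-band management gateway: 192.168.1.254
  Admin password: ********

Repeat the dialog on the second APIC with controller ID 2; the fabric name and TEP address pool have to match on every controller.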
Read more

Configure your multicast WAN for OTV

It is easy to find design and configuration guides about OTV implementations on Nexus 7000 switches and ASR and CSR routers, but it is much harder to find information about the requirements for your WAN.
Please read my previous blog posts about OTV here, here, here and here; they cover the OTV device configurations. For now, let’s start with the DCI WAN for OTV.
First of all, there are two OTV deployment options:

  • Unicast mode
  • Multicast mode

The WAN requirements in unicast mode are simple: deliver unicast connectivity between the join interfaces of all OTV edge devices. This is a simple, straightforward configuration that I will not cover in this blog post.
The multicast deployment is a bit harder to configure and the requirements are harder to find. This blog post covers the required WAN configuration for a multicast deployment. In this particular scenario we use dark fiber / DWDM connections as DCI, to get a clearer understanding of the requirements and configuration.
First, a drawing to give an overview of this deployment scenario:

OTV WAN multicast layout

This blog post will show you the easiest way to get your OTV multicast deployment up and running. There are more fine-tuning options available, but those will not be covered in this post.
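As a rough idea of what the WAN/core devices need (standard IOS multicast commands, shown as a sketch of one possible setup rather than the exact configuration from this post): multicast routing, PIM on the DCI links, an RP for the ASM control group, SSM for the data groups, and IGMPv3 towards the OTV join interfaces.

  ip multicast-routing
  ! (some IOS XE platforms use: ip multicast-routing distributed)
  ! ASM RP for the OTV control group, SSM (232/8) for the data groups
  ip pim rp-address 10.255.255.1
  ip pim ssm default
  !
  interface TenGigabitEthernet0/0/0
   description DCI dark fiber / DWDM towards DC2
   ip pim sparse-mode
  !
  interface TenGigabitEthernet0/0/1
   description Link towards the OTV edge join interface
   ip pim sparse-mode
   ip igmp version 3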
Read more

LISP Mobility with OTV

In previous posts we talked about implementing OTV with ASR routers. OTV is an overlay technology that provides end-to-end layer 2 connectivity over a layer 3 (WAN) network. In most implementations, FHRP (First Hop Redundancy Protocol, e.g. HSRP/VRRP) filtering is needed. These filters keep routing within the datacenter where the traffic originates.
Let’s take another look at the high level design:
OTV Network layout
When FHRP filtering is active, the virtual IP (i.e. the default gateway for clients) is active in both datacenters. Which means: a packet flow from a server in DC1 is routed on the core switch/router in DC1. If you move (vMotion / live migrate) that server to DC2, the packet flow is routed on the switch/router in DC2.
If you think this through, the outgoing datacenter traffic flows are efficient: routing is done on the nearest router. But… incoming traffic from branch offices is still not efficient: the WAN does not know where the VM is hosted, so the packets are routed by the normal routing protocols. This can result in inefficient routing: if the IP range is routed to DC1 on the WAN while the VM is hosted in DC2, the datacenter interconnect (OTV) will be used to get the packets to the VM.
This is where LISP mobility comes in.
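To give a feel for what LISP host mobility looks like on the device that hosts the server gateway (a minimal sketch; the names, addresses and exact syntax are assumptions from memory, so verify against the LISP host mobility configuration guide for your platform):

  router lisp
   locator-set DC1-RLOC
    192.0.2.1 priority 1 weight 100
   exit
   eid-table default instance-id 0
    dynamic-eid ROAMING-SERVERS
     database-mapping 10.10.10.0/24 locator-set DC1-RLOC
     map-notify-group 239.0.0.1
    exit
   exit
   ipv4 itr map-resolver 198.51.100.10
   ipv4 etr map-server 198.51.100.10 key LISP-KEY
   ipv4 itr
   ipv4 etr
  !
  interface Vlan10
   description Server VLAN stretched over OTV
   lisp mobility ROAMING-SERVERS

When a host from 10.10.10.0/24 shows up behind the xTR in the other datacenter, LISP registers it as a /32 EID with the map server, so ingress traffic from the branch offices is delivered straight to the right datacenter instead of crossing the OTV interconnect.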
Read more

OTV FHRP filtering on an ASR router

We configured an OTV DCI in my previous post and it was working as expected and by design. But while testing all the VLANs I discovered a problem with HSRP over OTV, and only for one specific VLAN. The test results:

  • A ping from a host in DC1 in VLAN 10 to the HSRP address shows random drops
  • A ping from a host in DC1 in any other VLAN to the HSRP address works without any problems
  • After shutting down the SVI of VLAN 10 in DC2, a ping from a host in DC1 in VLAN 10 to the HSRP address works without any problems
  • With VLAN 10 still disabled in DC2, a host in DC2 can ping the HSRP address in DC1. This should be impossible because of the FHRP filtering
  • Changing the standby group number (the groups are the same in DC1 and DC2 to keep the same MAC address) partially solved the problem, but some hosts in DC1 ended up with the HSRP MAC of DC2 in their ARP table. This is not what we want.
  • After moving the SVI from a 6500 switch to a 3750 switch in DC1, none of the above problems occur

I still have no idea why this problem only exists for VLAN 10, since all other VLANs work as expected, but I’ve found a good workaround for it in the configuration guide:
http://www.cisco.com/c/en/us/td/docs/ios-xml/ios/wan/command/wan-cr-book/wan-m1.html#wp3953249580
Read more
