Posts Tagged ‘Cisco’

Never thought I would be writing about how to utilize IPv6 in 2017 because of all the excellent material on the Internet; however, I have discovered a few things:

  1. There are still technologies which have horrible support for IPv6 (including new stuff)
  2. There are people still resistant to implementing it
  3. There is material on the Internet which shows up early in Google searches which references deprecated standards

Without any further delay, I am going to outline a few items you should keep in mind when deploying your IPv6 network:

Subnet mask size

In IPv6, barring a few exceptions like point-to-point links, you should always utilize a /64 for each deployed subnet. Why? Well, if you want to use DHCPv6, you’ll find Microsoft’s implementation won’t even allow you to change from a /64, and while a DHCPv6 server on Linux will actually run with a prefix longer than a /64, it will still only hand out /64s. Also, anything longer than a /64 breaks a lot of the address auto-configuration mechanisms in the switch/router, namely those built around EUI-64, and it just doesn’t make sense.
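
To see why the /64 matters for EUI-64, here is a quick worked example (using the 2001:db8::/32 documentation range): SLAAC builds the host portion of the address from the 48-bit MAC by splitting it in half, inserting fffe, and flipping the universal/local bit, which consumes exactly 64 bits:

MAC address:       00:1b:21:aa:bb:cc
EUI-64 host bits:  021b:21ff:feaa:bbcc   (fffe inserted, U/L bit flipped)
Prefix:            2001:db8:0:1::/64
SLAAC address:     2001:db8:0:1:21b:21ff:feaa:bbcc

A prefix longer than /64 leaves no room for those 64 host bits, which is exactly why the auto-configuration machinery breaks.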

What subnet size should I get from ISP/provider/administrator?

If you’re not going to “own” your IPv6 network, that is, you’re not getting an assignment with an ASN to advertise, then you’re either obtaining a public block of addresses from a provider, or you’re internal and need your network administrators to assign you a prefix which you can further subnet yourself. Either way, there is a standard most follow when assigning prefixes to “customers”.

An ISP, for instance, may have numerous /32’s (or maybe a bit larger) assigned to them to distribute to customers. Let’s say you work for “company” as part of its internal IT organization, and “company” buys service from “ISP”. Your company would request an IPv6 block assignment from the ISP, and out of one of the ISP’s /32’s you’ll get, let’s say, a /48 just for the hell of it. This is how your company can break that /48 down internally for assignment:

  • 65,536 = /64’s
  • 32,768 = /63’s
  • 16,384 = /62’s
  • 8192 = /61’s
  • 4096 = /60’s
  • 2048 = /59’s
  • 1024 = /58’s
  • 512 = /57’s
  • 256 = /56’s
  • 128 = /55’s
  • 64 = /54’s
  • 32 = /53’s
  • 16 = /52’s
  • 8 = /51’s
  • 4 = /50’s
  • 2 = /49’s

How your company doles these out is up to them. However, almost no one is going to just directly carve out /64’s from the assigned /48 block; that is stupid. Generally, you’re looking to summarize and aggregate where possible throughout your network, so we’ll assume you’re in location “A” at “company”.

We’ll go ahead and assume the company has decided each location is assigned a /58, which gives each location a total of 64 available /64’s to use. As you can see, this is no different than standard IPv4 in the sense of ensuring proper aggregation, except you no longer have to worry about the size of each VLAN’s subnet mask: you’ll always use a /64.
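
As a concrete sketch, using the 2001:db8::/32 documentation prefix for illustration, a /48 carved into per-location /58’s looks like this, with each location then numbering its VLANs out of its own 64 /64’s:

Company block:  2001:db8:abcd::/48
Location A:     2001:db8:abcd:0000::/58  (/64’s 2001:db8:abcd:0000::/64 through 2001:db8:abcd:003f::/64)
Location B:     2001:db8:abcd:0040::/58
Location C:     2001:db8:abcd:0080::/58

Notice each /58 boundary lands on a multiple of 0x40 in the fourth hextet, which is what keeps every location summarizable as a single route.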

What about private IPv6 address space?

If you do not want a Globally Unique IPv6 address, you can indeed have what is called a “Unique Local IPv6 address” (ULA). There is a standard, RFC4193, describing how to properly generate these prefixes; the algorithm mixes the current time of day with other factors, like a machine identifier, to make the result as unique as possible.

Why does this matter with private address space? Have you ever been involved in a merger/acquisition, or had to join together two offices which use the same private IPv4 subnet range? I need not say any more because this can be a PITA! ULA, when done right, makes a collision like that extremely unlikely; however, there is absolutely nothing stopping you from selecting your own, basic, prefix.

IPv6 ULA uses the FC00::/7 prefix, divided into two groups:

  1. fc00::/8 – The idea for this prefix is that it would be administered by some central authority, but no one can agree on that, so just forget about it
  2. fd00::/8 – Defined for locally generated /48 prefixes only: the 40 bits after the fd prefix are filled with a randomly generated Global ID, according to the algorithm in RFC4193

You will want to use option 2, and you can use online generation tools like those from SixXS or a tool from another resource; either way, make sure it generates a proper /48 prefix for you and is, to some degree, RFC4193 compliant.
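
If you’d rather not trust a random web page, here is a minimal sketch of the RFC4193 Global ID algorithm in Python. The assumptions are labeled in the comments: it uses the host’s MAC via uuid.getnode() in place of a proper EUI-64 (skipping the U/L bit handling) and a simple NTP-style timestamp, so treat it as illustrative rather than a reference implementation:

import hashlib
import time
import uuid

def generate_ula_prefix():
    # 1. Current time as a 64-bit NTP-style timestamp (seconds + fraction)
    now = time.time()
    ntp_time = (int(now) << 32) | int((now % 1) * 2**32)
    # 2. An EUI-64 derived from the local MAC (fffe inserted in the middle;
    #    assumption: U/L bit handling is skipped for brevity)
    mac = uuid.getnode()
    eui64 = ((mac >> 24) << 40) | (0xFFFE << 24) | (mac & 0xFFFFFF)
    # 3. SHA-1 over the concatenation; keep the least significant 40 bits
    digest = hashlib.sha1(ntp_time.to_bytes(8, "big") +
                          eui64.to_bytes(8, "big")).digest()
    global_id = int.from_bytes(digest[-5:], "big")
    # 4. Prepend fd00::/8 to the 40-bit Global ID to form the /48
    prefix = (0xFD << 40) | global_id
    return "{:x}:{:x}:{:x}::/48".format(
        prefix >> 32, (prefix >> 16) & 0xFFFF, prefix & 0xFFFF)

print(generate_ula_prefix())   # e.g. fd3a:9c1e:77b2::/48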

Finally, if you’re inside a larger organization, your company’s IT department likely has this /48 already and has very likely assigned you a prefix according to the same standards by which they dole out their Globally Unique IPv6 addresses; thus, no additional explanation needed.

Get your DNS infrastructure setup for IPv6 AAAA and PTR-record resolution

I won’t delve into this much more other than to say you absolutely must make sure your DNS infrastructure is set up for IPv6 AAAA-record and IPv6 PTR-record resolution or you WILL have issues!
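
For example, in a BIND-style zone file (hypothetical names, documentation addresses), a dual-stack host needs entries like these, with the PTR record living under ip6.arpa and spelling the address out in reversed nibbles:

host1.example.com.  IN  A     192.0.2.10
host1.example.com.  IN  AAAA  2001:db8:abcd:1::10

; reverse entry for 2001:db8:abcd:1::10
0.1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.1.0.0.0.d.c.b.a.8.b.d.0.1.0.0.2.ip6.arpa. IN PTR host1.example.com.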

One area to ponder is which hostnames will resolve when you’re in a dual-stack environment. Do you want the same hostname returned for both an A-record and a AAAA-record? Well, some say no, some say yes. Me? I say you should discuss this with your vendor to ensure their solution doesn’t have a problem with it, especially in a dual-stack environment. I was told, by co-workers who know more about VMware vCenter than I do right now, that this is a problem there and the returned hostnames must be different in dual-stack environments.

Always research and question IPv6 support on your devices

This goes for hardware and software vendors: many have made claims their stuff works with IPv6; however, what testing was done, if any, isn’t known, and there are a variety of scenarios to consider. For instance:

  • Does it support native IPv6 from installation-to-operation?
  • Does it support dual-stack, from installation-to-operation?
  • How does it handle DNS requests in dual stack?
    • Does the system start with IPv6 AAAA requests and then fail over to IPv4 A-record requests?
    • If so, what is the timeout if a AAAA record is not available and it must try for an IPv4 A-record?
    • Is the order of DNS resolution preference configurable? (Can you choose to have IPv4 A-records first?)
  • What forms of address configuration are available for IPv6? (SLAAC, static, DHCPv6?)
  • What IPv6 address types are supported? (Globally Unique and/or ULA?)
  • Are there specific “sections” of configuration which cannot support IPv6?
    • For instance, in Cisco NX-OS, you cannot reference an IPv6 address for use on a vPC peer keep-alive link.

More questions will come to mind, but these are from experience, and I can promise you there are a lot of reasons why most IPv6 implementations in the enterprise, and the data center, fail. Question all vendors!

This is it for now; I hope this clears up some stuff for those of you out there thinking about your IPv6 implementation.


DNSMASQ is both a DNS and DHCP server that is quick and efficient to run on Linux systems and is likely already running on your Linux box. If you’re in need of a quick DHCP server to serve multiple DHCP scopes for the different subnets in your VLANs (we all know the best practice is subnet == VLAN == broadcast domain), then DNSMASQ is your go-to guy, and I prefer it over the ISC DHCPD server. This quick tutorial will go over the basics of getting it set up and running and assumes you’re not going to utilize the DNS service.

Create a directory for your DHCP leases file:

sudo mkdir /opt/dnsmasq

Setup dnsmasq.conf:

#
#Disable the DNS server
#
port=0
#
#Setup the server to be your authoritative DHCP server
#
dhcp-authoritative
#
#Set the DHCP server to hand addresses sequentially
#
dhcp-sequential-ip
#
#Enable more detailed logging for DHCP
#
log-dhcp
#
#Set your DHCP leases file location
#
dhcp-leasefile=/opt/dnsmasq/dnsmasq.leases
#
#Create different dhcp scopes for each of the three simulated subnets here, using tags for ID
#Format is: dhcp-range=<your_tag_here>,<start_of_scope>,<end_of_scope>,<subnet_mask>,<lease_time>
#
dhcp-range=subnet0,10.0.0.5,10.0.0.250,255.255.255.0,8h
dhcp-range=subnet1,10.0.1.5,10.0.1.250,255.255.255.0,8h
dhcp-range=subnet2,10.0.2.5,10.0.2.250,255.255.255.0,8h
#
#Setup different options for each of the unique subnets, since default gateways will be different
#The format for this is: dhcp-option=<your_tag_here>,<option>,<option_value> - option 3 is the router
#
dhcp-option=subnet0,3,10.0.0.1
dhcp-option=subnet1,3,10.0.1.1
dhcp-option=subnet2,3,10.0.2.1

Once this is complete, enable your DHCP service to start automatically. You should also check your system’s firewall/IPTABLES service(s) to ensure you have created rules allowing UDP traffic over ports 67 and 68, or you can just flush your IPTABLES and/or disable your firewall; your choice, this isn’t a security blog, so I’ll leave the decision to you, the person who knows their environment best.
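
On a systemd-based distro, assuming the package and service are both named dnsmasq, that boils down to something like:

sudo systemctl enable --now dnsmasq
sudo iptables -I INPUT -p udp --dport 67:68 -j ACCEPT
sudo tail -f /opt/dnsmasq/dnsmasq.leases

The tail on the leases file is a quick way to watch the scopes hand out addresses as clients come online.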


If you’re looking to use command line variables for scripting, you have some predefined variables in the NX-OS environment, and you can also create your own. For now, I’ll just show you how to use the most common one: the switch’s hostname. In some environments you’ll have to save the output of a show tech file and later upload it via SCP. However, if you’re doing this on 2 or more switches, you’ll need unique file names to make your life easier. Instead of going to each one, you can just use the variable SWITCHNAME in the file name. So, if you’re using a script or something like cluster-ssh, this makes your job easier.


sh tech all > bootflash:///shtech-$(SWITCHNAME)
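
You can also define your own session variable and combine it with SWITCHNAME. A sketch using the NX-OS “cli var” syntax, with SITE as a made-up variable name:

cli var name SITE dc01
sh tech all > bootflash:///$(SITE)-shtech-$(SWITCHNAME)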


There has been some slight confusion and ambiguity around the “single-connection” configuration statement provided by Cisco switches and routers, including SAN MDS switches. As of this writing, Cisco Nexus 9000 NX-OS switches on 7.0.3.I5.1 code do not support single-connection in their TACACS host configuration; however, certain MDS switches do. In either case, if you found your way here looking for the answer, let me elaborate.

The purpose of single-connection is to multiplex all of your TACACS authentication requests over a single TCP connection from the switch to the TACACS server. Using tac_plus, an open source TACACS server, you can absolutely set the single-connection option from, say, a Cisco 9706 MDS switch; however, upon packet analysis of subsequent TACACS authentication requests you may discover the single-connection bit is set to 0.
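
For reference, on a platform which supports it, enabling this is a one-liner against the TACACS host entry. A sketch in MDS/NX-OS style syntax (the IP and key are placeholders, and the exact syntax varies by platform and release):

feature tacacs+
tacacs-server host 192.0.2.10 key SOMEKEY
tacacs-server host 192.0.2.10 single-connection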

Refer to draft-grant-tacacs-02 and scroll to the FLAGS section for an explanation of where you will, and should, see the single-connection bit set in the TACACS flags. Basically, you’ll only ever find the bit set during the initial setup of the connection, where the TACACS server and the client agree on single-connection TCP. Thus, instead of each and every TACACS request coming through as a unique TCP connection (each needing its own socket, a socket being the 4-tuple of SRC IP, DST IP, SRC port, and DST port), the TACACS query and response messages are simply carried over the one long-lived TCP connection.

If your system supports this, it’s worth attempting, as it can save some resources; however, your mileage may vary.


If you have upgraded your Cisco Nexus switches to code level 7.0(3)I2(1) or higher and had flowcontrol enabled on an interface, you’ll likely find you’re not able to do a “no flowcontrol receive on” because the command was deprecated. The current recommendation is to default the switch configuration, but I have a solution you can implement one switch at a time, with a single reload, to fix this issue:

copy run startup-config
!
copy startup-config <tftp: | scp:>
!
sh run | sed 's/flowcontrol receive on//g' >> bootflash:///no-flow-control-startup-config
!
copy bootflash:///no-flow-control-startup-config startup-config
!
reload
! Do not save the running-config to startup-config - just reload one switch at-a-time

See my video on this very specific topology, what I’ve encountered, and the solution I found to work for me:


So, you’ve surely seen some interesting tidbits in the previous section, things you haven’t noticed from other configurations on the Internet. I will outline why these are present in this configuration based on the failure scenario I present below:

Complete and total loss of spine connections on a single leaf switch – First I’ll outline the ONLY reasons why a single leaf switch would lose all of its spine uplinks:

  1. Total and absolute failure of the entire leaf switch
  2. The 40GbE GEM card has failed, but the rest of the switch remains operational
  3. An isolated ASIC failure affecting only the GEM module
  4. Someone falls through a single cable tray in your data center, taking out all the connections you placed in a single tray
  5. Total and complete failure of all 40GbE QSFP+ modules, at the same time
  6. Total loss of power to either the leaf switch or to all spine switches
  7. All three line cards, in three different spine switches, at the same time, suffer the same failure
  8. Someone reloaded the spine switches at the same time
  9. Someone made a configuration change and hosed your environment

OK, now, let’s make one thing clear: NO one, and I mean no one, can prevent any issue which starts with “Someone”; you can’t fix stupid. If you lose power to both of your 9396PX power supplies or to the 3+ PSUs in the 9508 spine switches, I think your problem is much larger than you care to believe. Let’s see, we now have just 5 scenarios left.

If your leaf switch just dies, well, you know. Down to four! Yes, a GEM card can fail, I’ve seen it, but this isn’t common and is usually related to an issue which will down the entire switch anyway, but we’ll keep that one in our hat. Failure of all the connected QSFP+ modules at the same time? I’ll call BS on this one; if all of those QSFP+ modules have failed, your switch is on the train towards absolute failure anyway.

Isolated ASIC failure? So uncommon I feel stupid mentioning it. All three line cards in the spine failing at the same time? Yeah, right. So, in all, the only real valid scenario we’re looking to circumvent is a GEM card failure which doesn’t also take the whole switch with it; however, please note, I am only providing this as a proof of concept and I don’t think anyone should allow their environment to operate in a degraded state. If your environment’s operating status is important to you, consider a different choice of leaf switch for greater redundancy, a cold or warm backup switch, or at least 24x7x4 Cisco Smartnet.

When you have a leaf switch suffering from a failure of all its spine uplinks, your best course of action, on a vPC enabled VTEP, is to down the vPC itself on the single leaf switch experiencing the failure. This is where the tracking objects against the IP routes, and the tracking list which groups them for use by the event manager, come into play. Once all the links have gone down and the BGP peer host addresses have been removed from the routing table (the tracking list uses a boolean AND), the event manager applet named “spine-down” fires and shuts down the vPC domain, loopback0, and the NVE interface, respectively.

When all the links return to operation, there is a 12 second delay, configured for our environment to allow the BGP peers to reach the established state, and then the next event manager applet, named “spine-up”, fires, basically just “un-shutting” the interfaces in the exact same order. The NVE source-interface hold-down-timer then brings the NVE interface UP while keeping the loopback0 interface down long enough to ensure EVPN updates have been received and the vPC port-channels have come to full UP/UP status. If loopback0 and the port-channels come up too far ahead of the NVE interface, we’ll blackhole traffic from the hosts towards the fabric. If the NVE and loopback0 interfaces come up too far ahead of the port-channels, you’ll blackhole traffic in the network-to-access direction; thus, timing is critical, will vary per environment, and testing is required.

A lot of stuff, right? This is all done to prevent the source interface of the NVE VTEP device coming up before the port-channels towards end hosts come up, to prevent the VTEP from advertising itself into the EVPN database and black holing INBOUND traffic.

You might be thinking: Why not just create a L3 link and form an OSPF adjacency between the two switches to allow the failed switch to continue to receive EVPN updates and prevent blackholing? Well, here are my reasons:

  1. Switchport density and cost per port – If it costs you $30,000 for a single 48-port 10GbE switch, not including Smartnet or professional services, you’re over $600/port, and you and I both know you’re not going to use just ONE link in the Underlay, you’ll use at least two. Really expensive fix.
  2. Suboptimal routing – Let’s be real here, your traffic will now take an additional hop because your switch is on the way out
  3. Confusing information in the EVPN database for next-hop reachability – Because the switch with the failed spine uplinks still has a path and is still receiving EVPN updates, it will still show up as a route-distinguisher in the database, creating confusion
  4. It doesn’t serve appropriate justice to a compromised switch – Come on, the switch has failed; while not completely, it is probably toast and should be downed to trigger immediate resolution of the issue, instead of using bubble gum to plug a leak in your infrastructure. The best solution is to bring down the vPC member completely, force an absolute failover to the remaining operational switch, prevent suboptimal routing, and prevent confusion in troubleshooting.

I can’t stress this enough: engineering anything other than just failing this non-border vPC enabled leaf switch, in the event it is the only switch to lose all (at least 3) of its spine connections, is either an attempt at designing a fix for stupid, or a sign you’re far too focused on why your leaf switch failed while ignoring the power outage in your entire data center because you lost main power and someone forgot to put diesel in the generator tanks. Part 3 will include more EVPN goodness, stay tuned!


Ooook, here is another configuration example of the Cisco implementation of VXLAN using BGP EVPN for distributed control-plane operation, anycast gateway, and unicast head-end replication. I am using Cisco 9396PX devices for the leaf switches and Cisco 9508 chassis switches for the spine, running iBGP. We’ll explore the basic setup with the leaf switches being vPC enabled, including the Border Leaf switches, while also going over a few scenarios which can blackhole traffic and how to avoid them without an OSPF adjacency between the leaf switches.

This blog will assume you understand the basic setup of BGP EVPN VXLAN from reading the great Cisco documentation already available; thus, I presume you’re coming here for a more in-depth, real-world deployment scenario and for some better explanations, failure scenario testing, and outputs.

Below, this diagram shows the connectivity in the UNDERLAY network:

Cisco BGP EVPN UNDERLAY

You can see we have three spine switches, two of which are configured as route reflectors for scalability. Below is the configuration of a single spine switch being used as a route reflector; the other route reflector is set up the same way, just with different IP addresses and such. The third spine switch, of course, has no iBGP peering relationships: it just runs OSPF and forms adjacencies with all VTEPs for advertisement of VTEP IP reachability.


nv overlay evpn
feature ospf
feature bgp
feature nv overlay

router ospf 1
router-id 172.16.2.253
log-adjacency-changes
passive-interface default

interface Ethernet1/1
description Leaf01-9kA
link debounce time 0
mtu 9216
medium p2p
ip address 172.16.2.1/30
ip ospf network point-to-point
no ip ospf passive-interface
ip router ospf 1 area 0.0.0.0
no shutdown

interface loopback0
ip address 1.1.1.10/32
ip router ospf 1 area 0.0.0.0

router bgp 65000
router-id 1.1.1.10
address-family ipv4 unicast
neighbor 1.1.1.40 remote-as 65000
description VTEP1
password 3 SOMEPASSWORD
update-source loopback0
timers 3 9
address-family ipv4 unicast
address-family l2vpn evpn
send-community both
route-reflector-client
neighbor 1.1.1.41 remote-as 65000
description VTEP2
password 3 SOMEPASSWORD
update-source loopback0
timers 3 9
address-family l2vpn evpn
send-community both
route-reflector-client

The above forms the basis of the Underlay network on the spine and sets up the route-reflectors. We have tuned this for protocol convergence speed; thus, the BGP timers are aggressive, and you’ll notice the “link debounce time 0”, which disables link debounce. In a nutshell, the debounce time is how long a switchport waits after going down before notifying the supervisor, 100 msec by default. Disabling it means the supervisor is told immediately on a link failure, so protocol convergence starts right away. If you’re worried about an unstable interface, it is quite likely that, in the event of a link failing/flapping, the link-flap detection mechanism will down the port anyway. Finally, we set BOTH the interface medium to p2p and the OSPF network type to point-to-point. Why? In the event someone misses the command to switch OSPF to point-to-point (this interface type is broadcast by default), the medium p2p command changes the port’s operating mode and OSPF will properly adjust to point-to-point; thus, this is just good extra redundancy.
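
Before moving on, a few standard NX-OS show commands are worth running on each spine to confirm the underlay is healthy (nothing exotic here; 1.1.1.41 is VTEP2’s loopback from the configuration above):

show ip ospf neighbors
show bgp l2vpn evpn summary
show ip route 1.1.1.41/32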

Now, here is the overlay view, pretend this is an OVERLAY named “Tenant-01”:
VXLAN-OVERLAY

Below is the configuration:


nv overlay evpn
feature ospf
feature bgp
feature interface-vlan
feature vn-segment-vlan-based
feature lacp
feature vpc
feature nv overlay

fabric forwarding anycast-gateway-mac 0005.0005.0005
fabric forwarding dup-host-ip-addr-detection 5 180

class-map type qos match-any SILVER
match cos 1
match dscp 26
class-map type qos match-any GOLD
match cos 2
match dscp 16
class-map type qos match-any PLATINUM
match cos 3
match dscp 48
policy-map type qos REST-YOUR-COS-FOR-UCS-FI
class SILVER
set cos 2
class GOLD
set cos 4
class PLATINUM
set cos 6
policy-map type qos FOR-THE-COS-IGNORANT
class class-default
set cos 2
set dscp 16

spanning-tree vlan 1-3967 hello-time 4

vlan 201
name VXLAN-VLAN01
vn-segment 100201
vlan 202
name VXLAN-VLAN02
vn-segment 100202
vlan 203
name VXLAN-VLAN03
vn-segment 100203
vlan 2999
name VLAN-FOR-BRIDGE-DOMAIN
vn-segment 29999

vrf context Tenant01
vni 29999
rd auto
address-family ipv4 unicast
route-target both auto
route-target both auto evpn
address-family ipv6 unicast
route-target both auto
route-target both auto evpn

track 1 ip route 1.1.1.10/32 reachability
track 2 ip route 1.1.1.20/32 reachability
track 10 list boolean and
object 2
object 1
delay up 12

event manager applet spine-down
event track 10 state down
action 1.0 cli vpc domain 100
action 1.1 cli shutdown
action 1.2 cli interface loopback0
action 1.3 cli shutdown
action 1.4 cli interface nve 1
action 1.5 cli shutdown
event manager applet spine-up
event track 10 state up
action 1.0 cli vpc domain 100
action 1.1 cli no shutdown
action 1.2 cli interface loopback0
action 1.3 cli no shutdown
action 1.4 cli interface nve 1
action 1.5 cli no shutdown

hardware access-list tcam region vacl 0
hardware access-list tcam region e-racl 0
hardware access-list tcam region span 0
hardware access-list tcam region redirect 256
hardware access-list tcam region rp-qos 0
hardware access-list tcam region rp-ipv6-qos 0
hardware access-list tcam region rp-mac-qos 0
hardware access-list tcam region e-ipv6-qos 256
hardware access-list tcam region e-qos-lite 256
hardware access-list tcam region arp-ether 256

vpc domain 100
peer-switch
role priority 8192
system-priority 8192
peer-keepalive destination 192.168.1.1 source 192.168.1.2 interval 500 timeout 3
delay restore 5
peer-gateway
auto-recovery
ipv6 nd synchronize
ip arp synchronize

interface Vlan2999
description L3-VXLAN-BD
no shutdown
mtu 9216
vrf member Tenant01
no ip redirects
ip forward
ipv6 forward
no ipv6 redirects

interface Vlan201
description NET01
no shutdown
mtu 9216
no ip redirects
vrf member Tenant01
ip address 10.0.0.1/24
no ipv6 redirects
fabric forwarding mode anycast-gateway

interface Vlan202
description NET02
no shutdown
mtu 9216
no ip redirects
vrf member Tenant01
ip address 10.0.1.1/24
fabric forwarding mode anycast-gateway

interface Vlan203
description NET03
no shutdown
mtu 9216
no ip redirects
vrf member Tenant01
ip address 10.0.2.1/24
fabric forwarding mode anycast-gateway

interface port-channel50
description To Ethernet Switch B
switchport mode trunk
vpc peer-link

interface port-channel201
description Fabric-Interconnect-A
switchport mode trunk
switchport trunk allowed vlan 201-203
spanning-tree port type edge trunk
mtu 9216
service-policy type qos output REST-YOUR-COS-FOR-UCS-FI
vpc 201

interface port-channel202
description Fabric-Interconnect-B
switchport mode trunk
switchport trunk allowed vlan 201-203
spanning-tree port type edge trunk
mtu 9216
service-policy type qos output REST-YOUR-COS-FOR-UCS-FI
vpc 202

interface nve1
no shutdown
source-interface loopback0
host-reachability protocol bgp
source-interface hold-down-time 120
member vni 29999 associate-vrf
member vni 100201-100203
suppress-arp
ingress-replication protocol bgp

interface Ethernet2/1
switchport mode trunk
channel-group 50 mode active

interface Ethernet2/2
switchport mode trunk
channel-group 50 mode active

interface Ethernet2/3
no switchport
link debounce time 0
medium p2p
mtu 9216
ip address 172.16.2.18/30
no ipv6 redirects
ip ospf network point-to-point
no ip ospf passive-interface
ip router ospf 1 area 0.0.0.0
no shutdown

interface Ethernet2/4
no switchport
link debounce time 0
medium p2p
mtu 9216
ip address 172.16.3.22/30
ip ospf network point-to-point
no ip ospf passive-interface
ip router ospf 1 area 0.0.0.0
no shutdown

interface loopback0
description Loopback for NVE VTEP
ip address 1.1.100.44/32
ip address 1.1.1.102/32 secondary
ip router ospf 1 area 0.0.0.0

interface loopback1
description Loopback for BGP update-source
ip address 1.1.1.44/32
ip router ospf 1 area 0.0.0.0

router ospf 1
router-id 172.16.2.18
passive-interface default
log-adjacency-changes

router bgp 65000
router-id 1.1.1.44
log-neighbor-changes
address-family ipv4 unicast
maximum-paths ibgp 10
neighbor 1.1.1.10 remote-as 65000
description spine1
password 3 SOMEPASSWORD
update-source loopback1
timers 3 9
address-family ipv4 unicast
address-family l2vpn evpn
send-community both
neighbor 1.1.1.20 remote-as 65000
description spine2
password 3 SOMEPASSWORD
update-source loopback1
timers 3 9
address-family ipv4 unicast
address-family l2vpn evpn
send-community both
vrf Tenant01
address-family ipv4 unicast
advertise l2vpn evpn
maximum-paths ibgp 10
address-family ipv6 unicast
advertise l2vpn evpn
maximum-paths ibgp 6
evpn
vni 100201 l2
rd auto
route-target import auto
route-target export auto
vni 100202 l2
rd auto
route-target import auto
route-target export auto
vni 100203 l2
rd auto
route-target import auto
route-target export auto

ip tcp path-mtu-discovery
l2rib dup-host-mac-detection 5 180
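
With the configuration in place, these are the standard NX-OS show commands I’d run first to confirm the VTEP is functional; detailed outputs and failure testing come in part 2:

show vpc
show nve peers
show nve vni
show bgp l2vpn evpn summary
show l2route evpn mac all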

A lot to see here, right? This is why I decided to break this into two parts: this is part 1, and my next post is part 2 for border leafs and failure scenarios! Let’s get this initial review over with!

I will just outline all the key points here:

  • policy-map type qos REST-YOUR-COS-FOR-UCS-FI – This is for those of you who utilize COS in Cisco UCS and want to maintain your COS value AFTER your packet is VXLAN DECAPSULATED. With this EVPN VXLAN configuration, the original 802.1Q header is stripped at ingress; thus, no COS value remains. However, any DSCP value set at the virtual switch level is maintained throughout, so we’re assuming you’re marking DSCP at your virtual switch along with COS and you have your own unique mapping from COS to DSCP. You create the classes I have above (this is all for example, your mappings will/may be different), then create a policy-map to match against the DSCP value marked by your virtual switch and set the appropriate COS value. You then apply this as an OUTBOUND QOS policy on the port-channel towards your Fabric Interconnects, though you will have to adjust your TCAM entries for this to work. The other policy, for the COS-IGNORANT, is for devices which aren’t smart enough to set either the DSCP or COS value; just apply it to the interface, inbound, and set your values as needed
  • fabric forwarding anycast-gateway-mac 0005.0005.0005 – This is for the anycast gateway mac address. You can get “funny” here, but I like to keep it simple, your choice.
  • fabric forwarding dup-host-ip-addr-detection 5 180 – I set the duplicate host IP detection to 5 moves in 180 seconds for my environment, tune to the values best suited for yours
  • track objects and object list – I set these to look for the BGP neighbor addresses of the route-reflectors in the routing table and then assign each of those to the track object list, which is later used to act on the vPC. Part 2 will show and explain why
  • hardware tcam entries – Follow these for success in this configuration, especially if you’re in need of using the outbound QOS service policies
  • VPC peer-keepalive and delay-restore timers – Set to our environment and for specific reasons we’ll explain in part 2
  • NVE source-interface hold-down – This timer is set to 120 seconds, tuned for our environment, from the default of 300 seconds. I will explain the use of this and why I use 120 seconds in part 2
  • Loopback0 – Used ONLY for the NVE VTEP interface
  • Loopback0 secondary address – for vPC enabled VTEPS only, this is the PROXY VTEP address used
  • Loopback1 – Used ONLY for BGP source-updates
  • BGP passwords – This is used for security in the Underlay, you can also utilize OSPF authentication too, for extra security

So, like Forrest Gump said to all his faithful followers, “I’m pretty tired….I think I’ll go home now”. See you in Part 2, where the FUN is!!!

    CONTINUE TO PART 2


We know that for switches to participate in the same MST region, the following must be configured identically:

  • Name – Case Sensitive
  • Revision – Any number, but should be the same
  • Instance mappings and their respective VLANs

Now, what about the VLANs themselves? What about switches and security? I looked all over for this answer and it was vague at best, with each vendor’s documentation saying something a little different from the others’. So here is my preliminary testing: I added multiple instances to the spanning-tree setup on my Cisco Catalyst 3750. My scenario was as follows, along with the outputs (a configuration sketch follows the list):

  1. Two instances
  2. Instance 1 had all the VLANs which actually existed on the switch
  3. Instance 2 had 2 VLANs mapped
    • The first test of MST instance 2 was with neither VLAN defined on the switch
    • The second test of MST instance 2 was with one VLAN defined and the other not
    • The third test of MST instance 2 was with both VLANs defined
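
For reference, the instances above were built with the standard Catalyst MST configuration block; a sketch along these lines (region name, revision, and VLAN ranges are made up for illustration):

spanning-tree mst configuration
 name TESTREGION
 revision 1
 instance 1 vlan 10-100
 instance 2 vlan 200-201
spanning-tree mode mst

You can confirm what the switch actually computed, digest included, with “show spanning-tree mst configuration digest”.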

MST instances themselves do not communicate the actual VLANs or VLAN mappings, and the IST/CIST does not communicate the actual VLAN-to-instance mapping either. Instead, we rely on IST0 to transmit the BPDUs containing our information: name, revision, and the configuration digest. That configuration digest/checksum/hash is the value each switch calculates to determine whether it is operating in the exact same region or in a different region, and it is calculated from the parameters present in the MST configuration table. Want to know more about the hashing? Here is a link: 802.1s explained.

The information is long and boring, but do a search for “digest” and you’ll find yourself deep into figuring out how this all works. The test results are soon to come; I am working on both Catalyst and Nexus outputs to benefit not just enterprise and branch, but also those in the data center who’re having to work in vPC hybrid environments with STP-attached devices. More to come…


It is a common mistake to assume X number of ports in an EtherChannel equates to the common port speed * X; however, this is grossly incorrect, and I’ll attempt to explain the behavior in layman’s terms.

First, you should ALWAYS build EtherChannel bundles in powers of two (2, 4, or 8 links); anything else gets an uneven split. Why? It comes down to the hashing algorithm used to determine how to load balance across the EtherChannel; more on how that works below.
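
Here is the intuition, as a conceptual sketch (exact hash internals vary by platform and code): the switch reduces the configured fields to a 3-bit result, and each result value maps to one physical link. With src-dst-ip hashing on a 4-link bundle, only the low-order bits effectively matter:

src IP 10.1.1.5  (last bits ...0101)
dst IP 10.1.1.6  (last bits ...0110)
XOR result       =         ...0011  -> bucket 3 -> one specific link

With a power-of-two link count, the 8 hash buckets divide evenly across the links; with 6 links, two links get 2 buckets each while four get 1, which is exactly the uneven split mentioned above.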

Second, you need to examine the traffic patterns on your network. If you have a model where your servers live in the “core” of your office and you have access switches connecting back to the core through an EtherChannel, you’re likely to have a lot of different source addresses (IP and MAC) going to a common destination address (IP and MAC). This is especially true of a backup server pulling backups from all the computers in the network, or of users sending their default-gateway traffic to a router which has an L3 port-channel configured from the core switch, a common pattern you’ll find today. Finally, you can have server-to-server traffic patterns, where the source and destination IP addresses remain constant but the servers are probably utilizing numerous source and destination TCP/UDP ports; thus, the EtherChannel carrying this traffic needs to be adjusted accordingly. What if both models (clients-to-server and server-to-server) are going across the same EtherChannel and you can’t build a separate one? The only recommendation here is to examine your traffic carefully and figure out what is more effective for your organization; we won’t get into that here.

Third, you need to understand what load balancing algorithms are available to you. Take notice, however: this largely depends on the equipment you’re using. If your organization, like one I have worked in, has decided to use 3650/3750 devices as the “core” of the network, you’re limited to the basics; however, if your organization uses true core switches (4500, 6500, 6800) you have all the options available to you. I will list the options available on ALL models below:

  • src-ip – Source IP address only
  • dst-ip – Destination IP address only
  • src-dst-ip – Source and destination IP address only (XOR)
  • src-mac – Source mac address only
  • dst-mac – Destination mac address only
  • src-dst-mac – Source and Destination mac address only (XOR)

Now, here is what you’ll find available on true core switch models, in addition to the above:

  • src-port – Source port only
  • dst-port – Destination port only
  • src-dst-port – Source and Destination port only (XOR)
  • src-dst-mixed-ip-port – Source and destination IP along with the Source and Destination ports
  • src-mixed-ip-port – Source IP address and port
  • dst-mixed-ip-port – Destination IP address and port
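
Setting and verifying the method is quick. A Catalyst IOS-style sketch (the command is global to the switch on most Catalyst platforms, not per EtherChannel, and exact keywords depend on your platform and code):

conf t
 port-channel load-balance src-dst-ip
end
show etherchannel load-balance

Because the setting is switch-wide, pick the method that fits the dominant traffic pattern on that switch.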

The above commands all depend on what you’re running in your infrastructure, both hardware and code level. It pays to put in the appropriate devices according to their duties. If you’re using devices like a 3560/3750 as your “core” you could be out of luck considering the few options available to you, with one exception: you can look at installing a 10GbE module in your switch and running the inter-switch links at 10GbE. This WILL NOT fix the load balancing issue, but it will provide the increased bandwidth to get you through until you’re able to install the appropriate hardware to support your needs. This assumes you’re using fiber for inter-switch links and it supports 10GbE across the distances you’re looking to span.

Understanding your traffic patterns will be a process; however, one thing I think a lot of you forget about is the L3 EtherChannel you could be using between your core switch and your router. Think about this: the switch resolves the next-hop default gateway once and this NEVER changes; thus, the destination of the traffic is always the same. So, if you want to utilize that EtherChannel more effectively, set it to hash based on source MAC address towards the router.

I won’t let this get too long, I’ll follow up with some nice diagrams later.