Archive for the ‘Nexus’ Category

We know that for switches to participate in the same MST region, the following must be configured identically:

  • Name – case sensitive
  • Revision – any number, but it must match across switches
  • Instance mappings and their respective VLANs

Now, what about the VLANs themselves? What about switches and security? I looked all over for this answer, and it was vague at best; each vendor's documentation said something a little different from the others'. So, as preliminary testing, I added multiple instances to the spanning-tree setup on my Cisco Catalyst 3750. My scenario was as follows (a configuration sketch follows the list):

  1. Two instances
  2. Instance 1 held all the VLANs actually defined on the switch
  3. Instance 2 (MSTI 2) had two VLANs mapped
    • The first test of MSTI 2 was with neither VLAN defined on the switch
    • The second test was with one VLAN defined and the other not
    • The third test was with both VLANs defined
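
For reference, here is a sketch of the kind of MST configuration the tests were built on; the region name, revision number, and VLAN numbers below are placeholders, not my actual lab values:

! Catalyst IOS – placeholder region name, revision, and VLAN mappings
spanning-tree mode mst
spanning-tree mst configuration
  name TESTREGION
  revision 10
  instance 1 vlan 10,20,30
  instance 2 vlan 100,200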

MST instances themselves do not communicate their VLAN mappings, and the IST/CIST does not carry the VLAN-to-instance mapping either. Instead, we rely on IST0 to transmit BPDUs containing the region information: name, revision, and a configuration digest. That digest is the value each switch calculates to determine whether it is operating in the exact same region as its neighbor or in a different one. The digest is a hash computed over the parameters in the MST configuration table, i.e. the VLAN-to-instance mappings. Want to know more about the hashing? Here is a link: 802.1s explained.
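
If you want to compare two switches yourself, Catalyst IOS can show you both the configuration table and the computed digest directly:

show spanning-tree mst configuration
show spanning-tree mst configuration digest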

The information is long and boring, but do a search for "digest" and you'll find yourself deep into figuring out how this all works. The test results are soon to come; I am working on both Catalyst and Nexus outputs to benefit not just enterprise and branch networks, but also those in the data center who have to work in vPC hybrid environments with STP-attached devices. More to come…


Confused about getting QoS working on your Nexus 9300 platform (I worked with the 9396PX)? Well, if you're coming from the Nexus 5500 platform, you're in for a little tweaking, as some things are different. I will quickly outline the differences and move on to some sample configuration:

  • MTU is set at the interface level
  • Queuing class-maps are system-defined
  • Egress queues – queue 0 is the default, and queues 1-3 are pre-mapped via the system-defined class-maps mentioned above
  • Both access and trunk ports, by default, treat all traffic as if it had CoS 0, moving it into the default queue
  • A qos-type ingress service-policy must be applied to ports or port-channels to classify traffic

Here is some basic configuration for setting the QOS policy to classify:

class-map type qos match-all RUBY
  match cos 4
class-map type qos match-all EMERALD
  match cos 2
class-map type qos match-all DIAMOND
  match cos 6

policy-map type qos QOS_POLICY
  class RUBY
    set qos-group 2
  class EMERALD
    set qos-group 1
  class DIAMOND
    set qos-group 3

interface port-channel20
  switchport mode trunk
  switchport trunk allowed vlan all
  spanning-tree port type edge trunk
  mtu 9216
  service-policy type qos input QOS_POLICY
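
To confirm the classification policy is attached to the port-channel (and, where the platform reports them, see per-class statistics), you can check the interface; this is standard NX-OS, not 9300-specific:

show policy-map interface port-channel 20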

Now, let's view the system-defined queuing class-maps (show class-map type queuing) so you can get an idea of how the qos-group-to-queue mapping works:

class-map type queuing match-any c-out-q3
  Description: Classifier for Egress queue 3
  match qos-group 3
class-map type queuing match-any c-out-q2
  Description: Classifier for Egress queue 2
  match qos-group 2
class-map type queuing match-any c-out-q1
  Description: Classifier for Egress queue 1
  match qos-group 1
class-map type queuing match-any c-out-q-default
  Description: Classifier for Egress default queue
  match qos-group 0

Finally, let’s assign some bandwidth allocation around those queues:

policy-map type queuing QUEUING_POLICY
  class type queuing c-out-q1
    bandwidth percent 10
  class type queuing c-out-q2
    bandwidth percent 15
  class type queuing c-out-q3
    bandwidth percent 25
  class type queuing c-out-q-default
    bandwidth percent 50

Now, we apply this QUEUING policy to the system-qos:

system qos
  service-policy type queuing output QUEUING_POLICY
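
To verify the queuing policy actually took effect on a port, check the queuing state per interface (the interface number here is just an example):

show queuing interface ethernet 1/1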

I'll keep updating this as I run into more QoS scenarios on the 9300 platform.


Much like on firewalls, you can create object groups in NX-OS and use them when implementing ACLs:


object-group ip address {OBJECTNAME}
  {subnet/mask}
  {subnet/mask}
  {subnet/mask}...
exit

ip access-list {ACL_NAME}
  permit ip addrgroup {OBJECTNAME} [destination]
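
To make that concrete, here's a made-up example; the object-group name, ACL name, and subnets are all hypothetical:

object-group ip address WEB_SERVERS
  10.1.10.0/24
  10.1.20.0/24
exit

ip access-list ALLOW_WEB
  permit ip addrgroup WEB_SERVERS any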

Makes life simple, huh? What about viewing an access-list that has been configured with an object group? Well, under show access-lists summary you won't see the individual entries; you'll need to "expand":

show access-lists {ACL_NAME} expanded


In Cisco IOS, figuring out which ACLs are applied to which interfaces is a monumental pain in the ass if you have a lot of interfaces: typically you're searching the running config by eye or, if you know how to script, sending the output to a text file and filtering it to get what you need. All of that sucks, because in NX-OS you can just do this:

show access-lists summary

The output gives you not only which access-list is tied to which interface, but also the direction in which the ACL is applied. You'll see a configured section and an active section. Just remember: you can configure an ACL on an interface, but if the interface is not IP-enabled, or is just plain down, it will not be listed in the active section.
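
For example, binding a hypothetical ACL to a routed port (the interface, ACL name, and addressing here are made up):

interface Ethernet1/5
  no switchport
  ip address 192.0.2.1/24
  ip access-group ALLOW_WEB in

Bring the interface up with that IP and the binding appears in both the configured and active sections; shut it down and it drops out of the active section.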


Why do VTP in the data center? I have absolutely no explanation for this; it is generally just a bad idea to use VTP to begin with. Perhaps "easy" is one argument, but look at the problems you face with it:

  • A rogue switch with a higher revision number can screw the whole network
  • On some IOS versions, if not all, the VLAN configuration doesn't reside in the startup-config (it lives in vlan.dat on flash)
  • A rogue switch can be used to gather VLAN information about the network, helping form an inside attack

In a data center you expect a highly available, reliable, and secure computing environment, and that is something VTP simply doesn't offer. Look at the Nexus lineup: VTP is a feature that is disabled by default! What a great concept, finally! I'll go ahead and just say it: if you're using VTP in the data center, you're just being lazy.
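
If you inherit gear that's already speaking VTP, a minimal containment sketch (IOS side; on NX-OS there's nothing to do unless someone has deliberately enabled the feature):

! Catalyst IOS: stop participating in domain-wide VLAN propagation
vtp mode transparent
! NX-OS: VTP stays disabled until someone runs "feature vtp" – so just don't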


Just a quick tip for those looking. If you're running 6.x code, you can use an F2E module for the OTV internal interface. You can actually get control-plane traffic between two devices across an F2E, and you'll see the MAC addresses in the show otv route output; however, no encapsulation will occur. You will need an M-series card in the Nexus chassis to actually perform OTV encapsulation.
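
For context, here's a minimal OTV overlay sketch; the site identifier, multicast groups, VLAN ranges, and interface numbers are made up. The join interface is where encapsulation happens, so that's the port that must live on an M-series module:

feature otv
otv site-vlan 99
otv site-identifier 0x1

interface Overlay1
  ! join interface must sit on an M-series port for encapsulation
  otv join-interface Ethernet1/1
  otv control-group 239.1.1.1
  otv data-group 232.1.1.0/28
  otv extend-vlan 100-150
  no shutdown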