See my video on this very specific topology, what I’ve encountered, and the solution I found to work for me:

So, you’ve surely seen some interesting tidbits in the previous section, things you haven’t noticed from other configurations on the Internet. I will outline why these are present in this configuration based on the failure scenario I present below:

Complete and total loss of spine connections on a single leaf switch – First I’ll outline the ONLY reasons why a single leaf switch would lose all of its spine uplinks:

  1. Total and absolute failure of the entire leaf switch
  2. The 40GbE GEM card has failed, but the rest of the switch remains operational
  3. An isolated ASIC failure affecting only the GEM module
  4. Someone falls through a single cable tray in your data center, taking out all the connections you placed in a single tray
  5. Total and complete failure of all 40GbE QSFP+ modules, at the same time
  6. Total loss of power to either the leaf switch or to all spine switches
  7. All three line cards, in three different spine switches, at the same time, suffer the same failure
  8. Someone reloaded the spine switches at the same time
  9. Someone made a configuration change and hosed your environment

OK, now, let’s make one thing clear: NO one, and I mean no one, can prevent any issue which starts with “Someone”; you can’t fix stupid. If you lose power to both of your 9396PX power supplies or to the 3+ PSUs in the 9508 spine switches, your problem is much larger than you care to believe. Let’s see, we now have just five scenarios left.

If your leaf switch just dies, well, you know. Down to four! Yes, a GEM card can fail, I’ve seen it, but this isn’t common and is usually related to an issue which will down the entire switch anyway, but we’ll keep that one in our hat. Failure of all the connected QSFP+ modules at the same time? I’ll call BS on this; if all of those QSFP+ modules have failed, your switch is on the train toward absolute failure anyway.

Isolated ASIC failure? So uncommon I feel stupid mentioning it. All three line cards in the spine failing at the same time? Yeah, right. So, in all, we’re looking to circumvent a GEM card failure which doesn’t also mean your switch is dead, the only really valid reason. However, please note, I am only providing this as a proof of concept and I don’t think anyone should allow their environment to operate in a degraded state. If your environment’s operating status is important to you, consider a different choice of leaf switch for greater redundancy, a cold or warm backup switch, or at least 24x7x4 Cisco SmartNet coverage.

When a leaf switch suffers a failure of all its spine uplinks, your best course of action, on a vPC-enabled VTEP, is to shut down the vPC itself on the single leaf switch experiencing the failure. This is where the tracking objects against the IP routes, and the tracking list which groups them for use within the event manager, come into use. Once all the links have gone down and the BGP peer host addresses are removed from the routing table, the boolean AND track list goes down and the event manager applet named “spine-down” initiates, shutting down the vPC, loopback0, and the NVE interface, in that order.
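A rough Python sketch of what the track list is doing (illustrative only; the object states and behavior mirror the configuration, not NX-OS internals):

```python
def track_list_state(route_reachable):
    """Boolean AND track list: UP only while every tracked /32 (the BGP
    route-reflector loopbacks) is present in the routing table."""
    return all(route_reachable)

# Both spine loopbacks reachable -> list UP -> no EEM action
assert track_list_state([True, True])

# All spine uplinks lost -> both /32s withdrawn -> list DOWN, which
# fires the "spine-down" applet; the `delay up 12` in the configuration
# simply defers the DOWN->UP transition so BGP can re-establish before
# "spine-up" runs.
assert not track_list_state([False, False])
```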

When all the links return to operation, there is a 12-second delay, configured for our environment to allow the BGP peers to reach the established state, and then the next event manager applet, named “spine-up”, initiates, basically just “un-shutting” the interfaces in the exact same order. The NVE source-interface hold-down-time brings the NVE interface UP but keeps the loopback0 interface down long enough to ensure EVPN updates have been received and the vPC port-channels reach full UP/UP status. If the loopback0 and port-channels come up too long before the NVE interface, we’ll blackhole traffic from the hosts towards the fabric. If the NVE and loopback0 interfaces come up too long before the port-channels, you’ll blackhole traffic in the network-to-access direction; thus, timing is critical and will vary per environment, so testing is required.

A lot of stuff, right? This is all done to prevent the NVE VTEP source interface from coming up before the port-channels towards the end hosts, which would let the VTEP advertise itself into the EVPN database and blackhole INBOUND traffic.

You might be thinking: Why not just create an L3 link and form an OSPF adjacency between the two switches to allow the failed switch to continue to receive EVPN updates and prevent blackholing? Well, here are my reasons:

  1. Switchport density and cost per port – If a single 48-port 10GbE switch costs you $30,000, not including SmartNet or professional services, you’re over $600/port, and you and I both know you’re not going to use just ONE link in the Underlay; you’ll use at least two. A really expensive fix.
  2. Suboptimal routing – Let’s be real here: your traffic will now take an additional hop because your switch is on the way out.
  3. Confusing information in the EVPN database for next-hop reachability – Because the switch with the failed spine uplinks still has a path and is still receiving EVPN updates, you’ll see it show up as a route-distinguisher in the database, creating confusion.
  4. It doesn’t serve appropriate justice to a compromised switch – Come on, the switch has failed; while not completely, it is probably toast and should be downed to trigger immediate resolution of the issue, instead of using bubble gum to plug a leak in your infrastructure. The best solution is to bring down the vPC member completely, force an absolute failover to the remaining operational switch, prevent suboptimal routing, and prevent confusion in troubleshooting.

I can’t stress this enough: engineering anything other than just failing this non-border, vPC-enabled leaf switch, in the event it is the only switch to lose all (at least three) of its spine connections, is an attempt at either designing a fix for stupid, or you’re far too focused on why your leaf switch has failed while ignoring the power outage in your entire data center because you lost main power and someone forgot to put diesel in the generator tanks. Part 3 will include more EVPN goodness, stay tuned!

Ooook, here is another configuration example of the Cisco implementation of VXLAN using BGP EVPN for distributed control-plane operations, anycast gateway, and unicast head-end replication. I am using Cisco 9396PX devices for leaf switches and Cisco 9508 chassis switches for the spine, using iBGP. We’ll explore the basic setup with the leaf switches being vPC enabled, including the border leaf switches, while also going over a few scenarios which can blackhole traffic and how to avoid this without an OSPF adjacency between the leaf switches.

This blog will assume you understand the basic setup of BGP EVPN VXLAN by reading the great Cisco documentation already available; thus, I presume you’re coming here for a more in-depth, real-world deployment scenario and for some better explanations, failure scenario testing, and outputs.

Below, this diagram shows the connectivity in the UNDERLAY network:

Cisco BGP EVPN UNDERLAY


You can see we have three spine switches, two configured as route reflectors for scalability. Below is the configuration of a single spine switch being used as a route reflector; the other route reflector is set up the same way, just with different IP addresses. The third spine switch, of course, has no iBGP peering relationships; it just runs OSPF and forms adjacencies with all VTEPs for advertisement of VTEP IP reachability.


nv overlay evpn
feature ospf
feature bgp
feature nv overlay

router ospf 1
router-id 172.16.2.253
log-adjacency-changes
passive-interface default

interface Ethernet1/1
description Leaf01-9kA
link debounce time 0
mtu 9216
medium p2p
ip address 172.16.2.1/30
ip ospf network point-to-point
no ip ospf passive-interface
ip router ospf 1 area 0.0.0.0
no shutdown

interface loopback0
ip address 1.1.1.10/32
ip router ospf 1 area 0.0.0.0

router bgp 65000
router-id 1.1.1.10
address-family ipv4 unicast
neighbor 1.1.1.40 remote-as 65000
description VTEP1
password 3 SOMEPASSWORD
update-source loopback0
timers 3 9
address-family ipv4 unicast
address-family l2vpn evpn
send-community both
route-reflector-client
neighbor 1.1.1.41 remote-as 65000
description VTEP2
password 3 SOMEPASSWORD
update-source loopback0
timers 3 9
address-family ipv4 unicast
address-family l2vpn evpn
send-community both
route-reflector-client

The above forms the basis of the Underlay network on the spine and sets up the route reflectors. We have tuned this for protocol convergence speed; thus, the BGP timers are aggressive, and you’ll notice “link debounce time 0”, which disables link debounce. In a nutshell, the debounce time is how long a switchport waits after going down before notifying the supervisor, 100 msec by default. Disabling it lets the supervisor learn of a link failure immediately so protocol convergence can start. If you’re worried about an unstable interface, it is quite likely, in a link failing/flapping scenario, the link-flap detection mechanism will down the port anyway. Finally, we set BOTH the interface medium to p2p and the OSPF network type to point-to-point. Why? In the event someone misses the command to switch OSPF to point-to-point, since this interface type is broadcast by default, the medium p2p command changes the port’s operating mode and OSPF will properly adjust to point-to-point; thus, this is just good extra redundancy.

Now, here is the overlay view, pretend this is an OVERLAY named “Tenant-01”:
VXLAN-OVERLAY

Below is the configuration:


nv overlay evpn
feature ospf
feature bgp
feature interface-vlan
feature vn-segment-vlan-based
feature lacp
feature vpc
feature nv overlay

fabric forwarding anycast-gateway-mac 0005.0005.0005
fabric forwarding dup-host-ip-addr-detection 5 180

class-map type qos match-any SILVER
match cos 1
match dscp 26
class-map type qos match-any GOLD
match cos 2
match dscp 16
class-map type qos match-any PLATINUM
match cos 3
match dscp 48
policy-map type qos REST-YOUR-COS-FOR-UCS-FI
class SILVER
set cos 2
class GOLD
set cos 4
class PLATINUM
set cos 6
policy-map type qos FOR-THE-COS-IGNORANT
class class-default
set cos 2
set dscp 16

spanning-tree vlan 1-3967 hello-time 4

vlan 201
name VXLAN-VLAN01
vn-segment 100201
vlan 202
name VXLAN-VLAN02
vn-segment 100202
vlan 203
name VXLAN-VLAN03
vn-segment 100203
vlan 2999
name VLAN-FOR-BRIDGE-DOMAIN
vn-segment 29999

vrf context Tenant01
vni 29999
rd auto
address-family ipv4 unicast
route-target both auto
route-target both auto evpn
address-family ipv6 unicast
route-target both auto
route-target both auto evpn

track 1 ip route 1.1.1.10/32 reachability
track 2 ip route 1.1.1.20/32 reachability
track 10 list boolean and
object 2
object 1
delay up 12

event manager applet spine-down
event track 10 state down
action 1.0 cli conf t
action 1.1 cli vpc domain 100
action 1.2 cli shutdown
action 1.3 cli interface loopback0
action 1.4 cli shutdown
action 1.5 cli interface nve 1
action 1.6 cli shutdown
event manager applet spine-up
event track 10 state up
action 1.0 cli conf t
action 1.1 cli vpc domain 100
action 1.2 cli no shutdown
action 1.3 cli interface loopback0
action 1.4 cli no shutdown
action 1.5 cli interface nve 1
action 1.6 cli no shutdown

hardware access-list tcam region vacl 0
hardware access-list tcam region e-racl 0
hardware access-list tcam region span 0
hardware access-list tcam region redirect 256
hardware access-list tcam region rp-qos 0
hardware access-list tcam region rp-ipv6-qos 0
hardware access-list tcam region rp-mac-qos 0
hardware access-list tcam region e-ipv6-qos 256
hardware access-list tcam region e-qos-lite 256
hardware access-list tcam region arp-ether 256

vpc domain 100
peer-switch
role priority 8192
system-priority 8192
peer-keepalive destination 192.168.1.1 source 192.168.1.2 interval 500 timeout 3
delay restore 5
peer-gateway
auto-recovery
ipv6 nd synchronize
ip arp synchronize

interface Vlan2999
description L3-VXLAN-BD
no shutdown
mtu 9216
vrf member Tenant01
no ip redirects
ip forward
ipv6 forward
no ipv6 redirects

interface Vlan201
description NET01
no shutdown
mtu 9216
no ip redirects
vrf member Tenant01
ip address 10.0.0.1/24
no ipv6 redirects
fabric forwarding mode anycast-gateway

interface Vlan202
description NET02
no shutdown
mtu 9216
no ip redirects
vrf member Tenant01
ip address 10.0.1.1/24
fabric forwarding mode anycast-gateway

interface Vlan203
description NET03
no shutdown
mtu 9216
no ip redirects
vrf member Tenant01
ip address 10.0.2.1/24
fabric forwarding mode anycast-gateway

interface port-channel50
description To Ethernet Switch B
switchport mode trunk
vpc peer-link

interface port-channel201
description Fabric-Interconnect-A
switchport mode trunk
switchport trunk allowed vlan 201-203
spanning-tree port type edge trunk
mtu 9216
service-policy type qos output REST-YOUR-COS-FOR-UCS-FI
vpc 201

interface port-channel202
description Fabric-Interconnect-B
switchport mode trunk
switchport trunk allowed vlan 201-203
spanning-tree port type edge trunk
mtu 9216
service-policy type qos output REST-YOUR-COS-FOR-UCS-FI
vpc 202

interface nve1
no shutdown
source-interface loopback0
host-reachability protocol bgp
source-interface hold-down-time 120
member vni 29999 associate-vrf
member vni 100201-100203
suppress-arp
ingress-replication protocol bgp

interface Ethernet2/1
switchport mode trunk
channel-group 50 mode active

interface Ethernet2/2
switchport mode trunk
channel-group 50 mode active

interface Ethernet2/3
no switchport
link debounce time 0
medium p2p
mtu 9216
ip address 172.16.2.18/30
no ipv6 redirects
ip ospf network point-to-point
no ip ospf passive-interface
ip router ospf 1 area 0.0.0.0
no shutdown

interface Ethernet2/4
no switchport
link debounce time 0
medium p2p
mtu 9216
ip address 172.16.3.22/30
ip ospf network point-to-point
no ip ospf passive-interface
ip router ospf 1 area 0.0.0.0
no shutdown

interface loopback0
description Loopback for NVE VTEP
ip address 1.1.100.44/32
ip address 1.1.1.102/32 secondary
ip router ospf 1 area 0.0.0.0

interface loopback1
description Loopback for BGP update-source
ip address 1.1.1.44/32
ip router ospf 1 area 0.0.0.0

router ospf 1
router-id 172.16.2.18
passive-interface default
log-adjacency-changes

router bgp 65000
router-id 1.1.1.44
log-neighbor-changes
address-family ipv4 unicast
maximum-paths ibgp 10
neighbor 1.1.1.10
description spine1
password 3 SOMEPASSWORD
update-source loopback1
timers 3 9
address-family ipv4 unicast
address-family l2vpn evpn
send-community both
neighbor 1.1.1.20
description spine2
password 3 SOMEPASSWORD
update-source loopback1
timers 3 9
address-family ipv4 unicast
address-family l2vpn evpn
send-community both
vrf Tenant01
address-family ipv4 unicast
advertise l2vpn evpn
maximum-paths ibgp 10
address-family ipv6 unicast
advertise l2vpn evpn
maximum-paths ibgp 6
evpn
vni 100201 l2
rd auto
route-target import auto
route-target export auto
vni 100202 l2
rd auto
route-target import auto
route-target export auto
vni 100203 l2
rd auto
route-target import auto
route-target export auto

ip tcp path-mtu-discovery
l2rib dup-host-mac-detection 5 180

A lot to see here, right? This is why I decided to break this into two parts, so this is part 1 and my next post is part 2 for border leafs and failure scenarios! Let’s get this initial review over with!

I will just outline all the key points here:

  • policy-map type qos REST-YOUR-COS-FOR-UCS-FI – This is for those of you who utilize CoS in Cisco UCS and want to maintain your CoS value AFTER your packet is VXLAN de-capsulated. With this EVPN VXLAN configuration, the original 802.1Q header is stripped at ingress; thus, no CoS value remains, but any DSCP set at the virtual switch level is maintained throughout. So we’re assuming you’re marking DSCP at your virtual switch along with CoS and you have your own unique mapping from CoS to DSCP. You create the classes I have above (this is all for example; your mappings will/may be different), then create a policy-map to match against the DSCP value marked by your virtual switch and set the appropriate CoS value. You then apply this as an OUTBOUND QoS policy on the port-channel towards your Fabric Interconnects, but you will have to adjust your TCAM carving for this to work. The other policy, FOR-THE-COS-IGNORANT, is for devices which aren’t smart enough to set either the DSCP or CoS value; just apply it to the interface, inbound, and set your values as needed
  • fabric forwarding anycast-gateway-mac 0005.0005.0005 – This is for the anycast gateway mac address. You can get “funny” here, but I like to keep it simple, your choice.
  • fabric forwarding dup-host-ip-addr-detection 5 180 – I set the duplicate host IP detection to 5 moves in 180 seconds for my environment, tune to the values best suited for yours
  • track objects and object list – I set these to look for the BGP neighbor addresses of the route reflectors in the routing table and then assign each of them to the track object list for later use with the vPC. Part 2 will show and explain why
  • hardware tcam entries – Follow these for success in this configuration, especially if you’re in need of using the outbound QOS service policies
  • VPC peer-keepalive and delay-restore timers – Set to our environment and for specific reasons we’ll explain in part 2
  • NVE source-interface hold-down – This timer is set to 120 seconds, tuned for our environment, from the default of 300 seconds. I will explain the use of this and why I use 120 seconds in part 2
  • Loopback0 – Used ONLY for the NVE VTEP interface
  • Loopback0 secondary address – for vPC enabled VTEPS only, this is the PROXY VTEP address used
  • Loopback1 – Used ONLY as the BGP update-source
  • BGP passwords – This is used for security in the Underlay, you can also utilize OSPF authentication too, for extra security
So, like Forrest Gump said to all his faithful followers, “I’m pretty tired…. I think I’ll go home now.” See you in Part 2, where the FUN is!!!


If you’re attempting to use SCP on your Nexus switch and you’re seeing slow performance, even with jumbo frames enabled on your source interface and the physically connected interface, and you’ve verified everything along the path is set to the correct jumbo MTU, you likely need to look at your system QoS settings for network-qos. By default, the standard policy-map applied under system qos references the class-default network-qos class and sets the MTU to 1500. You will need to create a new policy-map like this:

policy-map type network-qos jumbo
class type network-qos class-default
mtu 9216
system qos
service-policy type network-qos jumbo

Once you have this created, the next thing you want to do is enable TCP path MTU discovery:

ip tcp path-mtu-discovery

From there, you can attempt to ping your destination using a jumbo-frame packet size with the df-bit set; you should see it go through successfully, and you’ll notice your SCP transfers are much faster for those large BIN files during code upgrades.
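For the df-bit ping, remember that the payload size you specify must leave room for the IP and ICMP headers; a quick check of the arithmetic (assuming a plain IPv4 header with no options):

```python
def max_icmp_payload(mtu, ip_header=20, icmp_header=8):
    """Largest ICMP payload that fits one frame without fragmentation."""
    return mtu - ip_header - icmp_header

# A 9216-byte jumbo MTU leaves 9188 bytes of ICMP payload,
# and the standard 1500-byte MTU leaves the familiar 1472.
assert max_icmp_payload(9216) == 9188
assert max_icmp_payload(1500) == 1472
```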

The smoking gun for finding this issue is on your physical uplinks: do a show interface e#/# and you’ll notice jumbo frames sent on the TX side, along with incrementing input errors on the RX side that roughly track the number of jumbo frames sent.

This was tested on a Cisco Nexus 3172 running 7.0.3.I2.2 code.

There is an issue I have noticed with VMware systems deployed with Nexus vPC technology, where traffic only makes it out of the vPC after disabling half the vPC or removing the vPC completely. Initially you might think this is a Cisco issue, and I am here to tell you that you’re wrong.

In the virtual switch port-groups and the VMNIC teaming there is a load-balancing algorithm you can choose. I have seen issues where the VMNICs are set to route based on IP hash but the port-group is set to something like route based on originating port ID.

If pinging the machine from the vPC-enabled switches (if they have an SVI) only gets responses on ONE of the devices, and a machine north of the vPC, probably at your desk, only gets responses when HALF the vPC is down, you need to immediately check the hashing on the vmnics and the port-group.

Use the esxtop command to review which virtual machines are using which vSwitch and vmnic port to further aid your troubleshooting.

I would highly suggest you keep the algorithm the same at both levels; there may be odd circumstances where mixing them is helpful, but you’re likely trading predictability for what may be perceived performance you’re probably not getting.
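To see why the two algorithms disagree, here is a hedged Python sketch of the two selection policies (simplified for illustration; VMware’s actual hash is not this exact formula):

```python
import ipaddress

UPLINKS = ["vmnic0", "vmnic1"]

def port_id_uplink(vm_port_id, uplinks=UPLINKS):
    """'Route based on originating port ID': the VM's virtual port pins
    ALL of its traffic to a single uplink."""
    return uplinks[vm_port_id % len(uplinks)]

def ip_hash_uplink(src_ip, dst_ip, uplinks=UPLINKS):
    """'Route based on IP hash': the uplink is chosen per src/dst pair,
    so one VM's conversations spread across BOTH uplinks."""
    h = int(ipaddress.ip_address(src_ip)) ^ int(ipaddress.ip_address(dst_ip))
    return uplinks[h % len(uplinks)]

# One VM, many destinations: port-ID keeps it on a single vmnic, while
# IP hash can place different conversations on different vmnics. If the
# upstream vPC port-channel hashes on IP but the port-group pins by port
# ID (or vice versa), the two sides disagree about where that VM's MAC
# should appear -- the asymmetry described above.
assert port_id_uplink(7) == port_id_uplink(7)
assert ip_hash_uplink("10.0.0.5", "10.0.1.9") in UPLINKS
```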

I was in a training class recently where they spoke about ECMP and how it “converges” if a link goes down. Let me just say: that is absolutely incorrect, and is just as bad as saying “I have two class C’s”; it really doesn’t sit well with most people.

With ECMP you’re actually installing multiple routes of the same cost into the routing table, and you’re going to load balance on either a per-packet or per-flow basis, with per-flow being preferred because of the nature of TCP operations. How flows are balanced across links is determined by the algorithm used; most implementations hash on packet header fields.

Please understand, ECMP doesn’t mean the links are of EQUAL bandwidth and latency, just that they’re “equal” from a metric cost perspective. When a link goes down there is absolutely no convergence taking place; the packets/flows just get routed out of one of the other available, equal-cost links. Please stop saying they’re “converging”, because that makes most people think there is either a dynamic computation taking place with a routing protocol or the router is having to install a new route from the RIB into the FIB.
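The point can be shown in a few lines of Python (a simplified per-flow hash; real ASICs use vendor-specific hash functions over the 5-tuple):

```python
import hashlib

def ecmp_pick(flow, links):
    """Per-flow ECMP: hash the 5-tuple and take it modulo the number of
    live equal-cost links. Nothing here resembles an SPF run."""
    digest = hashlib.md5(repr(flow).encode()).digest()
    return links[int.from_bytes(digest[:4], "big") % len(links)]

flow = ("10.0.0.1", "10.0.1.1", "tcp", 49152, 443)
links = ["Eth2/3", "Eth2/4", "Eth2/5"]
chosen = ecmp_pick(flow, links)
assert chosen in links

# "Failure" of the chosen link: drop it from the list and the very same
# hash instantly lands on a survivor -- no reconvergence, no new route
# installation, just one fewer next-hop to modulo over.
survivors = [l for l in links if l != chosen]
assert ecmp_pick(flow, survivors) in survivors
```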

We know that for switches to cooperate inside each region the following must be configured the same:

  • Name – Case Sensitive
  • Revision – Any number, but should be the same
  • Instance mappings and their respective VLANs

Now, what about the VLANs themselves? What about switches and security? I looked all over for this answer; it was vague at best, and each vendor’s documentation said something a little different from the others. However, this is just my preliminary testing: I added multiple instances to my spanning-tree setup on my Cisco Catalyst 3750. My scenario was as follows, along with the outputs:

  1. Two instances
  2. Instance 1 had all the real VLANs that were actual VLANs on the switch
  3. Instance 2 had 2 VLANs mapped
    • The first test of MST instance 2 was with neither VLAN defined on the switch
    • The second test was with one VLAN defined and the other not
    • The third test was with both VLANs defined

MST instances themselves do not communicate the actual VLANs or VLAN mappings, and the IST/CIST does not communicate the actual VLAN-to-instance mapping either. Instead, we rely on IST0 to transmit the BPDUs that contain our information: name, revision, and the configuration digest (also called the checksum or hash). That digest is the value each switch calculates to determine whether they’re operating in the exact same region or in different regions, and it is calculated from the parameters present in the MST configuration table. Want to know more about the hashing? Here is a link: 802.1s explained.
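For a feel of how the digest works, here is a conceptual Python sketch. The fixed HMAC-MD5 signature key below is the one published in 802.1s, but the serialization here is simplified; the real digest is computed over the standard’s exact 4096-entry VLAN-to-instance table layout:

```python
import hashlib
import hmac

# Fixed signature key published in IEEE 802.1s for the MST config digest
MST_KEY = bytes.fromhex("13AC06A62E47FD51F95D2BA243CD0346")

def config_digest(vlan_to_instance):
    """Conceptual digest of the VLAN-to-instance table: identical
    mappings yield an identical digest, which (together with name and
    revision) defines the region."""
    table = b"".join(
        vlan_to_instance.get(vlan, 0).to_bytes(2, "big")
        for vlan in range(1, 4095))
    return hmac.new(MST_KEY, table, hashlib.md5).hexdigest()

same_a = config_digest({10: 1, 20: 1, 30: 2})
same_b = config_digest({10: 1, 20: 1, 30: 2})
diff = config_digest({10: 1, 20: 2, 30: 2})
assert same_a == same_b   # same mapping -> same region
assert same_a != diff     # one VLAN moved -> different regions
```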

The information is long and boring, but do a search for “digest” and you’ll find yourself deep into figuring out how this all works. The test results are soon to come, I am working on both Catalyst and Nexus outputs to benefit not just enterprise and branch, but for those in the data center who’re having to work in vPC hybrid environments with STP attached devices. More to come…

So, most of you probably got here because you’re on your CCIE track, you’re hearing a ton about the 32-bit words in the IPv4 header, and you’re looking for an answer. Most people may never know exactly what is meant by “word”, and this can lead to some confusion. First, the definition of a word from Wikipedia is:

“A word is basically a fixed-sized group of digits (binary or decimal) that are handled as a unit by the instruction set or the hardware of the processor. The number of digits in a word (the word size, word width, or word length) is an important characteristic of any specific processor design or computer architecture.”

Essentially, this means each group of 32 bits, 32 positions where each value can be 0 or 1 in binary, is a “WORD”. Thus, when they reference the IPv4 header length in a packet capture, you’ll see the size of the header. That header size comes from the raw header: the low nibble of the very first byte, right after the Version field, is the IHL (Internet Header Length). You’ll find a hexadecimal value there, let’s say D, which is 13; thus, you have 13 different 32-bit words.

Now, 13*32=416 bits. Take 416/8=52 bytes in the IPv4 header. Why 8? There are 8 bits in each byte. So, the next time you hear someone mention there are X 32-bit words in an IPv4 header, you’ll have some idea of what they’re talking about.
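You can verify that arithmetic straight off the first header byte; a small Python helper (purely illustrative):

```python
def ipv4_header_bytes(first_byte):
    """First IPv4 header byte: version in the high nibble, IHL in the
    low nibble. IHL counts 32-bit words, so multiply by 4 for bytes."""
    version = first_byte >> 4
    ihl_words = first_byte & 0x0F
    assert version == 4, "not an IPv4 header"
    return ihl_words * 4

# 0x45 -> version 4, IHL 5 words -> the minimal 20-byte header
assert ipv4_header_bytes(0x45) == 20
# 0x4D -> version 4, IHL 0xD = 13 words -> 13 * 32 bits = 52 bytes
assert ipv4_header_bytes(0x4D) == 52
```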

Providing you’re either 1) using the hostname of the device, or 2) positive it will receive the same IP, here is how to renew the lease on a machine you’re connected to over RDP that obtains its IP parameters via DHCP:

ipconfig /release && ipconfig /renew

As simple as that. In fact, you can use the same “&&” chaining on a Linux box with a BASH shell with whatever interface configuration commands you use, if you don’t have a script which already does it for you.

Confused about getting QoS working on the Nexus 9300 platform (I worked with the 9396PX)? Well, if you’re coming from the Nexus 5500 platform, you’re in for a little tweaking to get this working, as some things are different. I will quickly outline them and move on to some sample configuration:

  • MTU is set on an interface level
  • System defined queuing class-maps
  • Egress queues (queue 0 is the default; queues 1-3 are pre-mapped using the system-defined class-maps mentioned above)
  • Both access and trunk ports, by default, treat all traffic as if it had CoS 0, moving it into the default queue
  • QOS ingress service-policy must be applied to ports or port-channels to classify traffic

Here is some basic configuration for setting the QOS policy to classify:

class-map type qos match-all RUBY
match cos 4
class-map type qos match-all EMERALD
match cos 2
class-map type qos match-all DIAMOND
match cos 6

policy-map type qos QOS_POLICY
class RUBY
set qos-group 2
class EMERALD
set qos-group 1
class DIAMOND
set qos-group 3

interface port-channel20
switchport mode trunk
switchport trunk allowed vlan all
spanning-tree port type edge trunk
mtu 9216
service-policy type qos input QOS_POLICY

Now, let’s view the system defined queuing class-maps so you can get an idea of this:

class-map type queuing match-any c-out-q3
Description: Classifier for Egress queue 3
match qos-group 3
class-map type queuing match-any c-out-q2
Description: Classifier for Egress queue 2
match qos-group 2
class-map type queuing match-any c-out-q1
Description: Classifier for Egress queue 1
match qos-group 1
class-map type queuing match-any c-out-q-default
Description: Classifier for Egress default queue
match qos-group 0

Finally, let’s assign some bandwidth allocation around those queues:

policy-map type queuing QUEUING_POLICY
class type queuing c-out-q1
bandwidth percent 10
class type queuing c-out-q2
bandwidth percent 15
class type queuing c-out-q3
bandwidth percent 25
class type queuing c-out-q-default
bandwidth percent 50

Now, we apply this QUEUING policy to the system-qos:

system qos
service-policy type queuing output QUEUING_POLICY

I’ll update this more and more as I encounter more QoS with the 9300 platform.