I recently ran across the following issue on a DMVPN hub-and-spoke network with several sites. The network was behaving perfectly fine and the spokes were happily communicating with the servers behind the hub, until Windows 10 came along… The customer started to see spoke-to-spoke traffic, which should not have happened in the first place. A packet capture on one of the spoke routers showed TCP traffic between the spokes on TCP port 7680.
Some research on the Internet revealed that this is part of a new Windows 10 feature called “Delivery Optimization”. It basically checks whether an update has already been downloaded by another PC in the LAN and then fetches it in a peer-to-peer manner. This is of course a great idea if all the PCs are in the same LAN, because it reduces the load on the Internet line during large downloads. But in a managed environment with controlled updates (via WSUS), you do not want this enabled. So the customer changed the group policy and disabled “Delivery Optimization”. Problem solved… Well… not completely.
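For domain-joined machines this is done through the “Download Mode” group policy under Delivery Optimization; as far as I know the policy writes the registry value below, so the same effect can be sketched with a one-liner (value 0 should mean HTTP-only, i.e. no peer-to-peer — verify against current Microsoft documentation before rolling this out):

```
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\DeliveryOptimization" /v DODownloadMode /t REG_DWORD /d 0 /f
```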
A couple of weeks later we still saw spoke-to-spoke traffic and started troubleshooting further. Not all computers at the customer’s site were part of the domain, and it is safe to assume that this default setting is not changed by many users. So how does Windows 10 “know” which computers are on the local network? It could of course use a port scan, but that is rather suspicious behavior. Instead, Windows 10 appears to use SSDP multicast for this. That sounds quite logical, as SSDP uses 239.255.255.250, a locally scoped multicast address, i.e. it should be dropped at the Internet edge router/firewall.
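To give an idea of what such a discovery probe looks like on the wire, here is a minimal sketch of an SSDP M-SEARCH request in Python. The multicast group and port (239.255.255.250:1900) are standard SSDP; the search target "ssdp:all" is an illustrative choice, not necessarily what Delivery Optimization itself uses.

```python
import socket

# Standard SSDP multicast group and port (HTTP-over-UDP).
SSDP_GROUP = "239.255.255.250"
SSDP_PORT = 1900

def build_msearch(st="ssdp:all", mx=2):
    """Build an SSDP M-SEARCH discovery request."""
    return (
        "M-SEARCH * HTTP/1.1\r\n"
        f"HOST: {SSDP_GROUP}:{SSDP_PORT}\r\n"
        'MAN: "ssdp:discover"\r\n'
        f"MX: {mx}\r\n"
        f"ST: {st}\r\n"
        "\r\n"
    ).encode("ascii")

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# TTL 1 keeps the probe link-local -- the behaviour you would *expect*
# from SSDP, although as described below that is not what we observed.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
print(build_msearch().decode())
```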
For small (home) networks that makes perfect sense, but in a larger enterprise network where multicast is configured for other network endpoints, you might want to restrict this specific multicast address…
During the second round of troubleshooting I saw that the SSDP address was registered at the rendezvous point (configured on the hub router), with specific endpoints “listening in” on the address. So although SSDP should remain local and you would expect the TTL to be set to 1 (quite similar to Bonjour in Apple networks), this was not happening for some traffic.
So, how to block SSDP from reaching the hub router and thereby enabling spoke-to-spoke traffic in what Windows thinks is LAN traffic, while it is actually WAN traffic? (There are also other, security-related reasons to block SSDP/UPnP on your network, as it has its share of vulnerabilities and poor implementations.)
NBAR unfortunately does not recognize SSDP as an application protocol, and using modular QoS with a drop action for the specific traffic did not work either. An ingress ACL is of course possible, but it has its drawbacks in manageability and the risk of blocking too much traffic. The solution turned out to be quite easy: introduce a fake RP address at the spoke router for a specific set of multicast addresses and simply drop traffic to that RP address, just like remotely triggered black hole (RTBH) filtering does.
The configuration on the spoke router becomes:
interface Null0
 no ip unreachables
! no ip unreachables protects the control plane and prevents ICMP unreachable messages for blackholed traffic
access-list 1 permit 239.255.255.250
ip pim rp-address 192.0.2.255 1
ip route 192.0.2.255 255.255.255.255 Null0
This piece of configuration tells the router not to send ICMP unreachables for any traffic dropped on the null interface. access-list 1 specifies which multicast addresses you are interested in, and it is referenced in the ip pim rp-address 192.0.2.255 1 command to tell IOS that this RP address must only be used for the groups matched by access-list 1. And with the static route, IOS knows that the RP address is reached via Null0, i.e. the traffic is dropped.
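To check the result on a spoke, the usual IOS show commands apply (output omitted here; the addresses match the example above):

show ip pim rp mapping
show ip route 192.0.2.255
show ip mroute 239.255.255.250

The RP mapping should list 192.0.2.255 for the group matched by access-list 1, and the route to it should point at Null0.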
I think this is a very nice, scalable and elegant way to effectively block multicast traffic at the edge when necessary. Part of the credit for this solution goes to one of my colleagues at YaWorks, Jelmer Siljee, who pointed me to using an alternative RP address.