The Interceptor appliance is a Riverbed-specific appliance which redirects traffic to a cluster of Steelhead appliances. It does not take part in the optimization of the traffic itself; it only redirects it.
An Interceptor cluster consists of two types of nodes: one or more Interceptor appliances and one or more Steelhead appliances.
While the Interceptor appliance is located inline in the traffic flow, the Steelhead appliances in the cluster are connected only via their WAN interfaces.
Figure 5.211. Network setup for the Interceptor cluster
.----------.
|  Router  |
'----------'
     |
 .--------.
 |   IC   |
 '--------'
     |       .--------.
     |   .---|  SH 1  |
     |   |   '--------'
     |   |   .--------.
     |   |---|  SH 2  |
     |   |   '--------'
     |   |
.----------.
|  Switch  |
'----------'
There are several traffic flows:
Unoptimized traffic, which just flows through the Interceptor appliance. This includes the inner channels of optimized TCP sessions using the Correct Addressing or Port Transparency WAN Visibility methods which do not terminate on this Interceptor cluster.
New traffic that could be optimized:
A SYN+ packet comes in via the WAN interface and gets GRE encapsulated and forwarded to one of the Steelhead appliances in the cluster.
A naked SYN comes in via the LAN interface and gets GRE encapsulated and forwarded to one of the Steelhead appliances in the cluster.
A naked SYN/ACK comes in via the LAN interface and gets GRE encapsulated and forwarded to the correct Steelhead appliance.
A SYN/ACK+ comes in via the WAN interface and gets GRE encapsulated and forwarded to the correct Steelhead appliance.
Traffic from the server to the client, which is received on the LAN side. The destination IP address in the IP header gets swapped for the IP address of the in-path interface of the Steelhead appliance handling the session, and the packet is sent to that Steelhead appliance.
The inner channel of an optimized TCP session, set up with the Full Transparency WAN Visibility mode, from the client-side Steelhead appliance to the server-side Steelhead appliance. This gets modified from Full Transparency mode to Correct Addressing mode and is then sent to the correct Steelhead appliance.
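The GRE redirection used in the flows above can be sketched as follows. This is a minimal illustration of wrapping an IPv4 packet in an outer IPv4 + GRE header, not Riverbed's actual implementation; field values such as the TTL are arbitrary and the IP checksum is left at zero:

```python
import struct

GRE_PROTO_IPV4 = 0x0800  # EtherType for IPv4 carried inside GRE
IPPROTO_GRE = 47         # IP protocol number for GRE

def ip_header(src: bytes, dst: bytes, payload_len: int, proto: int) -> bytes:
    """Build a minimal IPv4 header (no options, checksum left at 0)."""
    ver_ihl = (4 << 4) | 5              # version 4, header length 5 words
    total_len = 20 + payload_len
    return struct.pack("!BBHHHBBH4s4s",
                       ver_ihl, 0, total_len, 0, 0, 64, proto, 0, src, dst)

def gre_encapsulate(inner_packet: bytes, ic_ip: bytes, sh_ip: bytes) -> bytes:
    """Wrap an original IP packet in a GRE tunnel from the Interceptor
    in-path interface to the selected Steelhead in-path interface."""
    gre_header = struct.pack("!HH", 0, GRE_PROTO_IPV4)  # no flags, proto = IPv4
    payload = gre_header + inner_packet
    return ip_header(ic_ip, sh_ip, len(payload), IPPROTO_GRE) + payload
```

The Steelhead appliance strips the outer IP and GRE headers again and processes the inner packet as if it had arrived directly on its WAN interface.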
The communication between the individual nodes happens via the Connection Forwarding protocol over their in-path interfaces. At startup of the redirection service, the Interceptor appliance contacts all other nodes, both Interceptor and Steelhead appliances, and exchanges capability information, which includes the IP addresses of the in-path interfaces and, for the Steelhead appliances, their capacity and health status.
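The capability information exchanged at startup can be pictured with a small record type. The field names below are hypothetical, chosen only to mirror the items the text lists; the actual Connection Forwarding wire format is not documented here:

```python
from dataclasses import dataclass

@dataclass
class NeighborCapability:
    """Illustrative capability record exchanged at startup of the
    redirection service (field names are assumptions, not the protocol's)."""
    inpath_ip: str        # IP address of the node's in-path interface
    is_steelhead: bool    # Steelhead appliances also report capacity/health
    capacity: int = 0     # number of optimized TCP sessions it can handle
    healthy: bool = True  # current health status

# Example cluster state as an Interceptor might learn it:
neighbors = [
    NeighborCapability("192.168.1.6", True, capacity=2300),
    NeighborCapability("192.168.1.7", True, capacity=2300),
]
```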
Figure 5.212. Setup of connection forwarding session between Interceptor and Steelhead appliance
IC interceptor[25354]: [neigh/client/channel.INFO] - {- -} Establishing neighbor channel f \
rom 192.168.1.12 to 192.168.1.6:7850
IC interceptor[25354]: [neigh/client/channel.INFO] - {- -} Neighbor channel to 192.168.1.6 \
:7850 established.
If the neighbouring Steelhead appliance isn't reachable, the setup will time out:
Figure 5.213. Failure in the setup of connection forwarding session between Interceptor and Steelhead appliance
IC interceptor[4937]: [neigh/client/channel.INFO] - {- -} Establishing neighbor channel fr \
om 192.168.1.12 to 192.168.1.6:7850
IC interceptor[4937]: [neigh/client/channel.WARN] - {- -} Connection failure: couldn't con \
nect to neighbor 192.168.1.6:7850. Connection timed out
If the optimization service on the neighbouring Steelhead appliance isn't running, the connection will be refused:
Figure 5.214. Failure in the setup of connection forwarding session between Interceptor and Steelhead appliance
IC interceptor[4937]: [neigh/client/channel.INFO] - {- -} Establishing neighbor channel fr \
om 192.168.1.12 to 192.168.1.6:7850
IC interceptor[4937]: [neigh/client/channel.WARN] - {- -} Connection failure: couldn't con \
nect to neighbor 192.168.1.6:7850. Connection refused
If the neighbour Steelhead appliance becomes unreachable, the session will time out after three seconds:
Figure 5.215. Failure in the connection forwarding session between Interceptor and Steelhead appliance
IC interceptor[25354]: [neigh/client/channel.INFO] - {- -} Establishing neighbor channel f \
rom 192.168.1.12 to 192.168.1.6:7850
IC interceptor[25354]: [neigh/client/channel.INFO] - {- -} Neighbor channel to 192.168.1.6 \
:7850 established.
IC interceptor[25354]: [neigh/client/channel.WARN] - {- -} No response from neighbor 192.1 \
68.1.6:7850. Count = 1
IC interceptor[25354]: [neigh/client/channel.WARN] - {- -} No response from neighbor 192.1 \
68.1.6:7850. Count = 2
IC interceptor[25354]: [neigh/client/channel.WARN] - {- -} No response from neighbor 192.1 \
68.1.6:7850. Count = 3
IC interceptor[25354]: [neigh/client/channel.WARN] - {- -} No response from neighbor 192.1 \
68.1.6:7850. Neighbor is unreachable.
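The keep-alive behaviour shown in the log, where the neighbour is declared unreachable after three consecutive missed responses, can be sketched as (a simplification; the real service runs this continuously rather than over a finite list):

```python
def monitor(responses: list) -> str:
    """Walk a sequence of keep-alive results (True = reply received) and
    report whether the neighbor gets declared unreachable, mirroring the
    'No response ... Count = N' log lines: three consecutive misses and
    the neighbor is considered down."""
    missed = 0
    for got_reply in responses:
        if got_reply:
            missed = 0            # any reply resets the counter
        else:
            missed += 1           # logged as 'Count = N'
            if missed >= 3:
                return "unreachable"
    return "reachable"
```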
If the optimization service on the neighbour Steelhead appliance stops, the session will be terminated too:
Figure 5.216. Failure in the connection forwarding session between Interceptor and Steelhead appliance
IC interceptor[25354]: [neigh/client/channel.INFO] - {- -} Establishing neighbor channel f \
rom 192.168.1.12 to 192.168.1.6:7850
IC interceptor[25354]: [neigh/client/channel.INFO] - {- -} Neighbor channel to 192.168.1.6 \
:7850 established.
IC interceptor[25354]: [neigh/server/channel.NOTICE] - {- -} End of stream reading neighbo \
r from 192.168.1.6:7850. Peer maybe down.
When a new optimizable TCP session is seen, the Interceptor appliance selects the Steelhead appliance to which it will be redirected and uses the Connection Forwarding mechanism to inform all the other Interceptor and Steelhead appliances about it. If a new optimized TCP session gets set up to a Steelhead appliance directly, for example via a Fixed Target rule, that Steelhead appliance will also inform all the nodes in the Interceptor cluster about it.
When a new optimizable TCP session is seen by an Interceptor appliance, it needs to be forwarded to one of the Steelhead appliances in the cluster. The best data reduction is obtained between the two Steelhead appliances which have exchanged the most references, so the Interceptor cluster always tries to send the TCP sessions from a certain remote Steelhead appliance to the same Steelhead appliance in the cluster. This behaviour is known as peering affinity.
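A peering-affinity table can be sketched as below. The least-loaded fallback for previously unseen remote peers is an assumption for illustration; the actual selection algorithm used by the Interceptor is not documented here:

```python
# remote Steelhead in-path IP -> local Steelhead chosen for it
affinity = {}

def select_steelhead(remote_peer: str, cluster: list, load: dict) -> str:
    """Prefer the local Steelhead that already peers with this remote
    Steelhead (best data-store reference overlap); otherwise pick the
    least-loaded one (assumed tie-breaker) and remember the choice."""
    if remote_peer in affinity:
        return affinity[remote_peer]          # sticky: keep existing peering
    choice = min(cluster, key=lambda sh: load.get(sh, 0))
    affinity[remote_peer] = choice
    return choice
```

The important property is stickiness: once a remote Steelhead appliance has been paired with a local one, later sessions from that peer keep going to the same local appliance even if the load balance has shifted.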
Because of peering affinity, it is possible for all traffic going through an Interceptor cluster with multiple Steelhead appliances to end up on the same Steelhead appliance. With the Fair Peering feature, the Interceptor cluster attempts to distribute the traffic from remote Steelhead appliances equally over the Steelhead appliances in the cluster.
With the Pressure Monitoring feature enabled, the Interceptor appliances keep track of the memory usage and the disk pressure on the Steelhead appliances. With the Capacity Adjustment feature, the Interceptor appliances will dynamically reduce the initially exchanged capacity of a Steelhead appliance if there is pressure on it. This decreases the number of TCP sessions being redirected to that Steelhead appliance, giving it a chance to reduce the pressure. Once every hour the Interceptor appliance decides whether the pressure on the Steelhead appliances has been reduced far enough to increase the capacity again.
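A capacity-adjustment step might look like the sketch below. The halving and doubling factors are invented for illustration; only the back-off-under-pressure and periodic-recovery behaviour, capped at the initially exchanged capacity, comes from the text:

```python
def adjust_capacity(base_capacity: int, current: int, under_pressure: bool) -> int:
    """One (hourly) capacity-adjustment decision for a Steelhead appliance.

    base_capacity: the capacity exchanged at startup.
    current:       the currently advertised (possibly reduced) capacity.
    The halve/double step sizes are assumptions, not Riverbed's values.
    """
    if under_pressure:
        return max(current // 2, 1)             # back off while pressure lasts
    return min(current * 2, base_capacity)      # recover, never above base
```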
When a Steelhead appliance in an Interceptor cluster is replaced, its data store will be new to the network and the peering affinity algorithm will therefore not forward any TCP sessions to it. Only when the Interceptor appliances consider the other Steelhead appliances in the cluster to be in admission control, either genuinely or via capacity adjustment, will new TCP sessions be forwarded to the new Steelhead appliance.