CIFS latency optimization works in the following ways:
Directory index pre-fetching and directory meta-data subscription.
File read-ahead and write-behind.
CIFS latency optimization does not work if:
The client or server operating system is not supported. For example, the AS/400 CIFS client is not a supported client.
The CIFS traffic is signed and the Active Directory integration has failed. For example, Active Directory user delegation is not enabled in the Active Directory configuration for the CIFS server being optimized. Another example is a too old RiOS version: the Windows 2008 R2 server was not supported for Active Directory integration until RiOS version 6.1.
The server-side Steelhead appliance cannot get a lock on a file. This can happen on CIFS file servers, such as the NetApp file server, where this file locking is a non-default option.
The SMB version required by the client is not supported by the Steelhead appliance. For example, SMBv2 is supported only from RiOS version 6.5 and SMBv3 only from RiOS version 8.5.
In these cases the interactive performance of the CIFS session will be like that of an unoptimized CIFS session, and during file transfers only data reduction will happen.
When a Windows client attaches to a CIFS share, it will set up two TCP sessions towards the server: one on port 445 and one on port 139. The one on port 139 will be terminated immediately if the one on port 445 gets set up properly.
Figure 8.1. Example of a TCP session on port 139 being terminated immediately
CSH sport[6693]: [splice/probe.NOTICE] 0 {- -} (locl: 10.0.1.6:50499 clnt: 10.0.1.1:2686 s \
    erv: 192.168.1.1:139) No inner channel created for this probe splice
CSH sport[6693]: [clientsplice.ERR] - {clnt:10.0.1.1:2690 peer: 192.168.1.6:7800 serv:192. \
    168.1.1:139} End of stream reading connect result
The first line gets logged when the TCP RST packet is received by the client-side Steelhead appliance before the inner channel was set up; the second line gets logged when the TCP session gets closed by the server without any data having been sent or received over the TCP connection.
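This double connection setup can be mimicked outside of Windows. The following Python sketch is illustrative only: the server address and timeout are placeholders, and the connection attempts are made sequentially where the real client races both in parallel. It connects to both ports and keeps only the port 445 session:

import socket

def connect_cifs(server, timeout=5.0):
    """Mimic the Windows client: try direct SMB (port 445) and the
    NetBIOS session service (port 139), prefer 445 if it comes up."""
    sessions = {}
    for port in (445, 139):
        try:
            sessions[port] = socket.create_connection((server, port), timeout)
        except OSError:
            pass  # port closed, filtered or timed out
    if 445 in sessions and 139 in sessions:
        # This close is the immediate termination seen on port 139.
        sessions.pop(139).close()
    return sessions

sessions = connect_cifs("192.168.1.1")   # illustrative file server address
print("CIFS session on ports:", sorted(sessions))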
Since the Steelhead appliance is a WAN optimizer and not a network file system, all file system related operations and all data still come from the remote file servers. Requests for the creation, opening, closing and removal of a file still go fully to the remote file server and are answered by it. Reads from and writes to files will be locally acknowledged by the Steelhead appliance and then forwarded to the remote file server.
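The local acknowledgement of writes can be pictured as a store-and-forward queue. The following Python sketch is a conceptual model only, not Riverbed code: metadata operations block on the remote server while writes are acknowledged immediately and flushed in the background:

import queue
import threading

class WriteBehindProxy:
    """Conceptual model: metadata operations wait for the real server,
    writes are acknowledged locally and forwarded asynchronously."""

    def __init__(self, server):
        self.server = server
        self.pending = queue.Queue()
        threading.Thread(target=self._flush, daemon=True).start()

    def open_file(self, path):
        # Create/open/close/remove travel to the remote file server.
        return self.server.open_file(path)

    def write(self, handle, data):
        self.pending.put((handle, data))   # forwarded in the background
        return True                        # local ack: no WAN round trip

    def _flush(self):
        while True:
            handle, data = self.pending.get()
            self.server.write(handle, data)  # the real write over the WAN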
To display the response times in Wireshark, go to Statistics -> Service Response Times -> SMB.
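The same statistics can be generated on the command line with tshark; assuming the capture has been saved as cifs.pcap:

$ tshark -q -r cifs.pcap -z smb,srt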
Figure 8.2. Wireshark Response time statistics for an unoptimized CIFS session
=================================================================
SMB SRT Statistics:
Filter:
Commands                 Calls  Min SRT   Max SRT   Avg SRT
Close                    1      0.133577  0.133577  0.133577
Open AndX                1      0.133348  0.133348  0.133348
Read AndX                18     0.368455  1.169644  0.902801
Tree Disconnect          2      0.133128  0.133643  0.133386
Negotiate Protocol       1      0.132288  0.132288  0.132288
Session Setup AndX       2      0.132696  0.135587  0.134142
Tree Connect AndX        2      0.133005  0.133278  0.133142
Query Information Disk   1      0.132691  0.132691  0.132691
Transaction2 Commands    Calls  Min SRT   Max SRT   Avg SRT
FIND_FIRST2              2      0.136227  0.558457  0.347342
FIND_NEXT2               4      0.133153  0.135975  0.134642
QUERY_FILE_INFO          1      0.133137  0.133137  0.133137
GET_DFS_REFERRAL         1      0.132695  0.132695  0.132695
NT Transaction Commands  Calls  Min SRT   Max SRT   Avg SRT
=================================================================
All SMB commands take at least the 130 ms RTT of the WAN link.
Figure 8.3. Wireshark Response time statistics for an optimized CIFS session
=================================================================
SMB SRT Statistics:
Filter:
Commands                 Calls  Min SRT   Max SRT   Avg SRT
Close                    1      0.000212  0.000212  0.000212
Open AndX                1      0.134227  0.134227  0.134227
Read AndX                18     0.354067  0.887230  0.646874
Tree Disconnect          1      0.000169  0.000169  0.000169
Negotiate Protocol       1      0.133985  0.133985  0.133985
Session Setup AndX       2      0.133340  0.136641  0.134991
Tree Connect AndX        2      0.132323  0.134432  0.133378
Query Information Disk   1      0.222203  0.222203  0.222203
Transaction2 Commands    Calls  Min SRT   Max SRT   Avg SRT
FIND_FIRST2              2      0.287443  0.560205  0.423824
FIND_NEXT2               4      0.000094  0.009760  0.004776
QUERY_FILE_INFO          1      0.134086  0.134086  0.134086
GET_DFS_REFERRAL         1      0.133184  0.133184  0.133184
NT Transaction Commands  Calls  Min SRT   Max SRT   Avg SRT
=================================================================
Locally handled commands like Close and Tree Disconnect return immediately. Pre-fetched information takes much less time: the Read AndX commands add up to 18 x 0.647 = about 11.6 seconds optimized versus 18 x 0.903 = about 16.3 seconds unoptimized, and the FIND_FIRST2 and FIND_NEXT2 commands to 2 x 0.424 + 4 x 0.005 = about 0.87 seconds optimized versus 2 x 0.347 + 4 x 0.135 = about 1.23 seconds unoptimized.
The preview feature in the Windows Explorer and the Mac OS X Finder reads only parts of a file to obtain its meta-data, but the read-ahead functionality in the latency optimization service will read much more than the client will ever request. The statistics for that TCP session or for that protocol might therefore be skewed.
Figure 8.4. Skewed statistics on the client-side Steelhead appliance because of read-ahead in the Windows Explorer.
This behaviour might be seen on the client-side Steelhead appliance, where the amount of WAN traffic is significantly larger than the amount of LAN traffic, or when comparing the client-side and server-side LAN traffic, where the amount of LAN traffic on the server-side is significantly larger than on the client-side.
Figure 8.5. Comparison of the client-side and server-side LAN traffic for CIFS latency optimization
XXX
The CIFS read-ahead feature works great when a file is read sequentially, but when the data is accessed randomly it will cause a large number of unnecessary reads on the server and data transfers to the client-side Steelhead appliance. If the extensions of the randomly accessed files are known, they can be put on a blacklist for the CIFS read-ahead feature:
Figure 8.6. Disable the CIFS read-ahead feature for files with the extension "myob"
CSH (config) # protocol cifs big-read-blklst add myob
You must restart the optimization service for your changes to take effect.
CSH (config) # show protocol cifs big-read-blklst
myob
When a request to a file server takes more than 500 milliseconds, the Steelhead appliance will log a message about this:
Figure 8.7. Heads up that the replies from the file servers are slow
CSH sport[27756]: [smbcfe.NOTICE] 379355 {10.0.1.1:49202 192.168.1.1:445} Request nt_creat \
    e_andx still waiting after 0.715137 sec, mid=2496 queue_cnt=1 queue_wait_state=1 enque \
    ue time=1347517794.630794
CSH sport[27756]: [smbcfe.NOTICE] 379355 {10.0.1.1:49202 192.168.1.1:445} Request nt_creat \
    e_andx still waiting after 1.715144 sec, mid=2496 queue_cnt=1 queue_wait_state=1 enque \
    ue time=1347517794.630794
CSH sport[27756]: [smbcfe.NOTICE] 379355 {10.0.1.1:49202 192.168.1.1:445} Request nt_creat \
    e_andx still waiting after 2.715170 sec, mid=2496 queue_cnt=1 queue_wait_state=1 enque \
    ue time=1347517794.630794
CSH sport[27756]: [smbcfe.NOTICE] 379355 {10.0.1.1:49202 192.168.1.1:445} Request nt_creat \
    e_andx still waiting after 4.715130 sec, mid=2496 queue_cnt=1 queue_wait_state=1 enque \
    ue time=1347517794.630794
The request types can be nt_create_andx, nt_read_andx and transaction2.
If this happens on the server-side Steelhead appliance, then check the load on the file server and any speed/duplex issues on the path towards the file server.
If this only happens on the client-side Steelhead appliance, then check the path towards the server-side Steelhead appliance.
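A quick way to compare the two paths is to time the TCP three-way handshake towards port 445 from both locations. A minimal Python sketch, where the address is a placeholder for the file server or the server-side Steelhead appliance:

import socket
import time

def tcp_rtt(host, port=445, samples=5):
    """Average TCP three-way-handshake time in milliseconds."""
    total = 0.0
    for _ in range(samples):
        start = time.monotonic()
        with socket.create_connection((host, port), timeout=5.0):
            total += time.monotonic() - start
    return total / samples * 1000.0

# Run it towards the file server from behind the server-side Steelhead
# appliance, and towards the server-side Steelhead appliance from the
# client side, then compare the two numbers.
print("average handshake time: %.1f ms" % tcp_rtt("192.168.1.1"))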
The Write-behind feature stops if the CIFS share is more than 98% full and will be enabled again when the CIFS share is less than 92% full.
Figure 8.8. CIFS write-behind stopping because of a full disk.
CSH sport[14140]: [smbcfe.NOTICE] 679459 {10.0.1.1:3346 192.168.1.1:445} Disk usage reache \
    d threshold (2048 MB free) on share \FOO\BAR, disabling write/setfile pre-acking
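The two thresholds form a simple hysteresis, so the feature does not flap on and off around a single value. A minimal sketch of that logic, where only the 98% and 92% thresholds come from the behaviour described above and the rest is illustrative:

DISABLE_ABOVE = 98   # percent full: stop pre-acking writes
ENABLE_BELOW = 92    # percent full: resume pre-acking writes

class WriteBehindState:
    def __init__(self):
        self.enabled = True

    def update(self, percent_full):
        if self.enabled and percent_full > DISABLE_ABOVE:
            self.enabled = False   # fall back to synchronous writes
        elif not self.enabled and percent_full < ENABLE_BELOW:
            self.enabled = True    # safe to acknowledge locally again
        return self.enabled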
An oplock (opportunistic lock) is a feature which deals with the opening of the same file by multiple clients. When one client holds a lock on a file, no other client can obtain a lock until the first client has released it. If the server-side Steelhead appliance holds the lock, it is able to do better read-ahead optimization.
If the server-side Steelhead appliance cannot obtain the oplock, it will not perform read-ahead on the file and the CIFS performance experienced by the client will suffer:
Figure 8.9. Unable to obtain oplock on a file
CSH sport[22371]: [smbcfe.INFO] 2720339 {10.0.1.1:11847 192.168.1.1:445} process_create_an \
    dx_response file was not granted any oplock File: \Share\book.pdf Desired Access : 0x2 \
    0089 FID : 0x4012
Possible reasons are:
The file server does not support oplocks. Please check the configuration of the file server.
The file server was accessed by a remote client whose CIFS session did not get optimized, or whose CIFS session did not go through this Steelhead appliance.
The file was opened on the console of the file server.
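The relationship between the oplock and read-ahead can be summarized in a few lines of Python. This is a conceptual model only; real oplock levels and break semantics are richer than shown here:

class FileServer:
    """Conceptual model: an exclusive oplock is granted only while no
    other client has the file open, and only the oplock holder can
    safely cache and pre-fetch file data."""

    def __init__(self):
        self.openers = {}   # path -> number of clients with the file open

    def open_file(self, path):
        self.openers[path] = self.openers.get(path, 0) + 1
        oplock_granted = self.openers[path] == 1
        return oplock_granted

server = FileServer()
print(server.open_file(r"\Share\book.pdf"))  # True: read-ahead is safe
print(server.open_file(r"\Share\book.pdf"))  # False: every read now pays
                                             # the full WAN round trip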