Improving the Performance of Asynchronous Web Service Calls

Notice that the title of this post doesn't say that web service performance can be improved through asynchronous calls.  That is on purpose.  This particular post deals with a limitation that affects applications using async web service methods; in fact, it has the potential to affect any application that talks to the Internet.  This is one time I wish my blog were more widely read, because I can pretty much guarantee there are thousands of developers who are unaware that the nice async design they've implemented isn't having the performance-boosting effect they expected.  Thanks to Marc Durand for pointing this out to me.

The limitation I'm talking about is buried in the HTTP/1.1 specification (RFC 2616).  To keep a single client from overwhelming a server, the spec recommends that a client maintain no more than two simultaneous connections to any one server, and the .NET Framework's System.Net classes enforce that limit by default.  What this means is that if your application makes three async calls to the same server, the third call is queued until one of the first two finishes.  I know that this came as a big surprise to me.
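
To see the effect for yourself, here is a minimal sketch using HttpWebRequest directly (the URL is just a placeholder; substitute a server you control).  With the default limit of two, the third request doesn't go out on the wire until one of the first two connections frees up.

using System;
using System.Net;

class ConnectionLimitDemo
{
    static void Main()
    {
        // Placeholder address; any server will demonstrate the behavior.
        Uri target = new Uri("http://example.com/service.asmx");

        // Fire off three async requests to the same host.  Only two
        // connections are opened; the third request waits in line.
        for (int i = 0; i < 3; i++)
        {
            HttpWebRequest request = (HttpWebRequest)WebRequest.Create(target);
            request.BeginGetResponse(OnResponse, request);
        }

        Console.ReadLine();
    }

    static void OnResponse(IAsyncResult result)
    {
        HttpWebRequest request = (HttpWebRequest)result.AsyncState;
        using (WebResponse response = request.EndGetResponse(result))
        {
            Console.WriteLine("Completed: {0}", response.ResponseUri);
        }
    }
}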

Fortunately, there is a configuration setting that can adjust this number without requiring any coding changes.  In the app.config (or machine.config) file, add a connectionManagement section.  Within connectionManagement, each add tag specifies a server address (or * for all other servers) and the maximum number of connections allowed.  The following example allows up to 10 simultaneous connections to 216.221.85.164 and 40 to any other server.

<configuration>
  <system.net>
    <connectionManagement>
      <add address="216.221.85.164" maxconnection="10" />
      <add address="*" maxconnection="40" />
    </connectionManagement>
  </system.net>
</configuration>

For those who prefer to do it in code, you can accomplish the same thing at run time using the ServicePointManager.DefaultConnectionLimit property.  But I was never big on adding code when I didn't have to.
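
If you do go that route, here is a minimal sketch of the programmatic equivalent of the config file above.  Keep in mind that DefaultConnectionLimit only applies to service points created after the property is set, and ServicePointManager.FindServicePoint lets you adjust a single server without touching the global default; the IP address is just the example from the config file.

using System;
using System.Net;

class ConnectionConfig
{
    static void Main()
    {
        // Equivalent of <add address="*" maxconnection="40" />:
        // raise the default limit for every server this process talks to.
        // Set this before making any requests.
        ServicePointManager.DefaultConnectionLimit = 40;

        // Equivalent of the address-specific entry: adjust the limit
        // for one server only (address taken from the example above).
        ServicePoint sp = ServicePointManager.FindServicePoint(
            new Uri("http://216.221.85.164/"));
        sp.ConnectionLimit = 10;
    }
}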