ANCM In-Process Start Failure

Or, alternatively, Failed to launch debug adapter.

Blazor error

These were the two errors I was getting while playing with Blazor. They turn out to have the same cause, although the errors are not particularly indicative of the issue.

First, to get to the heart of the matter, the problem was that the server portion of the Blazor application had a runtime error. It turns out I had forgotten to register a class that was being injected into a server component. What slowed me down was that the messages were not overly precise about the source of the problem.

But what might be more instructive is the different ways the problem could have been discovered. For my project, the server portion of the application was hosted in IIS Express, which was one of the reasons I couldn't immediately see the problem. When the project is launched using the server application directly, a console window opens. That window shows logging information, including, in this case, the runtime error explaining why the server could not be started.

Armed with that information, I added logging to the server code, and the actual problem surfaced: I had forgotten to register a concrete implementation for one of my injected interfaces. My bad. And easy to fix, once you figure out the real problem.
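In case it helps anyone else, the fix itself is a one-line registration in the server project's dependency injection setup. The interface and class names below are invented for illustration; the point is simply that every injected interface needs a concrete implementation registered with the container.

// In the server project's Startup.ConfigureServices (names are hypothetical)
public void ConfigureServices(IServiceCollection services)
{
    services.AddRazorPages();
    services.AddServerSideBlazor();

    // The missing piece: the component injects IWidgetService, so the
    // container needs to know which concrete class to supply for it.
    services.AddScoped<IWidgetService, WidgetService>();
}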

Putting the Developer in DevOps

The term DevOps has been getting a lot of play lately. While it’s possible (albeit unlikely) that DevOps is a passing fad, my personal opinion is that it’s the next logical step in the maturation of the software development process. Which means that, as a developer, it behooves you to become aware of the tasks that make up DevOps and the options which are available to help you accomplish them.

In case it wasn’t immediately apparent, DevOps is a portmanteau of the words “Developer” and “Operations”. The overarching idea is that consideration is given to the needs of the system administrators during the design and development phases of a project. In some cases, this might mean that the administrators themselves work alongside the developers. But at a minimum, the developer must understand the needs of the system administrator after the application goes live and bake the appropriate hooks and instrumentation into the code before it goes live.

This approach is different from the ‘traditional’ approach. For many (most??), the Dev side of the process involves the creation of the software. Ops, on the other hand, are viewed as simply the people who have to deal with the artifact of the developer’s creative genius once it’s in the hands of real people. These two groups have been treated, by management and users alike, as separate entities. Or is that enmities?

However, in the real world, this division is not optimal. It is well documented that the majority of an application’s life will be spent in production. Doesn’t it make sense to include functionality in the application that helps to ensure that this life will be productive, helpful and informative? Of course it does. But being obvious doesn’t mean that it has come to pass. Yet.

But by taking the more holistic view that operational functionality is actually just one more aspect of an application’s functionality, it makes sense to address delivery and instrumentation right from the start. This results in a more robust product and, ultimately, provides more value to the users.

At ObjectSharp’s At the Movies event in May, I had the opportunity to demo a new component that provides some of the functionality necessary to integrate your application with operations. Application Insights is an SDK that is available across most Microsoft platforms, including Web (MVC and Forms), Windows Phone, and Windows Store. Its primary requirement is a live Internet connection over which to send the instrumentation data to a central repository. You (that is, the Dev of DevOps) can submit custom, hierarchical categories, along with corresponding metrics, to the repository. And the repository can be visualized using the portal that is available at Visual Studio Online.
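To give a feel for what submitting that custom data looks like from the Dev side, here is a minimal sketch using the TelemetryClient class from the Application Insights SDK. The event name, category path and metric value are invented for illustration, and the exact API surface has shifted across SDK releases, so treat this as a sketch rather than gospel.

using Microsoft.ApplicationInsights;

var telemetry = new TelemetryClient();

// A hierarchical category expressed as a path-style event name (illustrative)
telemetry.TrackEvent("Checkout/Payment/Submitted");

// A corresponding metric for the same area of the application
telemetry.TrackMetric("Checkout/Payment/DurationMs", 1250);

// Flush before shutdown so the data makes it to the central repository
telemetry.Flush();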

As you might expect, there are more features available to help with DevOps, including server performance tracking, availability monitoring for your Web site (either by hitting a page or by running a simple script) and even a live data stream of events while you are debugging your application. And Application Insights is a locus of regular innovation, with new versions of the SDK being released on a regular cadence. Almost too regular, if you catch my meaning.

You can learn more about Application Insights at the Visual Studio Online site and if you would like to see my demo (or any of the other sessions) from At the Movies, you can find it on Channel 9.

Mocking and Unit Testing

Last Thursday, I had the opportunity to give a presentation on unit testing in general and mocking using MOQ specifically to the London .NET User Group meeting. Many thanks to Tom Walker for his efforts in organizing the group, as well as to the people who took time out of their evening to attend. For the purpose of preserving the effort for posterity (not that any posterity will actually care), the slide deck is available at http://www.slideshare.net/LACanuck/unit-testing-and-mocking-using-moq, while the source code used for the demos can be found at http://1drv.ms/1kQJWhb.

As always, if you have any questions or comments, you’re welcome to drop me an email or contact me on Twitter (@LACanuck).

The Return of At The Movies

Earlier this week, ObjectSharp announced the 2014 version of our annual At The Movies event, to be held on May 8 from 8:30 until noon. For years, ObjectSharp has brought together leading experts in Microsoft technology and presented what’s new and what’s useful. We call it At The Movies because, well, it’s held at the Scotiabank Theatre on John St. in Toronto. And because by doing so, we get to use movie posters as part of the marketing campaign.

Yeah, we have a good time coming up with the various posters. The call for ideas amongst the many ObjectSharp associates is a good indication that spring is coming. And the creativity and execution of the ideas is worth waiting for. But let’s start with what you get out of coming to At the Movies.

First, the list of topics. As always, we go with things that you want to hear about. Subjects that are at the leading edge of technology, but are currently available so that you can go back to your office and start to use them immediately. This year, we’re covering the following.

- Team Foundation Server 2013

- Visual Studio 2013

- Windows 8 (from Tablets to Phones)

- Azure

- SharePoint 2013

 

If you work in the .NET world, these are areas that you need to know about. They can make your life easier and your development process more efficient. And the speakers that we have covering these topics are experts, among the best in the country. They include:

  • Dave Lloyd – Microsoft ALM MVP and Team Foundation Server expert extraordinaire
  • David Totzke and Lori Lalonde – Authors of Windows Phone 8 Recipes: A Problem Solving Approach
  • Colin Bowern – Solutions Architect, former MVP and recent émigré to New Zealand
  • Ali Aliabadi – 10+ year SharePoint developer, architect and trainer
  • Bruce Johnson – Microsoft MVP and author of a number of Visual Studio and Azure books

In other words, join us for a morning of entertaining speakers talking about relevant topics. There really isn’t another event like it in Toronto. And even better…it’s free!

To sign up, visit http://www.objectsharp.com/atm. Do it now. It’s almost guaranteed that we’ll sell out quickly.

It’s starting

And there is no escape...

(Poster: Postergeist)

JsonConvert.Serialize Fails Silently

I ran into an interesting issue with JSON.NET over the weekend. Specifically, while I was serializing an object, it would fail silently. No exception was raised (or could even be trapped with a try-catch). However, the call to Serialize did not return and the application terminated.

The specific situation in which the call to Serialize was being made was the following:

List<Customer> _customers;

Task creationTask = new Task(() =>
{
    _customers = new List<Customer>();
    // Do stuff to build the list of customers
});
creationTask.ContinueWith(t =>
{
    serializeCustomer();
});

creationTask.Start();

Now, the actual call to JsonConvert.Serialize is found in the serializeCustomer method. Nothing special there, other than the fact that it is the method that actually fails. But the reason for the failure is rooted in the code snippet shown above.

This was the code as originally written. It was part of a WPF application that collected the parameters. And it worked just fine. However, the business requirements changed slightly and I had to change the WPF application to a console app where the parameters are taken from the command line. No problem. However, while there was a good reason to run the task in the background in a WPF application (so that the application doesn’t appear to be hung), that is not a requirement for a console app. And to minimize the code change as I moved from WPF to Console, I changed a single line of code:

creationTask.RunSynchronously();

Now the call to JsonConvert.Serialize in the serializeCustomer method would fail. Catastrophically. And silently. Not really much of anything available to help with the debugging.

Based on the change, it appears that the problem is related to threading. Although it might not be immediately obvious, the ContinueWith method results in the creation of a Task object. The process represented by this object will be executed on a separate thread from the one that started it. So any issues that relate to cross-thread execution have the potential to cause a problem. I’m not sure, but I suspect that was the issue in this case. When I changed the code to be as follows, the problem went away.

List<Customer> _customers;

Task creationTask = new Task(() =>
{
    _customers = new List<Customer>();
    // Do stuff to build the list of customers
});

creationTask.RunSynchronously();
serializeCustomer();

Now could I have eliminated the need for the Task object completely? Yes. And in retrospect, I probably should have. However if I had, I wouldn’t have had the material necessary to write this blog post. And the knowledge of how JsonConvert.Serialize operates when using Tasks was worthwhile to have, even if it was learned accidentally.
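For completeness, a sketch of what that task-free version would look like, using the same hypothetical serializeCustomer method and customer-building code from above:

List<Customer> _customers = new List<Customer>();
// Do stuff to build the list of customers

// With no Task involved, the serialization runs on the calling thread
// and any exception surfaces in the usual way.
serializeCustomer();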

Cloud Computing in 2014

As 2013 came to a close, I put the wraps on my latest book (Professional Visual Studio 2013). While I’m not quite *done* done, all that’s left is to review the galleys of the chapters as they come back from the editor. Work, yes. But not nearly as demanding as everything that has gone before.

As well, since I’ve now published four books in the last 25 months, I’m a little burned out on writing books. I’m sure that I’ll get involved in another project at some point in the future, but for at least the next 6 months, I’m saying ‘no’ to any offer that involves writing to a deadline.

Yet, the need to write still burns strongly in me. I really can’t *not* write. So what that means is that my blogging will inevitably increase. Be warned.

To start the new year, I thought I’d get into an area that I’m moderately familiar with: Cloud Computing. And for this particular blog, it being the start of the year and all, a prediction column seemed most appropriate. So here we go with 5 Trends in Cloud Computing for 2014.

Using the Cloud to Innovate

One of the unintended consequences of the cloud is that it sits at the intersection of the three big current technology movements: mobile, social and big data.

  • Mobile is the biggest trend so far this century and is becoming as significant as the Internet itself did 20 years ago. The commoditization of the service is well underway and smartphones need to be considered in almost every technology project.
  • Social is not at the leading edge of mind share any more. And definitely not to the same level it was a few years ago. It is quickly becoming a given that social, of some form or another, needs to be a part of every new app.
  • Big Data is the newest of these three trends. Not that it hasn’t been around for a while. But the tools are now available for smaller companies to be able to more easily capture and analyze large volumes of data that previously would have simply been ignored.

What do these three trends have in common? They all use (or can use) the cloud as the delivery mechanism for their services. Most companies wouldn’t think of developing a turnkey big data environment. Instead, they would use a Hadoop instance running in Azure (or AWS, or pick your favorite hosting environment). And why build an infrastructure to support mobile apps before you really need to roll your own? Instead, use the REST-based API available through Windows Azure Mobile Services. It has become very easy to use cloud-available services as the jumping-off point for your innovation across all three of these dimensions. And by allowing innovators to focus more on their creations and less on the underlying infrastructure, the pace and quality of the innovations will only increase.

Hybrid-ization of the Cloud

Much as some might want to (and most don’t), you cannot move every piece of your infrastructure to the cloud. Inevitably, there is some piece of hardware that needs to be running locally in order to deliver value. But more importantly, why would you want to rip out and migrate functionality that already works if such a move provides little or no practical benefit? Instead, the focus of your IT group should be on delivering new value using cloud functionality, transitioning older functions to the cloud only on an as-needed basis.

What this does mean is that most companies are going to need to run a hybrid cloud environment. Some functions will stay on-premises. Others will move to the cloud. It will be up to IT to make this work seamlessly. There are already a number of features available through Azure AD to assist with authentication and authorization. But as you go through the various components of your network, there will be many opportunities to add to the hybrid portion of your infrastructure. And you should take them. The technology has gotten to the point that *most* issues related to creating a hybrid infrastructure have been addressed. Take advantage of this to make the most of the interplay between the two environments.

Transition from Capitalization to Expenses

For most people, the idea of using the cloud in their business environment is driven by the speed with which technology can be deployed. Instead of needing to wade through a budget approval process for a new blade server, followed by weeks of waiting for delivery, you can spin up the equivalent functionality in a matter of minutes.

But while that capability is indeed quite awesome, for business people it’s not really the big win. Instead, it’s the ability to move the cost associated with infrastructure from the balance sheet to the income statement. At the same time as this (generally) beneficial move, the need to over-purchase capacity is removed. Cloud computing allows you to add capacity on an as-needed basis. While it’s not quite like turning on a light switch, it’s definitely less onerous than the multi-week purchase/install/deploy cycle that is standard with physical hardware. One can question whether ‘renting’ space in the cloud is more or less expensive than the physical counterpart, but the difference in how the costs are accounted for matters more than you might think.

So how does this impact you in 2014? More and more, you will need to be aware of the costing models that are being used by your cloud computing provider. While the costs have not yet become as complicated as, say, the labyrinth of Microsoft software licensing, they are getting close. Keep a close eye on how the various providers are charging you and what you are paying for, so that as you move to a cloud environment, you can make the most appropriate choices.

Network Amplification

In order to be successful, your application needs to leverage connections between a wide variety of participants: users, partners, suppliers, employees. This is the ‘network’ for your organization. And, by extension, the applications that are used within your organization.

If you want to maximize the interconnectedness of this network, as well as allowing the participants to take full advantage of your application, you need to provide two fundamental functions: a robust and useable API and the ability to scale that API as needed.

In most cases, a REST-based API is the way to go. And over the coming 12 months you will see an increased awareness of what makes a REST API ‘good’. This is not nearly as simple as it sounds. Or, possibly, as it should be. While some functionality is easy to design and implement, other functionality is not. And learning the difference between the two comes down to either trial and error or finding someone who has already been through the process.

As for scalability, a properly designed API combined with cloud deployment can come close to giving you that for free. But note the critical condition ‘properly designed’. When it comes to API functionality, it is almost entirely about the up-front design. So spend the necessary effort to make sure that it works as you need it to. Or, more importantly, as the clients of your API need it to.

Predictive Technology

For the longest time, real-time was the goal. Wouldn’t it be nice to see what the user is doing on your Web site at the moment they are doing it? Well, that time is now in the past. If you’re trying to stay ahead of the curve, you need to look ahead to the user’s next actions.

This is not the same as Big Data, although Big Data helps. It’s the ability to take the information (not just the data) extracted from Big Data and use it to modify your business processes. That could range from something as simple as changing the data that appears on the screen to modifying the workflow in your production line. And you’ll start to see tools aimed at helping you understand and take advantage of ‘future’ knowledge arrive shortly.

So there you are. Five trends that are going to define cloud computing over the next 12 months, ranging from well on the way to slightly more speculative. But all of them are (or should be) applicable to your company. And the future of how you create and deploy applications.

Windows Azure Data Storage

The following is excerpted from my just released book Windows Azure Data Storage (Wiley Press, Oct 2013). And, since the format is eBook only, there will be updates to the content as new features are added to the Azure Data Storage world.


Business craves data.

As a developer, this is not news to you. The people running businesses have wanted it for years. They demand data about how many widgets have been ordered, how much inventory is available to be used in manufacturing, how many accounts are more than 45 days past due. More recently, the corporate appetite for data has spread way past these snacks. They want to store information about how individual consumers navigate through their website. They want to keep track of different metrics about how the machines in the manufacturing process are being used. They have hundreds of MB of documents, spreadsheets, pictures, audio, and video files that need to be stored and managed. And the volume of data that is collected grows by an obscene amount every single day.

What businesses plan on doing with this information depends greatly on the industry, as well as the type and quality of the data. Inevitably, the data needs to be stored. Fortunately (or it would be an incredibly short book) Windows Azure has a number of different data storage technologies that are targeted at some of the most common business scenarios. Whether you have transient storage requirements or the need for a more permanent resting place for your data, Windows Azure is likely to have you covered.

Business Scenarios for Storage

A feature without a problem to solve is like a lighthouse on a sunny day—no one really notices and it’s not really helping anyone. To ensure that the features covered in this book don’t meet the same fate, the rest of this chapter maps the Windows Azure Data Storage components and functionality onto problems that you are likely already familiar with. If you haven’t faced them in your own workplace, then you probably know people or companies that have. At a minimum, your own toolkit will be enriched by knowing how you can address common problems that may come up in the future.

NoSQL

A style of data storage that has recently received a lot of attention in the development community is NoSQL. While the immediate impression, given the name, is that the style considers SQL to be anathema, this is not the case. The name actually means Not Only SQL.

To a certain extent, the easiest way to define NoSQL is to look at what it’s not, as well as the niche it tries to fill. There is no question that the amount of data stored throughout the world is vast. And the volume is increasing at an accelerating rate. Studies indicate that over the course of four years (2008-2012), the total amount of digital data has increased by 500 percent. While this is not quite exponential growth, it is very steep linear growth. What is also readily apparent is that this growth is not likely to plateau in the near future.

Now think for a moment about how you might model a structure like this (a web of pages and the links between them) using a relational database. For relational databases, you would need tables and columns with foreign key relationships. For instance, start with a page table that has a URL column in it. A second table containing the links from that page to other pages would also be created. Each record in the second table would contain the key of the first page and the key of the linked-to page. In the relational database world, this is commonly how many-to-many relationships are created. While feasible, querying against this structure would be time consuming, as every single link in the network would be stored in that one, single table. And to this point, the contents of the page have not yet been considered.

NoSQL is designed to address these issues. To start, it is not a relational data store; there is no fixed schema, and querying does not require any joins to be performed. At least, not in the traditional sense. Instead, NoSQL is a variation (depending on the implementation) of the key-value paradigm. In the Windows Azure world, different forms of NoSQL-style storage are provided through Tables and Blobs.
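As a taste of the Table flavor of that key-value paradigm, here is a minimal sketch using the Windows Azure Storage client library of that era. The entity, the table name and the connection string are all invented for illustration; the interesting part is that the partition key and row key take the place of the joins described above.

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

// A simple entity; the two keys together uniquely identify the row
public class CustomerEntity : TableEntity
{
    public CustomerEntity() { }

    public CustomerEntity(string region, string customerId)
    {
        PartitionKey = region;   // e.g. group customers by region
        RowKey = customerId;     // unique within the partition
    }

    public string Name { get; set; }
}

// Storing one entity
CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
CloudTable table = account.CreateCloudTableClient().GetTableReference("customers");
table.CreateIfNotExists();
table.Execute(TableOperation.Insert(
    new CustomerEntity("Ontario", "C-1001") { Name = "Acme Widgets" }));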

Big Data

Any discussion of NoSQL tends to lead into the topic of Big Data. As a concept, Big Data has been generating a lot of buzz over the last 12-18 months. Yet, like the cloud before it, people find it challenging to define Big Data specifically. Sure, they know it’s “Big,” and they know that it’s “Data,” but beyond that, there is not a high level of agreement or understanding of the purpose and process of collecting and evaluating Big Data.

Most frequently, you read about Big Data in the context of Business Intelligence (BI). The goal of BI is to provide decision makers with the important information they need to make the choices that are inevitable in any organization. In order to achieve this goal, BI needs to gain access to data from a variety of sources within an organization, rationalize the definitions (i.e., make sure that the definitions of common terms are the same across the different data sources), and present visualizations of the information to the user.

Based on the previous section, you might see why Big Data and NoSQL are frequently covered together. NoSQL supports large volumes of semi-structured data, and Big Data produces large volumes of semi-structured information. It seems like they are made for one another. Under the covers, they are. However, to go beyond Table and Blob Storage, the front for Big Data in Windows Azure is Apache Hadoop. Or, more accurately, the Azure HDInsight Service.

Relational Data

For the vast majority of developers, relational data is what immediately springs to mind when the term Data is mentioned. But since relational data has been intertwined with computers since early in the history of computer programming, this shouldn’t be surprising.

With Windows Azure, there are two areas where relational data can live. First, there are Windows Azure Virtual Machines (Azure VMs), which are easy to create and can contain almost any database that you can imagine. Second, there are Windows Azure SQL Databases. How you can configure, access and synchronize data with both of these modes is covered in detail in the book.

Messaging

Messaging, message queues, and service buses have a long and occasionally maligned history. The concept behind messages and message queues is quite old (in technology terms) and, when used appropriately, is incredibly useful for implementing certain application patterns. In fact, many developers take advantage of the message pattern when they use seemingly non-messaging related technologies such as Windows Communication Foundation (WCF). If you look under the covers of guaranteed, in-order delivery over protocols that don’t support such functionality (cough…HTTP…cough), you will see a messaging structure being used extensively.

In Windows Azure, basic queuing functionality is offered through Queue Storage. It feels a little odd to think of a message queue as a storage medium, yet ultimately that’s what it is. An application creates a message and posts it to the appropriate queue. That message sits there (that is to say, is stored) until a second application decides to remove it from the queue. So, unlike the data in a relational database, which is stored for long periods of time, Queue Storage is much more transient. But it still fits into the category of storage.
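A minimal sketch of that post-store-remove cycle, again using the Windows Azure Storage client library of the time (the queue name and message content are made up for illustration):

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference("orders");
queue.CreateIfNotExists();

// Producer: the message is stored in the queue
queue.AddMessage(new CloudQueueMessage("Process order 12345"));

// Consumer: retrieve the message, process it, then explicitly remove it
CloudQueueMessage message = queue.GetMessage();
if (message != null)
{
    // ... do the work described by the message ...
    queue.DeleteMessage(message);
}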

Windows Azure Service Bus is conceptually just an extension of Queue Storage. Messages are posted to and popped from the Service Bus. However, it also provides the ability for messages to pass between different networks, through firewalls, and even across corporate boundaries. Additionally, there is no requirement to open up an endpoint on either side of the communications channel that would expose the participant to external attacks.
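For comparison, the equivalent post-and-pop against a Service Bus queue might look like the following sketch, using the brokered messaging API from that period (the queue name and connection string are placeholders):

using Microsoft.ServiceBus.Messaging;

QueueClient client = QueueClient.CreateFromConnectionString(serviceBusConnectionString, "orders");

// Post a message to the Service Bus queue
client.Send(new BrokeredMessage("Process order 12345"));

// Pop a message and mark it as complete once processed
BrokeredMessage received = client.Receive();
if (received != null)
{
    // ... do the work ...
    received.Complete();
}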

Summary

It should be apparent even from just these sections that the level of integration between Azure and the various tools (both for developers and administrators) is quite high. This may not seem like a big deal, but anything that can improve your productivity is important. And deep integration definitely fits into that category. Second, the features in Azure are priced to let you play with them at low or no cost. Most features have a long-enough trial period so that you can feel comfortable with the capabilities. Even after the trial, Azure bills based on usage, which means you only pay for what you use.

The goal of the book is to provide you with more details about the technologies introduced in this chapter. While the smallest detail of every technology is not covered, there is more than enough information for you to get started on the projects you need in order to determine Azure’s viability in your environment.

Sometimes, Little Things Matter–Azure Queues, Poor Performance, Throttling and John Nagle

Sometimes it amazes me how much of a polyglot that developers need to be to solve problems. Not really a polyglot, as that actually relates to learning multiple languages, but maybe a poly-tech.

Allow me to set the scenario. A client of ours is using Windows Azure Queue Storage to collect messages from a large number of different sources. Applications of varying types push messages into the queue. On the receiving side, they have a number of worker roles whose job it is to pull messages from the queue and process them. To give you a sense of the scope, there are around 50,000 messages per hour being pushed through the queues, and between 50-200 worker roles processing the messages on the other end.

For the most part, this system had been working fine. Messages come in, messages go out. Sun goes up, sun goes down. Clients are happy and worker roles are happy.

Then a new release was rolled out. And as part of that release, the number of messages that passed through the queues increased. By greater than a factor of two. Still, Azure prides itself on scalability and even at more than 100,000 messages per hour, there shouldn’t be any issues. Right?

Well, there were some issues as it turned out. The first manifested itself as an HTTP status 503. This occurred while attempting to retrieve a message from the queue. Status code 503 indicates that the service is unavailable. Which seemed a little odd, since not every single attempt to retrieve messages returned that status. Most requests actually succeeded.

Identifying the source of this problem required looking into the logs that are provided automatically by Azure. Well, automatically once you have turned logging on. A very detailed description of what is stored in these logs can be found here. The logs themselves can be found at http://<accountname>.blob.core.windows.net/$logs and what they showed was that the failing requests had a transaction status of ThrottlingError.

Azure Queue Throttling

A single Windows Azure Queue can process up to 2,000 transactions per second. The definition of a transaction is either a Put, a Get or a Delete operation. That last one might catch people by surprise. If you are evaluating the number of operations that you are performing, make sure to include the Delete in your count. This means that a fully processed message actually requires three transactions (because the Get is usually followed by a Delete in a successful dequeue function).

If you crack the 2,000 transactions per second limit, you start to get HTTP 503 status codes. The expectation is that your application will back off on processing when these 503 codes are received. Now the question of how an application backs off is an interesting one. And it’s going to depend a great deal on what your application is doing.
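What backing off looks like in code depends on the application, but a minimal sketch might be an exponentially increasing delay around the GetMessage call, something like the following. The starting delay and the cap are arbitrary, queue is assumed to be a CloudQueue reference, and the shape of StorageException is as I recall it from that version of the client library.

TimeSpan delay = TimeSpan.FromMilliseconds(100);
CloudQueueMessage message = null;

while (message == null)
{
    try
    {
        message = queue.GetMessage();
        if (message == null)
        {
            break;  // queue is empty; nothing to retrieve, nothing to back off from
        }
    }
    catch (StorageException ex)
    {
        if (ex.RequestInformation.HttpStatusCode != 503)
        {
            throw;  // only back off on throttling errors
        }

        System.Threading.Thread.Sleep(delay);

        // Double the delay each time, up to a 30 second ceiling
        delay = TimeSpan.FromMilliseconds(Math.Min(delay.TotalMilliseconds * 2, 30000));
    }
}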

From my perspective, one of the most effective ways to handle this type of throttling is to redesign how the application uses queues. Not a complete redesign, but a shift in the queues being used. The key is found in the idea that the transactions per second limit is on a single queue. So by creating more queues, you can increase the number of transactions per second that your application can handle.

How you want to split your queues up will depend on your application. While there is no ‘right’ way, I have seen a couple of different approaches. The first involves creating queues of different priorities. Messages are then pushed into the appropriate queue based on their relative priority.

A second way would be to create a queue for each type of message. This has the possibility of greatly increasing the number of queues, but there are a number of benefits. The sender of the message does not have to be aware of the priority assigned to a message. They just submit a message to the queue with no concerns. That makes for a cleaner, simpler client. The worker is where the control of priority lies. The worker can pick and choose which queues to focus on based on whatever priority logic the application requires. This approach does presume that it’s easier to update the receiving workers than the clients, but you get the idea.
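A sketch of what that worker-side choice might look like: check a list of queues in priority order and process the first message found. The queue names are hypothetical, and account is assumed to be a CloudStorageAccount set up as shown earlier.

// Queues listed from highest to lowest priority
string[] queueNames = { "orders-priority", "orders-standard", "orders-bulk" };
CloudQueueClient queueClient = account.CreateCloudQueueClient();

foreach (string queueName in queueNames)
{
    CloudQueue queue = queueClient.GetQueueReference(queueName);
    CloudQueueMessage message = queue.GetMessage();

    if (message != null)
    {
        // ... process the highest-priority message available ...
        queue.DeleteMessage(message);
        break;
    }
}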

Nagling

Now that the 503 messages were dealt with, we had to focus on what we perceived to be poor performance when retrieving messages from the queue. Specifically, we found (when we put a stopwatch around the GetMessage call) that it was occasionally taking over 1000 milliseconds to retrieve a message, and the median seemed to be somewhere in the 400-500 millisecond range. This is an order of magnitude over the 50 milliseconds we were expecting.

The source of this particular problem was identified in conversation with a Microsoft support person. And when it was mentioned, our collective response was ‘of course’. The requests were Nagling.

Some background might be required. Unless you are a serious poly-tech.

Nagle’s Algorithm is a mechanism by which the efficiency of TCP/IP communication can be improved. The problem Nagle addresses is when the data in the packets being sent is small. In that case, the size of the headers might actually be a very large percentage of the data being transmitted. The TCP/IP headers on a packet total 40 bytes. If the payload is 5 or 10 bytes, that is a lot of overhead.

Nagle’s algorithm combines these small outgoing messages into a single, larger message. The algorithm actually prescribes that as long as there is a sent packet for which the sender has received no acknowledgment from the recipient, the sender should keep combining payloads until a full packet’s worth is ready to be sent.

All of this is well and good. Until a sender using Nagle interacts with a recipient using TCP Delayed Acknowledgements. With delayed acknowledgements, the recipient may delay the ACK for up to 500ms to give itself a chance to include the response with the ACK packet. Again, the idea is to increase the efficiency of TCP by reducing the number of ‘suboptimal’ packets.

Now consider how these two protocols work in conjunction (actually, opposition) with one another. Let’s say Fred is sending data to Barney. At the very end of the transmission, Fred has less than a complete packet’s worth of data to send. As specified in Nagle’s Algorithm, Fred will wait until it receives an ACK from Barney before it sends the last packet of data. After all, Fred might discover more information that needs to be sent. At the same time, Barney has implemented delayed acknowledgements. So Barney waits up to 500ms before sending an ACK in case the response can be sent back along with the ACK.

Both sides of the transmission end up waiting for the other. It is only the delayed acknowledgement timeout that breaks this impasse. And the result is the potential for occasionally waiting up to 500ms for a response to a GetMessage call. Sound familiar? That’s because it was pretty much exactly the problem we were facing.

There are two solutions to this problem. The first, which is completely unrealistic, is to turn off TCP delayed acknowledgments in Azure. Yeah, right. The second is much, much easier. Disable Nagle’s Algorithm in the call to GetMessage. In Azure, Nagle is enabled by default. To turn it off, you need to use the ServicePointManager .NET class.

CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
ServicePoint queueServicePoint =
    ServicePointManager.FindServicePoint(account.QueueEndpoint);
queueServicePoint.UseNagleAlgorithm = false;

So there you go. In order to be able to figure out why a couple of issues arose within Azure Queue Storage, you needed to be aware of HTTP status codes, the throttling limitations of Azure, queue design, TCP and John Nagle. As I initially started with, you need to be a poly-tech. And special thanks to Keith Hassen, who discovered much of what appears in this blog post while in the crucible of an escalating production problem.

IIS Express Default Settings

On occasion when I open a Web application in Visual Studio, I receive a message that is similar to the following:

(Screenshot of the Visual Studio dialog.) So that the search bots can find the text, the pertinent portion reads “The following settings were applied to the project based on settings for the local instance of IIS Express”.

The message basically says that the settings on the Web application with respect to authentication don’t match the default settings in your local IIS Express. So Visual Studio, to make sure that the project can be deployed, changes the Web application settings. Now there are many cases where this is not desirable and the message nicely tells you how to change it back. What is hard to find out is how to change the default settings for IIS Express.

If you go through the “normal” steps, your first thought might be to check out IIS Express itself. But even if you change the settings for the Default Web Site (or any other Web Site you have defined), that’s not good enough.

Instead, you need to modify the ApplicationHost.Config file. You will find it in your My Documents directory under IISExpress/Config. In that file, there is an <authentication> section that determines whether each of the different authentication providers is enabled or disabled. If you modify this file to match your Web application’s requirements, you will no longer get that annoying dialog box popping up every time you load your Solution. Of course, you *might* have to change it for different projects, but that’s just the way it goes.
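As an example, the relevant portion of ApplicationHost.Config looks roughly like the following (trimmed down from memory; the exact elements and attributes vary by IIS Express version). Here the defaults are flipped to enable Windows authentication and disable anonymous authentication, which is a common reason for the dialog to appear in the first place.

<!-- My Documents\IISExpress\config\applicationhost.config -->
<system.webServer>
  <security>
    <authentication>
      <anonymousAuthentication enabled="false" />
      <windowsAuthentication enabled="true" />
    </authentication>
  </security>
</system.webServer>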