In the past week, I have seen a couple of articles that discuss the lack of awareness of the Cloud among the general public. The following, from the Globe and Mail, summarizes it quite nicely.
“While cloud computing is growing increasingly pervasive, a new survey shows how many people are still cloudy in their thinking about the technology.” - http://www.theglobeandmail.com/report-on-business/small-business/sb-tools/small-business-briefing/cloudy-thinking-about-cloud-computing/article4504986/
The survey includes tidbits like: 54% of people don’t think they use cloud computing (when in reality only 5% don’t), only 16% identify it correctly and (this one is my favorite) 51% believe that stormy weather can interfere with cloud computing.
(As an aside, I just got back from Punta Cana, where the Internet (and thus cloud computing) was turned off for two days while Tropical Storm Isaac passed through. Pretty certain that’s stormy weather interfering. :))
My comment about this state of affairs is: Who Cares?
What percentage of people have a working knowledge of the internal combustion engine? And yet a majority of people are quite able to drive without this knowledge. How many people have even the most basic understanding of how electricity is generated? And yet they don’t have a problem turning on a light.
Those of us in technology seem to think that it’s important to have others understand what we do. Perhaps it’s a need to appear smart. Perhaps we’re looking for acceptance after spending high school being given wedgies and swirlies. Doesn’t matter. I no more expect the average user of the technology I create to know how it works than I do my mother. And you shouldn’t either.
It should be completely transparent to the user where we put their information. The applications that we create should seamlessly transition between local storage, on-premise storage and the ‘cloud’. The user should only be aware of this when they use their phone to access the Word document they were writing before they left the office. Actually, I’m wrong. They shouldn’t care even then.
And that’s how you should be building your applications. Seamless integration between the various storage options. This isn’t necessarily the easiest choice for developers. Seamless == more work. But tools like the Windows Azure Mobile Services can help. But don’t let the user know…they don’t care. They shouldn’t. All of their data should just be there. Like electricity.
I love Lego.
To be fair, the number of people who don’t fall into that category is probably fairly small. There is nothing like the joy of taking the slightly differently shaped blocks and creating something bigger and better. And I’m not a big fan of all of the custom kits either. If a piece only has one purpose (like as the nose for an X-wing fighter), then it’s not for me.
I also love Star Trek. Well, not love, but greatly appreciate and enjoy the various forms over the years. And I have referenced Star Trek in various presentations, not to establish my geek cred (of which I have very little), but because of how software works ‘in the future’.
And yes, Lego and Star Trek are related in this way.
The key to Lego blocks is the simple and consistent interface. It doesn’t matter what the shape of the block is; the fact that every block has the same interface allows them to be connected. And it is through the various connections that a much bigger whole can be created.
Star Trek takes the Lego idea and applies it to software. Ever wonder how Geordi and Data were so quickly able to create new and complex software? Because all of the different components had the same interface. Or at least similar enough interfaces so that the components could communicate with one another. And connected software allows you to create a much bigger whole.
Now let’s move back to the here and now. What’s missing from our current software environment that would prevent Geordi from creating the application that saves the Enterprise? Two things, really. The lack of a standard set of interfaces and the inability of most software functionality to be ‘connected’. And this is the next phase in software development.
If you’re a company that provides services to others, then you need to think about enabling access to your service from the cloud. Want to allow people to easily buy your services or products? Give them an interface on the Web that allows them to do so. Not a Web page, but an API. Have some information that others might find useful? Give them an interface to access it. Create the Lego blocks that I was talking about earlier. Find standard interfaces for the type of data/service you offer and expose them on top of your services. In other words, provide Lego blocks for others.
One of the benefits of doing so is that you let others build out functionality based on your services. If your service or data is compelling enough, others will build your front-end for you. You have already seen this happen with a number of the social sites that are out there. People combine information from Foursquare, Twitter, Facebook, Google+, etc. to create interesting apps for others to use. The engagement level of people with apps that run on their phones is high and likely to move higher. Finding ways to integrate your service/data with that ecosystem can only be beneficial.
So what’s the downside? Well, you have to implement and/or design the interface. Technical, yes, but not beyond the scope of what most companies can do. And you need to provide the infrastructure for surfacing your API. This is where virtualization comes into play. I’m a fan of Azure and the new functionality it offers, but speaking generically, virtualize where it makes the most sense. If you’re a Microsoft shop, I believe you’ll find the biggest bang for your efforts with Azure.
But the technology is not the key here…it’s the concept. Look at the products you offer to your clients. Find ways to expose those products to the Internet. Be creative. The payoff for your efforts has the potential to be significant. But more importantly, you take the first step towards what will be the development and integration paradigm for the next decade…Lego.
One of the interesting elements of this year’s MIX is the complete domination of Twitter as a medium for distributing updates. If you have been following me on Twitter (I’m @LACanuck), then you will already have heard a lot about the Windows Phone 7 development announcements. However, as useful as Twitter is, it’s not really a place for opinion. Unless your opinions fit into <140 characters. Mine don’t.
There is no question that there is a lot of buzz around developing apps for the Windows Phone 7. This is completely understandable, as WP7 gives Silverlight developers the ability to create applications for the phone. According to Scott Gu’s keynote, there is only “one Silverlight”. That is to say that applications that run on the browser should also be able to run on WP7.
Now there is going to be a little bit of a reality check for that statement, especially as we hit Silverlight 4. I’m not sure, for example, if Silverlight as running on WP7 has the concept of a trusted application. I suspect that it doesn’t, although I’m open to correction if my assumption is misplaced.
But working solely within the security sandbox is not the only real difference. Specifically, the design of a WP7 application is very different than a Web application. The size of the design surface is, naturally, much smaller on the WP7. And the UI needs to consider that the main UI gesture is touch, a paradigm that doesn’t apply to Web applications. All of this is to say that while, theoretically, the same application could run on both platforms, it’s much more likely that different views will be used by the different targets. If nothing else screams that you should be using MVVM as a design pattern for Silverlight, this will.
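As a rough sketch of that idea (the class and property names here are purely illustrative), MVVM lets a single view model back both the browser view and the phone view, so only the XAML differs between targets:

```csharp
using System.ComponentModel;

// One view model shared by both targets. The browser XAML and the
// WP7 XAML each bind to the same Name property, so the touch-oriented
// phone view and the mouse-oriented web view differ only in their views.
public class CustomerViewModel : INotifyPropertyChanged
{
    private string name;

    public string Name
    {
        get { return name; }
        set
        {
            name = value;
            OnPropertyChanged("Name");
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    protected void OnPropertyChanged(string propertyName)
    {
        if (PropertyChanged != null)
            PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
    }
}
```

Because the view model raises PropertyChanged rather than touching any UI element directly, it has no idea which view (or how many views) is bound to it.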
Once you see what’s possible in the WP7 environment, the excitement regarding creating applications is easy to understand. And not only are the apps exciting, so too is the ability to monetize your application. Microsoft will be making a Marketplace available so that you can sell your apps on-line. Given how well Microsoft has done with community driven marketplaces, I have no doubt this will be successful.
But what about your own personal applications? What if you want to develop a WP7 application that is used by your mobile sales force? At the moment, the answer seems to be that you’re out of luck. This might change before it goes live, but the word that I’m hearing is that the only way to get apps onto your phone is through the Marketplace.
Now, that’s not completely accurate. If you have Visual Studio 2010, you can deploy your application to a physically connected phone. However, the time to live for applications which have been deployed in such a manner is limited to approximately a month, after which the app would need to be redeployed.
I’m not a fan of this. In fact, in my mind, it drops the collection of Silverlight developers who might write WP7 apps by 50%. At least. I can take guesses at the reason why this limitation is the case, but still, it’s not what I was hoping for. The term for what I’m looking for is ‘siloed deployment’ (that is, deployment only for people in a particular silo) and I’m hoping that it becomes part of WP7 before it goes live with the first version.
While there is more of interest that is being revealed here, this is probably a decent start. And I’ll be both blogging and tweeting as much as I can while I’m here at MIX ‘10.
One of the first questions that arose from the announcement of off-browser Silverlight was “What will happen to WPF?” The obvious source of this concern is that since Silverlight 3 can run either as part of a Web page or installed in an off-browser mode, why would there ever be a reason to write a WPF application? And since Silverlight seems to be the technology that has all of the new features, is there a possibility that WPF could languish as the ugly stepchild in the client application development world?
First, let me assuage your concerns. WPF is not dead. This opinion is based on a couple of on-going development projects at Microsoft: Expression Blend/Design and Visual Studio. In the case of Expression Blend/Design, the entire application was written in WPF. For Visual Studio, the code editors are being re-written in WPF. If nothing else, the investment being made by Microsoft to these products in WPF should demonstrate its on-going commitment to the technology. And there is the on-going integration of WPF into the Live Messenger client to add more fuel to the argument.
Going forward, I see WPF and Silverlight moving ahead more or less in lock-step. Features in Silverlight that are successful will find their way into WPF (Visual State Manager, for example). Features in WPF that are useful will move into Silverlight (element binding, based-on styles). Since the products have different audiences, each technology will be driven forward with a different set of priorities. VB.NET and C# already have this kind of relationship. So don’t give up on WPF because of all of the excitement from Silverlight. As a WPF developer, I found some of the excitement generated by Silverlight announcements a little odd (applause for element-to-element binding? WPF has had that for a while now). What playing with Silverlight 3 I have done so far suggests that the disparity between the two feature sets is going to be much less in the future.
That having been said, there is still the open question of when WPF should be used instead of Silverlight 3. I can see two main cases at a minimum, but they both revolve around the same restriction in Silverlight…the security sandbox.
Access to local system resources
Off-browser Silverlight runs in the same security context as on-browser Silverlight. So if your application needs access to the local file system or the communication ports on the client machine, then Silverlight is not going to work out. In fact, this limitation can be extended to include any feature that requires full trust to operate. Silverlight doesn’t work in full-trust mode. WPF can.
Full 3-D graphics capabilities
With Silverlight 3, perspective 3-D is available. But that is not the same as the complete 3-D capabilities that WPF has. It’s nice, but if you’re looking for fully rotational 3-D images, then WPF is the choice.
I’m sure there are more differences. It’s not like WPF is not a compelling choice for a development platform. In fact, I approach the choice between the two the same way I would between WinForms and ASP.NET. Because ultimately, the decision between Silverlight and WPF will be based on the specific requirements of a project. If Silverlight (either on- or off-browser) is sufficient, then pick Silverlight. If not, then pick WPF. Regardless of your choice, you’re not in danger of dead-ending with the technology. Both areas will flourish and grow for many years to come.
I’ve just come home from spending the last three days in Redmond at the MVP Summit. For those who might not be aware, the Summit is an annual event that Microsoft hosts for Most Valuable Professionals (MVP). The MVP designation is given to people who have contributed in a positive way to the community through speaking, blogging, answering questions in forums or organizing user groups and conferences at the local level. At the Summit, the various product groups get the opportunity to demonstrate some of the futures for their products in order to solicit feedback. The chance to meet and talk with product group members is actually one of the main benefits of being an MVP to me. They are people who are passionate about the code they write and who love to hear about the good and the bad.
However, the futures that are being discussed are really that. We’re not talking about what’s going to be in VS2010. The feature list for that has been set in stone for a while and is generally well known. Instead, we’re talking about what might be coming in the next version of Visual Studio. Or Silverlight. Or ASP.NET. Or Data Programmability. These futures have not, for the most part, even been designed much less coded. So to talk with us about this, MVPs at the Summit (and, indeed, all MVPs) have to sign a non-disclosure agreement (NDA). This means that we cannot discuss with anyone outside of the MVP community what we have seen and heard until the information becomes public.
Today’s technology, combined with the outgoing personalities of MVPs makes this restriction a challenge. Normally when I’m at a conference, I’m live blogging the session that I’m in. Or I’m twittering my schedule. Can’t do that here. It gives me itchy fingers, but the NDA is taken quite seriously. Even the code names for various projects are considered NDA, a problem for the person who unthinkingly twittered one while in sessions on Monday.
So that inability to share is my biggest disappointment. Not an unexpected one (I’ve been to the Summit before and am under NDA constraints constantly), but still a source of sadness nonetheless. But let me just say that I’ve already written some blog posts that will be published once the details of the products are made public in the near future. Hopefully that little tidbit of foreshadowing won’t get the NDA police on my trail.
So to set up this problem, an application that I'm currently working on needs to process data that is stored in an Excel spreadsheet. The creation of the spreadsheet is actually performed by a scientific instrument, so my ability to control the format of the output is limited. The instrument samples liquids and determines a quantity. That quantity is placed into the spreadsheet. If the quantity in the liquid is not successfully read, then the literal "Too low" is placed into the sheet. The application opens up the spreadsheet and loads up a DataSet using the OleDb classes (OleDbConnection and OleDbDataAdapter).
This seemed like a fine setup. At least, until some of the numeric values in the spreadsheet were not being read. Or, more accurately, the values in the spreadsheet were not making it into the DataSet. Head-scratching, to say the least.
After some examination, the problem became apparent. When the values were being successfully read, the column in the DataSet had a data type of Double. When the values were not being successfully read, the column in the DataSet had a data type of String. Now the difference between success and failure was nothing more than the contents of the spreadsheet.
Now the obvious path to follow is how the data type of the column is determined. Some research brought me to what I believe is the correct answer. The Excel driver looks at the contents of the first 8 cells. If the majority is numeric, then the column's data type is set to Double/Integer. If the majority is alphabetic, then the column becomes a string.
Of course, this knowledge didn't really help me. As I said at the outset, my ability to control the format of the spreadsheet was limited. So I needed to be able to read the numeric data even if the column was a string. And, at present, the cells containing numbers in a column marked as a string were returned as String.Empty.
The ultimate solution is to add an IMEX=1 attribute to the connection string. This attribute causes all of the data to be treated as a string, avoiding all of the cell scanning process. And, for reasons which I'm still not certain of, it also allowed the numeric data to be read in and processed. A long and tortuous route yes, but the problem was eventually solved.
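For anyone facing the same issue, here is a sketch of what the fix looks like in code (the file path and sheet name are placeholders for your own spreadsheet):

```csharp
using System.Data;
using System.Data.OleDb;

// IMEX=1 in the Extended Properties tells the Jet/Excel driver to treat
// all columns as text, bypassing the 8-cell type-scanning behavior.
string connectionString =
    @"Provider=Microsoft.Jet.OLEDB.4.0;" +
    @"Data Source=C:\Data\Results.xls;" +
    @"Extended Properties=""Excel 8.0;HDR=Yes;IMEX=1""";

using (var connection = new OleDbConnection(connectionString))
{
    // "Sheet1" stands in for whatever the instrument names its worksheet
    var adapter = new OleDbDataAdapter("SELECT * FROM [Sheet1$]", connection);
    var dataSet = new DataSet();
    adapter.Fill(dataSet);

    // Every value now arrives as a string, so numeric cells and the
    // "Too low" literal can both be inspected and converted explicitly.
    foreach (DataRow row in dataSet.Tables[0].Rows)
    {
        double quantity;
        if (double.TryParse(row[0].ToString(), out quantity))
        {
            // process the numeric reading
        }
    }
}
```

The explicit TryParse step is the price of IMEX=1: since the driver no longer guesses types for you, your code takes over the string-to-number conversion, which is exactly the control this situation needed.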
Now that the holidays are over (which weren't particularly fun for me this year, due to a persistent bout with sinusitis), it's time to get back to the posting. And to start things off, let me offer a deal to any of you who might be thinking about going to DevTeach Vancouver at the beginning of June (8th to the 12th). Jean-Rene Roy, the organizer of the conference, has offered 50% off the registration cost to the first 30 people who register with the following code: DEVT50OFFVAN. Also, the registration needs to be done prior to Feb 10th.
If you've never been to a DevTeach conference, you don't know what you're missing. This is easily the top .NET developer-focused conference in Canada. They get big name speakers presenting on the latest and greatest of technologies. As well, the setup for the conference is such that the speakers are much more accessible than any other conference I've been to. So not only will you be able to hear familiar luminaries, but you'll also get the ability to speak with them one-on-one. A great deal at full price, this becomes an incredible opportunity at half-price. So if you were just thinking of going, let this offer make your mind up for you.
In the excitement of PDC, it slipped my mind to let everyone know that the book on which I was a co-author was actually shipped at the beginning of October. The title is the terse, yet incredibly descriptive MCTS Self-Paced Training Kit (Exam 70-503): Microsoft® .NET Framework 3.5 Windows® Communication Foundation (PRO-Certification). There is a bidding war for the movie rights and I'm hoping that George Clooney plays me in the adaptation. :)
For those of you wondering how the actual release might have slipped my mind, the reason is that I'm not involved in the steps that take place at the end of the publishing process. Most of the book was written in the first half of the year. Since July, I have been reviewing chapters and responding to editor notes. But since the middle of August my tasks have been done. And, I'm afraid, when it comes to book writing, once I'm done, I mentally move on to the next task. So I wasn't even sure when the publication date was. But it was released and, based on the numbers that I've seen so far, it seems to be doing quite well. If any of you have the chance to read it, I'd be thrilled to hear any feedback (both good and bad).
I've been a fan of Malcolm Gladwell since I read The Tipping Point. And after following that up with Blink, it is clear that Mr. Gladwell is a fascinating author on subjects that are quite interesting, even when they fall outside my normal range of reading material (that being mostly geeky). Apparently on Tuesday, a new book of his entitled Outliers: The Story of Success is coming out. That in itself is enough to pique my interest. However, it turns out that, as part of his book tour, Mr. Gladwell is speaking in Toronto on Dec 1 at the University of Toronto Rotman School of Business. And the price of the tickets (only $31 and which you can get here) includes a copy of the book. I'm signed up already and if you have found his books interesting, here is a chance to hear him in person.
One of the more farsighted thoughts on the implications of cloud computing is the concern about vendor lock-in. Tim Bray mentioned it in his Get in the Cloud post:
Big Issue · I mean a really big issue: if cloud computing is going to take off, it absolutely, totally, must be lockin-free. What that means is that if I’m deploying my app on Vendor X’s platform, there have to be other vendors Y and Z such that I can pull my app and its data off X and it’ll all run with minimal tweaks on either Y or Z.
I’m simply not interested in any cloud offering at any level unless it offers zero barrier-to-exit.
This idea was also commented on by Dare Obasanjo here. It was Dare who originally pointed me at Tim's post.
My take on the vendor lock-in problem is two-fold. First is the easier one to deal with - the platform on which the application is running. As it sits right now, use of Azure is dependent on you being able to publish an application. The destination for the application is a cloud service, but that is not a big deal. You can just as easily publish the application to your own servers (or server farm). The application which is being pushed out to the cloud is capable of being deployed onto a different infrastructure.
Now, there are aspects of the cloud service which might place some significant requirements on your target infrastructure. A basic look at the model used by Azure indicates that a worker pattern is being used. Requests arrive at the service and are immediately queued. The requests are then processed in the background by a worker. The placement of the request in the queue helps to ensure the reliability of the application, as well as the ability to scale up on demand. So if you created an infrastructure that was capable of supporting such a model, then your lock-in at the application level doesn't exist. Yes, the barrier is high, but it is not insurmountable. And there is the possibility that additional vendors will take up the challenge.
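To make the worker pattern concrete, here is a minimal in-process sketch (this is illustrative only, not Azure's actual API; a real cloud queue would be durable and shared across machines):

```csharp
using System.Collections.Generic;
using System.Threading;

// A minimal sketch of the queue-and-worker model described above.
// Requests are enqueued as they arrive; background workers drain the
// queue. Scaling up simply means pointing more workers at the same queue.
class RequestQueue
{
    private readonly Queue<string> queue = new Queue<string>();
    private readonly object sync = new object();

    public void Enqueue(string request)
    {
        lock (sync)
        {
            queue.Enqueue(request);
            Monitor.Pulse(sync); // wake a waiting worker
        }
    }

    public string Dequeue()
    {
        lock (sync)
        {
            // block until a request is available
            while (queue.Count == 0)
                Monitor.Wait(sync);
            return queue.Dequeue();
        }
    }
}
```

The reliability claim falls out of the decoupling: because arrival and processing are separated by the queue, a slow or crashed worker doesn't lose the requests still waiting in it, and adding workers raises throughput without changing the front end.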
The second potential for lock-in comes from the data. Again, this becomes a matter of how you have defined your application. Many companies will want to maintain their data within their premises. In the Azure world, this can be done through ADO.NET Data Services. In fact, this is currently (I believe) the expected mechanism. The data stores offered by Azure are not intended to be used for large volumes of data. At some point, I expect that Azure will offer the ability to store data (of the larger variety) within the cloud. At that point, the spectre of lock-in becomes solid. And you should consider your escape options before you commit to the service. But until that happens, the reality is that you are still responsible for your data. It is still yours to preserve, backup and use.
The crux of all this is that the cloud provides pretty much the same lock-in that the operating system does now. If you create an ASP.NET application, you are now required to utilize IIS as the web server. If you create a WPF application, you require either Silverlight or .NET Framework on the client. For almost every application choice you make, there is some form of lock-in. It seems to me that, at least at the moment, the lock-in provided by Azure is no worse than any other infrastructure decision that you would make.