Over the next couple of months, you’re going to start hearing more and more about the new App for Office model. In a nutshell, this is a new model for developing and deploying applications to Office 2013 and SharePoint 2013. The main idea is to remove the requirement to install applications on a server while still giving those applications their full range of functionality. In this case, the servers I’m talking about are the Exchange Server and the SharePoint farm.
As you might guess, this is not a complete or thorough description of the process. But if you want to learn more, you can join me on Dec 12 in Charlotte (register here) or Dec 14 in Toronto (register here). And if those dates/places don’t work for you, keep your eyes open for new dates across North America starting in February.
So often I see developers mapping a folder from the middle of the source control tree to a folder on their desktop, then mapping another folder to a different location entirely. It gets very messy very quickly.
Some people may like to map folders this way; however, I prefer a cleaner mapping scheme. When working on a team, I would rather have the same folder structure as the server and everyone else on the team. It’s also nice not to have to map each app I work on: I would rather map once and be done with it.
Here is what I do, starting from a clean, never-been-mapped TFS source control repository. (If you already have mappings, check everything in and remove them before doing this. These instructions are for Team Explorer 2012.)
Open the Source Control Explorer
Select the top node which should be your Server\Collection
Right click and select Advanced | Map to Local Folder from the context menu
Create a folder on the drive of your choice with the same name as the collection. I like to create a Source folder, then inside that folder create a folder with the same name as the collection. That way, if I have more than one collection, they are separate folders but all under Source.
Once you hit the Map button, you will be prompted to get the latest version of everything in the collection.
I recommend you select No. I doubt you want everything, do you?
Now traverse the Source Control tree and watch the Local Path at the top; it changes as you move. To get a particular application, just right-click on its folder and select Get Latest Version, and it will be downloaded into the folder structure you see.
Have everyone on the team do this. Then, when you go from machine to machine, you always know where to find the source. Also, it looks just like the server.
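For the command-line inclined, the same one-time mapping can be sketched with tf.exe. This is only a sketch: the workspace name, collection URL, local path and team project name are placeholders, so substitute your own.

```
rem Create a workspace (if you don't already have one) against the collection.
tf workspace /new MyWorkspace /collection:http://myserver:8080/tfs/DefaultCollection

rem Map the root of the collection to a single local folder (one mapping, once).
tf workfold /map "$/" "C:\Source\DefaultCollection" /workspace:MyWorkspace

rem Then pull down only the application you actually need.
tf get "$/MyApp" /recursive
```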
Here is a little trick I find most people don’t know about.
When setting up a build definition, you have to tell the build server where to get the source code. You do this by declaring the folders to download on the Workspace tab of the build definition. It takes time to download all the files to the build server, so you don’t want to get any unnecessary folders from source control.
Generally, you can just select the root of the branch and pick up everything from there, but there are times when the build you are creating does not require all the files in the branch. Let’s say, for example, that you have two builds: one that only builds the application, and one for your WiX projects that creates an install package. You likely want to keep all the files together for branching purposes. Something like this:
The folders in my example are for the following:
- Builds – files used by the build process specific to this application, including third-party DLLs we do not have the source code for.
- Install – the WiX project.
- Resources – various resource files used by the application.
- Source – the application source code.
Now what I want is to get certain folders when building the app for a CI build and different ones when creating an install package.
I could just do this. But then I am getting more than I need in both cases.
Or, to get just what I need, I could do this (making sure I put everything into the correct folder on the build agent):
Installation Package Build:
Or I could use the Cloaked status to tell the build not to get a particular folder.
Therefore, on the CI build, where I want everything but the Install folder, I could do this:
And on the Install Package Build I don’t need Source or Resources so I could do this:
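In text form, the Workspace tab for the two builds might look something like this (using a hypothetical $/MyApp/Main branch path in place of the screenshots):

```
CI Build:
  Status    Source Control Folder        Build Agent Folder
  Active    $/MyApp/Main                 $(SourceDir)
  Cloaked   $/MyApp/Main/Install

Installation Package Build:
  Status    Source Control Folder        Build Agent Folder
  Active    $/MyApp/Main                 $(SourceDir)
  Cloaked   $/MyApp/Main/Source
  Cloaked   $/MyApp/Main/Resources
```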
If you are writing an app for the general public (as opposed to one you’re writing for use within your organization), one of the first questions you face is whether to create it as a Web application or develop it for your target platforms using native code (XAML/HTML5/C#, Objective-C, Java). There are a number of reasons to choose native code over Web technologies: better performance, better integration with the device, and the ability to take advantage of functionality that is specific to the device (as opposed to coding to the lowest common denominator). By the same token, there are reasons to use Web technologies instead of native code: faster time to market, easier deployment, and a consistent experience across all platforms.
But what does this have to do with creating engaging apps?
That’s right. The choice of technologies that you use is not what makes an application engaging. Consider the following tips for an engaging application:
Use the power of faces – People love faces. We have evolved specific wiring in our brains with the sole purpose of recognizing faces. We do it so well that we see faces where they don’t exist (such as in clouds, tree bark or grilled cheese). Take advantage of this by putting faces on your site. Even better, have the face look at an important part of your site: people instinctively follow the eyes of a face.
Use food, sex and danger to attract interest – If you have ever heard me teach a course, you will have heard me say (right at the very beginning) that when someone sees something new, they place it into one of four categories: can I eat it, can it eat me, can I mate with it, and everything else. If you want your site to be considered interesting, put it in one of the first three categories.
Tell a story – We learn through stories. It’s how we teach our children. It’s what we see in movies. Stories are a big part of how information is conveyed to us. Take advantage of this in your app. If you have information to give to the user, put it in story form. It doesn’t matter what medium you use (words, pictures, music), but a story narrative will help your user understand and retain your information.
Build commitment over time – I’m guessing that most of you did not propose to your wife on the first date. It takes time for both people to make sure of the commitment that is implicit in marriage. The same is true of business. You don’t ask for a 6-figure sales order on the first cold call. Or, if you do, I’m guessing your success rate is low. :) Instead, let the relationship build over time. Let the user choose how they want to interact with you (RSS, Twitter, Facebook) and make sure that you don’t take advantage of the trust that is implicit in that interaction.
See? Nothing at all about technology. Creating engaging apps is all about the design sensibilities and visual aesthetics of the app. Focus your energies on that. Get that right, and the choices you make for technology, so long as they don’t get in the way, become ancillary at best.
Come out on Thursday to the Toronto ALM User Group (TALMUG); click here to register.
Be Loved By Your Development Teams: Using the Team Foundation Server – Project Server Connector
Organizations are investing heavily in building project management competencies through improved processes and the use of tools such as Microsoft Office Project Server to ensure the predictable and reliable delivery of projects. At the same time, an increasing number of development teams are moving toward agile techniques. Integrating and reconciling development teams and project management has become extremely important. Microsoft’s Application Lifecycle Management strategy includes solutions designed to enable Visual Studio, Team Foundation Server, and Project Server to connect seamlessly.
This session will explore the Team Foundation Server - Project Server Integration Feature Pack and demonstrate how this enables development teams and project managers to work efficiently and increase productivity.
Microsoft has just released its new certification exam for Software Testing with Visual Studio 2012. The exam number is 70-497.
The skills and/or tasks covered in the exam are:
- Create and Configure Test Plans (31%)
- Manage Test Cases (34%)
- Manage Test Execution (35%)
Check out the details here and see Charles Sterling’s blog for additional information.
If you are interested we have a course on all these items that will help prep you for the exam.
Contact O# at 877-SO-SHARP
Test Scribe – a tool designed to export your Test Plan to a Word document. The Test Scribe template can be customized; Shai Raiten has two articles on how to do so:
How to customize test scribe template
Test Scribe – Developer Guide
Regular Expression Tester Extension – Parses regular expressions from your code, so you can modify and test them and insert the updated versions. Matches and groups are highlighted for an easy overview of exactly what captures your regular expression generates. Also allows you to save your regular expressions.
Silverlight Plugin - Using the Microsoft Visual Studio UI Test plugin for Silverlight, you can create Coded UI Tests or action recordings for Silverlight applications.
TFS Power Tools – blog by Brian Harry
If you have an MSDN subscription, then you have access to Windows Azure functionality at no cost. Now the level of functionality (in terms of storage, compute hours, etc) depends on the level of your MSDN subscription. But even at the lowest level, there is still enough to let you thoroughly play with the features that have been made available.
Unless, that is, you’re not careful.
I actually had my free subscription run out of money last month. Not because I was doing anything exceptional with it. But because I hadn’t thoroughly cleaned my toys up after I was done. So let me give you a couple of pointers on what you might need to clear up. Specifically in the area that got me…virtual machines.
The Virtual Machines that are available in Windows Azure are sweet. You can select an image from a gallery that includes Windows Server 2008, Windows Server 2012, SQL Server 2012, BizTalk and a number of Linux distributions. They’re nice to work with, especially as you’re testing out the new features. But when you’re finished with a machine, deleting it does not completely clean up after it. Specifically, creating a virtual machine also creates an image of that virtual machine (the VHD file) in Azure storage, and this image outlives the deletion of the virtual machine, taking up storage space. In my case, I had created a VM with a 1 TB disk, which left 1 TB sitting in my blob storage. That ate through my 45 GB/month limit very, very quickly.
Cleaning up completely after setting up a new VM requires a few more steps than deleting the VM itself. Go into the Windows Azure portal (http://manage.windowsazure.com) and get to the Virtual Machines section. Even though no virtual machines are defined (I had already deleted mine), click on the Disks section. Now you’ll see the OS disk related to the VM that you had created (and deleted). This is the source of the ‘offending’ storage.
Once the disk is selected, click on the Delete Disk icon and Delete Associated VHD from the menu. This option removes not only the disk, but also cleans up the item kept in storage. If you hadn’t deleted the VHD, the disk would be removed, but the image would still be maintained in storage. Keeping the meter running, so to speak.
At this point, you have gotten rid of the ‘costly’ portion of the deleted VM. To complete the cleanup, go back to the main Windows Azure portal and get into the Storage accounts. Then select the storage account for the VM (for me, it had a name like portalvhds95qxznsn1dlm8) and click on the Delete icon. This will completely clean up the VM.
By the way, if you try to delete the storage account (or any of the containers in it, or even the blob within a container) before the disk is deleted, you will be unable to do so. The error message while the disk is still around is “There is currently a lease on the blob and no lease ID was specified in the request”. It’s not particularly clear what’s happening (as I can tell you from experience), but hopefully, by including it here, the next people who run into the same message will have a better idea of how to address it.
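If you would rather script the cleanup, the (classic) Windows Azure PowerShell module appears to cover both steps. This is a sketch, not a tested script: the disk name is made up, while the storage account name comes from the example above.

```powershell
# Remove the disk AND its underlying VHD blob in one step
# (the equivalent of Delete Disk + Delete Associated VHD in the portal).
# "myvm-myvm-0-201212010000" is a hypothetical disk name.
Remove-AzureDisk -DiskName "myvm-myvm-0-201212010000" -DeleteVHD

# Then remove the storage account that was created for the VM.
Remove-AzureStorageAccount -StorageAccountName "portalvhds95qxznsn1dlm8"
```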
Recently we came across an interesting bug in Office 365. In a scenario where you use ADFS to authenticate your Office 365 users and some of the users have multiple email address aliases assigned using adsiedit.msc, Lync might display the wrong name.
For example, a user’s name is Walter and his primary email address is email@example.com (not a real email address). Imagine that Walter’s colleague Jesse is leaving the company, and they need Walter to take over Jesse’s clients and make sure that all emails addressed to Jesse are now sent to Walter. At the same time, you don’t want to keep Jesse’s mailbox active, because Office 365 charges you per mailbox and that would be a waste of money. So you archive Jesse’s existing mailbox and add an alias, firstname.lastname@example.org, to Walter’s mailbox. And, because you use ADFS, you have to add aliases using adsiedit.msc instead of going through the Office 365 management portal. Makes sense, right? Well, this is where it starts to get interesting and very, very confusing. Now, when Walter logs into Lync, some users will see Jesse’s name show up in their Lync client instead of Walter’s. Weird, isn’t it?
What appears to be happening is that the Lync Address Book Service (ABSConfig) queries the proxyAddresses attribute in the user’s properties and uses whichever entry the query returns first. Because the proxyAddresses field stores data in alphabetical order, in Walter’s user attributes the “Jesse” entry comes before “Walter.” That’s why we see the wrong name displayed. It’s that simple.
Anyway, if this were an on-premises Lync server, there would be at least a couple of fixes for this problem, both involving changes on the server side. But this is Office 365, and we do not have access to the server side. What are those of us living in the cloud supposed to do?! As far as I know, there is no fix, but there is a workaround. Instead of creating email address aliases using adsiedit.msc, you can:
- Create a distribution list in the Office 365 management portal. Make sure to allow external senders to send emails to this distribution list, so that emails don’t bounce back.
- Assign any email address aliases to that distribution list right from the Office 365 management portal, for example email@example.com or firstname.lastname@example.org.
- Add the intended recipient(s) to the distribution list, for example email@example.com. Now, when people send email to Jesse, every message will be delivered to Walter’s mailbox, and everyone will see Walter as Walter when he signs into Lync. It’s a win-win.
- (Optional) Hide the distribution list from the Address Book, so your people don’t get confused when they search the internal Global Address Book.
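If you manage Office 365 through remote PowerShell against Exchange Online, the steps above can be sketched as follows. All of the names and addresses are the placeholders from the example; treat this as a sketch rather than a tested script.

```powershell
# Create the distribution list that will hold Jesse's old address.
New-DistributionGroup -Name "Jesse Forwarding" -PrimarySmtpAddress "firstname.lastname@example.org"

# Allow external senders (so mail doesn't bounce) and hide the list
# from the Global Address Book.
Set-DistributionGroup -Identity "Jesse Forwarding" `
    -RequireSenderAuthenticationEnabled $false `
    -HiddenFromAddressListsEnabled $true

# Route anything sent to Jesse's address to Walter's mailbox.
Add-DistributionGroupMember -Identity "Jesse Forwarding" -Member "email@example.com"
```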
Well, it’s not exactly a fix, it’s a workaround and it will do for now. I do hope though that Microsoft will fix this bug in Office 365. Sometime in the next 20 minutes would be great. ;)
One of the joys of distributed development teams is unexpected locks. In this particular case, the file was locked by a very distributed developer. And I needed to get it unlocked, as the lock was preventing a build from running. Oh, and I was using tfspreview.com as the source control repository.
Step 1 – Determine the workspace
In order to perform an unlock/undo, you need to know the workspace and user involved. To find a user’s workspace, there is a workspaces option for the tf command-line tool. So open up the Visual Studio Command Line window and navigate to your locally mapped directory for the project. This navigation is important, as it allows you to minimize some of the command-line options that we will be using.
Once you’re in the directory, execute the following command:
tf workspaces /owner:domain\userid
In this case (since we’re using tfspreview.com), the domain\userid is actually the Live ID of the user who currently holds the lock. The output from this command includes the name of the workspace in question.
Step 2 – Undo pending changes (thus releasing the lock)
Another tf command is required for this step. In the same command line window, execute the following command:
tf undo itemspec /workspace:workspace;domain\userid
In this case, the itemspec is the path to the locked item (for example, $/MyProject/Directory/fileName.txt), the workspace is the name of the workspace identified in Step 1, and domain\userid is the login ID (or Live ID, in our case) of the person who owns the workspace (and who has the item checked out).
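Putting the two steps together with made-up sample values (the workspace name and Live ID below are hypothetical), the session might look like this:

```
rem Step 1: find the workspace belonging to the user who holds the lock.
tf workspaces /owner:someone@example.com

rem Step 2: undo the pending change, which releases the lock.
rem DEV-LAPTOP is the workspace name reported by Step 1.
tf undo "$/MyProject/Directory/fileName.txt" /workspace:DEV-LAPTOP;someone@example.com
```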
And voila. The lock is undone, and I’m now free to wreak havoc…er…check in my code.