Because the hosting provider I was using for Syfuhs.net was less than stellar (names withheld to protect the innocent), I’ve decided to move the blog portion of this site to blogs.objectsharp.com.
With any luck the people subscribing to this site won’t see any changes, and any links directly to www.syfuhs.net should 301 redirect to blogs.objectsharp.com/cs/blogs/steve/.
As I learned painfully during the last migration to DasBlog, permalinks break easily when switching platforms. With any luck, I will have that resolved shortly.
Please let me know as soon as possible if you start seeing issues.
Cheers!
Sometime last week I sent out an email to quite a few people:
As is the way of things in the tech industry, jobs change. More specifically,
mine.
Sometime around October 1st this email will be turned off as I am starting
a new position with ObjectSharp working with some of the brightest minds in Toronto.
If you need to get in touch with me after that date you can do it through a few channels.
My personal email is steve@syfuhs.net, which
gets checked more often than it should, and my O# email will be ssyfuhs@objectsharp.com.
Cheers
Steve Syfuhs, MCP
Soon to be ex-Software Developer / Database Analyst
Woodbine Entertainment Group
416.675.3993 Ext 2592
While I really enjoyed my job here at Woodbine (even though I complained about it
from time to time), it’s time for change and to move on to new opportunities.
Barry Gervin offered me such an opportunity and I start with ObjectSharp on October
1st. Bonus for starting on a Friday.
My role has many functions. Some internal; some external. Some loud; some… not so much.
Some cryptic.
It sounds like it will be an amazing experience!
So… I leave you with this:
Over the past few months I have seen quite a few really cool technologies released
or announced, and I believe they have a very real potential in many markets.
A lot of companies that exist outside the realm of software development rarely have
the opportunity to use such technologies.
Take for instance the company I work for: Woodbine
Entertainment Group. We have a few different businesses, but as a whole
our market is Horse Racing. Our business is not software development.
We don’t always get the chance to play with or use some of the new technologies released
to the market. I thought this would be a perfect opportunity to see what it
will take to develop a new product using only new technologies.
Our core customers pretty much want race information. We have proof of this
in the mere fact that on our two websites, HorsePlayer
Interactive and our main site, we have dedicated applications for viewing races.
So let’s build a third race browser. Since we already have a way of viewing races
from your computer, let’s build it on the new Windows Phone 7.
The Phone – The application
This seems fairly straightforward. We will essentially be building a Silverlight
application. Let’s take a look at what we need to do (in no particular order):
- Design the interface – Microsoft has loads of guidance on following the Metro design. In future posts I will talk about possible designs.
- Build the interface – XAML and C#. Gotta love it.
- Build the business logic that drives the views – I would prefer to stay away from this; suffice it to say I’m not entirely sure how proprietary this information is.
- Build the data layer – Ah, the fun part. How do you get the data from our internal servers onto the phone? Easy: OData!
The Data
We have a massive database of all the Races on all the tracks that you can wager on
through our systems. The data updates every few seconds as changes come in
from the tracks, for things like cancellations or runner odds. How do we push
this data to the outside world for the phone to consume? We create a WCF Data
Service:
- Create an Entities Model of the database
- Create the Data Service
- Add the entity reference to the Data Service (see code below)
public class RaceBrowserData : DataService<RaceBrowserEntities> // type parameter assumed: the generated entity model context
{
    public static void InitializeService(DataServiceConfiguration config)
    {
        if (config == null)
            throw new ArgumentNullException("config");

        config.UseVerboseErrors = true;
        config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
        //config.SetEntitySetPageSize("*", 25);
        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
    }
}
That’s actually all there is to it for the data.
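On the phone side, consuming the feed might be sketched as follows. This is a hedged illustration rather than our actual code: it assumes a service reference has been generated from the Data Service, producing a RaceBrowserEntities context with a Races entity set of Race entities, and the service URL is a placeholder.

```csharp
// Placeholder names: RaceBrowserEntities, Races, and Race would come
// from the generated service reference; the URL is hypothetical.
var context = new RaceBrowserEntities(
    new Uri("http://example.com/RaceBrowserData.svc"));

// Network calls on Silverlight/WP7 are asynchronous, so the query runs
// through the Begin/End pattern instead of blocking the UI thread.
DataServiceQuery<Race> query = context.Races;

query.BeginExecute(asyncResult =>
{
    foreach (var race in query.EndExecute(asyncResult))
    {
        // Bind each race to the view model here.
    }
}, null);
```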
The Authentication
The what? Chances are the business will want to limit application access to
only those who have accounts with us, especially if we did something like
add in the ability to place a wager on a race. There are lots of ways to
lock this down, but the simplest approach in this instance is to use a Secure Token
Service (STS). I say this because we already have a user store and an STS, and duplication
of effort is wasted effort. We create an STS relying party (the application that
connects to the STS):
- Go to the STS and get the Federation Metadata. It’s an XML document that tells relying parties what the STS can do for them. In this case, we want to authenticate and get the available roles. A role returned this way is a claim, as defined by the STS. Somewhat inaccurately, the conversation goes like this:
  - App: Hello! I want these claims for this user: “User Roles”. I am now going to redirect to you.
  - STS: I see you want these claims; very well. Give me your username and password.
  - STS: Okay, the user passed. Here are the claims requested. I am going to POST them back to you.
  - App: Okay, back to our own processes.
- Once we have the metadata, we add the STS as a reference to the application and call a web service to pass the credentials.
- If the credentials are accepted, we get back the claims we want, which in this case would be the available roles.
- If the user has the role to view races, we go into the Race view. (All users would have this role, but adding roles is a good thing if we needed to distinguish between wagering and non-wagering accounts.)
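On the service side, once the token from the STS has been processed, the role check could look roughly like the following sketch using Windows Identity Foundation. This is illustrative only; the "RaceViewer" role name is an invented placeholder, not a value from our actual system.

```csharp
// Assumes WIF has already validated the token POSTed back by the STS
// and set the claims principal for the current request.
var principal = Thread.CurrentPrincipal as IClaimsPrincipal;

if (principal != null)
{
    var identity = (IClaimsIdentity)principal.Identity;

    // Look for a role claim issued by the STS.
    // "RaceViewer" is a hypothetical role name for illustration.
    bool canViewRaces = identity.Claims.Any(c =>
        c.ClaimType == ClaimTypes.Role && c.Value == "RaceViewer");

    if (canViewRaces)
    {
        // Continue into the Race view.
    }
}
```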
One thing I didn’t mention is how we lock down the Data Service. That’s a bit
trickier, and better suited for another post on the actual data layer itself.
So far we have laid the ground work for the development of a Race Browser application
for the Windows Phone 7 using the Entity Framework and WCF Data Services, as well
as discussed the use of the Windows Identity Foundation for authentication against
an STS.
With any luck (and permission), more to follow.
Unfortunately I will be unable to attend the ALM presentation later this afternoon,
but luckily I was able to catch it in Montreal last week.
When I think of ALM, I think of the development lifecycle of an application – whether
it be agile or waterfall or whatever floats your boat – that encompasses all parts
of the process. We’ve had tools over the years that help us manage each section
or iteration of the process, but there were some obvious pieces missing. What
about the SQL? Databases are essential to pretty much all applications that
get developed nowadays, yet for a long time we didn’t have much to help
streamline and manage the process of developing the database pieces.
Enter ALM for SQL Server. DBAs are now given all the tools and resources developers
have had for a while. It’s now easier to manage packaging and deployment of
databases, with better source control of SQL scripts and something really cool: database
schema versioning.
I have a story: sometime over the last couple of years, a developer wrote a small
application that monitors changes to database schemas through triggers, and then synced
the changes with SVN. This was pretty cool. It allowed us to watch what
changed when things went south. The problem was that it wasn’t necessarily reliable:
it relied on some internal pieces being added to the database manually, and it made finding
changes through SVN tricky.
With ALM, versioning of databases happens before deployment. Changes are stored
in TFS, and it’s possible to roll back certain changes fairly easily. Certain changes.
:)
That’s pretty cool.
A few months ago some friends of mine at Microsoft told me about a step-up promotion
that was going on for the release of Visual Studio 2010. If you purchased a
license for Visual Studio 2008 through Volume Licensing, it would translate into the
next edition up in the 2010 lineup. Seems fairly straightforward, but here
is the actual process:
So we upgraded our licenses to benefit from the step up. Problem was, we couldn’t
access any of the applications we were licensed to use (after RTM, obviously).
After a week or so of back and forth with Microsoft we finally got it squared away.
A lot of manual cajoling in the MSDN Sales system, I suspect, took place. It
turns out a lot of people were running into this issue.
Someone told me this issue got elevated to Steve B (not our specific issue, but the
step-up issue in general). I’m curious where things actually went wrong.
I suspect the workflow that was in place at the business level wasn’t in place at
the technical level, so everything ended up becoming a manual process. However,
that is purely speculative. Talk with Steve if you have questions.
In the end, everything worked out. I got Visual Studio 2010 installed (which
fricken rocks, btw), and my productivity will go up immensely once we get TFS deployed.
After, of course, the necessary drop while I’m downloading and playing with
the new MSDN subscription.
For those who are interested in the promotion, it’s still valid until the end of
April. Contact your account reps if you are interested.
While I am definitely not looking for a new job, I was bored and thought I would take
a stab at a stylized resume to see if I could hone some of my (lack of) graphics skills.
It didn’t turn out too badly, but I am certainly no graphics designer.
What do you think?
Tonight at the IT Pro Toronto we did a pre-launch
of the Infrastructure 2010 project.
Have you ever been in a position where you just don’t have a clear grasp of a concept
or design? It’s not fun. As a result, CIPS
Toronto, IT Pro Toronto, and TorontoSQL banded
together to create a massive event to help make things a little more clear.
The goal is to give you a clearer understanding of how corporate networks work, and perhaps
to explain why some decisions are made and why, in retrospect, some are bad decisions.
Infrastructure 2010 is about teaching you everything there is to know about a state-of-the-art,
best practices compliant, corporate intranet. We will build, from the ground
up, an entire infrastructure. We will teach you how to build, from the ground
up, an entire infrastructure.
Sessions are minimum 300 level, and content-rich. Therefore:
Well, maybe. (P.S. if you work for Microsoft, pretend you didn’t see that picture)
While I was in California last week I decided to visit the new Microsoft Store in
Mission Viejo. While there, the managers graciously allowed me to take pictures
of the store. Frankly, they probably thought it was a little creepy. But
nevertheless, they said go for it, and I did.
Now, Microsoft did one hell of a job making it known that the store existed while
I was at the mall. While I was grabbing coffee in the food court, these stickers
were on each table:
Following that, as you head towards the store you see two large LCD screens in the
centre of the walkway. On one side you have a Rock Band – Beatles installation
running on an XBox 360 in HD.
On the other side was a promotional video.
Microsoft designed their store quite well. Large floor to ceiling windows for
the storefront, with an inviting light wood flooring to create a very warm atmosphere.
While there were hundreds of people in the store, it was very welcoming.
Along the three walls (because the 4th is glass) is a breathtaking video panorama.
I’m not quite sure how to really describe it. It’s as if the entire wall was
a single display, running in full HD.
In the centre of the store is a collection of laptops and assorted electronics like
the Zunes. There’s probably a logical layout, perhaps by price or performance;
I wasn’t paying too much attention to that, unfortunately.
At the center-back of the store is Microsoft’s Answers desk. Much like the Apple
Genius Bar, except not so arrogant. Yes, I said it. Ironically, the display
for customer names looked very iPod-ish here, and in the Apple Store, the equivalent
display looked like XP Media Center. Go figure.
One of the things I couldn’t quite believe was the XBox 360 display overlaid on
the video panorama. The video engine for that must have been extremely
powerful; that had to be a 1080p display for the XBox. As a developer,
I was astonished (and wondered where I could get that app!). A few of the employees
mentioned that it was driven by Windows 7. Pretty freakin’ sweet.
Also in the store were a couple of Surfaces! This was the first time I actually
had the opportunity to play with one. They are pretty cool.
And that, in a few pictures, was my trip to the Microsoft Store. I also walked
away with a couple of in-store pamphlets describing training sessions and schedules
for quick Windows 7 how-tos.
Microsoft did well.
The other day I had the opportunity to take part in an interesting meeting with Microsoft.
The discussion was security, and the meeting members were 20 or so IT Pros, developers,
and managers from various Fortune 500 companies in the GTA. It was not a sales call.
Throughout the day, Microsofties Rob Labbe and Mohammad Akif went into significant
detail about the current threat landscape facing all technology vendors and departments.
There was one point that was paramount: security is not all about technology.
Security is about the policies implemented at the human level. Blinky-lighted devices
look cool, but in the end they will not likely add value to protecting your network.
Herein lies the problem: not too many people realize this, hence the purpose of
the meeting.
Towards the end of the meeting, as we were all letting the presentations sink in,
I asked a relatively simple question:
What resources are out there for new/young people entering the security field?
The response was pretty much exactly what I was (unfortunately) expecting: nada.
Security, it seems, is a mostly self-taught topic. Yes, there are some programs at schools
out there, but they tend to be academic – naturally. By this I mean that there is
no fluidity in discussion. It’s as if you are studying a snapshot of the IT landscape
that was taken 18 months ago. Most security experts will tell you the landscape changes
daily, if not multiple times a day. Therefore we need to keep up on the changes in
security, and as any teacher will tell you, that’s impossible in an academic setting.
Keeping up to date with security is a manual process. You follow blogs, you subscribe
to newsgroups and mailing lists, your company gets hacked by a new form of attack,
etc., and in the end you have a reasonable idea of what is out there yesterday. And
you know what? This is just the attack vectors! You need to follow a whole new set
of blogs and mailing lists to understand how to mitigate such attacks. That sucks.
Another issue is the ramp up to being able to follow daily updates. Security is tough
when starting out. It involves so many different processes at so many different levels
of the application interactions that eyes glaze over at the thought of learning the
ins and outs of security.
So here we have two core problems with security:
- Security changes daily – it’s hard to keep up
- It’s scary when you are new at this
Let’s start by addressing the second issue. Security is a scary topic, but let’s break
it down into its core components.
- Security is about keeping data away from those who shouldn’t see it
- Security is about keeping data available for those who need to see it
At its core, security is simple. It starts getting tricky when you jump into the semantics
of how to implement the core. So let’s address this too.
A properly working system will do what you intended it to do at a systematic level:
calculate numbers, view customer information, launch a missile, etc. This is a fundamental
tenet of application development. Security is about understanding the unintended
consequences of what a user can do with that system.
These consequences include things like:
- SQL Injection
- Cross Site Scripting attacks
- Cross Site Forgery attacks
- Buffer overflow attacks
- Breaking encryption schemes
- Session hijacking
- etc.
Once you understand that these types of attacks can exist, everything is just semantics
from this point on. These semantics are along the line of figuring out best practices
for system designs, and that’s really just a matter of studying.
Security is about understanding that anything is possible. Once you understand attacks
can happen, you learn how they can happen. Then you learn how to prevent them from
happening. To use a phrase I really hate using, security is about thinking outside
the box.
Most developers do the least amount of work possible to build an application. I am
terribly guilty of this. In doing so however, there is a very high likelihood that
I didn’t consider what else can be done with the same code. Making this consideration
is (again, lame phrase) thinking outside the box.
It is in following this consideration that I can develop a secure system.
So… policies?
At the end of the day however, I am a lazy developer. I will still do as little
work as possible to get the system working, and frankly, this is not conducive to
creating a secure system.
The only way to really make this work is to implement security policies that force
certain considerations to be made. Each system is different, and each organization
is different. There is no single policy that will cover the scope of all systems
for all organizations, but a policy is simple.
A policy is a rule that must be followed, and in this case, we are talking about a
development rule. This can include requiring certain types of tests while developing,
or following a specific development model like the Security Development Lifecycle.
It is with these policies that we can govern the creation of secure systems.
Policies create an organization-level standard. Standards are the backbone of
security.
These standards fall under the category of semantics, mentioned earlier. Given
that, I propose an idea for learning security.
- Understand the core ideology of security – mentioned above
- Understand that policies drive security
- Jump head first into the semantics, starting with security models
The downside is that you will never understand everything there is to know about security.
No one will.
Perhaps it’s not that flawed an idea.
Definition: a model used to help define who is responsible / accountable.
The RACI model is built around a simple 2-dimensional matrix which shows the 'involvement'
of Functional Roles in a set of Activities. 'Involvement' can be of different kinds:
Responsibility, Accountability, Consultancy or Informational (hence the RACI acronym).
The model is used during analysis and documentation efforts in all types of Service
Management, Quality Management, Process- or Project Management. A resulting RACI chart
is a simple and powerful vehicle for communication. Defining and documenting responsibility
is one of the fundamental principles in all types of Governance (Corporate-, IT-Governance).
What does that mean? All projects require management. Simple enough.
This model is designed to define each level of management and required interaction
on a project or application. The four core levels of involvement attempt to
define who should know what about the project/application/system. Each level
has more direct interaction than the previous level.
The levels are defined as:
- Responsible
Those who do the work to achieve the task. There is typically one role with a participation
type of Responsible, although others can be delegated to assist in the work
required (see also RASCI below for separately identifying those who participate
in a supporting role).
- Accountable (also Approver or final Approving authority)
Those who are ultimately accountable for the correct and thorough completion of the
deliverable or task, and the one to whom Responsible is accountable. In other
words, an Accountable must sign off (Approve) on work that Responsible provides.
There must be only one Accountable specified for each task or deliverable.
- Consulted
Those whose opinions are sought; and with whom there is two-way communication.
- Informed
Those who are kept up-to-date on progress, often only on completion of the task or
deliverable; and with whom there is just one-way communication.
Very often the role that is Accountable for a task or deliverable may also
be Responsible for completing it (indicated on the matrix by the task or deliverable
having a role Accountable for it, but no role Responsible for its completion,
i.e. it is implied). Outside of this exception, it is generally recommended that each
role in the project or process for each task receive, at most, just one of the participation
types. Where more than one participation type is shown, this generally implies that
participation has not yet been fully resolved, which can impede the value of this
technique in clarifying the participation of each role on each task.
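To make the matrix concrete, here is a small, entirely hypothetical chart for a database deployment (the roles and tasks are invented for illustration, not taken from any real project):

```
Task                   DBA   Team Lead   PM   Stakeholders
Write upgrade script    R        A       C        I
Review schema change    C       A/R      I        I
Run deployment          R        A       I        I
```

Note that "Run deployment" has exactly one Accountable role, and "Review schema change" shows the common case where one role is both Accountable and Responsible.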
Note: I stole most of that from Wikipedia.