Setting focus back to the control that caused the PostBack in an ASP.NET Form

SmartNavigation can be set to true on your ASP.NET WebForm so that when a postback occurs, the page, when rendered back to the browser, will navigate back to the control that caused the postback.

But SmartNavigation can be problematic, especially when dynamically loading controls onto your WebForm.

Therefore, if you have SmartNavigation turned off (set to false), below is a piece of code you can call from your WebForm that adds JavaScript to your page to automatically navigate back to the control that originally caused the postback.

I tested the code against IE6 and Netscape 7.1.

  /// <summary>
  /// This method will take the passed webPage and find the control that caused the postback. If it finds
  /// one, it will register JavaScript on the page to set focus to that control.
  /// </summary>
  /// <param name="webPage">The web page</param>
  public void SetFocusPostBackControl(System.Web.UI.Page webPage)
  {
   // Assumes: using System.Web.UI.WebControls;
   // The control that raised the postback is posted back in the hidden
   // __EVENTTARGET field by the __doPostBack client script.
   string[] ctlPostBack = webPage.Request.Form.GetValues("__EVENTTARGET");
   if (ctlPostBack != null && ctlPostBack.Length > 0)
   {
    string ctlUniqueId = ctlPostBack[0];
    System.Web.UI.Control findControl = webPage.FindControl(ctlUniqueId);
    if ((findControl != null) &&
     (findControl is DropDownList ||
      findControl is TextBox ||
      findControl is RadioButton ||
      findControl is RadioButtonList))
    {
     string ctlClientId = findControl.ClientID;
     string jScript = "<script language=\"javascript\">document.getElementById('" + ctlClientId + "').focus();"
      + "document.getElementById('" + ctlClientId + "').scrollIntoView(true);</script>";

     webPage.RegisterStartupScript("focus", jScript);
    }
   }
  }
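
To wire this up, you could call it late in the page life cycle, for example from OnPreRender in a common base page. A minimal sketch, assuming the method above is accessible to the page (the WebFormBase name is just for illustration):

public class WebFormBase : System.Web.UI.Page
{
 protected override void OnPreRender(System.EventArgs e)
 {
  base.OnPreRender(e);
  // Re-focus whichever control triggered this postback.
  SetFocusPostBackControl(this);
 }
}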

 

Developing an ASP.NET Framework From a Windows Forms .NET Perspective

A couple of months ago, I had to quickly develop an ASP.NET framework.
I incorporated parts of a Windows Forms .NET framework that I had previously worked on. The basic
premise is that a Windows Forms .NET form and an ASP.NET WebForm are both event driven
and have controls such as buttons and dropdowns.

There were two basic steps in developing this ASP.NET framework.

1) Creating ancestor code-behind classes for all the code-behind pages used in the project:

a) public class WebFormBase : System.Web.UI.Page -> For the Web Forms
b) public class WebUserControlBase : System.Web.UI.UserControl -> For the Web User Controls
 
When a WebForm or Web UserControl needs to be created, its code-behind inherits from the custom base class:

public class OrderWebForm : WebFormBase
public class ProductWebUserControl : WebUserControlBase

I think the above is a pretty standard thing to do.

The only thing I really did a little bit differently was to raise more events up to the descendent pages, such as:

Loading / Load
Initing / Init
PreRendering / PreRender
etc.

In this way the descendent code has a chance to do some work before and after the code in the ancestor.
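
As a minimal sketch of the idea (the plumbing shown here is an assumption; only the event names come from the list above):

public class WebFormBase : System.Web.UI.Page
{
 // Raised before any ancestor Load processing, so descendents can act first.
 public event System.EventHandler Loading;

 protected override void OnLoad(System.EventArgs e)
 {
  if (Loading != null)
   Loading(this, System.EventArgs.Empty); // "before" hook
  // ... ancestor-level Load work goes here ...
  base.OnLoad(e); // raises the standard Load event afterwards
 }
}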


2) All server-side controls used on a WebForm or Web UserControl inherit from the standard Microsoft web controls or from a third-party control:


public class MyWebButton : System.Web.UI.WebControls.Button
public class MyWebMenu : Infragistics.Web.UI.UltraWebMenu
etc., etc. As you know, there are many more: Hyperlink, Label, DataList and so on.

For this framework, that's pretty well it, in a nutshell.

This has really paid off for future development work, because server-side controls can now implement custom interfaces,
such as:
ITranslation
IDisable

Then in the code-behind base classes, WebFormBase and WebUserControlBase, all the code is there to handle translation of pages to French or English, or to enable or disable controls automatically depending on a custom property on the web page called Enabled. Other things that have been built into the framework are resource file management, session management, navigation management and a custom help button that launches another browser window with some help.
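
As a rough sketch of the enable/disable half (the IDisable member shown is an assumption; only the interface names come from the framework):

public interface IDisable
{
 bool Enabled { get; set; }
}

// In WebFormBase / WebUserControlBase: recursively toggle any control
// that implements IDisable when the page-level Enabled property changes.
private void ApplyEnabled(System.Web.UI.Control parent, bool enabled)
{
 foreach (System.Web.UI.Control child in parent.Controls)
 {
  IDisable d = child as IDisable;
  if (d != null)
   d.Enabled = enabled;
  ApplyEnabled(child, enabled); // walk the whole control tree
 }
}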

Closing a form before it shows

I'm working on a smart client application at the moment. One of the forms, under certain conditions, launches a wizard to gather information from the user.

From the user's perspective, they open the form and the wizard displays. If the user hits Cancel, I want the wizard to close and the form not to show.

So what is the best way to do this? There is no cancel in the Form.Load event. If you try to close in Form.Load, an exception is raised:

Cannot call Close() while doing CreateHandle().

I asked Google and found one solution. Controls have a public event, VisibleChanged, that fires when the Visible property is changed. In this event you can call the form's Close method.

This works fine, with one side effect: the form shows for a split second and then closes.

This will suffice, but if anyone knows a better way to do this, please let me know.
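
Here is roughly what that looks like; the form and wizard names are made up for the example:

public class LauncherForm : System.Windows.Forms.Form
{
 private bool cancelled = false;

 public LauncherForm()
 {
  this.Load += new System.EventHandler(LauncherForm_Load);
  this.VisibleChanged += new System.EventHandler(LauncherForm_VisibleChanged);
 }

 private void LauncherForm_Load(object sender, System.EventArgs e)
 {
  // Run the wizard during Load; remember whether the user bailed out.
  using (WizardForm wizard = new WizardForm())
  {
   cancelled = (wizard.ShowDialog(this) == System.Windows.Forms.DialogResult.Cancel);
  }
 }

 private void LauncherForm_VisibleChanged(object sender, System.EventArgs e)
 {
  // Close() throws if called during Load, but it is legal here.
  if (this.Visible && cancelled)
   this.Close();
 }
}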

 

 

Open Source Project for Testing Microsoft Software

Over the past few months, when I question how something works in the .NET Framework (or when somebody asks me), I have been creating NUnit tests to verify the behaviour of some class and/or methods in the .NET Framework. Initially it is just to observe the behaviour or verify some assumptions, but by the time I'm finished, I usually inject various assertions into my tests to tighten them up. These now serve as a test bed for me when moving to a new version (or even old versions) of the .NET Framework. I can answer the question: are any of my assumptions about how the 1.1 framework works broken in 1.2? 2.0? 9.0? etc.
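
For example, a trivial fixture pinning down one such assumption might look like this (the class and test names are mine, invented for illustration):

using NUnit.Framework;

[TestFixture]
public class FrameworkAssumptionTests
{
 // Pins down an assumption I rely on: Split with a separator that
 // doesn't occur returns a single-element array, not an empty one.
 [Test]
 public void SplitWithMissingSeparatorReturnsWholeString()
 {
  string[] parts = "abc".Split(',');
  Assert.AreEqual(1, parts.Length);
  Assert.AreEqual("abc", parts[0]);
 }
}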

I'm building up a nice collection and I might publish my work. But it struck me that this could be an open source project. In fact, I think it should be an open source project, and I think it should be started by Microsoft, and not necessarily for the .NET Framework alone - but that would be an easy place to start.

Microsoft has faced increasing pressure over the security and quality of their software - to the point that they've actually made Windows source code available to key customers, governments and MVPs. I think that's a bit risky if you ask me. I think it is also a bit hypocritical to point the finger at Linux for being “more hackable because source code is available” but at the same time make your own source code available to the Chinese government.

But why not publish the source code to unit tests (say, NUnit fixtures) in an open source format for the community to contribute to? When one of these security firms finds a hole in some MS software, they could create an NUnit test to expose it, submit it to Microsoft to fix, and then make the code for that NUnit test part of the open source project.

Instead of publishing source code, which is really meaningless for giving people any kind of comfort in the code, publishing unit tests is publishing assumptions and expectations about what software is supposed to do and how it is supposed to behave. I would think this will become more important over time, especially moving towards WinFX and Longhorn.

VS.NET Inherited Forms Bug

VS.NET bug. I was asked to look into a problem last week involving buttons moving around in the designer. The problem was occurring in inherited forms where the parent form contained some protected, anchored controls. When you resized the child form in the designer and built, VS.NET would generate some interesting locations and sizes, often off the form entirely. These location and size values are set in InitializeComponent(), which makes things difficult.

So, after some searching and digging, I found an interesting KB article that explains it's a bug.
However, Microsoft's suggested solutions are actually the causes in VS.NET 2003.
Basically there are a few options:
  • Once the controls have been moved, delete the offending locations and sizes from InitializeComponent() and they will inherit their values from the parent again.
  • Create another size and location property, and replace the erroneous values with these hard-coded values after InitializeComponent().
  • Change the protection level to Friend.


Obviously some of these solutions have their own problems, but the bug is fixed in the latest beta release of Whidbey, so just bide your time for a bit.

Delegation through NUnit Testing and TodoFixtures

Usually I'm the guy all the other developers are waiting on to create some reusable framework widget or other. I usually have 10,000 things on my plate, so when somebody asks me to do something or reports a bug in some of my code, I need to find a good way to delegate.

But if you are the subject matter expert (SME), it's tough to delegate the task of making the fix or adding the feature. If you flip that on its head, though: when you find yourself in this situation, by definition you are NOT the SME for the “feature request” or “bug”. Those are external to the actual implementation, which is probably what you are really the SME for. The SME for the request or bug is, of course, the finder of the bug or the requestor of the feature. So in the spirit of getting the right person for the job (and delegating a bit at the same time), get the requestor to create the NUnit test that demonstrates the bug or explains (with code - better than English can ever hope to) the request or how the requestor expects the feature to be consumed.

Case in point:

Random Developer A: Barry, there is a bug in the foobar you created. Can you fix it? I know you are busy, want me to give it a go? Do you think it's something I could fix quickly?

Barry: That's a tricky bit, but I think I know exactly what I need to do to fix it. It will take me 10 minutes - but I have 3 days of 10-minute items ahead of yours. Why don't you create an NUnit test for me that demonstrates the bug, and I'll fix it. Then it will only take me 2 minutes.

I also find NUnit tests a great way for people to give me todo items.

Random Developer B: Hey, I need a method added to FoobarHelper that will turn an apple into an orange, unless you pass it a granny smith, in which case it should turn it into a pickle.

Barry: Sounds reasonable. I can do that - but just to make sure I got all of that spec correct, would you mind creating an NUnit test that demonstrates the functionality you require? Thanks.

I do have to admit, though, that this requires a certain amount of charisma. On more than one occasion this technique has been met with some friction, unusual gestures and mumbling. :)

NUnit Testing Practices

Chicken and egg - TDD, class modeling, DevDrivenTesting, ModelDrivenTesting:
- Create a test first, and use it to code-gen the class you want to implement.
- Create a class first, and use it to code-gen a stubbed test.
- Model a class, capture metadata about the way it's supposed to work, and then generate both the class and the unit tests.

Black Box Testing, Service Boundaries and Persistence

When testing persistence, I often write an NUnit test that programmatically creates a new entity, jams some data into it (hard-coded in my test), then calls a data access layer to persist it. Then I create a new entity and ask my DAL to load it from the database (using the same identifier I used to create it). Then I just compare the entities.
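
In sketch form (Order, OrderDac and their members are hypothetical; the attribute and asserts are NUnit):

[Test]
public void OrderPersistenceRoundTrip()
{
 // Create an entity and jam some hard-coded data into it.
 Order original = new Order();
 original.CustomerId = 42;
 original.Total = 99.95m;

 // Persist it through the data access layer.
 OrderDac dac = new OrderDac();
 dac.Save(original);

 // Load a fresh copy by the same identifier and compare.
 Order loaded = dac.Load(original.Id);
 Assert.AreEqual(original.CustomerId, loaded.CustomerId);
 Assert.AreEqual(original.Total, loaded.Total);
}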

A developer I work with showed me today what he does. He creates an XML file with some test data and has a generic utility class to persist that into the database. He then creates a new entity

Reusable (almost automatic) Transactions

Can't afford the high performance overhead of COM+ distributed transactions through the Distributed Transaction Coordinator, but still want somewhat automatic transactions? Same connection, same transaction, but different Dacs?

DacBase[] dacs = new DacBase[3];

dacs[0] = new OrderDac();
dacs[1] = new CustomerDac();
dacs[2] = new EmployeeDac();

IDbTransaction trans = DbHelper.BeginTrans();
for (int i = 0; i < dacs.Length; i++)
{
 // Each Dac does its work on the one shared transaction,
 // e.g. OrderDac.Update(entity, trans), CustomerDac.Update(entity, trans), ...
 dacs[i].Update(trans);
}

trans.Commit();
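
A minimal sketch of the supporting pieces, assuming ADO.NET and SQL Server (DbHelper and DacBase are the names from the snippet above; the bodies are my guesses):

using System.Data;
using System.Data.SqlClient;

public class DbHelper
{
 public static IDbTransaction BeginTrans()
 {
  // One connection, one transaction, shared by every Dac.
  SqlConnection conn = new SqlConnection("...connection string...");
  conn.Open();
  return conn.BeginTransaction();
 }
}

public abstract class DacBase
{
 // Each Dac issues its commands on trans.Connection, enlisted in trans.
 public abstract void Update(IDbTransaction trans);
}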

Setting Boundaries for Services

So what does it mean to design an autonomous service? Based on my previous post, there are two possible issues to consider. First, the service needs to have a life outside of the client making the request. Second, the service needs to be self-healing, in that any dependence on the actual endpoint of the services it uses must be mitigated. To put this second point into an example: if Service A invokes Service B, then Service A must be capable of discovering Service B should Service B move. Service A should not be dependent on any manually updated configuration information to use Service B. Unfortunately, neither of these two considerations really helps to determine what the boundaries of an autonomous service should be.

To get a grasp on the criteria that we use for bounding a service, consider the following hierarchy.

[Figure 1 - Service Hierarchy]

The process service is a high-level interface where a single service method call invokes a series of smaller steps. These smaller steps could be either another process or a call to a business entity service. Eventually, at the bottom of each of the paths, there will be one or more business entity services. These business entities don't contain any data, but instead interact with a data source through a data representation layer. Each of the blocks in the hierarchy above the level of the data source *can* be a service. Whether they are or not is one of the questions to be answered.

Follow the data

The definition I have found most useful for identifying the boundary for a service is one across which data is passed. If there is no data moving between the caller and the callee, there is little need for a service-based implementation. Consider a service that provides nothing but functionality, with no data. One that, for example, takes a single number and returns an array of the prime factors. While such a service could definitely be created, the rationale for implementing it as a service is thin. After all, the same functionality could be embedded into an assembly and deployed with an application. Worried about being able to update it regularly? Place it onto a web server and use zero-touch deployment to allow for dynamic updating. So when trying to define the services, follow the data.

Given that little nugget of wisdom, take another look at the hierarchy in Figure 1. For someone to call a process service, some data must be provided. In particular, it needs to be passed sufficient information for the process to 'do its thing'. Want to invoke the “CreateOrder” process service? Give the service enough information to be able to create the order. This means both customer and product details. When defining the business services involved in the process (the next level in the hierarchy), the same type of examination needs to be made. Look at the places in the process where data is passed. These data transfer points are the starting point for boundary definition.

Keep it Chunky

The other criterion I use for defining service boundaries is based on the relatively nebulous concept of 'chunkiness'. The basic premise goes back to the first tenet of services: calls into a service may be expensive. This is not surprising, given that the movement of data across process or system boundaries is usually part of the process. As a result of the potential delay, the calling application's performance is improved by keeping the number of service calls to a minimum. This runs counter to the 'normal' coding style of setting properties and invoking methods on local objects.

Once the data flow has been identified (the object sequence diagram is actually quite useful in this regard), look at the interactions between two classes. If a series of call/response patterns is visible, that interaction is ripe for coalescing into a single service call.

The downside of this approach is potentially providing more information than would normally be needed. Say the normal call/response pattern goes something like the following:

Order o = new Order(customerId);
OrderLine ol;
ol = o.OrderLines.Add(productId1, quantity1);
ol.ShipByDate = DateTime.Now.AddDays(2);
ol = o.OrderLines.Add(productId2, quantity2);

In order to support the creation of order lines both with and without a custom ship-by date, the parameter list for any service would have to change. But there is a solution. One of the strengths of XML is its flexibility in this regard. The acceptable schema can be different. These differences can then be identified programmatically and the results changed as needed. For this reason, we usually pass XML documents as the parameter for service calls.
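
For example (the element names are invented for illustration, and orderService is a hypothetical proxy), the order above might travel as:

using System.Xml;

XmlDocument doc = new XmlDocument();
doc.LoadXml(
 "<order customerId=\"42\">" +
 " <line productId=\"1\" quantity=\"3\" shipByDate=\"2004-06-30\"/>" +
 " <line productId=\"2\" quantity=\"1\"/>" + // shipByDate simply omitted
 "</order>");

// A single chunky call instead of a series of property sets and Adds.
orderService.CreateOrder(doc);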

The result of this is a sense of where the boundaries of a service should be. First, look at the data passed between objects. Identify any series of calls between two objects. Then group the data passed through these calls into a single service using an XML document as the parameter.

Will this logic work for every possible case? Maybe not. But more often than you might think, this kind of design breakdown will result in a decent set of boundary definitions for the required services. The one drawback frequently identified by people is that this approach does not directly consider where the data is stored. While this is true, it is not that imperative. Accessing a data source can be done either through a separate service (identified by this analysis process) or through local objects. In other words, the segregation of data along business or process service boundaries is not necessarily a given. Nor, as it turns out, is it even a requirement.