IE JavaScript debugging is near-useless when try/[catch|finally] is used

As far as I can tell, IE JavaScript debuggers, such as Visual Studio or the new IE8 Developer Tools, have no ability to break on “first chance” JavaScript errors. If I had to guess, this is probably because of limitations in the JavaScript engine implementation more than in the tools. Whatever the reason, you run into serious problems trying to identify where your errors are occurring if you have a try/catch/finally anywhere in the call stack.

If you just have a try/finally, what happens is that the debugger breaks with the current line of execution sitting on the opening curly brace of the finally block. The entire stack where the error actually occurred has been unwound, and you’re left with no context with which to debug:

[Screenshot: the debugger stopped on the finally block’s opening brace, the original call stack gone]

A try/catch isn’t much better. Now the debugger won’t even break, since you’ve “handled” the error, so you’re left having to make sure you have some sort of breakpoint or logging in every one of your catch blocks. What sucks even more is that the JavaScript Error class carries no contextual information about the script that was executing (e.g. call stack, file name, line number). This isn’t really Microsoft’s fault; whoever designed JavaScript thought it was important to have structured error handling, but not to attach any useful debugging information to the error that occurred. *sigh*

So right about now you’re probably all like “who the hell honestly uses try/[catch|finally] in JavaScript anyway?” Well, truth be told, I can’t think of a single time I’ve put it into my own code, but uhh… guess what does have it? ASP.NET AJAX. And guess where it is? Around the async callback code for the XMLHttpExecutor. Why is this such a big deal? Because it basically means that if an exception occurs anywhere within the stack of the callback function you provide, you cannot debug it. Since one usually ends up executing a lot of code in response to one’s data coming back, this is a total pain in the ass.

The good news is the latest ASP.NET AJAX previews have removed the try/finally. The bad news is that’s not RTM yet, so here are some options:

  • Debug with Firefox using Firebug. It does support first-chance exceptions and will stop right where the exception occurs.
  • Plop a debugger; statement in your root callback function and step through/into the code line by line until it dies. What sucks about this is you have to do it once to find out where it dies, then do it all over again, this time avoiding the call that killed it, so you can inspect the state of things around it.
  • Pepper your source code judiciously with Sys.Debug.trace calls so you can figure out the last thing that happened before it tanked. Potentially a “good” thing anyway for tracing during testing, but definitely a pain in the ass to maintain, and it has a runtime performance impact you’ll want to strip from your release-mode scripts.

JIT’d JavaScript is all the rage and Microsoft dropped the ball again

There’s a lot of buzz lately about browsers finally getting JIT’d JavaScript. First it was SquirrelFish in WebKit, then Firefox let the cat out of the bag about their implementation, called TraceMonkey, and then Google came out with V8 when they unleashed Chrome on the world. Kudos to all of those teams for pushing performance forward, since DHTML/AJAX apps these days are really starting to expose the weaknesses of current JavaScript engine implementations. Now’s when I turn this into a rant on how Microsoft dropped the ball again…

Let’s be honest, Microsoft held the crown for quite some time with their ActiveScripting engine. It was a pretty damn good implementation considering it was created in 1996, and only recently did they actually start to care about its performance again. Microsoft has had a .NET-based JavaScript engine since .NET 1.0 (announced in 2000, released in Feb. 2002) which runs on the CLR and, as a result, has spanked the crap out of the ActiveScripting engine since day one. Granted, IE6 was released in 2001, so there’s pretty much no way it could have been in there, but why the heck didn’t they switch to it in IE7? They could have been ahead of the curve and would probably still hold a performance crown against these other implementations (or at least come close). Better yet, why didn’t they at least switch to it in IE8?!? In fact, let’s take it one step further. More recently, as part of the work on the Silverlight 2.0 platform, Microsoft has moved ahead with the Dynamic Language Runtime (DLR) initiative, providing an entirely new, cross-platform implementation of the CLR and a DLR flavor of JavaScript which is probably even more efficient. So, even better, why didn’t IE8 build on that? Why didn’t IE8 just take a dependency on Silverlight 2.0, which is a freakin’ ~2 MB add-on, at least a third of which would probably be offset by removing ActiveScripting? Not only that, but it would help increase the install base of Silverlight. Can you say “win, win”? I knew you could.

I’ve worked with Microsoft technology my entire career and I’ve seen the various departments blow integration opportunities so many times I’m really starting to get sick of it. At least the newer projects within Microsoft seem to be doing a better job, so perhaps they’ve finally got some architects in there who are actually taking in the bigger picture and doing a lot more cross-pollination, but man, seeing IE8 blow it again just makes me shake my head. Between keeping the antique ActiveScripting engine and writing yet another version of a rendering engine that pretty much redoes everything WPF does with nowhere near the extensibility or features, I just gotta wonder what that team is thinking. They must have a serious case of “not invented here” syndrome and like writing/maintaining a bunch of plumbing code rather than focusing on higher-level HTML/CSS-specific stuff or spending more time building better browser features. *sigh*

Velocity Cache API needs TryGetValue

Ok, I’ve just started working with Microsoft’s Distributed Caching API (aka “Velocity”) and while I’m very happy with the features thus far (can’t wait for notifications!), I really think the API needs a TryGetValue method. Right now you have the Get, GetAndLock and GetIfNewer methods, and all of them return Object. My suggestion is two-fold:

  1. Add the TryGetValue method with similar overloads to Get. Return a bool which, if true, indicates the item was found.
  2. Take it a step further and make the method generic. This will help when working with simple value types like DateTime, Int32, Guid, etc.

Imagine you’re caching a DateTime and want to look it up… here’s an example of how you need to do that with the API today:

object cacheValue = cache.Get("MyCachedDateTime");
DateTime myDateTime;

if(cacheValue != null)
{
   myDateTime = (DateTime)cacheValue;
}
else
{
   myDateTime = CalculateSomeComplicatedDateTime();

   cache.Add("MyCachedDateTime", myDateTime);
}

// … use myDateTime here …

Notice the annoying need for a temporary object variable (“cacheValue” in the sample). You need that because you can’t cast the result straight to a value type like DateTime without first checking it for null. Now let’s look at what it might look like with the TryGetValue implementation I’m suggesting:

DateTime myDateTime;

if(!cache.TryGetValue<DateTime>("MyCachedDateTime", out myDateTime))
{
   myDateTime = CalculateSomeComplicatedDateTime();

   cache.Add("MyCachedDateTime", myDateTime);
}

// … use myDateTime here …

There’s no denying the second version results in less code and, IMHO, this pattern is far more legible.
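In fact, you can approximate this yourself today with a .NET 3.5 extension method. Here’s a minimal sketch; note that ICache below is just a stand-in interface I’m declaring for whatever the real Velocity cache type is, and the only member I’m assuming it has is the Get method shown above:

public interface ICache
{
    // Stand-in for the real Velocity cache type; only the Get method
    // demonstrated in the samples above is assumed to exist.
    object Get(string key);
}

public static class CacheExtensions
{
    public static bool TryGetValue<T>(this ICache cache, string key, out T value)
    {
        object cacheValue = cache.Get(key);

        // 'is' evaluates to false for null, so a cache miss (or a value of
        // an unexpected type) falls through to the default below.
        if(cacheValue is T)
        {
            value = (T)cacheValue;
            return true;
        }

        value = default(T);
        return false;
    }
}

Obviously I’d rather see this on the cache class itself, since an extension method can’t fix the product API for everyone, but it does make the calling pattern above work today.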

ScriptReferences to ScriptResource.axd for GAC’d assemblies are problematic in server farms

Ok, I just discovered a nasty little problem with ScriptReferences to script files that are embedded in assemblies that are installed in the GAC… starting with System.Web.Extensions itself.

First, in case you’re not already familiar with this subject, the way scripts embedded into assemblies are referenced is by building a URL to the ScriptResource.axd handler provided by the ASP.NET AJAX server-side runtime. The URL that is built includes two query string parameters:

  1. “d” – an encoded/hashed copy of the assembly identity that includes its typical .NET assembly identity info (name, version info, etc.)
  2. “t” – a timestamp parameter taken from the assembly file’s last-modified date on the file system.

Second, it’s important to realize that when assemblies are placed into the GAC, the copy that is put into %SYSTEMROOT%\Assembly actually receives a new last-modified time. This was news to me personally, but then again, the times on those files never mattered in a traditional .NET application. So, if you’re installing a GAC-based assembly on multiple servers in a web farm, there’s virtually no way the last-modified time on the assembly on one server will end up matching the time on the other servers… unless you’re deploying from a virtual server image, of course.

So, with those two details out there, I’m sure you already see the problem: the “t” parameter will be emitted differently by each server in your farm, resulting in different URLs.

So why is this a problem? Well, your end users hit www.foobar.com and ServerA fields the first request. ServerA’s ScriptReference to MicrosoftAjax.js serves up a ScriptResource.axd URL with t=1234 (just an example). Now the user does something that requires a new page to be served up and that request ends up going to ServerB. Well, ServerB’s ScriptReference to MicrosoftAjax.js serves up a ScriptResource.axd URL with t=5678 and guess what? Yup, the user has to download that rather large script file all over again, because as far as their browser knows, the URL is different, so the content must be different.

That’s the typical problem most people using ASP.NET AJAX will have: non-runtime, silent, not deadly, but killing performance nonetheless. Another problem, if you’re doing more advanced work and dynamically loading scripts (using the ScriptLoader), is that the URLs will not match, so you can end up trying to load the same script twice and getting a runtime error.

This may not seem all that significant, but consider that it will happen for every script served out of every GAC-based assembly you reference in your web project. Depending on how large those files are and how many servers you have in your farm, you may be costing your end users quite a bit of extra network load and startup time.

<rant>
IMHO, the inclusion of the “t” query string parameter flies in the face of the power of .NET assembly identity. They’ve taken a well-defined, working system, which was already being leveraged for the “d” parameter, and completely broken it by including another piece of information to identify the assembly. The only plausible explanation for the “t” parameter is that they want to support the “lesser” developers out there who aren’t incrementing their assembly version numbers correctly. Or maybe it has to do with the “pure” web site-style projects (which I never use personally; I always use web application projects), whose assemblies, I’d be willing to bet, don’t even carry any version information by default.
</rant>

So how can you solve this problem? Well, there are several ways I can think of, none of them great:

  1. You can move to hosting your GAC’d scripts on the file system, but this forfeits the entire benefit of having them in the assembly in the first place.
  2. Handle the ScriptManager.ResolveScriptReference event and write the script paths yourself without the “t” parameter (see the sketch after this list). The problem with this approach is that if you want to build links to ScriptResource.axd yourself, you’re up the creek without a paddle, because all of the methods for building those URLs are marked internal within the System.Web.Extensions assembly.
  3. You can choose to “Copy Local” the GAC based assemblies in your build, but this really defeats the purpose of assemblies being installed in the GAC in the first place and has farther reaching implications for your applications.
  4. You can manually go and set the last modified time on those assembly files in the GAC across all your servers using a script of some kind.
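For what it’s worth, here’s roughly what the hook from option #2 looks like. The ResolveScriptReference event and its ScriptReferenceEventArgs are real ASP.NET AJAX API, but the script name check and the local path below are purely illustrative, and since the axd URL builders are internal, about all you can realistically do from here is point the reference at a copy of the file you host yourself (which drags in option #1’s trade-off):

// Sketch only: wired up via OnResolveScriptReference on the <asp:ScriptManager>
// tag (or by subscribing in code). The name/path mapping below is made up.
protected void ScriptManager1_ResolveScriptReference(object sender, ScriptReferenceEventArgs e)
{
    if(e.Script.Name == "MicrosoftAjax.js")
    {
        // Drop the assembly-based identity and serve our own copy instead,
        // sidestepping the "t" parameter entirely.
        e.Script.Name = String.Empty;
        e.Script.Assembly = String.Empty;
        e.Script.Path = "~/Scripts/MicrosoftAjax.js";
    }
}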

We’ve opted for #4 here at Mimeo. We’re using a PowerShell script to set FileInfo.LastWriteTime on the GAC-based assemblies to a fixed date across all the servers. It’s a hack, sure, but at least it’s a server configuration hack and not a runtime hack.
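The script itself is trivial. Here’s the same idea expressed as a C# sketch (ours is actually PowerShell, and the pinned date below is just an example; any date works as long as every server agrees on it):

using System;
using System.IO;

class PinGacTimestamps
{
    static void Main()
    {
        // Pin every assembly under the GAC folder to one agreed-upon timestamp
        // so each server in the farm emits the same "t" value. Needs to run
        // with administrative rights on each box.
        DateTime pinnedDate = new DateTime(2008, 1, 1);

        string gacRoot = Path.Combine(
            Environment.GetEnvironmentVariable("SYSTEMROOT"), "Assembly");

        foreach(string file in Directory.GetFiles(gacRoot, "*.dll", SearchOption.AllDirectories))
        {
            File.SetLastWriteTime(file, pinnedDate);
        }
    }
}

In practice you’d probably want to filter this down to just the assemblies your ScriptReferences actually point at, rather than touching everything in the GAC.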

Finally, I have submitted a bug to the connect.microsoft.com site about this problem. Please go and vote on it if you feel it’s as important as I do.

ADO.NET Entity Framework’s CompiledQuery when using anonymous projections shows why C# needs “mumble” types

Ok, that’s a really long title for a post, so what the heck do I mean by all that? Well, I’ve started working with the CompiledQuery class and I’ve run into a language limitation that almost negates any performance gain I might get from CompiledQuery when using projections: the projected type cannot be anonymous, so you’re forced to define a new class yourself to represent the projection.

First off, if you’re not familiar with CompiledQuery, it’s basically an optimization provided by the ADO.NET Entity Framework: it takes your LINQ expression once, generates a cached execution plan for it (not in the SQL sense, but in the entity sense) and returns a Func delegate that takes parameters, allowing you to plug in any dynamic values you might need for the query (e.g. for a where clause).

Secondly, if you’re not familiar with the C# “mumble” type problem, jump on over to this excellent post from Ian Griffiths where he details the issues quite well and even talks about the language changes that the C# team was thinking about making to support the concept.

Go ahead… I’ll wait. Done yet? K. 🙂 So here’s why CompiledQuery suffers from this annoying problem. The whole point of CompiledQuery is that you call Compile once for the LINQ expression and store the resulting Func delegate instance so you can reuse it over and over. So ideally you’d want to store this Func delegate in a static variable or, at bare minimum, a member variable, right? Here’s how you could create a compiled query that finds all orders for a customer in a specific status. Note that for my examples, I’m assuming we’ve created an EF model over the AdventureWorksLT DB.

public static Func<AdventureWorksLTEntities, int, int, IQueryable<SalesOrderHeader>> GetCustomersOrdersByStatus =
    CompiledQuery.Compile((AdventureWorksLTEntities entities, int customerId, int status) =>
        from order in entities.SalesOrderHeader
        where order.CustomerID == customerId
            && order.Status == status
        select order);

I can then call this function and consume its results like so:

using(AdventureWorksLTEntities entities = new AdventureWorksLTEntities())
{
    foreach(var order in MyCompiledQueries.GetCustomersOrdersByStatus(entities, someCustomerId, someStatusValue))
    {
       /* ... do something with the order ... */
    }
}

Now, the above example actually works out just fine because we’re selecting the entire SalesOrderHeader entity type, so I know the type parameter for my IQueryable<T> is going to be SalesOrderHeader. The problem comes in when you want to do a projection of fields, either from multiple entities or to reduce the number of fields you bring back from a single entity, because that results in an anonymous type being generated. For example, assume I want to bring back some data from the customer and just some data from the order for presentation in a list view:

public static Func<AdventureWorksLTEntities, int, int, IQueryable<???>> GetCustomersOrdersByStatus =
    CompiledQuery.Compile((AdventureWorksLTEntities entities, int customerId, int status) =>
        from order in entities.SalesOrderHeader
        join customer in entities.Customer on order.CustomerID equals customer.CustomerID
        where customer.CustomerID == customerId
            && order.Status == status
        select new
        {
            CustomerId = customer.CustomerID,
            CustomerFirstName = customer.FirstName,
            CustomerLastName = customer.LastName,
            CustomerCompanyName = customer.CompanyName,
            OrderSalesOrderId = order.SalesOrderID,
            OrderDate = order.OrderDate,
            OrderTotalDue = order.TotalDue
        });

So, as you can see, I’m now trying to project an anonymous type composed of values from two separate entity types. The problem is not the LINQ query; it works just fine by itself. The problem is that I can’t write out the type parameter for the IQueryable<T> return type of my Func variable, because that type is anonymous. This is where the ability to use a “mumble” type and say IQueryable<var> would help out tremendously, because the compiler does know the anonymous type.

The only solution to this problem today, AFAICT, is to declare a class to represent the projection, use it to type the Func signature, and select it instead of an anonymous type in the LINQ query:

public sealed class CustomerOrder
{
    public int CustomerId
    {
        get;
        set;
    }

    /* ... declare other properties here ... */
}

public static Func<AdventureWorksLTEntities, int, int, IQueryable<CustomerOrder>> GetCustomersOrdersByStatus =
    CompiledQuery.Compile((AdventureWorksLTEntities entities, int customerId, int status) =>
        from order in entities.SalesOrderHeader
        join customer in entities.Customer on order.CustomerID equals customer.CustomerID
        where customer.CustomerID == customerId
            && order.Status == status
        select new CustomerOrder
        {
            CustomerId = customer.CustomerID,
            CustomerFirstName = customer.FirstName,
            CustomerLastName = customer.LastName,
            CustomerCompanyName = customer.CompanyName,
            OrderSalesOrderId = order.SalesOrderID,
            OrderDate = order.OrderDate,
            OrderTotalDue = order.TotalDue
        });

Depending on how large your projection is, this forces you to write a hell of a lot of code which, as of today, is very manual. It would be great to have a tool that could generate a strong type based on the anonymous type syntax. Maybe there’s one out there today that I don’t know about?

Anyway, it’s still all stuff the compiler should be smart enough to do for you. The annoying thing is that it already does this work; it’s just limited to local scope. Perhaps another solution would be, instead of using the var keyword to “mumble”, giving us a way to name the anonymous type inline, so that the type gets a name we choose instead of a random compiler-generated one.

David Teitlebaum on Channel 9 reviewing WPF 3.5 SP1 features

It’s WPF week over on Channel 9 and a new episode went up yesterday which has David Teitlebaum, a PM on the WPF team, giving an overview and demos of the new lower level features that SP1 brings to the table. You don’t want to miss it, so hop on over and check that out.

I think we all know the Effects stuff is the most sought-after feature, but a lot of people have also been waiting for WriteableBitmap since WPF 3.0. Now they have it and, judging by the demo, the performance is amazing.

Visual Studio 2008 and .NET Framework 3.5 SP1 Beta Now Available

Somasegar broke the news this morning and provides some details of what to expect from the service packs. Jump on over to his blog to read about it and then click here to start downloading.

Also note that if you’re using the Expression Blend 2.5 preview, there’s an update that makes it work with 3.5 SP1. You can read more about that over here.

Update 6PM (ET):
Even though I’ve uninstalled all other betas/CTPs that are documented (and then some), I am unfortunately unable to install the .NET Framework service pack due to the following error, which occurs about 2 seconds into the installation:

[05/12/08,18:01:05] Microsoft .NET Framework 2.0SP1 (CBS): [2] Error: Installation failed for component Microsoft .NET Framework 2.0SP1 (CBS). MSI returned error code 1
[05/12/08,18:01:16] WapUI: [2] DepCheck indicates Microsoft .NET Framework 2.0SP1 (CBS) is not installed.

I’ve done some searching around and this doesn’t seem to be a 3.5 SP1-specific problem. I’ve tried some of the suggested solutions out there, but have yet to get anywhere.

Zune now offers TV shows

Alright, finally! A step in the video direction for Zune. Starting today, Zune offers a plethora of TV shows for download from the Zune Marketplace. Microsoft really should have been out of the gate way before Apple’s iTunes with this stuff, because they already had all the content for the 360. These are pretty much the same shows you can get on Xbox 360, though not all of them (yet?).

My biggest question at this point is: If I bought an episode on 360, do I get it for my Zune? Vice versa? If not, why not (other than greed)? It’s the same content in a different format.

Oh yeah, there’s also a new software and firmware upgrade that comes with this, so make sure to go into settings and click the update software button. In addition to the video stuff, the software update adds some new community features that are pretty nifty.