Workflow + Portable Class Library (PCL) == No Intellisense In Visual Studio 2012-2013 (with fix!)

A few months ago I decided I wanted to bring some Reactive Extensions (Rx) awesomeness into some of my custom workflow activities. So, as any good .NET dev would do, I fired up the Package Manager Console, typed Install-Package Rx-Main and it installed Rx 2.1 for me. Then I started leveraging Rx APIs in my custom activity implementation and all seemed well… at first. At some point I had closed out of the solution and, when I re-opened it, I suddenly had no Intellisense for any types except those belonging to the mscorlib assembly, yet I could build the project no problem. I thought to myself “Wait, what? This was just working? What happened?”. After several attempts at trying to figure out what had gone wrong, I realized that it was adding the Rx library to the project that caused the problem. At first I thought maybe it was something specific about Rx itself, but I knew that couldn’t be it. Then I realized that Rx was a PCL lib (note: 2.1 was at the time; 2.2 now adds a .NET 4.5-specific assembly too). So I installed another PCL lib just to check and, sure enough, it caused the same problem.

Next, as a sanity check, I created a brand new WF Console Application to make sure it had nothing to do with my specific solution/projects. As soon as I added a PCL lib and closed and reopened the solution, I lost Intellisense. So, I did what any frustrated Visual Studio user should do and opened a bug on Connect. Unfortunately, that bug was promptly closed as “By Design” with a comment that it would be addressed in a future version of Visual Studio.

Now, I’d like to think I’m pretty good at memorizing APIs, but with the plethora of libraries that one works with these days, not to mention all the overloads, extensions, optional parameters, generic signatures, etc., it’s a very tall order to code at max efficiency without Intellisense support. This bug effectively breaks one of the key features of Visual Studio; shouldn’t they commit to fixing it in the next update, if not sooner? Yeah, I wasn’t satisfied with that response. This basically meant I would have to choose between throwing out WF and finding some other tech to solve those problems, or not being able to leverage PCL libs. The investment in WF was too large to just toss (not to mention I personally find WF to be an awesome technology) and, with PCLs becoming more and more prevalent, even from MS groups, this simply wasn’t a choice I could make. So, back against the wall, I spent about an hour yesterday debugging the root cause and coming up with a workaround.

First thing I did was fire up two instances of Visual Studio 2013. The first instance would be where I opened the Workflow Console Application project that I had been repro’ing the issue with, and the second instance would be attached as a debugger of the first. Next I turned off the “Just My Code” feature in the debugger instance and configured it to break on all first-chance CLR exceptions. Then I popped over to the other instance and opened up the Workflow Console Application project. After stepping through a plethora of benign exceptions from various sources, I started to see an exception saying that the System.Runtime assembly could not be located. Specifically, I saw this:

A first chance exception of type 'System.IO.FileLoadException' occurred in System.Xaml.dll

Additional information: Cannot resolve dependency to assembly 'System.Runtime, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' because it has not been preloaded. When using the ReflectionOnly APIs, dependent assemblies must be pre-loaded or loaded on demand through the ReflectionOnlyAssemblyResolve event.

Hmm, well, I know System.Runtime is type-forwarded to mscorlib in .NET 4.5, but why the heck wouldn’t that “just work”? Oh, wait… what’s that bit about ReflectionOnly? So I turned my attention to the stack trace:

System.Xaml.dll!System.Xaml.XamlSchemaContext.UpdateXmlNsInfo() Unknown
System.Xaml.dll!System.Xaml.XamlSchemaContext.TryGetCompatibleXamlNamespace(string xamlNamespace, out string compatibleNamespace) Unknown
System.Xaml.dll!System.Xaml.XamlXmlReader.IsXmlNamespaceSupported(string xmlNamespace, out string newXmlNamespace) Unknown
System.Xaml.dll!System.Xaml.XmlCompatibilityReader.MapNewNamespace(string namespaceName) Unknown
System.Xaml.dll!System.Xaml.XmlCompatibilityReader.GetMappedNamespace(string namespaceName) Unknown
System.Xaml.dll!System.Xaml.XmlCompatibilityReader.NamespaceURI.get() Unknown
System.Xaml.dll!System.Xaml.XmlCompatibilityReader.ReadStartElement(ref bool more) Unknown
System.Xaml.dll!System.Xaml.XmlCompatibilityReader.Read() Unknown
System.Xaml.dll!MS.Internal.Xaml.Parser.XamlScanner.DoXmlRead() Unknown
System.Xaml.dll!MS.Internal.Xaml.Parser.XamlPullParser.Parse() Unknown
System.Xaml.dll!MS.Internal.Xaml.NodeStreamSorter.StartNewNodeStreamWithSettingsPreamble() Unknown
System.Xaml.dll!MS.Internal.Xaml.NodeStreamSorter.NodeStreamSorter(MS.Internal.Xaml.Context.XamlParserContext context, MS.Internal.Xaml.Parser.XamlPullParser parser, System.Xaml.XamlXmlReaderSettings settings, System.Collections.Generic.Dictionary xmlnsDictionary) Unknown
System.Xaml.dll!System.Xaml.XamlXmlReader.Initialize(System.Xml.XmlReader givenXmlReader, System.Xaml.XamlSchemaContext schemaContext, System.Xaml.XamlXmlReaderSettings settings) Unknown
System.Xaml.dll!System.Xaml.XamlXmlReader.XamlXmlReader(System.Xml.XmlReader xmlReader, System.Xaml.XamlSchemaContext schemaContext, System.Xaml.XamlXmlReaderSettings settings) Unknown
XamlBuildTask.dll!Microsoft.Build.Tasks.Xaml.PartialClassGenerationTaskInternal.ReadXamlNodes(string xamlFileName) Unknown
XamlBuildTask.dll!Microsoft.Build.Tasks.Xaml.PartialClassGenerationTaskInternal.ProcessMarkupItem(Microsoft.Build.Framework.ITaskItem markupItem, System.CodeDom.Compiler.CodeDomProvider codeDomProvider) Unknown
XamlBuildTask.dll!Microsoft.Build.Tasks.Xaml.PartialClassGenerationTaskInternal.Execute() Unknown
[AppDomain (DefaultDomain, #1) -> AppDomain (PartialClassAppDomain_ba752eb5-e2fa-4aec-b2e9-5b9713fbf23d, #8)]

Now, as you probably know, Workflow is based on XAML and so what we see here is the XAML compiler trying to do its job. One of the things it does is create a “reflection only” AppDomain where it attempts to load/validate all the types in the XAML document. The other part of the error message mentions that the domain needs to be pre-initialized with all the assemblies that will be referenced OR someone has to resolve the assembly themselves by hooking the ReflectionOnlyAssemblyResolve event. First I peeked to see if something had hooked that event, but the delegate was null, so I knew that the XAML compiler must have been pre-loading the assemblies. So then I looked at the assemblies that were pre-loaded by calling AppDomain::ReflectionOnlyGetAssemblies and, guess what? No System.Runtime. How does the XAML compiler know what list of assemblies to pre-load? By the assembly references you have in your project.
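To make that loading model concrete, here’s a small standalone sketch of how the reflection-only context behaves. This is just an illustration of the mechanism, not what XamlBuildTask itself does, and the PCL path is a made-up placeholder:

using System;
using System.Reflection;

static class ReflectionOnlyDemo
{
    static void Main()
    {
        // The reflection-only context never resolves dependencies on its own;
        // they must either be pre-loaded or supplied from this event.
        AppDomain.CurrentDomain.ReflectionOnlyAssemblyResolve += (sender, args) =>
        {
            Console.WriteLine("Resolving dependency: " + args.Name);
            // This works for assemblies resolvable by display name; for a
            // facade like System.Runtime you may have to locate the file
            // yourself and use Assembly.ReflectionOnlyLoadFrom, which is
            // exactly the gap the XAML compiler falls into.
            return Assembly.ReflectionOnlyLoad(args.Name);
        };

        // Hypothetical PCL assembly that references System.Runtime.
        Assembly pcl = Assembly.ReflectionOnlyLoadFrom(@"C:\path\to\SomePortableLib.dll");

        // Touching the types forces dependency resolution to happen.
        foreach (Type t in pcl.GetExportedTypes())
            Console.WriteLine(t.FullName);

        // This is the list I inspected in the debugger.
        foreach (Assembly a in AppDomain.CurrentDomain.ReflectionOnlyGetAssemblies())
            Console.WriteLine(a.FullName);
    }
}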

As mentioned earlier, in .NET 4.5 there is no need to reference System.Runtime any more because the types are all in mscorlib now, so if you start a new 4.5 project (or upgrade an existing project to 4.5), there will be no reference to System.Runtime. However, PCL libraries target very specific subsets of .NET and, if you look at a PCL lib, you will see it explicitly references the System.Runtime assembly. Since the XAML compiler only pre-loads the references it reads from the project into the “reflection only” AppDomain, that explains not being able to find that exact assembly by name for reflection purposes.

Ok, great, problem identified. So how can we fix it? Turns out it’s pretty simple:

  1. Unload your project file
  2. Open it in the XML editor
  3. Add an explicit reference to System.Runtime by hand. I usually just stick it right under the System reference, like so:
<Reference Include="System" />
<Reference Include="System.Runtime" />

Now we should just have to reload the project and we’re alllllll set. Err… nope. Didn’t work. WHAT!? “You lied to me!” Well, no. It turns out it still didn’t work at first for me either, so I debugged again and hit another exception like the one above, this time saying that it couldn’t find the assembly System.Resources.ResourceManager. This assembly is again rolled up into mscorlib in 4.5, so you need to play the same trick with another manual project reference. Your final set of references should look like so:

<Reference Include="System" />
<Reference Include="System.Runtime" />
<Reference Include="System.Resources.ResourceManager" />

Hopefully this work saves other people time and the cost of anger management therapy. I also hope it encourages Microsoft to address the issue with a simple update that has knowledge of PCLs and implicitly loads type forwarded assemblies into the “reflection only” AppDomain so they are properly discovered.

ScriptCs.ClrMD – Enabling rich, programmatic .NET Diagnostics

I’ve been tweeting about this for a little over a week now, but I figured it would make for a good subject for my first blog post since I said I was going to attempt a comeback in 2013, which otherwise hasn’t been going so well thus far. ;)

Ok, so if you haven’t already heard about it, a couple of weeks ago the .NET team released an awesome managed library wrapper around the internal CLR debugger APIs which they’ve nicknamed “ClrMD” (CLR Managed Debugger). They’ve made this library available as a NuGet package so that anyone can grab it and take advantage of its awesome power. If you’ve ever done any serious .NET debugging, odds are you’re familiar with the SOS debugging extension, which means you’re already familiar with the kinds of rich detail that can be gathered by the CLR debugger APIs.

The other half of the technology involved here is ScriptCs. Surely you’ve heard about that by now but, if not, you definitely need to take 15 minutes to check it out. It’s a powerful tool that, at its most basic, allows you to write scripts using the C# language. More recently the project has added a REPL implementation that is shaping up quite nicely. One of the other features ScriptCs provides is an extensibility point called “Script Packs”. At a high level, Script Packs give their authors a way to expose a set of functionality that a script writer can then easily import and leverage, with just a few lines of code, to create effective solutions very quickly. There is already a quickly growing list of Script Packs that expose technologies like WebAPI, Nancy and WPF, just to name a few.

So, with that, ScriptCs.ClrMD is a Script Pack that brings the power of the ClrMD APIs to ScriptCs. The goal of the project is to ultimately bring all the functionality that someone who is familiar with SOS is used to having under their tool belt to the ScriptCs environment. At its most basic right now, you are able to attach to a running process and call various “DumpXXX” methods which aim to provide the same kind of diagnostic output that you get when using SOS in WinDbg. This works best when using ScriptCs in REPL mode, and there’s a quick intro to doing this in the ScriptCs.ClrMD wiki. Where things get more powerful, though, is that you can get a hold of the ClrRuntime instance from the ClrMD APIs and then, since you’re working with the full power of C# in ScriptCs, you can start writing your own logic that pulls whatever diagnostic information your heart desires out of the process.
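To give you a feel for that last point, here’s a rough sketch of the kind of thing you can do once you have the runtime in hand. This is plain ClrMD code written against the early drops of the library (the attach/heap API names may well shift as it evolves), computing a poor man’s !dumpheap -stat:

using System;
using System.Linq;
using Microsoft.Diagnostics.Runtime;

class HeapStats
{
    static void Main(string[] args)
    {
        int pid = int.Parse(args[0]);

        // Attach without freezing the target for long.
        using (DataTarget target = DataTarget.AttachToProcess(pid, 5000, AttachFlag.NonInvasive))
        {
            // Locate the DAC for whichever CLR is loaded in the target.
            string dac = target.ClrVersions[0].TryGetDacLocation();
            ClrRuntime runtime = target.CreateRuntime(dac);

            // Walk the GC heap and bucket objects by type, roughly the
            // information SOS's !dumpheap -stat prints in WinDbg.
            ClrHeap heap = runtime.GetHeap();
            var top10 = heap.EnumerateObjects()
                            .Select(obj => heap.GetObjectType(obj))
                            .Where(type => type != null)
                            .GroupBy(type => type.Name)
                            .OrderByDescending(g => g.Count())
                            .Take(10);

            foreach (var g in top10)
                Console.WriteLine("{0,10:n0}  {1}", g.Count(), g.Key);
        }
    }
}

Run that against a busy process and you get a top-ten type listing in a couple dozen lines of C#; paste the same logic into the REPL and you can slice the heap interactively.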

It’s early going right now and both of the technologies this is based on are still in flux themselves. There are bound to be rapid and, sometimes, breaking changes to the way things work as this all evolves and gels together into something useful. I welcome any feedback and encourage everyone to participate in the evolution via the ScriptCs.ClrMD GitHub repo. Feel free to open issues with bugs and feature requests or, better yet, fork it and send me a PR. You can also usually find me on JabbR in the ScriptCs room if you want to discuss anything in real-time.

Blogging Comeback 2013…?

Wow, last post was about RTM of VS2010! Where have I been? Well, I got married, moved across the country (from NYC to SEA) and had two baby boys (ages 2.5yrs and 2mos). So you’ll have to forgive me for the lack of blogging. ;)

I’m still working at Mimeo with all the latest and greatest Microsoft tech and am looking forward to getting back into writing about it and sharing my experiences. In the interim I’ve been fulfilling my love of problem solving and knowledge sharing by answering questions over on StackOverflow.

Tech wise, most recently I’ve been doing a lot of work with:

  • ASP.NET WebAPI – the best framework if you’re building HTTP/REST based services in today’s day and age (and who isn’t?)
  • Task Parallel Library (TPL)  – so much more pleasant with async/await in 4.5!
  • TPL Dataflow – awesome library for asynchronous processing
  • Windows Azure – storage, compute, service bus, the whole nine yards
  • Reactive Extensions (Rx) – just scratching the surface with this now and am at all times in awe of anything @xpaulbettsx ever writes with it

I plan to try and carve out time at least one day a week to sit down and write a post or two. In this day and age of low signal to noise I’m not even sure if it will be as worthwhile as it used to be, but at least I’ll enjoy it. :)

VS2010/.NET4 Coming March 22nd

In case you haven’t heard, VS 2010 and .NET 4.0 are officially scheduled for release on March 22nd. If you’re into playing with Betas (I know I am) and have an MSDN subscription you can go download it now. Those of you without subscriptions will need to wait about a week or so.

What I like most about this version of VS2010 is that someone at Microsoft finally wised up and realized that selling a gazillion different flavors wasn’t working and just confused/angered everyone. So now they’ve got just three:

  • Microsoft Visual Studio 2010 Ultimate with MSDN. The comprehensive suite of application life-cycle management tools for software teams to help ensure quality results from design to deployment
  • Microsoft Visual Studio 2010 Premium with MSDN. A complete toolset to help developers deliver scalable, high-quality applications
  • Microsoft Visual Studio 2010 Professional with MSDN. The essential tool for basic development tasks to assist developers in implementing their ideas easily

Microsoft Announces Shared CDN for Common AJAX Scripts

ScottGu broke the news last night that Microsoft is making a shared CDN available for the purposes of hosting the AJAX scripts. The full details of the scripts that are supported right now are available here, but basically it’s ASP.NET AJAX 4.0 Preview 5 (which just came out) and jQuery 1.3.2.

If you’re using ASP.NET AJAX on the server side, you can just tell the ScriptManager to use the CDN by setting its EnableCdn property to true. I find this implementation a little misleading because it only applies to the scripts within System.Web and System.Web.Extensions. If you want support for other scripts, you would need to put them on your own CDN, hook the ScriptManager::ResolveScriptReference event and adjust the ScriptReferenceEventArgs::Script reference accordingly.
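For example, here’s roughly what that hook-up might look like in a page’s code-behind, assuming the ASP.NET 4.0 bits where EnableCdn lands. The CDN URL and script path here are placeholders, obviously:

using System;
using System.Web.UI;

public partial class Default : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Assumes the page markup declares a ScriptManager.
        ScriptManager sm = ScriptManager.GetCurrent(Page);
        sm.EnableCdn = true; // only covers System.Web/System.Web.Extensions scripts

        // Redirect everything else to a CDN of your own.
        sm.ResolveScriptReference += (s, args) =>
        {
            if (args.Script.Path != null && args.Script.Path.StartsWith("~/Scripts/"))
            {
                args.Script.Path = "http://cdn.example.com/scripts/" +
                    args.Script.Path.Substring("~/Scripts/".Length);
            }
        };
    }
}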

ASP.NET AJAX 4.0 Preview 5 Released, Includes “Disposable Objects” Performance Fix!

Good news, ASP.NET AJAX 4.0 Preview 5 is here! Better yet, Microsoft has overhauled the implementation of tracking disposable objects to include the performance enhancement that was discussed in my “ASP.NET AJAX ‘Disposable Objects’ Performance Heads Up” posts (Part I & II).

So how’d they do it? When a disposable object is registered, they tag it with an integer value, stored in a property named “__msdisposeindex”, which represents the object’s position in an internal array. When it’s unregistered, they delete that property from the object and remove the entry from the array. Small performance heads up: there is a small hidden cost of reallocating the array once 1000 objects have been removed. This keeps the array from growing into infinity should you keep an application open in the browser for a long time and create/destroy lots of disposable objects.
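In pseudo-framework form, my read of the new scheme looks something like this. It’s a sketch of the mechanism as described, not the actual MicrosoftAjax source:

// A sketch of the new scheme; only __msdisposeindex and the 1000-removal
// threshold come from the release itself.
var _disposableObjects = [];
var _removedCount = 0;

function registerDisposableObject(obj) {
    // Tag the object with its slot in the internal array.
    obj.__msdisposeindex = _disposableObjects.length;
    _disposableObjects[obj.__msdisposeindex] = obj;
}

function unregisterDisposableObject(obj) {
    var i = obj.__msdisposeindex;
    if (typeof i !== "undefined") {
        delete _disposableObjects[i]; // O(1), leaves a hole in the array
        delete obj.__msdisposeindex;

        // Reallocate once 1000 holes accumulate so the array
        // can't grow forever in long-lived pages.
        if (++_removedCount > 1000) {
            var compacted = [];
            for (var j = 0; j < _disposableObjects.length; j++) {
                var o = _disposableObjects[j];
                if (typeof o !== "undefined") {
                    o.__msdisposeindex = compacted.length;
                    compacted[compacted.length] = o;
                }
            }
            _disposableObjects = compacted;
            _removedCount = 0;
        }
    }
}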

So, it’s not exactly as discussed previously since they use an array/index approach instead of a global, incremented counter with a hash, but… problem solved!

Full Expression Tree Support Coming in .NET/C# 4.0

Yesssssssss! I’ve been waiting and hoping that this was coming in 4.0 and now it’s official: Full Expression Tree Support. This seems like a HUGE step forward to me. With the advent of this feature, one can finally implement support for converting an expression written in C# to something that runs on the GPU. For example, this would enable a scenario in the future where we can actually write WPF shader code in C#/VB and have it be considered safe, because a “GLINQ” interpreter could inspect the statements before converting them to shader code on the fly at runtime.

I haven’t blogged in forever (something I want to get back into), but this news warranted a post immediately. Really excited; hope I can find some time to tinker with starting an expression-tree-based GPU library.

UPDATE/CORRECTION:

Ah, darn it. Looks like I read the post wrong. While full support is coming to the expression library, it looks like C# 4.0 will still not implement compiling of full method bodies into expression trees. Do they really think that having the ability to hand-write expression trees with the expression library is of any use? ’Cause I don’t really see it happening… gotta have the language support for it to take off.
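Just to illustrate the pain, here’s a sketch of what “hand writing” a trivial statement tree looks like with the .NET 4 expression API. This is the moral equivalent of a three-line for loop:

using System;
using System.Linq.Expressions;

class ExpressionTreeDemo
{
    static void Main()
    {
        // Hand-assembling the equivalent of:
        //   int sum = 0; for (int i = 1; i <= 10; i++) sum += i; return sum;
        ParameterExpression sum = Expression.Variable(typeof(int), "sum");
        ParameterExpression i = Expression.Variable(typeof(int), "i");
        LabelTarget exit = Expression.Label(typeof(int));

        BlockExpression body = Expression.Block(
            new[] { sum, i },
            Expression.Assign(sum, Expression.Constant(0)),
            Expression.Assign(i, Expression.Constant(1)),
            Expression.Loop(
                Expression.IfThenElse(
                    Expression.LessThanOrEqual(i, Expression.Constant(10)),
                    Expression.Block(
                        Expression.AddAssign(sum, i),
                        Expression.PostIncrementAssign(i)),
                    Expression.Break(exit, sum)), // loop's value is sum
                exit));

        Func<int> compiled = Expression.Lambda<Func<int>>(body).Compile();
        Console.WriteLine(compiled()); // 55
    }
}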

Guess it’ll be another four years of not being able to implement something like rich GPU support inside of .NET languages. :(

ASP.NET AJAX “Disposable Objects” Performance Heads Up – Part II

Ok, I had to put together a Part II to this topic because I was totally wrong in Part I about objects being able to be used as keys because… well, I’m an idiot and didn’t do all my fact checking to make sure my implementation was 100% sound. :) Thanks to Dave Reed who commented on the original post pointing out my flawed thinking.

Mea culpa

Basically Dave points out that JavaScript objects are really specialized hashtables called associative arrays where the keys absolutely MUST be a string OR a type which can be converted to a string. Now, because we’re using Object subtypes here, they would have to override the toString() method to provide a meaningful string if we expected them to be used as keys. Well, the instances we were stuffing in _disposableObjects weren’t providing any such toString implementation, nor do we want to impose one. So, is all lost? No!
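A quick snippet makes the flaw obvious: every plain Object key stringifies to the same thing, so distinct instances collide:

var table = {};
var a = { name: "a" };
var b = { name: "b" };

table[a] = 1; // key is coerced to a.toString() === "[object Object]"
table[b] = 2; // same string key, so this overwrites the first entry!

alert(table[a]); // 2 -- both "keys" landed in the same slot
for (var key in table) {
    alert(key); // "[object Object]", and there's only one of them
}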

Redemption!

Ok, so I totally failed with my first approach, but I shall now redeem myself! As soon as my other idea was beaten down, set on fire, and stomped out with golf cleats (actually Dave was rather nice, it just FELT like that’s what happened), I quickly came up with another solution. Here’s the nitty gritty:

  1. Sys._Application keeps an internal counter called _disposableObjectsNextId which starts off at the minimum value of a 32-bit integer: -2147483648. I chose this for ease and because it provides billions of identifiers which, unless your app runs for a really long time and/or instantiates and disposes of billions of objects, should have us covered.
  2. Sys._Application has a constant hanging off of it called _DisposableObjectIdKey which I’ve decided to make “!!disposableObjectId”. I don’t see that colliding with many other chosen JavaScript key names, but because it’s a constant we could change it to something long and totally ASP.NET AJAX specific to avoid the possibility of collision.
  3. Each time Sys._Application::registerDisposableObject is called we “tag” the incoming object with the next identifier using the constant _DisposableObjectIdKey. Next we use that identifier as the key on the _disposableObjects hashtable with the object as the value.
  4. When Sys._Application::unregisterDisposableObject is called we look up the id on the incoming object using _DisposableObjectIdKey and then delete that key from the _disposableObjects hashtable.
  5. For the Sys._Application.dispose implementation we simply for..in the keys of the _disposableObjects hashtable and call dispose on each item that is registered. (A sketch of the whole scheme follows this list.)
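And here’s the promised sketch of those five steps in code. It’s simplified: the identifier names are from the list above, but the plumbing around Sys._Application is elided:

var _DisposableObjectIdKey = "!!disposableObjectId";
var _disposableObjectsNextId = -2147483648; // Int32.MinValue
var _disposableObjects = {};

function registerDisposableObject(obj) {
    // (3) Tag the object with the next id, then use that id as the key.
    var id = _disposableObjectsNextId++;
    obj[_DisposableObjectIdKey] = id;
    _disposableObjects[id] = obj;
}

function unregisterDisposableObject(obj) {
    // (4) Read the id back off the object and delete that key.
    var id = obj[_DisposableObjectIdKey];
    if (typeof id !== "undefined") {
        delete _disposableObjects[id];
        delete obj[_DisposableObjectIdKey];
    }
}

function disposeAll() {
    // (5) for..in the keys and dispose each registered item.
    for (var id in _disposableObjects) {
        _disposableObjects[id].dispose();
    }
    _disposableObjects = {};
}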

The new performance picture

We’re doing a little more than before here, so naturally we’re gonna take a hit someplace. How bad is it? Do we still outperform the array?

IE 6.0.2900.5512.xpsp.080413-2111

# of Objects    Array (ms)    Hashtable (ms)    Gain
500             160           1 (+0)            160x
1000            711           10 (+0)           71.1x
5000            *33298        90 (+9)           369.9x
10000           *138279       180 (+19)         768.2x

IE 8.0.6001.18702

# of Objects    Array (ms)    Hashtable (ms)    Gain
500             91            5 (+1)            18.2x
1000            429           11 (+2)           39.0x
5000            *27168        57 (+10)          476.6x
10000           *110025       114 (+10)         965.1x

FireFox 3.0.7

# of Objects    Array (ms)    Hashtable (ms)    Gain
500             21            1 (+0)            21.0x
1000            79            2 (+0)            39.5x
5000            1891          11 (+0)           171.9x
10000           7608          22 (-1)           345.8x

Safari 4 Public Beta (5528.16)

# of Objects    Array (ms)    Hashtable (ms)    Gain
500             63            1 (+0)            63.0x
1000            263           2 (+0)            131.5x
5000            6805          5 (-4)            1361.0x
10000           *28315        12 (-6)           2359.6x

*Indicates that I had to escape the “Slow running script” dialog to even be able to continue execution on these.

Well, we’re still handily outperforming the Array implementation. We took hits in both IEs, but the hit was worse in IE8, which is absolutely baffling. FireFox didn’t really change; in fact, the 10000 object test gained a millisecond. Finally, Safari 4 gained in the 5000 and 10000 object tests!

Is this a hack?

So, some will look at this and say: hey man, that’s really hacky that you’re just slapping a random key/value (aka “expando” property) on an arbitrary JavaScript object like that. Wellllll, this is JavaScript and, from where I’m standing, that’s one of the powerful features of this language. It’s kinda like DependencyObject if you’re familiar with WPF. In most cases that I can think of this is perfectly harmless because, unless people go looking for it or purposely mess with it, it can’t hurt anyone. The only case where this could potentially hurt is if someone’s implementation uses a for..in on the object to enumerate all of its keys. That would now turn up our _DisposableObjectIdKey and that could be bad. There is only one aspect of the framework that really does that, and that’s when working with JSON [de]serialization. In the case of JSON objects, though, you’re talking about “pure” data objects and those are not going to be registered as “disposable objects” anyway. So, the real question for me is: is this “hack” worth the performance gains as long as it comes with a small documentation note that explains how this extra field could affect callers? And for me, the answer is “hell yes”.

Conclusion

Ok, so I screwed up my first approach, but hopefully this second one helps me save some face. We had to do a little extra work inside the framework code base and gave up a wee bit of performance in IE, but we’re still posting huge gains over the existing implementation. I am providing my updated version of the performance test and framework scripts below and will go update the CodePlex issue with this latest version as well. Now, I just have to hope I got it right so Dave doesn’t come back and teach me another lesson. ;)

Links

ASP.NET AJAX “Disposable Objects” Performance Heads Up

Update: Make sure you read Part II as there was ultimately a fundamental flaw in this implementation which prevents it from working as I originally thought.

One of the important features of the ASP.NET AJAX client side framework is the concept of disposing of components/controls so that they unregister event handlers and release DOM element references; otherwise you end up with circular references and, thus, memory leaks. The ASP.NET AJAX client framework takes the same approach as .NET does, where there is a Sys.IDisposable interface which you can implement to indicate that your class requires disposal semantics. By implementing this interface, certain aspects of the framework, as well as other consumers of your code, will recognize that they need to call your dispose implementation.

How the framework tracks “disposable objects”

The performance problem I want to discuss lies in the way the framework itself tracks known instances of “disposable” objects. First off, anything that derives from Sys.Component is automatically in this boat. Sys.Component is important because it is the base class of other important base classes like Sys.UI.Control and Sys.UI.Behavior. Sys.Component implements Sys.IDisposable, but also has some tight coupling with Sys.Application. Every time you create an instance of a Sys.Component a call is made during its initialization to Sys.Application.registerDisposableObject to which it passes itself in. This method takes whatever instance it is handed and calls Array.add to add the object to an array it maintains internally called _disposableObjects. Conversely, when Sys.Component’s dispose method is called it makes a call to Sys.Application.unregisterDisposableObject at which point the method calls Array.remove to remove the instance from the _disposableObjects array. The astute performance geeks are probably already starting to see where this is going, but before I get to the specifics let’s discuss why this register/unregister dance is even happening in the first place.

Why does it work this way?

So, why does Sys.Application need to even track these objects? Isn’t the person who created them supposed to dispose of them? Well, for the most part yes. However, there’s also the pattern of just creating controls via global functions, such as pageLoad, and then just forgetting about them. In either case, when the application is shutting down, either from an explicit call to Application.dispose (which is rare) or a navigation away from the page, it needs to be able to tell all those objects that it’s time to clean up.

So what’s the problem?

Ok, so, what exactly is the problem? The problem is that an array is used to store this list of disposable objects and, as mentioned earlier, when a component asks to be unregistered, Array.remove is used. Array.remove uses Array.indexOf to find the position of the item in the list. Array.indexOf is implemented as a for loop indexing into the array and doing equality checks on each item until the item looking to be removed is found. So, the more disposable objects in the array, the worse your performance gets. In Big-O notation, we’re talking O(n) here. Worse yet, if you consider the typical pattern where the most recently created objects are the ones most likely to be disposed of, you’re constantly having to scan through to the end of the array. And that’s not all! Once Array.remove has located the index of the item in the array, it still has to perform a splice to actually remove it from the array, which incurs even more overhead.
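If you squint, the removal path boils down to something like this, which is where all the time goes. This is the shape of the pattern, not the framework’s verbatim source:

// Roughly what Array.remove amounts to: a linear scan for the item,
// then a splice to close up the hole.
function arrayRemove(array, item) {
    var index = -1;
    for (var i = 0; i < array.length; i++) { // O(n) equality checks
        if (array[i] === item) {
            index = i;
            break;
        }
    }
    if (index >= 0) {
        array.splice(index, 1); // shifts every later element down one slot
        return true;
    }
    return false;
}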

Seriously, is this gonna even affect me?

Right about now, some of you might be skeptical and wonder why this is such a big deal. I mean, who creates all that many components anyway? Well, I can tell you I’ve already run into this problem in a rich client ASP.NET AJAX application in production. You see, the power (and joy IMHO) of ASP.NET AJAX is that you’re encouraged to create more complex interactions by composing controls and behaviors much the same as you would with WPF/Silverlight. You just have to keep in mind that each control and behavior you attach to an HTML element adds to the _disposableObjects array we’ve been talking about here. Worse still is the power of templates, which makes it sooo easy to repeat a bunch of controls/behaviors for each item being bound. You always need to be aware of how many controls/behaviors each instance of a template instantiates, of course, but you also need to consider that even binding some text to a <span> element comes with the same cost, because Sys.Binding is a Sys.Component subclass.

A proposed solution

So, how can we remedy this problem? Sure sounds like a problem for a HashSet<T> to me in .NET land. Hmm… too bad there isn’t some kind of a hashtable implementation in JavaScript, right? Well, actually, there is! A lot of people don’t realize it, but every JavaScript Object is really just a glorified hashtable of key/value pairs. All we need to do is use the ability to dynamically add key/values to any JavaScript Object using the [key]=value notation. Keys don’t have to be strings or numeric types, any type can! So, with that in mind, if the internal _disposableObjects field on Sys.Application was just an Object, and registerDisposableObject just added the instance being passed in as a key with null as a value, and unregisterDisposableObject just deleted that key from _disposableObjects, we could rely on the power of the JavaScript runtime implementation to find that entry in the list instead of having to scan the entire list looking for it ourselves. Now, naturally it depends on how the JavaScript runtime is implemented, but most implementations today are actually using “real” hashtables/hashcodes behind the scenes so the performance is wayyyyy better than having to index into an array and do equality checks ourselves.
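In code, the proposed change amounts to this sketch which, per the update at the top of this post, carries a fatal flaw that Part II explains and fixes:

var _disposableObjects = {};

function registerDisposableObject(obj) {
    // Use the instance itself as the key; the value doesn't matter.
    // (The flawed bit: JavaScript coerces keys to strings, so plain
    // objects all collide on "[object Object]" -- see Part II.)
    _disposableObjects[obj] = null;
}

function unregisterDisposableObject(obj) {
    // Let the runtime's hashtable find the entry -- no scanning.
    delete _disposableObjects[obj];
}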

Working code

So as not to be all talk and no action, I’ve actually updated the latest version of the ASP.NET AJAX Preview (version 4 as of this writing) with these changes and am providing my updated copy at the bottom of this post. I’ve also relayed this information to Bertrand Le Roy, who is forwarding it to the ASP.NET AJAX team, who I hope will consider making the fix in the next drop since it’s completely internal to Sys.Application and very easily tested for compatibility. Just to make sure, I also entered an issue over on CodePlex which, if you’re interested in seeing this get fixed, you can go vote on.

Numbers don’t lie

Here’s a quick set of results from a performance test I slapped together where I instantiate some number of disposable objects and then dispose of them in reverse order to simulate the fact that, most often, the youngest of objects die off first. The machine where I ran these tests is a Core 2 Duo 2.66GHz, 4GB RAM, running Vista 32-bit SP2. The IE6 test was done in the IE6 test VM supplied by Microsoft, which is running XP SP3. The Safari test was done on a 13” MacBook with a Core 2 Duo 2GHz, 2GB RAM, running OS X 10.5.6. All tests were done with the release mode version of the MicrosoftAjax script, and the numbers shown are the median of three consecutive runs. I also executed the test several times before recording the numbers to give the JavaScript engines a chance to employ any kind of code optimization they might use.

IE 6.0.2900.5512.xpsp.080413-2111

# of Objects    Array (ms)    Hashtable (ms)    Gain
500             160           1                 160x
1000            711           10                71.1x
5000            *33298        81                411.1x
10000           *138279       161               858.9x

IE 8.0.6001.18702

# of Objects    Array (ms)    Hashtable (ms)    Gain
500             91            4                 22.8x
1000            429           9                 47.6x
5000            *27168        47                578.0x
10000           *110025       94                1170.5x

FireFox 3.0.7

# of Objects    Array (ms)    Hashtable (ms)    Gain
500             21            1                 21.0x
1000            79            2                 39.5x
5000            1891          11                171.9x
10000           7608          23                330.8x

Safari 4 Public Beta (5528.16)

# of Objects    Array (ms)    Hashtable (ms)    Gain
500             63            1                 63.0x
1000            263           2                 131.5x
5000            6805          9                 756.1x
10000           *28315        18                1573.1x

*Indicates that I had to escape the “Slow running script” dialog to even be able to continue execution on these.

No surprise that FireFox and Safari are crushing IE in both scenarios. It’s also no surprise that IE6 lags everyone else in both scenarios. Safari appears to have the best hashtable implementation of the three, though FireFox seems to have the best overall execution performance since it beats the others handily in the Array approach. One thing’s for certain, all the browsers show massive gains when moving to the hashtable approach.

Final thoughts

Assuming the ASP.NET AJAX team applies this simple change to the next version of the framework, there’s really not much to worry about going forward. Even if you had an application with 10000 registered disposable objects and, at any given time, you disposed of a more realistic number of components from a template at once, say 200-300, the overhead of the unregisterDisposableObject implementation will now be so minuscule that all you have to worry about is the actual cost of the dispose implementations themselves.

Links

Getting a distinct list of changed files from TFS using PowerShell

If you’re like me and need to do code-reviews of other people’s stuff or maybe you just want to see everything that’s changed during a certain period of a project, then here’s a nice PowerShell tip for you.

First, make sure you’ve downloaded the latest version of the Team Foundation Server Power Tools. Starting with the October ‘08 release, the tools include a PowerShell snap-in that adds several commands which enable rich interaction with your Team Foundation Server. Of particular interest to us for this exercise, though, is the Get-TfsItemHistory command.

Now, let’s assume a scenario where we have a Team Project named $/Foo and we’ve been working on some patches in a v1.1 branch within that project. Now it’s time to review the work we’ve done in the current iteration which started on March 1st ‘09 and ended on March 31st ‘09. Here’s how we might start gathering those changes:

Get-TfsItemHistory "$/Foo/v1.1" -Version "D3/1/09~D3/31/09" -Recurse

Now, what this is gonna do is bring us back all the changesets that are related to any file underneath the v1.1 branch. While listing out the changesets is nice, it’s not going to tell me exactly which source files I need to review. So the next thing we need to do is bring back all the changes in the changesets by adding the -IncludeItems parameter to the Get-TfsItemHistory call. We’ll also want to flatten out the list because, again, we’re interested in the individual changes, not the changesets themselves. So we use Select-Object’s -Expand parameter to flatten out the list:

Get-TfsItemHistory "$/Foo/v1.1" -Version "D3/1/09~D3/31/09" -Recurse -IncludeItems | Select-Object -Expand "Changes"

Great, so now we have a list of all the changes that were made to each file in this release, but this is still a little noisy. For starters, change types such as deletes, branches and merges are shown here. Well, if the file was deleted there’s not much to look at now, so… I don’t want those in my list. Also, if a file was simply branched into the project from someplace else we don’t really care, because wherever it came from already underwent a review. Merges are also questionable since, hopefully, the merge was reviewed for any conflicts at the time it occurred. Plus, if there was a conflict, that means they must have changed the file, which will show up as a standalone edit anyway and still end up on our list. So, how do we filter that noise out? Like so:

Get-TfsItemHistory "$/Foo/v1.1" -Version "D3/1/09~D3/31/09" -Recurse -IncludeItems | Select-Object -Expand "Changes" | Where-Object { ($_.ChangeType -band ([Microsoft.TeamFoundation.VersionControl.Client.ChangeType]::Delete -bor [Microsoft.TeamFoundation.VersionControl.Client.ChangeType]::Merge -bor [Microsoft.TeamFoundation.VersionControl.Client.ChangeType]::Branch)) -eq 0 }

Alright, almost there! Now the only problem is that if the same file was changed multiple times it’s going to be listed multiple times, and really we just want the distinct names. This is a little tricky because the Change object contains the Item as a Note property; luckily, there’s a Select-TfsItem command to help read out what we’re interested in. After that we just do a little grouping and sorting and we have a list we can work with:

Get-TfsItemHistory "$/Foo/v1.1" -Version "D3/1/09~D3/31/09" -Recurse -IncludeItems | Select-Object -Expand "Changes" | Where-Object { ($_.ChangeType -band ([Microsoft.TeamFoundation.VersionControl.Client.ChangeType]::Delete -bor [Microsoft.TeamFoundation.VersionControl.Client.ChangeType]::Merge -bor [Microsoft.TeamFoundation.VersionControl.Client.ChangeType]::Branch)) -eq 0 } | Select-TfsItem | Group-Object Path | Select-Object Name | Sort-Object

Finally, in true PowerShell fashion, this will write the paths out to the host (console or ISE), but naturally you could also Export-*, Out-*, etc as well.