It’s WPF week over on Channel 9 and a new episode went up yesterday which has David Teitlebaum, a PM on the WPF team, giving an overview and demos of the new lower level features that SP1 brings to the table. You don’t want to miss it, so hop on over and check that out.
I think we all know the Effects stuff is the most sought-after feature, but I know a lot of people have been asking for the WriteableBitmap feature since WPF 3.0, and now they have it and, judging by the demo, the performance is amazing.
I just wanted to link to this great post on Silverlight 2’s layout and rendering features. Both features borrow heavily from WPF, but there are also important differences. For one, unlike WPF, there is only one tree… no Logical vs. Visual. There’s also a very cool mention of Silverlight 2’s rendering internals being many-core friendly, so rendering scales well on the CPU. That’s a big difference from WPF, where rendering is offloaded to the GPU.
Check out this Mix Session for all the skinny on the enhancements coming in the WPF 3.5 “Extensions” later this year. Unfortunately, the Mix sessions site is designed quite poorly when it comes to providing direct links, but if you just go there and look for session T11 – “What’s New in Windows Presentation Foundation 3.5”, that’s the one that shows off all the goodies.
Rob shows off new work on the virtualization front, Microsoft’s prototype DataGrid control, performance enhancements and so on. The topic I was waiting to hear about most, however, was the new Effects API, which we finally get our first in-depth look at in this session. If you jump to about the 45 minute mark you can get right into it.
First, I must say the Effects demo was very impressive. He basically combined 6 different effects on a live 3D object, with physics and video playing, and it worked flawlessly. In this session we finally got to see the code that drives that demo, and I must say I am supremely disappointed with how quickly he blew through it and how little detail was given. I’m also very disappointed with the implementation. First off, the code was pure shader code in .fx files. A far cry from the beautiful LINQ implementation I had envisioned, and nowhere even close to the Microsoft Research Accelerator project, which at least seemed to do it through inheritance and reflection. Upset as I may be with the implementation, I am just glad we finally have some way of actually accessing the GPU for effects.
The next thing I was looking forward to is the WriteableBitmap API. It’s not something I personally need, but I know a lot of people do. WriteableBitmap basically gives you GDI+-style, pixel-based bitmap graphics. Those graphics are still nicely integrated into the rest of the WPF graphics stack, however, so you can paint on a 3D surface, for example. It does look, though, like the WriteableBitmap API requires some unsafe code. How much, and whether it’s usable at all in a partial-trust scenario as a result, wasn’t clear.
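To make the idea concrete, here’s a rough sketch of the kind of raw pixel addressing a writeable bitmap surface implies: a flat BGRA buffer indexed by stride and offset. This is Python purely for illustration (the class and method names are invented); the real API is .NET and, as noted above, may involve unsafe pointer access rather than a managed buffer.

```python
# Illustrative sketch of pixel-level access on a writeable bitmap surface.
# The buffer layout (32-bit BGRA, row-major with a stride) mirrors what
# GDI+-style bitmap APIs expose; all names here are hypothetical.

BYTES_PER_PIXEL = 4  # B, G, R, A

class FakeWriteableBitmap:
    def __init__(self, width, height):
        self.width = width
        self.height = height
        self.stride = width * BYTES_PER_PIXEL  # bytes per row
        self.pixels = bytearray(self.stride * height)

    def set_pixel(self, x, y, b, g, r, a=255):
        # A pixel's byte offset is row * stride + column * bytes-per-pixel.
        offset = y * self.stride + x * BYTES_PER_PIXEL
        self.pixels[offset:offset + 4] = bytes((b, g, r, a))

    def get_pixel(self, x, y):
        offset = y * self.stride + x * BYTES_PER_PIXEL
        return tuple(self.pixels[offset:offset + 4])

bmp = FakeWriteableBitmap(4, 4)
bmp.set_pixel(2, 1, 255, 0, 0)   # a pure-blue pixel
print(bmp.get_pixel(2, 1))       # (255, 0, 0, 255)
```

The stride/offset arithmetic is the part that trips people up in any raw-bitmap API; everything else is just filling bytes.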
The recent mention of an improved effects API in the next enhancement release of WPF has gotten me thinking about this again. I’ve written before about how other platforms are stepping up their game when it comes to leveraging the GPU in their graphics stacks, and how WPF really needs an answer of its own to this problem. As a refresher, WPF has the BitmapEffects API, but it’s completely CPU based and pretty much trashes the performance of your WPF apps if you use it, because it forces the elements the effects are applied to to also be rendered in software.
With the birth of LINQ we’ve seen how Microsoft has enabled us to program using the constructs of our favorite languages, yet end up with an expression tree that a LINQ provider can interpret at runtime, translate into another format, and ship off for execution wherever it wants. The obvious example is DLINQ (or ADO.NET Entities), where the expression tree is converted into SQL and remoted to the SQL server for execution. Also on the horizon are the Parallel Extensions, which let you define your work in terms of tasks that can be executed in parallel; those tasks are then handed over to a scheduler that executes them using all kinds of super cool threading algorithms, leveraging hardware heuristics to ensure the tasks are executed as quickly as possible on the hardware that is available.
Well, that got me to thinking… why not do the same thing for GPU programming? We should be able to leverage the same technology to write shader programs in our everyday languages. Instead of taking the expression tree and turning it into SQL, we would take it and compile it into a shader! The type of code you’d be able to write in a “GLINQ” function would be limited to the standard constructs of a shader (math using the standard .NET integral data types, loops, variables, etc.), and any shader-specific capabilities could be exposed through a custom .NET class, which would really just be an empty stub; calls to the methods of that class could then be detected by the compiler and translated into the proper shader features. Best of all, because it’s interpreted, the compiler could include security features that perform some kind of static analysis of the program to ensure that it’s not malicious. There could also be a level of CAS put in place that allowed users to decide exactly which features of the GPU such programs are allowed to use.
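To sketch the core of the idea, here is a toy example (in Python rather than C#, with entirely made-up node types and output syntax) of what such a provider would do: walk a captured expression tree and emit shader-style source text instead of executing the expression.

```python
# Toy illustration of the "GLINQ" idea: capture an expression as a tree,
# then translate it to shader-style source instead of executing it.
# Everything here (node classes, the emitted syntax) is hypothetical.

class Node:
    pass

class Var(Node):
    def __init__(self, name):
        self.name = name

class Const(Node):
    def __init__(self, value):
        self.value = value

class BinOp(Node):
    def __init__(self, op, left, right):
        self.op, self.left, self.right = op, left, right

def to_shader(node):
    """Recursively translate an expression tree into shader-style source."""
    if isinstance(node, Var):
        return node.name
    if isinstance(node, Const):
        return repr(node.value)
    if isinstance(node, BinOp):
        return f"({to_shader(node.left)} {node.op} {to_shader(node.right)})"
    raise TypeError(f"unsupported node: {node!r}")

# color * 0.5 + tint  -->  emitted as shader source text
expr = BinOp("+", BinOp("*", Var("color"), Const(0.5)), Var("tint"))
print(to_shader(expr))  # ((color * 0.5) + tint)
```

A real provider would of course emit a full pixel-shader entry point and reject unsupported constructs, which is exactly where the static-analysis security story mentioned above would hook in.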
I really hope this is the kind of implementation we eventually end up with because, IMHO, it’s the only “natural” way to implement it given the .NET technology stack.
Finally, some food for thought: the GPU is becoming so powerful that companies like nVidia are pitching them as GPGPUs and selling HPC (high-performance computing) products that pack massive amounts of power (128 processors, massively parallel) into a little box. So, imagine that we took this same concept a step further and implemented an entire library, outside of WPF, that allowed you to leverage those kinds of platforms for general programming. Just like DLINQ, where the expression is translated to SQL and remoted over to your DB server for processing, we could translate an expression, remote it over to one of these boxes, and execute it in a nanosecond.
Scott Guthrie blogged today about the WPF roadmap and what kind of enhancements we can expect to see coming in the next few releases. Check out the post for full details, but here’s a quick list of things to expect:
- Improved setup for WPF apps (i.e. ClickOnce enhancements)
- Improved working set and startup times
- New intrinsic controls (certain to please all)
- Performance improvements, including:
  - DropShadow and Blur effects becoming hardware accelerated (w00t!)
  - Improvements to text features in certain scenarios
  - Media and video performance boosts
- New WriteableBitmap API to finally appease the people looking for real-time bitmap manipulation (like GDI)
- “Support for new effects API that enables you to build richer graphics scenarios” (hmm, that’s not very specific, but could this finally be the release of a hardware accelerated BitmapEffects stack???)
- Data scalability improvements including container recycling and better/simpler virtualization support
- Improvements to the VS 2008 WPF Designer
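The container recycling and virtualization item above boils down to a simple idea: only realize the handful of item containers that are actually visible, and reuse containers that scroll out of view instead of allocating new ones. Here’s a rough Python sketch of the bookkeeping (all names invented; WPF’s actual implementation lives in its virtualizing panels):

```python
# Toy sketch of UI virtualization with container recycling: given a scroll
# offset, realize only the visible items, returning off-screen containers
# to a pool so they can be reused instead of reallocated.

class VirtualizingPanel:
    def __init__(self, item_count, item_height, viewport_height):
        self.item_count = item_count
        self.item_height = item_height
        self.viewport_height = viewport_height
        self.realized = {}   # item index -> container
        self.pool = []       # recycled containers awaiting reuse
        self.created = 0     # how many containers were ever allocated

    def visible_range(self, scroll_offset):
        first = scroll_offset // self.item_height
        last = (scroll_offset + self.viewport_height - 1) // self.item_height
        return first, min(last, self.item_count - 1)

    def scroll_to(self, offset):
        first, last = self.visible_range(offset)
        # Recycle containers that fell out of view.
        for i in [i for i in self.realized if not first <= i <= last]:
            self.pool.append(self.realized.pop(i))
        # Realize newly visible items, preferring recycled containers.
        for i in range(first, last + 1):
            if i not in self.realized:
                if self.pool:
                    self.realized[i] = self.pool.pop()
                else:
                    self.created += 1
                    self.realized[i] = object()  # stand-in for a real container

panel = VirtualizingPanel(item_count=10_000, item_height=20, viewport_height=100)
panel.scroll_to(0)      # realizes items 0..4
panel.scroll_to(2000)   # items 100..104 reuse the same 5 containers
print(panel.created)    # 5
```

Ten thousand items, five containers ever allocated, and that ratio is exactly why this matters for data scalability.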
The best part about all of these? You do not need to recompile your apps to gain benefits from existing features that are being upgraded. If someone has the latest bits installed, your app gets all the performance gains for free. Good stuff!!!
Well, in case you hadn’t already heard today, Scott Guthrie has announced a few things about some upcoming beta releases. One of the betas discussed is the next version of Silverlight, which used to be labeled version 1.1. Microsoft has rethought this strategy and is instead going to call it 2.0. If you ask me, this is a great idea, because the next version is such a major leap forward, with the inclusion of the CLR and now a ton of the richer WPF features, that it warranted way more than a minor version increase.
I’m super psyched to hear that they’re really taking a much richer subset of WPF and putting it in Silverlight. If it’s going to be touted for building next-gen rich apps, it really needed to be more than a glorified canvas. Scott mentioned that the next beta includes:
- Extensible control framework model – I wonder if it will be the exact same model as WPF (e.g. UIElement -> FrameworkElement -> Control)
- Layout manager support – hopefully this means the Arrange/Measure pattern from WPF is included now so we can implement panels
- Two-way data-binding support – the data-binding architecture of WPF is one of its most powerful selling points if you ask me
- Templating – make all your <Button>s appear as rotating 3D cubes!
- “Skinning” – hopefully this means the equivalent of WPF’s resource dictionaries
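For anyone unfamiliar with the Measure/Arrange pattern hoped for above, WPF layout is a two-pass protocol: a parent first measures each child against the available size, then arranges children into final rectangles. A minimal Python sketch of a vertical stack (all names invented; the real hooks are MeasureOverride/ArrangeOverride on .NET panels):

```python
# Minimal sketch of a Measure/Arrange-style two-pass layout for a
# vertical stack panel. Names and structure are illustrative only.

class Element:
    def __init__(self, desired_width, desired_height):
        self._w, self._h = desired_width, desired_height
        self.desired = (0, 0)
        self.rect = None  # (x, y, width, height) after arrange

    def measure(self, available_w, available_h):
        # A real element would measure its content; here we just clamp.
        self.desired = (min(self._w, available_w), min(self._h, available_h))

class StackPanel:
    def __init__(self, children):
        self.children = children

    def measure(self, available_w, available_h):
        # Pass 1: ask each child how much space it wants.
        remaining = available_h
        for child in self.children:
            child.measure(available_w, remaining)
            remaining = max(0, remaining - child.desired[1])

    def arrange(self, width, height):
        # Pass 2: stack children top to bottom at their desired heights.
        y = 0
        for child in self.children:
            child.rect = (0, y, width, child.desired[1])
            y += child.desired[1]

panel = StackPanel([Element(50, 30), Element(80, 40)])
panel.measure(100, 200)
panel.arrange(100, 200)
print([c.rect for c in panel.children])  # [(0, 0, 100, 30), (0, 30, 100, 40)]
```

If Silverlight 2.0’s layout manager exposes this two-pass contract, writing custom panels becomes possible just like in WPF.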
Scott also mentioned there will be support for WCF-like features (e.g. REST, POX, WS-*). Also important to note is that there will be support for cross-domain network access! Finally, he mentions that we can expect to see a Go-Live beta of Silverlight 2.0 in 2008.
Mac OS X has had its Core Image API, which abstracts you from the GPU, for a long time. Now Adobe has stepped up to the plate by introducing a preview of a new toolkit whose main technology is a new programming language, codenamed “Hydra”, that enables the creation of filters and effects that can be compiled down to run on the GPU (if a GPU is not available, it falls back to the CPU). Here’s their tutorial that gives an example of how to write a filter with Hydra. There’s also a gallery of sample filters available here. Kudos to Adobe for attempting to bring this technology to the mainstream.
So, the next question is: when are we going to see something similar for WPF and Silverlight? Sure, WPF has BitmapEffects, but as anybody who is familiar with the BitmapEffects API knows, they are completely CPU based and, in turn, also force whatever visuals they’re applied to to be rendered on the CPU. This pretty much renders them a no-no if you’re trying to create a really fluid, animated UI… that is, as long as you expect high frame rates.
People have been asking for this capability since WPF was announced. Where you at, Microsoft? If only they’d documented MILCORE, maybe someone could tack it on for them…
I’d been thinking about this topic since I saw the new interface. Basically, after seeing the screenshots, I thought to myself that if the new Zune Windows client software was not written with WPF, Microsoft would have made a huge mistake by once again not backing one of its own technologies as a viable platform for developing a real-world consumer application. Knowing the type of data this kind of app pushes around, seeing the behavior of the v1.0 client today, and looking at the new screenshots, that’s an app that screams WPF.
So, yeah, I was going to just wait and see once I downloaded it, but I got curious and started to look around to see if there was an answer to the question already. Sure enough there is. Unfortunately though, it’s not the one I wanted to hear. In this response to a forum thread about the new Zune client, Charlie Owen, Program Manager for Windows eHome (i.e. the Media Center team), states that it is instead based on the Media Center UI framework.
Let’s step back for a second here. WPF has been pitched to us as a pillar of the next-generation Windows experience since the announcement of the WinFX framework (later renamed .NET 3.0 *sigh*). For all intents and purposes it is one of the best frameworks I’ve ever seen for building rich client applications. Forgetting the flashing-lights stuff for a second, even just the layout and data-binding engines alone make it one of the best platforms to work on. Yet when a group of Microsoft developers (remember, they took the development in house this time around) sat down at a table and considered their choices for writing the next version of the Zune client, they actually decided they’d rather use a variation (Charlie’s words from the post) of the Media Center UI framework instead. Think about that… they didn’t just use the Media Center framework, they actually made changes to it to suit their needs… and they did that rather than pick the fully baked platform of WPF.
End result, so far as I can tell? Time wasted creating a proprietary framework. An opportunity to show the world that WPF rocks the house wasted. Too bad. So sad. I hope someone out there can point out a positive aspect of this choice. I don’t think being able to run the interface on Media Center extenders, assuming that was the main point of this, is a good enough explanation, because you can write WPF-based Media Center applications today and, while they don’t currently perform as well as a native MCML application, they perform well enough to cover what Zune would need to do AFAICT. Maybe the fact that Media Center forces WPF-based apps into the Internet sandbox is the problem? Why in heck they crippled them this way to begin with is beyond me, but if that were the issue, wouldn’t it have made more sense to invest in a joint project to lift that restriction for trusted/signed applications or something?
This is the first list I’ve seen that covers the new features being added to WPF in .NET 3.5. Top three things that stand out to me:
- Binding support for LINQ based data sources
- UIElement3D – adds support for input and focus events to 3D elements (finally, no more hand crafting!)
- Viewport2DVisual3D – enables you to paint 3D surfaces with interactive 2D elements. This was a sorely missed feature in 3.0 that later appeared as an open-source project and has now been folded into the actual WPF API as a first-class citizen.
Once again, Vista has fewer vulnerabilities than everyone else these past six months. What I find humorous is that OSX 10.4 has only slightly fewer than XP, but a higher number of them are unfixed.