The recent mention of an improved effects API in the next enhancement release of WPF has gotten me thinking about this again. I’ve written before about how other platforms are stepping up their game when it comes to leveraging the GPU in their graphics stacks, and how WPF really needs an answer of its own to this problem. As a refresher, WPF has the BitmapEffects API, but it’s completely CPU based and pretty much trashes the performance of your WPF apps if you decide to use it, because it forces the elements the effects are applied to to be rendered in software as well.
With the birth of LINQ we’ve seen how Microsoft has enabled us to program using the constructs of our favorite languages, but end up with an expression tree which a LINQ provider can interpret at runtime, translate into another format, and ship off for execution wherever it wants. The obvious example is DLINQ (or ADO.NET Entities), where the expression tree is converted into SQL and remoted to the SQL server for execution. Also on the horizon are the Parallel Extensions, which allow you to define your work in terms of tasks that can be executed in parallel. Those tasks are then handed over to a scheduler which executes them using all kinds of super cool threading algorithms, leveraging hardware heuristics to ensure the tasks are executed as quickly as possible on the hardware that is available.
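To make the expression-tree idea concrete, here is a minimal sketch (in Python rather than C#, and with made-up node types, since this is purely illustrative) of what a DLINQ-style provider does: instead of executing a comparison, it records it as data and later walks that tree to emit SQL text.

```python
from dataclasses import dataclass

# A miniature "expression tree": comparisons are recorded as data
# rather than executed, so a provider can translate them later.
@dataclass
class Column:
    name: str

@dataclass
class Const:
    value: object

@dataclass
class Binary:
    op: str
    left: object
    right: object

def to_sql(node):
    """Walk the tree and emit a SQL fragment, roughly the way a
    LINQ provider turns an expression tree into a WHERE clause."""
    if isinstance(node, Binary):
        return f"({to_sql(node.left)} {node.op} {to_sql(node.right)})"
    if isinstance(node, Column):
        return node.name
    if isinstance(node, Const):
        v = node.value
        return f"'{v}'" if isinstance(v, str) else str(v)
    raise TypeError(f"unsupported node: {node!r}")

# Equivalent of: customers.Where(c => c.Age > 21 && c.City == "Seattle")
expr = Binary("AND",
              Binary(">", Column("Age"), Const(21)),
              Binary("=", Column("City"), Const("Seattle")))
print(to_sql(expr))  # ((Age > 21) AND (City = 'Seattle'))
```

The key point is that the same tree could just as easily be handed to a different back end that emits something other than SQL.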
Well, that got me to thinking… why not do the same thing for GPU programming? We should be able to leverage the same technology to write shader programs in our natural .NET languages. Instead of taking the expression tree and turning it into SQL, we would take it and compile it into a shader! The type of code you’d be able to write in a “GLINQ” function would be limited to the standard constructs of a shader (math using the standard .NET integral data types, loops, variables, etc.), and any shader-specific capabilities could be exposed through a custom .NET class. That class would really just be an empty stub; calls to its methods could be detected by the compiler and translated to the proper shader features. Best of all, because the expression tree is interpreted, the compiler could include security features that perform some kind of static analysis of the program to ensure it’s not malicious. There could also be a level of CAS put in place that lets users decide exactly which features of the GPU these programs are allowed to use.
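The same tree-walking trick works for the hypothetical “GLINQ” back end. This sketch (again in Python, with invented node types and an assumed `lerp` intrinsic standing in for the stub class’s methods) emits HLSL-like shader text instead of SQL:

```python
from dataclasses import dataclass

# Hypothetical "GLINQ" sketch: the same expression-tree shape as a
# LINQ provider, but compiled to HLSL-like source instead of SQL.
@dataclass
class Var:
    name: str

@dataclass
class BinOp:
    op: str
    left: object
    right: object

@dataclass
class Call:
    intrinsic: str   # a stub-class method mapped to a GPU intrinsic
    args: tuple

def emit(node):
    """Emit shader source text for one expression node."""
    if isinstance(node, Var):
        return node.name
    if isinstance(node, BinOp):
        return f"({emit(node.left)} {node.op} {emit(node.right)})"
    if isinstance(node, Call):
        return f"{node.intrinsic}({', '.join(emit(a) for a in node.args)})"
    return str(node)  # numeric literal

# Equivalent of: brightness * Shader.Lerp(color, gray, 0.5f)
body = BinOp("*", Var("brightness"),
             Call("lerp", (Var("color"), Var("gray"), 0.5)))
print(f"return {emit(body)};")
# return (brightness * lerp(color, gray, 0.5));
```

Because the compiler sees the whole tree before emitting anything, this is also the natural place to hang the static-analysis and CAS-style checks described above: any node that isn’t on the allowed list simply fails to compile.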
I really hope this is the kind of implementation we eventually end up with because, IMHO, it’s the only “natural” way to implement it given the .NET technology stack.
Finally, some food for thought: the GPU is becoming so powerful that companies like nVidia are pitching them as GPGPUs and selling HPC (high performance computing) products that provide massive amounts of power (128 processors, massively parallel) in a little box. So, imagine that we took this same concept a step further and implemented an entire library outside of WPF that allowed you to leverage those kinds of platforms for general programming. Just like DLINQ, where the expression is translated to SQL and remoted over to your DB server for processing, we could translate an expression and remote it over to one of these boxes and execute it in a nanosecond.