Ok look, I’m not exactly thrilled with the fact that WinFS might not see the light of day until ~2010 either. I’ve been looking forward to the point where rich forms of data are managed by the OS for quite some time. Imagine being able to ship a product where you don’t need to install a custom database solution? Imagine users being able to relate data from your application to data from any other application ever written (and vice versa)? Imagine being able to tag any data with your own metadata using the shell instead of each vendor having to build something into their own products? Finally, imagine being able to search all that data, regardless of what it is, in a heartbeat? Those are the promises of WinFS.
Now the funny thing is that the Longhorn client previews that have been released basically contain all of that functionality. So what’s the problem then? Why are we going to have to wait so long to get our hands on it? Well, the problem is that they only really solved it for the individual user. The WinFS team, in all the seminars I ever attended on the technology, always said they didn’t have a solution for WinFS on the server yet. They never hid this fact. It was designed as a pure client technology and that was their focus for this release. They intended to add the server aspects of WinFS in the releases of Longhorn server which, if memory of PDC roadmaps serves, wasn’t going to be released until ~2007. Obviously all these dates have now slipped by a couple of years, so ~2010 really isn’t that far off.
The problem with anything beyond a stand-alone client solution is as follows: Imagine if I create a contact. Then I associate a whole bunch of random data, via WinFS relationships, with that contact (e.g. emails, Word documents, Joe Developer’s custom application data, etc.). All the data and metadata are stored in my client WinFS store. I can now search, chase down relationships, relate more data and everything is great. Now, what do you think happens if I try and copy that contact to a shared WinFS store on a Longhorn server? What do you think should happen? This is the scenario that is currently undefined. If you think about it, it’s definitely not an easy problem to solve.
- Should all the related data be copied as well?
- Should links be created back to the related data streams on the client machine?
- What happens if I then want to view a document related to the Contact on the share, and the client machine isn’t connected to the network right now?
- Do you force an item and all of its related items to live in a single WinFS store?
- If the data is distributed, should we search all the related data in satellite WinFS stores to find out if that Contact is in any way related to the search phrase “Acme sales pitch”, a phrase that may appear only in one of the linked/related documents?
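To make the first two options above concrete, here’s a toy sketch. To be clear: these `Item` classes and functions are made up for illustration and have nothing to do with the actual WinFS API — the point is only to show how “copy everything” and “link back to the client” give users observably different data:

```python
# Toy model of the copy-to-server dilemma. Illustrative only -- NOT the
# WinFS API, just a way to make the two copy semantics concrete.
import copy

class Item:
    def __init__(self, name, data=None):
        self.name = name
        self.data = data
        self.related = []  # WinFS-style relationships to other items

def deep_copy_to_server(item):
    """Option 1: copy the item AND all of its related data streams."""
    return copy.deepcopy(item)  # the server store duplicates client data

def link_copy_to_server(item):
    """Option 2: copy only the item; keep links back to the client store."""
    clone = Item(item.name, item.data)
    clone.related = item.related  # links resolve only while the client is reachable
    return clone

# A contact with one related document, as in the scenario above.
doc = Item("Acme sales pitch.doc", data="v1")
contact = Item("Joe Customer")
contact.related.append(doc)

deep = deep_copy_to_server(contact)
linked = link_copy_to_server(contact)

# The client later edits the document...
doc.data = "v2, edited on the client"

print(deep.related[0].data)    # the server's copy is now stale: "v1"
print(linked.related[0].data)  # the link sees the edit -- until the client goes offline
```

Neither answer is obviously right: the deep copy silently goes stale (and bloats the server store), while the link breaks the moment the client disconnects. That’s exactly the usability question, not just a technical one.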
Now, these questions may seem technical, but take your geek hat off for a second and you’ll realize there’s a serious usability aspect to each one. Each changes the user experience and the way people work with the data. This is the issue that I think most people are overlooking here. The technology is the “easy” part folks; it’s solving the user experience that is the challenge here. If every machine were an island, WinFS would work today, as evidenced by its existence in the Longhorn client previews. Since Microsoft got pushback from the community when they didn’t have an answer for the server side, they decided to delay the release until they could solve the whole story. Yes, you heard that right: it was us who caused this delay. Maybe not the small application developer, but the big companies who basically gain no benefit from WinFS stores being islands, since their thousands of users often need to work on the same data with other users. If I can’t store related data together and then have another user in my organization look at that same data with the same relationships and metadata, then what’s the point from a business value-add perspective? Hint: there is none. That’s why we’re not gonna see this technology until all these questions are answered.
All this said, I of course agree there are technical hurdles. Yes, the performance we see in the Longhorn client previews is far from usable. There is also the challenge of row-level security, which even Microsoft’s flagship SQL Server 2005 still lacks. Scalability of the technology in the instance of a huge, corporate WinFS store may be an issue. I could go on, but, as I’ve already stated, these are technical issues, and I have complete confidence that the combined brainpower of Microsoft’s WinFS team can overcome all of them in less time than it will take to consider all the ramifications of the decisions they make up front about how we will work with our data for the next ten to fifteen years of our lives.
This is the best post I have read on why WinFS was postponed.
I agree you have correctly identified that the real issue is not the technical challenge, but the usability problem.
I worked for about 2 years on a product called XTend, which was a peer-to-peer relational file system. I have encountered the technical problems you talk about before, and I know that there is no simple answer, especially to the ‘how should it work’ part of the question.
But I disagree that you need to have inter-store WinFS functionality before WinFS is of value to business. Do you need inter-database functionality before a database is of value? Sure, adding inter-database functionality makes it more valuable, but it is already valuable!
Having a server that can store rich forms of data based on schemas which you can extend seems very valuable to me, so long as the API can be accessed from client machines (note API, not GUI). I would love a server tool that allows me to write code like this:
Person p = new Person();
p.Name = "Alex";
And then find the file again and open a stream to it…
Surely that would be cool.
Actually, it is so cool I decided to write a down-and-dirty version…
I must say again, Drew, that was a cool post!