On Tuesday, I harped on proprietary data again. Flemming, still jet-lagged, gave the post a thumbs-up, but Mitch urged us not to repeat the errors that have marked so many endeavors: establishing technical standards that enshrine data, or a way of thinking about it, until it becomes dogma. The dogma then dooms the participants to follow the old flawed patterns, perhaps without even realizing what assumptions are baked into their enterprise.
I agree with Mitch, and not just because he is a valued advisor and the expert entrepreneur in the booth of this quiz show. The failure of organizations to understand their data strategies is one of the reasons they’re so frustrating to work with and for. But this design study could easily repeat those kinds of mistakes, even if our aims seem more open. So I want to address Mitch’s points in serial fashion, to give us a chance to question the assumptions behind the current Xpertweb design.
Let’s start at the top: “we need to keep clearly in front of us the idea that human relationships is what we are talking about.” To which I respond with a firm, “Well, yes and no!” (Never equivocate on important points…;-)
Yes. I’m inclined to wax rhapsodic when glimpsing a future with a kinder, gentler economy based on human connections across the globe or around the corner. And I do believe that our current structures devalue the creativity most of us pour into our work and the deep and vital relationships we form with partners we transact with. And I think Xpertweb improves the chances of forming and maintaining those relationships, compared to proprietary hierarchies, which would be the most social benefit of our socialware.
No. However, there’s also a pragmatic edge to getting things done, which is why an Object-Oriented, “Open Resource” economy is so attractive, where the obvious expert is easy to find, easy to engage and easy to pay. Just as programmers can plug other people’s code objects into a program, so do we need to plug other people’s expertise into our activities, on-the-fly, and get on with our day. When I use a piece of open source code, admirable for its elegance and price, to do something otherwise impossible or lengthy, it’s just a tool that I use without forming a relationship. As my long-time friend, client and author, Jerry Vass teaches his Fortune 500 clients, “Your company may be impressive, but the buyer doesn’t care if the seller lives or dies, as long as he doesn’t die on the premises.”
Mitch has challenged us to strike the right balance between over-determining behavior and making the design so loose that there’s no value. There’s no way to respond without specifics, so please indulge a detailed overview of the Xpertweb approach, including our allowance for flexibility. Maybe you’ll have some ideas about whether it’s too rigid or too loose, and how it could be improved.
The Xpertweb Bottom Line
All we’re really after is to elicit a rating from each transaction and to make it indelible in the public record. The rating needs to be both quantitative (1-99%) and qualitative (a written comment). We all know that we rarely fill out rating surveys after the fact, so the rating must be required at the moment of payment.
Therefore, at a minimum, we need to provide an invoice form. Ideally, an invoice should summarize the transaction so the buyer can make a rating based on more than memory. That means it’s useful to capture the history of the transaction. We decided to provide some basic transaction forms and a dead-simple data capture system for the transaction details, including the one we care most about, the final rating.
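To make the rating requirement concrete, here is a minimal sketch of what a captured rating record might look like: an XML element holding the 1-99% score and the written comment. The element and field names (`rating`, `score`, `comment`) are illustrative only, not part of any actual Xpertweb specification.

```python
import xml.etree.ElementTree as ET

def make_rating(score: int, comment: str) -> str:
    """Build a minimal rating record; the score must be 1-99 per the design."""
    if not 1 <= score <= 99:
        raise ValueError("quantitative rating must be 1-99%")
    rating = ET.Element("rating")
    ET.SubElement(rating, "score").text = str(score)
    ET.SubElement(rating, "comment").text = comment
    return ET.tostring(rating, encoding="unicode")

record = make_rating(92, "Delivered early; clear communication throughout.")
```

Because the record is plain XML rather than rows in someone’s private database, either party (or anyone else) can read it back with ordinary tools.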
Last time, I revealed my horror of proprietary data: both parties to a transaction need all the data so they are full peers. Flemming agreed, and so does Mitch, if we strike the right balance between rigid and useless. That’s what drove the decision to give everyone the same data tools and to require a web server for both parties to each transaction. It was the only choice left standing after all the other choices wouldn’t work. Data that’s not on both web servers is suspect, since one of the parties may have changed or deleted something. Validation is by duplication.
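“Validation by duplication” can be sketched very simply: fetch each party’s copy of the transaction record from their respective web servers and compare fingerprints of the bytes. The hashing approach below is my own illustration of the principle, not a prescribed Xpertweb mechanism.

```python
import hashlib

def fingerprint(record_bytes: bytes) -> str:
    """Hash one party's copy of the transaction record."""
    return hashlib.sha256(record_bytes).hexdigest()

def records_agree(buyer_copy: bytes, seller_copy: bytes) -> bool:
    """A record is trustworthy only if both servers hold identical bytes."""
    return fingerprint(buyer_copy) == fingerprint(seller_copy)

# In practice each copy would be fetched from that party's own web server.
print(records_agree(b"<tx>...</tx>", b"<tx>...</tx>"))      # True
print(records_agree(b"<tx>...</tx>", b"<tx>tampered</tx>"))  # False
```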
When you’re designing a campus, put the sidewalks where the grass wears out.
The 5,000 Year-old Flow Chart
How do we know what forms to provide to lead up to the one we actually care about, the evaluation? On March 18, I suggested that we have an ancient model to follow for the Xpertweb transaction flow:
Why Do We Need a Flow Chart?
Transactions are asynchronous.
All those things happen in the real world, but the evaluation is maintained privately by each party, and even then only implicitly. Xpertweb intends to make evaluations explicit and public.
Okay, we think we’re clever enough to design those forms, but we need a data store that’s flexible and searchable. There are few examples of open, pure-XML data stores. There’s a lot of data on web pages, but it’s hard for computers to organize and aggregate web info for us, so web pages aren’t really data. A lot of “real data” are served by web pages, but the data are buried in proprietary databases that Google and the rest of us can’t get at without permission. A Peer Economy must be a permission-free zone.
So, we found ourselves going the eccentric route again. Xpertweb users will store their data on their own web servers, in pure XML format. It won’t require UDDI or SOAP or XML-RPC or anything else exotic to get at user records. You can do it with a browser or a search engine. RSS will provide pointers to help people find what they want.
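To illustrate why nothing exotic is needed: any plain HTTP client can pull one of these XML records off a user’s web server and parse it. The URL and element names below are invented for the example.

```python
import urllib.request
import xml.etree.ElementTree as ET

def fetch_record(url: str) -> ET.Element:
    """Plain HTTP GET; no SOAP, XML-RPC, or UDDI required."""
    with urllib.request.urlopen(url) as resp:
        return ET.fromstring(resp.read())

# Parsing works the same whether the XML arrived over the wire or not:
doc = ET.fromstring("<transaction><status>paid</status></transaction>")
print(doc.findtext("status"))  # paid
```

A browser or a search engine's crawler does essentially the same GET, which is the whole point: the records are just documents on the open web.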
Why is Data So Difficult?
There are two ways that data systems lock their users into the kind of rigid structures Mitch is warning us against: the data structure itself may not be malleable, or the data language may make it hard to design and integrate new forms that gather inputs (old or new) for the database and display the gathered data.
Most companies maintain a little bit of info about a lot of people. Xpertweb users need a lot of data about relatively few people. So instead of using a huge array to store the data, an Xpertweb site will keep three kinds of small datasets:
Within each of those datatypes, the trick is to have just enough structure that the data needs are served, but to avoid freezing that structure. Let’s think about data strategy. Every data design specifies data types (first name, last name, city, zip, etc.) and data forms (rolodex card, product sheet, etc.). The problem with the data tools we’re used to is that, as Mitch suggests, they’re too rigid:
Those factors combine to make it difficult to do what every data structure should: facilitate new types of data and new forms for novel inputs and display of the old and new data types.
This design study tries to avoid those traps. The design is as public as possible.
Because the data forms are HTML, there are thousands of people able to modify them or add new ones. HTML skills are possibly the most common computer specialty, so form design and modification can be learned or hired out reasonably, probably as an Xpertweb-rated specialty.
“Aha!” you say. There may be tens of thousands of HTML-aware people with FrontPage or other editors, but only a small fraction of them know how to write the code that equips HTML to save or display data.
That’s the sad truth, esteemed Effendi. Where’s the data design tool for the rest of us that will let someone’s niece or nephew build or modify data forms? I saw an article today by a developer praising a barebones, 300-page book on web data that he used every day, rather than the several thousand other pages on his shelf.
We call it XWriter, and it will be part of the code we’ll provide to every user. XWriter will let any HTML author add the required data calls (inputs or display), working with any of the six types of HTML input widgets (text box, text area, value list, value popup menu, check box and radio buttons). XWriter 0.8 was built by Hurai Rody, and Flemming will write Version 1.0 using techniques he’s used on other projects.
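I won’t pretend to document XWriter’s internals here; as a hedge, this toy sketch only shows the general idea: emitting each of the six HTML widget types tagged with a data-binding attribute so a data layer could round-trip saved values to the XML store. The attribute name (`data-xw-field`) and the function are entirely hypothetical.

```python
WIDGETS = ("text", "textarea", "select", "popup", "checkbox", "radio")

def bind(widget: str, field: str) -> str:
    """Emit one of the six HTML input widgets, tagged with a
    hypothetical data-binding attribute (not the real XWriter syntax)."""
    if widget not in WIDGETS:
        raise ValueError(f"unknown widget: {widget}")
    if widget == "textarea":
        return f'<textarea name="{field}" data-xw-field="{field}"></textarea>'
    if widget in ("select", "popup"):
        return f'<select name="{field}" data-xw-field="{field}"></select>'
    type_ = "text" if widget == "text" else widget
    return f'<input type="{type_}" name="{field}" data-xw-field="{field}">'

print(bind("text", "buyer_name"))
# <input type="text" name="buyer_name" data-xw-field="buyer_name">
```

The point of a tool like XWriter is that the HTML author never writes that binding logic by hand; the tool decorates ordinary widgets for them.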
So What’s the Point?
If we’re doing this right, these are the most likely reasons:
This has been a lot of detail, but it’s the only way to find out if we’re headed down the slippery slope Mitch warns us against. Isn’t the reason there are so many poor data solutions that the owners aren’t willing to dig around in the details? I’ve consulted on data projects and the users are so rarely involved, it’s no wonder they aren’t ideal. Many bloggers and bloggees have experienced detail aversion first hand, and it’s not a pretty sight.
Fifteen years ago when I funded and later tried to run the Dynamac Computer project, we would send an extra stick-on keyboard key with each computer. It was red with white letters: DWIM. Do What I Mean. It’s what every database customer wants and it’s what most data designers are forced to guess at. We hope that we’re looking at enough details to avoid the DWIM trap, and we hope you will too.