Mitch generally agreed with yesterday’s blow-by-blow dissection of Xpertweb’s measures to avoid hardening of the data arteries. But he added four important points on the specific biases that are usually embedded in enterprise data design. These are the biases to avoid by providing for gradual system evolution.
Which is why Xpertweb v 1.0 will be able to read your email (or a dedicated email account). It’s not certain, though, whether the messages will be organized in a way that a script can make sense of them. They certainly will be if they’re script-generated, but you never know about those pesky humans.
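Why is a script-generated message so much easier to handle? A minimal sketch, using Python’s standard email module, shows the idea. The field names and the “Key: value” body layout here are assumptions for illustration, not the actual Xpertweb v1.0 message format:

```python
# Hypothetical sketch: a script-generated message is easy to parse
# because its body follows a fixed "Key: value" layout. A human-written
# message would offer no such guarantee.
from email import message_from_string

raw = """\
From: seller@example.com
Subject: xpertweb-offer

Stage: Negotiate
Price: 25.00
Item: one hour of tutoring
"""

msg = message_from_string(raw)
fields = {}
for line in msg.get_payload().splitlines():
    if ":" in line:
        key, _, value = line.partition(":")
        fields[key.strip()] = value.strip()

print(fields["Stage"])  # the script reads the stage directly
```

A human who writes “I’ll do it for about $25, I think” breaks this parser instantly, which is the whole point of the caveat above.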
I’ve always felt that modality bias is the greatest challenge in networked human interaction. Modality is the mode of interaction dictated by the software’s user interface and data categories. Most of us are comfortable with email, as Mitch points out, though we forget how recently we’ve adopted that mode. When you force people into a new modality, you’d better have some damn good reasons, and that’s not always the case.
But the web’s theme the past few years has been unstructured content becoming structured. Web pages like this one are unstructured. Even if the content is worth something (unlikely as that may be), it’s hard to find and aggregate the nuggets to establish meaning, because HTML is a presentation tool, not a data-tagging tool. That’s what led to the formulation of XML and XHTML, which require authors to tag their content by category. XML adoption has been slow, except by enterprises using it to shuttle data between two otherwise uncooperative databases. It’s a pain to tag your data, which is Mitch’s point.
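The presentation-versus-data distinction is easy to see in a few lines. In this sketch the element names (`<offer>`, `<item>`, `<price>`) are invented for illustration, not an Xpertweb vocabulary:

```python
# Why tagged data beats presentation markup for aggregation.
import xml.etree.ElementTree as ET

# HTML-style markup says how text should LOOK, not what it MEANS:
html_like = "<p><b>Tutoring</b> costs <i>$25</i></p>"

# XML-style markup names the categories, so a script can read them:
xml_doc = "<offer><item>Tutoring</item><price>25</price></offer>"

offer = ET.fromstring(xml_doc)
print(offer.findtext("item"), offer.findtext("price"))
# From the HTML version, nothing tells a script which text is the price;
# from the XML version, the category is explicit in the tag.
```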
Just as so many of us mastered the email modality in the early ’90s, we mastered online purchasing in the late ’90s. It seems to me we’ve become as comfortable with the forms used to buy things online as we are with the familiar three-paned email interface. So if the Xpertweb forms can look a lot like other online experiences, we’ve got a shot at providing a friendly user modality.
Bingo! Labeling is a huge problem, since programmers can’t get the lingo and categories right without enthusiastic collaboration with the people who will use the system. Such collaboration is rare because users have no interest in pitching in on the design until the piece of dreck is switched on. I can think of only three ways to reduce the semantic bias problem:
Here’s evidence of the steep gradient between the early adopters and the rest of us. The early adopters see the value of the new tools and are adept at responding to new modalities, unlike you and me. Too bad they’re never around when you need some guidance. Fortunately, skill is usually a matter of time and patience. I learned in the Air Force that there are brilliant pilots and the rest of us, but time in the seat is the great skill leveler; most pilots are average pilots. Motivation is the skill leveler: if you have a strong reason to master something, you will. Think of all the grandparents who’ve mastered email to stay in touch. Now they’re even exchanging pictures! Money can also be a strong motivation, and Xpertweb has some explicit rewards built into the system to inspire enthusiastic mastery of the bit of procedurality that can’t be avoided.
All of us are seeing Mitch’s historical bias example. Consider how hard it is on the RIAA. Heh.
Historical bias is assured when the data structures, datatypes, data forms and output protocols can’t evolve to keep up with the people evolving away from the system. But if someone is motivated to fix the problem and the means are easily engaged, then the changes will get made and the curse of legacy avoided.
I sense that Mitch has concerns about the predetermined stages of an Xpertweb transaction: Discover, Identify, Negotiate, Commit, Invoice and Evaluate. The reason for discrete stages is to organize work sensibly, without requiring an organization. The asynchronous imperative of a transaction record requires that the data needs of each stage be met before moving to the next stage.
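That asynchronous imperative can be sketched as a tiny stage checker. The stage names come from the text above; the required-field lists are illustrative assumptions, not the actual Xpertweb data needs:

```python
# Sketch: a transaction record may only advance to the next stage once
# the current stage's data needs are met. Field names are hypothetical.
STAGES = ["Discover", "Identify", "Negotiate", "Commit", "Invoice", "Evaluate"]
REQUIRED = {
    "Identify":  ["buyer_id", "seller_id"],
    "Negotiate": ["price"],
    "Commit":    ["terms_accepted"],
    "Invoice":   ["amount"],
}

def advance(current_stage, record):
    """Return the next stage only if this stage's data needs are met."""
    missing = [f for f in REQUIRED.get(current_stage, []) if f not in record]
    if missing:
        raise ValueError(f"{current_stage} incomplete: missing {missing}")
    return STAGES[STAGES.index(current_stage) + 1]

record = {"buyer_id": "b1", "seller_id": "s1"}
print(advance("Identify", record))  # advances, because both IDs are present
```

Trying to advance past Negotiate without a price raises an error, which is the whole discipline: the record itself enforces the sequence, so no organization has to.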
At the risk of being doctrinaire, we feel the need to provi
Data for the rest of us is similarly untried and, by many, still unsought. Our design indeed raises problems. Unlike a central data store, there’s no team to manage and maintain it. It’s not optimized for compactness and speed. It assumes the proper functioning of the Xpertweb scripts on each user’s computer. It assumes that there are enough people who will learn a new (to them) technology and a new way of dealing. A determined techie could find another half a dozen objections.
Living With Diversity
We never set out to create a weird data architecture just to be different. We had no choice, since all conventional methods rely on the kind of centralized data hegemony that would eventually pervert our purpose. Xpertweb’s distributed data store is kind of holographic, present on at least four web sites (the two parties to a transaction and their two mentors). The mirroring of identical data on those sites imputes validity. Validation is further promoted by a validation tool built into every installation. This tool verifies the file structure to make sure it conforms to this model. It also provides a schema to validate the XML structures, and it can validate other sites on a schedule or on request, so your mentor’s site is continually validating your site’s structure and its conformance to your schema.
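The holographic idea, that agreement among mirrors imputes validity, can be sketched in a few lines. The fetching of remote copies is simulated here; a real installation would retrieve them from the four sites:

```python
# Sketch: a transaction record is mirrored on (at least) four sites --
# both parties and their two mentors. A copy is trusted when all the
# mirrors hash identically; a tampered copy stands out immediately.
import hashlib

def digest(record_text):
    return hashlib.sha256(record_text.encode("utf-8")).hexdigest()

def mirrors_agree(copies):
    """True when every mirrored copy hashes to the same value."""
    return len({digest(c) for c in copies}) == 1

record = "<transaction><price>25</price></transaction>"
copies = [record, record, record, record]  # buyer, seller, two mentors
print(mirrors_agree(copies))               # all four match

tampered = copies[:3] + ["<transaction><price>2500</price></transaction>"]
print(mirrors_agree(tampered))             # the altered copy breaks consensus
```

No single site is authoritative; the redundancy itself is the authority, which is what lets the design do without a central data store.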
The schema provides for the obvious datatypes plus any optional datatypes the owner may designate using the XWriter tool. These would most typically be added to describe attributes of a service or product. Another compelling reason for adding new datatypes is to transform your ID.xml file into a full-fledged Digital ID using Liberty Alliance protocols, so you become your own Identity Provider.
The Open Source Haven
In facing these challenges, the Xpertweb model is fortunate not to be a business. If it were a business, we’d be capitalized to hire a crew of programmers and arrange for office space and computers and furniture and all the rest. Development would be done in secret so the competition wouldn’t get wind of our gazillion-dollar concept. Naturally we’d have a business plan promising a Return On Investment, with a stated marketing budget and rollout and the server farm, etc. Once the code was “done” (always a more-or-less state), we’d have a limited horizon to satisfy the nervous investors, so we’d move heaven and earth to inspire massive adoption by our target demographic. As with most new software, despite the promising number of enthusiastic early adopters, the press and the public would note that it’s interesting and worth looking into some day, and would go back to business as usual.
Then would start the decade of dimming hope and rising anguish as the shrinking team of stakeholders tries to wring some value out of what has become an old idea that didn’t quite work out.
Xpertweb has no burn rate and no central software or servers. It will put its genetic material out there, with the means to further propagate it. Starting with just six users and spreading slowly at first, we expect to wring it out, make a few adjustments and then, as they say, let ‘er rip.
Perhaps we’re naive to think that ordinary people will choose to, or even can, master these new skills. But it seems less naive than assuming that our current skills and hierarchies will spontaneously inspire higher productivity and individual work satisfaction.