Friday, December 31, 2004
Pointer to article:
No event signifies the failure of MSN Passport—and of identity-aggregation schemes in general—as well as this. Identity federation is the only workable solution to cross-domain, cross-enterprise, cross-platform single sign-on (SSO). Microsoft knows this, which is why it has put significant weight behind wannabe federation standards such as WS-Federation—though it has had to pull back that spec as well, due to lackluster industry support. The identity federation landscape is now squarely riding on the SAML 1.0/1.1 and WS-Security 2004 standards (and, starting in 2005, on SAML 2.0, which incorporates the bulk of the core Liberty Alliance standards for more robust, multidomain federation).
Now, Passport is more purely an internal MSN-specific identity-aggregation/SSO scheme. And it’s the last vestige of Microsoft’s failed “Hailstorm” initiative of hosted, identity-enabled, subscription services. We can add Passport to the list of failed, proprietary .Net-generation Microsoft infrastructure technologies—such as .Net Remoting—that the vendor has largely abandoned in its “Longhorn” roadmap.
Microsoft keeps returning to the drawing board. And that’s a good thing (unless you invested in the fruits of its last trip to that board). The company’s learning a bit of humility the hard way. And it’s learning that it’s important to pay attention to what’s on the industry’s drawing board. Because some schemes—such as the need for purely standards-based identity federation—are larger than any single vendor, platform, or application.
You think Microsoft some day might auction off Passport technology on eBay? Any bids?
Pointer to article:
For those in need of a good 100-proof shot of irony to close out this puzzling year...hmm...let’s see…IBM’s PC business as the core of its operations...IBM’s headquarters hometown as its headquarters hometown…ex-IBM exec as its CEO...can this even be termed a Chinese company anymore (apart from the not-insignificant fact that the bulk of its operations are in China)?...and let’s not fool ourselves...China is just about the most capitalistic culture on earth (and I'm including overseas Chinese...a people with whom I'm quite personally familiar...in that statement), regardless of whatever nominal ideology the People's Republic of China's current political leadership still feebly espouses...this marks the further emergence of China into the truly supranational world of modern business...wouldn’t it be interesting if some high-profile US high-tech firm moved its HQ to Beijing to be closer to the massive Asian market?...red goes blue...red states...blue states...old states...new states
Thursday, December 30, 2004
Just in case you're wondering (I've been getting calls and e-mails galore). Egidia's family in Jakarta and Borneo are OK. In terms of Indonesia, which bore the brunt of this horrible disaster, the tsunami primarily affected the northwest coast of Sumatra (Aceh province, mostly, which sustained more than 50,000 fatalities and vast homelessness and human misery). Sumatra (the world's third-largest island) effectively operated as a floodwall preventing the tsunami from affecting Java, Borneo, and other islands to the east in the archipelago. If Jakarta had gotten hit, Singapore (being closer to the earthquake's epicenter, and at the end of the Malay Peninsula) would have gotten it much worse.
Thanks for your concern. Obviously, it's been a rough month for us. No point wallowing in the crap that others, including God, dish our way. We're resilient. We're fine (though occasionally I feel a tad coarse).
P.S. Indonesia's a beautiful country with many wonderful people. Here's the poem I wrote this past summer during our stay (my fourth in 16 years) with her sister and her family in Jakarta-Pusat. Yes, Jakarta is crowded and congested, but exciting in its pure energy:
Is all a warm swarming sprawl.
It's as real remembered as
experienced, as crass and
crowded as any shining
as any concrete bog. What's
a car to this labyrinth:
a serpent snaking itself
into impossible slots,
an air-conditioned escape
pod to brave the squeeze of the
welter. Go drive the hive of
brands and goods, up the high and
mighty rises, down through the
frayed Batavian canal-
infested old neighborhoods.
It's a mart. It's a cart. It's
a stall. It's hidden bazaars
and holes-in-the-wall. It's the
superstore and the mega-
mall. Broadcast prayers, dirty air.
All far too far familiar.
Mari kita berdoa kepada Allah untuk orang-orang yang meninggal di propinsi Aceh dan di negeri lain yang berkeliling Samudera India. (Let us pray to God for the people who died in Aceh province and in the other lands around the Indian Ocean.)
Pointer to article:
Orchestration standards continue to proliferate like orchids in a hothouse.
I use the term “orchestration” as the catch-all for this entire space of standards. I’ve personally standardized on “orchestration” as the master term, rather than any of the many synonyms, such as choreography, workflow, and business process management (BPM). I’m just tired of the industry proliferating unnecessary, confusing synonyms for the same phenomenon. And they all do refer to the same core phenomenon: the rule-driven flow of content, context, and control throughout a distributed business process. This core definition is agnostic to such concerns as where the rules engines reside in this orchestrated process (intermediaries or endpoints or mix of both), what the endpoints of this process might be (applications, humans, devices, or what have you), and so forth.
It’s important to note that WS-CDL and WS-BPEL are complementary orchestration standards. WS-CDL would be used to define orchestration rules executed by process endpoints (vis-à-vis the orchestrated message exchange patterns in which those endpoints participate). WS-BPEL, by contrast, is used to define orchestration rules executed by process intermediaries (e.g., integration brokers), vis-à-vis the orchestrated message exchange patterns in which those intermediaries participate.
WS-CDL 1.0 addresses a narrower functional scope than its defunct W3C predecessor, WSCI 1.0, which attempted to match WS-BPEL as a full-fledged orchestration execution language that drives processing at intermediary nodes known as “integration brokers” or “orchestration engines.” WS-CDL 1.0 only addresses the observable, structured interactions of Web services with their users, which may include humans, applications, and/or other Web services. WS-CDL 1.0 can describe peer-to-peer, cross-enterprise, message exchange patterns associated with interactions among users, applications, and/or Web services endpoints. It supports both WSDL 2.0 and SOAP 1.2.
WS-CDL 1.0 documents are essentially “contracts” that govern orchestrated interactions among two or more endpoints. WS-CDL documents are exchanged between endpoints through various means. These documents provide endpoints with the information necessary to mutually establish, coordinate, and manage orchestrated interactions, which may involve peer-to-peer message exchanges or be routed through intermediaries such as integration brokers. In effect, WS-CDL may be used to describe the coordination context as viewed by distributed participants in orchestrated business processes.
A WS-CDL document defines participants, roles, relationships, interaction patterns (including specification of messages, channels, ordering, constraints), and interaction lifecycle associated with two or more endpoints in a distributed business process. WS-CDL interaction lifecycles specify conditions under which endpoints instantiate, transition, and complete all the steps in a business process, as well as what messages are to be sent and/or received within each of those lifecycle stages. Lifecycle specifications also describe the message exchanges under which endpoints handle errors and recover from aborted or failed business processes.
As I said, WS-CDL 1.0 is complementary to WS-BPEL. Conceivably, a WS-CDL 1.0 contract can be used to determine whether an orchestration developed and executed in WS-BPEL exhibits the message exchange patterns expected by various endpoints in a business process. WS-CDL 1.0 contracts would describe the message sequence and conditions expected by each participating endpoint node, whereas a WS-BPEL orchestration would specify precisely how an orchestration engine executes each step in a complex, multipoint business process. However, W3C and OASIS are not coordinating their work on the respective specifications, and WS-BPEL’s primary advocates—Microsoft and IBM—are not participating in W3C’s development of WS-CDL.
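The conformance idea in the previous paragraph can be illustrated outside of XML. Below is a rough Python sketch of checking an orchestration engine's observable message exchanges against a choreography "contract." Everything here is my own invention for illustration—the role names, message names, and the decision to tolerate uncontracted internal steps are assumptions, not anything defined by WS-CDL or WS-BPEL:

```python
# Hypothetical sketch: verifying that an executed orchestration's
# observable message exchanges realize a choreography contract.
# All role and message names are invented for illustration.

def conforms(observed, contract):
    """True if the contracted exchanges appear, in order, in the
    observed trace. Extra exchanges are tolerated here, on the
    assumption that an engine may interleave internal steps."""
    it = iter(observed)
    return all(step in it for step in contract)

# Contract: the message exchange pattern each endpoint expects.
contract = [
    ("buyer", "seller", "purchaseOrder"),
    ("seller", "buyer", "poAcknowledgment"),
    ("seller", "shipper", "shipRequest"),
]

# Trace emitted by an engine executing a multi-step process.
observed = [
    ("buyer", "seller", "purchaseOrder"),
    ("seller", "credit", "creditCheck"),   # internal step, not contracted
    ("seller", "buyer", "poAcknowledgment"),
    ("seller", "shipper", "shipRequest"),
]

print(conforms(observed, contract))  # True
```

The `step in it` trick consumes the iterator as it searches, which is what enforces ordering: once an exchange is matched, earlier trace entries can't satisfy later contract steps.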
So the orchestration standards wars continue. There’s much more going on in this space than I care to elaborate on in this blog entry. I could write a book about it (actually, I’ve written two books on orchestration topics—not eager to write a third until the market actually starts demanding such titles).
Wednesday, December 29, 2004
Pointer to article:
This news reminded me, obliquely, of a Jerry Seinfeld bit from the 2002 documentary movie “Comedian,” which I rented from Blockbuster last week. In it, Seinfeld was musing on the whole notion of a “think tank,” and what a funny phrase that is, and the weird mental picture it creates of people locked in some intellectual isolation chamber and expected to do something—think—that they can’t help doing anyway. Why not a “breathe tank” or a “metabolize-your-food tank”? When I think (yes, I do think, habitually) of the notion of an “IT industry think tank,” the Seinfeldian (or is it Pythonesque?) image that comes to mind is of rows of industry analysts chained to their PCs, like some sort of cyber-slave-galley, being commanded by the ghost of Thomas Watson Sr. (of IBM fame), under pain of bullwhip, to “think…think…think.”
OK, I’ve actually got a more serious comment on this news. I’ve been an IT industry analyst for years, and have worked in one of those “think tanks,” which, of course, are nothing like any of these images suggest. First of all, you’re not expected to “think” so much as you’re expected to comment. Most of these IT industry analyst firms are more like “commentary communities” than “think tanks.” Commentary gets delivered through various channels: reports, telebriefings, dialogues, consulting, conferences, public speaking, publishing in trade journals, quotes to inquiring reporters, and so forth. And commentary, in most of these firms, concerns the “news”—or the new paradigms, approaches, technologies, vendors, products, standards, and so forth—that continually pop up, as well as commentary on the “olds” (e.g., whether legacy approaches/products/etc are still relevant, or whether and to what extent they interoperate with the “news,” and to what extent they need to be migrated away from).
Fundamentally, in the IT industry food chain, IT industry analysts sit quite close to trade press reporters—the key difference being that analysts not only report on what’s new and important, but they themselves are experts, thought leaders, and consultants helping their customers to assimilate and apply all the new stuff.
So, the news that Gartner is acquiring Meta—what to make of it?
First off, these are the two largest firms in the IT industry research and analysis (R&A) market, so this is a very significant event. I suspect that these firms have a lot of common customers (many enterprises subscribe to two or more R&A firms’ services) who will be expecting some grand cross-fertilization from this nouveau convergence of two R&A “supermarkets.” The combined Gartner/Meta, with its diverse services and hundreds of analysts pumping out reports on a regular basis, will be even more dominant in this space.
Secondly, this merger by no means closes off meaningful competition in the R&A market. There are dozens of other firms in this space, with different technology or vertical focuses, different business models and offerings, and different analysts with different approaches and backgrounds. The IT industry R&A space is huge and ever-expanding, with low barriers to entry; indeed, a two-person analyst shop such as, say, Zapthink, can, through shrewd promotion, elevate itself to a prominence (in a given, narrowly defined segment) all out of proportion to its size. In addition to boutiques and other, much smaller R&A shops, Gartner/Meta still has to contend with large rivals such as Forrester and IDC.
Thirdly, this merger doesn’t directly improve the quality or depth or relevance of Gartner/Meta’s collective R&A offerings—it’s more of a horizontal consolidation of very similar companies with similar offerings but slightly different customer bases. The combined Gartner/Meta brand has no special halo to it. Actually, much of the actual branding of R&A firms is tied up in identities of the particular analysts within them—the actual “gurus” who, each of them, provide a different take on the “news” and hence, in economic terms, are not directly substitutable for each other (though nobody’s indispensable and, with the rampant pack-analyst commentary cloud in which we all travel, it’s sometimes difficult to tell the difference between Mr. Joe Analyst and Ms. Jane Analyst in terms of their perspectives and recommendations).
There’s no point in questioning whether Gartner’s “magic quadrant” is any better or worse an analytical approach than whatever Meta’s characteristic approach happens to be. Every R&A firm has its own approach which, among other factors, constitutes its corporate “brand.” Enterprise and vendor customers derive varying degrees of value from most R&A firms’ reports, conferences, etc. All the R&A firms do great work. All R&A firms’ analysts consume other firms’ analysts’ work products (usually, in the form of articles that analysts publish in the trade press; I know, cuz I read everybody, and lots of people read my 17-plus-year-and-running Network World column “Above the Cloud”; in terms of longevity, I'm practically their "columnist emeritus," though on the masthead I'm simply "contributing editor"). And we read each others’ blogs (an analyst remains an analyst, regardless of whether he/she is drawing a salary from some larger firm or is simply self-employed and self-motivated).
As analysts, you have to continually provide free samples of your brainpower, through various media, to show the world that you’re staying fresh and that you’ve “got the goods.” The best analysts are also synthesists: bringing in many sources of information and perspective: providing innovative and (hopefully) useful new ways of looking at problem spaces. If we’re doing our jobs well, we become master clarifiers of all the complexity and dynamism in the environment. We also help vendors to understand where they fit in the evolving IT food chain, and help enterprise customers to align their plans and protect their investments in an ever changing world.
IT R&A firms can consolidate all they want. No one firm—no one individual analyst, for that matter—will ever monopolize smart thinking on the day’s events and the future’s likely shape.
There's always someone out there with fresher thinking.
Tuesday, December 28, 2004
Pointer to article:
This is no surprise. Actually, it’s a good move for Microsoft and its customers.
Strong content filtering should be a mandatory feature of all messaging systems. It’s especially critical for SMTP-based Internet e-mail, which is the most universally interoperable—but also vulnerable—distributed service. With universal interoperability comes universal exposure to all the beasties—spam, viruses, objectionable images, etc.—that come in with the mail.
The notion that mail filtering should take place only at the “edge” of the network is wrongheaded. Spam is like a tsunami that never lets up: it must be held back with a layered array of floodwalls from the ISP to the DMZ to the mail bridgehead router to the departmental mail server to the desktop. Single-node anti-spam solutions, such as content filtering on enterprise mail servers, are insufficient, because they do nothing to stem the flood of spam that congests ISPs, perimeter firewalls, and enterprise network backbones. Spam-only mail-filtering solutions are insufficient because they don’t address other critical mail-borne threats such as viruses, pornographic content, and oversize file attachments. And static anti-spam filtering rules are insufficient because spammers are smart and continually revise their plans of attack to circumvent the most popular filtering rules.
To address these inadequacies with current anti-spam approaches, the industry needs to explore techniques that fit the following criteria:
• Multinode: deployable at the client, mail server, mail gateway, perimeter gateway, and network service provider levels
• Multithreat: able to block spam, viruses, pornographic file attachments, and other species of mail-borne threats
• Dynamic: adaptable to the ever-evolving character of real-world spam attacks, spamware, and spammers
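As a toy illustration of these three criteria, here's a sketch in which the same pluggable filter engine is deployed at multiple nodes along the mail path, with rules that can be swapped at runtime. Every rule, threshold, and node name below is invented for illustration—this is not any vendor's actual filtering logic:

```python
# Hypothetical multinode/multithreat/dynamic filter pipeline sketch.
from dataclasses import dataclass, field

@dataclass
class Message:
    sender: str
    subject: str
    attachments: list = field(default_factory=list)
    size_kb: int = 0

# Swappable rules (the "dynamic" criterion): each returns a threat
# label if it fires, or a falsy value to let the message through.
def spam_rule(msg):
    return "act now" in msg.subject.lower() and "spam"

def virus_rule(msg):
    return any(a.endswith(".exe") for a in msg.attachments) and "virus"

def oversize_rule(msg):
    return msg.size_kb > 10_000 and "oversize attachment"

@dataclass
class FilterNode:
    name: str          # e.g. "ISP", "perimeter gateway"
    rules: list
    def inspect(self, msg):
        """Return the first threat verdict, or None to pass downstream."""
        for rule in self.rules:
            verdict = rule(msg)
            if verdict:
                return f"{self.name}: blocked ({verdict})"
        return None

# Layered "floodwalls" (the "multinode" criterion): one engine,
# deployed at several points in the mail path.
path = [
    FilterNode("ISP", [spam_rule]),
    FilterNode("perimeter gateway", [virus_rule, oversize_rule]),
    FilterNode("departmental server", [spam_rule, virus_rule]),
]

def deliver(msg):
    for node in path:
        verdict = node.inspect(msg)
        if verdict:
            return verdict
    return "delivered to desktop"

print(deliver(Message("x@y", "ACT NOW!!!")))  # blocked at the ISP layer
```

The point of the sketch: a spam message never reaches the enterprise backbone, because the outermost floodwall catches it first, while virus-laden mail that slips past one node still faces the same engine downstream.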
The forces that flood spam, spim, spit, and other sh*t into our mail systems are relentless. Our collective resolve to stem the tide must also be steadfast. All of these threats are causing a steady rise in the “sea level” of junk mail for all of us, and we’re in continual danger of inundation.
On an even more serious note, please say a prayer for the 40,000+ victims of the horrible earthquake and tsunami in Asia. It was a lovely day and this calamity was totally unexpected and sudden. It could happen to any of us anywhere in the world near coastlines. Which, come to think of it, constitutes the majority of the world’s population. The ocean floor is much larger than dry land area on earth, and probably has as much or more earthquake activity.
Monday, December 27, 2004
Pointer to site:
Brilliant! I remember one website where somebody sketched out the imagined floorplans to fictional sitcom houses (e.g., Munsters, Bradys, Cleavers, etc.). Now we have the imagined skeletons of cartoon characters. As we watch them on the tube, we inhabit their homes and their skins. I’ve got some cartoon-inspired poems that I’ll share eventually.
Pointer to article:
According to the article, US consumers don’t want to use fingerprint or thumbprint recognition (let’s call it digit recognition) in cell phones because they associate it with criminals, crime scenes, jails, and so forth. Well, why not make digit recognition invisible to the cellphone user? On flip phones, why not embed a digit recognition surface in the phone’s cover—in some slightly recessed surface that the user must push with some digit (thumb or index finger) in order to open? That way, the digit recognition is being used for its ultimate purpose: “opening” the phone so that the authorized individual may use it. Or embed it in the side of the phone, in a “grip” that helps the user to hold onto the phone? That way, it’s implicitly helping the user to physically “hold onto” the phone.
Or, thinking outside the box, why not give cellphone users a smart card with an embedded RFID transmitter chip. They put this card in their wallet or purse. The phone only works when it’s within a certain distance of this RFID transmitter. Outside that range, it’s locked up. Maybe what users want is an electronic leash for their cellphones, so that they can share the devices or leave them (disabled) wherever’s convenient.
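A rough sketch of how that proximity check might behave. The signal-strength threshold, and indeed the whole interface, are assumptions of mine for illustration—no real handset exposes anything like this:

```python
# Hypothetical "electronic leash": the handset periodically measures
# signal strength from the paired card and unlocks only when it's near.
# The threshold value is an invented assumption.

UNLOCK_THRESHOLD_DBM = -60  # assumed pairing-range cutoff

def phone_state(card_rssi_dbm):
    """Unlock when the paired card's signal is strong enough (i.e., near)."""
    if card_rssi_dbm is None:   # card not detected at all
        return "locked"
    return "unlocked" if card_rssi_dbm >= UNLOCK_THRESHOLD_DBM else "locked"

print(phone_state(-45))   # card in the owner's pocket -> unlocked
print(phone_state(-80))   # card left across the room  -> locked
print(phone_state(None))  # no card detected           -> locked
```

Note the design choice: "no card detected" fails closed, which is what makes the leash useful for a phone left behind somewhere.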
Friday, December 24, 2004
Pointer to article:
I downloaded Firefox late last week and made it my default browser. So far, so good. I switched away from IE not for security reasons. I switched to Firefox because I was so fed up with IE’s irritating behavior. A couple of months ago, I started getting those inexplicable “ads234” interstitials when visiting various webpages in IE. A couple of weeks ago, I started getting flaky bogus “page/file not found” messages when I clicked various links to pages/files that I know full well are there. Of course, there are the endless popups. And so on and so forth. I felt like I was losing my f***ing mind. So far, Firefox has restored some semblance of good behavior to my webtop. I have enough pressure in my life right now, from many corners. I seriously don't need my computer giving me a hard time too. Something’s gotta give. For me, IE gave. Sorry Microsoft. But, of course, I'm not sorry.
Pointer to article:
This is quite a relief. Slate is a great publication that just happens to have sprung up online first, in the dotcom bubble-building late 90s. The Washington Post is a great media company. May Slate live long and stimulate.
This also gives me more chances to be rejected as a poet. Robert Pinsky (former US poet laureate and prof at Boston University) is Slate’s poetry editor. I’ve sent many poems to him over the years, and all have been returned without comment. The most epic rejection was the first submission to Slate/Pinsky, in August 1997. One evening, I found his e-mail address (at BU) and e-mailed him the then-current version of my book “Pieces of Fate.” The next evening, while the wife and I were drinking beer at home, I got a phone call. A man with a deep New York-ish voice asked “Is this James Ko-bee-lee-us?” “Yes,” sez I. “This is Robert Pinsky…” My jaw dropped. My genius finally acknowledged? “Do you realize you sent me the Word Concept Virus?” Oh…sh*t. Yeah, he recognized me, all right, but for the wrong thing. No, it wasn’t deliberate. I was unaware that my home computer had the damn virus. Anyway, I apologized, explained the accidental nature of the infection, advised the (then) US poet laureate not to save changes to “normal.dot,” etc. Then I asked a quick question: “What did you think of the poems?” It was clear to me that he had barely looked at them. All he commented on was the fact that I had alphabetized the running order of the poems. It wasn’t a long call—no more than 3-4 minutes. But that just goes to show how totally pathetic I am at marketing my poems. Which I’m still writing: have more than 500 of them now in “Pieces of Fate,” and have been composing the book continually since August 5, 1995 (arbitrary start date).
A few months ago, I read a new poem that Pinsky had published in “Wired” magazine. The poem was called “Pixel.” I liked it, as I do all his stuff, but I felt that I could do better. So I composed a Kobielus take on the same topic. Gave it the same title. Sent it (via paper mail) to Slate (Pinsky’s still the poetry editor). Same results. Anyway, here it is, Kobielus’ “Pixel” for the pleasure of you my blogreaders:
One ought to thank Planck for the thought
the infinitesimal's not
a fathomless bottomless well
but a plot of versatile dots.
One ought to toast Hearst for the screen
that lays down the points in a clean
mist of crisp pixie light and strips
by the millions milled by machine.
And nod to Turing for blurring
the point where the strip takes sense and
base elements assemble the
cells and scenes and trick behind
the calculated picture of mind.
Send me an e-mail if you’re interested in getting the full “Pieces of Fate” (I'm at email@example.com). I'll send it to you at no charge. And free of viruses and other wee beasties. I promise (or, more to the point, my anti-virus vendor, in whose tool I trust, says it's clean).
OK, now, about the poems. I'm an IT industry analyst/consultant/pundit. Why do I also write poetry? Same reason I write this blog. For me. It's sort of like a diary, but not really. It's a big ol' ball of yarn that I continue to wind from found (in my head) materials; if you derive some value from it all, that’s gravy.
Or is that mixing metaphors?
Pointer to article:
This article mentions the concept of “federation” in a non-IdM context. As I noted in a previous blog entry (“imho Federation much broader than identity”), the term “federate” refers to any governance framework within which autonomous domains honor each other’s decisions and accept each other’s assertions, subject to their respective local policies. In that prior post, I outlined federation scenarios in the presentation, business-logic, data, identity/security, and management tiers. This article generally discusses federation in the data tier, which I characterize as involving data repositories in different domains federating to support domain-spanning information-integration scenarios. It all comes down to extending permission-based, arms-length sharing of data among autonomous domains. For an interesting application of federation in the data tier, visit www.epok.net, one of the first vendors federating data via XRI/XDI. Yeah, they’re an EII vendor (as are the vendors mentioned in this article), but one that leverages a federated IdM environment to create a federated data-sharing environment. Epok calls it “identity rights management” (not to be confused with digital rights management).
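Permission-based, arms-length sharing among autonomous domains can be sketched minimally. The domain names, attributes, and policy rules below are invented for illustration—this is not Epok's or any XRI/XDI vendor's actual API:

```python
# Hypothetical sketch of federated, permission-based data sharing:
# the responding domain answers a cross-domain query only as its own
# local policy permits. All names and policies are invented.

class Domain:
    def __init__(self, name, records, policy):
        self.name = name
        self.records = records   # subject -> {attribute: value}
        self.policy = policy     # requester -> set of shareable attributes

    def answer(self, requester, subject, attribute):
        """Honor the request only if local policy permits this requester."""
        if attribute not in self.policy.get(requester, set()):
            return None          # policy denies; the domain stays autonomous
        return self.records.get(subject, {}).get(attribute)

hr = Domain(
    "hr.example",
    {"jsmith": {"title": "analyst", "salary": 90000}},
    {"crm.example": {"title"}},  # CRM may see titles, never salaries
)

print(hr.answer("crm.example", "jsmith", "title"))   # analyst
print(hr.answer("crm.example", "jsmith", "salary"))  # None (denied)
```

The federation point is in who holds the policy: the answering domain, not the asker, which is what keeps the sharing "arms-length."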
Thursday, December 23, 2004
Pointer to article:
Of course, this is also a key feature of Windows Server 2003. Microsoft is simply signaling that “Longhorn” will continue its roadmap of providing role-fittable Windows versions. This is not a change of direction for Microsoft, by any means. Even within the principal editions of Windows Server 2003, Microsoft encourages enterprises to regard their deployments of the product in a quasi-commodity, role-oriented fashion. Specifically, administrators use a “Manage Your Server” wizard to define the specific server roles that a particular Windows Server 2003 deployment will address, such as domain controller, file server, Web server, and application server. For example, if a Windows Server 2003 deployment is set up solely as a stand-alone departmental file server, the wizard will prevent IIS 6.0, and all of its associated Web and Internet protocol interfaces, from being installed on that server.
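The role-gating behavior described above can be sketched abstractly: a server's declared roles determine which components may be installed at all. The role and component names below are illustrative stand-ins, not Microsoft's actual wizard logic:

```python
# Hypothetical sketch of role-fitted server configuration: a box set
# up only as a file server never gets the Web stack. Role/component
# names are invented for illustration.

ROLE_COMPONENTS = {
    "file server": {"smb", "dfs"},
    "web server": {"iis", "aspnet"},
    "domain controller": {"directory", "dns", "kerberos"},
}

def allowed_components(roles):
    """Union of components permitted by the server's declared roles."""
    allowed = set()
    for role in roles:
        allowed |= ROLE_COMPONENTS.get(role, set())
    return allowed

def can_install(component, roles):
    return component in allowed_components(roles)

print(can_install("iis", ["file server"]))                # False
print(can_install("iis", ["file server", "web server"]))  # True
```

The attack-surface benefit falls out of the model: components a role doesn't justify simply never land on the box.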
Microsoft’s “role-fitted Windows” approach is evidence of ongoing commoditization in the server OS and application platform markets. Commoditization of the server market—not Linux or the open-source community—is Microsoft’s biggest foe and poses the most fundamental threat to the long-term profitability and revenue growth of traditional software vendors. Commodity-like offerings, many of them based on open-source software, are starting to come on strong in many market niches, such as Web servers (Apache, for example) and mail servers (Sendmail, for example).
Microsoft’s response to the server commoditization challenge is savvy. In any market, companies compete on product price or features (or both), and Steve Ballmer has clearly stated that Microsoft will step up R&D in order to distinguish its offerings from rival software products in the various niches where it competes. Microsoft is also increasingly stressing the benefits of its unified enterprise software architecture and features such as Kerberos, Active Directory, and Enterprise UDDI Services.
But, more fundamentally, Microsoft is positioning its products as quasi-commodities (within a broad architectural framework) that are targeted aggressively at various market segments. The company’s recent server software releases are noticeably less monolithic and more function-limited in architecture than previous versions. In addition, Microsoft has deliberately excluded a lot of promised new functionality from the basic "Longhorn" release (as it did from the base Windows Server 2003 release). Instead, Microsoft will continue to introduce many new features at various times in the form of “layered” (and separately licensed) add-ons. By productizing various features separately from the basic server OS, Microsoft is giving itself the flexibility to position those products as agile competitors in market niches feeling the commoditization crunch.
For enterprises, these are good signs and good times. Commoditization promises to reduce software licensing costs, increase vendor competition and innovation, and encourage standards-based interoperability. Microsoft is smart enough to know that it's counterproductive to resist this unstoppable trend.
Wednesday, December 22, 2004
A FAST BROKEN
An egg an orange
are a fast broken, a dream
of day’s unfolding.
A note my wife left
reminds me to find someone
to repair our car.
Crumpled, it makes a
fine projectile, an arc traced
through indoor morning.
Sky flick a single flash of
blaze and black then back say God.
Pow come the flowers of our
natural friend without end.
A solid stroboscopic
hour of indeterminate
SEVERAL BILLION ONES
A fish-eye lens
would add unbearable curvature to
the continental expanse of the People's Republic of China.
An only child
downloading from landlocked Lanzhou
would add a brand new national streetmap
and triple the population
of playmates on-line.
In the essential
from whatever drops
of liquid sunshine
are vouchsafed their way
or, failing that, fix
off the glints of glare
that glance in off the
gray and grace their green
transcend us, risen up from
the thumb, giving gaze
to a face more essential.
God hurts for a tree
fresh of fruit when
no one's climbing.
Strips it bare.
Strands it in a field of fog.
to its surroundings.
And the cover of
PLEASE THINGS WORK
Be there when I'm broken, in
such a down state of utter
fingers can't feel their way back
home, eyes can't grasp that red and
green won't connect under such
shadows, that I bang my dim
and damned head down back behind:
Help me hold the mind I find
in diagrams and daemons.
MASS OF THE PLANET
Ponder the it in
us, the stone in which we stand
Gray or grey the dark
dots clump into larger lumps
of loose gravity.
The holes into which
ghosts inject their voices and
and holy may your reign and
design grace the earth.
Sustain us daily.
Forgive us as we others
and lead us from sin.
Above all are your
kingdom power and glory
here now forever.
Father in heaven.
Holy your name. Come your reign
and plan over all.
Bring us to the bread.
Forgive our due and spare us
the trials and devils.
Ever till the last
shall your kingdom power and
Father o father.
May your kingdom and laws spread
and conquer the world.
Give us subsistence
and protection from foes strange
To only you is
due tribute and praise for this
Pointer to article:
Durable, eye-friendly hardcopy is history’s ultimate transmission medium. It is the only format that will survive the never-ending softcopy format wars. Copying from one hardcopy format to others (hopefully, as many as possible) is the best way to ensure that information survives the many conflagrations, inundations, and infestations that will efface today as readily as they’ve blotted out our past. Any innovation that allows more information to jump the erasure gap is on the side of the angels. That’s why I raise a glass in toast to these Xerox researchers. Let’s make every photocopier smart enough to interpolate and reconstruct the hard-to-scan words on adjoining pages at the common book bind. Net-net, the bind/spine of the book is our friend, after all—it holds together large clumps of information in their intended sequence. This innovation—smarter word-image capture from angled/bent pages—lets us have the best of both worlds: preserving the book’s spine while ensuring fuller transmission of the book’s contents.
Apparently, people are reading this blog. Weird how word gets around.
In a recent blog entry (http://www.identityblog.com/2004/12/16.html#a66), Kim Cameron, identity visionary at Microsoft, has responded to my challenge to his "four laws of identity" (in the form of Kobielus' "four principles of identity," which I proposed in an earlier blog entry). Kim puts forth the following principal arguments:
First, he asserts that his "laws" are "explanations of why previous identity systems have failed where they failed and succeeded where they succeeded." If that's so, can he be more specific? Which previous identity systems? How is he defining the success or failure of such systems? How have privacy concerns--the primary focus of his "laws"--stymied acceptance of these identity systems? How can his "laws"--or, more to the point, normative imperatives to be implemented in identity management systems--make a difference?
Second, he construes me as arguing that "[people's] identit[ies] are owned and controlled by the [big, impersonal, third-party] authorities who make assertions about [us]." That's often true, but he's overlooking a critical point that I make: in various (actual or ideal) identity regimes, each of us may be an "authority" (hence owner and controller, to greater and lesser degrees) over our own identity information (though often, the predominant identity authorities are large, impersonal institutions that exercise the balance of ownership and control over our identities and what we can do with them). Hence, in such "self-authority" identity regimes, privacy protection and permission-based attribute sharing are critical features. Consequently, my principles absorb and encompass his more limited precepts, which focus only on "self-authority" identity regimes, in which the designated (i.e., identified, named) entity is the only owner/controller. Per my second and fourth principles:
---"Identity is issued, owned, asserted, vouched, interchanged, controlled, disclosed, and administered by one or more recognized authorities, which may be the designated entity itself (i.e., self-declaration) and/or various third parties with responsibility over various roles, transactions, or scenarios in which that entity participates (and who may provision or deprovision some aspect of the entity’s identity at their pleasure, will, or whim, depending on their power over him/her/it in various spheres).
--"Identity is control over the entity that it designates, and that control may reside to varying degrees in the designated entity, various recognized identity authorities, and/or various relying parties."
Finally, he construes me as "dismiss[ing] how the user is treated while we build the identity system." Once again, Kim needs to read my full blog entry. I specifically state the following:
--"Privacy protection is important. "Personal control over one’s own identity information is important. But they aren’t the only requirements that must be addressed in a full-blown identity service bus. They don’t address cases where there’s a legitimate need for anonymity, or for full disclosure (over a designated entity’s objections) of identity. Should illegitimate political regimes be able to penetrate the veil of anonymity in which freedom fighters cloak their righteous activities? By the same token, should suspected terrorists own those identity attributes pertaining to themselves that, disclosed to the proper, legitimate authorities in the nick of time, would prevent massive death and destruction?"
I wouldn't construe any of this as arguing that it's unimportant how the user fares in an identity regime. Control over our lives depends critically on how we architect our identity service bus.
Kim, please read my post more closely. You'll find that I was doing you a service: generalizing, extending, and elaborating on your initial proposal. Yours was a good start, but you didn't model the problem space as deeply as you should have.
Monday, December 20, 2004
Pointer to article:
By the end of this decade, RFID will be a much larger phenomenon than just supply-chain management. Every new computer, peripheral, and office machine sold (photocopier, coffee machine, etc.) will include an RFID tag for property identification, tracking, and control. In fact, every new consumer electronic device will include one as well. Every new premises security system will support the ability to scan and track devices based on their RFIDs. Every identity management (IdM) and access control system will manage RFIDs (and associated attributes, permissions, roles, etc.) in addition to user IDs (and the associated attributes, permissions, roles, etc.). Expect to see all other application platform, IdM, and security vendors embed RFID technology deeply in all of their products. IBM’s a bit ahead of the curve, but it’s clearly surfing on the right curve. RFID is the future of IdM. Before long, every employee ID or access card will embed an RFID tag, thereby converging the new and traditional worlds of IdM. Before long, every portable computer will sport both a native WiFi card and an RFID tag, so that network connectivity, authentication, and access are auto-provisioned for you, no matter where you roam.
Pointer to article:
It’s not just a security issue. It’s a stability, performance, and user peace-of-mind issue. I’ve had one computer (a Windows Millennium machine) rendered totally unusable by spyware, adware, and related gunk. I’ve had other computers (Windows XP) seriously gummed up by this junk. I’ve literally had nightmares about “Golden Casino” and other spyware-borne invaders taking over my world, locking me out of my homepage, bombarding me with pop-ups, and so forth. This crap needs to stop now. It’s getting to the point where my user experience of Windows is on a par with attending a wake. It’s as if I need rubber gloves to touch the computer. I break out in hives just thinking about what may be coming down the wire with the next page I browse or e-mail I retrieve. Using the computer has become far too stressful.
Friday, December 17, 2004
Pointer to article:
Yes. The superplatform vendors are taking over. At heart, a superplatform is the following:
* A single-vendor product portfolio that includes best-of-breed offerings in most critical enterprise software categories, including (especially) application servers, portals, middleware, database management systems, integrated development environments, enterprise information integration tools, and packaged business applications.
* A broad, diverse vendor ecosystem that extends, supplements, and supports all of these core software offerings.
* An established, entrenched customer base that has committed to the vendor’s (and its partners’) offerings across many or most enterprise software categories.
By those criteria, the emerging superplatform vendors, as this article states, are SAP, Oracle, Microsoft, and IBM. They differ in many ways, of course, but they are all motherships that will continue to strengthen their collective market clout, and also engage each other in ever more fierce battles in various product, regional, and enterprise markets.
SAP will attempt to leverage its predominance in packaged business apps by selling its NetWeaver platform offerings more deeply into its huge customer base.
Oracle/PeopleSoft/JD Edwards (now a single company) will continue to chip into SAP’s applications market lead while leveraging Oracle’s clout in databases, middleware, and tools.
Microsoft will attempt (with sporadic success) to push its way out of the SMB application niche up to the high-end enterprises in which SAP and Oracle are predominant (while, of course, continuing to push Windows further, wider, and deeper into all niches).
IBM, though lacking ERP and other apps necessary to go head-to-head against SAP and Oracle in that niche, will make strategic acquisitions in this area. It will also continue to leverage its considerable strengths in most platform software categories while using its predominant asset—its huge, worldwide consulting and systems integration business—to push its and its partners’ offerings into all manner of integration and application development projects.
Where does this leave various platform vendors (BEA, Fujitsu, HP, JBoss, Novell, Red Hat, Sun, Sybase, etc.) and middleware vendors (Sonic, TIBCO, etc.)? Clutching for survival in the second-tier of vendors who haven’t achieved “best-of-breed” status in any particular niche.
More to the point, the enterprise software industry is shaking out into two broad segments:
· Superplatform vendors (the acquirers and survivors): SAP, Oracle, Microsoft, and IBM
· Other vendors (who’ll either be acquired by superplatform vendors, will partner more deeply and symbiotically with the superplatform vendors, or will vanish from the face of the industry): more or less everybody else
This shakeout will be complete by the year 2010.
Thursday, December 16, 2004
This open-source holy war is growing a bit tiresome. Who really cares whether JBoss or Gluecode has the “purest” open-source bundle of goodies? These guys should be going after the J2EE vendors, and after Microsoft, not each other. But they’re ill-equipped to do so. As the article indicates, neither JBoss nor Gluecode currently fields an application and middleware product portfolio of comparable functionality or maturity to any of their bigger, better established “closed source” rivals. And they’re not likely to rise to that high bar for another 2-3 years, at the earliest. Obviously, these open-source vendors are hoping that those support revenues (the core of their business models) can fund their product development for the foreseeable future (to the tune of $10,000+ per year per customer for JBoss and $3,500+ per year per customer for Gluecode). Good luck. Caveat emptor: Open source software is the opposite of free. Much assembly required. Much handholding required. Yeah, the platform software is free, but that’s the least of it. Open source is pure, all right. A pure money-making machine for IT professional services. It’s a full-employment-in-perpetuity program for consultants and system integrators.
Wednesday, December 15, 2004
Pointer to article:
Yeah, they aim high, but they're missing the target by a mile. JBoss’ “Enterprise Middleware System”’s jBPM v2 is not a full-featured, competitive integration broker (aka orchestration engine, BPM engine, workflow engine, choreography engine--pick your synonym). It’s more of a placeholder for an integration strategy that JBoss is still cobbling together. jBPM v2 includes a core orchestration engine, but it lacks support for WS-BPEL, lacks a visual process designer, and lacks a portal-integration human workflow component. On the positive side, JBoss plans to add all of this missing functionality over the next 1-2 years. Also on the positive side, this is an open-source orchestration engine that will run in any Java Virtual Machine (including Apache Tomcat’s servlet engine or any third-party J2EE app server). It will doubtless find its way into many Java-oriented integration projects, but as a toolkit that requires additional componentry sourced and/or developed elsewhere. Contrary to the subhead in this article, it won’t compete directly with the much more mature, multifunctional, best-of-breed integration offerings from IBM, BEA, Oracle, and others. At least not for the next 2-3 years. And contrary to what JBoss’ VP says in the article, BEA, IBM, and other integration-broker vendors don’t provide “all or nothing” offerings. Like JBoss, they provide runtime orchestration engines, resource adapters, development tools, portals, and other components that can be deployed and integrated modularly in various configurations. Those other, established platform/middleware vendors simply provide more multifunctional, robust, mature, enterprise-grade offerings in all of these categories than does JBoss. And, as Richard M-H notes, they provide many more resource adapters than JBoss. Those are the sine qua non of all integration products. Stay tuned to JBoss for interesting announcements over the coming year, regarding where they’re going with jBPM.
They’ve got an interesting roadmap, but they’re just warming up the car. They haven’t really pulled out of the garage yet.
Pointer to article:
Wireless service is rapidly becoming a commodity market. Carriers are plunging down a precipitous slippery slope to zero margins on their core services. Average revenue per subscriber will continue to drop unless they continue to differentiate themselves through IP multimedia services, streaming video, and other value-added offerings. T-Mobile USA can’t afford to stand still and wait for bandwidth to become available or UMTS to mature. As T-Mobile idles, Sprint/Nextel, Cingular/AT&T Wireless, Verizon, and BellSouth will beat it to market with all the “cool” new broadband-based services. Wireless customers are notoriously fickle, and T-Mobile, lacking a broadband offering, could face swift, extensive customer defections. It would be a shame if the carrier isn’t able to migrate its WiFi hotspot customers to a broadband public wireless offering (be it EDGE, UMTS, or whatnot) in the next 1-2 years. T-Mobile’s future in the US market is at stake, and two years is too long for its customers to wait for it to make progress on its broadband roadmap.
Tuesday, December 14, 2004
"Scientists said today that they have devised a cell phone cover that will grow into a sunflower when thrown away."
What's next: the Chia-phone?
"Politically, it's devastating to take away the spreadsheets from the end user," [Mike Thoma, vice president of marketing at South San Francisco-based Actuate] said. "But on the other hand, you don't want to go to jail."
"Excel is the cocaine of finance," [Gerald Baltrusch, IT director at The Tile Shop LLC, a Plymouth, Minn.-based tile retailer] said. "Once you start using it to calculate your final numbers, you can't stop."
Apparently, input errors are criminal offenses in some jurisdictions.
Monday, December 13, 2004
The term "federation" has been circulating throughout the identity management (IdM) and network security markets for the past several years. It has acquired the status of a holy buzzword, promising flexible, standards-based, loosely coupled single sign-on (SSO) over Web services environments. A growing range of Web services standards--such as Security Assertion Markup Language (SAML), Liberty Alliance Identity Federation Framework (ID-FF), and WS-Security--have sprung up to realize the federation dream. In the process, the concept has usurped much of the industry mindspace formerly enjoyed by public key infrastructure (PKI) and trust infrastructure, though federation environments rely on all of that infrastructure.
However, like all buzzwords that become too popular, "federation" is starting to feel a bit shopworn.
For one thing, the industry has never converged on a consensus definition. One of the most common definitions is that federation "makes identities portable across autonomous domains." That definition has never felt right to me, because it's clearly contradicted by the dominant federation approach: interchange of identity assertions between identity providers (IDPs) and service providers (SPs)--such as under various SAML and Liberty ID-FF implementation profiles. SAML and Liberty do not make "identities portable across domains." Rather, they do the exact opposite. Identities stay put in the IDP domains in which they were registered. Identities are not synchronized or transferred to other domains. Instead, IDPs authenticate logins locally, and simply transfer vouchers for those logins (which SAML calls "authentication assertions") to relying (SP) domains. What identity federation is really about, then, is not to make identities portable across autonomous domains. Rather, identity federation simply makes assertions about identity-related events (e.g., logins) portable across autonomous domains. That's an important distinction. One of the great advantages of federated identity (over, say, meta-directories) is that domains can prevent disclosure of identity information to other domains.
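That distinction--the identity record stays home, only a signed voucher travels--can be sketched in a few lines. This is a toy illustration, not real SAML (which uses XML signatures and PKI trust rather than the shared HMAC key assumed here), but it shows the shape of the interaction: the IdP authenticates against its local directory and emits only an assertion about the login event; the SP verifies the signature and never sees the identity store.

```python
import hashlib
import hmac
import json
import time

# Stands in for the PKI/trust relationship between IdP and SP (assumption).
SHARED_KEY = b"idp-sp-federation-key"

def idp_issue_assertion(user_id: str, local_directory: dict) -> dict:
    """IdP side: verify the login locally, then assert only that it happened."""
    assert user_id in local_directory            # authentication stays local
    statement = {"subject": user_id, "event": "login", "issued": int(time.time())}
    payload = json.dumps(statement, sort_keys=True).encode()
    sig = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return {"statement": statement, "signature": sig}

def sp_accept_assertion(assertion: dict) -> bool:
    """SP side: trust the voucher if the signature checks out."""
    payload = json.dumps(assertion["statement"], sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, assertion["signature"])

directory = {"alice": {"dept": "finance"}}       # lives only in the IdP domain
voucher = idp_issue_assertion("alice", directory)
print(sp_accept_assertion(voucher))              # True: login event accepted
```

Note what the SP receives: an assertion about an event, not an identity record. The directory entry for "alice" never crosses the domain boundary, which is exactly the disclosure-control advantage over meta-directory synchronization.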
Another problem is that the industry has applied the concept of federation too narrowly. Federation can and should be applied to more than just identity management. And federation can and should leverage the core Web services middleware environment more thoroughly. A more general definition of federation would be as follows: “a governance framework within which autonomous domains honor each other’s decisions and accept each other’s assertions, subject to their respective local policies.”
Looked at that way, we can define federation models in the various tiers of a distributed fabric:
* Presentation tier: presentation servers in different domains federate domain-spanning user sessions
* Business-logic tier: integration brokers in different domains federate domain-spanning business processes
* Data tier: data repositories in different domains federate domain-spanning information-integration scenarios
* Identity/security tier: IdM and security systems in different domains federate domain-spanning account provisioning, SSO, role-based access control, and other scenarios
* Management tier: management systems in different domains federate domain-spanning fault, availability, reliability, performance, and QoS scenarios
Fundamentally, federation environments enable arms-length interoperability among domains through publish-subscribe relationships. Domains choose to publish assertions out to other domains, and to subscribe to assertions issued by other domains. The term “assertion” here should be interpreted more broadly than simply “SAML assertion” or “identity assertion.” An assertion, understood broadly, is any statement about some event, object, or state controlled, owned, or managed by the asserting domain (such as session state, in the presentation tier; orchestration process status, in the business-logic tier; create, update, and delete operations, in the data tier; user logins, in the identity/security tier; and system faults, in the management tier).
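The publish-subscribe pattern just described can be sketched generically. The class and method names below are my own invention (not drawn from WS-Notification or any other spec); the point is the governance model: publishing domains emit assertions to topics, and each subscribing domain applies its own local policy before honoring what it receives.

```python
from collections import defaultdict

class FederationBus:
    """Minimal sketch of topic-based assertion interchange between domains."""

    def __init__(self):
        self.subscribers = defaultdict(list)     # topic -> [(domain, policy)]

    def subscribe(self, topic, domain, policy):
        """policy: the subscriber's local rule for accepting an assertion."""
        self.subscribers[topic].append((domain, policy))

    def publish(self, topic, assertion):
        """Deliver to each subscriber whose local policy accepts the assertion."""
        accepted = []
        for domain, policy in self.subscribers[topic]:
            if policy(assertion):                # local policy decides
                accepted.append(domain)
        return accepted

bus = FederationBus()
# The identity/security tier subscribes to login events, but honors only
# assertions issued by a trusted partner domain.
bus.subscribe("identity/login", "sp.example", lambda a: a["issuer"] == "domain-a")
print(bus.publish("identity/login", {"issuer": "domain-a", "subject": "alice"}))
print(bus.publish("identity/login", {"issuer": "domain-x", "subject": "eve"}))
```

The same skeleton works for any tier: swap the topic for "process/completed" or "data/row-updated" and the assertion payload changes, but the publish-subscribe-with-local-policy structure does not.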
To enable federation as a broader, standards-based infrastructure, all of these tiers will need to leverage emerging WS-* for topic-based publish/subscribe (WS-Notification), event notification (WS-Eventing), and reliable messaging (WS-ReliableMessaging).
For starters, I’d like to recommend that the IdM industry put this matter on the roadmap for SAML 3.0 (or whatever it will be called), which, presumably, will converge SAML 2.0 (itself still in the works) and WS-Federation (which is in a state of suspended development at this time). SAML 3.0 (or WS-SAML, or WS-Federation, or whatever it may ultimately be called) should include an implementation profile that describes how SAML IDPs and SPs can leverage WS-Notification et al. within their “inter-site transfer service” (complementing the browser/artifact, browser/post, and other existing profiles).
Over the next 3-5 years, these WS-* standards will form the basis for the enterprise service bus (ESB), enabling robust interoperability traditionally associated with message-oriented middleware. Federation across all distributed tiers will rely critically on these standards. The IdM industry should take the lead, recognize that inevitable trend, and begin to implement them in their federation standards.
Pointer to article:
PeopleSoft and JD Edwards customers know now that the writing is on the wall. Though Oracle will continue to release enhancements to those product families for the next several years, and support them for a long time to come, the acquiring vendor almost always relegates the acquired products to legacy status. The principal exception is when the acquired products have strong functionality or brands that the acquirer can’t match. The Oracle brand is stellar and its apps are best-of-breed in many app categories. I sense that the PeopleSoft and JD Edwards brands will be allowed to decline gracefully over the next 5-10 years.
The principal loser in this deal is Microsoft. Over the next 3-5 years, its principal hope for competing with SAP and Oracle at the high end of the business application market lay in acquiring a major application vendor outright. Now that PeopleSoft is off the table, Microsoft will have to rely on its small-to-midsized business (SMB) applications, assembled through acquisitions (Great Plains, Solomon, Axapta, Navision) and also under internal development (“Project Green”). None of those products directly compete for the high-end market in which SAP and Oracle duke it out. And it’s highly unlikely that Microsoft will be able to make inroads by pushing those products into the high-end market. SAP/Oracle app customers (ERP, CRM, SCM, etc.) typically upgrade or migrate only once every 10 years or so, but, more often than not, stick with their primary business app vendor for much longer.
Another loser is IBM. It lacks packaged business apps to compete in the enterprise or SMB markets. It had formed a “white knight” partnership recently with PeopleSoft to co-market IBM platforms, middleware, and tools with PeopleSoft apps. Apparently, that deal is now effectively defunct. I’d be very surprised if Oracle doesn’t push its own platforms, middleware, and tools aggressively to PeopleSoft and JD Edwards users. IBM doesn’t directly compete with SAP/Oracle in the ERP market, and, unless it acquires Lawson or some other second-tier player, it won’t be a viable competitor anytime soon.
Friday, December 10, 2004
Kim Cameron is a powerfully probing thinker. He’s also a lot of fun to speak to. He doesn’t mince words and isn’t afraid to twist the tail of the “authority” (i.e., Microsoft) that has issued him an important piece of his “identity” (i.e., designated identity visionary, evangelist, and provocateur within Microsoft). Of course, even if Kim and his current employer went their separate ways for whatever reason, Kim’s core identity—visionary identity guru—would endure. His blog is a great medium for channeling Kim’s unvarnished thinking to us, direct from his head to ours, come what may.
I have to take issue with Kim’s recently proposed “four laws of identity,” from his blog. To do that, let me first propose Kobielus’ “four principles of identity,” and then use that to move into a specific critique of Kim’s approach. In doing so, I’m responding to Craig Burton’s call for more conceptual and lexical clarity in this discussion.
Kobielus’ four principles of identity recognize that identity is a resource for controlling entities, the transactions in which they may engage, and their vulnerabilities to various risks and liabilities. Kobielus’ four principles are:
* Identity is a uniquely denotative set of one or more attributes associated with a designated entity.
* Identity is issued, owned, asserted, vouched, interchanged, controlled, disclosed, and administered by one or more recognized authorities, which may be the designated entity itself (i.e., self-declaration) and/or various third parties with responsibility over various roles, transactions, or scenarios in which that entity participates (and who may provision or deprovision some aspect of the entity’s identity at their pleasure, will, or whim, depending on their power over him/her/it in various spheres).
* Identity is queried, retained, and relied upon by one or more other parties when engaging in various relationships or interactions, public or private, with the designated entity.
* Identity is control over the entity that it designates, and that control may reside to varying degrees in the designated entity, various recognized identity authorities, and/or various relying parties.
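The four principles above can be read as a rough data model. The sketch below is my own rendering (all field and method names are invented, not Kobielus' or Cameron's): an identity is an attribute set bound to a designated entity, with issuance, reliance, and control potentially distributed across the entity, its authorities, and its relying parties.

```python
from dataclasses import dataclass, field

@dataclass
class Identity:
    entity: str                                        # the designated entity
    attributes: dict = field(default_factory=dict)     # principle 1: denotative attributes
    authorities: set = field(default_factory=set)      # principle 2: who issues/administers
    relying_parties: set = field(default_factory=set)  # principle 3: who queries/relies

    def controllers(self):
        """Principle 4: control may be shared across all three roles."""
        return {self.entity} | self.authorities | self.relying_parties

# A driver's license: the holder cannot self-issue; the DMV holds the
# balance of control, illustrating a non-"self-authority" regime.
drivers_license = Identity(
    entity="alice",
    attributes={"license_no": "D123", "dob": "1970-01-01"},
    authorities={"state-dmv"},
    relying_parties={"car-rental-co"},
)
print(sorted(drivers_license.controllers()))
```

A "self-authority" regime of the kind Cameron's laws assume is just the special case where `authorities == {entity}`; the more general model also admits the government-issued and commercially-issued regimes discussed below.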
Cameron’s four laws of identity are all geared to maximizing the control wielded by the designated entity over its own identity. In other words, they are laws that ensure accountability while safeguarding privacy protection and ensuring permission-based attribute sharing. They assume that the identity’s “owner” is the designated entity, and that any authorities and relying parties are simply stewards or consumers of others’ identities. Cameron’s four laws are:
* Technical identity systems must only reveal information identifying a user with the user's consent.
* The solution which discloses the least identifying information is the most stable, long-term solution.
* Technical identity systems must be designed so the disclosure of identifying information is limited to parties having a necessary and justifiable place in a given identity relationship.
* A universal identity system must support both "omnidirectional" identifiers for use by public entities and "unidirectional" identifiers for use by private entities, thus facilitating discovery while preventing unnecessary release of correlation handles.
Contrary to what he claims, Cameron’s “laws” are not “a set of ‘objective’ dynamics that will constrain the definition of an identity system capable of being widely enough accepted that it can enable distributed computing on a universal scale.” If they were value-neutral positive theorems for how identity systems actually behave, or are actually accepted (or rejected) in practice, they would have to account for the persistence of stable, legitimate identity regimes in which entities don’t control their identities and/or don’t control the circumstances in which others disclose, access, and rely on their identities. For example, where the issuing authorities and/or relying parties are government agencies with monopoly jurisdiction over some critical identity/credential (e.g., driver’s license, social security number, passport), the designated entity has no choice but to accept that identity regime, regardless of whether it conforms to Cameron’s “laws.” Where the authorities and relying parties are commercial organizations with considerable market share and clout (e.g., credit card companies), one also must largely accept their rules rather than attempt to “buck the system” (for example, by panhandling or living up in the mountains, eschewing money or credit cards altogether). Like it or not, in these real-world instances some major portions of your identity are granted to you by an overpowering authority, who may just as easily take them away from you.
At heart, Cameron’s “laws” are merely ideological, normative precepts with a transparent agenda and a limited, though laudable, aim. Privacy protection is important. Personal control over one’s own identity information is important. But they aren’t the only requirements that must be addressed in a full-blown identity service bus. They don’t address cases where there’s a legitimate need for anonymity, or for full disclosure (over a designated entity’s objections) of identity. Should illegitimate political regimes be able to penetrate the veil of anonymity in which freedom fighters cloak their righteous activities? By the same token, should suspected terrorists own those identity attributes pertaining to themselves that, disclosed to the proper, legitimate authorities in the nick of time, would prevent massive death and destruction?
Kim: This is 2004, not 1994. Put aside the cypherpunk assumptions of yesteryear. Personal empowerment and privacy are critically important, where identity is concerned. But your “laws” are at odds with the real, legislated, post-9/11 laws in this country and elsewhere. There are overarching authorities who are rendering your hoped-for privacy-friendly identity regime politically infeasible.
Please rethink and recast them in that broader context.
Pointer to article:
I'm not going to critique the objectives, scope, methodology, metrics, and other aspects of this study. I'm just going to point out that computers are just a tool, and sometimes (often) they get in the way of critical thinking (which is a fundamental skill for learning anything). I don't know about you, but I find the glut of files, e-mails, blogs, applications, navigation paradigms, etc. to be maddening at times, especially when it's updated in real-time, continuously, like a firehose or the frickin' 24-hour news channels. Don't you just want, sometimes, to turn it all off and listen to some great music? Or walk and talk and socialize with friends and just get out of the house? I know I do. I don't even compose my poems on my computer. It's too distracting. I write them in my head while exercising, walking, shopping, or listening to boring tech presentations. I only transcribe them to Word when they're pretty much all composed. Only when my thoughts are composed and crystalline and fit for presentation.
Note that the people who learn most while using computers are those who write e-mails the most (at least, that's what the article says the study found). That sounds about right. One of the best ways to truly nail down, crystallize, and reinforce your learning is to regurgitate it back in writing. Back in my school days, I loved essay questions on tests. Those were the only questions that made me feel like I was truly demonstrating the knowledge I had gained. And that's because I could write it back--truly compose my thoughts--in a way that I could stare at and say, yeah, I got that down solid. That was the content that I tended to retain longest.
On a tangent, I saw a story on TV a few days ago about a study regarding what "truly makes people happy" (i.e., people were asked to report, throughout their respective days, what they did and how they felt about it). The study didn't ask people to report what makes them happy in life in general; that kind of question generally elicits the usual "spend time with the kids" response that respondents feel the questioner wants to hear. What people actually reported, as they logged their daily activities, was that they most enjoyed sex (duh!), socializing, and reading. One of the activities that made them least happy was using computers. Yes, we all know that computers are often a pain in the rear end. This finding is no surprise. Few things cause me more stress than computers, especially when they screw up and I need to fix them myself (and find a workaround to continue with my computer-intensive work and home life).
Damn computers. You know what makes me happiest with computers? Writing e-mails and writing blog posts. Yeah, writing. Every message or post an essay, composed and crystalline. Makes me happy as hell. I think I'm learning something in the process.
And I hope you are too.
Has anybody noticed that the application platform market is melting down?
I mean that on a couple of levels, one of which is what I said in my recent Network World column "SOA and the death of platforms" (http://www.nwfusion.com/columnists/2004/090604kobielus.html). To summarize, service-oriented architecture (the latest paradigm) refers to techniques for designing shareable, reusable, interoperable Web services. As SOA principles take hold everywhere, they're dissolving the formerly tidy underpinnings of yesterday's computing environments, making concepts such as "platform," "application," and "language" irrelevant in the world of Web services. When all platforms share a common environment (environments that natively implement the growing WS-* stack of standards and specs) for describing, publishing and invoking services, the notion of self-contained platforms disintegrates in favor of SOA, which is essentially a platformless service cosmos.
But SOA and Web services is just one part of the platform-meltdown equation. The platform vendors are also flailing about, not able to achieve the sort of momentum that would place their environment--be it Windows, J2EE, or Linux/Apache/MySQL/PHP-Perl-Python (LAMP)--head and shoulders above the rest. All of these platforms are dissolving into a pool of fear, uncertainty, and doubt (FUD) that has enterprise customers scratching their heads, seeing no clear "slam dunk" platform for their current and evolving needs.
Look at Microsoft's tortuous path from .NET (aka the Win2K and Win2K+3 generations) to "Longhorn" and beyond. They've decomposed the new generation into a bunch of incremental releases with various release dates, many of them quite indefinite, strung out over several years, with the strong likelihood of unanticipated delays in the releases to which they commit. Nobody has any confidence that Microsoft will ship any piece of its "Longhorn" roadmap on time, by which I'm defining "on time" as within Software Assurance timeframes that would entitle customers to an upgrade (for which many have prepaid, with no guarantee of delivery). Nobody has any confidence that the resulting "Longhorn" generation platform components, apps, or tools will enable tight security, though no one doubts that Microsoft's working extremely hard on all things security-related.
The rival J2EE camp is slogging through its own field of FUD. One of the biggest issues is whether the J2EE “standard” (actually, an evolving assemblage of standards and specifications developed under the Java Community Process by the collective Java vendor community) will survive in the face of “rebel” Java-based frameworks that offer simpler development/runtime approaches than the full J2EE 1.3 or 1.4 stacks. There are many rebel Java frameworks, addressing simplified programming in the presentation tier (e.g., Struts, Tapestry, Velocity), business logic tier (e.g., Aspect-Oriented Programming, Inversion of Control), and data tier (e.g., Hibernate). Much of the rebellion has to do with the fact that the most basic Java programming model—servlets—is good enough for most developers. The fundamental fault line in the Java community is between those developers who favor development with POJOs (plain old Java objects) and those who stress what I call “MOJO” (massive obnoxious J2EE overhead). That’s a spectrum from simplicity to complexity, from lightweight to heavyweight, from loosey-goosey to strict-constructionist Java programming. Though J2EE 1.4 is out and J2EE 1.5 is in the works, thanks to diligent work within the JCP, nobody has any confidence that most J2EE app platform vendors will support the full evolving “standard” in future releases. In the trenches of real-world development projects and practices, the Java community is busily deconstructing the grand J2EE edifice it took several years to build up. It’s an exciting trend, but it shows the bloom is off the J2EE rose, perhaps for good. Enterprises that have committed to J2EE are sweating profusely, wondering whether the fabled cross-platform framework is a thing of the past.
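To illustrate the POJO side of that fault line, here's a minimal sketch (the class and its fields are hypothetical, for illustration only): a plain Java class with no framework interfaces, no home/remote-interface pairs, and no deployment descriptors, yet perfectly capable of carrying business logic.

```java
// A POJO: a plain Java class with state and behavior, carrying
// business logic without implementing any EJB interfaces or
// requiring deployment descriptors. (Class and fields are
// hypothetical, for illustration only.)
public class Invoice {
    private final double amount;   // pre-tax amount
    private final double taxRate;  // e.g., 0.05 for 5%

    public Invoice(double amount, double taxRate) {
        this.amount = amount;
        this.taxRate = taxRate;
    }

    // Business logic lives right here -- no container required.
    public double total() {
        return amount * (1.0 + taxRate);
    }
}
```

An EJB 2.x rendering of the same logic would need a remote interface, a home interface, an ejbCreate method, and an ejb-jar.xml descriptor before the container would even instantiate it. That contrast is the whole point of the rebellion: the lightweight frameworks let classes like this participate in persistence and transactions as-is.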
The LAMP platform vendor community is still in its heyday—in other words, vendors such as Red Hat, Novell, and JBoss can still claim considerable momentum in new customer wins. LAMP isn’t a platform in the single-vendor governance model (a la Windows) or single-community governance model (a la J2EE). Rather, LAMP refers loosely to application environments built on Linux and other open-source components (including but not limited to the “AMP” components in its name).
One of the biggest FUD issues with LAMP is vendors’ inability to assure customer indemnification against damages that may result from IP infringements associated with various open-source components (though Novell and others claim, unconvincingly, that their crack legal eagles will be able to come to customers’ defense if such suits are filed). Closed-source vendors, by contrast, have always provided indemnification as a standard feature of their licenses, and, before the rise of open source, customers took this legalese for granted. But with LAMP, it’s like buying a house without title insurance—you pay hundreds of thousands of dollars without any assurance that a long-lost titleholder won't some day be able to evict you from your property. Who can tolerate that degree of insecurity around their bread and butter?
Another FUD issue is the potential for nasty technical finger-pointing when the inevitable security issues strike LAMP platforms. No two vendors’ LAMP platforms are alike. Each vendor (indeed, each user) can and does assemble its environment from a diverse collection of open-source components from diverse open-source communities. Any security issue or interoperability glitch that impacts diverse open-source components will be a bear to fix, considering that the piece-parts of the problem are under no single governance. Contrast this to Windows’ unified governance model (a single vendor owns it all) or even J2EE (app platform vendors own their various implementations of the common framework, and also participate in a more-or-less unified community to work out common issues).
All of these platforms—Windows, J2EE, and LAMP—will survive. All of them will continue to evolve. But none of them will overcome the flood of FUD that cramps their respective futures.
Thursday, December 09, 2004
Age old desires crowd the sweet
rebirthing we’ve been led to
anticipate. A crack of
vitamin light suppresses
the urge to summarize what
clearly is still unspooling.
Days, careers, an ignition
promptly turns itself over
and cycles through another
blue and glorious. Memories
charge the atmosphere. Adult
children show. We stay. The late
hours accumulate. We feel
another morning story.
My many layers
keep the season at
bay and that suits me
fine. My survivor
status in this and
all winters past and
possible. My taste
for snows soon to light
and the tedium
that always falls my
way. I lunge for the
shovel and my loved
ones know to leave me
to these elements.
a religious man
i love our lemon
green ancient forebear
two eyes fixed on
phantoms, the rumored
earth, hands ready to
pull her tender frame
reaches in a
branching bouquet of
one of which, or so
the story goes, was
take families whole
into a supple
recess and mothers
needn't worry that
could ever attract
he of the curvy
claws and the piercing
Pointer to article:
This is an important pair of competing announcements: BEA WebLogic Server 9.0 and Oracle Application Server 10g Release 2. What’s most important is that both of these application server vendors have begun to push enterprise service bus (ESB) functionality into their platforms. At heart, ESB functionality brings together the legacy world of vendor-proprietary message-oriented middleware (MOM) with the growing stack of vendor-independent WS-* standards. Today’s ESB products provide a range of features that pave the way for the eventual obsolescence of proprietary MOMs in favor of purely WS-based reliable messaging, event notification, and pub/sub. By implementing WS-ReliableMessaging (WS-RM), both BEA and Oracle jump ahead of such application platform rivals as Microsoft and IBM. All platform vendors will need to implement WS-RM in their architectures in order to enable reliable, guaranteed, once-only delivery of SOAP messages over Web services environments. By the end of this decade, ESB functionality will be common-denominator functionality implemented on all platforms, leveraging WS-RM, WS-Notification, WS-Eventing, and other emerging WS-* standards. By embedding this standards-based, robust integration functionality, application server vendors will make it difficult for today’s middleware pure-plays—such as TIBCO, Sonic Software, and Fiorano Software—to demonstrate value added over and above what’s available at no extra charge in the platforms.
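For the curious, the heart of WS-RM is a SOAP header block that numbers each message within a sequence so the receiver can detect gaps and duplicates. Here's a simplified sketch (element names follow the spec's Sequence model, but the namespace URI and sequence identifier are illustrative placeholders, since the draft revisions of that era each carried their own namespaces):

```xml
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
               xmlns:wsrm="http://example.org/ws/rm">
  <soap:Header>
    <!-- Sequence header: lets the receiver acknowledge, reorder,
         and de-duplicate messages for once-only delivery -->
    <wsrm:Sequence>
      <wsrm:Identifier>urn:example:sequence:1234</wsrm:Identifier>
      <wsrm:MessageNumber>3</wsrm:MessageNumber>
    </wsrm:Sequence>
  </soap:Header>
  <soap:Body>
    <!-- application payload goes here -->
  </soap:Body>
</soap:Envelope>
```

The receiver periodically returns an acknowledgement header listing the message numbers it has received, and the sender retransmits any gaps—the same acknowledge-and-retransmit pattern the proprietary MOMs have long implemented internally, now expressed in vendor-neutral SOAP headers.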
Pointer to article:
Windows, like all platforms, is having its profitability stripped away by commoditization, open-source alternatives, and customer reluctance to participate in forced migrations. Microsoft acknowledges that inexorable trend by releasing this stripped-down Windows version at a lower price in these Asian markets than elsewhere—flexible global pricing (a slippery slope if there ever was one). The price of Windows and every other OS and platform component will continue to decline toward absolute zero. Windows will see its revenue oxygen sucked away like a planetary system losing its sun. Call it the cruel Kelvin endgame.
As regards this functionally constrained Windows desktop version for certain Asian countries plus Russia (which, of course, is also half-Asian), I don’t see much hope, at least in terms of its stimulating non-PC users to get with the program (yeah, yeah, pun intended). First of all, pretty much everybody in those national markets who wants or needs to use a PC already has one (those markets are fully part of the modern age, populated by the same range of expert-to-clueless users as the US, Europe, or Japan). Second, non-PC users will usually go first (and perhaps exclusively) to handheld/mobile devices (for which, of course, Microsoft has yet another Windows version).
“Developing countries” have fully developed users in much the same ratio as other nations. Don’t condescend to them by pretending otherwise.
Wednesday, December 08, 2004
Pointer to article:
This is no surprise at all, and it doesn’t mark Oracle’s “entrance” into the BI market. Oracle’s been a strong player in the ERP, BI, analytics, data warehousing, corporate performance management, and data integration markets for some time. It only makes sense for Oracle to lay down a roadmap for bringing all of that data-intensive/processor-intensive functionality to its new grid-enabled 10g platform generation. It would be good if trade-press reporters/editors did a smidgen of background research before they publish stories/headlines such as this.
Pointer to article: http://www.computerworld.com/databasetopics/data/datacenter/story/0,10801,98048,00.html
This is no surprise. Actually, it’s a bit on the conservative side, as estimates go. If autonomic computing succeeds in its objectives, we should see IT operations labor-force reductions in excess of 90 percent over the next (human) generation. After all, isn’t “lights-out” automation of data centers what enterprises and service providers have been striving for since the dawn of computing?
Don’t think of it as “imperiling” jobs. Rather, think of it as just a long-term, inexorable trend toward outsourcing tedious human functions to machines. Maybe this is the utopian in me speaking, but I don’t think human beings (net-net) are disadvantaged in their careers by this trend. People who want to make their careers in IT (long term) will migrate away from operational functions to the growing, creative jobs, such as product management, application development, and e-commerce.
Oh, and on a slight tangent, it seems to me that the very notion of a data “center” is becoming a bit quaint, due to the growth of grid computing. In grid environments (think of them as the next generation of dynamic clustering), distributed CPU, storage, and other hardware and software resources get served from everywhere in a virtual application infrastructure (from servers, desktops, and other nodes), with less “big iron” anchoring the enterprise mothership. No center to the cosmos, but, rather, a much more capacious federation of galaxies serving light and matter to hungry life forms who know how to harvest it all.
Tuesday, December 07, 2004
Pointer to article: http://www.crn.com/sections/breakingnews/dailyarchives.jhtml?articleId=54800190
Clearly, Microsoft is having more trouble than it expected cutting the support umbilical for its three-generation-ago platform. For a sizeable group of Microsoft customers, NT 4.0 is stable and good enough for their core file/print-server deployments. NT 4.0 users are loath to upgrade for additional features (in Windows 2000 and 2003 servers) that they just don't need.
Microsoft customers are pushing back in a major way on forced migrations, and Microsoft is starting to realize that their loyalty cannot be taken for granted. Indeed, the commoditization of platforms--thanks in large part to the Linux/open source challenge--is seriously diminishing Microsoft's ability to lock customers into migrations to future Windows platforms. Also, the universal spread of service-oriented architecture (SOA) and Web services is reducing the importance of the underlying application platforms (such as Windows/.NET, J2EE, and Linux/Apache/MySQL/PHP).
Microsoft shouldn't be nickel-and-diming its longtime customers on such issues as NT 4.0 support. It has seriously strained customers' loyalty with its ill-considered Software Assurance program, its increasingly cloudy "Longhorn" roadmap, and its legendary security vulnerabilities. IMHO, the vendor should extend its flat-fee-based custom-support program for NT 4.0 indefinitely, not just through December 2006. It would earn customer-goodwill brownie points from such a move without exposing itself to exorbitant long-term financial burdens. After all, the NT 4.0 customer base is diminishing gradually due to normal attrition and Microsoft's copious incentives.
Besides, Microsoft needs its current customers' NT 4.0 deployments to prove out the value proposition (legacy consolidation/coexistence/codeployment) of its new Virtual Server 2005 product. Virtualization technologies such as this--and SOA, come to think of it--are giving older platforms a new lease on life. Any older software that can be encapsulated with Web services interfaces (WSDL, SOAP, etc.) or run in a virtualization platform can coexist and interoperate with the brave new world of Linux, "Longhorn," and any other new app platforms that the industry develops over the foreseeable future.
Monday, December 06, 2004
I’m a well-known, respected, seasoned industry analyst, consultant, author, and pundit who is currently between jobs. If you’re interested in hiring me, please call me at 703-924-6224 or e-mail me at firstname.lastname@example.org. Resume provided on demand.
My coverage areas (in which I’ve published books, research reports, feature articles, columns, and so forth) include operating systems, application servers, enterprise application integration, integration brokers, middleware, Web services, management tools, orchestration, business process management, service-oriented architecture, application development platforms and tools, directories, federated identity and access management, application and network security, collaboration, messaging, mobile/wireless, performance acceleration, grid services, anti-spam, and XML. Bibliography provided on demand.
I have another persistent presence on the ‘Net, on Network World’s website. I’ve been a contributing editor there since 1987. You can look up my latest thoughts at http://www.nwfusion.com/columnists/kobielus.html. Brilliant insights provided on demand. Poems too.
I look forward to hearing from you. Oh…and keep checking this blog for a steady stream of my new thinking. I try to stir things up in your heads and hearts. Sometimes you’ll find me meandering through all manner of topics, IT and otherwise. I’ll try not to bore you or box you into industry “conventional wisdom.”
I just need an outlet. Indulge me please.