All:
Pointer to article:
http://www.wired.com/news/wireless/0,1382,66437,00.html
Kobielus kommentary:
One of these days we’re going to have to stop calling them “area codes.” Increasingly, they’re not tied to an area, or only loosely to caller location. In some ways, these three-digit calling prefixes are the telephonic equivalent of surnames. Back in the (medieval) day, when most European cultures began to postpend surnames to given names, the surnames often had a direct link to personal attributes of the (male) identities to which they were initially attached: location (“Atwater”), patrilineage (“Johnson”), occupation (“Baker”), and so forth. But these personal attributes of particular ancestors grew less and less pertinent to the identities of their children’s children’s children’s children, who had adopted these surnames as group identities. For example, my surname apparently is archaic southern Polish for “basketweaver,” but I’m useless at arts and crafts.
As for “area codes,” why not simply “exchange codes” (as in “national calling exchanges,” which may or may not be correlated with location or carrier or with the exchanges of any other telecommunication service you’re subscribing to)? An exchange code is becoming an empty group identity: having no fixed set of group attributes that apply to all instances of that group. Just a prepended “surnumber” that points back to an increasingly distant matrilineage: back to some long-lost common ancestor named “Ma Bell.”
Jim
Monday, January 31, 2005
fyi Bloggies Recognizes New Trends in Format
All:
Pointer to article:
http://www.washingtonpost.com/wp-dyn/articles/A46429-2005Jan29.html?referrer=email
Kobielus commentary:
The most noteworthy item in this article is AOL’s discontinuance of access to the venerable Usenet newsgroups. Of course, Usenet has been effectively a dead medium for years now, obsoleted by the Web, blogs, IM/chat, and threaded e-mail discussion groups. Historians should note that Usenet was the first Internet collaboration space afflicted by spam. To some degree, Usenet’s demise was precipitated by the overwhelming volume of spam deposited on any group. From the mid-90s onward, it was Uselessnet, as far as I was concerned. Media have their lifecycles, and deathcycles.
Jim
Friday, January 28, 2005
lol Dick Cheney, Dressing Down
All:
Pointer to picture:
http://www.washingtonpost.com/wp-dyn/articles/A43247-2005Jan27.html
I don't usually comment on anybody's attire, man or woman. And Cheney's gray parka and toque would go unremarked in any other context. But look at him there at Auschwitz, surrounded by a sea of black. Doesn't the White House have a chief of protocol who advises on how the vice president should dress for such a somber occasion? Black black black.
Jim
fyi Spammer Directory Harvest Attacks Hammer Enterprises
All:
Pointer to article:
http://www.crn.com/nl/crndirect/showArticle.jhtml?articleId=59100169
Kobielus kommentary:
Talk about identity aggregation! Every mail server’s directory is a juicy identity-aggregation point there for the harvesting. And the spammers are plucking this low-hanging fruit through brute-force attacks everywhere, all the time. Invalid address lookups—and the concomitant mail-server CPU hits, mail-queue clogs, and mail-delivery delays--are the overhead we have to endure from the spammer’s relentless penetration testing. But, of course, mail server directories aren’t centralized identity aggregators—the inherently decentralized, federated nature of the worldwide SMTP e-mail utility has scattered these sweet little identity honeypots all over creation.
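To make the brute-force dynamic concrete, here's a minimal sketch (in Python, with illustrative thresholds and names of my own invention, not any actual mail gateway's logic) of the kind of countermeasure that flags a directory harvest attack: count the invalid-recipient lookups coming from each source host within a sliding window and throttle the hosts that probe too many nonexistent addresses.

from collections import defaultdict

# Illustrative thresholds; a real mail gateway would tune these per site.
INVALID_LOOKUP_LIMIT = 20   # invalid RCPT TO addresses tolerated per window
WINDOW_SECONDS = 600        # length of the sliding window, in seconds

class HarvestDetector:
    """Crude directory-harvest-attack detector."""

    def __init__(self, valid_addresses):
        self.valid = set(valid_addresses)
        self.invalid_hits = defaultdict(list)  # source IP -> timestamps of bad lookups

    def record_lookup(self, source_ip, rcpt_addr, now):
        """Record one address lookup; return True if source_ip now looks
        like a harvester and should be throttled or tarpitted."""
        if rcpt_addr in self.valid:
            return False
        recent = [t for t in self.invalid_hits[source_ip] if now - t <= WINDOW_SECONDS]
        recent.append(now)
        self.invalid_hits[source_ip] = recent
        return len(recent) > INVALID_LOOKUP_LIMIT

The point isn't the particular numbers; it's that the only signal the victim domain gets is a stream of failed lookups, which is exactly the CPU-and-queue overhead described above.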
Yes, I said “federated.” Internet e-mail has been a federated messaging environment for quite some time: that’s been key to its success. I define “federated messaging” as “messaging domains that establish trust relationships under which they can choose to accept each other’s messaging assertions and honor each other’s messaging decisions--or reject them--subject to local policies.” Notice the parallel with my discussion of “federated identity” in the previous blog posting. Federated messaging depends on a constrained variety of federated identity—in this case, each mail domain being able to register, vouch for, and manage its own mail identities (e.g., username@maildomain1.com).
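Here's an equally minimal sketch of that local-policy decision, just to illustrate the definition; the domain names and policy fields are assumptions for the example, not any mail server's actual configuration.

# Hypothetical local policy: which peer mail domains we choose to federate with.
LOCAL_POLICY = {
    "trusted_domains": {"maildomain1.com", "partner.example"},
}

def accept_message(sender_address, policy=LOCAL_POLICY):
    """Accept or reject a message based on whether we federate with the
    sender's mail domain, i.e., whether we choose to honor that domain's
    assertion about its own user."""
    if "@" not in sender_address:
        return False
    user, domain = sender_address.rsplit("@", 1)
    if not user:
        return False
    return domain.lower() in policy["trusted_domains"]

# accept_message("username@maildomain1.com") -> True
# accept_message("harvester@unknown.example") -> False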
Messaging federation, it seems, hasn’t deterred identity thieves in their efforts to grab identities scattered all over kingdom come. Instead, it’s made them more ingenious, creating a widespread directory-harvest-attack infrastructure. Lots of machines throughout the cybershmear are trained to raid the many mail-directory honeypots for unprotected spammunition.
Can a federated IdM infrastructure withstand the inevitable directory harvest attacks? What form will they take? How can we nip them in the bud?
Jim
Thursday, January 27, 2005
imho The laws of identity governance
All:
Identities must have their sovereignty safeguarded within a conducive governance structure. Per one of my previous blog postings, the core identity-governance normative/prescriptive principles are:
• Each person is the only legitimate owner of their identity, all manifestations of that identity, and all associated identity attributes.
• Each person must be able to exert full control over all instances, attributes, disclosure, and management of their own identity.
• Identity environments must be architected to enable each person to exert that control, while facilitating identity-based security functions (authentication, access control, etc.), ensuring permission-based identity-attribute sharing, and safeguarding personal privacy.
• Where each person’s identity information is concerned, any other party in the identity environment is a registrar, steward, or consumer (not an owner) of such information.
• Other parties in the identity chain must ensure that their policies, procedures, activities, and operations don’t violate or compromise people’s control over their own identity information.
Extending the thoughts from my previous “imho” posting, I would boil down all of Cameron's proposed identity “laws” into three prescriptive rules that might be referred to collectively as the “laws of identity governance” (and which support the core identity-governance principles just outlined):
• Law of identity federation: Domains must be able to establish trust relationships under which they can choose to accept each other’s identity assertions and honor each other’s identity decisions--or reject them--subject to local policies.
• Law of identity assurance: Entities must be able to unambiguously ascertain, resolve, and verify each other’s identities, and reserve the right to refrain from or repudiate interactions in which such assurance is lacking.
• Law of identity self-empowerment: Humans must be able to self-assert their identities, and reveal or conceal as much or as little of their identity as they wish, at any time, for any reason, from any other party, for any duration, and also to unilaterally defederate from any domain that deliberately or inadvertently compromises or violates these rights.
Implemented within a common governance structure, these laws would safeguard privacy while ensuring interoperability and trust in an environment forever fragmented into diverse identity domains. The right to an autonomous personal domain—each one in charge of his or her core identity—must be guaranteed to all humans, and the right to federate that idio-domain—or not federate it—must never be infringed.
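As a thought experiment, and not a description of any real IdM product, the three laws above can be read as a simple decision procedure a relying domain runs before honoring a cross-domain identity assertion. The class and field names in this Python sketch are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class IdentityAssertion:
    subject: str          # the identity being asserted, e.g. "alice"
    issuing_domain: str   # the domain vouching for that identity
    verified: bool        # did identity-assurance checks (signatures, proofing) pass?

@dataclass
class RelyingDomain:
    name: str
    federated_with: set = field(default_factory=set)       # law of identity federation
    defederated_by_user: set = field(default_factory=set)  # law of identity self-empowerment

    def honor(self, assertion: IdentityAssertion) -> bool:
        # Federation: accept assertions only from domains we have chosen to trust.
        if assertion.issuing_domain not in self.federated_with:
            return False
        # Self-empowerment: a user who has defederated from us is off-limits.
        if assertion.subject in self.defederated_by_user:
            return False
        # Assurance: refrain from interactions whose identity can't be verified.
        return assertion.verified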
Notice how I'm not calling for a "universal identity system" (which could be misconstrued as a Passport-emerges-from-its-crypt identity-aggregation scheme). I'm also not calling for, and the laws don't imply, a universal identifier. Or universal credential. Or universal SSO. I'm not actually calling for anything universal here. I'm just calling for trust, interoperability, and privacy protection mechanisms to be built into any multi-domain identity environment, no matter how narrow the domain structure (two domains interoperating, or millions of domains interoperating). These "laws of identity governance" should be basic ground rules for any distributed identity environment, simple or complex.
Essentially, I've mapped the "laws of identity" down to these minimal three (I based this on my prior formulations, on Cameron's formulations, and on everybody's blog discussions around this topic). Applying William of Ockham’s Razor always reduces unwieldy lists down to a tidy trinity.
Jim
fyi Study Shows Search Users Ignorant About Paid Results
All:
Pointer to article:
http://www.ecommercetimes.com/story/40012.html
Kobielus kommentary:
People are so addicted to free content, including free (to them) search-engine results, that they often fail to notice that somebody has paid for that content. And that somebody (merchants, legit and otherwise, and the search engines that are “common carriers” for such results) obviously has a vested interest in downplaying the bias inherent in this free-to-the-browser sponsored content.
But this is a minor issue. The search engines specialize in scouring the Internet for relevant material, sponsored and otherwise. Users/browsers, if they’re so inclined and diligent, can easily dig further for non-sponsored content surfaced through the same keywords that delivered the sponsored stuff. The lazy/naive users will be hooked by sponsored content. Their hipper, more diligent counterparts will surface both the biased and the impartial stuff, and weight/handicap each accordingly in their minds.
In a very real sense, all content is “paid.” Some of it’s paid promotional material, but the rest is “paid” for by the author (from their own pockets, with their own blood sweat and tears) and put out on the cybersphere at their own expense (of time, if nothing else). And everything is biased. You just need to be alert to the author/sponsor’s particular bias, perspective, or agenda. Even "unbiased" analyst ponderings are geared to selling something: selling the (licensed document subscription, dialogue, consulting, webinar, conference, etc.) services of that analyst (individually and/or the firm he/she works for) to the IT (enterprise and/or vendor) community. When you read an analyst report, you should always wonder who's paying the analyst to produce that report (it needn't be direct tit-for-tat money-for-sponsored-report payment; more often, it's simply indirect "big vendor will not renew license for analyst subscription service unless analyst says more nice things about vendor" pressure). You definitely need to be hip to that dynamic operating throughout the IT research/analyst market segment.
Sponsorship is a reality in the online whitepaper ecosystem. But this sort of bias is not fundamentally different from what you'll find in any other publishing medium, online or otherwise. Somebody has to find it worth their while to produce such stuff. Actually, I think Google does a good job of surfacing the “sponsored links” high up in their search results. And I think a lot of sponsored research is top-notch. Smart people were paid decent money to produce something of value. The good sponsored research is in fact research—it should tell you something you didn’t already know. The transparently self-serving, vacuous marketing/promotional whitepapers are easily identified and ignored.
Of course, I spend my working hours consuming (and occasionally producing) such whitepapers, as well as promulgating my “unbiased” (yeah, Jim, you’re a philosopher-king sitting on Mount Olympus) thoughts via this blog, my Network World column, and other channels. So I’m relatively “hip” to this phenomenon. Maybe others aren’t as tuned into this stuff as I am.
For the record, you should browse my blog, columns, and published reports for my collective “bias”—I like to think of it as “perspective”—on all things IT and otherwise. If you’re comfortable and aligned with my bias, please keep tuning in. If you’re not, then you’re not reading this now, most likely, so this here final sentence is moot, I suppose.
Jim
Wednesday, January 26, 2005
fyi Mobile marketing is slowly improving for US marketers
All:
Pointer to article:
www.cellular-news.com/story/11802.shtml
Kobielus kommentary:
That may be so on some levels, such as the ability to push bulk messages into people’s hands everywhere. But it’s obviously turning into a huge new spam source. And the evolving dynamics of the cellular market will make it even spammier.
Target marketing will become more and more difficult in the mobile services market as prepaid cellular services predominate (postpaid cellular service plans still outnumber prepaid in the US and a handful of other countries, but prepaid rules the roost everywhere else). The convenience, simplicity, and “contractlessness” of prepaid are just too attractive, considering what a commodity cellular service is becoming. Yes, prepaid airtime is still more expensive in the US than postpaid, but, as this niche heats up, expect the difference to narrow considerably.
But why does prepaid cellular open the barn door to more cellular spam? One of the interesting things about prepaid cellular is that it often conceals the subscriber’s identity from the carrier. Under most prepaid plans, you buy your cellphone from various parties (which may be a carrier’s retail outlet, but more often will be a third-party electronics or big-box retailer) and your prepaid airtime from various sources (for example, CVS/pharmacy sells prepaid airtime cards on most major US carriers and MVNOs: Cingular, T-Mobile, TracFone, Verizon, Virgin). You hand your credit card to the company that sells you the phone and the airtime, and they hand you a toll-free number and PIN to activate your prepaid account. Yes, the retailer may or may not be able to get useful demographic info from you, but the carrier, just being a bulk provider of wireless airtime, probably won’t. So it can’t target you with messages to your cellphone. So the SMS messages you do get will either be spam or messages you explicitly opted in to (after a while, those latter messages will blur into spam as well).
Virgin Mobile is doing something interesting. They give you bonus airtime if you refill through their website and use a credit card. Then they give you the option of auto-paying future refills from the same card. Expect to see other cellular carriers follow suit. They must have more user identity info in order to sell more stuff to you over their prepaid services. If they have a big pot of subscriber identity info, they can take the upper hand in their mobile e-commerce/content partnerships.
Whether you need more promotion, marketing, and spam messages pushed to your handset is another issue altogether.
I personally am sick of advertising of any sort being pushed to any of my computers.
Jim
Monday, January 24, 2005
imho Further thoughts on Cameron’s “laws of identity”
All:
It’s been interesting to see everybody’s comments on Cameron’s “laws of identity.” Some further comments from yours truly.
First, I’ll have to take issue with Jamie Lewis’ contention that Kim’s terming his principles “laws” was a “minor conceit.” In fact, Kim meant “laws” in both senses of the term:
• Descriptive (i.e., empirical) principles characterizing the actual dynamics of (identity) system behavior (in this case, the behavior of users accepting or rejecting that system);
• Prescriptive (i.e., imperative) principles governing the desired dynamics of (identity) system behavior (in this case, an identity system designed to support user self-empowerment and be privacy-friendly)
And there’s nothing wrong with Kim proposing identity principles (aka “laws”) in either sense. However, as I said in a previous post, I still haven’t seen any evidence for real-world IdM systems that have failed to gain acceptance because they failed to implement Kim’s descriptive “laws.” But, as I also said, I applaud Kim’s development of prescriptive “laws” to promote user self-empowerment and privacy protection. In other words, Kim’s development of “architecture principles” (Jamie’s term) to guide IdM planning, design, implementation, and administration. Kim’s use of the uppercase word “MUST” in each “law” gives away the prescriptive nature of what he’s proposing.
Second, it occurs to me that Kim’s prescriptive/imperative “laws” revolve around a core IdM principle that one might describe as “the law of identity opacity”:
• Universal identity systems must allow users to opaque (or reveal) as much or as little of their identity as they wish, at any time, for any reason, from any other party, and for any duration.
With this as the core principle, we can derive four of Kim’s six (so far) principles: the “law of control,” “law of minimal disclosure,” “law of fewest parties,” and “law of directed identity.” However, the “law of pluralism” doesn’t follow from this (it really should be called the “law of identity federation”), nor does the “law of human integration” (it should be called the “law of identity idiotproofing,” or is that too condescending?).
These laws/principles can sustain Lewis' "motherhood" test. Reasonable people can suggest and defend contrary positions:
• Contrary to the "law of identity opacity": identity transparency (i.e., no secrets, concealment, or privacy) promotes universal accountability, auditing, and security, which is valuable from the point-of-view of law enforcement (aka "Big Brother")
• Contrary to the "law of identity federation": identity aggregation (i.e., one big identity vault) promotes universal SSO, which is valuable from the point of view of all users
• Contrary to the "law of identity idiotproofing": identity multifactoring (i.e., onerous multi-step registration, validation, and login procedures with IDs, passwords, PINs, biometrics, etc.) promotes a high degree of authentication assurance, which is valuable from the point of view of relying parties, and also end users
Anyway, Kim's principles are a sound basis for further development of the universal IdM governance structure. I’m curious what else Kim has up his sleeve. And whether/how/when IdM vendors are interpreting/implementing these in their solutions (the laws of identity federation, idiotproofing, and opacity, or, per the universal law of analyst self-aggrandizement, “Kobielus’ meta-laws of identity governance”).
Jim
self Brain overload: Too much to do, too little time
All:
Pointer to article:
http://www.computerworld.com/managementtopics/management/story/0,10801,98770,00.html?source=NLT_WK&nid=98770
Kobielus kommentary:
Yeah, I know: same ol’, same ol’. We’re all swimming in this overheating fish pond. “The stress of modern work life may be literally driving us to distraction.”
Complex times can make even the relatively well-moored feel like we’re suffering from ADT. I don’t, but sometimes I feel like the environment is filling me with a mental form of secondhand smoke.
In case you’re wondering why I haven’t posted anything to this blog in the past week, this headline more or less summarizes it. Everything piling up: looking for a full-time job, taking various freelance consulting and writing gigs to pay the bills, dealing with various system and connectivity issues, running my household, dealing with various family issues, etc. Oh….and, of course, continuing to read the IT industry news, thinking about it all, and posting my thoughts to this blog. Sometimes some things need to give under the pressure of events. This has been one of them. So has my poetry. I have only written one new piece this month/year (“OO”). Just haven’t felt like it.
It’s not so much my brain but my heart that’s been overloaded. I’m 46 years old now and feel like it. Fortunately (and this is semi-ironic) I’ve finally brought down my weight to my target: 142 pounds, which is what I weighed at the age of 19. This has come about not so much through worry and nervous exhaustion but from 3+ years of exercise and diet. At least I’m entering into middle age a relatively trim man (trying to avoid the weight issues that aggravated my father’s heart disease and contributed to his death at age 54 in 1978--in many ways, I physically and temperamentally resemble Bill Kobielus very closely, and am always conscious of that existential fact).
These are complex, trying times, full of people and events conspiring to add to our worries. As if we need any “help.” Oh….system issues…in the past week I’ve been afflicted with a spyware infestation on my main desktop computer (Windows XP SP 1—I’m just now getting around to upgrading to SP2). Yes, I’ve had anti-spyware “protection” on that system (Ad-Aware, Spybot S&D), but none of it has prevented a boatload of new crap from pooping itself down from the ether. The worst thing about the new infestation is that it has crashed IE, Windows Explorer, and Control Panel (fortunately, I can still get to all of this if I put Windows in “safe mode”). The number one culprits in all this are these ironically named browser “helper” objects that silently introduce themselves through IE’s holes. In case you’re wondering, I made Mozilla Firefox my default browser several weeks ago, but that still didn’t stop the IE-borne spyware—Windows just seems to start up IE whenever it feels like it, so having Mozilla as my default has been no “inoculation.”
I’m typing this from my (new) laptop, on which I’ve beefed up the anti-spyware (I’m also using Spyware Blaster) now.
Trying times. I’m trying. I really am. Ironically, since I lost my job, I've been busier than ever (not sure it's all been the right kind of busy, though). I’m taking actions, trying to ward off distraction and depression. Thanks (if you’re tuning into my blog) for your patience. I’ll have new real content-laden bloggings shortly.
Got any leads on good jobs? I’m looking for product management or consulting positions. Identity management, security, integration middleware, BPM, wireless/mobile, etc. I cover a broad range.
Call me at 703-924-6224 or e-mail me at james_kobielus@hotmail.com. Even if it’s not about jobs, I need to talk to human beings. Thanks.
Jim
Monday, January 17, 2005
poem Chemical Water
CHEMICAL WATER
We search the
sheen of the grass.
We know the gleam on the blade
is the blood of all creation.
We see landscapes softened.
Snowflakes
sweet upon planetary dreams.
We place faith in the oasis.
And in rivers
through parched interstellar.
Doped with trace amounts
in vast arrays, mites teeming,
we know that worlds
can flourish.
fyi Semantic Web Ontologies: What Works and What Doesn't
All:
Pointer to article:
http://www.alwayson-network.com/comments.php?id=P7480_0_3_0_C
Kobielus kommentary:
This is a great interview with Peter Norvig, Google’s director of search quality. The core problem a search engine faces puts the semantic web question in stark relief. What do you trust more: what a webpage declares about itself—the metadata--or the information contained in that page?
I wholeheartedly agree with this statement toward the end of the article:
“You can't trust the metadata. You can't trust what people are going to say. In general, search engines have turned away from metadata, and they try to hone in more on what's exactly perceivable to the user. For the most part we throw away the meta tags, unless there's a good reason to believe them, because they tend to be more deceptive than they are helpful. And the more there's a marketplace in which people can make money off of this deception, the more it's going to happen. Humans are very good at detecting this kind of spam, and machines aren't necessarily that good. So if more of the information flows between machines, this is something you're going to have to look out for more and more.”
That’s right. You can’t necessarily trust self-assertions. Where there’s money to be made, careers to be advanced, jobs to be landed, and e-commerce transactions to be lured, people will lie. Where reputations are made or lost, people will lie. Sometimes, they’ll just lie for the hell of it. Sometimes, they’ll lie and not realize they’re lying. They may be passing on deceptive information that they’ve never quite realized is wrong—they’re fooling themselves and, if you buy what they’re selling, they’ve fooled you too.
Think about resumes. They’re self-assertions—hence metadata--on the human object. Who can totally trust their truthfulness, completeness, or freshness? You can't trust all people to have the self-knowledge and self-description abilities that make for a good, solid, useful resume. You can't expect people to write resumes that give every potential employer precisely the information they're looking for, which is why you often need to fill in the blanks in their mind in the form of a live interview. You certainly can’t expect people all over the earth to converge on a single resume format so that potential employers can compare everybody apples-to-apples. Sometimes it’s better to trust what others (i.e., references) say about us, but they can lie too (especially if they’re friends of the person, and have been put up to it).
Think about the sheer volume, variety, and dynamicity of information and other resources on the Internet. Even if everybody tagged their metadata with OWL, RDF, or some other common schema, how can you trust those assertions?
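Norvig's prescription, trust what's perceivable to the user over what the page asserts about itself, can be sketched in a few lines of Python. The scoring below is a made-up illustration, not how Google or any real engine ranks pages.

import re

def metadata_trust_score(declared_keywords, visible_text):
    """Return the fraction of a page's self-asserted meta keywords that
    actually appear in the text a human would see; a low score suggests
    the metadata should be discounted."""
    words = set(re.findall(r"[a-z']+", visible_text.lower()))
    declared = {k.strip().lower() for k in declared_keywords if k.strip()}
    if not declared:
        return 0.0
    supported = sum(1 for keyword in declared if keyword in words)
    return supported / len(declared)

# A page claiming to be about "identity" and "federation" whose body text
# only hawks cheap watches scores 0.0, so its metadata can safely be ignored.
metadata_trust_score(["identity", "federation"], "Buy cheap watches now, limited offer")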
In the broadest sense, the “semantic web” refers to the master “description layer” that informs the Internet and all networked interactions. We can describe semantics generally as referring to shared understandings of the “meaning” of various entities within a distributed environment. The entities to which one might ascribe “meaning” include services, messages, resources, applications, and protocols. An entity’s “meaning” might be described by any or all of the following characteristics: structure, behavior, function, operation, scope, context, reference, use, goal state, and implied processing.
Clearly, in the broadest sense, any WS-* specification in any layer describes some aspect of a service’s semantics: usually the structures, behaviors, scope, context, reference, and processing of data, messages, and interactions within a specific functional sphere (such as identification, messaging, security, reliable messaging, and publish-and-subscribe). WSDL is a semantic/description syntax for describing Web services; XML Schema is one for XML vocabularies; WS-Policy is one for policies; and so on and so forth.
But the industry narrowly refers to a subset of WS-* specifications as specifically addressing “semantics”: OWL, RDF, and a welter of others that have achieved minimal adoption, as this article states. Typically, “semantics specifications” provide frameworks for defining the scope, reference, use, goal state, and implied processing of a particular resource type: tagged XML data.
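For readers who haven't seen these narrower "semantics" specs in the flesh, here's a tiny example of self-asserted RDF metadata, parsed with the rdflib Python library. The vocabulary URI and property names are made up for illustration, and, per the argument above, nothing obliges a consumer to believe any of these triples.

from rdflib import Graph

# Self-asserted metadata about a person, in Turtle syntax. The ex: vocabulary
# is hypothetical; the claims are exactly as trustworthy as their author.
TURTLE = """
@prefix ex: <http://example.org/vocab#> .
<http://example.org/people/jim>
    ex:occupation "analyst" ;
    ex:skill "identity management", "federation" .
"""

graph = Graph()
graph.parse(data=TURTLE, format="turtle")

for subject, predicate, obj in graph:
    print(subject, predicate, obj)   # each triple is just a claim, not a verified fact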
The “semantic web” concept can’t even begin to lift off until there’s general recognition that all description markup syntaxes need to be included in this framework. And until there’s a federated registry of all metadata descriptions of all networked resources, relationships, and other entities. And until there’s a trust infrastructure that helps the “relying party” to assess the quality/accuracy of any given piece of asserted metadata on any entity. And until there’s a search infrastructure that helps us to locate, aggregate, assemble, and present all that metadata (and assessments thereof) to human beings for their evaluation.
Because, as Norvig says, there are things that people catch that machines totally miss. People are the final arbiters of "meaning." If it isn't meaningful to a human being, it isn't meaningful, period.
Jim
Saturday, January 15, 2005
lol Biology Book Stickers Ruled Unconstitutional
All:
Pointer to article:
http://www.washingtonpost.com/wp-dyn/articles/A6681-2005Jan13.html?sub=AR
Kobielus kommentary:
Good, some sanity from the federal judiciary. Considering who’s appointing federal judges these days, this is not something you can take for granted.
I’m sick of having to give these fundamentalist fools the benefit of the doubt on this creationism nonsense. There is no doubt. Organisms have always evolved, and will continue to evolve. The universe has always evolved, and will carry on in this mode till the end of time. In fact, time has always evolved. This nasty know-nothing time in our culture shall pass. Just have to have faith.
Regarding evolution of organisms, do the creationists doubt that natural selection is taking place all around us? After all, natural selection is the mechanism that Darwin focused on to describe the ever-changing fit between organisms and their environments, and the steady modification of organisms themselves that result from the individual’s struggle for life, sustenance, and procreation. How can the fundamentalists seriously deny the manifestations of this mechanism (especially the steady progression of new hominid forms up through and including our own species) without going after the mechanism itself? And if they deny that natural selection takes place, they'll have to deny some indisputable facts of existence: people differ from their parents, some people don't beget children, some people have more children than other people, some lineages die off and don't pass their genes to future generations ad infinitum. We--every single one of us--exist through the grace of that lucky "ad infinitum" chain of "x begot y begot z." It's the getting and begetting of the significant others through all eternity that constitutes natural selection, which is the driver of evolution. Heck, even Genesis acknowledges the centrality of the beget-chain.
The comic component of all this is the idiotic disclaimer sticker that some Georgia schoolboard affixed to books that mention evolution. Here’s what the now-struck-down stickers said:
• “This textbook contains material on evolution. Evolution is a theory, not a fact, regarding the origin of living things. This material should be approached with an open mind, studied carefully, and critically considered.”
I agree with the last sentence. Critical thinking is important. It’s the basis of scientific inquiry, and the vast sweep of scientific inquiry has more than established evolution of species as a fact. As much a fact as the observation that spring follows winter annually in the temperate zones of our planet. Or that objects tend to drop to earth when allowed to fall freely.
I think this schoolboard should have been more consistent in its approach. I’d like to have seen them apply that disclaimer to any established, confirmed body of scientific knowledge discussed in the book. The disclaimers would have choked out the actual book text. For example:
• “This textbook contains material on GRAVITATION, which is a theory, not a fact. This material should be approached with an open mind, studied carefully, and critically considered.”
• “This textbook contains material on CIRCULATION OF THE BLOOD, which is a theory, not a fact. This material should be approached with an open mind, studied carefully, and critically considered.”
• “This textbook contains material on PHOTOSYNTHESIS, which is a theory, not a fact. This material should be approached with an open mind, studied carefully, and critically considered.”
And so on. For every scientific discovery since Aristotle. Just for consistency’s sake.
Pinheads.
Jim
Friday, January 14, 2005
imho Identity service bus the key to universal federation
All:
Identity self-empowerment is the central theme underpinning recent blog discussions by Cameron, Lewis, Ernst, Powers, Kearns, Windley, Burton, and others. How can individuals self-assert their identities, self-register their identities into federated communities of trust, self-host their identity information, self-publish their identities to authorized parties, and self-police disclosure to and use of their identity information by relying parties?
It seems to me that LID, Identity Commons, SxIP, FOAF, XRI/XDI, and other industry initiatives revolve around these core self-empowerment themes. There seems to be a general belief that identity/trust brokers of all sorts are a bad thing because they violate Cameron’s “law of fewest parties.” The fewest possible parties in a distributed IdM environment are two: 1) identity-asserting entity and 2) the identity-relying entity.
If we accept all that, then all this identity self-empowerment requires a conducive middleware fabric: one that deeply supports peer-to-peer identity interactions without need for any intermediary identity registrar, authority, or broker. Identity interactions are just one category of traffic traversing the increasingly peer-to-peer enterprise service bus. The minimal environment consists of the peer doing the asserting and the peer doing the relying.
Fundamentally, self-empowering federation depends on an “identity service bus” that implements the emerging stack of WS-* standards (the basis for the broader enterprise service bus) in peer-to-peer fashion. Why not define a peer-to-peer trust environment in which individuals publish and subscribe to (hence control disclosure of) identity information via WS-Notification and WS-Eventing (or whatever consensus pub/sub standard emerges from current industry discussions)?
What is identity federation, at heart, if not a pub/sub and event notification environment? After all, an authentication assertion is simply a notification of (really, a voucher from a trusted party concerning) an event: that someone has successfully logged in. An attribute assertion vouches for the existence of another type of event: prior registration of various attributes, such as roles, with an authoritative attribute store (such as an LDAP directory). My autonomous identity domain (the IdP) vouches for the existence of these identity event(s), and your autonomous domain (the SP) relies on those vouchers (aka "assertions").
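To make the pub/sub framing concrete, here's a minimal Python sketch of the two-party case: an asserting peer publishes an authentication event directly to a relying peer that has subscribed to it. The class and field names are my own illustrative stand-ins, not WS-Notification, WS-Eventing, or SAML constructs.

import time
from typing import Callable, Dict, List

class AssertingPeer:
    """An identity-provider (IdP) peer that publishes identity events."""
    def __init__(self, domain: str):
        self.domain = domain
        self.subscribers: List[Callable[[Dict], None]] = []

    def subscribe(self, callback: Callable[[Dict], None]) -> None:
        # A relying peer registers interest in this peer's identity events.
        self.subscribers.append(callback)

    def publish_login(self, subject: str) -> None:
        # An authentication "assertion" is just a notification, vouched for by
        # this domain, that a login event occurred.
        assertion = {
            "issuer": self.domain,
            "subject": subject,
            "event": "authentication",
            "issued_at": time.time(),
        }
        for notify in self.subscribers:
            notify(assertion)

class RelyingPeer:
    """A service-provider (SP) peer that consumes identity events."""
    def __init__(self, name: str):
        self.name = name

    def on_assertion(self, assertion: Dict) -> None:
        print(f"{self.name}: accepting that {assertion['subject']} logged in at {assertion['issuer']}")

idp = AssertingPeer("idp.example.com")   # the asserting peer
sp = RelyingPeer("sp.example.net")       # the relying peer
idp.subscribe(sp.on_assertion)           # two parties, no broker in between
idp.publish_login("alice")

The point of the sketch: the relying party's trust decision hangs entirely on a notification published by the asserting party, with no registrar or broker in the middle.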
Where today's federated identity schemes go wrong is in re-inventing the wheel: defining their own request/response messaging protocols that don't leverage the emerging WS-* standards for pub/sub and event notification. Why should SAML or Liberty Alliance or any other federation protocol use a different event-notification messaging protocol from that used for other Web services interactions? As I said, identity is just another type of interaction over the enterprise service bus--it shouldn't require its own distinct robust app-to-app messaging protocol.
I get heartburn when I see new IdM initiatives—such as LID and SxIP—that don’t compose fully into the emerging stack of consensus WS-* specifications. Actually, why single them out? I'd like to see SAML, Liberty Alliance, and WS-Federation re-architected to support peer-to-peer identity self-assertion, -registration, -hosting, -pub/sub, and -policing. And also to support WS-Notification/Eventing natively. Obviously, it would take a few years for the industry to turn the SAML ship around in this direction, considering that SAML 2.0 is well-advanced toward ratification.
The only way to establish a universal identity service bus is to leverage the middleware standards that every peer endpoint implements everywhere. And, where universality is concerned, the growing WS-* stack is "it." Or, should I say, IT?
Assuming that the industry converges around a common vision of the identity service bus, I don't expect universal, peer-oriented self-federation over a universal WS-* middleware backplane to become a reality for another 10 years, at the earliest. It will take at least that long for the enabling middleware standards to mature and be adopted widely. It will take even longer--perhaps an eternity--before identity authorities everywhere are willing to cede control over the IdM backplane to their suddenly self-assertive serfs--er, subjects--ummmm.....people.
Jim
lol Prehistoric Mammal Had Diet of Dinosaurs
All:
Pointer to article:
http://www.technewsworld.com/story/39681.html?u=jkobielus2&p=ENNSS_669c3848b38a83722f797d45e212845c
Kobielus kommentary:
This reminds me of one of my favorite Gary Larson “Far Side” cartoons. In it, a dinosaur stands at a podium, speaking to a crowd of other dinosaurs:
“Gentlemen, the situation looks pretty bleak. The world’s climate is changing, the mammals are taking over, and we all have brains the size of walnuts.”
Jim
fyi WinOE Workflow Prepped For Whidbey, Longhorn, Office 12 In 2006
All:
Pointer to article: http://www.crn.com/nl/crndirect/showArticle.jhtml?articleId=57700833
Kobielus kommentary:
Microsoft has been saying for years that BizTalk is the foundation for convergence of all of their workflow functionality (encompassing the workflow features of Exchange, SQL Server, Content Management Server, SharePoint Team Services, “Project Green,” DRM Services, and what have you). “Longhorn”’s WinOE Workflow module—in tandem with related functionality going into “Whidbey” and Office 12--is where Windows gets workflow in a serious way.
Microsoft has said that BizTalk Server will continue to be a stand-alone workflow/orchestration product, and I believe them. [Quick terminology note: “workflow,” “orchestration,” “choreography,” and “business process management” are all synonyms in my book—actually, in my actual books—“Workflow Strategies” (1997, IDG Books) and “BizTalk: Implementing B2B E-Commerce” (2001, Prentice Hall PTR)—I use the term “workflow,” whereas in my recent coverage of this space I’m tending to use “orchestration” as the catch-all term; same diff].
Microsoft hasn’t provided as much specificity as I would wish with its BizTalk roadmap. But what the article says sounds quite plausible (except for one teeny little error—can you spot it?): “The next version of BizTalk 2006, code named Pathfinder and due to go into beta by the end of the year, will continue to use the existing original orchestration engine based on Visio but its successor -- due in 2008 -- will use the new WinOE orchestration technology, sources said.” The error is that it’s absurd to say that “[BizTalk Server’s] original orchestration engine [is] based on Visio.” This is entirely apples-and-oranges: an orchestration engine is a runtime component; Visio is a flowcharting tool for business analysts to use in specifying processes to be executed by BizTalk’s runtime orchestration engine (Visual Studio and various BizTalk orchestration tools are the heavy-hitting technically-oriented orchestration modeling/definition tools in Microsoft’s arsenal).
Anyway, having written a book on BizTalk, my hunch about WinOE Workflow is that it will primarily provide the development/runtime infrastructure for person-to-person (aka human) workflows throughout Microsoft’s product family (BizTalk Server 2004 added human workflow functionality to a product that had previously been focused solely on EAI). Where does that leave future versions of the stand-alone BizTalk Server product? My guess is that Microsoft will position BizTalk Server more solidly in the middleware space as an integration infrastructure hub for application, data, and process integration (hooking into WinOE Workflow for the presentation side of it all). Microsoft currently has no EII offering, and I suspect that BizTalk will figure into that strategy going forward. Likewise, Microsoft has no enterprise service bus (ESB) offering (I define an ESB as an integration fabric that supports flexible message exchange patterns, including hub-and-spoke, decentralized/routed, and peer-to-peer).
This “single [Microsoft] orchestration programming paradigm” mentioned in the article will be interesting to see. I doubt there’ll be a single paradigm. Rather, there’ll be orchestration visual modeling paradigms appropriate for various users: non-technical end users (perhaps using an e-mail-like routing-slip metaphor from within Office/Outlook/InfoPath/IE for process definition, and e-mail-like worklists for process participation, and calendar/task-management interface for process tracking), business process analysts (Visio-like flowcharting), orchestration process-modeling and simulation gurus (various high-powered model-driven-development paradigms a la UML, BPMN, etc., with Microsoft’s “Whitehorse” pulling the carriage), and process/system administrators (browser-based visual tracking/monitoring tools).
Just as important will be Microsoft’s orchestration-standardization push. How deeply/broadly will it implement WS-BPEL for hub-and-spoke orchestration? WS-CDL for peer-to-peer orchestration? Various WS-* standards/specs—WS-Policy, WS-ReliableMessaging, WS-Notification, WS-Eventing, WS-Coordination, etc.—to address the critical features of a federated multivendor orchestration environment (oooh..there’s a new concept—see my prior blog posting on new frontiers in federation).
Some folks are down on WS-BPEL because it doesn’t provide the be-all orchestration standards framework. WS-BPEL has its persistent "debunkers," but that attitude is based on a misunderstanding of its proper scope. It's an important piece of the orchestration standards picture, but it only defines the process-definition rules interchange/execution syntax for orchestrations executed at intermediary nodes such as integration brokers (BizTalk Server, for example); WS-CDL provides an equivalent rules syntax for orchestrations to be implemented at endpoint nodes. Check out the Workflow Management Coalition (www.wfmc.org) for a fuller interoperability reference framework for federated orchestration (though the WfMC’s actual “standards” in this regard have been conspicuous in their absence from most commercial workflow/orchestration tools).
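To illustrate that scope distinction in a language-neutral way, here's a small Python sketch. The service names and logic are invented for illustration; this isn't BizTalk, WS-BPEL, or WS-CDL code. The orchestrated version has a central broker that owns the whole process definition; the choreographed version has each endpoint hand off directly to the next peer.

def check_credit(order):
    # Stand-in for a credit-check Web service.
    return {**order, "credit_ok": True}

def ship(order):
    return {**order, "shipped": True}

def invoice(order):
    return {**order, "invoiced": True}

def orchestrate(order):
    # Hub-and-spoke: a central broker (the role a WS-BPEL engine such as
    # BizTalk plays) owns the whole process definition and calls each service in turn.
    order = check_credit(order)
    if order["credit_ok"]:
        order = ship(order)
        order = invoice(order)
    return order

def choreographed_check_credit(order):
    # Peer-to-peer: no central engine; each endpoint, having done its part,
    # hands off directly to the next peer (these hand-offs are what WS-CDL describes).
    order = {**order, "credit_ok": True}
    return choreographed_ship(order) if order["credit_ok"] else order

def choreographed_ship(order):
    return choreographed_invoice({**order, "shipped": True})

def choreographed_invoice(order):
    return {**order, "invoiced": True}

print(orchestrate({"id": 1}))
print(choreographed_check_credit({"id": 2}))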
So WS-BPEL is only a small piece of the much broader range of orchestration standards that are necessary under the WS-* umbrella. Microsoft’s orchestration products/tools—now and future—are only a small piece of the overall orchestration federation environment for multivendor, multiplatform, multi-enterprise business processes. But Microsoft has an excellent roadmap for embedding general-purpose, standards-based orchestration (person-to-person, app-to-app, etc.) into their platform. I’d like to see other platform vendors follow suit with their orchestration roadmaps.
Jim
Thursday, January 13, 2005
rip Jefferson Airplane Drummer Spencer Dryden Dies
All:
Pointer to article:
http://story.news.yahoo.com/news?tmpl=story&cid=769&e=2&u=/nm/20050114/music_nm/music_jefferson_dc
Observance:
Yes, the drummer of Jefferson Airplane was Charlie Chaplin’s nephew, a fact I learned in passing last year when I read a biography of Chaplin. The fact was never called out—I just saw the kid’s name (quick mention of his participation in a party in his famous uncle’s mansion in the early 40s) and did a quick Google. Actually, the drummer’s father was Chaplin’s half-brother by the same mother, hence the different surnames. Dryden never told any of his bandmates or anybody else about it (I’m sure he had plenty of good reasons—not wanting to be overshadowed, not wanting people to expect him to carry on those “comedy genes,” not wanting to be associated with someone who was generally regarded in LA as a beast and a rogue and communist, and so on and so forth).
About Spencer Dryden’s role in Jefferson Airplane. No, he wasn't one of the creative principals in the group, and he, like most drummers, was pretty low-key and invisible. His name is essentially a bonus question in a 60s-rock trivia contest. From everything I've read, he was just about the freest spirit of the bunch, and the least connected with the whole Haight-Ashbury scene (he was always perceived by the other members as the "LA guy" on drums). But he directly influenced the Airplane's lead singer, main personality, and resident looker.
Grace Slick wrote/sang the wonderful, overlooked, funny ballad “Lather” about Dryden, who was her primary squeeze during the 1967-68 creative sweet spot, spanning “Surrealistic Pillow,” “After Bathing at Baxter’s,” and “Crown of Creation.” “Lather was 30 years old today/They took away all of his toys/His mother sent newspaper clippings to him/About his old friends who’d stopped being boys.” An overgrown child, or lost child, or tripping child.
When I think of his contribution to the band, I primarily think of the “bolero” drum progression of “White Rabbit,” and also the charging percussion of "Somebody to Love" (I've heard Grace's prior versions of both songs from her pre-JA band, Great Society, and can assure you that JA's versions are much better, thanks in no small part to Dryden's excellent drumming).
I also think of Dryden's acid-drenched free-association riff called "A Small Package of Value Will Come to You Shortly," on "After Bathing at Baxter's": especially him barking out the phrase: "No man is an island/No man is an island/He's a peninsula" (one of my absolute favorite double-entendres of all time--think of Florida--also look into the word "peninsula" for five letters in the correct sequence, and strip away the four unessential letters to reveal the hidden word).
I wonder what Dryden had been doing with his life since the 70s (he was in country-rock band New Riders of the Purple Sage for several years). As you can tell, I’m a huge Jefferson Airplane fan. One of the most memorable events of my life was a brief, accidental meeting with Grace Slick and Paul Kantner in Detroit in 1981. Somehow, I prefer their artsy noodlings to the Dead. I’m sure they don’t remember the encounter. I got their autographs. Lost ‘em.
Jim
fyi Putting XML in the fast lane
All:
Pointer to article:
http://news.com.com/Putting+XML+in+the+fast+lane/2100-7345_3-5534249.html
Kobielus kommentary:
This article is wrong and misleading on several levels. Most fundamentally, it misses the broader picture: the issue isn't just compressing XML’s footprint on network and storage resources, but also reducing the XML processing overhead borne by app servers and endpoints.
I published a feature article in Business Communications Review on this very topic last month. With the kind indulgence of Fred Knight, Eric Krapf, and Sandy Borthick, here’s the meat of that piece re compact XML encodings (for the rest of the piece, I refer you to that fine publication—it’s one of those enduring publications that hasn’t changed its format but is somehow still fresh after all these years, just like my principal publication-host, Network World):
“XML content needn’t always be encoded as plain, bandwidth-hogging ASCII text, though XML’s human-readability appeals to application developers everywhere. One of the most important new approaches is use of improved XML encoding and serialization schemes in lieu of traditional reliance on ASCII plaintext. Binary encodings of XML are generally more compact than text encodings, producing smaller XML document file sizes to be transmitted over networks and stored in databases.
The core standard, XML 1.0, supports alternate approaches for serializing the document elements whose logical data model is described in XML markup syntax. To the extent that XML data models can be serialized to binary encodings, XML becomes a more efficient interchange and storage format.
At a mandatory minimum, all XML processors (the software components that generate and/or parse XML) must be able to read data encoded as Unicode Transformation Format 8 (UTF-8) or UTF-16. Both can represent every Unicode character, which is what allows XML documents to contain text in all the world’s character sets, and both are considered text formats (not binary encodings). For ASCII-dominated XML documents, UTF-8 is the more compact encoding: it uses one byte per ASCII character, whereas UTF-16 uses two.
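As a quick sanity check on that size difference, here's a minimal Python comparison (the sample markup is made up):

# Compare encoded sizes of an ASCII-heavy XML snippet (the sample markup is made up).
doc = '<order id="123"><item sku="ABC" qty="2"/><total currency="USD">19.99</total></order>'

utf8_size = len(doc.encode("utf-8"))
utf16_size = len(doc.encode("utf-16-le"))  # omit the byte-order mark for a fair count

# For ASCII-only markup, UTF-16 comes out roughly twice the size of UTF-8;
# both are variable-length encodings once you leave the ASCII range.
print(utf8_size, utf16_size)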
In addition, CDATA--an optional feature of XML 1.0—may be used to encapsulate non-XML character data within XML documents. However, this approach is fraught with limitations, such as the possibility that receiving XML parsers may not process CDATA sections correctly. As noted, support for CDATA is optional in the XML 1.0 standard, whereas support for UTF-8 and UTF-16 is mandatory.
Another approach is to rely on various industry specifications that use an XML-based SOAP message as a manifest for describing binary data files within SOAP’s surrounding HTTP packet. SOAP with Attachments (SwA) and Microsoft’s Direct Internet Messaging Extensions (DIME) transmit opaque, non-textual data—such as images and digital signatures—along with an XML document. However, they don’t support binary encoding of all content within XML documents.
Neither SwA nor DIME has achieved broad adoption within industry. Recognizing the critical need for a consensus standard for compact XML encodings, the World Wide Web Consortium (W3C) has developed new Candidate Recommendations for binary encoding of XML within SOAP 1.2 payloads: SOAP Message Transmission Optimization Mechanism (MTOM) [http://www.w3.org/TR/2004/CR-soap12-mtom-20040826/] and XML-binary Optimized Packaging (XOP) [http://www.w3.org/TR/2004/CR-xop10-20040826/]. In addition, W3C’s XML Binary Characterization Working Group has released the First Public Working Draft of its “XML Binary Characterization Properties” document (http://www.w3.org/TR/2004/WD-xbc-properties-20041005), describing properties desirable for MTOM, XOP, or any other serialization of the XML data model.
MTOM and XOP (which may be considered two halves of a single standard) have much broader vendor support than any predecessor specification for XML-to-binary serialization. MTOM and XOP describe how to produce optimized binary encodings of XML content within SOAP 1.2 payloads. MTOM and XOP preserve one of XML’s great strengths: the transparency of the tagged, logical data structure that a particular document implements. (Note: Where XML encoding schemes are concerned, the terms “optimized” and “efficient” are industry shorthand for “smaller XML file sizes.” We use both terms in that context in this article, as well as such synonyms as “compact” and “small.”).
Structural transparency is what distinguishes XML syntax from most text formats. XML’s tags, attributes, and other markup conventions call out the interrelationships, datatyping, and semantics of its constituent data elements. By calling out a document’s logical structure, XML facilitates fine-grained validation, transformation, and other processing on that document’s data elements by receiving applications. In fact, most XML-based Web services require that the underlying document markup syntax be transparent and self-describing. Web services require that all XML-processing nodes (software- and/or hardware-based) be able to parse, validate, and transform all elements within SOAP/XML traffic. Deep content inspection is how XML firewalls and SOAP content routers operate. Take away XML’s logical transparency and you jeopardize Web services interoperability, management, and security.
For any given XML document, MTOM and XOP preserve the transparency of its logical structure by encoding that structure in a text-based “XML Information Set” manifest, while allowing any of the document’s contents to be serialized to any binary encoding. In particular, these specifications support binary encoding of XML content as Multipurpose Internet Messaging Extensions (MIME) Multipart/Related body parts and encapsulation of those parts—along with the associated XML Information Set manifest--within SOAP 1.2 envelopes. The specifications also describe how to encapsulate binary-encoded XML body parts directly within HTTP packets (in cases where SOAP doesn’t enter the equation), thereby reducing the size of XML files for transmission and/or storage.”
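Stepping outside the article excerpt for a moment: here's a rough Python sketch of that packaging layout, using the standard email library. It illustrates the multipart/related idea only; it is not a conformant MTOM/XOP implementation, the element names and headers are simplified, and the email library base64-encodes the attachment here, where MTOM/XOP would carry it raw.

# The XML root stays readable text, while the bulky binary content moves to a
# separate MIME part referenced by Content-ID.
from email.mime.multipart import MIMEMultipart
from email.mime.application import MIMEApplication
from email.mime.text import MIMEText

photo_bytes = b"\x89PNG..."  # stand-in for a real binary payload

# The root part: an XML document whose binary element has been replaced by a
# reference to the attached part (analogous to xop:Include pointing at a cid: URI).
root_xml = ('<claim><claimant>Alice</claimant>'
            '<photo><Include href="cid:photo-1"/></photo></claim>')

package = MIMEMultipart("related", type="application/xml")
package.attach(MIMEText(root_xml, "xml"))

binary_part = MIMEApplication(photo_bytes, "octet-stream")
binary_part.add_header("Content-ID", "<photo-1>")
package.attach(binary_part)

print(package.as_string()[:400])  # the XML manifest stays transparent up front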
Oh...almost forgot...my Network World column on the same topic was published this week (long lead times on these publications, folks--half the time I forget what's in the pipeline--much has changed in my life since I wrote these pieces in October-November--pardon me for not contributing any additional insights on the blog item--I'm distracted by any number of tasks concerning finding a new job, working through some tech issues with my local phone company, and helping my son apply to colleges--he's going to be an actor, or so he hopes--he's a funny talented guy--handsome dude too).
Note that the print version of the story has my current byline correct, but the editor of the online version still has the old byline (they need to get their internal workflow straightened out on synchronizing these sorts of things).
I'm an independent IT industry analyst currently looking for a new position. I'm no longer employed by that firm whose name is in the obsolete byline. So take note. Call me at 703-924-6224 or e-mail me at james_kobielus@hotmail.com.
Let's talk. You'll find that I'm easy to speak and work with. I'd prefer if you actually came out here to chat live in person. I miss that. I hate working in a dungeon surrounded by disembodied voices and words on the wire. It's much better to have human give-and-take, as opposed to take-and-take.
Jim
poem McCartney's Moon
MCCARTNEY'S MOON
Moon's the mandarin
face of an orange.
Stars the clean sightlines of Linda's Arizona.
Air
is quiet night
in a velvet guitar case
lain for Lennon
at the mouth
of a cavern.
poem Must
MUST
Money must know its
sway and swing and push
it. Money must swear
no oath to any
owner but its own
sharp resummation:
Its mercenary
muscle for tidy
building & smashing.
poem Nature Scare
NATURE SCARE
None too delicate,
the little mother buggers.
Earth's full of them,
their colonies,
law.
Smelling dead wood,
they arise naturally.
Well,
thank you so much for that millimeter fracture.
And that handy front stoop,
sure paved our way.
And that porous spring wood,
chewed through there in no time.
Yeah,
try as we may,
they are coming back.
As our structures grow weaker,
older,
and sag.
poem Ocean of Snow
OCEAN OF SNOW
Let it drift slower and deeper.
Piles beyond our whitest dreams.
Cozy, cool, and all-absorbing.
A well for light and all that moves.
poem Onto Onco
ONTO ONCO
One code keeps sticking, spraying, stubborn, I go, ego, all-defying.
One code won't shut up.
One code won't shut down.
One code all around.
One code.
poem Polaris
POLARIS
Meridians meet.
Where solid waters unite.
Where the hot plates of earth are forever remote.
Where with the means to compute
you too can locate
that pivotal place
on the ice.
poem Providence
PROVIDENCE
God's providence is
no more mysterious than
photosynthesis.
Both transmute the fine
immaterial into
daily sustenance.
We embody them
both. We rise with the sun and
partake of the bread.
poem Revelation
REVELATION
Beauty is
in the contour of
a new way of
seeing,
in the revelation of
a cosmos that,
at our prompting,
has just let drop
the very next veil.
poem Re:Bye
RE:BYE
ACK:
Just a keep-alive ping.
Session context endures.
Async pipeline preserved.
Attenuated latency caused premature notification of impending time-out.
Host please send next packet.
Admin please note channel-jitter condition.
END-ACK
poem Run Riot
RUN RIOT
I'll take a meadow
wherever nature's chosen
to happily jam.
Medians wild and
rioting, freeways redeemed
in numberless weeds.
Old federations of
fresh bees and blooms
wherever wasteland resumes.
poem Seattle
SEATTLE
In the essential
Seattle users
photosynthesize
caffeine directly
from whatever drops
of liquid sunshine
are vouchsafed their way
or, failing that, fix
off the glints of glare
that glance in off the
gray and grace their green
eye-stained monitors.
poem Shush
SHUSH
Dim the lit nightclub
and offer another toast
to all sleeping dogs.
Let the owner know
we won't resist if asked to
keep the chit-chat down.
There are drinks best drunk
in silence and things I'd like
to whisper to you.
poem Sliding Window
SLIDING WINDOW
Inbox fairly full.
Yours not mine unanswered look.
Losing patience fast.
Wednesday, January 12, 2005
poem Will
WILL
A mind's munition
is its concentration, its
hole-before-bullet.
Its ignorance of
peripheral issues and
stray conditionals.
Its attention not
to what may take place but what
through destiny will.
poem Whorl Whirl Swirl
WHORL WHIRL SWIRL
Tip of the index
finger and opposing thumb
briefly sweetly meet.
poem Stake
STAKE
We stake
standards.
We cut colors
to contours
of borders
sharp.
We fly flags
as the baboon
bares his teeth.
When the blood is full
and a fight is nigh.
Re Cameron's "A little tiny baby information calamity"
All:
I really like Kim's posting on the George Mason University security breach. I like the fact that he fatalistically recognizes that these sorts of penetrations--with concomitant loss/theft of identity information--will happen time and again. We just have to accept that disasters do happen, even when IT administrators have taken all necessary precautions. What I didn't see in his post was any discussion of "disaster response" procedures that identity administrators (is that even a real title?) must take in response to such incidents. It's clear that they should plug the vulnerability in their system, track down the perpetrator, and seek legal recourse, if possible. But it's also critical to immediately notify the impacted parties--the people whose identities were stolen--so that they can implement "damage control" in their lives (looking for signs of identity fraud, suspicious credit-card charges, and so forth). These notifications may involve sending alerts to the institutions that manage their assets, so that those institutions can execute identity-fraud clampdown/monitoring procedures.
What I'd like to see is some IdM-penetration-event analog to the "amber alert" system, under which identity breaches trigger some sort of automated alert, fraud-control clampdown, disaster response, and investigation train of events. No matter where the IdM penetration occurred, or how many people it exposed, or how much identity info was exposed, or the amount of potential financial liability involved. There should be legal/regulatory rules governing collective national/international responses to these incidents (as if they were oil spills or tsunamis or what have you).
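Here's a minimal Python sketch of what that automated alert fan-out might look like; the names and fields are hypothetical, and no real notification standard or regulation is implied.

from dataclasses import dataclass
from typing import List

@dataclass
class BreachEvent:
    source: str                   # where the penetration occurred
    exposed_subjects: List[str]   # whose identity info was taken
    exposed_attributes: List[str] # what kinds of info were exposed

def notify_institutions(event: BreachEvent, institutions: List[str]) -> None:
    # In a real system this would be a signed, reliable notification feeding each
    # institution's fraud-clampdown procedures; here it just prints the alert.
    for inst in institutions:
        print(f"ALERT to {inst}: monitor accounts of {event.exposed_subjects}; "
              f"exposed data: {event.exposed_attributes} (source: {event.source})")

notify_institutions(
    BreachEvent("university-idm", ["alice", "bob"], ["SSN", "student ID"]),
    ["credit-bureau.example", "bank.example"],
)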
Identity information is a munition. Having someone steal your identity is like having them steal your gun from your hip and then shoot you in the back with it. Serious threats demand serious collective, organized responses.
Jim
fyi Microsoft to expand Redmond headquarters
All:
Pointer to article:
http://news.com.com/Microsoft+to+expand+Redmond+headquarters/2100-1014_3-5529693.html
Kobielus kommentary:
Microsoft is doing the right thing by deepening its roots in the Seattle area. Software is an art. Artists want to do their art in a conducive environment. The Pacific Northwest region has one of the most thriving local artistic communities in North America, if not the world.
It’s not just high-tech. For example, check out its music scene. Go online to www.kexp.org and stream the Audioasis show featuring “the best in Pacific Northwest music, live performances, and in-studio guests.” It’s on every Saturday night from 6 to 9 pm (Pacific). The music is so consistently high-quality, innovative, and diverse that you just keep coming back. I’m originally from Detroit (a larger metropolitan area, even to this day), and I spent time in Madison, Wisconsin (a college town). And I’ll tell you, Seattle puts them to shame. Just great stuff (and, no, the grunge stuff is now ancient history, thank god—it’s just part of a much broader variety now—for more recent Pacific Northwest bands, try Modest Mouse, the Shins, Jesse Sykes and the Sweet Hereafter, United State of Electronica, the Decemberists, the Dandy Warhols, Maktub, Math and Physics Club, and so on). Not just that, but the best musicians from all around the world flock to Seattle to record, perform, and hang out at KEXP.
Just keep your ears open and your fingers on the keyboard. Good vibes and good software are co-dependent.
Jim
http://www.kexp.org/programming/progpage.asp?showID=9&1413=38360.75-1&96=38360.75-1&20=38360.75-1&256=38360.75-2#recent
fyi IBM, Microsoft Chart Collaboration's Course
All:
Pointer to article:
http://www.crn.com/nl/crndirect/showArticle.jhtml?articleId=57700131
Kobielus kommentary:
Oh…collaboration technology is hip again. We’ve swung back to the early ’90s, in terms of the IT industry giving a big whoop about all this. Funny that I had to cover all of this during a period when everybody else felt they knew all they needed to know about it. When Notes v. Exchange was the painfully repetitive, mind-numbing Coke v. Pepsi customer issue, year after year. Story of my life.
Interesting that the collaboration industry focus is now on portals as the primary platform: IBM and Microsoft both put their respective portals at the heart of their Web-facing collaboration product families. So does SAP (re NetWeaver’s Enterprise Portal, which includes various collaboration tools), now that I think of it.
This is a good and inevitable trend. The browser is the heart of all future collaboration environments, on the client side (or the enriched browser, or the enriched browser embedded in the OS, a la “Longhorn”/“Avalon”). The portal is the browser’s server-side counterpart in the presentation tier. Actually, some have suggested that the presentation tier be referred to as the [user] “interaction tier,” because that more fully suggests its role in facilitating two-way interactions between clients and server-side user-access/rendering/content-aggregation components (particularly Web servers, portal servers, application servers, presentation servers, and terminal servers).
The browser as the single, all-purpose client for all collaboration environments. Boy, do we need that! I’m sick and tired of having to keep multiple clients (browser, e-mail/calendaring, IM, P2P, multimedia, VoIP, RSS newsreaders, workflow, SMS, etc.) up on my various devices in order to access messaging, calendaring, conferencing, blogging, and other collaboration and content-delivery services. If one of those clients could rise to the challenge of encompassing the rest, I’d be happier and more productive every single day.
It looks like the evolving browser is “it,” coupled to the evolving presentation server (the term “portal” is loose enough to encompass ongoing evolution on this end of the presentation-tier equation). However, it will take a while for the browser to become enriched enough (through “Avalon,” Flex, Nexaweb, Curl, or whatever technologies eventually take root on the client and in the infrastructure) to be able to displace all these previous clients.
And, of course, many people are quite comfortable with their multiplicity of collaboration/messaging clients. Depending on how soon the enriched browser comes to commercial maturity and ubiquity, the collaboration “client glut” may be with us for a long, long time. I expect faster development on the server side of the equation, as WSRP enables greater federation among diverse portals, application servers, content management servers, and data repositories in serving/rendering richer content to today’s still-wimpy browsers.
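To illustrate the aggregation idea (a toy sketch of my own with made-up portlet functions, not the WSRP protocol itself), here's roughly what the portal's job amounts to: gather markup fragments from several back-end "portlets" and compose them into one page for a thin browser client:

# Illustrative sketch of portal-style aggregation (the idea behind WSRP),
# not the actual WSRP protocol: remote "portlets" return markup fragments,
# and the portal composes them into one personalized page.
from typing import Callable, Dict

def mail_portlet(user: str) -> str:
    return f"<div class='portlet'><h3>Mail</h3><p>3 unread messages for {user}</p></div>"

def calendar_portlet(user: str) -> str:
    return f"<div class='portlet'><h3>Calendar</h3><p>Next: staff meeting, 10:00</p></div>"

def news_portlet(user: str) -> str:
    return "<div class='portlet'><h3>News</h3><p>Latest RSS headlines...</p></div>"

def render_portal_page(user: str, portlets: Dict[str, Callable[[str], str]]) -> str:
    # The portal is the browser's server-side counterpart: it aggregates,
    # personalizes, and renders fragments from many back-end services.
    fragments = [render(user) for render in portlets.values()]
    return "<html><body>" + "\n".join(fragments) + "</body></html>"

page = render_portal_page("jim", {
    "mail": mail_portlet,
    "calendar": calendar_portlet,
    "news": news_portlet,
})
print(page)

Swap the local functions for remote web-service calls and you have the federation picture: the browser stays thin, and the heavy lifting of aggregation happens in the presentation tier.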
As the article says, portals are the aggregation/rendering/personalization linchpin in all this. I want server-side components (presentation, business-logic, and data tiers) to collaborate amongst themselves in serving me with the world’s information, collaboration, and application riches.
Without fail. And at a lazy glance.
Jim
Tuesday, January 11, 2005
fyi Does the open source model apply beyond software?
All:
Pointer to post:
http://blogs.osafoundation.org/mitch/000815.html
Kobielus kommentary:
I found Mitch Kapor’s recent blog entry on this topic thought-provoking, pretty much in spite of itself. I see plenty of clichés and flaws in his attempt at structured argumentation. A high-school rhetoric teacher would tear Kapor’s composition to shreds.
For starters, he detracts from his argument by overstating his thesis at the outset, and then immediately weakening it with rhetorical diffidence.
Putting his most assertive foot forward first, Kapor states: “Open source heralds a global paradigm shift in social and economic value creation of enormous proportions, the extent of which is almost completely unappreciated.” Yeah, yeah…actually, lots of people—too many, in fact—have waxed overappreciative of the open source movement and the supposed “paradigm shift” it represents.
Then he shuffles his feet with the following hedging language: “If I am right, then we are in for interesting times as the irresistible force of open source meets the immoveable object of corporate entrenchment.”
Excuse me, Mitch: “if I am right”? Didn’t you just assert that you are right? If you’re not asserting that you’re right, then why should we read any further? If you want to engage our minds in your line of argument, you need to have some fundamental confidence in your authority to speak on the topic.
In spite of all that, I found his fundamental question worth considering: “Does the open source model apply beyond software?” Specifically, he seems to be wondering aloud—without definitively answering—whether the principles of open source communities have a parallel in the functioning of biological systems. The next statement is another classic utterance in the annals of self-laceration: “If I actually knew biology, I might be able to answer that question.” Much of the rest of the piece mostly cycles around in that same “if I knew enough or thought deeply enough about this important issue then I might have something useful to contribute to this overlong blog entry” mode.
When Kapor does get to the point, it’s a bit of a disappointment. He says: “I am ignorant of current practice, but I would look at information sharing going on in biomedical research as to whether there is an active community-based dynamic going on or not, and if not, whether there could be.”
Huh? So, is he asking whether open-source principles are implemented in the scientific field of biomedical research, or in the functioning of biological systems, or both? He equates “open source” with some notion that he calls an “information commons.” He isn’t clear at all whether this “information commons” is the same as the older notion of a “public domain.” He seems to also include the criterion of “dynamics of active contribution and community participation over time” in his definition of open source, though it’s clear that an open source project can lapse into passivity and limited participation without ceasing to be open source. He also seems to contrast open source—however defined—with “private ownership of assets,” as if the two economic models are mutually exclusive; in fact, Linus Torvalds owns Linux, but distributes it freely under an open-source license.
Kapor’s focus shifts to the question of whether open-source is inconsistent with innovation. He closes by asserting: “Empirically, I would say the fostering of a common results in improvements in products over time, which we ordinarily assume requires competition to achieve.” Huh? What empirical evidence do you cite in support of this assertion? None. Also, you seem to imply that an open-source community is a competition-free zone—when, in fact, it may be an intensely competitive community of technical wizards trying to outdo each other in advancing a common project toward common objectives.
OK, now, the core question of whether open source principles operate in biological systems. What precisely are these open source principles? I would posit three:
• Uncensored creation
• Unfettered transmission
• Unconstrained consumption
We can cite these core principles without bringing such unessential notions as “community,” “commons,” “contribution,” and “ownership” into the discussion. Evolutionary biologists regard the individual—not the species—as the appropriate focus of natural selection (yeah—that’s a controversial position in some circles, but I side with the late Stephen Jay Gould in preferring to focus on the individual’s struggle for procreational success). The three “open source” principles that I posited can be viewed as “freedom-friendly” behaviors that, when adopted by all individuals in a particular population, foster continuous, unrestrained innovation. (In some sense, the shared species-wide behaviors—plus genome, body plan, habitat, and artifacts—constitute a biological “commons”).
Innovation potential is the key to open source, as it is to evolutionary success. Some software (or biological) innovations may eventually prove adaptive to new conditions, and hence be “selected” by them. As conditions change continuously, innovations will oscillate back and forth in terms of their “fitness.” Some innovations may never prove adaptive, but are still innovations, and shouldn't be squelched. Remember what I said about "uncensored creation." Innovation potential must be preserved, and censorship is the enemy of innovative potential.
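Here's a toy illustration of that point (my own sketch, not a biological model): a small population of uncensored variants, and an environment whose optimum keeps drifting, so which variant counts as "fit" keeps changing:

# Toy illustration (not a biological model): uncensored variants persist, and a
# shifting environment keeps changing which of them happens to be "fit" right now.
import math
import random

random.seed(1)
variants = [random.uniform(0.0, 1.0) for _ in range(8)]   # arbitrary trait values

def fitness(trait: float, season: int) -> float:
    # The "environment" oscillates, so the optimal trait value keeps moving.
    optimum = 0.5 + 0.4 * math.sin(season / 3.0)
    return 1.0 - abs(trait - optimum)

for season in range(6):
    best = max(variants, key=lambda t: fitness(t, season))
    print(f"season {season}: best-adapted variant has trait {best:.2f}")

# No variant is ever deleted; a trait that is out of favor now may be selected
# later, which is the argument for preserving innovation potential rather than
# censoring it.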
Freedom-friendly software-development governance structures are essential for our collective innovation and adaptation (put more simply: freedom = survival).
Hence, open source.
Jim
fyi Novell, Red Hat Eye Virtualization for Linux, OS rivals to offer open-source tools
All:
Pointer to article:
http://www.computerworld.com/hardwaretopics/hardware/server/story/0,10801,98814,00.html?source=NLT_LIN&nid=98814
Kobielus kommentary:
Open sourcing of virtualization technology makes good sense. By definition, virtualization is the decoupling of the interface from the app, the app from the OS, and the OS from the underlying hardware platform. Virtualization enables flexible, logical resource partitioning and/or aggregation across diverse platforms. True platform independence. Hardware vendors want to tie you to their metal, OS vendors to their environments, and app vendors to their code. None of them has a real interest in decoupling you from all of that legacy.
I suspect that Xen and other open-source virtualization projects will achieve the same critical deployed mass that catapulted Linux, Apache, and other seminal distributions to widespread adoption (though Xen’s licensing-related lack of support for Windows shows how the platform vendors can throw a monkey wrench into the open-source virtualization game). I don’t doubt that Microsoft, VMware, and other virtualization software vendors will do a good business. But only an open-source community can muster the diverse resources necessary to sustainably support the “virtualization software to run [dozens of] guest operating systems [and several times as many applications] on [growing numbers of commodity] production server [chassis], each with [many physical blade servers, each of which has more than a handful of] CPUs [and each of those is logically partitioned into myriad virtual machines]” scenarios, such as the one described in this article.
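To see how quickly that bracketed scenario multiplies out, here's some back-of-the-envelope arithmetic with entirely made-up numbers:

# Made-up numbers for the bracketed scale-out scenario; the point is only that
# chassis x blades x CPUs x partitions multiplies into a very large VM estate.
chassis            = 40    # commodity production server chassis
blades_per_chassis = 14    # physical blade servers per chassis
cpus_per_blade     = 4     # CPUs per blade
vms_per_cpu        = 8     # logical partitions (virtual machines) per CPU
guest_os_flavors   = 24    # distinct guest operating systems in the mix

total_vms = chassis * blades_per_chassis * cpus_per_blade * vms_per_cpu
print(f"virtual machines to manage: {total_vms}")           # 17,920
print(f"OS flavors to certify and support: {guest_os_flavors}")

# No single proprietary vendor certifies every OS/app/hardware permutation at
# this scale, which is the argument for an open-source virtualization community.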
The advance of virtualization will spell the end of forced OS and app migrations. The legacy can live forever if we can physically/logically slice-and-dice new hardware to run old software. And let's not forget grid technology: a way of aggregating old and commodity hardware into larger environments that transcend any OS/app/server. Call that the bunch-and-crunch virtualization scenario.
Virtualization technologies let old data-processing assets live on as long as they can carry their own weight, in terms of business application.
Jim
Monday, January 10, 2005
fyi Who's the Smartest of Them All? Social software uncovers the true experts
All:
Pointer to article:
http://www.computerworld.com/databasetopics/businessintelligence/datamining/story/0,10801,98765,00.html
Kobielus kommentary:
I see two theories of “knowledge” implicit in this discussion:
• Knowledge sits in people’s heads and can be harvested
• Knowledge emerges from people’s interactions and can be conjured through structured collaboration environments
Both theories are right, to a degree. Knowledge is a personal stock that we each build, and a social force field in which we radiate. Sometimes, you don’t know what you know until somebody/something elicits it from you.
What I find most fascinating about this research (Bernardo Huberman of HP) is that it directly compares the knowledge-generation results of three corporate “financial knowledge assets” (my term): de facto (i.e., informal, self-selected, emergent) teams of quasi-experts, formal (i.e., official, other-selected, predesignated) teams of recognized experts, and “an expert financial software tool.” In Huberman’s study, the informal (intra-HP) quasi-expert team beat the formal expert team and the expert-system tool in the accuracy of its predictions.
I’m not claiming that these same results will hold in all studies (and I haven’t examined Huberman’s methods in any detail). But I’ve long noticed that teams of earnest, intelligent non-experts (in any given topic) can often collectively improvise a “good enough” strategy, whereas teams of squabbling “experts” often cancel each other out by obstinately and dogmatically quashing each other’s respective approaches. In teams that involve at least one “expert,” it often seems that those “experts” are trying to dictate approaches to others. In teams that have no such dictators, people improvise an approach that splits the differences among various promising approaches, and nobody tries to bulldoze an intellectual monoculture.
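A toy numerical illustration of that intuition (my own sketch, not Huberman's methodology): independent errors tend to cancel when you average many earnest non-expert forecasts, while a single confident voice has nothing to cancel against:

# Toy illustration: averaging many moderately noisy, independent forecasts
# usually lands closer to the truth than one equally noisy "expert" forecast.
import random
import statistics

random.seed(7)
truth = 100.0                   # the quantity being predicted
trials = 2000
expert_err, crowd_err = [], []

for _ in range(trials):
    expert = truth + random.gauss(0, 10)                         # one expert's guess
    crowd = [truth + random.gauss(0, 10) for _ in range(25)]     # 25 earnest non-experts
    expert_err.append(abs(expert - truth))
    crowd_err.append(abs(statistics.mean(crowd) - truth))

print(f"mean expert error:        {statistics.mean(expert_err):.1f}")
print(f"mean crowd-average error: {statistics.mean(crowd_err):.1f}")
# Independent errors cancel when averaged, which is one reading of the HP result;
# a dominant voice that bulldozes the team forfeits that cancellation.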
Huberman calls it the “power of the implicit” (i.e., the implicit stock of latent knowledge just waiting there in people’s heads). I call it the “power of the emergent” (i.e., the pent-up, results-oriented groupthink of people complementing each others’ strengths and neutralizing each others’ weaknesses).
How do you program self-effacement, results orientation, and humility into a social network?
Jim
Saturday, January 08, 2005
fyi Do Gates Glitches Bode Ill for Microsoft?
All:
Pointer to article:
http://www.technewsworld.com/story/39471.html
Kobielus kommentary:
I don’t know why the press focuses on this stuff. Why isn’t there equal attention when a competitor’s products crash in public events? Who, besides Microsoft marketing communications personnel, really cares if one of Gates’ public demos runs into a glitch? Is it any surprise to anybody that IT is unreliable, or that Microsoft’s engineers are no smarter than the people who work for competing software companies? That hardware/software often screws up at the worst possible time? That Bill Gates isn’t a sorcerer with supernatural powers to conjure continuously flawless performance from the widgets that the elves back in his toyshop cobble together? If he’s so all-powerful, why has Microsoft’s stock price been in the dumper for years and years? He owns a lot of it, from what I heard. We all own a lot of his company’s products. Let’s see a little of his magic spread over the Microsoft creations on which we’ve built our lives.
Jim
fyi Microsoft move sends shivers through antivirus market
All:
Pointer to article:
http://www.computerworld.com/securitytopics/security/story/0,10801,98802p2,00.html
Kobielus kommentary:
You really can’t begrudge Microsoft this move, unless you’re an anti-virus or anti-spyware vendor. They absolutely had to release their own anti-virus and anti-spyware tools in order to defend their customers’ investments in Windows platforms and apps. Whether or not Microsoft has set its sights on eating Symantec et al.’s lunches is a secondary issue. The market for third-party anti-malware tools will operate under a substantial competitive shadow from now on. As a Windows user, I’m happy to get this functionality for free, and I know many other users who probably feel the same way. I don’t want Microsoft to crash this market, but I don’t want malware to crash my Microsoft Windows platforms. And I don’t think I should have to pay for basic platform protection against malware. So there you have it.
Everything in this industry is slipping down the slope to free ("Closer to Free"--wasn't that a hit for the BoDeans?). Hitting closer to home, I see the IT research and analysis (R&A) industry continuing to consolidate and shrink as IT professionals get addicted to free reports from many sources. Whether or not those reports are vendor-sponsored/slanted/biased/manipulative is a secondary issue for many tech professionals. There's a lot of good, solid factual and educational material in many of those free reports, and most of us know how to filter the marketing-BS chaff from the informative wheat.
Analysts will still make money from their insights. But that money will come less and less from R&A subscription licenses and more and more from consulting, educational seminars, and forums/conferences--in other words, from "events" of all sorts where the analyst "performs live." And from vendor-sponsored or vendor-reprinted whitepapers and presentations. And from the pittances analysts make from authoring trade-press articles and tech/business books.
It's getting to the point where IT R&A reports are a dime a dozen. Part of the noise, not the signal.
Jim
Thursday, January 06, 2005
imho Further thoughts on Cameron's identity laws
All:
I’d like to posit the following normative principles of identity, which are implicit in most of Kim Cameron’s and other people’s recent blog discussions:
• Each person is the only legitimate owner of their identity, all manifestations of that identity, and all associated identity attributes.
• Each person must be able to exert full control over all instances, attributes, disclosure, and management of their own identity (a rough sketch of this owner-controlled disclosure follows the list).
• Identity environments must be architected to enable each person to exert that control, while facilitating identity-based security functions (authentication, access control, etc.), ensuring permission-based identity-attribute sharing, and safeguarding personal privacy.
• Where each person’s identity information is concerned, any other party in the identity environments is either a registrar, steward, or consumer (not an owner) of such information.
• Other parties in the identity chain must ensure that their policies, procedures, activities, and operations don’t violate or compromise people’s control over their own identity information.
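To make the second and third principles concrete, here is a minimal sketch (invented class and names, not any real identity API or product) of owner-controlled disclosure: the subject holds the attributes, and registrars, stewards, and consumers see only what the owner's policy releases:

# Invented illustration of owner-controlled disclosure: the subject owns the
# attributes; other parties see only what the owner's policy allows.
from typing import Dict, Set

class SelfOwnedIdentity:
    def __init__(self, attributes: Dict[str, str]):
        self._attributes = attributes            # owned solely by the subject
        self._grants: Dict[str, Set[str]] = {}   # party -> attributes released to it

    def grant(self, party: str, attribute_names: Set[str]) -> None:
        """The owner, and only the owner, decides what each party may see."""
        self._grants[party] = attribute_names & self._attributes.keys()

    def disclose_to(self, party: str) -> Dict[str, str]:
        """A consuming party receives the permitted subset, nothing more."""
        allowed = self._grants.get(party, set())
        return {name: self._attributes[name] for name in allowed}

me = SelfOwnedIdentity({"name": "Jim", "email": "jim@example.com", "ssn": "redacted"})
me.grant("employer-payroll", {"name", "ssn"})
me.grant("mailing-list", {"email"})
print(me.disclose_to("mailing-list"))       # {'email': 'jim@example.com'}
print(me.disclose_to("random-marketer"))    # {} -- no grant, no disclosure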
If we accept these normative principles of identity, then all of Kim’s and everybody else’s discussions of “laws of identity” follow logically. I still haven’t seen any clear discussion of which identity systems “failed” specifically because they violated one or more of these normative principles.
Scott Lemon (http://www.freeid.org/2004/12/05.html) came closest when he attributed the failure of Novell’s “digitalMe” initiative to violation of Kim’s “Third Law of Identity” (the Fewest Parties Law of Identity: “Technical identity systems MUST be designed so the disclosure of identifying information is limited to parties having a necessary and justifiable place in a given identity relationship.”).
Scott said: “It's funny how some people at Novell really thought that Novell was somehow going to become the de facto source of identity information in the world….If you try to build the ‘one big thing in the sky’, and there is a second group of people that don't like you or trust you, then they'll build their own version. Which means there will be two. If there are two, then there will be three or more ... and then things start to go in all directions. It's funny to see this even occurring in the Open Source world. People have disagreements and fork a project ... and then it get's forked again. I'm not saying this is bad at all ... it's the natural progression. So build to embrace this!”
Did digitalMe fail because non-Novell people didn’t trust Novell to aggregate/manage their identity information? Or because most Novell projects fail to achieve their grandiose objectives?
Did Passport fail because non-Microsoft people didn’t trust Microsoft as an identity aggregator? Or because Microsoft pursued a proprietary-based approach in a world rapidly moving to purely WS-* based middleware of all types, including identity middleware (a la WS-Security and SAML)?
The ultimate universal “identity service bus” will be agnostic to normative identity-governance principles. It will support the “each person is master of own personal identity domain” governance model (a la Kim) and the “each person is a serf whose identity is issued and controlled by impersonal identity domains run by business and industry” governance model that actually describes the current and likely future world more accurately.
Yeah, I’d like to be master of my own identity domain as much as you. I have a deep personal connection to this “James Kobielus” identity—it’ll be carved on my tombstone, so it, and the reputational halo surrounding it, are dear personal assets (that's why I defend them against eclipse, tarnishment, and distortion).
But, hey, you can call me Jim.
Jim
P.S. Kim: Thanks for the kind words. I'm looking to dock my identity to a new mothership. I appreciate your help in that regard.
fyi EFF Throws Support to 'Anonymous' Internet Project
All:
Pointer to article:
www.internetnews.com/dev-news/article.php/3454521
Kobielus kommentary:
See previous post for chilling new context within which this story should be understood.
According to the article, “Proponents of [EFF’s] Tor said the tool is beneficial to surfers who may be harmed if their identity were revealed. ‘EFF understands the importance of anonymity technology for everyone -- from the average Web surfer, to journalists for community sites like Indymedia, to people living under oppressive regimes,’ said Roger Dingledine, Tor project leader, in a statement.”
The “surfers” are only half the e-commerce equation. How about the online sellers? What if those sellers are thieves fencing stolen property under assumed names? How about the legitimate owners of the stolen goods? Aren’t they being harmed by the thieves’ cloak of online anonymity?
The EFF’s goals are generally laudable. But their absolutist ideological defense of online anonymity ignores the very real and serious problem of online theft and fraud. The fraudsters need to have their real identities and nasty deeds exposed to the light of day.
Anonymity is no absolute right. If you’ve sustained damages of a commercial nature—online and/or offline—you need the tools and information to go after the perpetrator. Common thieves are everywhere.
Notice how I didn’t mention terrorists till just now?
Jim
fyi Thieves trade on quasi-anonymity to sell stolen goods via eBay
All:
Pointer to article:
http://www.washingtonpost.com/wp-dyn/articles/A51741-2005Jan5_2.html
Kobielus kommentary:
It was only a matter of time before eBay got effectively hijacked for this. Before long, people will start calling it “eFence.” Does anybody really believe that eBay, or any other online auction marketplace, can police this sort of activity?
Jim
Wednesday, January 05, 2005
fyi Motorola to build cell phones into ski jackets
All:
Pointer to article:
http://www.computerworld.com/mobiletopics/mobile/story/0,10801,98693,00.html?source=NLT_AM_B&nid=98693
Kobielus kommentary:
See previous post re creative potential of apparel/accessory-embedded handheld phone/camera technology. A few points to glean from this article:
• The outerwear builds interactive cell phone and portable music technology into snowboarding jackets, in a bid to win over twentysomethings.
• Motorola and privately held apparel maker Burton of Burlington, Vt., will jointly develop jackets, helmets, and caps, to be released in the latter half of 2005.
• The jackets will have a padded casing for a Motorola cell phone and an MP3 music player.
• They will feature a device on the sleeve that lets the wearer control incoming and outgoing calls and toggle back and forth to music, sending audio signals to removable speakers in the hood.
• The cell phones involved have built-in cameras.
• The new clothes will use Bluetooth technology, which allows devices such as headsets and computers to communicate with each other over a short range without wired connections.
• "You'll see more things coming down the pike from us where we'll be taking this technology and vetting it with other partners," said Bruce Hawver, who heads Motorola's accessories unit.
“Twenty-somethings” are the least of it. These are tools, not toys. I’m a forty-something and I want it all.
Jim
fyi World's Telecom Cos Developing Advanced Mobile Standard for Hi-Res Video
All:
Pointer to article:
www.cellular-news.com/story/11595.shtml
Kobielus kommentary:
Obviously, this will take several years: to develop the standard, to develop handsets and infrastructure that implement the standard, to roll out 3G commercial services that support the standard, and to migrate customers toward hi-res video services over 3G (or preferably, over WiFi-to-3G/GPRS roaming) environments.
This evolution is inevitable, and hi-res streaming video over public wireless will gain substantial adoption worldwide by the end of this decade. Wireless carriers’ futures depend on it. Slipping down the slope to commoditization of their underlying bearer services, they need sexy new apps/services to boost average revenue per subscriber. Hi-res video certainly qualifies as sexy (and let’s not get into the literal nature of that adjective applied to this technology and its for-sure adoption in e-porn—OK, I mentioned it, but that’s as far as I’ll go on that ramification).
Instead, let’s focus on hi-res wireless streaming video as a basic feature of most handhelds by the beginning of the next decade: the Teens. It will surely mark the beginning of the end of the camcorder market, just as digital cameras have doomed film cameras to obsolescence. Think about it: You’re on vacation, and you’re video-capturing everything you see and, not just that, you’re automatically streaming all that video in real time over the airwaves back to some persistent storage location that your 3G/WiFi wireless carrier provides you. Not just that, but you also have the option of granting others real-time access to that feed—or playback from the persistent store—so that they can see what you’re doing on vacation while you’re doing it (actually, scratch that thought—if you’re like me, back in the days of photographic slides, you couldn’t stand being forced to watch somebody else’s vacation moments, all those indistinguishable sunsets and beaches and rollercoasters and overexposed red-eye grins).
Just as important, this hi-res streaming wireless video uplink technology is the future of on-the-spot newsgathering everywhere (er…everywhere they have 3G, that is). And the future of remote job interviews. And so on and so forth. Everybody will be able to muster up an on-the-spot video uplink—of steadily improving quality--from equipment that they carry on their person.
Awesome. Which, indirectly, reminds me of my favorite film of 2004. The line between “reality” entertainment and scripted entertainment is becoming blurrier (actually, I define reality TV as “gross, idiotic, humiliating stunts performed by shameless non-actors,” but that’s beside the point). Increasingly, people expect a streak of cinema verite (oh…ancient term…somehow, though, it feels more relevant than ever)—the shaky camera, the awkward angles, the spontaneous danger of ad-hoc reacting and venting—in even the most professional video productions (TV, movies, etc.).
My favorite film of 2004 was “Before Sunset,” directed by Richard Linklater and starring Ethan Hawke and Julie Delpy (it was a sequel to the 1995 film “Before Sunrise” by the same team). The premise of both films is straightforward: a man and a woman, who’ve barely met, walk and talk and talk and walk through a romantic European capital (Vienna in the first film, Paris in the sequel), connecting deeply on a personal level, becoming ever more smitten with each other, but, mostly, just talking and walking and walking and talking, trying to beat the clock (in both movies, the Jesse/Hawke character has to leave town imminently on a scheduled/ticketed flight). The wife and I saw “Before Sunset” on DVD just before Thanksgiving, and it blew us away with the depth and reality of the conversation between the two characters—or, was it actually really a deepening personal connection between the actor and actress? The conversation/connection between Jesse/Ethan and Celine/Julie was so really real feeling that it was the most totally gripping movie I saw this past year (caveat: I haven’t yet seen “Sideways,” which I hear is also a quiet deep-personal-connection film of this sort; but I have seen “Eternal Sunshine of the Spotless Mind,” which was also extremely brilliant in this same core way).
What does this movie review have to do with hi-res streaming video over 3G? Well, it occurred to me that, once everybody’s so empowered to stream their every conversation and footstep to everybody everywhere, we’ll start to get flooded with amateur and professional “movies” of the “Before Sunset” variety. It’s the most simple low-overhead video-production premise: this is just me/us, walking and talking, talking and walking. Though “Before Sunset” was filmed by a full camera crew with an actual director and script (largely written by the performers themselves), one can easily imagine that a lo-res version could have been produced by Linklater alone, just walking ahead of Hawke and Delpy through the streets of Paris, with a camera/mic embedded in a backpack.
Oh…the boundary between reality and script in “Before Sunset”: there’s a certain point in the dramatic heart of the conversation where the guy (Jesse the character is married but estranged; Ethan the actor is divorced from Uma Thurman, with whom he has two children) expresses disappointment with his unhappy marriage. He (Jesse) makes a stunningly acerbic comment about that marriage that is identical to a comment that he (Ethan) made (in Esquire magazine) about his relationship with Uma Thurman. That broke down the reality/script wall immediately and, for me, made this movie totally absolutely real. Ballsy.
It’s a masterpiece (and the original’s great too). Excellent date movies, and fine getting-older-but-haven't-lost-the-romance-in-our-relationship movies as well. See them with a loved one, current or prospective. Go rent—don’t rip--them. Artists gotta eat.
Jim
Tuesday, January 04, 2005
lol Inadvertently funny tech acronyms
All:
I just ran across the following acronym in an e-mail newsletter (DEMOletter Weekly Edition):
* "END NOTES: PMP-ING WILL BE EASY: At this week's Consumer Electronics Show, get ready for an onslaught of portable media players (PMP), which lets consumers take their digital content (including TV shows and other video) on the go."
Just sound it out in the normal way. Anybody wanna pimp my ride? Pimp my handheld? Ah, yes.
But my all-time favorite inadvertently hilarious tech acronym award goes to (stop me if you've heard this one before):
* Segway Human Transporter (SHT)
I actually rode a SHT in a hallway at Microsoft in Redmond a few years ago. Loved it. I also saw people on the streets there riding their personal SHTs. I couldn't help but think of Roman centurions rushing forth to battle in their stand-up carriages, pulled by virtual steeds. It felt so exquisitely ridiculous and Dungeons & Dragons and nerd-perfect.
It also put me in mind of the AMC Gremlin. One of those painfully embarrassing vehicular memories from the 70s that seemed, at the time, kinda OK cuz it was fuel-efficient. But it was a half-chopped-off toy semi-car. Likewise, the SHT is essentially a stand-up golf cart. I don't know about you, but I'd rather sit. I'd rather not be exposed to the elements. I'd rather not block the wind. I'd rather be seated and sleek and aerodynamic. Give me a Vespa any day.
I'm from Detroit originally, and I have long memories of engineering-push macho muscle-car contraptions that failed miserably because they were ungainly (DeLorean, anybody?). The SHT was a computer geek's nouveau virtual-DeLorean of a fiasco-mobile.
Oh, yes. The SHT ran on hydrogen? (Actually, it's battery-powered, but work with me here.) Didn't the Hindenburg run on hydrogen? Oh, the humanity!
Jim
fyi Linux continues inroads into server market
All:
Pointer to article:
http://www.computerworld.com/softwaretopics/os/linux/story/0,10801,98681,00.html?source=NLT_LIN&nid=98681
Kobielus kommentary:
My head did a quick compute on these statements from the article:
• "The growth of blade servers is also an important contributing factor [in Linux’s adoption]."
• “[H]alf of all blade servers shipped today run Linux.”
• “Initial users of blade servers were those who embrace new technology first and tended to be in the high-performance computing arena, which is a stronghold of Linux.”
• “The increased scalability of the Linux kernel Version 2.6 has resulted in more multiprocessor Linux server shipments, the report said, and dual-processor systems are now the predominant configuration, followed by uniprocessors and four-processor systems.”
It’s clear to me, from my research, that blade servers are becoming the primary proving ground for grid-computing architectures in enterprises. Forget the globe-spanning grids that search for signs of intelligent life in the universe and so forth. “They’re out there” (OK, dumb pun intended), but they’re research tools rather than corporate infrastructure. Likewise, don’t think that data-center grids will obsolete traditional server clusters any time soon, though, clearly, server clusters are becoming more dynamic in their real-time allocation of tasks among available CPUs, storage resources, and so forth.
No, focus instead on grids that are entirely contained within a bladed computing chassis, yoking the many blades (each its own “stand-alone” server) into a virtual supercomputer. The popular conception of grids involves massive “resource aggregation” among many physically separate machines over the wide or local area. But the increasing reality of grid deployments will involve more flexible “resource partitioning,” both through “hard blades” (i.e., separate slip-in server boards) and “soft blades” (i.e., virtual machines that slice up the resources on each hard blade into logically distinct computing nodes).
By concentrating computing resources into a single chassis, resource-partitioning intra-chassis grids are inherently more manageable than scattered, distributed, resource-aggregation grids. Expect more intra-chassis grids to take root in data centers everywhere. Expect most of them to incorporate hard-bladed and soft-bladed Linux environments (anywhere from dozens to hundreds to thousands of Linux instances per chassis). Expect them to usurp more enterprise high-performance computing applications. Expect those grids to continue scaling and screaming faster and faster, through “scale-in” (intra-chassis Linux partitioning) and “scale-out” (extra-chassis Linux aggregation), as the sketch below tries to make concrete.
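If it helps to make that scale-in/scale-out distinction concrete, here's a minimal Python sketch. It's purely illustrative: the class names (HardBlade, SoftBlade, Chassis) are my own hypothetical labels, not any vendor's management API. It just models a chassis of dual-processor blades, partitions each blade into VM slices (scale-in), then pools the slices across chassis (scale-out).

# Toy model only: hypothetical names, not a real grid-management API.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SoftBlade:             # a VM slice carved out of a physical blade
    name: str
    cpus: int
    ram_gb: int

@dataclass
class HardBlade:             # a slip-in server board running its own Linux instance
    slot: int
    cpus: int = 2            # dual-processor: the predominant config per the article
    ram_gb: int = 8
    partitions: List[SoftBlade] = field(default_factory=list)

    def scale_in(self, slices: int) -> None:
        # partition this blade's resources into equal VM slices
        for i in range(slices):
            self.partitions.append(SoftBlade(
                f"slot{self.slot}-vm{i}",
                max(1, self.cpus // slices),
                max(1, self.ram_gb // slices)))

@dataclass
class Chassis:
    blades: List[HardBlade]
    def nodes(self) -> List[SoftBlade]:
        return [p for b in self.blades for p in b.partitions]

def scale_out(chassis_list: List[Chassis]) -> List[SoftBlade]:
    # aggregate every logical node across chassis into one grid-wide pool
    return [n for c in chassis_list for n in c.nodes()]

chassis = Chassis([HardBlade(slot=i) for i in range(14)])   # assume a 14-slot chassis
for blade in chassis.blades:
    blade.scale_in(slices=4)                                # 4 soft blades per hard blade
print(len(scale_out([chassis])), "Linux nodes in one chassis-contained grid")

Add a second chassis to that scale_out() call and you have extra-chassis aggregation; that's the whole conceptual difference between the two growth paths.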
It's undeniable. Blades are growing ever sharper.
Jim
Monday, January 03, 2005
poem OO
OO
Flatter deities
no more with
praise and prostration—
they know we’re low.
Pray to the odds for
elevation.
Bend inward with the
roll of dice
and
--ow—
embrace.
fyi REST vs. SOAP: Which SOA Is More Popular?
All:
Pointer to article:
http://billburnham.blogs.com/burnhamsbeat/2004/12/rest_vs_soap_wh.html
Kobielus kommentary:
This blog item would have been more useful if the author had gotten some basic points straight.
Most significantly, he fuzzes the distinction between SOA and REST. Both are architectural styles for designing distributed Web services, but they differ fundamentally in how they enable ubiquitous interoperability.
Under SOA, endpoints expose their interface semantics as service contracts, and may define any arbitrary interface semantics they wish, so long as those semantics are described in platform-independent syntaxes such as XML Schema, WSDL, and WS-Policy. Under REST, endpoints rely exclusively on HTTP’s limited interface semantics for creating, retrieving, updating, deleting, and obtaining metadata on resources (i.e., POST, GET, PUT, DELETE, HEAD). REST doesn’t support arbitrary or application-specific protocol interface semantics. So, contrary to the post’s headline, REST is not an SOA. Nor is SOAP, for that matter: it’s a middleware protocol that is often used in SOA-based environments (along with XML, WSDL, UDDI, etc.) for any-to-any interoperability (RPC-based or document-based).
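To see the difference in the wire-level idiom, here's a rough Python sketch. The endpoints, the resource URI, and the GetBookDetails operation are all made up for illustration, so don't point this at a real service. The REST request gets its verb from HTTP itself; the SOAP request is always an HTTP POST whose real "verb" is named inside an XML envelope governed by the service's own WSDL contract.

# Illustrative only: hypothetical endpoints and operation names.
import urllib.request

# REST style: HTTP's own verb (GET) against a resource URI.
rest_req = urllib.request.Request(
    "http://example.com/books/0596000278", method="GET")

# SOAP style: always an HTTP POST; the application verb (GetBookDetails)
# lives inside the envelope, as described by the service's WSDL contract.
soap_body = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetBookDetails xmlns="http://example.com/bookservice">
      <isbn>0596000278</isbn>
    </GetBookDetails>
  </soap:Body>
</soap:Envelope>"""
soap_req = urllib.request.Request(
    "http://example.com/soap/bookservice",
    data=soap_body.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8",
             "SOAPAction": "http://example.com/bookservice/GetBookDetails"},
    method="POST")

# urllib.request.urlopen(rest_req) or urlopen(soap_req) would actually send
# these; omitted here because the endpoints above don't exist.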
Essentially, REST refers to the WWW (which is, of course, ubiquitous), but with some additional architectural constraints (it’s hard, from reading this article, to know whether all the constraints are satisfied in the Amazon example). In addition to the strict reliance on HTTP’s protocol semantics, REST doesn’t support stateful conversations between agents and resources. Under REST, all state information must be contained in the self-describing messages interchanged between agents and resources. Other REST features include a separation of the presentation tier from the business-logic and data tiers, caching of resource representations in intermediary caches, and on-demand code provisioning (code-on-demand).
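A quick sketch of that statelessness constraint, again with a made-up URL: every request carries its own state (here, a paging cursor) instead of leaning on a server-side session, and the conditional-GET headers are what let intermediary caches do their job.

# Illustrative only: example.com and the /orders resource are hypothetical.
import urllib.request
from typing import Optional

def fetch_orders(cursor: str, etag: Optional[str] = None) -> urllib.request.Request:
    # All conversational state rides in the self-describing request itself;
    # the server keeps no session between calls.
    req = urllib.request.Request(
        f"http://example.com/orders?after={cursor}&limit=50", method="GET")
    if etag:
        # Conditional GET: an intermediary cache (or the origin) can answer
        # with 304 Not Modified instead of re-sending the representation.
        req.add_header("If-None-Match", etag)
    return req

first = fetch_orders(cursor="0")
# The follow-up request repeats whatever state it needs; nothing is remembered server-side.
second = fetch_orders(cursor="50", etag='"abc123"')
print(second.full_url, dict(second.header_items()))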
I don’t take issue with his basic point that REST, as a Web services architectural style facilitated by the ubiquity of the WWW, is more common than SOA. REST is a simpler interoperability style that makes do with the basic ingredients of the WWW: URI/URL, HTTP, HTML, and DNS (not a shred of XML in that basic equation). REST is therefore the lowest-common-denominator interoperability style, the path of least resistance.
But REST is not necessarily a competitor to the richer, more flexible, more robust world of SOA (based on XML, SOAP, WSDL, UDDI, and so forth). Like the WWW, REST is more purely a style for presentation-tier interoperability. For complex integration within and between all the tiers (presentation, business-logic, and data) of your distributed environment, SOA remains the more appropriate style.
Jim