Friday, October 28, 2005

poem Tickle

TICKLE

Touch tingles not much
but enough lingers your eyes
suggest your fingers.

fyi Spammers exploit bird flu fears

All:

Pointer to article:
www.vnunet.com/2144878

Kobielus kommentary:
I hope the anti-spam appliance vendors are keeping a master historical archive of spam tagged by time and topic.

That would be an invaluable tool for future historians to compile a profile of the evolving FUDload in the zeitgeist of these times. The “bird flu” is just the latest example of spammers’ ingenious exploitation of the never-ending human tendency to imprint new neuroses on the nervous system.

Actually, I’d like to see a “FUD registry” continually updated in real time, culled from the current spamload. That would clue me in to which cultural scareforces are primarily transient spooks, so I could steel/shelter my heart accordingly.
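
A registry like that could be sketched, very loosely, as a running tally of scare topics extracted from spam subject lines. Everything below is invented for illustration: the topic taxonomy, the keywords, and the sample subjects; a real registry would need continual human curation.

```python
from collections import Counter
from datetime import date

# Hypothetical scare-topic taxonomy; keywords are invented examples.
FUD_TOPICS = {
    "bird flu": ["bird flu", "avian", "h5n1"],
    "identity theft": ["identity theft", "stolen identity"],
    "market crash": ["crash", "meltdown"],
}

def tag_topics(subject: str) -> list[str]:
    """Return the scare topics mentioned in one spam subject line."""
    s = subject.lower()
    return [topic for topic, keys in FUD_TOPICS.items()
            if any(k in s for k in keys)]

def fud_registry(spamload: list[tuple[date, str]]) -> Counter:
    """Tally scare topics across a batch of (date, subject) spam records."""
    registry = Counter()
    for when, subject in spamload:
        for topic in tag_topics(subject):
            registry[(when, topic)] += 1
    return registry
```

Feed it the day’s spamload and the hot FUD topics of the moment fall out of the counts.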

And instruct my brain to filter out these phantoms flooding in through all media and human interactions. Not just from spam. If you can call that “media” or “human interaction.”

Which, shockingly, spam resembles too closely to ignore entirely. The greater the spamload, the harder it is for the recipient to do a mental Turing test that distinguishes human-generated from demon-generated messages. It all starts to blur into disembodied voices and forces, many of which are manipulators, howling at you continually to keep your nerves on edge.

Hmm…Halloween’s on Monday….I didn’t realize this post had a seasonal tie-in. Inadvertent. But there you have it.

Jim

Thursday, October 27, 2005

fyi Why Wikipedia isn't like Linux

All:

Pointer to article: http://www.theregister.co.uk/2005/10/27/wikipedia_britannica_and_linux/

Kobielus kommentary:
Actually, this article hinges on another comparison: Wikipedia vs. Encyclopedia Britannica. Or Wikipedia vs. all traditional encyclopedias. Not just Wikipedia vs. Linux and other open source projects.

All of those comparisons are bogus, but especially those vis-à-vis traditional encyclopedias. And the “bogosity” (stick that in your Funk and Wagnalls) stems from presumptuous assumptions being made by everybody in this discussion: that an “encyclopedia” is or should be some authoritative source of all knowledge; and that one encyclopedia is all you should ever need for all your information needs.

Hey, folks, when’s the last time you consulted an encyclopedia of any sort when doing research? Or, when’s the last time you made an encyclopedia the first and last place you consulted when doing research? And it’s not an issue of Google and other search engines supplanting traditional reference works. I’m asking you to cast your minds back before the days of the ubiquitous Web, back when people visited physical locations known as “libraries,” and when people actually bought physical items called “books” that they kept in their homes.

Even back then, in the day, an encyclopedia, if you used it, was just the place where you dipped your information-seeking toes in the water of collective human knowledge. It was the place you started to familiarize yourself with some un- or semi-familiar topic, before moving on to other books, journals, magazines, newspapers, microfilm, and other materials where you could go in depth.

And, even if you, sitting in a physical public library, consulted an encyclopedia, you were likely to consult two or more encyclopedias—assuming you were in a library that had the budget and need for alternative encyclopedias. There has always been competition in the encyclopedia market. Many of us growing up bought the reader-friendly World Book series for our homes, but, when we went to high school or college, also consumed Britannica, F&W, and whatever was available.

Now there’s Wikipedia, which has the great advantage of being free and being continually refreshed from all over the planet. Yeah, it’s got its strengths and weaknesses. But so do traditional encyclopedias. World Book, for example, is too glossy and surface-oriented. Encyclopedia Britannica is too ploddingly academic—and set in a tiny typeface.

There’s room for all these reference works. If you’re a serious researcher—or even a semi-clued-in one—you’ll cycle through all these sources as need be. But you’ll be even more likely just to Google it and follow your queries through link after link of cross-references.

Which is how, come to think of it, the best researchers have navigated through the hardcover/hardcopy world of traditional encyclopedias, in which the cross-references between sections have always been just as important as the content of any particular section. The cross-references are the semantic map of the overall knowledge space, as managed by the editors of the encyclopedia.

That’s all there in Wikipedia too, but Web-based from the get-go. Developed from scratch in a reference medium where cross-referencing is part of its fundamental DNA.
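
That semantic map is, in effect, a directed graph of articles, and a researcher cycling through it is doing a graph traversal. Here’s a minimal sketch; the article names and links are invented, not real encyclopedia entries.

```python
from collections import deque

# Hypothetical cross-reference graph: article -> articles it points to.
cross_refs = {
    "Encyclopedia": ["Reference work", "Wikipedia"],
    "Wikipedia": ["Wiki", "Open content"],
    "Reference work": ["Library"],
    "Wiki": [],
    "Open content": ["Open source"],
    "Library": [],
    "Open source": [],
}

def follow_refs(start: str, depth: int) -> set[str]:
    """Collect every article reachable within `depth` cross-reference hops."""
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        article, d = frontier.popleft()
        if d == depth:
            continue  # don't expand past the hop limit
        for ref in cross_refs.get(article, []):
            if ref not in seen:
                seen.add(ref)
                frontier.append((ref, d + 1))
    return seen
```

Whether the hops happen in hyperlinks or in “see also” lines at the bottom of a hardcover page, the navigation is the same.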

So, IMHO, Wikipedia’s OK. But it’s not the authoritative source of all knowledge. Nor is any traditional encyclopedia. Nor is any particular library or other collection of informational materials.

No, IMHO, I’m the authoritative source on any particular topic, as long as I can gather all the relevant materials I need, from wherever, and sift/analyze/synthesize them into new knowledge that I can share with the world. Each of us is the authoritative source of whatever little knowledge-building project we choose to undertake at any time.

In the process of becoming autodidacts, we must also become omnididacts (made that one up too).

Which is the Wikipedia dynamic, isn’t it?

Jim

Thursday, October 20, 2005

fyi Still looking for a definition for 'role'

All:

Pointer to Dave Kearns’ blog (couldn’t find URL pointing to this particular article, which appeared in Dave’s Network World Newsletter on Identity Management, October 19, 2005, and by the way Dave, thanks for pointing back to my blog…have you noticed that I’ve been slacking off in recent weeks in posting new stuff…thanks for your continued patronage everybody…and thanks for indulging my poetic musings… and how’s everything with you these days and….oh my gosh, this is going on too long and I need to launch into my kommentary while the thought is fresh so here’s the pointer to your site and by the way always love your stuff etc etc blah blah blah): http://vquill.com/

Kobielus kommentary:
I’m glad that Dave cycled back to the discussion of what a role is. I’ve been storing up thoughts on roles like a squirrel his acorns. Dave’s search for an “ontology of identity management” resonates pretty strongly with me. I’m always searching for an ontology on every topic I encounter. The base bedrock simple powerful fundamental representation of the problem domain from which all more nuanced higher-level complex representations may be unfurled, and to which they may be reduced.

Anyway, here are those thoughts on roles. I’ll try to keep them succinct:

Role engineering is the black art of IdM. Almost every IdM project quickly launches a role-engineering exercise. Traditionally, roles have served as a convenient construct for simplifying assignment of permission sets to individual users, per their stable responsibilities/functions within an organization, process, or project. One of the primary benefits of roles—from an IdM/permission-management standpoint—is that the privileges associated with a role can be managed in a single role object in the directory, without having to change the permissions of every single user who belongs to that role (of which there could be thousands, each with myriad permissions).
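
That single-point-of-management benefit is easy to show in miniature. This is a toy RBAC model, not any particular directory product; the role and permission names are invented.

```python
# Toy role-based access control: permissions hang off the role object,
# so one change to the role covers all of its members at once.
class Role:
    def __init__(self, name, permissions):
        self.name = name
        self.permissions = set(permissions)

class User:
    def __init__(self, name, roles):
        self.name = name
        self.roles = roles

    def can(self, permission):
        # A user holds a permission if any of their roles grants it.
        return any(permission in r.permissions for r in self.roles)

# One role shared by (potentially thousands of) users.
analyst = Role("analyst", {"read:reports"})
users = [User(f"user{i}", [analyst]) for i in range(1000)]

# Granting a new privilege touches exactly one object...
analyst.permissions.add("write:reports")

# ...and every member picks it up immediately, no per-user edits.
assert all(u.can("write:reports") for u in users)
```

Compare that one-line grant with editing a thousand user records individually, and the appeal of the construct is obvious.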

But role engineering is difficult to implement effectively. Partly that has to do with the fact that roles are sometimes difficult to generalize at an abstraction level sufficient to lump a significant number of like users together. When examining the diversity of real-world roles played by various individuals, one is sometimes tempted to create a unique role for each person (i.e., the “role of Jim,” encompassing the unique set of stuff that Jim does in our organization). Other problems with role engineering are that many people play many roles; that those roles are evolving continually; and those roles layer and interact with each other in diverse ways that are often specific to each individual.

Roles are multilayer constructs that are difficult to model clearly, and difficult to manage within an IdM user management environment. Per what Ed Zou of Bridgestream is quoted as saying in Kearns’ article: “Business units use [roles] to represent organization structure, responsibility, span of control and authority. For example, if Jane in the marketing department reports to the CEO, supports key sales initiatives at major accounts, manages three staff members, and participates in the revenue recognition team, she has four different business roles. Yet, most likely only two of these roles can be found in the directory: the direct reporting structure and the formal department that she belongs to. The other dimensions are difficult for directories to include and even harder to maintain. Her role changes and thus must be defined to be sensitive to business context, e.g., in-context roles."

Kearns ends his brief article with a call for new terms that the industry can use to describe this multi-level contextualization of roles. I’d like to propose just such a framework.

Essentially, a role is the contextual coordinate system within which an identity is described, qualified, characterized, and classified so that it can be managed effectively. In this regard, we can view any role as existing within a three-dimensional coordinate space:

• Place: This is the notion, highlighted above, of a role being defined in relation to the identity’s contextualization within a “direct reporting structure and formal department.” This is where “roles” and “groups” essentially overlap in semantic scope. For example, my role in Exostar is “senior technical systems analyst,” which is defined within the context of the “project office,” which is defined within the context of the org supervised by the “chief technology officer,” which is defined with respect to the entire org under the supervision and control of the “chief executive officer.” In this context, “place” simply refers to a standing persistent grouping of identities under an organization. A “project” (e.g., “revenue recognition team”) is another type of “place.”
• Process: This is role in the workflow context: somebody’s position in a flow of tasks, steps, documents, deliverables, etc. In the excerpt above, this corresponds to “supports key sales initiatives.” In that context, someone’s role may be “provides sales engineering support upon request” or whatever. In a document processing workflow, one’s role might be “document originator,” “document reviewer,” “document approver,” and so forth.
• Permission: This is role in the access control context: some stable grouping of permissions to access some set of apps, data, or other resources—a grouping that is associated with particular identities based on various criteria. Quite often, IdM professionals refer to role in this permission-management context. The NIST subject-object role model is built on it, as are the role-based access control features in many apps: read, write, modify, delete, append comments, and other privileges.

So, to sum up, the concept of roles may be applied in any or all of three management contexts:

• Place management
• Process management
• Permission management

Or, more succinctly: A role is an identity defined in its full governance context.
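
That three-dimensional coordinate space could be modeled, as a rough sketch, like this. The field values below are invented examples riffing on the Jane scenario quoted earlier, not anything from a real directory schema.

```python
from dataclasses import dataclass, field

@dataclass
class Role:
    """A role as a coordinate in (place, process, permission) space."""
    place: str                      # standing grouping: org unit or project
    process: str                    # position in a workflow
    permissions: frozenset = field(default_factory=frozenset)  # access rights

# One identity may occupy several coordinates at once.
jane_roles = [
    Role(place="marketing", process="reports to CEO"),
    Role(place="revenue recognition team", process="team member"),
    Role(place="sales", process="supports key sales initiatives",
         permissions=frozenset({"read:accounts"})),
]

def all_permissions(roles):
    """Union the permission dimension across an identity's roles."""
    return frozenset().union(*(r.permissions for r in roles))
```

The point of the model: the directory typically captures only the place axis, while the process and permission axes are exactly the “in-context roles” that are hard to maintain.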

Just a little ol’ ontology I’ve been carrying in my head for a while. A kobelian coordinate system. Use it if you find a use. My ontology.

Jim

Thursday, October 13, 2005

fyi Malware Naming Plan Gets Chilly Reception

All:

Pointer to article:
http://www.eweek.com/article2/0,1895,1868236,00.asp

Kobielus kommentary:
Yes, yes…Beelzebub has a thousand names, and every one inflicts a private pain.

CME’s aims are laudable. But objections to the plan are grounded in realism. How can any single surveillance center tag all new malware signatures uniquely, quickly, and definitively? How can the diverse anti-malware vendors’ proprietary naming schemes be harmonized quickly around a common threat registry? How can a centralized naming registrar move quickly enough to provide the anti-malware community with the ammunition necessary to ensure a common, continuing defense against all such threats? No easy chore, as this article notes. Not as fine-grained as naming every raindrop that falls from the heavens, but not as coarse-grained as assigning nicknames to hurricanes.

Clearly, this is a federated identity management problem, where the entities being identified are programmatic constructs that were designed to evade identification, and the identity providers (IdPs) don’t want to be slaved to a single master registry. In the anti-malware space, what we have are diverse IdPs—the anti-malware vendors—each with their own identity registration practices. As regards the CME initiative, the general industry goal is to federate everybody’s malware identity registration schemes to a common identity hub, possibly managed by MITRE. The general concern is that the CME hub won’t be able to move fast enough to uniquely identify malware species and signatures to allow the “federated” anti-malware vendors to organize more effective common countermeasures.

I’d like to suggest that the CME be regarded not as a common malware registry but as a meta-registry—essentially, a meta-directory responsible for tracking the correspondences between diverse anti-malware vendors’ nomenclatures. Just as wars are fought by those entrenched on the front lines, the first defense against all incoming threats will continue to be the anti-malware vendors’ own round-the-clock operations centers, which will need to automatically forward all reports to the common industry-wide nerve center for further analysis. As an always-on-call resource, the common CME anti-malware nerve center will help the industry regroup to fine-tune their defenses. It will also help each vendor to determine “lessons learned” elsewhere on new threats, and to correlate divergent nomenclatures, to eliminate unnecessary cross-vendor confusion.
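
A meta-registry in that sense is just an alias table keyed by a common identifier. Here’s a sketch of the shape of it; the CME IDs, vendor names, and malware names below are all invented, not real CME assignments.

```python
# Hypothetical meta-registry: one common CME identifier per threat,
# mapped to each vendor's own name for it.
cme_registry = {
    "CME-100": {
        "VendorA": "W32.Examplify.A",
        "VendorB": "Win32/Examplify.gen",
    },
}

def register_alias(cme_id, vendor, vendor_name):
    """Record (or update) one vendor's name for a CME-identified threat."""
    cme_registry.setdefault(cme_id, {})[vendor] = vendor_name

def resolve(vendor, vendor_name):
    """Find the common CME ID behind a vendor-specific name, if registered."""
    for cme_id, aliases in cme_registry.items():
        if aliases.get(vendor) == vendor_name:
            return cme_id
    return None
```

The vendors keep naming on their own front lines at their own speed; the hub only has to reconcile names after the fact, which is a far less time-critical job than being the sole namer.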

How will anti-malware vendors compete when they’re sharing all their latest threat intelligence with the world? Just the same way that continuous news channels compete—by boasting that they “got there first” with breaking coverage of the new threat, and with effective analysis geared at providing “news you can use.”

News you can use to protect your—or your customers’—precious computers from the next cyberplague.

Jim

Thursday, October 06, 2005

poem Thisaway

THISAWAY

…an Architecture's
bare Scaffolding: Void for fresh
Constellation Dust.

one inGenious Way
to conjure a young Kosmos:
Trinity—invoKe!

the three strongest Beams—
Bones of any bright Structure:
Dreams Drawings Stuff and….

fyi GridWorld: Standards adoption key to grid computing growth

All:

Pointer to article:
http://www.computerworld.com/hardwaretopics/hardware/gridcomputing/story/0,10801,105180,00.html?source=NLT_PM&nid=105180

Kobielus kommentary:
This article gets it plain wrong.

First off, there have been standards in the grid space for several years. The OGSA/OGSI standards that the article describes as “emerging” in fact emerged long ago. OGSA/OGSI has been implemented widely, though not universally, throughout the grid market in various ways. Indeed, OGSA/OGSI is well into its decline, having been superseded by a new set of WS-* grid standards under the WS-Resource Framework (WSRF) umbrella.

Secondly, standards adoption isn’t what will drive the grid market. If standards were so important to grid adoption, vendors would have long since rushed to implement OGSA/OGSI and/or WSRF universally, to define industry-standard implementation profiles for those standards, and to hold interoperability bake-offs far and wide. If the latent demand for grid computing—as one of many high-performance computing architectures—were so strong, why isn’t Microsoft, for example, flooding industry airwaves with its grid go-to-market message? Does it even have a grid go-to-market message?

Thirdly, the so-called “dearth of [grid] business applications” is nonsense. Grid is just the next generation of massively parallel processing, clustering, and high-availability computing, and it serves the same core enterprise requirements. Grids aren’t an “architecture in search of applications”—they’re an architecture looking for applications that are ready to migrate from older approaches that address the same core requirements. And grids are making persistent inroads into the enterprise market, despite what this article claims. Just talk to any grid middleware vendor, and they’ll give you a lot of juicy enterprise case studies. (Grid middleware—“griddleware”?—pancakes, Canadian bacon—it’s morning, I’m hungry.)

Increasing grid adoption depends on a long-running, inexorable architectural shift toward virtualization. Fundamentally, grid is a virtualization approach that abstracts applications from the distributed hardware substrate of processors, storage devices, and other devices across which the apps execute. It’s growing in importance and adoption alongside other virtualization approaches, such as service-oriented architecture (SOA). Actually, you can regard grid as an implementation of SOA, where the “service” is a pool of unseen but seemingly unlimited computing power. A distributed processing “blackbox” that encompasses an entire constellation of online resources. The time-honored time-sharing CPU-sponging Xanadu dream that refuses to die.

It will take time—perhaps another 10 years—before distributed computing environments shift decisively toward ubiquitous virtualization, SOA, and grid computing. But it will happen. The need for cross-platform architectural flexibility, scalability, performance, and availability demands it.

By the end of this decade, even Microsoft will realize its need for a strong grid message. Perhaps in “Black(box)comb.”

Jim

Tuesday, October 04, 2005

fyi IT groups push Congress to raise H-1B visa limits

All:

Pointer to article:
http://www.computerworld.com/governmenttopics/government/policy/story/0,10801,105093,00.html?source=NLT_AM&nid=105093

Kobielus kommentary:
The US seriously needs to get rid of these visa limits, and of all barriers to immigration. The only barrier to entry into this country should be the immigrant’s prior background or likely activities in this country. Obviously, a criminal record or intent should disqualify somebody from entering this country.

But why deny entry to law-abiding people of any nationality, and why eject them once here, if they can demonstrate continued good behavior? Immigration is a never-ending source of strength for our country. Immigrants are usually extraordinarily motivated to succeed in this their new country. And I’ll bet that a substantial percentage of immigrants—especially those with high-tech backgrounds—come here to start their own businesses. In other words, immigrants drive our economic growth, either as entrepreneurs or as especially hard-working employees.

I think it’s subtle racism to create a separate visa quota for Australians. What’s so especially virtuous about that particular nationality that it should warrant our giving them a special dispensation? Their native English speaking? Most educated Indians speak English. In fact, most educated people of many nationalities have a decent fluency in English. Prior language skills of a national or ethnic group shouldn’t give them an advantage in seeking US citizenship. Nor should skin pigmentation.

We should open the door to smart, motivated, law-abiding people from all over the globe. We need to drop this fortress-America mentality.

Believe it or not, the world doesn’t hate us. They want to be us.

Jim