Tuesday, February 28, 2006

imho DRM7

All:

Found content: http://www.itarchitectmag.com/shared/article/showArticle.jhtml;jsessionid=1QXBNWQSDF4GUQSNDBECKH0CJUMEKJVN?articleId=174400783

My take:

DRM is another name for cryptographic containers that wrap content in persistent policies under the control of the content’s creator and/or owner. It’s also another name for whatever bad dream all that crypto conjures in your fevered imagination. For some folks, it’s hard to look at crypto without a post-9/11 night sweat: as another type of dangerous munition that may be wielded by swaggering world-dominating maniacs, unless we find and defang them promptly.

Think of all of the content that’s created in Microsoft’s software products. I’m writing this blogpost in Microsoft Word 2002 (my other two computers have two more current versions of that program). When I’m done writing this, I’ll copy/paste it into an HTML e-form at http://www.blogger.com/ by means of my Microsoft Internet Explorer 6.0 browser. If I get tired before I’m finished writing, and before I post, I’ll e-mail the unfinished text from my Microsoft Outlook Express client through my Microsoft HotMail account to another e-mail account (on Microsoft Exchange) that I’ll access in the morning through Microsoft Outlook. Of course, all of that software is running on the several versions of Microsoft Windows that I run on my various computers.

Think of all the potential for Microsoft to wrap its DRM tentacles around my content and your content—or rather, to give us the tools to wrap our personal tentacles around our own content, but with Microsoft-proprietary DRM technologies, including (especially) Windows Rights Management Services. The author (Michel Labelle) of the referenced article (“Microsoft’s DRM Conspiracy”) thinks a bit too much about it, or so it seems. Doesn’t the following article excerpt sound perhaps just a wee bit alarmist?:

  • “Microsoft has been quietly introducing a number of dubious technologies. First came Windows Rights Management Services (RMS). Digital Rights Management (DRM) is always bad, and it just doesn’t go well with business data. Losing the keys to the DRM store could lock an organization out of all its data….Vista goes so far as to prevent you from viewing DRM content unless you’re using a DRM-equipped monitor….It doesn’t take much of a leap of faith to see that Microsoft is setting us up as a captive market….Once a business goes down the DRM route for security its corporate data store, there’s no getting out. It will be impossible to effectively extract intellectual property that’s locked into a Microsoft proprietary format with Microsoft-specific DRM technology….Unless we heed the alarm, this could turn into a real nightmare.”

Now, I haven’t investigated Microsoft RMS in any great detail, but I take issue with several points in Labelle’s rambling argument.

First, DRM is not always bad—in fact, it’s usually a good thing—especially in the corporate world that is Labelle’s focus. DRM is another name for content and/or code license management technology. As such, DRM doesn’t differ in principle from the discretionary access controls supported in many operating environments, database management systems, and document repositories. Call it discretionary rights management.

Second, DRM, in the corporate world, isn’t usually implemented as a centralized “store” that has a specific set of “keys” that are in danger of being lost and thereby locking away all corporate data in perpetual irretrievable cold storage. DRM is a set of technologies that—depending on approach, vendor, and product—relies on various cryptographic techniques (involving asymmetric/public and/or symmetric/secret keys). More to the point, only an insane corporate database, document, or content manager would centralize all DRM-protected content and DRM keys, and then fail to back up any of this content or crypto material in off-site storage.
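
To make “cryptographic container” concrete, here’s a minimal sketch in Python (using the pyca/cryptography library’s Fernet recipe; the container layout and policy fields are my own invention, not any vendor’s format) of content wrapped with a persistent policy, plus the key material that any sane administrator would escrow and back up:

  from cryptography.fernet import Fernet

  def wrap_content(plaintext: bytes, policy: dict):
      """Wrap content in a 'container': ciphertext plus a persistent policy."""
      key = Fernet.generate_key()
      container = {
          "policy": policy,  # the policy travels with the content
          "ciphertext": Fernet(key).encrypt(plaintext).decode(),
      }
      return container, key  # the key goes to escrow and off-site backup

  def unwrap_content(container: dict, key: bytes) -> bytes:
      """Recover the plaintext, given the container and its escrowed key."""
      return Fernet(key).decrypt(container["ciphertext"].encode())

  container, key = wrap_content(
      b"Q3 forecast draft",
      {"owner": "jim", "allow": ["view"], "deny": ["print", "forward"]},
  )
  print(unwrap_content(container, key))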

Third, what is a “DRM-equipped monitor,” how in the world would it operate, and why exactly would Microsoft design the next version of its client OS to prevent somebody from viewing some DRM-protected content if they don’t happen to be using this strange new display technology? That’s the first I’ve ever heard of a display technology built to selectively opaque data that has been retrieved from storage, loaded into memory, and processed by that node’s CPU. Is it sort of like the “V-chip”? Will it be factory-equipped to conceal data that’s embarrassing to Microsoft?

Fourth, what’s this jazz about DRM as a vortex that sucks businesses down to some hideous abyss, never to be seen again? You can, of course, use DRM technologies to unlock/liberate data and display/store it in the clear—if that’s the policy you choose for a particular piece of DRM-protected data—or for an entire data set. You can liberate your data from Microsoft’s DRM technology, if you wish, only to lock it up again in SealedMedia or any competitor’s DRM containers.
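
Mechanically, that liberation is nothing more exotic than decrypt-then-re-encrypt. A hedged sketch, reusing the hypothetical wrap/unwrap helpers from the earlier snippet (no real vendor’s container format is implied):

  def migrate(container: dict, old_key: bytes, new_policy: dict):
      """Unwrap data from one DRM container, re-wrap it under another policy."""
      plaintext = unwrap_content(container, old_key)  # out in the clear...
      return wrap_content(plaintext, new_policy)      # ...and locked up again

  new_container, new_key = migrate(
      container, key, {"owner": "jim", "allow": ["view", "print"]}
  )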

Finally, was Labelle’s editor paying attention when the following sentence appeared on his or her display (or was the editor viewing Labelle’s draft through one of those magical selective-text-opaquing displays that chose to approve this nonsensical statement)?:

  • “It will be impossible to effectively extract intellectual property that’s locked into a Microsoft proprietary format with Microsoft-specific DRM technology.”

Huh? Come again? So, Microsoft specifically designed its DRM technology to irrevocably lock up any content that is created in a Microsoft proprietary format (.doc, .ppt, .xls, etc.)?

Surely, the Redmond gods must be crazy. Sound the alarm. The bad DRM dream is upon us.

Whew—got that blogpost done—now time to hit the hay. If I dare.

Jim

Monday, February 27, 2006

imho DRM8

All:

Found content: http://www.sdmagazine.com/documents/s=9961/sdm0602b/0602b.html

My take:

DRM is another name for content and/or code license management technology. As such, DRM doesn’t differ in principle from the discretionary access controls supported in many operating environments, database management systems, and document repositories. Call it discretionary rights management.

DRM has gained an ideological black eye in the B2C space due to recent PR fiascos such as Sony’s desktop-security-violating XCP rootkit. But DRM has gained a significant and growing niche in the business world as a tool for binding access controls persistently to corporate documents. As the referenced article points out, DRM is being used to enforce security classifications on internally distributed materials within organizations; to keep tabs on who accesses what information; and to prevent users from performing certain document functions (such as printing, copy/pasting, and forwarding) that content owners prohibit. Nothing terribly sinister about any of that. All of this is well within the controls that security-sensitive organizations have long enforced on paper documents. Principal vendors of DRM for corporate content management include Adobe, Microsoft, SafeNet, and SealedMedia.
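
The controls the article describes are mechanically unglamorous: a persistent policy consulted on every document action, plus an audit trail of who did what. A minimal illustrative sketch (all field names are mine, not any listed vendor’s model):

  import datetime

  AUDIT_LOG = []  # "keep tabs on who accesses what information"

  def authorize(user: str, action: str, policy: dict) -> bool:
      """Check a document action against its persistent policy, and log it."""
      allowed = (action in policy.get("allow", [])
                 and action not in policy.get("deny", []))
      AUDIT_LOG.append({
          "when": datetime.datetime.utcnow().isoformat(),
          "user": user, "action": action, "allowed": allowed,
      })
      return allowed

  policy = {"classification": "internal",
            "allow": ["view"], "deny": ["print", "copy", "forward"]}
  assert authorize("alice", "view", policy)         # permitted, and logged
  assert not authorize("alice", "forward", policy)  # denied, and logged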

Software activation is another hot area where DRM-like technologies are being applied in the corporate world. Software activation tools (which look, walk, and quack like DRM, but in a different pond from content DRM) allow developers to enforce a dizzying range of controls on distribution, installation, and usage of their products: automatic, secure, connected, or disconnected software activation; trial, perpetual, subscription, metered usage-based, rental, superdistribution, upgrade, or other licenses; automatic node-locking by hardware serial numbers, BIOS signatures, OS product identifiers, MAC addresses, and vendor-hidden cryptographic hashes; fixed expiration or set number of program executions; etc etc etc. Check out software activation/licensing/metering tools from Agilis, Aladdin, Bysses, CrypKey, Macrovision, Nalpeiron, SafeNet, Sofpro, Pingram, and SoftwareKey.
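
For a flavor of what node-locking and usage metering amount to, here’s a hedged sketch (standard-library Python only; the fingerprint recipe and license fields are invented for illustration and don’t describe any listed vendor’s scheme):

  import datetime, hashlib, platform, uuid

  def node_fingerprint() -> str:
      """Bind a license to this machine: hash of MAC address plus hostname."""
      raw = f"{uuid.getnode()}|{platform.node()}"
      return hashlib.sha256(raw.encode()).hexdigest()

  def license_valid(lic: dict) -> bool:
      """Enforce node-lock, fixed expiration, and an execution-count ceiling."""
      if lic["fingerprint"] != node_fingerprint():
          return False  # wrong machine
      if datetime.date.today().isoformat() > lic["expires"]:
          return False  # license expired
      if lic["runs_used"] >= lic["max_runs"]:
          return False  # execution quota exhausted
      lic["runs_used"] += 1
      return True

  lic = {"fingerprint": node_fingerprint(),
         "expires": (datetime.date.today()
                     + datetime.timedelta(days=30)).isoformat(),
         "runs_used": 0, "max_runs": 30}
  print(license_valid(lic))  # True, 30 times; False thereafter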

Once again, software publishers have been doing software-activation DRM—by various names—since the dawn of computing. Nothing controversial about any of this.

Of course, no two DRM (content or code) vendors implement the same approach. All of these tools embody proprietary DRM approaches. Each DRM environment is its own self-contained virtual fortress. Every real-world customer deployment of these tools adds another disconnected island of self-protecting license-aware walled-off content/code to your corporate information architecture. More virtual barriers preventing you and your colleagues from sharing, reusing, leveraging, mashing, mixing, slicing, dicing, and recombining data and code in the service of corporate agility. And service-oriented everything.

Not that there’s anything wrong with that.

Jim

Sunday, February 26, 2006

imho DRM9

All:

Found content (had to search Google’s cache for this article, though it’s only a little over three months old—perishable content—had already perished—only the persistent can dig it out—but dig I did): http://64.233.179.104/search?q=cache:xjgTm2Pq8aoJ:channelweb.com/sections/allnews/article.jhtml%3FarticleId%3D174400380+Sony+Blunder+Shows+Digital+Rights+May+Be+Doomed&hl=en&gl=us&ct=clnk&cd=4

My take:

DRM is another name for anti-piracy technology. The original anti-piracy technologies were armor, fortresses, ramparts, moats, and sharp swords. All of which leads the mind toward the basic economic situation that fosters organized piracy: booty is concentrated, but desire is distributed.

DRM is an acronym that invites mockery: defensive rearguard maneuvers by desperate restriction mongers. The more persistent and distributed the desire for whatever booty Sony and other copyright holders hold, the more defensive and desperate these fortresses will grow. All of which makes me think that Inside Digital Media analyst Phil Leigh just totally misses the mark. According to the referenced article, Leigh “believes that rather than adopting technological methods to try to stop unauthorized copying of music, record companies need to do more to remove the incentive for piracy.”

Like what? How are the Sonys of the world going to extinguish people’s desire for music, movies, TV, video games, and art and culture in general? At what point in the development of the human species will people no longer crave these forms of creative stimulation? If you can’t snuff out people’s desire (and, along with that, all demand for Sony’s products, hence Sony’s continuing existence), then the only ways to remove the incentive for piracy are:

  • Give away all content for free (and thereby also kill the gander that gathered the golden eggs), or
  • Trust that some customers will pay for some content some of the time, if you keep the virtual shelves continually well-stocked with fresh goodies; don’t overprice the wares; supply it all through channels and packages that are easy, convenient, and pleasant to find, access, and consume; and allow consumers to actually take ownership of the content, to copy, back up, mix, mash, and generally have their way with the material to their hearts’ delight.

Trust the consumers. Believe it or not, they want to encourage artists to continue making great music, movies, books, etc., and they will compensate artists according to the reasonable value of their works. But they will seek less expensive (and less legal) alternatives when the artists’ precious outputs are overpriced. That’s just a basic, inevitable fact of any economic order.

Respect the consumers and don’t treat them like potential shoplifters. As the article states: “The challenge has been to find an anti-piracy tool that works well enough to please the industry without overly annoying users, many of whom want to make legitimate backup copies of their CDs and don't like being assumed to be criminals.” The article presents a smattering of annoying anti-copy techniques that the recording industry has inflicted on users (in addition to Sony’s notorious XCP rootkit):

  • “recording labels commonly sent music critics promotional material in portable players glued shut to prevent copying.”
  • “discs that included digital watermarks — extra encoding designed to lock the recordings, or at least their high-resolution portions — on the disc.”
  • “discs that contained data near the perimeter of the CD instructing a computer's hard drive not to look for audio tracks…[b]ut blocking that technology merely required drawing a line with a marker near the edge of the CD.”

The number one mistake that Sony made with its XCP rootkit was to imagine that it owned not just the music, but also the computers of the people who bought that music, onto which it could plant its own persistent footprint without asking permission. That was going too far down the spyware rathole.

DRM isn’t doomed, but it won’t usher in an age of absolute, perpetual institutional control over all content everywhere. Different content fortresses will continue to build their digital ramparts. But the digital hordes will continue to scale, trash, and torch every new barrier, as long as there’s fresh booty indoors.

DRM will be ignored completely in the new paradigm of consumer-created content, of which blogs are a harbinger. It’s ridiculous to imagine that any set of institutions can control even a tiny portion of the fount of creativity that springs from people’s souls and lives everywhere. In this new world order, few of us make a direct living from our self-published content, which we provide gratis to all comers. Instead, more and more of us are finding creative ways to leverage our self-published content into money-making endeavors of various sorts. Or simply self-subsidizing our creative selves with funds from “real jobs.”

Like creators have done since time immemorial.

Jim

Saturday, February 25, 2006

poem Freq

FREQ

A friend is present.

A friend responds,
frequently frequents
whatever planet you're on.

Thursday, February 23, 2006

fyi Don't Let Mashups Smash Up

All:

Pointers to this that and the other:
http://www.eweek.com/article2/0,1759,1921743,00.asp
http://www.eweek.com/article2/0,1895,1929486,00.asp
http://www.ebizq.net/blogs/column2/

Kobielus kommentary:

Eric Lundquist’s definition of “mashup” is a good enough launching point for what I’m about to say: “Blended applications, or mashups, are the hottest topic in application development. A mashup is usually a Web application built from many sources but combined into a seamless interface that provides a new user experience.”

Excuse me, Eric, but for the past several years this same definition, without modification, could have been applied to another trendy term: “portal.” It’s fairly close to one of my personal operating definitions for “portal”: a browser-accessible, server-based platform that aggregates links to and composes a new interface for interacting with content and functionality hosted elsewhere (OK—I just made up this unwieldy definition, but it’s sort of close to the shorter phrase I’m always wandering around mumbling). Is “mashup” just another example of this industry’s tendency to proliferate unnecessary new terms for still-valid older terms? What if anything is new that merits a new term?

If “mashup” has any new meaning (over and above “portal”), it refers to a growing trend under which Web applications are slapped together hastily from links to disparate services from diverse sources. It connotes a more ad-hoc, anarchic, slapdash, composite development approach than we normally associate with portals. “Seamless interface that provides a new user experience”? Ha! The term “mashup” comes from the hip-hop music world, and refers to a compositional approach under which heterogeneous audio is sampled and spliced from all over creation with all seams showing—syncopating your brain and body into rave overdrive. That’s what “mashup” actually connotes: user interface presented as funky compost, not as fussy composition.
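
In code, that slapped-together quality is literal: fetch from sources that were never designed to meet, and splice them with the seams showing. A hedged sketch of a mashup’s glue logic (the feed URLs and field names are hypothetical placeholders, so the demo below uses inline sample data):

  import json
  import urllib.request

  def fetch_json(url: str):
      """Pull a JSON feed from some service, wherever it lives."""
      with urllib.request.urlopen(url) as resp:
          return json.load(resp)

  # In real life you'd pull these from two unrelated services, e.g.:
  #   headlines = fetch_json("http://example.com/news/api")     (hypothetical)
  #   places = fetch_json("http://example.com/geocode/api")     (hypothetical)
  headlines = {"items": [{"title": "Mashup Camp wraps", "city": "Mountain View"}]}
  places = {"Mountain View": "37.39N, 122.08W"}

  # Splice them into one page fragment, seams and all.
  items = "".join(
      f"<li>{h['title']} ({places.get(h['city'], '?')})</li>"
      for h in headlines["items"]
  )
  print(f"<ul>{items}</ul>")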

There’s a place for everything, and I rather like a rhapsody, which is what a mashup is: a musical composition of irregular form having an improvisatory character (http://www.m-w.com/dictionary/rhapsody). So I chuckle to read Mary Jo Foley’s piece in Microsoft Watch: “Microsoft Business Apps Unit Readies New Web 2.0 Mashups.” The article says: “In December 2005, Microsoft posted to GoDotNet [Microsoft’s shared-source hosting site] a mashup of Dynamics 3.0 and MapPoint, its online mapping service. Such a mashup could allow customers to customize the Dynamics CRM contact form to show a MapPoint map displaying a contact’s address.”

Wait just a sec, Mary Jo. That doesn’t sound particularly heterogeneous, anarchic, or improvisatory: one vendor integrating software components from two of its existing products/services in order to extend/expand the functionality of both. Suddenly, mashup is treading on the semantic territory claimed by another familiar software industry term: feature enhancement.

Mashup is throwing its verbal weight around in the blogosphere too. Sandy Kemsley makes the following statement implying that mashup is now a synonym for SOA (service-oriented architecture): “To be fair, many IT departments need to put themselves in the position of both the API providers and the developers that I met at Mashup Camp, since they need to both wrap some of their own ugly old systems in some nicer interfaces and consume the resulting APIs in their own internal corporate mashups.”

“Wrap some of their own ugly old systems in some nicer interfaces…consume the resulting APIs”? You mean WSDL, SOAP, and the whole WS-* suite of standards, right? And, of course, where building portal-based mashups is concerned, WSRP as well, right? That’s SOA, pure and simple (er, it’s still complex, let’s not kid ourselves).
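
The wrap-and-consume pattern itself fits in a few lines, whatever acronym gets attached to it. A minimal sketch of the “nicer interface” half (the legacy function and its pipe-delimited output are invented stand-ins for somebody’s ugly old system):

  # Hypothetical legacy call: cryptic name, flat pipe-delimited string out.
  def LEGACY_CUSTLOOKUP(cust_code: str) -> str:
      return "ACME Corp|42|NET30"

  def get_customer(customer_id: str) -> dict:
      """The 'nicer interface': a clean, self-describing facade to consume."""
      name, cust_id, terms = LEGACY_CUSTLOOKUP(customer_id).split("|")
      return {"name": name, "id": int(cust_id), "payment_terms": terms}

  print(get_customer("42"))  # consumable by a portal, a mashup, or an ESB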

Is mashup simply SOA-based rapid application development, cobbling together “found external content” into a bold new synthesis in the presentation tier? A quasi-artistic endeavor? A semi-political faux-libertarian statement on the manifest destiny of “information needs to be free” and all that?

Or what?

Jim

Wednesday, February 22, 2006

fyi Gates: Passwords Aren’t Enough

All:

Pointers to articles:
http://www.crn.com/nl/crndailynews/showArticle.jhtml?articleId=180204041
http://www.microsoft.com/windowsserversystem/CLM/overview.mspx

Kobielus kommentary:
The headline of this piece isn’t news, nor is its substance. Everybody knows passwords aren’t enough for strong authentication. And anybody who’s been paying attention knows that Microsoft has been beefing up Windows’ smartcard and certificate lifecycle management tools. Microsoft acquired Alacris last year and has been integrating its credential management workflow and provisioning tools into Windows Vista and “Longhorn.”

But it’s clear that Microsoft, with Certificate Lifecycle Manager (CLM), isn’t implementing any functionality different from entrenched card/credential management vendors, such as GemPlus, Schlumberger, and Siemens. Yes, card/credential management integration with the InfoCard feature of Vista/Longhorn is news, until you realize that InfoCard is just another type of soft token in which PKI and other credentials will be stored—Microsoft’s not an innovator in that regard either.

Let’s not confuse a Bill Gates marketing-ish announcement with an actual vision of anything new—from a PKI, IdM, or trust management perspective—coming out of Microsoft. It would have been more interesting if Microsoft had let Kim Cameron discuss the identity metasystem from the stage at RSA Security. But that would be a PR no-go for Microsoft. Only IdM wonks like myself would have paid attention. Also, Kim is a semi-autonomous vision guy, not someone that Microsoft would ever consider as a heavy-hitting marketing mouthpiece.

Gates is Microsoft’s marketeer-in-chief. And you can’t ask for a better one. He’s iconic, smart, current, in-depth, articulate. However, he’s not usually someone who communicates anything that you haven’t heard before, expressed more memorably by others.

In this instance, I find Gates’ statements on the issue of authentication factors--passwords vs. smartcards/certs--a tad stale, and off the mark.

First off, he focuses on the need to simplify smartcard/certificate issuance, renewal, and revocation, rather than the need to increase assurance (hence trust) surrounding these critical lifecycle processes. “Having the revocation and issuance work as easily as passwords do today is a critical element here,” says Gates. Wait just a second, Bill. The primary purpose of smartcards/certs is to enable stronger authentication than you can get with plain ID/passwords. And strong authentication demands strong assurance implemented across the entire lifecycle of smartcard/cert enrollment, approval, proofing, provisioning, management, and validation. It all comes down to trusted workflows and trusted roles implemented across a company’s smartcard/cert/credential management process. If somebody can impersonate me and get a smartcard/cert in my name without undergoing the administrative approvals and in-person proofing necessary to prove that they’re me, then what’s being gained? Just a new way to steal my identity, “authenticate” as me, and gain “authorized” access to my world. Simplifying the smartcard/credential management process often means gutting whatever assurance the provisioned tokens/certs might otherwise have had.
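
The point about trusted workflows reduces to something simple: issuance must refuse to proceed unless every assurance step has actually been signed off by someone holding the right role. A hedged sketch (the step names track the lifecycle listed above; everything else, including the stubbed-out role check, is my own illustration):

  REQUIRED_STEPS = ["enrollment", "approval", "in_person_proofing", "provisioning"]

  def issue_smartcard_cert(request: dict) -> str:
      """Issue a credential only after every assurance step is signed off."""
      completed = request.get("completed_steps", {})
      for step in REQUIRED_STEPS:
          signer = completed.get(step)
          if signer is None:
              raise PermissionError(f"step '{step}' not completed; no issuance")
          # A real CLM workflow would also verify the signer holds the
          # trusted role (approver, in-person proofer) for this step.
      return f"cert-for-{request['subject']}"

  # Skipping approvals in the name of "simplicity" fails loudly:
  try:
      issue_smartcard_cert({"subject": "jim",
                            "completed_steps": {"enrollment": "self-service"}})
  except PermissionError as err:
      print(err)  # step 'approval' not completed; no issuance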

Second, he says that most companies can and should (implication: they will) move away from password authentication toward smartcard/cert authentication by the end of this decade. Of course, that prediction is predicated on Microsoft and other smartcard/cert/credential management vendors making greater headway in customer acceptance than they have in the past. It’s still not clear that Microsoft’s CLM will appreciably simplify the complex PKI workflow and technical infrastructure needed to make this happen (ignoring, for the sake of discussion, the negative impact that such “simplification” might have on smartcard/certificate assurance levels).

If you want authentication simplicity, you can’t beat passwords. For authentication assurance, though, passwords are almost always the weak link in the trust chain, owing to boundless opportunities to hack, guess, and steal passwords, and also to the fact that users are rarely proofed in-person prior to issuance of passwords.

Consequently, the password being presented in an authentication session may or may not be presented by the identity that purports to be presenting it. You have no strong assurance that it’s not being presented by an impostor.

But then again, without an enrollment, approval, proofing, and provisioning workflow that ensures strong binding of the smartcard/cert to a particular human being, you have no strong assurance with multifactor authentication either. And without a trust web and certificate validation infrastructure, you have no assurance that a cert being presented is still valid. How often do people rely on someone else’s PKI cert without checking to see whether it’s been revoked, or simply expired? Without that validity check, how secure is that cert?
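
That validity check is cheap, which makes skipping it all the less defensible. A minimal sketch (illustrative only; a real relying party would parse the X.509 cert and fetch a live CRL or query OCSP rather than consult a hardcoded set):

  import datetime

  REVOKED_SERIALS = {"0x1f3a", "0x2b90"}  # stand-in for a fetched CRL

  def cert_still_valid(serial: str, not_after: datetime.datetime) -> bool:
      """The check people skip: expiration plus revocation status."""
      if datetime.datetime.utcnow() > not_after:
          return False  # expired
      if serial in REVOKED_SERIALS:
          return False  # revoked
      return True

  print(cert_still_valid("0x1f3a", datetime.datetime(2010, 1, 1)))  # False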

CLM assurance (implemented through infrastructure and workflow) is the “x-factor” in multifactor authentication. Without that x-factor, digital certs aren’t appreciably more secure than passwords. Under most circumstances (and these aren’t likely to change by the end of this decade, regardless of what Gates claims), digital certs are more costly, complex, and cumbersome than passwords. Yes, the smartcard/PKI industry needs to embed their infrastructure in platforms (e.g., Windows) and needs to expand the opportunities for user self-service in enrollment, renewal, revocation, and other lifecycle functions.

But, inherently, strong credentials assurance can’t rely on user self-service alone. Users must run the gantlet of company approvers, background vetters, in-person proofers, and other trusted “administrators” in order to obtain their trusted certs. And to renew them. And to ensure that, when other people’s certs are revoked, the appropriate certificate revocation lists are kept up to date.

Yes, passwords aren’t enough. But smartcards and certs without a strong-assurance-enabling infrastructure aren’t enough either.

Jim

Thursday, February 16, 2006

lol Study: Dolphins Not So Intelligent On Land

All:

Pointer to fresh Onion peeling:
http://www.theonion.com/content/node/45360&rss=1

Kobielus kollapsing into katatonic kondition of koughing and konvulsing from Onion-induced kackling:

First off, let me once again quote an excerpt from a recent blogpost, and then relate it to the insanely funny premise of this Onion piece. First, me again:
  • “A [sentient being’s] perceived intelligence depends totally on context. Intelligence is primarily the capacity of a [being] to respond (continually, appropriately, effectively, articulately, and successfully) to various challenges (tests, tasks, and problems) that the world (fate, society, colleagues, teachers, friends, and adversaries) places in front of them. The evaluation of the success of a [sentient entity’s] ongoing/evolving responsiveness to the never-ending parade of new challenges is that [being’s] "intelligence." The [beings] doing that evaluation include the individual him/her/[it]self, plus the [being’s] family/friends/colleagues/contemporaries/[aquarium/jailer/zookeepers], plus the [being’s] posterity (descendants/historians/etc.). The context for the "smart vs. stupid" determination, then, is the entire frame of reference that involves diverse challenges and different evaluators. Nobody is inherently smart or stupid--they must continually "prove" themselves as one or the other, and could just as easily (on their next challenge, in the eyes of their next self- or external-evaluation) flip-flop [or jump 10 feet into the air] toward [a dangled salmon held by an attractive blonde positioned at] either pole. The bottom line is that [beings squeak] smart things on some occasions on some topics, and stupid things on other occasions/topics…. The evaluator of a [being’s] intelligence may be particular individuals, or a community of individuals, or a particular individual deferring to the collective/received opinions of the community (contemporaneous/posterity).”

That said, this Onion slice made me cry tears of recognition:

  • “Despite their failures in the initial series of tests, the animals were given further opportunities to demonstrate their intelligence on land. The dolphins were unable to display novel behaviors, use a map to pinpoint their location on campus (spatial reasoning), or complete a simple obstacle course and wall climb….‘Their learning curve was actually negative,’ Lindell said. ‘The more time we gave them to complete basic land-based tests, the more pitiful their efforts became, with many of them opting to bask in the sun rather than perform a simple task.’ ‘In some cases,’ Lindell added, ‘the dolphins appeared to be looking directly into our eyes, as if pleading with us to help them perform better in these tests.’ Many scientists believe these findings may help to explain why dolphins, for all their vaunted intelligence, have never developed technology or agriculture, or harnessed the power of fire—skills still exclusively in the domain of Homo sapiens.”

All of which makes me want to point to the SETI folks and urge them to rename their initiative: Search for Extraterrestrial Anthropomorphism.

We’re always searching “others” for mirrors of our own “intelligence” (i.e., our own very particular human adaptation-conditioned responsiveness to situations, as expressed through our own very particular human brain, body, behaviors, and culture). When we don’t find those signs of “intelligence,” we declare others stupid.

You try navigating the briny murky deep night and day with just your smell, hearing, and kinesthetic senses. A life-or-death task we’re not adapted to. See how smart you appear.

Jim

Tuesday, February 14, 2006

imho Most influential in networking over past 20+ years

All:

Beth Schultz of Network World recently asked me and other columnists to list up to 20 people who have been most influential in networking over the past 20 or so years. This is a convenient timespan, because it coincides with my entire career in IT. So it gave me a chance to trip down memory lane.

I started a list of industry-transforming "roles," and then quickly filled in the names that came first to mind. My list:
  • Visionary: George Gilder. His vision of the future of unlimited, no-cost bandwidth and any-to-any connectivity still exerts a powerful influence on everybody's vision of the Internet's potential.
  • Investor: Bill Gates. His longtime patient capital has built Microsoft into the unchallenged platform and application vendor, but his greatest legacy will be the Bill & Melinda Gates Foundation's ongoing grants to rid the developing world of infectious diseases.
  • Inventor: Tim Berners-Lee. His invention of the World Wide Web truly revolutionized human society by turning the world into an open book, introducing a new addressing scheme that could be applied to any information or application anywhere, and a new protocol that allows us all to meander endlessly throughout the global cornucopia of human creativity.
  • Engineer: Linus Torvalds. His graceful stewardship over Linux has started open-source software on its inevitable path to industry dominance in all categories.
  • Entrepreneur: William McGowan. His principled persistence in the face of massive Bell System obstruction helped usher in the present age of freewheeling competition throughout the global telecommunications industry.
  • Executive: Lou Gerstner. He kept IBM at the industry forefront by successfully evolving the former mainframe monolith into a global professional services powerhouse, just in time for the emergence of platform-agnostic service-oriented architecture.
  • Legislator: Al Gore. As US Senator in Spring 1991, he spearheaded the passage of a bill to fund the National Research and Education Network (NREN), which was a bridge project that sought to transform the R&D-focused Arpanet into the commercialized Internet. Gore's legislation had the desired catalyst effect. I distinctly recall 10am, June 5, 1991, in room H-137 of the U.S. Capitol Building, when Sen. Gore and three other legislators took the initiative to stimulate the development of the commercial Internet. I was there, in attendance when they announced the legislation, and spoke to the then-senator, who was the acknowledged leader in pushing for this initiative. Let the record note. I still have my notes.
  • Regulator: Harold Greene. The judge who presided over the AT&T divestiture stuck to his guns as long as he could, and gave the newly competitive telecommunications industry just enough breathing room to flourish in the interregnum between Ma Bell and the rapidly reconsolidating Baby Bells.
  • Agitator: Shawn Fanning. He ran a massive civil disobedience service that helped musicians everywhere to find their audiences, overcoming obstructionist record companies, restrictive radio station programmers, and others who try to deny the people easy access to their soul grooves.
  • Standardizer: Jon Postel. He was the maestro who coordinated the development of many of the most fundamental open standards without which the Internet and World Wide Web would never have risen so fast and spread so wide.
  • Lobbyist: Marc Rotenberg. His single-issue focus on privacy protection has kept the lawmakers, regulators, telcos, and others in positions of power in the networking industry continually on the defensive, and kept us all vigilant against encroachments on our civil liberties.
  • Marketer: Steve Jobs. His tiny little iPod has invaded the popular culture so fast that it's almost subcutaneous--and ushered in the age of podcasting--the portable all-in-one entertainment medium.

I realized I have close personal "degrees of separation" from several of these individuals:

  • Gates: My wife, before she moved to the US and met me, dated an American guy who later went on to manage the Gates Foundation's finances for a while.
  • McGowan: I once worked for him. During the period in the late 80s when he had his heart transplant, he came into our office conference room one day and told us that "I haven't had a change of heart" re whether he'd keep our unit operating. Not true. But I liked him anyway.
  • Gore: I've actually met him twice, once as senator and once as VP. Nice guy, well-read (read at least one of my Network World columns). Didn't seem wooden at all. Someone I would trust with the keys to the car.
  • Fanning: The guy who temporarily managed Napster after Fanning left was in the senior honors seminar in economics at the University of Michigan with me in 1979-80.
  • Rotenberg: My wife currently works with his wife.

Yeah, I said "degrees of separation," not buddy-buddy. I haven't met Kevin Bacon. But I have met Jason Kobielus.

Jim

Saturday, February 11, 2006

imho The Giulio thread

All:

Giulio Cesare Solaroli e-mailed me to say he enjoyed my blogpost on his and Marco Barulli’s computational reputation model for blog comments, but to point out that I’d misconstrued several details regarding the interactions among functional components in their model. I refer you all to http://www.clipperz.net/ (Giulio and Marco’s blog) for further details on their model (as those details get posted).

What I’m doing in this blogpost is responding to the core assumption of Giulio/Marco’s computational reputation model, as stated by Giulio in our e-mail thread:

  • "What we hope, is that the merit of the comment and the reputation of the author could be some how (indirectly) bounded. Smart people tend to be smart; troll tend to be troll. This regardless of the context where you observe them."

As I responded to Giulio in the e-mail thread, here's my philosophical perspective on this issue:

  • A person's perceived "intelligence" depends totally on context. Intelligence is primarily the capacity of an individual to respond (continually, appropriately, effectively, articulately, and successfully) to various challenges (tests, tasks, and problems) that the world (fate, society, colleagues, teachers, friends, and adversaries) places in front of them. The evaluation of the success of a person's ongoing/evolving "responsiveness" to the never-ending parade of new challenges is that person's "intelligence." The persons doing that evaluation include the individual him/herself, plus the person's family/friends/colleagues/contemporaries, plus the person's posterity (descendants/historians/etc.). The context for the "smart vs. stupid" determination, then, is the entire frame of reference that involves diverse challenges and different evaluators. Nobody is inherently smart or stupid--they must continually "prove" themselves as one or the other, and could just as easily (on their next challenge, in the eyes of their next self- or external-evaluation) flip-flop toward either pole. The bottom line is that people say smart things on some occasions on some topics, and stupid things on other occasions/topics. People tell me I'm smart, but I'm quite aware when I've said or done something stupid (or my wife makes me aware of it).
  • The evaluator of a person's intelligence may be particular individuals, or a community of individuals, or a particular individual deferring to the collective/received opinions of the community (contemporaneous/posterity). Your model is based on the latter intelligence (reputation)-evaluation model: a particular individual (blog author) deferring to the collective/received opinions of contemporaries (other blog authors, as filtered through a "reputation manager").

As I stated in my recent blogpost on their model, I base my decision to reference somebody else's inputs in my blog on whether they pass a certain "intelligence" challenge in the eyes of one particular evaluator: the "it's interesting to Jim" test. I'm not so interested in whether they pass the "intelligence" challenges of other evaluators (such as a circle/community of blog authors). It's not that I necessarily disparage the opinions of other blog authors. But they run their own fiefdoms, and I run mine. I'm president, king, emperor, and grand vizier of my own idio-domain.

I'm always skeptical of "received community opinion" (aka "reputation"). As I said in my blog this past November:

  • "Reputation feels anti-governance, hence unfair. It feels oppressive. It’s the collective mass of received opinion, good and ill, weighing down on a particular identity. It feels like a court where the judge, jury, prosecuting attorney, jailer, and lord high executioner are phantoms, never showing their faces, but making their collective force felt at every turn. It feels like outer appearances, not inner character, ruling our lives."

Giulio responded with another e-mail in which he expressed some doubts about whether “reputation” is really the concept they’re trying to capture in their model. I said that, in the context of their application, I don't think "reputation" is the right term or concept for the assurance level that the "reputation manager" (wrong term for that functional component) is asserting with respect to a blog commenter. What the blog commenter asserts (across one or more comments made to one or more blog authors) is their "commentary" (the sum total of their comments, as an outward manifestation of their analytical and expressive powers).

I proposed that Giulio/Marco think in terms of the following roles:

  • Commenters:
    • Blog authors: These are as Giulio/Marco define them. But it's important to recognize that a blog is simply a stream of commentary from one or more blog authors (e.g., Marco, Giulio, et al.) and (optionally) blog visitors (who may or may not be able to post their commentary to a blog author's site, and whose submissions, if invited, must be approved by the blog authors prior to posting).
    • Blog visitors: These are Giulio/Marco’s "blog commenters." They don't own the site that they're visiting. They're just guests. They knock on the door and may or may not be invited to post directly.
  • Reviewers:
    • Blog authors: Yes, this is a second role (reviewing submissions from blog visitors) that blog authors perform. Blog authors may also publish their "reviews" (or thumbs-up/thumbs-down decisions) to a blog review hub.
    • Blog review hub: This is Giulio/Marco’s "reputation manager." Its job is to compile, aggregate, weight, and score blog reviews from various blog authors pertaining to various blog visitors.

Blog review hubs don't score blog visitors' "reputation." They score the blog-author community's reported evaluations of the quality of blog visitors' submitted comments.

It's not reputation. It's "commentary quality" (of blog visitors) that's being scored by a blog review hub.
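
The hub's arithmetic, in other words, is just aggregation of reported accept/reject decisions, scoped to one community. A hedged sketch of such a commentary-quality score (the scoring rule here is mine, not Giulio/Marco's):

  def commentary_quality(reviews, community: str, visitor: str) -> float:
      """Fraction of this visitor's comments accepted within one community.

      Each review is a tuple: (community, visitor, reviewing_author, accepted).
      """
      decisions = [accepted for (c, v, _author, accepted) in reviews
                   if c == community and v == visitor]
      return sum(decisions) / len(decisions) if decisions else 0.0

  reviews = [
      ("technogeeks", "visitor-7", "marco", True),
      ("technogeeks", "visitor-7", "jim", False),
      ("technogeeks", "visitor-7", "phil", True),
  ]
  print(commentary_quality(reviews, "technogeeks", "visitor-7"))  # 0.666...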

And that's what blog visitors (that subset of them who simply visit to read commentary) are implicitly scoring through their browsing/attention and return visits. Do they respond to the quality of Marco/Giulio, Jim Kobielus, Phil Windley, or other people's blog-asserted commentary? And to the quality of the blog visitors who those blog authors have opted to let post to their blogs?

The blog review hub must always be scoped to a particular commentary community. Any measure of “commentary quality” is always relative to the yardstick (i.e., set of values) that a particular community (e.g., technogeeks, feminists, right-wingers, conservative Muslims) holds in common. It’s not enough to scope it to the “blogosphere” (as if that were a community). Your blog review hub may be associated with a particular community of interest. Or a particular congregation of idiots. As blog author, you choose which blog review hub(s) you wish to federate/affiliate with. Whatever clique you click with.

Essentially, then, a blog author may wish to consider a blog commenter’s commentary quality score, as reported by a particular blog review hub, prior to posting that commenter’s comment to their blog.

Or take the simpler, more direct route. Actually read the submitted comment. Then hit “post” if so moved. Or process the comment through your own gray matter and blog on it.

Jim

Wednesday, February 08, 2006

imho James Kobielus on reputation

All:

Pointer to blogpost where at least one person and perhaps two in collaboration (Marco Barulli and Giulio Cesare Solaroli) imho’d something that I had previously imho’d: http://www.clipperz.net/users/marco/blog/2006/02/07/james_kobielus_on_reputation

Kobielus self-regarding kommentary (but really, kommentary on the Barulli/Solaroli proposal—trust me—it’s not all about me):

I want to thank Marco Barulli for his kind words about my prior blogged thoughts on reputation. I wish I could bottle such beauty and uncork it on down days.

I also want to thank Marco (and, I assume, Giulio as well, though Marco was the one who e-mailed me to inform me that he had commented on my thoughts) for his/their thought-provoking proposal to operationalize a computational reputation service. By the way, folks, I’m also giving a shout-out to Phil Windley’s recent blogpost on computational reputation. Good stuff—all of it. However, I’m the one who asked Marco to note in his blog that my January 25 blogpost (high-level, conceptual, not implementation-oriented) on computational reputation came first. Just because. If I don’t take pains to point out these simple facts early on, the historical record might remain distorted (to my disadvantage) forever. Clearly, each of us is the number one stakeholder in and steward of our own reputation. For the record, though, I think Windley’s (and Barulli/Solaroli’s) blog(s) is/are terrific. I’m not a glory hog on such things: If Phil developed these computational reputation ideas independently of me, so be it—he’s developing them to an extent that I’m not—and I’m reading him avidly. (Besides, a quick Google on "computational reputation" shows that the concept and term precede us both by a mile--so there's no point stoking my vanity any further with this silly tangent.)

Now onto the Barulli/Solaroli proposal. They focus on the following possible use case for computational reputation (quoting from their blog):

“We focused on a special kind of reputation, one built from the whole set of comments you sent to blog authors and their acceptance or rejection. A little portion of “who you are”. Given that set, everyone could “run” his personal reputation application and compute your reputation as a commenter. Unfortunately this set, your full comment history, is not easily available. Most of the times, a blog author will have to decide to accept or reject your new comment judging solely from the content of that very last submission. This was the problem we were trying to solve.”

This use case had me scratching my head. First, I had to mentally diagram the entities and interactions. Let’s see:

  • The reputation-dependent access-control decision is “accept/post or reject submitted comment.”
  • The reputation-relying service provider is a “blog author,” who selectively accepts (and rejects) third-party comments for posting to the blog.
  • The reputed party is the “blog commenter.”
  • The reputation-asserting authority is the “reputation manager” (which may be an automated process that calculates reputation scores that get attributed to the blog commenter and relied upon by the blog author).
  • The reputation score associated with a blog commenter is computed by the reputation manager from the transactional history of acceptances and rejections from other blog authors to whom the commenter had submitted comments.
  • The reputation score for a commenter is contained in a “comment token” that is asserted/issued by the reputation manager.
  • The current/computed comment token for a particular blog commenter is issued by the reputation manager upon request by that blog commenter.
  • The blog commenter receives that comment token and submits it—along with a fresh comment—to a blog author for consideration.
  • The blog author considers the fresh comment, in the context asserted by the comment token; makes an accept/reject decision; and asserts this fresh accept/reject decision both to the blog commenter and the reputation manager.

Did I get that right, Marco? I await further details on your approach, which you’ve promised for subsequent blogposts.
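
Mechanically, the flow I’ve just diagrammed reduces to a signed token plus a pair of reports. A hedged sketch (the HMAC signing and the token fields are my illustration of the idea, not Giulio/Marco’s actual design, which may well differ):

  import base64, hashlib, hmac, json

  HUB_SECRET = b"reputation-manager-signing-key"  # held by the manager

  def issue_comment_token(commenter: str, score: float) -> str:
      """Reputation manager signs the commenter's current score into a token."""
      body = json.dumps({"commenter": commenter, "score": score}).encode()
      sig = hmac.new(HUB_SECRET, body, hashlib.sha256).hexdigest()
      return base64.b64encode(body).decode() + "." + sig

  def verify_comment_token(token: str) -> dict:
      """Blog author checks the signature before trusting the asserted score.
      (A real design would use public-key signatures, so authors need not
      share the manager's secret.)"""
      body_b64, sig = token.rsplit(".", 1)
      body = base64.b64decode(body_b64)
      expected = hmac.new(HUB_SECRET, body, hashlib.sha256).hexdigest()
      if not hmac.compare_digest(sig, expected):
          raise ValueError("forged comment token")
      return json.loads(body)

  token = issue_comment_token("visitor-7", 0.67)  # issued on the commenter's request
  claims = verify_comment_token(token)            # author weighs the score...
  decision = {"commenter": claims["commenter"], "accepted": True}
  # ...and reports the accept/reject decision back to both the commenter
  # and the reputation manager.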

My first reaction: This is not an appropriate use case for computational reputation.

Most important, the blog author’s decision to accept or reject a comment wouldn’t normally depend on any knowledge of the commenter’s reputation. It would depend purely on the merits of the comment itself (was it interesting, stimulating, on-topic, etc.?). Why should I, as blog author, give a damn what others think of this commenter, or whether (unseen? unknown? unintelligent?) other blog authors have accepted or rejected this individual’s prior comments?

There are only a few scenarios in which the blog author would care about the commenter’s reputation, as asserted by the (computed/weighted/aggregated) opinions of other blog authors. For example, if the blogosphere has identified this individual as a serial plagiarist who keeps submitting other people’s words as his/her own, then, yes, I may choose not to give that intellectual property thief a forum to rip off other people’s work.

I can’t think of another scenario where I would give a crap about someone's reputation when considering their ideas. I like to think of the blogosphere (and democratic societies in general) as intellectual meritocracies.

I choose to consider only the text (ideas/thoughts) you present to me in conversation, not the irrelevant con-text (e.g., other people’s prejudices or grudges against you) that others want me to consider.

If I were to consider loaning money to you, that’s another issue entirely. I would definitely consider your reputation as derived from your transactional history (as reported/asserted/scored by others) of paying your debts on time or being a serial deadbeat.

For the record, I don’t accept comments for posting on this blog. This is my personal pulpit. If you all want to respond to these thoughts, get your own blog. Or do what Marco and others have done. Send me a note at james_kobielus@hotmail.com.

If I find your comment interesting enough, I’ll do what I did here. I’ll blog on it. Respectfully. And, hopefully, the act of my doing so will add to your reputation scores as computed by others who value what Jim Kobielus has to say on such things.

My personal calculation of you is purely fuzzy math. Extremely fuzzy. Warm and fuzzy.

Jim