Wednesday, November 23, 2005

imho profiling

All:

Whence: lezt

What:
Profiling, a formerly innocuous term, has gained negative connotations in recent years. Now it’s almost always construed in the context of “racial profiling.” It’s suffering the same fate as “exploitation” (prior to feminism, this simply referred to usage, consumption, and/or deriving some advantage from some resource) and “notorious” (prior to John Dillinger, this simply meant a person of note, regard, or reputation).

In an IdM context, profiling refers to the ability to compile sufficient identity data for the purpose of targeting individuals of note so that one may derive some advantage from one’s business association with those individuals. It needn’t always be to the disadvantage of the subjects of the profiling, of course (Dillinger analogy notwithstanding—this is one individual who certainly wished he hadn’t stood at the business end of the FBI’s targeting strategy—also, one thinks of the paparazzi, who certainly exploit others’ notoriety, thereby increasing that notoriety/marketability and pissing off their subjects in the process—paparazzi profile based on a single criterion: the price that a candid photograph of the subject can fetch).

The subjects of profiling needn’t always be unwilling victims. To the extent that we the subjects control our own profiles and can parcel out access to relying parties, we can stay out of everybody’s crosshairs, or put our identities out in the public arena for maximum exposure to and exploitation by others. To the extent that we can inspect/correct the profiles that others hold on us, we can at least prevent unfair exploitation. Correcting errors in your online credit histories (held by D&B etc.) is one such way in which we can gain some modicum of control over the legitimate and quite powerful profiles that others hold on us. Every American can now get a free copy of their credit history from the major bureaus each year, and correct them—all online.

It’s interesting how these bureaus authenticate you—the anonymous web browsing entity with whom they have no prior business relationship—for the purpose of authorizing you to view your credit history (and request corrections to that profile). Essentially, they authenticate you by conducting a Q&A session in which you and they match your respective identity systems of record (iSoR—I love this acronym, which I just concocted now) associated with your credit history. In other words, they pose a series of multiple-choice questions to you, drawn from data in your iSoR (held by them), and score your responses. These are questions that only you (the identity subject, mining your own personal iSoR which you, hopefully, have never divulged in its entirety to any other party) can be expected to answer correctly. If you answer the Q&A session perfectly—or near perfectly—the credit bureau authenticates you and authorizes you to access the iSoR that they hold on you.
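The scoring logic behind such a Q&A session is simple enough to sketch. Here’s a minimal illustration in Python—the question set, pass threshold, and scoring rule are my own assumptions for the sake of the example, not any actual bureau’s procedure:

```python
# Illustrative sketch of knowledge-based authentication (KBA) scoring.
# Questions are drawn from the iSoR the bureau holds; the subject's
# responses are scored against it. The threshold is a made-up value.

def kba_authenticate(questions, answers, threshold=0.9):
    """Score a multiple-choice Q&A session against the iSoR on file.

    questions: list of (prompt, correct_choice) pairs drawn from the
               identity system of record (iSoR) held by the bureau.
    answers:   the subject's chosen responses, in the same order.
    Returns True if the fraction of correct answers meets the threshold.
    """
    correct = sum(1 for (_, expected), given in zip(questions, answers)
                  if given == expected)
    return correct / len(questions) >= threshold

# Four hypothetical questions drawn from a hypothetical iSoR.
questions = [
    ("Which of these streets have you lived on?", "B"),
    ("Which bank holds your mortgage?", "C"),
    ("What was your approximate loan amount?", "A"),
    ("Which of these employers appears on your file?", "D"),
]

print(kba_authenticate(questions, ["B", "C", "A", "D"]))  # perfect score -> True
print(kba_authenticate(questions, ["B", "C", "A", "B"]))  # 3 of 4 -> False
```

The “near perfectly” part is the threshold: the bureau tolerates a miss or two, on the theory that even the true subject’s memory of their own iSoR is imperfect.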

This is essentially a “zero-knowledge proof” of your identity, in which you’ve divulged nothing to the relying party that the relying party didn’t already know. All of which reminds me of a research paper recently co-authored by muse: “Establishing and Protecting Digital Identity in Federation Systems.” In it, muse and collaborators provide an approach for protecting user attributes against identity theft. Their approach involves associating various attributes from a user’s private iSoR (my term, not theirs) with each other and with a user’s identity. In order for somebody/anybody (the user included) to exploit the user’s identity for any purpose—such as to authenticate to a credit bureau, say—that entity needs to marshal a specified subset of the user’s private iSoR as a “proof of identity.” The approach allows the user to provide that “proof of identity” to any relying party—and lets the relying party verify the proof of identity cryptographically—without the user ever needing to disclose any particular piece of privately held iSoR data. Essentially, the user is a private IdP, and federates their personal data attributes to any SP in such a way that the user only needs to establish that they are the sovereign IdP for that data—whatever its values may be—and never loses control over their private iSoR/profile. The SP simply matches the personal IdP-presented private-iSoR proof-of-identity to the shadow iSoR that they hold on that user.
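To make the “prove you hold the data without disclosing it” idea concrete, here’s a toy challenge-response sketch in Python. To be clear, this is not the actual cryptographic construction from the paper—just an HMAC-based illustration of the flavor of the thing, under the assumption that the SP already holds a shadow copy of the attributes (as the credit bureau does):

```python
# Toy sketch: the user proves possession of the same iSoR data the SP
# already holds, without any attribute values crossing the wire. The
# attribute names and values below are hypothetical.
import hmac, hashlib, os

def isor_digest(attributes: dict) -> bytes:
    """Collapse an iSoR attribute set into a fixed-size secret digest."""
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canonical.encode()).digest()

def prove(attributes: dict, nonce: bytes) -> bytes:
    """User side: answer the SP's fresh challenge using the private iSoR."""
    return hmac.new(isor_digest(attributes), nonce, hashlib.sha256).digest()

def verify(shadow_attributes: dict, nonce: bytes, proof: bytes) -> bool:
    """SP side: recompute the expected proof from the shadow iSoR."""
    expected = hmac.new(isor_digest(shadow_attributes), nonce,
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, proof)

user_isor = {"ssn_last4": "1234", "mortgage_bank": "First National",
             "dob": "1970-01-01"}
nonce = os.urandom(16)            # SP issues a fresh challenge
proof = prove(user_isor, nonce)   # user responds; no attributes disclosed
print(verify(user_isor, nonce, proof))                            # True
print(verify({**user_isor, "dob": "1971-01-01"}, nonce, proof))   # False
```

The fresh nonce keeps a proof from being replayed; the real scheme in the paper goes much further (selective subsets of attributes, cryptographic verification without the SP holding the cleartext), but the shape is the same: the proof reveals nothing the relying party didn’t already know.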

At least, that’s what I think is going on in the paper. Interesting stuff. But mine eyes are sore from trying to divine the math.

Jim