Tuesday, February 24, 2009

poem Over Cocktails

OVER COCKTAILS

Kim the marketer
swears she hasn’t made her mark
but I’m a marked man.

Amber has numbered
the years of her career for
only me to hear.

As for poor Pauli
currently between jobs I’ll
listen but I’m male.

Sunday, February 22, 2009

poem Depletionary Stakes

DEPLETIONARY STAKES

Among elements
humanity’s hope may well
be molybdenum.

Upon this orb we’ll
scratch it out or, resourceful,
maybe mine the moon.

Scout it. Blast that ore.
As we’ve done petroleum
and uranium.

poem Lax

LAX

Dial back the blood and
dilate those capillaries.
You’ll see some results.

Oxygenate your
outlook. Let lax the vex and
laborious breath.

Feel the infusion.
Fill with enthusiasm
gods in your temple.

poem Recessionary Gold

RECESSIONARY GOLD

When the water fled
the dead rested. Mummified.
Secure till the flood.

While the money runs
we’re liquid as hell, floating our
hardboiled nest eggs.

Waxing nostalgic
for those dear days when happy
days were here again.

poem Slinky

SLINKY

Spry and springy odd
cylinder thingy.

Swings end over end
with a gentle bend.

This spirally band
uncertainly lands.

Sunday, February 15, 2009

poem 1967

1967

Blissed as twittering
Cowsills in trees. "The Rain The
Park And Other Things."

Nonplussed as post "Wild
Thing" Troggs boldly croaking that
"Love Is All Around."

Weighty as Yardbirds'
"Shapes of Things" anchored by pre
Zeppelin Jimmy.

poem A Marital Vinyasa

A MARITAL VINYASA

"We've saluted so
many suns. Numb, we gasp from
their sheer sunniness."

"We sweat. We struggle
to hold our focal points on
some middle distance."

"Some perturbation
in Bally's stained ceiling tiles
does it for me, dear."

Saturday, February 14, 2009

poem A Marital Valentine

A MARITAL VALENTINE

"A tiny candy
under your pillow...could have
easily been crushed."

"The message on it
was even teenier...an
eyestrain...sorry, luv."

"As insubstantial
a confection as can be
imagined...my sweet."

Friday, February 13, 2009

fyi Pull Wii Games Off Shelves, Says Radio Pundit

All:

Stimulus: http://blogs.computerworld.com/pull_wii_from_shelves?source=NLT_BLOG

Response: I wouldn’t recommend pulling Wii off store shelves. It’s more of a caveat emptor situation. My primary problem with Wii--which I used at friends' parties a few times over the holiday season--is that many games require you to lunge, repeatedly, intensely, mindlessly, asymmetrically, herky-jerkily, into bare air, flailing any and all appendages, without any counter-resistance. That felt to my 50-year-old body like a surefire prescription for throwing everything out of joint. Oh yeah...the carpal tunnel potential of handling the controller...that too, but that’s the least of it.

I do powerflex twice a week in a class/instructor setting. Every lunge we do has the resistance of weights that we balance on our shoulder blades--plus slow, controlled, balanced movements that systematically work the micro-fibers of our muscles, never relying on pure inertia or momentum to move the weights up and down, side to side.

Wii’s not my style, because I prefer to strengthen and calm my body--with images and rhythms generated by my mind, breathing, and muscle memory--not cartoon avatars and idiotic synth-ditties yanking me around on some TV screen.

Jim

Wednesday, February 11, 2009

poems 4 pastorals from various years

DOG

Sheepdogs shear the air
with their bark, prepare

the perimeter
beyond which white flocks

never tend, inside
which grow whole bubble

edens, clusters of
friends and familiar

foe to guide us by,
pastures that glide and

deliver us back
miraculous yields.

LEMUR

a religious man
i love our lemon

green ancient forebear
two eyes fixed on

phantoms, the rumored
earth, hands ready to

pull her tender frame
toward uppermost

reaches in a
branching bouquet of

sheltering blossoms
one of which, or so

the story goes, was
large, could

take families whole
into a supple

recess and mothers
needn't worry that

little hands
exposed
could ever attract

he of the curvy
claws and the piercing

gaze.

P23

Lord, shepherd, my sufficiency
softly sustains me in green sleep,

walks me along the cool currents
into the awakening day

upon paths brightened in splendor
of his most sheltering essence.

Even in deepest depression
your presence abides me, fearless,

strong as your hand holds the crook-stick,
secure as your feasts from the foe.

You bless me with gracious perfume.
You overfill my meek chalice.

Surely, sweet fortune will trail me
Lord, dwelling here day without end.

RUN RIOT

I’ll take a meadow
wherever nature’s chosen
to happily jam.

Medians wild and
rioting, freeways redeemed
in numberless weeds.

Old federations of
fresh bees and blooms
wherever wasteland resumes.

FORRESTER blog repost What, If Anything, is a “Niche Vendor,” Where Enterprise Data Warehousing is Concerned?

What, If Anything, is a “Niche Vendor,” Where Enterprise Data Warehousing is Concerned?

http://blogs.forrester.com/information_management/2009/02/what-if-anythin.html

By James Kobielus

Now that we’ve published my Forrester Wave for Enterprise Data Warehousing (EDW) Platforms, you’d think I could breathe easier. Far from it. No matter how carefully one words a report, there is always the potential for misunderstanding. I’m already seeing some of that surrounding the notion of what, exactly, constitutes an EDW “niche vendor.”

For starters, that term--“niche vendor”--is not in my vocabulary, and not in my Wave. In the Wave, I used the standard Forrester methodology, which, based on transparent criteria and evaluation scores, distinguishes among “Leaders,” “Strong Performers,” “Contenders,” and “Risky Bets.” Rest assured that all seven of the vendors I evaluated--Teradata, Oracle, IBM, Microsoft, SAP, Netezza, and Sybase--are either “Leaders” or “Strong Performers.”

We have no formalized definition of “niche vendors” in the Wave. Instead, all of the vendors in my Wave should be understood as “enterprise” data warehousing platform providers. The qualifier “enterprise” indicates that they are all addressing a wide range of enterprise Information and Knowledge Management (I&KM) requirements for data warehousing. However, some of them are better positioned at this time to target a broader addressable market than are others, as evidenced by the details of their current offerings, strategies, and market presence. The vendors that are addressing the widest range of EDW marketplace requirements and opportunities scored higher in the Wave.

I think the crux of the misunderstanding lies in my acknowledgement that there are in fact “niche” segments of the EDW platforms market, and that some vendors have differentiated themselves well in those niches without, necessarily, being locked into them permanently. I refer to “niche markets,” “niche solutions,” and “niche deployments,” but never “niche vendors.” I do use “niche player” at one point, but that’s to reflect a vendor’s strategy, not its destiny.

To reflect that nuanced understanding, I placed the following qualifying language at the intro to the “Strong Performers” section:

“Strong Performers have proven themselves in particular niches, primarily among large enterprises but also in a growing range of midmarket deployments. These vendors’ substantial, loyal, and longtime customer bases suggest plenty of opportunity for well-differentiated niche products in the multifaceted and innovative EDW platforms market. I&KM professionals can rest assured that these and other substantial EDW platform vendors have the staying power, resources, and vision to weather the ups and downs in today’s turbulent IT market.”

What exactly, then, is an EDW “niche solution”? Actually, before I answer that, let’s discuss what’s not a niche solution. Essentially, any solution portfolio that is well-suited to addressing the broadest range of EDW requirements--and in fact has production customers to demonstrate a vendor’s success at doing just that--is the polar opposite of a niche solution.

In order to be “well-suited” in this regard, an EDW solution portfolio should have the comprehensive functionality, flexibility, scalability, and affordability to qualify for short-listing by I&KM professionals with the broadest range of requirements. More than that, the vendors should demonstrate considerable success in selling their solutions into the full range of customer size classes, verticals, and geographies.

It’s one thing to state, in the abstract, that one’s EDW solution portfolio has universal appeal, but quite another to demonstrate that a critical mass of real customers across all segments have found it appealing enough to put their money down and standardize on it. A vendor's pricing, licensing, packaging, sales, marketing, distribution, support, and professional services are critically important for it to achieve this degree of universal--or at least widespread--adoption. Also, sometimes, what holds a vendor back from broad appeal is a marketplace perception issue that may be several years out of date but is still a tangible competitive handicap.

One way of interpreting the Wave is that the higher-scoring vendors have the least “niche-y” solutions on the market. Of course, a niche may be a large one, as measured by the number of actual or potential customers, but it’s still a potential competitive handicap if a vendor is having difficulty breaking out of it--or doesn't realize it hampers its growth prospects. And a niche may be a matter of a vendor’s sales strategy--e.g., selling its DW appliance primarily as an OLAP data-mart accelerator--that has paid off in sales momentum but is becoming a confining pigeonhole. Or the niche may be an architectural specialty--such as a columnar database--that has great strengths for particular EDW-node deployment roles but may be suboptimal for other roles.

Sometimes, vendors position their niche approach as the future of the market as a whole, and as the answer to every EDW requirement that every user might have. And, sometimes, the market disagrees, as expressed through customer demand, or the lack thereof, leaving vendors mystified as to why they're not becoming the pre-eminent market leader.

And sometimes, an emerging niche (i.e., vendor growth-potential-limiting constraint) may not be apparent to the vendors that, heretofore, have assumed that it constitutes the entire EDW market. One such emerging niche is for EDW solutions that have not yet attained petabyte-scale in production customer deployments, in demo environments, or in the lab. In fact, that niche includes the majority of today’s EDW solutions, and the vast majority of I&KM requirements. Some vendors (read the Wave to see who) have moved beyond that sub-petabyte niche, or are just now traversing that threshold, or are soon likely to. Interestingly, most vendors in the EDW Platforms Wave offer a credible case that they’ll soon attain full petabyte-scalability, but only a few had actual customer deployments showing that they’re already there.

But none of this is to be read as vendor destiny. The Wave also scores each vendor’s corporate and product directions, and its momentum in selling into customer-size, vertical, and geographic segments outside its installed base. All of this is to be understood as a vendor attempting to break out of whatever niche(s) its solutions may be concentrated in.

And, indeed, that’s a key take-away from the Forrester Wave for EDW Platforms. All seven of these vendors are rapidly evolving out of the various niches in which their solutions have been deployed. That includes petabyte-scalability. Consequently, you shouldn’t assume--simply because a vendor didn’t demonstrate “well-beyond” one-petabyte scalability for the purpose of gaining a “5” on that Wave criterion in Q1 09--that the vendor won’t be able to demonstrate that capability for you, in their lab, next week.

The EDW market is evolving extraordinarily fast. Clearly, we’ll need to update the EDW Platforms Wave in the coming year or so to keep pace.

Sunday, February 08, 2009

poem We Test Positive

WE TEST POSITIVE

That athletes abuse
our trust ain’t news. They’ll smash and
snort any substance.

That we’ll indulge them
their little lies. No surprise.
The shame’s also ours.

We place our faith in
the stupendous. Superstars
and hidden magic.

Friday, February 06, 2009

FORRESTER blog repost The Forrester Wave™: Enterprise Data Warehousing (EDW) Platforms Q1 2009: The Key Takeaway

The Forrester Wave™: Enterprise Data Warehousing (EDW) Platforms Q1 2009: The Key Takeaway

http://blogs.forrester.com/information_management/2009/02/forrester-wave.html

By James Kobielus

Today we published the first Forrester Wave™ specifically focused on Enterprise Data Warehousing (EDW) Platforms. The final published report is now available on Forrester’s website to clients. Information and Knowledge Management (I&KM) professionals will find it a timely and actionable study of the leading EDW platform vendors: Teradata, Oracle, IBM, Microsoft, SAP, Sybase, and Netezza. I urge you to download and read it, and then engage me, the author-analyst, in inquiries and advisories to help you apply it to your EDW initiatives.

The key takeaway from this Wave is that scalability, flexibility, and affordability are the dominant requirements in today’s budget-stressed EDW platforms market. I&KM professionals are under the gun, trying to keep EDW and business intelligence (BI) costs under tight control while preserving the flexibility to grow and repurpose these investments to support an ever-changing array of decision-support requirements. Hence, an EDW platform--to score well in the Wave--should address the following high-bar requirements:

Extremely scalable: The EDW platform should be scalable to support petabytes of usable data; thousand-plus distributed compute/storage nodes; tens of thousands of concurrent users and queries; many terabytes of daily or continuous data loads; and expanding mixed workloads of reporting, query, OLAP, in-database analytics, real-time analytics, ETL, data cleansing, and other transactions. It should support this extreme scalability through scale-out, shared-nothing MPP, optimized appliances, optimized storage, dynamic query optimization, and mixed workload management technologies.

Extremely flexible: The EDW platform should be flexible enough to support diverse applications, including business intelligence, online analytical processing, data mining, predictive analytics, text analytics, closed-loop business process management, and complex event processing, as well as various deployment roles, including multi-domain data hubs, subject-specific data marts, operational data stores, master data management hubs, staging nodes, analytic data marts, multi-temperature hierarchical storage management and archiving, and source and/or target repositories in data federation environments. It should support this extreme flexibility by being fluid, adaptive, and virtualized; enabling data to be transparently persisted, in diverse physical and logical formats, to an abstract, seamless grid of interconnected memory and disk resources, and delivered with subsecond delay to consuming applications; and ensuring application service levels through an end-to-end, policy-driven, latency-agile, distributed-caching and dynamic query-optimization memory grid.

Extremely affordable: The EDW platform should be affordable for all customer segments and use cases. It should support this extreme affordability through flexible packaging/pricing, including licensed software, modular appliances, and “pay as you go” subscription-based SaaS/cloud offerings.

EDW platforms vendors that can’t address these key requirements--now or in their enhancement roadmaps over the coming 2-3 years--will not survive in this very competitive arena.

As noted above and in my blogpost last week, scalability, performance, and optimization are perhaps the most important criteria in today’s EDW market. And, of course, they are quite difficult to nail down into a single yardstick that does justice to different vendors’ approaches. Nevertheless, I believe this Wave accomplishes that. I have boiled down “scalability, performance, and optimization” (SPO) into a single criterion that defines five profiles (from 5 = most scalable to 1 = least scalable), focusing on the degree of parallelism in the underlying architecture.

For each of the vendors in this Wave, I got a deep dive on their SPO architecture, but I didn’t stop there. I asked each vendor for reference customers, and conducted a structured interview with each. I asked each for a list and description of their largest production customer deployments. And I asked each for published benchmarks, plus all the supporting info on the test environments, scenarios, and criteria. In other words, I applied the standard Forrester Wave methodology.

Essentially, the customer deployment and benchmark data corroborated whether a vendor in fact earned the particular SPO score associated with their architectural approach. Clearly, there were plenty of gray areas. Also, quite clearly, vendors had plenty of comments on the definitions of the SPO scales, and on where they fell on this spectrum. And, of course, many pointed out that being scored, say, a “2” rather than a “4” or “5” didn’t necessarily mean they were slower, less efficient, or incapable of processing various EDW and BI workloads. It also didn’t mean that they couldn’t, in practice and in customer deployments, push the scalability and speed envelope that one would associate with their architecture. Architecture isn’t destiny, but it definitely sets SPO constraints, which is the whole point of the scoring on this criterion in this Wave.

All the vendor feedback was excellent and helped me tweak and tune the scale to fit the EDW market’s current and emerging state of the art. With that said, here are the final SPO scales in this Wave:

5 = scale out through shared-nothing massively parallel processing (MPP), up to 100-1000+ storage/compute nodes in single-tier grid of compute/storage nodes, and well beyond 1000s of terabytes (TBs) of online, usable production data across distributed deployment

4 = scale out in the storage tier to 100-1000+ nodes and/or up to around 1000 TBs of online, usable production data, but lacking support for single-tier-grid shared-nothing MPP and/or lacking the ability to scale out to 100-1000+ nodes in the compute tier

3 = scale-out through shared-nothing MPP and/or clustering, up to 2-100 storage and/or compute nodes and up to 100s of TBs of online, usable production data across distributed deployment

2 = scale-up through symmetric multiprocessing (SMP), and up to 10s of TBs of online, usable production data, and scale-out in a clustered deployment of 2-99 compute nodes

1 = scale-up through SMP and up to 10s of TBs of online, usable production data on a single-node deployment
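Read as a decision rule, the five profiles above can be sketched in code. This is purely illustrative shorthand--the field names and thresholds below are hypothetical simplifications, not the formal Wave criteria:

```python
# Illustrative sketch of the five-point SPO scale described above.
# Field names and thresholds are informal shorthand, not Forrester's
# formal Wave evaluation methodology.

from dataclasses import dataclass

@dataclass
class EDWDeployment:
    shared_nothing_mpp: bool  # single-tier shared-nothing MPP grid?
    clustered: bool           # scale-out clustering of any kind?
    compute_nodes: int        # nodes in the compute tier
    storage_nodes: int        # nodes in the storage tier
    usable_tb: float          # online, usable production data, in TB

def spo_score(d: EDWDeployment) -> int:
    """Map a deployment profile to the 1-5 SPO scale."""
    # 5: single-tier shared-nothing MPP grid, 100+ nodes, 1000s of TBs
    if d.shared_nothing_mpp and d.compute_nodes >= 100 and d.usable_tb >= 1000:
        return 5
    # 4: storage-tier scale-out to 100+ nodes and/or ~1000 TBs, but
    #    lacking single-tier-grid MPP or compute-tier scale-out
    if d.storage_nodes >= 100 or d.usable_tb >= 1000:
        return 4
    # 3: MPP and/or clustering, 2-100 nodes, 100s of TBs
    if (d.shared_nothing_mpp or d.clustered) and d.compute_nodes >= 2 \
            and d.usable_tb >= 100:
        return 3
    # 2: SMP scale-up plus clustered scale-out of 2-99 compute nodes
    if d.clustered and 2 <= d.compute_nodes <= 99:
        return 2
    # 1: SMP scale-up on a single-node deployment
    return 1
```

The ordering of the checks matters: each profile is evaluated from most to least parallel, so a deployment earns the highest profile whose conditions it satisfies.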

To see how the vendors ranked, you’ll need to read the Wave. Or engage me in an inquiry or advisory. Or, preferably, both.

Thursday, February 05, 2009

FORRESTER blog repost Is BI Recession-Proof...or Are We Just Bracing for the Next Shoe to Drop?

Is BI Recession-Proof...or Are We Just Bracing for the Next Shoe to Drop?

http://blogs.forrester.com/information_management/2009/02/is-bi-recession.html

By James Kobielus

The economic outlook isn’t all gloom and doom. Bright spots remain in some substantial IT growth sectors--most important, in the sprawling business intelligence (BI) market.

In the past month, we’ve seen solid financials--in some cases, record growth and profitability numbers--from leading BI vendors, including SAP (Business Objects), IBM (Cognos), and privately held SAS Institute. Oracle and Microsoft also seem to be doing fairly well with BI-related revenues. Even vendors that participate in BI environments only as providers of data warehousing (DW) solutions (e.g., Sybase) or data integration (DI) middleware (e.g., Informatica) are reporting outstanding financials all the way through year-end 2008. That includes the period just passed when the world economy began to spiral wildly out of control.

What’s going on here? Is the BI industry recession proof, or is the next soft-economy shoe--or heavy hammer--poised to drop on this segment’s unsuspecting heads? To some extent, I suspect that BI’s relative, perhaps short-lived, immunity from tough times is due to its use as a “recession-busting” tool for identifying areas to cut costs, consolidate operations, and boost revenues. SAS CEO Jim Goodnight articulated this view in his recent statement: “In tough times, companies focus on optimizing their businesses.”

But excuse me if I take a slightly cynical perspective on any sector’s claim to be recession proof. I take issue with the notion that people have no choice but to use one particular vendor’s or sector’s product, no matter how bad the economy gets. For example, the “people gotta eat” argument didn’t translate into general prosperity in the agricultural sector during the Great Depression. People found ways to survive on less store-bought food--or less meats and sweets--or larger backyard gardens--or handouts.

As regards the indispensability of BI, I suspect the actual market dynamic is a bit more nuanced than we’ve been led to believe. What’s interesting about the latest round of BI vendor earnings numbers is that some are lackluster and/or declining. Case in point: MicroStrategy’s recent report of a 12 percent decline in BI license revenues in Q4 2008, compared to the same quarter a year earlier (bear in mind that the vendor’s product licensing revenues grew by 5 percent for the year as a whole, due to a strong start).

Why is MicroStrategy reporting flaccid Q4 numbers while SAP, IBM, SAS, and others are doing OK? I suspect that one of the key factors is the encroaching commoditization of “core BI” stacks, with concomitant declines in prices. Forrester defines “core BI” as solutions that incorporate any or all of the following features for information access, delivery, presentation, and user-side sharing: reporting, query, OLAP, dashboarding, Microsoft Office integration, portal integration, alerting/notification, and interactive visualization. Clearly, this particular segment is overcrowded, with dozens of vendors--including open-source and software-as-a-service providers--that are becoming as indistinguishable as polar bears in a blizzard. Though MicroStrategy is a well-established, widely adopted core-BI vendor, it does not have much beyond that common denominator feature set.

Another trend that’s making it more difficult for MicroStrategy and similar vendors to grow is enterprise information managers’ desire to consolidate their analytics environments down to a few core vendors--which, more often than not, provide data warehousing (DW), data mining, data quality, and other solutions in addition to core BI. MicroStrategy is mostly missing from those other markets, so it may be experiencing problems growing its footprint among existing customers.

Yet another trend that explains the soft MicroStrategy numbers may be enterprises’ preference for BI vendors that can offer a full range of prepackaged “business content”--such as analytics tailored to specific vertical and horizontal requirements--to extend and leverage the core BI platform. That’s where the likes of SAP/Business Objects, IBM/Cognos, Oracle/Hyperion, and SAS come into the picture--and the MicroStrategies of the BI market are mostly absent.

Just as important to these latter vendors’ ongoing success are strong professional services organizations, partnerships, and customer relationships. Only by deepening their domain expertise and customer intimacy--and pouring this new “business content” into packaged applications and solution accelerators--can BI vendors realize healthy margins going forward. Forrester refers to these packaged domain analytics applications as “business performance solutions” (BPS).

SAS’ Goodnight alluded to this key BI-vendor success imperative in his recent press release: “We achieved our 33rd year of revenue growth in the worst economy most can remember. This growth is a direct result of being a stable privately held company, which allows us to invest in long-term relationships with employees and customers.”

Where SAS and some other vendors are concerned, another key differentiator that’s helping them stay strong is emphasis on BI’s chief growth segments: most notably, advanced analytics, which encompasses predictive analytics (PA), data mining (DM), and text mining/analytics. Deep domain expertise and customer intimacy are also keys to vendor growth in advanced analytics. Indeed, the range of tailored analytic applications that leverage advanced analytics features continues to grow, though the number of PA/DM “workbench” providers on the market remains fairly stable.

At heart, BI is a relationship business. The BI solution provider should be a committed partner helping customers to address their most burning success imperatives. Customers won’t forget if you helped them out of a tight situation, such as a nasty patch of sluggish economy. They’ll keep coming back to you time and again.

Steady repeat business--loyal customers--indispensable brands--that’s the best business model--just ask Warren Buffett.

Sunday, February 01, 2009

poem Irr

IRR

Scars are weeks to heal.
Cold sores a matter of days.
Sunburns forever.

Imprints of ancient
days out, life indelible,
this solar tattoo.

Dust are damage. A
matter of maintenance. All
coughing and sweeping.

poem That'll or This'll

THAT'LL OR THIS'LL

I can't remember
if I cried. Was eighty-one
days the day they died.

And the three men I
admittedly'd never heard.
Not yet into rock.

Long long time ago.
Still hadn't acquired Don's sense
of foreshadowing.

Friday, January 30, 2009

FORRESTER blog repost What’s the Fastest, Most Scalable Data Warehouse Platform? Well, If You Must Ask.....

What’s the Fastest, Most Scalable Data Warehouse Platform? Well, If You Must Ask.....

http://blogs.forrester.com/information_management/2009/01/whats-the-faste.html

By James Kobielus

Welcome to the life of a data warehousing (DW) industry analyst. I’m often asked by Information and Knowledge Management (I&KM) professionals to address the perennial issue of which commercial DW solution is fastest or most scalable. Vendors ask me too, of course, in the process of attempting to suss out rivals’ limitations and identify their own competitive advantages.

It’s always difficult for me to provide I&KM pros and their vendors with simple answers to such queries. Benchmarking is the blackest of black arts in the DW arena. It’s intensely sensitive to myriad variables, many of which may not be entirely transparent to all parties involved in the evaluation. It’s intensely political, because the answer to that question can influence millions of dollars of investments in DW solutions. And it’s a slippery slope of overqualified assertions that may leave no one confident that they’re making the right decisions. Yes, I’m as geeky as the next analyst, but I myself feel queasy when a sentence coming out of my mouth starts to run on with an unending string of conditional clauses.

If we industry analysts offer any value-add in the DW arena’s commentary cloud, it’s that we can at least clarify the complexities. Here is how I frame the benchmarking issues that should drive I&KM pros’ discussions with DW vendors:

• Vendor DW performance-boost claims (10x, 20x, 30x, 40x, 50x, etc.) are extremely sensitive to myriad implementation factors. No two DW vendors provide performance numbers that are based on the same precise configuration. Also, vendors vary so widely in their DW architectural approaches that each vendor can claim that no rival could provide a comparable configuration to its own. For the purpose of comparing vendor scalability for the recently completed Forrester Wave on Enterprise Data Warehousing Platforms (to be published imminently), I broke out the approaches into several broad implementation profiles. Each of those profiles (which you’ll have to wait for the published Wave to see) may be implemented in myriad ways by vendors and users. And each specific configuration of hardware, software, and network interconnect of each of those profiles may be optimized to run specific workloads very efficiently--and be very suboptimal for others.

• Vendor DW apples-to-apples benchmarks depend on comparing configurations that are processing comparable workloads. No two DW vendors, it seems, base their benchmarks on the same specific set of query and loading tests. Also, no two vendors’ benchmarks incorporate the exact same set of parameters--in other words, the same query characteristics, same input record counts, same database sizes, same table sizes, same return-set sizes, same number of columns selected, same frequency distribution of values per column, same number of table joins, same source-table indexing method, same mixture of relational data and flat/text files in loads from source, same mixture of ad-hoc vs. predictable queries, same use of materialized views and client caches, and so forth.

• Vendor DW benchmark comparisons should cover the full range of performance criteria that actually matter in DW and BI deployments. No two DW vendors report benchmarks on the full range of performance metrics relevant to users. Most offer basic metrics on query and load performance. But they often fail to include any measurements of other important DW performance criteria, such as concurrent access, concurrent query, continuous loading, data replication, and backup and restore. In addition, they often fail to provide any benchmarks that address various mixed workloads of diverse query, ETL, in-database analytics, and other jobs that execute in the DW.

• Different vendors’ DW benchmarks should use the same metrics for each criterion. Unfortunately, no two vendors in the DW market use the same benchmarking framework or metrics. Some report numbers framed in proprietary benchmarking frameworks that may be maddeningly opaque--and impossible to compare directly with competitors. Some report TPC/H, but often only when it puts them in a favorable light, whereas others avoid that benchmark on principle (with merit: it barely addresses the full range of transactions managed by a real-live DW). Others report “TPC/H-like” queries (whatever that means). Still others publish no benchmarks at all, as if they were trade secrets and not something that I&KM pros absolutely need to know when evaluating commercial alternatives. Sadly, most DW vendors tend to make vague assertions about “linear scalability,” “10-200x performance advantage [against the competition],” and “[x number of] queries per [hour/minute/sec] in [lab, customer production, or proof of concept].” Imagine sorting through these assertions for a living--which is what I do constantly.

• DW benchmark tests should be performed and/or vouched for by reliable, unbiased third parties--i.e., those not employed by or receiving compensation from the vendors in question. If there were any such third parties, I’d be aware of them. Know any? Yes, there are many DW and DBMS benchmarking consultants, but they all make their living by selling their services to solution providers. I hesitate to recommend any such benchmark numbers to anybody who seeks a truly neutral third party.

• DW solution price-performance comparisons require that you base your analysis on an equivalently configured/capacity solution stack--i.e., hardware, software--for each vendor and also the full lifetime total cost of ownership for each vendor/solution. That’s a black art in its own right. Later this year, I’ll be doing a study that provides a methodology for estimating return on investment for DW appliance solutions.
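The comparability caveats above boil down to a simple discipline: before putting two vendors' numbers side by side, verify that the runs were configured identically. Here is a minimal sketch of that check, with hypothetical parameter names standing in for the factors listed above:

```python
# Minimal sketch of an apples-to-apples check for DW benchmark runs.
# The parameters here are hypothetical stand-ins for the factors
# discussed above (workload shape, data volumes, join counts, etc.).

from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class BenchmarkConfig:
    database_size_tb: float
    input_record_count: int
    table_joins: int
    concurrent_queries: int
    adhoc_query_pct: float   # share of ad-hoc vs. predictable queries
    materialized_views: bool

def incomparable_on(a: BenchmarkConfig, b: BenchmarkConfig) -> list:
    """Return the parameters on which two benchmark runs differ.
    An empty list means an apples-to-apples comparison is defensible."""
    da, db = asdict(a), asdict(b)
    return sorted(k for k in da if da[k] != db[k])
```

Only when the returned list is empty do raw throughput or latency numbers from the two runs support a direct comparison; every differing parameter is a confounding variable that a vendor's marketing is unlikely to call out.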

As an entirely separate issue, it does no good, competitively, for a DW vendor to assert performance enhancements relative only to a prior configuration or version of its own product or technology. The customer has no easy assurance that the vendor is comparing its current solution against a well-configured, well-engineered example of the prior one. The vendor’s assertion of order-of-magnitude improvement over a prior version of its own product may be impressive, but only as a statement of how much it has improved its own technology, not how it fares against the competition. And such “past-self comparisons” can easily backfire on the vendor: customers and competitors may use them to insinuate that there were significant flaws or limitations in its legacy products.

Here’s my bottom-line advice to all DW vendors on positioning your performance assertions. Frame them in the context of the architectural advantages of your specific DW technical approach. Publish your full benchmark numbers with test configurations, scenarios, and cases explicitly spelled out. To the extent that you can aggregate hundreds of terabytes of data, serve thousands of concurrent users and queries, process complex mixtures of queries, joins, and aggregations, ensure subsecond ingest-to-query latencies, and support continuous, high-volume, multiterabyte batch data loading, call all of that out in your benchmarks. To the extent that any or all of that is in your roadmap, call that out too.

Here’s my bottom-line advice to I&KM pros: Don’t expect easy answers. Think critically about all vendor-reported DW benchmarks. And recognize that no one DW platform can possibly be configured optimally for all requirements, transactions, and applications.

If there were any such DW platforms, I’d be aware of them. Know any?

Wednesday, January 28, 2009

BriefingsDirect Analysts Discuss Service Oriented Communications, Debate How Dead SOA Really Is

All:

Check out the full transcript at http://briefingsdirect.blogspot.com/2009/01/briefingsdirect-analysts-discuss.html

Edited transcript of BriefingsDirect Analyst Insights Edition podcast, Vol. 36, on communications as a service and the future of SOA in light of hard economic times. [[and further edited/annotated in a gratuitous, unnecessary way by James Kobielus on the evening of January 28, 2009 for the purpose of populating this blog with sweet new stuff without having to work very hard]]

Panelists: Dana Gardner (Interarbor Solutions), Tony Baer (Ovum), Jim Kobielus (Forrester Research), Joe McKendrick (independent), Dave Linthicum (Linthicum Group), JP Morgenthal (Burton Group), Anne Thomas Manes (Burton Group), Todd Landry (NEC Sphere)

*********************************

Gardner: I'm your host and moderator, Dana Gardner, principal analyst at Interarbor Solutions. Our topic this week, the week of Jan. 12, 2009, starts and ends with service-oriented architecture (SOA) -- dead or alive?

We're going to begin with an example of what keeps SOA alive and vibrant: the architectural approach's ability to grow to include new service types and therefore grow more valuable over time.

We're going to examine service-oriented communications (SOC), a variation on the SOA theme and a way of ushering a wider variety of services -- in this case, communications and collaboration services from the network -- into business processes and consumer-facing solutions. We're joined by a thought leader on SOC, Todd Landry, the vice president of NEC Sphere.

In the second half of our show, we'll revisit the purported demise of large-scale SOA and find where the wellsprings of enterprise architectural innovation and productivity will eventually come from.

We’ll also delve into the psychology of IT. What are they thinking in the enterprise data centers these days? Somebody’s thoughts might resuscitate SOA or perhaps nail even more spikes into the SOA coffin.

*********************************

Baer: I hate to use a cliché, but it’s like the last mile of enterprise workflow and enterprise processes. The whole goal of workflows was trying to codify what we do with processes and trying to implement our best practices consistently. Yet, when it comes to verbal communications, we’re still basically using technology that began with the dawn of the human voice eons ago.

Gardner: I've seen people use sign language.

Baer: Well, maybe that, too, and smoke signals.

Gardner: A certain finger comes up from time to time in some IT departments.

Kobielus: At least the use of a trusty index DTMS finger.

Gardner: There you go.

*********************************

Gardner: Jim Kobielus, isn’t there more to this on the consumer side as well? We've got these hand-held devices that people are using more and more with full broadband connectivity for more types of activities, straddling their personal and business lives and activities. We know Microsoft has been talking about voice recognition as a new interface goal for, what, 10 years now. What’s the deal when it comes to user habits, interfaces, and having some input into these back-end processes?

An Important Extension

Kobielus: That’s a huge question. Let me just unpeel the onion here. I see SOC as very much an important extension of SOA or an application of SOA, where the service that you're trying to maximize, share, and use is the intelligence that’s in people’s heads -- the people in the organization, in your team. You have various ways in which you can get access to that knowledge and intelligence, one of which is by tapping into a common social networking environment.

In the consumer sphere, the intelligence you want to gain access to resides in mobile assets -- human beings on the run. Human beings have various devices and applications through which they can get access to all manner of content and to each other.

So, in the consumer world, a lot of the SOC value proposition is in how it supports social networking. Facebook-style environments provide an ever more service-oriented setting within which people can mash up not only their presence and profiles, but all of the content they generate on the fly. Possibly, they can tag it on the fly as well, and that might be relevant to other people.

There is a strong potential for SOC and that consumer Facebook-style paradigm of sharing everybody’s user-generated content that’s developed on the fly.

*********************************

Text-Mining Capability

Kobielus: One of the services in the infrastructure of SOC that will be critically needed in a consumer or a business environment is a text-mining capability within the cloud. It can go on the fly to all this unstructured text that has been generated, and identify entities, relationships, and sentiments to make that information quickly available. Or, it can make those relationships quickly available through search or other means to people who are too busy to do a formal search or even any manual tagging. We simply want the basic meanings to just bubble up out of the cloud.

*********************************

Kobielus: I want to add one last observation before we go to the "SOA is dead" topic. In order for this integration to happen in the cloud, the cloud providers need to federate their service registries with those of their enterprise customers. But humans are reachable through a different type of registry called a directory, via the Lightweight Directory Access Protocol (LDAP) and other means.

Cloud providers need to federate their identity management and directory environments with those of their customers. I don’t think the industry has really thought through all the federation issues needed to make this service-oriented, business-communications-in-the-cloud scenario a reality any time soon.

Gardner: So we need an open Wiki-like phone book in the sky.

Kobielus: Exactly.


*********************************

Kobielus: The whole "SOA is dead" theme struck a responsive chord in the industry, because there's a lot of fatigue, not only with the buzzword, but the very concept. It’s been too grandiose and nebulous. It’s been oversold, expectations have been built up too high, and governance is a bear.

We all know the real-world implementation problems with SOA, the way it’s been developed and presented and discussed in the industry. The core of it is services. As Anne indicated, services are the unit of governance that SOA begot.

We all now focus on services. Now, we’re moving into the world of cloud computing and, you know what, a nebulous environment has gotten even more nebulous. The issues with governance, services, and the cloud -- everything is a service in the cloud. So, how do you govern everything? How do you federate public and private clouds? How do you control mashups and so forth? How do you deal with the issues like virtual machine sprawl?

The range of issues now being thrown into the big SOA hopper under the cloud paradigm is just growing, and the fatigue is going to grow, and the disillusionment is going to grow with the very concept of SOA. I just want to point that out as a background condition that I’m sensing everywhere.

*********************************

Kobielus’ comments on all the above, here on the evening of January 28 by himself:

• “Trusty index DTMS finger”? What does that mean? I never said it. Whoever transcribed the audio misattributed it. What does “DTMS” stand for anyway? Go listen to the audio playback and tell me whether I actually said it. I’m too lazy to do so. Also, I can’t stand listening to my own voice (yeah, I know, you’d think otherwise, wouldn’t you, given how verbal I am).
• Mash up each other’s presence, content, intelligence? A virtual mashpit, so to speak. Feels slightly creepy, doesn’t it. Sort of like the movie “The Fly” where the machine went haywire and mashed Jeff Goldblum’s DNA with an insect. Ewwww!
• Glad I mashed up identity management and service governance on the call--mashed and mushed LDAP directories and UDDI registries into each other conceptually--and federated them in that big Venn diagram in the sky. SOA for interpersonal communications depends on populating the governance bus with all that identity “metadata” (e.g., contacts, attributes, profiles, roles, demographics, interests, transactions, behavioral characteristics, clickstream, predictive model scores).
• Text mining will provide the auto-discovery mechanism for all the “identity metadata” that people are self-publishing in un- and semi-structured formats (often without fully realizing it) in the Web 2.0, social networking, wiki world.
• Controlling mashups. Mashup governance. Dave Linthicum introduced that concept a year or two ago, but I still don’t sense any clear feeling among vendors or users that it’s a hot button. I think everybody still regards user-created services (i.e., mashups) as outside the proper scope of SOA. But, then again, what’s the difference between a mashup and a rogue service? The former is not sanctioned by corporate IT but is ostensibly benign and is to be tolerated, if not encouraged or supported. The latter is also unsanctioned, and possibly benign, but under suspicion and to be decommissioned or neutralized at the first opportunity. And what’s the difference between mashups and virtual machine sprawl? The former proliferates but doesn’t necessarily hog resources or disrupt operations, whereas the latter also proliferates and consumes more than its fair share of resources. The relevant distinctions in these cases all concern where specifically a particular created/published service (be it a mashup service, Web service, or cloud service) sits on the governance spectrum in a given organization. Is it a sanctioned/supported or unsanctioned/unsupported service, from the point of view of the service governance “authorities”?
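The governance-spectrum distinctions above reduce to a toy decision rule along two axes: is the service sanctioned by corporate IT, and does it hog resources or disrupt operations? The category labels and criteria below are purely my own illustrative shorthand, not any vendor's taxonomy:

```python
# Toy decision rule for the service-governance spectrum sketched above.
# Labels and criteria are my own, purely illustrative.

def classify_service(sanctioned: bool, hogs_resources: bool) -> str:
    """Place a user-created or proliferating service on the governance spectrum."""
    if sanctioned:
        return "sanctioned/supported service"
    if hogs_resources:
        return "sprawl: decommission or neutralize"
    return "mashup: tolerate, maybe encourage"

print(classify_service(sanctioned=False, hogs_resources=False))
# -> mashup: tolerate, maybe encourage
```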

Jim

Saturday, January 24, 2009

poem Jimi Hendrix Woodstock '69 Star Spangled Banner

JIMI HENDRIX WOODSTOCK '69 STAR SPANGLED BANNER

Mangled and flaring:
righties rightfully spat at
that conflagration.

Flailing and flouting:
the left-brained would gladly ban
such deconstruction:

Such flapping of the
banner feedback-streaked and wild
freak celebration.

Thursday, January 22, 2009

poem Inner Lincoln (2009 & 1997)

INNER LINCOLN (2009)

The original
Republican. There he sits:
bearded, observing.

Yes, yet another
old marble giant. Tired and
visibly aging.

Ready to retire.
Let the people complete their
emancipation.

INNER LINCOLN (1997)

Majesty brooding.
Swamp temple for a nation.
Children of Africa unbound your legacy.
Blood war bad marriage depression prima donna generals couldn't break you.
In death you're larger but still humble still alone.
Stone chips flake from your old walls.
Darkness consumes you.

Tuesday, January 20, 2009

fyi What Happens If You Die?

All:

Stimulus: http://blogs.cioinsight.com/knowitall/content001/cio/the_indispensable_man.html?kc=CIOMINEPNL01202009

Response: Another headline that stops you cold (pun sort of intended). It’s intended as a comment on Steve Jobs’ health situation and the ramifications for Apple going forward. Rather than get into a pointless ramble on God, heaven, the soul, legacy, the afterlife, “cemeteries are full of indispensable men,” and the like, I’ll just point out that, where corporate succession planning is concerned, Jobs’ eventual demise was factored into people’s thinking when it was revealed a few years ago that he had battled cancer. And, in fact, he has built a brand that can certainly survive the loss of one or more individuals and keep on prospering. It’s instructive to look at the legacy of Walt Disney (Jobs, in fact, has been called the new Disney because he founded Pixar). Quibble as you might with how Michael Eisner and others have built Disney’s brand in the 43 years since Walt went to heaven, you can’t deny that the founder did something exquisitely right. And that he was just as indispensable in his heyday to his shop as Jobs is now to his. That said, I wish Steve Jobs a speedy recovery. I don’t know him. I’ve never met him. I’m not a big Apple fan, but that’s irrelevant. He’s a human being suffering from a nasty medical condition, and he should be in all our prayers.

Jim

Sunday, January 18, 2009

New Federation Frontiers in the Cloud Services World

All:

Rich Wolski of Eucalyptus had some very interesting insights to share about the role of identity federation among public and private clouds. You'll see those thoughts when my Network World article publishes on February 9.

What Rich said reminded me of this article, which I published in Business Communications Review in fall 2006. It's about the need for multi-layered federation infrastructures for IP networking, and it reminds me that clouds (aka "everything as a service") will also have to federate on every level.

***************************************


New Federation Frontiers in the IP Networking World

Federation is a concept much in vogue these days, and it is being applied to a growing range of telecommunications and computing infrastructures.

Where telecommunications is concerned, federation refers to an established industry practice: interconnection, routing, billing, clearing, revenue settlement, and other negotiated arrangements among affiliated service providers. Network federation allows subscribers to authenticate to their primary carrier and thereby gain single sign-on (SSO) access to services, applications, and content controlled by affiliated service providers. The alternative to federation is centralization—in other words, the long-discarded “Ma Bell” approach of one carrier controlling everything in the connected universe.

If you think about it, the Internet is the most successful network federation of all. It is a global federation of separate, cooperating networks built on universal adoption of the Internet Protocol (IP), Domain Name Service (DNS), Uniform Resource Locator (URL), and other core standards developed under the auspices of the Internet Engineering Task Force (IETF) and other groups. In addition, as noted in “What is Federated Identity Management?” (Business Communications Review, August 2005), federations have been implemented widely in other telecommunications and distributed computing environments. For example, federated location-registry and roaming services enable interconnected cellular carriers to authenticate client devices, route incoming calls, apply appropriate calling features, and bill subscribers correctly. Furthermore, multi-institution automated teller machine (ATM) networks--such as Cirrus--operate under a type of federation, enabling users to log in remotely to their bank accounts from any affiliated institution’s machines.

In addition to these long-established approaches, new frontiers in standards-based cross-carrier federation are opening up. Many of these new federation initiatives fall under the broad architectural umbrella of IP Multimedia Subsystem (IMS). Increasingly, network industry standards groups are using the word “federation” to describe their cross-carrier IP interoperability frameworks. The IMS community is referencing federated identity management (IdM) standards—such as those developed by the Organization for the Advancement of Structured Information Standards (OASIS) and the Liberty Alliance—to facilitate convergence among diverse IP-based services. But they’re going beyond the Web services world’s federation protocols to define federation environments that build on IETF specifications such as DNS, Session Initiation Protocol (SIP), and Electronic Number Mapping (ENUM).
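As a concrete taste of one of those IETF building blocks: under ENUM (RFC 3761), an E.164 telephone number is transformed into a DNS domain name whose NAPTR records can then map the number to SIP URIs and other services. A minimal Python sketch of the name transformation (the DNS lookup itself is omitted):

```python
# Minimal sketch of the ENUM naming convention (RFC 3761): an E.164
# phone number becomes a DNS domain under e164.arpa, whose NAPTR
# records can map it to SIP URIs and other contact services.

def e164_to_enum_domain(number: str, suffix: str = "e164.arpa") -> str:
    """Strip non-digits, reverse the digits, and dot-join under the suffix."""
    digits = [c for c in number if c.isdigit()]
    return ".".join(reversed(digits)) + "." + suffix

print(e164_to_enum_domain("+1-555-123-4567"))
# -> 7.6.5.4.3.2.1.5.5.5.1.e164.arpa
```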

Figure 1 illustrates several layers of federation that are possible in a cross-carrier IP internetworking environment: federated IdM (user and device authentication, SSO, and roaming); federated service creation, provisioning, and coordination; and federated service provider peering (interconnection, policy declaration, addressing, and routing).



The industry is implementing IP federation in all of these areas. Recently, federation has popped up in several new industry standardization efforts, though not all of these new federation approaches have yet been implemented in production carrier internetworking environments.

Most notably, infrastructure vendors are integrating federated IdM SSO protocols within IMS’ Home Subscriber Server (HSS). In addition, the IPsphere Forum is developing commercial and technical frameworks to support federated cross-carrier service provisioning, signaling, and management, incorporating federated IdM standards plus a broad range of WS-* standards. Furthermore, an IETF Working Group is developing standards under which Voice over IP (VoIP) service providers will be able to flexibly federate among themselves through the DNS and ENUM infrastructures.

Federated Identity Management
Many people in the information technology world associate federation primarily with IdM. Federated IdM refers to standards-based approaches for handling authentication, SSO, role-based access control, and session management across diverse organizations, security domains, and application platforms. The most widely implemented federated IdM/SSO protocol standards include Liberty Alliance Identity Federation Framework (ID-FF), OASIS’ Security Assertion Markup Language (SAML), and WS-Federation.
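Whatever the protocol (ID-FF, SAML, or WS-Federation), the service provider's side of federated SSO boils down to a few trust checks on the assertion it receives from an identity provider. The sketch below is schematic Python, not real SAML processing; the issuer names and field layout are invented for illustration:

```python
# Schematic sketch -- not real SAML processing -- of the checks a
# federated service provider applies to an identity provider's SSO
# assertion. Issuer names and the dict layout are invented.
import time

# Assumed circle of trust: IdPs this provider has federation agreements with.
TRUSTED_IDPS = {"idp.carrier-a.example", "idp.carrier-b.example"}

def accept_assertion(assertion: dict, audience: str, now: float = None) -> bool:
    now = time.time() if now is None else now
    return (assertion["issuer"] in TRUSTED_IDPS        # issued by a federation partner
            and assertion["audience"] == audience      # addressed to this provider
            and assertion["not_on_or_after"] > now     # not expired
            and assertion["signature_valid"])          # signature verified upstream

a = {"issuer": "idp.carrier-a.example", "audience": "sp.example",
     "not_on_or_after": time.time() + 300, "signature_valid": True}
print(accept_assertion(a, "sp.example"))  # -> True
```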

Within a typical cross-carrier internetworking environment, federated IdM may be implemented in layers. For converged IP services, federated IdM may involve separate authentications at the application layer and network layer.

Increasingly, the application-layer authentications are relying on any or all of the federated IdM standards mentioned above. In fact, telecommunications carriers in many nations are among the most active implementers of the Liberty Alliance specifications, having deployed Liberty-based IdM services for application-layer account linking, SSO, and trusted attribute sharing across their catalogs of federated third-party services.

Application-layer federated IdM relies on carriers maintaining authoritative directories of user identities, credentials, roles, personalization settings, user preferences, and other attributes. Generally, each federated carrier and service provider operates as an identity provider (IdP), managing the master directory of its own registered subscribers along with their account profiles. Carriers may also provide real-time subscriber session state information to federated partners, thereby facilitating targeting and personalization of service delivery.

Within the underlying IP networking environment, network-layer authentications will increasingly rely on IMS’ HSS infrastructure. HSS is IMS’ principal network-layer IdM environment. Within each carrier’s IMS network, the HSS is a master directory that supports user authentication and authorization, subscriber profile management, session setup and management, call routing, and user roaming within carrier networks. There are no standards specifying exactly how the HSS must interact with its underlying user directory or database repository. Consequently, IMS infrastructure providers and carriers may rely on prevalent directory-access standards, such as the Lightweight Directory Access Protocol (LDAP), or WS-* standards, such as XML Query, for query, update, and management of their HSS repository.

The HSS is a master directory of device and user identity information relevant to network-level authentication, authorization, and roaming. For wireless networks, the HSS manages device and user identities such as International Mobile Subscriber Identity (IMSI), Temporary Mobile Subscriber Identity (TMSI), International Mobile Equipment Identity (IMEI), and Mobile Subscriber ISDN Number (MSISDN). With IMS, the HSS manages additional identities, including IP Multimedia Private Identity (IMPI) and IP Multimedia Public Identity (IMPU), which are URIs associated with single or multiple client devices.

To enable cross-carrier interconnection, SSO, and roaming, HSS environments must be federated through various approaches.

At the network layer of the IMS architecture, cross-HSS federation requires that each carrier also implement a Subscriber Location Function (SLF), and that each HSS and SLF implement the DIAMETER protocol (RFC 3588) for authentication, authorization, and accounting (AAA). Essentially, the HSS/SLF infrastructure in IMS environments is equivalent to the Home Location Register (HLR) and Visitor Location Register (VLR) services in cellular networks (one big difference is that the HSS/SLF is a media-, network-, and device-agnostic functional evolution, hence a functional superset, of the cellular-specific HLR/VLR infrastructure).

DIAMETER is an important piece of the IP networking federation equation. DIAMETER—the IMS successor to the widely adopted Remote Authentication Dial-In User Service (RADIUS) protocol--may be used for cross-carrier federated AAA in conjunction with the HSS. Wireline and wireless ISPs authenticate users at the application layer through DIAMETER/RADIUS servers that interface to authoritative directories of user identities, passwords, and other credentials. DIAMETER/RADIUS servers can serve as proxies, mediating between a front-end authenticating server and one or more back-end directories. As proxies, these servers can be set up to forward authentication and accounting messages to peer authentication servers in other application domains, which is essentially a federated IdM scenario.
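The realm-based proxying pattern just described can be sketched in a few lines. This is an illustrative simplification that assumes the Network Access Identifier convention (user@realm); the realm and server names are invented:

```python
# Hedged sketch of federated AAA proxying: a RADIUS/DIAMETER proxy
# routes a request to a peer server based on the realm portion of the
# user's Network Access Identifier (user@realm). Names are invented.

# Peer AAA servers for realms this proxy has federation agreements with.
HOME_SERVERS = {
    "carrier-a.example": "aaa1.carrier-a.example",
    "carrier-b.example": "aaa.carrier-b.example",
}

def route_request(nai: str) -> str:
    """Pick the peer AAA server for a request, keyed on the NAI realm."""
    user, _, realm = nai.partition("@")
    try:
        return HOME_SERVERS[realm]
    except KeyError:
        raise ValueError(f"no federation agreement with realm {realm!r}")

print(route_request("alice@carrier-b.example"))  # -> aaa.carrier-b.example
```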

In addition, DIAMETER is the principal access protocol that allows distributed IMS functions, no matter what carrier’s domain they happen to be deployed within, to interact with the carrier’s master HSS. Within the IMS infrastructure, the Interrogating Call Session Control Function (I-CSCF) queries the HSS using DIAMETER to retrieve the user location, in order to route a SIP request to its assigned Serving CSCF (S-CSCF). The S-CSCF uses DIAMETER to download user profiles from and upload user profiles to the HSS. And an IMS Application Server—controlling caller ID and other enhanced services—can use DIAMETER to query the HSS for subscriber presence, location, and other account profile data.

Industry efforts are underway to integrate IMS’ federated IdM infrastructure—centered on HSS and DIAMETER—with the Web services world’s IdM environment, in which application-layer directories and federated SSO protocols are predominant. In separate recent initiatives, both Sun Microsystems and Microsoft are positioning their federated IdM platforms and protocols as SSO adjuncts to carrier HSS infrastructures.

In April 2006, Sun and Lucent Technologies announced joint development of infrastructure products that provide standards-based SSO access to federated IMS services. Sun is providing its Sun Java System Federation Manager product to the initiative, whereas Lucent has provided a full IMS product suite that includes HSS and other IdM functionality.

Under this joint development initiative, Lucent is providing a suite of IMS infrastructure products to support federated IdM functionality. The following set of products is indicative of the functional components necessary for federated IdM over HSS in an IMS environment:

• Lucent Datagrid: This product integrates the diverse, federated carrier databases that contain subscriber data relevant to call processing, session management, messaging, and customer care.
• Lucent Unified Subscriber Data Server (USDS): This product provides HSS, HLR, and AAA functionality. It enables HSSs to be deployed in a centralized or decentralized/federated fashion. It allows access to subscriber profile data that is hosted inside or outside of a service provider's network and on diverse network platforms. It also enables operators to provide subscribers with a single service presentation environment even when roaming to another carrier’s network.
• Lucent Session Manager: In conjunction with the USDS, the Session Manager supports SSO, presence management, and session management across diverse, federated IMS-based services. It allows operators to provide integrated voice, data, video, multimedia, and other capabilities over IMS sessions. Through embedding of Sun’s technology, the Session Manager implements the Liberty Alliance federated IdM protocols, which provide SSO within multilateral federated environments. In addition, the product leverages the Liberty protocols to allow subscribers to selectively disclose particular profile information—such as current locations and previously stored preferences—to particular federated application and content providers. Furthermore, the Session Manager can be deployed in several core IMS functional roles, including Call Session Control Function, Service Broker, Service Capabilities Interaction Manager (SCIM), Policy Decision Function (PDF), and Breakout Gateway Control Function (BGCF).
• Lucent Communication Manager: This product provides a unified portal presentation view for subscribers to access their IMS-based converged services from wireline or wireless clients. It supports integrated session control that is agnostic to the underlying application servers serving the subscriber and to the client devices through which services are being accessed.
• Lucent Vortex: This product provides a policy engine that may be distributed throughout a network to support personalization and customization of end-user views of federated IMS-based services. It allows network operators to quickly modify network behaviors to serve the special requirements of particular customer segments and ensure guaranteed quality of service.

Separately, Microsoft has been working with carriers throughout the world to integrate its own application-layer federated IdM stack with their IMS environments. Microsoft published its federated IMS vision in a June 2005 whitepaper called “Connected Services Framework and IMS: A Partnership for Success.”

Microsoft’s and Sun’s visions for federated IdM have many common themes, such as promoting IMS service convergence and aggregation, enabling SSO with trusted user attribute sharing, and implementing WS-* standards pervasively throughout carrier infrastructure. Both of them promote IMS convergence visions under which network-layer IdM services—such as the DIAMETER protocol interfaces—could conceivably be exposed as Web services and invoked from application-layer IdM services (though neither Microsoft nor Sun has committed to exposing DIAMETER APIs as Web services). In other words, they both point to the eventual unification of IMS application- and network-layer federation within a common service-oriented architecture (SOA) framework.

However, their approaches differ in two important respects.

First, Sun has been promoting the Liberty Alliance protocols in its carrier-federation roadmap, and implementing them in its work with Lucent. Microsoft, by contrast, has been implementing the rival WS-Federation protocol, as well as other WS-* specifications—such as WS-Trust—that it has a key role in developing. It’s important to note that the functional differences between the Liberty Alliance protocols and WS-Federation are not great, and that they both support federated account linking, strong authentication, SSO, trusted attribute sharing, privacy protection, and session management over multi-organization circles of trust.

Second, Sun has been working with Lucent to embed federated IdM protocols into the underlying IMS HSS/SLF infrastructure. Microsoft, by contrast, has focused on connecting its federated IdM infrastructure to IMS as an Application Server. In the IMS architectural framework, an Application Server is a functional component that hosts and executes calling and application services. In addition to application-layer SSO, other services that may be implemented as IMS Application Servers include Caller ID, Call Waiting, Push To Talk, Voice Mail, Short Message Service, Presence, and Location-Based Services. From the subscriber’s point of view, an Application Server may be located in the subscriber’s own home carrier’s network, or in a federated third-party network or service provider environment.

It’s not clear which, if either, of the two approaches—Sun’s embedding of federated IdM in IMS HSS vs. Microsoft’s integration of federated IdM as an IMS Application Server—is best. Embedding of industry-standard federation protocols in HSS may pay off for Sun/Lucent if other IMS infrastructure providers and carriers follow their lead.

Integration of the WS-Federation protocol as an IMS Application Server may pay off for Microsoft if it can convince IMS infrastructure providers and carriers to implement this protocol. However, it should be noted that Microsoft’s three-year-old WS-Federation specification has not achieved much adoption in the mainstream federated IdM community.

Federated Service Creation, Provisioning, and Coordination
The IMS framework is missing an important component: specifications that describe how IP services can be flexibly created, provisioned, and coordinated across federated carriers, application partners, and content publishers.

The IPsphere Forum is a telecommunications industry initiative to fill in this missing piece. The forum is an international consortium of service and infrastructure providers developing both the commercial and technical frameworks for federated cross-operator service delivery. The group, which has been in existence for more than a year, has established a formal liaison with the International Telecommunication Union's Telecommunication Standardization Sector (ITU-T).

The IPsphere frameworks, still under development, implement SOA principles within the IMS architectural model. Leveraging WS-* specifications such as Universal Description, Discovery, and Integration (UDDI), IPsphere is defining a standards-based environment for provisioning network infrastructure, application, and content services—composed of modular “service elements”--to carriers, endpoints, and users across federated IP networks. Each service element is a software method or module that is hosted by a provider and published to a UDDI registry as a Web service. End-to-end IP voice, data, and multimedia services may be created from diverse service elements hosted by many federated providers. Providers link the services to their respective network and policy management infrastructures for runtime administration, optimization, and control.
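To illustrate the composition model (this is my own sketch, not IPsphere's actual schema), an end-to-end service can be modeled as a list of Partner-hosted service elements, from which the Seller's settlement obligations fall out directly. Element names, provider names, and prices (in cents per use) are invented:

```python
# Illustrative data model -- my own sketch, not IPsphere's schema -- of an
# end-to-end service assembled from federated providers' service elements.
from dataclasses import dataclass, field

@dataclass
class ServiceElement:
    name: str
    provider: str     # the federated Partner hosting this element
    price_cents: int  # hypothetical settlement owed to the Partner per use

@dataclass
class EndToEndService:
    name: str
    elements: list = field(default_factory=list)

    def settlement(self) -> dict:
        """Per-Partner revenue the Seller owes, aggregated across elements."""
        owed = {}
        for e in self.elements:
            owed[e.provider] = owed.get(e.provider, 0) + e.price_cents
        return owed

svc = EndToEndService("managed-video", [
    ServiceElement("transport-qos", "carrier-a", 2),
    ServiceElement("caching", "cdn-b", 5),
    ServiceElement("drm", "cdn-b", 1),
])
print(svc.settlement())  # -> {'carrier-a': 2, 'cdn-b': 6}
```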

Under the IPsphere commercial/technical framework, application-layer IdM services—such as Liberty Alliance-based SSO—are just one category of infrastructure interactions that may be federated across a “pan-provider” IMS environment. Boundaries between federated providers sit at the intercarrier interface (ICI), as defined under the IMS model. IMS defines Call State Control Function (CSCF) points that can be deployed at network boundaries, such as ICIs, for enforcing federation policies—such as security, trust, quality of service, revenue settlement, and service-level agreement (SLA) accountability--defined by cooperating IP service providers.

Across these network boundaries, federated service provisioning and coordination take place across the following functional service layers, or “strata,” as defined by IPsphere:

• Packet handling stratum: This corresponds to the seven-layer Open Systems Interconnection protocols, as implemented in the IMS model.
• Policy and control stratum: This corresponds to such IMS functional components as the “Policy Decision Function,” “Proxy Call Session Control Function,” “Policy Enforcement Point,” and “Common Open Policy Service.”
• Service signaling stratum: This stratum has no counterpart in the IMS model. It is the IPsphere layer at which federated pan-provider services are created, provisioned, and coordinated from elements hosted in diverse provider environments. Across this layer, the providers’ service creation environments exchange structured messages to manage the phases of federated service setup, execution, and assurance. IPsphere defines several models of federated message-driven service creation, including permissive Internet-like interactions among providers, policy-database-mediated linking of services at the ICI, and explicit linking of services at the ICI by the providers’ respective network management systems.

Under IPsphere’s commercial model, each federated service provider may perform one or both of the following functional roles: “Partners” or “Sellers.” Partners contribute resources in the form of registered component service elements from which Sellers assemble end-to-end services that are sold to users, who are also known as “Buyers.” Partners publish only those services/elements that they want Sellers to deliver to Buyers, using UDDI and other Web services standards for messaging-based service provisioning interactions with Sellers. Partners receive revenues from Buyers via settlement payments rendered by Sellers, who validate, authenticate, and bill the Buyers. Partners may also assemble component services contributed by various federated “Sub-Partners.”

Of course, negotiated contractual relationships determine how Partners, Sub-Partners, Sellers, and Buyers interact throughout the federated service provisioning and delivery life cycle. The flexible IPsphere federation framework allows participating organizations to offer whatever resources they choose, at whatever price the market will bear, under whatever federation partnering arrangements make business sense.

Federated Service Provider Peering
Within the fast-evolving world of IP networks, cross-provider federations are being established to facilitate end-to-end service interoperability.

In their drive to establish an end-to-end alternative to the public switched telephone network (PSTN), VoIP service providers (VSPs) are establishing their own federations. Federation—also called “VoIP peering”—enables VSPs to offer end-to-end “on-net” VoIP calls and other IP multimedia communications services to their own customers and to the customers of any federated VSP. As more VSPs federate with each other—preferably in multilateral arrangements—their collective on-net customer base will reach a critical mass at which VoIP becomes a cost-effective, full-service alternative to the PSTN. The number of calls that a VSP can complete on-net grows in direct proportion to the number of other federated VSPs and their customers.
Founded in 2004 and headquartered in London, XConnect is the world’s largest VSP peering/federation community and operates the world’s largest international private ENUM registry. XConnect provides VSP federation services to more than 150 VSPs and 123 million unique VoIP telephone numbers worldwide. Its VSP services include address protocol interoperability, ENUM interconnect call addressing and routing services, and authentication and identity services. In addition, XConnect provides multi-protocol interoperability, VoIP call security, and Spam over Internet Telephony (SPIT) prevention services to VSP members.
Separately, the IETF’s Session Peering for Multimedia Interconnect (SPEERMINT) Working Group is developing standards under which VSPs will be able to flexibly federate amongst themselves. The SPEERMINT specifications leverage the basic VoIP standards: SIP, Real-time Transport Protocol (RTP), and H.323. In addition, SPEERMINT is placing heavy reliance on DNS and the emerging DNS-integrated ENUM directory infrastructure to support a ubiquitous VSP federation address and policy administration environment.

Under SPEERMINT’s specifications, a federation is defined as “a group of VSPs [that] agree to receive calls from each other via SIP, agree on a set of administrative rules for such calls [such as settlement and abuse handling], and agree on specific rules for the technical details of the interconnection.” A VSP declares its membership in a federation by publishing to DNS a “domain policy” stating the conditions under which it is willing to accept incoming communications per the rules of the federation. The specifications define the structure of these domain policies and the general approach for publishing them to DNS, using Dynamic Delegation Discovery System (DDDS) DNS records.

Under SPEERMINT’s approach, each VSP federation would identify itself by a unique URI, set membership eligibility criteria, define its internal policies and rules, and determine how to communicate those rules to member VSPs. SPEERMINT recommends but does not require that VSP federations use URLs to point to documents describing federation policies and rules.

Some of the VSP-federation policies, rules, and membership conditions that might be described in these documents include:

• Federated VSPs agree to use federation-designated ENUM infrastructure to translate existing numeric phone numbers to SIP addresses using DNS to facilitate on-net VoIP call routing;
• Federated VSPs agree to accept SIP-based calls from each other via the public Internet, as long as each call uses Transport Layer Security (TLS) over Transmission Control Protocol and presents an X.509 certificate signed by a federation-designated public key infrastructure certificate authority;
• Federated VSPs agree to accept only those SIP-based calls from each other that were transmitted over a federation-wide virtual private network;
• Federated VSPs agree to accept all SIP-based calls from each other that were originated from within the same country;
• Federated VSPs agree to accept only those SIP-based calls from each other that were routed through a central, federation-designated SIP proxy;
• Federated VSPs agree to have revenue settlements for calls from each other administered by a federation-designated clearinghouse; and
• Federated VSPs agree to use firewalls and other perimeter security devices to block SIP calls that violate federation-administered anti-SPIT rules.
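
The ENUM translation in the first rule above follows a deterministic mapping (defined in RFC 3761): keep only the digits of the E.164 number, reverse them, dot-separate them, and append a registry suffix; the resulting domain is then queried for NAPTR records that yield a SIP URI. A minimal sketch of the mapping step, with the sample phone number purely illustrative:

```python
def e164_to_enum_domain(phone_number: str, suffix: str = "e164.arpa") -> str:
    """Map an E.164 phone number to its ENUM DNS domain (RFC 3761):
    keep only the digits, reverse them, dot-separate, append the suffix."""
    digits = [c for c in phone_number if c.isdigit()]
    return ".".join(reversed(digits)) + "." + suffix

# A federation's ENUM infrastructure would answer a NAPTR query against
# this domain with a record pointing at the callee's on-net SIP URI.
print(e164_to_enum_domain("+1-202-555-0123"))
# -> 3.2.1.0.5.5.5.2.0.2.1.e164.arpa
```

A private federation registry would typically substitute its own suffix for the public e164.arpa tree; the actual NAPTR lookup and SIP URI resolution are left out of this sketch.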

The SPEERMINT working group also points out that the same DNS-enabled federation approach may be used for peering among providers of SIP, instant messaging (IM), and other IP application services.

Though the SPEERMINT group doesn’t directly acknowledge the IPsphere Forum’s work, it’s clear that the two industry initiatives are complementary. The SPEERMINT effort defines an IP environment under which providers of a particular service—VoIP calling—may federate the policies under which they connect their users. IPsphere, by contrast, defines a larger IMS-based technical environment within which VSPs can provision and coordinate end-to-end VoIP and other services that conform to federation policies.

Likewise, the SPEERMINT and IPsphere frameworks require that end users and their devices authenticate using federated IdM protocols, at both the application layer (in the context of SOA and Web services) and network layer (in the context of IMS’ HSS). So there’s an important and growing role for the Liberty Alliance, SAML, and other federated IdM protocols in IP, IMS, SIP, VoIP, and IPTV federations.

Federation in a complex IP internetwork is a many-layered thing. In fact, federation—on many levels—is the key to convergence of diverse, pan-provider, multimedia IP services. Every new carrier, hosted application provider, and content publisher in the IMS fabric is another domain that must federate with existing providers in order to do business online.

***************************************

Back in the day I was a federation analyst. And an SOA analyst. Still am, but I've moved on.

Jim

fyi Tom Cruise says grew up wanting to kill Hitler

All:

See: http://news.yahoo.com/s/nm/20090118/en_nm/us_cruise_valkyrie_korea_1

My take: This is one of those headlines that absolutely stops you in your tracks. Didn’t Tom realize that Adolf had done the job himself--and long before the infant Mr. Mapother crawled out of L. Ron Hubbard’s stork-shaped spacecraft? By the way, “Valkyrie”—good popcorn movie, though, in all frankness, do we need yet another Nazi tale on the big screen? And do we need another reason to admit that, yes, Mr. Cruise has talent, but that he carries so much personal baggage on screen that it gets in the way of our viewing pleasure? In terms of big stars of my generation (true confession: Cruise is 5 years younger than me), Tom Cruise is a bit like Madonna: so absolutely cold and calculating that every new project seems more engineered than felt. You tend to focus on their ambition more than their message. And for all his talent as a politician, Barack Obama has a bit of that quality as well. A bit too tightly wound, though he certainly knows what he’s doing and why he’s doing it. I wish him well in his upcoming new job. I voted for him. But I’m still not sure who he is.

Jim

imho SOA 2.0: Outlook Cloudy, But With Scattered Patches of Promise

All:

Ha ha. Yes, this analyst too is not averse to the usual “last wave/next wave” tropes, such as “xx 2.0” or “xx is dead.” So sue me.

Clouds are SOA 2.0. Cloud computing is, to a great extent, the future of SOA. However, this paradigm raises the SOA stakes while also accentuating the risks.

To the extent that organizations use governance to harness the richness of cloud environments, they will be able to supercharge their SOA initiatives while radically improving scalability and cost-effectiveness. Leveraging distributed cloud platforms, the next-generation SOA will be more fluid, flexible, and virtualized, managing ever more massive data sets and providing the agility to handle more complex mixed workloads of transactional applications, business intelligence, data mining, enterprise service bus, business process management, and other functions.

Clouds complicate the SOA governance picture, but it’s not as if many enterprises already have exemplary governance practices. In the real world, cloud computing, like SOA implementations, is often an ungovernable mess. By encouraging widespread reuse of scattered software components, SOA threatens to transform the enterprise application infrastructure into a sprawling, unmanageable hodgepodge of ad-hoc services. Without proper governance, SOA could allow anyone anywhere to deploy a new cloud service any time they wish, and anyone anywhere to invoke and orchestrate that service—and thousands of others—into ever more convoluted messaging patterns. In a governance-free environment, coordinated cloud service planning and optimization become frustratingly difficult. In addition, rogue cloud services could spring up everywhere and pass themselves off as legitimate nodes, thereby wreaking havoc on the delicate trust that underlies production SOA.

SOA governance is maturing as a discipline, while cloud computing—the new galaxy in which services will burst forth—is anything but. Unfortunately, the cloud arena may continue to evolve so fast over the next several years that it will be difficult for consensus service-governance practices to coalesce. Still, emerging cloud services can benefit from the many lessons learned by enterprise SOA governance implementers. Most important, you need a service catalog that maintains metadata about services, enables you to control the development and construction of services, and publishes the visibility and availability of services to consumers. Also, federation agreements should be set up to auto-provision service definitions between public clouds and enterprises’ Web services, REST, and other application environments.
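
To make the service-catalog point concrete, here is a sketch of the governance gate it implies; every name and the approval policy are invented for illustration, not drawn from any particular registry product:

```python
from dataclasses import dataclass, field

@dataclass
class ServiceEntry:
    name: str
    endpoint: str
    owner: str
    approved: bool = False  # governance gate: flipped only by a review process

@dataclass
class ServiceCatalog:
    """Minimal governed catalog: anyone may register a service with its
    metadata, but only approved entries are published to consumers."""
    _entries: dict = field(default_factory=dict)

    def register(self, entry: ServiceEntry) -> None:
        self._entries[entry.name] = entry

    def approve(self, name: str) -> None:
        self._entries[name].approved = True

    def published(self) -> list:
        # Consumers see only reviewed, approved services; rogue
        # registrations stay invisible until they pass governance.
        return [e.name for e in self._entries.values() if e.approved]

catalog = ServiceCatalog()
catalog.register(ServiceEntry("billing", "https://cloud.example/billing", "fin-team"))
catalog.register(ServiceEntry("rogue-svc", "https://unknown.example/x", "nobody"))
catalog.approve("billing")
print(catalog.published())  # ['billing']
```

The design choice being sketched: registration is cheap and open, but publication to consumers is the controlled step, which is what keeps ad-hoc cloud services from silently joining the production fabric.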

So the outlook for strong service governance in this brave new paradigm remains cloudy, but with scattered patches of promise.

Jim

Friday, January 16, 2009

twitter What I'm doing now...

...random self-indulgent stuff
...thanks for asking

poems 1982

CARTOON LOGIC

Drifting dream. Sitting on a beam. Two cells on a shaft. One fore, one aft. Protean complements. Sunday supplements. Brisk and happy beyond beyond belief, I take me in and what a relief. Every room well maintained. Every splash of blue contained within the lines. Rushing, shifting. Take me deep and go on drifting.

CENTER OF PAIN

Something. I can see the whole world fall away and there nothing but this pain. This morning I could feel the sheets rustling the leaves shaking but still this pain. Tonight I can relax and dwell in gray wet cloudy and fold my pain into nothing. Nothing.

ORGANISM

How strange when the infra-red ultra-thin membrane of dream blood ruptures and spills a dread film between the seeing-eye and a world itching and twitching inflamed rejecting a donor tissue.

PRASEODYMIUM

I wOUld wOrshIp grEEn glAss, bUt drIvEn tO cOnsIdEr...cOmpOsItIOn, thE rElAtIvE EAsE wIth whIch shArp pAnEs, slAppEd Up thrOUgh tIdY frAmEs, fIltEr whItE thrOUgh flAttEnEd sAlts, pOUndEd IntO rUdE AllIAncE...I Opt OUt. YEllOw Is hOw thEsE skYlInEs fAll, pOUrEd lIke sAndY sOIl In smEArY vAlEncE.

poems Dedication page of "Workflow Strategies" (1997) and translation (by Marian Lakomy) for Polish-language version (IDG Books Worldwide)

AND SO IT FLOWS

Starts and fits and somehow it works. Pieces and bits and blood on the pages. Rush and push and squeeze it between times. Scream and stream and give it a name.

I TAK TO PLYNIE

Zaczynamy, dopasowujemy i jakos to dziala. Czesci i bity i krew na stronicach. Spieszymy sie, wpychamy i upychamy w terminarzu. Plakac i plynac i nadac mu imie.

poems Several OO-themed ghostly pieces I forgot I'd written, back pre-blog

ONTO ONCO

One code keeps sticking, spraying, stubborn, I go, ego, all-defying. One code won’t shut up. One code won’t shut down. One code all around. One code.

OO

Flatter deities
no more with

praise and prostration—
they know we’re low.

Pray to the odds for
elevation.

Bend inward with the
roll of dice
and
--ow—
embrace.

OOBI

The projected I
really registered that tap
on our right shoulder.

stuff on hard drive at home...needs to get posted to blog before I forget..."Language as an Object Worthy of Contemplation"

From: [self]
Sent: [eight years ago]
To: [people I knew way back when]
Subject: Language as an Object Worthy of Contemplation

All:

I highly recommend Chris Redgate's daily, syndicated, capsule newspaper column, "The Red Pencil," which focuses on the art of putting words together. Chris' 100-words-a-day ranks right up there with Doonesbury and my morning bowl of Cheerios. He/she (never been able to resolve that forename into a definite gender, and I guess it doesn't matter--anybody here seen Julia Sweeney's wonderful "It's Pat!" movie?) recently wrote about the distinction (or lack thereof) between prose and poetry, and spurred me to respond as follows:

**************************************************

Chris:

I enjoy your "Red Pencil" column, which I read in the Washington Post.
I'm writing to respond to your recent two-part column on the difference between prose and poetry. On one level, I agree that in practice there is often little difference between prose and poetry as distinct literary genres. In practice, modern poetry is often simply prose chopped up and defaced with arbitrary carriage returns, tabs, punctuation, misspellings, and obscurities. Poetry often suffers from highfalutin abstractions, precious diction, adjectival overload, lack of point or narrative, and whining, self-pitying attitudes. And poets wonder why very few people buy or care about their work.

On another level, though, we can distinguish between prosaic and poetic expression, which, taken together and interwoven well, can enliven even the most mundane writing. Prosaic expression points to objects in the world (even if that world exists only in the writer's head, as many scientific hypotheses, for example, do). Poetic expression points back at itself, focusing on language as an object worthy of contemplation in its own right (write!). Language as an object worthy of contemplation--what do I mean by that? I mean the features of language that make it noteworthy, catchy, and memorable: meter, cadence, rhythm, rhyme, alliteration, tintinnabulation, imagery, word choice, etc. Language as a symbol system or an equation that we continually manipulate: grammar, syntax, etc. Language as a human artifact that is capable of conveying beauty and meaning through its very structure and sound.

The very best writing is both prose and poetry--you want to read, then re-read it, focusing on the objects that the writer is trying to depict, but also the object through which the writer depicts them. Through brevity, the best poetry encourages us to re-read. The best e-mails do too.

Jim

**************************************************

It's all art and artifice. I've spent my career trying to breathe life into technical topics of thudding complexity. Committing this sh*t to someone's memory requires stealth poetry.

Jim

Nine Choices on the Road to BI Solution Centers


December 2008

http://www.intelligententerprise.com/showArticle.jhtml?articleID=212201130

Forrester Research says BI Solution Centers offer a business-governed, solutions-focused edge over BI competency centers. Here are nine key considerations you'll face when building your BISC.

By Boris Evelson and James Kobielus, Forrester Research

BI is no longer just about back-office reporting. As BI solutions increasingly permeate the enterprise and span a wide range of applications, analytics-driven organizations recognize BI as a key corporate asset and a do-or-die platform. In today's turbulent and increasingly commoditized economy, enterprises must make better and faster decisions to stay competitive — and often just to keep their heads above water.

As BI grows more pervasive, complex, feature-rich, and mission-critical, it also becomes harder to implement effectively. Many information and knowledge management professionals question whether they architect, implement, and manage their BI initiatives properly. Doing so requires sound BI and performance management best practices — and an awareness of the myriad ways it can all go wrong.

Forrester's ongoing research compiles a litany of worst practices often committed, deliberately or inadvertently, by even the smartest, most experienced information and knowledge management professionals. Common deficiencies in many enterprise BI environments often manifest themselves at the application level, but the root causes of the problems go much deeper. The chief symptoms of suboptimal BI management practices include:

The lack of a single trustworthy view of all relevant information. Many organizations strive for a single unified view of disparate transactional data and commit themselves to the long-range goal of consolidating it all into an all-encompassing enterprise data warehouse (EDW). In practice, though, the goal of an uber-EDW is a moving target. EDW projects are frequently the victims of "scope creep," due to constantly changing requirements, relentless growth in the range of operational-data sources, and stubborn resource bottlenecks within IT. Insufficient focus on data quality and master data management (MDM) only adds to the lack of trust. Even data in a comprehensive EDW may be viewed as untrustworthy or, in a worst-case scenario, may simply be incorrect. As a result, BI application users resort to old-fashioned methods to collect and analyze data, such as running their own SQL queries and pulling the results into spreadsheets for analysis.

BI applications too complex and confusing to use effectively. Crafting sophisticated BI applications for power users is important, but designing them for casual business users is far trickier. Even the most user-friendly, point-and-click BI applications often require users to slog through a daunting range of user interfaces, features, reports, metrics, dimensions, and hierarchies. Also, BI is just a subset of the surfeit of productivity tools that information workers must juggle just to perform their basic responsibilities. As a result, most BI end users have barely tapped the productivity potential of the tools at their disposal and often run back to IT to help them create new reports, queries, and dashboards.

BI applications too rigid to address even minor changes. Our modern world moves at lightning speed, but BI solutions are often too rigid to keep up with the changes. One simple change to a single source data element can result in a few changes to extract, transform, load (ETL) and data cleansing jobs, which may turn into several data model changes in operational data store (ODS), data warehouse (DW), and data marts; this in turn affects dozens of metrics and measures that could be referenced in hundreds of queries, reports, and dashboards.
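
The fan-out described above is, at bottom, a reachability problem over a data-lineage graph: everything downstream of the changed source element is potentially affected. A small sketch of that impact analysis, with wholly hypothetical lineage edges:

```python
from collections import deque

# Hypothetical lineage graph: source column -> ETL job -> warehouse
# tables -> metrics -> reports. All node names are invented.
lineage = {
    "src.orders.amount": ["etl.load_orders"],
    "etl.load_orders": ["dw.fact_orders", "ods.orders"],
    "dw.fact_orders": ["metric.revenue", "metric.avg_order_value"],
    "metric.revenue": ["report.exec_dashboard", "report.sales_weekly"],
}

def impacted(node: str) -> set:
    """Breadth-first search over the lineage graph: collect every
    downstream artifact affected by a change to the given node."""
    seen, queue = set(), deque([node])
    while queue:
        for child in lineage.get(queue.popleft(), []):
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return seen

print(sorted(impacted("src.orders.amount")))
```

Even in this toy graph, one source-column change touches seven downstream artifacts, which is exactly the rigidity problem the paragraph describes; real lineage graphs are orders of magnitude larger.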

As these problems illustrate, the typical BI environment is far from realizing its potential as a strategic business asset. Many organizations have responded by developing BI support centers or BI competency centers, but a BI Solution Center (BISC) offers a more business- and solutions-focused advance on these concepts. This article, which is based on the Forrester Report "Implementing Your Business Intelligence Solutions Center," details nine choices you'll need to make on the road to building an effective BISC.

BI Solutions Centers Cultivate BI Best Practices
Implementing BI technology is easy (relatively) — but getting value from those technology investments is the truly hard part. Recognizing that sound BI management practices are often the missing ingredient, many companies have begun to transform their project-based BI support groups into a more strategic function: the BI solutions center. Though the BISC has great promise, it is no silver bullet. Enterprises with more successful BI implementations often implement some form of BISC practices, but there are a wide range of BISC implementation options, and not all of them are appropriate for every scenario. Why? Because there are many different approaches, organizational structures, and modus operandi for BISCs, each with its own pros and cons.

Forrester defines a BISC as:
A permanent, cross-functional organizational structure responsible for governance and processes necessary to deliver or facilitate delivery of successful BI solutions, as well as being an institutional steward, protector, and forum for BI best practices.
How does a BISC differ from such kindred concepts as the BI competency center (BICC) and BI center of excellence (BI COE)? Though it has the same core IT-centric functions (such as building OLAP cubes, deploying data warehouses, and writing ETL scripts) as a BICC or BI COE, the BISC differs in its business-led governance and solutions focus, as explained below.

The BISC, like a sharp business suit, must be cut, trimmed, and tailored to the contours of each organization. Each BISC must, at the very least, fit an organization's specific structure, people, business processes, technology, and especially the BI, data warehousing, and other analytics-relevant infrastructures.

The intersections of these four dimensions — process, people, data, and technology — create multiple BISC scenarios and approaches that information and knowledge management pros must consider when developing the BISC most relevant to their BI efforts. Detailed below are nine scenarios and approaches you must consider when implementing your BISC.

Consideration 1: Strategic Or Operational Objectives?
Some organizations deploy BISCs that are purely strategic or advisory in nature. In those organizations, the BISC takes on the role of BI champion, provides subject-matter experts, and oversees BI standards, methodologies, and a repository of best practices. When these BISCs take on more operational duties, they become responsible for functions like the BI project management office (PMO), training, and vendor management. In its ultimate operational manifestation, the BISC can also carry the full spectrum of delivering BI solutions — BI solutions-as-a-service.

Consideration 2: In-house or Outsourced?
Enterprises deploying BI will need help from experienced consultants and systems integrators (SIs). This expertise is critical because BI is very much an art and will remain that for the foreseeable future, since it involves engineering a complex set of systems and data to address the changing imperatives of business organizations.

All successful, complete, scalable, "industrial strength" BI solutions require customization, application of best practices, and a significant systems integration effort. Because true best practices do not evolve from implementing two or three BI applications, internal resources with experience in dozens of successful BI implementations are difficult to find. A knowledge of best practices and lessons learned needs to be accumulated across hundreds of BI implementations — a privilege reserved for full-time systems integrators specializing in BI. As a result, most of the more successful BISC organizations include both internal and external staff.

Consideration 3: Virtual Or Physical?
Organizations have a choice of leaving their BISC staff within their lines of business (LOBs) or functional departments, or moving them to a centralized physical BISC organization. Since members within a virtual BISC organizational structure have other management or hands-on responsibilities, they may lack BI focus and have to juggle conflicting priorities. Therefore, this type of structure is typically more appropriate to BISCs that are strategic and advisory in nature. On the other hand, a physical, dedicated, and centralized organization is often more appropriate to fully operational BISCs. However, these tend to become just another "cost center" — as any centralized function carries with it the burden of process, methodology, and organizational structure. This implies bureaucracy, red tape, and a lack of flexibility. While such a structure is a must for certain IT functions, like infrastructure, security, and many others, it could be a BI showstopper. The first time IT cannot respond quickly or efficiently enough to a new requirement, a typical BI user will run back to spreadsheets to build a homegrown model, run the analysis, and get the job done. Information and knowledge management pros must determine which BISC structure — virtual or physical — would be most effective within their organizational culture.

Consideration 4: Operational or Analytical in Scope?
For some organizations, a BISC may focus on addressing the front-end access, presentation, delivery, and visualization requirements of analytic applications. Alternatively, others may encompass a wider scope, including data warehousing; data integration; data quality; master data management (MDM); and many other analytics-relevant infrastructures, processes, and tools.

Information and knowledge management pros can draw the scope of their BISC narrowly or broadly, and that line may depend greatly on how the company staffs, funds, and organizes these diverse IT groups. How far "upstream" toward operational applications should a BISC go to draw that line? For example, is a database trigger implemented in an operational application for changed data capture (CDC) that feeds a DW part of the analytical or the operational realm of your BISC's responsibilities? Is the data mart that calculates customer or product profitability and feeds these numbers to downstream operational applications an analytical or an operational data store? Such scope needs to be very well defined and managed to avoid the very real risk of BISC "scope creep."

Consideration 5: Support IT only or All Stakeholders?
In especially large, heterogeneous, and siloed organizations, corporate culture and other realities may not make a centralized strategic or operational BISC a practical proposition. However, even in such an environment, it's still possible and often beneficial to centralize BI infrastructure (servers, DW, ETL, and BI tools) and let each individual line of business and functional department manage its own prioritization and BI application development, while leveraging the centralized BI infrastructure. Developers are the ultimate customers of these more narrowly scoped BISCs, and in several real-life examples Forrester found that this is a practical limit of how much responsibility a BISC can take on without running into "turf battles." Information and knowledge management pros must determine whether their organizational culture is ready to support a BISC whose scope extends beyond BI infrastructure.

Consideration 6: Type of Funding Model?
A BISC can be treated as a corporate cost center whose services all departments across the enterprise can use and benefit from. The difficulty here is that this approach carries the stigma of "just another IT department/cost center." Furthermore, departments that are not yet set up to take advantage of the BISC will push back on carrying part of the cost burden. A cost-allocation model based on actual usage of BISC services can be fairer, but detailed, activity-based cost-allocation models can be tricky to set up, implement, and manage.

Consideration 7: Narrow or Broad Scope?
Though a BICC-ish BISC is certainly possible, it's not preferred. Forrester recommends business leadership and business-led governance orientation, not a technology-centric focus, for the BISC. The same road map principles that apply to the best practices of implementing BI apply to the BISC: strategy first, architecture next, technology last. In its ultimate breadth of scope, BISC could encompass as many as 20 major components, roles, and responsibilities (see Figure 6 in Forrester's complete report), so it's very important to start small and increase the scope slowly.

Consideration 8: Performance Measurement Approach?
BISC stakeholders require transparent measurements of the success of the BISC program in order to support ongoing momentum and funding. BISC leaders must establish a clear set of BISC performance metrics and clearly communicate them on a periodic basis. Some BISC performance metrics are obvious and easy to calculate. Examples include number of BI applications delivered and maintained by BISC, number of BI users, number of reports in production and the usage patterns of these reports, reduced BI support staff, and reduced BI software and maintenance costs. Other metrics could be trickier to calculate and monitor such as improved information accuracy and turnaround time on BI requests.
Most leading BI products come with pre-built applications to monitor and analyze at least some of these metrics. If such out-of-the-box applications are not available from your preferred BI vendor, or if you are using multiple BI platforms, a centralized BI metrics management solution can be architected by using products from vendors such as Appfluent Technology and Teleran Technologies.
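
Where no pre-built monitoring application fits, the simpler metrics can be derived directly from report-access logs. A hedged sketch; the log layout and field names here are assumptions, not any vendor's actual format:

```python
from collections import Counter

# Hypothetical access log: (user, report) pairs captured by the BI platform.
access_log = [
    ("alice", "sales_weekly"), ("bob", "sales_weekly"),
    ("alice", "exec_dashboard"), ("alice", "sales_weekly"),
    ("carol", "inventory_aging"),
]

def usage_metrics(log):
    """Basic BISC performance metrics: distinct BI users, reports
    actually in use, and usage counts per report."""
    users = {user for user, _ in log}
    per_report = Counter(report for _, report in log)
    return {
        "bi_users": len(users),
        "reports_in_use": len(per_report),
        "runs_per_report": dict(per_report),
    }

m = usage_metrics(access_log)
print(m["bi_users"], m["runs_per_report"]["sales_weekly"])  # 3 3
```

The harder metrics the paragraph mentions, such as information accuracy and request turnaround time, need workflow instrumentation that a raw access log cannot supply.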

Consideration 9: Isolated or Aligned With Other Solution Centers?
No BI environment is an island from the rest of the data management infrastructure. Just as BI applications touch, depend on, and overlap with many related processes and technologies, BISCs cannot exist in isolation from other competency centers, solutions centers, or centers of excellence. Federation between the BISC and other data management competency centers is a best practice. Many such competency centers have existed in organizations for years, though they may not be recognized as distinct disciplines or organizations. Essentially, any group that defines, approves, and/or enforces standard practices for new projects or initiatives in any of these areas is a competency center.

To realize the full return on investment from BI, your organization's BISC should engage with all or most of the interdependent competency centers.
To succeed in your specific organizational environment, your BISC must have clear lines of demarcation, cooperation, and integration with all these other relevant initiatives. Failing to define a clear charter with appropriate collaboration, communication, and change management processes between complementary efforts can be a fatal pitfall in your BISC initiative.

Download the Complete Report
This article is based on the Forrester Report "Implementing Your Business Intelligence Solutions Center." The 16-page report includes charts, diagrams and seven recommendations not included in this article. Click here to download the free report.