Wednesday, December 31, 2008
Monday, December 29, 2008
Monday, December 22, 2008
This has been one of the most pivotal years in the evolution of the enterprise data warehousing (EDW) market. Every EDW vendor of note has firmly repositioned its go-to-market strategy around the appliance approach, with some also taking tentative steps into what is sure to be a key theme in 2009 and beyond: EDW in the “cloud.”
Yes, much of the recent “cloud” buzz has been bleeding-edge IT trade-paper fodder. It’s all interesting, to be sure, and there’s plenty of innovation going on. But much of the discussion seems to be a renaming, repackaging, and mashing up--with subtle twists and tweaks--of such well-established themes as service-oriented architecture (SOA), software as a service (SaaS), virtualization, and Web 2.0. And much of the trendy attention to cloud services obscures the fact that public cloud offerings from Amazon, Google, Microsoft, and others are still primarily works in progress.
Forrester clients have only recently begun to inquire about cloud topics in earnest. Still, it’s hard to stay cynical about cloud computing for long. This solution delivery approach is coming, almost certainly and inexorably, to all IT solution segments, including EDW and business intelligence (BI). Sure, I&KM pros can safely ignore much of the cloud buzz for now, but a year ago they said the same about EDW appliances, and look how quickly appliances have become a dominant deployment approach in this market.
In the EDW-related inquiries I take from Forrester I&KM customers, a great many concern the appliance market. CIOs, CTOs, DBAs, and other professionals are actively considering various vendors’ appliances to replace, or at least to supplement and extend, their traditional “roll-your-own” EDWs. Typically, the I&KM pro wants me to help them select the best EDW appliance for their needs from any of several vendors, both venerable blue-chip and sexy start-up.
What validated appliances for I&KM pros this year was the fact that big-name EDW vendors--including Teradata, Oracle, IBM, Microsoft, and Sybase--have gone this route in earnest. The inflection point for the whole industry came this fall, when Teradata--which effectively established the EDW appliance space years ago but had long resisted going to market under that label--embraced the approach and significantly expanded its appliance solution portfolio.
EDW cloud services are still a few years away from a similar inflection point. The leading EDW vendors have taken only the most tentative steps into the still-embryonic cloud services market. But they are all beginning to explore the cloud/SaaS channel with greater interest. They simply have to. Customers’ capital budgets are under severe pressure, and a multi-million dollar EDW solution--be it a premises-based appliance or what have you--is a tough sell. In a soft economy, any on-demand pay-as-you-go offering becomes more attractive across all customer segments. Just as important, the increasing scalability, performance, flexibility, and availability demands on the EDW and BI infrastructure are spurring many users to consider managed, hosted, outsourced offerings with fresh interest.
We’re starting to see the next-generation cloud EDW emerge. One noteworthy development this year was Oracle’s partnership with Amazon. Under that agreement, Oracle customers can license Oracle’s core EDW software stack to run in Amazon Web Services' Elastic Compute Cloud (Amazon EC2) environment. They can also use their existing software licenses on Amazon EC2 with no additional license fees.
Another important development was Microsoft’s announcement of its Windows Azure cloud initiative, of which one key component is the (still in beta) SQL Server Data Services (SSDS) subscription offering. When Microsoft SSDS goes into production in 2009, it will offer some basic DW/BI features in addition to transactional database support. Though SSDS will not initially be at functional parity with the licensed SQL Server offerings, it is clear that Microsoft plans to evolve it rapidly into a feature-competitive DW cloud offering over the next several years.
Microsoft also made a key EDW-related acquisition this year, appliance pure-play DATAllegro. Forrester expects this acquisition to figure centrally into Microsoft’s evolution of SSDS into a massively scalable cloud DW service. Though DATAllegro did not achieve much market adoption as a DW appliance pure-play, it provides Microsoft with a robust scale-out technology called “shared-nothing massively parallel processing” (MPP). By the way, Microsoft is playing catch-up in this regard, since most of its closest competitors implement shared-nothing MPP in their EDW premises-oriented solutions, and such DW appliance pure plays as Netezza, Greenplum, and Vertica also implement it to varying degrees.
When Microsoft ultimately ships a DATAllegro-powered SQL Server EDW appliance under its “Project Madison” in a year or two, we would not be surprised to see it adopted first in SSDS. The cloud EDW offering would benefit immensely from shared-nothing MPP’s ability to manage petabytes of analytic data and parallelize queries and other transactions seamlessly across a grid of hundreds or thousands of nodes.
Indeed, the industry consensus is largely in favor of shared-nothing MPP across the storage and compute tiers, coupled with flexible information-as-a-service (IaaS) and server virtualization, as the principal platform for cloud computing. In the next-generation EDW, shared-nothing MPP allows the infrastructure to become more fluid, flexible, and virtualized, while managing ever more massive data sets and providing the agility to handle more complex mixed workloads of reporting, query, OLAP, data mining, data cleansing, transformations, and other functions.
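To make the shared-nothing idea concrete, here is a minimal, hypothetical sketch--in Python, and emphatically not any vendor's actual code--of how an MPP engine hash-partitions data across nodes and parallelizes a grouped aggregation: each node scans only its own partition (no shared disk or memory), and a coordinator merges the partial results.

```python
# Illustrative sketch of shared-nothing MPP: rows are hash-partitioned across
# nodes, each node aggregates only its local partition, and a coordinator
# merges the partial results. Node count and data are invented for the example.
from collections import defaultdict

NUM_NODES = 4  # stand-in for the hundreds or thousands of nodes in a real grid

def partition(rows, key):
    """Hash-partition rows across nodes on the given key; no shared storage."""
    nodes = [[] for _ in range(NUM_NODES)]
    for row in rows:
        nodes[hash(row[key]) % NUM_NODES].append(row)
    return nodes

def local_sum(node_rows, group_key, value_key):
    """The parallel step: one node aggregates its own partition only."""
    partial = defaultdict(float)
    for row in node_rows:
        partial[row[group_key]] += row[value_key]
    return partial

def mpp_group_sum(rows, group_key, value_key):
    """Coordinator: scatter rows, run per-node aggregates, merge partials."""
    partials = [local_sum(n, group_key, value_key)
                for n in partition(rows, group_key)]
    merged = defaultdict(float)
    for partial in partials:
        for key, total in partial.items():
            merged[key] += total
    return dict(merged)

sales = [
    {"region": "east", "amount": 100.0},
    {"region": "west", "amount": 250.0},
    {"region": "east", "amount": 50.0},
]
print(mpp_group_sum(sales, "region", "amount"))
```

Because the rows are partitioned on the same key the query groups by, each group lands wholly on one node and the merge step is trivial; a real engine also handles repartitioning mid-query when the grouping key differs from the distribution key.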
As 2009 approaches, we’ll see pure-play DW cloud vendors come to the fore, appealing both to the early adopters among I&KM pros and to those under severe budgetary, headcount, and data center constraints. The established EDW vendors will not come to the cloud in a big way till 2010 at the earliest, it appears. But they will come, and with all guns blazing.
In 2-3 years’ time, the established vendors will own the EDW cloud space just as they’re starting to own the rapidly maturing appliance segment--in part, by acquiring the most promising cloud startups. The longer the economy stays drab and dreary, the faster the cloud EDW segment will expand, mature, and consolidate.
As that happens, commercial EDW cloud offerings will become as diverse and feature-rich as appliances have become, and I&KM pros will almost certainly ramp up their cloud EDW inquiries.
Forrester’s EDW analyst stands ready to serve.
Saturday, December 20, 2008
Friday, December 12, 2008
Pet a very large
Number. Let it purr its stream
Of murmuring ohs.
Hmm. A one at one
Of the ends. Chasing, it slinks
Past time’s curvature.
Lost the count. Are we
Mounting the powers or gone
Thursday, December 11, 2008
Cloud computing in a bubble economy
By James Kobielus, Network World, 12/09/2008
Cloud computing is the IT world's latest hot topic, and it's no secret why. In tough times, when capital expenditure budgets are under severe pressure, any pay-per-use solution looks like a winner.
If you give enterprises a credible outsourced alternative to their internal platforms and applications -- one that requires no capital outlays, long-term contracts, data-center infrastructure or internal IT staff -- they can scale that service up or down as their needs and fortunes expand or contract.
Clearly, cloud computing -- as a purely on-demand service-delivery model -- is tailor-made for a bubble economy, such as the world in which we live. In a bubble economy, volatility rules, prices fluctuate wildly, and acute uncertainty and risk permeate everything. Even more distressing, this dynamic new order can destroy established industries, vendors, business models and investment portfolios with sudden, sickening speed.
As we've seen in the financial and automotive industries, valuations can collapse overnight, thereby dislocating lives, careers and communities without much warning. As the economic outlook deteriorates, survival strategies and last-ditch tactics -- such as shotgun mergers -- quickly preempt sound business planning.
Does anybody truly believe that established IT market segments are immune from the brutality of the bubble economy? Hey, let's admit that the IT industry was in fact the proving ground and remains one of the chief enablers of this new economic order. If we learned anything from the dotcom bubble, it was that the frictionless business formation of an on-demand economy can prove disruptive in both the good and bad senses of that word.
Yes, on-demand services contribute to innovation, efficiency and agility throughout the IT world. But fast-bubbling start-ups can also, in the same bold burst, mortally wound established IT vendors before they know what hit them. And this same process can just as rapidly doom the disruptors themselves -- whenever the next cloud of fresh bubbles emerges to suck away their oxygen.
Excessive business risk is the thunderclap inside the world of cloud computing, and it can zap IT suppliers and users with equal devastation. If you've invested in a traditional IT solution that now confronts a significantly more cost-effective cloud-based rival, you'll be hard-pressed to survive if one of your competitors has leveraged that alternative to pare its cost structure to the bone.
And if you're counting on your established IT vendor to migrate you gracefully into its own emerging cloud-based environment, think again. Focusing on short-term financial results, its shareholders are demanding that it leverage the traditional cash cow of software license fees and maintenance revenue to the hilt. To the extent that your traditional IT supplier encourages you to adopt its new cloud-based offering, it will often be just a grudging nudge, a last-ditch effort to hold onto your business.
When looking at the cloud-computing horizon, no two IT industry observers agree on which solution vendors will ultimately prevail -- just as analysts in the financial and automotive sectors have no idea whether Citibank or General Motors will survive this period in anything resembling their current forms. How will Oracle, HP, IBM, Microsoft, Cisco, EMC, SAP, and other blue-chips weather this chaotic cloud front of tornadic start-ups?
Yes, the big guys all have their cloud initiatives, to varying degrees of maturity. But they all tremble before the possibility that such cloud-based pure-plays as Google, Amazon.com, Salesforce.com, or Akamai Technologies will lure away customers with more flexible, lower-cost offerings.
Established IT vendors are trying a bit of everything to keep their core businesses from slipping away. Fundamentally, they're all approaching cloud computing as a sort of Venn-style conceptual bubble diagram, one that converges software-as-a-service, service-oriented architecture, virtualization, utility computing, outsourcing, open source, Web 2.0, social networking and pretty much every other IT trend of the past 10 years. What the incumbents hope is that some magic synthesis of these approaches will help them hang on through this turbulence and prevail into the next era.
Let's be honest with ourselves. We all know the IT industry is in the throes of a major shakeout and some familiar names may not survive much longer. We may have to endure our fair share of shotgun mergers among veteran IT providers before we see the rainbow that signals the end of today's perfect storm. Silver linings are there in today's increasingly cloud-oriented environment, but they're hard to glimpse through the layers of macroeconomic gloom.
All contents copyright 1995-2008 Network World, Inc. http://www.networkworld.com
Above the Cloud: September 1987 to December 2008
Thanks, John G., John D., Neal W., Susan C., Adam G., Beth S., Julie B., Paul D., Charlie B., Barb W., Kyle, Alison C., Jim B., Paul S., Bruce G., Anne R., and Steve M.--and, of course, Cheryl C., and all the fine upstanding virtuous virtual citizens in their Speen Street finery
"Everything must run its course." --MGMT
Tuesday, December 09, 2008
Monday, December 08, 2008
Re these stories:
--Take that, iPod: Obama uses a Zune
--Obama's Zune Prompts Screams of Betrayal From Apple Fanboy Nation
Get over it, iPodders. We need our music. Microsoft makes a great portable media player. So does Apple. But iPod is not as great as you all like to believe. Nor is the Mac.
Also, I listen to KEXP. So should you.
Furthermore, Seattle's cooler than Silicon Valley. Better music. Better gloom. Anybody who follows my poetry can see that I'm more grunge than hippie, and more geek than nature boy. Though, being of the same age as Madonna and Prince, we fall directly between those generations, belonging to neither. All of which reminds me to re-publish the following poem, written several years ago (I spontaneously recited it for Linda Tucci this afternoon):
In the essential
from whatever drops
of liquid sunshine
are vouchsafed their way
or, failing that, fix
off the glints of glare
that glance in off the
gray and grace their green
I have tea and cola poems too.
We're the Obama generation, literally. Our next president will be our first younger than me (by three years--I met JFK in 1960, as a toddler at my Mom's side, as did highschooler Bill Clinton a few years later--Obama, clearly, never did).
If Barack Obama seems way too weirdly self-contained and driven for your tastes, he seems exactly right to me. He is me (he's also Madonna and Prince).
Sunday, December 07, 2008
Thursday, December 04, 2008
Air was there, was bare
and shimmering, around the
spot where bells should be.
Night was there, was spare
and sparkling, a strand of lights
suggesting a tree.
Some snow, a song, a
kid. These ticklesome trances
of holiday glee.
Monday, November 24, 2008
Tuesday, November 18, 2008
Tough times we’re living in. In fact, it’s hard to get through a day now without having the phrase “tough times” hammered into your head by the parrots in the media… with “down economy,” “bad economy,” “slowing economy,” “recessionary times,” “difficult times,” “challenging economic environment,” and a veritable Roget’s treasure chest of other near-synonyms creating a maddening echo-chamber effect.
Technology is proving to be a sort of shock absorber in times like these. What I mean is that it keeps us connected, productive, entertained, and tuned in to our friends, family, and support group—or helps us tune into and groove on our own comfy little private world of passions and distractions—even when the news from the wider world is bleak. And when travel options are expensive or unavailable—as they were in the immediate post-9/11 period—or the parcel post is potentially toxic—as it was post-anthrax—information technology helped us carry on a constrained semblance of a normal economy.
Tensile strength is the amount of stress that a connection can bear before it breaks. The availability of so many communication options gives our social sinews more tensile strength than ever. IT makes us a tougher fabric, harder to rend asunder, even under the stresses that come from terrorism, financial panic, mass layoffs, and other nasty facts of life.
Telecommunications is a tendon, a tether, a thick thread that resists twisting and torque, and then springs back into shape. It’s a tissue that binds our community, maintains the integrity and possibility of collective action, the resilience and resolve of common response, even in the harshest circumstances. I still take inspiration from the thought that the initial impetus for the development of the Internet was to create a national network that can survive a nuclear holocaust. Even if that particular hope was too dire, desperate, and naïve to survive close scrutiny.
Tell the nation’s technologist-in-chief, if we ever appoint one, to update that vision. Tell that person to forge a new vision of a resilient national backbone, one that can help us weather rough times, and carry us over to boom periods, but remain in place, operational, evolving, and commercially viable through the subsequent economic cycles, with only minimal government oversight or funding. Wait…that vision’s already a reality—it’s the commercial Internet that Sen. Al Gore and others envisioned in the early 90s and which quickly became a worldwide reality.
Today’s Internet is working fine, but can always stand improvement. If we create a national CTO, that person must respect the fact that this network is a global resource, not a US fiefdom. A US national CTO must collaborate and federate with their counterparts in other nations.
Try not to monkey arrogantly and unilaterally with a good thing, which some call cyberspace but is in fact now a key connective thread of the human race.
Monday, November 17, 2008
Way over yonder in CIO Insight magazine, Eric Lundquist says we should have a national chief information officer (CIO), not a national CTO.
Maybe I’m dense, but I’ve never fully understood the difference. A CIO is supposedly focused on applications, or, in Lundquist’s words, “business first and technology second.” Whereas a CTO would be, by implication, a business-be-damned technology zealot of the first order.
Or maybe I’m exaggerating slightly. But I’ve met quite a few CTOs in my life and times, and I still haven’t met an irresponsible bit-hugging code-cutting wire-pulling maniac among them. Which is not to say that there aren’t incompetent twits in this line of work, as in all walks of life. But, if there is a practical distinction between CIO and CTO, it isn’t usually in their relative business-savvy. It’s usually that the former is better at working with stakeholders, gaining buy-in, and nailing down the budget, whereas the latter is adept at delivering on any commitments that the former has made. The best C-level tech execs combine both skill sets, or focus on CIO responsibilities and know enough to delegate the CTO responsibilities to the right person. Whoever fulfills these roles, they, in Lundquist’s words, should certainly “know how to get things done.”
Where a national CTO/CIO is concerned, the big issue is what “thing” we the people want them to get done. As I stated in the past few posts, it’s anybody’s guess what specific responsibilities President-elect Obama would invest in any future national CTO that Congress may or may not authorize him to appoint. In Lundquist’s article, I’m glad he mentioned the heretofore fruitless federal effort to focus cybersecurity policy in a single position. “How many cybersecurity czars,” says Lundquist, “have we gone through since 9/11? I count at least three (Amit Yoran, Howard Schmidt, Greg Garcia and I’m sure there have been more) along with long gaps between selections. I think what happened was in the panic to develop national security there was an unwillingness to admit that a national security plan could take years and years to develop as competing agencies, privacy concerns and security processes needed to be considered. A national CTO could face the same difficulties.”
That last sentence is the understatement of the year. You think giving one person responsibility for all federal cybersecurity policy was a ticket to failure? Well, just imagine the insanely overflowing inbox—cybersecurity and much, much more—that will greet the person who tries to take on President Obama’s IT policy agenda, per the all-encompassing sci-tech position paper that his campaign published months ago.
If a federal CTO/CIO does nothing else, they should at least focus on the same core agenda as their counterparts in the private sector: leveraging information assets to improve organizational efficiency, effectiveness, and agility. In other words, the whole business transformation and optimization agenda—where the business we’re speaking of is the “people’s business.” Maybe what we need is a national Chief Transformation Officer.
Which is why I initially thought Al Gore would be a good candidate for the national CTO job—never mind that he would be an even better candidate for Secretary of State. When Gore was Vice President, he headed a “Reinventing Government” effort—essentially, a thankless, possibly futile, effort to prod government agencies to work both smarter and harder. Not that I have anything against government employees, but most of them work for what are essentially monopolies, and many of them have secure, unionized jobs. Good luck asking them to transform themselves when they have absolutely no career-saving need to do so.
If the incoming Democratic administration attempts to revive the “Reinventing Government” initiative, whoever leads it will need to consider the transformative power of the government’s vast IT assets. Will that leader be the presumed national CTO? Will it be Vice President Biden? Should it be?
If we’re going to give someone the thankless job of national CTO, why not hand it to the second-in-command, whose position was once described as not being worth a “pitcher of warm spit.”
Sunday, November 16, 2008
Obama has not yet articulated any compelling public interest for creating a national CTO, however that role may be defined.
Furthermore, the president-elect has not addressed the obvious corollary of his proposal: a national CTO would be powerless and ineffectual without statutory authority and a corresponding budget and bureaucracy. To make his vision of a national CTO a reality, Obama would need to propose legislation that would establish a new agency, which would probably absorb the functions of existing agencies.
Of course, we already have just such an agency: the National Telecommunications and Information Administration (NTIA), which is under the Department of Commerce and describes itself as “the President's principal adviser on telecommunications and information policy.” Does Obama simply want to give NTIA’s head, the Assistant Secretary for Communications and Information, a new job title, “National CTO,” or is he proposing something different? Does he also want to invest this position/agency with additional responsibilities? Does he want to split it from Commerce and reconstitute it as a separate agency? None of this is clear at this point.
That’s not to say making NTIA a separate agency would necessarily be a bad idea (but I have no opinions on the matter, one way or another). Periodically, the U.S. government has created new agencies from the programs formerly held by established bureaucracies, and/or to administer new laws. Sometimes, an overriding national emergency creates an urgent demand for a new regulatory bureaucracy. Excuse me for putting a cynical spin on this, but the Department of Homeland Security will always be the 9/11-reaction agency…just as the Environmental Protection Agency is the “Rachel Carson Silent Spring Earth Day” reaction agency, the Securities and Exchange Commission is the “1929 stock market crash” reaction agency, and the Federal Reserve System is the “Panic of 1907” reaction agency.
OK, I’m oversimplifying, but only slightly. But I’m not sensing any great urgent national call for a CTO-like position. More critical, at this historical juncture, is a totally reconstituted financial regulatory authority to replace the Federal Reserve in the wake of the current meltdown.
If we need any sort of national CTO right now, maybe it should be a Chief Transparency Officer. What do I mean by that? Well, the Obama campaign pledged to “use cutting-edge technologies to ...creat[e] a new level of transparency, accountability and participation for America's citizens.”
What this suggests is a role for business intelligence and performance management technologies in the federal government’s outreach to citizens. What I’m thinking of, and discussed in a recent Forrester blog post, is some sort of online, continuously refreshed scorecard, dashboard, or report that measures how well the government is serving its constituents across many key performance indicators. Or, at the very least, such a scorecard might illustrate how well President Obama is living up to his campaign promises or stacking up against, say, the government’s performance under the outgoing President Bush.
Hmmm….a national government transparency scorecard. Only an aggressive push from the incoming president can put this sort of initiative on a legislative fast track. And fresh blood, in the form of a national Chief Transparency Officer from an activist background, would be needed to sell it to a skeptical public and federal bureaucracy.
Yeah, we could give the government scorecard program to the NTIA, but that would bury it deep in the bureaucracy and probably doom it to failure. The Chief Transparency Officer would need to report directly to the president, who should be promoting it as a key component of his efforts to open government to deep scrutiny.
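The scorecard idea above can be made concrete with a toy sketch. Every indicator name, target, and number here is invented purely for illustration; a real scorecard would pull continuously refreshed KPIs from agency systems.

```python
# Hypothetical "government transparency scorecard": each KPI has a current
# value and a target, and the scorecard reports percent-of-target attainment
# per indicator plus an overall score. All figures below are made up.

KPIS = {
    # indicator: (current_value, target_value)
    "passport_processing_days": (8.0, 10.0),   # lower is better
    "broadband_households_pct": (63.0, 80.0),  # higher is better
}

LOWER_IS_BETTER = {"passport_processing_days"}

def attainment(name, current, target):
    """Percent of target attained, capped at 100."""
    ratio = target / current if name in LOWER_IS_BETTER else current / target
    return min(round(ratio * 100, 1), 100.0)

def scorecard(kpis):
    """Per-indicator attainment plus a simple unweighted overall score."""
    rows = {name: attainment(name, cur, tgt)
            for name, (cur, tgt) in kpis.items()}
    overall = round(sum(rows.values()) / len(rows), 1)
    return rows, overall

rows, overall = scorecard(KPIS)
for name, pct in rows.items():
    print(f"{name}: {pct}% of target")
print(f"overall: {overall}%")
```

The design choice that matters is declaring up front which indicators improve downward, so the dashboard never rewards a worsening number; real KPI frameworks would also weight indicators rather than average them equally.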
Saturday, November 15, 2008
My first reaction when I heard that Obama wanted to create a national “chief technology officer” (CTO) was a tad on the cynical side. I haven’t progressed much beyond that—yet.
National CTO? A technology czar, so to speak? To do what, exactly? With what mandate? What budget? What bureaucracy? What programs? Which champions and defenders in which committees on Capitol Hill? Which lobbyists fighting for a slot on your agenda so they can report to their customers that they have access to someone who actually has real influence over policy and legislation that matters to them? You’re nothing in Washington if you don’t have any of that.
So what’s to distinguish a National CTO from an impotent, symbolic, figurehead position, such as, say, U.S. Poet Laureate: someone doing a perpetual roadshow to showcase the best in US tech prowess and innovation?
Is this national CTO’s primary job going to be promoting digital apple pie causes such as the need for everybody to get connected, achieve some basic Internet literacy, or give kids laptops to help them develop into the next generation of geniuses, and so forth?
Or will the national CTO have something slightly meatier to fill their days, such as convening meetings of federal-agency CTOs in order to disseminate government best practices for service-oriented architecture (SOA) and the like?
Or will the national CTO serve as some sort of policy coordinator driving the Obama administration’s attempts to get its IT-related initiatives implemented in legislation?
We definitely need all of those things. But in the same job? Same person? And what person(s) might be suited to any or all of the above? What conceivable reason might, say, a tech billionaire have to accept such a position, which sounds like a politics-intensive job for a longtime inside-Washington policy wonk with a thick skin and the soul of a lobbyist? Where’s the fun? The glamour? The chance to do something innovative?
Thursday, November 13, 2008
Birthday’s a good day to take a breather. So I am. Just today.
It’s also a good day to take a quick look back at the year gone by. Especially if, like me, you were born in late autumn, and will be too damn busy in the coming weeks to take another break till Christmas hits, and everybody will be doing their wrap-ups, and it all becomes just too much.
Too much is exactly what I’m avoiding today, which is a Novemberly one all right, outside my window: overcast, chill, rain, bare trees. Feels like home for a transplanted Michigan boy like me. The calm of gray light and soft sofa.
This personal blog is nearing the end of its fourth year. You’ve probably noticed that I’ve slacked off on the tech postings. In case you were wondering, I’ve moved most, but not all, of my tech musings to the Forrester Information and Knowledge Management blog, where I’m one of many. Rather than confuse the industry regarding what Jim Kobielus’ real position is on this or that, I’ve kept my personal blog tech blather to just the marginal, silly, tangential, and self-indulgent—and to re-postings of my Network World columns.
Here, for the record, are links to my Forrester blog posts, from latest to earliest:
Obama’s Information Agenda....What is It and Is There A Role for BI?
Governance Risk Compliance Agenda....Critical in Turbulent Economy, But Conspicuously Missing from IBM’s IOD Go-To-Market Message
Extreme Affordability at the Data Warehouse? Teradata? Really?
Tactile user-built micro-analytics...OLAP and BI for the next generation...and for the aging Baby Boomer generation
Agenda Politics -- Information Shifts The Balance Of Policy And Influence In Any Organization
Oracle Soars Into Petabyte Stratosphere, Puts HP-Powered Grid Storage At The Heart Of Its New High-End DW Appliance
Oracle Virtualizes DBMS And DW Into Amazon's Cloud
Federation Supplements The Data Warehouse - Not Either/Or, Never Was
The New Paradigm Of In-Database Cloud Analytics, And Google’s Role As Catalyst
Database Virtualization Could Induce I&KM Vertigo
Microsoft Acquiring DATAllegro, Rebooting Data Warehousing Appliance Strategy, And Triggering Industry Consolidation
OLAP's Cube Is Crumbling Around The Edges
No posts in June.
Analytic Databases Power BI Boom
Teradata Goes Appliance, Officially
The Enterprise Data Warehouse (EDW) — Defined, Refined, Evolving With The Times
CEP For Real-Time BI: Vendor Announcement Events Come In Threes, Apparently
Competitive Business Intelligence, Harnessed Through Collaboration And CEP, Harvested Across The Cloud
Oh No, Not Another 2.0 -- Database 2.0? Data Warehousing In The Cloud!
Complex Decisions Driven, But Not Overtaken, By Events
Complex Events, Simple Experiences
Complex Event Processing (CEP) For I&KM — Mouthfuls, Morsels, And Meaningful, Manageable, Multifaceted Streams of Real-Time Intelligence
IBM Expands IOD Portfolio, Perhaps To The Bursting Point
Data Warehousing Appliances: Growing Bigger Than A Breadbox, Softer Than The Bread
BI's New Frontiers In 2008 And Beyond
Everything That Happens In The Enterprise Software Market Affects BI
I noticed that I’ve published 15 poems on my blog this year. Interesting how those just sprang up, after I’d put poetry on relative hiatus for the past few years. Basically, they sprang up to give me content to stick in my personal blog till I figure out the next thread. If you scroll back over the past 4 years in this blog, I started by riding the news cycle, offering opinions triggered by stories I read in the IT press. Then there was a long period where I rode the “me cycle,” doing long multi-post rambling discussions on whatever tech topics came into my head, topics I developed in my freelance IT writing jobs, or tech topics I’d developed positions on in my analyst gigs and still had ideas I needed to share with the world.
This year has seen a bit of all that, plus the poems, which I started developing late in my 30s to have an outlet for stray thoughts and to distract me in the late 90s/early 00s from the sometimes overwhelming task of writing books on workflow, doing my freelance articles, having a full-time job, and having a family.
Someday I’ll publish it all, hopefully, in book form. But mostly, it’s just to amuse. Of the recent ones, I’m particularly fond of “Charles Schulz’s ‘Kids’” (inspired by David Michaelis’ biography of the cartoonist—the first stanza of that poem is a direct rejoinder to a Peanuts cartoon from the mid-60s—look it up); “U.S. Pres. Barry H. ("Rocky") Obama, Jr. (D)” (inspired by an actual sticker on my sliding glass door in back, but written a month before the election, in Seattle, 6:30am, at Pike Place Market, standing on a concrete baluster with Starbucks coffee in hand, looking out over Puget Sound, knowing what’s coming); “Crunchy Analytic” (inspired by Kristina Kerr of Microsoft—hey Kristina, I told you I’d mention you--and composed in my head during dinner with Microsoft during their BI show in Seattle last month); “Wane and Wax” (inspired by a Cocteau Twins song); “Music and Music Accessories” (inspired by a piano player in the lounge at the Fairmont Chateau Laurier in Ottawa during the IBM Cognos analyst summit two months ago); “Talk Toxins” (a yoga breathing/relaxation/disengagement poem that also anticipated the recent panic and has helped me stay centered); “Bill Gates’ ‘Retirement’” (a ten-stanza triple-haiku, like the Schulz piece; it rode both the news cycle and the me cycle; like the Schulz piece, it was actually a meditation on marriage, and also on the span of 50 years; like today’s poem, which they both foreshadowed); “Menhir” (for Elizabeth Aliman, my recently departed mother-in-law, and also for Jean Elizabeth Hoff Kobielus, my beloved mother who left this world almost 40 years ago); the Las Vegas poems from early June (especially “Center of Conventions Exhibitions Conferences and Expositions,” because it captures the essential spatial geometry of the huge interior public spaces that have been one of the primary settings for my career since the mid-80s, and which, like airports and business hotels, tend to blur in my dreams into each other).
Because I dream in space. And my mind processes information geometrically, and tangibly. Hence “Crunchy Analytic.”
Wednesday, November 05, 2008
Red fades from the flag
of my post nine eleven
sticker. One white stripe.
they were death, a patriot
was anything but.
On a bluing field.
Behold the latest star. A
black white Hawaiian.
Saturday, October 18, 2008
Wednesday, October 15, 2008
Ellison hypes Oracle's data warehouse appliance
By James Kobielus, Network World, 10/07/2008
The high-end data warehousing wars are fast upon us. Vendors are launching ever more scalable DW solutions. And they're delivering them with more aggressive -- and slippery -- performance claims.
The DW industry's new battlefront is petabyte scalability. This refers to a DW platform's ability to ingest, store, process and deliver an order-of-magnitude more data than today's typical terabyte-size warehouses. In this regard, the competitive high ground is still held by pioneering DW-appliance provider Teradata. That vendor recently released a high-end, shared-nothing, massively parallel processing (MPP) DW appliance that can scale to an astounding 10 petabytes across as many as 1,024 compute/storage nodes.
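The shared-nothing MPP idea behind that kind of scaling can be sketched in a few lines. This is a toy illustration of the general technique, not Teradata's actual implementation; the node count, column names, and function names are all invented for the example. Each row is routed to exactly one node by hashing a distribution key, so every node can scan and aggregate its own local slice in parallel, with no shared storage to contend over:

```python
import hashlib

# Toy sketch of shared-nothing hash partitioning. NUM_NODES and the
# column names are invented for illustration.
NUM_NODES = 4

def node_for(key):
    """Route a distribution key to a node via a stable hash."""
    digest = hashlib.md5(str(key).encode()).hexdigest()
    return int(digest, 16) % NUM_NODES

def distribute(rows, key_col):
    """Partition rows across nodes on the chosen distribution key."""
    nodes = [[] for _ in range(NUM_NODES)]
    for row in rows:
        nodes[node_for(row[key_col])].append(row)
    return nodes

def parallel_sum(nodes, value_col):
    """Each node aggregates its local slice; a coordinator merges partials."""
    partials = [sum(r[value_col] for r in node) for node in nodes]
    return sum(partials)

rows = [{"cust_id": i, "amount": i * 10} for i in range(1000)]
nodes = distribute(rows, "cust_id")
total = parallel_sum(rows and nodes, "amount")  # same answer as one big scan
```

Scaling out, in this model, is mostly a matter of adding nodes and redistributing keys; the aggregate logic never changes.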
Oracle and HP recently joined the petabyte battle with all guns blazing. At Oracle's annual OpenWorld conference, they jointly announced general availability of a new petabyte-scalable DW appliance: the HP Oracle Database Machine, which includes the HP Exadata Storage Server. They touted its "extreme" performance and scaling features, bolstering those claims through public demos and beta-tester testimonials.
Most significant, they enlisted none other than Oracle CEO Larry Ellison and HP honchos Mark Hurd and Ann Livermore to unveil the new offering from the conference's main stage.
Clearly, the HP Oracle Database Machine is highly strategic for both companies. It provides a platform for Oracle to sell more database licenses and for HP to sell more server and storage hardware into DW deployments. It will almost certainly get the partners onto vendor short lists, alongside Teradata, for petabyte-scale DW solutions, which are increasingly being deployed in such vertical markets as telecommunications, government and financial services.
Also, it helps them blunt the momentum of DW appliance up-and-comer Netezza, whose platform, like the new Oracle/HP offering, performs SQL processing in an intelligent storage layer, thereby accelerating queries and table scans against very large data sets.
For sure, the recent Oracle/HP announcement was substantial and has shifted the competitive dynamics in the high-end DW market. But it was also an exercise in pure, albeit well-engineered, marketing hype. Predictably, it triggered an immediate firestorm of heated retorts from aggrieved competitors, which will almost certainly escalate in coming months.
In the fog of war, the first casualty is perspective, and that's certainly the case in this competitive fracas. Buyers of DW solutions should exercise extreme caution when evaluating the new Oracle/HP solution vis-à-vis comparably scalable offerings from Teradata, Sybase, Greenplum, IBM and others. You'll definitely need to apply the standard caveats to Larry Ellison's bold price/performance claims for his new monster DW appliance. And considering that Ellison was employing the native marketing speak of the DW arena, you'll need to apply the same grains of salt to his competitors' tails. Everybody in the DW market presents their self-serving performance story in much the same way as Oracle's big kahuna.
For starters, Ellison studded his talk with what might be regarded as the "virtuous coefficients" of DW performance enhancement: 10x, 20x, 30x, 40x, 50x, as high as 72x speedups have been documented by beta testers of the HP Oracle Database Machine. Of course, every DW professional knows that these performance boosts are extremely sensitive to myriad implementation factors, such as what you put in a SQL "where" clause, how many table joins you perform, whether and how you compress the data and so forth.
The performance enhancements are also relative to whatever DW configuration -- well-engineered or otherwise -- the beta testers had implemented prior to getting their hands on this shiny new uber-appliance. Note the tag line near the end of Ellison's presentation (emphasis added): "10-50x faster than current Oracle data warehouses."
Also, Oracle's big boss hammered Teradata and Netezza with benchmarks that were ostensibly apples-to-apples. However, Ellison's presentation seriously lacked the detailed footnoting that would be necessary to ascertain that he was indeed comparing his product against comparably configured instances of rival offerings that were processing comparable workloads. Where are those fast-talking, TV-commercial pharmaceutical disclaimer readers when we need them?
But even without aid of a magnifying glass, it was clear that Ellison was comparing his appliance directly to the Teradata 2550 and Netezza 10100 on the basis of a single common-denominator, configuration-wise: They all have a one-rack footprint. That's an odd basis for comparison. Those competitors do in fact have higher-end DW-appliance models, with more capacity, that might serve as a better basis for performance and price comparisons. Somehow, though, Oracle chose to overlook that fact. Why did it size up a 168-terabyte Oracle/HP machine against 43-terabyte offerings from Teradata and Netezza respectively?
Furthermore, Oracle somehow failed to benchmark these same solutions on the full range of performance criteria that actually matter in DW and business intelligence (BI) deployments, such as query response times, concurrent usage, mixed workload support, load speed and transaction throughput. Of course, even if Oracle had provided reliable, unbiased, third-party benchmarks in all of these areas, it would have been useless if the company didn't apply them to comparably configured Teradata and Netezza offerings.
And the price-comparison chart -- including those same rival solutions -- was also seriously deficient. Most notably, the HP Oracle Database Machine's overall price, as presented by Ellison, lacked the requisite Oracle Database Real Application Cluster license fees. However, the stated prices for the Teradata and Netezza solutions definitely included the database management systems that come configured into those offerings (though, of course, Netezza has a free open source database, PostgreSQL, at the heart of its offering). So when you factor in all relevant costs, the new HP Oracle Database Machine doesn't look quite as attractive on the common-denominator of acquisition price per usable terabyte of production data.
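The "acquisition price per usable terabyte" yardstick argued for here is simple arithmetic, but it shows exactly how omitting a line item skews the comparison. In the sketch below, the 168-terabyte capacity comes from the article; the dollar figures are made-up placeholders, not actual vendor list prices:

```python
# Hypothetical illustration of the price-per-usable-terabyte metric.
# The price figures below are invented placeholders for illustration only.
def price_per_usable_tb(total_price, usable_tb):
    return total_price / usable_tb

hardware_price = 2_000_000    # hypothetical appliance hardware price
db_license_price = 1_000_000  # hypothetical database/RAC license fees
usable_tb = 168               # one-rack capacity cited in the article

without_licenses = price_per_usable_tb(hardware_price, usable_tb)
with_licenses = price_per_usable_tb(hardware_price + db_license_price,
                                    usable_tb)
# Leaving the license fees out of the numerator understates the metric
# by a third in this made-up example.
```

The point is not the specific numbers but the method: every cost that a buyer must actually pay belongs in the numerator before vendors' charts can be compared.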
Finally, Ellison, like most DW vendors, implicitly presented his solution's architectural approach as the gold standard against which all others must be disparaged. That, of course, is a highly debatable proposition.
For one thing, Oracle Database 11g -- the software heart of the appliance -- is still a general-purpose relational DBMS that has one foot in DW but another solidly planted in online transaction processing (OLTP). By contrast, Teradata, Sybase, Netezza, Greenplum and other competitors have optimized their DBMSs for DW from the get-go, and do not support OLTP.
Also, Oracle's new appliance implements a shared-disk storage-area network architecture. By most accounts, shared-disk approaches are inherently less scalable than the shared-nothing MPP approach at the heart of DW solutions from, among others, Teradata and Greenplum.
And the Exadata storage layer can only parallelize SQL queries, and only against structured relational data. In its present incarnation, the Exadata storage grid cannot be used to execute a wider range of analytic functions or handle unstructured and semi-structured data types. Consequently, it is not applicable to the new generation of "content DWs" or for any of the in-database analytics that might be applied to the myriad nonrelational data types that reside in those warehouses.
Of course, Larry Ellison didn't go into anywhere near this degree of industry context. His job was and is to sell the world on an important new Oracle product and partnership, and he did so quite well. We shouldn't expect his direct competitors to be any more frank about their respective DW solutions' limitations. No commercial DW platform can optimally address every business-analytics requirement, now and future.
Sorting through the field of high-end DW solutions is getting more difficult, due to the diversity of vendor approaches. IT professionals need to read between the lines of DW vendors' increasingly breathtaking product announcements -- and talk to a consultant or analyst in the know -- before deciding if Oracle, HP or any other solution provider is truly breaking new ground.
If you find all of these complexities and caveats extremely confusing, and you're having trouble deciding which high-end appliance-based solutions can support the most extreme petabyte-scale workloads, welcome to the new DW market.
All contents copyright 1995-2008 Network World, Inc. http://www.networkworld.com
Wednesday, October 08, 2008
See Duane Nickull’s post and Anne Thomas Manes’ musings on the same.
First off, I started reading into this because I wasn’t, and still am not, clear on why Nickull is using the term “forensic” in this context. He refers to “forensic architecture” as “the process of describing the architecture of something after it has been built,” as if this is a non-judgmental effort. But he then uses it to refer to his dissection of various failed, ineffectual, and/or underwhelming SOA standardization efforts in which he was involved: ebXML, W3C Web Services Architecture Working Group, UN/CEFACT eBusiness Architecture, and OASIS Reference Model for SOA. But it becomes clear, in his analysis, that he’s actually deploying the term “forensic” in the standard negative connotation of post-mortem (on a victim) and building of an evidence-based case for prosecution of perpetrators.
Though, fortunately, Nickull doesn’t lay it on that heavy. And he provides a good analysis of what went wrong and lessons learned from those various efforts. But, reading into this, it’s clear to me that he’s primarily critiquing the applicability of the software development life cycle “waterfall methodology” to committee-based development of standards in sprawling, ill-defined architectural initiatives--of which SOA is perhaps a classic case in point.
What his analysis points to is the value of a retrospective approach to clarifying the core design principles of an emergent architectural phenomenon that simply works--such as the Web, with REST as perhaps the textbook premier example of a “principles clarification” exercise. In contrast to the “waterfall” method, I’d call this the “salmon swimming upstream to reconceptualize in their presumed/intuited spawning place” approach. Or maybe simply the “salmon” methodology.
Which reminds me of a point I need to make. For the past several years, I’ve been focusing on a space--business intelligence (BI), data warehousing (DW), and data integration (DI)--in which SOA (however defined and standardized) has had just a minimal footprint, and primarily as one integration approach in the back-end. But BI/DW/DI continues to grow and innovate at an amazing clip, still hinging on an old, stable, universal standard: SQL (with SOA-ish XQuery/XPath not achieving any significant momentum swimming upstream against this powerful current).
Interestingly, there are few if any industry specification activities in any forum that involve the BI/DW/DI segment. Much of the front-end BI innovation revolves around integrating Web 2.0-style interfaces and services, and much of that relies on REST (the non-architected architecture that has totally eclipsed SOA in real-world adoption).
REST-ive salmon continue to spawn like crazy downstream in the BI and analytics market. Look, for example, at my latest Forrester Information and Knowledge Management blogpost on the next-generation OLAP “Project Gemini” features that Microsoft began demonstrating publicly this week. Not much SOA in this approach, but a hefty dose of REST, thanks to its tight integration with Microsoft’s Sharepoint portal/collaboration platform, and a lot of SQL, owing to integration with SQL Server Analysis Services.
By the way, people who read James Kobielus’ blog may or may not realize that I now put most of my tech-meaty musings under Forrester’s I&KM blog. Please plug that blog into your reader. I’m one of many Forrester analysts who post to that regularly. Seriously great stuff, all of it.
And you thought all I do nowadays is write pretentious poetry.
Pretentious? Moi? Au contraire, mon frere.
Monday, October 06, 2008
Thursday, September 25, 2008
Wednesday, September 17, 2008
Monday, September 15, 2008
Monday, September 08, 2008
I like it.
I like the emptiness of it--just two super-rich super-famous guys in their 50s in an aggressively fake situation.
And I like the anti-punchline, per Seinfeld: “Give me a signal--adjust your shorts.” [Gates squinches buttocks in baggy pants as they walk side-by-side, viewed from behind] “I knew it!”
Neither of these guys is trying to seem even remotely hip, and I actually like that. I don’t know about anybody else, but what I love most about the Apple commercials is John Hodgman as the faux Gates. Whereas Justin Long as the faux Jobs--I just want to push a grapefruit in his smug mug--sorry, true feeling there.
Re the Seinfeld commercial (first of a series I suppose), notice how Gates now looks his age, but Seinfeld, though two years older than the software mogul, looks at least 10 years younger. I wonder which one of them works out.
Also notice how essentially mute Gates is in the commercial--but how he and Jerry make an odd funny level of eye contact in the shoe store. That was interesting.
You can view it at: http://www.youtube.com/watch?v=uz6amk3P-hY
This one’s actually worth repeated viewings. Within reason.
Hey, I’m a dork, I admit it. I loved Microsoft Bob too.
By the way, at a minute-and-a-half, it will undoubtedly re-run in tightly edited format (hmmm..."tight"). With that in mind, my suggestions to the editors (in sequence):
--Seinfeld: "Shoe Circus...Bill Gates"
--Seinfeld/Gates: "Is that your toe?" "It's leather." [that eye contact]
--Seinfeld/Gates: "Ever wear clothes in the shower, Bill?" "Never" "You're dressed and you're clean."
--Seinfeld: "Guess what Bill, you're a 10."
--Seinfeld: "Magnum Jupiter brain...."
--Seinfeld: “Give me a signal--adjust your shorts.” [Gates squinches buttocks in baggy pants as they walk side-by-side, viewed from behind] “I knew it!”
That's easily the dorkiest concentrated take-away from this commercial. Compressed into 30 seconds.
A little editing can make a world of difference.
Thursday, August 21, 2008
In many ways, the database as we know it is disappearing into a virtualization fabric of its own. In this emerging paradigm, data will not physically reside anywhere in particular. Instead, it will be transparently persisted, in a growing range of physical and logical formats, to an abstract, seamless grid of interconnected memory and disk resources; and delivered with subsecond delay to consuming applications.
Real-time is the most exciting new frontier in business intelligence, and virtualization will facilitate low-latency analytics more powerfully than traditional approaches. Database virtualization will enable real-time business intelligence through a policy-driven, latency-agile, distributed-caching memory grid that permeates an infrastructure at all levels.
As this new approach takes hold, it will provide a convergence architecture for diverse approaches to real-time business intelligence, such as trickle-feed extract transform load (ETL), changed-data capture (CDC), event-stream processing and data federation. Traditionally deployed as stovepipe infrastructures, these approaches will become alternative integration patterns in a virtualized information fabric for real-time business intelligence.
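One of the patterns named above -- event-stream processing -- can be sketched minimally as a running aggregate that updates as records trickle in, rather than recomputing from a batch-loaded table. The class and field names here are invented for illustration:

```python
from collections import defaultdict

# Minimal sketch of stream-style real-time BI: maintain running
# aggregates incrementally as change events arrive (trickle feed),
# so a low-latency view is always available between bulk loads.
class RunningSales:
    def __init__(self):
        self.totals = defaultdict(float)

    def on_event(self, event):
        """Apply one change event as it arrives."""
        self.totals[event["region"]] += event["amount"]

    def snapshot(self):
        """Current view, readable at any moment with subsecond latency."""
        return dict(self.totals)

agg = RunningSales()
for e in [{"region": "east", "amount": 100.0},
          {"region": "west", "amount": 50.0},
          {"region": "east", "amount": 25.0}]:
    agg.on_event(e)
# agg.snapshot() -> {"east": 125.0, "west": 50.0}
```

In a converged fabric, this kind of incremental state would live in the distributed-caching layer rather than in an application process, but the integration pattern is the same.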
The convergence of real-time business-intelligence approaches onto a unified, in-memory, distributed-caching infrastructure may take more than a decade to come to fruition because of the immaturity of the technology; lack of multivendor standards; and spotty, fragmented implementation of its enabling technologies among today's business-intelligence and data-warehouse vendors. However, all signs point to its inevitability.
Case in point: Microsoft, though not necessarily the most visionary vendor of real-time solutions, has recently ramped up its support for real-time business intelligence in its SQL Server product platform. Even more important, it has begun to discuss plans to make in-memory distributed caching, often known as "information fabric," the centerpiece middleware approach of its evolving business-intelligence and data-warehouse strategy.
For starters, Microsoft recently released its long-awaited SQL Server 2008 to manufacturing. Among this release's many enhancements is a new CDC module and proactive caching in its online analytical processing (OLAP) engine. CDC is a best practice for traditional real-time business intelligence, because, by enabling continuous loading of database updates from transaction redo logs, it minimizes the performance impact on source platforms' transactional workloads. Proactive caching is an important capability in the front-end data mart because it speeds response on user queries against aggregate data.
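The CDC pattern described above can be sketched in miniature. Instead of re-extracting whole source tables, the loader tails a change log (standing in for a transaction redo log) and applies only the deltas past its last-seen position, which is why CDC barely touches the source's transactional workload. The structures and field names below are invented for illustration, not SQL Server's actual CDC API:

```python
# Toy changed-data-capture (CDC) sketch: apply logged deltas to a target
# table keyed by primary key, tracking a high-water mark between cycles.
warehouse = {}

change_log = [
    {"op": "insert", "pk": 1, "row": {"name": "Ann", "balance": 100}},
    {"op": "insert", "pk": 2, "row": {"name": "Bob", "balance": 200}},
    {"op": "update", "pk": 1, "row": {"name": "Ann", "balance": 150}},
    {"op": "delete", "pk": 2, "row": None},
]

def apply_changes(target, log, from_lsn=0):
    """Apply logged changes at or past the last-seen log position."""
    for lsn, entry in enumerate(log):
        if lsn < from_lsn:
            continue  # already applied in an earlier load cycle
        if entry["op"] == "delete":
            target.pop(entry["pk"], None)
        else:  # treat insert and update alike, as upserts on the target
            target[entry["pk"]] = entry["row"]
    return len(log)  # new high-water mark for the next cycle

high_water = apply_changes(warehouse, change_log)
# warehouse -> {1: {"name": "Ann", "balance": 150}}
```

Continuous loading is then just re-running `apply_changes` with the saved high-water mark as `from_lsn`.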
Also, Microsoft recently went public with plans to develop a next-generation, in-memory distributed-caching middleware code-named "Project Velocity." Though the vendor hasn't indicated when or how this new technology will find its way into shipping products, it's almost certain it will be integrated into future versions of SQL Server. With Project Velocity, Microsoft is playing a bit of competitor catch-up, considering that Oracle already has a well-developed in-memory, distributed-caching technology called Coherence, which it acquired more than a year ago from Tangosol. Likewise, pure-plays, such as GigaSpaces, Gemstone Systems, and ScaleOut Software, have similar data-virtualization offerings.
Furthermore, Microsoft recently announced plans to acquire data-warehouse-appliance pure-play DATAllegro and to move that grid-enabled solution over to a pure Microsoft data-warehouse stack that includes SQL Server, its query optimization tools and data-integration middleware. Though Microsoft cannot discuss any road-map details until after the deal closes, it's highly likely it will leverage DATAllegro's sophisticated massively parallel processing, dynamic task-brokering and federated deployment features in future releases of its databases, including the on-demand version of SQL Server. In addition, it doesn't take much imagination to see a big role for in-memory distributed caching, à la Project Velocity in Microsoft's future road map for appliance-based business-intelligence/data-warehouse solutions. Going even further, it's not inconceivable that, while plugging SQL Server into DATAllegro's platform (and removing the current Ingres open source database), Microsoft may tweak the underlying storage engine to support more business-intelligence-optimized logical and physical schemas.
Microsoft, however, isn't saying much about its platform road map for real-time business-intelligence/data-warehousing, because it probably hasn't worked out a coherent plan that combines these diverse elements. To be fair, neither has Oracle -- or, indeed, any other business-intelligence/data-warehouse vendor that has strong real-time features or plans. No vendor in the business-intelligence/data-warehouse arena has defined a coherent road map yet that converges its diverse real-time middleware approaches into a unified in-memory, distributed-caching approach.
Likewise, no vendor has clearly spelled out its approach for supporting the full range of physical and logical data-persistence models across its real-time information fabrics. Nevertheless, it's quite clear that the business-intelligence/data-warehouse industry is moving toward a new paradigm wherein the optimal data-persistence model will be provisioned automatically to each node based on its deployment role -- and in which data will be written to whatever blend of virtualized memory and disk best suits applications' real-time requirements.
For example, dimensional and column-based approaches are optimized to the front-end OLAP tier of data marts, where they support high-performance queries against large, aggregate tables. By contrast, relational and row-based approaches are suited best to the mid-tier of enterprise data-warehouse hubs, where they facilitate the speedy administration of complex hierarchies across multiple subject-area domains. Other persistence approaches -- such as inverted indexing -- may be suited to back-end staging nodes, where they can support efficient ETL, profiling and storage of complex data types before they are loaded into enterprise data-warehouse hubs.
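The row-versus-column trade-off in that paragraph is easy to see in a toy sketch. A row store keeps each record contiguous (good for whole-row, OLTP-style access); a column store keeps each attribute contiguous, so an aggregate over one attribute touches only that attribute's values. The table contents below are invented for illustration:

```python
# Toy contrast of row-based vs. column-based persistence of one table.
# Row store: each record is stored contiguously.
row_store = [
    ("2008-08-01", "east", 100),
    ("2008-08-01", "west", 50),
    ("2008-08-02", "east", 75),
]

# Column store: the same table pivoted into per-column arrays.
column_store = {
    "date": ["2008-08-01", "2008-08-01", "2008-08-02"],
    "region": ["east", "west", "east"],
    "amount": [100, 50, 75],
}

# Row-store aggregate: every full row is read just to pick out one field.
total_rows = sum(r[2] for r in row_store)

# Column-store aggregate: only the 'amount' column is scanned.
total_cols = sum(column_store["amount"])

assert total_rows == total_cols == 225
```

At warehouse scale, the column layout also compresses far better, since each array holds values of a single type; that, plus the narrower scans, is why column stores suit the query-heavy data-mart tier.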
For sure, all this virtualized data infrastructure will live in the "cloud," in managed-service environments, and within organizations' existing, premises-based business-intelligence/data-warehouse environments. It would be ridiculous, however, to imagine this evolution will take place overnight. Even if solution vendors suddenly converged on a common information-fabric framework -- which is highly doubtful -- enterprises have too much invested in their current data environments to justify migrating them to a virtualized architecture overnight.
Old data-warehouse platforms linger on generation after generation, solid and trusty, albeit increasingly crusty and musty. They won't get virtualized out of existence anytime soon, even as the new generation steals their oxygen. Old databases will expire only when someone migrates their precious data to a new environment, then physically pulls the plug, putting them out of their misery.
Sunday, August 03, 2008
and stupid Lucy, griping
at a starry sky.
thinks your wee anxieties
are this fussworthy.
Or that your little
life, daily composure, will
somehow shrink the night.
Oh, brother! Only
an only child could have this
big a moony head.
Sally surely was
adopted and Charlie Brown
knows it but won’t tell.
blonde girl barely registers
on his self-regard.
In their fifty-year
childhood, Sparky never got
past the first eighteen.
at the Daisy Hill Puppy
Franklin, a black character
who was simply black?
What was funnier:
the round head kid’s frustrations
or beagle’s fancies?
Lucy’s little bro
contemplating his blanket
and smoking his thumb?
My priest pulpited
upon the Peanuts phenom
and divorce, Sparky’s.
Humor is waning
lately: the implication,
sin, the wages: duh!
The man’s overworked,
I thought, raising two sets of
kids can hammer you.
Joyce was certainly
his muse, unacknowledged, his
Norse Alice Kramden.
him a girl, then a Linus
and a few Reruns.
Settled an estate
around this milquetoast of a
Crisp lettering, bold
silence, warm meditations
on the most minute.
Schulz was sad and fused
into every frame.
So we were treated
to his affair and breakup
through chatty Snoopy.
Google Earth clearly
shows Sparky and Snoopy’s home
ice and tennis court.
Googling the Moon shows
the Sea of Tranquility,
where boy and dog roamed.
Beyond that the Star
of Bethlehem, which Linus
mentioned on TV.
Throughout an anxious
childhood, real children’s voices
skated in your rink.
kids, inked and lined in pain and
shades of awareness.
Likely as Shermy
to grace an old friend with an
Sit and draw. All you
ever do is sit in this
stupid room and draw!
(Silence). (Silence). (More
silence). Well? What do you have
to say for yourself?
(More silence) (No eye
contact). (Rapt concentration).
(Scribble, scribble). “Sigh!”
Monday, July 07, 2008
Man can multitask
and recognize when one big
one, for him, is done.
The man’s stack is still
very deep. Yeah. Iconhood
alone’s a full-time.
Not to mention his
new to-do: doing good on
a grandiose scale.
Bill: the short form of
billionaire: a shorthand for
Kids know him: name a
five-year-old, in the day, knew
what Carnegie wrought.
Or J.P., John D.
Not one learnt their ABCs
on raw pig iron.
Met him. Upon two
rooms. Me: Q. Him: A.
Smart, technical. But that’s par
for this profession.
Has private nervous
tics I’m told, and swears a fair
bit, but so do I.
He’s fifty-two now.
Has held one job since nineteen.
Has weathered pressures
uncommon, advancing far
Put yourself in his
place: Would you still rise early,
check e-mail daily?
As the salesperson
wearies of the pitch, software’s
abstractions grow old.
Real though the money
may be, coding can often
blur into word games.
So it’s no wonder
programmers seek programs a
bit more tangible.
Melinda, for one.
A wife programs you in ways
May make you think of
work less. Retire earlier
than was your habit.
May make you want to
designate her cornerstone
of your foundation.
Dare say he and she
don’t see these as their waning
or remaining days.
Aren’t hunkering down
in their subterranean
Lake Washington lodge.
dollars and hours from their cache
of Microsoft years.
An open expanse
of marriage is scary: too
much time before us.
Office visits: these
daily separations have
held us together.
The milestones of our
children’s lives, the schedules of
work the world demands.
Can you walk away
from unfinished business and
leave others the mess?
Then dive forever
into the muck of ages
and leave a fresh one?
plan and platform that, like your
first, will decompose?
William: is your will
writ large? Are your intentions
As we straddle our
there’s no letting go.
Men still managing
ours. Me: these thoughts. You: still chair
Thursday, June 26, 2008
Business intelligence (BI) is essentially a set of best practices for building models to answer business questions. However, today’s BI best practices may be suboptimal for many enterprises’ decision-support requirements.
For most users, BI is a journey that’s been modeled and mapped out in advance by others, following a well-marked path through vast data sets. Data models, which must often be pre-built by specialists, generate or shape the design of such key BI artifacts as queries, reports, and dashboards. Essentially, every BI application is some data modeler’s prediction of the types of questions that users will want to ask of the underlying data marts. Sometimes, those predictions are little more than an educated guess--and are not always on the mark.
BI’s most ubiquitous data-modeling approach is the online analytical processing (OLAP) data structure known as a “cube.” The OLAP cube--essentially a denormalized relational database--sits at the heart of most BI data marts. OLAP cubes, usually implemented as multidimensional “star” or “snowflake” schemas, allow large recordsets to be quickly and efficiently summarized, sorted, queried, and analyzed. However, no matter how well designed the dimensional data models within any particular cube, users eventually outgrow these constraints and demand the ability to drill down, up, and across tabular recordsets in ways not built into the underlying data structures.
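A minimal star schema, and the dimensional rollup it exists to serve, can be sketched with an in-memory database. One fact table joins to denormalized dimension tables and is summarized along a dimension -- the query shape a cube pre-computes. The table and column names below are invented for illustration:

```python
import sqlite3

# Toy star schema: a sales fact table surrounded by two dimension tables.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
    CREATE TABLE dim_date (date_id INTEGER PRIMARY KEY, year INTEGER);
    CREATE TABLE fact_sales (
        product_id INTEGER REFERENCES dim_product(product_id),
        date_id INTEGER REFERENCES dim_date(date_id),
        amount REAL
    );
    INSERT INTO dim_product VALUES (1, 'books'), (2, 'music');
    INSERT INTO dim_date VALUES (10, 2007), (11, 2008);
    INSERT INTO fact_sales VALUES (1, 10, 100.0), (1, 11, 150.0), (2, 11, 60.0);
""")

# Roll up the facts along the product dimension for one year -- the kind
# of summarization a pre-built cube would answer from its aggregates.
rollup = con.execute("""
    SELECT p.category, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_product p ON p.product_id = f.product_id
    JOIN dim_date d ON d.date_id = f.date_id
    WHERE d.year = 2008
    GROUP BY p.category
    ORDER BY p.category
""").fetchall()
# rollup -> [('books', 150.0), ('music', 60.0)]
```

The rigidity discussed next follows directly from this design: every dimension a user might ever want to slice by must be modeled, joined, and aggregated in advance.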
The chief disadvantage of multidimensional OLAP cubes is their inflexibility. Cubes are built by pre-joining relational data tables into fixed, subject-specific structures. One way of getting around these constraints is the approach known as relational OLAP, which retains the underlying normalized relational storage approach while speeding multidimensional query access through “projections.” However, relational OLAP also suffers from the need for explicit, upfront modeling of relationships within and among the underlying tabular data structures.
From the average end user’s point of view, all of this is mere plumbing--invisible and boring--until it prevents them from obtaining the new query tools, structured reports, and dashboards needed to do their jobs. One unfortunate consequence of OLAP cubes’ inflexibility is that requests for new BI applications inevitably wind up in a backlog of IT projects that can take weeks or months to deliver. What might seem a trivial thing to the end user--such as adding a new field or new calculation to an existing report--might represent a time-consuming technical exercise for the data modeling professional. Behind the scenes, this simple decision-support request might, beyond the front-end BI tweaks, also require remodeling of the data mart’s OLAP star schema, re-indexing of the data warehouse, revision of extract transform load (ETL) scripts, and retrieval of data from different transactional applications.
No one expects the OLAP cube to vanish completely from the BI landscape, but its role in many decision-support environments has been declining over the past several years. Increasingly, vendors are emphasizing new approaches that, when examined in a broader context, appear to be loosening OLAP’s lockhold on mainstream BI and data warehousing. The emerging paradigm for ad-hoc, flexible, multi-dimensional, user-driven decision support includes the following important approaches:
- Automated discovery and normalization of dispersed, heterogeneous data sets through a pervasive metadata layer
- Semantic virtualization middleware, which supports on-demand, logically integrated viewing and query of data from heterogeneous, distributed data sources without need for a data warehouse or any other centralized persistence node
- On-the-fly report, query, and dashboard creation, which relies on dynamic aggregation of data, organization of that data within relevant hierarchies, and presentation of metrics that have been customized to the user or session context
- Interactive data visualization tools, which enable user-driven exploration of the full native dimensionality of heterogeneous data sets, thereby eliminating the need for manual modeling and transformation of data to a common schema
- Guided analytics tools, which support user-driven, ad-hoc creation of sharable, extensible models containing data, visualization, and navigation models for customizable decision-support scenarios
- Inverted indexing storage engines, which support more flexible, on-the-fly assembly of structured data in response to ad-hoc queries than is possible with traditional row-based or column-based data warehousing persistence layers
- Distributed in-memory processing, which enables continuous delivery of intelligence being extracted in real-time from millions of rows of data that originates in myriad, distributed data sources
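The inverted-indexing idea in the list above can be sketched in a few lines. Instead of scanning rows, each attribute value points back at the row IDs containing it, so an ad-hoc conjunctive filter becomes a set intersection over postings lists, with no pre-built cube required. The data and function names are invented for illustration:

```python
from collections import defaultdict

# Toy inverted index over a small table: map (column, value) pairs to the
# set of row ids containing that value.
rows = [
    {"id": 0, "region": "east", "product": "books"},
    {"id": 1, "region": "west", "product": "books"},
    {"id": 2, "region": "east", "product": "music"},
]

index = defaultdict(set)
for row in rows:
    for col in ("region", "product"):
        index[(col, row[col])].add(row["id"])

def lookup(**criteria):
    """Resolve an ad-hoc conjunctive query purely from postings lists."""
    postings = [index[(col, val)] for col, val in criteria.items()]
    return set.intersection(*postings) if postings else set()

matches = lookup(region="east", product="books")  # -> {0}
```

Because any combination of attributes can be intersected on the fly, queries need not follow dimensional paths that were modeled in advance -- which is precisely the flexibility argument made above.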
Whatever we choose to call this new era, look around you. It has already arrived. We can see this trend in the growing adoption of all of these constituent approaches in production BI environments everywhere. However, to date, few enterprises have combined these post-OLAP approaches in a coherent BI architectural framework.
But that day is rapidly coming to mainstream BI and data warehousing environments everywhere. OLAP’s hard-and-fast, cube-based approach is slowly but surely dissolving in this new era of more flexible, user-centric decision support.
Saturday, June 21, 2008
Friday, June 06, 2008
Thursday, June 05, 2008
Every pitch hits some
Distinctly sharp mark.
Typically it’s at
Least the fact that this vendor’s
Lawyers nailed the name.
They alone in their space are
How nicely every
Press release marks another
Milestone in their rise.
How capital was
Amassed, plans staked firm, and bright
Warm brains brought on board.
In this major or minor
Remind me: I’ve been
Around this block several times.
Perhaps I’m jaded.
We’ve met, haven’t we--
Mandalay or Caesars--or
Hard to distinguish
This present polish from the
Glint of bygone gems.
High ceiling. A day
before booths are built and some
of the comers come.
concrete plain, a room swept, kept
on ready’s near edge.
Walker walks. The guards
regard the span of several
leveled city blocks.
Tuesday, June 03, 2008
Monday, June 02, 2008
Fifth of five questions sent to us before the Forrester Analyst Relations (AR) Council panel this past month, plus my thoughts:
- Q: What are the one or two things you recommend all AR professionals do right now with respect to blogging?
- A: One: Read the blogs. Two: Respect the bloggers--and treat them as full analysts--if and only if they behave as professional analysts in this and all other media through which they present their thoughts. In this latter regard, also respect the fact that each professional analyst may choose to surface different analytical perspectives, priorities, and voices through different channels--“color outside the lines,” as it were--show different sides of themselves--and if you wonder how it all coheres with and supports their 9-to-5 selves, just ask the analyst to explain themselves. The best of us are continually self-reinventing/evolving. The same James Kobielus stands behind this and all other things I choose to say, wherever, whenever, however--but don’t expect me to stay stuck in a rut of saying the same things on the same topics over and over. And don’t expect me to stay content broadcasting through only one channel forever. Public speaking, for example, is something I enjoy and haven’t done enough of. I enjoy participating in panel sessions, such as this latest one alongside Messrs. Gardner, Lusher, Hopkins, and Eunice. Actually, I enjoyed listening to them as much as wagging my own tongue. But nobody expects an analyst to just sit there and listen. I’m expected to dispense brilliance on demand. So I’m always spring-loading fresh thoughts to share. Through any channel.
Another panel query, plus my two cents:
- Q: What impact does microblogging--or Facebook, LinkedIn profiles, discussion groups, etc.--have on AR? Is this something that is around the corner, or not likely to be important?
- A: First and foremost, you should target the analyst-ish voices who use the broadcast-ish media, such as “macroblogging” (i.e., plain old blogging of the sort you’re reading at this moment), and not the more “narrowcast-ish” media, such as micro-blogging. Bottom line: widespread publication is key to influence. After all, that’s why you should target the bigger analyst firms, and the most widely read macro-bloggers, with your AR initiatives, while at least keeping everyone else, especially the mid-tier analyst firms, but also the micro-pundits, such as the twitterers, in some inner or outer band of the AR loop. You should look at microbloggers as part of the “long tail” of the analyst community, now and forevermore--collectively they have considerable mass and gravitational sway, but individually they’re barely in your telescope. But in dealing with the long tail of the analyst community, there’s only so deep you’ll want to dive into the Kuiper Belt--much of this dark-matter commentary is undistinguished me-too pontificating at best, or just flame-intensive blather at worst. Wait till some cometary fragment of it orbits closer to your home planet, and whips its long tail in your face, before giving it sustained attention.
Another question posed to the panel, plus my response (not verbatim--it wasn’t being recorded--and I don’t have a phonographic memory--rather, this is a paraphrase of things I actually said live, plus stuff I should have said, perhaps stuff others on the panel said that in my memory is now indistinguishably blurred into my own comments):
- Q: Does the activity of blogging fundamentally change the definition of “who is an analyst” or “a reliable voice in the industry?” and--as a consequence--who AR professionals should work with?
- A: Not really. My feeling is that analysts have always primarily been self-designated entities, and that blogging has simply intensified that trend by making it free, fast, and easy to self-publish. We’re all fundamentally self-designated--in the sense that many of us start our core business/career models by self-asserting through some channel(s)--perhaps a newsletter that we put out to see if the world will read/subscribe--and then, if widely read and perhaps even paid to opine, we the analyst, by virtue of that, get progressively validated by others (to greater or lesser degrees). Self-concept is key. We self-designate ourselves as somebody worth paying attention to. Our “know-something” self-image may precede the perception of same by others (hopefully, without much lag, and without too much disconnect between our self-perception and universal regard). We don’t usually start our professional lives as “leading industry analysts”--gosh knows I didn’t (and I’m not sure I completely fit that designator now)--I got initially validated by the good Dr. Peter G.W. Keen, and then by the fine editors of Network World, just three years out of grad school, in the late 80s, when the former hired me as an IT industry analyst, and the latter started publishing my still-going column. So we pay our dues, sometimes for years, struggling to gain visibility, acceptance, smarts, connections, reputation, differentiation, starpower, etc. Building our brand, and aging into it. But no matter how big we get for our britches, we’re still all just alternative voices, none of us infallible, none indispensable, some of us more or less reliable/valuable than others. It does little good to demonize or marginalize any alternative analyst-ish voice over the long term--it will just stoke ill will against your firm or client.
To the extent that any analyst (multi- or one-person, established or new, widely read or largely ignored) takes an interest (even a cynical/adversarial one) in your firm/client, you, as AR professionals, should respect their right to their viewpoint. You should also work with them--at least to the extent of adding them to your mailing list, and responding to their questions and requests for briefings (within reason). Reliability and professionalism are proved out by people’s track records (e.g., checking facts, responding to objections, defending themselves with sound argumentation, honoring NDAs, etc.). Some of these people just want your attention/validation, and just need to vent/showboat. That’s why they started blogging in the first place. Those who intend to keep at it long-term quickly adopt the professional ethics of a true analyst.
Friday, May 30, 2008
Second Q posed to the Forrester AR Council panel on how to relate to the blogosphere, followed by my A:
- Q: Does the manner in which the AR professional deals with blogging change with the size of the organization, e.g., is it harder for AR at larger firms to anticipate and address the myriad of issues coming at them from blogging pundits? Are smaller, more agile firms at an advantage?
- A: Hard to say. If you’re a bigger, more diversified, more dynamic vendor you’re likely to elicit more commentary from more external parties through more channels on more issues more of the time. But you’re also likely to have more of your own people reading--and anticipating--all of this, and preparing/spinning suitable responses. But you’re also more likely, if you’re big, to have more trouble coordinating internally among all stakeholders in order to prepare a concerted response. But, conversely, if you empower more of your people to post replies/counter-attacks through their own blogs, or through your company’s blogs, you can defuse the issues more rapidly. Or, if you’re not careful, light more fuses. And give the appearance that your right hand doesn’t know what your left hand is doing. Gee I wish there were easy answers.
Thursday, May 29, 2008
Whew...quite a string of travels...not through it all yet. In the past month, I’ve been to TIBCO’s TUCON (San Francisco) and SAP’s SAPPHIRE (Orlando), plus a quick IT vendor consult...next week, I do Informatica (Vegas), then the following week Microsoft (Orlando).
Last week, I was at Forrester's IT Forum 2008 in Vegas, where, among other things, I participated in a panel session on blogging, focusing on how analyst relations (AR) professionals should relate to “influencers” in the blogosphere.
Organized by Forrester’s Analyst Relations Council and moderated by Forrester VP Laura Ramos, the panel brought together leading IT industry analyst/bloggers plus those who blog about analyst/bloggers: Carter Lusher, president, Sage Circle; Dana Gardner, principal analyst, Interarbor Solutions; Bill Hopkins, founder & CEO, Knowledge Capital Group; and Jonathan Eunice, founder and principal consultant, Illuminata. Oh, and a “token” Forrester analyst who’s been kicking around the blogosphere for a few years, including, increasingly, on our information and knowledge management blog (in case you’re wondering why the rate of postings to my personal blog has dropped in the past few months--I’m still searching for the right rhythm, balance, and partitioning of the jim-o-spheres, left and right, between the two).
Last week’s Forrester AR Council panel was well-attended, and the questions from council members were excellent. My fellow panelists were everything we could have hoped: smart, informed, opinionated, articulate, provocative. I’ll leave it up to them, in their respective blogs, to repeat what they put forth.
Here, now, is the first question that was posed to us, plus the gist of how Kobielus responded:
- Q: How do AR professionals stay on top of bloggers and determine who to interact with and “influence” and who to ignore?
- A: Simply ask yourself who you read, who your colleagues read, who your clients read--whose pieces you all forward--whose you all link to--whose ideas stick in your minds--whose names, reputations, and methodologies resonate with everybody in your immediate work environment and/or industry. Those are the indicators of “influence.” To the extent that an analyst exerts such influence purely through one channel--blogging--all power to them. But the best analysts have always availed themselves of all channels at their disposal to inject their ideas into the bloodstream of the industry. Chances are that the chief “bloggers” are established analysts who have simply reinforced their brand through this medium. If they’ve made blogging the core of their for-pay business model, cool (and please explain how). Most of us analysts use blogging in various and sundry funky ways to supplement/promote our for-pay gigs.
Saturday, May 03, 2008
Analytic databases are the principal engines driving business intelligence (BI), delivering operational data into reports, dashboards, and ad-hoc queries.
Essential as they may be, analytic databases have been largely overlooked in the BI industry’s recent consolidation spree. Sitting at the core of data warehouses (DWs) everywhere, these data stores have been treated as mere plumbing rather than as differentiating platform components. Instead, most recent BI mergers have been driven by vendors’ desire to beef up their financial analytic applications, or add more sophisticated visualization, search, and other access-oriented features to their BI platforms.
Though often taken for granted, analytic databases will almost certainly become a key BI solution differentiator over the next several years. With the trend toward commoditization of core BI features, more vendors will distinguish their offerings through the speed, scalability, throughput, and mixed-workload support that only a well-tuned analytic database can provide. Every self-respecting BI vendor will boast that their analytic database can handle more concurrent users, process more complex multidimensional queries, load bulk data more rapidly, execute more compute-intensive transforms, and manage more massive data sets than the competition. Just as important, they’ll brag that they can do all this more cheaply than the next guy.
In an increasingly commoditized BI market, analytic price-performance is becoming the principal buying criterion. This trend is fueling the industry’s growing focus on analytic appliances, which are also called BI appliances or data warehousing (DW) appliances. Indeed, most of the leading BI vendors--SAP/Business Objects, IBM/Cognos, Oracle, Microsoft, and SAS Institute--provide their own analytic appliances now, or are developing appliance-based offerings on their own or with partners. Though these vendors will continue to deliver BI/DW solutions as packaged software offerings, they all see the appeal of appliances as turnkey solutions for many customer requirements. Midmarket customers, in particular, are taking a keen interest in appliances, which provide them with quick-deployment pre-optimized solutions and thereby relieve the burden on their limited technical staffs.
As analytic appliances become central to enterprises’ BI strategies, DW appliances will evolve into full-fledged BI platforms in their own right. Appliance vendors such as Teradata, HP, Netezza, Greenplum, DATAllegro, Dataupia, and ParAccel will expand their ability to run “in-database analytics” and other applications developed in-house, or by partners and customers. Appliance vendors will outdo each other in tuning database features--such as indexing, partitioning, in-memory caching, compression, cubing, tokenization, and query-plan optimization--that are geared for managing myriad analytic workloads. And every appliance vendor will beef up their hardware’s scalability through massively parallel processing, clustering, workload management, and other ongoing enhancements.
In addition, every vendor of column-oriented databases--which are exquisitely well-suited to data-intensive query processing--will soon either realign its go-to-market strategy around appliances or get out of the analytics market altogether. The performance advantages of a hardware-optimized column-oriented database over software-only rivals will be too pronounced for the latter to hold onto their market share. And though most appliance vendors currently eschew column-oriented approaches, preferring to tweak traditional row-oriented RDBMSs for multidimensional online analytical processing (OLAP), many will explore this alternative technique in order to eke out further performance improvements.
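To see why column orientation is so well-suited to data-intensive query processing, consider this toy Python sketch (the table and figures are invented for illustration): a row store must touch every full row to compute an aggregate over one attribute, while a column store scans a single contiguous column.

```python
# Toy comparison of row-oriented vs. column-oriented layouts (illustrative data).
row_store = [
    ("widget", "east", 100),
    ("widget", "west", 250),
    ("gadget", "east", 75),
]

# The same table pivoted into columns: each attribute stored contiguously.
column_store = {
    "product": ["widget", "widget", "gadget"],
    "region":  ["east", "west", "east"],
    "revenue": [100, 250, 75],
}

# Row store: aggregating one attribute still reads every full row tuple.
total_rows = sum(row[2] for row in row_store)

# Column store: the same aggregate scans one contiguous column, touching
# far less data--which is why scan-heavy analytic workloads favor it.
total_cols = sum(column_store["revenue"])

assert total_rows == total_cols == 425
```

On disk the difference is magnified: a column scan reads only the bytes of the queried attribute, and identical adjacent values compress far better, which is the performance edge the paragraph above predicts hardware-optimized column stores will press.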
The growing demand for cheap analytic horsepower will also foster the development of subscription-based DW services, also known as “DW 2.0,” “Database 2.0,” “cloud databases,” and “on-demand databases.” Though not the first entrant in this new arena, Microsoft is the most prominent, having recently rolled out a limited beta of its hosted SQL Server Data Services (SSDS), which is slated for full production release in 2009. Under SSDS, Microsoft hosts a subset of SQL Server’s relational database management system (RDBMS) functionality in support of analytics as well as transactional applications. Though it has not yet specifically optimized SSDS for analytics, Microsoft has stated that it plans to evolve the service in that direction.
As it becomes available from many service providers, DW 2.0 will offer an ever-expanding supply of cheap, plentiful analytic horsepower. Over the coming decade, software-as-a-service (SaaS) providers will begin to offer feature-complete, subscription-based BI/DW services for high-performance, high-volume, complex analytics. These clouds will leverage the full virtualized, distributed, scalable, grid-computing fabric that Microsoft, Google, and other SaaS behemoths can bring to bear on data mining, performance optimization, and other compute- and data-intensive tasks.
Over time, we’ll come to take DW 2.0 for granted. We’ll call it up on demand, a utility for processing any and all decision-support tasks, large or small, throughout the business world or in our daily lives.
Wednesday, April 30, 2008
SOA’s strength is in its inner abstraction, its paradigmatic focus on the goal of maximizing sharing, reuse, and interoperability of key corporate resources over networks, thanks to open standards.
SOA’s goals are laudable, but now we have the presentation, access, delivery, and socialization layers to consider. Socialization layer? You mean social networking? You mean wikis, collaborative bookmarking, and all of that Web 2.0 stuff that keeps on innovating so fast that no clear design patterns, hence no stable interoperability specs, can emerge?
How can we define standards to support sharing, reuse, and interoperability in an emerging network computing fabric that steadfastly refuses to settle down and decide what it wants to do when it grows up?
In which friends organize architectures, and failures only accelerate the push toward some simpler, less abstract, more practical architecture that totally works.
Whether or not we analysts have conceptualized it all in every fine detail in advance.
SOA’s failure isn’t so much a fault of the vision, as it is a reluctance to recognize that any particular middleware implementation will soon be obsoleted by something much grander, and fuzzier.
Besides, WOA is primarily a phenomenon of the presentation, access, delivery, and socialization layer, a domain that SOA never seriously attempted to penetrate. WOA is exposing the inherent limitations of the SOA vision, which have been there from the start.
If you consistently acknowledge the limits of your vision, feasibility and flexibility considerations will inevitably open your architecture.
SOA’s f*cked, some say, but I don’t hold with the naysayers.
Anne Thomas Manes of Burton Group has spread this notion that SOA’s a failure because few enterprises have come anywhere close to the nirvana of 100 percent service reuse. But my feeling is that that’s like saying democracy’s a failure because we’ve never come close to 100 percent voter turnout in any specific election.
If you like, you can posit a utopian ideal and then declare the world a failure because it hasn’t signed up to your vision. Or you can declare your vision a vision and then commit yourself to pushing the world in that direction little by little for the rest of your life. The open issue is: Do you see yourself living this vision till the end of your days? Can you live with the possibility that your vision, however much it gives your life purpose, is regarded by others as an impractical pet cause that sets you apart from the pack, and not in a good way?
If you’re prepared to push that boulder up a hill, if you’re so bold that you can’t rest till your vision is in everybody’s sights, then you’re a true visionary. And you might stand a chance of succeeding against all odds. Then you’re the Al Gore of SOA, weathering disdain with full knowledge that the facts bear you out and history will judge you kindly.
If a bare majority consistently sides with your vision, fate will ordain your architecture.