SOA: The ABCs
A practical primer to a complex development methodology
--James Kobielus
Service-oriented architecture (SOA) is an inescapable IT-industry buzzphrase these days. It’s also a sprawling, eye-glazing abstraction that means many things to many people.
SOA concept, benefits, and business drivers.
However, there is considerable agreement on SOA’s core meaning. At heart, SOA is a development methodology that encourages greater sharing of remotely invocable application functions throughout networks. SOA best practices help enterprises create more flexible, adaptable IT organizations. Other SOA benefits include accelerated application development, reduced development costs, reduced duplication of work across development teams, consolidated deployment and management of application code, and increased business-process consistency across applications that share common services.
SOA development practices have taken hold throughout the corporate IT world over the past several years. One of the major factors driving this trend has been the growth of Web services, which provide a ubiquitous middleware fabric that makes SOA feasible.
SOA, though often linked in industry discussions with Web services, isn’t necessarily tied to that environment. SOA is a set of service design principles that may be implemented within and across diverse development, application, middleware, and operating environments. Many enterprises, including the ones we discuss in this article, are rigorously applying SOA service design principles to environments that haven’t yet implemented all of the core Web services standards, such as XML, Web Services Description Language (WSDL), Simple Object Access Protocol (SOAP), and Universal Description, Discovery, and Integration (UDDI).
Some would argue that a true SOA implementation requires the full Web services stack, including UDDI, WSDL, SOAP, and the growing range of specifications beginning with “WS-.” Others, including this author, try not to be Web services bigots. SOA is a set of objectives and practices that may be realized to greater or lesser degrees under various platform and middleware environments, using various programming languages and development tools. In this article, we discuss several examples of successful SOA implementations that aren’t predicated on widespread Web services deployment.
“For the purposes of SOA, Web services aren’t fundamentally different from CORBA, Java Remote Method Invocation (RMI), and other application frameworks,” says Christopher Crowhurst, vice president and principal architect at Thomson Learning, a Stamford, Connecticut-based business unit of Thomson Corporation that provides technology and assessment services worldwide. “These are all middleware environments within which you can build loosely coupled distributed services.”
Another factor behind SOA’s popularity is the never-ending business imperative to accelerate application development and cut costs. Under SOA methodology, developers write new applications by looking up the requisite functionality in online service registries, plugging those services’ APIs into their code, and writing some fresh orchestration logic to tie it all together. Taken to its logical extreme, SOA can make new development a “connect the dots” exercise that greatly shortens the time-to-market cycle for software deliverables.
SOA service-development practices.
Before we delve into SOA case studies from today’s corporate world, let’s define SOA as an overall approach. SOA refers to any distributed application environment that emphasizes the following service-development practices:
• Service virtualization: An SOA-based environment virtualizes or abstracts external service-invocation interfaces from those services’ internal platform implementations. Under pure SOA, the external application interface—or API—should be agnostic to the underlying platforms. Under SOA and other virtualization paradigms, applications may run on diverse operating and application platforms; be deployed on nodes in diverse locations; be aggregated across diverse hosting platforms; and be provisioned dynamically in response to client requests. Some refer to service virtualization or abstraction as “loose coupling.” Within Web services environments, WSDL “service contracts” provide the principal platform-agnostic APIs for service virtualization.
• Service reuse: An SOA-based environment maximizes standards-based reuse, interoperability, orchestration, management, and control of distributed, virtualized services. In a pure SOA-based nirvana, developers would never consider rewriting an application function from scratch if an equivalent service has already been developed and is being hosted elsewhere in their enterprise (or in a partner organization). They would simply invoke that service from within their applications, using standard messaging-based protocols. By the same token, they would make all of their application functions available to other programmers as remotely invocable services. Within Web services environments, SOAP is the principal remote-invocation messaging protocol enabling cross-platform service reuse.
• Service brokering: An SOA-based environment facilitates reuse through infrastructure that allows service interfaces to be registered by their providers and to be discovered by other services that wish to integrate with them. A service broker is any entity that facilitates registration, location, retrieval, synchronization, replication, and/or delivery of services. Within Web services environments, UDDI registries are the principal service-brokering nodes for platform-agnostic SOA.
We now discuss each of these SOA practices in greater detail.
Service virtualization.
Services are the core concept of SOA. A service is a reusable unit of functionality that can be invoked through a published metadata interface, which is often known as a “service contract.” Under pure SOA, a service contract—such as an API expressed in WSDL--exposes only as much application logic, behavior, and context as that service is willing to grant to other services, leaving opaque all other aspects of a service’s internal implementation.
Typically, virtualized services present their abstract interfaces via standard service description languages. The services expose their functionality through standard message/document-oriented middleware approaches, such as the SOAP Document/Literal interface. Services implement end-to-end semantic interoperability through standard, agreed-upon schemas—such as XML Schema--for inputting and outputting message/document-oriented data structures. Interactions among consumer and provider services often involve exchange of self-describing messages/documents—such as those in XML markup--that convey session state and application context information. Figure 1 illustrates these essential concepts of service virtualization.
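The message-oriented interaction described above can be illustrated with a minimal Python sketch. This is a toy, not an implementation from any of the case studies: the service name `providePensionValuation` echoes the business services discussed in this article, and the element and session names are hypothetical.

```python
import xml.etree.ElementTree as ET

def build_message(session_id, operation, payload):
    # The header conveys session state and application context; the body
    # carries the business payload per the agreed-upon schema.
    msg = ET.Element("message")
    header = ET.SubElement(msg, "header")
    ET.SubElement(header, "sessionId").text = session_id
    ET.SubElement(header, "operation").text = operation
    body = ET.SubElement(msg, "body")
    for name, value in payload.items():
        ET.SubElement(body, name).text = str(value)
    return ET.tostring(msg, encoding="unicode")

def parse_message(xml_text):
    # A consumer needs only the shared schema, never the provider's
    # platform internals, to interpret the self-describing message.
    root = ET.fromstring(xml_text)
    return {
        "sessionId": root.findtext("header/sessionId"),
        "operation": root.findtext("header/operation"),
        "body": {child.tag: child.text for child in root.find("body")},
    }

wire = build_message("sess-42", "providePensionValuation", {"policyId": "P-1001"})
decoded = parse_message(wire)
```

Because both sides depend only on the message schema, either side can be reimplemented on a different platform without breaking the exchange.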
Effectively, a service’s contract, such as a WSDL definition, constitutes its SOA virtualization layer. The contract abstracts the service’s external invocation interface from the platform that hosts the service’s underlying software components. A service may be abstracted from its enabling platform in various ways, depending on how closely its published interface corresponds to the platform’s native APIs, class libraries, object model, and other programming artifacts. Developers shouldn’t simply echo their platforms’ native object models in their service contracts. As external interfaces, abstract service contracts should remain stable through platform changes, presenting a consistent invocation interface for consuming applications. By the same token, developers should enable interoperability among services using platform-agnostic, self-describing XML messages/documents that convey context information pertaining to ongoing interactions among services.
On any given application platform, service contracts may be decoupled from each other in various ways, depending on business requirements. SOA best practice calls for decomposition of applications into service primitives that correspond to basic business functions or processes.
“SOA requires that you focus on your core business capabilities,” says Crowhurst of Thomson Learning. “These are what you should be exposing as Web services within your SOA environment. SOA requires that you stay at a high level in defining these reusable business services, rather than drill down too quickly into application code.”
Under SOA, each service should have a recognizable business function that plays a clear role in many applications, says Derek Ireland, group tech solutions manager at Standard Life Group, an Edinburgh, Scotland-based insurance company. “Examples of these reusable business services in our SOA include ‘provide pension valuation,’ ‘verify identity,’ ‘provide bank details,’ ‘maintain address,’ and ‘produce statement,’” says Ireland.
“Currently, we have around 300 business services in our SOA service catalog,” says Ireland. “These services are high-level business functions that abstract away from the underlying complexities of our principal platforms: WebSphere Application Server, WebSphere Business Integrator, WebSphere MQ, IMS, and .NET. All of these business services are accessible from the languages—Java, COBOL, and C#—that are implemented in the development tools that provide our SOA software framework.”
“Our SOA software framework allows the development teams to concentrate on the business aspects of the application under development,” he says. In this way, Standard Life’s SOA software framework allows developers to avoid getting distracted by platform-specific implementation details.
Another core SOA tenet is that services should be factored into stable, well-bounded sets of functionality. The greater the stability of a service’s external interface, the less likely it is that a change in the service’s underlying implementation will disrupt interoperability with existing consumers of that service. “SOA is an approach for defining clear boundaries between business and technical services that need to be decoupled,” says Jayson Minard, chief information officer at Abebooks (www.abebooks.com), an online book marketplace based in Victoria, British Columbia.
Typically, stable service contracts are those that have been defined to be coarsely granular. A coarse-grained service contract describes the interface to an entire business process or a substantial subprocess, rather than to the fine-grained details of a particular platform’s object model, classes, and APIs. Coarse-grained service design allows developers to migrate to new platforms while reducing the need to update the service contracts through which platforms expose application functionality.
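The contrast between a fine-grained object model and a coarse-grained service contract can be sketched in a few lines of Python. All names here (`CustomerRecord`, `OrderProcessingService`, `place_order`) are hypothetical illustrations, not drawn from any of the case studies.

```python
class CustomerRecord:
    # Fine-grained, platform-level object: exposing this class directly in
    # a service contract would couple consumers to this object model.
    def __init__(self, customer_id, name, credit_limit):
        self.customer_id = customer_id
        self.name = name
        self.credit_limit = credit_limit

class OrderProcessingService:
    # Coarse-grained contract: one operation covers an entire business
    # subprocess, hiding the platform's classes and call sequences.
    def __init__(self, customers):
        self._customers = customers  # internal detail, not in the contract

    def place_order(self, customer_id, amount):
        # Internally this may touch many fine-grained objects; consumers
        # see only the stable business-level request and response.
        customer = self._customers[customer_id]
        if amount > customer.credit_limit:
            return {"status": "rejected", "reason": "credit limit exceeded"}
        return {"status": "accepted", "customer": customer.name}

svc = OrderProcessingService({"C1": CustomerRecord("C1", "Acme", 500)})
result = svc.place_order("C1", 200)
```

If the internal object model changes, only `OrderProcessingService` must be updated; the coarse-grained `place_order` contract, and every consumer bound to it, stays stable.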
Though some might argue otherwise, coarse granularity is not a hard and fast rule of SOA. The appropriate granularity depends on the nature of the services being defined and integrated.
Abebooks’ SOA is based on the need to define fine-grained business services that correspond to particular B2B technical-integration connections. Abebooks uses SOA practices to decouple the integration logic through which its websites connect with each of its myriad business partners. More than 13,000 booksellers from 48 countries list their books on several Abebooks sites, and major online booksellers such as Amazon and Barnes and Noble have outsourced their used-book operations to the firm. “We have a lot of legacy Java code, which we need to continually tweak to address the data-translation and other functions specific to various partner integrations,” says Minard. “Without clear boundaries among various code segments associated with partner integration, we would risk disrupting global interoperability with all of our partners every time we changed, say, a data-translation routine for one partner.”
Under SOA, a business service should be factored as broadly or narrowly as appropriate to the business process being automated, according to Maja Tibbling, application architect with Con-Way Transportation Services in Portland, OR. “Services can be fine-grained, such as a logging service, or coarse-grained, such as those services that contain an entire business process. Services should be defined at whatever granularity best promotes their reuse.”
Con-Way’s ongoing SOA initiative began in the late 90s as an initiative to implement component-based development (CBD) of mainframe applications. “Most of our back-end business functionality has been partitioned into components and our business functionality exposed as component operations,” says Tibbling. “These meet the functional definition of a service as they are consumed by other applications through interfaces. Initial implementations were on the mainframe. Since then, we have successfully ventured into the J2EE world and have deployed many business applications in the middle tier with Web-based front ends. So, the next step [for SOA] was consuming the same back-end shared services through vendor-provided Java proxies to the mainframe.”
Con-Way has also implemented SOA-based component partitioning on many mid-tier J2EE-based and Web-accessible applications. “It was anticipated that many J2EE-based composite applications may need services from some other business area,” says Tibbling. “Collections of related services are defined in business interfaces that are implemented in J2EE Session Beans. To invoke these services, applications use a couple of useful J2EE patterns, BusinessDelegate and ServiceLocator, which abstract the service invocation, so that the underlying technology could be changed at any time. These same services are also available to external customers directly on our company website for things such as rate quote and tracking.”
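The ServiceLocator indirection Tibbling describes is not specific to J2EE. A minimal Python analogue (class and service names hypothetical) shows how lookup-by-logical-name lets the underlying technology be swapped without touching consuming code.

```python
class ServiceLocator:
    # Maps a logical service name to whatever currently implements it,
    # so callers never hard-code the underlying technology.
    _registry = {}

    @classmethod
    def register(cls, name, factory):
        cls._registry[name] = factory

    @classmethod
    def lookup(cls, name):
        return cls._registry[name]()

class MainframeShipmentService:
    def track(self, shipment_id):
        return f"mainframe:{shipment_id}"

class J2eeShipmentService:
    def track(self, shipment_id):
        return f"j2ee:{shipment_id}"

# Swap implementations behind the same logical name; consumers are unchanged.
ServiceLocator.register("ShipmentService", MainframeShipmentService)
before = ServiceLocator.lookup("ShipmentService").track("S-77")
ServiceLocator.register("ShipmentService", J2eeShipmentService)
after = ServiceLocator.lookup("ShipmentService").track("S-77")
```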
Some enterprise IT groups take coarse-grained SOA to (what some might regard as) its logical extreme. “SOA for us is mostly providing a rational framework for distributed server connectivity between distinct application servers,” says Adam Blum, director of server engineering at Good Technology, Inc., a Santa Clara, California-based provider of wireless products. “For example, our e-mail servers behind the customer company firewall which connect to Microsoft Exchange and forward e-mail need to connect to our hosted router for forwarding traffic and to our Web store to validate purchased licenses. We do this through a set of XML messages. An important aspect of SOA for us is expressing common abstractions (users, customer companies, messages, licenses, server information) in common ways.”
Service reuse.
Service reuse is where SOA pays off. Companies that make the most of SOA are those that train, encourage, and reward programmers to reuse existing services—no matter who developed them--to the maximum extent possible. Developers should also be encouraged to publish and share their own services widely. In an SOA nirvana, programmers would write as little new code as possible when constructing new applications, and the only new code would simply orchestrate new interaction patterns among existing services.
Maximizing service reuse demands an SOA-focused culture that spans all IT disciplines. “We have a virtual team representing all development areas that identifies, exhibits, and promotes best practice for delivering reusable components,” says Ireland of Standard Life. “The team manages the catalogue of reusable business services and the development process required to ensure consistency of XML interfaces and interoperability of services.”
Greater service reuse translates into lower costs and accelerated development cycles. “In 2004 we were able to implement several major business initiatives within extremely short timelines,” says Tibbling of Con-Way Transportation Services, “because of our ability to reuse existing functionality and to protect existing consumers from the impact of changes to existing functionality. Our core business components such as Customer and Shipment offer services that every single application ends up using. This has allowed greatly enhanced time to market for subsequent projects.”
In practice, service reuse depends on middleware that allows any service to interoperate with any other service over networks. Increasingly, Web services environments are the middleware fabrics of choice for SOA, leveraging WSDL, SOAP, and other WS-* standards. But, in many real-world SOA implementations, services are being shared, reused, orchestrated, and invoked over a broad range of legacy middleware environments that interoperate in various ways. A Web services environment is just one type of integration environment—or “enterprise service bus” (ESB)--in real-world SOAs. Figure 2 illustrates service reuse over an ESB. It’s surprising to find out how many actual SOA case studies involve only a subset of the Web services standards—sometimes, just the core specification, XML, and that simply for structured data interchange.
“We chose XML, not necessarily Web services [for exposing application interfaces across our SOA environment],” says Crowhurst of Thomson Learning. “We transfer XML over HTTP, and XML over file transfer. We also do XML over message queueing transports for abstraction of application interfaces.”
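Crowhurst’s point—that the same XML payload can travel over HTTP, file transfer, or message queueing—amounts to hiding the wire behind a uniform send operation. A minimal Python sketch, with the transports stubbed in memory and all names hypothetical:

```python
from queue import Queue

PAYLOAD = "<order><id>123</id></order>"

mq = Queue()   # stands in for a message-queueing transport (JMS/MQ put)
http_log = []  # stands in for an HTTP POST endpoint

def mq_transport(xml_text):
    mq.put(xml_text)

def http_transport(xml_text):
    http_log.append(("POST", xml_text))

def send(xml_text, transport):
    # Consumers call send(); which wire the XML travels over is an
    # implementation detail behind the transport abstraction.
    transport(xml_text)

send(PAYLOAD, mq_transport)
send(PAYLOAD, http_transport)
```

Because the payload is the same self-describing XML in every case, a transport can be replaced (say, file transfer swapped for queueing) without rewriting the applications that exchange it.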
Con-Way Transportation is running SOA over a very heterogeneous, evolving middleware environment in which Web services are still just a bit player. “We are using TIBCO BusinessWorks to reliably orchestrate asynchronous business processes in near real-time using [J2EE] ServiceLocator [patterns], but invoked from the integration infrastructure,” says Con-Way’s Tibbling. “[W]e are defining an XML canonical model in the integration layer in XML Schema (XSD). TIBCO’s transformations are based on XML, Xpath, and Extensible Stylesheet Language Transformations (XSLT). So, we can take a transaction from any source in any format within our very heterogeneous environment, transform the data into a canonical XSD, and then interact with the shared services.”
Con-Way’s XML Web services implementation is growing. Nevertheless, they’ve also given J2EE a growing role in their SOA. “Java Message Service (JMS) is being used more and more,” says Tibbling. “As we upgrade our J2EE environment, we anticipate using MessageBeans as process triggers into the integration layer.”
The company has designed its SOA to facilitate graceful migration from older to newer middleware environments as necessary. “The architecture is such that service invocations using one technology can easily be replaced with another,” says Tibbling. “We also have some Web services implemented to facilitate access to our functionality from other websites and also for use by external customers.”
Con-Way isn’t the only SOA shop that has limited its use of Web services primarily to support external interoperability. “We’ve selected IBM WebSphere as our primary application server,” says Ireland of Standard Life. “We also use Java as our principal development language, IBM WebSphere MQ Integrator as our [message-oriented middleware] integration platform, and XML as the markup syntax for all interoperability interfaces….We don’t see the advantage of using Web services and SOAP [in place of WebSphere MQ] for integration. But we will use SOAP when it’s appropriate, such as for B2B integration.”
Standard Life has standardized internally on SOA development tools that will facilitate service reuse through Web services interfaces, when appropriate. “It’s easy for us to implement Web services under our SOA framework [which includes design patterns and development tools, such as IBM WebSphere Studio Application Developer],” says Ireland, “and we’re showing we can do that.”
Service brokering.
Service brokering infrastructures encourage reuse by allowing developers to “advertise” their programs’ service contracts and other descriptive metadata in a shared online registry, repository, or catalog.
Service brokering infrastructures take many forms, and are often specific to particular middleware or platform environments. Any entity that facilitates registration, discovery, and retrieval of service contracts may be regarded as a broker. The UDDI standard defines a service-brokering environment for Web services.
Service brokering infrastructures support the following core functions:
• Service registration: Application developers publish their functionality to service brokering nodes, such as UDDI registries. Developers, also known as “service providers,” publish their services’ contracts, which include such descriptive attributes as service identities, locations, methods, bindings, configurations, schemas, and policies.
• Service location: Service consumers—in other words, application developers who wish to invoke registered services--query the broker to find services that match their functional requirements. The broker allows the service consumer to retrieve service contracts.
• Service binding: Service consumers use the retrieved service contracts to develop code that will bind, invoke, and interact with the services. Ideally, the service consumer’s development tool should automatically generate client code during this process. Developers should use integrated development environments to compile service contracts from visual, flowcharted process maps and then to register these contracts to the appropriate service brokers.
Figure 3 illustrates the essential functional elements and interactions in a service brokering environment.
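The three brokering functions can be walked through end to end in a few lines of Python. The broker below is an in-memory toy standing in for a registry such as UDDI; the service names echo the business services cited earlier in this article, and the endpoints are hypothetical.

```python
class ServiceBroker:
    # Minimal in-memory stand-in for a service registry such as UDDI.
    def __init__(self):
        self._contracts = {}

    def register(self, name, contract):
        # Service registration: a provider publishes its contract.
        self._contracts[name] = contract

    def locate(self, keyword):
        # Service location: a consumer queries for matching services.
        return {n: c for n, c in self._contracts.items() if keyword in n}

broker = ServiceBroker()
broker.register("verifyIdentity",
                {"endpoint": "https://example.com/identity", "binding": "SOAP"})
broker.register("provideBankDetails",
                {"endpoint": "https://example.com/bank", "binding": "SOAP"})

# Service binding: the consumer retrieves the contract and builds a stub
# that will marshal requests to the contract's endpoint.
contract = broker.locate("verifyIdentity")["verifyIdentity"]

def make_stub(contract):
    def invoke(payload):
        return {"sentTo": contract["endpoint"], "payload": payload}
    return invoke

stub = make_stub(contract)
reply = stub({"name": "A. Customer"})
```

In a real deployment the stub-generation step is typically automated by the development tool, which reads the retrieved WSDL contract and emits client binding code.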
As noted above, the UDDI standard defines a brokering environment geared to the emerging world of Web services (though many Web services implementations lack UDDI registries). WSDL service contracts and UDDI “tModels” are the principal Web service interfaces published to UDDI registries. Other common service-brokering environments include Common Object Request Broker Architecture (CORBA) Naming Service, Distributed Computing Environment (DCE) Cell Directory Service, Windows NT Registry, Java Remote Method Invocation (RMI) Registry, and Electronic Business XML (ebXML) repositories.
If they wish, enterprises can deploy a DBMS as a service-brokering node for their SOA environment. “Our SOA environment includes a runtime business service directory that runs on IBM Universal Database/DB2 on a mainframe,” says Standard Life’s Ireland. “We now have around 300 services published in the business service directory, which is accessible from our SOA framework. We can enforce version controls on services that are published to the directory.”
Abebooks has implemented a hybrid UDDI/LDAP registry architecture, according to company CIO Minard. “We’ve implemented an open-source UDDI registry outside our firewall to publish external service interfaces that our partners can use to connect to our services. The external UDDI registry uses LDAP to do lookups of the master service directory that sits behind our firewall.”
Not all SOA implementations rely on service-brokering infrastructures, but the broker-less SOA is usually a transitory phenomenon. “Early on, a major obstacle to achieving reuse was convincing the development community to first look for services that do what you need or almost do what you need,” says Con-Way Transportation’s Tibbling. “[One of our] main challenges now [is] creating a meaningful catalog or repository of existing services…[We plan to] create a comprehensive repository of business component services for all types of services (both Web Services and other technology implementations), perhaps using WSDL and UDDI.”
As this discussion indicates, UDDI is becoming the service-brokering infrastructure of choice for many SOA implementations, including those that are only partially down the road to Web services. UDDI-based service registries provide a basis for ongoing governance of SOA-based distributed application environments, many of which are migrating toward more thorough implementation of Web services. Enterprise SOA implementations should include policy-based tools that leverage their UDDI registries and support governance of the entire service lifecycle, from design and development through deployment and ongoing administration. Web services management tools are a critical component of the SOA governance equation.
Clearly, SOA is a much broader trend than Web services. SOA shifts developer focus away from individual applications and toward reuse of modular service components hosted throughout distributed environments. As this article has shown, many enterprises will maintain mixed SOA environments for the foreseeable future, combining a bit of Web services with a lot of legacy middleware, and implementing UDDI—in fits and starts--alongside other registries.
This platform-agnosticism is the essence of true SOA. When implementing SOA, enterprises should take care not to introduce any specific dependencies on WSDL, SOAP, UDDI, or any other Web services standards or specifications. SOA implementers should treat Web services—and any other development, interoperability, and operating environment—merely as implementation details.
SOA should be equally applicable to any environment—legacy, current, and future—and should serve as a virtualization blueprint for coexistence and interoperability among many generations of application infrastructure.
Friday, September 23, 2005
SOA: The 1-2-3s
Perspectives on measuring the ROI and implementation progress of your enterprise SOA push
--James Kobielus
SOA’s return-on-investment (ROI) is both hard and soft, quantitative and qualitative, near-term and deferred—depending on who you ask, how you ask, and how both you and they define SOA.
SOA’s ramp-up costs
SOA’s ROI can be calculated in monetary terms if you have a crisp enough definition of the approach. Fundamentally, SOA is a development methodology that encourages sharing of remotely invocable application functions throughout networks. Another way of characterizing SOA is as the “provide once, consume everywhere” paradigm. It’s a way of doing more with less, where applications can be built more quickly and incrementally, through recomposition and orchestration of pre-existing services, and with fewer and fewer lines of original code. In other words, SOA promises that the marginal cost of building new applications will continue to drop closer to zero as the service-reuse rate climbs closer to 100 percent: new applications simply plug into existing applications rather than re-invent their functionality. This SOA nirvana of 100 percent service reuse is illustrated in Figure 4.
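That cost curve can be stated as a toy model: if a fraction r of a new application’s functionality is reused from existing services, only the remaining (1 - r) must be built as new code. A minimal Python sketch, with a hypothetical base cost:

```python
def marginal_cost(app_cost, reuse_rate):
    # Toy model: only the non-reused fraction of the application must be
    # built as new code; at 100% reuse the marginal cost reaches zero.
    return app_cost * (1.0 - reuse_rate)

base = 100_000  # hypothetical cost of building the app entirely from scratch
costs = [marginal_cost(base, r) for r in (0.0, 0.5, 1.0)]
```

The model ignores integration and orchestration effort, which real projects must still pay for, so actual marginal cost approaches but never quite reaches zero.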
SOA can have a substantial payback for implementers. First, though, your organization may have to surmount a significant ramp-up curve that doesn’t come cheap. As the word “architecture” in SOA indicates, you’re going to need to rethink many of your traditional approaches to application modeling, development, integration, deployment, and management if you truly commit your careers to this new paradigm.
“No business wants to spend on architecture—they want you to produce products,” says Christopher Crowhurst, vice president and principal architect at Thomson Learning, a Stamford, Connecticut-based business unit of Thomson Corporation. “I can guarantee there’s a cheaper way to build your next product [than by taking an architectural approach], but there’s no cheaper way to build your next 20 products. That’s why IT professionals are adopting SOA [for application development].”
“The core tenets of SOA are coarse-grained service abstraction, orchestration, and interoperability through proxies,” says Crowhurst. “I only know four or five companies that are purely implementing all of these SOA approaches, because it’s a hard job. It requires a considerable amount of business process re-engineering.”
These observations are borne out by leading analyst firms. Forrester Research analysts Ken Vollmer and Mike Gilpin report that SOA-based development often costs more than—sometimes twice as much as—traditional approaches when viewed solely with respect to building a particular application component. But when that application component is reused over and over, for integration with other applications—both internal and external to the enterprise—SOA may become more than 30 percent more cost effective than traditional development approaches. These savings derive from the lower lifetime development, integration, and maintenance costs associated with SOA-based distributed applications.
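The Forrester figures imply a simple break-even calculation. The numbers below are hypothetical cost units chosen to match the cited ratios (roughly twice the up-front cost, with cheap subsequent reuse); they are not Forrester’s own data.

```python
TRADITIONAL = 100  # hypothetical cost to build one component the traditional way
SOA_FIRST   = 200  # roughly twice as much up front, per the cited figures
SOA_REUSE   = 20   # hypothetical cost of wiring the existing service into a new app

def total(uses, soa):
    # Traditional development rebuilds the component for every consuming
    # application; SOA pays the build cost once, then only reuse costs.
    if soa:
        return SOA_FIRST + SOA_REUSE * (uses - 1)
    return TRADITIONAL * uses

# The SOA approach overtakes traditional development after a few reuses.
breakeven = next(n for n in range(1, 20) if total(n, True) < total(n, False))
```

Under these assumed numbers the SOA approach is more expensive for a component used once or twice but pulls ahead from the third consuming application onward, which matches the qualitative claim that the savings accrue over a component’s reuse lifetime.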
SOA’s downstream cost savings from application reuse and consolidation
Many of the savings from SOA stem from its ability to consolidate silos of redundant application functionality and data throughout organizations. Fewer software licenses and servers translate into clear cost savings in enterprise capital and operating budgets. Fewer redundant software components throughout the organization translate into less need for redundant programming groups. Application consolidation onto fewer platforms reduces deployment, integration, and maintenance costs over the software lifecycle. A recent Gartner Group study found that the costs of installing, integrating, and maintaining software over its useful life can be as much as six times greater than its license cost.
You need to make an investment in SOA in order to reap the cost-reduction benefits of application consolidation, sharing, and reuse. Standard Life Group of Edinburgh, Scotland has committed considerable resources to SOA as the basis for all current and future business applications.
“We have invested in our people and processes in order to fully implement and realize the value of SOA,” says Derek Ireland, group tech solutions manager at the firm. “We have developed architecture, design and implementation patterns for delivering service-oriented applications. We have over 30 person-years of effort invested in a run-time framework that underpins the whole approach, abstracting applications from underlying transports and protocols and providing standard APIs for common application behaviors such as logging. We have a virtual team representing all development areas that identifies, exhibits, and promotes best practice for delivering reusable business services. This team manages the catalog of reusable business services and the development process required to ensure consistency of XML interfaces, ensuring interoperability of services.”
Standard Life has kept close tabs on the return from its SOA investment. “We’ve saved over 2.8 million pounds in development costs over the past three years,” says Ireland, “based on reuse of existing functionality within the service catalog.” The company currently has around 300 reusable services in its catalog. Over 50 percent of those services are being reused; they are consumed by more than 70 other services across all environments at any time. All in all, the company counts 361 instances of service reuse. Over 40 percent of its back-end transactions are initiated through its SOA-based environment. The volume of transactions that traverse Standard Life’s SOA environment has increased 900 percent since the company began its SOA implementation in 2001. The company maintains three SOA-implementing development groups with around 500 personnel among them, of whom about half are delivering SOA services and applications. Their SOA-enabling distributed-application infrastructure is managed by a staff of seven.
Other enterprises report both hard and soft monetary paybacks from SOA. “We’ve seen huge savings in Oracle [database] licenses,” says Jayson Minard, CIO of Abebooks in Victoria, British Columbia, due to the server consolidation that has accompanied their SOA push. At the same time, says Minard, “we’ve also seen savings on the team side, due to having more development staff bandwidth for new projects. Development group efficiency is going up.”
Another cost advantage of SOA is that—by stressing platform-agnostic service virtualization—it allows enterprises to choose the most cost-effective best-of-breed application components for particular functions. “With SOA, you can go the vendor’s SOA route,” says Minard. “Or you can take the approach that we did. First, we identified the features and benefits we were looking for, such as flexibility, decoupling, asynchronous communications, and service boundary partitioning. Then we identified best-of-breed products that addressed those requirements. We evaluated 14 products, including commercial offerings and open-source packages. We tested interoperability among those products, and eliminated the ones where the vendors differed from us in their philosophical approach to SOA. In that way, we’ve steered clear of vendor lock-in.”
SOA-based acceleration of application development
Speedier SOA-based application development can also contribute to a company’s financial well-being and competitive agility. “Applications that used to take two weeks to develop now take two days,” says Minard.
“We’re a fast-paced multi-channel transportation company that requires ever faster time to market for IT solutions,” says Maja Tibbling, application architect with Con-Way Transportation Services in Portland, OR. “To take advantage of new business opportunities, SOA allows for speedy delivery of new functionality as well as providing adaptability in existing functionality. This flexibility also permits us to meet the evolving demands of our customers.”
Faster development allows companies to respond more flexibly to new competitive challenges, stimulating top-line revenues. TSYS Prepaid Inc. is a hosted service provider that processes prepaid debit cards on behalf of financial institutions. Its SOA-based applications are the basis for revenue-producing services. “Accelerated development allows us to recognize revenue more quickly,” says Carl Ansley, the company’s CTO.
Difficulty of quantifying other SOA ROI metrics
SOA’s other benefits may be quantified to greater or lesser degrees. How can you put a valid number on improved business adaptability and agility? Or on improvements in application consistency due to reliance on a common set of shared services? Or on reductions in the risk to interoperability when you make changes to application code that’s been virtualized, abstracted, and loosely coupled in keeping with SOA principles? Or enhancements to application usability, scalability, and performance that come from reliance on shared services that have been built and optimized by development centers of competence within your organization?
One of the principal difficulties with quantifying SOA’s ROI in the real world is that the paradigm overlaps with so many other development, integration, and management approaches. SOA, to some degree, carries on many of the objectives and methods of component-based development, as attested by several of the practitioners interviewed for these articles.
But SOA has, in industry discussions, been elevated from a mere approach to something resembling a religion. It is often portrayed as an all-pervading panacea, a golden force field that encompasses all good programming practices from Babbage to the present. If you look at almost any popular discussion of SOA, you’ll see it mentioned in the same breath as Web services, enterprise service bus (ESB), business process management (BPM), model-driven development (MDD), governance, and other hot topics. Indeed, every IT vendor with a savvy marketing group these days has hitched its entire product portfolio and roadmap to SOA. So it’s hard to know what ROI to credit to SOA versus these other development and integration approaches.
SOA is still a fuzzy concept to many enterprise IT professionals, and it sprawls, in many people’s minds, across a wide range of loosely related technologies and approaches. According to a recent IDG Research Services Group survey, IT professionals are almost evenly split between people who claim some familiarity with SOA (52 percent) and those who admit they haven’t a clue (48 percent). Likewise, the split was almost even between those respondents who reported strong confidence in SOA’s long-term potential (55 percent) and those whose confidence in the paradigm was lacking or lackluster (45 percent).
IDG asked that same population of respondents to associate various phrases with SOA. The respondents ranked the most valid SOA descriptor—“reusable applications”—fifth among the options given. Respondents ranked such descriptors as “software as a service,” “enterprise application integration,” “Web services registries/repositories,” and “frequent use of Web services” higher, even though those phrases don’t directly articulate SOA’s core meaning.
As we can see, popular understanding of SOA concepts is fuzzy, to say the least. What’s even fuzzier is any reliable data on who exactly is implementing SOA, with what degree of commitment, and at what level in the organization. In that same IDG study, 28 percent of respondents stated that their companies are already implementing SOA, with slightly less than half of those SOA implementers merely conducting pilot projects. Of those respondents who are considering SOA but don’t currently have pilot projects, IDG found that only 22 percent are actively investigating SOA-based solutions over the coming year.
However, a recent Forrester Research survey of large North American companies reported that more than 70 percent of respondents have already implemented SOA. Forrester analysts say that by the end of 2005, 51 percent of medium-sized enterprises and 46 percent of small businesses will have adopted SOA as well.
The leading IT analyst firms present widely divergent pictures of SOA adoption rates in 2005. So, really, it’s anybody’s guess exactly how many organizations of various sizes have adopted SOA, or plan to do so over the coming several years. And there’s no telling how many of the people responding to these surveys are clear on precisely what they mean by SOA, or are using the term in a way that’s consistent with how others construe it. Or whether any of them is using the term correctly at all.
Enumerating the levels of SOA implementation
Another problem with such surveys is that they tend to use a binary definition of SOA adoption: either you’re implementing SOA or you’re not, with no gray territory between these two poles. This simplistic approach to gauging SOA adoption is at odds with the extremely complex, multifaceted nature of this methodology.
Anybody who’s spent any time studying the topic can quickly identify various steps, phases, layers, degrees, and approaches to implementing SOA. As the case studies in this article and the companion article, “SOA: The ABCs,” illustrate, there’s no single way to do SOA. By the same token, your ROI will depend considerably on the approach your organization has taken, and on how far you are along your SOA implementation roadmap.
One of the most critical elements of any enterprise SOA roadmap is the need to gain a solid commitment from senior IT and business managers, based on such business benefits as accelerated development, reduced cost, and greater business agility. Also, you should provide developers with the training, tools, guidelines, and incentives necessary to get them thinking in SOA terms, and to discourage development of one-off and non-modular applications. Your SOA roadmap would be incomplete without a thorough reorganization of IT governance processes aimed at enforcing organization-wide adherence to SOA best practices.
An enterprise SOA roadmap should incorporate the requisite philosophy, culture, practices, tools, and infrastructure. The more of these roadmap components you’ve established in your organization, the closer you are to realizing the full ROI on your commitment to SOA. Figure 5 represents the SOA roadmap components graphically.
Standard Life has implemented all of these SOA roadmap components. As we’ve noted earlier in this article, Standard Life has achieved considerable cost savings from its SOA and has embedded the paradigm deeply into its development culture and operations. Let’s look at their approach in greater detail.
As a manager responsible for application solutions at Standard Life, Derek Ireland sees his primary job as one of instilling the SOA philosophy among the firm’s development groups. “My team provides direction on the application of SOA to business problems. We help developers to identify when it is appropriate to reuse the application solutions we already have, identify when it is appropriate to use packaged solutions, identify when it is appropriate to develop new applications using our core technologies, and identify when it is appropriate to develop new applications using new technologies.”
As noted earlier, Standard Life has created an internal SOA culture that spans all development areas, identifying, exhibiting, and promoting best practices for delivering reusable business services. “Implementing SOA required a shift in our development culture. Our SOA virtual services team maintains that culture by disseminating SOA best practices across development groups.”
The company’s SOA practices continue to evolve. Standard Life defines various SOA “application design patterns” that apply to development and integration projects throughout the company. Programmers are encouraged to use these patterns as company standard practices on all projects. “An application design pattern is a set of architecture, design and implementation patterns,” he says. “Architecture patterns show concepts, such as the need for channel-independent application development. Design patterns give technology-independent design advice, such as when and how to implement reliable messaging in distributed services. And implementation patterns give technology-specific ‘how to’ advice, such as how to deliver a business service using J2EE Message-Driven Beans in IBM WebSphere.”
Standard Life’s developers use company-standard SOA tools to build these application design patterns. “We call this our runtime SOA software framework,” Ireland says. “It abstracts applications from lower-level implementation issues such as communication across tiers and logging. We have implemented our SOA software framework in IBM WebSphere Application Studio Developer for Java development, IBM IMS tools for COBOL development, and Microsoft Visual Studio .NET for C# development. The framework enables all inter-layer communication and run-time business service directory look-ups. Fundamentally, the framework enables development teams to concentrate on the business aspects of the application under development.”
As noted above, Standard Life’s SOA infrastructure includes a mixture of IBM and Microsoft platforms and middleware. The company has standardized on IBM WebSphere MQ Integrator as its principal message-oriented middleware (MOM) platform, Java as the principal programming language, and XML as the markup syntax for all interoperability interfaces. However, it has only a minimal SOAP implementation and hasn’t yet migrated to UDDI and other Web services standards. Their SOA service registry runs on IBM’s UDB database on AIX.
Layered SOA infrastructures
Enterprises committed to SOA will need to continue evolving their infrastructures to realize the ROI from service reuse. Where reuse is concerned, the principal layers of an SOA-enabling infrastructure are the service brokers, orchestration engines, message-oriented middleware environments, and service-level management tools. Many SOA case studies involve all or most of these service layers, implemented through a combination of Web services and other middleware protocols.
Brokering infrastructures encourage reuse by allowing developers to “advertise” their programs’ service interfaces in a shared online registry, repository, or catalog. Any entity that facilitates registration, discovery, and retrieval of service contracts may be regarded as a broker.
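A broker in this sense can be sketched minimally as a shared catalog mapping service names to contracts. The service name and contract fields below are hypothetical; a production registry (UDDI or DBMS-backed, as in the cases discussed here) would add versioning, quality-of-service metadata, and access control:

```python
# Minimal sketch of a service broker: providers register service contracts
# in a shared catalog, and consumers discover them by name. All names and
# contract fields are illustrative assumptions.

class ServiceBroker:
    def __init__(self):
        self._catalog = {}  # service name -> contract metadata

    def register(self, name: str, contract: dict) -> None:
        """Provider 'advertises' a service contract in the catalog."""
        self._catalog[name] = contract

    def discover(self, name: str) -> dict:
        """Consumer retrieves the contract for a named service."""
        return self._catalog[name]

broker = ServiceBroker()
broker.register("quotePolicy", {"endpoint": "mq://quotes", "format": "XML"})
print(broker.discover("quotePolicy")["endpoint"])  # → mq://quotes
```

Anything that supports these two operations, registration and discovery, qualifies as a broker under the definition above, whether it runs on UDDI or on a relational database.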
The UDDI standard defines a service-brokering environment for Web services. However, many companies have implemented service-brokering infrastructures on other platforms, such as DBMSs. As noted above, Standard Life’s non-UDDI-based SOA service registry runs on an IBM UDB DBMS. Standard Life first implemented its registry early in this decade, at a time when “UDDI was immature for a runtime registry,” says the company’s Derek Ireland, “and it didn’t contain some of the features we wanted for a runtime registry, such as quality of service attributes.”
Even when companies commit to UDDI, Web services aren’t the only type of service that companies want to register and manage in their registries. One of the next steps on Con-Way Transportation’s SOA roadmap is to “create a comprehensive repository of business component services for all types of services, including both Web services and other technology implementations,” says the company’s Maja Tibbling, “perhaps using WSDL and UDDI.”
Orchestration engines encourage reuse by allowing developers to build new services through workflow definitions that connect pre-existing services. Developers often use graphical process-definition tools that allow them to specify orchestration tasks, dependencies, and routing and processing steps with flowchart icons. This approach is also called “model-driven development” (MDD). Once defined, these visual process models may be compiled into reusable rule definitions that control the execution of multistep interaction flows, such as those that involve complex transformation and routing rules, across heterogeneous platforms.
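Stripped of its graphical tooling, the orchestration idea reduces to a process definition that chains pre-existing services. The order-processing services below are hypothetical placeholders standing in for a compiled process model:

```python
# Sketch of orchestration: an ordered step list (a stand-in for a compiled
# process model) chains reusable services into a new composite service.
# The individual services are hypothetical examples.

def validate_order(msg):
    msg["valid"] = bool(msg.get("items"))
    return msg

def price_order(msg):
    msg["total"] = sum(item["price"] for item in msg["items"])
    return msg

def route_order(msg):
    # Routing rule: valid orders flow to fulfillment, others are rejected.
    msg["queue"] = "fulfillment" if msg["valid"] else "rejected"
    return msg

# The "process definition": an ordered composition of existing services.
ORDER_WORKFLOW = [validate_order, price_order, route_order]

def orchestrate(workflow, message):
    for step in workflow:
        message = step(message)
    return message

result = orchestrate(ORDER_WORKFLOW, {"items": [{"price": 10.0}, {"price": 5.5}]})
print(result["total"], result["queue"])  # → 15.5 fulfillment
```

The point is that the new composite service contains no new business logic of its own, only the workflow definition; the three steps remain independently reusable.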
“We are using TIBCO BusinessWorks to reliably orchestrate asynchronous business processes in near real-time,” says Con-Way Transportation’s Maja Tibbling. “TIBCO’s transformations are based on XML, Xpath, and XSLT. We can take a transaction from any source in any format within our very heterogeneous environment, transform the data into a canonical XML Schema, and then interact with the shared services.”
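The canonicalization step Tibbling describes can be sketched without XSLT: the snippet below maps a hypothetical legacy message into an assumed canonical schema using Python’s standard XML library. A real implementation would transform against a governed XML Schema, typically via XSLT as Con-Way does:

```python
# Sketch of canonicalization: transform a source-specific message into a
# shared canonical XML form before handing it to shared services. Element
# names on both sides are illustrative assumptions.

import xml.etree.ElementTree as ET

def to_canonical(source_xml: str) -> str:
    src = ET.fromstring(source_xml)
    canon = ET.Element("Shipment")
    ET.SubElement(canon, "TrackingId").text = src.findtext("trk_no")
    ET.SubElement(canon, "Destination").text = src.findtext("dest")
    return ET.tostring(canon, encoding="unicode")

legacy = ("<legacy_shipment><trk_no>CW-1234</trk_no>"
          "<dest>Portland</dest></legacy_shipment>")
print(to_canonical(legacy))
```

Because every shared service sees only the canonical form, new source formats require only a new transformation, not changes to the services themselves.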
MOM environments encourage reuse by providing guaranteed-delivery, event-notification, and publish-and-subscribe protocols that bind heterogeneous application endpoints into an enterprise service bus.
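The publish-and-subscribe pattern a MOM backplane provides can be sketched in-process. A real broker such as SonicMQ or WebSphere MQ adds durable queues, guaranteed delivery, and network transport, none of which this toy bus attempts:

```python
# In-process sketch of publish-and-subscribe: multiple subscribers on a
# topic each receive every event published to it. Topic names and payloads
# are illustrative.

from collections import defaultdict

class MessageBus:
    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Every subscriber on the topic gets the event notification.
        for callback in self._subscribers[topic]:
            callback(message)

bus = MessageBus()
received = []
bus.subscribe("orders.created", received.append)
bus.subscribe("orders.created", lambda m: print("audit:", m))
bus.publish("orders.created", {"id": 42})
print(received)  # → [{'id': 42}]
```

The decoupling is the point: the publisher knows nothing about its consumers, so new services can subscribe to existing events without touching existing code.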
Abebooks has deployed Sonic Software’s MOM products as the backplane for its SOA environment. “We deploy Sonic MQ everywhere in our intranet,” says Abebooks’ Jayson Minard. “Sonic MQ supports both asynchronous and synchronous calls between service endpoints and provides pub/sub, message queuing, and event notification services. At the edges of our intranet, we use Sonic ESB to wrap shared services with Web services interfaces so that they can be called by our external trading partners.”
Service-level management infrastructures encourage reuse by helping companies monitor, optimize, control, and integrate their distributed application environments. A more common term for this functionality is Web services management (WSM) tools. WSM infrastructures help companies ensure the performance, reliability, availability, operational management, lifecycle management, and security of end-to-end Web services within an SOA environment. However, a growing range of WSM tools also monitor and enforce service levels in environments that implement MOM and other middleware protocols, which is why the broader term “service-level management” is more appropriate than the middleware-specific term “WSM.”
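The monitoring half of service-level management can be sketched as a proxy that wraps a service endpoint and tallies calls, failures, and latency. Commercial WSM tools sit in the network path and do far more; the wrapped lookup service here is a hypothetical stand-in:

```python
# Sketch of a service-level management proxy: it wraps a service, measures
# each call, and counts failures so operators can track service levels.
# The lookup service and its behavior are illustrative assumptions.

import time

class MonitoringProxy:
    def __init__(self, service):
        self._service = service
        self.calls = 0
        self.failures = 0
        self.total_seconds = 0.0

    def __call__(self, *args, **kwargs):
        self.calls += 1
        start = time.perf_counter()
        try:
            return self._service(*args, **kwargs)
        except Exception:
            self.failures += 1
            raise
        finally:
            self.total_seconds += time.perf_counter() - start

def lookup_isbn(isbn):
    if not isbn:
        raise ValueError("empty ISBN")
    return {"isbn": isbn, "in_stock": True}

proxied = MonitoringProxy(lookup_isbn)
proxied("0-201-61622-X")
try:
    proxied("")
except ValueError:
    pass
print(proxied.calls, proxied.failures)  # → 2 1
```

Because the proxy is transparent to callers, service-level instrumentation can be added to shared services without modifying either the consumers or the providers.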
Thomson Learning uses Actional’s SOAPstation WSM proxies to control XML-based interactions within their SOA environment, according to company VP Chris Crowhurst. “We chose XML as the basis for service abstraction in our SOA, but haven’t tied XML interfaces to Web services protocols. Much of our SOA involves interchange of XML via HTTP or file transfers, not Web services. Every SOA consumer or provider service interoperates through Actional SOAPstation proxies that sit in the middle to do performance analysis and content-aware routing among distributed services.”
Enterprises can implement SOA without service-level management, but they would be foolish to do so for long. As SOA succeeds, companies will need to ensure 24x7 availability, guaranteed delivery, and performance optimization across the service bus, spanning all service endpoints. But, just as important, your company will need to maintain an IT culture that encourages maximum service reuse, through a full slate of SOA-focused training, incentives, tools, and practices.
Without a doubt, realizing SOA’s full ROI will be impossible without the appropriate technical infrastructure and organizational commitment, operating hand in hand.
Perspectives on measuring the ROI and implementation progress of your enterprise SOA push
--James Kobielus
SOA’s return-on-investment (ROI) is both hard and soft, quantitative and qualitative, near-term and deferred—depending on who you ask, how you ask, and how both you and they define SOA.
SOA’s ramp-up costs
SOA’s ROI can be calculated in monetary terms if you have a crisp enough definition of the approach. Fundamentally, SOA is a development methodology that encourages sharing of remotely invocable application functions throughout networks. Another way of characterizing SOA is as the “provide once, consume everywhere” paradigm. It’s a way of doing more with less, where applications can be built more quickly and incrementally, through recomposition and orchestration of pre-existing services, and with fewer and fewer lines of original code. In other words, SOA promises that the marginal cost of building new applications will drop ever closer to zero as the service-reuse rate climbs toward 100 percent: new applications simply plug into existing services rather than re-invent their functionality. This SOA nirvana of 100 percent service reuse is illustrated in Figure 4.
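That marginal-cost claim can be put in a few lines of arithmetic; the base cost and reuse rates below are illustrative assumptions, not figures from any survey:

```python
# Back-of-the-envelope model: the marginal cost of a new application as
# service reuse climbs. All numbers are illustrative assumptions.

def marginal_app_cost(base_cost: float, reuse_rate: float) -> float:
    """Cost of a new app when `reuse_rate` of its functionality is
    consumed from existing services rather than written from scratch."""
    if not 0.0 <= reuse_rate <= 1.0:
        raise ValueError("reuse_rate must be between 0 and 1")
    return base_cost * (1.0 - reuse_rate)

# As reuse approaches 100 percent, marginal cost approaches zero.
for rate in (0.0, 0.5, 0.9, 1.0):
    print(f"reuse {rate:.0%}: cost {marginal_app_cost(100.0, rate):.0f}")
```

Real reuse rates never reach 100 percent, of course, which is why the ramp-up costs discussed next matter so much.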
SOA can have a substantial payback for implementers. First, though, your organization may have to surmount a significant ramp-up curve that doesn’t come cheap. As the word “architecture” in SOA indicates, you’re going to need to rethink many of your traditional approaches to application modeling, development, integration, deployment, and management if you truly commit to this new paradigm.
“No business wants to spend on architecture—they want you to produce products,” says Christopher Crowhurst, vice president and principal architect at Thomson Learning, a Stamford, Connecticut-based business unit of Thomson Corporation. “I can guarantee there’s a cheaper way to build your next product [than by taking an architectural approach], but there’s no cheaper way to build your next 20 products. That’s why IT professionals are adopting SOA [for application development].”
“The core tenets of SOA are coarse-grained service abstraction, orchestration, and interoperability through proxies,” says Crowhurst. “I only know four or five companies that are purely implementing all of these SOA approaches, because it’s a hard job. It requires a considerable amount of business process re-engineering.”
These observations are borne out by leading analyst firms. Forrester Research analysts Ken Vollmer and Mike Gilpin report that SOA-based development often costs more than traditional approaches (sometimes twice as much) when viewed solely with respect to building a particular application component. But when that component is reused over and over, for integration with other applications both internal and external to the enterprise, SOA may become more than 30 percent more cost effective than traditional development approaches. These savings derive from the lower lifetime development, integration, and maintenance costs associated with SOA-based distributed applications.
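Forrester’s comparison implies a break-even point, which can be sketched numerically; all of the cost figures below are hypothetical assumptions chosen only to illustrate the shape of the curve:

```python
# Illustrative break-even sketch for the claim that an SOA component costs
# roughly twice as much up front but pays back through cheap reuse.
# Every cost figure here is a hypothetical assumption.

def total_cost(build_cost: float, integrations: int,
               cost_per_integration: float) -> float:
    return build_cost + integrations * cost_per_integration

traditional_build, soa_build = 100.0, 200.0            # SOA ~2x up front
traditional_integration, soa_integration = 60.0, 15.0  # reuse is far cheaper

for n in range(6):
    t = total_cost(traditional_build, n, traditional_integration)
    s = total_cost(soa_build, n, soa_integration)
    print(f"{n} reuses: traditional {t:.0f}, SOA {s:.0f}")
# With these assumed numbers, SOA pulls ahead by the third reuse.
```

The crossover point shifts with the assumptions, but the shape is the general one: SOA loses on the first build and wins on lifetime integration cost.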
SOA’s downstream cost savings from application reuse and consolidation
Many of the savings from SOA stem from its ability to consolidate silos of redundant application functionality and data throughout organizations. Fewer software licenses and servers translate into clear cost savings in enterprise capital and operating budgets. Fewer redundant software components throughout the organization translate into less need for redundant programming groups. Application consolidation onto fewer platforms reduces deployment, integration, and maintenance costs over the software lifecycle. A recent Gartner Group study found that the costs of installing, integrating, and maintaining software over its useful life can be as much as six times greater than its license cost.
You need to make an investment in SOA in order to reap the cost-reduction benefits of application consolidation, sharing, and reuse. Standard Life Group of Edinburgh, Scotland has committed considerable resources to SOA as the basis for all current and future business applications.
“We have invested in our people and processes in order to fully implement and realize the value of SOA,” says Derek Ireland, group tech solutions manager at the firm. “We have developed architecture, design and implementation patterns for delivering service-oriented applications. We have over 30 person-years of effort invested in a run-time framework that underpins the whole approach, abstracting applications from underlying transports and protocols and providing standard APIs for common application behaviors such as logging. We have a virtual team representing all development areas that identifies, exhibits, and promotes best practice for delivering reusable business services. This team manages the catalog of reusable business services and the development process required to ensure consistency of XML interfaces, ensuring interoperability of services.”
Standard Life has kept close tabs on the return from its SOA investment. “We’ve saved over 2.8 million pounds in development costs over the past three years,” says Ireland, “based on reuse of existing functionality within the service catalog.” The company currently has around 300 reusable services in its catalog. Over 50 percent of those services being reused. They are being consumed by more than 70 other services across all environments at any time. All in all, the company counts 361 instances of service reuse. Over 40 percent of its back-end transactions are initiated through its SOA-based environment. The volume of transactions that traverse Standard Life’s SOA environment has increased 900 percent since the company began its SOA implementation in 2001. The company maintains three SOA-implementing development groups with around 500 personnel among them, of whom about half are delivering SOA services and applications. Their SOA-enabling distributed-application infrastructure is managed by a staff of seven.
Other enterprises report both hard and soft monetary paybacks from SOA. “We’ve seen huge savings in Oracle [database] licenses,” says Jayson Minard, CIO of Abebooks in Victoria, British Columbia, due to the server consolidation that has accompanied their SOA push. At the same time, says Minard, “we’ve also seen savings on the team side, due to having more development staff bandwidth for new projects. Development group efficiency is going up.”
Another cost advantage of SOA is that—by stressing platform-agnostic service virtualization—it allows enterprises to choose the most cost-effective best-of-breed application components for particular functions. “With SOA, you can go the vendor’s SOA route,” says Minard. “Or you can take the approach that we did. First, we identified the features and benefits we were looking for, such as flexibility, decoupling, asynchronous communications, and service boundary partitioning. Then we identified best-of-breed products that addressed those requirements. We evaluated 14 products, including commercial offerings and open-source packages. We tested interoperability among those products, and eliminated the ones where the vendors differed from us in their philosophical approach to SOA. In that way, we’ve steered clear of vendor lock-in.”
SOA-based acceleration of application development
Speedier SOA-based application development can also contribute to a company’s financial well-being and competitive agility. “Applications that used to take two weeks to develop now take two days,” says Minard.
“We’re a fast-paced multi-channel transportation company that requires ever faster time to market for IT solutions,” says Maja Tibbling, application architect with Con-Way Transportation Services in Portland, OR. “To take advantage of new business opportunities, SOA allows for speedy delivery of new functionality as well as providing adaptability in existing functionality. This flexibility also permits us to meet the evolving demands of our customers.”
Faster development allows companies to respond more flexibly to new competitive challenges, stimulating top-line revenues. TSYS Prepaid Inc. is a hosted service provider that processes prepaid debit cards on behalf of financial institutions. Its SOA-based applications are the basis for revenue-producing services. “Accelerated development allows us to recognize revenue more quickly,” says Carl Ansley, the company’s CTO.
Difficulty of quantifying other SOA ROI metrics
SOA’s other benefits may be quantified to greater or lesser degrees. How can you put a valid number on improved business adaptability and agility? Or on improvements in application consistency due to reliance on a common set of shared services? Or on reductions in the risk to interoperability when you make changes to application code that’s been virtualized, abstracted, and loosely coupled in keeping with SOA principles? Or enhancements to application usability, scalability, and performance that come from reliance on shared services that have been built and optimized by development centers of competence within your organization?
One of the principal difficulties with quantifying SOA’s ROI in the real world is that the paradigm overlaps with so many other development, integration, and management approaches. SOA, to some degree, carries on many of the objectives and methods of component-based development, as attested by several case studies who were interviewed for these articles.
But SOA has, in industry discussions, been elevated from a mere approach to something resembling a religion. It is often portrayed as an all-pervading panacea, a golden force field that encompasses all good programming practices from Babbage to the present. If you look at almost any popular discussion of SOA, you’ll see it mentioned in the same breath as Web services, enterprise service bus (ESB), business process management (BPM), model-driven development (MDD), governance, and other hot topics. Indeed, every IT vendor with a savvy marketing group these days has hitched its entire product portfolio and roadmap on its alignment with SOA. So it’s hard to know what ROI to credit to SOA versus these other development and integration approaches.
SOA is still a fuzzy concept to many enterprise IT professionals, and it sprawls, in many people’s minds, across a wide range of loosely related technologies and approaches. According to a recent IDG Research Services Group survey, IT professionals are almost evenly split between people who claim some familiarity with SOA (52 percent) and those who admit they haven’t a clue (48 percent). Likewise, the split was almost even between those respondents who reported strong confidence in SOA’s long-term potential (55 percent) and those whose confidence in the paradigm was lacking or lackluster (45 percent).
IDG asked that same population of respondents to associate various phrases with SOA. The respondents ranked the most valid SOA descriptor—“reusable applications”—fifth among the options given. Respondents ranked such descriptors as “software as a service,” “enterprise application integration,” “Web services registries/repositories.” and “frequent use of Web services” higher, even though those phrases don’t directly articulate SOA’s core meaning.
As we can see, popular understanding of SOA concepts is fuzzy, to say the least. What’s even fuzzier is any reliable data on who exactly is implementing SOA, with what degree of commitment, and at what level in the organization. In that same IDG study, 28 percent of respondents stated that their companies are already implementing SOA, with slightly less than half of those SOA implementers merely conducting pilot projects. Of those respondents who are considering SOA but don’t currently have pilot projects, IDG found that only 22 percent are actively investigating SOA-based solutions over the coming year.
However, a recent Forrester Research survey of large North American companies reported that more than 70 percent of respondents have already implemented SOA. Forrester analysts say that by the end of 2005, 51 percent of medium-sized enterprises and 46 percent of small businesses will have adopted SOA as well.
The leading IT analyst firms present widely divergent pictures of SOA adoption rates in 2005. So, really, it’s anybody’s guess exactly how many organizations of various sizes have adopted SOA, or plan to do so over the coming several years. And there’s no telling how many of the people responding to these surveys are clear on precisely what they mean by SOA, or are using the term that’s consistent with how others are construing it. Or whether any of them is correctly using the term.
Enumerating the levels of SOA implementation
Another problem with such surveys is that they tend to use a binary definition of SOA adoption: either you’re implementing SOA or you’re not, with no gray territory between these two poles. This simplistic approach to gauging SOA adoption is at odds with the extremely complex, multifaceted nature of this methodology.
Anybody who’s spent any time studying the topic can quickly identify various steps, phases, layers, degrees, and approaches to implementing SOA. As the case studies in this article and the companion article, “SOA: The ABCs,” illustrate, there’s no single way to do SOA. By the same token, your ROI will depend considerably on the approach your organization has taken, and on how far you are along your SOA implementation roadmap.
One of the most critical elements of any enterprise SOA roadmap is the need to gain a solid commitment from senior IT and business managers, based on such business benefits as accelerated development, reduced cost, and greater business agility. Also, you should provide developers with the training, tools, guidelines, and incentives necessary to get them thinking in SOA terms, and to discourage development of one-off and non-modular applications. Your SOA roadmap would be incomplete without a thorough reorganization of IT governance processes aimed at enforcing organization-wide adherence to SOA best practices.
An enterprise SOA roadmap should incorporate the requisite philosophy, culture, practices, tools, and infrastructure. The more of these roadmap components you’ve established in your organization, the closer you are to realizing the full ROI on your commitment to SOA. Figure 5 represents the SOA roadmap components graphically.
Standard Life has implemented all of these SOA roadmap components. As we’ve noted earlier in this article, Standard Life has achieved considerable cost savings from its SOA and has embedded the paradigm deeply into its development culture and operations. Let’s look at their approach in greater detail.
As a manager responsible for application solutions at Standard Life, Derek Ireland sees his primary job as one of instilling the SOA philosophy among the firm’s development groups. “My team provides direction on the application of SOA to business problems. We help developers to identify when it is appropriate to reuse the application solutions we already have, identify when it is appropriate to use packaged solutions, identify when it is appropriate to develop new applications using our core technologies, and identify when it is appropriate to develop new applications using new technologies.”
As noted earlier, Standard Life has created an internal SOA culture that spans all development areas, identifying, exhibiting, and promoting best practices for delivering reusable business services. “Implementing SOA required a shift in our development culture. Our SOA virtual services team maintains that culture by disseminating SOA best practices across development groups.”
The company’s SOA practices continue to evolve. Standard Life defines various SOA “application design patterns” that apply to development and integration projects throughout the company. Programmers are encouraged to use these patterns as company standard practices on all projects. “An application design pattern is a set of architecture, design and implementation patterns,” he says. “Architecture patterns show concepts, such as the need for channel-independent application development. Design patterns give technology-independent design advice, such as when and how to implement reliable messaging in distributed services. And implementation patterns give technology-specific ‘how to’ advice, such as how to deliver a business service using J2EE Message-Driven Beans in IBM WebSphere.”
Standard Life’s developers use company-standard SOA tools to build these application design patterns. “We call this our runtime SOA software framework,” Ireland says. “It abstracts applications from lower-level implementation issues such as communication across tiers and logging. We have implemented our SOA software framework in IBM WebSphere Studio Application Developer for Java development, IBM IMS tools for COBOL development, and Microsoft Visual Studio .NET for C# development. The framework enables all inter-layer communication and run-time business service directory look-ups. Fundamentally, the framework enables development teams to concentrate on the business aspects of the application under development.”
As noted above, Standard Life’s SOA infrastructure includes a mixture of IBM and Microsoft platforms and middleware. The company has standardized on IBM WebSphere MQ Integrator as its principal message-oriented middleware (MOM) platform, Java as the principal programming language, and XML as the markup syntax for all interoperability interfaces. However, it has only a minimal SOAP implementation and hasn’t yet migrated to UDDI and other Web services standards. Its SOA service registry runs on IBM’s UDB database on AIX.
Layered SOA infrastructures
Enterprises committed to SOA will need to continue evolving their infrastructures to realize the ROI from service reuse. Where reuse is concerned, the principal layers of an SOA-enabling infrastructure are the service brokers, orchestration engines, message-oriented middleware environments, and service-level management tools. Many SOA case studies involve all or most of these service layers, implemented through a combination of Web services and other middleware protocols.
Brokering infrastructures encourage reuse by allowing developers to “advertise” their programs’ service interfaces in a shared online registry, repository, or catalog. Any entity that facilitates registration, discovery, and retrieval of service contracts may be regarded as a broker.
The UDDI standard defines a service-brokering environment for Web services. However, many companies have implemented service-brokering infrastructures on other platforms, such as DBMSs. As noted above, Standard Life’s non-UDDI-based SOA service registry runs on an IBM UDB DBMS. Standard Life first implemented its registry early in this decade, at a time when “UDDI was immature for a runtime registry,” says the company’s Derek Ireland, “and it didn’t contain some of the features we wanted for a runtime registry, such as quality of service attributes.”
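Whatever platform it runs on, a runtime service registry of this sort reduces to a lookup table keyed by service name, with quality-of-service attributes attached to each registered endpoint. The sketch below is a minimal in-memory Python illustration of that contract; the service names, attributes, and exact-match QoS rule are all hypothetical, and a production registry would, like Standard Life's, live in a DBMS.

```python
class ServiceRegistry:
    """Minimal in-memory runtime service registry (illustrative sketch)."""

    def __init__(self):
        self._entries = {}  # service name -> list of endpoint records

    def register(self, name, endpoint, **qos):
        # qos carries quality-of-service attributes (e.g. a service tier),
        # the kind of runtime metadata early UDDI versions lacked
        self._entries.setdefault(name, []).append(
            {"endpoint": endpoint, "qos": qos}
        )

    def lookup(self, name, **required_qos):
        """Return endpoints whose QoS attributes satisfy every requirement."""
        return [
            rec["endpoint"]
            for rec in self._entries.get(name, [])
            if all(rec["qos"].get(k) == v for k, v in required_qos.items())
        ]

# Hypothetical consumers discover providers at run time by name and QoS
registry = ServiceRegistry()
registry.register("quotation", "http://host-a/quotation", tier="gold")
registry.register("quotation", "http://host-b/quotation", tier="bronze")
gold_endpoints = registry.lookup("quotation", tier="gold")
```

Because consumers bind to whatever the lookup returns, a provider can be moved or replaced by updating its registry record, with no change to consuming code.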
Even when companies commit to UDDI, Web services aren’t the only type of service that companies want to register and manage in their registries. One of the next steps on Con-Way Transportation’s SOA roadmap is to “create a comprehensive repository of business component services for all types of services, including both Web services and other technology implementations,” says the company’s Maja Tibbling, “perhaps using WSDL and UDDI.”
Orchestration engines encourage reuse by allowing developers to build new services through workflow definitions that connect pre-existing services. Developers often use graphical process-definition tools that allow them to specify orchestration tasks, dependencies, and routing and processing steps with flowchart icons. This approach is also called “model-driven development” (MDD). Once defined, these visual process models may be compiled into reusable rule definitions that control the execution of multistep interaction flows, such as those that involve complex transformation and routing rules, across heterogeneous platforms.
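The compiled output of such a visual model can be thought of as an ordered set of steps, each invoking an existing service. The Python sketch below shows only that core idea; the step functions are hypothetical stand-ins for real shared services, and a real orchestration engine adds branching, compensation logic, and durable state for long-running flows.

```python
def orchestrate(steps, message):
    """Run a compiled process definition: pass the message through each step.

    Each step is a callable standing in for a pre-existing shared service,
    so a new business process is assembled from services that already exist.
    """
    for step in steps:
        message = step(message)
    return message

# Hypothetical reusable services composed into a new claims-handling flow
def validate(msg):
    return {**msg, "valid": True}

def enrich(msg):
    return {**msg, "region": "EMEA"}

def route(msg):
    return {**msg, "queue": "claims" if msg["valid"] else "errors"}

result = orchestrate([validate, enrich, route], {"claim_id": 42})
```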
“We are using TIBCO BusinessWorks to reliably orchestrate asynchronous business processes in near real-time,” says Con-Way Transportation’s Maja Tibbling. “TIBCO’s transformations are based on XML, XPath, and XSLT. We can take a transaction from any source in any format within our very heterogeneous environment, transform the data into a canonical XML Schema, and then interact with the shared services.”
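Where BusinessWorks applies such a mapping declaratively with an XSLT stylesheet, the effect of canonicalization can be sketched in a few lines of hand-coded Python. The legacy and canonical element names below are invented for illustration; the point is that every downstream consumer sees the same canonical shape, whatever the source system emits.

```python
import xml.etree.ElementTree as ET

def to_canonical(source_xml):
    """Map a source-specific record onto a (hypothetical) canonical schema.

    Shared services stay ignorant of each producer's native format because
    all inbound messages pass through a transformation like this one.
    """
    src = ET.fromstring(source_xml)
    canonical = ET.Element("Shipment")
    ET.SubElement(canonical, "TrackingId").text = src.findtext("pro_number")
    ET.SubElement(canonical, "Origin").text = src.findtext("orig")
    return ET.tostring(canonical, encoding="unicode")

# A legacy-format message, as one of many heterogeneous sources might send it
legacy = "<load><pro_number>12345</pro_number><orig>PDX</orig></load>"
canonical_xml = to_canonical(legacy)
```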
MOM environments encourage reuse by providing guaranteed-delivery, event-notification, and publish-and-subscribe protocols that bind heterogeneous application endpoints into an enterprise service bus.
Abebooks has deployed Sonic Software’s MOM products as the backplane for its SOA environment. “We deploy Sonic MQ everywhere in our intranet,” says Abebooks’ Jayson Minard. “Sonic MQ supports both asynchronous and synchronous calls between service endpoints and provides pub/sub, message queuing, and event notification services. At the edges of our intranet, we use Sonic ESB to wrap shared services with Web services interfaces so that they can be called by our external trading partners.”
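Stripped of persistence, network transport, and guaranteed delivery, the pub/sub pattern that products such as SonicMQ provide reduces to a topic table that decouples publishers from subscribers. The toy in-process Python sketch below (topic names hypothetical) shows only that decoupling:

```python
from collections import defaultdict

class MiniBus:
    """Toy in-process publish/subscribe bus (illustrative only).

    A real MOM product adds persistence, guaranteed delivery, and network
    transport; this sketch shows the property that makes pub/sub
    SOA-friendly: publishers never name their consumers.
    """

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        # Deliver the message to every current subscriber of the topic
        for handler in self._subscribers[topic]:
            handler(message)

# Two independent consumers of the same hypothetical topic
bus = MiniBus()
audit_log, fulfillment = [], []
bus.subscribe("orders", audit_log.append)
bus.subscribe("orders", fulfillment.append)
bus.publish("orders", {"order_id": 7})
```

A new consumer can be attached to the "orders" topic without touching the publisher, which is precisely the loose coupling an enterprise service bus is meant to deliver.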
Service-level management infrastructures encourage reuse by helping companies monitor, optimize, control, and integrate their distributed application environments. This functionality is more commonly known by the term Web services management (WSM). WSM infrastructures help companies ensure the performance, reliability, availability, operational management, lifecycle management, and security of end-to-end Web services within an SOA environment. However, a growing range of WSM tools also monitor and enforce service levels in environments that implement MOM and other middleware protocols, which is why the broader term “service-level management” is more appropriate than the middleware-specific term “WSM.”
Thomson Learning uses Actional’s SOAPstation WSM proxies to control XML-based interactions within their SOA environment, according to company VP Chris Crowhurst. “We chose XML as the basis for service abstraction in our SOA, but haven’t tied XML interfaces to Web services protocols. Much of our SOA involves interchange of XML via HTTP or file transfers, not Web services. Every SOA consumer or provider service interoperates through Actional SOAPstation proxies that sit in the middle to do performance analysis and content-aware routing among distributed services.”
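The content-aware routing such a proxy performs boils down to: parse the payload, inspect an element, pick a backend. A minimal Python sketch, with invented message types and endpoint URLs:

```python
import xml.etree.ElementTree as ET

def route_request(xml_payload, routing_table, default_endpoint):
    """Pick a backend endpoint by inspecting the payload's root element.

    A management proxy sitting between consumers and providers can make
    this decision without either side knowing about the other; real
    products also gather performance metrics at the same choke point.
    """
    message_type = ET.fromstring(xml_payload).tag
    return routing_table.get(message_type, default_endpoint)

# Hypothetical routing table for two message types
table = {
    "ExamRequest": "http://exams.internal/service",
    "ScoreReport": "http://scores.internal/service",
}
target = route_request("<ExamRequest/>", table, "http://fallback.internal")
```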
Enterprises can implement SOA without service-level management, but they would be foolish to do so for long. As SOA succeeds, companies will need to ensure 24x7 availability, guaranteed delivery, and performance optimization across the service bus, spanning all service endpoints. But, just as important, your company will need to maintain an IT culture that encourages maximum service reuse, through a full slate of SOA-focused training, incentives, tools, and practices.
Without a doubt, realizing SOA’s full ROI will be impossible without the appropriate technical infrastructure and organizational commitment, operating hand in hand.
SOA: The TBDs
What still needs to be defined, developed, delivered, deployed, and demonstrated to realize SOA’s potential in the real world
--James Kobielus
SOA is clearly a work in progress. Actually, SOA is a target state that many organizations may approach but never fully attain. It’s all about achieving maximum reuse and minimum redundancy of services throughout complex, multiplatform distributed environments. Raising the bar even higher, the industry continues to elaborate the SOA vision by adding Web services, IT governance, enterprise service bus (ESB), business process management (BPM), model-driven development (MDD), and other hot topics to the bubbling brew.
TBD: SOA conceptual clarity
If SOA is going to endure as a useful paradigm, the IT industry will need to continue clarifying what it means by the term and why anybody should care. Given the restlessness of the IT industry, another paradigm may some day surface to steal SOA’s fire as the architectural buzzphrase du jour.
For sure, there’s no shortage of SOA whitepapers, consultant reports, position papers, articles, courses, books, and other prescriptive guidelines on the market. Every SOA consultant has their own vision of what the term means—or should mean—as does every IT vendor that has aligned its products and services with this paradigm. For enterprise IT professionals, it’s often difficult to map conceptually between different SOA visions presented in vendor materials, consultant reports, and press articles. At the same time, distinguishing between substantive SOA discussions and marketing fluff can also be a bit tricky.
Paradigms are slippery things to standardize, but that doesn’t stop well-meaning industry groups from attempting to do just that. The Organization for the Advancement of Structured Information Standards (OASIS) is one of the principal IT standardization groups these days, and it has established two technical committees (TCs) to clarify industry understanding of SOA approaches. In April 2004, OASIS created the Electronic Business SOA (ebSOA) TC. In February 2005, OASIS chartered the SOA Reference Model (SOA-RM) TC. The overlap among these groups’ charters is considerable, but the general division of responsibilities is as follows.
OASIS’ ebSOA TC is defining a reference architecture, guidelines, and best practices for implementing SOA within B2B environments that implement ebXML standards. The TC is also defining the ongoing roadmap for OASIS’ ebXML Technical Architecture, which is currently in version 1.04. The group is attempting to align the ebXML Technical Architecture with ongoing changes to the various ebXML standards, which address such B2B requirements as service brokering, reliable messaging, orchestration, and trading partner agreements. It is also addressing how ebXML standards will evolve to integrate more thoroughly with the growing range of Web services standards being defined at OASIS, W3C, and elsewhere. The ebSOA TC has promised its first draft deliverable, an architectural specification, for July 31, 2005. An ebSOA best practice document is promised for mid-2006.
OASIS’ SOA-RM TC is defining an SOA reference model that is broader in scope than that being worked out at the ebSOA TC. The SOA-RM TC is defining an abstract reference model that can encompass ebXML, Web services, and other implementation environments within which SOA may be applied. The group will release a first draft reference model for general industry comment by the end of 2005.
One of the SOA-RM TC’s principal goals is to define a set of SOA concepts, functional elements, architectural patterns, and best practices that can be applied unambiguously within and between different implementation environments. The TC has stated that ebXML and Web services aren’t the only possible SOA implementation environments that it is addressing. Per the group’s ongoing work, core SOA abstractions include services, contracts, policies, semantics, discovery, presence, and availability. In addition, the group is drawing an important distinction between SOA use cases of varying complexity:
• Simple SOA: This involves a single shared service on which there are no requirements for reliable messaging, transactional rollback, long-running orchestration, or quality of service.
• Intermediate SOA: This involves multiple shared services that are presented to consumers through a single “root” or “façade” service; are hosted and managed within the same administrative domain; and may require transactional rollback, long-running orchestration, and/or quality-of-service behind the service root.
• Complex SOA: This involves multiple shared services that are hosted and managed within one or more administrative domains, and that involve reliable messaging, transactional rollback, long-running orchestration, and/or quality-of-service policies to be enforced in interactions among services. Relationships among services may follow hierarchical, parent-child, peer-to-peer, and other models.
In addition to the work that OASIS TCs have been doing to clarify SOA, vendors of all sorts have been pitching their SOA frameworks into industry circulation. Whether all of this SOA framework-mongering has clarified anybody’s thinking on the topic, or simply added more clutter, is a point worth arguing. Usually, any given vendor’s take on SOA is closely aligned with its commercial offerings—a fact you should weigh in your evaluation of their SOA recommendations. But that shouldn’t stop you from considering what they have to say on SOA, and on how you can realize cost savings and other SOA benefits through real-world IT solutions.
IBM, for example, has established several SOA-related initiatives that leverage its worldwide dominance in IT professional services. Many enterprises will adopt the SOA approaches of their principal integration partners, so IBM’s SOA framework will increasingly find its way into many real-world deployments.
IBM recently launched an SOA implementation framework, called Service-Oriented Modeling and Architecture (SOMA), that encourages enterprises to use professional services for internal and external enterprise integration. Managed by the IBM Global Services organization, the SOMA framework guides enterprises in SOA planning, design, implementation, and management. IBM Global Services implements SOMA through reusable software components, best practices, and business modeling tools that it developed in engagements in many vertical markets. The company provides online and classroom SOA education courses and customizes one-on-one SOA workshops to client needs. It provides a free online assessment tool that enables businesses to evaluate their current level of SOA readiness and identify SOA priorities. And, no surprise here, IBM has integrated SOA patterns, processes, and tools into its WebSphere, Rational, and Tivoli products.
SAP, as the world’s dominant business application vendor, is also promoting SOA as a core theme for its ongoing product development initiatives and customer engagements. Many enterprises will find themselves moving deeper into SOA as SAP—their ERP vendor—adopts service orientation throughout its product suite.
SAP’s SOA focus is on its mySAP generation of packaged business applications and on the underlying NetWeaver platform. NetWeaver implements the vendor’s Enterprise Services Architecture (ESA), which encompasses a broad range of Web services standards. Through native Web services support, integration middleware, and visual MDD tools, developers can build composite applications that bridge diverse SAP and non-SAP environments. SAP is in the process of decomposing its vast monolithic R/3 and mySAP application suites into functions that can be exposed as modular business services. SAP says that by 2007 all of its application functionality will be exposed through WSDL and registered within a UDDI registry native to NetWeaver, making SAP’s product portfolio thoroughly SOA-enabled. SAP is also ramping up its SOA professional services offerings, providing a broader range of best-practices workshops and tools.
Systinet, a leading vendor of UDDI registries, has developed a framework that stresses the central role of registries in policy-based SOA. Systinet’s Governance Interoperability Framework (GIF) positions the SOA registry as the principal policy-management platform for service composition, integration, security, management, and other governance functions. Systinet has gained support for GIF from many Web services management business partners, including Actional, AmberPoint, DataPower, Layer 7, Reactivity, and Service Integrity. For several years, Systinet has been a vanguard vendor in Web services and SOA, so these partnerships should be interpreted as strong industry support for GIF as a basis for SOA governance.
To date, Systinet has not published the GIF specifications that are being implemented by the company and its partners. However, Systinet has stated that the GIF includes specifications that cover the following policy-management (aka governance) requirements across multivendor SOAs:
• Governance data integration: GIF-compliant SOAs will leverage the registry as the authoritative repository and management platform for service descriptions, attributes, and policies.
• Governance control integration: GIF-compliant SOAs will implement a common framework for service event notification and response based on policies managed in the registry.
• Governance user-interface integration: GIF-compliant SOAs will provide a common view of service descriptions, attributes, and policies managed in the registry.
Infravio is a WSM and Web services registry vendor with its own SOA framework, which it calls “Intentional SOA.” Basically, Infravio’s framework is a checklist that enterprises can use to determine their requirements and readiness for SOA, Web services, Web services management, and registries. One of the most useful features of Infravio’s Intentional SOA whitepaper is that it clearly delineates the various Web services standards that will be necessary for a layered, full-function, platform-agnostic SOA. In this regard, Infravio is like many vendors, whose Web services and SOA strategies are one and the same thing.
TBD: SOA on mature, comprehensive, ubiquitous, standards-based Web services
Truth be told, that’s not such a bad SOA strategy. Web services are the first middleware and development environment that can truly be called platform-agnostic. Web services enable more complete service virtualization, abstraction, loose coupling, and reuse than any prior environment. Web services are the preferred environment for thoroughgoing SOA. Within Web services environments, WSDL service contracts provide the principal platform-agnostic APIs for service virtualization.
In order to realize the nirvana of SOA everywhere, the following, at minimum, must happen:
• SOAP, the principal Web services middleware protocol, must be implemented natively in all new application development and integration projects.
• WSDL, the principal Web services API, must be used to expose the functionality of new applications, and to wrap existing services, applications, operating platforms, and other resources as Web services.
• UDDI, the principal Web services registry environment, must be implemented throughout your application infrastructure, and all WSDL and other service metadata and policies must be published to those registries.
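The first of those conditions is mechanically simple: a SOAP 1.1 envelope is just a namespaced XML wrapper around a payload. The Python sketch below builds one with the standard library; the GetQuote operation is invented for illustration.

```python
import xml.etree.ElementTree as ET

# SOAP 1.1 envelope namespace
SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def soap_envelope(payload):
    """Wrap a payload element in a minimal SOAP 1.1 envelope."""
    ET.register_namespace("soap", SOAP_NS)
    envelope = ET.Element("{%s}Envelope" % SOAP_NS)
    body = ET.SubElement(envelope, "{%s}Body" % SOAP_NS)
    body.append(payload)
    return ET.tostring(envelope, encoding="unicode")

# Hypothetical operation a provider might expose
request = soap_envelope(ET.Element("GetQuote"))
```

Real toolkits generate this plumbing from WSDL, of course; the sketch only shows that the wire format itself carries no platform-specific baggage.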
Of course, enterprises can implement a limited form of SOA within traditional middleware environments, such as Common Object Request Broker Architecture (CORBA) and Distributed Component Object Model (DCOM), relying on these environments’ native service contract, registration, location, and binding interfaces. Indeed, many of the SOA case studies we describe in this Network World special section rely on older middleware approaches in addition to Web services. However, those pre-Web services SOA implementations are tightly coupled to the native object models, methods, and application programming interfaces of particular operating and application platforms. Consequently, changes to the underlying, tightly coupled platforms can disrupt interoperability with consuming services.
Ubiquitous adoption of Web services is a necessary, but not sufficient, condition for universal SOA. There are SOA-friendly and SOA–unfriendly ways to implement Web services. With Web services-based SOAs, developers have the option of coupling services either loosely or tightly, based on whether they design WSDL interfaces to correspond closely with service implementations—though loose coupling is the best practice for flexible SOAs. Some Web services implementations may introduce tight coupling with underlying platforms, but, generally, Web services encourage loose coupling via the following service design practices:
• Define WSDL-based service APIs so that these interfaces don’t contain types, collections, or other artifacts specific to the platform—such as J2EE or .NET—on which the service is implemented;
• Expose service functionality through standard document-oriented approaches, especially SOAP’s Document/Literal style interface;
• Create self-describing XML-based messages/documents that convey context information pertaining to ongoing interactions among services.
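The third practice, carrying interaction context inside the document itself, can be sketched as follows. The envelope and header element names are invented for illustration; a real system would use a standard header vocabulary.

```python
import xml.etree.ElementTree as ET

def make_message(correlation_id, body_xml):
    """Build a self-describing, document-style message (sketch).

    Because the correlation context travels inside the document, any
    intermediary can relate this message to an ongoing interaction
    without holding shared session state with the sender.
    """
    message = ET.Element("Message")
    context = ET.SubElement(message, "Context")
    ET.SubElement(context, "CorrelationId").text = correlation_id
    body = ET.SubElement(message, "Body")
    body.append(ET.fromstring(body_xml))
    return ET.tostring(message, encoding="unicode")

# Hypothetical business payload wrapped with its context
doc = make_message("claim-2041", "<SettleClaim><Amount>150</Amount></SettleClaim>")
```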
One of the drawbacks of a purely Web services-based approach to SOA is that some of the most critical “WS-*” standards have not yet been finalized, ratified, or adopted broadly by vendors and users. According to a recent IDG Research Services Group survey, IT professionals ranked incomplete and immature Web services standards as one of the top inhibitors of SOA implementations in their companies. Other significant SOA inhibitors include the immaturity of solutions that implement Web services standards and limited developer support for these standards.
The WS-* “stack,” like SOA, is a work in progress. It is not yet a full-featured alternative to older object-brokering and message-oriented middleware protocols. And it is not a unified set of specifications under single authorship, management, or governance, and probably will never become such. It is being defined by diverse organizations, including OASIS, the World Wide Web Consortium (W3C), and various vendor partnerships.
Sharing and reusing Web services is complicated by the fact that many platforms implement diverse versions of various WS-* specifications for federated identity, security, reliable messaging, event notification, publish and subscribe, and other critical infrastructure functions. Consequently, IT professionals will continue grappling with diverse platform-specific implementations of WS-* specifications until vendors converge on consistent implementations of common standards in various layers. In this regard, then, the current WS-* stack—as implemented in real products and tools--doesn’t yet support truly platform-independent SOA.
There is considerable industry disagreement and rivalry regarding which WS-* specifications should prevail in many areas, such as federated identity, reliable messaging, event notification, and transactions. And even in functional areas where the industry has already converged on principal WS-* standards, it may take years before most vendors have implemented those standards in their products and have worked through the interoperability issues associated with diverse implementations of agreed-upon standards.
Until such time as the industry converges on universal, comprehensive, stable WS-* standards in various functional layers, Web services—as a complete SOA stack—will remain immature. Some of the most noteworthy areas of WS-* industry rivalry are as follows:
• Federated identity: In March 2005, OASIS ratified Security Assertion Markup Language 2.0 as a standard, incorporating specifications that had been contributed by the Liberty Alliance initiative. The previous SAML versions—1.0 and 1.1—have achieved considerable commercial adoption throughout the federated identity marketplace, with the Liberty Alliance specifications achieving some commercial adoption. Another competing specification—the Microsoft-developed WS-Federation—remains in the industry picture, despite having achieved far less commercial adoption and not being managed by any standards group. WS-Federation, released in July 2003, covers a functional range similar to that of SAML 2.0. However, SAML 2.0 and WS-Federation are entirely non-interoperable specifications. WS-Federation doesn’t build on or extend SAML, and it isn’t directly interoperable with SAML or the Liberty interfaces. WS-Federation is kept alive through Microsoft’s decision to build its Active Directory Federation Services feature of “Longhorn” on the specification.
• Reliable messaging: In November 2004, OASIS ratified WS-Reliability 1.1 as a standard. Produced by OASIS’ Web Services Reliable Messaging Technical Committee (WSRM TC), the standard defines a protocol for asynchronous or synchronous message exchange with guaranteed, once-only, ordered delivery, even in the presence of component, system, or network failures. It defines mechanisms for message persistence, message acknowledgement and resending, duplicate message elimination, ordered message delivery, and delivery status awareness for sender and receiver applications. However, WS-Reliability is not the only WS-* specification to achieve considerable industry support. An industry group dominated by Microsoft has implemented the rival WS-ReliableMessaging specification, which addresses the same core requirements and features as WS-Reliability.
• Transactional coordination: In October 2003, Sun, Oracle, and other vendors submitted the WS Composite Application Framework (WS-CAF) to OASIS. WS-CAF defines a standard framework for transactional coordination of long-running business processes within Web services environments. WS-CAF consists of three specifications, none of which has yet been finalized or ratified within the OASIS WS-CAF TC: WS-Context, WS-Coordination Framework, and WS-Transaction Management. Meanwhile, in September 2003, Microsoft, IBM, and BEA published WS-Coordination, which provides an extensible framework for defining transaction coordination patterns and protocols. At the same time, Microsoft, IBM, and BEA also published WS-AtomicTransaction, which references WS-Coordination and defines an “atomic” (i.e., ACID) coordination type. Several months later, these vendors published the WS Business Activity Framework (WS-BAF), which defines protocols for transactional coordination of long-running business activities. Since that time, Microsoft, IBM, and BEA have not submitted WS-Coordination, WS-AtomicTransaction, or WS-BAF to OASIS or any other standards group.
Of course, the existence of rival WS-* specifications doesn’t doom Web services as a basis for SOA. Sharing and reuse can still proceed in a heterogeneous WS-* stack and application infrastructure. To address this stubborn reality, a flexible enterprise service bus (ESB) infrastructure can provide “impedance matching” between diverse WS-* standards, versions, and implementations. ESB products come from a broad range of middleware vendors, including Cape Clear Software, Fiorano Software, IBM, IONA, Sonic Software, Systinet, TIBCO Software, and webMethods. Depending on the vendor and product, ESB solutions provide some or all of the following features for a robust SOA infrastructure:
--Wrap legacy environments with WS-* interfaces, thereby supporting legacy integration;
--Provide any-to-any interoperability through integration brokers, which provide orchestration, transformation, and routing services among distributed services, applications, data repositories, and other resources;
--Support content-aware monitoring and enforcement of performance, security, and other enterprise policies pertaining to Web services traffic;
--Support the request-response, event-driven, and method-invocation message flows among distributed services;
--Support reliable messaging, transactional coordination, and other robust interaction patterns; and
--Support hub-and-spoke, decentralized, and peer-to-peer message exchange patterns.
Figure 6 illustrates the many functional service layers that are necessary for a full-fledged, robust SOA interoperability environment. WS-* standards and specifications have been developed in all of these layers. However, standards in some layers are more mature and widely adopted than in others. The core layers for today’s Web services are messaging and communications (SOAP); description, semantics, and policy (WSDL, XML Schema, WS-Policy); and brokering, registry, and discovery (UDDI).
Even the consensus WS-* standards—such as WSDL, SOAP, and UDDI—have not been implemented consistently and universally by all platform, application, and tool vendors. The standards provide many options that can hinder interoperability unless organizations agree on implementation profiles, such as those published by the Web Services Interoperability (WS-I) Organization.
Vendors may claim conformance with WS-I profiles for various WS-* standards. But what those vendors have actually implemented, as default settings in their products, may be another thing entirely. And nothing stops application developers and administrators from overriding the vendor defaults and implementing WS-* specifications in entirely non-standard ways, thereby frustrating any-to-any interoperability throughout their SOA environments.
TBD: SOA developed through visual modeling techniques
For practical SOA, it’s not enough simply to have conceptual clarity on what SOA means, or to have a ubiquitous Web services or ESB environment within which services can be reused, invoked, monitored, and managed. It’s also critical that developers have new visually oriented modeling tools for building distributed services upon SOA principles.
Visual models are essential for developers who need to navigate the unfamiliar seas of SOA. If you’re a developer, SOA may make you feel a tad queasy, because it’s rocking your familiar paradigms to the core.
Fundamentally, SOA is a disruptive new approach to building distributed services. SOA defines an environment within which the most ancient computing paradigm is rapidly dissolving. From the dawn of computing, we’ve developed new functionality upon and within such concepts as “platform,” “application,” and “language.” Each of these concepts has traditionally had a well-defined sphere of reference: the platform hosted the application, and the application was developed in a language. Now all of that is changing, thanks to the emergence of SOA.
The first of these ancient computing concepts to wither away will be that of the platform. This term originally applied to operating systems, and then broadened to include application servers that implement a particular development framework (such as Java 2 Platform Enterprise Edition or .NET) over one or more operating systems. However, the growth of standards-based, distributed Web services has made it clear that fewer and fewer business processes will execute entirely within the confines of a J2EE 1.3 server, or Windows Server 2003, or Linux, but will execute across them all. When all platforms share a common environment for describing, publishing, and invoking services, the notion of self-contained platforms disintegrates in favor of SOA, which is essentially a platformless service cosmos.
Likewise, another casualty of this evolution is the notion of applications as discrete functional components that execute on particular platforms. SOA is founded on the notion of virtualization. Under this paradigm, services describe abstract interfaces within standard, platform-independent metadata vocabularies such as WSDL. The underlying service functionality may be provided from components on any platform—J2EE, .NET, or otherwise—without the need to change the abstract interface. Under SOA, the application dissolves into a service that may have no fixed implementation but simply bids for on-demand networked software and hardware resources.
And programming languages are also becoming, if not entirely obsolete, something that fewer developers touch directly. MDD—which refers to visual model-driven development and code-generation tools—is at the forefront of the SOA revolution. You’re more likely these days to see a vendor boast of its ability to support visual modeling in the Unified Modeling Language (UML) than development in Java, C#, or any other declarative programming language. For complex, orchestrated, multiplatform Web services, MDD is the most effective approach for specifying, implementing, and maintaining the end-to-end logic and rules upon which the service depends.
SOA has spawned a range of new terms, such as “orchestration” and “recomposition,” to describe what it is that developers actually develop. IT professionals are increasingly defining their creations in terms of services, models, and patterns (rather than in terms of platforms, applications, and languages). The notion of MDD-orchestrated service patterns will become increasingly critical to discussions of distributed services. An orchestrated service pattern is simply a generic approach, such as “service proxying” or “service coordination,” to architecting interactions within the SOA. Every pattern defines its own abstract Web services functional elements and its own SOAP-based interactions that are executed by integration brokers within your ESB.
Welcome to the dizzying new world of SOA. Platforms are dissolving, new concepts are taking over, and Web services will become everywhere shareable and reusable. There’s so much new work to be done, and so many new tools—both conceptual and computational—within which to do it.
What still needs to be defined, developed, delivered, deployed, and demonstrated to realize SOA’s potential in the real world
--James Kobielus
SOA is clearly a work in progress. Indeed, SOA is a target state that many organizations may approach but never fully attain: maximum reuse and minimum redundancy of services throughout complex, multiplatform distributed environments. Raising the bar even higher, the industry continues to elaborate the SOA vision by stirring Web services, IT governance, the enterprise service bus (ESB), business process management (BPM), model-driven development (MDD), and other hot topics into the bubbling brew.
TBD: SOA conceptual clarity
If SOA is going to endure as a useful paradigm, the IT industry will need to continue clarifying what it means by the term and why anybody should care. Given the restlessness of the IT industry, another paradigm may some day surface to steal SOA’s fire as the architectural buzzphrase du jour.
For sure, there’s no shortage of SOA whitepapers, consultant reports, position papers, articles, courses, books, and other prescriptive guidelines on the market. Every SOA consultant has their own vision of what the term means—or should mean—as does every IT vendor that has aligned its products and services with this paradigm. For enterprise IT professionals, it’s often difficult to map conceptually between different SOA visions presented in vendor materials, consultant reports, and press articles. At the same time, distinguishing between substantive SOA discussions and marketing fluff can also be a bit tricky.
Paradigms are slippery things to standardize, but that doesn’t stop well-meaning industry groups from attempting to do just that. The Organization for the Advancement of Structured Information Standards (OASIS) is one of the principal IT standardization groups these days, and it has established two technical committees (TCs) to clarify industry understanding of SOA approaches. In April 2004, OASIS created the Electronic Business SOA (ebSOA) TC. In February 2005, OASIS chartered the SOA Reference Model (SOA-RM) TC. The overlap among these groups’ charters is considerable, but the general division of responsibilities is as follows.
OASIS’ ebSOA TC is defining a reference architecture, guidelines, and best practices for implementing SOA within B2B environments that implement ebXML standards. The TC is also defining the ongoing roadmap for OASIS’ ebXML Technical Architecture, which is currently in version 1.04. The group is attempting to align the ebXML Technical Architecture with ongoing changes to the various ebXML standards, which address such B2B requirements as service brokering, reliable messaging, orchestration, and trading partner agreements. It is also addressing how ebXML standards will evolve to integrate more thoroughly with the growing range of Web services standards being defined at OASIS, W3C, and elsewhere. The ebSOA TC has promised its first draft deliverable, an architectural specification, for July 31, 2005. An ebSOA best practice document is promised for mid-2006.
OASIS’ SOA-RM TC is defining an SOA reference model that is broader in scope than that being worked out at the ebSOA TC. The SOA-RM TC is defining an abstract reference model that can encompass ebXML, Web services, and other implementation environments within which SOA may be applied. The group will release a first draft reference model for general industry comment by the end of 2005.
One of the SOA-RM TC’s principal goals is to define a set of SOA concepts, functional elements, architectural patterns, and best practices that can be applied unambiguously within and between different implementation environments. The TC has stated that ebXML and Web services aren’t the only possible SOA implementation environments that it is addressing. Per the group’s ongoing work, core SOA abstractions include services, contracts, policies, semantics, discovery, presence, and availability. In addition, the group is drawing an important distinction between SOA use cases of varying complexity:
• Simple SOA: This involves a single shared service on which there are no requirements for reliable messaging, transactional rollback, long-running orchestration, or quality of service.
• Intermediate SOA: This involves multiple shared services that are presented to consumers through a single “root” or “façade” service; are hosted and managed within the same administrative domain; and may require transactional rollback, long-running orchestration, and/or quality-of-service behind the service root.
• Complex SOA: This involves multiple shared services that are hosted and managed within one or more administrative domains, and that involve reliable messaging, transactional rollback, long-running orchestration, and/or quality-of-service policies to be enforced in interactions among services. Relationships among services may be hierarchical, parent-child, peer-to-peer, and other models.
In addition to the work that OASIS TCs have been doing to clarify SOA, vendors of all sorts have been pitching their SOA frameworks into industry circulation. Whether all of this SOA framework-mongering has clarified anybody’s thinking on the topic, or simply added more clutter, is a point worth arguing. Usually, any given vendor’s take on SOA is closely aligned with its commercial offerings—a fact you should factor into your evaluation of its SOA recommendations. But that shouldn’t stop you from considering what vendors have to say on SOA, and on how you can realize cost savings and other SOA benefits through real-world IT solutions.
IBM, for example, has established several SOA-related initiatives that leverage its worldwide dominance in IT professional services. Many enterprises will adopt the SOA approaches of their principal integration partners, so IBM’s SOA framework will increasingly find its way into many real-world deployments.
IBM recently launched an SOA implementation framework—called Service-Oriented Modeling and Architecture (SOMA)—that encourages enterprises to use professional services for internal and external enterprise integration. Managed by the IBM Global Services organization, the SOMA framework guides enterprises in SOA planning, design, implementation, and management. IBM Global Services implements SOMA through reusable software components, best practices, and business modeling tools that it developed in engagements in many vertical markets. The company provides online and classroom SOA education courses and customizes one-on-one SOA workshops to client needs. It provides a free online assessment tool that enables businesses to evaluate their current level of SOA readiness and identify SOA priorities. And—no surprise here—IBM has integrated SOA patterns, processes, and tools into its WebSphere, Rational, and Tivoli products.
SAP, as the world’s dominant business application vendor, is also promoting SOA as a core theme for its ongoing product development initiatives and customer engagements. Many enterprises will find themselves moving deeper into SOA as SAP—their ERP vendor—adopts service orientation throughout its product suite.
SAP’s SOA focus is on its mySAP generation of packaged business applications and on the underlying NetWeaver platform. NetWeaver implements the vendor’s Enterprise Services Architecture (ESA), which encompasses a broad range of Web services standards. Through native Web services support, integration middleware, and visual MDD tools, developers can build composite applications that bridge diverse SAP and non-SAP environments. SAP is in the process of decomposing its vast monolithic R/3 and mySAP application suites into functions that can be exposed as modular business services. By 2007, all SAP application functionality will be exposed through WSDL and registered within a UDDI registry native to NetWeaver, making SAP’s product portfolio thoroughly SOA-enabled. SAP is also ramping up its SOA professional services offerings, providing a broader range of best-practices workshops and tools.
Systinet, a leading vendor of UDDI registries, has developed a framework that stresses the central role of registries in policy-based SOA. Systinet’s Governance Interoperability Framework (GIF) positions the SOA registry as the principal policy-management platform for service composition, integration, security, management, and other governance functions. Systinet has gained support for GIF from many Web services management business partners, including Actional, AmberPoint, DataPower, Layer 7, Reactivity, and Service Integrity. For several years, Systinet has been a vanguard vendor in Web services and SOA, so these partnerships should be interpreted as strong industry support for GIF as a basis for SOA governance.
To date, Systinet has not published the GIF specifications that are being implemented by the company and its partners. However, Systinet has stated that the GIF includes specifications that cover the following policy management requirements (aka governance) across multivendor SOAs:
• Governance data integration: GIF-compliant SOAs will leverage the registry as the authoritative repository and management platform for service descriptions, attributes, and policies.
• Governance control integration: GIF-compliant SOAs will implement a common framework for service event notification and response based on policies managed in the registry.
• Governance user-interface integration: GIF-compliant SOAs will provide a common view of service descriptions, attributes, and policies managed in the registry.
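Since Systinet has not published the GIF specifications, the registry-centric pattern behind these three requirements can only be sketched. The toy registry below (all class and policy names are invented, not GIF APIs) illustrates the core idea: service descriptions and policies live in one authoritative registry, and enforcement points query it rather than keeping local copies.

```python
# Minimal sketch of a registry as the authoritative store for service
# descriptions and policies. Invented names; not a GIF implementation.
class ServiceRegistry:
    def __init__(self):
        self._services = {}

    def publish(self, name, wsdl_url, policies):
        """Register a service description and its attached policies."""
        self._services[name] = {"wsdl": wsdl_url, "policies": set(policies)}

    def lookup(self, name):
        """Consumers and management tools share one view of the service."""
        return self._services[name]

    def requires(self, name, policy):
        """An enforcement point asks the registry, the system of record."""
        return policy in self._services[name]["policies"]

registry = ServiceRegistry()
registry.publish("orders", "http://example.com/orders?wsdl",
                 policies={"require-signature", "audit-log"})
print(registry.requires("orders", "require-signature"))  # True
print(registry.requires("orders", "allow-anonymous"))    # False
```

The design point is that policy changes happen in one place; gateways, monitors, and consoles all read the same record, which is the "common view" the third bullet describes.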
Infravio is a WSM and Web services registry vendor with its own SOA framework, which it calls “Intentional SOA.” Basically, Infravio’s framework is a checklist that enterprises can use to determine their requirements and readiness for SOA, Web services, Web services management, and registries. One of the most useful features of Infravio’s Intentional SOA whitepaper is that it clearly delineates the various Web services standards that will be necessary for a layered, full-function, platform-agnostic SOA. In this regard, Infravio is like many vendors whose Web services and SOA strategies are one and the same.
TBD: SOA on mature, comprehensive, ubiquitous, standards-based Web services
Truth be told, that’s not such a bad SOA strategy. Web services are the first middleware and development environment that can truly be called platform-agnostic. Web services enable more complete service virtualization, abstraction, loose coupling, and reuse than any prior environment. Web services are the preferred environment for thoroughgoing SOA. Within Web services environments, WSDL service contracts provide the principal platform-agnostic APIs for service virtualization.
In order to realize the nirvana of SOA everywhere, the following, at minimum, must happen:
• SOAP—the principal Web services middleware protocol—must be implemented natively in all new application development and integration projects.
• WSDL—the principal Web services API—must be used to expose the functionality of new applications, and to wrap existing services, applications, operating platforms, and other resources as Web services.
• UDDI—the principal Web services registry environment—must be implemented throughout your application infrastructure and all WSDL and other service metadata and policies must be published to those registries.
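To make the document-style messaging behind these requirements concrete, here is a minimal sketch, using only Python's standard library, of building and consuming a SOAP 1.1 envelope by hand. The `placeOrder` operation and its namespace are invented for illustration; real projects would generate this plumbing from WSDL with a toolkit.

```python
# Illustrative only: hand-rolling a SOAP 1.1 envelope to show the
# XML-document exchange that SOAP-based SOAs standardize on. The
# service namespace and operation are hypothetical.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
SVC_NS = "http://example.com/orders"  # hypothetical service namespace

def build_envelope(customer_id: str, sku: str) -> bytes:
    """Wrap a hypothetical placeOrder request in a SOAP envelope."""
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, f"{{{SVC_NS}}}placeOrder")
    ET.SubElement(op, f"{{{SVC_NS}}}customerId").text = customer_id
    ET.SubElement(op, f"{{{SVC_NS}}}sku").text = sku
    return ET.tostring(env)

def parse_envelope(raw: bytes) -> dict:
    """A consumer needs only the XML contract, not the provider's platform."""
    root = ET.fromstring(raw)
    op = root.find(f"{{{SOAP_NS}}}Body/{{{SVC_NS}}}placeOrder")
    return {child.tag.split("}")[1]: child.text for child in op}

msg = build_envelope("C42", "SKU-7")
print(parse_envelope(msg))  # {'customerId': 'C42', 'sku': 'SKU-7'}
```

The point of the sketch: nothing in either function reveals whether the provider runs on J2EE, .NET, or anything else, which is exactly the platform neutrality the bullets above are after.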
Of course, enterprises can implement a limited form of SOA within traditional middleware environments, such as Common Object Request Broker Architecture (CORBA) and Distributed Component Object Model (DCOM), relying on these environments’ native service contract, registration, location, and binding interfaces. Indeed, many of the SOA case studies we describe in this Network World special section rely on older middleware approaches in addition to Web services. However, those pre-Web services SOA implementations are tightly coupled to the native object models, methods, and application programming interfaces of particular operating and application platforms. Consequently, changes to the underlying, tightly coupled platforms can disrupt interoperability with consuming services.
Ubiquitous adoption of Web services is a necessary, but not sufficient, condition for universal SOA. There are SOA-friendly and SOA-unfriendly ways to implement Web services. With Web services-based SOAs, developers have the option of coupling services either loosely or tightly, based on whether they design WSDL interfaces to correspond closely with service implementations—though loose coupling is the best practice for flexible SOAs. Some Web services implementations may introduce tight coupling with underlying platforms, but, generally, Web services encourage loose coupling via the following service design practices:
• Define WSDL-based service APIs so that these interfaces don’t contain types, collections, or other artifacts specific to the platform—such as J2EE or .NET—on which the service is implemented;
• Expose service functionality through standard document-oriented approaches, especially SOAP’s Document/Literal style interface;
• Create self-describing XML-based messages/documents that convey context information pertaining to ongoing interactions among services.
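The first practice above can even be made mechanical. The toy contract "linter" below scans a WSDL/schema fragment for namespace hints that betray platform-specific types, the kind of leakage older RPC/encoded toolkits introduced; the hint list and sample fragments are illustrative, not exhaustive.

```python
# A toy linter for platform-type leakage in service contracts.
# The namespace hints below are illustrative examples, not a
# complete or authoritative list.
PLATFORM_NS_HINTS = ("java:", "clr-namespace:", "schemas.microsoft.com/clr")

def leaks_platform_types(contract_text: str) -> list:
    """Return every platform-specific namespace hint found in a contract."""
    return [hint for hint in PLATFORM_NS_HINTS if hint in contract_text]

# A tightly coupled contract leaks a Java collection type; a loose,
# document-oriented contract references only its own schema types.
tight = '<xsd:import namespace="java:java.util.HashMap"/>'
loose = '<xsd:element name="order" type="tns:OrderDocument"/>'
print(leaks_platform_types(tight))  # ['java:']
print(leaks_platform_types(loose))  # []
```

A check like this, run against every published WSDL, is one cheap way to keep a team honest about the loose-coupling practices listed above.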
One of the drawbacks of a purely Web services-based approach to SOA is that some of the most critical “WS-*” standards have not yet been finalized, ratified, or adopted broadly by vendors and users. According to a recent IDG Research Services Group survey, IT professionals ranked incomplete and immature Web services standards as one of the top inhibitors of SOA implementations in their companies. Other significant SOA inhibitors include the immaturity of solutions that implement Web services standards and limited developer support for these standards.
The WS-* “stack,” like SOA, is a work in progress. It is not yet a full-featured alternative to older object-brokering and message-oriented middleware protocols. And it is not a unified set of specifications under single authorship, management, or governance, and probably will never become such. It is being defined by diverse organizations, including OASIS, the World Wide Web Consortium (W3C), and various vendor partnerships.
Sharing and reusing Web services is complicated by the fact that many platforms implement diverse versions of various WS-* specifications for federated identity, security, reliable messaging, event notification, publish and subscribe, and other critical infrastructure functions. Consequently, IT professionals will continue grappling with diverse platform-specific implementations of WS-* specifications until vendors converge on consistent implementations of common standards in various layers. In this regard, then, the current WS-* stack—as implemented in real products and tools--doesn’t yet support truly platform-independent SOA.
There is considerable industry disagreement and rivalry regarding which WS-* specifications should prevail in many areas, such as federated identity, reliable messaging, event notification, and transactions. And even in functional areas where the industry has already converged on principal WS-* standards, it may take years before most vendors have implemented those standards in their products and have worked through the interoperability issues associated with diverse implementations of agreed-upon standards.
Until such time as the industry converges on universal, comprehensive, stable WS-* standards in various functional layers, Web services—as a complete SOA stack—will remain immature. Some of the most noteworthy areas of WS-* industry rivalry are as follows:
• Federated identity: In March 2005, OASIS ratified Security Assertion Markup Language (SAML) 2.0 as a standard, incorporating specifications that had been contributed by the Liberty Alliance initiative. The previous SAML versions—1.0 and 1.1—have achieved considerable commercial adoption throughout the federated identity marketplace, while the Liberty Alliance specifications have seen some uptake. Another competing specification—the Microsoft-developed WS-Federation—remains in the industry picture, despite having achieved far less commercial adoption and not being managed by any standards group. WS-Federation, released in July 2003, covers much the same functional range as SAML 2.0. However, SAML 2.0 and WS-Federation are entirely non-interoperable specifications. WS-Federation doesn’t build on or extend SAML, and it isn’t directly interoperable with SAML or the Liberty interfaces. WS-Federation is kept alive through Microsoft’s decision to build its Active Directory Federation Services feature of “Longhorn” on the specification.
• Reliable messaging: In November 2004, OASIS ratified WS-Reliability 1.1 as a standard. Produced by OASIS’ Web Services Reliable Messaging Technical Committee (WSRM TC), the standard defines a protocol for asynchronous or synchronous message exchange with guaranteed, once-only, ordered delivery, even in the presence of component, system, or network failures. It defines mechanisms for message persistence, message acknowledgement and resending, duplicate message elimination, ordered message delivery, and delivery status awareness for sender and receiver applications. However, WS-Reliability is not the only WS-* specification to achieve considerable industry support. An industry group dominated by Microsoft has implemented the rival WS-ReliableMessaging specification, which addresses the same core requirements and features as WS-Reliability.
• Transactional coordination: In October 2003, Sun, Oracle, and other vendors submitted the WS Composite Application Framework (WS-CAF) to OASIS. WS-CAF defines a standard framework for transactional coordination of long-running business processes within Web services environments. WS-CAF consists of three specifications (none of which has yet been finalized or ratified within the OASIS WS-CAF TC): WS-Context, WS-Coordination Framework, and WS-Transaction Management. Meanwhile, in September 2003, WS-Coordination was published by Microsoft, IBM, and BEA. WS-Coordination provides an extensible framework for defining transaction coordination patterns and protocols. At the same time, Microsoft, IBM, and BEA also published WS-AtomicTransaction, which references WS-Coordination and defines an “atomic” (i.e., ACID) coordination type. Several months later, these vendors published the WS Business Activity Framework (WS-BAF), which defines protocols for transactional coordination of long-running business activities. Since that time, Microsoft, IBM, and BEA have not submitted WS-Coordination, WS-AtomicTransaction, or WS-BAF to OASIS or any other standards group.
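Standards rivalry aside, the delivery guarantees both reliable-messaging camps promise (guaranteed, once-only, ordered delivery) rest on two classic mechanisms: per-message IDs for duplicate elimination and sequence numbers for reordering. The sketch below is a toy receiver illustrating those mechanisms, not an implementation of WS-Reliability or WS-ReliableMessaging.

```python
# Toy receiver demonstrating once-only, ordered delivery:
# message IDs catch retransmissions; sequence numbers hold
# out-of-order arrivals until the gap is filled.
class ReliableReceiver:
    def __init__(self):
        self._seen = set()    # message IDs already accepted (dedupe)
        self._pending = {}    # out-of-order messages held back
        self._next_seq = 1    # next in-order sequence number
        self.delivered = []   # what the application actually sees

    def receive(self, msg_id, seq, payload):
        if msg_id in self._seen:
            return "duplicate-ack"   # retransmission: ack it, drop it
        self._seen.add(msg_id)
        self._pending[seq] = payload
        # Release the longest in-order run now available.
        while self._next_seq in self._pending:
            self.delivered.append(self._pending.pop(self._next_seq))
            self._next_seq += 1
        return "ack"

rx = ReliableReceiver()
rx.receive("a", 1, "first")
rx.receive("c", 3, "third")   # held back until seq 2 arrives
rx.receive("b", 2, "second")
rx.receive("a", 1, "first")   # retransmission, eliminated
print(rx.delivered)  # ['first', 'second', 'third']
```

Both rival specifications also add message persistence and acknowledgement protocols on the wire; the sketch shows only the receiver-side bookkeeping that makes the guarantees possible.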
Of course, the existence of rival WS-* specifications doesn’t doom Web services as a basis for SOA. Sharing and reuse can still proceed in a heterogeneous WS-* stack and application infrastructure. To address this stubborn reality, a flexible enterprise service bus (ESB) infrastructure can provide “impedance matching” between diverse WS-* standards, versions, and implementations. ESB products come from a broad range of middleware vendors, including Cape Clear Software, Fiorano Software, IBM, IONA, Sonic Software, Systinet, TIBCO Software, and webMethods. Depending on the vendor and product, ESB solutions provide any and all of the following features for a robust SOA infrastructure:
--Wrap legacy environments with WS-* interfaces, thereby supporting legacy integration;
--Provide any-to-any interoperability through integration brokers, which provide orchestration, transformation, and routing services among distributed services, applications, data repositories, and other resources;
--Support content-aware monitoring and enforcement of performance, security, and other enterprise policies pertaining to Web services traffic;
--Support the request-response, event-driven, and method-invocation message flows among distributed services;
--Support reliable messaging, transactional coordination, and other robust interaction patterns; and
--Support hub-and-spoke, decentralized, and peer-to-peer message exchange patterns
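The "any-to-any interoperability through integration brokers" item above can be sketched concretely: a broker inspects message content, applies a matching transformation, and only then hands the message to an endpoint, so producers and consumers never address each other directly. Rule shapes and endpoint names below are invented for illustration, not any vendor's ESB API.

```python
# Toy content-based integration broker: the first rule whose
# predicate matches the message wins; its transform normalizes
# the payload before dispatch. All names are invented.
class Broker:
    def __init__(self):
        self._routes = []  # list of (predicate, transform, endpoint)

    def add_route(self, predicate, transform, endpoint):
        self._routes.append((predicate, transform, endpoint))

    def dispatch(self, message: dict):
        """Return (endpoint, transformed message) for the first match."""
        for predicate, transform, endpoint in self._routes:
            if predicate(message):
                return endpoint, transform(message)
        raise ValueError("no route matches message")

broker = Broker()
broker.add_route(lambda m: m.get("type") == "order",
                 lambda m: {**m, "normalized": True},
                 "queue://fulfillment")
broker.add_route(lambda m: True, lambda m: m, "queue://dead-letter")  # catch-all

endpoint, msg = broker.dispatch({"type": "order", "sku": "SKU-7"})
print(endpoint)  # queue://fulfillment
```

Because senders see only the broker, swapping a backend service, or bridging two incompatible WS-* versions behind it, changes a routing rule rather than every consumer.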
Figure 6 illustrates the many functional service layers that are necessary for a full-fledged, robust SOA interoperability environment. WS-* standards and specifications have been developed in all of these layers. However, standards in some layers are more mature and widely adopted than in others. The core layers for today’s Web services are messaging and communications (SOAP); description, semantics, and policy (WSDL, XML Schema, WS-Policy); and brokering, registry, and discovery (UDDI).
Even the consensus WS-* standards—such as WSDL, SOAP, and UDDI—have not been implemented consistently and universally by all platform, application, and tool vendors. The standards provide many options that can hinder interoperability unless organizations agree on implementation profiles, such as those published by the Web Services Interoperability (WS-I) Organization.
Vendors may claim conformance with WS-I profiles for various WS-* standards. But what those vendors have actually implemented, as default settings in their products, may be another thing entirely. And nothing stops application developers and administrators from overriding the vendor defaults and implementing WS-* specifications in entirely non-standard ways, thereby frustrating any-to-any interoperability throughout their SOA environments.
TBD: SOA developed through visual modeling techniques
For practical SOA, it’s not enough simply to have conceptual clarity on what SOA means, or to have a ubiquitous Web services or ESB environment within which services can be reused, invoked, monitored, and managed. It’s also critical that developers have new visually oriented modeling tools for building distributed services upon SOA principles.
Visual models are essential for developers who need to navigate the unfamiliar seas of SOA. If you’re a developer, SOA may make you feel a tad queasy, because it’s rocking your familiar paradigms to the core.
Fundamentally, SOA is a disruptive new approach to building distributed services. SOA defines an environment within which the most ancient computing paradigm is rapidly dissolving. From the dawn of computing, we’ve developed new functionality upon and within such concepts as “platform,” “application,” and “language.” Each of these concepts has traditionally had a well-defined sphere of reference: the platform hosted the application, and the application was developed in a language. Now all of that is changing, thanks to the emergence of SOA.
The first of these ancient computing concepts to wither away will be that of the platform. This term originally applied to operating systems, and then broadened to include application servers that implement a particular development framework (such as Java 2 Platform Enterprise Edition or .NET) over one or more operating systems. However, the growth of standards-based, distributed Web services has made it clear that fewer and fewer business processes will execute entirely within the confines of a J2EE 1.3 server, or Windows Server 2003, or Linux, but will execute across them all. When all platforms share a common environment for describing, publishing, and invoking services, the notion of self-contained platforms disintegrates in favor of SOA, which is essentially a platformless service cosmos.
Likewise, another casualty of this evolution is the notion of applications as discrete functional components that execute on particular platforms. SOA is founded on the notion of virtualization. Under this paradigm, services expose abstract interfaces described in standard, platform-independent metadata vocabularies such as WSDL. The underlying service functionality may be provided by components on any platform—J2EE, .NET, or otherwise—without the need to change the abstract interface. Under SOA, the application dissolves into a service that may have no fixed implementation but simply bids for on-demand networked software and hardware resources.
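That virtualization idea is easy to sketch outside of Web services syntax. In the Python sketch below, a structural Protocol plays the role that a WSDL contract plays in a real SOA: the consumer binds only to the abstract interface, and either invented provider (stand-ins for, say, a J2EE and a .NET backend) can serve it without the contract changing.

```python
# Sketch of service virtualization: one abstract contract, many
# interchangeable implementations. Provider names are invented.
from typing import Protocol

class QuoteService(Protocol):
    """The abstract, platform-free contract (the WSDL stand-in)."""
    def quote(self, sku: str) -> float: ...

class LegacyMainframeProvider:
    """Hypothetical implementation A, wrapping an older platform."""
    def quote(self, sku: str) -> float:
        return 10.0

class NewPlatformProvider:
    """Hypothetical implementation B on a different platform."""
    def quote(self, sku: str) -> float:
        return 10.0

def consumer(svc: QuoteService, sku: str) -> float:
    # Written once, against the contract only; the backend can be
    # swapped without touching this code.
    return svc.quote(sku)

print(consumer(LegacyMainframeProvider(), "SKU-7"))  # 10.0
print(consumer(NewPlatformProvider(), "SKU-7"))      # 10.0
```

The consumer never names a platform, which is the whole point: the service "has no fixed implementation," only a contract.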
And programming languages are also becoming, if not entirely obsolete, something that fewer developers touch directly. MDD—which refers to visual model-driven development and code-generation tools—is at the forefront of the SOA revolution. You’re more likely these days to see a vendor boast of its ability to support visual modeling in the Unified Modeling Language (UML) than development in Java, C#, or any other programming language. For complex, orchestrated, multiplatform Web services, MDD is the most effective approach for specifying, implementing, and maintaining the end-to-end logic and rules upon which the service depends.
SOA has spawned a range of new terms—such as “orchestration” and “recomposition”—to describe what it is that developers actually develop. IT professionals are increasingly defining their creations in terms of services, models, and patterns (rather than in terms of platforms, applications, and languages). The notion of MDD-orchestrated service patterns will become increasingly critical to discussions of distributed services. An orchestrated service pattern is simply a generic approach—such as “service proxying” or “service coordination”—to architecting interactions within the SOA. Every pattern defines its own abstract Web services functional elements and its own SOAP-based interactions that are executed by integration brokers within your ESB.
Welcome to the dizzying new world of SOA. Platforms are dissolving, new concepts are taking over, and Web services will become shareable and reusable everywhere. There’s so much new work to be done, and so many new tools—both conceptual and computational—with which to do it.
Tuesday, September 20, 2005
fyi Social Software
All:
Pointer to article:
http://www.computerworld.com/careertopics/careers/recruiting/story/0,10801,104653,00.html
Kobielus kommentary:
This sentence caught my eye: “Social networking technology helps connect friends, business partners and others using a variety of tools such as search and data mining.”
“And others...tools…search…data mining.” I’ll bet “social software” is a major tool for identity theft, facilitated by identity mining that’s enabled through indiscriminate linking at the behest of faux friends and business acquaintances. LinkedIn, ZeroDegrees, Ryze, etc.—I’m not singling any of them out—the whole phenomenon is highly suspect.
What I’ve noticed about “social software” is that, after the first few weeks/months of everybody linking to people they actually know, you start to receive a greater volume of invites from people you’ve never heard of. Which, of course, assumes that these people are using their actual names. All of which makes you wonder what these “people” expect to get from linking to you, and you to them.
But of course…your resume, with all your personal data arranged neatly, spelled correctly, and packed efficiently into a scannable data structure. I’ve not heard of any incidents of identity theft in the world of “social software,” but it just seems like it’s ripe for the plucking. Imagine all of the potential passwords—and password clues—available to the industrious identity miner who’s latched onto your resume. Among other things, they have your name, address, phone numbers, e-mail addresses, possibly your IM addresses, and so forth. If nothing else, great fodder for the spammers. Or the spammers in league with the identity thieves.
Folks should think long and hard before putting out another resume on one of these “services.” There are ways to network yourself that don’t expose you to rampant identity mining. Cold calls (or “cold e-mails”), for example. Yeah, that’s a lonely game, but it beats the sort of bogus connections you forge on these social networks.
Life’s too full of vacuous relationships already. Real people who suck you dry and never spend any quality time. Why add to the emptiness?
Jim
Thursday, September 15, 2005
imho Cisco's me-too AON strategy
One of the truisms of modern networks is that they’re growing inexorably more “intelligent.” Another widespread article of telecommunications faith is that application-oriented intelligence must be introduced at the network’s edges, not into its core traffic control nodes such as IP routers and switches. According to the orthodox view, new application functionality springs up more rapidly when the intervening network functions simply as a dumb, general-purpose communication channel.
Oddly, Cisco—the bluest of the blue-chip IP-router vendors—seems to be challenging this orthodoxy. Cisco recently announced a new family of products that it claims embed greater intelligence in networks. Under its Application-Oriented Networking (AON) strategy, the vendor will, by the end of this year, begin to offer content-filtering blades that can be configured into its routers and switches. Cisco’s AON blades will handle application and middleware functionality, such as the ability to route, transform, cache, compress, track, and apply security and other policies to XML documents and other protocol payloads.
Essentially, AON devices are content-aware network appliances. They are important adjuncts to routers, switches, and other traditional layer-three traffic management nodes, but they are not a new phenomenon in the marketplace. Cisco’s bold AON positioning can’t hide the fact that it’s essentially pursuing a defensive, me-too strategy. When Cisco finally announced these products earlier this year, many of its traditional rivals—such as Juniper and F5 Networks—had long since beaten it to this growing niche.
Actually, the true pioneers in this niche are the Web services management (WSM) vendors, who, for the past several years, have been offering proxies, agents, and appliances to filter and control the flow of XML/SOAP traffic in keeping with centrally managed rules and policies. In the WSM space, some vendors specialize in providing performance-optimized, hardware-based XML-processing appliances. XML appliance vendors such as DataPower, Sarvega (recently acquired by Intel), and NetScaler (recently acquired by Citrix) represent the closest direct competitors to Cisco’s emerging AON family.
So, Cisco is not the first mover in the content-filtering appliance niche—not by a long stretch. But you wouldn’t know that by reading its marketing materials. The vendor describes AON as the “first network-embedded intelligent message-routing system, integrating application message-level communication, visibility, and security.”
This is classic vendor self-aggrandizement, designed to create the appearance of leadership when the reality is the opposite.
One of the first things that you notice about Cisco’s AON positioning is that it seems to hinge on a Cisco-centric notion of what it means for a “message-routing system” to be both “intelligent” and “network-embedded.” How does Cisco classify integration brokers and message-oriented middleware, which are deployed into networks, filter application messages, perform policy-driven message routing, and possess intelligence? Does “network-embedded” mean, for Cisco, that the message-routing intelligence must be deployed onto traditional layer-three platforms such as routers and switches?
Could it be that Cisco is urging customers to upgrade traditional routers and switches to newer AON-enabled models that route traffic based both on IP addresses and on the contents of XML/SOAP and other middleware messages?
Not really. Cisco’s not telling customers that IP routers and switches are obsolete. It’s not integrating AON technology into the core of its existing layer-three devices. And it certainly isn’t recommending that customers do application-layer filtering of all IP packets in all routers on the network backbone. That would create a massive performance hit on all traffic.
So, down deep and despite its AON positioning, Cisco isn’t really recommending that customers put more application or middleware intelligence into the network backbone. The company is simply recommending that AON blades—which provide this intelligence—be deployed at the network edge: as proxies, gateways, co-processors, and branch-office routers.
In other words, Cisco AON—for all its promise—isn’t a revolutionary product strategy or architecture. It isn’t challenging the orthodox industry view on the appropriate deployment of application and middleware intelligence in the network. And it isn’t expected to cannibalize or replace Cisco’s traditional market for IP routers and switches.
Nevertheless, Cisco’s AON strategy validates and gives greater visibility to a market that’s been emerging for several years: XML-aware network appliances. Cisco’s move into this market signals a growing trend: the core IP router network is evolving to embrace more content-aware XML/SOAP filtering, routing, and traffic management functions, which are provided by specialized appliances.
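The core function these XML-aware appliances perform—inspecting a message’s content and picking a destination from a policy table—can be sketched in a few lines. This is an illustrative sketch only; the routing table, element names, and URLs are hypothetical and bear no relation to Cisco’s actual AON configuration model.

```python
import xml.etree.ElementTree as ET

# Hypothetical policy table: destination chosen by the payload's root
# element (namespace-stripped). All names and URLs are made up.
ROUTES = {
    "PurchaseOrder": "http://erp.example.com/orders",
    "Invoice": "http://finance.example.com/invoices",
}
DEFAULT_ROUTE = "http://fallback.example.com/inbox"

def route_message(xml_payload: str) -> str:
    """Pick a destination URL by inspecting the XML payload's root tag."""
    root = ET.fromstring(xml_payload)
    tag = root.tag.split("}")[-1]  # drop any XML namespace prefix
    return ROUTES.get(tag, DEFAULT_ROUTE)

print(route_message("<PurchaseOrder><item>widget</item></PurchaseOrder>"))
# -> http://erp.example.com/orders
```

A real appliance does this at wire speed in hardware, and typically matches on XPath expressions rather than just the root element, but the policy-lookup pattern is the same.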
These are the specialized roles into which network managers should consider deploying Cisco AON appliances and similar devices from rival vendors.
fyi Terrorists Don't Do Movie Plots
All:
Pointer to article:
http://www.wired.com/news/business/0,1367,68789,00.html
Kobielus kommentary:
Schneier’s hitting on something I learned in poli-sci class a long time ago. Politics is driven by cultural agendas, and cultural agendas are driven by the latest demons, disasters, and other discontinuities to wrack and rock the body politic. Nothing ever truly moves in a straight line from venerated past to noble future. People—including, especially, politicians—set their agendas to respond to the threat that’s freshest in their nervous system. And those threats are just as likely to be conjured by Steven Spielberg as by Osama bin Laden.
I agree with Schneier’s broad recommendations from this article, which I’ll quote, number, and then comment on individually:
1. “We need to defend against the broad threat of terrorism.”
2. “Security is most effective when it doesn't make arbitrary assumptions about the next terrorist act.”
3. “We need to spend more money on intelligence and investigation: identifying the terrorists themselves, cutting off their funding, and stopping them regardless of what their plans are.”
4. “We need to spend more money on emergency response: lessening the impact of a terrorist attack, regardless of what it is.”
5. “And we need to face the geopolitical consequences of our foreign policy and how it helps or hinders terrorism.”
#1 is about as vague and useless a prescription as you can dish out. It’s like saying we need to defend ourselves against the broad threat of bad weather in all varieties: hurricanes, tornados, hail, lightning, etc.
#2 is a tautology—true by definition. My dictionary defines “arbitrary” as “existing or coming about seemingly at random or by chance or as a capricious and unreasonable act of will.” Yes, any approach to anything is more effective when it’s based on reasonable, not unreasonable, assumptions about that problem domain. So security is, in fact, almost always more effective when it’s based on a solid, valid reason for securing something in a particular way.
#3 is a good actionable statement of what we need to do to deal with particular identifiable terrorists. Of course, it assumes that the “terrorists” are a discrete class of individuals who can in fact be targeted and squelched, thereby alleviating the “terrorism” problem. That’s like assuming that vandals are a discrete class of criminally inclined individuals, when, in fact, vandalism is a pattern of criminal behaviors into which various individuals (e.g., otherwise non-threatening teenagers with too much time on their hands) fall from time to time. Yeah, of course terrorism is far more serious than most acts of teenage vandalism. But terrorism is a pattern of criminal activity in which various individuals (including those who engage in it to further political aims that, should their side ultimately win, would be lauded by their historians as noble and nation-building) may participate for various reasons. Yeah, yeah—I know—the whole “terrorists” vs. “freedom fighters” semantic tug-of-war.
#4 is essentially the fatalistic approach that we must take when terrorism/terrorists raise its/their ugly heads and blow things up. No defense is perfect. Determined bad people can and will persist until they make themselves heard. Casualties are inevitable—perhaps even among ourselves. Evacuation, rescue, healing, burying, mourning, cleanup, recovery, and restoration are never-ending responses that we must collectively be prepared to activate. How resilient is our society, economy, political system? How hard a hit can we take to the solar plexus and continue to forge ahead? One of the first things that I thought after 9/11 is that this is a test of US resilience. Nasty as the attacks were, we’re not the first nation to be afflicted by terrorism (or floods, earthquakes, etc.), and more catastrophes will certainly come, as they always have. One of the things I noticed is that our well-developed IT infrastructure provided a shock absorber when the airports closed temporarily: we resorted to phone, e-mail, IM, videoconferencing, etc., to continue our lives and businesses after a fashion, albeit with serious disruption. Being a fatalist who has weathered the loss of both parents at a young age, I knew full well that the most important consideration is keeping your soul on automatic pilot through such dark days, which will certainly pass, and putting your old habits on hold (perhaps permanently) as you build workarounds into your life.
#5 is a good general heads-up for US government and business to deal with the occasionally nasty consequences of being a huge powerful country with entanglements, investments, encampments, friends, and enemies seemingly everywhere. Whether or not each of us Americans personally agrees with the policies of this or past US governments, or businesses, we inherit both their halos and pitchforks, in the eyes of others.
Fact is: Those who are inclined to love you for who you are (or what you represent to them) will love. Others will hate you. Just because you’re you. Regardless of what you personally do or have done.
So be aware. Disaster is arbitrary.
Jim
Sunday, September 11, 2005
poem Playa
PLAYA
Pleasure—our seaside companion, perched as worrying wings on the rail of the porch of the house over azure—precipitous—ah with the fall and the flutter, as the leaf grieves its summer, grips its slumber as sure as the winters melt, poles precess, and oceans never freeze—smooth revolution—don’t we need these assurances—levels beyond which tides won’t climb, within which we withstand the pressure of waves at play.
Wednesday, September 07, 2005
fyi Group as User: Flaming and the Design of Social Software
All:
Pointer to blogpost:
http://www.shirky.com/writings/group_user.html
Kobielus kommentary:
One of Clay Shirky’s consistently brilliant musings on the social context within which software is created, consumed, subsumed, and doomed.
I’m curious why he chose to treat flaming as an evil that we need to prevent, dampen, neutralize, or punish in online forums (mailing lists, blogs, wikis, etc.). It seems to me that flaming is one of the indicators that a forum is still alive, kicking, and stirring up something of interest. Something that might make me want to revisit and participate in the forum.
The death of online forums isn’t flaming—it’s emptiness, pointlessness, and boredom. Most forums go through a predictable lifecycle, from fresh frenzied startup activity, drifting and shifting of subject matter in keeping with participant interest, through churn and attrition of membership, through lagging participation, and finally to stasis, decay, and death. Maybe Shirky should recognize that few interest-centric social groupings—such as mailing lists—are permanent and sustainable. Maybe we shouldn’t try to engineer them for permanence, beyond their natural life expectancy.
Can Shirky suggest any technological fix to keep forums consistently fresh and interesting, month after month, year after year? I doubt it. Forum participants must somehow hit on a magic formula that rewards repeated visits.
Each of us who maintains a blog—a private forum with public access—wrestles with that problem continually. How can I keep posting, and keep posting interesting stuff, stuff that’s interesting to me and possibly to a hard core of folks who reward me with their attention, and, possibly, feedback and friendship and flames? Yes, flames. Screaming and cursing at me tells me I’ve struck a nerve, and that you care enough to let me know. Flames are destructive only if the flamed parties are thereby disinclined to post further.
If my goal is to put my ideas out there, then flames will almost inevitably follow, because others might find my boldness offputting. When you’re a pundit (and that in fact is what bloggers are), boldness is an essential component of your value proposition. Nobody wants to read wishy-washy punditry.
Bold pundits attract bold antagonists. I was taught that in j-school, by a colleague named Anne St. Germaine (Anne: you out there?), who was herself on the receiving end of a flame (actually, it was a physical rock directed at a window of her apartment) in response to an opinion she had published.
And I've experienced a less violent but nonetheless steady stream of flames over the years for things I've published in Network World, Burton Group, my blog, etc. The most noteworthy was a phone call from someone who, incidentally, invented the blog medium, who took particular exception to one of my published statements about something else that he regarded himself as the undisputed paramount inventor of.
He used an obscenity to characterize my work. Which made him memorable.
Jim
Friday, September 02, 2005
imho The Great New Orleans Flood, global warming, blah blah
All:
Now for a rambling disquisition on this, that, these, and those.
This tragic levee break and flood in New Orleans, and all the flooding and other hurricane damage in Mississippi, Alabama, and Florida. They're "used to" hurricanes, obviously, but this was clearly "The Big One" they've been fearing. Bigger than Camille in 1969.
And for New Orleans, the big one they've been abstractly bracing themselves for for years and years. And now it's just a real nightmare. Essentially, the totalling of a major American city, on a par with the 1906 San Francisco earthquake/fire and the 1871 Chicago fire. Pretty much an entire large city has been rendered uninhabitable in a sudden natural catastrophe. Such are the perils of living below sea level.
There's no question whether we/they'll drain, mop up, and rebuild New Orleans. Of course, we/they will. Any human settlement that's positioned so centrally on a maritime crossroads--or any crossroads--will get built and rebuilt and rebuilt again, new shiny layer upon old shattered layer, and so on and so forth. Hasn't archaeology demonstrated that central fact of human existence conclusively? Floods, earthquakes, fires, wars, etc.--the smashed crockery of human habitations will fairly quickly be swept away and new crockery set on new shelves. The "New New Orleans" will look a bit different from the old--it may lose some of the funky charm of the ancient (for North America) Cajun/Creole Big Easy. But it will be built back.
Which brings me to a semi-related thought. This flood was obviously due to a transient weather phenomenon: a powerful hurricane that swept up from the Gulf of Mexico. What's not-so-transient is the gradual warming of the earth's climate, and an increased frequency and ferocity of hurricanes and other storms may be the inevitable consequence of that long-term trend.
As we know, another consequence of global warming is the melting of the glaciers--north and south. Which leads to rising of sea level--everywhere. People who live at or below sea level will start to realize that their days in their current digs are numbered, most likely, unless they figure out how to rebuild their cities on telescoping stilts or whatever. A sort of Jetsons scenario. Try to stay elevated above the rising seas. Glub glub.
A while back, I did a quick mental survey of the earth's coastlines, trying to figure out what currently-at/below-sea-level areas will be rendered uninhabitable by global warming, glacier melt, and glub glub. Off the top of my head, I listed: New Orleans and southern Louisiana, the Netherlands, Israel/Palestine, Bangladesh, and Micronesia. Oh, and of course, New York City; Venice, Italy; Jakarta, Indonesia; and other low-lying coastal cities. Or they're going to have to build new floodwalls or radically raise/strengthen existing levees/dikes/etc to keep out the ocean.
If you think about it, the contours of coastlines have always defined the geographic context for civilization. Think of the Bering land bridge, long gone vestige of the previous ice age, and how it temporarily provided a conduit for settlement of the western hemisphere. Think of the end of the last ice age, and how the rising of the seas inundated the Black Sea, causing the diaspora of Indo-European peoples (whose ancestors probably hailed from the prior shores of that sea) across Eurasia. Think of how humanity appears to have emigrated from Africa first along the coast of the Indian Ocean, in boats and/or on foot, and settled Australia (yes...archaeological and DNA evidence indicate that the native Australians were among the first out-of-Africa emigrants, 40,000 years ago, and they undoubtedly got there via coast-hugging boat).
My feeling is that we must recognize the pivotal role of sea-level change in the shaping of civilization. Every human society's civilization. Essentially, the best way to remind ourselves of that role is to date all years from the putative recession of the previous ice age glaciation, hence the latest (in historic time) rise in sea levels everywhere.
Scientists have determined that the last ice age ended around 10,000-12,000 years ago. Let's, for the sake of convenience, peg that ice-age end at 12,000 years ago. If we do, then renumbering the years is as simple as prepending a "1" to the front of each year.
So, today, in PG (post-glacier) time, it's 2 September, 12005.
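The conversion above is simple enough to sketch: pegging the end of the last ice age at 12,000 years ago (i.e., 10000 BCE) means a PG year is just the CE year plus 10,000, which for four-digit CE years is the same as prepending a "1":

```python
def to_post_glacier(ce_year: int) -> int:
    # Pegging the ice-age end at 10000 BCE, PG year = CE year + 10000.
    # For any four-digit CE year, this is the same as prepending a "1".
    return ce_year + 10000

print(to_post_glacier(2005))  # -> 12005
```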
Remember. We're in post-glacier time. Keeps everything in perspective. Post-glacier time may also be, and probably is, pre-glacier time.
It's also post-flood time. And pre-flood time.
Depending on which end of the linked-disaster scenario you focus on.
Jim
Now for a rambling disquisition on this, that, these, and those.
This tragic levee break and flood in New Orleans, and all the flooding and other hurricane damage in Mississippi, Alabama, and Florida. They're "used to" hurricanes, obviously, but this was clearly "The Big One" they've been fearing. Bigger than Camille in 1969.
And for New Orleans, the big one they've been abstractly bracing themselves for for years and years. And now it's just a real nightmare. Essentially, the totalling of a major American city, on a par with the 1906 San Francisco earthquake/fire and the 1871 Chicago fire. Pretty much an entire large city has been rendered uninhabitable in a sudden natural catastrophe. Such are the perils of living below sea level.
There's no question whether we/they'll drain, mop up, and rebuild New Orleans. Of course, we/they will. Any human settlement that's positioned so centrally on a maritime crossroads--or any crossroads--will get built and rebuilt and rebuilt again, new shiny layer upon old shattered layer, and so on and so forth. Hasn't archaeology demonstrated that central fact of human existence conclusively. Floods, earthquakes, fires, wars, etc.--the smashed crockery of human habitations will fairly quickly be swept away and new crockery set on new shelves. The "New New Orleans" will look a bit different from the old--it may lose some of the funky charm of the ancient (for North America) Cajun/Creole Big Easy. But it will be built back.
Which brings me to a semi-related thought. This flood was obviously due to a transient weather phenomenon: a powerful hurricane that swept up from the Gulf of Mexico. What's not-so-transient is the gradual warming of the earth's climate, and an increased frequency and ferocity of hurricanes and other storms may be the inevitable consequence of that long-term trend.
As we know, another consequence of global warming is the melting of the glaciers--north and south. Which leads to rising of sea level--everywhere. People who live at or below sea level will start to realize that their days in their current digs are numbered, most likely, unless they figure out how to rebuild their cities on telescoping stilts or whatever. A sort of Jetsons scenario. Try to stay elevated above the rising seas. Glub glub.
A while back, I did a quick mental survey of the earth's coastlines, trying to figure out what currently-at/below-sea-level areas will be rendered uninhabitable by global warming, glacier melt, and glub glub. Off the top of my head, I listed: New Orleans and southern Louisiana, the Netherlands, Israel/Palestine, Bangladesh, and Micronesia. Oh, and of course, New York City; Venice, Italy; Jakarta, Indonesia; and other low-lying coastal cities. Or they're going to have to build new floodwalls or radically raise/strengthen existing levees/dikes/etc to keep out the ocean.
If you think about it, the contours of coastlines have always defined the geographic context for civilization. Think of the Bering land bridge, long-gone vestige of the previous ice age, and how it temporarily provided a conduit for settlement of the western hemisphere. Think of the end of the last ice age, and how the rising of the seas inundated the Black Sea, causing the diaspora of Indo-European peoples (whose ancestors probably hailed from the prior shores of that sea) across Eurasia. Think of how humanity appears to have emigrated from Africa first along the coast of the Indian Ocean, in boats and/or on foot, and settled Australia (yes...archaeological and DNA evidence indicate that the native Australians were among the first out-of-Africa emigrants, 40,000 years ago, and they undoubtedly got there via coast-hugging boat).
My feeling is that we must recognize the pivotal role of sea-level change in the shaping of civilization. Every human society's civilization. Essentially, the best way to remind ourselves of that role is to date all years from the putative recession of the previous ice age glaciation, hence the latest (in historic time) rise in sea levels everywhere.
Scientists have determined that the last ice age ended around 10,000-12,000 years ago. Let's, for the sake of convenience, peg that ice-age end at 12,000 years ago. If we do, then renumbering the years is as simple as prepending a "1" to the front of each year.
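For the arithmetically inclined, the renumbering rule above is just "add 10,000," which is what prepending a "1" amounts to for four-digit years. A minimal sketch (the function name is my own invention):

```python
def to_post_glacier(year: int) -> int:
    # Peg the end of the last ice age at roughly 12,000 years ago
    # and renumber by adding 10,000 -- for any four-digit CE year,
    # this is the same as prepending a "1".
    return year + 10_000

print(to_post_glacier(2005))  # 12005
```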
So, today, in PG (post-glacier) time, it's 2 September, 12005.
Remember. We're in post-glacier time. Keeps everything in perspective. Post-glacier time may also be, and probably is, pre-glacier time.
It's also post-flood time. And pre-flood time.
Depending on which end of the linked-disaster scenario you focus on.
Jim
PALIMPSEST
Staring at a cloud,
a tangible cloud,
not an earthen clod
glimpsed through glaze and lash.