Pointer to the article:
Kobielus's commentary:
Virtualization is one of those venerable old computing concepts that has achieved new life in recent years.
Virtualization, like SOA, is so broad in scope that it’s becoming almost useless as a differentiator of any vendor’s offerings. In fact, virtualization is the umbrella concept of which SOA is one implementation approach. Grids are another. On-demand and utility computing are others.
Virtualization refers to environments that abstract external invocation interfaces from internal platform implementations of services and other resources. The external interface may conceal various facts about the implementations of the underlying resources. For example, the resources may run on diverse operating and application platforms; have been deployed on nodes in diverse locations; have been aggregated across diverse hosting platforms (or partitioned within a single hosting platform, either through virtual machine software, separate CPUs, or separate blade servers); and have been provisioned dynamically in response to a client request.
SOA refers to virtualized application environments that abstract external service-invocation interfaces from those services’ internal platform implementations. Under pure SOA, the external application interface—or API—should be agnostic to the underlying platforms. SOA is often software-oriented, but needn't be. Some refer to service virtualization or abstraction as “loose coupling.” Within Web services environments, WSDL “service contracts” provide the principal platform-agnostic APIs for service virtualization.
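To make the “service contract” idea concrete, here is a minimal, hypothetical WSDL 1.1 fragment. All names (QuoteService, getQuote, the example.com namespace) are illustrative, not drawn from any real service. The point is that the portType describes only the external interface; nothing in it reveals the platform behind the service.

```xml
<definitions name="QuoteService"
    targetNamespace="http://example.com/quotes"
    xmlns:tns="http://example.com/quotes"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema"
    xmlns="http://schemas.xmlsoap.org/wsdl/">
  <message name="GetQuoteRequest">
    <part name="symbol" type="xsd:string"/>
  </message>
  <message name="GetQuoteResponse">
    <part name="price" type="xsd:float"/>
  </message>
  <!-- The portType is the platform-agnostic contract: it says nothing
       about the OS, language, or host that implements getQuote. -->
  <portType name="QuotePortType">
    <operation name="getQuote">
      <input message="tns:GetQuoteRequest"/>
      <output message="tns:GetQuoteResponse"/>
    </operation>
  </portType>
</definitions>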
Grid computing refers to virtualized environments that are designed principally for brokering access to distributed, dynamically adaptable, parallel-processing resources. Grids may support massive parallel processing of jobs that have been partitioned in either symmetric or asymmetric fashion, in terms of the constituent processing tasks and datasets. However, grids are usually employed for massively parallel jobs in symmetric mode.
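Symmetric mode, as described above, means every node runs the same task over an equal-sized slice of the data. A minimal sketch of that pattern (function names and the summing workload are hypothetical, and a thread pool stands in for real grid nodes):

```python
from concurrent.futures import ThreadPoolExecutor

def split_symmetric(data, n_workers):
    """Partition a dataset into equal-sized chunks (symmetric partitioning)."""
    chunk = -(-len(data) // n_workers)  # ceiling division
    return [data[i:i + chunk] for i in range(0, len(data), chunk)]

def process_chunk(chunk):
    """Every node runs the same task on its slice (here: a toy summing job)."""
    return sum(chunk)

def run_on_grid(data, n_workers=4):
    """Fan the chunks out in parallel, then combine the partial results."""
    chunks = split_symmetric(data, n_workers)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return sum(pool.map(process_chunk, chunks))
```

An asymmetric job, by contrast, would give different nodes different tasks or unequal slices, which is why grids usually favor the simpler symmetric mode.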
On-demand computing refers to virtualized environments that dynamically provision, aggregate, and allocate existing, distributed resources from various sources in real time in response to client demand. On-demand computing environments provide client access to resources that already exist—internally or externally—obviating the need to deploy additional physical servers, databases, and other platforms, nodes, and capacity for this purpose. Grid is just one type of on-demand computing environment: one that is geared to serving distributed processing and storage resources. However, server clustering, outsourced application service providers (ASPs), and client-based peer-to-peer (P2P) also qualify as on-demand computing environments. Grids offer distributed virtual hardware resources (“hardware as a service”), which may or may not be provided on an outsourced, pay-as-you-go, ASP basis (a la “software as a service”).
Utility computing refers to virtualized environments that provide on-demand computing as a general-purpose infrastructure to all applications and users. Grid is a distributed-execution environment that may be provided as a general-purpose infrastructure. Alternatively, a grid environment may be limited to a particular operating/application platform (such as a grid of Linux servers running Java 2 Enterprise Edition [J2EE]), or only process a particular type of application (such as finite-element modeling or parametric analysis). In these types of deployment scenarios, grid is not a general-purpose utility environment.
This article talks about “OS virtualization,” in terms of physically and logically partitioning server resources so that those partitions can run entirely distinct, cloned, replicated server virtual machines.
One can even talk about “client virtualization.” In my upcoming Network World column on that topic, I define client virtualization as follows: “Client virtualization is an underlying theme in many recent industry announcements. Essentially, a client becomes virtualized when its GUI grows abstracted from the resources of the local access device, be it a PC, handheld, or other computer. The virtualized client may rely on both local and remote network resources to render its interface, furnish its processing power, store its data, route its print jobs, and handle other core client functions. Users remain blissfully unaware of what blend of distributed resources is actually driving their presentation experience.”
Dizzy? So, what again, Professor Kobielus, is virtualization? Can you give us the radically simplified definition? One that gets closer to an elevator pitch?
Last night the following nutshell definition of virtualization came to me, as in a dream (no actually, it was while working out, when my best thoughts tend to coalesce into crisp structures—body occupied full steam—mind free to focus on purely cerebral stuff, full steam also).
Just as GUIs became known two decades ago by the cute acronym “WYSIWYG” (what you see is what you get), I’d like to propose the following acronym for virtualization (of any sort):
• ARWIS (Ain’t Really What It Seems)
If we start from the textbook definition offered above (“abstract external interfaces from internal implementations”), then we can parse this coinage into its critical components:
• WIS Layer: What It Seems: “External interfaces”
• R Layer: Really: “Internal implementations”
• Ain’t Layer: “Abstract … from …”
The R Layer is what’s actually going on, behind the Ain’t Layer, and it’s what deployers deploy, integrators integrate, and administrators administer. What it really R.
The WIS Layer is what users use and experience, oblivious to the R Layer. What we WISh it R.
The Ain’t Layer is what developers develop, to virtualize the WIS Layer from the R Layer. The Ain’t Layer comprises the service contracts (WSDL, etc.) and the WS-* and other interfaces that shield the WIS Layer from all the platform- and config-specific R stuff roiling around down there in the SOAP soup we call SOA.
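The three layers can be sketched in a few lines of code. This is a toy illustration, not anyone's production design: the class and function names (QuoteService, bind_service, and the two backends) are invented for the example.

```python
# WIS Layer ("What It Seems"): the abstract interface the user-facing
# client sees. It reveals nothing about platforms or locations.
class QuoteService:
    def get_quote(self, symbol: str) -> float:
        raise NotImplementedError

# R Layer ("Really"): what's actually going on. Two platform-specific
# implementations that deployers deploy and administrators administer.
class MainframeQuotes(QuoteService):
    def get_quote(self, symbol: str) -> float:
        return 101.5  # pretend this came from a legacy mainframe

class J2EEQuotes(QuoteService):
    def get_quote(self, symbol: str) -> float:
        return 101.5  # pretend this came from a J2EE app server

# Ain't Layer ("Abstract ... from ..."): the binding that joins WIS to R
# at runtime, so the client never learns which implementation answered.
def bind_service(prefer_mainframe: bool) -> QuoteService:
    return MainframeQuotes() if prefer_mainframe else J2EEQuotes()

client_view = bind_service(prefer_mainframe=False)
price = client_view.get_quote("IBM")  # the client sees only QuoteService
```

Swap the backend behind bind_service and the WIS Layer is untouched; that indifference is the virtualization.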
You virtualize anything by applying a layer of Ain’t to get R WISh.