One aspect of ODC to which I feel a strong affinity is the proposed manageability of the software elements. This has potentially enormous implications for the way in which ODC software will be produced. A simple example is logging: let's say we want to produce logging that is machine-readable. Why would you want to do this? Well, one reason is to allow downstream software to read the logging data. There's no major reason why this log-reading software couldn't interpret the data and suggest what the problem might be.
This is the beginning of closing the administration loop and potentially letting software start to dynamically process its own problems.
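As a concrete sketch of this idea, here is machine-readable logging in miniature, with a trivial "downstream" reader that interprets the data and suggests what the problem might be. The field names, component names, and keyword rules are all my own illustrative assumptions, not any particular logging standard:

```python
import json

def log_event(level, component, message):
    """Emit a machine-readable log line as one JSON object.
    Field names here are illustrative, not a standard."""
    return json.dumps({"level": level, "component": component,
                       "message": message})

def suggest_problem(log_line):
    """Downstream log-reading software: parse one JSON log line
    and suggest a likely problem using simple keyword rules."""
    entry = json.loads(log_line)
    if entry["level"] == "ERROR" and "disk" in entry["message"].lower():
        return "Possible storage fault on " + entry["component"]
    if entry["level"] == "ERROR":
        return "Unclassified error in " + entry["component"]
    return "No action needed"

line = log_event("ERROR", "db01", "disk write failed")
print(suggest_problem(line))  # -> Possible storage fault on db01
```

Because the log is structured data rather than free text, the reader needs no fragile string scraping; richer diagnosis is just a matter of richer rules.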
The usual example of policy-driven computing concerns the quality of service on networks; i.e., if the CEO needs network access and an engineer needs access, then give the CEO's traffic priority. This is a little dated. The need for policy now spans the entire business. Let's say a vendor is selling widgets online in a very competitive market for price X. Imagine a new vendor appears in a puff of smoke, also selling the same widgets online for price X minus Y.
If our first vendor is tuned in and operating an on-demand environment, then they should be able to automatically adjust their price to X minus Z in order to continue to compete. The basis of the price adjustment is a policy, which might be expressed as a condition-action clause:
If ((OurPrice - CompetitorPrice) > Tolerance) then (reduce OurPrice)
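Such a condition-action clause could be evaluated in code along these lines. The prices, tolerance, and undercutting rule are all illustrative assumptions, not a real pricing system:

```python
def adjust_price(our_price, competitor_price, tolerance, margin):
    """Condition-action pricing policy (a sketch): if our price
    exceeds the competitor's by more than the tolerance, undercut
    them by a small margin; otherwise leave the price alone."""
    if (our_price - competitor_price) > tolerance:
        # Action clause: set our price just below the competitor's.
        return competitor_price - margin
    return our_price

# The competitor sells at X - Y = 95 while we sell at X = 100.
print(adjust_price(100.0, 95.0, tolerance=2.0, margin=1.0))  # -> 94.0
```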
The vendors must keep a close eye on each other and on many other business issues, such as revenue assurance, cost management, profit, etc.
Business continuity infrastructure can also be viewed as a type of policy:
If (we lose site 1) then (move offsite personnel to site 2)
As the number of policy-controlled scenarios increases, we're likely to see more such considerations. It's hard to imagine such policies in a traditional (i.e., non-on-demand) IT infrastructure.
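One way to manage a growing set of such scenarios is to keep the policies as data: condition-action pairs evaluated against the current business state. A minimal sketch, in which the conditions, actions, and state fields are all invented for illustration:

```python
# Each policy is a (condition, action) pair; both take the current
# business state. A sketch only, not a real policy engine.
policies = [
    (lambda s: s["site1_up"] is False,
     lambda s: "move offsite personnel to backup site"),
    (lambda s: s["our_price"] - s["competitor_price"] > s["tolerance"],
     lambda s: "reduce price to match competitor"),
]

def evaluate(state):
    """Return the actions triggered by the current business state."""
    return [action(state) for cond, action in policies if cond(state)]

state = {"site1_up": False, "our_price": 100,
         "competitor_price": 95, "tolerance": 2}
print(evaluate(state))
# -> ['move offsite personnel to backup site',
#     'reduce price to match competitor']
```

Adding a new policy-controlled scenario then means appending one pair to the list rather than rewriting the infrastructure.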
Given the pressures on organizations to improve ROI, it's no surprise that executives are keen to "sweat their IT assets" for maximum value. The automation possible through autonomic computing should pay useful dividends by reducing staff levels and freeing IT staff for more complex, business-centric tasks. However, the bulk of today's IT infrastructure operates in a standalone fashion. This is reminiscent of the "islands of automation" that used to exist in the manufacturing sector. There is a greater need for IT integration so that multiple servers cooperate to provide business value.
The adoption of web services is giving rise to much more dynamic use of resources to solve static problems. By static problems, I mean those like the classic example of automatic airline ticket booking. The main variable in this instance is the number of clients; e.g., if airline X offers an irresistible bargain, then its website must be ready to receive unusually high levels of traffic. The problem itself, however, remains essentially static; only the load varies.
IBM currently sees the ODC strategy as the nexus between two areas:
- Autonomic computing
- Grid computing
We've discussed the former; let's briefly look at grid computing.
Grid Computing Problems
A different class of computing problem is one that requires a vast amount of processing, such as modeling weather patterns, predicting volcanic eruptions, predicting tsunamis, modeling stock price variations, molecular modeling, etc. Grid computing is increasingly being used for this class of problem. Grids are also being employed by corporations and used as the basis for outsourcing IT facilities; e.g., Oracle operates a large grid out of its Austin, Texas operation. Client organizations can outsource software applications to Oracle and then use the grid to gain access as required.
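The grid idea, carving one large computation into independent chunks farmed out to many processors and then combining the partial results, can be sketched in miniature with a local process pool. The workload below is a toy stand-in for, say, one modeling run; the chunking scheme is purely illustrative:

```python
from multiprocessing import Pool

def model_chunk(params):
    """Toy stand-in for one independent slice of a large modeling
    job (e.g., one region of a weather simulation)."""
    start, end = params
    return sum(i * i for i in range(start, end))

if __name__ == "__main__":
    # Split the full range into chunks, one per worker, then
    # combine the partial results into the final answer.
    chunks = [(0, 250), (250, 500), (500, 750), (750, 1000)]
    with Pool(4) as pool:
        partials = pool.map(model_chunk, chunks)
    print(sum(partials))  # same result as computing the range serially
```

On a real grid the workers are separate machines rather than local processes, but the shape of the solution, independent chunks plus a combining step, is the same.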
Even though the cost of such grid use seems high (millions of dollars annually), this is often much less than the cost of a client self-hosting the infrastructure.