Chapter 1: Introduction from Enterprise Curl, by Prentice Hall | 2 | WebReference

Enterprise Curl: Introduction

In the 1980s, corporations sought to move away from mainframe-based user applications toward a client/server architecture that would offer a more sophisticated user interface and the expectation of increased productivity. This architecture typically featured rich graphical interfaces that stored and retrieved data in a central database. Not only did this architecture achieve its promise of increased usability, but it was also a cheaper solution than the mainframe model. This resulted in small- to medium-sized firms rapidly embracing the client/server architecture. However, in time, people realized that the client/server solution was extremely costly to deploy and maintain. A technician would typically have to visit every user’s workstation to install the initial runtime environment, and then might have to re-visit each machine for subsequent maintenance releases.

Figure 1–2 illustrates the architecture of a client/server application.

The 1990s saw the rise of the Internet, making another architecture model available to corporations. The Web eliminated the high installation and maintenance costs associated with client/server applications, promising a minimal-cost deployment model coupled with a vastly increased potential user base. Internal corporate applications could now be re-developed and opened up to anyone in the world with an Internet browser installed on a machine. There are a number of problems with this model, though, mainly usability, performance, maintainability, and processor utilization.

The original Internet applications were based on Hypertext Markup Language (HTML), the language of the Web invented by Tim Berners-Lee, which was never designed for anything more than presenting static textual information. These original Internet applications had user interfaces far removed from anything previously offered in the client/server model of the 1980s, arguably offering little more usability than the mainframe model of the 1970s.

These applications were also built on the page-based request and response model implicit in the use of Hypertext Transfer Protocol (HTTP): the user is presented with a Web page, enters some information, and clicks on an item, and the entire page is then re-drawn with the next screen in the process. This model is far from ideal: it is slow, its interface is uninteresting, and it is extremely server-centric. One difference between mainframes and the Web is that in the mainframe days, the clients were typically physically close to the mainframe computer, so interaction with the mainframe was fairly fast. The Web, by contrast, produced distributed systems in which the server could be located across the world from the end user, resulting in the long waits we have today. To combat this limitation and to make Web applications more interactive and usable, a myriad of languages were introduced into the mix, such as VBScript, JavaScript, Flash, dynamic HTML (DHTML), and ActiveX. None of these technologies were originally designed to interoperate, but developers found ways to mix them together to create more usable Web-based applications, at a cost.
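The server-centric cycle described above can be sketched as a handler that rebuilds and re-transmits the entire page for every user interaction. This is a hypothetical illustration in Python, not Curl; the function names (`render_page`, `handle_request`) are invented for the example and do not belong to any particular framework:

```python
# Sketch of the page-based request/response model: all processing
# happens on the server, and each response is a complete HTML page
# that the browser must redraw from scratch.

def render_page(title, body_html):
    """Rebuild the whole HTML document for every response."""
    return (
        "<html><head><title>{}</title></head>"
        "<body>{}</body></html>"
    ).format(title, body_html)

def handle_request(form):
    """Server-side handler for one round trip: inspect the submitted
    form data and send back the full next screen, not just the data
    that changed."""
    name = form.get("name", "")
    if not name:
        # First screen: an input form, shipped as a complete page.
        body = '<form method="post">Name: <input name="name"></form>'
        return render_page("Step 1", body)
    # Even a trivial result requires re-transmitting a complete page.
    return render_page("Step 2", "<p>Hello, {}!</p>".format(name))
```

Two calls to `handle_request` simulate two round trips: an empty form yields the input screen, and a filled form yields the result screen, each a full document. In the executable-client model discussed later, this per-interaction redraw is what moves onto the client.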

The more visually exciting an Internet page, the slower it is to download to the browser, because the files being transmitted are large and the bandwidth typically available is small. Corporations that want to provide a more exciting experience at their Web sites typically resort to maintaining multiple versions of the site, which users access depending on the network bandwidth available to them. Furthermore, due to the large number of different technologies used to create these more exciting Web sites, corporations are forced to hire different developers for each version. This all results in the maintainability nightmare that companies are battling.

As personal computers (PCs) have become more and more powerful, the actual utilization of this processing power has declined due to the dominance of the Web-based application architecture. Now corporations are buying more, larger, and increasingly expensive Web and application servers to keep up with the demands of their ever-growing worldwide user community. The processing capabilities of their users’ machines are being ignored, and the machines are simply acting as graphical rendering tools, much like the mainframe terminals.

Figure 1–3 illustrates a typical Web site in use today, and the technologies used to create it.

Bill Gates, the chairman and chief software architect of Microsoft, summed up today’s architecture well when he wrote, “In many respects, today’s Internet still mirrors the old mainframe world. It’s a server-centric computing model, with the browser playing the role of dumb terminal. Much of the information your business needs is locked up in centralized databases, served up a page at a time to individual users. Worse, Web pages are simply a ‘picture’ of the data, not the data itself, forcing many developers back to ‘screen scraping’ to acquire information.” (June 2001 essay entitled “Why We’re Building .NET Technology,” available at: presspass/misc/06-18BillGNet.asp)

In addition to Bill Gates, other industry leaders share this opinion. Bruce Tognazzini and Jakob Nielsen wrote, “the Web browser ... has been crippling the software industry for the past eight years and it will kill productivity at any company that introduces major enterprise applications on its intranet” (March 2001 article entitled “Beyond The Browser,” available on the Web site). In the same article, Tognazzini and Nielsen look to the future and proclaim that “Web applications will become indistinguishable from traditional applications.”

So this is where we are today. We have an excellent, cost-effective deployment model, but with applications that make no use of local processing power, have poor usability, exhibit terrible performance over low-bandwidth networks, and are difficult and costly to maintain.

As a result of these shortcomings in the current Web application model, Curl and other companies have focused on an executable Internet architecture, in which processing is moved away from the server and back onto the client. Many industry think tanks and research firms, such as Gartner, Forrester, and the Seybold Group, have talked about this new approach, finding that companies are ready to embrace this architecture. George Colony, the CEO of Forrester, said recently that “another software technology …will kill off the current Web … very soon—in the next 2 to 3 years,” and “the new software model will use executable programs.”

Enterprise application architecture has gone through numerous 10-year cycles, and we are now on the verge of another potential shift. The industry opinion and business needs are all in place for the executable Internet to take off, and Curl is positioned well at the forefront of this technology.

Created: May 2, 2003
Revised: May 2, 2003