Two-tiered Client/Server Limitations

While two-tier client/server (C/S) architectures are the most widely implemented today, their drawbacks have become well known as more organizations grapple with the gap between expectations and reality.

Today, the two-tier system is increasingly seen as obsolete, or even as an impediment to a more open, efficient and reliable computing environment.

Cross-Platform Integration

In the typical two-tier arrangement, a Windows/GUI-based PC handles presentation and application activities at the client level, while the server provides access to the database. This arrangement seemed ideal for a time.
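
To make that division of labor concrete, here is a minimal sketch of such a two-tier client in Python. The order-entry schema and names are invented for illustration, and sqlite3 merely stands in for whatever database driver a given shop actually uses; the point is that SQL access, business logic and screen output all run on the client workstation:

    import sqlite3  # stands in for any relational database driver

    def fetch_open_orders(conn, customer_id):
        # The client talks to the database server directly (tier 2).
        cur = conn.execute(
            "SELECT order_id, total FROM orders "
            "WHERE customer_id = ? AND status = 'OPEN'",
            (customer_id,),
        )
        return cur.fetchall()

    def show_orders(conn, customer_id):
        # Presentation AND application logic both live on the client (tier 1).
        for order_id, total in fetch_open_orders(conn, customer_id):
            print(f"Order {order_id}: {total:.2f}")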

However, as enterprises added new platforms, operating systems and languages, and as the number and complexity of stored procedures multiplied, the inherent limitations of two-tier architectures were revealed.

The Problems
Perhaps the most obvious problem with the two-tier arrangement is the need to store and manage ever-growing volumes of application code and supporting software at each client location.

In a large retail organization, for example, a decision to change a "compute cost" algorithm means the code must be rewritten and the new logic distributed to every affected client location throughout the enterprise.
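
What such an embedded rule might look like (a hedged sketch; the figures are invented): because the algorithm is compiled into the client, changing a single constant forces a redeployment to every workstation.

    def compute_cost(unit_price: float, quantity: int) -> float:
        # Hypothetical business rule hard-coded into the client executable.
        FREIGHT_RATE = 0.04  # invented figure
        MARKUP = 1.18        # changing 1.18 to 1.20 means an enterprise-wide redeploy
        return unit_price * quantity * MARKUP * (1 + FREIGHT_RATE)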

With many businesses now using hundreds or thousands of client workstations to process business data, the cost implications are enormous.

By placing the application workload at the client level, the two-tier architecture requires a major and ongoing investment in technology, software, and data updates. That's the downside of just distributing computers.

Standards...  What Standards?
Language is another serious drawback when attempting to serve a varied client base from a two-tier system. Most client-based applications are written in Visual Basic, Delphi, C++, PowerBuilder or GUPTA -- languages that cannot be used on mainframe, UNIX or many other client platforms.

As a result, critical application logic is embedded at far-flung client locations, written in languages that make it all but useless to the rest of the enterprise.

Companies that operate mixed mainframe-and-UNIX systems confront additional problems under a two-tier architecture. Performance and security requirements often prevent two-tier access to mainframe data sources.

In addition, many mainframe data sources are non-relational and require expensive gateway products to access the data in a relational manner. As a result, most mainframe shops keep a copy of the required data on the UNIX systems.

To keep the UNIX box in sync with the mainframe, data managers keep two copies of the data on the mainframe: a "before" image and an "after" image.

Each day the data managers perform a sort/merge operation on the two data sources and write out a log of add, change, and delete records. These records are then transferred to the UNIX box, where yet another program applies the changes.
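
The extract step amounts to a keyed comparison of the two images. A minimal Python sketch of the idea (dictionaries keyed by record id stand in for the sorted flat files a real mainframe sort/merge would process):

    def change_log(before, after):
        # before / after: dicts mapping record_id -> record contents
        log = []
        for rid, rec in after.items():
            if rid not in before:
                log.append(("ADD", rid, rec))
            elif before[rid] != rec:
                log.append(("CHANGE", rid, rec))
        for rid in before:
            if rid not in after:
                log.append(("DELETE", rid, None))
        return log

    # The mainframe job writes this log to a flat file; a second program
    # on the UNIX box reads the file and applies each record to the replica.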

As IS directors know all too well, these nightly batch jobs can consume staggering volumes of system capacity. In data-intense industries such as banking and retail operations, these updates can easily involve from 20,000 to 30,000 changes per night per table -- and can total upwards of 200,000 items per crunch.

In fact, it is estimated that in many transaction-oriented companies, fully 25% of all mainframe CPUs are dedicated exclusively to sort/merge operations. Of course, this process also monopolizes vast amounts of system memory and it takes a considerable communications punch to push these monster files across the network.

These operations are vulnerable to failure at several points -- but those risks must be run every time these massive files are captured, processed and transferred.

Architecture and old technology are the underlying causes of both the legacy system held together with baling wire and the inadequate two-tier client/server implementation.

If an organization wants to make fundamental changes in its computing systems, it must change the architecture. The right architecture can accommodate business changes, whether that means more users or new business rules.

Similarly, the right architecture is key to today's client/server computing systems. For most computing environments, the right choice is an N-Tier client/server architecture.

Misconceptions of Two-Tier Client/Server Architectures
Organizations and systems integrators alike hold preconceived notions about two-tier client/server architecture. Once a system is in place and its performance does not match expectations or needs, the organization realizes that those preconceptions simply do not match reality.

Most organizations hold several misconceptions about the performance of two-tier client/server architectures.

The first misconception is that because clients are easy to use, a Client/Server system is easy to design and implement.

In actuality, the easier a client is to use, the more difficult the client/server architecture is to develop.

A second common misconception is to assume that a fast network will not experience bottlenecks.

In reality, as more and more clients access the server, demand eventually grows to the point of creating a network management problem. This occurs regardless of the bandwidth of the network.

Stored Procedures
A third misconception about two-tier Client/Server architectures concerns utilizing stored procedures to solve the "Fat Client" dilemma.

Stored procedures are instruction sets, written in vendor-specific languages and executed inside the relational database, that are intended to relieve clients of business logic and data integrity functions.

Many vendors' stored procedures are not robust enough for large applications, and even the best is not a full-featured development environment.

While stored procedures work well in limited applications, they are not designed to deal with large or complex programs.

By using stored procedures to solve fat-client problems, the organization forgoes modern development tools such as object-oriented programming, RAD environments and interactive full-screen debuggers.

In addition, stored procedures are platform dependent. A stored-procedure-based application is not portable to another database; it is tied to that particular database vendor.

Such database dependency is not compatible with efforts to have a flexible computing environment based on open systems.
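
The dependency is easy to see side by side. In the sketch below, the same invented discount rule appears twice: once in a vendor-specific procedure dialect (Transact-SQL-style syntax, purely for illustration) and once as portable application code issuing plain SQL through Python's DB-API:

    VENDOR_SPECIFIC_PROC = """
    CREATE PROCEDURE apply_discount @order_id INT AS
    BEGIN
        UPDATE orders SET total = total * 0.90
        WHERE order_id = @order_id AND total > 1000
    END
    """  # one vendor's dialect; will not run unchanged on another database

    def apply_discount(conn, order_id):
        # Portable alternative: the rule lives in application code and
        # issues only standard SQL, so it can move between databases.
        conn.execute(
            "UPDATE orders SET total = total * 0.90 "
            "WHERE order_id = ? AND total > 1000",
            (order_id,),
        )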

Other Problems with Stored Procedures
  • Stored procedures were (and still are) very difficult to write and debug.
  • The lack of server managers makes it difficult to load-balance stored procedures across a distributed environment.
  • Stored procedures are inadequate when the application requires access to non-database services.
In short, while stored procedures are well suited to enforcing business rules and maintaining data integrity, they are generally not a suitable foundation for two-tier or N-Tier client/server systems.

Given these sizable drawbacks, it is no surprise that an increasing number of companies are moving quickly to the safer, less costly and more practical distributed computing environment.

The N-Tier Client/Server Architecture
N-Tier architectures sidestep most of the problems that undermine two-tier architectures.

The primary feature of a more distributed, or multi-tier, client/server architecture is that it moves the application burden from the client to the server side of the equation.

In the N-Tier model, you redistribute the logic-intensive application activities away from the client side (which in many cases consists of thousands of client workstations) to the more centralized and far more cost-efficient server domain.

An N-Tier architecture features two separate server levels: a business logic or application layer and the underlying data layer.
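
A minimal sketch of that split, with invented names (a real deployment would put a network protocol and an application-server product between the layers rather than a direct method call):

    class AppServer:
        # Application layer: holds the business rules and is the only
        # layer that knows where the data lives.
        def __init__(self, db):
            self.db = db

        def compute_cost(self, part_no, qty):
            # The rule lives in one place; changing it touches one server,
            # not thousands of workstations.
            (unit_price,) = self.db.execute(
                "SELECT unit_price FROM parts WHERE part_no = ?", (part_no,)
            ).fetchone()
            return unit_price * qty * 1.18  # invented markup

    def client_show_cost(app_server, part_no, qty):
        # Presentation layer: the thin client never sees SQL; it asks the
        # application layer a business question and displays the answer.
        print(f"{qty} x {part_no}: {app_server.compute_cost(part_no, qty):.2f}")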

By moving to N-Tier enterprise-wide computing, we eliminate the expensive and cumbersome necessity of maintaining redundant databases while gaining the substantial benefits of reduced hardware and software costs.

Thus, we distribute computing while providing powerful access to enterprise data.

Clients are insulated from database and network operations and are no longer burdened with the need to know where or how to find and retrieve data.

The client does what it does best; it handles the user interface.

The client becomes the presentation layer: it handles just the user interface and is freed of application-layer tasks and the associated need for powerful, expensive hardware and software.

In an N-Tier client/server environment, where the application layer functions between the presentation and data layers, the client does not have to be a powerful, Pentium-based PC.

Because client-side machines are freed from these technology-intensive application tasks, client seats can be hosted on lower-end Intel-based systems, Macintoshes, X terminals or NC devices.

©Micromax Information Services Ltd. 1999