According to the vendor literature, it appears that moving to multi-tiered client/server computing is as simple as buying a few products, creating a few Web pages, and writing a little bit of application code. In a recent Microsoft presentation, a company representative created a simple three-tiered application in less than ten minutes (Microsoft, Inc. 1998). He built a Web page with a couple extra lines of VBScript code, displayed about twenty lines of Visual Basic code that was already installed on the transaction server, and then with a few mouse clicks, showed how easy it was to validate a customer number. Too bad the transaction server code only returned “credit OK” if the customer number was 123456789.
Microsoft is not the only vendor using this approach to sell middleware products. Although the vendors make the development process look easy, these products are complex pieces of software. Just learning the programming conventions and protocols can take weeks, while producing industrial-strength code could take months. Tools like the Microsoft Transaction Server can make programming somewhat easier for developers by providing communications protocols and development frameworks, but even the simplest service will take far more than twenty lines of code.
This chapter will examine the application server from an architectural viewpoint and will also survey the major categories of middleware software. Topics will include:
Overview of the application server architecture
Middleware—the glue that holds it together
Middleware categories
Applying middleware to the application server architecture
The application server model will be an effective approach for building software in the twenty-first century, but it is only one of many different options available to the business software developer. Mainframe computing is still effective for large corporations, as millions of lines of code still perform their tasks effectively every day. Two-tiered client/server also offers an excellent option for less intensive database applications and there are a host of different Internet technologies available to deliver applications to Web browsers. In addition to existing technologies, the rapid rate of change will continue to bring new computing models that will spawn new software technologies. Just as client/server was a response to desktop computers, and a host of new software technology appeared in response to the Internet, other new technologies will continue to rise and shape the way that we build software.
Looking into the future is a difficult and perilous task. Even the best minds in the industry have trouble seeing beyond the next few years. To see how well others have fared, I got out the book Programmers at Work, published in 1986 (Lammers 1986). The book is a collection of interviews with some of the industry leaders of the time: early Mac developers Jeff Raskin and Andy Hertzfeld, dBase author Wayne Ratliff, VisiCalc designers Dan Bricklin and Bob Frankston, and of course Bill Gates.
What initially drew me to application server technology was the need to support constantly changing, complex business rules. Traditional two-tiered client/server worked well for data intensive applications, but as the processing requirements grew, it became difficult to create and manage large client-based programs that supported these complex requirements. Moving the business logic to centralized application servers simplified both software development and code distribution.
Today's business applications are expected to move far beyond simple data management chores, performing business intelligence and decision support tasks for all levels of the company. Business requirements are also constantly changing, with new products and processes introduced in ever-shorter business cycles. Applications must not only encapsulate existing business logic, but be structured openly to allow quick response when rules change.
Business rule processing takes on a variety of forms, from simple data validation to complex data classifications and business processing logic. Encapsulating these processes in program code that can respond to changing requirements is a challenging task. Often, implementing rule processing around data structures instead of program code enables more flexibility and ease of maintenance. Commercial business rule processors and languages can also make the task easier to manage.
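As a rough illustration of the data-driven approach, the sketch below stores a range rule as data and applies it with a small, generic piece of code. The RangeRuleProcessor class, the field names, and the limits are hypothetical, invented for this example rather than taken from any particular product.

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical sketch: a validation rule expressed as data rather than code.
    // The rule definitions could just as easily be loaded from a database table
    // or a configuration file, so changing a limit does not require recompiling.
    public class RangeRuleProcessor {

        // field name -> { minimum, maximum } allowed values
        private final Map<String, double[]> ranges = new HashMap<String, double[]>();

        public void defineRule(String field, double min, double max) {
            ranges.put(field, new double[] { min, max });
        }

        // Returns true when the value satisfies the rule defined for the field.
        public boolean validate(String field, double value) {
            double[] range = ranges.get(field);
            if (range == null) {
                return true;                      // no rule defined for this field
            }
            return value >= range[0] && value <= range[1];
        }

        public static void main(String[] args) {
            RangeRuleProcessor rules = new RangeRuleProcessor();
            rules.defineRule("creditLimit", 0.0, 50000.0);    // rule lives in data
            System.out.println(rules.validate("creditLimit", 12000.0));   // true
            System.out.println(rules.validate("creditLimit", 75000.0));   // false
        }
    }

Because the limit is data rather than compiled logic, changing it becomes a maintenance task on a table or file instead of a programming change.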
This chapter will examine how to incorporate complex business rules and processing into the application server environment. It will examine what a business rule is, how to implement the rules in program code, where to place the logic within the application architecture, and how to standardize error handling and reporting. Application security will also be examined, both as an application server issue and as a way of illustrating how to implement complex business rules.
The goal of business object design is to create a collection of reusable software objects that model your business. While interface design is a bottom-up approach, used to determine application requirements, business object design is a top-down analysis of the entire business, identifying roles and functions. Business objects are formed by specifying properties and services that reflect the real-world objects they model. As with real-world objects, software objects often combine and collaborate to perform tasks that they cannot perform individually.
Object design requires a global view of the organization, determining not only the needs of the current task, but the functions required for the entire business. Objects must be designed not only for the current application but also for reuse in the next project, even if that project is for another department or a different line of business. Although not a simple task, it is not as difficult as it seems. Business objects mirror people, forms, and other objects that have already been integrated into the business. As such, when the objects simulate these functions, they also fit inside the same business context.
The object designer cannot possibly know all of the business requirements, so designing all functionality from the beginning is an impossible task. Business requirements are constantly changing and today's needs may not be relevant tomorrow. Business objects must be designed as open, dynamic components that can easily be changed without impact on other functions.
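As a small, hypothetical sketch of what such a component might look like in Java (the Customer interface and its members are invented for illustration), a business object exposes its properties and services behind an interface, so the implementation can change without disturbing the applications that use it:

    // Hypothetical business object: properties and services that mirror a
    // real-world customer. Clients program against the interface, so the
    // implementation behind it can evolve as business requirements change.
    public interface Customer {

        // Properties
        String getCustomerNumber();
        String getName();
        double getCreditLimit();

        // Services
        boolean isCreditApproved(double orderAmount);
    }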
You've read everything you can find about middleware, CORBA, transaction monitors, message brokers, enterprise JavaBeans, and other distributed technologies. Now it's time to put them to work. Time to build your company's first multi-tiered application. But where do you start? How do you structure the programs? How do you distribute the code? What about integrating existing applications and databases? This was the problem that I faced as I began working with multi-tiered development. There was plenty of information on the tools and technologies, but little on how to make them work in a business setting.
Application servers and related technologies offer great promise and potential for solving the issues that trouble corporate computing: problems like scalability, application integration, and code reuse. But before we can solve these grand problems, we have to figure out how to use the technology. How do we process orders, ship products, bill customers, approve loan applications, and pay insurance claims?
My hope is that this book will offer some guidelines to start you on your way. Instead of focusing on middleware, the emphasis is on the design issues and programming techniques necessary to create an overall business application framework. The approach is user-centric, relying on joint development between developers and business people, using short, iterative design-program-review cycles. Object-oriented development is also stressed using designs illustrated with UML and programming examples written for the Java platform. Although Java and RMI are used, the framework will work with almost any language or distributed object platform.
Part 3 describes tools and processes that can be used to transform the user requirements into working program code. These chapters examine how to implement the business objects and place them into a framework that services the user interface programs.
The promise of Java as a truly distributed software platform is now a step closer to reality. The recent integration of Java archive functionality significantly improves the ability of developers to manage and transfer dispersed data over large networks. Specifically, Java now provides two distinct areas of archiving functionality that address the same issue, improved download time, through different means. Java Archive files, or JAR files, allow an entire applet's dependency list to be transferred in the form of a single compressed file, while Java's archiving classes provide functionality for the programmatic manipulation of files in various compression formats. Given the benefits of archives in a distributed model, I will detail some of these newly integrated features, as well as demonstrate how archive functionality can improve enterprise applet performance.
JAR FILES
Introduced with the JDK 1.1, Java Archive files provide a vastly improved delivery mechanism for applets. An entire applet's dependency list (all .class files, images, sounds, text, etc.) can now be aggregated into a single compressed file, which can then be transferred over a single HTTP connection. Once the file is downloaded, JDK 1.1-compliant browsers can seamlessly decompress and run the applet. The result of this process is a marked decrease in the time it takes to launch an applet, due in part to a reduction in both the number of bytes transferred and the number of dependency-related HTTP transactions.
Java Archive files are based on the popular ZIP archive format as defined by PKWare.
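For a feel of the programmatic side, the sketch below uses the java.util.zip classes to list the entries of an archive. Because a JAR file is an ordinary ZIP archive, ZipFile can read it directly; the file name applet.jar is only an example.

    import java.io.IOException;
    import java.util.Enumeration;
    import java.util.zip.ZipEntry;
    import java.util.zip.ZipFile;

    // Minimal sketch: walk the entries of a JAR file with the java.util.zip
    // classes and report the uncompressed and compressed size of each entry.
    public class ListJarEntries {
        public static void main(String[] args) throws IOException {
            ZipFile jar = new ZipFile("applet.jar");     // example file name
            for (Enumeration entries = jar.entries(); entries.hasMoreElements();) {
                ZipEntry entry = (ZipEntry) entries.nextElement();
                System.out.println(entry.getName() + "  (" + entry.getSize()
                        + " bytes, compressed to " + entry.getCompressedSize() + ")");
            }
            jar.close();
        }
    }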
Java and CORBA fit together. With Java, you get portability of code and platform independence. With CORBA, you add location transparency and an enterprise-level object model that allows you to interoperate with a multitude of existing languages and integrated or legacy systems.
One of the most important decisions when designing your client applications and applets is how they should bootstrap into the CORBA system. With a good system design, you can make this bootstrapping phase straightforward and avoid bottlenecks along the way. You need to consider how CORBA servers should distribute CORBA object references so that clients can easily and efficiently find them. Some of your decisions may be made at the relatively early IDL design phase, while others can be implemented as late as when you deploy your clients and servers.
BOOTSTRAPPING A CORBA APPLICATION
A CORBA application needs to obtain only one CORBA object reference (also known as an Interoperable Object Reference, or IOR) to be able to connect to and participate in a CORBA system. From then on, a CORBA client or server should be able to obtain new IORs through normal IDL invocations. Therefore, the mechanism by which a client or server obtains this initial object reference can be vital to a CORBA system's overall accessibility and scalability. The most interoperable and scalable solution for locating CORBA objects is to use the CORBA Naming Service.
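As a minimal sketch of this bootstrap sequence (the name AccountService is hypothetical, and a real client would narrow the resolved reference with a helper class generated from its own IDL), the only well-known reference the client needs is the Naming Service itself:

    import org.omg.CORBA.ORB;
    import org.omg.CosNaming.NameComponent;
    import org.omg.CosNaming.NamingContext;
    import org.omg.CosNaming.NamingContextHelper;

    // Minimal sketch: bootstrap a CORBA client through the Naming Service.
    // The Naming Service host and port are normally supplied on the command
    // line (for example -ORBInitialHost and -ORBInitialPort with Sun's ORB).
    public class BootstrapClient {
        public static void main(String[] args) throws Exception {
            ORB orb = ORB.init(args, null);

            // The one initial reference: the Naming Service.
            org.omg.CORBA.Object nsRef = orb.resolve_initial_references("NameService");
            NamingContext naming = NamingContextHelper.narrow(nsRef);

            // Resolve the application's root object by name; every further IOR
            // can then be obtained through ordinary IDL invocations.
            NameComponent[] path = { new NameComponent("AccountService", "") };
            org.omg.CORBA.Object ref = naming.resolve(path);

            // In a real client the reference would be narrowed with a generated
            // helper, e.g. AccountHelper.narrow(ref), before being invoked.
            System.out.println("Resolved: " + ref);
        }
    }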
Java is becoming important for building real-world, mission-critical applications. Although Java is still not a perfect language, it is becoming more mature every day. We all know the advantages of Java, especially the “write once, run anywhere” approach, but we are also aware of the disadvantages (its performance being the most commonly offered reason for not using Java).
In spite of that, many large companies say they are developing their crucial business applications in Java. Modern applications are not monolithic programs; they are built of objects. Therefore, developers need a “glue” to bring all the pieces together and coordinate them into a functional application. Object location independence is an advantage that gives developers the ability to structure the application into multiple tiers.
For building distributed applications in Java, the natural choice is Remote Method Invocation (RMI), but there is another possibility: the Common Object Request Broker Architecture (CORBA). CORBA is a valuable alternative to RMI. We could describe CORBA as a superset of RMI, although the two technologies are not yet compatible. Both CORBA and RMI allow remote methods to be invoked independently of location; however, CORBA is more than just an object request broker. It offers programming-language independence and a rich set of object services and common facilities, all the way up to the business objects.
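For comparison, here is a minimal sketch of what location-independent invocation looks like on the RMI side. The QuoteService interface, its getPrice() method, and the registry name are hypothetical; the caller neither knows nor cares which machine actually services the call.

    import java.rmi.Naming;
    import java.rmi.Remote;
    import java.rmi.RemoteException;

    // Hypothetical remote interface; a CORBA version would express the same
    // contract in IDL and generate the stubs from it.
    interface QuoteService extends Remote {
        double getPrice(String symbol) throws RemoteException;
    }

    // Minimal RMI client sketch: look up the remote object by name and invoke
    // it exactly as if it were local.
    public class QuoteClient {
        public static void main(String[] args) throws Exception {
            QuoteService quotes =
                    (QuoteService) Naming.lookup("rmi://localhost/QuoteService");
            System.out.println("IBM: " + quotes.getPrice("IBM"));
        }
    }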
There are multiple factors that can affect the decision, and one of them is certainly performance.
In the first article of this series, Tim Matthews described how JavaSoft is developing a Java Cryptography Architecture (JCA) and extensions (Java Cryptography Extensions, or JCE). He described their contents and structure in the java.security package, and outlined their uses. In the second installment, I presented some actual code using the base functionality in the JCA. This third article describes programming using the JCE and multiple providers.
After reading this article, you will, I trust, be able to write a program in Java (an application or applet) that can encrypt or decrypt data using DES and create an RSA digital envelope with the extensions package. Beyond the specific example presented here, though, I hope you will understand the JCE model enough to be able to quickly write code for any operation in the package, and to be able to use multiple providers.
Before beginning, however, it is important to note that the security packages are not part of the JDK 1.0.2, only JDK 1.1 and above. Furthermore, there are significant differences between the security packages in JDK 1.1 and 1.2. This article (and the previous) describes features in 1.2. If you have not yet left 1.0.2 behind, now would be a good time to do so. After all, with 1.2, you are not only getting the security packages, you are also getting improved cloning, serialization and many other features.
There is an important change from JDK 1.1 to 1.2: the JCE is in a different package.
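As a minimal sketch of the kind of JCE code discussed here (assuming a provider such as SunJCE that supplies a DES implementation), the following generates a DES key, encrypts a short message, and decrypts it again; a real application would also protect the key, for example by wrapping it with RSA to form a digital envelope:

    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;

    // Minimal JCE sketch: symmetric encryption and decryption with DES.
    public class DesSketch {
        public static void main(String[] args) throws Exception {
            // Generate a random DES key.
            KeyGenerator keyGen = KeyGenerator.getInstance("DES");
            SecretKey key = keyGen.generateKey();

            // Encrypt (the provider's default mode and padding are used).
            Cipher cipher = Cipher.getInstance("DES");
            cipher.init(Cipher.ENCRYPT_MODE, key);
            byte[] ciphertext = cipher.doFinal("Sensitive data".getBytes("UTF8"));

            // Decrypt with the same key.
            cipher.init(Cipher.DECRYPT_MODE, key);
            byte[] plaintext = cipher.doFinal(ciphertext);
            System.out.println(new String(plaintext, "UTF8"));
        }
    }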
Java has quickly evolved into more than a simple-minded language used to create cute animations on the Web. These days, a large number of serious Java developers are building enterprise-critical applications that are being deployed by large companies around the world. Applications currently being developed on the Java platform range from traditional spreadsheets and word processors to accounting, human resources, and financial planning applications. Because these applications are complex and use rapidly evolving Java technology, companies need to employ a rigorous quality assurance program to produce a high-quality and reliable product. Quality assurance and test teams must get involved early in the product development life cycle, creating a sound test plan and applying an effective test strategy to ensure that these enterprise-critical applications provide accurate results and are defect-free. Accuracy is critical for users who apply these results to crucial decisions about their business, their finances, and their enterprise. I present a case study of how an effective testing strategy, focused on sub-system-level automation, was applied to successfully test a critical real-world Java-based financial application.
THE APPLICATION
The application is being developed by a leading financial services company (hereafter referred to as the client) and is targeted toward both individual and institutional investors, particularly those interested in managing their finances and retirement plans. Reliable Software Technologies (RST) provided the QA/Test services and the test team (hereafter referred to as the QA/Test team) to the client.
This is the first in a series of articles focusing on the Swing components that will be released as part of the Java Foundation Classes in JDK 1.2. I'll review the underlying infrastructure for the Swing components, in addition to some of the components Swing has to offer, in my next few installments. This month, after a brief history of the Java Foundation Classes, I'll discuss Swing's implementation of the Model/View/Controller (MVC) architecture.
THE JAVA FOUNDATION CLASSES
In addition to being a vast improvement over its predecessor, the 1.1 AWT lays the foundation for one of the most visible core Java APIs: the Java Foundation Classes (JFC). The JFC consists of the 1.1 (and later) AWT, the Swing components, the 2D API, and the Accessibility API.
HISTORY OF THE JFC
Back in 1995, no one anticipated the impact that Java was about to have on the modern computing world. As a language originally designed for consumer electronic devices, Java was suddenly catapulted into the stratosphere as the language for developing Web software. Over the next couple of years, Java would mature quickly, not only the language itself but also core packages such as the AWT.
The original AWT was not designed to be a high-powered UI toolkit; instead, it was envisioned as providing support for simple user interfaces in simple applets. The original AWT was fitted with an inheritance-based event model that did not scale well.
Business application development and deployment using Java has become much more popular in the past year. This is partly because of the redesigned java.awt library in the JDK 1.1, as well as other third-party JDK 1.1-compliant GUI class libraries and IDEs. Developers can now build sophisticated and complex GUI front ends for their applications. As these front ends become heavier, special consideration needs to be given to the strategy used to deploy the client side of a client/server application.
There are several different options available for deploying Java client applications. Some of the options are fairly familiar, while others are not. Even if you understand what options are available, it is not always as obvious which should be used in a given situation. This article reviews options available for client-side deployment of Java applications along with the advantages and disadvantages of each strategy.
TRADITIONAL DEPLOYMENT
In most client/server applications, the deployment options for the client piece of the application are fairly limited. Usually, a client platform and programming language are chosen before development begins, and the application is built with the target platform in mind. For example, a telephone invoicing client GUI application could be built using C++ on a Windows NT machine. On completion of the coding for the application, it would have to be manually or remotely installed on every Windows NT client machine that needed to use the application.
Anyone who has ever tried to construct modular, object-oriented user interfaces using the AWT knows how hard it can be. The result can easily end up being difficult to debug, complex to understand and maintain, and certainly not reusable (except by cutting and pasting!). However, huge benefits can be obtained by separating the user interface from the application code. This has been acknowledged for a long time, and the Java Development Kit included the Observer interface and the Observable class to support it. With the addition of the delegation event model in the JDK 1.1, it also became possible to separate the view and control parts of the interface, that is, to separate the interface itself from the control elements (what to do when a user presses a button) and from the application code. Such a separation is often referred to as a model-view-controller architecture (or just MVC for short). The MVC originated in Smalltalk, but the concept has been used in many places. This article considers what the MVC is, why it is a good approach to GUI construction, and what features in Java support it. It then describes a GUI application that has been built using the MVC architecture. The source code for this application is provided as an appendix.
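The sketch below is not the appendix application, but it shows the separation in miniature: the model extends Observable and knows nothing about its views, the view implements Observer and simply refreshes itself when notified, and the controller is an ActionListener registered through the delegation event model.

    import java.awt.BorderLayout;
    import java.awt.Button;
    import java.awt.Frame;
    import java.awt.Label;
    import java.awt.event.ActionEvent;
    import java.awt.event.ActionListener;
    import java.util.Observable;
    import java.util.Observer;

    // Model: holds the state and notifies observers; it has no AWT dependency.
    class CounterModel extends Observable {
        private int count;
        public int getCount() { return count; }
        public void increment() {
            count++;
            setChanged();
            notifyObservers();          // push the change to every registered view
        }
    }

    // View: renders the model and refreshes itself whenever the model changes.
    class CounterView extends Label implements Observer {
        private final CounterModel model;
        CounterView(CounterModel model) {
            super("Count: 0");
            this.model = model;
            model.addObserver(this);
        }
        public void update(Observable o, Object arg) {
            setText("Count: " + model.getCount());
        }
    }

    // Controller wiring: the button's ActionListener translates a user gesture
    // into a model operation (window-close handling is omitted for brevity).
    public class CounterApp {
        public static void main(String[] args) {
            final CounterModel model = new CounterModel();
            Button button = new Button("Increment");
            button.addActionListener(new ActionListener() {
                public void actionPerformed(ActionEvent e) { model.increment(); }
            });

            Frame frame = new Frame("MVC sketch");
            frame.setLayout(new BorderLayout());
            frame.add(new CounterView(model), BorderLayout.NORTH);
            frame.add(button, BorderLayout.SOUTH);
            frame.pack();
            frame.setVisible(true);
        }
    }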
The basic synchronization primitives provided in Java are very easy to use, and well-suited to the majority of situations a programmer sees. In many respects they are simpler, easier to use, and less prone to errors than their counterparts in POSIX threads (Pthreads). They do not, however, cover all situations and extending them is not always obvious.
PROVIDING PROTECTION
The first thing that synchronization techniques must provide is a method to ensure that updates to shared data are done in a correct fashion. In particular, many updates comprise several distinct operations and those operations must be done together, atomically. The canonical example is a program that updates your bank account. In Listing 1, it should be obvious that thread number 1 could overwrite the data that thread number 2 had just saved.
The solution to this is to ensure that each of those operations happen atomically. In Java, this is done with a synchronized method (objects with such synchronization are known as monitors). Now the second method will have to wait for the one called first to complete before it can start. The code in Listing 2 will operate correctly.
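Since the listings themselves are not reproduced here, a minimal sketch in the same spirit might look like the following; because both methods are synchronized on the account object, the read-modify-write of the balance is atomic and one thread can no longer overwrite another's update:

    // Minimal sketch of a monitor: each synchronized method acquires the
    // object's lock, so the read, the computation, and the write of the
    // balance happen as one atomic step.
    public class Account {
        private double balance;

        public synchronized void deposit(double amount) {
            double newBalance = balance + amount;   // read and compute ...
            balance = newBalance;                   // ... then write, under the lock
        }

        public synchronized double getBalance() {
            return balance;
        }
    }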
Pthreads accomplishes the same behavior by using explicit mutex locks. While functionally equivalent, explicit mutexes add the complexity of the programmer having to remember which locks to use and to unlock them after use. This is not a big problem, but when writing complex code that has numerous branches and return statements, mistakes do happen (I speak from experience!) and they can be irritating to track down.