Introduction to Server-Side Component Software

 

Problems with traditional software development

                A software component is a piece of software that offers a specific piece of functionality and can be plugged into other software to extend its capabilities. Before component software appeared, applications were typically built as one bulky, monolithic piece with all functionality baked in. Source code was the only practical basis for reuse, and customizing a module could take a complex path depending on the requirements. Another problem with traditional software is that the different modules are bound together at compile time, so even a simple change to one module demands recompiling those modules, or the whole application. Development and maintenance therefore become complex, often amounting to developing the same modules multiple times for different projects. Because such software exposes poor interfaces, its modules have to be treated as white boxes. These problems stem from the lack of proper standards for building software for a given purpose.

                For example, a simple change to a library or module requires the whole application to be recompiled and retested for backward compatibility; if the software is already in production, the complications multiply, amounting to almost another full iteration of the software life cycle. Another issue is granularity: unless software is granular enough to guarantee minimal coupling, it cannot be reused in a truly generic way. With all these complexities, software ends up looking like a chunk of code rather than an intelligent service provider.

 

What is the solution?

                To fix these inefficiencies, partial solutions such as DLLs appeared, but none of them solved all of the problems described above. A DLL, for example, is a Windows-specific concept and does not offer object orientation. Then came component software, which addresses all of these inefficiencies with ease and brings a few additional advantages with it.

                Component software is built squarely on the concept of interfaces. Splitting software into interfaces and implementations brings dramatic gains in efficiency and reusability. Interfaces are the promises a component makes about the functionality it offers; the implementation fulfills those promises behind the scenes. Later versions of the component can improve the implementation without breaking the existing interfaces, and if the interfaces themselves must change, the changes are incremental. This lets the software be treated as a black box, so it begins to act like an intelligent service provider. The next improvement is granularity. Component architectures are designed to bring a good deal of granularity to software: each component offers a well-defined piece of functionality in a very generic way. If a large component is needed, it can be composed from a number of small components, each offering a unit of functionality, with the large component integrating them.
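
                A minimal Java sketch of this interface/implementation split. The SpellChecker and DictionarySpellChecker names are hypothetical, invented for illustration and not tied to any particular component standard; the point is only that clients depend on the interface while the class behind it can be replaced in a later release.

```java
// SpellChecker.java -- the published contract: the promise the component makes.
public interface SpellChecker {
    boolean isCorrect(String word);
}

// DictionarySpellChecker.java -- the implementation that fulfills the promise.
// It can be rewritten in a future version without breaking clients that only
// know the SpellChecker interface.
class DictionarySpellChecker implements SpellChecker {
    private final java.util.Set<String> dictionary = new java.util.HashSet<String>();

    DictionarySpellChecker() {
        // A real component would load its dictionary from a file or database.
        dictionary.add("component");
        dictionary.add("interface");
    }

    public boolean isCorrect(String word) {
        return word != null && dictionary.contains(word.toLowerCase());
    }
}
```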

                 The component model also reduces coupling among modules by using dynamic binding: software that uses a component is compiled only against the component's interfaces, and the implementation is located and contacted only at runtime. Once an interface is published, the implementation behind it can change from release to release; when a new version of a component ships, the rest of the software does not have to be recompiled, and all that is needed is to replace the old version with the new one. Component software addresses another serious problem as well. Before it existed, objects were confined to a single process, and there was no standardized mechanism for using an object from outside that process. Component software applies distributed-computing concepts to object-oriented usage: objects are made network (even Internet) aware, so a remote object can be used just like a local object. Extending object orientation across processes, machines, and platforms is arguably component software's greatest achievement. To make this work, the stub/skeleton (or proxy/stub) concept was devised: a small piece of software sits with both the component and the component's user, represents the component locally, and performs all the necessary data and control transfer to the remote process or machine. This is the key to the interoperability of components.
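
                To make the dynamic-binding idea concrete, here is a hedged sketch reusing the hypothetical SpellChecker interface above: the client is compiled only against the interface, and the concrete implementation class is chosen and instantiated at runtime, so a newer implementation can be dropped in without recompiling the client.

```java
// SpellCheckerClient.java -- compiled only against the SpellChecker interface.
public class SpellCheckerClient {
    public static void main(String[] args) throws Exception {
        // The implementation class name could come from a properties file,
        // a registry, or a naming service; here it is read from the command line.
        String implClassName = args.length > 0 ? args[0] : "DictionarySpellChecker";

        // Locate and bind the implementation at runtime.
        SpellChecker checker =
                (SpellChecker) Class.forName(implClassName)
                                    .getDeclaredConstructor()
                                    .newInstance();

        System.out.println("component -> " + checker.isCorrect("component"));
    }
}
```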

 

How Component Architectures Deliver Their Promises

                Most component architectures publish a specification that lays down the basic rules for how components should be built; Microsoft, for example, released the COM specification for building COM components and their clients. Both the components and their clients agree to this set of rules, and it is this agreement that fixes the inefficiencies of traditional software. Once software follows the rules, a runtime environment provides the basic mechanisms for using components, so that components do not have to implement the essential, repetitive tasks themselves: publishing a component, communicating to and from components, message passing, transferring data across processes and machines, and so on. Beyond these basics, today's mature runtimes (often termed application servers) provide a very advanced set of features called services.

                Most component architectures provide a similar set of such services, possibly under different names; typical examples, which reappear in the architectures discussed below, include transaction support, messaging, naming and directory services, resource pooling, data access, and load balancing or clustering.

 

What is Server-Side Component Software?

 

                Server-side component software is component software specialized for a particular purpose: running on the server (middle) tier of a distributed application. This model emerged with the evolution of 3-tier and n-tier architectures. Its characteristics follow from that role: the components live inside a runtime or application server rather than on the client, they serve many clients concurrently, and they rely on the runtime's services, such as transactions, resource pooling, messaging, and data access, instead of implementing them themselves.

 

What are the most popular Server-Side Component Architectures?

Today many mature component architectures are available that are optimized for the server side. They are not only specialized for a particular set of requirements but also effective at providing a high level of abstraction, and it is this quality of abstraction toward which the server-side component architectures are racing. The most popular server-side component architectures are described briefly below.

COM / COM+

                COM is one of the oldest component architectures and has kept evolving over the years. It started life as the underlying technology of OLE, which offered a stack of diverse features; OCX controls, very popular in the Windows community, are based on OLE's automation concept. COM is built into the core of the Windows operating systems: its runtime is embedded directly in the operating system, and the operating system itself uses COM for its own functionality. COM is capable of delivering the best performance for mission-critical software. Although COM has been ported to non-Windows platforms, it has not been well received there; its main disadvantages are its complexity and learning curve, which is why CORBA and EJB have been preferred over COM in the last few years.

                To give COM better mileage, Microsoft introduced COM+ as its successor. The theme of COM+ is a richer abstraction over COM together with a handful of services that make COM development easier. COM+ has an improved runtime and draws on other technologies such as MTS (for transaction support), ODBC (for database access), MSMQ (for messaging), and Wolfpack (for clustering). A good part of COM/COM+'s appeal is that it comes free with Windows, so no additional investment is needed to buy its runtime separately. Even today, COM/COM+ is one of the best options for performance-demanding software, but it has to offer more abstraction to compete with its rivals CORBA and EJB. Microsoft's C# language (part of .NET) will probably offer better features for component development.

CORBA

CORBA can be thought of as the father of today's component architectures. It was developed by 800+ companies after years of brainstorming and struggle. Its main advantage is cross-language and cross-platform usage; added to this, CORBA is not proprietary, which helped it gain wide acceptance in the industry. CORBA defines the IIOP protocol, which guarantees interoperability and compatibility across different vendors' CORBA implementations. CORBA services are feature-rich for both vertical and horizontal markets, and CORBA is an evolving architecture that interoperates with other component architectures such as RMI. Its greatest accomplishment is enabling C++ and Java to talk to each other in a platform-neutral and natural way. Each CORBA vendor, such as Inprise or IONA, offers additional features with its ORB that distinguish that ORB from the others. Because the CORBA specification is developed by 800+ companies, overnight changes cannot happen, and some features cannot be offered effectively because the specification must remain language neutral.
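
A hedged sketch of what the Java side of a CORBA client typically looks like. The SpellChecker and SpellCheckerHelper types are assumed to be generated by an IDL compiler from a hypothetical IDL interface; only the ORB and Naming Service calls are standard org.omg APIs.

```java
import org.omg.CORBA.ORB;
import org.omg.CosNaming.NamingContextExt;
import org.omg.CosNaming.NamingContextExtHelper;

public class CorbaClientSketch {
    public static void main(String[] args) throws Exception {
        // Initialize the ORB; the command-line args typically carry ORB
        // configuration such as the initial naming service host and port.
        ORB orb = ORB.init(args, null);

        // Look up the remote object through the CORBA Naming Service.
        org.omg.CORBA.Object nsRef = orb.resolve_initial_references("NameService");
        NamingContextExt naming = NamingContextExtHelper.narrow(nsRef);

        // SpellChecker / SpellCheckerHelper are hypothetical classes generated
        // from an IDL definition; the server could just as well be written in C++.
        SpellChecker checker =
                SpellCheckerHelper.narrow(naming.resolve_str("SpellChecker"));

        System.out.println(checker.isCorrect("component"));
    }
}
```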

RMI

Remote Method Invocation (RMI) is Sun's Java-based component architecture. Since it is a Java-only technology, it imposes very few restrictions in its architecture. It is a lightweight framework with support for features such as distributed exceptions, but it is a plain-vanilla offering without advanced services: when building a software system on RMI, such features should not be expected from it, or they must be built on top of it. Because RMI is "for Java, by Java and to Java", it naturally meets cross-platform requirements. To its advantage, RMI is equipped with a newer protocol, RMI-IIOP, which provides interoperability between RMI and CORBA. Since Sun (and others) give Java away for free and its implementation is not proprietary, RMI has been well received by the industry for its distributed-computing capabilities.
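
A minimal RMI sketch, again using the hypothetical SpellChecker example: a remote interface, its server-side implementation, and a registry lookup that lets the client invoke the object as if it were local while the generated stub handles the data and control transfer.

```java
import java.rmi.Naming;
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.server.UnicastRemoteObject;

// The remote interface: every method declares RemoteException because
// calls may cross process and machine boundaries.
interface RemoteSpellChecker extends Remote {
    boolean isCorrect(String word) throws RemoteException;
}

// Server-side implementation; extending UnicastRemoteObject exports the
// object so the RMI runtime can serve it through a stub.
class RemoteSpellCheckerImpl extends UnicastRemoteObject implements RemoteSpellChecker {
    protected RemoteSpellCheckerImpl() throws RemoteException {
        super();
    }

    public boolean isCorrect(String word) throws RemoteException {
        return word != null && !word.trim().isEmpty();   // placeholder logic
    }
}

public class RmiSketch {
    public static void main(String[] args) throws Exception {
        // Server side: start a registry and publish the object under a name.
        LocateRegistry.createRegistry(1099);
        Naming.rebind("rmi://localhost/SpellChecker", new RemoteSpellCheckerImpl());

        // Client side: look the object up and call it like a local object.
        RemoteSpellChecker checker =
                (RemoteSpellChecker) Naming.lookup("rmi://localhost/SpellChecker");
        System.out.println(checker.isCorrect("component"));
    }
}
```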

EJB

               The Enterprise JavaBeans (EJB) framework is the latest of the component architectures, and its theme is "abstraction, abstraction and abstraction". EJB can be defined as RMI plus sophisticated services: RMI provides the distributed-computing base, and on top of it a set of sophisticated services is built, which makes EJB a next-generation component framework. EJB supports transactions, events, resource pooling (for threads, for example), data access, and more. To use these services one does not have to write code for them; the developer simply tells the EJB runtime which services are required and how, and this is the advantage that has made EJB extremely popular. EJB's runtime is provided as an EJB container plus an EJB server: the container acts like a manager, handling responsibilities such as resource pooling, and instructs the EJB server about lower-level tasks such as data handling.

               One main difference between EJB and other component architectures is that EJB distinguishes beans that represent data (entity beans) from beans that do processing (session beans). EJB draws on a wide variety of technologies such as JTS/JTA (for transaction support), JNDI (for naming), and JDBC (for database access). EJB implementations nowadays come with very advanced features such as load balancing, distributed transactions, and fault tolerance. Thanks to the level of abstraction it provides, EJB matured and captured a good share of the market within just a couple of years. Its only disadvantage is that it is a Java-only technology, but through RMI-IIOP it can talk to CORBA, and therefore to components written in other languages such as C++, so it can be used alongside legacy systems.
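
               A hedged sketch of a stateless session bean in the EJB 2.x style, reusing the hypothetical SpellChecker example. The class and interface names are invented for illustration; the notable point is that transactions, pooling, and security are not coded in the bean but requested declaratively from the container through the deployment descriptor.

```java
import java.rmi.RemoteException;
import javax.ejb.CreateException;
import javax.ejb.EJBHome;
import javax.ejb.EJBObject;
import javax.ejb.SessionBean;
import javax.ejb.SessionContext;

// Remote interface: what clients see; every call goes through the container.
interface SpellCheckerRemote extends EJBObject {
    boolean isCorrect(String word) throws RemoteException;
}

// Home interface: how clients obtain bean instances from the container.
interface SpellCheckerHome extends EJBHome {
    SpellCheckerRemote create() throws CreateException, RemoteException;
}

// The bean class: a stateless session bean. Services such as transactions
// and pooling are declared in the deployment descriptor, not coded here.
public class SpellCheckerBean implements SessionBean {
    public boolean isCorrect(String word) {
        return word != null && !word.trim().isEmpty();   // placeholder logic
    }

    // Container callbacks required by the EJB 2.x contract.
    public void ejbCreate() {}
    public void ejbRemove() {}
    public void ejbActivate() {}
    public void ejbPassivate() {}
    public void setSessionContext(SessionContext ctx) {}
}
```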