An Introduction to Data Warehousing


Introduction

Structure of a Data Warehouse

Types of Data Warehouses

Data Warehouse Architecture

Data Warehouse Modeling: Key to Decision Support

OLAP in the Data Warehouse Environment

Data Warehousing for Parallel Environments

Data Warehouse Tools and Products


Introduction

Definitions

Data Mining Definition

Data mining is the process of extracting previously unknown but significant information from large databases and using it to make crucial business decisions. Data mining transforms the data into information and tends to be bottom-up.

Data Mining Process

  1. The data extraction process extracts useful subsets of data for mining.
  2. Aggregation may be done if summary statistics are useful.
  3. Initial searches should be carried out on the aggregated data to develop a bird's-eye view of the information (extracted information).
  4. Focusing on the detailed data then provides a clearer view (assimilated information).
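
The flow above can be sketched in a few lines of Python. This is a toy illustration only; the file name, column names, and threshold rule are all hypothetical:

    # A toy version of the four-step mining flow (pandas; hypothetical data).
    import pandas as pd

    # Step 1: extract a useful subset of the data for mining.
    sales = pd.read_csv("sales_history.csv")        # hypothetical source file
    subset = sales[sales["year"] >= 1997]           # recent years only

    # Step 2: aggregate, since summary statistics are useful here.
    summary = subset.groupby("region")["revenue"].agg(["sum", "mean"])

    # Step 3: search the aggregated data first (bird's-eye view).
    weak = summary[summary["sum"] < summary["sum"].median()]

    # Step 4: focus on the detailed data behind the interesting aggregates.
    detail = subset[subset["region"].isin(weak.index)]
    print(detail.sort_values("revenue").head(10))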

Operational versus Informational Systems

Why not use the operational environment for decision support?


Structure of a Data Warehouse


Types of Data Warehouses

Business Factors Deciding the Type of Data Warehouse

  1. Business Objectives
    Many enterprises know that they need a data warehouse but are not certain about their priorities or options. These priorities shape the warehouse model in terms of its size, location, frequency of use, and maintenance.
  2. Location of the current data
    One of the major challenges is understanding where the data is and what we know about it. Complicating the issue is the fact that many legacy applications have redefined the old historical data to conserve space and maximize performance for archiving and backup.
  3. Need to move the data
    The data movement can only be decided by considering a combination of
    • Quality of the existing data
    • The size of the usable data
    • Data design
    • Performance impact of a direct query
    • Performance impact on the current production systems
    • Availability and ease of use of the tool
  4. Movement of data
    Many tools are available to move any type of data to any place. But a lack of understanding of the attributes of the data makes it very difficult to use any such tools effectively.
  5. Location to which the data needs to be moved
    This is a significant issue in the design of a data warehouse. Before deciding to move data, you must consider whether the target data store is host-based or LAN-based.
  6. Data preparation
    Once data is moved, you need to consider a number of factors to refresh the data. Just replacing the existing data in a field with new information will not reflect the historical change in data over time. Therefore, you must choose either to replace the data or to look for ways to update it based on incremental changes (a sketch of this choice follows this list). You must also choose how to coordinate master files with transaction files.
  7. Query and reporting requirements
    For a newly built data warehouse, a range of tools will need to be deployed to address the different needs of information workers, advanced warehouse users, application developers, executive users, and other endusers.
  8. Integration of the data model
    Many enterprises design data models as part of the data warehouse effort. If you choose that approach, you must integrate the results into your development process and the enduser tool facilities.
  9. Management and Administration
    Once the data warehouse is built, you must put mechanisms and policies in place for managing and maintaining the warehouse.
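
To make the replace-versus-update choice in item 6 concrete, here is a minimal sketch in Python; the customer field and date handling are hypothetical. A full replace loses the historical change, while an incremental update preserves it:

    # Replace vs. incremental update of a warehouse field (hypothetical data).
    from datetime import date

    # Customer 101's address history, as (effective_date, value) pairs.
    warehouse = {101: [("1998-12-01", "NY")]}

    def full_replace(cust, value):
        # Overwrites the field: the change over time is lost.
        warehouse[cust] = [(date.today().isoformat(), value)]

    def incremental_update(cust, value):
        # Appends a dated version: history is preserved for analysis.
        warehouse[cust].append((date.today().isoformat(), value))

    incremental_update(101, "CA")
    print(warehouse[101])   # both the NY and CA records remain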

Types of Data Warehouses

  1. Host-Based Data Warehouses
    • Host Based (MVS) Data Warehouses
        Data warehouses that reside on high-volume databases on MVS are the host-based type of data warehouses.
        Such data warehouses
        1. usually have very high volumes of data storage
        2. require support for both MVS and client-based report and query facilities.
        3. have very complex source systems
        4. require continuous maintenance since these must be used for mission-critical purposes.
        Steps to build such a data warehouse (the three phases are sketched at the end of this section):
        • Unload Phase involves selecting and scrubbing the operational data.
        • Transform Phase for translating it into an appropriate form and defining the rules for accessing and storing it.
        • Load phase for moving the data directly into DB2 tables or a special file for moving it to another database or non-MVS warehouse.
    • Host Based (Unix) Data Warehouses
        Oracle and Informix RDBMSs provide the facilities for such data warehouses. Both of these databases can extract data from MVS-based databases as well as a larger number of other UNIX-based databases.
  2. Host-Based Single-Stage (LAN) Data Warehouses
      With a LAN-based warehouse, data delivery can be managed either centrally or from the workgroup environment so that business groups can meet and manage their own information needs without burdening centralized IT resources.

      Limitations/challenges:

      • LAN-based warehousing solutions are normally limited by both DBMS and hardware scalability factors.
      • Many LAN based enterprises have not implemented adequate job scheduling, recovery management, organized maintenance, and performance monitoring procedures to support robust warehousing solutions.
      • Often these warehouses are dependent on other platforms for source data. Building an environment that has data integrity, recoverability, and security needs careful design, planning and implementation. Otherwise, synchronisation of changes and loads from sources to server could cause innumerable problems.
  3. LAN-Based Workgroup Data Warehouses
      In this warehouse, you extract data from a variety of sources (like Oracle, IMS, DB2) and provide multiple LAN-based warehouses.
      Designed for workgroup environment, it is ideal for any business organization that wishes to build a data warehouse, often called a data mart. Usually requires minimal initial investment and technical training. Its low startup cost and ease of use allow a workgroup to quickly build and easily manage its own custom data mart.

      Common Issues:

      • A lack of understanding of how to distribute data and how to support intentional data redundancy for performance reasons.
      • Many organizations may not have adequate job scheduling, recovery management, and performance monitoring to support robust warehousing solutions.
      • Although providing positive cost benefits, LAN-based warehousing solutions can be constrained by both hardware and DBMS limitations.
      • For many large enterprises, similar skills in database design, maintenance, and recovery are not present in every workgroup environment.
  4. Multistage Data Warehouses
      This configuration is well suited to environments where endusers in different capacities require access both to current, detailed data for up-to-the-minute tactical decisions and to summarized, cumulative data for long-term strategic decisions. Both the ODS (Operational Data Store) and the data warehouse may reside on host-based or LAN-based databases, depending on volume and usage requirements. Typically the ODS stores only the most recent records; the data warehouse stores the historical evolution of the records.
  5. Stationary Data Warehouses
      In this type of data warehouse, users are given direct access to the data at its sources, instead of the data being moved from the sources. For many organizations, infrequent access, volume issues, or corporate necessities dictate such an approach.
      This is likely to impact performance, since users will be competing with the production data stores.
      Such a warehouse requires sophisticated middleware, possibly with a single interface to the user. An integrated metadata repository becomes an absolute necessity in this environment.
  6. Distributed Data Warehouses
      There are at least two types of distributed data warehouses and their variations for the enterprise: local warehouses distributed throughout the enterprise, and a global warehouse.
      This is useful when there are diverse businesses under the same enterprise umbrella. The approach may be necessary if a local warehouse already existed before the business joined the enterprise.
      Local data warehouses have the following common characteristics:
      1. Activity occurs at local level
      2. Majority of the operational processing is done at the local site.
      3. Local site is autonomous
      4. Each local data warehouse has its own unique structure and content of data.
      5. The data is unique and of prime importance to that locality only.
      6. Majority of the data is local and not replicated.
      7. Any intersection of data between local data warehouses is coincidental.
      8. Local site serves different geographic regions.
      9. Local site serves different technical communities.
      The primary motivation for implementing distributed data warehouses is that integration of the entire enterprise's data does not make sense. It is reasonable to assume that an enterprise will have at least some natural intersections of data from one local site to another. If there is any intersection, it is usually contained in a global data warehouse.
  7. Virtual Data Warehouses
      The data warehouse is a great idea, but it is complex to build and requires investment. Why not use a cheap and fast approach that eliminates the transformation steps, the metadata repository, and the separate warehouse database? This approach is termed the 'virtual data warehouse'.
      To accomplish this, four kinds of information need to be defined:
      1. A data dictionary containing the definitions of the various databases.
      2. A description of the relationship among the data elements.
      3. The description of the way users will interface with the system.
      4. The algorithms and business rules that define what to do and how to do it.
      Disadvantages:
      1. Since queries compete with production data transactions, performance can be degraded.
      2. There is no metadata, no summary data, and no individual DSS (Decision Support System) integration or history. All queries must be repeated, causing an additional burden on the system.
      3. There is no refreshing process, causing the queries to be very complex.
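
As a rough illustration of the unload, transform, and load phases described under host-based data warehouses above, here is a minimal Python sketch; the file names, field layout, and scrubbing rule are hypothetical, and sqlite stands in for the DB2 target:

    # Unload / transform / load in miniature (hypothetical files and layout).
    import csv
    import sqlite3

    # Unload phase: select and scrub the operational data.
    with open("operational_extract.csv") as f:
        rows = [r for r in csv.DictReader(f) if r["STATUS"] != "DELETED"]

    # Transform phase: translate the data into the warehouse's form.
    cleaned = [(int(r["CUSTNO"]), round(float(r["AMT"]), 2)) for r in rows]

    # Load phase: move the data into the target table.
    conn = sqlite3.connect("warehouse.db")
    conn.execute("CREATE TABLE IF NOT EXISTS sales (cust_id INTEGER, amount REAL)")
    conn.executemany("INSERT INTO sales VALUES (?, ?)", cleaned)
    conn.commit()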

Data Warehouse Architecture

Data Access Factors

  1. No single view of data
    Some data is accessible only to the operating departments that use the data. Some data is duplicated or subsetted for specific application needs.
  2. Different user tools
    Different data stores are accessed by different tools. The enduser, who must access data from several sources, must learn several tools.
  3. Lack of consistency
    Often the definitions used to describe data are not available. Whether the data is identical from one data store to another is often unknown, making it difficult to combine or compare.
  4. Lack of useful historical capability
    Most operational applications do not actually keep or manage historical information. Those systems generally archive data onto various external media, which further compounds the problem of accessing historical information.
  5. Conflict between application types
    Informational and operational applications usually have different data designs, data requirements and approaches to accessing data. Therefore concurrent use of a shared database is often a problem.
  6. Problems in administering data
    These problems arise from the multiplicity and complexity of data and their support tools.
  7. Proliferation of complex extract applications
    Because operational data is kept in different types of data stores and endusers increasingly want access to that data, they have to deal with an increasing number of differing applications and interfaces. Most existing informational applications are based upon data which is extracted periodically from operational databases, enhanced in some way, and then totally reloaded into informational data stores.

Data Configurations

  1. Single Copy Configuration
    Only one copy of data is used for both operational and informational applications.
  2. Reconciled Data Configuration
    In this configuration, a new level is present: the reconciled data. It contains detailed records from the real-time level which have been reconciled (cleaned, adjusted, enhanced) so that the data can be used by informational applications.
  3. Derived Data Configuration
    This configuration provides a derived data level of data store. Derived data has its origin in detailed, actual records and can contain derivations of the detailed records (such as summarizations or joins) or semantic subsets of the detailed records (based on a variety of criteria, including time). Each set can represent a particular point in time, and the sets can be kept to record history (see the sketch after this list).
  4. Hybrid Data Configuration
    This configuration introduces the notion of deriving data from the reconciled level (instead of directly from the real-time level). Since both the reconciled and derived levels typically reside on relational data stores, this task is significantly simpler than creating derived data directly from heterogeneous real-time data.
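
To illustrate the derived data configuration in item 3 (the sketch referenced there): a summarized set is derived from detailed records and stamped with the point in time it represents, so history can be kept. The schema and as-of date are hypothetical, and sqlite is used purely for illustration:

    # Deriving a summarized, time-stamped set from detailed records.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE detail (region TEXT, amount REAL)")
    conn.executemany("INSERT INTO detail VALUES (?, ?)",
                     [("East", 100.0), ("East", 50.0), ("West", 75.0)])

    # The derived level: a summarization of the detailed records,
    # labeled with the point in time it represents.
    conn.execute("CREATE TABLE derived (as_of TEXT, region TEXT, total REAL)")
    conn.execute("""INSERT INTO derived
                    SELECT '1999-06-30', region, SUM(amount)
                    FROM detail GROUP BY region""")

    for row in conn.execute("SELECT * FROM derived"):
        print(row)   # ('1999-06-30', 'East', 150.0), ('1999-06-30', 'West', 75.0)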

Architectural Components

Though each data warehouse is different, all are characterized by a few key components.

A Data Warehouse Architecture Model

  1. Operational Database/External Database Layer
  2. Information-Access Layer
  3. Data-Access Layer
  4. Data Directory (Metadata) Layer
  5. Process Management Layer
  6. Application Messaging Layer
  7. Data Warehouse Layer
  8. Data Staging Layer
  1. Operational Data/External Data Layer
      Operational systems process data to support critical operational needs. In order to do that, operational databases have been historically created to provide an efficient processing structure for a relatively small number of well-defined business transactions.
  2. Information-Access Layer
      The layer that the enduser deals with directly. In particular, it represents the tools that the enduser normally uses day to day, for example, Excel, Lotus 1-2-3, Access, SAS.
  3. Data-Access Layer
      This layer is involved with allowing the information-access layer to talk to the operational layer. The common data language is SQL.
      The data-access layer not only spans different DBMSs and file systems on the same hardware; it also spans manufacturers and network protocols.
  4. Data Directory (Metadata) Layer
      In order to provide for universal data access, it is necessary to maintain some form of data directory or repository of metadata information.
      Ideally, endusers should be able to access data from the data warehouse (or from the operational databases) without having to know where that data resides or the form in which it is stored (a toy sketch of such a directory follows this list).
  5. Process Management Layer
      The process management layer is involved in scheduling the various tasks that must be accomplished to build and maintain the data warehouse and data directory information. The process management layer can be thought of as the scheduler or the high-level job control for the many processes (procedures) that must occur to keep the data warehouse up to date.
  6. Application Messaging Layer
      The application messaging layer has to do with transporting information around the enterprise computing network. Application messaging, for example, can be used to isolate applications, operational or informational, from the extract data format on either end.
      Application messaging is the transport system underlying the data warehouse.
  7. Data Warehouse (Physical) Layer
      The (core) data warehouse is where the actual data used primarily for informational purposes resides. In some cases, one can think of the data warehouse simply as a logical or virtual view of data.
  8. Data Staging Layer
      Data staging is also called replication management, but in fact, it includes all of the processes necessary to select, edit, summarize, combine, and load data warehouse and information-access data from operational and/or external databases.
      It may also involve data quality analysis programs and filters that identify patterns and data structures within existing operational data.
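
The data directory (metadata) layer in item 4 can be pictured as a lookup from a business name to the data's physical location and form, so that endusers need to know neither. A toy sketch, with every entry hypothetical:

    # A toy metadata directory: logical names mapped to physical homes.
    metadata = {
        "customer_revenue": {
            "store":   "DB2 on MVS",
            "table":   "PROD.REVENUE",
            "format":  "relational",
            "refresh": "nightly",
        },
    }

    def locate(logical_name):
        # The enduser asks for data by business name only.
        entry = metadata[logical_name]
        return "%s (%s, refreshed %s)" % (
            entry["table"], entry["store"], entry["refresh"])

    print(locate("customer_revenue"))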

Implementation Options

Decision Support Architecture


Data Warehouse Modeling: Key to Decision Support

Operational versus Data Warehouse Systems

Feature                 | Operational                                    | Data Warehouse
Data content            | current values                                 | archival data, summarized data, calculated data
Data organization       | application by application                     | subject areas across enterprise
Nature of data          | dynamic                                        | static until refreshed
Data structure, format  | complex; suitable for operational computation  | simple; suitable for business analysis
Access probability      | high                                           | moderate to low
Data update             | updated on a field-by-field basis              | accessed and manipulated; no direct update
Usage                   | highly structured, repetitive processing       | highly unstructured, analytical processing
Response time           | subsecond to 2-3 seconds                       | seconds to minutes

Operational Data versus Warehouse Data

Operational Data                                      | Warehouse Data
Short-lived, rapidly changing                         | Long-lived, static
Requires record-level access                          | Data is aggregated into sets, as in a relational database
Repetitive standard transactions and access patterns  | Ad hoc queries with some specific reporting
Updated in real time                                  | Updated periodically with mass loads
Event driven: a process generates the data            | Data driven: the data governs the process

The Multidimensional versus Relational Model

  1. Transaction view versus Slice of time
    • The multidimensional model views information from the perspective of a 'slice of time' instead of atomic transactions.
    • OLTP systems record actual events, or transactions, such as purchase orders. The multidimensional data model is not concerned with the actual events, only with their quantitative result at some interval in time, such as days, weeks, or months.
  2. Local consistency versus Global consistency
      A properly designed OLTP system is consistent within its own scope. The multidimensional model starts from a globally consistent view of the enterprise.
  3. Audit trail versus Big picture
      OLTP systems provide a detailed audit trail. Multidimensional models fare better for the big picture.
      When customers have a question about their credit card bills, they want to see every transaction. When your overnight package is lost, you want to know who the last person was to see it intact. The multidimensional model, by contrast, is designed to answer questions such as 'Will I make money on this deal or not?' or 'Who are my best customers and why?'
  4. Explicit versus Implied relationships
      Relationships are modeled explicitly in the relational model and implicitly in the multidimensional model.
      Entity-relationship modeling is the heart of the relational model. The explicit relationships between customers and sales orders are burned into the design of the database. In multidimensional modeling, these relationships are implied by the existence of 'facts' at the cross section of dimensions. For example, if there are sales dollars to Customer 987 for Product 1241, the relationship between customer and product is implied.
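
The contrast can be sketched in a few lines; the schemas and values are hypothetical. The relational design declares the customer-order relationship with a foreign key, while the multidimensional design implies the customer-product relationship simply because a fact exists at their cross section:

    # Explicit vs. implied relationships (hypothetical schemas).
    import sqlite3

    conn = sqlite3.connect(":memory:")

    # Relational model: the relationship is explicit (a foreign key).
    conn.execute("CREATE TABLE customer (cust_id INTEGER PRIMARY KEY)")
    conn.execute("""CREATE TABLE sales_order (
                        order_id INTEGER PRIMARY KEY,
                        cust_id  INTEGER REFERENCES customer(cust_id))""")

    # Multidimensional model: the relationship is implied by a fact
    # at the cross section of the customer and product dimensions.
    facts = {}                     # (customer, product) -> sales dollars
    facts[(987, 1241)] = 5000.00   # implies Customer 987 buys Product 1241

    print((987, 1241) in facts)    # True: the relationship exists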

Data Model Implementation and Administration

  1. Step 1 - The operational data
      This step provides the source data for the corporate data warehouse. Because there are usually considerable differences in the quality of data on different operational systems, it is necessary, in some instances, to condition the data before it is transported into the data warehouse environment.
      It is very likely that there is a considerable gap between the data model for the data warehouse and the data models on which the individual operational systems are based. An essential task in building a data warehouse is to properly map the data from the operational systems to the data warehouse database.
  2. Step 2 - Data migration
      The extraction, conversion, migration of data from the source to the target must be done such that data warehouse holds only accurate, timely and credible data.
      • Refreshing, i.e., copying the data from the operational system onto the data warehouse, is the simple option. It does not involve any transformation, although the physical layout may change, for instance from hierarchical to relational.
        However, refreshing may not build accurate histories, since refreshes occur at intervals and the old data is discarded.
      • Updating the data at intervals overcomes this deficiency.
  3. Step 3 - Database administration
      It is in this step that the model is actually implemented. Therefore, compliance with a standard model and quick business benefits have to be maintained. Data granularity and metadata are the most important issues to be considered.
  4. Step 4 - Middleware
      Middleware is the range of system software necessary to make the data warehouse accessible in a client/server environment. The degree to which the data warehouse is accessed by a wide variety of users determines the degree of complexity needed in the middleware. It is the middleware that allows an application on a client to execute a request for data on a local (LAN) or remote database server (the data warehouse).
  5. Step 5 - Decision support applications
      Decision support applications are employed to use the data warehouse. Some of those applications are for presentation of information in the form of predefined reports and statistical analysis. Some can be interrogative, allowing the users to construct queries and directly interact with the data.
  6. Step 6 - The user or presentation interface
      The command-line interface is the most basic interface level and is appropriate for posing very complex queries as SQL programs.
      The menu-driven interface provides the user with controlled access to the data.
      A hypertext interface is useful in presenting metadata to users.

OLAP in the Data Warehouse Environment

What is OLAP?

OLAP stands for On-Line Analytical Processing. OLAP describes a class of technologies that are designed for live ad hoc data access and analysis, based on multidimensional views of business data. With OLAP tools individuals can analyze and navigate through data to discover trends, spot exceptions, and get the underlying details to better understand the flow of their business activity.

A user's view of the enterprise is multidimensional in nature. Sales, for example, can be viewed not only by product but also by region, time period, and so on. That is why OLAP models should be multidimensional in nature.

Most approaches to OLAP center around the idea of reformulating relational or flat-file data into a multidimensional data store that is optimized for data analysis. This multidimensional data store, known as a hypercube, stores the data along dimensions. Analysis requirements span a spectrum from statistics to simulation. The two popular forms of analysis are 'slice and dice' and 'drill-down'.
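
A toy hypercube makes the two forms of analysis concrete. The cells hold a sales measure indexed by product, region, and month; 'slice and dice' fixes one dimension, and 'drill-down' moves from a summary to the details behind it (all values hypothetical):

    # A toy hypercube keyed by (product, region, month).
    cube = {
        ("widgets", "East", "Jan"): 120.0,
        ("widgets", "West", "Jan"):  80.0,
        ("gadgets", "East", "Jan"):  65.0,
    }

    # Slice and dice: fix one dimension (month = Jan) and view the rest.
    jan_slice = {k: v for k, v in cube.items() if k[2] == "Jan"}

    # Drill-down: from total sales per product to the cells behind a total.
    totals = {}
    for (product, region, month), sales in cube.items():
        totals[product] = totals.get(product, 0.0) + sales

    print(totals)                                    # summary level
    print({k: v for k, v in cube.items()
           if k[0] == "widgets"})                    # detail behind 'widgets'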

Similarities and Differences between OLTP and OLAP

Feature            | OLTP                          | OLAP
Purpose            | Run day-to-day operations     | Information retrieval and analysis
Structure          | RDBMS                         | RDBMS
Data Model         | Normalized                    | Multidimensional
Access             | SQL                           | SQL plus data analysis extensions
Type of Data       | Data that runs the business   | Data to analyse the business
Condition of data  | Changing, incomplete          | Historical, descriptive

Data Warehousing Applications

In general, the applications served by data warehousing can be placed in one of three main categories. These planning and analysis requirements, referred to as OLAP applications, share a set of user requirements that cannot be met simply by applying query tools against the historical data maintained in the warehouse repository.

Data Warehousing for Parallel Environments

Parallel Architectures

  1. Shared-Memory Architectures (SMP)

    Processors share disks and main memory. In addition, each processor has local cache memory. These systems are referred to as tightly coupled or SMP (Symmetric Multiprocessing) systems because they share a single operating system instance. An SMP system looks like a single computer with a single operating system, and a DBMS can use it with little, if any, reprogramming.

    In a shared-resource environment, each processor executes a task on the required data, which is shipped to it. The problem with data shipping is that it limits the computer's scalability; the scaling problems are caused by interprocessor communication (the contrast with function shipping is sketched after this list).

  2. Shared-Nothing Architectures (MPP)

    Each processor has its own memory, its own OS, and its own DBMS instance, and each executes tasks on its private data stored on its own disks. Shared-nothing architectures offer the most scalability and are known as loosely coupled or Massively Parallel Processing (MPP) systems. The processors are connected, and messages or functions are passed among them. Shipping tasks to the data, instead of data to the tasks, reduces interprocessor communications. Programming, administration and database design are intrinsically more difficult in this environment than in the SMP environments.

    An example is the high-performance switch used in IBM's Scalable Power Parallel Systems 2 (SP2). This switch is a high bandwidth crossbar, just like the one used in telephone switching, that can connect any node to any other node, eliminating transfer through intermediate nodes.

    A node failure renders data on that node inaccessible. Therefore, there is a need for replication of data across multiple nodes so that you can still access it even if one node fails, or provide alternate paths to the data in a hybrid shared-nothing architecture.

  3. Clustered SMP Systems

    In this type, multiple 'tightly coupled' SMP systems are linked together to form a 'loosely coupled' processing complex. Clustering requires shared resource coordination via a lock manager to preserve data integrity across the RDBMS instances, disks, and tape drives. While clustering SMP systems requires a looser coupling among the nodes, there is no need to replace hardware or rewrite applications.

    An example is Sequent's Symmetry 5000 SE100 cluster, which supports more than 100 processors.

    A natural benefit of clustered SMP is much greater availability than MPP systems, and even more so than standalone SMP systems.

    Every component of an SMP system is controlled by a single executing copy of an OS managing a shared global memory. Because memory in an SMP system is shared among the CPUs, SMP systems have a single address space and run a single copy of the OS and application. All processes are fully symmetric in the sense that any process can execute on any processor at any time. As system loads and configurations change, tasks or processes are automatically distributed among the CPUs - providing a benefit known as dynamic load balancing.

  4. Asymmetric Multiprocessor (AMP) Systems

    Early multiprocessing systems were designed around an asymmetric paradigm, where one master processor handles all operating system tasks. The rest of the processors, referred to as slave processors, handle only user processes. The disadvantages are:

    • Adding extra processors actually increases the work requirement for the master processor
    • The master processor becomes the bottleneck.
    Fully asymmetric designs represent past technology trends.
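
The data movement contrast between the shared-memory and shared-nothing styles above can be sketched abstractly: SMP-style processing ships data to the task, while MPP-style processing ships the task to the node that owns the data and combines the small per-node results (node layout hypothetical):

    # Data shipping vs. function shipping (hypothetical partitions).
    partitions = {                  # each 'node' owns its own data
        "node0": [3, 1, 4],
        "node1": [1, 5, 9],
    }

    # Shared-memory (SMP) style: ship all the data to one task.
    all_data = [x for rows in partitions.values() for x in rows]
    smp_total = sum(all_data)

    # Shared-nothing (MPP) style: ship the task to each node's data,
    # then combine the small per-node results.
    per_node = {node: sum(rows) for node, rows in partitions.items()}
    mpp_total = sum(per_node.values())

    assert smp_total == mpp_total == 23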

Parallel Databases

  1. Parallel I/O: The CPU works much faster than the disk I/O, so the CPU must frequently wait for the disk; but if there is enough memory, you can still perform parallel tasks.

    For example, the system can buffer data in memory for multiple tasks. It can retrieve data to be scanned and sorted and also retrieve more data for the next transaction. The more disks and controllers the system has, the faster it can feed memory and the CPU.

  2. Transaction Scale Up: You can assign small independent transactions to different processors. The more processors, the more transactions the system can execute without reducing throughput.

  3. Interquery Parallelism: Similar to transaction scale up, a collection of independent SQL statements can be broken up and executed concurrently, each statement allocated to a processor.

  4. Intraquery Parallelism: A single large SQL query is broken up into tasks; the tasks are executed on separate processors, and the results are recombined for the answer.

  5. Pipelined Parallelism: The opportunities for scaling or speeding up queries are limited by the number of steps in executing the statement. Steps such as SORT and GROUP BY may require all the data from the previous step before they can start.

  6. Partitioned Parallelism: If there is more than one data stream, it is possible for some operations to proceed simultaneously. For example, a product table could be spread across multiple disks, and a thread could read each subset of the product data.

    In practice, there is a combination of simultaneous and sequential SQL operations to be performed. Therefore, partitioned parallelism is typically combined with pipelined parallelism.
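
A minimal sketch of partitioned parallelism combined with a final recombination step, assuming a product table spread across three partitions and one worker per partition; the data and the scan predicate are hypothetical:

    # Partitioned parallelism: scan partitions concurrently, then recombine.
    from concurrent.futures import ThreadPoolExecutor

    partitions = [                  # a product table spread across 'disks'
        [("widgets", 10), ("gadgets", 4)],
        [("widgets", 7)],
        [("gizmos", 2), ("widgets", 1)],
    ]

    def scan(partition):
        # The per-partition task: count widgets in this subset of the table.
        return sum(qty for name, qty in partition if name == "widgets")

    with ThreadPoolExecutor(max_workers=len(partitions)) as pool:
        partial_counts = list(pool.map(scan, partitions))

    print(sum(partial_counts))      # recombine: 18 widgets in total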


Data Warehouse Tools and Products

Data Warehouse Tools

  1. Data analysis tools
    Data analysis tools are used to perform statistical and mathematical functions, forecasting, and multidimensional modeling. They enable users to analyse data across several dimensions, including market, time, and product categories. Such tools are also used to measure the efficiency of business operations over time. These evaluations provide support for strategic decision making and insights into how to improve efficiency and reduce the costs of business operations.
    Data analysis tools typically work with summarized rather than detailed data. Summaries are often stored in special databases known as data marts, which are tailored to specific sets of users and applications. Data marts are usually built from the detailed historical data and, in some cases, are constructed directly from operational databases, using either RDBMS or MDBMS technology.
  2. Data warehouse query tools
    Query and reporting tools are most often used to track day-to-day business operations and support tactical business decisions.
    In this context, a warehouse offers the advantage of data that has been cleansed and integrated from multiple operational systems. Such a warehouse typically contains detailed data that reflects the current (or near current) status of data in operational systems and is thus referred to as an operational data store or operational data warehouse.
  3. Reporting tools
    Report-writer tools, such as MS Access, are best at retrieving operational data using canned formats and layouts. They adequately answer questions such as, 'How many green dresses scheduled to ship this month have not shipped?' Report writers are excellent and cost-effective for mass deployment of applications where a handful of database tables are managed as one database by any of the relational database suppliers' products.
  4. Discovery and mining tools
    Query, reporting and data analysis tools are used to process or look for known facts. Discovery and mining tools are used to explore data for unknown facts. It may, for example, be used to examine customer buying habits or detect fraud. Such processing (data exploration) involves digging through large amounts of historical detailed data typically kept in a DSS data warehouse.
  5. Multidimensional OLAP (query) tools
    A multidimensional query tool allows multiple data views (e.g., sales by category, brand, season and store) to be defined and queried. Multidimensional tools are based on the notion of arrays, an organizational principle for arranging and storing related data so that it can be viewed and analysed from multiple perspectives.

    There are three types of multidimensional OLAP tools:

    • Client-side MDBs maintain precalculated consolidation data in PC memory and are proficient at handling a few megabytes of data.
    • Server-based MDBs optimize gigabytes of data by using any of several performance and storage optimization tricks.
    • Spreadsheets allow small data sets to be viewed in the cross-tab format familiar to business users.
    Current MDBs still lack provisions for
    • Connecting multiple databases, including RDBMSs, and allowing them to interact.
    • High availability backup and restore.
    • Subsetting multidimensional data for individual analysis and manipulation.
    • Updating the database incrementally while users continue to access it.
  6. Relational OLAP tools
    Relational OLAP is the next logical step in the evolution of complex decision support tools. It combines flexible query capabilities with a scalable multitier architecture while symbiotically depending on and leveraging the capabilities of today's parallel-scalable relational databases.

Criteria for Selecting Systems and Vendors

  1. What is the vendor's primary strategic objective?
    • Hardware vendors
    • Database vendors: their tools establish connections to the database.
    • Gateway vendors: provide connectivity to heterogeneous relational and nonrelational data sources.
    • Repository vendors: provide data warehousing and systems management functionality for the metadata repository.
    • Tools and utility vendors: provide database, CASE and development tools.
  2. What is the vendor's multidimensional strategy?
  3. What is the vendor's metadata strategy?
  4. What architecture does the vendor support?
    • Direct query
    • Event-driven systems
    • Mixed workloads
    • Single-subject data warehouses
    • Virtual global data warehouse
  5. How scalable is the vendor's solution?
  6. To what extent has the vendor integrated warehouse products?
  7. How experienced is the vendor?
  8. What is the nature of the vendor's partnerships?
  9. How comprehensive is the vendor's program?

Source: Data Warehousing: Concepts, Technologies, Implementations and Management, by Harry S. Singh, Prentice Hall, New Jersey, 1998, ISBN 0-13-591793-X.

These notes were compiled by V.V.S. Raveendra in June 1999.


