Thursday, August 20, 2009

Structure of IAS





The control unit fetches two instructions at a time (one 40-bit word holds a pair of 20-bit instructions) but executes only one instruction at a time.


Memory Buffer Register (MBR): It holds a word to be written to memory or receives a word read from memory.
Memory Address Register (MAR): It specifies the address of the memory location for a data transfer.
Instruction Register (IR): It stores the 8-bit opcode of the instruction currently being executed.
Instruction Buffer Register (IBR): It temporarily holds the right-hand instruction fetched from an instruction word in memory.
Program Counter (PC): It contains the address of the next instruction pair to be fetched from memory.
Accumulator (AC) & Multiplier Quotient (MQ): They hold operands and results of ALU operations, e.g. multiplying two 40-bit numbers gives an 80-bit result; the higher 40 bits are stored in AC and the lower 40 bits in MQ.
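The AC/MQ split of an 80-bit product can be sketched in Python (an illustrative model, not IAS code; the register names follow the description above):

```python
# Sketch: how a 40-bit x 40-bit multiply splits its 80-bit result
# between AC (higher 40 bits) and MQ (lower 40 bits).

WORD_BITS = 40
MASK = (1 << WORD_BITS) - 1  # forty 1-bits

def multiply_40bit(a, b):
    """Return (AC, MQ) for the 80-bit product of two 40-bit operands."""
    product = (a & MASK) * (b & MASK)   # up to 80 bits
    ac = product >> WORD_BITS           # higher 40 bits -> AC
    mq = product & MASK                 # lower 40 bits  -> MQ
    return ac, mq

ac, mq = multiply_40bit(2**39, 2**39)
assert (ac << WORD_BITS) | mq == 2**78  # recombining gives the full product
```

Recombining AC and MQ as shown recovers the full 80-bit product, which is exactly why the pair of registers is needed.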

Von Neumann




The task of entering and altering programs for the ENIAC was extremely tedious.
The programming process could be facilitated if the program could be represented in a form suitable for storing in memory alongside the data.

Then, a computer could get its instructions by reading them from memory, and a program could be set or altered by setting the values of a portion of memory.
This idea, known as the Stored-program concept, is usually attributed to the ENIAC designers, most notably the mathematician John von Neumann, who was a consultant on the ENIAC project.

In 1946, von Neumann and his colleagues began the design of a new stored-program computer, referred to as the IAS computer, at the Princeton Institute for Advanced Studies.

The IAS computer, although not completed until 1952, is the prototype of all subsequent general-purpose computers.


The figure shows the general structure of the IAS computer.

It consists of:
A main memory, which stores both data and instructions.
An arithmetic-logical unit (ALU) capable of operating on binary data.
A control unit, which interprets the instructions in memory and causes them to be executed.
Input and output (I/O) equipment operated by the control unit.


Wednesday, August 19, 2009

Traditional Bus Architecture




The figure shows typical examples of I/O devices that might be attached to the expansion bus. The traditional bus architecture uses three buses: the local bus, the system bus, and the expansion bus.



1. The local bus connects the processor to the cache memory and may support one or more local devices.



2. The cache memory controller connects the cache to the local bus and to the system bus.



3. The system bus also connects the main memory modules.



4. Input/output transfers to and from main memory across the system bus do not interfere with processor activity, because the processor accesses the cache memory.



5. It is possible to connect I/O controllers directly onto the system bus, but a more efficient solution is to use one or more expansion buses for this purpose. An expansion bus interface buffers data transfers between the system bus and the I/O controllers on the expansion bus.



This arrangement allows the system to support a wide variety of I/O devices and at the same time insulate memory-to-processor traffic from I/O traffic.

Why are multibus hierarchies required?

If a large number of devices are connected to a single shared bus, performance will suffer. The following problems arise:

1. The bus becomes longer, so propagation delay increases. This delay can noticeably affect performance when control of the bus passes frequently from one device to another.


2. The bus may become a bottleneck as the aggregate data-transfer demand approaches the capacity of the bus, because the data rates generated by attached devices such as graphics and video controllers are growing rapidly.

3. Only one bus master can operate at a time; the others must wait. To overcome these problems, most computer systems use multiple buses, generally laid out in a hierarchy.


Bus design parameters
1. Type: dedicated or multiplexed
2. Arbitration: centralized or distributed
3. Timing: synchronous or asynchronous
4. Bus width: address or data
5. Data transfer type: read, write, read-modify-write, read-after-write, block
Bus Design Parameters in Detail
1. Bus Type:
i) Dedicated bus:
When a bus is permanently assigned to only one function, it is called a dedicated bus.
E.g. separate address and data lines, or separate buses for memory and I/O modules.
Advantages: high performance and less bus contention.
Disadvantages: increased size and cost.

ii) Multiplexed bus:
When the bus is used for more than one function at different times, it is called a multiplexed bus. E.g. the 8085 microprocessor outputs address bits A7–A0 during the first clock cycle on its multiplexed pins AD7–AD0, which later carry data.
Advantages: fewer pins and lines are required, which lowers cost and saves space.
Disadvantages: slower operation.
2. Bus Arbitration:
Several bus masters connected to a common bus may require access to the bus at the same time. A selection mechanism called bus arbitration decides which device is given access to the bus.

i) Centralized approach: a hardware device called the bus controller or bus arbiter allocates the bus. It uses one of the following schemes:
(1) Daisy chaining
(2) Polling
(3) Multiple priority levels

ii) Distributed approach: each master has its own arbiter, as opposed to the single arbiter of the centralized approach. Equal responsibility is given to all devices to carry out the arbitration process, without a central arbiter.
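Daisy chaining, the first centralized scheme above, can be sketched as follows (an illustrative Python model; real arbiters are hardware, and the function name is made up for this sketch):

```python
# Sketch of centralized daisy-chain arbitration: the arbiter's grant
# signal propagates down the chain of devices, and the first device
# that is requesting the bus absorbs the grant. Devices closer to
# the arbiter therefore have higher priority.

def daisy_chain_grant(requests):
    """requests: booleans ordered by physical position, closest to the
    arbiter first. Returns the index of the granted device, or None."""
    for position, requesting in enumerate(requests):
        if requesting:
            return position  # grant stops propagating here
    return None              # grant passes through unused

# Devices 1 and 2 both request; device 1 wins because it is nearer.
assert daisy_chain_grant([False, True, True]) == 1
```

The sketch also shows the scheme's well-known drawback: priority is fixed by wiring position, so a distant device can be starved.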
3. Bus Timing: In synchronous timing, every event is synchronized by a clock, whereas in asynchronous timing each event occurs depending on a previous event on the bus.

4. Bus width: It decides the number of lines used for addresses and data. More address lines mean more memory can be addressed, e.g. 16 address lines give 2^16 = 64 KB, and 20 address lines give 2^20 = 1 MB of addressable memory.
More data lines mean more bits can be transferred at a time, so speed increases.
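The addressable-memory figures follow directly from the number of address lines, as this small Python check shows (the function name is just for illustration):

```python
# n address lines can select 2**n distinct locations, so the
# addressable memory grows exponentially with the bus width.

def addressable_bytes(address_lines):
    return 2 ** address_lines

assert addressable_bytes(16) == 64 * 1024      # 64 KB
assert addressable_bytes(20) == 1024 * 1024    # 1 MB
```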
5. Data transfer type: A bus can support various types of data transfer.
1) For a multiplexed bus:

a) Write operation: data is output immediately after the address is output.
b) Read operation: first the address is output; then, after sufficient access time for the addressed device to put its data on the bus, the data is read from the bus.
c) Read-modify-write: a read transfer is followed by a write transfer at the same address; the bus is held so that no other master can use it in between.
d) Read-after-write: a write transfer is followed by a read transfer from the same address after some access time; it is used for checking purposes.
e) Block: a single address cycle is followed by the transfer of a number of data items one after another, e.g. saving a file to secondary storage.

2) For a non-multiplexed bus:
Address and data are output at the same time on separate buses, which makes the system faster.
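The read-modify-write transfer type above can be sketched as an indivisible bus operation (a hypothetical Python model; the class and lock flag are illustrative, not a real bus protocol):

```python
# Sketch of read-modify-write: between the read and the write at the
# same address, the bus is held so no other master can intervene.

class Bus:
    def __init__(self, memory):
        self.memory = memory
        self.locked = False   # True while an RMW is in progress

    def read_modify_write(self, address, modify):
        self.locked = True                     # keep other masters off the bus
        value = self.memory[address]           # read transfer
        self.memory[address] = modify(value)   # write transfer, same address
        self.locked = False
        return self.memory[address]

bus = Bus({0x10: 7})
result = bus.read_modify_write(0x10, lambda v: v + 1)
assert result == 8 and bus.memory[0x10] == 8
```

Holding the bus for the whole sequence is what makes this transfer useful for things like semaphores in multiprocessor systems.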

Wednesday, August 5, 2009

Organization and Architecture
In describing computer systems, a distinction is often made between computer architecture and computer organization. Although it is difficult to give precise definitions for these terms, a consensus exists about the general areas covered by each.


Computer architecture refers to those attributes of a system visible to a programmer, or put another way, those attributes that have a direct impact on the logical execution of a program.


Computer organization refers to the operational units and their interconnection that realize the architecture specification.


Examples of architectural attributes include the instruction set, the number of bits used to represent various data types (e.g., numbers and characters), I/O mechanisms, and techniques for addressing memory.

Organization attributes include those hardware details transparent to the programmer, such as control signals, interfaces between the computer and peripherals, and the memory technology used.
As an example, it is an architectural design issue whether a computer will have a multiply instruction.

It is an organizational issue whether that instruction will be implemented by a special multiply unit or by a mechanism that makes repeated use of the add unit of the system. The organizational decision may be based on the anticipated frequency of use of the multiply instruction, the relative speed of the two approaches, and the cost and physical size of a special multiply unit. Historically, and still today, the distinction between architecture and organization has been an important one.

Many computer manufacturers offer a family of computer models, all with the same architecture but with differences in organization. Consequently, the different models in the family have different price and performance characteristics.

Furthermore, an architecture may survive many years, but its organization changes with changing technology.

The Five Generations of Computers


The history of computer development is often referred to in reference to the different generations of computing devices.

Each generation of computer is characterized by a major technological development that fundamentally changed the way computers operate, resulting in increasingly smaller, cheaper, more powerful and more efficient and reliable devices.


First Generation - 1940-1956: Vacuum Tubes:

The first computers used vacuum tubes for circuitry and magnetic drums for memory, and were often enormous, taking up entire rooms. They were very expensive to operate and in addition to using a great deal of electricity, generated a lot of heat, which was often the cause of malfunctions.
First generation computers relied on machine language, the lowest-level programming language understood by computers, to perform operations, and they could only solve one problem at a time. Input was based on punched cards and paper tape, and output was displayed on printouts.
The UNIVAC and ENIAC computers are examples of first-generation computing devices. The UNIVAC was the first commercial computer delivered to a business client, the U.S. Census Bureau in 1951.

Second Generation - 1956-1963: Transistors:

Transistors replaced vacuum tubes and ushered in the second generation of computers. The transistor was invented in 1947 but did not see widespread use in computers until the late 50s. The transistor was far superior to the vacuum tube, allowing computers to become smaller, faster, cheaper, more energy-efficient and more reliable than their first-generation predecessors. Though the transistor still generated a great deal of heat that subjected the computer to damage, it was a vast improvement over the vacuum tube. Second-generation computers still relied on punched cards for input and printouts for output.
Second-generation computers moved from cryptic binary machine language to symbolic, or assembly, languages, which allowed programmers to specify instructions in words. High-level programming languages were also being developed at this time, such as early versions of COBOL and FORTRAN. These were also the first computers that stored their instructions in their memory, which moved from a magnetic drum to magnetic core technology.
The first computers of this generation were developed for the atomic energy industry.


Third Generation - 1964-1971: Integrated Circuits

The development of the integrated circuit was the hallmark of the third generation of computers. Transistors were miniaturized and placed on silicon chips, called semiconductors, which drastically increased the speed and efficiency of computers.
Instead of punched cards and printouts, users interacted with third generation computers through keyboards and monitors and interfaced with an operating system, which allowed the device to run many different applications at one time with a central program that monitored the memory. Computers for the first time became accessible to a mass audience because they were smaller and cheaper than their predecessors.


Fourth Generation - 1971-Present: Microprocessors

The microprocessor brought the fourth generation of computers, as thousands of integrated circuits were built onto a single silicon chip. What in the first generation filled an entire room could now fit in the palm of the hand. The Intel 4004 chip, developed in 1971, located all the components of the computer - from the central processing unit and memory to input/output controls - on a single chip.
In 1981 IBM introduced its first computer for the home user, and in 1984 Apple introduced the Macintosh. Microprocessors also moved out of the realm of desktop computers and into many areas of life as more and more everyday products began to use microprocessors.
As these small computers became more powerful, they could be linked together to form networks, which eventually led to the development of the Internet. Fourth generation computers also saw the development of GUIs, the mouse and handheld devices.


Fifth Generation - Present and Beyond: Artificial Intelligence

Fifth generation computing devices, based on artificial intelligence, are still in development, though there are some applications, such as voice recognition, that are being used today. The use of parallel processing and superconductors is helping to make artificial intelligence a reality. Quantum computation and molecular and nanotechnology will radically change the face of computers in years to come. The goal of fifth-generation computing is to develop devices that respond to natural language input and are capable of learning and self-organization.