Introduction to Computer Architecture
Computer Architecture refers to the conceptual design and fundamental operational structure of a computer system. It focuses on how hardware components are organized and how they interact to execute instructions.
Computer Architecture vs Computer Organization
| Computer Architecture | Computer Organization |
|---|---|
| Deals with what the system does | Deals with how the system operates |
| Focuses on high-level design issues | Focuses on low-level implementation details |
| Includes instruction sets, addressing modes | Includes circuit design, control signals |
| Architectural attributes: visible to programmer | Organizational attributes: transparent to programmer |
Basic Structure of Computers
Von Neumann Architecture
In the von Neumann (stored-program) model, a computer consists of four main functional components:
- Central Processing Unit (CPU): Brain of the computer
- Memory Unit: Stores data and instructions
- Input/Output Devices: Communication with external world
- System Bus: Communication pathway between components
Basic Computer Structure
CPU ↔ Memory ↔ Input/Output Devices
Connected through System Bus (Address Bus, Data Bus, Control Bus)
Harvard Architecture
Uses separate memories and pathways for instructions and data, so an instruction fetch and a data access can happen simultaneously.
CPU Organization
The Central Processing Unit (CPU) is the primary component that executes instructions.
CPU Components
- Control Unit (CU): Directs operation of the processor
- Arithmetic Logic Unit (ALU): Performs arithmetic and logical operations
- Registers: Small, fast storage locations
- CPU Interconnections: Mechanisms for communication between components
Register Types
- Program Counter (PC): Holds address of next instruction
- Instruction Register (IR): Holds current instruction being executed
- Memory Address Register (MAR): Holds memory address for data transfer
- Memory Buffer Register (MBR): Holds data being transferred to/from memory (also called the Memory Data Register, MDR)
- Accumulator (AC): Holds intermediate results of arithmetic and logic operations (these registers drive the fetch-decode-execute cycle sketched below)
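A minimal C sketch of that cycle for a hypothetical accumulator machine follows. The 16-bit word size, 4-bit opcode, and instruction encoding are all invented for illustration and do not correspond to any real ISA.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical machine: 16-bit words, high 4 bits = opcode,
   low 12 bits = operand address (all invented for illustration). */
enum { OP_HALT = 0, OP_LOAD = 1, OP_ADD = 2, OP_STORE = 3 };

int main(void) {
    uint16_t mem[4096] = {0};
    uint16_t pc = 0, ir, mar, mbr, acc = 0;

    /* Tiny program: acc = mem[100] + mem[101]; mem[102] = acc */
    mem[0] = (OP_LOAD  << 12) | 100;
    mem[1] = (OP_ADD   << 12) | 101;
    mem[2] = (OP_STORE << 12) | 102;
    mem[3] = (OP_HALT  << 12);
    mem[100] = 7; mem[101] = 5;

    for (;;) {
        mar = pc;        /* PC -> MAR: address of the next instruction */
        mbr = mem[mar];  /* memory read lands in the MBR               */
        ir  = mbr;       /* MBR -> IR: hold the instruction for decode */
        pc++;            /* PC now points at the following instruction */

        uint16_t opcode = ir >> 12, addr = ir & 0x0FFF;
        if (opcode == OP_HALT) break;
        mar = addr;      /* operand address -> MAR for the data access */
        switch (opcode) {
        case OP_LOAD:  mbr = mem[mar]; acc = mbr;      break;
        case OP_ADD:   mbr = mem[mar]; acc += mbr;     break;
        case OP_STORE: mbr = acc;      mem[mar] = mbr; break;
        }
    }
    printf("mem[102] = %u\n", (unsigned)mem[102]); /* prints 12 */
    return 0;
}
```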
Instruction Sets
An instruction set is the collection of all instructions that a CPU can execute.
Instruction Format
A typical instruction contains:
- Operation Code (Opcode): Specifies the operation to be performed
- Operands: Data or addresses on which the operation is performed (see the decoding sketch below)
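To make the opcode/operand split concrete, the sketch below decodes the fields of a 32-bit MIPS R-type instruction, a fixed-length format from the RISC family discussed next. The field widths are defined by the MIPS ISA; the helper function name is our own.

```c
#include <stdint.h>
#include <stdio.h>

/* MIPS R-type layout: opcode(6) rs(5) rt(5) rd(5) shamt(5) funct(6) */
static void decode_rtype(uint32_t w) {
    uint32_t opcode = (w >> 26) & 0x3F;
    uint32_t rs     = (w >> 21) & 0x1F;
    uint32_t rt     = (w >> 16) & 0x1F;
    uint32_t rd     = (w >> 11) & 0x1F;
    uint32_t shamt  = (w >>  6) & 0x1F;
    uint32_t funct  =  w        & 0x3F;
    printf("opcode=%u rs=%u rt=%u rd=%u shamt=%u funct=0x%X\n",
           opcode, rs, rt, rd, shamt, funct);
}

int main(void) {
    decode_rtype(0x012A4020); /* add $t0, $t1, $t2 */
    return 0;
}
```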
Types of Instruction Sets
| CISC (Complex Instruction Set Computer) | RISC (Reduced Instruction Set Computer) |
|---|---|
| Large instruction set | Small instruction set |
| Variable length instructions | Fixed length instructions |
| Multiple addressing modes | Limited addressing modes |
| Complex operations in a single instruction | Simple operations; complex tasks take multiple instructions |
| Example: Intel x86 | Example: ARM, MIPS |
Addressing Modes
Addressing modes specify how the operand of an instruction is determined.
Common Addressing Modes
- Immediate Addressing: Operand is part of instruction itself
- Direct Addressing: Address field contains effective address of operand
- Indirect Addressing: Address field contains the address of a memory location that holds the operand's address
- Register Addressing: Operand is in a register
- Register Indirect Addressing: Register contains address of operand
- Indexed Addressing: Effective address = base address + index register
- Relative Addressing: Effective address = PC + address field
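Several of these modes can be loosely mirrored with C expressions. This is only an analogy (the compiler, not the programmer, picks the machine's actual addressing modes), and every variable name here is invented:

```c
#include <stdio.h>

int main(void) {
    int table[8] = {10, 20, 30, 40, 50, 60, 70, 80};
    int x = 42;
    int *p = &x;
    int **pp = &p;
    int i = 3;

    int a = 5;         /* immediate: the operand (5) is in the instruction */
    int b = x;         /* direct: operand fetched from a known location    */
    int c = *p;        /* register indirect: p holds the operand's address */
    int d = **pp;      /* indirect: one extra level of address lookup      */
    int e = table[i];  /* indexed: base address of table plus index i      */

    printf("%d %d %d %d %d\n", a, b, c, d, e); /* 5 42 42 42 40 */
    return 0;
}
```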
Instruction Pipelining
Pipelining is a technique in which the execution of multiple instructions is overlapped, improving instruction throughput.
Basic Pipeline Stages
- Instruction Fetch (IF): Get instruction from memory
- Instruction Decode (ID): Decode instruction and read registers
- Execute (EX): Perform the operation
- Memory Access (MEM): Access memory if needed
- Write Back (WB): Write results to register
Pipeline Hazards
- Structural Hazards: Resource conflicts when the hardware cannot support all overlapping instruction combinations (e.g., a single memory port shared by instruction fetch and data access)
- Data Hazards: An instruction depends on the result of a previous instruction that is still in the pipeline
- Control Hazards: Branch instructions change the flow of execution, so instructions fetched after the branch may have to be discarded
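The payoff of pipelining is easy to estimate: with k one-cycle stages and n instructions, a non-pipelined machine needs about n*k cycles, while a pipeline needs k + (n - 1) cycles plus any stall cycles the hazards above introduce. A small sketch of this arithmetic, with all numbers made up for illustration:

```c
#include <stdio.h>

int main(void) {
    long k = 5;         /* pipeline stages (IF, ID, EX, MEM, WB)       */
    long n = 1000;      /* instructions executed (assumed)             */
    long stalls = 120;  /* hazard-induced stall cycles (assumed)       */

    long t_seq  = n * k;                /* one instruction at a time   */
    long t_pipe = k + (n - 1) + stalls; /* fill pipeline, then ~1/cycle */

    printf("sequential: %ld cycles, pipelined: %ld cycles, speedup: %.2fx\n",
           t_seq, t_pipe, (double)t_seq / t_pipe);
    return 0;
}
```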
Memory Hierarchy
Memory hierarchy organizes different types of memory to balance speed, size, and cost.
Memory Levels
- Registers: Fastest, smallest, most expensive
- Cache Memory: Very fast, small, expensive
- Main Memory (RAM): Fast, medium size, moderate cost
- Secondary Storage: Slow, large, inexpensive
Memory Hierarchy Pyramid
Registers (Top - Fastest/Smallest)
↓ Cache (L1, L2, L3)
↓ Main Memory (RAM)
↓ Secondary Storage (HDD/SSD - Bottom - Slowest/Largest)
Locality of Reference
- Temporal Locality: Recently accessed items are likely to be accessed again
- Spatial Locality: Items near recently accessed items are likely to be accessed
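Spatial locality is visible directly in code. C stores a 2-D array row by row, so a row-major traversal touches consecutive addresses, while a column-major traversal strides across memory and typically misses in cache far more often:

```c
#include <stdio.h>

#define N 1024
static int grid[N][N];

int main(void) {
    long sum = 0;

    /* Row-major order: consecutive addresses, good spatial locality */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            sum += grid[i][j];

    /* Column-major order: stride of N*sizeof(int) bytes per access,
       poor spatial locality, usually many more cache misses */
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            sum += grid[i][j];

    printf("%ld\n", sum);
    return 0;
}
```

On most machines the first loop runs noticeably faster for large N, even though both compute the same sum.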
Cache Memory
Cache is a small, fast memory that stores frequently accessed data from main memory.
Cache Mapping Techniques
- Direct Mapping: Each block of main memory maps to exactly one cache line
- Fully Associative Mapping: A block can be placed anywhere in cache
- Set Associative Mapping: Cache is divided into sets, each containing multiple lines
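Under direct mapping, the hardware splits an address into tag, line (index), and byte-offset fields, and the line field alone selects the cache slot. A minimal sketch, assuming a made-up geometry of 256 lines with 64-byte blocks:

```c
#include <stdint.h>
#include <stdio.h>

/* Assumed geometry: 256 lines x 64-byte blocks (16 KiB, direct-mapped) */
#define BLOCK_BITS 6   /* 64-byte block -> 6 offset bits */
#define LINE_BITS  8   /* 256 lines     -> 8 index bits  */

int main(void) {
    uint32_t addr   = 0x1234ABCD;
    uint32_t offset = addr & ((1u << BLOCK_BITS) - 1);
    uint32_t line   = (addr >> BLOCK_BITS) & ((1u << LINE_BITS) - 1);
    uint32_t tag    = addr >> (BLOCK_BITS + LINE_BITS);

    /* Two addresses with the same line field but different tags evict
       each other -- the classic direct-mapped conflict miss. */
    printf("tag=0x%X line=%u offset=%u\n", tag, line, offset);
    return 0;
}
```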
Cache Performance
- Hit: Data found in cache
- Miss: Data not found in cache
- Hit Rate: Percentage of memory accesses that are hits
- Miss Rate: Percentage of memory accesses that are misses
- Hit Time: Time to access cache
- Miss Penalty: Time to retrieve data from main memory
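These quantities combine into the average memory access time, AMAT = hit time + miss rate × miss penalty. A worked example with assumed numbers:

```c
#include <stdio.h>

int main(void) {
    double hit_time     = 1.0;   /* cycles to access the cache (assumed)  */
    double miss_rate    = 0.05;  /* 5% of accesses miss (assumed)         */
    double miss_penalty = 100.0; /* cycles to reach main memory (assumed) */

    /* Every access pays the hit time; misses additionally pay the penalty */
    double amat = hit_time + miss_rate * miss_penalty;
    printf("AMAT = %.1f cycles\n", amat); /* 1 + 0.05*100 = 6 cycles */
    return 0;
}
```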
Input/Output Organization
I/O organization deals with how the CPU communicates with peripheral devices.
I/O Transfer Methods
- Programmed I/O: CPU directly controls I/O transfer
- Interrupt-driven I/O: I/O device interrupts CPU when ready
- Direct Memory Access (DMA): Special controller transfers data directly to/from memory
I/O Interface
- Data Register: Holds data being transferred
- Status Register: Provides status information about I/O device
- Control Register: Used to send commands to I/O device
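Programmed I/O maps directly onto these three registers: the CPU writes a command to the control register, polls the status register until the device is ready, then moves data through the data register itself. The sketch below assumes a hypothetical memory-mapped device; the addresses and bit definitions are invented, and the code only does something meaningful on hardware that actually exposes such a device:

```c
#include <stdint.h>

/* Hypothetical memory-mapped device registers (addresses invented) */
#define DEV_BASE   0x40001000u
#define DEV_STATUS (*(volatile uint32_t *)(DEV_BASE + 0x0))
#define DEV_DATA   (*(volatile uint32_t *)(DEV_BASE + 0x4))
#define DEV_CTRL   (*(volatile uint32_t *)(DEV_BASE + 0x8))
#define STATUS_READY 0x1u   /* assumed "ready for data" bit    */
#define CTRL_START   0x1u   /* assumed "start transfer" command */

/* Programmed I/O: the CPU busy-waits, then performs the transfer itself */
static void dev_write_byte(uint8_t b) {
    DEV_CTRL = CTRL_START;                /* command the device         */
    while (!(DEV_STATUS & STATUS_READY))
        ;                                 /* poll until device is ready */
    DEV_DATA = b;                         /* CPU moves the data itself  */
}
```

Interrupt-driven I/O replaces the polling loop with a handler the device triggers; DMA removes the CPU from the data movement entirely.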
Parallel Processing
Parallel processing uses multiple processing elements to solve problems faster.
Flynn's Taxonomy
- SISD (Single Instruction, Single Data): Traditional sequential computer
- SIMD (Single Instruction, Multiple Data): Same operation on multiple data elements
- MISD (Multiple Instruction, Single Data): Multiple operations on same data (rare)
- MIMD (Multiple Instruction, Multiple Data): Multiple processors working independently
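The SISD/SIMD distinction can be shown in C. The first loop below is ordinary scalar (SISD-style) code, one add per element; the second uses the GCC/Clang vector extension so a single vector add applies to four elements at once, which is the SIMD idea. The vector-type syntax is a compiler extension, not standard C:

```c
#include <stdio.h>

/* GCC/Clang vector extension: a 16-byte vector holding four ints */
typedef int v4si __attribute__((vector_size(16)));

int main(void) {
    int a[4] = {1, 2, 3, 4}, b[4] = {10, 20, 30, 40}, c[4];

    /* SISD style: one add per data element */
    for (int i = 0; i < 4; i++)
        c[i] = a[i] + b[i];
    printf("scalar: %d %d %d %d\n", c[0], c[1], c[2], c[3]);

    /* SIMD style: a single vector add covers all four elements */
    v4si va = {1, 2, 3, 4}, vb = {10, 20, 30, 40};
    v4si vc = va + vb;
    printf("vector: %d %d %d %d\n", vc[0], vc[1], vc[2], vc[3]);
    return 0;
}
```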
Parallel Computer Architectures
- Vector Processors: Operate on arrays of data
- Array Processors: Multiple ALUs operating simultaneously
- Multiprocessors: Multiple CPUs sharing memory
- Multicomputers: Multiple computers connected via network
Multiprocessor Systems
Multiprocessor systems contain multiple processors that can execute instructions simultaneously.
Types of Multiprocessors
| Shared Memory Multiprocessor | Distributed Memory Multiprocessor |
|---|---|
| All processors share common memory | Each processor has its own local memory |
| Communication through shared memory | Communication through message passing |
| Typically Uniform Memory Access (UMA) | Typically Non-Uniform Memory Access (NUMA) |
| Easier to program | Better scalability |
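In a shared-memory system the "communication through shared memory" row is literal: threads running on different processors read and write the same variables, with a lock serializing concurrent updates. A minimal POSIX-threads sketch (compile with -pthread); the counter and thread count are arbitrary:

```c
#include <pthread.h>
#include <stdio.h>

/* Shared memory visible to every thread (and every processor) */
static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);   /* serialize the shared update */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t[4];
    for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
    printf("counter = %ld\n", counter); /* 4 * 100000 = 400000 */
    return 0;
}
```

In a distributed-memory system the same coordination would instead require explicit messages between the nodes.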
Interconnection Networks
- Bus: Simple but limited scalability
- Crossbar Switch: Non-blocking but expensive
- Multistage Interconnection Network: Compromise between bus and crossbar