Block Diagram of the Computer
Created: 5th of August 2025, 02:09:36 PM
Modified: 5th of August 2025, 05:15:56 PM
Every computer—whether an Alexa sitting on your desk or a cloud VM in a datacentre—follows the same basic blueprint: a central brain that performs operations, fast transient memory for running code, persistent storage for files, and input/output channels to communicate with the outside world. Modern architectures extend this diagram to include networked and cloud‐native components.
```mermaid
flowchart TD
    subgraph CPU ["Central Processing Unit"]
        CU["Control Unit"]
        ALU["Arithmetic & Logic Unit"]
        REG["Registers"]
    end
    subgraph BUS["System Buses"]
        ADDR["Address Bus"]
        DATA["Data Bus"]
        CTRL["Control Bus"]
    end
    subgraph MEM["Primary Memory"]
        RAM["RAM (volatile)"]
    end
    subgraph STORAGE["Secondary & Cloud Storage"]
        LOCAL["HDD / SSD"]
        SAN["SAN Array"]
        CLOUD["Cloud Object Store (API)"]
    end
    subgraph IO["Input / Output"]
        SER["Serial Ports (UART, USB)"]
        PAR["Parallel Buses (PCIe, memory bus)"]
        NET["Network Interface"]
    end
    CPU <-->|ADDR & DATA| BUS
    BUS <--> RAM
    BUS <--> LOCAL
    BUS <--> SAN
    BUS --> IO
    IO <--> CPU
    CLOUD <-. API .-> CPU
```
Key Components
- Control Unit: Fetches instructions from RAM, decodes opcodes, and issues control signals over the Control Bus to coordinate all parts.
- Arithmetic & Logic Unit (ALU): Executes numeric and logical operations at the bit level—this is where addition, subtraction and comparisons occur in hardware.
- Registers: Ultra-fast storage inside the CPU holding operands, results and pointers. Their width (8/16/32/64 bit) determines the maximum value you can handle directly.
- Address Bus: Carries binary addresses from the CPU to RAM or storage controllers. A 32-bit bus can address up to 4 GB of RAM; a 64-bit bus extends that to exabytes.
- Data Bus: Moves bytes (or words) in parallel between CPU, memory and devices. Parallel transfers inside the motherboard give high throughput.
- Control Bus: Sends timing, read/write and interrupt signals—ensuring each transfer happens at the right moment.
- Primary Memory (RAM): Volatile, byte-addressable storage for running programs. The CPU issues an address and instantly reads or writes the corresponding cell.
- Local Storage (HDD/SSD): Secondary memory mapped by the file system. Controllers translate logical block addresses into physical tracks/sectors or flash pages.
- SAN Array: High-performance networked block storage. Servers issue the same block addresses over Fibre Channel or iSCSI, with a dedicated storage fabric handling concurrency.
- Cloud Object Store (API): Abstracts blocks into objects accessed via HTTP(S). The cloud provider translates object names into distributed blocks with replication and integrity checks.
- I/O Channels:
  - Serial Ports: UART, USB—one bit at a time over long distances.
  - Parallel Buses: PCI Express, memory bus—many bits at once for high speed.
  - Network Interfaces: Ethernet, Wi-Fi—packetized data over TCP/IP to SANs or cloud endpoints.
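The relationship between address-bus width and addressable memory mentioned above is just a power of two, and can be checked with a quick calculation:

```python
# An n-bit address bus can select 2**n distinct byte addresses.
def addressable_bytes(bus_width_bits: int) -> int:
    return 2 ** bus_width_bits

# 32-bit bus: 2**32 bytes = 4 GiB
print(addressable_bytes(32) // 2**30, "GiB")   # 4 GiB
# 64-bit bus: 2**64 bytes = 16 EiB (exbibytes)
print(addressable_bytes(64) // 2**60, "EiB")   # 16 EiB
```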
Data Flow
Before any program can run, the CPU and its supporting components work in a well-defined sequence to process instructions. These steps—though executed in microseconds—are the heartbeat of every computing task, from opening a file to streaming a video. Understanding this flow reveals how the control unit, buses, memory, and ALU interact in real time to fetch, decode, execute, and store instructions. The following breakdown traces this journey, step by step.
1. Fetch: The Control Unit places the instruction’s address on the Address Bus; RAM returns the instruction bits over the Data Bus.
2. Decode: Control signals interpret opcodes and select ALU operations.
3. Execute: Operands are read from Registers or RAM; the ALU computes results.
4. Store: Results are written back to Registers, RAM, or sent as I/O commands to storage or peripherals.
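The four steps above can be sketched as a toy interpreter. The opcodes (`LOAD`, `ADD`, `STORE`, `HALT`) and the two-register machine are invented purely for illustration, not a real ISA:

```python
# Toy fetch-decode-execute loop over a dict standing in for RAM.
RAM = {0: ("LOAD", "A", 5),     # A <- 5
       1: ("LOAD", "B", 3),     # B <- 3
       2: ("ADD",  "A", "B"),   # A <- A + B
       3: ("STORE", "A", 100),  # RAM[100] <- A
       4: ("HALT",)}
registers = {"A": 0, "B": 0}
pc = 0                           # program counter (next instruction address)

while True:
    instr = RAM[pc]              # Fetch: address bus out, instruction back on data bus
    pc += 1
    op = instr[0]                # Decode: inspect the opcode
    if op == "LOAD":             # Execute / Store:
        registers[instr[1]] = instr[2]
    elif op == "ADD":
        registers[instr[1]] += registers[instr[2]]
    elif op == "STORE":
        RAM[instr[2]] = registers[instr[1]]
    elif op == "HALT":
        break

print(RAM[100])  # 8
```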
Moving Ahead: With this hardware overview in place, the next step is to dive into the CPU’s own “language”—its Instruction Set Architecture—and see how it uses addressing modes to reference data.
Bringing It All Together
- The CPU issues a binary address over the address bus; RAM or the storage controller decodes it into a physical location (track/sector or flash page).
- Data flows back over the data bus: in parallel on the motherboard or serially over USB/Ethernet to remote storage.
- Control signals synchronise each transfer, ensuring data integrity (CRC checks), file locking in SANs, or API acknowledgements in cloud storage.
Having seen how data and control signals traverse the CPU, buses, memory and storage, the next step is to look under the hood of the CPU itself—its instruction set and how it addresses that memory.
Instruction Set & Addressing Techniques
An Instruction Set Architecture (ISA) defines the low‐level commands a CPU understands—its “vocabulary”—and the ways it accesses memory (addressing modes). Different CPUs (Intel 8086, ARM, RISC-V) each have their own ISA, so software must be built specifically for that architecture.
8086 Example: Adding Two Numbers
```asm
; 8086 assembly: sum = 5 + 3
MOV AX, 5     ; load immediate 5 into register AX
MOV BX, 3     ; load immediate 3 into register BX
ADD AX, BX    ; AX <- AX + BX (result = 8)
MOV [SUM], AX ; store result at memory label SUM

SUM DW 0      ; reserve a word for SUM
```
Python Equivalent
```python
# Python: total = 5 + 3
a = 5
b = 3
total = a + b  # named 'total' to avoid shadowing Python's built-in sum()
print(total)   # outputs 8
```
Addressing Modes
CPUs support various ways to reference data in instructions:
- Immediate: value encoded in the instruction (e.g. 5 in MOV AX, 5).
- Register: operate on CPU registers (e.g. ADD AX, BX).
- Direct: fixed memory address (e.g. MOV AX, [0x2000]).
- Indirect: register holds address (e.g. MOV AX, [BX]).
- Based/Indexed: register + offset (e.g. MOV AX, [SI+4]).
- Segment:Offset (8086): physical = segment×16 + offset.
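The modes above can be mimicked in Python by treating a list as memory and a dict as the register file. This is purely illustrative (not real 8086 semantics); the addresses and values are arbitrary:

```python
memory = [0] * 16
registers = {"AX": 0, "BX": 8, "SI": 2}
memory[8] = 42       # a value stored at address 8 (the address held in BX)
memory[2 + 4] = 7    # a value stored at address SI + 4

registers["AX"] = 5                            # Immediate: MOV AX, 5
registers["AX"] = registers["BX"]              # Register:  ADD-style register operand
registers["AX"] = memory[8]                    # Direct:    MOV AX, [8]
registers["AX"] = memory[registers["BX"]]      # Indirect:  MOV AX, [BX]
registers["AX"] = memory[registers["SI"] + 4]  # Indexed:   MOV AX, [SI+4]
print(registers["AX"])  # 7
```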
```mermaid
flowchart LR
    SEG["Segment (e.g., CS)"] -- "×16" --> MULT["Shifted Segment"]
    OFF["Offset (e.g., IP)"] -- "+" --> ADD["Linear Address"]
    MULT --> ADD
    ADD --> PA["Physical Address"]
```
This flowchart helps explain how memory addressing works in the 8086 architecture using the Segment:Offset model. Since the 8086 processor only has 16-bit registers, it cannot directly address the full 1 MB (2²⁰ bytes) memory space using a single register. Instead, Intel designed a method that splits the memory address into two parts:
- Segment: A 16-bit value stored in one of the segment registers (such as CS, DS, SS, or ES). It points to the start of a 64 KB memory block.
- Offset: Another 16-bit value that specifies a location within that segment block (e.g., the Instruction Pointer `IP` for code, or `BX`/`SI` for data).
To calculate the actual physical address in RAM, the CPU shifts the Segment value left by 4 bits (which is the same as multiplying by 16), and then adds the Offset. This gives a 20-bit physical address. The formula is:
Physical Address = (Segment × 16) + Offset
In the flowchart above:
- The segment value is passed to a block that performs the ×16 operation (shift left by 4 bits).
- The offset is sent directly to the addition step.
- Both values are then combined using an addition block to produce the final address in memory.
This method allowed the 8086 processor to access more memory than a single 16-bit register could represent. It also introduced the idea of segment-based memory organisation, which was an important step in early computer architecture.
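The formula translates directly into code. A minimal sketch follows; the segment and offset values are arbitrary examples:

```python
def physical_address(segment: int, offset: int) -> int:
    # Shift the segment left 4 bits (i.e. multiply by 16), add the offset,
    # and mask to 20 bits (the address wraps at 1 MB on the original 8086).
    return ((segment << 4) + offset) & 0xFFFFF

# e.g. segment 0x1234, offset 0x0056:
# 0x1234 * 16 = 0x12340; 0x12340 + 0x0056 = 0x12396
print(hex(physical_address(0x1234, 0x0056)))  # 0x12396
```

Note that many different segment:offset pairs map to the same physical address (for example, 0x1230:0x0096 also yields 0x12396), which is a well-known quirk of this scheme.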
Architecture‐Specific Software
Because each ISA differs, you need:
- Assembler/Compiler: converts human‐readable code into machine instructions for a specific CPU (e.g. NASM for 8086, GCC targeting x86_64).
- Linker/Loader: resolves addresses, combines modules, relocates code and data into memory.
- Device Drivers: specialized code that speaks the CPU’s ISA and bus protocols to control hardware.
Modern OS & Language Support
Operating systems abstract hardware differences via:
- Virtual Memory: maps process addresses to physical RAM or swap, so programs need not worry about segment:offset.
- System Calls: provide uniform interfaces for I/O, regardless of underlying bus (PCIe, USB, network).
- Portability Layers: JIT compilers (e.g. for Python, Java) translate high‐level code into optimized machine code at runtime.
Tip: High-level languages and modern OS kernels shield you from ISA details—letting you write once and run across multiple architectures without rewriting your code.
Tip: In cloud‐native designs, your “disk” is an API call. Behind the scenes, the provider maps HTTP object names to physical blocks across data centres—extending the block diagram into a global fabric.
With this high-level map of a computer’s data pathways—extended now to include cloud storage APIs—you can see how each chapter’s concepts (bits & bytes, serial vs parallel, addressing, file systems, SANs and RAID) integrate into the flow of every instruction you write in Python.