UNIX for Modern Architectures: Scalability, SMP, and the Post-RISC Era (1994)
Modern RISC CPUs are clocked at 66-200 MHz, while DRAM access times hover at 60-80 ns. Measured in lost instruction-issue slots on a superscalar core, the performance gap, the "memory wall", now approaches two orders of magnitude. Consequently, the UNIX kernel's data structures (process table, buffer cache, vnode/inode tables) must be arranged for L1/L2 cache locality.
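One concrete arrangement is sketched below (the 64-byte line size and the padding idiom are assumptions for illustration, not taken from any particular vendor kernel): pad per-CPU hot fields out to a full cache line so that two CPUs never write the same line.

    /* Sketch: pad per-CPU scheduler counters to a cache line so that
     * two CPUs never ping-pong ownership of the same line ("false
     * sharing"). The 64-byte line size is an assumption; a real
     * kernel keys this off the target CPU. */
    #define CACHE_LINE 64

    struct percpu_stats {
        unsigned long nr_running;     /* written constantly by owner CPU */
        unsigned long ctx_switches;
        char pad[CACHE_LINE - 2 * sizeof(unsigned long)];
    };

    /* One slot per CPU; adjacent slots now occupy distinct lines. */
    struct percpu_stats cpu_stats[64];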
Old UNIX ran all device interrupts on a single CPU. On SMP, interrupt routing is critical. Modern platforms (PCI systems following Intel's MP specification 1.1, SGI's Challenge, Sun's SBus machines) support interrupt vectors that can be directed to any CPU.
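How a driver might spread its vectors is sketched below; intr_bind() is a hypothetical primitive (on Intel MP 1.1 hardware the equivalent operation is programming an I/O APIC redirection-table entry), and the vector base and CPU count are made-up constants.

    /* Hypothetical sketch: direct device interrupt vectors at chosen
     * CPUs. intr_bind() is illustrative, not a real 1994 kernel API. */
    #define NCPUS         4       /* assumed CPU count */
    #define DISK_VEC_BASE 0x40    /* assumed first disk vector */

    extern int intr_bind(int vector, int cpu);   /* assumed primitive */

    void route_disk_interrupts(void)
    {
        int vec;
        for (vec = DISK_VEC_BASE; vec < DISK_VEC_BASE + 4; vec++)
            intr_bind(vec, (vec - DISK_VEC_BASE) % NCPUS);  /* round-robin */
    }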
UNIX System V Release 4.0 MP (1991) was a disaster. It used a single "master lock" around the entire kernel. On a four-processor Intel 486 system, performance was worse than on a single CPU because of lock contention on the run queue and buffer cache.
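The structure of such a master lock is easy to sketch (identifiers and the C11 atomics below are illustrative; SVR4.0 MP used assembly test-and-set, not this API). Every system call funnels through one spinlock, so a second CPU spins uselessly while the first is anywhere in the kernel:

    #include <stdatomic.h>

    /* Sketch of a "giant" master lock: the entire kernel becomes one
     * critical section. Identifiers are illustrative. */
    static atomic_flag master_lock = ATOMIC_FLAG_INIT;

    static void kernel_lock(void)
    {
        /* Busy-wait: a CPU entering the kernel spins here while ANY
         * other CPU holds the lock, whatever subsystem it is in. */
        while (atomic_flag_test_and_set_explicit(&master_lock,
                                                 memory_order_acquire))
            ;
    }

    static void kernel_unlock(void)
    {
        atomic_flag_clear_explicit(&master_lock, memory_order_release);
    }

    extern long dispatch(long nr, long a0, long a1, long a2);

    long syscall_entry(long nr, long a0, long a1, long a2)
    {
        long r;
        kernel_lock();                 /* serializes ALL kernel work */
        r = dispatch(nr, a0, a1, a2);
        kernel_unlock();
        return r;
    }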
The original UNIX kernel, a masterpiece of simplicity, assumed a single CPU, a single memory bus, and an I/O subsystem that was slow compared to the CPU. Today, that kernel becomes the bottleneck. The "Big Kernel Lock" (BKL) found in many commercial UNIXes (System V Release 4, early BSD derivatives) is no longer viable. When a 150 MHz Alpha AXP, or a 50 MHz SuperSPARC, sits idle spinning on a lock held by a peer processor deep in an unrelated subsystem, the system's scalability collapses.
The traditional BSD scheduler, which recalculates the priority of every process once per second (an O(N) scan), is fatal on a 16-CPU system. The 4.4BSD-Lite scheduler, while improved, still requires a global lock on its single run queue.
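The O(N) cost is visible in a schematic of the classic schedcpu() walk (simplified from the 4.3BSD idea; the field names, the decay filter, and PUSER follow the BSD sources approximately, with nice values omitted):

    /* Schematic of the classic BSD schedcpu(): once per second, walk
     * every process and decay its CPU-usage estimate, then recompute
     * its priority. With N processes this is O(N) work, done under
     * one global lock on an SMP. */
    #define PUSER 50                    /* base user priority (4.3BSD) */

    struct proc {
        struct proc *p_next;
        unsigned int p_estcpu;          /* decayed CPU usage */
        int          p_pri;             /* scheduling priority */
    };

    extern struct proc *allproc;        /* global process list */
    extern int loadav;                  /* load average, scaled to int */

    void schedcpu(void)
    {
        struct proc *p;
        for (p = allproc; p != NULL; p = p->p_next) {
            /* decay filter: estcpu *= (2*load) / (2*load + 1) */
            p->p_estcpu = (2 * loadav * p->p_estcpu) / (2 * loadav + 1);
            p->p_pri = PUSER + p->p_estcpu / 4;
        }
    }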
The traditional UNIX buffer cache, a pool of memory pages used to cache disk blocks, is obsolete on modern architectures for two reasons. First, the virtual memory system can now page directly from the filesystem (using mmap() and clustered pageins), so file data need not be cached twice. Second, on SMP systems, the buffer cache lock becomes a global bottleneck.
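The first point can be demonstrated from user space: mmap() hands file pages straight out of the VM system, with no copy through a separate buffer cache. A minimal example (error handling kept thin for brevity):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        struct stat st;
        int fd;
        if (argc < 2)
            return 1;
        fd = open(argv[1], O_RDONLY);
        if (fd < 0 || fstat(fd, &st) < 0)
            return 1;

        /* Map the file: pages come straight from the VM page cache. */
        char *p = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED)
            return 1;

        long sum = 0;
        for (off_t i = 0; i < st.st_size; i++)
            sum += p[i];        /* sequential touch: clustered pageins */
        printf("checksum: %ld\n", sum);

        munmap(p, st.st_size);
        close(fd);
        return 0;
    }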
The danger is interrupt saturation. A misbehaving network card at 100 Mbps can generate 150,000 interrupts per second. If all interrupts go to one CPU, that CPU is dead. The solution is interrupt coalescing (already present in some Ethernet chips) plus "kernel threads" for bottom halves, allowing the interrupt dispatcher to merely wake a thread that can run on any CPU.
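The shape of that design is sketched below; the thread and wakeup primitives (wakeup_thread(), sleep_on()) are assumptions, not a specific vendor's 1994 API:

    /* Sketch: the hardware interrupt handler does the minimum (ack
     * the chip, wake a thread) so the heavy protocol work can run on
     * whichever CPU is free. Primitives are assumed, not a real API. */
    extern void nic_ack_and_mask(void);     /* assumed device operations */
    extern int  nic_drain_ring(void);       /* returns packets processed */
    extern void nic_unmask(void);
    extern void wakeup_thread(int tid);     /* assumed kernel primitives */
    extern void sleep_on(int tid);

    static int rx_thread_id;

    void nic_intr(void)                     /* runs at interrupt level */
    {
        nic_ack_and_mask();                 /* silence the device */
        wakeup_thread(rx_thread_id);        /* defer the real work */
    }

    void nic_rx_thread(void)                /* an ordinary schedulable thread */
    {
        for (;;) {
            sleep_on(rx_thread_id);         /* wait for an interrupt */
            while (nic_drain_ring() > 0)    /* batch processing: coalescing */
                ;
            nic_unmask();                   /* re-arm the device */
        }
    }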
The optimal policy in 1994 is device-to-CPU affinity: bind a high-bandwidth device (e.g., an FDDI interface or fast-wide SCSI controller) to a dedicated CPU. That CPU runs the interrupt handler, the device driver's bottom half, and the user process that consumes the data. This "pipeline" design, seen in Sequent's DYNIX/ptx, can achieve 85% linear scaling for network I/O.
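The user-process half of that pipeline can be expressed on systems that expose CPU binding; Solaris 2.x, for instance, provides processor_bind() (the CPU number below is an arbitrary assumption, and DYNIX/ptx has its own equivalent):

    /* Bind the data-consuming process to CPU 3, assumed to be the
     * CPU servicing the device's interrupts. processor_bind() is the
     * Solaris 2.x interface; the CPU number is illustrative. */
    #include <sys/types.h>
    #include <sys/processor.h>
    #include <sys/procset.h>
    #include <unistd.h>

    int bind_self_to_cpu3(void)
    {
        return processor_bind(P_PID, getpid(), 3, NULL);
    }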
This paper examines how UNIX must be, and is being, re-architected for three pillars of the modern (1994) architecture: symmetric multiprocessing (SMP), non-uniform memory access (NUMA), and 64-bit addressability.
Consider the traditional sleep()/wakeup() mechanism. In a single-CPU UNIX, this was elegant. On an SMP, it can require a "rendezvous" interrupt to all CPUs, flushing TLBs and invalidating cache lines. A 1994 benchmark on an SGI Challenge (twelve MIPS R4400 CPUs) showed that a simple select() loop over 1000 file descriptors caused 40% of kernel time to be spent in cross-CPU TLB shootdowns.
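A sketch makes the contrast concrete. On a uniprocessor, sleep() could simply raise the interrupt priority level around the check-then-block sequence; on an SMP the sleep queue itself needs a spinlock, and wakeup() may have to interrupt another processor (all primitives below are illustrative):

    /* Illustrative SMP sleep()/wakeup(). On one CPU, splhigh() made
     * check-then-block atomic. On SMP another CPU can call wakeup()
     * between the check and the context switch, so the queue needs a
     * spinlock and the block must release it atomically. */
    extern void  spin_lock(int *l);
    extern void  spin_unlock(int *l);
    extern void  enqueue_sleeper(void *chan);
    extern void *dequeue_sleeper(void *chan);   /* NULL when empty */
    extern void  swtch_and_unlock(int *l);      /* block + release, atomically */
    extern void  cross_cpu_kick(void *p);       /* IPI the CPU running p */

    static int sleepq_lock;

    void mp_sleep(void *chan)
    {
        spin_lock(&sleepq_lock);
        enqueue_sleeper(chan);
        swtch_and_unlock(&sleepq_lock);   /* no window for a lost wakeup */
    }

    void mp_wakeup(void *chan)
    {
        void *p;
        spin_lock(&sleepq_lock);
        while ((p = dequeue_sleeper(chan)) != NULL)
            cross_cpu_kick(p);            /* may interrupt a remote CPU */
        spin_unlock(&sleepq_lock);
    }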
In 1994, UNIX stands at a paradoxical crossroads. Having vanquished proprietary operating systems from VMS to OS/400, it now faces a crisis born of its own success. The architectures UNIX must run on have fundamentally mutated. The simple, single-issue, in-order scalar processors of the 1980s (e.g., Motorola 68030, Intel 80386) are being replaced by superscalar, out-of-order RISC behemoths (Alpha AXP, MIPS R4000, POWER2, SPARC v9) and, increasingly, Symmetric Multiprocessors (SMPs) with 8, 16, or even 64 CPUs.




