What is scale invariance?

The ability of a system to scale-down, scale-up, scale-out, and scale-in.

Four dimensions of scalability

Scalability is the ability of a system, network, or process to handle a growing amount of work capably, and to be expanded to accommodate that growth: more users, more data and data sources, greater knowledge intensivity of decision making, increasing process dynamics, infrastructure expansion, and system adaptivity.

Architectures for the next stage of the internet will evolve along four key dimensions of scalability:

Scale-down
As Richard Feynman said, “There's plenty of room at the bottom.” Scale-down architecture is about maximizing feature density and performance while minimizing power consumption. Approaches include nanotechnology, atomic transistors, atomic storage, and quantum computing.

Scale-up
Scale-up architecture (or vertical scaling) is about maximizing device capacity. It adds more processing power, memory, storage, and bandwidth on top of a centralized computing architecture, typically using more expensive, platform-specific, symmetric multiprocessing hardware. It is the forklift approach to increasing capacity and performance.

Scale-out
Scale-out architecture (or horizontal scaling) is about maximizing system capacity. It adds multiple instances of the process architecture, enabling the system to address vastly more subjects, services, and things. Each processor performs essentially the same task but addresses a different portion of the total workload, typically using commodity Intel/AMD hardware and platform-independent open source software.
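
The scale-out pattern, the same task applied to different portions of the workload by many commodity workers, can be illustrated with a minimal Python sketch. The word-count task, the worker count, and the use of multiprocessing are illustrative assumptions, not part of the original description.

```python
# Illustrative sketch of horizontal (scale-out) partitioning: every worker
# runs the same task, but on a different shard of the total workload.
# The shard count and the token-count task are hypothetical examples.
from multiprocessing import Pool

def count_tokens(shard):
    """Same task on every worker: count tokens in its own shard."""
    return sum(len(line.split()) for line in shard)

def scale_out(lines, workers=4):
    # Partition the workload into roughly equal shards, one per worker.
    shards = [lines[i::workers] for i in range(workers)]
    with Pool(processes=workers) as pool:
        partial_counts = pool.map(count_tokens, shards)
    # Aggregate the partial results from each worker.
    return sum(partial_counts)

if __name__ == "__main__":
    corpus = ["the quick brown fox", "jumps over the lazy dog"] * 1000
    print(scale_out(corpus))
```

Adding capacity then means adding workers (or machines) and re-partitioning, rather than buying a bigger single box.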

Scale-in
Scale-in architecture is about maximizing system density and minimizing end-to-end latency. It differs from the old compute-centric model, in which data lives on disk in a deep storage hierarchy and is moved to and from the CPU as needed. The new scale-in architecture is data-centric: multi-threading within and across multi-core processors enables massive parallelism, data lives in persistent memory, and many CPUs surround and use in-memory data in a shallow, flat storage hierarchy where every memory location is addressable by every process thread.
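
A minimal Python sketch of this data-centric idea follows, under illustrative assumptions (a plain shared in-memory array and a fixed thread count): the data stays in one flat, shared structure, and every thread addresses it directly rather than copying it through a deep storage hierarchy.

```python
# Illustrative sketch of the scale-in, data-centric idea: the data stays put
# in one flat, shared in-memory structure, and every thread addresses it
# directly instead of shipping it to and from deeper storage tiers.
# The array size and the per-thread striding are hypothetical choices.
import threading

shared_data = list(range(1_000_000))   # one flat, shared address space
THREADS = 8

def work_in_place(thread_id):
    # Each thread operates directly on its own stride of the shared data;
    # nothing is serialized, staged to disk, or copied between workers.
    for i in range(thread_id, len(shared_data), THREADS):
        shared_data[i] *= 2

threads = [threading.Thread(target=work_in_place, args=(t,)) for t in range(THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(shared_data[:5])   # [0, 2, 4, 6, 8]
```

The point of the sketch is the memory model, not raw speed: compute moves to the data, and every thread can address every location.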

Scale-in architecture is data-centric

Scale-in architecture empowers concept computing, a dimension of increasing knowledge intensivity in process, decision making, and user experience. From a big data perspective, it allows new data elements to be added, context to be aggregated, and schemas to be modified on the fly, without the off-line rebuilds needed to modify data warehouses and NoSQL stores at scale. From a business perspective, scaling in enables “systems that know,” in which changes to business requirements, policies, laws, logic, and so on can be managed flexibly and separately from data sources and operations. Scale-in architecture is a prerequisite for cognitive computing and for systems that learn and improve performance with use and scale.
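
A toy Python sketch of the “no off-line rebuild” property described above, under illustrative assumptions (a tiny dict-based in-memory store and made-up order records, not any particular product or API): new data elements and contextual attributes are attached to live records without a schema migration.

```python
# Toy sketch of schema-flexible, on-the-fly modification: new data elements
# and contextual attributes are added to a live in-memory store without
# taking it down or rebuilding it. The records, fields, and helper functions
# are hypothetical examples.
store = {
    "order:1001": {"customer": "acme", "total": 250.0},
    "order:1002": {"customer": "globex", "total": 99.5},
}

def add_attribute(key, attribute, value):
    """Attach a new data element to an existing record, live."""
    store[key][attribute] = value

def add_context(attribute, derive):
    """Aggregate new context across all records on the fly."""
    for record in store.values():
        record[attribute] = derive(record)

# A new field appears in the business requirements: attach it directly,
# with no schema migration or off-line rebuild.
add_attribute("order:1001", "export_controlled", True)

# Derive a contextual flag across the whole store while it stays online.
add_context("high_value", lambda r: r["total"] > 100)

print(store["order:1001"])
```

Because the logic that derives new attributes is separate from the data itself, rules can change without touching the underlying records, which is the “systems that know” property described above.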

These four dimensions of scalability are key to building smart, high-performing processes of any size, any complexity, and any level of knowledge intensivity: processes that can learn and grow live, adapt quickly to changing requirements, and implement scale-free (or scale-invariant) business models.