SKA Per-Class Entropy Explorer
Runs SKA independently for each digit class and overlays the resulting entropy trajectories. Each digit has its own model and weights, so each entropy trajectory is a pure fingerprint of that digit's structure.
Reference Paper
Abstract
We introduce the Structured Knowledge Accumulation (SKA) framework, which reinterprets entropy as a dynamic, layer-wise measure of knowledge alignment in neural networks. Instead of relying on traditional gradient-based optimization, SKA defines entropy in terms of knowledge vectors and their influence on decision probabilities across multiple layers. This formulation naturally leads to the emergence of activation functions such as the sigmoid as a consequence of entropy minimization. Unlike conventional backpropagation, SKA allows each layer to optimize independently by aligning its knowledge representation with changes in decision probabilities. As a result, total network entropy decreases in a hierarchical manner, allowing knowledge structures to evolve progressively. This approach provides a scalable, biologically plausible alternative to gradient-based learning, bridging information theory and artificial intelligence while offering promising applications in resource-constrained and parallel computing environments.
SKA Explorer Suite
About this App
SKA runs independently for each digit. Each class traces its own entropy trajectory, revealing phase differences, amplitude inversions, and the hierarchical structure of digit recognition. No labels are used.
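The per-class procedure described above can be sketched in a few lines. This is an illustrative toy, not the paper's exact equations: the entropy approximation (knowledge vector dotted with the change in decision probabilities, scaled to bits), the local alignment update rule, the uniform-prior initialization of the decision probabilities, and the random stand-in data are all assumptions made for the sketch.

```python
import numpy as np

def sigmoid(z):
    # Decision probability D = sigma(z); the paper derives the sigmoid
    # from entropy minimization rather than assuming it a priori.
    return 1.0 / (1.0 + np.exp(-z))

def ska_entropy_trajectory(X, steps=20, lr=0.05, seed=0):
    """One independent, label-free SKA-style run for a single class.

    Returns the entropy value at each step. The entropy approximation
    H ~ -(z . dD) / ln 2 and the local update rule are hypothetical
    placeholders standing in for the paper's formulation.
    """
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=X.shape[1])
    D_prev = np.full(len(X), 0.5)  # uninformed prior over decisions
    traj = []
    for _ in range(steps):
        z = X @ w                      # knowledge vector for this step
        D = sigmoid(z)
        dD = D - D_prev                # change in decision probabilities
        H = -np.dot(z, dD) / np.log(2)  # entropy in bits (assumed form)
        w += lr * (X.T @ dD) / len(X)  # local alignment update, no backprop
        D_prev = D
        traj.append(H)
    return traj

# One independent run per digit class; each class gets its own model and
# weights. Random data stands in for the per-class digit images here.
rng = np.random.default_rng(1)
trajectories = {d: ska_entropy_trajectory(rng.normal(size=(32, 64)), seed=d)
                for d in range(10)}
```

Overlaying the ten `trajectories` (one curve per digit) is what the app visualizes; because no gradients or labels cross between classes, differences between curves reflect only each digit's own structure.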