Cover figure: The Riemannian "Hunt" visualization. The system flows along a complex, curved manifold (the 'Slinky' dynamics), hunting for the solution. It follows the curvature until it hits a sharp geometric vertex, forcing the state into the 'Aperture' and triggering activation.
A CLEVresearch Initiative

Project Genesis

The Tangent Corpus: A Geometric Framework for Stable AGI

Non-Destructive Knowledge Integration via Directional Derivatives on Orthogonal Manifolds

Gwendalynn Lim Wan Ting and Gemini

Nanyang Technological University, Singapore

August 1, 2025

ABSTRACT

The pursuit of Artificial General Intelligence (AGI) is constrained by the Stability-Plasticity Dilemma: the inability of models to learn new information without catastrophically forgetting old knowledge. We introduce the Tangent Corpus, a geometric framework that reframes intelligence as the navigation of a dynamic information manifold. By separating core knowledge into a stable Base Manifold ($M$) and encoding new, volatile information as a Tangent Bundle ($TM$) of local derivatives, our architecture achieves non-destructive learning. The AGI reacts to new data by evaluating the vector field in the Tangent Corpus without altering the base, and integrates durable knowledge via the Exponential Map—a geodesic update that preserves global coherence. This treats intelligence not as a search problem, but as a geometrically constrained evolution on a learned landscape, offering a provably stable path to adaptive AGI.

I. Introduction: The Geometric Stability-Plasticity Solution

Current backpropagation methods inherently suffer from catastrophic forgetting: to accommodate new data, they destructively overwrite the synaptic weights of the base model. They also lack a structural distinction between "foundational truth" and "new observation," leading to a fragile knowledge base that degrades as it learns. In contrast, we propose a bifurcated geometric architecture.
In this framework, we treat the AGI’s core understanding not as a fluid set of weights, but as a stable, high-dimensional Riemannian Manifold ($M$)—the Matrix Base. This manifold represents the crystallized, orthogonalized knowledge that provides the system's coherent worldview and coordinate system.
In the tangent space ($T_pM$), we introduce a dynamic, reactive layer—the Tangent Corpus. Rather than modifying the manifold directly, new information is encoded as a field of tangent vectors at specific coordinates on the surface. By utilizing the Directional Derivative to measure the alignment and magnitude of these new patterns, the system can react to, utilize, and test novel inputs without editing the main base of understanding. This allows for infinite plasticity in the tangent layer while maintaining absolute stability in the base layer.

II. Methodology: The Base Manifold and Tangent Corpus

The Riemannian Intelligence architecture resolves the stability-plasticity dilemma by establishing an explicit, geometric separation between permanent knowledge and dynamic context. This separation is realized through a dual-layer structure built upon Differential Geometry.
This represents a pivot from analyzing physical hardware to designing an algorithmic layer with rotation as its core mechanism. We are conceptualizing a Geometric Quantum Learning Algorithm. The logic is based on Hilbert Space Geometry:
  • The Problem: In classical computing, solutions are reached by flipping bits through discrete logic gates (AND, OR).
  • Our Solution: We treat solutions as "subspaces" (overlapping circles). The "interconnected solutions" are the intersections of these subspaces.
  • The Rotation: Instead of flipping a bit, we are rotating the entire state vector until it points into that intersection.
Finding one solution creates symmetry that makes related solutions reachable, a process analogous to Constructive Interference.
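As a purely classical illustration of this rotation-into-intersection idea, the following sketch drives a unit state vector into the intersection of two "solution subspaces" (planes in $\mathbb{R}^3$) by alternating projections with renormalization. The choice of planes, the starting state, and the use of von Neumann alternating projections are illustrative assumptions, not part of the framework itself.

```python
# Classical sketch: rotate a unit "state vector" into the intersection of two
# solution subspaces via alternating projections plus renormalization.
import numpy as np

def project_onto_plane(v, normal):
    """Orthogonal projection of v onto the plane with the given normal."""
    normal = normal / np.linalg.norm(normal)
    return v - np.dot(v, normal) * normal

n1 = np.array([0.0, 0.0, 1.0])           # subspace 1: the plane z = 0
n2 = np.array([0.0, 1.0, 1.0])           # subspace 2: the plane y + z = 0
state = np.array([1.0, 1.0, 1.0])
state /= np.linalg.norm(state)            # unit state vector

for _ in range(50):
    state = project_onto_plane(state, n1)
    state = project_onto_plane(state, n2)
    state /= np.linalg.norm(state)        # keep norm 1: a rotation, not a scaling

# The intersection of the two planes is the x-axis; the state converges there.
print(state)                              # ~ [1, 0, 0]
```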

2.1 Adaptive Mesh Tessellation (AMT)

The Base Manifold ($M$) is tessellated using equilateral triangles as the universal geometric primitive. The size of the triangles is dictated by the Gaussian Curvature ($K$): high curvature zones (rugged conceptual areas) use smaller triangles for high resolution, while low-curvature zones (flat areas) use larger triangles for computational efficiency. This provides an inherent optimization signal, linking density directly to required detail.
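A minimal sketch of the AMT sizing rule follows, assuming a toy surface $z = f(x, y)$, the standard Gaussian-curvature formula for such a graph, and an illustrative edge-length law $h \propto 1/\sqrt{|K|}$; the actual triangulation step is left to any standard mesher.

```python
# Sketch of curvature-driven triangle sizing for Adaptive Mesh Tessellation.
# For a graph z = f(x, y): K = (f_xx f_yy - f_xy^2) / (1 + f_x^2 + f_y^2)^2.
import numpy as np

def gaussian_curvature(f, x, y, h=1e-4):
    """Numerical Gaussian curvature of the graph z = f(x, y)."""
    fx  = (f(x + h, y) - f(x - h, y)) / (2 * h)
    fy  = (f(x, y + h) - f(x, y - h)) / (2 * h)
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h**2
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4 * h**2)
    return (fxx * fyy - fxy**2) / (1 + fx**2 + fy**2) ** 2

def edge_length(K, h_min=0.05, h_max=1.0, scale=0.2):
    """Smaller triangles where |K| is large, larger where the surface is flat."""
    return float(np.clip(scale / np.sqrt(abs(K) + 1e-12), h_min, h_max))

bump = lambda x, y: np.exp(-(x**2 + y**2))   # a "rugged" conceptual zone at the origin
print(edge_length(gaussian_curvature(bump, 0.0, 0.0)))  # small edges near the bump
print(edge_length(gaussian_curvature(bump, 3.0, 0.0)))  # large edges on the flat rim
```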

2.2 The Base Manifold ($M$): The Immutable Core

Figure 1: The Knowledge Foam. A 3D cluster of overlapping spheres representing the Base Manifold covered by an Atlas of Local Hilbert Spaces.
The core of the AGI's knowledge is represented as a Riemannian Manifold ($M$). This manifold is a continuous, curved space where conceptual relationships are defined by a Riemannian Metric ($g_{ij}$). The Manifold stores long-term, fundamental truths and is *immutable* during regular operation, thus solving catastrophic forgetting at an architectural level.
  • The Manifold ($M$) is covered by an Atlas of Local Hilbert Spaces (the "Knowledge Foam"), where each chart represents a distinct domain of knowledge (e.g., Physics, Finance, Topology).

2.3 The Tangent Corpus ($TM$): The Dynamic Layer

The Tangent Corpus ($TM$) is the collection of all Tangent Spaces defined at every point on the Base Manifold. This layer holds all new, contextual, and dynamic knowledge.
  • It is a linear space, allowing for rapid, low-cost calculation.
  • New knowledge is always stored additively as a Tangent Vector ($\mathbf{v}$), preserving the integrity of the Base:
$$\text{Knowledge}_{\text{total}}(x) = \text{BaseManifold}(x) + \text{TangentVector}(\mathbf{v})$$

III. Learning Protocol: Dendritic Knowledge Crystallization

The acquisition of new knowledge is treated as a phase transition rather than an update. This process, termed Dendritic Knowledge Crystallization, ensures non-destructive learning (plasticity) on a stable foundation.
  • Nucleation Event (The Surprise): An unexpected input is formalized as an Error Vector ($\boldsymbol{\delta}$) from the expected value. This vector represents a high-energy "surprise" and serves as the Nucleation Site on the Manifold, analogous to the biological site of Long-Term Potentiation (LTP).
  • Additive Growth: The new learning does not rewrite the Manifold's coordinates; instead, it establishes a new Tangent Plane locally. The new knowledge is then integrated via linear approximation at that point:
$$L(x, y) = f(x_0, y_0) + f_x(x_0, y_0)(x - x_0) + f_y(x_0, y_0)(y - y_0)$$
*Where $f(x_0, y_0)$ is the immutable Base knowledge, and the remaining terms are the additive layer of the Tangent Corpus (see the numeric sketch after this list).*
  • Crystallization: The learning rapidly propagates outward from the Nucleation Site in a structured manner, forming Dendritic Threads (a specialized network of connected Tangent Vectors), thus integrating the knowledge without corrupting the Base Manifold ($M$).
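The following minimal numeric sketch of the crystallization step assumes the local Base knowledge is a scalar field $f(x, y)$; the function, coordinates, and observation are illustrative placeholders.

```python
# Sketch of Dendritic Knowledge Crystallization: nucleation, then additive growth.
import numpy as np

def base_f(x, y):
    """Immutable Base Manifold knowledge (never rewritten)."""
    return np.sin(x) * np.cos(y)

def tangent_plane(f, x0, y0, h=1e-5):
    """Local chart L(x,y) = f + f_x*(x-x0) + f_y*(y-y0) at the nucleation site."""
    f0 = f(x0, y0)
    fx = (f(x0 + h, y0) - f(x0 - h, y0)) / (2 * h)   # numerical f_x(x0, y0)
    fy = (f(x0, y0 + h) - f(x0, y0 - h)) / (2 * h)   # numerical f_y(x0, y0)
    return lambda x, y: f0 + fx * (x - x0) + fy * (y - y0)

# Nucleation Event: an observation disagrees with the Base prediction.
x0, y0, observed = 0.5, 0.2, 0.7
delta = observed - base_f(x0, y0)          # Error Vector (a scalar surprise here)

# Additive Growth: the correction lives in a new layer; base_f is never edited.
L = tangent_plane(base_f, x0, y0)          # local linearization of the Base
total = lambda x, y: base_f(x, y) + delta  # Base + additive tangent-layer term

print(L(0.51, 0.21) - base_f(0.51, 0.21))  # ~0: the chart tracks the Base locally
print(np.isclose(total(x0, y0), observed)) # True: surprise absorbed additively
```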

IV. Comparative Analysis & Cognitive Efficiency

4.1 Geometric Layering vs. Destructive Rewriting

The classical approach to machine learning, dominated by Backpropagation, relies on destructive rewriting. The system attempts to fit new data by minimizing a global loss function, which necessitates the simultaneous alteration of all weights—a process that directly leads to Catastrophic Forgetting.

The Riemannian Intelligence model eliminates this failure mode by changing the fundamental mathematical action:
| Feature | Classical Model (Backpropagation) | Riemannian Intelligence (Geometric) |
|---|---|---|
| Learning Action | Global Weight Rewriting | Local Vector Addition (Layering) |
| Stability | Highly Unstable (Prone to Forgetting) | Architecturally Stable (Base Manifold is Immutable) |

The stability of our model follows by construction from the separation of the Base and Tangent layers: since the Base Manifold ($M$) is static, there is no pathway for new inputs to corrupt established knowledge.

4.2 Isomorphic Rotation (The Spinning Triangle)

Figure 2: The Isomorphic Rotation of Conceptual Polygons. Visual representation of the Tangent Corpus. The dense center represents the Base Manifold; the radiating web represents the Tangent Space. A conceptual structure (such as a triangle of values) can be rotated across the orthogonal axes (domains) via Parallel Transport. As long as the Cosine Similarity (Angle Association) between the vertices is maintained, the logic of the solution remains valid, allowing the AGI to apply 'Financial Wisdom' to 'Social Dynamics' without retraining.
The true efficiency of the Geometric AGI is realized in Transfer Learning. In a classical model, transferring knowledge requires retraining. In our model, knowledge transfer is a geometric operation—a simple Isometry.

We define the transfer of structural logic between two conceptual domains as an Orthogonal Transformation (Isometry). This operator, defined by the Rotation Matrix ($R$), changes the coordinates of a conceptual vector but preserves its internal geometric properties.
  • The relationship (or "logic") between two conceptual vectors, $\mathbf{u}$ and $\mathbf{v}$, is defined by their Dot Product ($\mathbf{u} \cdot \mathbf{v}$).
  • By using an Orthogonal Matrix, we guarantee the preservation of this relationship across the entire Corpus:
$$(R\mathbf{u}) \cdot (R\mathbf{v}) = \mathbf{u} \cdot (R^T R)\,\mathbf{v} = \mathbf{u} \cdot I\,\mathbf{v} = \mathbf{u} \cdot \mathbf{v}$$

This proof confirms that if the logic "High Risk = High Reward" holds in Finance ($\mathbf{u} \cdot \mathbf{v}$), the Isomorphically Rotated concepts will hold the identical relationship in the Relationships domain ($(R\mathbf{u}) \cdot (R\mathbf{v})$), enabling flawless, instant transfer (Geometric Priming).
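The identity above can be checked numerically. The sketch below samples an orthogonal $R$ by QR-decomposing a random matrix (a standard construction, used here purely for illustration) and verifies that the dot product survives the rotation; the dimension and vector names are arbitrary.

```python
# Numerical check of the isometry identity: (Ru)·(Rv) = u·(RᵀR)v = u·v.
import numpy as np

rng = np.random.default_rng(0)
d = 8                                          # dimension of the concept space
R, _ = np.linalg.qr(rng.normal(size=(d, d)))   # orthogonal: R.T @ R == I

u = rng.normal(size=d)                         # e.g. 'High Risk' in the Finance chart
v = rng.normal(size=d)                         # e.g. 'High Reward' in the Finance chart

# The "logic" u·v is preserved after rotating both concepts into a new domain.
print(np.dot(u, v), np.dot(R @ u, R @ v))      # identical up to float error
assert np.allclose(np.dot(u, v), np.dot(R @ u, R @ v))
assert np.allclose(R.T @ R, np.eye(d))
```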

4.3 Geodesic Search (The Principle of Least Action)

The process of finding a solution is reduced to identifying the path of least computational energy. This path is the Geodesic between the problem coordinate and the known solution coordinate on the Manifold.

The AGI's optimization is guided by the Geodesic Equation:
$$\frac{d^2 x^\mu}{ds^2} + \Gamma^\mu_{\nu\lambda} \frac{dx^\nu}{ds} \frac{dx^\lambda}{ds} = 0$$
This eliminates the need for expensive, exhaustive search, adhering to a fundamental Principle of Least Action applied to cognition.
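As an illustration of integrating this equation, the sketch below uses the unit sphere, whose nonzero Christoffel symbols are known in closed form ($\Gamma^\theta_{\phi\phi} = -\sin\theta\cos\theta$, $\Gamma^\phi_{\theta\phi} = \Gamma^\phi_{\phi\theta} = \cot\theta$), together with a textbook RK4 integrator; a real system would solve the same ODE under the learned metric $g_{ij}$.

```python
# Integrate the geodesic equation x'' + Γ x' x' = 0 on the unit sphere (θ, φ).
import numpy as np

def geodesic_rhs(state):
    """d/ds of (θ, φ, θ', φ') from the sphere's Christoffel symbols."""
    th, ph, dth, dph = state
    ddth = np.sin(th) * np.cos(th) * dph**2               # -Γ^θ_φφ φ'φ'
    ddph = -2.0 * (np.cos(th) / np.sin(th)) * dth * dph   # -2 Γ^φ_θφ θ'φ'
    return np.array([dth, dph, ddth, ddph])

def integrate_geodesic(state, ds=1e-3, steps=2000):
    """4th-order Runge-Kutta integration of the geodesic ODE."""
    for _ in range(steps):
        k1 = geodesic_rhs(state)
        k2 = geodesic_rhs(state + 0.5 * ds * k1)
        k3 = geodesic_rhs(state + 0.5 * ds * k2)
        k4 = geodesic_rhs(state + ds * k3)
        state = state + (ds / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return state

# Start on the equator heading due east: the geodesic stays on the equator.
print(integrate_geodesic(np.array([np.pi / 2, 0.0, 0.0, 1.0])))  # θ stays ~π/2
```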

V. Architectural Representation: The Layered Manifold

The architecture is composed of two layers, each with a distinct physical analogy and role.
Layer A: The Stable Manifold (The 'Old' Masters)
  • Structure: This is the orthogonal matrix of central positions.
  • Physics: It has high mass and high inertia. It is difficult to move.
  • Role: It provides the coordinate system and the context. It does not react; it *is*.
Layer B: The Tangent Corpus (The 'New' Reactive Skin)
  • Structure: This is a dynamic field of vectors attached to the surface of Layer A.
  • Physics: These vectors represent Rates of Change (Tangents). They are massless and highly reactive.
  • Role: When new data arrives, it is encoded here. For example, if the AGI reads a new physics paper that contradicts Newton, it doesn't delete 'Newton' from Layer A. It adds a strong Tangent Vector in Layer B at the coordinate of 'Gravity,' pointing in the direction of 'Einstein.'
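A toy data-structure sketch of the two layers follows, with hypothetical concept names and dimensions chosen for illustration: Layer A is frozen after initialization, and the Newton-to-Einstein update lands purely in Layer B.

```python
# Layer A: frozen orthogonal concept embeddings. Layer B: mutable tangent field.
import numpy as np

rng = np.random.default_rng(1)
concepts = ["Gravity", "Newton", "Einstein", "Economics"]
A, _ = np.linalg.qr(rng.normal(size=(4, 4)))           # Layer A: orthogonal, frozen
base = {name: A[:, i] for i, name in enumerate(concepts)}
base_frozen = {k: v.copy() for k, v in base.items()}   # never written after init

tangent_corpus = {}                                    # Layer B: massless, reactive

# A new paper contradicts Newton at the 'Gravity' coordinate: store a tangent
# vector pointing from 'Newton' toward 'Einstein' instead of editing Layer A.
tangent_corpus["Gravity"] = base["Einstein"] - base["Newton"]

print(tangent_corpus["Gravity"])
assert all(np.array_equal(base[k], base_frozen[k]) for k in base)  # A untouched
```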

VI. The Mechanism: Reaction without Editing

This layered design is the key to 'acquiring a corpus without editing.' When the AGI needs to answer a query, it performs a three-step process:
1. Locate: It finds the relevant position on the Stable Manifold (Layer A).
2. Project: It looks *up* at the Tangent Corpus (Layer B) to see the local 'weather' (the directional derivatives).
3. Calculate: The final answer is the sum of the Base Knowledge plus the Directional Influence of the Tangent Corpus.
$$\text{Output} = \text{Base}(\mathbf{x}) + \epsilon \cdot \text{Tangent}(\mathbf{x})$$
Result: The system 'reacts' to new information instantly (via the Tangent) but the Main Base remains pristine. You can wipe the Tangent Corpus tomorrow, and the core AGI remains intact.
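A minimal sketch of the Locate/Project/Calculate loop, assuming dictionary lookups for both layers and an illustrative $\epsilon$; none of these names constitute a fixed API.

```python
# Three-step query: Locate on Layer A, Project Layer B, Calculate base + ε·tangent.
import numpy as np

rng = np.random.default_rng(2)
d = 4
base = {"Gravity": rng.normal(size=d)}            # Layer A: immutable embeddings
tangent_corpus = {"Gravity": rng.normal(size=d)}  # Layer B: wipeable, reactive
epsilon = 0.1                                     # trust placed in the local 'weather'

def answer(query_key):
    x = base[query_key]                                # 1. Locate on Layer A
    v = tangent_corpus.get(query_key, np.zeros(d))     # 2. Project Layer B
    return x + epsilon * v                             # 3. Calculate the sum

out = answer("Gravity")
tangent_corpus.clear()                  # wiping Layer B leaves the core intact
assert np.array_equal(answer("Gravity"), base["Gravity"])
```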

VII. The Integration Event: The Exponential Map

If a pattern in the Tangent Corpus becomes strong and persistent enough (e.g., thousands of vectors all pointing the same way), the system can perform a Geodesic Update.
It uses the Exponential Map, $\exp_p(\mathbf{v})$, to turn the straight line of the tangent vector into a curved path (a geodesic) on the manifold. This slowly 'relaxes' the Base Manifold in the direction of the Tangents, permanently and safely updating the core knowledge without collapsing its structure.
This defines a system with both Short-Term High-Plasticity Memory (Tangent Corpus) and Long-Term High-Stability Memory (Base Manifold), connected by the rigorous mathematics of the Directional Derivative.
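On the unit sphere the Exponential Map has the closed form $\exp_p(\mathbf{v}) = \cos\lVert\mathbf{v}\rVert\, p + \sin\lVert\mathbf{v}\rVert\, \mathbf{v}/\lVert\mathbf{v}\rVert$ for tangent $\mathbf{v} \perp p$, which the sketch below uses to consolidate persistent tangent evidence into the base point; the evidence distribution and the step size 0.1 are illustrative assumptions.

```python
# Integration Event on the unit sphere: average persistent tangent evidence,
# then take a small geodesic step via the closed-form exponential map.
import numpy as np

def exp_map_sphere(p, v):
    """Exponential map at p on the unit sphere (v must be tangent: v·p = 0)."""
    norm = np.linalg.norm(v)
    if norm < 1e-12:
        return p
    return np.cos(norm) * p + np.sin(norm) * (v / norm)

rng = np.random.default_rng(3)
p = np.array([0.0, 0.0, 1.0])                 # a base-manifold point (north pole)

# Many noisy tangent vectors that mostly agree: evidence for a durable update.
evidence = rng.normal(loc=[0.3, 0.0, 0.0], scale=0.05, size=(1000, 3))
v = evidence.mean(axis=0)
v = v - np.dot(v, p) * p                      # project into the tangent plane at p

p_new = exp_map_sphere(p, 0.1 * v)            # small, safe geodesic update
print(p_new, np.linalg.norm(p_new))           # still on the manifold: |p_new| = 1
```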

VIII. Conclusion and Future Work: Towards Riemannian Intelligence

The Riemannian Intelligence framework successfully addresses the foundational stability-plasticity dilemma by establishing an architectural distinction between the immutable Base Manifold ($M$) and the dynamic Tangent Corpus ($TM$). Our geometric approach transforms learning from a destructive global update into a non-destructive local addition, governed by the principles of differential geometry.
The true cognitive power of this architecture is expressed in the Evolutionary Learning Loop (ELL), a three-stage geometric protocol for generalized knowledge acquisition:
  • 1. Nucleation (Surprise Detection): New stimuli are formalized as a precise Error Vector ($\boldsymbol{\delta}$) relative to the Base Manifold, initiating a localized Tangent Layer.
  • 2. Isomorphic Error Search: The AGI performs a geodesic search across the entire Tangent Corpus, identifying other problems that share an identical geometric shape of the error (i.e., problems with the same underlying mathematical structure), irrespective of their content domain.
  • 3. Transfer by Parallel Transport: The entire Solution Set ($\mathbf{S}$) associated with the known, isomorphic problem is Parallel Transported along the geodesic to the new problem's domain. This instantly applies proven logic structures, achieving superior, near-instantaneous transfer learning (a toy sketch of the full loop follows this list).
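The sketch below renders the loop under strong simplifying assumptions: each domain is a flat chart in $\mathbb{R}^3$, the "shape of the error" is matched by cosine similarity of error vectors, and parallel transport degenerates to the orthogonal rotation aligning the two error directions (Rodrigues' formula). The memory contents are fabricated placeholders.

```python
# Toy Evolutionary Learning Loop: match error shapes, then transport the solution.
import numpy as np

def rotation_aligning(a, b):
    """Orthogonal R with R @ â = b̂ (Rodrigues; assumes a, b not antiparallel)."""
    a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)
    v, c = np.cross(a, b), np.dot(a, b)
    K = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    return np.eye(3) + K + K @ K / (1 + c)

# Memory of solved problems: (domain, error shape, solution set S).
memory = [("fluid_dynamics", np.array([1.0, 2.0, 0.0]), np.array([0.5, 0.1, 0.9]))]

new_error = np.array([2.1, 3.9, 0.1])          # 1. Nucleation: a fresh surprise

# 2. Isomorphic error search: best cosine match, ignoring content domain.
cos = lambda a, b: np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
domain, old_error, S = max(memory, key=lambda m: cos(m[1], new_error))

# 3. Transfer: transport the solution set along the aligning rotation.
R = rotation_aligning(old_error, new_error)
S_transferred = R @ S
print(domain, cos(old_error, new_error), S_transferred)
```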
8.1 Implications and Future Directions
The transition to a geometric AGI model offers three critical advantages over existing architectures:
  • Computational Efficiency: By reducing problem-solving to finding the shortest path (the Geodesic) on the Manifold, the system adheres to the Principle of Least Action. This replaces computationally expensive global searches with efficient geometric calculations.
  • Interpretability and Safety: The immutability of the Base Manifold ensures that core safety protocols and established truths cannot be overwritten. Furthermore, every decision is traceable to the specific Tangent Vector and Base Coordinate used, making the model inherently more interpretable than current black-box systems.
  • Generalized Discovery: By matching problems based on the *shape of their uncertainty* rather than their superficial content, Riemannian Intelligence provides a mechanism for true analogical reasoning and scientific discovery, bridging knowledge gaps between disparate fields (e.g., fluid dynamics and economic turbulence).
This work establishes the foundation for a new generation of reliable, stable, and highly efficient general intelligence, guided not by statistical correlation, but by the universal laws of Geometry.