Selective Forgetting in Large Language Models: Why Memory Pruning Outperforms Memory Augmentation

Abstract: Current approaches to improving LLM performance focus on expanding context windows and augmenting memory. This paper argues the opposite: strategic forgetting — selectively pruning low-value information from context — produces better outcomes than unbounded memory accumulation. We propose a formal framework for selective forgetting and demonstrate its effectiveness in multi-session AI interactions.

Status: In preparation.
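The pruning idea in the abstract can be illustrated with a minimal sketch. This is a hypothetical scoring policy (relevance weighted by exponential age decay, then keep the top items within a budget), not the formal framework the paper proposes:

```python
from dataclasses import dataclass

@dataclass
class MemoryItem:
    text: str
    relevance: float  # hypothetical relevance score in [0, 1]
    age: int          # turns since the item was last useful

def prune(items, budget, decay=0.9):
    """Keep the `budget` highest-value items, where value decays
    exponentially with age — one simple illustrative policy."""
    scored = sorted(items, key=lambda m: m.relevance * decay ** m.age,
                    reverse=True)
    return scored[:budget]

memory = [
    MemoryItem("user prefers metric units", 0.9, 1),
    MemoryItem("greeting exchanged", 0.1, 5),
    MemoryItem("project deadline is Friday", 0.8, 2),
]
kept = prune(memory, budget=2)
print([m.text for m in kept])
# → ['user prefers metric units', 'project deadline is Friday']
```

Low-value items (here, the stale greeting) are dropped outright rather than retained in an ever-growing store, which is the contrast with memory augmentation that the abstract draws.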

March 2026 · SUN

The Inductivist's Dilemma: Why AI Hallucination is an Epistemological Problem

Abstract: AI hallucination is not an engineering bug — it is the epistemological ceiling of inductive reasoning applied to non-stationary environments. This paper traces the problem from Hume’s Problem of Induction through PAC learning bounds to the No Free Lunch theorem, proving that zero hallucination is mathematically impossible in open systems. The root cause is not insufficient data or compute, but the fundamental limitation of pattern-matching on historical observations to predict novel futures. ...
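For concreteness, the PAC bound the abstract invokes can be sketched as the standard finite-hypothesis, realizable-case sample-complexity bound; this is textbook material, not a formula taken from the paper itself:

```latex
% With probability at least 1 - \delta, any learner that outputs a
% hypothesis consistent with the training data achieves error at most
% \varepsilon, provided the number of i.i.d. samples m satisfies
m \;\ge\; \frac{1}{\varepsilon}\left(\ln\lvert\mathcal{H}\rvert + \ln\frac{1}{\delta}\right)
```

The guarantee rests entirely on the i.i.d. assumption; in a non-stationary, open environment that assumption fails, which is exactly the gap between bounded training error and unbounded real-world error that the abstract identifies.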

March 2026 · SUN