Selective Forgetting in Large Language Models: Why Memory Pruning Outperforms Memory Augmentation
Abstract

Current approaches to improving LLM performance focus on expanding context windows and augmenting memory. This paper argues the opposite: strategic forgetting, the selective pruning of low-value information from context, produces better outcomes than unbounded memory accumulation. We propose a formal framework for selective forgetting and demonstrate its effectiveness in multi-session AI interactions.

Status

In preparation.
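The selective pruning the abstract describes can be sketched as a scoring-and-truncation loop over stored context items. The paper's formal framework is not reproduced here; the `MemoryItem` fields, the relevance-times-decay value function, and the fixed item budget are all illustrative assumptions, not the authors' method.

```python
from dataclasses import dataclass

@dataclass
class MemoryItem:
    text: str
    relevance: float  # assumed relevance score in [0, 1]
    age: int          # sessions since the item was last referenced

def retention_score(item: MemoryItem, decay: float = 0.8) -> float:
    # Hypothetical value function: relevance discounted by age.
    return item.relevance * (decay ** item.age)

def prune(context: list[MemoryItem], budget: int) -> list[MemoryItem]:
    # Keep only the `budget` highest-value items; forget the rest.
    ranked = sorted(context, key=retention_score, reverse=True)
    return ranked[:budget]

context = [
    MemoryItem("user prefers metric units", relevance=0.9, age=1),
    MemoryItem("weather small talk", relevance=0.2, age=3),
    MemoryItem("ongoing project: API migration", relevance=0.8, age=0),
]
kept = prune(context, budget=2)
```

Under these assumed scores, the low-value small-talk item is forgotten while the two task-relevant items survive, which is the intended contrast with memory augmentation: the context shrinks rather than grows across sessions.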