Synthetic emotions and consciousness: exploring architectural boundaries
As artificial agents display increasingly sophisticated emotion-like behaviors, frameworks for assessing whether such systems risk instantiating consciousness remain limited. This contribution asks whether synthetic emotion-like control can be implemented while deliberately excluding architectural features that major theories associate with access-like consciousness. We propose architectural principles (A1-A8) for a hierarchical, dual-source implementation in which (i) immediate needs generate motivational signals and (ii) episodic memory provides affective guidance from similar past situations; the two sources converge to modulate action selection. To operationalize consciousness-related risk, we distill predictions from major theories into four engineering risk-reduction constraints: (R1) no content-general, workspace-like global broadcast, (R2) no metarepresentation, (R3) no autobiographical consolidation, and (R4) bounded learning. We address three questions: (Q1) Can emotion-like control satisfy R1-R4? We present a concrete architecture as an existence proof. (Q2) Can the architecture be extended without introducing access-enabling features? We identify stable modifications that preserve compliance. (Q3) Can we trace graded paths that plausibly increase access risk? We map gradual transitions that progressively violate the constraints. Our contribution operates at three levels: on the engineering side, we present a modular, biologically motivated control architecture; on the theoretical side, we propose a control model of emotions and a methodological template for converting consciousness-related questions into auditable architectural tests; on the safety side, we sketch preliminary audit indicators that may inform future governance frameworks. The architecture functions independently as an emotion-like controller, while the risk-reduction criteria may extend to other AI systems.
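The abstract gives no code; purely as an illustration of the dual-source scheme it describes, the following minimal sketch shows immediate needs and episodic recall each producing an affective signal that converge to modulate action selection. All class, method, and parameter names here are hypothetical, not from the paper; the bounded episodic buffer and the absence of any workspace-wide broadcast loosely mirror constraints R4 and R1.

```python
from dataclasses import dataclass

@dataclass
class Appraisal:
    """Affective signal: valence in [-1, 1] and arousal in [0, 1]."""
    valence: float
    arousal: float

class DualSourceController:
    """Hypothetical sketch: (i) immediate needs and (ii) episodic memory
    each yield an appraisal; the two converge to bias action selection.
    Bounded learning (R4): the episodic store is a fixed-size buffer.
    No autobiographical consolidation (R3): entries stay as raw
    (situation, appraisal) pairs and are never summarized."""

    def __init__(self, capacity=64):
        self.capacity = capacity
        self.episodes = []  # bounded store of (situation, appraisal) pairs

    def need_signal(self, needs):
        """Motivational appraisal from immediate needs, given as
        (current_level, target_level) pairs; larger deficits push
        valence negative and arousal up."""
        deficit = sum(max(0.0, target - level) for level, target in needs)
        return Appraisal(valence=-min(1.0, deficit), arousal=min(1.0, deficit))

    def memory_signal(self, situation):
        """Affective guidance from the most similar stored episode, if any."""
        if not self.episodes:
            return Appraisal(0.0, 0.0)
        best = min(self.episodes, key=lambda e: abs(e[0] - situation))
        return best[1]

    def store(self, situation, appraisal):
        """Append an episode; drop the oldest beyond capacity (R4)."""
        self.episodes.append((situation, appraisal))
        if len(self.episodes) > self.capacity:
            self.episodes.pop(0)

    def select_action(self, situation, needs, actions):
        """Converge both appraisals into one modulation of action values.
        No global broadcast (R1): the combined signal reaches only this
        selector, not an architecture-wide workspace.
        `actions` maps name -> (base_value, is_corrective)."""
        valence = 0.5 * (self.need_signal(needs).valence
                         + self.memory_signal(situation).valence)
        # Negative valence biases toward corrective actions,
        # positive valence toward exploratory ones.
        scored = {name: base + (0.5 if corrective else -0.5) * -valence
                  for name, (base, corrective) in actions.items()}
        return max(scored, key=scored.get)
```

For example, an agent with a large energy deficit and no relevant episodes would select a corrective action such as a hypothetical "forage" over a slightly higher-valued exploratory one, because the need-driven negative valence outweighs the small base-value difference.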