Learning the greatest common divisor: explaining transformer predictions

29 August 2023
François Charton
Abstract

The predictions of small transformers, trained to calculate the greatest common divisor (GCD) of two positive integers, can be fully characterized by looking at model inputs and outputs. As training proceeds, the model learns a list $\mathcal{D}$ of integers, products of divisors of the base used to represent integers and small primes, and predicts the largest element of $\mathcal{D}$ that divides both inputs. Training distributions impact performance. Models trained from uniform operands only learn a handful of GCDs (up to 38 GCDs ≤ 100). Log-uniform operands boost performance to 73 GCDs ≤ 100, and a log-uniform distribution of outcomes (i.e. GCDs) to 91. However, training from uniform (balanced) GCDs breaks explainability.
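To make the prediction rule concrete, here is a minimal Python sketch of it, assuming a hypothetical learned set $\mathcal{D}$ built from the divisors of base 10 and the small primes 3 and 7; the actual set a trained model converges to depends on the base and the training distribution, and is not specified in the abstract. A log-uniform operand sampler, of the kind the abstract credits with boosting performance, is included for illustration.

```python
import math
import random

# Hypothetical learned set: divisors of the base (10) and small primes.
# The real set a trained model ends up with varies with the base and
# the training distribution; these values are illustrative only.
BASE_DIVISORS = [1, 2, 5, 10]
SMALL_PRIMES = [3, 7]

def candidate_set(base_divisors, small_primes, limit=100):
    """Products of base divisors and small primes, capped at `limit`."""
    d = set(base_divisors)
    for p in small_primes:
        d |= {x * p for x in d if x * p <= limit}
    return sorted(d)

def model_prediction(a, b, D):
    """The rule from the abstract: largest element of D dividing both inputs."""
    return max(d for d in D if a % d == 0 and b % d == 0)

def log_uniform(lo, hi):
    """Sample an integer log-uniformly from [lo, hi] (assumed range)."""
    return int(math.exp(random.uniform(math.log(lo), math.log(hi))))

if __name__ == "__main__":
    D = candidate_set(BASE_DIVISORS, SMALL_PRIMES)
    a, b = log_uniform(1, 10**6), log_uniform(1, 10**6)
    pred = model_prediction(a, b, D)
    print(f"gcd({a}, {b}) = {math.gcd(a, b)}; rule-based prediction: {pred}")
```

The prediction equals the true GCD exactly when gcd(a, b) lies in $\mathcal{D}$, which is why enlarging the learned set (as log-uniform training distributions do, per the abstract) directly increases the number of correctly predicted GCDs.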
