
ValNorm Quantifies Semantics to Reveal Consistent Valence Biases Across Languages and Over Centuries

6 June 2020
Autumn Toney, Aylin Caliskan

Papers citing "ValNorm Quantifies Semantics to Reveal Consistent Valence Biases Across Languages and Over Centuries"

3 / 3 papers shown

  • Contrastive Visual Semantic Pretraining Magnifies the Semantics of Natural Language Representations (VLM). Robert Wolfe, Aylin Caliskan. 14 Mar 2022.
  • Low Frequency Names Exhibit Bias and Overfitting in Contextualizing Language Models. Robert Wolfe, Aylin Caliskan. 01 Oct 2021.
  • From Frequency to Meaning: Vector Space Models of Semantics. Peter D. Turney, Patrick Pantel. 04 Mar 2010.