ResearchTrend.AI

Using Focal Loss to Fight Shallow Heuristics: An Empirical Analysis of Modulated Cross-Entropy in Natural Language Inference (arXiv:2211.13331)

23 November 2022
Frano Rajic, Ivan Stresec, Axel Marmet, Tim Postuvan
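The paper's title describes focal loss as a "modulated" cross-entropy: the standard log loss is scaled by a factor (1 - p_t)^gamma that down-weights examples the model already classifies confidently. A minimal per-example sketch (illustrative only, not taken from this page; the function name and the gamma default are assumptions):

```python
import math

def focal_loss(probs, target, gamma=2.0):
    """Focal loss for one example: -(1 - p_t)**gamma * log(p_t),
    where p_t is the probability assigned to the true class.
    With gamma = 0 this reduces to ordinary cross-entropy."""
    p_t = probs[target]
    return -((1.0 - p_t) ** gamma) * math.log(p_t)
```

For a confidently correct prediction (p_t close to 1) the modulating factor shrinks the loss toward zero, so training gradients concentrate on hard examples.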

Papers citing "Using Focal Loss to Fight Shallow Heuristics: An Empirical Analysis of Modulated Cross-Entropy in Natural Language Inference"

Showing 2 of 2 citing papers.
Title: Diversify and Disambiguate: Learning From Underspecified Data
Authors: Yoonho Lee, Huaxiu Yao, Chelsea Finn
Metrics: 203 / 64 / 0
Published: 07 Feb 2022

Title: Scaling Laws for Neural Language Models
Authors: Jared Kaplan, Sam McCandlish, T. Henighan, Tom B. Brown, B. Chess, R. Child, Scott Gray, Alec Radford, Jeff Wu, Dario Amodei
Metrics: 226 / 4,453 / 0
Published: 23 Jan 2020