A Mathematical Abstraction for Balancing the Trade-off Between Creativity and Reality in Large Language Models

4 June 2023
Ritwik Sinha, Zhao-quan Song, Tianyi Zhou

Papers citing "A Mathematical Abstraction for Balancing the Trade-off Between Creativity and Reality in Large Language Models"

7 / 7 papers shown
1. Fast Heavy Inner Product Identification Between Weights and Inputs in Neural Network Training
   Lianke Qin, Saayan Mitra, Zhao-quan Song, Yuanyuan Yang, Tianyi Zhou (19 Nov 2023)

2. The Expressibility of Polynomial based Attention Scheme
   Zhao-quan Song, Guangyi Xu, Junze Yin (30 Oct 2023)

3. Efficient SGD Neural Network Training via Sublinear Activated Neuron Identification
   Lianke Qin, Zhao-quan Song, Yuanyuan Yang (13 Jul 2023)

4. Efficient Alternating Minimization with Applications to Weighted Low Rank Approximation
   Zhao-quan Song, Mingquan Ye, Junze Yin, Licheng Zhang (07 Jun 2023)

5. An Iterative Algorithm for Rescaled Hyperbolic Functions Regression
   Yeqi Gao, Zhao-quan Song, Junze Yin (01 May 2023)

6. Probing Classifiers: Promises, Shortcomings, and Advances
   Yonatan Belinkov (24 Feb 2021)

7. On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
   N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang (15 Sep 2016)