Principal Components Bias in Over-parameterized Linear Models, and its Manifestation in Deep Neural Networks

Guy Hacohen, D. Weinshall
12 May 2021

Papers citing "Principal Components Bias in Over-parameterized Linear Models, and its Manifestation in Deep Neural Networks"

3 papers shown:
"Many Perception Tasks are Highly Redundant Functions of their Input Data"
Rahul Ramesh, Anthony Bisulco, Ronald W. DiTullio, Linran Wei, Vijay Balasubramanian, Kostas Daniilidis, Pratik Chaudhari
18 Jul 2024
"What do CNNs Learn in the First Layer and Why? A Linear Systems Perspective"
Rhea Chowers, Yair Weiss
06 Jun 2022
"Active Learning on a Budget: Opposite Strategies Suit High and Low Budgets"
Guy Hacohen, Avihu Dekel, D. Weinshall
06 Feb 2022