Modular Universal Reparameterization: Deep Multi-task Learning Across Diverse Domains

31 May 2019
Elliot Meyerson, Risto Miikkulainen
OOD

Papers citing "Modular Universal Reparameterization: Deep Multi-task Learning Across Diverse Domains"

10 / 10 papers shown
  1. Leveraging Hypernetworks and Learnable Kernels for Consumer Energy Forecasting Across Diverse Consumer Types
     Muhammad Umair Danish, Katarina Grolinger · AI4TS · 07 Feb 2025
  2. Principled Weight Initialization for Hypernetworks
     Oscar Chang, Lampros Flokas, Hod Lipson · 13 Dec 2023
  3. How to Reuse and Compose Knowledge for a Lifetime of Tasks: A Survey on Continual Learning and Functional Composition
     Jorge Armando Mendez Mendez, Eric Eaton · KELM, CLL · 15 Jul 2022
  4. Learning the Effect of Registration Hyperparameters with HyperMorph
     Andrew Hoopes, Malte Hoffmann, Douglas N. Greve, Bruce Fischl, John Guttag, Adrian V. Dalca · 30 Mar 2022
  5. Continual Learning for Multivariate Time Series Tasks with Variable Input Dimensions
     Vibhor Gupta, Jyoti Narwariya, Pankaj Malhotra, L. Vig, Gautam M. Shroff · AI4TS · 14 Mar 2022
  6. Simple Genetic Operators are Universal Approximators of Probability Distributions (and other Advantages of Expressive Encodings)
     Elliot Meyerson, Xin Qiu, Risto Miikkulainen · 19 Feb 2022
  7. HyperDynamics: Meta-Learning Object and Agent Dynamics with Hypernetworks
     Zhou Xian, Shamit Lal, H. Tung, Emmanouil Antonios Platanios, Katerina Fragkiadaki · AI4CE · 17 Mar 2021
  8. The Traveling Observer Model: Multi-task Learning Through Spatial Variable Embeddings
     Elliot Meyerson, Risto Miikkulainen · 05 Oct 2020
  9. Meta-Learning in Neural Networks: A Survey
     Timothy M. Hospedales, Antreas Antoniou, P. Micaelli, Amos Storkey · OOD · 11 Apr 2020
  10. Learning Task Grouping and Overlap in Multi-task Learning
      Abhishek Kumar, Hal Daumé · 27 Jun 2012