GNNear: Accelerating Full-Batch Training of Graph Neural Networks with Near-Memory Processing

International Conference on Parallel Architectures and Compilation Techniques (PACT), 2021
1 November 2021
Zhe Zhou, Cong Li, Xuechao Wei, Xiaoyang Wang, Guangyu Sun
Topic: GNN
Links: arXiv (abs) · PDF · HTML

Papers citing "GNNear: Accelerating Full-Batch Training of Graph Neural Networks with Near-Memory Processing"

Showing 2 of 2 citing papers.
A Tensor Compiler for Processing-In-Memory Architectures
Peiming Yang, Sankeerth Durvasula, Ivan Fernandez, Mohammad Sadrosadati, O. Mutlu, Gennady Pekhimenko, Christina Giannoula
19 Nov 2025
YOLO9000: Better, Faster, Stronger
Computer Vision and Pattern Recognition (CVPR), 2017
Joseph Redmon, Ali Farhadi
Topics: VLM, ObjD
25 Dec 2016