ResearchTrend.AI

LogicNets: Co-Designed Neural Networks and Circuits for Extreme-Throughput Applications
International Conference on Field-Programmable Logic and Applications (FPL), 2020
arXiv:2004.03021, 6 April 2020
Yaman Umuroglu, Yash Akhauri, Nicholas J. Fraser, Michaela Blott

Papers citing "LogicNets: Co-Designed Neural Networks and Circuits for Extreme-Throughput Applications"

37 papers shown
hls4ml: A Flexible, Open-Source Platform for Deep Learning Acceleration on Reconfigurable Hardware
Jan-Frederik Schulte, Benjamin Ramhorst, Chang Sun, Jovan Mitrevski, Nicolò Ghielmetti, ..., C. Herwig, Ho Fung Tsoi, D. Rankin, Shih-Chieh Hsu, Scott Hauck
01 Dec 2025
FPGA-Based Real-Time Waveform Classification
Alperen Aksoy, Ilja Bekman, Chimezie Eguzo, Christian Grewing, Andre Zambanini
07 Nov 2025
LL-ViT: Edge Deployable Vision Transformers with Look Up Table Neurons
Shashank Nag, Alan T. L. Bacellar, Zachary Susskind, Anshul Jha, Logan Liberty, ..., Krishnan Kailas, P. Lima, Neeraja J. Yadwadkar, F. M. G. França, L. John
02 Nov 2025
TeLLMe v2: An Efficient End-to-End Ternary LLM Prefill and Decode Accelerator with Table-Lookup Matmul on Edge FPGAs
Ye Qiao, Z. Chen, Yifan Zhang, Yian Wang, Sitao Huang
03 Oct 2025
Light Differentiable Logic Gate Networks
Lukas Rüttgers, Till Aczél, Andreas Plesner, Roger Wattenhofer
26 Sep 2025
Learning Interpretable Differentiable Logic Networks for Time-Series Classification
C. Yue, N. Jha
24 Aug 2025
Optimizing Neural Networks with Learnable Non-Linear Activation Functions via Lookup-Based FPGA Acceleration
Mengyuan Yin, Benjamin Chen Ming Choong, Chuping Qu, Rick Siow Mong Goh, Weng-Fai Wong, Tao Luo
23 Aug 2025
NeuraLUT-Assemble: Hardware-aware Assembling of Sub-Neural Networks for Efficient LUT Inference
IEEE Symposium on Field-Programmable Custom Computing Machines (FCCM), 2025
Marta Andronic, George A. Constantinides
01 Apr 2025
Architect of the Bits World: Masked Autoregressive Modeling for Circuit Generation Guided by Truth Table
Haoyuan Wu, Haisheng Zheng, Shoubo Hu, Zhuolun He, Bei Yu
18 Feb 2025
Runtime Tunable Tsetlin Machines for Edge Inference on eFPGAs
Sensors Applications Symposium (SAS), 2025
Tousif Rahman, Gang Mao, Bob Pattison, Sidharth Maheshwari, Marcos Sartori, A. Wheeldon, Rishad Shafik, Alex Yakovlev
10 Feb 2025
PolyLUT: Ultra-low Latency Polynomial Inference with Hardware-Aware Structured Pruning
IEEE Transactions on Computers (IEEE Trans. Comput.), 2025
Marta Andronic, Jiawen Li, George A. Constantinides
14 Jan 2025
TreeLUT: An Efficient Alternative to Deep Neural Networks for Inference Acceleration Using Gradient Boosted Decision Trees
Symposium on Field Programmable Gate Arrays (FPGA), 2025
Alireza Khataei, Kia Bazargan
02 Jan 2025
Shrinking the Giant: Quasi-Weightless Transformers for Low Energy Inference
Shashank Nag, Alan T. L. Bacellar, Zachary Susskind, Anshul Jha, Logan Liberty, ..., Krishnan Kailas, P. Lima, Neeraja J. Yadwadkar, F. M. G. França, L. John
04 Nov 2024
LUTMUL: Exceed Conventional FPGA Roofline Limit by LUT-based Efficient Multiplication for Neural Network Inference
Asia and South Pacific Design Automation Conference (ASP-DAC), 2024
Yanyue Xie, Zhengang Li, Dana Diaconu, Suranga Handagala, M. Leeser, Xue Lin
01 Nov 2024
Differentiable Weightless Neural Networks
International Conference on Machine Learning (ICML), 2024
Alan T. L. Bacellar, Zachary Susskind, Mauricio Breternitz Jr., E. John, L. John, P. Lima, F. M. G. França
14 Oct 2024
PolyLUT-Add: FPGA-based LUT Inference with Wide Inputs
International Conference on Field-Programmable Logic and Applications (FPL), 2024
Binglei Lou, Richard Rademacher, David Boland, Philip H. W. Leong
07 Jun 2024
Reconfigurable Edge Hardware for Intelligent IDS: Systematic Approach
Wadid Foudhaili, Anouar Nechi, Celine Thermann, Mohammad Al Johmani, R. Buchty, Mladen Berekovic, Saleh Mulhem
13 Apr 2024
Architectural Implications of Neural Network Inference for High Data-Rate, Low-Latency Scientific Applications
Olivia Weng, Alexander Redding, Nhan Tran, Javier Mauricio Duarte, Ryan Kastner
13 Mar 2024
NeuraLUT: Hiding Neural Network Density in Boolean Synthesizable Functions
Marta Andronic, George A. Constantinides
29 Feb 2024
Quantization-aware Neural Architectural Search for Intrusion Detection
R. Acharya, Laurens Le Jeune, N. Mentens, F. Ganji, Domenic Forte
07 Nov 2023
Logic Design of Neural Networks for High-Throughput and Low-Power Applications
Asia and South Pacific Design Automation Conference (ASP-DAC), 2023
Kangwei Xu, Grace Li Zhang, Ulf Schlichtmann, Bing Li
19 Sep 2023
PolyLUT: Learning Piecewise Polynomials for Ultra-Low Latency FPGA LUT-based Inference
International Conference on Field-Programmable Technology (ICFPT), 2023
Marta Andronic, George A. Constantinides
05 Sep 2023
Mitigating Memory Wall Effects in CNN Engines with On-the-Fly Weights Generation
Stylianos I. Venieris, Javier Fernandez-Marques, Nicholas D. Lane
25 Jul 2023
MetaML: Automating Customizable Cross-Stage Design-Flow for Deep Learning Acceleration
International Conference on Field-Programmable Logic and Applications (FPL), 2023
Zhiqiang Que, Shuo Liu, Markus Rognlien, Ce Guo, Jose G. F. Coutinho, Wayne Luk
14 Jun 2023
DietCNN: Multiplication-free Inference for Quantized CNNs
IEEE International Joint Conference on Neural Networks (IJCNN), 2023
Swarnava Dey, P. Dasgupta, P. Chakrabarti
09 May 2023
LUT-NN: Empower Efficient Neural Network Inference with Centroid Learning and Table Lookup
ACM/IEEE International Conference on Mobile Computing and Networking (MobiCom), 2023
Xiaohu Tang, Yang Wang, Ting Cao, Li Zhang, Qi Chen, Deng Cai, Yunxin Liu, Mao Yang
07 Feb 2023
Reaching the Edge of the Edge: Image Analysis in Space
R. Bayer, Julian Priest, Pınar Tözün
12 Jan 2023
Efficient Compilation and Mapping of Fixed Function Combinational Logic onto Digital Signal Processors Targeting Neural Network Inference and Utilizing High-level Synthesis
ACM Transactions on Reconfigurable Technology and Systems (TRETS), 2022
Soheil Nazar Shahsavani, A. Fayyazi, M. Nazemi, Massoud Pedram
30 Jul 2022
Logic Shrinkage: Learned FPGA Netlist Sparsity for Efficient Neural Network Inference
Symposium on Field Programmable Gate Arrays (FPGA), 2021
Erwei Wang, James J. Davis, G. Stavrou, P. Cheung, George A. Constantinides, Mohamed S. Abdelfattah
04 Dec 2021
How to Reach Real-Time AI on Consumer Devices? Solutions for Programmable and Custom Architectures
IEEE International Conference on Application-Specific Systems, Architectures, and Processors (ASAP), 2021
Stylianos I. Venieris, Ioannis Panopoulos, Ilias Leontiadis, I. Venieris
21 Jun 2021
RHNAS: Realizable Hardware and Neural Architecture Search
Yash Akhauri, Adithya Niranjan, J. P. Muñoz, Suvadeep Banerjee, A. Davare, P. Cocchini, A. Sorokin, R. Iyer, Nilesh Jain
17 Jun 2021
NullaNet Tiny: Ultra-low-latency DNN Inference Through Fixed-function Combinational Logic
IEEE Symposium on Field-Programmable Custom Computing Machines (FCCM), 2021
M. Nazemi, A. Fayyazi, Amirhossein Esmaili, Atharva Khare, Soheil Nazar Shahsavani, Massoud Pedram
07 Apr 2021
unzipFPGA: Enhancing FPGA-based CNN Engines with On-the-Fly Weights Generation
IEEE Symposium on Field-Programmable Custom Computing Machines (FCCM), 2021
Stylianos I. Venieris, Javier Fernandez-Marques, Nicholas D. Lane
09 Mar 2021
Enabling Binary Neural Network Training on the Edge
Erwei Wang, James J. Davis, Daniele Moro, Piotr Zielinski, Jia Jie Lim, C. Coelho, S. Chatterjee, P. Cheung, George A. Constantinides
08 Feb 2021
Logic Synthesis Meets Machine Learning: Trading Exactness for Generalization
Design, Automation and Test in Europe (DATE), 2020
Shubham Rai, Walter Lau Neto, Yukio Miyasaka, Xinpei Zhang, Mingfei Yu, ..., Zhiru Zhang, V. Tenace, P. Gaillardon, A. Mishchenko, S. Chatterjee
04 Dec 2020
Automatic Heterogeneous Quantization of Deep Neural Networks for Low-Latency Inference on the Edge for Particle Detectors
C. Coelho, Aki Kuusela, Shane Li, Zhuang Hao, T. Aarrestad, Vladimir Loncar, J. Ngadiuba, M. Pierini, Adrian Alan Pol, S. Summers
15 Jun 2020
Exposing Hardware Building Blocks to Machine Learning Frameworks
Yash Akhauri
10 Apr 2020
Page 1 of 1