Model Quantization is a technique used to reduce the size and computational requirements of machine learning models by representing weights and activations with lower precision. This is particularly useful for deploying models on resource-constrained devices, such as mobile phones and embedded systems.
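As a concrete illustration, below is a minimal sketch of symmetric post-training quantization of a weight tensor to 8-bit integers, using only NumPy. The function names (`quantize_int8`, `dequantize_int8`) and the per-tensor scaling scheme are illustrative assumptions, not a specific library's API; real frameworks typically offer more elaborate per-channel and activation quantization.

```python
import numpy as np

def quantize_int8(weights: np.ndarray) -> tuple[np.ndarray, float]:
    """Symmetric per-tensor quantization of float32 weights to int8.

    Returns the int8 tensor and the scale factor needed to recover
    approximate float values (dequantization).
    """
    # Map the largest absolute weight onto the int8 range [-127, 127].
    # The small epsilon avoids division by zero for an all-zero tensor.
    scale = max(np.max(np.abs(weights)), 1e-12) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 weights from the int8 tensor."""
    return q.astype(np.float32) * scale

# Example: quantize a small random weight matrix and measure the error.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize_int8(q, scale)
print("max abs error:", np.max(np.abs(w - w_hat)))
```

Storing `q` instead of `w` cuts the memory footprint roughly 4x (int8 vs. float32), at the cost of the small reconstruction error printed above.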