Is aspect ratio of cells important in deep learning? A robust comparison
of deep learning methods for multi-scale cytopathology cell image
classification: from convolutional neural networks to visual transformers
Cervical cancer is a common and deadly cancer in women, and cytopathology images are often used to screen for it. Because manual screening is prone to a large number of errors, computer-aided diagnosis systems based on deep learning have been developed. Deep learning methods require a fixed input image size, but the sizes of clinical medical images are inconsistent, and resizing an image directly distorts its aspect ratio. Clinically, the aspect ratios of cells in cytopathological images provide important information for doctors to diagnose cancer, so direct resizing seems illogical. Nevertheless, many existing studies resize images directly and still obtain very robust classification results. To find a reasonable interpretation, we conduct a series of comparative experiments. First, the raw data of the SIPaKMeD dataset are preprocessed to obtain a standard and a scaled dataset. Then, both datasets are resized to 224 x 224 pixels. Finally, twenty-two deep learning models are used to classify the standard and scaled datasets. We conclude that deep learning models are robust to changes in the aspect ratio of cells in cervical cytopathological images. This conclusion is also validated on the Herlev dataset.
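The core contrast the abstract describes can be sketched in a few lines: resizing a cell crop directly to the fixed 224 x 224 input distorts its aspect ratio, whereas an aspect-preserving resize with padding keeps the cell's shape. The sketch below is illustrative only, assuming a grayscale image as a nested list and nearest-neighbour sampling; it is not the preprocessing pipeline used in the paper.

```python
def resize_direct(img, target=224):
    """Resize to target x target by nearest-neighbour sampling.

    Stretches the image, distorting the aspect ratio of any cells inside.
    """
    h, w = len(img), len(img[0])
    return [[img[r * h // target][c * w // target]
             for c in range(target)] for r in range(target)]


def resize_padded(img, target=224, fill=0):
    """Resize preserving aspect ratio, then centre-pad to target x target."""
    h, w = len(img), len(img[0])
    scale = target / max(h, w)
    new_h, new_w = max(1, round(h * scale)), max(1, round(w * scale))
    scaled = [[img[r * h // new_h][c * w // new_w]
               for c in range(new_w)] for r in range(new_h)]
    top, left = (target - new_h) // 2, (target - new_w) // 2
    canvas = [[fill] * target for _ in range(target)]
    for r in range(new_h):
        for c in range(new_w):
            canvas[top + r][left + c] = scaled[r][c]
    return canvas


# A hypothetical 100 x 300 cell crop (all-ones "foreground"):
# direct resizing stretches it vertically; padded resizing keeps its shape.
crop = [[1] * 300 for _ in range(100)]
direct = resize_direct(crop)   # 224 x 224, aspect ratio distorted
padded = resize_padded(crop)   # 224 x 224, cell shape preserved by padding
print(len(direct), len(direct[0]))  # 224 224
print(len(padded), len(padded[0]))  # 224 224
```

The paper's finding is that, despite the distortion, models trained on directly resized inputs classify about as well as those trained on shape-preserved ones.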