State-of-the-art semantic segmentation models are typically optimized in a data-driven fashion, minimizing solely per-pixel or per-segment classification objectives on their training data. This purely data-driven paradigm often leads to absurd segmentations, especially when the domain of the input images is shifted from the one encountered during training. For instance, state-of-the-art models may assign the label ``road'' to a segment located above another segment labeled ``sky'', although our knowledge of the physical world dictates that such a configuration is not feasible for images captured by forward-facing upright cameras. Our method, Physically Feasible Semantic Segmentation (PhyFea), first extracts explicit constraints that govern spatial class relations from the semantic segmentation training set at hand in an offline, data-driven fashion, and then enforces a morphological yet differentiable loss that penalizes violations of these constraints during training in order to promote prediction feasibility. PhyFea is a plug-and-play method: it yields consistent and significant performance improvements across the diverse state-of-the-art networks on which we implement it, on the ADE20K, Cityscapes, and ACDC datasets. Code and models will be made publicly available.
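To make the idea of a differentiable penalty on infeasible spatial class relations concrete, below is a minimal PyTorch sketch for a single vertical-order constraint of the kind described above (``road'' must not appear above ``sky''). This is an illustrative approximation, not PhyFea's actual morphological operation; the function name, the cummax-based relaxation, and the weighting are assumptions introduced here for illustration.

```python
import torch
import torch.nn.functional as F

def vertical_order_penalty(logits: torch.Tensor, upper_cls: int, lower_cls: int) -> torch.Tensor:
    """Hypothetical differentiable penalty for one spatial constraint:
    pixels of `upper_cls` (e.g. road) must not lie above pixels of
    `lower_cls` (e.g. sky) in the same image column.

    logits: (B, C, H, W) raw network outputs; returns a scalar that is
    (near) zero when the constraint is satisfied.
    """
    probs = logits.softmax(dim=1)
    p_upper = probs[:, upper_cls]          # (B, H, W), e.g. road probability
    p_lower = probs[:, lower_cls]          # (B, H, W), e.g. sky probability

    # For each pixel, the strongest `lower_cls` evidence in the rows at or
    # below it: flip vertically, take a running max, flip back. cummax is
    # piecewise differentiable, much like max-pooling.
    running = torch.flip(
        torch.cummax(torch.flip(p_lower, dims=[1]), dim=1).values, dims=[1]
    )
    # Shift up one row so each pixel is compared with rows strictly below it
    # (the bottom row gets zero: nothing below it can be violated).
    below = F.pad(running[:, 1:], (0, 0, 0, 1))

    # Violation mass: `upper_cls` probability at pixels that sit above strong
    # `lower_cls` evidence further down the image.
    return (p_upper * below).mean()
```

Such a term would be added to the usual cross-entropy objective, e.g. `loss = ce + lam * vertical_order_penalty(logits, road_idx, sky_idx)`, with one term per constraint mined offline from the training set; `lam`, `road_idx`, and `sky_idx` are placeholders for illustration.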