Segmenting Object Affordances:
Reproducibility and Sensitivity to Scale

T. Apicella1,2, A. Xompero2, P. Gastaldo1, A. Cavallaro3,4

1University of Genoa, Italy; 2Queen Mary University of London, United Kingdom;
3Idiap Research Institute, Switzerland; 4École Polytechnique Fédérale de Lausanne, Switzerland

Visual affordance segmentation identifies the image regions of an object that an agent can interact with. Existing methods re-use and adapt learning-based architectures for semantic segmentation to the affordance segmentation task, and evaluate them on small-size datasets. However, experimental setups are often not reproducible, leading to unfair and inconsistent comparisons. In this work, we benchmark these methods under a reproducible setup on two single-object scenarios, objects on a tabletop without occlusions and hand-held containers, to facilitate future comparisons. We include a version of a recent architecture, Mask2Former, re-trained for affordance segmentation, and show that this model is the best-performing one on most testing sets of both scenarios. Our analysis shows that models are not robust to scale variations when object resolutions differ from those in the training set.
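
To make the scale-sensitivity claim concrete, the sketch below shows one way such a probe can be set up: resize the input image, segment it, up-sample the prediction back to the original resolution, and score it against the ground-truth affordance mask. This is an illustrative PyTorch example, not the paper's evaluation code; the model is assumed to return per-pixel affordance logits, and the metric is a simple mean intersection-over-union.

        import torch
        import torch.nn.functional as F

        def evaluate_at_scale(model, image, target_mask, scale):
            """Resize `image` by `scale`, segment it, and score the prediction
            against the full-resolution ground-truth affordance mask.

            Assumes `image` is a (1, 3, H, W) float tensor, `target_mask` is a
            (1, H, W) tensor of class indices, and `model` returns per-pixel
            class logits of shape (1, C, h, w). Illustrative only.
            """
            _, _, h, w = image.shape
            resized = F.interpolate(image, scale_factor=scale,
                                    mode="bilinear", align_corners=False)
            with torch.no_grad():
                logits = model(resized)
            # Bring the prediction back to the original resolution before scoring.
            logits = F.interpolate(logits, size=(h, w),
                                   mode="bilinear", align_corners=False)
            pred = logits.argmax(dim=1)
            # Mean per-class intersection-over-union, skipping absent classes.
            ious = []
            for c in range(logits.shape[1]):
                inter = ((pred == c) & (target_mask == c)).sum().float()
                union = ((pred == c) | (target_mask == c)).sum().float()
                if union > 0:
                    ious.append((inter / union).item())
            return sum(ious) / len(ious) if ious else float("nan")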


Available models

Models trained on the hand-occluded object setting using CHOC-AFF (see the inference sketch below):

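The snippet below is a minimal, hypothetical inference sketch for one of these checkpoints. The checkpoint name, input resolution, and normalisation statistics are placeholders to be adapted to the files actually released with this repository; the model is assumed to be stored as a full PyTorch module that returns per-pixel affordance logits.

        import torch
        from PIL import Image
        from torchvision import transforms

        # Placeholder checkpoint name: adapt to the released model files.
        CHECKPOINT = "mask2former_choc_aff.pth"
        device = "cuda" if torch.cuda.is_available() else "cpu"

        # Assumes the checkpoint stores the full module (not only a state_dict).
        model = torch.load(CHECKPOINT, map_location=device)
        model.eval()

        # Standard ImageNet normalisation; the actual pre-processing may differ.
        preprocess = transforms.Compose([
            transforms.Resize((480, 640)),
            transforms.ToTensor(),
            transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225]),
        ])

        image = preprocess(Image.open("hand_held_cup.png").convert("RGB"))
        with torch.no_grad():
            logits = model(image.unsqueeze(0).to(device))  # (1, C, H, W) logits
        affordance_mask = logits.argmax(dim=1).squeeze(0)  # per-pixel labels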

Reference

If you use the code or the models, please cite the following reference.

Plain text format
 
        T. Apicella, A. Xompero, P. Gastaldo, A. Cavallaro, Segmenting Object Affordances: Reproducibility and Sensitivity to Scale, 
        Proceedings of the European Conference on Computer Vision Workshops, Twelfth International Workshop on Assistive Computer Vision and Robotics (ACVR),
        Milan, Italy, 29 September 2024.
        

Bibtex format
 
        @InProceedings{Apicella2024ACVR_ECCVW,
            title     = {Segmenting Object Affordances: Reproducibility and Sensitivity to Scale},
            author    = {Apicella, T. and Xompero, A. and Gastaldo, P. and Cavallaro, A.},
            booktitle = {Proceedings of the European Conference on Computer Vision Workshops},
            note      = {Twelfth International Workshop on Assistive Computer Vision and Robotics},
            address   = {Milan, Italy},
            month     = "29" # sep,
            year      = {2024},
        }
        

Contact

If you have any further enquiries, questions, or comments, please contact t.apicella@qmul.ac.uk.