Training structured learning models is time-consuming, and current approaches are limited to a single machine; thus, the advantages of greater computing power and larger data sets have not been exploited for structured learning. In this work, we propose two efficient algorithms for the distributed training of large-scale structured support vector machines. One is based on the alternating direction method of multipliers, and the other on a recently proposed distributed block-coordinate descent method. Both theoretical and experimental results indicate that our methods train structured models efficiently.
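The abstract names consensus ADMM as one of the two approaches. As a rough illustration of that general idea, the sketch below applies consensus ADMM to a plain binary linear SVM (not the paper's structured model) with worker parallelism simulated sequentially on one machine; all data, hyperparameters, and the subgradient-based local solver are illustrative assumptions, not the authors' method.

```python
# Minimal sketch of consensus ADMM for a distributed binary linear SVM.
# Each "worker" holds a data shard and a local weight vector w_k; a global
# consensus vector z carries the L2 regularizer. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data, split across simulated workers (assumed setup).
n, d, n_workers, lam, rho = 600, 20, 4, 1e-2, 1.0
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = np.sign(X @ w_true + 0.1 * rng.normal(size=n))
shards = np.array_split(np.arange(n), n_workers)

def local_update(Xi, yi, z, ui, rho, steps=50, lr=0.1):
    """Approximately solve the local subproblem
       argmin_w  (1/m) sum_i hinge(yi, Xi w) + (rho/2)||w - z + ui||^2
    with plain subgradient steps (a crude stand-in for an exact solver)."""
    w = z.copy()
    for _ in range(steps):
        margin = yi * (Xi @ w)
        active = margin < 1                        # hinge subgradient support
        g = -(yi[active, None] * Xi[active]).sum(axis=0) / len(yi)
        g += rho * (w - z + ui)
        w -= lr * g
    return w

z = np.zeros(d)                                    # global consensus iterate
W = np.zeros((n_workers, d))                       # local primal variables
U = np.zeros((n_workers, d))                       # scaled dual variables

for it in range(30):
    for k, idx in enumerate(shards):               # in practice: in parallel
        W[k] = local_update(X[idx], y[idx], z, U[k], rho)
    # z-update: closed-form minimizer of (lam/2)||z||^2 plus the
    # rho-weighted quadratic coupling to all (w_k + u_k).
    z = rho * (W + U).sum(axis=0) / (lam + n_workers * rho)
    U += W - z                                     # dual ascent on consensus gap

print(f"training accuracy: {np.mean(np.sign(X @ z) == y):.3f}")
```

For structured models, the local subproblem would additionally require inference over structured outputs (e.g., loss-augmented decoding), which is the expensive step this kind of distribution aims to amortize; the sketch above omits that entirely.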