pith. machine review for the scientific record.

arxiv: 1608.05749 · v1 · submitted 2016-08-19 · 💻 cs.LG · cs.IT · math.IT · math.ST · stat.ML · stat.TH

Recognition: unknown

Solving a Mixture of Many Random Linear Equations by Tensor Decomposition and Alternating Minimization

Authors on Pith: no claims yet
classification 💻 cs.LG · cs.IT · math.IT · math.ST · stat.ML · stat.TH
keywords linear · mixed · algorithm · alternating · minimization · problem · tensor · components
original abstract

We consider the problem of solving mixed random linear equations with $k$ components. This is the noiseless setting of mixed linear regression. The goal is to estimate multiple linear models from mixed samples in the case where the labels (which sample corresponds to which model) are not observed. We give a tractable algorithm for the mixed linear equation problem, and show that under some technical conditions, our algorithm is guaranteed to solve the problem exactly with sample complexity linear in the dimension, and polynomial in $k$, the number of components. Previous approaches have required either exponential dependence on $k$, or super-linear dependence on the dimension. The proposed algorithm is a combination of tensor decomposition and alternating minimization. Our analysis involves proving that the initialization provided by the tensor method allows alternating minimization, which is equivalent to EM in our setting, to converge to the global optimum at a linear rate.
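The alternating-minimization step the abstract describes can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the labels, dimensions, and the small random perturbation standing in for the tensor-method initialization are all assumptions for the example. Each iteration assigns every sample to the model with the smallest residual (the E-step of hard-assignment EM) and then refits each model by least squares on its assigned samples (the M-step).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative noiseless mixed linear equations: k hidden linear models,
# each sample satisfies y_i = <x_i, w_{z_i}> with the label z_i unobserved.
d, k, n = 10, 2, 400
W_true = rng.normal(size=(k, d))
X = rng.normal(size=(n, d))
z = rng.integers(k, size=n)
y = np.einsum("nd,nd->n", X, W_true[z])

# Alternating minimization (equivalent to hard-assignment EM here):
#   E-step: assign each sample to the model with the smallest residual;
#   M-step: refit each model by least squares on its assigned samples.
# The paper obtains its initialization from a tensor decomposition; as a
# stand-in for this sketch, we perturb the true parameters slightly.
W = W_true + 0.1 * rng.normal(size=(k, d))
obj_hist = []
for _ in range(20):
    resid = (X @ W.T - y[:, None]) ** 2        # (n, k) squared residuals
    labels = resid.argmin(axis=1)
    obj_hist.append(resid.min(axis=1).sum())   # best-assignment objective
    for j in range(k):
        mask = labels == j
        if mask.sum() >= d:                    # enough samples to refit w_j
            W[j], *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
```

Each E-step and M-step can only lower the best-assignment objective, so `obj_hist` is non-increasing; in the noiseless setting, once every label is assigned correctly a single least-squares refit recovers each model exactly, which is why the quality of the initialization drives the analysis.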

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Locally Near Optimal Piecewise Linear Regression in High Dimensions via Difference of Max-Affine Functions

    stat.ML 2026-05 unverdicted novelty 7.0

    ABGD parametrizes piecewise linear functions as difference of max-affine functions and converges linearly to an epsilon-accurate solution with O(d max(sigma/epsilon,1)^2) samples under sub-Gaussian noise, which is min...

  2. Expectation Maximization (EM) Converges for General Agnostic Mixtures

    cs.LG 2026-04 conditional novelty 7.0

    Gradient EM converges exponentially to optimal population loss minimizers for agnostic fitting of k parametric functions under strong convexity and smoothness of the loss, proper initialization, and separation conditions.