I added a link to the paper “A Multilevel Framework for Sparse Optimization With Application to Inverse Covariance Estimation and Logistic Regression”, soon to appear in the SIAM Journal on Scientific Computing (SISC). The paper describes a framework that accelerates sparse optimization methods that use L1 regularization to obtain sparse solutions. We show how to apply it to the sparse inverse covariance estimation method (also known as GLASSO) and to L1-regularized logistic regression.
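For concreteness, both applications minimize an L1-regularized objective. In generic notation (my shorthand here, not necessarily the paper's), the sparse inverse covariance problem for a sample covariance matrix $S$ reads

$$\min_{X \succ 0} \; -\log\det(X) + \mathrm{tr}(SX) + \lambda \|X\|_1,$$

and L1-regularized logistic regression, for samples $(a_i, y_i)$ with labels $y_i \in \{-1, +1\}$, reads

$$\min_{w} \; \sum_i \log\!\left(1 + \exp(-y_i\, w^\top a_i)\right) + \lambda \|w\|_1.$$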
Released Code of the Block-Coordinate Descent for Inverse Covariance
I released the code for the paper “A Block-Coordinate Descent Approach for Large-scale Sparse Inverse Covariance Estimation”, which was presented at NIPS 2014. The algorithm includes a flag that enables the multilevel acceleration. This flag is very useful for large-scale problems with thousands to millions of variables. The code runs in Matlab and includes some functions in C that require compilation. It also calls functions from METIS 5.0.2 to partition the neighbors in every sweep. The released version was tested on Windows, although it should work on other platforms as well.
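To give an idea of the workflow, here is roughly how a Matlab session looks. I am using generic names here (bcd_ic, opts.multilevel, and the C file name are placeholders); see the README in the package for the exact function and flag names.

    % Compile the C helper functions (file name is a placeholder).
    mex -largeArrayDims bcd_ic_core.c

    % Sample covariance of synthetic data with 1000 variables.
    S = cov(randn(200, 1000));
    lambda = 0.5;                 % L1 regularization parameter
    opts.multilevel = true;       % enable the multilevel acceleration
    X = bcd_ic(S, lambda, opts);  % sparse inverse covariance estimate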
You are welcome to try it and to contact me with any comments you may have. I would also like to hear from anybody who manages to run it on Linux or Mac.
Sparse Inverse Covariance Estimation Paper Published on the NIPS Site
Today I found that the work “A Block-Coordinate Descent Approach for Large-Scale Sparse Inverse Covariance Estimation”, joint with Eran Treister, was published on the NIPS 2014 proceedings website. I will publish the code for this algorithm and for the multilevel framework in a few days.
I hope you enjoy it, and please send me your comments!
Optimization Workshop OPT2014 at NIPS
Last week I received notice that the work with Eran Treister and Irad Yavneh was accepted to the optimization workshop (OPT2014) at NIPS 2014. This is a follow-up to the sparse inverse covariance work, in which we present an acceleration framework based on multilevel techniques. The framework reduces the number of computations by defining a hierarchy of levels and updating, at each level, only a subset of the active set of non-zero elements. We tested the framework on the QUIC and BCD-IC algorithms with very interesting results; in particular, for large-scale problems the running times are reduced by up to 10x.
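To illustrate the flavor of the framework, here is a self-contained toy sketch in Matlab, applied to a lasso (L1-regularized least-squares) problem rather than to inverse covariance: coarse levels update only a small, high-priority subset of the variables, and finer levels progressively open up the full set. This is an illustration of the idea only, not the QUIC/BCD-IC integration from the paper, and all the problem data below is synthetic.

    % Toy multilevel acceleration on a lasso problem (illustration only).
    rng(0);
    n = 400;
    A = randn(100, n);
    x_true = zeros(n, 1);  x_true(randperm(n, 10)) = 1;  % sparse ground truth
    b = A * x_true;
    lambda = 1;
    x = zeros(n, 1);
    L = norm(A)^2;                  % Lipschitz constant of the gradient
    for level = 3:-1:0              % coarse (small subset) to fine (all variables)
        g = A' * (A*x - b);         % gradient of the smooth part
        [~, order] = sort(abs(g), 'descend');   % rank variables by likely activity
        subset = order(1 : ceil(n / 2^level));  % coarser levels touch fewer variables
        for sweep = 1:20            % proximal-gradient sweeps restricted to the subset
            g = A' * (A*x - b);
            step = x(subset) - g(subset) / L;
            x(subset) = sign(step) .* max(abs(step) - lambda/L, 0);  % soft-thresholding
        end
    end
    fprintf('nonzeros in the solution: %d of %d\n', nnz(x), n);

The cheap coarse sweeps settle most of the sparsity pattern early, so the expensive full-dimension sweeps are needed only a few times; that is where the savings come from.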
See you at NIPS 2014 and at the OPT2014 workshop.
NIPS 2014 – Accepted!
A few days ago I received the notification that the work I submitted with my friend and colleague Eran Treister back in June was accepted to NIPS 2014. The conference will be held in Montreal, Canada, from December 8th to 11th. Our work presents a new algorithm for solving the sparse inverse covariance estimation problem in high dimensions, where memory becomes a limiting factor. In the work we show that the algorithm is faster than previous methods on problems with thousands to millions of variables, and that, thanks to its reduced memory usage, it can run on a single server with 64GB of memory.
2nd Prize in the CS Research Day 2014
The poster I presented on the new work with my friend and colleague Eran Treister won 2nd place at the CS Faculty Research Day. The event was held last Monday at the CS faculty building, and the visitors included undergraduate students, professors, and people from industry.
The work presents a state-of-the-art method for computing the sparse inverse of the covariance matrix in huge dimensions (hundreds of thousands of variables). The method can handle a 100K by 100K matrix in about 10 hours on a quad-core computer with 8GB of memory.