Parallel algorithms for computing sparse matrix permanents

The permanent is an important characteristic of a matrix and has been used in many applications. Unfortunately, it is a hard-to-compute and hard-to-approximate immanant. For dense/full matrices, the fastest exact algorithm, Ryser's, has O(2^{n-1}n) complexity. In this work, a parallel algorithm, SkipPer, is proposed to exploit the sparsity within the input matrix as much as possible. SkipPer restructures the matrix to reduce the overall work, skips unnecessary steps, and employs a coarse-grain, shared-memory parallelization with dynamic scheduling. The experiments show that SkipPer improves the performance of exact permanent computation by up to 140× compared to the naive version for general matrices. Furthermore, thanks to the coarse-grain parallelization, a 14–15× speedup on average is obtained with τ = 16 threads over sequential SkipPer. Overall, by exploiting both sparsity and parallelism, the speedup reaches 2000× for some of the matrices used in the experimental setting. The techniques proposed in this paper can significantly improve the performance of exact permanent computation by simply replacing Ryser with SkipPer, especially when the matrix is highly sparse.
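To make the baseline concrete, the sketch below shows Ryser's formula, per(A) = Σ_{S⊆{1..n}} (-1)^{n-|S|} Π_i Σ_{j∈S} a_ij, with a coarse-grain OpenMP parallelization over column subsets. This is only an illustration of the dense baseline that SkipPer improves upon, not the paper's SkipPer implementation: the function name ryser_permanent, the dynamic chunk size, and the zero-row-sum early exit are assumptions for illustration, and this naive subset enumeration costs O(2^n n^2) rather than the Gray-code O(2^{n-1}n) bound cited above.

```cpp
// Minimal sketch of Ryser's formula with coarse-grain OpenMP parallelism.
// Compile with -fopenmp (GCC/Clang). Illustrative only; not the SkipPer algorithm.
#include <cstdint>
#include <vector>

double ryser_permanent(const std::vector<std::vector<double>>& A) {
    const int n = static_cast<int>(A.size());
    double total = 0.0;
    // Each thread takes chunks of column subsets; dynamic scheduling balances the load.
    #pragma omp parallel for reduction(+:total) schedule(dynamic, 1024)
    for (std::int64_t s = 1; s < (std::int64_t(1) << n); ++s) {
        double rowProd = 1.0;
        for (int i = 0; i < n && rowProd != 0.0; ++i) {   // early exit on a zero row sum
            double rowSum = 0.0;
            for (int j = 0; j < n; ++j)
                if (s & (std::int64_t(1) << j)) rowSum += A[i][j];
            rowProd *= rowSum;
        }
        // Sign is (-1)^(n - |S|); popcount via a GCC/Clang builtin.
        int popcnt = __builtin_popcountll(static_cast<unsigned long long>(s));
        total += (((n - popcnt) & 1) ? -1.0 : 1.0) * rowProd;
    }
    return total;
}
```

The zero-row-sum early exit hints at why sparsity matters: for highly sparse matrices many column subsets yield an all-zero row sum, and skipping such work is the kind of saving SkipPer pursues systematically.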

___

  • [1] Kılıç E, Tasçı D. On the permanents of some tridiagonal matrices with applications to the Fibonacci and Lucas numbers. Rocky Mountain Journal of Mathematics 2007; 37: 1953-1969. doi: 10.1216/rmjm/1199649832
  • [2] Balakrishnan N. Permanents, order statistics, outliers, and robustness. Revista Matematica Complutense 2007; 20 (1): 7-107. doi: 10.5209/rev_REMA.2007.v20.n1.16528
  • [3] Aaronson S, Arkhipov A. The computational complexity of linear optics. In: Proceedings of the Forty-third Annual ACM Symposium on Theory of Computing; New York, NY, USA; 2011. pp. 333-342.
  • [4] Wu J, Liu Y, Zhang B, Jin X, Wang Y et al. A benchmark test of boson sampling on Tianhe-2 supercomputer. National Science Review 2018; 5 (5): 715-720. doi: 10.1093/nsr/nwy079
  • [5] Narahara M, Tamaki K, Yamada R. Application of permanents of square matrices for DNA identification in multiple-fatality cases. BMC Genetics 2013; 14 (1): 72. doi: 10.1186/1471-2156-14-72
  • [6] Mahajan M, Raghavendra RBV, Sreenivasaiah K. Monomials, multilinearity and identity testing in simple read-restricted circuits. Theoretical Computer Science 2014; 524: 90-102. doi: 10.1016/j.tcs.2014.01.005
  • [7] Minc H. Permanents (Encyclopedia of Mathematics and Its Applications). Cambridge, UK: Cambridge University Press, 1984.
  • [8] Valiant LG. The complexity of computing the permanent. Theoretical Computer Science 1979; 8 (2): 189-201. doi: 10.1016/0304-3975(79)90044-6
  • [9] Jerrum M, Sinclair A, Vigoda E. A polynomial-time approximation algorithm for the permanent of a matrix with nonnegative entries. Journal of the ACM 2004; 51 (4): 671-697. doi: 10.1145/1008731.1008738
  • [10] Štefankovič D, Vigoda E, Wilmes J. On counting perfect matchings in general graphs. In: Theoretical Informatics, LATIN 2018; Cham, Switzerland; 2018. pp. 873-885. doi: 10.1007/978-3-319-77404-6_63
  • [11] Gurvits L, Samorodnitsky A. Bounds on the permanent and some applications. In: FOCS ’14 Proceedings of the 2014 IEEE 55th Annual Symposium on Foundations of Computer Science; Washington, DC, USA; 2014. pp. 90-99. doi: 10.1109/FOCS.2014.18
  • [12] Linial N, Samorodnitsky A, Wigderson A. A deterministic strongly polynomial algorithm for matrix scaling and approximate permanents. Combinatorica 2000; 20: 545-568. doi: 10.1007/s004930070007
  • [13] Dufossé F, Kaya K, Panagiotas I, Uçar B. Scaling Matrices and Counting the Perfect Matchings in Graphs. Research Report RR-9161. Inria Grenoble Rhône-Alpes, 2018.
  • [14] Anari N, Gurvits L, Gharan SO, Saberi A. Simply exponential approximation of the permanent of positive semidefinite matrices. In: 58th IEEE Annual Symposium on Foundations of Computer Science; Berkeley, CA, USA; 2017. pp. 914-925.
  • [15] Eldar L, Mehraban S. Approximating the permanent of a random matrix with vanishing mean. In: 59th IEEE Annual Symposium on Foundations of Computer Science; Paris, France; 2018. pp. 23-34.
  • [16] Mittal RC, Al-Kurdi A. Efficient computation of the permanent of a sparse matrix. International Journal of Computer Mathematics 2001; 77 (2): 189-199. doi: 10.1080/00207160108805061
  • [17] Servedio RA, Wan A. Computing sparse permanents faster. Information Processing Letters 2005; 96 (3): 89-92. doi: 10.1016/j.ipl.2005.06.007
  • [18] Bax E, Franklin J. A permanent algorithm with exp[Ω(n^{1/3}/(2 ln n))] expected speedup for 0-1 matrices. Algorithmica 2008; 32: 157-162. doi: 10.1007/s00453-001-0072-0
  • [19] Yue B, Liang H, Bai F. Improved algorithms for permanent and permanental polynomial of sparse graph. Communications in Mathematical and in Computer Chemistry 2013; 69: 831-842.
  • [20] Ryser HJ. Combinatorial Mathematics. New York, NY, USA: Mathematical Association of America, 1963. doi: 10.5948/UPO9781614440147
  • [21] Nijenhuis A, Wilf HS. Combinatorial Algorithms. New York, NY, USA: Academic Press, 1978.
  • [22] Forbert H, Marx D. Calculation of the permanent of a sparse positive matrix. Computer Physics Communications 2003; 150 (3): 267-273. doi: 10.1016/S0010-4655(02)00683-5
  • [23] Liang H, Huang S, Bai F. A hybrid algorithm for computing permanents of sparse matrices. Applied Mathematics and Computation 2006; 172 (2): 708-716. doi: 10.1016/j.amc.2004.11.020
  • [24] Wang L, Liang H, Bai F, Huo Y. A load balancing strategy for parallel computation of sparse permanents. Numerical Linear Algebra with Applications 2011; 19: 1017-1030. doi: 10.1002/nla.1844
  • [25] Björklund A, Gupt B, Quesada N. A faster Hafnian formula for complex matrices and its benchmarking on a supercomputer. Journal of Experimental Algorithmics 2019; 24 (1): 1.11:1-1.11:17. doi: 10.1145/3325111