Weak subgradient method with path based target level algorithm for nonconvex optimization

We study a new version of the weak subgradient method, recently developed by Dinc Yalcin and Kasimbeyli, for solving nonsmooth, nonconvex problems. At each iteration, the method uses an arbitrary weak subgradient of the objective function at the current point, combined with a dynamic stepsize, to generate the next point. The target value required by the dynamic stepsize is updated by a path-based target level (PBTL) algorithm, designed so that the generated objective values approach the optimal value of the problem. We analyze the convergence of the proposed method and give an estimate of its convergence rate. Furthermore, we demonstrate its performance on nonsmooth, nonconvex test problems and report computational results compared with approximately optimal solutions.
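To make the iteration concrete, the following Python sketch illustrates a subgradient-type step with a Polyak-style dynamic stepsize driven by a path-based target level, in the spirit of Brannlund and Goffin–Kiwiel. It is a minimal sketch, not the authors' implementation: the function and parameter names (`pbtl_subgradient_method`, `delta0`, `path_bound`), the level-halving rule, and the demo objective are illustrative assumptions, and a plain subgradient oracle stands in for the weak subgradient pair used in the paper.

```python
# Minimal sketch (not the authors' code) of a subgradient iteration with a
# path-based target level (PBTL) update. The oracle `subgrad` stands in for
# the weak subgradient oracle of the weak subgradient method.

import numpy as np

def pbtl_subgradient_method(f, subgrad, x0, delta0=1.0, path_bound=10.0,
                            max_iter=500, tol=1e-8):
    x = np.asarray(x0, dtype=float)
    f_rec = f(x)                 # best (record) value found so far
    delta = delta0               # current gap defining the target level
    sigma = 0.0                  # path length travelled since the last reset
    for _ in range(max_iter):
        g = subgrad(x)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        # dynamic (Polyak-type) stepsize aiming at the target level f_rec - delta
        level = f_rec - delta
        step = (f(x) - level) / gnorm**2
        x = x - step * g
        sigma += step * gnorm    # accumulate the path length
        fx = f(x)
        if fx <= f_rec - 0.5 * delta:
            # sufficient decrease: record the new best value and reset the path
            f_rec, sigma = fx, 0.0
        elif sigma > path_bound:
            # path too long without sufficient decrease: shrink the level gap
            delta, sigma = 0.5 * delta, 0.0
        f_rec = min(f_rec, fx)
    return x, f_rec

if __name__ == "__main__":
    # nonsmooth demo objective f(x) = |x1| + 2|x2|
    f = lambda x: abs(x[0]) + 2.0 * abs(x[1])
    subgrad = lambda x: np.array([np.sign(x[0]) if x[0] else 1.0,
                                  2.0 * np.sign(x[1]) if x[1] else 2.0])
    x_best, f_best = pbtl_subgradient_method(f, subgrad, x0=[3.0, -2.0])
    print(x_best, f_best)
```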

___

  • Akbari, Z., Yousefpour, R., Peyghami, M. R., A new nonsmooth trust region algorithm for locally Lipschitz unconstrained optimization problems, Journal of Optimization Theory and Applications, 164 (3) (2015), 733–754, https://dx.doi.org/10.1007/s10957-014-0534-6
  • Allen, E., Helgason, R., Kennington, J., Shetty, B., A generalization of Polyak’s convergence result for subgradient optimization, Mathematical Programming, 37 (3) (1987), 309–317.
  • Azimov, A., Gasimov, R., On weak conjugacy, weak subdifferentials and duality with zero gap in nonconvex optimization, International Journal of Applied Mathematics, 1 (2) (1999), 171–192.
  • Azimov, A., Gasimov, R., Stability and duality of nonconvex problems via augmented Lagrangian, Cybernetics and Systems Analysis, 38 (3) (2002), 412–421, https://dx.doi.org/10.1023/A:1020316811823
  • Bagirov, A., A method for minimization of quasidifferentiable functions, Optimization Methods and Software, 17 (1) (2002), 31–60, https://dx.doi.org/10.1080/10556780290027837
  • Bagirov, A. M., A method for minimizing convex functions based on continuous approximations to the subdifferential, Optimization Methods and Software, 9 (1998), 1–17, https://dx.doi.org/10.1080/10556789808805683
  • Bagirov, A. M., Ugon, J., Codifferential method for minimizing nonsmooth DC functions, Journal of Global Optimization, 50 (1) (2011), 3–22, https://dx.doi.org/10.1007/s10898-010-9569-x
  • Beck, A., Teboulle, M., Smoothing and first order methods: A unified framework, SIAM Journal on Optimization, 22 (2) (2012), 557–580, https://dx.doi.org/10.1137/100818327
  • Bolte, J., Sabach, S., Teboulle, M., Proximal alternating linearized minimization for nonconvex and nonsmooth problems, Mathematical Programming, 146 (1-2) (2014), 459–494, https://dx.doi.org/10.1007/s10107-013-0701-9
  • Borges, P., Sagastizábal, C., Solodov, M., A regularized smoothing method for fully parameterized convex problems with applications to convex and nonconvex two-stage stochastic programming, Mathematical Programming (2020), 1–33, https://dx.doi.org/10.1007/s10107-020-01582-2
  • Boţ, R. I., Csetnek, E. R., An inertial Tseng's type proximal algorithm for nonsmooth and nonconvex optimization problems, Journal of Optimization Theory and Applications, 171 (2) (2016), 600–616, https://dx.doi.org/10.1007/s10957-015-0730-z
  • Boţ, R. I., Csetnek, E. R., Nguyen, D. K., A proximal minimization algorithm for structured nonconvex and nonsmooth problems, SIAM Journal on Optimization, 29 (2) (2019), 1300–1328, https://dx.doi.org/10.1137/18M1190689
  • Boţ, R. I., Hendrich, C., A variable smoothing algorithm for solving convex optimization problems, TOP, 23 (1) (2015), 124–150, https://dx.doi.org/10.1007/s11750-014-0326-z
  • Brännlund, U., On Relaxation Methods for Nonsmooth Convex Optimization, PhD thesis, Department of Mathematics, Royal Institute of Technology, Stockholm, 1993.
  • Burke, J. V., Hoheisel, T., Epi-convergent smoothing with applications to convex composite functions, SIAM Journal on Optimization, 23 (3) (2013), 1457–1479, https://dx.doi.org/10.1137/120889812
  • Burke, J. V., Lewis, A. S., Overton, M. L., A robust gradient sampling algorithm for nonsmooth, nonconvex optimization, SIAM Journal on Optimization, 15 (3) (2005), 751–779, https://dx.doi.org/10.1137/030601296
  • Chen, X., Smoothing methods for nonsmooth, nonconvex minimization, Mathematical Programming, 134 (1) (2012), 71–99, https://dx.doi.org/10.1007/s10107-012-0569-0
  • Clarke, F. H., Optimization and Nonsmooth Analysis, SIAM, 1990.
  • Curtis, F. E., Que, X., An adaptive gradient sampling algorithm for nonsmooth optimization, Optimization Methods and Software, 28 (6) (2013), 1302–1324, https://dx.doi.org/10.1080/10556788.2012.714781
  • Curtis, F. E., Que, X., A quasi-Newton algorithm for nonconvex, nonsmooth optimization with global convergence guarantees, Mathematical Programming Computation, 7 (4) (2015), 399–428, https://dx.doi.org/10.1007/s12532-015-0086-2
  • Dennis, J. E., Li, S. B. B., Tapia, R. A., A unified approach to global convergence of trust region methods for nonsmooth optimization, Mathematical Programming, 68 (1-3) (1995), 319–346, https://dx.doi.org/10.1007/BF01585770
  • Dinc Yalcin, G., Kasimbeyli, R., Weak subgradient method for solving nonsmooth nonconvex optimization problems, Optimization, 70 (7) (2021), 1513–1553, https://dx.doi.org/10.1080/02331934.2020.1745205
  • Ermoliev, Y. M., Methods of solution of nonlinear extremal problems, Cybernetics, 2 (4) (1966), 1–14, https://dx.doi.org/10.1007/BF01071403
  • Fuduli, A., Gaudioso, M., Giallombardo, G., A DC piecewise affine model and a bundling technique in nonconvex nonsmooth minimization, Optimization Methods and Software, 19 (1) (2004), 89–102, https://dx.doi.org/10.1080/10556780410001648112
  • Fuduli, A., Gaudioso, M., Giallombardo, G., Minimizing nonconvex nonsmooth functions via cutting planes and proximity control, SIAM Journal on Optimization, 14 (3) (2004), 743–756, https://dx.doi.org/10.1137/S1052623402411459
  • Gasimov, R. N., Augmented Lagrangian duality and nondifferentiable optimization methods in nonconvex programming, Journal of Global Optimization, 24 (2) (2002), 187–203, https://dx.doi.org/10.1023/A:1020261001771
  • Goffin, J. L., Kiwiel, K. C., Convergence of a simple subgradient level method, Mathematical Programming, 85 (1) (1999), 207–211, https://dx.doi.org/10.1007/s101070050053
  • Grothey, A., Decomposition Methods for Nonlinear Nonconvex Optimization Problems, PhD thesis, Citeseer, 2001.
  • Haarala, M., Miettinen, K., Mäkelä, M. M., New limited memory bundle method for large-scale nonsmooth optimization, Optimization Methods and Software, 19 (6) (2004), 673–692, https://dx.doi.org/10.1080/10556780410001689225
  • Haarala, N., Miettinen, K., Mäkelä, M. M., Globally convergent limited memory bundle method for large-scale nonsmooth optimization, Mathematical Programming, 109 (1) (2007), 181–205, https://dx.doi.org/10.1007/s10107-006-0728-2
  • Hoseini, N., Nobakhtian, S., A new trust region method for nonsmooth nonconvex optimization, Optimization, 67 (8) (2018), 1265–1286, https://dx.doi.org/10.1080/02331934.2018.1470175
  • Hu, Y., Yang, X., Sim, C.-K., Inexact subgradient methods for quasi-convex optimization problems, European Journal of Operational Research, 240 (2) (2015), 315–327, https://dx.doi.org/10.1016/j.ejor.2014.05.017
  • Kasimbeyli, R., Mammadov, M., On weak subdifferentials, directional derivatives, and radial epiderivatives for nonconvex functions, SIAM Journal on Optimization, 20 (2) (2009), 841– 855, https://dx.doi.org/10.1137/080738106
  • Kasimbeyli, R., Mammadov, M., Optimality conditions in nonconvex optimization via weak subdifferentials, Nonlinear Analysis: Theory, Methods & Applications, 74 (7) (2011), 2534–2547, https://dx.doi.org/10.1016/j.na.2010.12.008
  • Kiwiel, K. C., Methods of Descent for Nondifferentiable Optimization, Lecture Notes in Mathematics, No. 1133, Springer-Verlag, Berlin, 1985.
  • Kiwiel, K. C., Restricted step and Levenberg–Marquardt techniques in proximal bundle methods for nonconvex nondifferentiable optimization, SIAM Journal on Optimization, 6 (1) (1996), 227–249, https://dx.doi.org/10.1137/0806013
  • Kiwiel, K. C., Convergence and efficiency of subgradient methods for quasiconvex minimization, Mathematical Programming, 90 (1) (2001), 1–25, https://dx.doi.org/10.1007/s101070100198
  • Kiwiel, K. C., Convergence of approximate and incremental subgradient methods for convex optimization, SIAM Journal on Optimization, 14 (3) (2004), 807–840, https://dx.doi.org/10.1137/S1052623400376366
  • Kiwiel, K. C., Convergence of the gradient sampling algorithm for nonsmooth nonconvex optimization, SIAM Journal on Optimization, 18 (2) (2007), 379–388, https://dx.doi.org/10.1137/050639673
  • Lewis, A. S., Overton, M. L., Nonsmooth optimization via quasi-Newton methods, Mathematical Programming, 141 (1-2) (2013), 135–163, https://dx.doi.org/10.1007/s10107-012-0514-2
  • Luksan, L., Vlcek, J., A bundle-Newton method for nonsmooth unconstrained minimization, Mathematical Programming, 83 (1-3) (1998), 373–391, https://dx.doi.org/10.1007/BF02680566
  • Luksan, L., Vlcek, J., NDA: Algorithms for Nondifferentiable Optimization, Technical Report 797, Institute of Computer Science, Academy of Sciences of the Czech Republic, Prague, 2000.
  • Luksan, L., Vlcek, J., Ramesova, N., UFO 2002, Interactive System for Universal Functional Optimization, Technical Report 883, 2002.
  • Mäkelä, M. M., Neittaanmäki, P., Nonsmooth Optimization: Analysis and Algorithms with Applications to Optimal Control, World Scientific, 1992.
  • Nedic, A., Bertsekas, D. P., Incremental subgradient methods for nondifferentiable optimization, SIAM Journal on Optimization, 12 (1) (2001), 109–138, https://dx.doi.org/10.1137/S1052623499362111
  • Nedic, A., Bertsekas, D. P., The effect of deterministic noise in subgradient methods, Mathematical Programming, 125 (1) (2010), 75–99, https://dx.doi.org/10.1007/s10107-008-0262-5
  • Nesterov, Y., Smooth minimization of non-smooth functions, Mathematical Programming, 103 (1) (2005), 127–152, https://dx.doi.org/10.1007/s10107-004-0552-5
  • Pock, T., Sabach, S., Inertial proximal alternating linearized minimization (iPALM) for nonconvex and nonsmooth problems, SIAM Journal on Imaging Sciences, 9 (4) (2016), 1756–1787, https://dx.doi.org/10.1137/16M1064064
  • Polak, E., Mayne, D. Q., Higgins, J. E., Superlinearly convergent algorithm for minmax problems, Journal of Optimization Theory and Applications, 69 (3) (1991), 407–439, https://dx.doi.org/10.1007/BF00940683
  • Polyak, B. T., A general method for solving extremal problems, Doklady Akademii Nauk SSSR, 174 (1) (1967), 33–36.
  • Polyak, B. T., Minimization of unsmooth functionals, USSR Computational Mathematics and Mathematical Physics, 9 (3) (1969), 14–29, https://dx.doi.org/10.1016/0041-5553(69)90061-5
  • Qi, L., Sun, J., A trust region algorithm for minimization of locally Lipschitzian functions, Mathematical Programming, 66 (1-3) (1994), 25–43, https://dx.doi.org/10.1007/BF01581136
  • Shor, N. Z., Minimization Methods for Non-differentiable Functions (translated by K. C. Kiwiel and A. Ruszczyński), Springer-Verlag, Berlin, Heidelberg, 1985.
  • Tran-Dinh, Q., Adaptive smoothing algorithms for nonsmooth composite convex minimization, Computational Optimization and Applications, 66 (3) (2017), 425–451, https://dx.doi.org/10.1007/s10589-016-9873-6
  • Vlcek, J., Luksan, L., Globally convergent variable metric method for nonconvex nondifferentiable unconstrained minimization, Journal of Optimization Theory and Applications, 111 (2) (2001), 407–430, https://dx.doi.org/10.1023/A:1011990503369
  • Zhang, P., Bao, G., Path-based incremental target level algorithm on Riemannian manifolds, Optimization, 69 (4) (2020), 799–819, https://dx.doi.org/10.1080/02331934.2019.1671840