AN APPROXIMATE DYNAMIC PROGRAMMING HEURISTIC FOR THE CAPACITATED STOCHASTIC INVENTORY PROBLEM

Inventory management under uncertainty is crucial for enterprises operating in a competitive environment. This study presents a heuristic based on Approximate Dynamic Programming (ADP) for the capacitated stochastic inventory problem. The proposed heuristic employs the Temporal Difference (TD) Learning method. The performance of the heuristic solutions is assessed against the optimal solutions obtained with a Dynamic Programming algorithm. Numerical experiments show that ADP provides promising solutions within relatively short computation times.
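The abstract does not detail the algorithm, but the general idea can be illustrated with a minimal sketch: a tabular TD(0) scheme that learns approximate state values for a single-item, periodic-review inventory model with a per-period ordering capacity, simulated demand, and linear holding/backorder costs, and then extracts an ordering policy by one-step lookahead on the learned values. All parameter values, the demand distribution, and the helper names (`td_learn`, `greedy_order`, `stage_cost`) are illustrative assumptions, not the authors' formulation.

```python
import random

# --- Illustrative problem data (assumptions, not taken from the paper) ---
CAPACITY = 5                  # per-period ordering/production capacity
MIN_INV, MAX_INV = -20, 20    # truncated inventory range (backorders allowed)
HOLD, BACKORDER = 1.0, 9.0    # linear holding / backorder cost per unit
GAMMA = 0.95                  # discount factor
DEMANDS = [0, 1, 2, 3, 4]     # demand support
PROBS = [0.1, 0.2, 0.4, 0.2, 0.1]

def sample_demand():
    return random.choices(DEMANDS, weights=PROBS)[0]

def next_state(x, q, d):
    # inventory after ordering q and observing demand d, clipped to the grid
    return max(MIN_INV, min(MAX_INV, x + q - d))

def stage_cost(y, d):
    # y = inventory position after ordering, before demand is realized
    leftover = y - d
    return HOLD * max(leftover, 0) + BACKORDER * max(-leftover, 0)

def greedy_order(x, V):
    # one-step lookahead on the current value approximation
    best_q, best_val = 0, float("inf")
    for q in range(CAPACITY + 1):
        val = sum(p * (stage_cost(x + q, d) + GAMMA * V[next_state(x, q, d)])
                  for d, p in zip(DEMANDS, PROBS))
        if val < best_val:
            best_q, best_val = q, val
    return best_q

def td_learn(episodes=2000, horizon=50, alpha=0.1, eps=0.1):
    V = {x: 0.0 for x in range(MIN_INV, MAX_INV + 1)}  # tabular value estimates
    for _ in range(episodes):
        x = 0
        for _ in range(horizon):
            # epsilon-greedy simulation of the inventory system
            q = random.randint(0, CAPACITY) if random.random() < eps else greedy_order(x, V)
            d = sample_demand()
            x_next = next_state(x, q, d)
            cost = stage_cost(x + q, d)
            # TD(0) update toward the one-step bootstrapped target
            V[x] += alpha * (cost + GAMMA * V[x_next] - V[x])
            x = x_next
    return V

if __name__ == "__main__":
    V = td_learn()
    policy = {x: greedy_order(x, V) for x in range(-5, 11)}
    print(policy)
```

On such a small, bounded state space a lookup table is enough; for larger instances the same update would typically be applied to a parametric approximation of the value function (for example, a linear combination of basis functions of the inventory level), which is the usual ADP setting.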
