pith. machine review for the scientific record.

arxiv: 1906.06639 · v1 · submitted 2019-06-16 · 💻 cs.LG · stat.ML


Reinforcement Learning Driven Heuristic Optimization

classification 💻 cs.LG stat.ML
keywords: heuristic algorithms, better, learning, optimization, annealing, approaches, framework
original abstract

Heuristic algorithms such as simulated annealing, Concorde, and METIS are effective and widely used approaches to find solutions to combinatorial optimization problems. However, they are limited by the high sample complexity required to reach a reasonable solution from a cold-start. In this paper, we introduce a novel framework to generate better initial solutions for heuristic algorithms using reinforcement learning (RL), named RLHO. We augment the ability of heuristic algorithms to greedily improve upon an existing initial solution generated by RL, and demonstrate novel results where RL is able to leverage the performance of heuristics as a learning signal to generate better initialization. We apply this framework to Proximal Policy Optimization (PPO) and Simulated Annealing (SA). We conduct a series of experiments on the well-known NP-complete bin packing problem, and show that the RLHO method outperforms our baselines. We show that on the bin packing problem, RL can learn to help heuristics perform even better, allowing us to combine the best parts of both approaches.
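The abstract describes a two-stage loop: an RL policy proposes an initial solution, a heuristic (here, simulated annealing) refines it, and the refined cost flows back to the policy as a learning signal. Below is a minimal, hedged sketch of that loop on a toy bin packing instance. The function names (`pack_cost`, `simulated_annealing`, `rlho`) are illustrative, not from the paper, and the "policy" is a random-proposal stand-in rather than PPO; the point is only to show where the heuristic's performance enters as the reward.

```python
import math
import random

def pack_cost(assignment, sizes, capacity):
    """Cost of a packing: number of bins used, plus a heavy
    penalty for any bin loaded past capacity."""
    loads = {}
    for item, b in enumerate(assignment):
        loads[b] = loads.get(b, 0) + sizes[item]
    overflow = sum(max(0, load - capacity) for load in loads.values())
    return len(loads) + 10.0 * overflow

def simulated_annealing(assignment, sizes, capacity, steps=2000, t0=1.0):
    """Refine an initial assignment by moving single items between
    bins, accepting worse moves with a temperature-decayed probability."""
    cur = list(assignment)
    cur_cost = pack_cost(cur, sizes, capacity)
    best, best_cost = cur, cur_cost
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9  # linear cooling schedule
        cand = list(cur)
        cand[random.randrange(len(cand))] = random.randrange(len(sizes))
        cand_cost = pack_cost(cand, sizes, capacity)
        if cand_cost < cur_cost or random.random() < math.exp((cur_cost - cand_cost) / t):
            cur, cur_cost = cand, cand_cost
            if cur_cost < best_cost:
                best, best_cost = list(cur), cur_cost
    return best, best_cost

def rlho(sizes, capacity, episodes=20):
    """RLHO-style outer loop: the 'policy' proposes an initial solution,
    SA improves it, and the negative refined cost is the reward the
    policy would learn from. Here the policy is a random stand-in that
    simply keeps the best episode."""
    best = None
    for _ in range(episodes):
        init = [random.randrange(len(sizes)) for _ in sizes]  # policy proposal
        refined, cost = simulated_annealing(init, sizes, capacity)
        reward = -cost  # heuristic performance as the learning signal
        if best is None or cost < best[1]:
            best = (refined, cost)
    return best
```

In the paper's actual framework the random proposal above is replaced by a PPO policy trained on that reward, so better initializations are generated over time rather than merely sampled.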

This paper has not been read by Pith yet.

discussion (0)
