Evolution through large models


This paper explores how large language models (LLMs) trained to generate code can greatly improve the effectiveness of mutation operators in genetic programming (GP). Because these LLMs are trained on data that includes sequential changes and modifications to code, they can approximate the kinds of edits a human programmer would make. The main experiment demonstrates the reach of this evolution through large models (ELM) approach: combined with MAP-Elites, it generates hundreds of thousands of functional Python programs that produce walking robots in the Sodarace domain, which the original LLM never encountered during pre-training. These examples are then used to bootstrap the training of a new conditional language model that can output a suitable walker for a given terrain. The ability to bootstrap models that generate working artifacts in domains absent from the original training data has significant implications for open-endedness, deep learning, and reinforcement learning, which the paper discusses in detail with the aim of inspiring new avenues of research made possible by ELM.
