<div align="center">

EVOLvE: Evaluating and Optimizing LLMs For In-Context Exploration

<p align="center"> <img src="https://github.com/allenanie/EVOLvE/blob/main/assets/logo.png?raw=true" alt="EVOLvE Logo" width="200" height="200"/> </p>

Github ArXiv

PyPI version Python License Build Status

<div align="center" style="font-family: Arial, sans-serif;"> <p> <a href="#-news" style="text-decoration: none; font-weight: bold;">🎉 News</a> • <a href="#️-installation" style="text-decoration: none; font-weight: bold;">✨ Getting Started</a> • <a href="#-features" style="text-decoration: none; font-weight: bold;">📖 Introduction</a> </p> <p> <a href="#-bandit-scenario-example" style="text-decoration: none; font-weight: bold;">🔧 Usage</a> • <a href="#-citation" style="text-decoration: none; font-weight: bold;">🎈 Citation</a> • <a href="#-acknowledgement" style="text-decoration: none; font-weight: bold;">🌻 Acknowledgement</a> </p> </div> </div>

EVOLvE is a framework for evaluating Large Language Models (LLMs) for In-Context Reinforcement Learning (ICRL). We provide a flexible framework for experimenting with different LLM Agent Context Layers and analyzing how they affect a model's ability to interact with RL environments (bandits). This repository contains the code to reproduce results from our EVOLvE paper.

📰 News

🚀 Features

  • Flexible framework for evaluating LLMs for In-Context Reinforcement Learning (ICRL)
  • Support for both multi-armed and contextual bandit scenarios
  • Mixin-based design for LLM agents with customizable Context Layers
  • Built-in support for few-shot learning and demonstrations
  • Includes popular benchmark environments (e.g., MovieLens)

🛠️ Installation

Option 1: Install from PyPI (Recommended for Users)

pip install banditbench

Option 2: Install from Source (Recommended for Developers)

git clone https://github.com/allenanie/EVOLvE.git
cd EVOLvE
pip install -e .  # Install in editable mode for development

🎯 Bandit Scenario

We provide two types of bandit scenarios:

Multi-Armed Bandit Scenario

  • Classic exploration-exploitation problem with stochastic rewards sampled from fixed distributions
  • Agent learns to select the best arm without any contextual information
  • Example: Choosing between 5 different TikTok videos to show, without knowing which one is more popular at first
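The explore-exploit tradeoff above can be illustrated with a minimal, self-contained sketch in plain Python (independent of the banditbench API; the arm probabilities and the epsilon-greedy strategy here are illustrative assumptions, not EVOLvE internals):

```python
import random

def pull(arm, rng):
    """Sample a Bernoulli reward: arm 4 pays with p=0.5, the rest with p=0.2."""
    probs = [0.2, 0.2, 0.2, 0.2, 0.5]
    return 1 if rng.random() < probs[arm] else 0

def epsilon_greedy(horizon=1000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    counts = [0] * 5   # pulls per arm
    sums = [0] * 5     # total reward per arm
    for _ in range(horizon):
        if rng.random() < epsilon or 0 in counts:
            arm = rng.randrange(5)  # explore (and force one pull of each arm)
        else:
            # exploit: pick the arm with the best empirical mean so far
            arm = max(range(5), key=lambda a: sums[a] / counts[a])
        counts[arm] += 1
        sums[arm] += pull(arm, rng)
    return counts

counts = epsilon_greedy()
best_arm = max(range(5), key=lambda a: counts[a])  # the most-pulled arm
```

With enough steps, the agent concentrates its pulls on arm 4, the arm with the highest reward probability; an LLM agent in EVOLvE faces the same problem, but receives the interaction history as text.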

Contextual Bandit Scenario

  • Reward distributions depend on a context (e.g., user features)
  • Agent learns to map contexts to optimal actions
  • Example: Recommending movies to users based on features such as age and location (e.g., suggesting "The Dark Knight" to a 25-year-old who enjoys action movies and lives in an urban area)
<p align="center"> <img src="https://github.com/allenanie/EVOLvE/blob/main/assets/bandit_scenario.png?raw=true" alt="Bandit Scenario Example"/> </p>
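The contextual setting can likewise be sketched in a few lines of plain Python (again independent of the banditbench API; the user segments and reward probabilities below are made-up illustrations):

```python
import random

# Hypothetical reward probabilities: rows are contexts (user segments),
# columns are arms (movies to recommend).
REWARD_P = [
    [0.8, 0.2, 0.3],  # segment 0 (e.g., action fans) mostly likes arm 0
    [0.1, 0.7, 0.2],  # segment 1 (e.g., comedy fans) mostly likes arm 1
]

def run(horizon=3000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    counts = [[0] * 3 for _ in range(2)]  # pulls per (context, arm)
    sums = [[0] * 3 for _ in range(2)]    # reward per (context, arm)
    for _ in range(horizon):
        ctx = rng.randrange(2)  # observe a context before acting
        if rng.random() < epsilon or 0 in counts[ctx]:
            arm = rng.randrange(3)  # explore
        else:
            arm = max(range(3), key=lambda a: sums[ctx][a] / counts[ctx][a])
        reward = 1 if rng.random() < REWARD_P[ctx][arm] else 0
        counts[ctx][arm] += 1
        sums[ctx][arm] += reward
    # Learned policy: best arm for each context
    return [max(range(3), key=lambda a: sums[c][a] / max(counts[c][a], 1))
            for c in range(2)]
```

Unlike the multi-armed case, the learned object is a mapping from context to arm rather than a single best arm.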

🎮 Quick Start

Evaluate LLMs for their In-Context Reinforcement Learning Performance

In this example, we compare an LLM agent against a classic UCB agent on a multi-armed bandit task.

from banditbench.tasks.mab import BernoulliBandit, VerbalMultiArmedBandit
from banditbench.agents.llm import LLMAgent
from banditbench.agents.classics import UCBAgent

# this is a 5-armed bandit
# with the probability of getting a reward to be [0.2, 0.2, 0.2, 0.2, 0.5]
core_bandit = BernoulliBandit(5, horizon=100, arm_params=[0.2, 0.2, 0.2, 0.2, 0.5])

# The scenario is "ClothesShopping", agent sees actions as clothing items
verbal_bandit = VerbalMultiArmedBandit(core_bandit, "ClothesShopping")

# we create an LLM agent that uses summary statistics (mean, number of times, etc.)
agent = LLMAgent.build_with_env(verbal_bandit, summary=True, model="gpt-3.5-turbo")

llm_result = agent.in_context_learn(verbal_bandit, n_trajs=5)

# we create a UCB agent, which is a classic agent that uses 
# Upper Confidence Bound to make decisions
classic_agent = UCBAgent(core_bandit)

# we run the classic agent in-context learning on the core bandit for 5 trajectories
classic_result = classic_agent.in_context_learn(core_bandit, n_trajs=5)

classic_result.plot_performance(llm_result, labels=['UCB', 'GPT-3.5 Turbo'])

Running this produces a plot like the following:

<p align="left"> <img src="https://github.com/allenanie/EVOLvE/blob/main/assets/UCBvsLLM.png?raw=true" alt="UCB vs LLM" style="width: 60%;"/> </p>

Getting Task Instruction and Prompts

If you want to obtain the task instruction and decision prompts directly (useful when building your own agent without extending our agent base class), follow the steps below:

For Multi-Armed Bandit:

from banditbench.tasks.mab import BernoulliBandit, VerbalMultiArmedBandit
from banditbench.agents.llm import LLMAgent

# this is a 5-armed bandit
# with the probability of getting a reward to be [0.2, 0.2, 0.2, 0.2, 0.5]
core_bandit = BernoulliBandit(5, horizon=100, arm_params=[0.2, 0.2, 0.2, 0.2, 0.5])

# The scenario is "ClothesShopping", agent sees actions as clothing items
verbal_bandit = VerbalMultiArmedBandit(core_bandit, "ClothesShopping")

# We create a dummy agent to access instruction
agent = LLMAgent.build_with_env(verbal_bandit, summary=True, model="gpt-3.5-turbo")

verbal_prompts = []  # collect the prompts and chosen labels at each step

done = False
while not done:
    # Get verbal prompts for this step
    task_instruction = agent.get_task_instruction()
    action_history = agent.get_action_history()
    decision_query = agent.get_decision_query()

    action_verbal = agent.act()

    verbal_prompts.append({
        'task_instruction': task_instruction,
        'action_history': action_history,
        'decision_query': decision_query,
        'label': action_verbal
    })
    _, reward, done, info = verbal_bandit.step(action_verbal)

    action = info['interaction'].mapped_action

    agent.update(action, reward, info)
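The records collected in `verbal_prompts` above can be serialized for later use, e.g. as a supervised dataset for your own agent. This is a hypothetical follow-up step, not part of the banditbench API; `dump_prompts` is an illustrative helper:

```python
import json

def dump_prompts(verbal_prompts, path="prompts.jsonl"):
    """Write each collected prompt record as one JSON line."""
    with open(path, "w") as f:
        for record in verbal_prompts:
            f.write(json.dumps(record) + "\n")
```

Each line then holds one `{task_instruction, action_history, decision_query, label}` record, ready for downstream training or analysis.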

For Contextual Bandit:

from banditbench.tasks.cb.movielens import MovieLens, MovieLensVerbal
from banditbench.agents.llm import LLMAgent

env = MovieLens('100k-ratings', num_arms=5, horizon=200, rank_k=5, mode='train',
                save_data_dir='./tensorflow_datasets/')
verbal_env = MovieLensVerbal(env)

agent = LLMAgent.build_with_env(verbal_env, model="gpt-3.5-turbo")

state, _ = verbal_env.reset(seed=1)

done = False
while not done:
    # Get verbal prompts for this step
    task_instruction = agent.get_task_instruction()
    action_history = agent.get_action_history()
    decision_query = agent.get_decision_query(state)

    action_verbal = agent.act(state)

    new_state, reward, done, info = verbal_env.step(state, action_verbal)

    action = info['interaction'].mapped_action

    agent.update(state, action, reward, info)
    state = new_state

💰 Evaluation Cost

Each benchmark provides a cost-estimation tool for inference cost. The listed costs are in US dollars and cover all trials and repetitions.

from banditbench import HardCoreBench, HardCorePlusBench, FullBench, CoreBench, MovieBench
bench = HardCoreBench()
cost = bench.calculate_eval_cost([
    'gemini-1.5-pro',
    'gemini-1.5-flash',
    'gpt-4o-2024-11-20',
    "gpt-4o-mini-2024-07-18",
    "o1-2024-12-17",
    "o1-mini-2024-09-12",
    "claude-3-5-sonnet-20241022",
    "claude-3-5-haiku-20241022"
])

You can evaluate an agent by doing:

from banditbench.agents.llm import LLMAgent
from banditbench.agents.guide import UCBGuide

env_to_agent_results = bench.evaluate([
  LLMAgent.build(),  # Raw History Context Layer
  LLMAgent.build(summary=True),  # Summary Context Layer
  LLMAgent.build(summary=True, guide=UCBGuide(env))  # Summary + UCB Guide Context Layer
])

Cost estimation is performed for a single agent with raw history (the longest context). If you evaluate multiple agents, multiply this cost by the number of agents.
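For example, evaluating the three agents above on HardCore with gemini-1.5-flash (per-agent cost taken from the table below) comes out to:

```python
# Cost scales linearly with the number of agents evaluated.
per_agent_cost = 14.91  # gemini-1.5-flash on HardCore, from the cost table
n_agents = 3            # raw history, summary, summary + UCB guide
total = per_agent_cost * n_agents
print(f"${total:.2f}")  # $44.73
```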

| Model                      | Core     | HardCore | HardCore+ | Full      | MovieBench |
|----------------------------|----------|----------|-----------|-----------|------------|
| gemini-1.5-flash           | $31.05   | $14.91   | $39.18    | $83.44    | $31.05     |
| gpt-4o-mini-2024-07-18     | $62.10   | $29.83   | $78.36    | $166.88   | $62.10     |
| claude-3-5-haiku-20241022  | $414.33  | $198.97  | $522.64   | $1113.18  | $414.33    |
| gemini-1.5-pro             | $517.54  | $248.55  | $652.98   | $1390.69  | $517.54    |
| gpt-4o-2024-11-20          | $1035.07 | $497.11  | $1305.96  | $2781.38  | $1035.07   |
| o1-mini-2024-09-12         | $1242.09 | $596.53  | $1567.16  | $3337.66  | $1242.09   |
| claude-3-5-sonnet-20241022 | $1243.00 | $596.91  | $1567.91  | $3339.53  | $1243.00   |
| o1-2024-12-17              | $6210.45 | $2982.64 | $7835.79  | $16688.31 | $6210.45   |

🌍 Environments & 🤖 Agents

Here is a list of agents supported by EVOLvE:

For Multi-Armed Bandit Scenario:

| Agent Name | Code | Interaction History | Algorithm Guide |
|------------|------|---------------------|-----------------|
