# Large Language Models as Optimizers

This repository contains the official code for the paper *Large Language Models as Optimizers* (arXiv: [2309.03409](https://arxiv.org/abs/2309.03409)).

Chengrun Yang\*, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen\* [\* Equal Contribution]

<p align="center"> <img src="img/workflow.png" alt="workflow" width="48%"> <img src="img/gpt_meta_prompt.png" alt="meta-prompt" width="40%"> </p>
## Dependency requirements
The code has been verified to work under Python 3.10.13 with the following dependencies:
- absl-py (2.0.0)
- google.generativeai (0.1.0)
- immutabledict (3.0.0)
- openai (0.27.2)
## Usage

### Prompt optimization

Use `opro/optimization/optimize_instructions.py`, following the steps at the top of the file.

A quickstart:

    python optimize_instructions.py --optimizer="gpt-3.5-turbo" --scorer="text-bison" --instruction_pos="Q_begin" --dataset="gsm8k" --task="train" --palm_api_key="<your_palm_api_key>" --openai_api_key="<your_openai_api_key>"
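At a high level, each optimization step builds a meta-prompt containing the best instructions found so far (with their training accuracies), asks the optimizer model for new candidate instructions, and scores the candidates with the scorer model. A minimal sketch of one such step, where `call_optimizer_llm` and `score_instruction` are hypothetical stand-ins for the actual LLM calls made by the script:

```python
def build_meta_prompt(scored_instructions, num_shown=5):
    """Embed the top-scoring instructions so far into a meta-prompt."""
    top = sorted(scored_instructions, key=lambda pair: pair[1])[-num_shown:]
    blocks = [f"text: {ins}\nscore: {score}" for ins, score in top]
    return (
        "Below are some instructions with their training accuracies.\n\n"
        + "\n\n".join(blocks)
        + "\n\nWrite a new instruction that achieves a higher accuracy."
    )


def opro_step(scored_instructions, call_optimizer_llm, score_instruction,
              num_candidates=8):
    """One step: propose candidates from the meta-prompt, score, and merge."""
    meta_prompt = build_meta_prompt(scored_instructions)
    candidates = [call_optimizer_llm(meta_prompt) for _ in range(num_candidates)]
    scored_instructions += [(c, score_instruction(c)) for c in candidates]
    return scored_instructions
```

The real script additionally deduplicates candidates, caps the meta-prompt length, and runs this step for many rounds; see the file for details.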
### Prompt evaluation

Use `opro/evaluation/evaluate_instructions.py`, following the steps at the top of the file.

A quickstart:

    python evaluate_instructions.py --scorer="text-bison" --dataset="gsm8k" --task="test" --instruction_pos="Q_begin" --evaluate_training_fold=false --evaluate_test_fold=true --palm_api_key="<your_palm_api_key>"
### Linear regression

Use `opro/optimization/optimize_linear_regression.py`, following the steps at the top of the file.
### Traveling salesman problem

Use `opro/optimization/optimize_tsp.py`, following the steps at the top of the file.
## Supported models

The code in this repository currently supports `text-bison` and GPT models. Alternatively, you may serve your own model and plug it in, following the pattern of the existing prompting APIs in `opro/prompt_utils.py`.
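As a rough illustration of what plugging in a self-served model can look like, here is a hedged sketch of a wrapper that POSTs a prompt to a local HTTP endpoint. The endpoint URL, the JSON field names, and both function names are assumptions, not part of this repository; mirror the signatures of the helpers in `opro/prompt_utils.py` when integrating.

```python
import json
import urllib.request


def build_generation_request(prompt, temperature, max_decode_steps, url):
    """Package a prompt as a JSON POST request for a self-hosted server."""
    payload = json.dumps({
        "prompt": prompt,
        "temperature": temperature,
        "max_tokens": max_decode_steps,
    }).encode("utf-8")
    return urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )


def call_my_served_model(prompt, temperature=0.8, max_decode_steps=1024,
                         url="http://localhost:8000/generate"):
    """Send the prompt and return the decoded text in a one-element list,
    matching the list-of-strings shape of the existing prompting helpers."""
    request = build_generation_request(prompt, temperature, max_decode_steps, url)
    with urllib.request.urlopen(request) as response:
        result = json.loads(response.read().decode("utf-8"))
    return [result["text"]]
```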
## Precaution on API costs

Calling the PaLM or GPT APIs for prompt optimization and evaluation may incur unexpectedly large costs. Please carefully estimate the cost and/or start with lighter use (e.g., evaluate on a smaller portion of the benchmark dataset, or run optimization for fewer steps) before formal experiments, or prompt self-served models instead.
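A back-of-the-envelope call count is easy to compute before launching a full run. Every number below (steps, candidates per step, evaluation set size) is an illustrative assumption, not a repository default; plug in your own settings and your provider's per-token pricing.

```python
def estimate_call_counts(num_steps, candidates_per_step, num_eval_examples):
    """Return (optimizer LLM calls, scorer LLM calls) for one run, assuming
    every candidate instruction is scored on every evaluation example."""
    optimizer_calls = num_steps * candidates_per_step
    scorer_calls = optimizer_calls * num_eval_examples
    return optimizer_calls, scorer_calls


# Example: 200 steps x 8 candidates, each scored on 260 training examples.
opt_calls, score_calls = estimate_call_counts(200, 8, 260)
print(opt_calls, score_calls)  # -> 1600 416000
```

The scorer calls dominate, which is why shrinking the evaluation subset is the most effective lever for a cheap trial run.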
## Citation

If you have used our code in your research, please cite our paper:

    @article{yang2023large,
      title={Large language models as optimizers},
      author={Yang, Chengrun and Wang, Xuezhi and Lu, Yifeng and Liu, Hanxiao and Le, Quoc V and Zhou, Denny and Chen, Xinyun},
      journal={arXiv preprint arXiv:2309.03409},
      year={2023}
    }
Disclaimer: this is not an officially supported Google product.