# ScreenAgent: A Computer Control Agent Driven by Visual Language Large Model (IJCAI-24)
[View the ScreenAgent Paper (arXiv:2402.07945)](https://arxiv.org/abs/2402.07945)
## News
- (2024-4-17) The ScreenAgent paper has been accepted for presentation at IJCAI 2024!
- (2024-5-19) ScreenAgent Web Client released, a simpler way to experience controlling a desktop with a large model.
We have built the ScreenAgent project, creating an environment in which visual language model (VLM) agents interact with a real computer screen. In this environment, the agent observes screenshots and manipulates the GUI by outputting mouse and keyboard operations. We also designed an automatic control process with planning, action, and reflection stages, guiding the agent to continuously interact with the environment and complete multi-step tasks. In addition, we built the ScreenAgent dataset, which collects screenshots and action sequences from completing a variety of daily computer tasks.
<div align="center"> <img src="assets/Conception.png" alt="Motivation" width="50%"> <p><i>ScreenAgent Design Motivation</i></p> </div>To guide the VLM Agent to interact continuously with the computer screen, we have built a process that includes "planning-execution-reflection". In the planning phase, the Agent is asked to break down the user task into subtasks. In the execution phase, the Agent will observe the screenshot and give specific mouse and keyboard actions to execute the subtasks. The controller will execute these actions and feed back the execution results to the Agent. In the reflection phase, the Agent will observe the execution results, judge the current state, and choose to continue execution, retry, or adjust the plan. This process will continue until the task is completed.
<div align="center"> <img src="assets/figure2.png" alt="Running process" width="100%"> <p><i>Running Process</i></p> </div>We referred to the VNC remote desktop connection protocol to design the action space of the Agent, which are all the most basic mouse and keyboard operations. Most of the mouse click operations require the Agent to give the exact screen coordinate position. Compared with calling specific APIs to complete tasks, this method is more universal and can be applied to various desktop operating systems and applications.
<div align="center"> <img src="assets/ActionSpace.png" alt="Action Space" width="50%"> <p><i>Supported Action Types and Action Attributes</i></p> </div>Teaching the Agent to use a computer is not a simple matter. It requires the Agent to have multiple comprehensive abilities such as task planning, image understanding, visual positioning, and tool use. For this reason, we manually annotated the ScreenAgent dataset. This dataset covers a variety of daily computer tasks, including file operations, web browsing, gaming entertainment and other scenarios. We build a session according to the above "planning-execution-reflection" process.
<div align="center"> <img src="assets/Dataset.png" alt="Dataset Task Type Distribution" width="50%"> <p><i>Dataset Task Type Distribution</i></p> </div>The project mainly includes the following parts:
```
ScreenAgent
├── client            # Controller client code
│   ├── prompt        # Prompt templates
│   ├── config.yml    # Controller client configuration file template
│   └── tasks.txt     # Task list
├── data              # ScreenAgent dataset and other visual grounding related datasets
├── model_workers     # VLM inferencers
└── train             # Model training code
```
## Preparation
### Step 1, Prepare the desktop to be controlled
First, you need to prepare the desktop operating system to be controlled, with a VNC Server installed, such as TightVNC. Alternatively, you can use a Docker container with a GUI. We have prepared the container niuniushan/screenagent-env, which you can pull and start with the following command:
```bash
docker run -d --name ScreenAgent -e RESOLUTION=1024x768 -p 5900:5900 -p 8001:8001 -e VNC_PASSWORD=<VNC_PASSWORD> -e CLIPBOARD_SERVER_SECRET_TOKEN=<CLIPBOARD_SERVER_SECRET_TOKEN> -v /dev/shm:/dev/shm niuniushan/screenagent-env:latest
```
Please replace `<VNC_PASSWORD>` with your new VNC password and `<CLIPBOARD_SERVER_SECRET_TOKEN>` with your clipboard service password. Keyboard input of long strings of text or Unicode characters relies on the clipboard; if the clipboard service is not enabled, you can only input ASCII strings by pressing keys in sequence and cannot input Chinese or other Unicode characters. This image already contains a clipboard service, which listens on port 8001 by default; you need to set a password to protect it. niuniushan/screenagent-env is built on top of fcwu/docker-ubuntu-vnc-desktop. You can find more information about that image here.
If you want to use an existing desktop environment instead, such as Windows or a Linux desktop, you need to run a VNC Server on it and note its IP address and port number. If you also want to enable the clipboard service, perform the following steps in that desktop environment:
```bash
# Install dependencies
pip install fastapi pydantic uvicorn pyperclip
# Set password in environment variable
export CLIPBOARD_SERVER_SECRET_TOKEN=<CLIPBOARD_SERVER_SECRET_TOKEN>
# Start clipboard service
python client/clipboard_server.py
```
`client/clipboard_server.py` listens on port 8001 and receives the text instructions that the controller uses for keyboard input of long strings.
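Conceptually, the service exposes a single `/clipboard` endpoint that checks the token and copies the received text to the system clipboard. A minimal sketch of that behavior (a simplified illustration, not the actual `client/clipboard_server.py` source) looks roughly like this:

```python
# Simplified sketch of the clipboard service's behavior; not the project's actual code.
import os

import pyperclip
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
SECRET_TOKEN = os.environ["CLIPBOARD_SERVER_SECRET_TOKEN"]

class ClipboardRequest(BaseModel):
    text: str
    token: str

@app.post("/clipboard")
def set_clipboard(req: ClipboardRequest):
    if req.token != SECRET_TOKEN:
        return {"success": False, "message": "Invalid token"}
    pyperclip.copy(req.text)  # place the text on the system clipboard
    return {"success": True, "message": "Text copied to clipboard"}

# Run with: uvicorn clipboard_sketch:app --host 0.0.0.0 --port 8001
```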
With the service running, you can test whether it works correctly, for example:
```bash
curl --location 'http://localhost:8001/clipboard' \
--header 'Content-Type: application/json' \
--data '{
    "text": "Hello world",
    "token": "<CLIPBOARD_SERVER_SECRET_TOKEN>"
}'
```
If it works correctly, you will receive the response `{"success": true, "message": "Text copied to clipboard"}`.
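If you prefer Python over curl, the same check can be scripted with `requests` (assuming the service runs on `localhost:8001` as configured above):

```python
# Python equivalent of the curl test above.
import requests

resp = requests.post(
    "http://localhost:8001/clipboard",
    json={"text": "Hello world", "token": "<CLIPBOARD_SERVER_SECRET_TOKEN>"},
    timeout=5,
)
print(resp.json())  # expect: {'success': True, 'message': 'Text copied to clipboard'}
```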
If you encounter the error `Pyperclip could not find a copy/paste mechanism for your system.`, add an environment variable specifying the X server location before running `python client/clipboard_server.py`:
```bash
export DISPLAY=:0.0
```
Please adjust according to your system environment. If you still encounter errors, please refer to here.
Please fill in the above information under the `remote_vnc_server` item of the configuration file `client/config.yml`.
### Step 2, Prepare the controller code running environment
You need to run the controller code, which has three responsibilities. First, the controller connects to the VNC Server, collects screenshots, and sends mouse and keyboard commands. Second, it maintains an internal state machine that implements the automatic planning, action, and reflection control process, guiding the agent to continuously interact with the environment. Finally, it constructs complete prompts from the prompt templates, sends them to the large model inference API, and parses the control commands from the model's reply. The controller is a PyQt5-based program; you need to install some dependencies:
```bash
pip install -r client/requirements.txt
```
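Functionally, the state machine the controller maintains can be pictured as the loop below. This is only an illustrative sketch of the planning-action-reflection process described above; the names `agent`, `env`, and their methods are placeholders, not the client's actual API.

```python
# Illustrative sketch of the controller's planning-execution-reflection loop.
# `agent` stands in for the VLM inference API and `env` for the VNC-controlled desktop;
# these names and methods are placeholders, not the real client code.
from collections import deque

def run_task(user_task: str, agent, env, max_steps: int = 50):
    # Planning phase: break the user task down into subtasks.
    subtasks = deque(agent.plan(user_task, env.screenshot()))
    steps = 0
    while subtasks and steps < max_steps:
        subtask = subtasks[0]
        # Execution phase: the agent proposes concrete mouse/keyboard actions.
        actions = agent.act(subtask, env.screenshot())
        env.execute(actions)  # the controller sends these actions over VNC
        steps += 1
        # Reflection phase: the agent judges the result from a fresh screenshot.
        verdict = agent.reflect(subtask, env.screenshot())
        if verdict == "subtask_done":
            subtasks.popleft()  # continue with the next subtask
        elif verdict == "adjust_plan":
            subtasks = deque(agent.plan(user_task, env.screenshot()))
        # otherwise ("retry"): keep the current subtask and try again
```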
### Step 3, Prepare the large model inferencer or API
Please choose a VLM as the Agent. We provide inferencers for four models in `model_workers`: GPT-4V, LLaVA-1.5, CogAgent, and ScreenAgent. You can also implement an inferencer yourself or use a third-party API; refer to the code in `client/interface_api` to implement a new API call interface.
Please refer to the `llm_api` section of `client/config.yml` to prepare the configuration file, and keep only one model under `llm_api`:
```yaml
llm_api:
  # Select ONE of the following models to use:
  GPT4V:
    model_name: "gpt-4-vision-preview"
    openai_api_key: "<YOUR-OPENAI-API-KEY>"
    target_url: "https://api.openai.com/v1/chat/completions"

  LLaVA:
    model_name: "LLaVA-1.5"
    target_url: "http://localhost:40000/worker_generate"

  CogAgent:
    target_url: "http://localhost:40000/worker_generate"

  ScreenAgent:
    target_url: "http://localhost:40000/worker_generate"

  # Common settings for all models
  temperature: 1.0
  top_p: 0.9
  max_tokens: 500
```
#### If you use GPT-4V as the Agent
Please keep the `GPT4V` entry under `llm_api` in `client/config.yml` and fill in your OpenAI API key; always keep an eye on your account balance.
#### If you use LLaVA-1.5 as the Agent
Please refer to the LLaVA project to download and prepare the LLaVA-1.5 model, for example:
```bash
git clone https://github.com/haotian-liu/LLaVA.git
cd LLaVA
conda create -n llava python=3.10 -y
conda activate llava
pip install --upgrade pip  # enable PEP 660 support
pip install -e .
```
`model_workers/llava_model_worker.py` provides a non-streaming inferencer for LLaVA-1.5. Copy it into the `llava/serve/` directory and start it with the following command:
```bash
cd LLaVA
python -m llava.serve.llava_model_worker --host 0.0.0.0 --port 40000 --worker http://localhost:40000 --model-path liuhaotian/llava-v1.5-13b --no-register
```
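Once a worker is running, the controller POSTs requests to the `target_url` configured earlier (`http://localhost:40000/worker_generate`). If you want to smoke-test the endpoint by hand, a sketch along these lines can help; the payload fields below (`prompt`, `images`, `temperature`, `top_p`, `max_new_tokens`) are assumptions based on the LLaVA-style worker interface, so check the worker code for the exact schema.

```python
# Hedged smoke test for a running model worker; the payload fields are assumptions
# following the LLaVA-style worker convention, not a documented ScreenAgent schema.
import base64

import requests

with open("screenshot.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

payload = {
    "prompt": "Describe the screenshot.",
    "images": [image_b64],
    "temperature": 1.0,
    "top_p": 0.9,
    "max_new_tokens": 500,
}
resp = requests.post("http://localhost:40000/worker_generate", json=payload, timeout=120)
print(resp.json())
```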
#### If you use CogAgent as the Agent
Please refer to the CogVLM project to download and prepare the CogAgent model. Download the sat version of the CogAgent weights, `cogagent-chat.zip`, from here, unzip it, and place it in the `train/saved_models/cogagent-chat` directory.
`train/cogagent_model_worker.py` provides a non-streaming inferencer for CogAgent. You can start it similarly to the LLaVA worker above.
