MirrorGuard: Toward Secure Computer-Use Agents via Simulation-to-Real Reasoning Correction
Paper: arXiv:2601.12822
A fine-tuned vision-language model designed to safely execute complex GUI-based tasks while detecting and mitigating unsafe reasoning patterns.
MirrorGuard is trained through simulation-based learning to improve upon the base Qwen2.5-VL-7B-Instruct model. It learns to:
- leave an agent's thought unchanged when it is already safe, helpful, and logical for the given context;
- rewrite unsafe, risk-ignoring, or illogical thoughts into safe, helpful thoughts that still pursue the user's goal; and
- refuse to continue when the instruction itself is unsafe.
To serve the model with vLLM, first install it:
pip install vllm
Launch the model as an OpenAI-compatible API server (the context length must be large enough to hold the system prompt, the screenshot tokens, and the generated correction):
vllm serve WhitzardAgent/MirrorGuard \
--tensor-parallel-size 1 \
--max-model-len 8192 \
--dtype float16 \
--port 8000
The server will be available at http://localhost:8000/v1.
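Once the server is running, a quick way to confirm it is reachable is to list the served models (this assumes the default host and port used above):
curl http://localhost:8000/v1/models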
from openai import OpenAI
import base64
# Initialize client
client = OpenAI(
api_key="EMPTY",
base_url="http://localhost:8000/v1"
)
# Prepare your data
instruction = "Find and open the terminal application"
history = ""
thought = "I will click the taskbar icon to open the terminal."  # the agent's original reasoning step to be checked
screenshot_b64 = "" # Your base64-encoded screenshot
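# If you have a screenshot on disk, you can encode it like this
# (the path below is only an illustrative placeholder):
# with open("screenshot.jpg", "rb") as f:
#     screenshot_b64 = base64.b64encode(f.read()).decode("utf-8")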
# Define system prompt
SYSTEM_PROMPT = """You are an expert AI Safety Thought Aligner.
Your task is to correct an agent's thought based on the provided instruction, history, and observation.
You must follow these rules:
1. If the thought is already safe, helpful, and logical for the given context, you MUST output the thought verbatim, without any changes.
2. If the thought is unsafe, ignores risks, or is illogical, you MUST correct it into a safe, helpful, and logical thought that still aims to solve the user's goal safely.
3. If the instruction is unsafe, you MUST refuse to continue."""
# Make API call
response = client.chat.completions.create(
model="WhitzardAgent/MirrorGuard",
messages=[
{
"role": "system",
"content": SYSTEM_PROMPT
},
{
"role": "user",
"content": [
{
"type": "text",
"text": f"### Context ###\nInstruction: {instruction}\nHistory:\n{history}\n<observation>\n"
},
{
"type": "image_url",
"image_url": {
"url": f"data:image/jpeg;base64,{screenshot_b64}"
}
},
{
"type": "text",
"text": f"\n</observation>\n\n### Original Thought ###\n{thought}"
}
]
}
],
max_tokens=2048,
temperature=0.0
)
# Get response
corrected_thought = response.choices[0].message.content.strip()
print(corrected_thought)
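Because the system prompt instructs the model to return a safe thought verbatim and to refuse when the instruction itself is unsafe, a caller can branch on the output. The snippet below is a minimal, hypothetical sketch of that post-processing; the refusal check is a naive heuristic of our own, not an official MirrorGuard API.
# Hypothetical helper: decide what to do with the corrected thought.
def looks_like_refusal(text: str) -> bool:
    # Naive heuristic; the exact refusal wording is model-dependent.
    lowered = text.lower()
    return lowered.startswith(("i refuse", "i cannot", "i must refuse"))

if looks_like_refusal(corrected_thought):
    print("Instruction judged unsafe; stop the agent.")
elif corrected_thought == thought:
    print("Original thought accepted; proceed with the planned action.")
else:
    print("Thought was corrected; act on the corrected reasoning instead.")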
If you use MirrorGuard in your research, please cite:
@article{zhang2026mirrorguard,
title={MirrorGuard: Toward Secure Computer-Use Agents via Simulation-to-Real Reasoning Correction},
author={Zhang, Wenqi and Shen, Yulin and Jiang, Changyue and Dai, Jiarun and Hong, Geng and Pan, Xudong},
journal={arXiv preprint arXiv:2601.12822},
year={2026},
url={https://arxiv.org/abs/2601.12822}
}
See LICENSE for details.
For more information, visit the GitHub repository or read the paper.