Debug plugin for LLM. Adds a model which echoes its input without hitting an API or executing a local LLM.
Install this plugin in the same environment as LLM.
llm install llm-echo
The plugin adds an echo model which simply echoes the prompt details back to you as JSON.
llm -m echo prompt -s 'system prompt'
Output:
{
  "prompt": "prompt",
  "system": "system prompt",
  "attachments": [],
  "stream": true,
  "previous": []
}
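You can exercise the same model from Python. Here is a minimal sketch using the LLM Python API (llm.get_model and model.prompt); the response text is the same JSON description shown above:

import llm

# Load the echo debug model and send it a prompt
model = llm.get_model("echo")
response = model.prompt("prompt", system="system prompt")
# The response text is a JSON description of the prompt that was sent
print(response.text())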
You can also set the model's single example option, example_bool, like this:
llm -m echo prompt -o example_bool 1
Output:
{
  "prompt": "prompt",
  "system": "",
  "attachments": [],
  "stream": true,
  "previous": [],
  "options": {
    "example_bool": true
  }
}
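In the Python API, model options are passed as keyword arguments to prompt(). A sketch of the equivalent call:

import llm

model = llm.get_model("echo")
# example_bool is the option declared by the plugin; it is echoed back under "options"
response = model.prompt("prompt", example_bool=True)
print(response.text())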
You can use llm-echo
to test tool calling without needing to run prompts through an actual LLM. In your prompt, send something like this:
{
  "prompt": "This will be treated as the prompt",
  "tool_calls": [
    {
      "name": "example",
      "arguments": {
        "input": "Hello, world!"
      }
    }
  ]
}
You can assemble a test that looks like this:
import json

import llm


def example(input: str) -> str:
    return f"Example output for {input}"


model = llm.get_model("echo")
# The echo model replays the tool_calls from the JSON prompt; model.chain()
# then executes the matching tool and sends the result back in a follow-up response
chain_response = model.chain(
    json.dumps(
        {
            "tool_calls": [
                {
                    "name": "example",
                    "arguments": {"input": "test"},
                }
            ],
            "prompt": "prompt",
        }
    ),
    system="system",
    tools=[example],
)
responses = list(chain_response.responses())
tool_calls = responses[0].tool_calls()
assert tool_calls == [
    llm.ToolCall(name="example", arguments={"input": "test"}, tool_call_id=None)
]
assert responses[1].prompt.tool_results == [
    llm.models.ToolResult(
        name="example", output="Example output for test", tool_call_id=None
    )
]
Or you can read the JSON from the last response in the chain:
response_info = json.loads(responses[-1].text())
And run assertions against the "tool_results"
key, which should look something like this:
{
  "prompt": "",
  "system": "",
  "...": "...",
  "tool_results": [
    {
      "name": "example",
      "output": "Example output for test",
      "tool_call_id": null
    }
  ]
}
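As a sketch, reusing the response_info parsed above and assuming the tool results match the shape shown, an assertion against that key could look like this:

tool_results = response_info["tool_results"]
assert tool_results == [
    {
        "name": "example",
        "output": "Example output for test",
        "tool_call_id": None,
    }
]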
Take a look at the test suite for llm-tools-simpleeval for an example of how to write tests against tools.
Sometimes it can be useful to output an exact string, for example if you are testing the --extract
option in LLM.
If your prompt is JSON with a "raw" key, that string is the only thing that will be returned. For example:
{
  "raw": "This is the raw response"
}
This will return:
This is the raw response
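If you want to assert on that behaviour from Python rather than the CLI, a minimal sketch looks like this:

import json
import llm

model = llm.get_model("echo")
# A prompt containing a "raw" key is returned verbatim instead of the usual JSON
response = model.prompt(json.dumps({"raw": "This is the raw response"}))
assert response.text() == "This is the raw response"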
To set up this plugin locally, first check out the code. Then create a new virtual environment:
cd llm-echo
python -m venv venv
source venv/bin/activate
Now install the dependencies and test dependencies:
python -m pip install -e '.[test]'
To run the tests:
python -m pytest