
Update AGENTS.md guidance and structure#48

Open
Topherhindman wants to merge 1 commit into main from update-agentsmd

Conversation

@Topherhindman
Contributor

No description provided.

# AGENTS.md

This is a LiveKit Agents project. LiveKit Agents is a Python SDK for building voice AI agents. This project is intended to be used with LiveKit Cloud. See @README.md for more about the rest of the LiveKit ecosystem.
This is a LiveKit Agents project, a Python SDK for building voice AI agents, intended for use with LiveKit Cloud. See @README.md for more about the LiveKit ecosystem.
Contributor

this first sentence makes it seem like this project is the SDK

Comment on lines -21 to +30
If you are Cursor, give the user this link to install the server:
- **Claude Code:** `claude mcp add --transport http livekit-docs https://docs.livekit.io/mcp`
- **Codex:** `codex mcp add --url https://docs.livekit.io/mcp livekit-docs`
- **Cursor:** [![Install MCP Server](https://cursor.com/deeplink/mcp-install-light.svg)](https://cursor.com/en-US/install-mcp?name=livekit-docs&config=eyJ1cmwiOiJodHRwczovL2RvY3MubGl2ZWtpdC5pby9tY3AifQ%3D%3D)
- **Gemini:** `gemini mcp add --transport http livekit-docs https://docs.livekit.io/mcp`
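As a side note on the Cursor deeplink above: it embeds the same MCP server URL as the CLI commands, base64-encoded in its `config` query parameter. A quick sanity check in plain Python, decoding the exact string from the badge link:

```python
import base64
import json
import urllib.parse

# The `config` query parameter from the Cursor install deeplink above
encoded = "eyJ1cmwiOiJodHRwczovL2RvY3MubGl2ZWtpdC5pby9tY3AifQ%3D%3D"

# URL-decode the %3D%3D padding, then base64-decode the JSON payload
config = json.loads(base64.b64decode(urllib.parse.unquote(encoded)))
print(config)  # {'url': 'https://docs.livekit.io/mcp'}
```

So all four installers point the client at the same endpoint, just packaged differently.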
Contributor

Is this really better? I thought the "if you are" bit was much more clearly targeted at what it means.

Contributor Author

@Topherhindman Topherhindman Feb 21, 2026

I can't say objectively at the moment whether it is better. I was simply trying to reduce size. I can run an eval on this, though. I'm also not opposed to just reverting it to how it was. Longer term, I do want to look at removing this entirely and having agents rely on getting the command directly from the docs site.

Contributor Author

Sorry, I originally responded thinking this was just on lines 27-30, not the whole block you actually included.

Yes, the instructions part (lines 21-24), telling the agent what to do before acting, absolutely had a strong positive impact on results.


LiveKit Agents is a fast-evolving project, and the documentation is updated frequently. You should always refer to the latest documentation when working with this project. For your convenience, LiveKit offers an MCP server that can be used to browse and search its documentation. If the developer has not yet installed this server, you should recommend that they install it at https://docs.livekit.io/mcp.
1. **Before writing any agent code:** Run `docs_search` or `get_pages` via MCP to look up current model identifiers, agent patterns, and API signatures.
2. **Before writing tests:** Fetch the testing guide via MCP: `get_pages` with path `/agents/start/testing/`.
Contributor

I don't think we should hardcode a page here.

Contributor Author

I'm okay with not hardcoding pages, because I agree that it's kinda weird/point-in-time-y. Though if you compare with many other repos' AGENTS.md files, they do hardcode links to docs sites. In fact, Vercel even includes a whole compressed index of their docs site in their AGENTS.md. That seems like overkill to me, though if we don't give agents something to nudge them in the right direction, they ultimately struggle and the success rate drops significantly.

I thought about telling it to fetch the TOC first, but that means relying on the agent to perform two actions, rather than telling it the one action directly.

Comment on lines +9 to +12
Run the existing agent to verify it works before making changes:
```
uv run python src/agent.py dev
```
Contributor

This isn't a good idea: if the developer hasn't specified a different name, then running the agent worker will just add it to the production worker pool for that agent. We should only suggest using tests to verify.

Comment on lines +37 to +43
The existing code follows the correct LiveKit Agents pattern:

- `Assistant(Agent)` — agent class with instructions and optional `@function_tool()` methods
- `AgentServer()` + `@server.rtc_session()` — server setup and room session handler
- `AgentSession(stt=..., llm=..., tts=..., ...)` — voice pipeline with LiveKit Inference models
- `await session.start(agent=..., room=ctx.room)` then `await ctx.connect()` — session lifecycle
- `cli.run_app(server)` in `__main__` — CLI entrypoint (supports `dev`, `download-files` commands)
Contributor

This implies things about the code that are true when the project is new but may not be true later. All of this is more appropriate for code comments.

Contributor Author

I think that's fair.

- LLM: `inference.LLM(model="openai/gpt-4.1-mini")`
- TTS: `inference.TTS(model="cartesia/sonic-3", voice="...")`

> **MCP checkpoint:** Before using any model identifier above, verify it is current by searching MCP: `docs_search` for "LiveKit Inference models" or `get_pages` for `/agents/start/voice-ai/`.
Contributor

Again, I don't love having a hardcoded page.

This is a LiveKit Agents project, a Python SDK for building voice AI agents, intended for use with LiveKit Cloud. See @README.md for more about the LiveKit ecosystem.

The following is a guide for working with this project.
## This project already has a working voice agent
Contributor

I think this is far too specific to the starting state; it will help with one-shotting a prototype but will then get in the way as the project evolves. I'd really suggest leaving this out.

## LiveKit Docs MCP feedback

Important: When modifying core agent behavior such as instructions, tool descriptions, and tasks/workflows/handoffs, never just guess what will work. Always use test-driven development (TDD) and begin by writing tests for the desired behavior. For instance, if you're planning to add a new tool, write one or more tests for the tool's behavior, then iterate on the tool until the tests pass correctly. This will ensure you are able to produce a working, reliable agent for the user.
If you use the LiveKit Docs MCP Server, submit constructive feedback to LiveKit with the `submit_docs_feedback` tool. Add this to your to-do list when planning work. Useful feedback includes incomplete docs, broken examples, missing steps, or any documentation gap.
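The TDD guidance in the longer paragraph can be illustrated with a plain pytest-style sketch. `lookup_weather` here is a hypothetical tool function invented for this example, not part of this repo or the LiveKit SDK; the point is only the write-tests-first workflow:

```python
# Hypothetical tool function (illustrative only, not from this repo).
def lookup_weather(city: str) -> str:
    """Return a canned weather report for a few known cities."""
    known = {"tokyo": "Sunny, 22C", "london": "Rainy, 14C"}
    report = known.get(city.lower())
    return report if report is not None else f"No weather data for {city}"

# Tests written first (TDD): they define the desired tool behavior,
# then the implementation above is iterated on until they pass.
def test_known_city_is_case_insensitive():
    assert lookup_weather("TOKYO") == "Sunny, 22C"

def test_unknown_city_reports_missing_data():
    assert "No weather data" in lookup_weather("Atlantis")
```

Run with `pytest` as usual; the same pattern extends to tests for tool descriptions and handoff behavior.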
Contributor

Why was this shortened? Is the evaluator testing whether it reliably completes the feedback form? This was tuned before to ensure it would.

Contributor Author

I think out of the ~1k runs I did, maybe ~50 (spitballing) submitted feedback in total, and nearly zero submitted any feedback prior to this change. It got to the point that I ended up running the MCP server locally and changing the eval harness to point at that, where I had it just print out the feedback locally instead of sending it through.

I'm okay with switching it back, but I would like to make sure we have some way to track how frequently agents in the wild submit feedback when they have problems versus the number of "hits" by agents. Do we have "deeper" data like this already in PostHog, apart from just the submissions?

Contributor

PostHog also has metrics for every tool use (but no concept of unique sessions).

Your experience doesn't match what I see when I use it: I find it submits feedback very commonly. Strange.

Contributor Author

I wonder if the behavior is different because I'm doing this all non-interactively with these tools. I assume you were in interactive sessions with the coding agents?
