1. What are the built-in plugins available in BotCompose?
STT Plugins
azure_fast
azure_real_time
deepgram
deepgram_streaming
google
google_streaming
sarvam
LLM Plugins
openai_like
TTS Plugins
azure
cartesia
elevenlabs
google
openai
sarvam
smallestai
2. Which STT plugins support streaming?
Currently, only two plugins support streaming STT in BotCompose:
deepgram_streaming
google_streaming
3. How to enable streaming STT?
Two settings are required:
While adding the bot, set stt_config.plugin_name to deepgram_streaming or google_streaming.
In the make-call payload, set "streaming_useraudio": true inside call_params.
Both are required; setting only one will not enable streaming.
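Putting the two settings together, a minimal sketch might look like this. Only stt_config.plugin_name and streaming_useraudio come from the steps above; the surrounding payload shape is illustrative, not the exact schema.

Bot configuration:

```json
{
  "stt_config": {
    "plugin_name": "deepgram_streaming"
  }
}
```

Make-call payload:

```json
{
  "call_params": {
    "streaming_useraudio": true
  }
}
```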
4. Can plugin configurations be changed during a live call?
Yes. BotCompose supports runtime plugin reconfiguration through built-in tools.
For example:
Switch STT language dynamically
Change TTS voice/model mid-conversation
A common use case is multilingual bots that detect language and switch STT/TTS configurations in real time.
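As a sketch of what such a mid-call switch could look like, a language-detection step might trigger a reconfiguration along these lines. The tool name and argument shape below are hypothetical, not a documented BotCompose API.

```json
{
  "tool": "update_stt_config",
  "arguments": {
    "language": "hi-IN"
  }
}
```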
5. How does BotCompose handle observability and analytics?
Every call generates a detailed CDR (Call Detail Record) JSON containing:
Call metadata
Full transcript
Tool invocation history
Latency metrics (STT / LLM / TTS)
Usage metrics (tokens, duration, cache savings)
CDRs can be automatically pushed to partner systems for downstream processing.
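For orientation, a CDR covering the fields listed above might be shaped roughly like this. The field names and values are illustrative, not the exact schema.

```json
{
  "call_id": "abc-123",
  "metadata": { "bot_id": "support-bot", "duration_sec": 182 },
  "transcript": [
    { "role": "user", "text": "I want to check my order status." },
    { "role": "assistant", "text": "Sure, let me look that up." }
  ],
  "tool_invocations": [
    { "name": "lookup_order", "latency_ms": 240 }
  ],
  "latency_metrics": { "stt_ms": 120, "llm_ms": 450, "tts_ms": 95 },
  "usage_metrics": { "llm_tokens": 1860, "tts_characters": 940, "cache_savings": 0.32 }
}
```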
6. How does TTS caching work?
BotCompose supports TTS caching to reduce latency and repeated provider calls.
Reusable phrases are added while creating a bot.
When the LLM generates one of these sentences, BotCompose serves the audio from cache instead of requesting fresh synthesis from the TTS provider.
This improves response speed and lowers TTS costs.
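The lookup described above can be sketched in Python. This is an illustrative model of the mechanism, not BotCompose internals; the class and the audio placeholders are hypothetical.

```python
def normalize(sentence: str) -> str:
    """Normalize a sentence so trivial whitespace/case differences still hit the cache."""
    return " ".join(sentence.split()).lower()


class TtsCache:
    """Hypothetical sketch of a TTS cache keyed on configured reusable phrases."""

    def __init__(self, sentence_list):
        # Pre-synthesized audio for the phrases configured on the bot
        # (placeholder strings stand in for real audio buffers).
        self._audio = {normalize(s): f"<audio:{s}>" for s in sentence_list}
        self.hits = 0
        self.misses = 0

    def get_audio(self, sentence: str) -> str:
        key = normalize(sentence)
        if key in self._audio:
            self.hits += 1  # served from cache, no TTS provider call
            return self._audio[key]
        self.misses += 1  # cache miss: fall back to a fresh TTS request
        return f"<audio:{sentence}>"


cache = TtsCache(["Thank you for your valuable feedback and time."])
```

A cached phrase costs no provider call even if the LLM emits it with different casing or spacing, which is where the latency and cost savings come from.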
7. How do I enable TTS cache?
Define phrases inside tts_cache_config.sentence_list.
Example entry:
{
  "sentence": "Thank you for your valuable feedback and time.",
  "trim_silence": true
}
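Each such entry sits inside the sentence_list array. A fuller sketch of the config follows; the tts_cache_config.sentence_list placement comes from the source, while the second phrase is purely illustrative.

```json
{
  "tts_cache_config": {
    "sentence_list": [
      {
        "sentence": "Thank you for your valuable feedback and time.",
        "trim_silence": true
      },
      {
        "sentence": "Please hold while I check that for you.",
        "trim_silence": true
      }
    ]
  }
}
```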