Vocode is an open source library that makes it easy to build voice-based LLM apps. Using Vocode, you can build real-time streaming conversations with LLMs and deploy them to phone calls, Zoom meetings, and more. You can also build personal assistants or apps like voice-based chess. Vocode provides easy abstractions and integrations so that everything you need is in a single library.
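Each piece of the pipeline (transcriber, agent, synthesizer) is described by a config object paired with an implementation class, so providers can be swapped without restructuring the app. As a rough sketch, assuming the ElevenLabs synthesizer integration and these class/module names (which may differ between Vocode versions), replacing the Azure synthesizer in the quickstart below would look roughly like:

# Hypothetical swap: use ElevenLabs instead of Azure for text-to-speech.
# Class and module names here are assumptions and may vary by Vocode version.
from vocode.streaming.models.synthesizer import ElevenLabsSynthesizerConfig
from vocode.streaming.synthesizer.eleven_labs_synthesizer import ElevenLabsSynthesizer

synthesizer = ElevenLabsSynthesizer(
    ElevenLabsSynthesizerConfig.from_output_device(
        speaker_output,  # the same speaker output used in the quickstart below
        api_key="ENTER_YOUR_ELEVEN_LABS_API_KEY_HERE",
    )
)
# ...then pass synthesizer=synthesizer to StreamingConversation as before.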
We're actively looking for community maintainers, so please reach out if interested!
We're an open source project and are extremely open to contributors adding new features, integrations, and documentation! Please don't hesitate to reach out and get started building with us.
We'd love to talk to you on Discord about new ideas and contributing!
🚀 Quickstart
pip install vocode
import asyncio
import signal

from pydantic_settings import BaseSettings, SettingsConfigDict

from vocode.helpers import create_streaming_microphone_input_and_speaker_output
from vocode.logging import configure_pretty_logging
from vocode.streaming.agent.chat_gpt_agent import ChatGPTAgent
from vocode.streaming.models.agent import ChatGPTAgentConfig
from vocode.streaming.models.message import BaseMessage
from vocode.streaming.models.synthesizer import AzureSynthesizerConfig
from vocode.streaming.models.transcriber import (
    DeepgramTranscriberConfig,
    PunctuationEndpointingConfig,
)
from vocode.streaming.streaming_conversation import StreamingConversation
from vocode.streaming.synthesizer.azure_synthesizer import AzureSynthesizer
from vocode.streaming.transcriber.deepgram_transcriber import DeepgramTranscriber

configure_pretty_logging()


class Settings(BaseSettings):
    """Settings for the streaming conversation quickstart.

    These parameters can be configured with environment variables.
    """

    openai_api_key: str = "ENTER_YOUR_OPENAI_API_KEY_HERE"
    azure_speech_key: str = "ENTER_YOUR_AZURE_KEY_HERE"
    deepgram_api_key: str = "ENTER_YOUR_DEEPGRAM_API_KEY_HERE"

    azure_speech_region: str = "eastus"

    # This means a .env file can be used to overload these settings
    # ex: "OPENAI_API_KEY=my_key" will set openai_api_key over the default above
    model_config = SettingsConfigDict(
        env_file=".env",
        env_file_encoding="utf-8",
        extra="ignore",
    )


settings = Settings()


async def main():
    # Open the microphone (input) and speaker (output) devices.
    (
        microphone_input,
        speaker_output,
    ) = create_streaming_microphone_input_and_speaker_output(
        use_default_devices=False,
    )

    # Wire the transcriber (Deepgram), agent (ChatGPT), and synthesizer (Azure)
    # together into a streaming conversation.
    conversation = StreamingConversation(
        output_device=speaker_output,
        transcriber=DeepgramTranscriber(
            DeepgramTranscriberConfig.from_input_device(
                microphone_input,
                endpointing_config=PunctuationEndpointingConfig(),
                api_key=settings.deepgram_api_key,
            ),
        ),
        agent=ChatGPTAgent(
            ChatGPTAgentConfig(
                openai_api_key=settings.openai_api_key,
                initial_message=BaseMessage(text="What up"),
                prompt_preamble="""The AI is having a pleasant conversation about life""",
            )
        ),
        synthesizer=AzureSynthesizer(
            AzureSynthesizerConfig.from_output_device(speaker_output),
            azure_speech_key=settings.azure_speech_key,
            azure_speech_region=settings.azure_speech_region,
        ),
    )
    await conversation.start()
    print("Conversation started, press Ctrl+C to end")

    # Terminate the conversation cleanly on Ctrl+C.
    signal.signal(
        signal.SIGINT,
        lambda _0, _1: asyncio.create_task(conversation.terminate()),
    )

    # Forward microphone audio into the conversation until it ends.
    while conversation.is_active():
        chunk = await microphone_input.get_audio()
        conversation.receive_audio(chunk)


if __name__ == "__main__":
    asyncio.run(main())
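Rather than hard-coding the keys above, you can put them in a .env file next to the script; as noted in the Settings class, pydantic-settings maps each field to an environment variable of the same name. For example, a .env file could contain:

OPENAI_API_KEY=your_openai_api_key
AZURE_SPEECH_KEY=your_azure_speech_key
DEEPGRAM_API_KEY=your_deepgram_api_key
AZURE_SPEECH_REGION=eastus

Then run the script (saved as, say, quickstart.py, a hypothetical filename) with python quickstart.py and start talking into your microphone.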