Bot Framework Composer is an integrated development tool for developers and multi-disciplinary teams to build bots and conversational experiences with the Microsoft Bot Framework.
Within this tool, you'll find everything you need to build a sophisticated conversational experience.
Key Features:
Visual Design Canvas: Create and manage complex dialog flows visually, making it easier to design user interactions.
Multi-Team Collaboration: Support for team collaboration, allowing multiple users to work together on the same project.
AI Service Integration: Seamlessly integrate AI services like Azure Cognitive Services to enhance bot capabilities.
Testing & Debugging Tools: Built-in tools to test and debug conversational flows, ensuring robust performance.
Extensibility: Ability to extend functionality through custom actions, connectors, and middleware.
Documentation & Learning Resources: Access to comprehensive documentation and learning materials to help users get started.
Audience & Benefit:
Ideal for developers, data scientists, and multi-disciplinary teams focused on building conversational AI solutions. Bot Framework Composer enables efficient creation, testing, and deployment of bots, helping teams deliver scalable and intelligent conversational experiences. It can be installed via winget.
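Since the description above mentions winget, a typical install looks like the following. The package identifier used here is an assumption; confirm it with winget search before installing.

```shell
# Search for the package to confirm the identifier (the ID below is an assumption)
winget search "Bot Framework Composer"

# Install using the identifier reported by the search
winget install --id Microsoft.BotFrameworkComposer
```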
README
ARCHIVE NOTICE:
Bot Framework Composer migration to Microsoft Copilot Studio
We are in the process of archiving the Bot Framework Composer and Bot Framework CLI repositories on GitHub. This means that these projects will no longer be updated or maintained. Customers using these tools will not be disrupted. However, the tools will no longer be supported through service tickets in the Azure portal and will not receive product updates.
Bot Framework Composer provides a user interface for working with custom code bots built using the Azure Bot Framework. For customers looking to transition away from Bot Framework Composer, consider Microsoft Copilot Studio. Copilot Studio is a SaaS-based application whose user interface supports building modern agent applications in a similar capacity, using topics, messages, and other tools that will be familiar to Bot Framework Composer users.
Check out the Copilot Studio docs for more information on Getting Started with Copilot Studio, and sign up for a free trial.
Archived projects are still available for use by developers, however they will not be actively maintained or serviced by Microsoft.
Microsoft Bot Framework Composer
Overview
Bot Framework Composer is an open-source, visual authoring canvas for developers and multi-disciplinary teams to design and build conversational experiences with Language Understanding and QnA Maker, and a sophisticated composition of bot replies (Language Generation). Within this tool, you'll have everything you need to build a sophisticated conversational experience.
A visual editing canvas for conversation flows
In context editing for language understanding (NLU)
LM Studio is an easy-to-use desktop app for experimenting with local and open-source Large Language Models (LLMs). The LM Studio cross-platform desktop app allows you to download and run any ggml-compatible model from Hugging Face, and provides a simple yet powerful model configuration and inferencing UI. The app leverages your GPU when possible.
Dive is an open-source AI Agent desktop application that seamlessly integrates any LLM supporting tool calls with a frontend MCP Server, as part of the Open Agent Platform initiative. ✨
Features:
- 🌐 Universal LLM Support: Compatible with ChatGPT, Anthropic, Ollama and OpenAI-compatible models
- 💻 Cross-Platform: Available for Windows, MacOS, and Linux
- 🔄 Model Context Protocol: Enabling seamless AI agent integration
- 🔌 MCP Server Integration: External data access and processing capabilities
- 🌍 Multi-Language Support: Traditional Chinese, English, with more coming soon
- ⚙️ Advanced API Management: Multiple API keys and model switching support
- 💡 Custom Instructions: Personalized system prompts for tailored AI behavior
- 💬 Intuitive Chat Interface: Real-time context management and user-friendly design
- 🚀 Upcoming Features: Prompt Schedule and OpenAgentPlatform MarketPlace
AI bots based on Large Language Models (LLMs) are amazing. However, their behavior can be random, and different bots excel at different tasks. If you want the best experience, don't try them one by one. ChatALL (Chinese name: 齐叨) can send a prompt to several AI bots concurrently, helping you discover the best results. All you need to do is download, install, and ask.
AI as Workspace - A better AI (LLM) client. Full-featured yet lightweight. Supports multiple workspaces, a plugin system, cross-platform use, local-first storage with real-time cloud sync, Artifacts, and MCP.
Features:
Consistent Experience Across All Platforms
- Supported platforms: Windows, Linux, Mac OS, Android, Web (PWA)
- Multiple AI providers: OpenAI, Anthropic, Google, DeepSeek, xAI, Azure, etc.
Conversation Interface
- User input preview
- Modifications and regenerations presented as branches
- Customizable keyboard shortcuts
- Quick scrolling to the beginning/end of a message
Multiple Workspaces
- Create multiple workspaces to separate conversations by themes
- Group workspaces into folders; supports nesting
- Create multiple assistants within a workspace or global assistants
Data Storage
- Data is stored locally first, accessible offline and loads instantly
- Cloud synchronization available after login for cross-device syncing
- Multi-window collaboration: open multiple tabs in the same browser with responsive data synchronization
Design Details
- Support for text files (code, csv, etc.) as attachments; AI can see file contents and names without occupying display space
- For large text blocks, use Ctrl + V outside the input box to paste as an attachment; prevents large content from cluttering the display
- Quote content from previous messages to user inputs for targeted follow-up questions
- Select multiple lines of message text to copy the original Markdown
- Automatically wrap code pasted from VSCode in code blocks with language specification
MCP Protocol
- Support for MCP Tools, Prompts, Resources
- STDIO and SSE connection methods
- Install MCP-type plugins from the plugin marketplace or manually add MCP servers
Artifacts
- Convert any part of assistant responses into Artifacts
- User-editable with version control and code highlighting
- Control assistant read/write permissions for Artifacts
- Open multiple Artifacts simultaneously
Plugin System
- Built-in calculator, document parsing, video parsing, image generation plugins
- Install additional plugins from the marketplace
- Configure Gradio applications as plugins; compatible with some LobeChat plugins
- Plugins are more than just tool calling
Lightweight and High Performance
- Quick startup with no waiting
- Smooth conversation switching
Dynamic Prompts
- Create prompt variables using template syntax for dynamic, reusable prompts
- Extract repetitive parts into workspace variables for prompt reusability
Additional Features
Assistant marketplace, dark mode, customizable theme colors, and more
Tools to train, test and manage language understanding (NLU) and QnA components
Language generation and templating system
A ready-to-use bot runtime executable
The Bot Framework Composer is an open-source tool based on the Bot Framework SDK. It is available as a desktop application as well as a web-based component.
To find the most recent release and learn what has changed in Bot Framework Composer, see the latest release.
Build Composer Locally
To build and run the Composer project locally as a web application, clone the source code from GitHub and build the application using the instructions below.
git clone https://github.com/microsoft/BotFramework-Composer.git
cd BotFramework-Composer
cd Composer # switch to Composer folder
yarn install # install dependencies
yarn build # build extensions and libs
yarn startall # start client and server at the same time
Extend Composer with Extensions
Many aspects of Composer's functionality can be customized and extended through extensions. Features such as authentication, storage, publishing, and even the samples and templates available on the home screen can be customized by creating new extensions.
This project welcomes contributions and suggestions. Most contributions require you to agree to a
Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us
the rights to use your contribution. For details, visit https://cla.microsoft.com.
When you submit a pull request, a CLA-bot will automatically determine whether you need to provide
a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions
provided by the bot. You will only need to do this once across all repos using our CLA.
Also, see current known issues for high-impact bugs you may experience.
Submitting pull requests
If you'd like to contribute pull requests to Composer, see the contributing guide for helpful information on our development workflow.
Reporting security issues
Security issues and bugs should be reported privately, via email, to the Microsoft Security
Response Center (MSRC) at secure@microsoft.com. You should
receive a response within 24 hours. If for some reason you do not, please follow up via
email to ensure we received your original message. Further information, including the
MSRC PGP key, can be found in
the Security TechCenter.
AI Shell is a CLI tool that brings the power of artificial intelligence directly to your command line! Designed to help you get command assistance from various AI assistants, AI Shell is a versatile tool to help you become more productive in the command line. We call these various AI assistant providers agents. You can use agents to interact with different generative AI models or other AI/ML/assistant providers in a conversational manner.
HyperChat is an open Chat client that can use various LLM APIs to provide the best Chat experience and implement productivity tools through the MCP protocol.
- Supports OpenAI-style LLMs: OpenAI, Claude (via OpenRouter), Qwen, DeepSeek, GLM, Ollama.
- Built-in MCP plugin market with user-friendly MCP installation configuration and one-click installation; submissions to HyperChatMCP are welcome.
- Also supports manual installation of third-party MCPs; simply fill in command, args, and env.
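The command, args, and env fields mentioned above follow the convention used by most MCP clients. As an illustrative sketch (the server name and directory path here are hypothetical, and the exact top-level key may differ per client), a manually added MCP server entry typically looks like:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/dir"],
      "env": {}
    }
  }
}
```

The command field names the executable to launch, args are passed to it verbatim, and env supplies any environment variables (API keys, for example) the server needs.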
Khoj is a personal AI app to extend your capabilities. It smoothly scales up from an on-device personal AI to a cloud-scale enterprise AI.
- Chat with any local or online LLM (e.g., llama3, qwen, gemma, mistral, gpt, claude, gemini).
- Get answers from the internet and your docs (including image, pdf, markdown, org-mode, word, notion files).
- Access it from your Browser, Obsidian, Emacs, Desktop, Phone or Whatsapp.
- Create agents with custom knowledge, persona, chat model and tools to take on any role.
- Automate away repetitive research. Get personal newsletters and smart notifications delivered to your inbox.
- Find relevant docs quickly and easily using our advanced semantic search.
- Generate images, talk out loud, play your messages.
- Khoj is open-source, self-hostable. Always.
- Run it privately on your computer or try it on our cloud app.
Are you tired of using chatbots that invade your privacy and store your data indefinitely? Look no further! DxGPTAi is here to provide you with a secure and reliable chatbot experience. 💬 With DxGPTAi, you can enjoy conversations without worrying about your data being mishandled. Our platform is designed to delete all temporarily stored information after shutdown, ensuring your privacy is protected to the fullest.
🎙️ And that's not all! We've also added a microphone transcription feature to make your experience even more convenient. With just a few clicks, you can chat with ChatGPT using your voice instead of typing.
🔥 Plus, we've included a range of other cool features to make your chatbot experience even better. Get a free API key from the official ChatGPT website and start chatting today!
NeatChat is a new version of NextChat with a number of optimizations. NeatChat currently has three branches: main, mini, and preview.
The preview branch is the preview version of the main branch and will be merged into the main branch once it stabilizes; the mini branch is a separate, simplified version (for the mini branch, see NeatChat-Mini).
The main branch's mission is to optimize the UI and add features so that it can grow independently of NextChat. The mini branch is fine-tuned and trimmed down from NextChat so it can keep pace with NextChat, with only the most important features from the main branch making their way into the mini branch. Since the main and mini branches serve different purposes, they also have two different UIs.