Deploy your private Gemini application for free with one click, supporting Gemini 1.5 Pro, Gemini 1.5 Flash, Gemini Pro and Gemini Pro Vision models.
Gemini Next Chat is a software tool designed to deploy private Gemini applications effortlessly, enabling users to leverage advanced AI capabilities with ease. It supports popular Gemini models such as Gemini 1.5 Pro, Gemini 1.5 Flash, Gemini Pro, and Gemini Pro Vision, providing robust functionality for various use cases.
Key Features:
One-click deployment of Gemini applications, simplifying setup and integration.
Cross-platform support, allowing seamless operation on Windows, macOS, and Linux.
Advanced multimodal capabilities, including image recognition and voice interaction.
Extensive plugin system with built-in tools like Web search, Web reader, Arxiv search, and Weather plugins.
Efficient client design that stays in the menu bar, keeping the assistant one click away and boosting productivity.
Privacy-focused architecture, ensuring all data remains locally stored.
Ideal for developers, researchers, and businesses seeking to integrate Gemini AI into their workflows without high costs or complexity. By offering a user-friendly interface and powerful features, Gemini Next Chat streamlines the development and deployment of AI-driven applications while maintaining security and efficiency. Install via winget for quick setup and start building your private Gemini application today.
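For a quick start, here is a minimal install sketch; the exact winget package identifier is not given in this documentation, so treat it as a placeholder and confirm it with a search first:

winget search "Gemini Next Chat"   # find the exact package identifier
winget install <package-id>        # replace with the identifier returned above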
README
Gemini Next Chat
Deploy your private Gemini application for free with one click, supporting Gemini 1.5 Pro, Gemini 1.5 Flash, Gemini Pro and Gemini Pro Vision models.
If you want to update instantly, you can check out the GitHub documentation to learn how to synchronize a forked project with upstream code.
You can star or watch this project, or follow the author, to be notified of new releases promptly.
Environment Variables
GEMINI_API_KEY (optional)
Your Gemini API key, required if you enable the server API. This variable does not affect the Gemini key set on the front-end pages.
Multiple keys are supported, separated by commas, e.g. key1,key2,key3
GEMINI_API_BASE_URL (optional)
Overrides the base URL for Gemini API requests. To avoid leaking the server-side proxy URL, this value does not override or affect the value on the front-end pages.
NEXT_PUBLIC_GEMINI_MODEL_LIST (optional)
Custom model list, default: all.
NEXT_PUBLIC_UPLOAD_LIMIT (optional)
File upload size limit. There is no file size limit by default.
ACCESS_PASSWORD (optional)
Access password.
HEAD_SCRIPTS (optional)
Script code injected into the page head; useful for analytics or error tracking.
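For reference, a minimal .env sketch combining the variables above; every value is an illustrative placeholder, not a working credential:

# all values below are placeholders
GEMINI_API_KEY=key1,key2,key3
GEMINI_API_BASE_URL=https://your-proxy.example.com
NEXT_PUBLIC_GEMINI_MODEL_LIST=all
ACCESS_PASSWORD=your-password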
This project provides limited access control. Please add an environment variable named ACCESS_PASSWORD on the Vercel environment variables page.
After adding or modifying this environment variable, please redeploy the project for the changes to take effect.
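If you prefer the command line to the dashboard, the Vercel CLI can do the same; this sketch assumes the CLI is installed and the project is linked:

vercel env add ACCESS_PASSWORD production   # prompts for the value
vercel --prod                               # redeploy so the change takes effect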
Custom model list
This project supports custom model lists. Please add an environment variable named NEXT_PUBLIC_GEMINI_MODEL_LIST in the .env file or environment variables page.
The default model list is represented by all; multiple models are separated by commas (,).
To add a new model, write its name directly (all,new-model-name) or prefix it with +, i.e. all,+new-model-name.
To remove a model from the list, prefix it with -, i.e. all,-existing-model-name. To remove the entire default list, use -all.
To set a default model, prefix it with @, i.e. all,@default-model-name. A combined example follows below.
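Putting these rules together, a single value can combine the operators; the model names below are the placeholders used above, not real model IDs:

NEXT_PUBLIC_GEMINI_MODEL_LIST=all,+new-model-name,-existing-model-name,@default-model-name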
Development
If you have not installed pnpm:
npm install -g pnpm
# 1. install Node.js and pnpm first
# 2. configure local variables: copy `.env.example` to `.env` or `.env.local`
# 3. run
pnpm install
pnpm dev
Requirements
Node.js >= 18, Docker >= 20
Deployment
Docker (Recommended)
> Docker version 20 or above is required; otherwise it will report that the image cannot be found.
> ⚠️ Note: The Docker image usually lags one to two days behind the latest release, so an "update available" prompt may keep appearing after deployment; this is normal.
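As a minimal run sketch, assuming the image is published as xiangfa/gemini-next-chat (verify the image name on Docker Hub before use; the container port is assumed to be Next.js's default 3000):

docker pull xiangfa/gemini-next-chat:latest   # image name is an assumption
docker run -d --name gemini-next-chat -p 3000:3000 \
  -e GEMINI_API_KEY=your-key \
  xiangfa/gemini-next-chat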
You can also build a static version directly and then upload all files in the out directory to any service that hosts static pages, such as GitHub Pages, Cloudflare Pages, or Vercel.
pnpm build:export
If you deploy the project in a subdirectory and resources fail to load, add EXPORT_BASE_PATH=/path/project in the .env file or on the variable settings page, as in the sketch below.
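For example, to serve the export from a subdirectory (the path below is illustrative):

echo "EXPORT_BASE_PATH=/gemini" >> .env   # illustrative subdirectory path
pnpm build:export
# then upload the contents of ./out to your static host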
Solution for “User location is not supported for the API use”
Use Cloudflare AI Gateway to forward API requests. Cloudflare AI Gateway currently supports the Google Vertex AI APIs; for usage, see How to Use Cloudflare AI Gateway. This solution is fast, stable, and recommended.
Use a Cloudflare Worker to proxy API requests. For detailed setup, see How to Use Cloudflare Worker Proxy API. Note that this solution may not work properly in some cases.
Why can't I access the website in China after deploying it with one click using Vercel
The domain generated by a Vercel deployment was blocked by the Chinese network a few years ago, but the server's IP addresses were not. You can bind a custom domain and access the site normally from China. Since Vercel has no servers in China, occasional network fluctuations are normal. For how to set up a domain, see Vercel binds a custom domain name, an article found online.
Why can't I use Multimodal Live
Currently, the Multimodal Live API is only supported by the Gemini 2.0 Flash model, so you must select Gemini 2.0 Flash to use it. Since the Gemini Multimodal Live API is not accessible from China, you may need to deploy a proxy forwarding API using a Cloudflare Worker; for more information, refer to Proxying the Multimodal Live API with Cloudflare Worker.
Currently, the Multimodal Live API does not support Chinese voice output.
Contributing
Contributions to this project are welcome! If you would like to contribute, please follow these steps:
Fork the repository on GitHub.
Clone your fork to your local machine.
Create a new branch for your changes.
Make your changes and commit them to your branch.
Push your changes to your fork on GitHub.
Open a pull request from your branch to the main repository.
Please ensure that your code follows the project's coding style and that all tests pass before submitting a pull request. If you find any bugs or have suggestions for improvements, feel free to open an issue on GitHub.
LICENSE
This project is licensed under the MIT License. See the LICENSE file for the full license text.