DeepSeek AI Tool User Guide and FAQ
DeepSeek is a powerful AI tool. This article answers some common usage questions to help you get started quickly.
FAQ:
Differences between access methods: There is no functional difference between the web version, the App, and API calls; the App is essentially a wrapper around the web version. Local deployment uses a distilled model, which is somewhat weaker than the full DeepSeek-R1, though the 32B model reportedly retains about 90% of the full version's capability.
What is SillyTavern? SillyTavern is a front-end interface that calls an AI model through an API or through Ollama.
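To illustrate, here is a minimal sketch of the kind of JSON request a front-end like SillyTavern sends to a locally running Ollama server. The default port 11434 and the model tag `deepseek-r1:14b` are assumptions about a typical local setup, not details from this guide.

```python
import json

# Ollama's default local REST endpoint (assumption: a standard install).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_ollama_request(prompt, model="deepseek-r1:14b"):
    """Build the JSON body a front-end sends to Ollama's generate endpoint."""
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for one complete response instead of chunks
    }

payload = build_ollama_request("Hello!")
print(json.dumps(payload))
```

A front-end would POST this body to `OLLAMA_URL` and read the model's reply from the response JSON.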
What is jailbreaking ("breaking the limit")? AI models usually have built-in moral and safety restrictions; jailbreaking means bypassing these restrictions through specific prompts to obtain otherwise blocked content.
Why do AI responses get cut off? Your conversation may have touched on a sensitive topic and been filtered by a secondary review.
How to bypass the AI's moral restrictions? Please refer to the relevant resources in the group files (due to group rules, please visit the unrestricted group for more resources).
Is the AI deployed by NVIDIA the complete model? Yes, NVIDIA deploys the full 671B model, but it may respond more slowly than the official version, especially with long texts.
Does DeepSeek have a context limit? How to work around it? Yes, there is a context limit. For example, when calling the API in Cherry Studio, you can adjust the context length in the model settings.
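The "context length" setting in clients like Cherry Studio essentially limits how much conversation history is sent with each request. A minimal sketch of that trimming, assuming the common OpenAI-style chat message schema (the `keep_last` value is an arbitrary example, not a DeepSeek-specific number):

```python
def trim_history(messages, keep_last=20):
    """Keep any system message plus only the most recent turns,
    mimicking the 'context length' setting in chat clients."""
    system = [m for m in messages if m["role"] == "system"]
    recent = [m for m in messages if m["role"] != "system"][-keep_last:]
    return system + recent

# Example: a 50-turn conversation trimmed down to the last 4 turns.
history = [{"role": "system", "content": "You are helpful."}]
history += [{"role": "user", "content": f"question {i}"} for i in range(50)]
trimmed = trim_history(history, keep_last=4)
print(len(trimmed))  # 5: the system message plus the 4 newest turns
```

Trimming history this way keeps requests inside the model's context window at the cost of the model forgetting earlier turns.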
How to deploy locally? See the video tutorial by Bilibili creator NathMath: BV1NGf2YtE8r (a full walkthrough of locally deploying the DeepSeek-R1 inference model).
How to choose the right model size? See NathMath's video as well. In short, for users with a discrete GPU (preferably an NVIDIA card), system RAM plus VRAM determines the largest model that fits. For example, a 4050 GPU (8 GB VRAM) paired with 16 GB RAM is suited to the 14B model.
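The rule of thumb above can be sketched as a small helper. The tier thresholds below are assumptions chosen so the guide's 4050 example (8 GB VRAM + 16 GB RAM → 14B) comes out right; they are not official figures.

```python
def suggest_model_size(ram_gb, vram_gb):
    """Rough heuristic: RAM + VRAM bounds the largest distilled
    DeepSeek-R1 model you can run comfortably. Thresholds are
    illustrative assumptions, not official requirements."""
    budget = ram_gb + vram_gb
    if budget >= 48:
        return "32B"
    if budget >= 24:
        return "14B"
    if budget >= 12:
        return "7B"
    return "1.5B"

print(suggest_model_size(16, 8))  # the guide's 4050 example: 24 GB total -> "14B"
```

Actual feasibility also depends on quantization level and what else is using memory, so treat the output as a starting point.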
What to do after Cherry Studio is connected? Once the connection is complete, return to the main interface and select a model at the top of it. The NVIDIA-hosted version of DeepSeek is named deepseek-ai/deepseek-r1.
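One practical gotcha: the same underlying model goes by different ids depending on the provider, so the name you pick in Cherry Studio must match the provider you configured. A tiny sketch of such a mapping; the NVIDIA id is from this guide, while `deepseek-reasoner` is assumed to be the official API's current name for R1.

```python
# Provider-specific model ids for DeepSeek-R1 (the official-API name
# is an assumption about current naming, so verify before relying on it).
MODEL_IDS = {
    "nvidia": "deepseek-ai/deepseek-r1",
    "deepseek_official": "deepseek-reasoner",
}

def model_for(provider):
    """Return the model id to put in the request for a given provider."""
    return MODEL_IDS[provider]

print(model_for("nvidia"))  # deepseek-ai/deepseek-r1
```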
How to use .md files? .md files are plain-text files in Markdown format; Typora or VS Code is recommended for opening and editing them.
Can Git and Node.js be installed on other drives? Installing them on the same drive is recommended to avoid configuration errors later.
Is the DeepSeek in 360's Nano AI complete? No, the DeepSeek in 360's Nano AI is not the full version; its capabilities are weak and it is not recommended.
Why does the AI's response speed slow down? The amount of computation grows as the conversation gets longer, so slowing down is normal. Starting a new conversation is recommended.
Why is the AI's output blank? It may be caused by external cyberattacks on the service.
How long are conversation records kept? API-call histories are saved locally and kept indefinitely (unless deleted manually). The retention period for the web and App versions is currently unknown.
Are the web version and App version 671B models? Yes.
Can tools such as Chatbox connect directly to the API? Currently, direct connection to the official API is not possible.