According to a report from this site on March 7, AMD announced today that users can run GPT-based large language models (LLMs) locally on their own hardware to build their own AI chatbots.
AMD says LLMs and AI chatbots can run natively on devices including Ryzen 7000 and Ryzen 8000 series APUs equipped with AMD's new XDNA NPU, as well as Radeon RX 7000 series GPUs with built-in AI acceleration cores.
AMD detailed the setup steps in the announcement. For example, to run the 7-billion-parameter Mistral model, search for and download "TheBloke/OpenHermes-2.5-Mistral-7B-GGUF"; to run the 7-billion-parameter LLaMA v2 model, search for and download "TheBloke/Llama-2-7B-Chat-GGUF".
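For readers who prefer a programmatic route, the sketch below shows one way the GGUF models named above could be loaded and queried locally using the llama-cpp-python library. This is not AMD's official procedure, and the quantization filename, context size, and GPU-offload setting are illustrative assumptions.

```python
# Minimal sketch (not AMD's official workflow): loading a locally downloaded
# GGUF model with llama-cpp-python and asking it a single question.
# The .gguf filename below is an assumption; use whichever quantized variant
# you actually downloaded from the Hugging Face repository.
from llama_cpp import Llama

llm = Llama(
    model_path="openhermes-2.5-mistral-7b.Q4_K_M.gguf",  # local path to the GGUF file
    n_ctx=4096,        # context window size (assumed value)
    n_gpu_layers=-1,   # offload all layers to the GPU if a supported backend is present
)

output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain in one sentence what a GGUF model file is."}]
)
print(output["choices"][0]["message"]["content"])
```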
AMD is not the first company to do this. NVIDIA recently introduced "Chat with RTX," an AI chatbot powered by GeForce RTX 40 and RTX 30 series GPUs. It is accelerated by the TensorRT-LLM feature set and generates AI responses quickly from localized datasets. A link to the original announcement is provided; interested readers can visit it for details.