
intel-analytics/ipex-llm

Category: active
Contributors: 11
Created: 2016/08/29 07:59
Created time (Notion): 2024/04/18 17:03
Days Since: 36
Description: Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, etc.) on Intel CPU and GPU (e.g., local PC with iGPU, discrete GPU such as Arc, Flex and Max). A PyTorch LLM library that seamlessly integrates with llama.cpp, HuggingFace, LangChain, LlamaIndex, DeepSpeed, vLLM, FastChat, ModelScope, etc. (See the usage sketch below.)
Edited time (Notion): 2024/04/18 17:03
Forks: 1193
Last Commit: 2024/04/18 16:53
Last Update: 2024/04/18 16:53
Score: 0.083
Stars: 5886
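To illustrate the HuggingFace integration the description mentions, here is a minimal sketch of low-bit inference with ipex-llm, following the load-then-quantize pattern shown in the repo's README. The model ID and prompt are placeholders, and the `to("xpu")` call assumes an Intel GPU with the matching oneAPI/XPU runtime installed; omit it to run on CPU.

```python
# Minimal sketch: INT4 LLM inference with ipex-llm on Intel hardware.
# Assumes ipex-llm is installed per the repo docs (e.g. pip install ipex-llm[xpu]).
import torch
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForCausalLM  # drop-in HF-style API

model_id = "meta-llama/Llama-2-7b-chat-hf"  # placeholder model ID

# load_in_4bit=True applies ipex-llm's INT4 weight quantization at load time.
model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True)
model = model.to("xpu")  # Intel GPU (Arc/Flex/Max); drop this line for CPU

tokenizer = AutoTokenizer.from_pretrained(model_id)
inputs = tokenizer("What is IPEX-LLM?", return_tensors="pt").to("xpu")

with torch.inference_mode():
    output = model.generate(inputs.input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The same `ipex_llm.transformers` classes mirror the standard `transformers` interface, which is what makes the library a near drop-in replacement for existing HuggingFace pipelines.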