llama.cpp
| Original author(s) | Georgi Gerganov |
| --- | --- |
| Developer(s) | Georgi Gerganov and community |
| Initial release | March 10, 2023[1] |
| Repository | github.com/ggerganov/llama.cpp |
| Written in | C++, C |
| Type | Library, CLI, and web server for large language models |
| License | MIT License[2] |
llama.cpp is an open-source software library, written mostly in C++, that performs inference on various large language models such as Llama.[3] A command-line interface and a web server are distributed alongside the library.[4][5] It is co-developed alongside the GGML project, a general-purpose tensor library.[6]
History
Towards the end of September 2022, Georgi Gerganov started work on the GGML library, a C library implementing tensor algebra. Gerganov developed the library with strict memory management and multi-threading in mind. The creation of GGML was inspired by Fabrice Bellard's work on LibNC.[7]
Gerganov began developing llama.cpp in March 2023 as an implementation of the Llama inference code in pure C/C++ with no dependencies. This improved performance on computers without a GPU or other dedicated hardware.[3][8] As of July 2024 it had 61 thousand stars on GitHub.[9] Before llama.cpp, Gerganov worked on a similar library called whisper.cpp,[10] which implemented Whisper, a speech-to-text model by OpenAI. llama.cpp gained traction with users who lacked specialized hardware, as it could run on a CPU alone, including on Android devices.[8][11][12]
llamafile, created by Mozilla using Justine Tunney's Cosmopolitan Libc, bundles models and llama.cpp into a single file that runs on multiple operating systems.[11][13] Tunney et al. introduced new optimized matrix multiplication kernels for x86 and ARM CPUs, improving prompt-evaluation performance for FP16 and 8-bit quantized data types.[14][15][16] These improvements were committed upstream to llama.cpp.[4]
Architecture
llama.cpp initially could only run on CPUs but can now also run on GPUs through multiple different back-ends, including Vulkan and SYCL. These back-ends make up the GGML tensor library, which is used by the front-end, model-specific llama.cpp code.[17] llama.cpp supports ahead-of-time model quantization as opposed to on-the-fly quantization.[18] It makes use of several CPU extensions for optimization: AVX, AVX2 and AVX-512 on x86-64, and Neon on ARM. Apple silicon is an important target for the project.[9][16]
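The library is consumed through a C API declared in llama.h. The following is a minimal sketch of the model load/free life cycle through that API, assuming a snapshot of the API from around mid-2024; function names and parameter structs change between releases, and tokenization and decoding are omitted.

```c
// Minimal sketch of loading a model through llama.cpp's C API.
// Assumes an API snapshot from around mid-2024; names and signatures
// change between releases, so consult llama.h for the current ones.
#include "llama.h"
#include <stdio.h>

int main(int argc, char ** argv) {
    if (argc < 2) {
        fprintf(stderr, "usage: %s model.gguf\n", argv[0]);
        return 1;
    }

    llama_backend_init();  // initialize the GGML back-ends (CPU, GPU, ...)

    struct llama_model_params mparams = llama_model_default_params();
    // mparams.n_gpu_layers controls how many layers are offloaded to a GPU back-end
    struct llama_model * model = llama_load_model_from_file(argv[1], mparams);
    if (model == NULL) {
        fprintf(stderr, "failed to load %s\n", argv[1]);
        return 1;
    }

    struct llama_context_params cparams = llama_context_default_params();
    struct llama_context * ctx = llama_new_context_with_model(model, cparams);

    // ... tokenize a prompt and call llama_decode() in a loop here ...

    llama_free(ctx);
    llama_free_model(model);
    llama_backend_free();
    return 0;
}
```

Building such a program requires linking against a compiled llama.cpp, for example through the project's CMake targets.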
GGUF file format
| Filename extension | .gguf |
| --- | --- |
| Magic number | 0x47 0x47 0x55 0x46 |
| Developed by | Georgi Gerganov and community |
| Initial release | August 22, 2023[19] |
| Latest release | v3[20] |
| Type of format | Machine-learning tensors |
The GGUF file format is a binary format used by llama.cpp that stores both tensors and metadata in a single file.[21] It was created to better maintain backwards compatibility as llama.cpp expanded its support for other model architectures.[22]
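The four magic bytes spell "GGUF" in ASCII, and every file begins with a small fixed header before the metadata key-value pairs and tensor data. Below is a minimal sketch of reading that header, based on the version 3 layout documented in ggml's gguf.md;[20] field widths differ in version 1, and GGUF files are little-endian by default, so treat this as illustrative rather than a complete parser.

```c
// Minimal sketch: read the fixed GGUF header (version 3 layout, per
// ggml/docs/gguf.md). Not a full parser; metadata and tensors follow.
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(int argc, char ** argv) {
    if (argc < 2) { fprintf(stderr, "usage: %s file.gguf\n", argv[0]); return 1; }

    FILE * f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    char     magic[4];
    uint32_t version;
    uint64_t n_tensors, n_kv;

    if (fread(magic,      1, 4, f) != 4 ||
        fread(&version,   sizeof version,   1, f) != 1 ||
        fread(&n_tensors, sizeof n_tensors, 1, f) != 1 ||
        fread(&n_kv,      sizeof n_kv,      1, f) != 1) {
        fprintf(stderr, "truncated header\n");
        fclose(f);
        return 1;
    }

    if (memcmp(magic, "GGUF", 4) != 0) {  // 0x47 0x47 0x55 0x46
        fprintf(stderr, "not a GGUF file\n");
        fclose(f);
        return 1;
    }

    printf("GGUF v%u: %llu tensors, %llu metadata key-value pairs\n",
           (unsigned) version,
           (unsigned long long) n_tensors,
           (unsigned long long) n_kv);
    fclose(f);
    return 0;
}
```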
GGUF files are typically created by converting models developed with a different machine learning library such as PyTorch, although fine-tuning is supported natively.[23]
The format focuses on quantization, the reduction of precision in the model weights. This can lead to reduced memory usage and increased speed, at the expense of lower model accuracy.[24][22]
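For example, a 7-billion-parameter model stored at 16 bits per weight occupies roughly 14 GB, while a 4-bit quantization of the same weights needs roughly 3.5 GB plus a small overhead for scale factors. The sketch below illustrates the general idea with a simplified block-wise absolute-maximum quantizer in the spirit of GGML's Q8_0 type (one scale per block of 32 weights plus an 8-bit integer per weight); it is an illustration of the technique, not llama.cpp's actual kernel, which stores the scale at half precision.

```c
// Simplified block-wise "absmax" quantization, in the spirit of GGML's
// Q8_0 type: each block of 32 float weights is stored as one float scale
// plus 32 signed 8-bit integers. Illustrative only, not llama.cpp code.
#include <math.h>
#include <stdint.h>
#include <stdio.h>

#define QK 32  // block size

typedef struct {
    float  d;      // scale: absmax / 127
    int8_t q[QK];  // quantized weights
} block_q8;

static void quantize_block(const float * x, block_q8 * out) {
    float amax = 0.0f;
    for (int i = 0; i < QK; i++) {
        float ax = fabsf(x[i]);
        if (ax > amax) amax = ax;
    }
    out->d = amax / 127.0f;
    const float id = out->d != 0.0f ? 1.0f / out->d : 0.0f;
    for (int i = 0; i < QK; i++) {
        out->q[i] = (int8_t) roundf(x[i] * id);
    }
}

static float dequantize(const block_q8 * b, int i) {
    return b->d * (float) b->q[i];  // reconstruct an approximate weight
}

int main(void) {
    float w[QK];
    for (int i = 0; i < QK; i++) w[i] = sinf((float) i);  // toy weights

    block_q8 b;
    quantize_block(w, &b);

    printf("w[3] = %f, reconstructed = %f\n", w[3], dequantize(&b, 3));
    return 0;
}
```

The reconstruction error grows with the magnitude spread inside a block, which is why GGML keeps blocks small and stores a separate scale per block.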
Supported data types
GGUF supports the common floating-point data formats float32, float16, and bfloat16, as well as quantized integer types ranging from 1.5-bit and 2-bit up to 8-bit.
Supported models
References
[edit]- ^ "Initial release · ggerganov/llama.cpp@26c0846". GitHub. Retrieved 15 May 2024.
- ^ "llama.cpp/LICENSE at master · ggerganov/llama.cpp". GitHub.
- ^ a b Connatser, Matthew. "How this open source LLM chatbot runner hit the gas on x86, Arm CPUs". theregister.com. Retrieved 15 April 2024.
- ^ a b Hood, Stephen. "Llamafile: four months of progress towards democratizing AI". Mozilla Innovations. Retrieved 28 July 2024.
- ^ Alden, Daroc. "Portable LLMs with llamafile [LWN.net]". lwn.net. Retrieved 30 July 2024.
- ^ Gerganov, Georgi (17 May 2024). "ggerganov/ggml".
- ^ "Bringing Whisper and LLaMA to the masses with Georgi Gerganov (Changelog Interviews #532)". Changelog. Changelog. 22 March 2023. Retrieved 28 July 2024.
- ^ a b Edwards, Benj (13 March 2023). "You can now run a GPT-3-level AI model on your laptop, phone, and Raspberry Pi". arstechnica.com. Retrieved 15 April 2024.
- ^ a b "ggerganov/llama.cpp". GitHub.
- ^ "ggerganov/whisper.cpp". GitHub.
- ^ a b Hood, Stephen. "llamafile: bringing LLMs to the people, and to your own computer". Mozilla Innovations. Retrieved 28 July 2024.
- ^ "Democratizing AI with open-source language models". lwn.net. Retrieved 28 July 2024.
- ^ Papp, Donald (3 December 2023). "Mozilla Lets Folks Turn AI LLMs Into Single-File Executables". Hackaday. Retrieved 27 July 2024.
- ^ Connatser, Matthew. "Llamafile LLM driver project boosts performance on CPU cores". www.theregister.com. Retrieved 10 May 2024.
- ^ Tunney, Justine. "LLaMA Now Goes Faster on CPUs". justine.lol. Retrieved 24 July 2024.
- ^ a b Larabel, Michael. "Llamafile 0.7 Brings AVX-512 Support: 10x Faster Prompt Eval Times For AMD Zen 4". www.phoronix.com.
- ^ Pounder, Les (25 March 2023). "How To Create Your Own AI Chatbot Server With Raspberry Pi 4". tomshardware.com. Retrieved 16 April 2024.
- ^ Walkowiak, Bartosz; Walkowiak, Tomasz (2024). "Implementation of language models within an infrastructure designed for Natural Language Processing" (PDF). International Journal of Electronics and Telecommunications. 70 (1): 153–159. doi:10.24425/ijet.2024.149525. Retrieved 8 May 2024.
- ^ "GGUF by ggerganov · Pull Request #2398 · ggerganov/llama.cpp". GitHub.
- ^ "ggml/docs/gguf.md at master · ggerganov/ggml". GitHub.
- ^ "GGUF". huggingface.co. Retrieved 9 May 2024.
- ^ a b Mucci, Tim (3 July 2024). "GGUF versus GGML". www.ibm.com. Retrieved 26 July 2024.
- ^ Boykis, Vicki (28 February 2024). "GGUF, the long way around". Vicki Boykis. Retrieved 26 July 2024.
- ^ Labonne, Maxime (29 November 2023). "Quantize Llama models with GGUF and llama.cpp". Medium. Towards Data Science. Retrieved 9 May 2024.