Georgi Gerganov | 35985acffa | gitignore : tokenize | 2 years ago
slaren | e937066420 | gguf-py : export chat templates (#4125) | 2 years ago
Kerfuffle | 28a2e6e7d4 | tokenize example: Respect normal add BOS token behavior (#4126) | 2 years ago
Galunid | 0b5c3b0457 | scripts : Remove missed baichuan convert script (#4127) | 2 years ago
Kerfuffle | 2923f17f6f | Clean up ggml-cuda.cu warnings when compiling with clang (for ROCM) (#4124) | 2 years ago
slaren | bbecf3f415 | llama : increase max nodes (#4115) | 2 years ago
Roger Meier | 8e9361089d | build : support ppc64le build for make and CMake (#3963) | 2 years ago
Georgi Gerganov | 5ad387e994 | tokenize : fix trailing whitespace | 2 years ago
zakkor | 2fa02b4b3d | examples : add tokenize (#4039) | 2 years ago
Don Mahurin | 2ab0707acb | convert : use 'model' value if it exists. This allows karpathy/tinyllamas to load (#4089) | 2 years ago
John | 11173c92d6 | py : Falcon HF compatibility (#4104) | 2 years ago
Jannis Schönleber | 9e87ef60e1 | common : improve yaml log escaping (#4080) | 2 years ago
Huawei Lin | c7cce1246e | llava : fix compilation warning that fread return value is not used (#4069) | 2 years ago
Jiří Podivín | f7d5e97542 | py : remove superfluous import statements (#4076) | 2 years ago
Jiří Podivín | ba4cf5c0bf | train : move number of gpu layers argument parsing to common/train.cpp (#4074) | 2 years ago
slaren | e85bb1a8e7 | llama : add functions to get the model's metadata (#4013) | 2 years ago
gwjr | 3e916a07ac | finetune : speed-up ggml_compute_forward_out_prod_f32 via BLAS (#4079) | 2 years ago
Andrew Godfrey | 947f64f163 | finetune : zero the loraB initial vectors (#4082) | 2 years ago
Andrew Godfrey | b83e149ec6 | cuda : get_row_rounding F32 (#4095) | 2 years ago
Georgi Gerganov | 4f447a4833 | llama : fix data units (#4101) | 2 years ago
Kerfuffle | 91f6499393 | Respect tokenizer.ggml.add_bos_token value when tokenizing (#4040) | 2 years ago
texmex76 | 8da46278e1 | gguf : fix potential infinite loops while parsing (#4100) | 2 years ago
Jared Van Bortel | a6fc554e26 | llama : restore prefix space in llama tokenizer (#4081) | 2 years ago
slaren | 1cf2850d52 | ggml-cuda : increase max graph size (#4084) | 2 years ago
Michael Potter | 6bb4908a17 | Fix MacOS Sonoma model quantization (#4052) | 2 years ago
Galunid | 36eed0c42c | stablelm : StableLM support (#3586) | 2 years ago
afrideva | b46d12f86d | convert.py: also look for plain model.safetensors (#4043) | 2 years ago
M. Yusuf Sarıgöz | bd90eca237 | llava : fix regression for square images in #3613 (#4056) | 2 years ago
Georgi Gerganov | 3d68f364f1 | ggml : sync (im2col, GPU conv, 32-bit arm compat) (#4060) | 2 years ago
Georgi Gerganov | c049b37d7b | readme : update hot topics | 2 years ago