5fd89a70ea  Vulkan Optimizations and Fixes (#8959)  (0cc4m, 1 year ago)
98a532d474  server : fix segfault on long system prompt (#8987)  (compilade, 1 year ago)
43bdd3ce18  cmake : remove unused option GGML_CURL (#9011)  (Georgi Gerganov, 1 year ago)
06943a69f6  ggml : move rope type enum to ggml.h (#8949)  (Daniel Bevenius, 1 year ago)
828d6ff7d7  export-lora : throw error if lora is quantized (#9002)  (Xuan Son Nguyen, 1 year ago)
fc4ca27b25  ci : fix github workflow vulnerable to script injection (#9008)  (Diogo Teles Sant'Anna, 1 year ago)
1f67436c5e  ci : enable RPC in all of the released builds (#9006)  (Radoslav Gerganov, 1 year ago)
0fd93cdef5  llama : model-based max number of graph nodes calculation (#8970)  (Nico Bosshard, 1 year ago)
84eb2f4fad  docs : introduce gpustack and gguf-parser (#8873)  (Frank Mai, 1 year ago)
1262e7ed13  grammar-parser : fix possible null-deref (#9004)  (DavidKorczynski, 1 year ago)
df5478fbea  ggml : fix div-by-zero (#9003)  (DavidKorczynski, 1 year ago)
2589292cde  Fix a spelling mistake (#9001)  (Liu Jia, 1 year ago)
d3ae0ee8d7  py : fix requirements check '==' -> '~=' (#8982)  (Georgi Gerganov, 1 year ago)
5ef07e25ac  server : handle models with missing EOS token (#8997)  (Georgi Gerganov, 1 year ago)
4134999e01  gguf-py : Numpy dequantization for most types (#8939)  (compilade, 1 year ago)
8cd1bcfd3f  flake.lock : Update (#8979)  (Georgi Gerganov, 1 year ago)
a21c6fd450  update guide (#8909)  (Neo Zhang, 1 year ago)
33309f661a  llama : check all graph nodes when searching for result_embd_pooled (#8956)  (fairydreaming, 1 year ago)
7c5bfd57f8  Optimize Vulkan backend for better CPU performance and less GPU synchronization overhead (#8943)  (Markus Tavenrath, 1 year ago)
6e02327e8b  metal : fix uninitialized abort_callback (#8968)  (slaren, 1 year ago)
7eb23840ed  llama : default n_swa for phi-3 (#8931)  (Xuan Son Nguyen, 1 year ago)
7c3f55c100  Add support for encoder-only T5 models (#8900)  (fairydreaming, 1 year ago)
911b437f22  gguf-py : fix double call to add_architecture() (#8952)  (Matteo Mortari, 1 year ago)
b72942fac9  Merge commit from fork  (Georgi Gerganov, 1 year ago)
6afd1a99dc  llama : add support for lora adapters in T5 model (#8938)  (fairydreaming, 1 year ago)
272e3bd95e  make : fix llava obj file race (#8946)  (Georgi Gerganov, 1 year ago)
45a55b91aa  llama : better replace_all (cont) (#8926)  (Georgi Gerganov, 1 year ago)
3071c0a5f2  llava : support MiniCPM-V-2.5 (#7599)  (tc-mb, 1 year ago)
4305b57c80  sync : ggml  (Georgi Gerganov, 1 year ago)
70c0ea3560  whisper : use vulkan as gpu backend when available (whisper/2302)  (Matt Stephenson, 1 year ago)