Commit history

Author SHA1 Message Date
Theia Vogel 877b4d0c62 llama : add support for control vectors (#5970) 1 year ago
Michael Podvitskiy 69ff61397d llama : support models without vocabulary (#5798) 1 year ago
slaren f30ea47a87 llama : add pipeline parallelism support (#6017) 1 year ago
Georgi Gerganov 05b06210c9 llama : more consistent names of count variables (#5994) 1 year ago
Georgi Gerganov ee35600b90 llama : fix F16/F32 downcast + improve names (#5980) 1 year ago
DAN™ bcebd7dbf6 llama : add support for GritLM (#5959) 1 year ago
compilade c2101a2e90 llama : support Mamba Selective State Space Models (#5328) 1 year ago
Georgi Gerganov 29ae62d2ae llama : fix embeddings (#5796) 1 year ago
Douglas Hanley 475df1d6cf llama : allow for user specified embedding pooling type (#5849) 1 year ago
Michael Podvitskiy 4a6e2d6142 llama : add abort_callback to interrupt computation (#5409) 1 year ago
Pierrick Hymbert 3ab8b3a92e llama : cleanup unused mmq flags (#5772) 1 year ago
Marcus Dunn d5ab29757e llama : constified `llama_set_state_data`'s `src` (#5774) 1 year ago
Georgi Gerganov 08c5ee87e4 llama : remove deprecated API (#5770) 1 year ago
Kawrakow 0becb22ac0 IQ4_XS: a 4.25 bpw quantization (#5747) 1 year ago
Georgi Gerganov 9d533a77d0 llama : fix defrag bugs + add parameter (#5735) 1 year ago
Kawrakow a33e6a0d2a Adding IQ2_S and IQ2_M to complete coverage of the 2-3 bit quantization range (#5721) 1 year ago
Georgi Gerganov bf08e00643 llama : refactor k-shift implementation + KV defragmentation (#5691) 1 year ago
Georgi Gerganov ab336a9d5e code : normalize enum names (#5697) 1 year ago
Kawrakow 4c4cb30736 IQ3_S: a much better alternative to Q3_K (#5676) 1 year ago
Xuan Son Nguyen 7c8bcc11dc Add docs for llama_chat_apply_template (#5645) 1 year ago
Kawrakow a14679cc30 IQ4_NL: 4-bit non-linear quants with blocks of 32 (#5590) 1 year ago
Xuan Son Nguyen 11b12de39b llama : add llama_chat_apply_template() (#5538) 1 year ago
Kawrakow bd2d4e393b 1.5 bit quantization (#5453) 1 year ago
bmwl f486f6e1e5 ggml : add numa options (#5377) 2 years ago
Douglas Hanley 4524290e87 Use correct type of pooling for embedding models (#5500) 2 years ago
Douglas Hanley 03bf161eb6 llama : support batched embeddings (#5466) 2 years ago
Douglas Hanley 2891c8aa9a Add support for BERT embedding models (#5423) 2 years ago
Jared Van Bortel 1ec3332ade YaRN : store rope scaling type as int32_t in memory (#5285) 2 years ago
Georgi Gerganov 5cb04dbc16 llama : remove LLAMA_MAX_DEVICES and LLAMA_SUPPORTS_GPU_OFFLOAD (#5240) 2 years ago
Kawrakow f4d7e54974 SOTA 3-bit quants (#5196) 2 years ago