2dbd964c28  2026-02-26 00:43:16 +01:00  add schema reference to config.yaml
7712aac0f5  2026-02-26 00:39:58 +01:00  configure llama-swap to log llama.cpp output
c7bc79f574  2026-02-26 00:10:53 +01:00  add Qwen3-Coder-Next model
b21f8e402b  2025-12-06 23:33:56 +01:00  add abliterated versions of qwen3-vl
65e75a4d39  2025-11-15 22:21:10 +01:00  add 8B and 2B variants of qwen3-vl
6c7457d095  2025-11-15 20:40:27 +01:00  fix Qwen3-VL-4B-Instruct-GGUF models looping issue
9b556e98a9  2025-11-15 19:31:53 +01:00  add qwen3-vl thinking variant
202ebc7b86  2025-11-15 19:18:43 +01:00  add qwen3-vl, fix librechat taking over settings, and clean up llama config
708ffe203c  2025-09-13 02:42:21 +02:00  add Qwen2.5-VL models
9c61d47fda  2025-08-18 02:50:46 +02:00  add qwen3-4b-2507 model
c4628523bc  2025-07-29 02:31:39 +02:00  enable llama automatic unloading and a longer start timeout
071e87ee44  2025-07-29 02:24:14 +02:00  disable warmups
9e17aadb56  2025-07-29 02:22:52 +02:00  add gemma3 model
9765f1cf86  2025-07-23 23:46:44 +02:00  add gemma3n
5f3a00b382  2025-07-23 22:56:52 +02:00  add qwen3 no-thinking variant