461 Commits

Author SHA1 Message Date
57eb77917a chore(deps): update renovate/renovate docker tag to v43.104.3 2026-04-05 00:00:47 +00:00
a0814e76ee increase pvc for llama to 300Gi 2026-04-04 22:49:26 +02:00
da163398a5 add notes about woodpecker to readme 2026-04-04 03:29:15 +02:00
8160a52176 add gemma 4 models 2026-04-04 02:48:02 +02:00
ad3b2229c2 get rid of openrouter proxying via llama-swap 2026-04-04 02:39:26 +02:00
57c2c7ea8d add woodpecker pipeline to reconcile flux 2026-04-04 02:31:08 +02:00
f2d60e0b15 add kubernetes secret engine and approle auth to openbao 2026-04-04 02:06:18 +02:00
9d5dd332fc Merge pull request 'chore(deps): update ghcr.io/mostlygeek/llama-swap docker tag to v199-vulkan-b8637' (#196) from renovate/ghcr.io-mostlygeek-llama-swap-199.x into fresh-start 2026-04-04 00:00:57 +00:00
e923fc3c30 chore(deps): update ghcr.io/mostlygeek/llama-swap docker tag to v199-vulkan-b8637 2026-04-04 00:00:54 +00:00
1945f2a9bc remove test woodpecker pipeline 2026-04-03 23:20:49 +02:00
fdd6755c2f rip out all garm related stuff 2026-04-03 23:20:36 +02:00
3d85148c5a add woodpecker cli 2026-04-03 23:14:46 +02:00
ab5a551124 update devenv 2026-04-03 23:12:10 +02:00
1bb357b3c8 enable web search in opencode 2026-04-03 22:56:58 +02:00
6a0b544bad Merge pull request 'chore(deps): update ghcr.io/mostlygeek/llama-swap docker tag to v199-vulkan-b8606' (#193) from renovate/ghcr.io-mostlygeek-llama-swap-199.x into fresh-start 2026-04-03 00:00:36 +00:00
4e30c9b94d chore(deps): update ghcr.io/mostlygeek/llama-swap docker tag to v199-vulkan-b8606 2026-04-03 00:00:32 +00:00
dfafadb4e3 add woodpecker to gitea's allowed host list 2026-04-02 23:01:14 +02:00
ae42e342ca add test workflow 2026-04-02 22:57:48 +02:00
670312d75b add woodpecker ci 2026-04-02 22:35:28 +02:00
0ce1a797fc Merge pull request 'chore(deps): update ghcr.io/mostlygeek/llama-swap docker tag to v199-vulkan-b8589' (#191) from renovate/ghcr.io-mostlygeek-llama-swap-199.x into fresh-start 2026-04-02 00:00:33 +00:00
3d53b4b10b chore(deps): update ghcr.io/mostlygeek/llama-swap docker tag to v199-vulkan-b8589 2026-04-02 00:00:30 +00:00
98f63b1576 Merge pull request 'chore(deps): update helm release immich to v1.2.2' (#190) from renovate/immich-1.x into fresh-start 2026-04-01 00:00:35 +00:00
edba33b552 chore(deps): update helm release immich to v1.2.2 2026-04-01 00:00:32 +00:00
054df42d8b update qwen3.5 4b ctx size to 128k 2026-03-30 21:05:00 +02:00
08db022d0d Merge pull request 'chore(deps): update ghcr.io/mostlygeek/llama-swap docker tag to v199-vulkan-b8576' (#189) from renovate/ghcr.io-mostlygeek-llama-swap-199.x into fresh-start 2026-03-30 00:00:52 +00:00
e485a4fc7f chore(deps): update ghcr.io/mostlygeek/llama-swap docker tag to v199-vulkan-b8576 2026-03-30 00:00:49 +00:00
9e74ed6a19 increase --fit-target to 1.5GB 2026-03-29 23:50:45 +02:00
42e89c9bb7 Merge pull request 'chore(deps): update ghcr.io/mostlygeek/llama-swap docker tag to v199-vulkan-b8562' (#188) from renovate/ghcr.io-mostlygeek-llama-swap-199.x into fresh-start 2026-03-29 00:00:53 +00:00
99bc04b76a chore(deps): update ghcr.io/mostlygeek/llama-swap docker tag to v199-vulkan-b8562 2026-03-29 00:00:50 +00:00
7ee77e33d4 Merge pull request 'chore(deps): update helm release cert-manager to v1.20.1' (#186) from renovate/cert-manager-1.x into fresh-start 2026-03-28 00:05:47 +00:00
8bdd5f2196 chore(deps): update helm release cert-manager to v1.20.1 2026-03-28 00:05:44 +00:00
1d8cb85bd4 Merge pull request 'chore(deps): update renovate/renovate docker tag to v43.95.0' (#163) from renovate/renovate-renovate-43.x into fresh-start
Reviewed-on: #163
2026-03-27 17:43:07 +00:00
eeb302b63b Merge pull request 'chore(deps): update helm release immich to v1.2.1' (#175) from renovate/immich-1.x into fresh-start
Reviewed-on: #175
2026-03-27 17:42:59 +00:00
69b437ed3b Merge pull request 'chore(deps): update helm release k8up to v4.9.0' (#182) from renovate/k8up-4.x into fresh-start
Reviewed-on: #182
2026-03-27 17:42:52 +00:00
54674a6e79 Merge pull request 'chore(deps): update helm release open-webui to v12.13.0' (#183) from renovate/open-webui-12.x into fresh-start
Reviewed-on: #183
2026-03-27 17:42:46 +00:00
a9da405326 chore(deps): update renovate/renovate docker tag to v43.95.0 2026-03-27 17:42:10 +00:00
264871bf68 Merge pull request 'chore(deps): update ghcr.io/mostlygeek/llama-swap docker tag to v199-vulkan-b8547' (#185) from renovate/ghcr.io-mostlygeek-llama-swap-199.x into fresh-start 2026-03-27 17:42:09 +00:00
6bcd0ba464 chore(deps): update helm release open-webui to v12.13.0 2026-03-27 17:42:07 +00:00
cb53301926 chore(deps): update ghcr.io/mostlygeek/llama-swap docker tag to v199-vulkan-b8547 2026-03-27 17:42:04 +00:00
110817b748 Merge pull request 'chore(deps): update ghcr.io/mostlygeek/llama-swap docker tag to v199' (#184) from renovate/ghcr.io-mostlygeek-llama-swap-199.x into fresh-start
Reviewed-on: #184
2026-03-27 17:40:38 +00:00
66cb3c9d82 chore(deps): update ghcr.io/mostlygeek/llama-swap docker tag to v199 2026-03-27 00:00:28 +00:00
42ae7af649 chore(deps): update helm release k8up to v4.9.0 2026-03-26 00:00:57 +00:00
cffcb1cc2d Merge pull request 'chore(deps): update helm release openbao to v0.26.2' (#181) from renovate/openbao-0.x into fresh-start 2026-03-26 00:00:57 +00:00
a4a7dd6fe6 Merge pull request 'chore(deps): update ghcr.io/mostlygeek/llama-swap docker tag to v198-vulkan-b8508' (#180) from renovate/ghcr.io-mostlygeek-llama-swap-198.x into fresh-start 2026-03-26 00:00:54 +00:00
52b8ca79dc chore(deps): update helm release openbao to v0.26.2 2026-03-26 00:00:54 +00:00
9a1fe1f740 chore(deps): update ghcr.io/mostlygeek/llama-swap docker tag to v198-vulkan-b8508 2026-03-26 00:00:49 +00:00
e996a60378 Merge pull request 'chore(deps): update helm release cert-manager-webhook-ovh to v0.9.5' (#179) from renovate/cert-manager-webhook-ovh-0.x into fresh-start 2026-03-25 00:00:35 +00:00
0ccd4d93f1 chore(deps): update helm release immich to v1.2.1 2026-03-25 00:00:34 +00:00
d667c6c0fc Merge pull request 'chore(deps): update ghcr.io/mostlygeek/llama-swap docker tag to v198-vulkan-b8496' (#178) from renovate/ghcr.io-mostlygeek-llama-swap-198.x into fresh-start 2026-03-25 00:00:33 +00:00
4254ebc9ef chore(deps): update helm release cert-manager-webhook-ovh to v0.9.5 2026-03-25 00:00:32 +00:00
8cf02fea0e chore(deps): update ghcr.io/mostlygeek/llama-swap docker tag to v198-vulkan-b8496 2026-03-25 00:00:29 +00:00
aa3c74d6a7 Merge pull request 'chore(deps): update helm release cilium to v1.19.2' (#177) from renovate/cilium-1.x into fresh-start 2026-03-24 00:00:44 +00:00
289089428e Merge pull request 'chore(deps): update ghcr.io/mostlygeek/llama-swap docker tag to v198-vulkan-b8477' (#176) from renovate/ghcr.io-mostlygeek-llama-swap-198.x into fresh-start 2026-03-24 00:00:41 +00:00
a93f6ec36f chore(deps): update helm release cilium to v1.19.2 2026-03-24 00:00:41 +00:00
1d85bf3a88 chore(deps): update ghcr.io/mostlygeek/llama-swap docker tag to v198-vulkan-b8477 2026-03-24 00:00:39 +00:00
f495debf25 Merge pull request 'chore(deps): update ghcr.io/mostlygeek/llama-swap docker tag to v198-vulkan-b8468' (#174) from renovate/ghcr.io-mostlygeek-llama-swap-198.x into fresh-start 2026-03-23 00:00:24 +00:00
bfede17c87 chore(deps): update ghcr.io/mostlygeek/llama-swap docker tag to v198-vulkan-b8468 2026-03-23 00:00:21 +00:00
08ca3f4c4e Merge pull request 'chore(deps): update ghcr.io/mostlygeek/llama-swap docker tag to v198-vulkan-b8461' (#173) from renovate/ghcr.io-mostlygeek-llama-swap-198.x into fresh-start 2026-03-22 00:00:27 +00:00
471c0ba62d chore(deps): update ghcr.io/mostlygeek/llama-swap docker tag to v198-vulkan-b8461 2026-03-22 00:00:23 +00:00
261141f509 Merge pull request 'chore(deps): update helm release k8up to v4.8.7' (#172) from renovate/k8up-4.x into fresh-start 2026-03-20 22:31:45 +00:00
86d5751842 Merge pull request 'chore(deps): update helm release immich to v1.1.3' (#171) from renovate/immich-1.x into fresh-start 2026-03-20 22:31:42 +00:00
43e531a3ca chore(deps): update helm release k8up to v4.8.7 2026-03-20 22:31:41 +00:00
9a0764268b Merge pull request 'chore(deps): update ghcr.io/mostlygeek/llama-swap docker tag to v198-vulkan-b8445' (#170) from renovate/ghcr.io-mostlygeek-llama-swap-198.x into fresh-start 2026-03-20 22:31:39 +00:00
7c88498756 chore(deps): update helm release immich to v1.1.3 2026-03-20 22:31:38 +00:00
8717526358 chore(deps): update ghcr.io/mostlygeek/llama-swap docker tag to v198-vulkan-b8445 2026-03-20 22:31:36 +00:00
b6a7e5092c Merge pull request 'chore(deps): update helm release ingress-nginx to v4.15.1' (#169) from renovate/ingress-nginx-4.x into fresh-start 2026-03-20 00:00:56 +00:00
27f7a5f29a Merge pull request 'chore(deps): update helm release immich to v1.1.2' (#168) from renovate/immich-1.x into fresh-start 2026-03-20 00:00:52 +00:00
9d0fd0981a chore(deps): update helm release ingress-nginx to v4.15.1 2026-03-20 00:00:52 +00:00
51bc53dbbc chore(deps): update helm release immich to v1.1.2 2026-03-20 00:00:50 +00:00
ce0b13ebb3 change kv cache quant to q8_0 2026-03-20 00:57:39 +01:00
516e157d39 Merge pull request 'chore(deps): update ghcr.io/mostlygeek/llama-swap docker tag to v198-vulkan-b8400' (#167) from renovate/ghcr.io-mostlygeek-llama-swap-198.x into fresh-start 2026-03-19 00:00:38 +00:00
73d6d1f15a chore(deps): update ghcr.io/mostlygeek/llama-swap docker tag to v198-vulkan-b8400 2026-03-19 00:00:34 +00:00
c51fc2a5ef Merge pull request 'chore(deps): update ghcr.io/mostlygeek/llama-swap docker tag to v198-vulkan-b8390' (#166) from renovate/ghcr.io-mostlygeek-llama-swap-198.x into fresh-start 2026-03-18 00:00:31 +00:00
8d994e7aa1 chore(deps): update ghcr.io/mostlygeek/llama-swap docker tag to v198-vulkan-b8390 2026-03-18 00:00:28 +00:00
5b551c6c6e switch pullPolicy to Always on crawl4ai-proxy 2026-03-17 01:47:29 +01:00
7e7b3e3d71 add max ctx on llama.cpp 2026-03-17 01:33:35 +01:00
9f315b38e3 use modded crawl4ai proxy image 2026-03-17 01:24:09 +01:00
3e1a806db1 Merge pull request 'chore(deps): update helm release openbao to v0.26.1' (#165) from renovate/openbao-0.x into fresh-start 2026-03-17 00:01:02 +00:00
f7dba45165 Merge pull request 'chore(deps): update ghcr.io/mostlygeek/llama-swap docker tag to v198-vulkan-b8369' (#164) from renovate/ghcr.io-mostlygeek-llama-swap-198.x into fresh-start 2026-03-17 00:01:00 +00:00
c8fac3201a chore(deps): update helm release openbao to v0.26.1 2026-03-17 00:01:00 +00:00
82864a4738 chore(deps): update ghcr.io/mostlygeek/llama-swap docker tag to v198-vulkan-b8369 2026-03-17 00:00:58 +00:00
b54c05b956 add crawl4ai-proxy for openwebui 2026-03-16 20:25:30 +01:00
afdada25a0 add crawl4ai deployment 2026-03-16 19:42:01 +01:00
79315d32db add GLM-4.7-Flash model 2026-03-16 18:19:28 +01:00
a2a5cd72a9 configure open webui to use sso from authentik 2026-03-16 17:30:16 +01:00
c2706a8af2 Merge pull request 'chore(deps): update renovate/renovate docker tag to v43.76.1' (#157) from renovate/renovate-renovate-43.x into fresh-start
Reviewed-on: #157
2026-03-15 17:40:55 +00:00
610ca0017e Merge pull request 'chore(deps): update ghcr.io/mostlygeek/llama-swap docker tag to v198-vulkan-b8352' (#162) from renovate/ghcr.io-mostlygeek-llama-swap-198.x into fresh-start 2026-03-15 17:40:29 +00:00
466932347a chore(deps): update renovate/renovate docker tag to v43.76.1 2026-03-15 17:40:29 +00:00
afbcea4e82 chore(deps): update ghcr.io/mostlygeek/llama-swap docker tag to v198-vulkan-b8352 2026-03-15 17:40:26 +00:00
20ad26ed31 Merge pull request 'chore(deps): update alpine docker tag to v3.23' (#158) from renovate/alpine-3.x into fresh-start
Reviewed-on: #158
2026-03-15 17:38:29 +00:00
7a2d1e0437 Merge pull request 'chore(deps): update helm release openbao to v0.26.0' (#159) from renovate/openbao-0.x into fresh-start
Reviewed-on: #159
2026-03-15 17:38:19 +00:00
6b5929fb95 Merge pull request 'chore(deps): update golang docker tag to v1.26' (#160) from renovate/golang-1.x into fresh-start
Reviewed-on: #160
2026-03-15 17:37:51 +00:00
6b64f1a8b8 Merge pull request 'chore(deps): update ghcr.io/mostlygeek/llama-swap docker tag to v198' (#161) from renovate/ghcr.io-mostlygeek-llama-swap-198.x into fresh-start
Reviewed-on: #161
2026-03-15 17:37:40 +00:00
4b4cec10be chore(deps): update ghcr.io/mostlygeek/llama-swap docker tag to v198 2026-03-15 00:00:34 +00:00
1f319d607a chore(deps): update golang docker tag to v1.26 2026-03-15 00:00:32 +00:00
7d90001f18 chore(deps): update alpine docker tag to v3.23 2026-03-15 00:00:30 +00:00
7948f53d1d add authentik vault policies 2026-03-14 20:12:01 +01:00
829a5a3fd8 add authentik deployment 2026-03-14 20:08:48 +01:00
cf28dcb5eb add missing allowed renovate command 2026-03-14 19:58:35 +01:00
4f1764d192 fix shell completion in garm-cli 2026-03-14 19:27:45 +01:00
49f88e4f96 remove non-functional garm image update workflow 2026-03-14 19:27:35 +01:00
4aca8daecd add mermaid preview extension to vscode recommendations 2026-03-14 19:01:29 +01:00
005b52dc4f update devenv and add opencode and tea 2026-03-14 18:27:44 +01:00
d39846422b change gitea port to 80 as workaround of runner bug 2026-03-14 15:51:40 +01:00
bc4f378df3 increase proxy body size on gitea ingress 2026-03-14 03:40:17 +01:00
db91415017 add missing permission to get namespaces to garm 2026-03-14 03:04:02 +01:00
3c071b88df add action to automatically update garm runner 2026-03-14 02:55:03 +01:00
c5ef5e2273 update garm to main branch 2026-03-14 02:42:23 +01:00
c55c37f0ac add ingress for garm 2026-03-14 01:40:11 +01:00
493f939551 chore(deps): update helm release openbao to v0.26.0 2026-03-14 00:00:29 +00:00
168f480c75 add gitea actions runner manager 2026-03-13 22:37:21 +01:00
c056d86da2 Add nginx ingress annotation to increase proxy body size limit 2026-03-13 04:00:10 +01:00
58634b82ba Categorize and add missing entries to app list 2026-03-13 04:00:10 +01:00
5d1ddd6e5d Remake Ansible playbook to target MikroTik router
Basically, I exported the configuration from the MikroTik router using /export and vibe-coded the playbook from that file.
2026-03-13 04:00:10 +01:00
09a3251902 chore(deps): update helm release cert-manager to v1.20.0 2026-03-13 04:00:10 +01:00
162f5529e2 chore(deps): update renovate/renovate docker tag to v43.64.6 2026-03-13 04:00:10 +01:00
75531925ef chore(deps): update helm release openbao to v0.25.7 2026-03-13 04:00:10 +01:00
9fa7888799 chore(deps): update registry.k8s.io/coredns/coredns docker tag to v1.14.2 2026-03-13 04:00:10 +01:00
b0c4e17aa8 chore(deps): update helm release cert-manager-webhook-ovh to v0.9.4 2026-03-13 04:00:10 +01:00
2d295d24e0 add 27b q3 variant of qwen3.5 2026-03-13 04:00:10 +01:00
e8efa9ddc1 lower kv cache quant to q4_0 and increase ctx to 64k 2026-03-13 04:00:10 +01:00
c88dd2899a remove ttl of all models in llama-swap 2026-03-13 04:00:10 +01:00
e2d2b32208 chore(deps): update helm release cert-manager-webhook-ovh to v0.9.3 2026-03-13 04:00:10 +01:00
8d280bc9dc chore(deps): update renovate/renovate docker tag to v43.60.6 2026-03-13 04:00:10 +01:00
f219abb74f chore(deps): update ghcr.io/mostlygeek/llama-swap docker tag to v197-vulkan-b8248 2026-03-13 04:00:10 +01:00
0130991c74 refactor: move llama-swap package config to renovate.json 2026-03-13 04:00:10 +01:00
bbb57cc174 configure renovate to automatically merge patch updates 2026-03-13 04:00:10 +01:00
966d2c50c0 update renovate comment for llama-swap image tag management 2026-03-13 04:00:10 +01:00
fb4fcc7c12 Update renovate/renovate Docker tag to v43.60.4 2026-03-13 04:00:10 +01:00
1026beb722 Update Helm release ingress-nginx to v4.15.0 2026-03-13 04:00:10 +01:00
af737ab82b Update caddy Docker tag to v2.11.2 2026-03-13 04:00:10 +01:00
6dc09ec242 Update Helm release open-webui to v12.10.0 2026-03-13 04:00:10 +01:00
39fc38d62b add qwen3.5 4b heretic 2026-03-13 04:00:10 +01:00
e72a79be8f add glm-5 from openrouter to llama-swap 2026-03-13 04:00:10 +01:00
4fda343b01 clean up llama-swap config 2026-03-13 04:00:10 +01:00
266ced7362 adjust parameters of qwen3-coder-next 2026-03-13 04:00:10 +01:00
8a074839b1 automatically fit context on qwen3.5 2b and 4b 2026-03-13 04:00:10 +01:00
42038207fc Add Q3_K_M variant of Qwen3.5-9B 2026-03-13 04:00:10 +01:00
28cb53c031 fix thinking versions of Qwen3.5 small 2026-03-13 04:00:10 +01:00
88a73cbb41 set strategy to recreate on llama-swap deployment 2026-03-13 04:00:10 +01:00
46a7e24932 add 2B, 4B, 9B versions of Qwen3.5 in thinking + nonthinking variants 2026-03-13 04:00:10 +01:00
cd7ebac6b9 increase target margin to 2048MB of VRAM 2026-03-13 04:00:10 +01:00
ba9db6ce41 add Qwen3.5 Small 0.8B model and replace Qwen3-VL-2B as task model 2026-03-13 04:00:10 +01:00
6dd9a717e2 shorten context for qwen3-vl-2b and lower kv cache quant 2026-03-13 04:00:10 +01:00
c67b6f7ebe add path to mmproj in qwen3.5 heretic 2026-03-13 04:00:10 +01:00
8d7cf402fd manually update llama-swap image tag 2026-03-13 04:00:10 +01:00
2a59555c3b Add more README 2026-03-13 04:00:10 +01:00
f236b89cca Update Helm release immich to v1.1.1 2026-03-13 04:00:10 +01:00
5f3f3d33ee Update renovate/renovate Docker tag to v43.46.6 2026-03-13 04:00:10 +01:00
b22498c60f Update caddy Docker tag to v2.11.1 2026-03-13 04:00:10 +01:00
13aaae7620 Update Helm release cert-manager to v1.19.4 2026-03-13 04:00:10 +01:00
1d7fba80d4 Update Helm release cert-manager-webhook-ovh to v0.9.2 2026-03-13 04:00:10 +01:00
3fdad80b22 Update Helm release openbao to v0.25.6 2026-03-13 04:00:10 +01:00
865a98ed97 revamp readme 2026-03-13 04:00:10 +01:00
78a81c5b72 Add mmproj-url for Qwen3.5-35B-A3B-heretic model 2026-03-13 04:00:10 +01:00
2bb23c4ed0 add gemma-3-270m-it-qat model 2026-03-13 04:00:10 +01:00
8c29fc8018 Add Qwen3.5-35B-A3B-heretic models 2026-03-13 04:00:10 +01:00
2836542569 Add always loaded Qwen3-VL-2B-Instruct 2026-03-13 04:00:10 +01:00
1e68450d8a Add Qwen3.5-35B-A3B model 2026-03-13 04:00:10 +01:00
0a57fdd22d update CoreDNS logging configuration to include all log classes 2026-03-13 04:00:10 +01:00
a0a7b85cc2 custom config of coredns to deny ipv6 huggingface 2026-03-13 04:00:10 +01:00
2c83eb26b3 automatically fit models by llama.cpp 2026-03-13 04:00:10 +01:00
ec038d7154 fix models mount 2026-03-13 04:00:10 +01:00
b61e3b5c08 add schema reference to config.yaml 2026-03-13 04:00:10 +01:00
59bf4a1aa6 configure llama-swap to log llama.cpp output 2026-03-13 04:00:10 +01:00
63a8e2f7ac add Qwen3-Coder-Next model 2026-03-13 04:00:10 +01:00
1ddef7951a update llama-swap image 2026-03-13 04:00:10 +01:00
b431b9c038 disable built in open-webui ingress 2026-03-13 04:00:10 +01:00
6b0c50b104 increase openwebui storage to 10Gi 2026-03-13 04:00:10 +01:00
9f55d67ffa migrate llama models to ssd 2026-03-13 04:00:10 +01:00
3ffadc8628 add ssd volume for llama models 2026-03-13 04:00:10 +01:00
a138171c2f add lvmpv ssd storage class 2026-03-13 04:00:10 +01:00
a986aea9ed add openwebui 2026-03-13 04:00:10 +01:00
3939bc9138 add workaround for cert-manager-webhook-ovh 2026-03-13 04:00:10 +01:00
d8c380ac7c remove configVersion from cert-manager-webhook-ovh 2026-03-13 04:00:10 +01:00
9d086645ad Update Helm release cloudnative-pg to v0.27.1 2026-03-13 04:00:10 +01:00
2cd866b33c Update renovate/renovate Docker tag to v43.31.1 2026-03-13 04:00:10 +01:00
b72d2d93d6 Update Helm release cilium to v1.19.1 2026-03-13 04:00:10 +01:00
8183285cc9 Update Helm release openbao to v0.25.5 2026-03-13 04:00:10 +01:00
514568ae40 Update Helm release cert-manager-webhook-ovh to v0.9.1 2026-03-13 04:00:09 +01:00
f4294de967 Update Helm release vault-secrets-operator to v1.3.0 2026-03-13 04:00:09 +01:00
ec0b479ef2 Update Helm release immich to v1.1.0 2026-03-13 04:00:09 +01:00
0ca2136333 change router's ip to ::1 2026-03-13 04:00:09 +01:00
726e61b54a update talos to 1.12.4 2026-03-13 04:00:09 +01:00
d0bd54cde9 remove mayastor related talos config 2026-03-13 04:00:09 +01:00
41d3629e8a clean up old mayastor config 2026-03-13 04:00:09 +01:00
0e756c46a8 disable loki and alloy 2026-03-13 04:00:09 +01:00
17f7ee8515 disable mayastor 2026-03-13 04:00:09 +01:00
596d54ae0c remove mayastor storageclass, snapshotclass 2026-03-13 04:00:09 +01:00
2290599f7e switch searxng persistent data to lvm hdd 2026-03-13 04:00:09 +01:00
a3f30873f9 switch llama models dir to lvm hdd 2026-03-13 04:00:09 +01:00
96e5202e6d add lvm hdd llama models pvc 2026-03-13 04:00:09 +01:00
8b51286a28 move openbao's data volume to lvm 2026-03-13 04:00:09 +01:00
d210a340a7 add lvm hdd openbao volume 2026-03-13 04:00:09 +01:00
93cd4605ad remove docker registry 2026-03-13 04:00:09 +01:00
664268dbfe clean up old library volume, postgres and redis 2026-03-13 04:00:09 +01:00
99d6c36e16 switch immich to new valkey 2026-03-13 04:00:09 +01:00
70ad1e0ab3 add redis authentication 2026-03-13 04:00:09 +01:00
9d3dc4a5a2 add immich valkey server 2026-03-13 04:00:09 +01:00
28d485b7b2 reconfigure immich to use new db 2026-03-13 04:00:09 +01:00
d7e3a77f73 add new postgres cluster 2026-03-13 04:00:09 +01:00
96cb5e53b1 migrate immich to new library pvc 2026-03-13 04:00:09 +01:00
0951b5173b add new immich library volume 2026-03-13 04:00:09 +01:00
acfebdef11 add explicit volume for gitea valkey 2026-03-13 04:00:09 +01:00
d7dd1f73fc migrate gitea shared storage to new volume 2026-03-13 04:00:09 +01:00
4c561cbcad add explicit gitea shared storage volume 2026-03-13 04:00:09 +01:00
976422c174 remove old postgres cluster 2026-03-13 04:00:09 +01:00
fe1d3ca12a migrate gitea to lvmhdd backed postgres 2026-03-13 04:00:09 +01:00
3144ccdb38 fix fsType on gitea postgres volume 2026-03-13 04:00:09 +01:00
ce8eb9ae13 fix storage class name on gitea postgres vol 2026-03-13 04:00:09 +01:00
673739e2c4 add btrfs extension 2026-03-13 04:00:09 +01:00
6bfc99d066 add browse-pvc krew plugin 2026-03-13 04:00:09 +01:00
a5d9082006 use separate kubeconfig 2026-03-13 04:00:09 +01:00
b20194bc13 Update redis Docker tag to v24.1.3 2026-03-13 04:00:09 +01:00
ecf1327f53 Update Helm release gitea to v12.5.0 2026-03-13 04:00:09 +01:00
038ffbf499 Update Helm release ingress-nginx to v4.14.3 2026-03-13 04:00:09 +01:00
985a0dc3b1 Update Helm release openbao to v0.25.0 2026-03-13 04:00:09 +01:00
e344ba26e8 Update registry.k8s.io/coredns/coredns Docker tag to v1.14.1 2026-03-13 04:00:09 +01:00
6ea969b44a Update alpine Docker tag to v3.23.3 2026-03-13 04:00:09 +01:00
f2ef3fdb6a Update Helm release immich to v1.0.12 2026-03-13 04:00:09 +01:00
08a09ecb9d Update renovate/renovate Docker tag to v43 2026-03-13 04:00:09 +01:00
00d8236ad8 Update Helm release cert-manager to v1.19.3 2026-03-13 04:00:09 +01:00
a06700fd53 add pv for new postgres' gitea cluster 2026-03-13 04:00:09 +01:00
4e60185ade add backup volume snapshot class for gitea postgres 2026-03-13 04:00:09 +01:00
e5cadafd19 move frigate deployment to new pvcs 2026-03-13 04:00:09 +01:00
fe5ba29264 add temporary frigate volume to migrate data 2026-03-13 04:00:09 +01:00
b978c01af4 migrate from raw flake to devenv 2026-03-13 04:00:09 +01:00
547c7d9b11 enable ts3 after copying files 2026-03-13 04:00:09 +01:00
28fbd523aa add utility to run temporary pod with pvc mounted 2026-03-13 04:00:09 +01:00
3d58fb6724 add ispeak3 ts3 server 2026-03-13 04:00:09 +01:00
5fdc621bc9 add pv-migrate to tools 2026-03-13 04:00:09 +01:00
ee23d02ec4 delete old nas pvc and use new 2026-03-13 04:00:09 +01:00
e92150a5de add secondary nas volume 2026-03-13 04:00:09 +01:00
cc9c2bca52 add lvmpv-hdd storage class 2026-03-13 04:00:09 +01:00
61d43700e9 enable openebs lvm-localpv controller 2026-03-13 04:00:09 +01:00
13cc582c7b Update Helm release cilium to v1.18.6 2026-03-13 04:00:09 +01:00
24b600427e Update registry.k8s.io/coredns/coredns Docker tag to v1.13.2 2026-03-13 04:00:09 +01:00
45a6944776 Update renovate/renovate Docker tag to v42.84.1 2026-03-13 04:00:09 +01:00
9f29aa7251 Update Helm release immich to v1.0.9 2026-03-13 04:00:08 +01:00
77904beb30 Update alpine Docker tag to v3.23.2 2026-03-13 04:00:08 +01:00
3bec27a13d Update Helm release openebs to v4.4.0 2026-03-13 04:00:08 +01:00
6a64f6cb5a Update redis Docker tag to v24 2026-03-13 04:00:08 +01:00
2d28c3aa21 Update Helm release cert-manager to v1.19.2 2026-03-13 04:00:08 +01:00
8f13e38eae Update Helm release openbao to v0.23.3 2026-03-13 04:00:08 +01:00
928136e7bf Update Helm release ingress-nginx to v4.14.1 2026-03-13 04:00:08 +01:00
ea55bf43ea Update Helm release cloudnative-pg to v0.27.0 2026-03-13 04:00:08 +01:00
72020c9f77 Update Helm release vault-secrets-operator to v1.2.0 2026-03-13 04:00:08 +01:00
3714d5663c disable librechat release, it's using bitnami's mongodb 2026-03-13 04:00:08 +01:00
20b32f1ae0 Update renovate/renovate Docker tag to v42.84.0 2026-03-13 04:00:08 +01:00
a3c6f85d1c update immich 2026-03-13 04:00:08 +01:00
9032060930 add abliterated versions of qwen3-vl 2026-03-13 04:00:08 +01:00
95879f05d7 increase free space limit on frigate to 24h and enable two-way sync 2026-03-13 04:00:08 +01:00
f13c3ae3e7 Add 8B and 2B variants of qwen3-vl 2026-03-13 04:00:08 +01:00
669beccc35 fix Qwen3-VL-4B-Instruct-GGUF models looping issue 2026-03-13 04:00:08 +01:00
5eb7b7bb0c add qwen3-vl thinking variant 2026-03-13 04:00:08 +01:00
0b677d0faf add qwen3-vl, fix librechat taking over settings and clean up llama config 2026-03-13 04:00:08 +01:00
e3325670de fix cache location after llama-swap update 2026-03-13 04:00:08 +01:00
b9200d3a4c update llama-swap 2026-03-13 04:00:08 +01:00
00ba40d168 Update Helm release cilium to v1.18.4 2026-03-13 04:00:08 +01:00
d3e00bfbc2 Update Helm release cloudnative-pg to v0.26.1 2026-03-13 04:00:08 +01:00
1db1394c6a Update Helm release openbao to v0.19.2 2026-03-13 04:00:08 +01:00
7841f58b3d Update registry.k8s.io/coredns/coredns Docker tag to v1.13.1 2026-03-13 04:00:08 +01:00
a038f5aa8c Update Helm release immich to v1.0.6 2026-03-13 04:00:08 +01:00
9cefdefa75 Update Helm release ingress-nginx to v4.14.0 2026-03-13 04:00:08 +01:00
c116a30fe3 Update renovate/renovate Docker tag to v42 2026-03-13 04:00:08 +01:00
d1a95c6001 add nas deployment 2026-03-13 04:00:08 +01:00
8063cbaf80 update llama-swap docker image 2026-03-13 04:00:08 +01:00
77ebe2cc89 Update caddy Docker tag to v2.10.2 2026-03-13 04:00:08 +01:00
4d42cd2fd6 Update Helm release cert-manager to v1.19.1 2026-03-13 04:00:08 +01:00
1137079fb6 Update renovate/renovate Docker tag to v41.152.7 2026-03-13 04:00:08 +01:00
049641cc6b Update Helm release immich to v1 2026-03-13 04:00:08 +01:00
86cae7f8eb Update Helm release openbao to v0.19.0 2026-03-13 04:00:08 +01:00
ee3323fa05 Update Helm release vault-secrets-operator to v1 2026-03-13 04:00:08 +01:00
9ac289316c Update redis Docker tag to v23 2026-03-13 04:00:08 +01:00
f239b568c4 Update Helm release immich to v0.9.7 2026-03-13 04:00:08 +01:00
b073db7438 Update Helm release librechat to v1.9.1 2026-03-13 04:00:08 +01:00
f7e9d6ee5b Update Helm release openebs to v4.3.3 2026-03-13 04:00:08 +01:00
7af6905af2 Update registry.k8s.io/coredns/coredns Docker tag to v1.13.0 2026-03-13 04:00:08 +01:00
84d553daa7 Update Helm release ingress-nginx to v4.13.3 2026-03-13 04:00:08 +01:00
50066769cd Update Helm release k8up to v4.8.6 2026-03-13 04:00:08 +01:00
2863587fc1 Update Helm release cilium to v1.18.2 2026-03-13 04:00:08 +01:00
381aba63f1 fix cert-manager-webhook-ovh config after update 2026-03-13 04:00:08 +01:00
00f3188f01 update values to current values schema 2026-03-13 04:00:08 +01:00
0ae32844c4 Update Helm release cert-manager-webhook-ovh to v0.8.0 2026-03-13 04:00:07 +01:00
072d161be7 Update Helm release gitea to v12.4.0 2026-03-13 04:00:07 +01:00
9544f4719f Add Qwen2.5-VL models 2026-03-13 04:00:07 +01:00
d5e487f831 Update renovate/renovate Docker tag to v41.82.10 2026-03-13 04:00:07 +01:00
2c46e7789f remove ollama 2026-03-13 04:00:07 +01:00
a38363662c Update Helm release gitea to v12.2.0 2026-03-13 04:00:07 +01:00
36ab225f52 Update redis Docker tag to v22 2026-03-13 04:00:07 +01:00
4347ceebeb Update Helm release ingress-nginx to v4.13.1 2026-03-13 04:00:07 +01:00
b5d27092b8 Update Helm release immich to v0.7.5 2026-03-13 04:00:07 +01:00
2543b43592 Update Helm release openbao to v0.16.3 2026-03-13 04:00:07 +01:00
033214f219 Update Helm release cloudnative-pg to v0.26.0 2026-03-13 04:00:07 +01:00
6fb2cda000 Update Helm release cilium to v1.18.1 2026-03-13 04:00:07 +01:00
2056e3be5a increase frigate config volume to 5Gi 2026-03-13 04:00:07 +01:00
624aad4938 add searxng 2026-03-13 04:00:07 +01:00
eb4ac7acf4 add qwen3-4b-2507 model 2026-03-13 04:00:07 +01:00
f447bf86fc decrease mtu on anapistuala delrosalae to 1280, hack 2026-03-13 04:00:07 +01:00
5ad66355be disable gpu accel in frigate 2026-03-13 04:00:07 +01:00
8817f18aa3 remove old nginx ingress controller 2026-03-13 04:00:07 +01:00
4d16128b5d Revert "add cameras vlan"
This reverts commit 9269f21692.
2026-03-13 04:00:07 +01:00
60fafe2a91 move all ingresses to new nginx ingress 2026-03-13 04:00:07 +01:00
e87c1df74b update gitea to new ingress 2026-03-13 04:00:07 +01:00
e363113c5e add nginx-ingress 2026-03-13 04:00:07 +01:00
feaf805208 update llama-swap 2026-03-13 04:00:07 +01:00
52c868a8dd add cameras vlan 2026-03-13 04:00:07 +01:00
c47423632a Update Helm release immich to v0.7.2 2026-03-13 04:00:07 +01:00
bac36e4c94 Update renovate/renovate Docker tag to v41.51.0 2026-03-13 04:00:07 +01:00
4ea09d6cdc Update Helm release cilium to v1.18.0 2026-03-13 04:00:07 +01:00
355f05e733 Update Helm release ollama to v1.25.0 2026-03-13 04:00:07 +01:00
3f989984ab Update Helm release immich to v0.7.1 2026-03-13 04:00:07 +01:00
7dc2ae7d87 fix nginx disconnecting too fast 2026-03-13 04:00:07 +01:00
862b411ff1 fix api endpoint in librechat 2026-03-13 04:00:07 +01:00
f9a6c0faac fix image upload in librechat 2026-03-13 04:00:07 +01:00
bf2dd44081 change chart source and update librechat 2026-03-13 04:00:07 +01:00
151d3528fb increase immich uploads volume 2026-03-13 04:00:07 +01:00
8565fb57a2 allow websockets to immich 2026-03-13 04:00:07 +01:00
93855dc712 llama automatic unloading and longer start timeout 2026-03-13 04:00:07 +01:00
241dce4524 disable warmups 2026-03-13 04:00:07 +01:00
17805e6b31 add gemma3 model 2026-03-13 04:00:07 +01:00
4b0c2020b9 use immich chart provided ingress 2026-03-13 04:00:07 +01:00
c72d798549 Update Helm release cloudnative-pg to v0.25.0 2026-03-13 04:00:07 +01:00
41dc36a52a Update renovate/renovate Docker tag to v41.43.5 2026-03-13 04:00:07 +01:00
f9a1cedc7e Update Helm release immich to v0.7.0 2026-03-13 04:00:07 +01:00
9d26ccff04 install immich 2026-03-13 04:00:07 +01:00
6f3e612dde move llama models to ssd 2026-03-13 04:00:07 +01:00
853d01f4d4 add ssd 2026-03-13 04:00:07 +01:00
8e39dafe00 fix immich postgres cluster 2026-03-13 04:00:07 +01:00
224089fe16 redis for immich 2026-03-13 04:00:07 +01:00
0848057867 Update renovate/renovate Docker tag to v41.43.2 2026-03-13 04:00:07 +01:00
fd83f896ee add immich 2026-03-13 04:00:07 +01:00
32eea7c3af add gemma3n 2026-03-13 04:00:07 +01:00
de3ef465f0 add qwen3 no thinking 2026-03-13 04:00:07 +01:00
fc8860f89a increase context size 2026-03-13 04:00:07 +01:00
869cc79898 add qwen3 2026-03-13 04:00:07 +01:00
5813db75dc gpu offload in llama.cpp 2026-03-13 04:00:07 +01:00
f0dd38fc0b add llama.cpp to librechat 2026-03-13 04:00:07 +01:00
156598de64 Update Helm release ollama to v1.24.0 2026-03-13 04:00:07 +01:00
cad6d0a471 Update Helm release openbao to v0.16.2 2026-03-13 04:00:07 +01:00
e53623dbb5 Update renovate/renovate Docker tag to v41.42.9 2026-03-13 04:00:07 +01:00
8579ff451c Update Helm release cilium to v1.17.6 2026-03-13 04:00:07 +01:00
b892de6b34 Update Helm release nginx-ingress to v2.2.1 2026-03-13 04:00:07 +01:00
a922097081 Update Helm release gitea to v12.1.2 2026-03-13 04:00:07 +01:00
af6545444b llama-swap 2026-03-13 04:00:07 +01:00
a724b3c727 adjust motion masks 2026-03-13 04:00:07 +01:00
3d8bf2d195 introduce person mask 2026-03-13 04:00:07 +01:00
ae7ca9f40d Update renovate/renovate Docker tag to v41.23.1 2026-03-13 04:00:07 +01:00
3ca6365ca4 Update Helm release ollama to v1.23.0 2026-03-13 04:00:07 +01:00
fe6dffff0e Update Helm release cert-manager to v1.18.2 2026-03-13 04:00:07 +01:00
b9b490d2ba fix config validation error 2026-03-13 04:00:07 +01:00
4c5abfcd18 run renovate once daily 2026-03-13 04:00:07 +01:00
1b2ba62394 update nix flake 2026-03-13 04:00:07 +01:00
837b97b5be tune detection objects and retention 2026-03-13 04:00:07 +01:00
411797cb07 add motion mask on cameras 2026-03-13 04:00:07 +01:00
e769ce747c fix expanding volumes 2026-03-13 04:00:07 +01:00
b0c0e0a577 increase storage for recordings 2026-03-13 04:00:07 +01:00
cdf031527f enable audio in recordings frigate 2026-03-13 04:00:07 +01:00
39ec796a2e switch to openvino cpu detector 2026-03-13 04:00:07 +01:00
5190457aa1 enable hwaccel in frigate 2026-03-13 04:00:07 +01:00
c31f567d42 use go2rtc restream to remove need for two streams from camera 2026-03-13 04:00:07 +01:00
55d24aebb9 Configure frigate webrtc 2026-03-13 04:00:07 +01:00
5f558c447e enable ingress to frigate 2026-03-13 04:00:07 +01:00
3f119c515c add cameras to frigate 2026-03-13 04:00:07 +01:00
933929511e add frigate nvr 2026-03-13 04:00:07 +01:00
11409081fb Update Helm release cert-manager-webhook-ovh to v0.7.5 2026-03-13 04:00:07 +01:00
0bb0b21a6e Update Helm release cloudnative-pg to v0.24.0 2026-03-13 04:00:07 +01:00
97a322c5c9 Update Helm release ollama to v1.21.0 2026-03-13 04:00:06 +01:00
dd5b7a5156 fix openbao injector not starting 2026-03-13 04:00:06 +01:00
067cff0043 Update Helm release openbao to v0.16.1 2026-03-13 04:00:06 +01:00
515c0c58ae Update Helm release cert-manager to v1.18.1 2026-03-13 04:00:06 +01:00
bb54cebe28 Update renovate/renovate Docker tag to v41 2026-03-13 04:00:06 +01:00
1b3f5df139 fix openebs after update 2026-03-13 04:00:06 +01:00
4a9aa5ca9e Update Helm release openebs to v4.3.2 2026-03-13 04:00:06 +01:00
a9bb43be24 Update registry.k8s.io/coredns/coredns Docker tag to v1.12.2 2026-03-13 04:00:06 +01:00
ed5f74c237 Update Helm release gitea to v12.1.1 2026-03-13 04:00:06 +01:00
8202ee0d9f Update Helm release cilium to v1.17.5 2026-03-13 04:00:06 +01:00
9b6dfe4efb Update Helm release cilium to v1.17.4 2026-03-13 04:00:06 +01:00
05686a7913 Update renovate/renovate Docker tag to v40.14.3 2026-03-13 04:00:06 +01:00
76b44470b7 fix valkey persistence in gitea chart 2026-03-13 04:00:06 +01:00
1db42b409a rename mentions of redis to valkey in gitea 2026-03-13 04:00:06 +01:00
37bd3f615c Update Helm release gitea to v12 2026-03-13 04:00:06 +01:00
db5d67be37 Update Helm release ollama to v1.17.0 2026-03-13 04:00:06 +01:00
693d8c820e move ollama api key to valut 2026-03-13 04:00:06 +01:00
f670536eeb move ovh cert-manager secret to vault 2026-03-13 04:00:06 +01:00
8251d8088a move renovate gitea token to vault 2026-03-13 04:00:06 +01:00
c2e2e91931 move some settings of renovate to configmap 2026-03-13 04:00:06 +01:00
ae6dfee85e Update renovate/renovate Docker tag to v40.11.6 2026-03-13 04:00:06 +01:00
9cac367f07 add vault secret of gitea backups 2026-03-13 04:00:06 +01:00
45dfd864e0 add vault secrets operator 2026-03-13 04:00:06 +01:00
37fdc4e939 add external-secrets 2026-03-13 04:00:06 +01:00
84cba4378c Update Helm release ollama to v1.16.0 2026-03-13 04:00:06 +01:00
b45154cc47 Update Helm release cert-manager to v1.17.2 2026-03-13 04:00:06 +01:00
9802eb1bcb Update caddy Docker tag to v2.10.0 2026-03-13 04:00:06 +01:00
dabe3cf0bf Update Helm release librechat to v1.8.10 2026-03-13 04:00:06 +01:00
0e18758068 Update renovate/renovate Docker tag to v40 2026-03-13 04:00:06 +01:00
13de92656d pin cores to minimum frequency 2026-03-13 04:00:06 +01:00
29ad46ced9 add basedpyright and make it happy 2026-03-13 04:00:06 +01:00
7d389c0a8a use nix provided python as default interpreter 2026-03-13 04:00:06 +01:00
dc7f1cc42b synchronize kubernetes auth method in recoincile script 2026-03-13 04:00:06 +01:00
36b0b83b26 gitea switch to database from cloudnativepg 2026-03-13 04:00:06 +01:00
ec9f32f901 increase ollama proxy-read-timeout on ingress 2026-03-13 04:00:06 +01:00
a85d98b5d6 fix apps kustomization 2026-03-13 04:00:06 +01:00
c7c5056562 Update renovate/renovate Docker tag to v39.253.2 2026-03-13 04:00:06 +01:00
54d5dec257 Update Helm release cilium to v1.17.3 2026-03-13 04:00:06 +01:00
854e5fa7ae Update Helm release nginx-ingress to v2.1.0 2026-03-13 04:00:06 +01:00
6671f60bde Update Helm release openbao to v0.12.0 2026-03-13 04:00:06 +01:00
4bf7bce92b remove gpt-researcher 2026-03-13 04:00:06 +01:00
dec8b8361f use tavily and openrouter in gpt researcher 2026-03-13 04:00:00 +01:00
b45a0f9263 change models used by gpt-researcher 2026-03-13 03:59:13 +01:00
b4a883cff9 enable support for websockets for researcher 2026-03-13 03:59:13 +01:00
26a9f4a03d use our own image for gpt researcher 2026-03-13 03:59:13 +01:00
7c42307aa9 add docker registry 2026-03-13 03:59:13 +01:00
d26b5ff485 add gpt-researcher 2026-03-13 03:59:13 +01:00
faf3ecfa6d update network config 2026-03-13 03:59:13 +01:00
c1b8f2d9f3 increase ollama proxy timeout 2026-03-13 03:59:13 +01:00
883d705436 Update renovate/renovate Docker tag to v39.240.1 2026-03-13 03:59:13 +01:00
e96f17230a Update Helm release ollama to v1.14.0 2026-03-13 03:59:13 +01:00
c4d7311a25 Update registry.k8s.io/coredns/coredns Docker tag to v1.12.1 2026-03-13 03:59:13 +01:00
de886071eb Update Helm release community-operator to v0.13.0 2026-03-13 03:59:13 +01:00
b1d1197373 disable proxy bufferring in ollama ingress 2026-03-13 03:59:13 +01:00
35cd6cad03 deploy gitea postgres cluster 2026-03-13 03:59:13 +01:00
da9a61c086 Fix librechat kustomization typo 2026-03-13 03:59:13 +01:00
e64ef24f11 Split renovate deployment to files 2026-03-13 03:59:13 +01:00
52b0feec66 Split librechat deployment to files 2026-03-13 03:59:12 +01:00
9a9c1a45db split ollama deployment to files 2026-03-13 03:59:12 +01:00
8ad179c72f split gitea deployment to files 2026-03-13 03:59:12 +01:00
432d03772a Move gitea kustomization to subdir 2026-03-13 03:59:12 +01:00
59703c8d12 install cloudnativepg 2026-03-13 03:59:12 +01:00
88de916e22 Update renovate/renovate Docker tag to v39.233.3 2026-03-13 03:59:12 +01:00
db4e79e3e6 Update Helm release community-operator to v0.12.1 2026-03-13 03:59:12 +01:00
2c30aaed8c Update Helm release ollama to v1.13.0 2026-03-13 03:59:12 +01:00
be103c322c enable search in librechat 2026-03-13 03:59:12 +01:00
1c4b540fdb add ingress to librechat 2026-03-13 03:59:12 +01:00
535a70d85e Install librechat from different chart 2026-03-13 03:59:12 +01:00
1b6ba010fd Remove old librechat deployment 2026-03-13 03:59:12 +01:00
81fd0c6d08 Add librechat 2026-03-13 03:59:12 +01:00
af99a3566e Add mongodb database for librechat 2026-03-13 03:59:12 +01:00
1210865c54 Mongodb operator 2026-03-13 03:59:12 +01:00
f5bc134dcf Update renovate/renovate Docker tag to v39.221.0 2026-03-13 03:59:12 +01:00
0386244e10 vulkan support in ollama 2026-03-13 03:59:12 +01:00
7e4a5fd170 Disable flux network policy 2026-03-13 03:59:12 +01:00
de211a74c6 Update renovate/renovate Docker tag to v39.220.4 2026-03-13 03:59:12 +01:00
853f1b14a3 Update Helm release ollama to v1.12.0 2026-03-13 03:59:12 +01:00
465eb1cd5e Ollama proxy fix secret ref 2026-03-13 03:59:12 +01:00
5d0b6d1b99 add cert-manager annotation to ollama ingress 2026-03-13 03:59:12 +01:00
0ad763649b disable https for caddy 2026-03-13 03:59:12 +01:00
c5d4b70fd4 add ollama proxy and ingress 2026-03-13 03:59:12 +01:00
d918a548fd Update renovate/renovate Docker tag to v39.218.1 2026-03-13 03:59:12 +01:00
f832e58040 Update Helm release gitea to v11.0.1 2026-03-13 03:59:12 +01:00
f9d79ad402 add ollama deployment 2026-03-13 03:59:12 +01:00
461e2e0f01 Reapply "Merge pull request 'Update Helm release gitea to v11' (#9) from renovate/gitea-11.x into fresh-start"
This reverts commit d9a22723ef.
2026-03-13 03:59:12 +01:00
4a4e646b0a Revert "Merge pull request 'Update Helm release gitea to v11' (#9) from renovate/gitea-11.x into fresh-start"
This reverts commit f97a655ad5, reversing
changes made to f36ce88026.
2026-03-13 03:59:12 +01:00
4020b93dca Remove custom gitea tag from values 2026-03-13 03:59:12 +01:00
fb2d5cbcea Update Helm release gitea to v11 2026-03-13 03:59:12 +01:00
177bfa0d1a Update Helm release openebs to v4.2.0 2026-03-13 03:59:12 +01:00
066555c312 Update renovate/renovate Docker tag to v39.216.1 2026-03-13 03:59:12 +01:00
d2854403cd renovate improve yaml matching 2026-03-13 03:59:12 +01:00
0a715524fc Update Helm release openbao to v0.10.1 2026-03-13 03:59:12 +01:00
fb819fbd4a Update Helm release k8up to v4.8.4 2026-03-13 03:59:12 +01:00
d9a761c02a Update Helm release cert-manager to v1.17.1 2026-03-13 03:59:12 +01:00
200 changed files with 6110 additions and 823 deletions

.envrc Normal file

@@ -0,0 +1,12 @@
#!/usr/bin/env bash
export DIRENV_WARN_TIMEOUT=20s
eval "$(devenv direnvrc)"
# `use devenv` supports the same options as the `devenv shell` command.
#
# To silence all output, use `--quiet`.
#
# Example usage: use devenv --quiet --impure --option services.postgres.enable:bool true
use devenv

.gitignore vendored

@@ -1,2 +1,13 @@
secrets.yaml
talos/generated
talos/generated
# Devenv
.devenv*
devenv.local.nix
devenv.local.yaml
# direnv
.direnv
# pre-commit
.pre-commit-config.yaml
.opencode

.gitmodules vendored

@@ -1,3 +0,0 @@
[submodule "openwrt/roles/ansible-openwrt"]
path = openwrt/roles/ansible-openwrt
url = https://github.com/gekmihesg/ansible-openwrt.git


@@ -1,3 +1,8 @@
{
"recommendations": ["arrterian.nix-env-selector", "jnoortheen.nix-ide"]
"recommendations": [
"jnoortheen.nix-ide",
"detachhead.basedpyright",
"mkhl.direnv",
"mermaidchart.vscode-mermaid-chart"
]
}

.vscode/settings.json vendored

@@ -1,12 +1,4 @@
{
"nixEnvSelector.nixFile": "${workspaceFolder}/shell.nix",
"terminal.integrated.profiles.linux": {
"Nix Shell": {
"path": "nix",
"args": ["develop"],
"icon": "terminal-linux"
}
},
"terminal.integrated.defaultProfile.linux": "Nix Shell",
"ansible.python.interpreterPath": "/bin/python"
"ansible.python.interpreterPath": "/bin/python",
"python.defaultInterpreterPath": "${env:PYTHON_BIN}"
}


@@ -0,0 +1,49 @@
when:
  - event: push
    branch: fresh-start
skip_clone: true
steps:
  - name: Get kubernetes access from OpenBao
    image: quay.io/openbao/openbao:2.5.2
    environment:
      VAULT_ADDR: https://openbao.lumpiasty.xyz:8200
      ROLE_ID:
        from_secret: flux_reconcile_role_id
      SECRET_ID:
        from_secret: flux_reconcile_secret_id
    commands:
      - bao write -field token auth/approle/login
        role_id=$ROLE_ID
        secret_id=$SECRET_ID > /woodpecker/.vault_id
      - export VAULT_TOKEN=$(cat /woodpecker/.vault_id)
      - bao write -format json -f /kubernetes/creds/flux-reconcile > /woodpecker/kube_credentials
  - name: Construct Kubeconfig
    image: alpine/k8s:1.32.13
    environment:
      KUBECONFIG: /woodpecker/kubeconfig
    commands:
      - kubectl config set-cluster cluster
        --server=https://$KUBERNETES_SERVICE_HOST
        --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      - kubectl config set-credentials cluster
        --token=$(jq -r .data.service_account_token /woodpecker/kube_credentials)
      - kubectl config set-context cluster
        --cluster cluster
        --user cluster
        --namespace flux-system
      - kubectl config use-context cluster
  - name: Reconcile git source
    image: ghcr.io/fluxcd/flux-cli:v2.8.3
    environment:
      KUBECONFIG: /woodpecker/kubeconfig
    commands:
      - flux reconcile source git flux-system
  - name: Invalidate OpenBao token
    image: quay.io/openbao/openbao:2.5.2
    environment:
      VAULT_ADDR: https://openbao.lumpiasty.xyz:8200
    commands:
      - export VAULT_TOKEN=$(cat /woodpecker/.vault_id)
      - bao write -f auth/token/revoke-self


@@ -1,12 +1,29 @@
SHELL := /usr/bin/env bash
.PHONY: install-router gen-talos-config apply-talos-config get-kubeconfig
install-router:
	ansible-playbook ansible/playbook.yml -i ansible/hosts
gen-talos-config:
	mkdir -p talos/generated
	talosctl gen config --with-secrets secrets.yaml --config-patch @talos/patches/controlplane.patch --config-patch @talos/patches/openebs.patch --config-patch @talos/patches/openbao.patch --config-patch @talos/patches/anapistula-delrosalae.patch --output-types controlplane -o talos/generated/anapistula-delrosalae.yaml homelab https://kube-api.homelab.lumpiasty.xyz:6443
	talosctl gen config \
		--with-secrets secrets.yaml \
		--config-patch @talos/patches/controlplane.patch \
		--config-patch @talos/patches/openebs.patch \
		--config-patch @talos/patches/openbao.patch \
		--config-patch @talos/patches/ollama.patch \
		--config-patch @talos/patches/llama.patch \
		--config-patch @talos/patches/frigate.patch \
		--config-patch @talos/patches/anapistula-delrosalae.patch \
		--output-types controlplane -o talos/generated/anapistula-delrosalae.yaml \
		homelab https://kube-api.homelab.lumpiasty.xyz:6443
	talosctl gen config --with-secrets secrets.yaml --config-patch @talos/patches/controlplane.patch --output-types worker -o talos/generated/worker.yaml homelab https://kube-api.homelab.lumpiasty.xyz:6443
	talosctl gen config --with-secrets secrets.yaml --output-types talosconfig -o talos/generated/talosconfig homelab https://kube-api.homelab.lumpiasty.xyz:6443
	talosctl config endpoint kube-api.homelab.lumpiasty.xyz
apply-talos-config:
	talosctl -n anapistula-delrosalae apply-config -f talos/generated/anapistula-delrosalae.yaml
get-kubeconfig:
	talosctl -n anapistula-delrosalae kubeconfig talos/generated/kubeconfig

README.md

@@ -1,106 +1,293 @@
# Homelab
## Goals
This repo contains configuration and documentation for my homelab setup, which is based on Talos OS for the Kubernetes cluster and a MikroTik router.
Wanting to set up homelab kubernetes cluster.
## Architecture
### Software
The physical setup consists of a MikroTik router, which connects to the internet and serves as a gateway for the cluster and other devices in the home network, as shown in the diagram below.
1. Running applications
   1. NAS, backups, security recorder
   2. Online presence, website, email, communicators (ts3, matrix?)
   3. Git server, container registry
   4. Environment to deploy my own apps
   5. Some LLM server, apps for my own use
   6. Public services like Tor, mirrors of linux distros etc.
   7. [Some frontends](https://libredirect.github.io/)
   8. [Awesome-Selfhosted](https://github.com/awesome-selfhosted/awesome-selfhosted), [Awesome Sysadmin](https://github.com/awesome-foss/awesome-sysadmin)
2. Managing them hopefully using GitOps
   1. FluxCD, Argo etc.
   2. State of cluster in git, all apps version pinned
   3. Some bot to inform about updates?
3. It's a home**lab**
   1. Should be open to experimenting
   2. Avoiding vendor lock-in, changing my mind shouldn't block me for too long
   3. Backups of important data in easy to access format
   4. Expecting downtime, no critical workloads
   5. Trying to keep it reasonably up anyways
```mermaid
%%{init: {"flowchart": {"ranker": "tight-tree"}}}%%
flowchart TD
subgraph internet[Internet]
ipv4[IPv4 Internet]
ipv6[IPv6 Internet]
he_tunnel[Hurricane Electric IPv6 Tunnel Broker]
isp[ISP]
end
subgraph home[Home network]
router[MikroTik Router]
cluster[Talos cluster]
lan[LAN]
mgmt[Management network]
cam[Camera system]
router --> lan
router --> cluster
router --> mgmt
router --> cam
end
ipv4 -- "Public IPv4 address" --> isp
ipv6 -- "Routed /48 IPv6 prefix" --> he_tunnel -- "6in4 Tunnel" --> isp
isp --> router
```
Devices are separated into VLANs and subnets for isolation and firewalling between devices and services. The whole internal network is configured to eliminate NAT where unnecessary. Pods on the Kubernetes cluster communicate with the router using native IP routing; there is no encapsulation, overlay network, nor NAT on the nodes. The router knows where to direct packets destined for the pods because the cluster announces its IP prefixes to the router using BGP. The router also performs NAT for IPv4 traffic between the cluster and the internet, while IPv6 traffic is routed directly to the internet without NAT. A high-level logical routing diagram is shown below.
```mermaid
flowchart TD
isp[ISP] --- gpon
subgraph device[MikroTik CRS418-8P-8G-2s+]
direction TB
gpon[SFP GPON ONU]
pppoe[PPPoE client]
he_tunnel[HE Tunnel]
router[Router]@{ shape: cyl }
dockers["""
Dockers Containers (bridge)
2001:470:61a3:500::/64
172.17.0.0/16
"""]@{ shape: cloud }
tailscale["Tailscale Container"]
lan["""
LAN (vlan2)
2001:470:61a3::/64
192.168.0.0/24
"""]@{ shape: cloud }
mgmt["""
Management network (vlan1)
192.168.255.0/24
"""]@{ shape: cloud }
cam["""
Camera system (vlan3)
192.168.3.0/24
"""]@{ shape: cloud }
cluster["""
Kubernetes cluster (vlan4)
2001:470:61a3:100::/64
192.168.1.0/24
"""]@{ shape: cloud }
gpon --- pppoe -- """
139.28.40.212
Default IPv4 gateway
""" --- router
pppoe --- he_tunnel -- """
2001:470:61a3:: incoming
Default IPv6 gateway
""" --- router
router -- """
2001:470:61a3:500:ffff:ffff:ffff:ffff
172.17.0.1/16
""" --- dockers --- tailscale
router -- """
2001:470:61a3:0:ffff:ffff:ffff:ffff
192.168.0.1
"""--- lan
router -- """
192.168.255.10
"""--- mgmt
router -- "192.168.3.1" --- cam
router -- """
2001:470:61a3:100::1
192.168.1.1
""" --- cluster
end
subgraph k8s[K8s cluster]
direction TB
pod_network["""
Pod networks
2001:470:61a3:200::/104
10.42.0.0/16
(Dynamically allocated /120 IPv6 and /24 IPv4 prefixes per node)
"""]@{ shape: cloud }
service_network["""
Service network
2001:470:61a3:300::/112
10.43.0.0/16
(Advertises vIP addresses via BGP from nodes hosting endpoints)
"""]@{ shape: cloud }
load_balancer["""
Load balancer network
2001:470:61a3:400::/112
10.44.0.0/16
(Advertises vIP addresses via BGP from nodes hosting endpoints)
"""]@{ shape: cloud }
end
cluster -- "Routes exported via BGP" ----- k8s
```
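The BGP announcement described above is handled by Cilium's BGP control plane. A minimal sketch of what such a peering policy can look like follows — the policy name, ASNs, and exact manifest are hypothetical here; the real values live in this repo's Cilium configuration:

```shell
# Sketch only: hypothetical name and ASNs; the router address comes from the diagram above.
kubectl apply -f - <<'EOF'
apiVersion: cilium.io/v2alpha1
kind: CiliumBGPPeeringPolicy
metadata:
  name: mikrotik-peering          # hypothetical name
spec:
  virtualRouters:
    - localASN: 64512             # hypothetical private ASN for the cluster
      exportPodCIDR: true         # announce per-node pod prefixes to the router
      neighbors:
        - peerAddress: "192.168.1.1/32"   # MikroTik router on vlan4
          peerASN: 64513                  # hypothetical router ASN
EOF
```

With `exportPodCIDR` enabled, each node advertises its dynamically allocated pod prefixes, which is what lets the router reach pods directly without any overlay or NAT.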
Currently the k8s cluster consists of a single node (hostname anapistula-delrosalae): a PC with a Ryzen 5 3600, 64GB RAM, an RX 580 8GB (for accelerating LLMs), a 1TB NVMe SSD, and 2TB and 3TB HDDs, serving as both control plane and worker node.
## Software stack
The cluster itself is based on [Talos Linux](https://www.talos.dev/) (which is also a Kubernetes distribution) and uses [Cilium](https://cilium.io/) as CNI, IPAM, kube-proxy replacement, Load Balancer, and BGP control plane. Persistent volumes are managed by [OpenEBS LVM LocalPV](https://openebs.io/docs/user-guides/local-storage-user-guide/local-pv-lvm/lvm-overview). Applications are deployed using GitOps (this repo) and reconciled on the cluster using [Flux](https://fluxcd.io/). The Git repository is hosted on [Gitea](https://gitea.io/) running on the cluster itself. Secrets are kept in [OpenBao](https://openbao.org/) (HashiCorp Vault fork) running on the cluster and synced to cluster objects using the [Vault Secrets Operator](https://github.com/hashicorp/vault-secrets-operator). Deployments are kept up to date by a self-hosted [Renovate](https://www.mend.io/renovate/) bot updating manifests in the Git repository. There is a [Woodpecker](https://woodpecker-ci.org/) instance watching repositories on Gitea and scheduling jobs on the cluster. Incoming HTTP traffic is routed to the cluster using the [Nginx Ingress Controller](https://kubernetes.github.io/ingress-nginx/), and certificates are issued by [cert-manager](https://cert-manager.io/) with the [Let's Encrypt](https://letsencrypt.org/) ACME issuer, with [cert-manager-webhook-ovh](https://github.com/aureq/cert-manager-webhook-ovh) resolving DNS-01 challenges. The cluster also runs the [CloudNativePG](https://cloudnative-pg.io/) operator for managing PostgreSQL databases. The router runs [Mikrotik RouterOS](https://help.mikrotik.com/docs/spaces/ROS/pages/328059/RouterOS) and its configuration is managed via an [Ansible](https://docs.ansible.com/) playbook in this repo. The high-level core cluster software architecture is shown in the diagram below.
> Talos Linux is an immutable Linux distribution purpose-built for running Kubernetes. The OS is distributed as an OCI (Docker) image and does not contain any package manager, shell, SSH, or any other tools for managing the system. Instead, all operations are performed using API, which can be accessed using `talosctl` CLI tool.
```mermaid
flowchart TD
router[MikroTik Router]
router -- "Routes HTTP traffic" --> nginx
cilium -- "Announces routes via BGP" --> router
subgraph cluster[K8s cluster]
direction TB
flux[Flux CD] -- "Reconciles manifests" --> kubeapi[Kube API Server]
flux -- "Fetches Git repo" --> gitea[Gitea]
kubeapi -- "Configs, Services, Pods" --> cilium[Cilium]
cilium -- "Routing" --> services[Services] -- "Endpoints" --> pods[Pods]
cilium -- "Configures routing, interfaces, IPAM" --> pods[Pods]
kubeapi -- "Ingress rules" --> nginx[NGINX Ingress Controller] -- "Routes HTTP traffic" ---> pods
kubeapi -- "Certificate requests" --> cert_manager[cert-manager] -- "Provides certificates" --> nginx
cert_manager -- "ACME DNS-01 challenges" --> dns_webhook[cert-manager-webhook-ovh] -- "Resolves DNS challenges" --> ovh[OVH DNS]
cert_manager -- "Requests DNS-01 challenges" --> acme[Let's Encrypt ACME server] -- "Verifies domain ownership" --> ovh
kubeapi -- "Assigns pods" --> kubelet[Kubelet] -- "Manages" --> pods
kubeapi -- "PVs, LvmVols" --> openebs[OpenEBS LVM LocalPV]
openebs -- "Mounts volumes" --> pods
openebs -- "Manages" --> lv[LVM LVs]
kubeapi -- "Gets Secret refs" --> vault_operator[Vault Secrets Operator] -- "Syncs secrets" --> kubeapi
vault_operator -- "Retrieves secrets" --> vault[OpenBao] -- "Secret storage" --> lv
vault -- "Auth method" --> kubeapi
gitea -- "Receives events" --> woodpecker[Woodpecker CI] -- "Schedules jobs" --> kubeapi
gitea -- "Stores repositories" --> lv
gitea--> renovate[Renovate Bot] -- "Updates manifests" --> gitea
end
```
### Reconciliation paths of each component
- Kubernetes manifests are reconciled using Flux, triggered by Woodpecker CI on push
- RouterOS configs are applied by Ansible <!-- ran by Gitea Action on push -->
- Talos configs are applied using makefile <!-- switch to ansible and trigger on action push -->
- Vault policies are applied by running `synchronize-vault.py` <!-- triggered by Gitea action on push -->
<!-- - Docker images are built and pushed to registry by Gitea Actions on push -->
<!-- TODO: Backups, monitoring, logging, deployment with ansible etc -->
## Software
### Infrastructure
1. Using commodity hardware
2. Reasonably scalable
3. Preferably mobile workloads, software should be a bit more flexible than me moving disks and data
4. Replication is overkill for most data
5. Preferably dynamically configured network
   1. BGP with OpenWRT router
   2. Dynamically allocated host subnets
   3. Load-balancing (MetalLB?), ECMP on router
   4. Static IP configurations on nodes
6. IPv6 native, IPv4 accessible
   1. IPv6 has whole block routed to us which gives us control over address routing and usage
   2. Which allows us to expose services directly to the internet without complex router config
   3. Which allows us to use eg. ExternalDNS to autoconfigure domain names for LB
   4. But majority of the world still runs IPv4, which should be supported for public services
   5. Exposing IPv4 service may require additional reconfiguration of router, port forwarding, manual domain setting or controller doing this some day in future
   6. One public IPv4 address means probably extensive use of rule-based ingress controllers
   7. IPv6 internet from pods should not be NATed
   8. IPv4 internet from pods should be NATed by router
### Operating systems
### Current implementation idea
| Logo | Name | Description |
|------|------|-------------|
| <img src="docs/assets/talos.svg" alt="Talos Linux" height="50" width="50"> | Talos Linux | Kubernetes distribution and operating system for cluster nodes |
| <img src="docs/assets/mikrotik.svg" alt="MikroTik RouterOS" height="50" width="50"> | MikroTik RouterOS | Router operating system for MikroTik devices |
1. Cluster server nodes running Talos
2. OpenWRT router
   1. VLAN / virtual interface, for cluster
   2. Configuring using Ansible
   3. Peering with cluster using BGP
   4. Load-balancing using ECMP
3. Cluster networking
   1. Cilium CNI
   2. Native routing, no encapsulation or overlay
   3. Using Cilium's network policies for firewall needs
   4. IPv6 address pool
      1. Nodes: 2001:470:61a3:100::/64
      2. Pods: 2001:470:61a3:200::/64
      3. Services: 2001:470:61a3:300::/112
      4. Load balancer: 2001:470:61a3:400::/112
   5. IPv4 address pool
      1. Nodes: 192.168.1.32/27
      2. Pods: 10.42.0.0/16
      3. Services: 10.43.0.0/16
      4. Load balancer: 10.44.0.0/16
4. Storage
   1. OS is installed on dedicated disk
   2. Mayastor managing all data disks
      1. DiskPool for each data disk in cluster, labelled by type SSD or HDD
      2. Creating StorageClass for each topology need (type, whether to replicate, on which node etc.)
### Configuration management
## Working with repo
| Logo | Name | Description |
|------|------|-------------|
| <img src="docs/assets/flux.svg" alt="Flux CD" height="50" width="50"> | Flux CD | GitOps operator for reconciling cluster state with Git repository |
| <img src="docs/assets/ansible.svg" alt="Ansible" height="50" width="50"> | Ansible | Configuration management and automation tool |
| | Vault Secrets Operator | Kubernetes operator for syncing secrets from OpenBao/Vault to Kubernetes |
Repo is preconfigured to use with nix and vscode
### Networking
Install nix, vscode should pick up settings and launch terminals in `nix develop` with all needed utils.
| Logo | Name | Description |
|------|------|-------------|
| <img src="docs/assets/cilium.svg" alt="Cilium" height="50" width="50"> | Cilium | CNI, BGP control plane, kube-proxy replacement and Load Balancer for cluster networking |
| <img src="docs/assets/nginx.svg" alt="Nginx" height="50" width="50"> | Nginx Ingress Controller | Ingress controller for routing external traffic to services in the cluster |
| <img src="docs/assets/cert-manager.svg" alt="cert-manager" height="50" width="50"> | cert-manager | Automatic TLS certificate management |
## Bootstrapping cluster
### Storage
1. Configure OpenWRT, create dedicated interface for connecting server
   1. Set up node subnet, routing
   2. Create static host entry `kube-api.homelab.lumpiasty.xyz` pointing at ipv6 of first node
2. Connect server
3. Grab Talos ISO, dd it to usb stick
4. Boot it and, using the keyboard, set up a static IPv6 address in the node subnet; it should become reachable from your PC
5. `talosctl gen config homelab https://kube-api.homelab.lumpiasty.xyz:6443`
6. Generate secrets `talosctl gen secrets`, **backup, keep `secrets.yaml` safe**
7. Generate config files `make gen-talos-config`
8. Apply config to first node `talosctl apply-config --insecure -n 2001:470:61a3:100::2 -f controlplane.yml`
9. Wait for reboot then `talosctl bootstrap --talosconfig=talosconfig -n 2001:470:61a3:100::2`
10. Set up router and CNI
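The command steps above can be sketched as one shell session (node address and file names taken from the steps; adjust for your own hardware):

```shell
# Generate and back up cluster secrets (keep secrets.yaml safe!)
talosctl gen secrets

# Compile full configs from the patches in talos/patches/
make gen-talos-config

# The first apply runs against a node without client certs, hence --insecure
talosctl apply-config --insecure -n 2001:470:61a3:100::2 -f controlplane.yml

# After the node reboots, bootstrap etcd on it
talosctl bootstrap --talosconfig=talosconfig -n 2001:470:61a3:100::2
```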
| Logo | Name | Description |
|------|------|-------------|
| <img src="docs/assets/openebs.svg" alt="OpenEBS" height="50" width="50"> | OpenEBS LVM LocalPV | Container Storage Interface for managing persistent volumes on local LVM pools |
| <img src="docs/assets/openbao.svg" alt="OpenBao" height="50" width="50"> | OpenBao | Secret storage (HashiCorp Vault compatible) |
| <img src="docs/assets/cloudnativepg.svg" alt="CloudNativePG" height="50" width="50"> | CloudNativePG | PostgreSQL operator for managing PostgreSQL instances |
## Updating Talos config
### Development tools
Update patches and re-generate and apply configs.
| Logo | Name | Description |
|------|------|-------------|
| <img src="docs/assets/devenv.svg" alt="devenv" height="50" width="50"> | devenv | Tool for declarative management of development environment using Nix |
| <img src="docs/assets/renovate.svg" alt="Renovate" height="50" width="50"> | Renovate | Bot for keeping dependencies up to date |
| <img src="docs/assets/woodpecker.svg" alt="Woodpecker" height="50" width="50"> | Woodpecker CI | Continuous Integration system |
```
make gen-talos-config
make apply-talos-config
```
### AI infrastructure
| Logo | Name | Address | Description |
|------|------|---------|-------------|
| <img src="docs/assets/llama-cpp.svg" alt="LLaMA.cpp" height="50" width="50"> | LLaMA.cpp | https://llama.lumpiasty.xyz/ | LLM inference server running local models with GPU acceleration |
### Applications/Services
| Logo | Name | Address | Description |
|------|------|---------|-------------|
| <img src="docs/assets/gitea.svg" alt="Gitea" height="50" width="50"> | Gitea | https://gitea.lumpiasty.xyz/ | Private Git repository hosting and artifact storage (Docker, Helm charts) |
| <img src="docs/assets/open-webui.png" alt="Open WebUI" height="50" width="50"> | Open WebUI | https://openwebui.lumpiasty.xyz/ | Web UI for chatting with LLMs running on the cluster |
| <img src="docs/assets/teamspeak.svg" alt="iSpeak3" height="50" width="50"> | iSpeak3.pl | [ts3server://ispeak3.pl](ts3server://ispeak3.pl) | Public TeamSpeak 3 voice communication server |
| <img src="docs/assets/immich.svg" alt="Immich" height="50" width="50"> | Immich | https://immich.lumpiasty.xyz/ | Self-hosted photo and video backup and streaming service |
| <img src="docs/assets/frigate.svg" alt="Frigate" height="50" width="50"> | Frigate | https://frigate.lumpiasty.xyz/ | NVR for camera system with AI object detection and classification |
## Development
This repo leverages [devenv](https://devenv.sh/) for easy setup of a development environment. Install devenv, clone this repo and run `devenv shell` to make the tools and environment variables available in your shell. Alternatively, you can use direnv to automatically enable the environment when entering the directory in your shell. You can also install the [direnv extension](https://marketplace.visualstudio.com/items?itemName=mkhl.direnv) in VSCode to automatically set up the environment after opening the workspace so all the fancy intellisense and extensions detect stuff correctly.
### App deployment
This repo is being watched by Flux running on the cluster. To change config or add a new app, simply commit to this repo and wait a while for the cluster to reconcile the changes. You can speed up this process by "notifying" Flux using `flux reconcile source git flux-system`.
Flux watches 3 kustomizations in this repo:
- flux-system - [cluster/flux-system](cluster/flux-system) directory, contains flux manifests
- infra - [infra](infra) directory, contains cluster infrastructure manifests like storage classes, network policies, monitoring etc.
- apps - [apps](apps) directory, contains manifests for applications deployed on cluster
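For example, a full manual reconcile after a push can be chained through the source and the three kustomizations (standard flux CLI invocations; names match the list above):

```shell
flux reconcile source git flux-system      # fetch the latest commit
flux reconcile kustomization flux-system   # flux's own manifests
flux reconcile kustomization infra         # cluster infrastructure
flux reconcile kustomization apps          # application deployments
```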
### Talos config changes
Talos config in this repo is stored as yaml patches under the [talos/patches](talos/patches) directory. Those patches can be compiled into full Talos config files using the `make gen-talos-config` command. The full config can then be applied to the cluster using `make apply-talos-config`, which applies the config to all nodes in the cluster.
To compile the config, you need the secrets file, which contains certificates and keys for the cluster. Those secrets are incorporated into the final config files; that is also why we cannot store the full config in the repo.
### Router config changes
Router config is stored as an Ansible playbook under the `ansible/` directory. To apply changes to the router, run `ansible-playbook playbooks/routeros.yml` in the `ansible/` directory. Before running the playbook, you can preview the changes by passing the `--check` flag to `ansible-playbook`, which runs the playbook in check mode and shows the changes that would be applied without actually applying them. This is useful for verifying that your changes are correct before touching the router.
To run the playbook, you need the required Ansible collections installed. You can install them with `ansible-galaxy collection install -r ansible/requirements.yml`. Configuring this in devenv is yet to be done, so for now you might need to install the collections manually.
Secrets needed to access the router API are stored in OpenBao and loaded on demand when running the playbook, so you need access to the appropriate secrets.
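A typical change cycle, as described above (the collection install is a one-time step):

```shell
# One-time: install the required collections
ansible-galaxy collection install -r ansible/requirements.yml

cd ansible
# Dry run: show what would change on the router without applying it
ansible-playbook --check playbooks/routeros.yml
# Apply for real once the diff looks right
ansible-playbook playbooks/routeros.yml
```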
### Kube API access
To generate a kubeconfig for accessing the cluster API, run the `make get-kubeconfig` command, which writes it to `talos/generated/kubeconfig`. Devenv automatically sets the `KUBECONFIG` environment variable to point at this file, so you can start using `kubectl` right away.
As above, you need the secrets file to generate the kubeconfig.
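In practice that is just (again, with the secrets file present):

```shell
make get-kubeconfig        # writes talos/generated/kubeconfig
kubectl get nodes          # KUBECONFIG is already exported by devenv
```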
<!-- TODO: Add instructions for setting up Router -->

ansible/README.md

@@ -0,0 +1,20 @@
## RouterOS Ansible
This directory contains the new Ansible automation for the MikroTik router.
- Transport: RouterOS API (`community.routeros` collection), not SSH CLI scraping.
- Layout: one playbook (`playbooks/routeros.yml`) importing domain task files from `tasks/`.
- Goal: idempotent convergence using `community.routeros.api_modify` for managed paths.
### Quick start
1. Install dependencies:
- `ansible-galaxy collection install -r ansible/requirements.yml`
- `python -m pip install librouteros hvac`
2. Configure secret references in `ansible/vars/routeros-secrets.yml`.
3. Store the required fields in OpenBao under the configured KV path.
4. Export a token (`OPENBAO_TOKEN` or `VAULT_TOKEN`).
5. Run:
- `ANSIBLE_CONFIG=ansible/ansible.cfg ansible-playbook ansible/playbooks/routeros.yml`
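Steps 4–5 combined into one invocation (the OpenBao address and token file are placeholders, not the real values):

```shell
export VAULT_ADDR=https://openbao.example.internal:8200   # placeholder address
export VAULT_TOKEN="$(cat ~/.openbao-token)"              # placeholder; OPENBAO_TOKEN also works
ANSIBLE_CONFIG=ansible/ansible.cfg ansible-playbook ansible/playbooks/routeros.yml
```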
More details and design rationale: `docs/ansible/routeros-design.md`.

ansible/ansible.cfg

@@ -0,0 +1,5 @@
[defaults]
inventory = inventory/hosts.yml
host_key_checking = False
retry_files_enabled = False
result_format = yaml


@@ -1,2 +0,0 @@
[openwrt]
2001:470:61a3:100:ffff:ffff:ffff:ffff ansible_scp_extra_args="-O"


@@ -0,0 +1,6 @@
all:
children:
mikrotik:
hosts:
crs418:
ansible_host: 192.168.255.10


@@ -1,6 +0,0 @@
- name: Configure router
hosts: openwrt
remote_user: root
roles:
- ansible-openwrt
- router


@@ -0,0 +1,92 @@
---
- name: Converge MikroTik RouterOS config
hosts: mikrotik
gather_facts: false
connection: local
vars_files:
- ../vars/routeros-secrets.yml
pre_tasks:
- name: Load router secrets from OpenBao
ansible.builtin.set_fact:
routeros_api_username: >-
{{
lookup(
'community.hashi_vault.vault_kv2_get',
openbao_fields.routeros_api.path,
engine_mount_point=openbao_kv_mount
).secret[openbao_fields.routeros_api.username_key]
}}
routeros_api_password: >-
{{
lookup(
'community.hashi_vault.vault_kv2_get',
openbao_fields.routeros_api.path,
engine_mount_point=openbao_kv_mount
).secret[openbao_fields.routeros_api.password_key]
}}
routeros_pppoe_username: >-
{{
lookup(
'community.hashi_vault.vault_kv2_get',
openbao_fields.wan_pppoe.path,
engine_mount_point=openbao_kv_mount
).secret[openbao_fields.wan_pppoe.username_key]
}}
routeros_pppoe_password: >-
{{
lookup(
'community.hashi_vault.vault_kv2_get',
openbao_fields.wan_pppoe.path,
engine_mount_point=openbao_kv_mount
).secret[openbao_fields.wan_pppoe.password_key]
}}
routeros_tailscale_container_password: >-
{{
lookup(
'community.hashi_vault.vault_kv2_get',
openbao_fields.routeros_tailscale_container.path,
engine_mount_point=openbao_kv_mount
).secret[openbao_fields.routeros_tailscale_container.container_password_key]
}}
no_log: true
module_defaults:
group/community.routeros.api:
hostname: "{{ ansible_host }}"
username: "{{ routeros_api_username }}"
password: "{{ routeros_api_password }}"
tls: true
validate_certs: false
validate_cert_hostname: false
force_no_cert: true
encoding: UTF-8
tasks:
- name: Preflight checks
ansible.builtin.import_tasks: ../tasks/preflight.yml
- name: Base network configuration
ansible.builtin.import_tasks: ../tasks/base.yml
- name: WAN and tunnel interfaces
ansible.builtin.import_tasks: ../tasks/wan.yml
- name: Hardware and platform tuning
ansible.builtin.import_tasks: ../tasks/hardware.yml
- name: RouterOS container configuration
ansible.builtin.import_tasks: ../tasks/containers.yml
- name: Addressing configuration
ansible.builtin.import_tasks: ../tasks/addressing.yml
- name: Firewall configuration
ansible.builtin.import_tasks: ../tasks/firewall.yml
- name: Routing configuration
ansible.builtin.import_tasks: ../tasks/routing.yml
- name: System configuration
ansible.builtin.import_tasks: ../tasks/system.yml

ansible/requirements.yml

@@ -0,0 +1,5 @@
collections:
- name: community.routeros
version: ">=3.16.0"
- name: community.hashi_vault
version: ">=7.1.0"


@@ -1,53 +0,0 @@
# Would never work without this awesome blogpost
# https://farcaller.net/2024/making-cilium-bgp-work-with-ipv6/
log "/tmp/bird.log" all;
log syslog all;
#Router ID
router id 192.168.1.1;
protocol kernel kernel4 {
learn;
scan time 10;
merge paths yes;
ipv4 {
import none;
export all;
};
}
protocol kernel kernel6 {
learn;
scan time 10;
merge paths yes;
ipv6 {
import none;
export all;
};
}
protocol device {
scan time 10;
}
protocol direct {
interface "*";
}
protocol bgp homelab {
debug { events };
passive;
direct;
local 2001:470:61a3:100:ffff:ffff:ffff:ffff as 65000;
neighbor range 2001:470:61a3:100::/64 as 65000;
ipv4 {
extended next hop yes;
import all;
export all;
};
ipv6 {
import all;
export all;
};
}


@@ -1,5 +0,0 @@
- name: Reload bird
service:
name: bird
state: restarted
enabled: true


@@ -1,16 +0,0 @@
---
- name: Install bird2
opkg:
name: "{{ item }}"
state: present
# Workaround for opkg module not handling multiple names at once well
loop:
- bird2
- bird2c
- name: Set up bird.conf
ansible.builtin.copy:
src: bird.conf
dest: /etc/bird.conf
mode: "644"
notify: Reload bird


@@ -0,0 +1,48 @@
---
- name: Configure IPv4 addresses
community.routeros.api_modify:
path: ip address
data:
- address: 172.17.0.1/16
interface: dockers
network: 172.17.0.0
- address: 192.168.4.1/24
interface: lo
network: 192.168.4.0
- address: 192.168.100.20/24
interface: sfp-sfpplus1
network: 192.168.100.0
- address: 192.168.255.10/24
interface: bridge1
network: 192.168.255.0
- address: 192.168.0.1/24
interface: vlan2
network: 192.168.0.0
- address: 192.168.1.1/24
interface: vlan4
network: 192.168.1.0
- address: 192.168.3.1/24
interface: vlan3
network: 192.168.3.0
handle_absent_entries: remove
handle_entries_content: remove_as_much_as_possible
ensure_order: true
- name: Configure IPv6 addresses
community.routeros.api_modify:
path: ipv6 address
data:
- address: 2001:470:70:dd::2/64
advertise: false
interface: sit1
- address: ::ffff:ffff:ffff:ffff/64
from-pool: pool1
interface: vlan2
- address: 2001:470:61a3:500:ffff:ffff:ffff:ffff/64
interface: dockers
- address: 2001:470:61a3:100::1/64
advertise: false
interface: vlan4
handle_absent_entries: remove
handle_entries_content: remove_as_much_as_possible
ensure_order: true

ansible/tasks/base.yml

@@ -0,0 +1,226 @@
---
- name: Configure bridges
community.routeros.api_modify:
path: interface bridge
data:
- name: bridge1
vlan-filtering: true
- name: dockers
handle_absent_entries: remove
handle_entries_content: remove_as_much_as_possible
ensure_order: true
- name: Configure VLAN interfaces
community.routeros.api_modify:
path: interface vlan
data:
- name: vlan2
comment: LAN (PC, WIFI)
interface: bridge1
vlan-id: 2
- name: vlan3
comment: CAMERAS
interface: bridge1
vlan-id: 3
- name: vlan4
comment: SERVER LAN
interface: bridge1
vlan-id: 4
handle_absent_entries: remove
handle_entries_content: remove_as_much_as_possible
ensure_order: true
- name: Configure interface lists
community.routeros.api_modify:
path: interface list
data:
- name: wan
comment: contains interfaces facing internet
handle_absent_entries: remove
handle_entries_content: remove_as_much_as_possible
ensure_order: true
- name: Configure interface list members
community.routeros.api_modify:
path: interface list member
data:
- interface: pppoe-gpon
list: wan
- interface: lte1
list: wan
- interface: sit1
list: wan
handle_absent_entries: remove
handle_entries_content: remove_as_much_as_possible
ensure_order: true
- name: Configure bridge ports
community.routeros.api_modify:
path: interface bridge port
data:
- bridge: dockers
interface: veth1
comment: Tailscale container interface
- bridge: bridge1
interface: ether1
pvid: 2
- bridge: bridge1
interface: ether2
pvid: 2
- bridge: bridge1
interface: ether8
pvid: 4
- bridge: bridge1
interface: ether9
pvid: 2
- bridge: bridge1
interface: ether10
pvid: 3
- bridge: bridge1
interface: sfp-sfpplus2
- bridge: bridge1
interface: ether11
handle_absent_entries: remove
handle_entries_content: remove_as_much_as_possible
ensure_order: true
- name: Configure bridge VLAN membership
community.routeros.api_modify:
path: interface bridge vlan
data:
- bridge: bridge1
tagged: sfp-sfpplus2
untagged: ether1,ether2,ether9
vlan-ids: 2
- bridge: bridge1
tagged: sfp-sfpplus2
untagged: ether10
vlan-ids: 3
- bridge: bridge1
untagged: ether8
vlan-ids: 4
handle_absent_entries: remove
handle_entries_content: remove_as_much_as_possible
ensure_order: true
- name: Configure IPv4 pools
community.routeros.api_modify:
path: ip pool
data:
- name: dhcp_pool0
ranges: 192.168.0.50-192.168.0.250
comment: LAN DHCP pool
- name: dhcp_pool1
ranges: 192.168.255.1-192.168.255.9,192.168.255.11-192.168.255.254
comment: MGMT DHCP pool
handle_absent_entries: remove
handle_entries_content: remove_as_much_as_possible
ensure_order: true
- name: Configure DHCP servers
community.routeros.api_modify:
path: ip dhcp-server
data:
- name: dhcp1
address-pool: dhcp_pool0
interface: vlan2
lease-time: 30m
comment: LAN
- name: dhcp2
address-pool: dhcp_pool1
interface: bridge1
lease-time: 30m
comment: MGMT
handle_absent_entries: remove
handle_entries_content: remove_as_much_as_possible
ensure_order: true
- name: Configure DHCP networks
community.routeros.api_modify:
path: ip dhcp-server network
data:
- address: 192.168.0.0/24
dns-server: 192.168.0.1
gateway: 192.168.0.1
- address: 192.168.255.0/24
dns-none: true
gateway: 192.168.255.10
handle_absent_entries: remove
handle_entries_content: remove_as_much_as_possible
ensure_order: true
# TODO: IPv6 pools are useful with a dynamic prefix, but ours is static,
# so this can be removed now
- name: Configure IPv6 pools
community.routeros.api_modify:
path: ipv6 pool
data:
- name: pool1
prefix: 2001:470:61a3::/48
prefix-length: 64
handle_absent_entries: remove
handle_entries_content: remove_as_much_as_possible
ensure_order: true
- name: Configure DNS
community.routeros.api_find_and_modify:
ignore_dynamic: false
path: ip dns
find: {}
values:
allow-remote-requests: true
cache-size: 20480
servers: 1.1.1.1,1.0.0.1,2606:4700:4700::1111,2606:4700:4700::1001
- name: Configure NAT-PMP global settings
community.routeros.api_find_and_modify:
ignore_dynamic: false
path: ip nat-pmp
find: {}
values:
enabled: true
- name: Configure NAT-PMP interfaces
community.routeros.api_modify:
path: ip nat-pmp interfaces
data:
- interface: dockers
type: internal
- interface: pppoe-gpon
type: external
- interface: vlan2
type: internal
handle_absent_entries: remove
handle_entries_content: remove_as_much_as_possible
ensure_order: true
- name: Configure UPnP global settings
community.routeros.api_find_and_modify:
ignore_dynamic: false
path: ip upnp
find: {}
values:
enabled: true
- name: Configure UPnP interfaces
community.routeros.api_modify:
path: ip upnp interfaces
data:
- interface: dockers
type: internal
- interface: pppoe-gpon
type: external
- interface: vlan2
type: internal
handle_absent_entries: remove
handle_entries_content: remove_as_much_as_possible
ensure_order: true
- name: Configure IPv6 ND defaults
community.routeros.api_find_and_modify:
ignore_dynamic: false
path: ipv6 nd
find:
default: true
values:
advertise-dns: true


@@ -0,0 +1,66 @@
---
- name: Configure container runtime defaults
community.routeros.api_find_and_modify:
ignore_dynamic: false
path: container config
find: {}
values:
registry-url: https://ghcr.io
tmpdir: /tmp1/pull
- name: Configure container env lists
community.routeros.api_modify:
path: container envs
data:
- key: ADVERTISE_ROUTES
list: tailscale
value: 192.168.0.0/24,192.168.1.0/24,192.168.4.1/32,192.168.100.1/32,192.168.255.0/24,10.42.0.0/16,10.43.0.0/16,10.44.0.0/16,2001:470:61a3::/48
- key: CONTAINER_GATEWAY
list: tailscale
value: 172.17.0.1
- key: PASSWORD
list: tailscale
value: "{{ routeros_tailscale_container_password }}"
- key: TAILSCALE_ARGS
list: tailscale
value: --accept-routes --advertise-exit-node --snat-subnet-routes=false
- key: UPDATE_TAILSCALE
list: tailscale
value: y
handle_absent_entries: remove
handle_entries_content: remove_as_much_as_possible
ensure_order: true
- name: Configure container mounts
community.routeros.api_modify:
path: container mounts
data:
- dst: /var/lib/tailscale
list: tailscale
src: /usb1/tailscale
- dst: /root
list: tailscale-root
src: /tmp1/tailscale-root
handle_absent_entries: remove
handle_entries_content: remove_as_much_as_possible
ensure_order: true
- name: Configure tailscale container
community.routeros.api_modify:
path: container
data:
- dns: 172.17.0.1
envlists: tailscale
hostname: mikrotik
interface: veth1
layer-dir: ""
mountlists: tailscale
name: tailscale-mikrotik:latest
remote-image: fluent-networks/tailscale-mikrotik:latest
root-dir: /usb1/containers/tailscale
start-on-boot: true
tmpfs: /tmp:67108864:01777
workdir: /
handle_absent_entries: remove
handle_entries_content: remove_as_much_as_possible
ensure_order: true

ansible/tasks/firewall.yml

@@ -0,0 +1,480 @@
---
- name: Configure IPv4 firewall filter rules
community.routeros.api_modify:
path: ip firewall filter
data:
- action: fasttrack-connection
chain: forward
connection-state: established,related
- action: accept
chain: forward
comment: Allow all already established connections
connection-state: established,related
- action: accept
chain: forward
comment: Allow LTE modem management (next rule forbids it otherwise)
dst-address: 192.168.8.1
out-interface: lte1
- action: reject
chain: forward
comment: Forbid forwarding 192.168.0.0/16 to WAN
dst-address: 192.168.0.0/16
out-interface-list: wan
reject-with: icmp-network-unreachable
- action: reject
chain: forward
comment: Forbid forwarding 10.0.0.0/8 to WAN
dst-address: 10.0.0.0/8
out-interface-list: wan
reject-with: icmp-network-unreachable
- action: reject
chain: forward
comment: Forbid forwarding 172.16.0.0/12 to WAN
dst-address: 172.16.0.0/12
out-interface-list: wan
reject-with: icmp-network-unreachable
- action: reject
chain: forward
comment: Forbid forwarding 100.64.0.0/10 to WAN
dst-address: 100.64.0.0/10
out-interface-list: wan
reject-with: icmp-network-unreachable
- action: accept
chain: forward
comment: Allow from LAN to everywhere
in-interface: vlan2
- action: accept
chain: forward
comment: Allow from SRV to internet
in-interface: vlan4
out-interface-list: wan
- action: accept
chain: forward
comment: Allow from SRV to CAM
in-interface: vlan4
out-interface: vlan3
- action: accept
chain: forward
comment: Allow from dockers to everywhere
in-interface: dockers
- action: jump
chain: forward
comment: Allow port forwards
in-interface: pppoe-gpon
jump-target: allow-ports
- action: reject
chain: forward
comment: Reject all remaining (port unreachable from WAN)
in-interface-list: wan
log-prefix: FORWARD REJECT
reject-with: icmp-port-unreachable
- action: reject
chain: forward
comment: Reject all remaining (net prohibited from LAN)
log-prefix: FORWARD REJECT
reject-with: icmp-net-prohibited
- action: accept
chain: input
comment: Allow all already established connections
connection-state: established,related
- action: accept
chain: input
comment: Allow HE tunnel
in-interface: pppoe-gpon
protocol: ipv6-encap
- action: accept
chain: input
comment: Allow ICMP
protocol: icmp
- action: accept
chain: input
comment: Allow Winbox
dst-port: 8291
log: true
protocol: tcp
- action: accept
chain: input
comment: Allow SSH Mikrotik
dst-port: 2137
log: true
protocol: tcp
- action: accept
chain: input
comment: Allow RouterOS API-SSL from MGMT
dst-port: 8729
protocol: tcp
- action: accept
chain: input
comment: Allow DNS from LAN
dst-port: 53
in-interface: vlan2
protocol: udp
- action: accept
chain: input
dst-port: 53
in-interface: vlan2
protocol: tcp
- action: accept
chain: input
comment: Allow DNS from SRV
dst-port: 53
in-interface: vlan4
protocol: udp
- action: accept
chain: input
dst-port: 53
in-interface: vlan4
protocol: tcp
- action: accept
chain: input
comment: Allow DNS from dockers
dst-port: 53
in-interface: dockers
protocol: udp
- action: accept
chain: input
dst-port: 53
in-interface: dockers
protocol: tcp
- action: accept
chain: input
comment: Allow BGP from SRV
dst-port: 179
in-interface: vlan4
protocol: tcp
- action: accept
chain: input
comment: NAT-PMP from LAN
dst-port: 5351
in-interface: vlan2
protocol: udp
- action: accept
chain: input
comment: NAT-PMP from dockers (for tailscale)
dst-port: 5351
in-interface: dockers
protocol: udp
- action: reject
chain: input
comment: Reject all remaining
log-prefix: INPUT REJECT
reject-with: icmp-port-unreachable
- action: accept
chain: allow-ports
comment: Allow TS3
dst-port: 9987
out-interface: vlan4
protocol: udp
- action: accept
chain: allow-ports
dst-port: 30033
out-interface: vlan4
protocol: tcp
- action: accept
chain: allow-ports
comment: Allow HTTP
dst-port: 80
out-interface: vlan4
protocol: tcp
- action: accept
chain: allow-ports
comment: Allow HTTPS
dst-port: 443
out-interface: vlan4
protocol: tcp
- action: accept
chain: allow-ports
comment: Allow SSH Gitea
dst-port: 22
out-interface: vlan4
protocol: tcp
- action: accept
chain: allow-ports
comment: Allow anything udp to Tailscale
dst-address: 172.17.0.2
out-interface: dockers
protocol: udp
- action: accept
chain: allow-ports
comment: Allow anything from GPON to LAN (NAT-PMP)
dst-address: 192.168.0.0/24
in-interface: pppoe-gpon
out-interface: vlan2
handle_absent_entries: remove
handle_entries_content: remove_as_much_as_possible
ensure_order: true
- name: Configure IPv4 NAT rules
community.routeros.api_modify:
path: ip firewall nat
data:
- action: masquerade
chain: srcnat
comment: Masquerade to internet
out-interface-list: wan
- action: masquerade
chain: srcnat
comment: GPON ONT management
dst-address: 192.168.100.1
- action: masquerade
chain: srcnat
comment: LTE Modem management
dst-address: 192.168.8.1
- action: dst-nat
chain: dstnat
comment: TS3
dst-address: 139.28.40.212
dst-port: 9987
protocol: udp
to-addresses: 10.44.0.0
- action: dst-nat
chain: dstnat
dst-address: 139.28.40.212
dst-port: 30033
protocol: tcp
to-addresses: 10.44.0.0
- action: src-nat
chain: srcnat
comment: src-nat from LAN to TS3 to some Greenland address
dst-address: 10.44.0.0
dst-port: 9987
in-interface: '!pppoe-gpon'
protocol: udp
to-addresses: 128.0.70.5
- action: src-nat
chain: srcnat
dst-address: 10.44.0.0
dst-port: 30033
in-interface: '!pppoe-gpon'
protocol: tcp
to-addresses: 128.0.70.5
- action: dst-nat
chain: dstnat
comment: HTTPS
dst-address: 139.28.40.212
dst-port: 443
protocol: tcp
to-addresses: 10.44.0.6
- action: dst-nat
chain: dstnat
comment: HTTP
dst-address: 139.28.40.212
dst-port: 80
protocol: tcp
to-addresses: 10.44.0.6
- action: dst-nat
chain: dstnat
comment: SSH Gitea
dst-address: 139.28.40.212
dst-port: 22
protocol: tcp
to-addresses: 10.44.0.6
- action: dst-nat
chain: dstnat
comment: sunshine
dst-address: 139.28.40.212
dst-port: 47984
in-interface: pppoe-gpon
protocol: tcp
to-addresses: 192.168.0.67
- action: dst-nat
chain: dstnat
comment: sunshine
dst-address: 139.28.40.212
dst-port: 47989
in-interface: pppoe-gpon
protocol: tcp
to-addresses: 192.168.0.67
- action: dst-nat
chain: dstnat
comment: sunshine
dst-address: 139.28.40.212
dst-port: 48010
in-interface: pppoe-gpon
protocol: tcp
to-addresses: 192.168.0.67
- action: dst-nat
chain: dstnat
comment: sunshine
dst-address: 139.28.40.212
dst-port: 48010
in-interface: pppoe-gpon
protocol: udp
to-addresses: 192.168.0.67
- action: dst-nat
chain: dstnat
comment: sunshine
dst-address: 139.28.40.212
dst-port: 47998-48000
in-interface: pppoe-gpon
protocol: udp
to-addresses: 192.168.0.67
handle_absent_entries: remove
handle_entries_content: remove_as_much_as_possible
ensure_order: true
- name: Configure IPv6 firewall filter rules
community.routeros.api_modify:
path: ipv6 firewall filter
data:
- action: fasttrack-connection
chain: forward
connection-state: established,related
- action: accept
chain: forward
comment: Allow all already established connections
connection-state: established,related
- action: reject
chain: forward
comment: Forbid forwarding routed /48 from tunnelbroker to WAN
dst-address: 2001:470:61a3::/48
out-interface-list: wan
reject-with: icmp-no-route
- action: reject
chain: forward
comment: Forbid forwarding routed /64 from tunnelbroker to WAN
dst-address: 2001:470:71:dd::/64
out-interface-list: wan
reject-with: icmp-no-route
- action: accept
chain: forward
comment: Allow from LAN to everywhere
in-interface: vlan2
- action: accept
chain: forward
comment: Allow ICMPv6 from internet to LAN
in-interface-list: wan
out-interface: vlan2
protocol: icmpv6
- action: accept
chain: forward
comment: Allow from SRV to internet
in-interface: vlan4
out-interface-list: wan
- action: accept
chain: forward
comment: Allow from internet to SRV nodes
dst-address: 2001:470:61a3:100::/64
in-interface-list: wan
out-interface: vlan4
- action: accept
chain: forward
comment: Allow from internet to homelab LB
dst-address: 2001:470:61a3:400::/112
in-interface-list: wan
out-interface: vlan4
- action: accept
chain: forward
comment: Allow from SRV to CAM
in-interface: vlan4
out-interface: vlan3
- action: accept
chain: forward
comment: Allow from dockers to everywhere
in-interface: dockers
- action: accept
chain: forward
comment: Allow from internet to dockers
dst-address: 2001:470:61a3:500::/64
in-interface-list: wan
out-interface: dockers
- action: accept
chain: forward
comment: Allow tcp transmission port to LAN
dst-port: 51413
out-interface: vlan2
protocol: tcp
- action: accept
chain: forward
comment: Allow udp transmission port to LAN
dst-port: 51413
out-interface: vlan2
protocol: udp
- action: reject
chain: forward
comment: Reject all remaining
reject-with: icmp-no-route
- action: accept
chain: input
comment: Allow all already established connections
connection-state: established,related
- action: accept
chain: input
comment: Allow ICMPv6
protocol: icmpv6
- action: accept
chain: input
comment: Allow Winbox
dst-port: 8291
protocol: tcp
- action: accept
chain: input
comment: Allow SSH Mikrotik
dst-port: 2137
protocol: tcp
- action: accept
chain: input
comment: Allow DNS from LAN
dst-port: 53
in-interface: vlan2
protocol: udp
- action: accept
chain: input
dst-port: 53
in-interface: vlan2
protocol: tcp
- action: accept
chain: input
comment: Allow DNS from SRV
dst-port: 53
in-interface: vlan4
protocol: udp
- action: accept
chain: input
dst-port: 53
in-interface: vlan4
protocol: tcp
- action: accept
chain: input
comment: Allow DNS from dockers
dst-port: 53
in-interface: dockers
protocol: udp
- action: accept
chain: input
dst-port: 53
in-interface: dockers
protocol: tcp
- action: accept
chain: input
comment: Allow BGP from SRV
dst-port: 179
in-interface: vlan4
protocol: tcp
src-address: 2001:470:61a3:100::/64
- action: reject
chain: input
comment: Reject all remaining
reject-with: icmp-admin-prohibited
handle_absent_entries: remove
handle_entries_content: remove_as_much_as_possible
ensure_order: true
- name: Configure IPv6 NAT rules
community.routeros.api_modify:
path: ipv6 firewall nat
data:
- action: src-nat
chain: srcnat
comment: src-nat tailnet to internet
out-interface-list: wan
src-address: fd7a:115c:a1e0::/48
to-address: 2001:470:61a3:600::/64
- action: masquerade
chain: srcnat
disabled: true
in-interface: vlan2
out-interface: vlan4
handle_absent_entries: remove
handle_entries_content: remove_as_much_as_possible
ensure_order: true

ansible/tasks/hardware.yml

@@ -0,0 +1,103 @@
---
- name: Configure ethernet interface metadata and SFP options
community.routeros.api_find_and_modify:
ignore_dynamic: false
path: interface ethernet
find:
default-name: "{{ item.default_name }}"
values: "{{ item.config }}"
loop:
- default_name: ether1
config:
comment: My PC
- default_name: ether2
config:
comment: Wifi (middle)
- default_name: ether8
config:
comment: Server
- default_name: ether9
config:
comment: Wifi (upstairs)
- default_name: ether10
config:
comment: Camera on the house
- default_name: ether11
config:
comment: KVM server
- default_name: sfp-sfpplus1
config:
auto-negotiation: false
comment: GPON WAN
speed: 2.5G-baseX
- default_name: sfp-sfpplus2
config:
comment: GARAGE
loop_control:
label: "{{ item.default_name }}"
- name: Configure LTE interface defaults
community.routeros.api_find_and_modify:
ignore_dynamic: false
path: interface lte
find:
default-name: lte1
values:
apn-profiles: default-nodns
comment: Backup LTE WAN
- name: Configure LTE APN profiles
community.routeros.api_modify:
path: interface lte apn
data:
- add-default-route: false
apn: internet
comment: default but without dns and default route
ipv6-interface: lte1
name: default-nodns
use-network-apn: true
use-peer-dns: false
# Default APN; can't easily be removed, and I don't want to reconfigure it
- add-default-route: true
apn: internet
authentication: none
default-route-distance: 2
ip-type: auto
name: default
use-network-apn: true
use-peer-dns: true
handle_absent_entries: remove
handle_entries_content: remove_as_much_as_possible
- name: Configure temporary disk for containers
community.routeros.api_modify:
path: disk
data:
- slot: tmp1
type: tmpfs
# This is not ideal: there's no unique identifier for the USB disk,
# so after a reinstall it might be assigned to another slot.
# We just add a disk with slot usb1 and specify nothing else,
# so Ansible doesn't touch it
- slot: usb1
handle_absent_entries: remove
handle_entries_content: remove_as_much_as_possible
- name: Configure switch settings
community.routeros.api_find_and_modify:
ignore_dynamic: false
path: interface ethernet switch
find:
.id: "0"
values:
qos-hw-offloading: true
# Enabling L3 offloading would cause all packets to skip firewall and NAT
l3-hw-offloading: false
- name: Configure neighbor discovery settings
community.routeros.api_find_and_modify:
ignore_dynamic: false
path: ip neighbor discovery-settings
find: {}
values:
discover-interface-list: '!dynamic'


@@ -0,0 +1,46 @@
---
- name: Verify API connectivity and fetch basic facts
community.routeros.api_facts:
gather_subset:
- default
- hardware
- name: Show target identity
ansible.builtin.debug:
msg: "Managing {{ ansible_host }} ({{ ansible_facts['net_model'] | default('unknown model') }})"
- name: Assert expected router model
ansible.builtin.assert:
that:
- ansible_facts['net_model'] is defined
- ansible_facts['net_model'] == "CRS418-8P-8G-2S+"
fail_msg: "Unexpected router model: {{ ansible_facts['net_model'] | default('unknown') }}"
success_msg: "Router model matches expected CRS418-8P-8G-2S+"
- name: Read RouterOS device-mode flags
community.routeros.api:
path: system/device-mode
register: routeros_device_mode
check_mode: false
changed_when: false
- name: Assert container feature is enabled in device mode
ansible.builtin.assert:
that:
- not (routeros_device_mode.skipped | default(false))
- (routeros_device_mode | to_nice_json | lower) is search('container[^a-z0-9]+(yes|true)')
fail_msg: "RouterOS device-mode does not report container as enabled. Payload: {{ routeros_device_mode | to_nice_json }}"
success_msg: "RouterOS device-mode confirms container=yes"
- name: Read configured disks
community.routeros.api_info:
path: disk
register: routeros_disks
check_mode: false
- name: Assert usb1 disk is present
ansible.builtin.assert:
that:
- (routeros_disks.result | selectattr('slot', 'equalto', 'usb1') | list | length) > 0
fail_msg: "Required disk slot usb1 is not present on router."
success_msg: "Required disk usb1 is present"

ansible/tasks/routing.yml

@@ -0,0 +1,99 @@
---
- name: Configure IPv4 routes
community.routeros.api_modify:
path: ip route
data:
- comment: Tailnet
disabled: false
distance: 1
dst-address: 100.64.0.0/10
gateway: 172.17.0.2
routing-table: main
scope: 30
suppress-hw-offload: false
target-scope: 10
- disabled: false
distance: 1
dst-address: 0.0.0.0/0
gateway: pppoe-gpon
routing-table: main
scope: 30
suppress-hw-offload: false
target-scope: 10
vrf-interface: pppoe-gpon
- disabled: false
distance: 2
dst-address: 0.0.0.0/0
gateway: 192.168.8.1
routing-table: main
scope: 30
suppress-hw-offload: false
target-scope: 10
vrf-interface: lte1
handle_absent_entries: remove
handle_entries_content: remove_as_much_as_possible
- name: Configure IPv6 routes
community.routeros.api_modify:
path: ipv6 route
data:
- disabled: false
distance: 1
dst-address: 2000::/3
gateway: 2001:470:70:dd::1
scope: 30
target-scope: 10
- comment: Tailnet
disabled: false
dst-address: fd7a:115c:a1e0::/48
gateway: 2001:470:61a3:500::1
pref-src: ""
routing-table: main
suppress-hw-offload: false
handle_absent_entries: remove
handle_entries_content: remove_as_much_as_possible
- name: Configure BGP instance
community.routeros.api_modify:
path: routing bgp instance
data:
- name: bgp-homelab
as: 65000
disabled: false
router-id: 192.168.1.1
routing-table: main
handle_absent_entries: remove
handle_entries_content: remove_as_much_as_possible
ensure_order: true
- name: Configure BGP templates
community.routeros.api_modify:
path: routing bgp template
data:
- name: klaster
afi: ip,ipv6
as: 65000
disabled: false
# Default template
- name: default
handle_absent_entries: remove
handle_entries_content: remove_as_much_as_possible
- name: Configure BGP connections
community.routeros.api_modify:
path: routing bgp connection
data:
- name: bgp1
afi: ip,ipv6
as: 65000
connect: true
disabled: false
instance: bgp-homelab
listen: true
local.role: ibgp
remote.address: 2001:470:61a3:100::3/128
routing-table: main
templates: klaster
handle_absent_entries: remove
handle_entries_content: remove_as_much_as_possible
ensure_order: true

ansible/tasks/system.yml

@@ -0,0 +1,43 @@
---
- name: Configure system clock
community.routeros.api_find_and_modify:
ignore_dynamic: false
path: system clock
find: {}
values:
time-zone-name: Europe/Warsaw
- name: Configure dedicated Ansible management user
community.routeros.api_modify:
path: user
data:
- name: "{{ routeros_api_username }}"
group: full
password: "{{ routeros_api_password }}"
disabled: false
comment: "Ansible management user"
handle_absent_entries: ignore
handle_entries_content: remove_as_much_as_possible
- name: Configure service ports and service enablement
community.routeros.api_find_and_modify:
ignore_dynamic: false
path: ip service
find:
name: "{{ item.name }}"
values: "{{ item }}"
loop:
- name: ftp
disabled: true
- name: telnet
disabled: true
- name: www
disabled: true
- name: ssh
port: 2137
- name: api
disabled: true
- name: api-ssl
disabled: false
loop_control:
label: "{{ item.name }}"

ansible/tasks/wan.yml

@@ -0,0 +1,44 @@
---
- name: Configure PPPoE client
community.routeros.api_modify:
path: interface pppoe-client
data:
- disabled: false
interface: sfp-sfpplus1
keepalive-timeout: 2
name: pppoe-gpon
password: "{{ routeros_pppoe_password }}"
use-peer-dns: true
user: "{{ routeros_pppoe_username }}"
handle_absent_entries: remove
handle_entries_content: remove_as_much_as_possible
ensure_order: true
- name: Configure 6to4 tunnel interface
community.routeros.api_modify:
path: interface 6to4
data:
- comment: Hurricane Electric IPv6 Tunnel Broker
local-address: 139.28.40.212
mtu: 1472
name: sit1
remote-address: 216.66.80.162
handle_absent_entries: remove
handle_entries_content: remove_as_much_as_possible
ensure_order: true
- name: Configure veth interface for containers
community.routeros.api_modify:
path: interface veth
data:
- address: 172.17.0.2/16,2001:470:61a3:500::1/64
container-mac-address: 7E:7E:A1:B1:2A:7C
dhcp: false
gateway: 172.17.0.1
gateway6: 2001:470:61a3:500:ffff:ffff:ffff:ffff
mac-address: 7E:7E:A1:B1:2A:7B
name: veth1
comment: Tailscale container
handle_absent_entries: remove
handle_entries_content: remove_as_much_as_possible
ensure_order: true


@@ -0,0 +1,19 @@
---
# Secret references only; actual values are loaded from OpenBao/Vault at runtime.
# KVv2 mount and secret path (full secret path is <mount>/data/<path>).
openbao_kv_mount: secret
# Field names expected in the OpenBao secret.
openbao_fields:
routeros_api:
path: routeros_api
username_key: username
password_key: password
wan_pppoe:
path: wan_pppoe
username_key: username
password_key: password
routeros_tailscale_container:
path: router_tailscale
container_password_key: container_password
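The `<mount>/data/<path>` convention noted in the comment above is plain string concatenation on the KV v2 HTTP API; a minimal illustration:

```python
def kv2_data_path(mount: str, path: str) -> str:
    """KV v2 serves secret payloads under <mount>/data/<path> on the HTTP API."""
    return f"{mount}/data/{path}"

# e.g. the routeros_api secret referenced above:
print(kv2_data_path("secret", "routeros_api"))  # → secret/data/routeros_api
```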


@@ -0,0 +1,8 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- namespace.yaml
- postgres-volume.yaml
- postgres-cluster.yaml
- secret.yaml
- release.yaml


@@ -0,0 +1,4 @@
apiVersion: v1
kind: Namespace
metadata:
name: authentik


@@ -0,0 +1,23 @@
---
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
name: authentik-postgresql-cluster-lvmhdd
namespace: authentik
spec:
instances: 1
imageName: ghcr.io/cloudnative-pg/postgresql:17.4
bootstrap:
initdb:
database: authentik
owner: authentik
storage:
pvcTemplate:
storageClassName: hdd-lvmpv
resources:
requests:
storage: 10Gi
volumeName: authentik-postgresql-cluster-lvmhdd-1


@@ -0,0 +1,33 @@
apiVersion: local.openebs.io/v1alpha1
kind: LVMVolume
metadata:
labels:
kubernetes.io/nodename: anapistula-delrosalae
name: authentik-postgresql-cluster-lvmhdd-1
namespace: openebs
spec:
capacity: 10Gi
ownerNodeID: anapistula-delrosalae
shared: "yes"
thinProvision: "no"
vgPattern: ^openebs-hdd$
volGroup: openebs-hdd
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: authentik-postgresql-cluster-lvmhdd-1
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: hdd-lvmpv
volumeMode: Filesystem
csi:
driver: local.csi.openebs.io
fsType: btrfs
volumeHandle: authentik-postgresql-cluster-lvmhdd-1
---
# PVCs are dynamically created by the Postgres operator


@@ -0,0 +1,61 @@
---
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
name: authentik
namespace: authentik
spec:
interval: 24h
url: https://charts.goauthentik.io
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
name: authentik
namespace: authentik
spec:
interval: 30m
chart:
spec:
chart: authentik
version: 2026.2.1
sourceRef:
kind: HelmRepository
name: authentik
namespace: authentik
interval: 12h
values:
authentik:
postgresql:
host: authentik-postgresql-cluster-lvmhdd-rw
name: authentik
user: authentik
global:
env:
- name: AUTHENTIK_SECRET_KEY
valueFrom:
secretKeyRef:
name: authentik-secret
key: secret_key
- name: AUTHENTIK_POSTGRESQL__PASSWORD
valueFrom:
secretKeyRef:
name: authentik-postgresql-cluster-lvmhdd-app
key: password
postgresql:
enabled: false
server:
ingress:
enabled: true
ingressClassName: nginx-ingress
annotations:
cert-manager.io/cluster-issuer: letsencrypt
hosts:
- authentik.lumpiasty.xyz
tls:
- secretName: authentik-ingress
hosts:
- authentik.lumpiasty.xyz


@@ -0,0 +1,38 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: authentik-secret
namespace: authentik
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultAuth
metadata:
name: authentik
namespace: authentik
spec:
method: kubernetes
mount: kubernetes
kubernetes:
role: authentik
serviceAccount: authentik-secret
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultStaticSecret
metadata:
name: authentik-secret
namespace: authentik
spec:
type: kv-v2
mount: secret
path: authentik
destination:
create: true
name: authentik-secret
type: Opaque
transformation:
excludeRaw: true
vaultAuthRef: authentik


@@ -0,0 +1,48 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: crawl4ai-proxy
namespace: crawl4ai
spec:
replicas: 1
selector:
matchLabels:
app: crawl4ai-proxy
template:
metadata:
labels:
app: crawl4ai-proxy
spec:
containers:
- name: crawl4ai-proxy
image: gitea.lumpiasty.xyz/lumpiasty/crawl4ai-proxy-fit:latest
imagePullPolicy: Always
env:
- name: LISTEN_PORT
value: "8000"
- name: CRAWL4AI_ENDPOINT
value: http://crawl4ai.crawl4ai.svc.cluster.local:11235/crawl
ports:
- name: http
containerPort: 8000
readinessProbe:
tcpSocket:
port: http
initialDelaySeconds: 3
periodSeconds: 10
timeoutSeconds: 2
failureThreshold: 6
livenessProbe:
tcpSocket:
port: http
initialDelaySeconds: 10
periodSeconds: 15
timeoutSeconds: 2
failureThreshold: 6
resources:
requests:
cpu: 25m
memory: 32Mi
limits:
cpu: 200m
memory: 128Mi


@@ -0,0 +1,5 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
- service.yaml


@@ -0,0 +1,14 @@
apiVersion: v1
kind: Service
metadata:
name: crawl4ai-proxy
namespace: crawl4ai
spec:
type: ClusterIP
selector:
app: crawl4ai-proxy
ports:
- name: http
port: 8000
targetPort: 8000
protocol: TCP


@@ -0,0 +1,62 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: crawl4ai
namespace: crawl4ai
spec:
replicas: 1
selector:
matchLabels:
app: crawl4ai
template:
metadata:
labels:
app: crawl4ai
spec:
containers:
- name: crawl4ai
image: unclecode/crawl4ai:latest
imagePullPolicy: IfNotPresent
env:
- name: CRAWL4AI_API_TOKEN
valueFrom:
secretKeyRef:
name: crawl4ai-secret
key: api_token
optional: false
- name: MAX_CONCURRENT_TASKS
value: "5"
ports:
- name: http
containerPort: 11235
readinessProbe:
httpGet:
path: /health
port: http
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 3
failureThreshold: 6
livenessProbe:
httpGet:
path: /health
port: http
initialDelaySeconds: 30
periodSeconds: 15
timeoutSeconds: 3
failureThreshold: 6
resources:
requests:
cpu: 500m
memory: 1Gi
limits:
cpu: "2"
memory: 4Gi
volumeMounts:
- name: dshm
mountPath: /dev/shm
volumes:
- name: dshm
emptyDir:
medium: Memory
sizeLimit: 1Gi


@@ -1,7 +1,7 @@
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- namespace.yaml
- secret.yaml
- deployment.yaml
- ingress.yaml
- service.yaml


@@ -0,0 +1,4 @@
apiVersion: v1
kind: Namespace
metadata:
name: crawl4ai

apps/crawl4ai/secret.yaml

@@ -0,0 +1,38 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: crawl4ai-secret
namespace: crawl4ai
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultAuth
metadata:
name: crawl4ai
namespace: crawl4ai
spec:
method: kubernetes
mount: kubernetes
kubernetes:
role: crawl4ai
serviceAccount: crawl4ai-secret
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultStaticSecret
metadata:
name: crawl4ai-secret
namespace: crawl4ai
spec:
type: kv-v2
mount: secret
path: crawl4ai
destination:
create: true
name: crawl4ai-secret
type: Opaque
transformation:
excludeRaw: true
vaultAuthRef: crawl4ai


@@ -0,0 +1,14 @@
apiVersion: v1
kind: Service
metadata:
name: crawl4ai
namespace: crawl4ai
spec:
type: ClusterIP
selector:
app: crawl4ai
ports:
- name: http
port: 11235
targetPort: 11235
protocol: TCP


@@ -0,0 +1,49 @@
apiVersion: local.openebs.io/v1alpha1
kind: LVMVolume
metadata:
labels:
kubernetes.io/nodename: anapistula-delrosalae
name: frigate-config
namespace: openebs
spec:
capacity: 5Gi
ownerNodeID: anapistula-delrosalae
shared: "yes"
thinProvision: "no"
vgPattern: ^openebs-hdd$
volGroup: openebs-hdd
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: frigate-config
spec:
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: openebs-lvmpv
volumeMode: Filesystem
csi:
driver: local.csi.openebs.io
volumeHandle: frigate-config
claimRef:
apiVersion: v1
kind: PersistentVolumeClaim
namespace: frigate
name: frigate-config
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: frigate-config
namespace: frigate
spec:
storageClassName: openebs-lvmpv
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
volumeName: frigate-config


@@ -0,0 +1,9 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- namespace.yaml
- secret.yaml
- config-pvc.yaml
- media-pvc.yaml
- release.yaml
- webrtc-svc.yaml


@@ -0,0 +1,49 @@
apiVersion: local.openebs.io/v1alpha1
kind: LVMVolume
metadata:
labels:
kubernetes.io/nodename: anapistula-delrosalae
name: frigate-media
namespace: openebs
spec:
capacity: 500Gi
ownerNodeID: anapistula-delrosalae
shared: "yes"
thinProvision: "no"
vgPattern: ^openebs-hdd$
volGroup: openebs-hdd
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: frigate-media
spec:
capacity:
storage: 500Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: openebs-lvmpv
volumeMode: Filesystem
csi:
driver: local.csi.openebs.io
volumeHandle: frigate-media
claimRef:
apiVersion: v1
kind: PersistentVolumeClaim
namespace: frigate
name: frigate-media
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: frigate-media
namespace: frigate
spec:
storageClassName: openebs-lvmpv
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 500Gi
volumeName: frigate-media


@@ -2,4 +2,4 @@
apiVersion: v1
kind: Namespace
metadata:
name: registry
name: frigate

apps/frigate/release.yaml

@@ -0,0 +1,181 @@
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
name: blakeblackshear
namespace: frigate
spec:
interval: 24h
url: https://blakeblackshear.github.io/blakeshome-charts/
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
name: frigate
namespace: frigate
spec:
interval: 30m
chart:
spec:
chart: frigate
version: 7.8.0
sourceRef:
kind: HelmRepository
name: blakeblackshear
namespace: frigate
interval: 12h
values:
config: |
mqtt:
enabled: False
tls:
enabled: False
auth:
enabled: True
cookie_secure: True
record:
expire_interval: 1440 # 24h
sync_recordings: True
enabled: True
retain:
days: 90
mode: motion
objects:
track:
- person
- bicycle
- car
- motorcycle
- cat
- dog
- horse
- sheep
- cow
- bear
review:
alerts:
labels:
- person
- bicycle
- car
- motorcycle
- cat
- dog
- horse
- sheep
- cow
- bear
cameras:
dom:
enabled: True
ffmpeg:
inputs:
- path: rtsp://127.0.0.1:8554/dom
roles:
- audio
- detect
- record
output_args:
record: preset-record-generic-audio-copy
motion:
mask:
# Neighbor
- 0.436,0,0.421,0.072,0.424,0.124,0.304,0.242,0.295,0.194,0.035,0.497,0.035,0.6,0,0.664,0,0
garaz:
enabled: True
ffmpeg:
inputs:
- path: rtsp://127.0.0.1:8554/garaz
roles:
- audio
- detect
- record
output_args:
record: preset-record-generic-audio-copy
motion:
mask:
# Neighbor
- 0.662,0.212,0.569,0.2,0.566,0.149,0.549,0.119,0.532,0.169,0.495,0.14,0.491,0,0.881,0,1,0.154,1,0.221,0.986,0.296,0.94,0.28,0.944,0.178,0.664,0.126
# Tree
- 0.087,0.032,0,0.174,0,0.508,0.139,0.226,0.12,0.108
objects:
filters:
person:
# Persistent false positive
mask: 0.739,0.725,0.856,0.76,0.862,0.659,0.746,0.614
# ffmpeg:
# hwaccel_args: preset-vaapi
detectors:
ov_0:
type: openvino
device: CPU
model:
width: 300
height: 300
input_tensor: nhwc
input_pixel_format: bgr
path: /openvino-model/ssdlite_mobilenet_v2.xml
labelmap_path: /openvino-model/coco_91cl_bkgr.txt
go2rtc:
streams:
dom:
- rtsp://{FRIGATE_RTSP_DOM_USER}:{FRIGATE_RTSP_DOM_PASSWORD_URLENCODED}@192.168.3.10:554/Streaming/Channels/101
garaz:
- rtsp://{FRIGATE_RTSP_GARAZ_USER}:{FRIGATE_RTSP_GARAZ_PASSWORD_URLENCODED}@192.168.3.11:554/Streaming/Channels/101
webrtc:
candidates:
- frigate-rtc.lumpiasty.xyz:8555
persistence:
media:
enabled: true
size: 500Gi
storageClass: mayastor-single-hdd
skipuninstall: true
config:
enabled: true
size: 5Gi
storageClass: mayastor-single-hdd
skipuninstall: true
envFromSecrets:
- frigate-camera-rtsp
ingress:
enabled: true
ingressClassName: nginx-ingress
annotations:
cert-manager.io/cluster-issuer: letsencrypt
hosts:
- host: frigate.lumpiasty.xyz
paths:
- path: /
portName: http-auth
tls:
- hosts:
- frigate.lumpiasty.xyz
secretName: frigate-ingress
nodeSelector:
kubernetes.io/hostname: anapistula-delrosalae
# GPU access
# extraVolumes:
# - name: dri
# hostPath:
# path: /dev/dri/renderD128
# type: CharDevice
# extraVolumeMounts:
# - name: dri
# mountPath: /dev/dri/renderD128
# securityContext:
# # Not ideal
# privileged: true

apps/frigate/secret.yaml

@@ -0,0 +1,43 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: camera
namespace: frigate
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultAuth
metadata:
name: camera
namespace: frigate
spec:
method: kubernetes
mount: kubernetes
kubernetes:
role: frigate-camera
serviceAccount: camera
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultStaticSecret
metadata:
name: frigate-camera-rtsp
namespace: frigate
spec:
type: kv-v2
mount: secret
path: cameras
destination:
create: true
name: frigate-camera-rtsp
type: Opaque
transformation:
excludeRaw: true
templates:
FRIGATE_RTSP_DOM_PASSWORD_URLENCODED:
text: '{{ urlquery (get .Secrets "FRIGATE_RTSP_DOM_PASSWORD") }}'
FRIGATE_RTSP_GARAZ_PASSWORD_URLENCODED:
text: '{{ urlquery (get .Secrets "FRIGATE_RTSP_GARAZ_PASSWORD") }}'
vaultAuthRef: camera
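The `urlquery` templates above exist so the camera passwords can be embedded in `rtsp://user:pass@host` URLs without raw reserved characters. Go template's `urlquery` behaves like Python's `quote_plus`; a sketch with a made-up password value:

```python
from urllib.parse import quote_plus

# Hypothetical password; '@', '/' and spaces must not appear raw in an RTSP URL.
password = "p@ss w/rd"
encoded = quote_plus(password)
print(encoded)  # → p%40ss+w%2Frd
print(f"rtsp://camera:{encoded}@192.168.3.10:554/Streaming/Channels/101")
```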


@@ -0,0 +1,20 @@
apiVersion: v1
kind: Service
metadata:
name: go2rtc
namespace: frigate
spec:
type: LoadBalancer
selector:
app.kubernetes.io/instance: frigate
app.kubernetes.io/name: frigate
ipFamilyPolicy: RequireDualStack
ports:
- name: webrtc-tcp
protocol: TCP
port: 8555
targetPort: webrtc-tcp
- name: webrtc-udp
protocol: UDP
port: 8555
targetPort: webrtc-udp


@@ -7,17 +7,17 @@ spec:
backend:
# Manually adding secrets for now
repoPasswordSecretRef:
name: restic-repo
name: gitea-backup-restic
key: password
s3:
endpoint: https://s3.eu-central-003.backblazeb2.com
bucket: lumpiasty-backups
accessKeyIDSecretRef:
name: backblaze
key: keyid
name: gitea-backup-backblaze
key: aws_access_key_id
secretAccessKeySecretRef:
name: backblaze
key: secret
name: gitea-backup-backblaze
key: aws_secret_access_key
backup:
schedule: "@daily-random"
failedJobsHistoryLimit: 2


@@ -0,0 +1,46 @@
---
apiVersion: local.openebs.io/v1alpha1
kind: LVMVolume
metadata:
labels:
kubernetes.io/nodename: anapistula-delrosalae
name: gitea-shared-storage-lvmhdd
namespace: openebs
spec:
capacity: 10Gi
ownerNodeID: anapistula-delrosalae
shared: "yes"
thinProvision: "no"
vgPattern: ^openebs-hdd$
volGroup: openebs-hdd
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: gitea-shared-storage-lvmhdd
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: hdd-lvmpv
volumeMode: Filesystem
csi:
driver: local.csi.openebs.io
fsType: btrfs
volumeHandle: gitea-shared-storage-lvmhdd
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: gitea-shared-storage-lvmhdd
namespace: gitea
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
storageClassName: hdd-lvmpv
volumeName: gitea-shared-storage-lvmhdd


@@ -2,6 +2,10 @@ apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- namespace.yaml
- release.yaml
- backups.yaml
- postgres-volume.yaml
- postgres-cluster.yaml
- gitea-shared-volume.yaml
- valkey-volume.yaml
- release.yaml
- secret.yaml
- backups.yaml


@@ -2,11 +2,27 @@
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
name: gitea-postgresql-cluster
name: gitea-postgresql-cluster-lvmhdd
namespace: gitea
spec:
instances: 1
imageName: ghcr.io/cloudnative-pg/postgresql:17.4
storage:
size: 10Gi
storageClass: mayastor-single-hdd
pvcTemplate:
storageClassName: hdd-lvmpv
resources:
requests:
storage: 20Gi
volumeName: gitea-postgresql-cluster-lvmhdd-1
# Just to avoid bootstrapping the instance again
# I migrated data manually using pv_migrate because this feature is broken
# when source and target volumes are in different storage classes
# CNPG just sets dataSource to the PVC and expects the underlying storage
# to handle the migration, but it doesn't work here
bootstrap:
recovery:
backup:
name: backup-migration


@@ -0,0 +1,33 @@
apiVersion: local.openebs.io/v1alpha1
kind: LVMVolume
metadata:
labels:
kubernetes.io/nodename: anapistula-delrosalae
name: gitea-postgresql-cluster-lvmhdd-1
namespace: openebs
spec:
capacity: 20Gi
ownerNodeID: anapistula-delrosalae
shared: "yes"
thinProvision: "no"
vgPattern: ^openebs-hdd$
volGroup: openebs-hdd
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: gitea-postgresql-cluster-lvmhdd-1
spec:
capacity:
storage: 20Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: hdd-lvmpv
volumeMode: Filesystem
csi:
driver: local.csi.openebs.io
fsType: btrfs
volumeHandle: gitea-postgresql-cluster-lvmhdd-1
---
# PVCs are dynamically created by the Postgres operator


@@ -17,7 +17,7 @@ spec:
chart:
spec:
chart: gitea
version: 11.0.1
version: 12.5.0
sourceRef:
kind: HelmRepository
name: gitea-charts
@@ -28,7 +28,7 @@ spec:
enabled: false
postgresql:
enabled: true
enabled: false
primary:
persistence:
enabled: true
@@ -37,30 +37,43 @@ spec:
requests:
cpu: 0
redis-cluster:
valkey-cluster:
enabled: false
redis:
valkey:
enabled: true
master:
primary:
persistence:
enabled: true
storageClass: mayastor-single-hdd
existingClaim: gitea-valkey-primary-lvmhdd-0
resources:
requests:
cpu: 0
persistence:
enabled: true
storageClass: mayastor-single-hdd
# We'll create PV and PVC manually
create: false
claimName: gitea-shared-storage-lvmhdd
gitea:
additionalConfigFromEnvs:
- name: GITEA__DATABASE__PASSWD
valueFrom:
secretKeyRef:
name: gitea-postgresql-cluster-lvmhdd-app
key: password
config:
database:
DB_TYPE: postgres
HOST: gitea-postgresql-cluster-lvmhdd-rw:5432
NAME: app
USER: app
indexer:
ISSUE_INDEXER_TYPE: bleve
REPO_INDEXER_ENABLED: true
webhook:
ALLOWED_HOST_LIST: woodpecker.lumpiasty.xyz
admin:
username: GiteaAdmin
email: gi@tea.com
@@ -70,20 +83,26 @@ spec:
ssh:
annotations:
lbipam.cilium.io/sharing-key: gitea
lbipam.cilium.io/sharing-cross-namespace: nginx-ingress-controller
lbipam.cilium.io/ips: 10.44.0.0,2001:470:61a3:400::1
lbipam.cilium.io/sharing-cross-namespace: nginx-ingress
lbipam.cilium.io/ips: 10.44.0.6,2001:470:61a3:400::6
type: LoadBalancer
port: 22
# Required for sharing an IP with another service
externalTrafficPolicy: Cluster
ipFamilyPolicy: RequireDualStack
http:
type: ClusterIP
# We need the service to be at port 80 specifically
# to work around a bug in Actions Runner
port: 80
ingress:
enabled: true
className: nginx
className: nginx-ingress
annotations:
cert-manager.io/cluster-issuer: letsencrypt
acme.cert-manager.io/http01-edit-in-place: "true"
nginx.ingress.kubernetes.io/proxy-body-size: "1g"
hosts:
- host: gitea.lumpiasty.xyz
paths:

apps/gitea/secret.yaml

@@ -0,0 +1,58 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: backup
namespace: gitea
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultAuth
metadata:
name: backup
namespace: gitea
spec:
method: kubernetes
mount: kubernetes
kubernetes:
role: backup
serviceAccount: backup
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultStaticSecret
metadata:
name: gitea-backup-restic
namespace: gitea
spec:
type: kv-v2
mount: secret
path: restic
destination:
create: true
name: gitea-backup-restic
type: Opaque
transformation:
excludeRaw: true
vaultAuthRef: backup
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultStaticSecret
metadata:
name: gitea-backup-backblaze
namespace: gitea
spec:
type: kv-v2
mount: secret
path: backblaze
destination:
create: true
name: gitea-backup-backblaze
type: Opaque
transformation:
excludeRaw: true
vaultAuthRef: backup


@@ -0,0 +1,46 @@
---
apiVersion: local.openebs.io/v1alpha1
kind: LVMVolume
metadata:
labels:
kubernetes.io/nodename: anapistula-delrosalae
name: gitea-valkey-primary-lvmhdd-0
namespace: openebs
spec:
capacity: 1Gi
ownerNodeID: anapistula-delrosalae
shared: "yes"
thinProvision: "no"
vgPattern: ^openebs-hdd$
volGroup: openebs-hdd
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: gitea-valkey-primary-lvmhdd-0
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: hdd-lvmpv
volumeMode: Filesystem
csi:
driver: local.csi.openebs.io
fsType: btrfs
volumeHandle: gitea-valkey-primary-lvmhdd-0
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: gitea-valkey-primary-lvmhdd-0
namespace: gitea
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storageClassName: hdd-lvmpv
volumeName: gitea-valkey-primary-lvmhdd-0


@@ -0,0 +1,46 @@
---
apiVersion: local.openebs.io/v1alpha1
kind: LVMVolume
metadata:
labels:
kubernetes.io/nodename: anapistula-delrosalae
name: immich-library-lvmhdd
namespace: openebs
spec:
capacity: 150Gi
ownerNodeID: anapistula-delrosalae
shared: "yes"
thinProvision: "no"
vgPattern: ^openebs-hdd$
volGroup: openebs-hdd
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: immich-library-lvmhdd
spec:
capacity:
storage: 150Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: hdd-lvmpv
volumeMode: Filesystem
csi:
driver: local.csi.openebs.io
fsType: btrfs
volumeHandle: immich-library-lvmhdd
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: library-lvmhdd
namespace: immich
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 150Gi
storageClassName: hdd-lvmpv
volumeName: immich-library-lvmhdd


@@ -0,0 +1,11 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- namespace.yaml
- valkey-volume.yaml
- redis.yaml
- postgres-password.yaml
- postgres-volume.yaml
- postgres-cluster.yaml
- immich-library.yaml
- release.yaml


@@ -0,0 +1,4 @@
apiVersion: v1
kind: Namespace
metadata:
name: immich


@@ -0,0 +1,42 @@
---
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
name: immich-db-lvmhdd
namespace: immich
spec:
# TODO: Configure renovate to handle imageName
imageName: ghcr.io/tensorchord/cloudnative-vectorchord:14-0.4.3
instances: 1
storage:
pvcTemplate:
storageClassName: hdd-lvmpv
resources:
requests:
storage: 10Gi
volumeName: immich-db-lvmhdd-1
# Just to avoid bootstrapping the instance again
# I migrated data manually using pv_migrate because this feature is broken
# when source and target volumes are in different storage classes
# CNPG just sets dataSource to the PVC and expects the underlying storage
# to handle the migration, but it doesn't work here
bootstrap:
recovery:
backup:
name: backup-migration
# We need to create a custom role because the default one does not allow
# setting up the vectorchord extension
managed:
roles:
- name: immich
createdb: true
login: true
superuser: true
# We need to create the secret manually
# https://github.com/cloudnative-pg/cloudnative-pg/issues/3788
passwordSecret:
name: immich-db-immich


@@ -0,0 +1,38 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: immich-password
namespace: immich
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultAuth
metadata:
name: immich
namespace: immich
spec:
method: kubernetes
mount: kubernetes
kubernetes:
role: immich
serviceAccount: immich-password
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultStaticSecret
metadata:
name: immich-db
namespace: immich
spec:
type: kv-v2
mount: secret
path: immich-db
destination:
create: true
name: immich-db-immich
type: kubernetes.io/basic-auth
transformation:
excludeRaw: true
vaultAuthRef: immich


@@ -0,0 +1,33 @@
apiVersion: local.openebs.io/v1alpha1
kind: LVMVolume
metadata:
labels:
kubernetes.io/nodename: anapistula-delrosalae
name: immich-db-lvmhdd-1
namespace: openebs
spec:
capacity: 10Gi
ownerNodeID: anapistula-delrosalae
shared: "yes"
thinProvision: "no"
vgPattern: ^openebs-hdd$
volGroup: openebs-hdd
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: immich-db-lvmhdd-1
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: hdd-lvmpv
volumeMode: Filesystem
csi:
driver: local.csi.openebs.io
fsType: btrfs
volumeHandle: immich-db-lvmhdd-1
---
# PVCs are dynamically created by the Postgres operator

apps/immich/redis.yaml

@@ -0,0 +1,36 @@
---
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
name: valkey
namespace: immich
spec:
interval: 24h
url: https://valkey.io/valkey-helm/
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
name: valkey
namespace: immich
spec:
interval: 30m
chart:
spec:
chart: valkey
version: 0.9.3
sourceRef:
kind: HelmRepository
name: valkey
values:
dataStorage:
enabled: true
persistentVolumeClaimName: immich-valkey
auth:
enabled: true
usersExistingSecret: redis
aclUsers:
default:
passwordKey: redis-password
permissions: "~* &* +@all"

apps/immich/release.yaml

@@ -0,0 +1,69 @@
---
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
name: secustor
namespace: immich
spec:
interval: 24h
url: https://secustor.dev/helm-charts
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
name: immich
namespace: immich
spec:
interval: 30m
chart:
spec:
chart: immich
version: 1.2.2
sourceRef:
kind: HelmRepository
name: secustor
values:
common:
config:
vecotrExtension: vectorchord
postgres:
host: immich-db-lvmhdd-rw
existingSecret:
enabled: true
secretName: immich-db-immich
usernameKey: username
passwordKey: password
redis:
host: valkey
existingSecret:
enabled: true
secretName: redis
passwordKey: redis-password
server:
volumeMounts:
- mountPath: /usr/src/app/upload
name: uploads
volumes:
- name: uploads
persistentVolumeClaim:
claimName: library-lvmhdd
machineLearning:
enabled: true
ingress:
enabled: true
className: nginx-ingress
annotations:
cert-manager.io/cluster-issuer: letsencrypt
nginx.ingress.kubernetes.io/proxy-body-size: "0"
hosts:
- host: immich.lumpiasty.xyz
paths:
- path: /
pathType: Prefix
tls:
- hosts:
- immich.lumpiasty.xyz
secretName: immich-ingress


@@ -0,0 +1,46 @@
---
apiVersion: local.openebs.io/v1alpha1
kind: LVMVolume
metadata:
labels:
kubernetes.io/nodename: anapistula-delrosalae
name: immich-valkey
namespace: openebs
spec:
capacity: 1Gi
ownerNodeID: anapistula-delrosalae
shared: "yes"
thinProvision: "no"
vgPattern: ^openebs-hdd$
volGroup: openebs-hdd
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: immich-valkey
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: hdd-lvmpv
volumeMode: Filesystem
csi:
driver: local.csi.openebs.io
fsType: btrfs
volumeHandle: immich-valkey
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: immich-valkey
namespace: immich
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storageClassName: hdd-lvmpv
volumeName: immich-valkey


@@ -0,0 +1,7 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- namespace.yaml
- pvc.yaml
- statefulset.yaml
- service.yaml


@@ -0,0 +1,4 @@
apiVersion: v1
kind: Namespace
metadata:
name: ispeak3

apps/ispeak3/pvc.yaml

@@ -0,0 +1,49 @@
apiVersion: local.openebs.io/v1alpha1
kind: LVMVolume
metadata:
labels:
kubernetes.io/nodename: anapistula-delrosalae
name: ispeak3-ts3-data
namespace: openebs
spec:
capacity: 1Gi
ownerNodeID: anapistula-delrosalae
shared: "yes"
thinProvision: "no"
vgPattern: ^openebs-hdd$
volGroup: openebs-hdd
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: ispeak3-ts3-data
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Delete
storageClassName: openebs-lvmpv
volumeMode: Filesystem
csi:
driver: local.csi.openebs.io
volumeHandle: ispeak3-ts3-data
claimRef:
apiVersion: v1
kind: PersistentVolumeClaim
namespace: ispeak3
name: ispeak3-ts3-data
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: ispeak3-ts3-data
namespace: ispeak3
spec:
storageClassName: openebs-lvmpv
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
volumeName: ispeak3-ts3-data

apps/ispeak3/service.yaml

@@ -0,0 +1,20 @@
apiVersion: v1
kind: Service
metadata:
name: teamspeak3
namespace: ispeak3
spec:
selector:
app: teamspeak3
ports:
- name: voice
protocol: UDP
port: 9987
targetPort: 9987
- name: filetransfer
protocol: TCP
port: 30033
targetPort: 30033
type: LoadBalancer
externalTrafficPolicy: Local
ipFamilyPolicy: PreferDualStack


@@ -0,0 +1,34 @@
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: teamspeak3-server
namespace: ispeak3
spec:
serviceName: "teamspeak3"
replicas: 1
selector:
matchLabels:
app: teamspeak3
template:
metadata:
labels:
app: teamspeak3
spec:
containers:
- name: teamspeak3
image: teamspeak:3.13.7
ports:
- containerPort: 9987
name: voice
protocol: UDP
- containerPort: 10011
name: query
- containerPort: 30033
name: filetransfer
volumeMounts:
- name: ts3-data
mountPath: /var/ts3server/
volumes:
- name: ts3-data
persistentVolumeClaim:
claimName: ispeak3-ts3-data


@@ -1,9 +1,17 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- crawl4ai
- crawl4ai-proxy
- authentik
- gitea
- registry
- renovate
- ollama
- librechat
- researcher
- frigate
- llama
- immich
- nas
- searxng
- ispeak3
- openwebui
- woodpecker


@@ -2,89 +2,119 @@
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
name: bat-librechat
name: dynomite567-charts
namespace: librechat
spec:
interval: 24h
url: https://charts.blue-atlas.de
url: https://dynomite567.github.io/helm-charts/
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
name: librechat
namespace: librechat
spec:
interval: 30m
chart:
spec:
chart: librechat
version: 1.8.9
sourceRef:
kind: HelmRepository
name: bat-librechat
values:
global:
librechat:
existingSecretName: librechat
librechat:
configEnv:
PLUGIN_MODELS: null
ALLOW_REGISTRATION: "false"
TRUST_PROXY: "1"
DOMAIN_CLIENT: https://librechat.lumpiasty.xyz
SEARCH: "true"
existingSecretName: librechat
configYamlContent: |
version: 1.0.3
# apiVersion: helm.toolkit.fluxcd.io/v2
# kind: HelmRelease
# metadata:
# name: librechat
# namespace: librechat
# spec:
# interval: 30m
# chart:
# spec:
# chart: librechat
# version: 1.9.1
# sourceRef:
# kind: HelmRepository
# name: dynomite567-charts
# values:
# global:
# librechat:
# existingSecretName: librechat
# librechat:
# configEnv:
# PLUGIN_MODELS: null
# ALLOW_REGISTRATION: "false"
# TRUST_PROXY: "1"
# DOMAIN_CLIENT: https://librechat.lumpiasty.xyz
# SEARCH: "true"
# existingSecretName: librechat
# configYamlContent: |
# version: 1.0.3
endpoints:
custom:
- name: "Ollama"
apiKey: "ollama"
baseURL: "http://ollama.ollama.svc.cluster.local:11434/v1/chat/completions"
models:
default: [
"llama2",
"mistral",
"codellama",
"dolphin-mixtral",
"mistral-openorca"
]
# fetching the list of models is supported, but the `name` field must start
# with `ollama` (case-insensitive), as it does in this example.
fetch: true
titleConvo: true
titleModel: "current_model"
summarize: false
summaryModel: "current_model"
forcePrompt: false
modelDisplayLabel: "Ollama"
imageVolume:
enabled: true
size: 10G
accessModes: ReadWriteOnce
storageClassName: mayastor-single-hdd
ingress:
enabled: true
className: nginx
annotations:
cert-manager.io/cluster-issuer: letsencrypt
hosts:
- host: librechat.lumpiasty.xyz
paths:
- path: /
pathType: ImplementationSpecific
tls:
- hosts:
- librechat.lumpiasty.xyz
secretName: librechat-ingress
# endpoints:
# custom:
# - name: "Llama.cpp"
# apiKey: "llama"
# baseURL: "http://llama.llama.svc.cluster.local:11434/v1"
# models:
# default: [
# "DeepSeek-R1-0528-Qwen3-8B-GGUF",
# "Qwen3-8B-GGUF",
# "Qwen3-8B-GGUF-no-thinking",
# "gemma3n-e4b",
# "gemma3-12b",
# "gemma3-12b-q2",
# "gemma3-12b-novision",
# "gemma3-4b",
# "gemma3-4b-novision",
# "Qwen3-4B-Thinking-2507",
# "Qwen3-4B-Thinking-2507-long-ctx",
# "Qwen2.5-VL-7B-Instruct-GGUF",
# "Qwen2.5-VL-32B-Instruct-GGUF-IQ1_S",
# "Qwen2.5-VL-32B-Instruct-GGUF-Q2_K_L",
# "Qwen3-VL-2B-Instruct-GGUF",
# "Qwen3-VL-2B-Instruct-GGUF-unslothish",
# "Qwen3-VL-2B-Thinking-GGUF",
# "Qwen3-VL-4B-Instruct-GGUF",
# "Qwen3-VL-4B-Instruct-GGUF-unslothish",
# "Qwen3-VL-4B-Thinking-GGUF",
# "Qwen3-VL-8B-Instruct-GGUF",
# "Qwen3-VL-8B-Instruct-GGUF-unslothish",
# "Qwen3-VL-8B-Thinking-GGUF",
# "Huihui-Qwen3-VL-8B-Instruct-abliterated-GGUF",
# "Huihui-Qwen3-VL-8B-Thinking-abliterated-GGUF"
# ]
# titleConvo: true
# titleModel: "gemma3-4b-novision"
# summarize: false
# summaryModel: "gemma3-4b-novision"
# forcePrompt: false
# modelDisplayLabel: "Llama.cpp"
mongodb:
persistence:
storageClass: mayastor-single-hdd
# # ✨ IMPORTANT: let llama-swap/llama-server own all these
# dropParams:
# - "temperature"
# - "top_p"
# - "top_k"
# - "presence_penalty"
# - "frequency_penalty"
# - "stop"
# - "max_tokens"
# imageVolume:
# enabled: true
# size: 10G
# accessModes: ReadWriteOnce
# storageClassName: mayastor-single-hdd
# ingress:
# enabled: true
# className: nginx-ingress
# annotations:
# cert-manager.io/cluster-issuer: letsencrypt
# nginx.ingress.kubernetes.io/proxy-body-size: "0"
# nginx.ingress.kubernetes.io/proxy-buffering: "false"
# nginx.ingress.kubernetes.io/proxy-read-timeout: 30m
# hosts:
# - host: librechat.lumpiasty.xyz
# paths:
# - path: /
# pathType: ImplementationSpecific
# tls:
# - hosts:
# - librechat.lumpiasty.xyz
# secretName: librechat-ingress
meilisearch:
persistence:
storageClass: mayastor-single-hdd
auth:
existingMasterKeySecret: librechat
# mongodb:
# persistence:
# storageClass: mayastor-single-hdd
# meilisearch:
# persistence:
# storageClass: mayastor-single-hdd
# auth:
# existingMasterKeySecret: librechat


@@ -2,21 +2,21 @@
apiVersion: apps/v1
kind: Deployment
metadata:
- name: ollama-proxy
- namespace: ollama
+ name: llama-proxy
+ namespace: llama
spec:
replicas: 1
selector:
matchLabels:
- app.kubernetes.io/name: ollama-proxy
+ app.kubernetes.io/name: llama-proxy
template:
metadata:
labels:
- app.kubernetes.io/name: ollama-proxy
+ app.kubernetes.io/name: llama-proxy
spec:
containers:
- name: caddy
- image: caddy:2.9.1-alpine
+ image: caddy:2.11.2-alpine
imagePullPolicy: IfNotPresent
volumeMounts:
- mountPath: /etc/caddy
@@ -25,21 +25,21 @@ spec:
- name: API_KEY
valueFrom:
secretKeyRef:
- name: ollama-api-key
+ name: llama-api-key
key: API_KEY
volumes:
- name: proxy-config
configMap:
- name: ollama-proxy-config
+ name: llama-proxy-config
---
apiVersion: v1
kind: ConfigMap
metadata:
- namespace: ollama
- name: ollama-proxy-config
+ namespace: llama
+ name: llama-proxy-config
data:
Caddyfile: |
- http://ollama.lumpiasty.xyz {
+ http://llama.lumpiasty.xyz {
@requireAuth {
not header Authorization "Bearer {env.API_KEY}"
@@ -47,7 +47,7 @@ data:
respond @requireAuth "Unauthorized" 401
- reverse_proxy ollama:11434 {
+ reverse_proxy llama:11434 {
flush_interval -1
}
}
@@ -55,12 +55,12 @@ data:
apiVersion: v1
kind: Service
metadata:
- namespace: ollama
- name: ollama-proxy
+ namespace: llama
+ name: llama-proxy
spec:
type: ClusterIP
selector:
- app.kubernetes.io/name: ollama-proxy
+ app.kubernetes.io/name: llama-proxy
ports:
- name: http
port: 80
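The `@requireAuth` matcher above answers 401 unless the request carries exactly `Authorization: Bearer <API_KEY>`; everything else is reverse-proxied to `llama:11434`. A minimal Python sketch of that check (a hypothetical helper for illustration, not part of the deployment):

```python
def authorize(headers: dict, api_key: str) -> int:
    # Mirrors the Caddy matcher: any request whose Authorization
    # header is not "Bearer <API_KEY>" gets a 401; the rest would
    # be reverse-proxied (represented here as a 200).
    if headers.get("Authorization") != f"Bearer {api_key}":
        return 401
    return 200

print(authorize({"Authorization": "Bearer s3cret"}, "s3cret"))  # 200
print(authorize({}, "s3cret"))                                  # 401
```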


@@ -0,0 +1,285 @@
# yaml-language-server: $schema=https://raw.githubusercontent.com/mostlygeek/llama-swap/refs/heads/main/config-schema.json
healthCheckTimeout: 600
logToStdout: "both" # proxy and upstream
macros:
base_args: "--no-warmup --port ${PORT}"
common_args: "--fit-target 1536 --no-warmup --port ${PORT}"
ctx_128k: "--ctx-size 131072"
ctx_256k: "--ctx-size 262144"
gemma_sampling: "--prio 2 --temp 1.0 --repeat-penalty 1.0 --min-p 0.00 --top-k 64 --top-p 0.95"
qwen35_sampling: "--temp 0.6 --top-p 0.95 --top-k 20 --min-p 0.00 -ctk q8_0 -ctv q8_0"
qwen35_35b_args: "--temp 1.0 --min-p 0.00 --top-p 0.95 --top-k 20 -ctk q8_0 -ctv q8_0"
qwen35_35b_heretic_mmproj: "--mmproj-url https://huggingface.co/unsloth/Qwen3.5-35B-A3B-GGUF/resolve/main/mmproj-F16.gguf --mmproj /root/.cache/llama.cpp/unsloth_Qwen3.5-35B-A3B-GGUF_mmproj-F16.gguf"
qwen35_4b_heretic_mmproj: "--mmproj-url https://huggingface.co/unsloth/Qwen3.5-4B-GGUF/resolve/main/mmproj-F16.gguf --mmproj /root/.cache/llama.cpp/unsloth_Qwen3.5-4B-GGUF_mmproj-F16.gguf"
glm47_flash_args: "--temp 0.7 --top-p 1.0 --min-p 0.01 --repeat-penalty 1.0"
gemma4_sampling: "--temp 1.0 --top-p 0.95 --top-k 64"
thinking_on: "--chat-template-kwargs '{\"enable_thinking\": true}'"
thinking_off: "--chat-template-kwargs '{\"enable_thinking\": false}'"
hooks:
on_startup:
preload:
- "Qwen3.5-0.8B-GGUF-nothink:Q4_K_XL"
groups:
always:
persistent: true
exclusive: false
swap: false
members:
- "Qwen3.5-0.8B-GGUF-nothink:Q4_K_XL"
models:
"gemma3-12b":
cmd: |
/app/llama-server
-hf unsloth/gemma-3-12b-it-GGUF:Q4_K_M
${ctx_128k}
${gemma_sampling}
${common_args}
"gemma3-12b-novision":
cmd: |
/app/llama-server
-hf unsloth/gemma-3-12b-it-GGUF:Q4_K_M
${ctx_128k}
${gemma_sampling}
--no-mmproj
${common_args}
"gemma3-4b":
cmd: |
/app/llama-server
-hf unsloth/gemma-3-4b-it-GGUF:Q4_K_M
${ctx_128k}
${gemma_sampling}
${common_args}
"gemma3-4b-novision":
cmd: |
/app/llama-server
-hf unsloth/gemma-3-4b-it-GGUF:Q4_K_M
${ctx_128k}
${gemma_sampling}
--no-mmproj
${common_args}
"Qwen3-Coder-Next-GGUF:Q4_K_M":
cmd: |
/app/llama-server
-hf unsloth/Qwen3-Coder-Next-GGUF:Q4_K_M
--ctx-size 65536
--predict 8192
--temp 1.0
--min-p 0.01
--top-p 0.95
--top-k 40
--repeat-penalty 1.0
-ctk q8_0 -ctv q8_0
${common_args}
"Qwen3.5-35B-A3B-GGUF:Q4_K_M":
cmd: |
/app/llama-server
-hf unsloth/Qwen3.5-35B-A3B-GGUF:Q4_K_M
${ctx_256k}
${qwen35_35b_args}
${common_args}
"Qwen3.5-35B-A3B-GGUF-nothink:Q4_K_M":
cmd: |
/app/llama-server
-hf unsloth/Qwen3.5-35B-A3B-GGUF:Q4_K_M
${ctx_256k}
${qwen35_35b_args}
${common_args}
${thinking_off}
# The "heretic" version does not ship an mmproj file,
# so we point --mmproj-url at the one from the non-heretic repo.
"Qwen3.5-35B-A3B-heretic-GGUF:Q4_K_M":
cmd: |
/app/llama-server
-hf mradermacher/Qwen3.5-35B-A3B-heretic-GGUF:Q4_K_M
${qwen35_35b_heretic_mmproj}
${ctx_256k}
${qwen35_35b_args}
${common_args}
"Qwen3.5-35B-A3B-heretic-GGUF-nothink:Q4_K_M":
cmd: |
/app/llama-server
-hf mradermacher/Qwen3.5-35B-A3B-heretic-GGUF:Q4_K_M
${qwen35_35b_heretic_mmproj}
${ctx_256k}
${qwen35_35b_args}
${common_args}
${thinking_off}
"Qwen3.5-0.8B-GGUF:Q4_K_XL":
cmd: |
/app/llama-server
-hf unsloth/Qwen3.5-0.8B-GGUF:Q4_K_XL
${ctx_256k}
${qwen35_sampling}
${base_args}
${thinking_on}
"Qwen3.5-0.8B-GGUF-nothink:Q4_K_XL":
cmd: |
/app/llama-server
-hf unsloth/Qwen3.5-0.8B-GGUF:Q4_K_XL
--ctx-size 4096
${qwen35_sampling}
${base_args}
${thinking_off}
"Qwen3.5-2B-GGUF:Q4_K_M":
cmd: |
/app/llama-server
-hf unsloth/Qwen3.5-2B-GGUF:Q4_K_M
${ctx_256k}
${qwen35_sampling}
${common_args}
${thinking_on}
"Qwen3.5-2B-GGUF-nothink:Q4_K_M":
cmd: |
/app/llama-server
-hf unsloth/Qwen3.5-2B-GGUF:Q4_K_M
${ctx_256k}
${qwen35_sampling}
${common_args}
${thinking_off}
"Qwen3.5-4B-GGUF:Q4_K_M":
cmd: |
/app/llama-server
-hf unsloth/Qwen3.5-4B-GGUF:Q4_K_M
${ctx_128k}
${qwen35_sampling}
${common_args}
${thinking_on}
"Qwen3.5-4B-GGUF-nothink:Q4_K_M":
cmd: |
/app/llama-server
-hf unsloth/Qwen3.5-4B-GGUF:Q4_K_M
${ctx_128k}
${qwen35_sampling}
${common_args}
${thinking_off}
"Qwen3.5-4B-heretic-GGUF:Q4_K_M":
cmd: |
/app/llama-server
-hf mradermacher/Qwen3.5-4B-heretic-GGUF:Q4_K_M
${qwen35_4b_heretic_mmproj}
${ctx_128k}
${qwen35_sampling}
${common_args}
${thinking_on}
"Qwen3.5-4B-heretic-GGUF-nothink:Q4_K_M":
cmd: |
/app/llama-server
-hf mradermacher/Qwen3.5-4B-heretic-GGUF:Q4_K_M
${qwen35_4b_heretic_mmproj}
${ctx_128k}
${qwen35_sampling}
${common_args}
${thinking_off}
"Qwen3.5-9B-GGUF:Q4_K_M":
cmd: |
/app/llama-server
-hf unsloth/Qwen3.5-9B-GGUF:Q4_K_M
${ctx_256k}
${qwen35_sampling}
${common_args}
${thinking_on}
"Qwen3.5-9B-GGUF-nothink:Q4_K_M":
cmd: |
/app/llama-server
-hf unsloth/Qwen3.5-9B-GGUF:Q4_K_M
${ctx_256k}
${qwen35_sampling}
${common_args}
${thinking_off}
"Qwen3.5-9B-GGUF:Q3_K_M":
cmd: |
/app/llama-server
-hf unsloth/Qwen3.5-9B-GGUF:Q3_K_M
${ctx_256k}
${qwen35_sampling}
${common_args}
${thinking_on}
"Qwen3.5-9B-GGUF-nothink:Q3_K_M":
cmd: |
/app/llama-server
-hf unsloth/Qwen3.5-9B-GGUF:Q3_K_M
${ctx_256k}
${qwen35_sampling}
${common_args}
${thinking_off}
"Qwen3.5-27B-GGUF:Q3_K_M":
cmd: |
/app/llama-server
-hf unsloth/Qwen3.5-27B-GGUF:Q3_K_M
${ctx_256k}
${qwen35_sampling}
${common_args}
${thinking_on}
"Qwen3.5-27B-GGUF-nothink:Q3_K_M":
cmd: |
/app/llama-server
-hf unsloth/Qwen3.5-27B-GGUF:Q3_K_M
${ctx_256k}
${qwen35_sampling}
${common_args}
${thinking_off}
"GLM-4.7-Flash-GGUF:Q4_K_M":
cmd: |
/app/llama-server
-hf unsloth/GLM-4.7-Flash-GGUF:Q4_K_M
${glm47_flash_args}
${common_args}
"gemma-4-26B-A4B-it:UD-Q4_K_XL":
cmd: |
/app/llama-server
-hf unsloth/gemma-4-26B-A4B-it-GGUF:UD-Q4_K_XL
${ctx_256k}
${gemma4_sampling}
${common_args}
"gemma-4-26B-A4B-it:UD-Q2_K_XL":
cmd: |
/app/llama-server
-hf unsloth/gemma-4-26B-A4B-it-GGUF:UD-Q2_K_XL
${ctx_256k}
${gemma4_sampling}
${common_args}
"unsloth/gemma-4-E4B-it-GGUF:UD-Q4_K_XL":
cmd: |
/app/llama-server
-hf unsloth/gemma-4-E4B-it-GGUF:UD-Q4_K_XL
${ctx_128k}
${gemma4_sampling}
${common_args}
"unsloth/gemma-4-E2B-it-GGUF:UD-Q4_K_XL":
cmd: |
/app/llama-server
-hf unsloth/gemma-4-E2B-it-GGUF:UD-Q4_K_XL
${ctx_128k}
${gemma4_sampling}
${common_args}
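llama-swap expands the `${...}` macros defined at the top of this file into each model's `cmd` before launching `llama-server`, with `${PORT}` filled in by the proxy itself. A rough Python sketch of that substitution, using two of the macros above (a simplification, not llama-swap's actual implementation):

```python
# Two macros copied from the config above; common_args itself
# references the built-in ${PORT} macro.
macros = {
    "ctx_128k": "--ctx-size 131072",
    "common_args": "--fit-target 1536 --no-warmup --port ${PORT}",
}

def expand(cmd: str, port: int = 8999) -> str:
    # Substitute user-defined macros, then the built-in ${PORT}.
    for name, value in macros.items():
        cmd = cmd.replace("${" + name + "}", value)
    return cmd.replace("${PORT}", str(port))

cmd = "/app/llama-server -hf unsloth/gemma-3-12b-it-GGUF:Q4_K_M ${ctx_128k} ${common_args}"
print(expand(cmd))
```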


@@ -0,0 +1,101 @@
{%- if not add_generation_prompt is defined %}
{%- set add_generation_prompt = false %}
{%- endif %}
{%- set ns = namespace(is_first=false, is_tool=false, is_output_first=true, system_prompt='', is_first_sp=true, is_last_user=false) %}
{%- for message in messages %}
{%- if message['role'] == 'system' %}
{%- if ns.is_first_sp %}
{%- set ns.system_prompt = ns.system_prompt + message['content'] %}
{%- set ns.is_first_sp = false %}
{%- else %}
{%- set ns.system_prompt = ns.system_prompt + '\n\n' + message['content'] %}
{%- endif %}
{%- endif %}
{%- endfor %}
{#- Adapted from https://github.com/sgl-project/sglang/blob/main/examples/chat_template/tool_chat_template_deepseekr1.jinja #}
{%- if tools is defined and tools is not none %}
{%- set tool_ns = namespace(text='You are a helpful assistant with tool calling capabilities. ' + 'When a tool call is needed, you MUST use the following format to issue the call:\n' + '<tool▁calls▁begin><tool▁call▁begin>function<tool▁sep>FUNCTION_NAME\n' + '```json\n{"param1": "value1", "param2": "value2"}\n```<tool▁call▁end><tool▁calls▁end>\n\n' + 'Make sure the JSON is valid.' + '## Tools\n\n### Function\n\nYou have the following functions available:\n\n') %}
{%- for tool in tools %}
{%- set tool_ns.text = tool_ns.text + '\n```json\n' + (tool | tojson) + '\n```\n' %}
{%- endfor %}
{%- if ns.system_prompt|length != 0 %}
{%- set ns.system_prompt = ns.system_prompt + '\n\n' + tool_ns.text %}
{%- else %}
{%- set ns.system_prompt = tool_ns.text %}
{%- endif %}
{%- endif %}
{{- bos_token }}
{{- '/no_think' + ns.system_prompt }}
{%- set last_index = (messages|length - 1) %}
{%- for message in messages %}
{%- set content = message['content'] %}
{%- if message['role'] == 'user' %}
{%- set ns.is_tool = false -%}
{%- set ns.is_first = false -%}
{%- set ns.is_last_user = true -%}
{%- if loop.index0 == last_index %}
{{- '<User>' + content }}
{%- else %}
{{- '<User>' + content + '<Assistant>'}}
{%- endif %}
{%- endif %}
{%- if message['role'] == 'assistant' %}
{%- if '</think>' in content %}
{%- set content = (content.split('</think>')|last) %}
{%- endif %}
{%- endif %}
{%- if message['role'] == 'assistant' and message['tool_calls'] is defined and message['tool_calls'] is not none %}
{%- set ns.is_last_user = false -%}
{%- if ns.is_tool %}
{{- '<tool▁outputs▁end>'}}
{%- endif %}
{%- set ns.is_first = false %}
{%- set ns.is_tool = false -%}
{%- set ns.is_output_first = true %}
{%- for tool in message['tool_calls'] %}
{%- set arguments = tool['function']['arguments'] %}
{%- if arguments is not string %}
{%- set arguments = arguments|tojson %}
{%- endif %}
{%- if not ns.is_first %}
{%- if content is none %}
{{- '<tool▁calls▁begin><tool▁call▁begin>' + tool['type'] + '<tool▁sep>' + tool['function']['name'] + '\n' + '```json' + '\n' + arguments + '\n' + '```' + '<tool▁call▁end>'}}
{%- else %}
{{- content + '<tool▁calls▁begin><tool▁call▁begin>' + tool['type'] + '<tool▁sep>' + tool['function']['name'] + '\n' + '```json' + '\n' + arguments + '\n' + '```' + '<tool▁call▁end>'}}
{%- endif %}
{%- set ns.is_first = true -%}
{%- else %}
{{- '\n' + '<tool▁call▁begin>' + tool['type'] + '<tool▁sep>' + tool['function']['name'] + '\n' + '```json' + '\n' + arguments + '\n' + '```' + '<tool▁call▁end>'}}
{%- endif %}
{%- endfor %}
{{- '<tool▁calls▁end><end▁of▁sentence>'}}
{%- endif %}
{%- if message['role'] == 'assistant' and (message['tool_calls'] is not defined or message['tool_calls'] is none) %}
{%- set ns.is_last_user = false -%}
{%- if ns.is_tool %}
{{- '<tool▁outputs▁end>' + content + '<end▁of▁sentence>'}}
{%- set ns.is_tool = false -%}
{%- else %}
{{- content + '<end▁of▁sentence>'}}
{%- endif %}
{%- endif %}
{%- if message['role'] == 'tool' %}
{%- set ns.is_last_user = false -%}
{%- set ns.is_tool = true -%}
{%- if ns.is_output_first %}
{{- '<tool▁outputs▁begin><tool▁output▁begin>' + content + '<tool▁output▁end>'}}
{%- set ns.is_output_first = false %}
{%- else %}
{{- '\n<tool▁output▁begin>' + content + '<tool▁output▁end>'}}
{%- endif %}
{%- endif %}
{%- endfor -%}
{%- if ns.is_tool %}
{{- '<tool▁outputs▁end>'}}
{%- endif %}
{#- if add_generation_prompt and not ns.is_last_user and not ns.is_tool #}
{%- if add_generation_prompt and not ns.is_tool %}
{{- '<Assistant>'}}
{%- endif %}
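One detail worth noting in the template above: for assistant turns it keeps only the text after the last `</think>` tag, so earlier reasoning is dropped from the history replayed to the model. The Jinja filter chain `(content.split('</think>')|last)` is equivalent to this Python:

```python
def strip_thinking(content: str) -> str:
    # Same as the template's (content.split('</think>')|last):
    # everything up to and including the last </think> is discarded.
    if '</think>' in content:
        return content.split('</think>')[-1]
    return content

print(strip_thinking("<think>chain of thought</think>The answer is 4."))  # The answer is 4.
```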


@@ -0,0 +1,72 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: llama-swap
namespace: llama
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: llama-swap
template:
metadata:
labels:
app: llama-swap
spec:
containers:
- name: llama-swap
image: ghcr.io/mostlygeek/llama-swap:v199-vulkan-b8637
imagePullPolicy: IfNotPresent
command:
- /app/llama-swap
args:
- --config=/config/config.yaml
- --watch-config
ports:
- containerPort: 8080
name: http
protocol: TCP
volumeMounts:
- name: models
mountPath: /root/.cache
- mountPath: /dev/kfd
name: kfd
- mountPath: /dev/dri
name: dri
- mountPath: /config
name: config
securityContext:
privileged: true
volumes:
- name: models
persistentVolumeClaim:
claimName: llama-models-lvmssd
- name: kfd
hostPath:
path: /dev/kfd
type: CharDevice
- name: dri
hostPath:
path: /dev/dri
type: Directory
- name: config
configMap:
name: llama-swap
---
apiVersion: v1
kind: Service
metadata:
name: llama
namespace: llama
spec:
type: ClusterIP
ports:
- name: http
port: 11434
targetPort: 8080
protocol: TCP
selector:
app: llama-swap
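With this Service in front of llama-swap, in-cluster clients talk to `http://llama.llama.svc.cluster.local:11434/v1` as an ordinary OpenAI-compatible endpoint, and naming any model key from the config triggers the corresponding swap. A sketch of the request such a client would build (no network call is made here; the model name is taken from the config above):

```python
import json

# Service name/port defined above
base_url = "http://llama.llama.svc.cluster.local:11434/v1"
payload = {
    "model": "gemma3-4b",  # any model key from the llama-swap config
    "messages": [{"role": "user", "content": "Hello"}],
}
print(base_url + "/chat/completions")
print(json.dumps(payload))
```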


@@ -2,27 +2,27 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
- namespace: ollama
- name: ollama
+ namespace: llama
+ name: llama
annotations:
cert-manager.io/cluster-issuer: letsencrypt
acme.cert-manager.io/http01-edit-in-place: "true"
nginx.ingress.kubernetes.io/proxy-buffering: "false"
nginx.ingress.kubernetes.io/proxy-read-timeout: 30m
spec:
- ingressClassName: nginx
+ ingressClassName: nginx-ingress
rules:
- - host: ollama.lumpiasty.xyz
+ - host: llama.lumpiasty.xyz
http:
paths:
- backend:
service:
- name: ollama-proxy
+ name: llama-proxy
port:
number: 80
path: /
pathType: Prefix
tls:
- hosts:
- - ollama.lumpiasty.xyz
- secretName: ollama-ingress
+ - llama.lumpiasty.xyz
+ secretName: llama-ingress


@@ -0,0 +1,15 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- namespace.yaml
- secret.yaml
- auth-proxy.yaml
- ingress.yaml
- pvc-ssd.yaml
- deployment.yaml
configMapGenerator:
- name: llama-swap
namespace: llama
files:
- config.yaml=configs/config.yaml
- qwen_nothink_chat_template.jinja=configs/qwen_nothink_chat_template.jinja


@@ -2,4 +2,4 @@
apiVersion: v1
kind: Namespace
metadata:
- name: ollama
+ name: llama

apps/llama/pvc-ssd.yaml

@@ -0,0 +1,46 @@
---
apiVersion: local.openebs.io/v1alpha1
kind: LVMVolume
metadata:
labels:
kubernetes.io/nodename: anapistula-delrosalae
name: llama-models-lvmssd
namespace: openebs
spec:
capacity: "322122547200"
ownerNodeID: anapistula-delrosalae
shared: "yes"
thinProvision: "no"
vgPattern: ^openebs-ssd$
volGroup: openebs-ssd
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: llama-models-lvmssd
spec:
capacity:
storage: 300Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: ssd-lvmpv
volumeMode: Filesystem
csi:
driver: local.csi.openebs.io
fsType: btrfs
volumeHandle: llama-models-lvmssd
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: llama-models-lvmssd
namespace: llama
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 300Gi
storageClassName: ssd-lvmpv
volumeName: llama-models-lvmssd
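The LVMVolume's raw `capacity` string and the PV/PVC `storage: 300Gi` describe the same size, since Kubernetes' `Gi` suffix means powers of 1024. A quick arithmetic check:

```python
GI = 1024 ** 3                     # bytes per "Gi" (gibibyte)
lvm_capacity_bytes = 322122547200  # LVMVolume spec.capacity above
print(lvm_capacity_bytes // GI)    # 300 -> matches storage: 300Gi
```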

apps/llama/secret.yaml

@@ -0,0 +1,38 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: llama-proxy
namespace: llama
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultAuth
metadata:
name: llama
namespace: llama
spec:
method: kubernetes
mount: kubernetes
kubernetes:
role: llama-proxy
serviceAccount: llama-proxy
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultStaticSecret
metadata:
name: llama-api-key
namespace: llama
spec:
type: kv-v2
mount: secret
path: ollama
destination:
create: true
name: llama-api-key
type: Opaque
transformation:
excludeRaw: true
vaultAuthRef: llama

apps/nas/configmap.yaml

@@ -0,0 +1,28 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: nas-sftp-config
namespace: nas
data:
sftp.json: |
{
"Global": {
"Chroot": {
"Directory": "%h",
"StartPath": "data"
},
"Directories": [
"data"
]
},
"Users": [
{
"Username": "nas",
"UID": 1000,
"GID": 1000,
"PublicKeys": [
"ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCresbDFZijI+rZMgd3LdciPjpb4x4S5B7y0U+EoYPaz6hILT72fyz3QdcgKJJv8JUJI6g0811/yFRuOzCXgWaA922c/S/t6HMUrorh7mPVQMTN2dc/SVBvMa7S2M9NYBj6z1X2LRHs+g1JTMCtL202PIjes/E9qu0as0Vx6n/6HHNmtmA9LrpiAmurbeKXDmrYe2yWg/FA6cX5d86SJb21Dj8WqdCd3Hz0Pi6FzMKXhpWvs5Hfei1htsjsRzCxkpSTjlgFEFVfmHIXPfB06Sa6aCnkxAFnE7N+xNa9RIWeZmOXdA74LsfSKQ9eAXSrsC/IRxo2ce8cBzXJy+Itxw24fUqGYXBiCgx8i3ZA9IdwI1u71xYo9lyNjav5VykzKnAHRAYnDm9UsCf8k04reBevcLdtxL11vPCtind3xn76Nhy2b45dcp/MdYFANGsCcXJOMb6Aisb03HPGhs/aU3tCAQbTVe195mL9FWhGqIK2wBmF1SKW+4ssX2bIU6YaCYc= cardno:23_671_999"
]
}
]
}
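The `sftp.json` above chroots the `nas` user into its home directory (`%h`), starts sessions in `data`, and exposes only that directory. As a sanity check that the document has the shape the emberstack/sftp image expects, here is a shortened copy (public key truncated for illustration) parsed with Python's `json` module:

```python
import json

# Abbreviated version of the ConfigMap's sftp.json (key truncated).
sftp_config = json.loads("""
{
  "Global": {"Chroot": {"Directory": "%h", "StartPath": "data"}, "Directories": ["data"]},
  "Users": [{"Username": "nas", "UID": 1000, "GID": 1000,
             "PublicKeys": ["ssh-rsa AAAA... (truncated)"]}]
}
""")
print(sftp_config["Global"]["Chroot"]["StartPath"])  # data
print(sftp_config["Users"][0]["Username"])           # nas
```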

apps/nas/deployment.yaml

@@ -0,0 +1,68 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: nas-sftp
namespace: nas
spec:
replicas: 1
selector:
matchLabels:
app: nas-sftp
template:
metadata:
labels:
app: nas-sftp
spec:
initContainers:
- name: prepare-home
image: alpine:3.23.3
imagePullPolicy: IfNotPresent
command:
- /bin/sh
- -c
- |
set -euo pipefail
mkdir -p /volume/sftp-root
chown root:root /volume/sftp-root
chmod 755 /volume/sftp-root
mkdir -p /volume/sftp-root/data
chown 1000:1000 /volume/sftp-root/data
chmod 750 /volume/sftp-root/data
mkdir -p /volume/host-keys
chown root:root /volume/host-keys
chmod 700 /volume/host-keys
volumeMounts:
- name: home
mountPath: /volume
containers:
- name: sftp
image: docker.io/emberstack/sftp:build-5.1.72
imagePullPolicy: IfNotPresent
ports:
- containerPort: 22
name: sftp
protocol: TCP
volumeMounts:
- name: config
mountPath: /app/config/sftp.json
subPath: sftp.json
readOnly: true
- name: home
mountPath: /home/nas
subPath: sftp-root
- name: home
mountPath: /etc/ssh/keys
subPath: host-keys
resources:
requests:
cpu: 50m
memory: 128Mi
limits:
memory: 512Mi
volumes:
- name: home
persistentVolumeClaim:
claimName: nas-data-lvm-hdd
- name: config
configMap:
name: nas-sftp-config


@@ -1,8 +1,8 @@
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- namespace.yaml
- volume.yaml
- configmap.yaml
- pvc.yaml
- deployment.yaml
- ingress.yaml
- service.yaml

apps/nas/namespace.yaml

@@ -0,0 +1,4 @@
apiVersion: v1
kind: Namespace
metadata:
name: nas

apps/nas/pvc.yaml

@@ -0,0 +1,49 @@
apiVersion: local.openebs.io/v1alpha1
kind: LVMVolume
metadata:
labels:
kubernetes.io/nodename: anapistula-delrosalae
name: nas-data-lvm-hdd
namespace: openebs
spec:
capacity: 4Gi
ownerNodeID: anapistula-delrosalae
shared: "yes"
thinProvision: "no"
vgPattern: ^openebs-hdd$
volGroup: openebs-hdd
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: nas-data-lvm-hdd
spec:
capacity:
storage: 4Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Delete
storageClassName: openebs-lvmpv
volumeMode: Filesystem
csi:
driver: local.csi.openebs.io
volumeHandle: nas-data-lvm-hdd
claimRef:
apiVersion: v1
kind: PersistentVolumeClaim
namespace: nas
name: nas-data-lvm-hdd
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: nas-data-lvm-hdd
namespace: nas
spec:
storageClassName: openebs-lvmpv
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 4Gi
volumeName: nas-data-lvm-hdd

apps/nas/service.yaml

@@ -0,0 +1,15 @@
apiVersion: v1
kind: Service
metadata:
name: nas-sftp
namespace: nas
spec:
type: LoadBalancer
externalTrafficPolicy: Cluster
ports:
- name: sftp
port: 22
targetPort: 22
protocol: TCP
selector:
app: nas-sftp


@@ -1,60 +0,0 @@
---
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
name: ollama-helm
namespace: ollama
spec:
interval: 24h
url: https://otwld.github.io/ollama-helm/
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
name: ollama
namespace: ollama
spec:
interval: 30m
chart:
spec:
chart: ollama
version: 1.14.0
sourceRef:
kind: HelmRepository
name: ollama-helm
namespace: ollama
interval: 12h
values:
ollama:
gpu:
enabled: false
persistentVolume:
enabled: true
storageClass: mayastor-single-hdd
size: 200Gi
# GPU support
# Rewrite of options in
# https://hub.docker.com/r/grinco/ollama-amd-apu
image:
repository: grinco/ollama-amd-apu
tag: vulkan
securityContext:
# Not ideal
privileged: true
capabilities:
add:
- PERFMON
volumeMounts:
- name: kfd
mountPath: /dev/kfd
- name: dri
mountPath: /dev/dri
volumes:
- name: kfd
hostPath:
path: /dev/kfd
type: CharDevice
- name: dri
hostPath:
path: /dev/dri
type: Directory


@@ -0,0 +1,44 @@
---
apiVersion: v1
kind: Service
metadata:
namespace: openwebui
name: openwebui-web
spec:
type: ClusterIP
selector:
app.kubernetes.io/component: open-webui
app.kubernetes.io/instance: openwebui
ports:
- name: http
port: 80
targetPort: 8080
protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
namespace: openwebui
name: openwebui
annotations:
cert-manager.io/cluster-issuer: letsencrypt
nginx.ingress.kubernetes.io/proxy-body-size: "0"
nginx.ingress.kubernetes.io/proxy-buffering: "false"
nginx.ingress.kubernetes.io/proxy-read-timeout: 30m
spec:
ingressClassName: nginx-ingress
rules:
- host: openwebui.lumpiasty.xyz
http:
paths:
- backend:
service:
name: openwebui-web
port:
number: 80
path: /
pathType: Prefix
tls:
- hosts:
- openwebui.lumpiasty.xyz
secretName: openwebui-ingress


@@ -2,6 +2,8 @@ apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- namespace.yaml
- pvc.yaml
- pvc-pipelines.yaml
- secret.yaml
- release.yaml
- auth-proxy.yaml
- ingress.yaml


@@ -0,0 +1,4 @@
apiVersion: v1
kind: Namespace
metadata:
name: openwebui


@@ -0,0 +1,46 @@
---
apiVersion: local.openebs.io/v1alpha1
kind: LVMVolume
metadata:
labels:
kubernetes.io/nodename: anapistula-delrosalae
name: openwebui-pipelines-lvmhdd
namespace: openebs
spec:
capacity: 1Gi
ownerNodeID: anapistula-delrosalae
shared: "yes"
thinProvision: "no"
vgPattern: ^openebs-hdd$
volGroup: openebs-hdd
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: openwebui-pipelines-lvmhdd
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: hdd-lvmpv
volumeMode: Filesystem
csi:
driver: local.csi.openebs.io
fsType: btrfs
volumeHandle: openwebui-pipelines-lvmhdd
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: openwebui-pipelines-lvmhdd
namespace: openwebui
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storageClassName: hdd-lvmpv
volumeName: openwebui-pipelines-lvmhdd

apps/openwebui/pvc.yaml

@@ -0,0 +1,46 @@
---
apiVersion: local.openebs.io/v1alpha1
kind: LVMVolume
metadata:
labels:
kubernetes.io/nodename: anapistula-delrosalae
name: openwebui-lvmhdd
namespace: openebs
spec:
capacity: 10Gi
ownerNodeID: anapistula-delrosalae
shared: "yes"
thinProvision: "no"
vgPattern: ^openebs-hdd$
volGroup: openebs-hdd
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: openwebui-lvmhdd
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: hdd-lvmpv
volumeMode: Filesystem
csi:
driver: local.csi.openebs.io
fsType: btrfs
volumeHandle: openwebui-lvmhdd
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: openwebui-lvmhdd
namespace: openwebui
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
storageClassName: hdd-lvmpv
volumeName: openwebui-lvmhdd


@@ -0,0 +1,73 @@
---
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
name: open-webui
namespace: openwebui
spec:
interval: 24h
url: https://open-webui.github.io/helm-charts
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
name: openwebui
namespace: openwebui
spec:
interval: 30m
chart:
spec:
chart: open-webui
version: 12.13.0
sourceRef:
kind: HelmRepository
name: open-webui
values:
# Disable the chart's built-in ingress: its Service is broken
# (the target port is hard-coded incorrectly), so the ingress
# is reimplemented in ingress.yaml.
ingress:
enabled: false
persistence:
enabled: true
existingClaim: openwebui-lvmhdd
enableOpenaiApi: true
openaiBaseApiUrl: "http://llama.llama.svc.cluster.local:11434/v1"
ollama:
enabled: false
pipelines:
enabled: true
persistence:
enabled: true
existingClaim: openwebui-pipelines-lvmhdd
# SSO with Authentik
extraEnvVars:
- name: WEBUI_URL
value: "https://openwebui.lumpiasty.xyz"
- name: OAUTH_CLIENT_ID
valueFrom:
secretKeyRef:
name: openwebui-authentik
key: client_id
- name: OAUTH_CLIENT_SECRET
valueFrom:
secretKeyRef:
name: openwebui-authentik
key: client_secret
- name: OAUTH_PROVIDER_NAME
value: "authentik"
- name: OPENID_PROVIDER_URL
value: "https://authentik.lumpiasty.xyz/application/o/open-web-ui/.well-known/openid-configuration"
- name: OPENID_REDIRECT_URI
value: "https://openwebui.lumpiasty.xyz/oauth/oidc/callback"
- name: ENABLE_OAUTH_SIGNUP
value: "true"
- name: ENABLE_LOGIN_FORM
value: "false"
- name: OAUTH_MERGE_ACCOUNTS_BY_EMAIL
value: "true"


@@ -0,0 +1,43 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: openwebui-secret
namespace: openwebui
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultAuth
metadata:
name: openwebui
namespace: openwebui
spec:
method: kubernetes
mount: kubernetes
kubernetes:
role: openwebui
serviceAccount: openwebui-secret
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultStaticSecret
metadata:
name: openwebui-authentik
namespace: openwebui
spec:
type: kv-v2
mount: secret
path: authentik/openwebui
destination:
create: true
name: openwebui-authentik
type: Opaque
transformation:
excludeRaw: true
templates:
client_id:
text: '{{ get .Secrets "client_id" }}'
client_secret:
text: '{{ get .Secrets "client_secret" }}'
vaultAuthRef: openwebui


@@ -1,40 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: registry
namespace: registry
spec:
replicas: 1
selector:
matchLabels:
app: registry
template:
metadata:
labels:
app: registry
spec:
containers:
- name: registry
image: registry:3.0.0
ports:
- containerPort: 5000
volumeMounts:
- name: data
mountPath: /var/lib/registry
volumes:
- name: data
persistentVolumeClaim:
claimName: registry-data
---
apiVersion: v1
kind: Service
metadata:
name: registry-service
namespace: registry
spec:
selector:
app: registry
ports:
- protocol: TCP
port: 80
targetPort: 5000


@@ -1,13 +0,0 @@
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: registry-data
namespace: registry
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 50Gi
storageClassName: mayastor-single-hdd


@@ -0,0 +1,11 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
namespace: renovate
name: renovate-config
data:
RENOVATE_AUTODISCOVER: "true"
RENOVATE_ENDPOINT: https://gitea.lumpiasty.xyz/api/v1
RENOVATE_PLATFORM: gitea
RENOVATE_GIT_AUTHOR: Renovate Bot <renovate@lumpiasty.xyz>


@@ -5,7 +5,7 @@ metadata:
name: renovate
namespace: renovate
spec:
- schedule: "@hourly"
+ schedule: "@daily"
concurrencyPolicy: Forbid
jobTemplate:
spec:
@@ -15,8 +15,10 @@ spec:
- name: renovate
# Update this to the latest available and then enable Renovate on
# the manifest
- image: renovate/renovate:39.251.2-full
+ image: renovate/renovate:43.104.3-full
envFrom:
- secretRef:
- name: renovate-env
+ name: renovate-gitea-token
+ - configMapRef:
+ name: renovate-config
restartPolicy: Never

Some files were not shown because too many files have changed in this diff.