social.tchncs.de is one of the many independent Mastodon servers you can use to participate in the fediverse.
A friendly server from Germany – which tends to attract techy people, but welcomes everybody. This is one of the oldest Mastodon instances.

Server stats: 3.8K active users

#ollama

10 posts · 10 participants · 0 posts today

Somehow I find myself using my local LLM as a search engine more than actual search engines. No cookie banners, no "please disable your ad blocker" popups, and the information from several pages comes already summarized.

Even though the server isn't very powerful and an answer takes a while, I'm still faster that way than digging through pointless forum threads.

Replied in thread

@Gina If I had to choose, I'd probably go with #Ollama (which has been mentioned several times already). It's licensed under the MIT license and the models are about as close to open source as you can get. When I play with LLMs, it's what I use. Locally run and with an API that could be used to integrate with other stuff. I also have #OpenWebUI to make things prettier. Both can run locally, though OpenWebUI can integrate with cloud LLMs too. Of course, tomorrow everything could change.
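
That Ollama API is plain HTTP on localhost, so wiring it into other tools takes only a few lines. A minimal sketch (assuming the default port 11434 and a model such as llama3 that has already been pulled; model name and prompt are placeholders):

import requests

# Ask the local Ollama server for a single, non-streaming completion.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",   # any model pulled locally
        "prompt": "Explain the fediverse in two sentences.",
        "stream": False,     # one JSON object instead of a token stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])

OpenWebUI talks to this same endpoint, so nothing extra is needed on the Ollama side to put a nicer front end on top.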

🌘 Running Nvidia and Ollama on NixOS WSL: keep an LLM running around the clock on your gaming PC
➤ Run large language models on your gaming PC with minimal hassle
yomaq.github.io/posts/nvidia-o
This article walks through how the author set up an Nvidia GPU and Ollama in a NixOS-on-WSL environment on a gaming PC so that LLM models stay available continuously. The author worked around VRAM locking, WSL shutting itself down automatically, and NixOS's initially limited Nvidia support, and shares the configuration steps in detail: keeping WSL running, installing and configuring NixOS, setting up the Nvidia Container Toolkit, configuring the Ollama Docker container, and integrating Tailscale to simplify networking.
+ This article is really useful. I've always wanted to try a local LLM, but the setup seemed like too much trouble. This approach looks much more doable!
+ NixOS looks powerful, but the learning curve is a bit steep. Still, to be able to run an LLM locally, I…
#NixOS #WSL #Nvidia #Ollama #LLM #Docker

yomaq · Nvidia on NixOS WSL - Ollama up 24/7 on your gaming PC: Convenient LLMs at Home
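
Once a setup like that is running, it is easy to check from another machine on the tailnet that the Ollama container is reachable and see which models it has. A small sketch (gaming-pc is a hypothetical Tailscale hostname; 11434 is Ollama's default port):

import requests

OLLAMA_URL = "http://gaming-pc:11434"  # hypothetical Tailscale machine name

# List the models available on the remote Ollama instance.
tags = requests.get(f"{OLLAMA_URL}/api/tags", timeout=10)
tags.raise_for_status()
for model in tags.json().get("models", []):
    print(model["name"])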

You can now use an Ollama model with Continue.Dev to do "agentic" code generation in VSCode. However, if you use LiteLLM to manage your model access, you won't be able to take advantage of this feature just yet: github.com/continuedev/continu

#selfhosted #ai #ollama (brainsteam.co.uk/notes/2025/04)

GitHub · Support of "Agents" for models with "openai" provider · Issue #5044 · continuedev/continue · by ibuziuk
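
For context, LiteLLM fronts an Ollama model behind an OpenAI-compatible interface, which is the provider the linked issue is about. A minimal sketch of that path (model name, prompt and endpoint are placeholders; this only illustrates the LiteLLM call, not Continue's agent mode):

from litellm import completion

# Route a chat request to a locally running Ollama server via LiteLLM.
response = completion(
    model="ollama/llama3",  # the "ollama/" prefix selects LiteLLM's Ollama backend
    messages=[{"role": "user", "content": "Write a docstring for a binary search function."}],
    api_base="http://localhost:11434",
)
print(response.choices[0].message.content)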

Good or bad? My laptop with a 12th-gen i7-12800H and an #nvidia A1000 GPU can sustain 3.8 GHz turbo on all cores, almost 1 GHz of sustained turbo above the 2.9 GHz that #intel promises. I am using Ollama and Gemma3:27b to beat on it. The GPU is a bit of a lap potato, of course, and hovers around 65°C, while the CPU rides at 96°C.

Tokens per second is about 25.

#ollama #AI #LLM
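
If you want an exact figure rather than a feel for it, Ollama reports token counts and timings with every response, so the decode speed can be computed directly. A small sketch (assumes the default local endpoint and that gemma3:27b is already pulled):

import requests

# Run one generation and compute decode speed from Ollama's timing fields.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "gemma3:27b", "prompt": "Summarize what CPU turbo boost does.", "stream": False},
    timeout=600,
).json()

# eval_count = generated tokens, eval_duration = generation time in nanoseconds.
tokens_per_second = resp["eval_count"] / (resp["eval_duration"] / 1e9)
print(f"{tokens_per_second:.1f} tokens/s")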

Hello clever Fediverse, I'm currently diving down a completely absurd #Rabbithole, and thanks to #LlmStudio and #Ollama my M1 MacBook has rediscovered its fan… Locally, #LLM models top out at about 8-12B parameters for me (32 GB RAM). Are there #Benchmarks somewhere that will please talk me out of the idea that an M4 with >48 GB RAM makes this drastically better? Or would something else entirely be smarter? Or a different hobby? It has to be mobile (reachable), because my life is too unsettled for a desktop. Recommendations welcome in the comments.
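
As a very rough back-of-envelope check (not a benchmark, and the 0.57 bytes/parameter figure is only an assumed rule of thumb for 4-bit quantization): the weights alone scale linearly with parameter count, which gives a feel for what more unified memory buys:

# Rough rule of thumb: 4-bit quantized weights take about 0.57 bytes per parameter.
def q4_weight_footprint_gb(params_billions: float) -> float:
    """Approximate in-memory size of the quantized weights alone."""
    return params_billions * 1e9 * 0.57 / 1e9

for size in (8, 12, 27, 32, 70):
    print(f"{size:>3}B model ~ {q4_weight_footprint_gb(size):.1f} GB of weights (plus KV cache and OS)")

So 48-64 GB mainly buys headroom for the roughly 27-70B class, with the caveats that macOS only lets the GPU use part of the unified memory and that bigger models also generate more slowly.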

Introducing MoonPiLlama

Adam Jenkins has made a YouTube video showing how to install #MoodleBox and Ollama on a Raspberry Pi 4. MoodleBox is a custom distribution of #Moodle built specifically for the Raspberry Pi, but the video also offers good general guidance on getting Ollama and Moodle to work together. It likewise includes gratuitous use of a yellow rubber duck.
(Yes, a Pi 4, not a Pi 5.)

youtube.com/watch?v=KqQfzhJJFP