Tag: Ollama

AI/ML

Mattermost AI chatbot with image generation support from Automatic1111

How about AI chatbot integration in your Mattermost server? With the possibility to generate images using Stable Diffusion… So, here is my Indatify Mattermost server, which I have been playing around with for the last few nights. It is obvious that interacting with an LLM model and generating images is way more playful in Mattermost than using Open WebUI or another TinyChat solution. So here you have an example of such an integration. It is a regular Mattermost on-premise server. First, we need to configure Mattermost to be able to host AI chatbots. Configure Bot account: enable bot account creation, which is disabled by default. Of
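As a rough sketch of that first step (assuming mmctl is installed and authenticated against the server; the bot name is only an example), bot account creation can be enabled and a bot created from the command line:

  # Enable bot account creation (disabled by default)
  mmctl config set ServiceSettings.EnableBotAccountCreation true

  # Create a bot account for the AI integration (example name)
  mmctl bot create ai-bot --display-name "AI Chatbot"

The same setting is also available in the System Console under Integrations > Bot Accounts.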

AI/ML

“You’re trying to frame the request as a documentary photograph”

LLMs contain built-in policies for protecting minors, animals, etc. A monkey eating a sausage should be against policy. But they can be fooled, and finally the model stops complaining and describes what we want. I tried to generate funny/controversial pictures. The actual image generation takes place in Stable Diffusion and not in those conversational LLMs. However, once asked to generate something dubious or funny, they tend to reject such requests, hiding behind their policies. Refusals from nexusraven and granite3-dense: first I asked for a Proboscis Monkey holding a can of beer and eating a sausage. The LLM model called nexusraven refused that request: nexusraven: I cannot fulfill

AI/ML

Code generation and artifacts preview with WebUI and codegemma:7b

Generate WebGL, Three.JS, HTML, CSS, JavaScript, no Python code, single page with rotating cube, ambient lighting. Load libraries from CDN. Let ambient lighting be such that cube edges are visible. Add directional lighting also pointing at the cube. Scene needs to be navigable using arrow keys. Ensure browser compatibility. With codegemma:7b you can generate source code. If asked properly, an artifacts feature will appear in the WebUI chat, interpreting your source code immediately after it is generated. This feature is useful for designers, developers and marketers who would like to speed up scaffolding and migrating from a brainstorm into visible
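As a minimal sketch (assuming Ollama is already running and reachable by WebUI), the model only needs to be pulled before it shows up in the WebUI model list:

  # Pull the code generation model into the local Ollama instance
  ollama pull codegemma:7b

  # Optionally try it from the terminal before switching to WebUI chat
  ollama run codegemma:7b "Single HTML page with a rotating cube in Three.js, libraries from CDN"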

AI/ML

Ollama, WebUI, Automatic1111 – your own, personal, local AI from scratch

My local toolbox was empty, now it’s full. Lately I have been writing about Ollama, WebUI and Stable Diffusion on top of Automatic1111. I found myself struggling a little bit to keep up with all that information about how to run it in specific environments. So here you have an extract of the step-by-step installation. Starting with the NVIDIA driver and some basic requirements: Next we go for Docker. Then Ollama, but with binaries instead of a Docker container. It will be much easier, and does not require installing Docker extensions for GPU acceleration support: If running Ollama on a different server, then need
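A condensed sketch of those steps, assuming the official install script and Docker images (the host name is a placeholder):

  # Install Ollama from the official install script (binaries, no Docker)
  curl -fsSL https://ollama.com/install.sh | sh

  # If WebUI runs on another machine, make Ollama listen on all interfaces
  OLLAMA_HOST=0.0.0.0 ollama serve

  # Run Open WebUI in Docker and point it at the (possibly remote) Ollama
  docker run -d -p 3000:8080 \
    -e OLLAMA_BASE_URL=http://<ollama-host>:11434 \
    -v open-webui:/app/backend/data \
    --name open-webui ghcr.io/open-webui/open-webui:main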

AI/ML

Single vs multiple GPU power load

A slight utilization drop when dealing with a multi-GPU setup. TLDR: Power usage and GPU utilization vary between single-GPU models and multi-GPU models. Deal with it. My latest finding is that single-GPU load in Ollama/Gemma or Automatic1111/Stable Diffusion is higher than multi-GPU load with Ollama when the model does not fit into one GPU’s memory. Take a look. GPU utilization of Stable Diffusion is at 100% with 90–100% fan speed and temperature over 80 degrees C. Compare this to load spread across two GPUs. You can clearly see that GPU utilization is much lower, as well
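For reference, the utilization, power, temperature and fan figures mentioned above can be watched live with a standard nvidia-smi query (nothing specific to this setup):

  # Refresh every second, one row per GPU
  nvidia-smi --query-gpu=index,name,utilization.gpu,power.draw,temperature.gpu,fan.speed \
    --format=csv -l 1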

AI/ML

Generate images with Stable Diffusion, Gemma and WebUI on NVIDIA GPUs

With Ollama paired with a Gemma3 model, Open WebUI with RAG and search capabilities, and finally Automatic1111 running Stable Diffusion, you can have a quite complete set of AI features at home for the price of 2 consumer-grade GPUs and some home electricity. With 500 iterations and an image size of 512×256 it took around a minute to generate a response. I find it funny to be able to generate images with AI techniques. I tried Stable Diffusion in the past, but now with the help of Gemma and integration with Automatic1111 on WebUI, it’s damn easy. Step by step Prerequisites You can find information
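A minimal sketch of the Automatic1111 side (assuming the stock webui.sh launcher): it has to be started with its API enabled so that Open WebUI can call it for image generation, and its base URL (typically http://<host>:7860) is then entered in WebUI's image generation settings.

  # Start Automatic1111's Stable Diffusion web UI with API access,
  # listening on all interfaces so Open WebUI can reach it
  ./webui.sh --listen --api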

AI/ML

Run DeepSeek-R1:70b on CPU and RAM

Utilize CPU, RAM and GPU computational resources together. With Ollama you can use not only the GPU but also the CPU with regular RAM to run LLM models, like DeepSeek-R1:70b. Of course you need both a fast CPU and fast RAM, and plenty of it. My Lab setup contains 24 vCPUs (2 x 6 cores * 2 threads) and from 128 to 384 GB of RAM. Once started, Ollama allocates 22.4GB in RAM (RES) and 119GB of virtual memory. It reaches 1200% CPU utilization, causing system load to go up to 12. However, total CPU utilization is only 50%. It
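A rough sketch of forcing a CPU/RAM-only run; hiding the GPUs via CUDA_VISIBLE_DEVICES is an assumption here, but it keeps the whole model in system memory:

  # Hide the GPUs from Ollama so the model is loaded into RAM only
  CUDA_VISIBLE_DEVICES="" ollama serve &

  # Pull and run the 70b model (needs tens of GB of free RAM)
  ollama run deepseek-r1:70b

  # Watch RES/VIRT memory and CPU utilization of the server process
  top -p "$(pgrep -f 'ollama serve' | head -n 1)"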

AI/ML

Ollama with Open WebUI on 2 x RTX 3060 12 GB

Ollama with WebUI on 2 “powerful” GPUs feels like the commercial GPTs online. I thought that Exo would do the job and utilize both of my Lab servers. Unfortunately, it does not work on Linux/NVIDIA with my setup when following the official documentation. So I went back to Ollama and found it great. I have 2 x NVIDIA RTX 3060 with 12GB VRAM each, giving me 24GB in total, which can run Gemma3:27b or DeepSeek-r1:32b. Ollama can utilize both GPUs in my system, which can be seen in nvidia-smi. How to run Ollama in Docker with GPU acceleration you can read
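A minimal sketch of that setup, assuming the NVIDIA Container Toolkit is already installed (the model is just the one mentioned above):

  # Run Ollama in Docker with access to all NVIDIA GPUs
  docker run -d --gpus=all -v ollama:/root/.ollama \
    -p 11434:11434 --name ollama ollama/ollama

  # Load a model that does not fit into a single 12GB card
  docker exec -it ollama ollama run gemma3:27b

  # Both GPUs should now show memory allocated to the Ollama process
  nvidia-smi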

AI/ML

NVIDIA CC 7.0+: how to run Ollama/moondream:1.8b

Well, in one of the previous articles I described how to invoke Ollama/moondream:1.8b using cURL, however I forgot to explain how to even run it in a Docker container. So here you go: you can run a particular model in the background (-d) or in the foreground (without the -d parameter). You can also define parallelism and the maximum queue size on the Ollama server: One important note regarding the stability of the Ollama server: once it runs for more than a few hours there might be an issue with the GPU driver which requires a restart, so Ollama needs to be monitored for such a scenario. Moreover, after minutes of idle time it
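A hedged sketch of such a run, with example values for the parallelism and queue settings (OLLAMA_NUM_PARALLEL and OLLAMA_MAX_QUEUE; the numbers are illustrative, not recommendations, and -d here is Docker's detached mode):

  # Ollama in Docker with GPU support, parallel request handling and a bounded queue
  docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 \
    -e OLLAMA_NUM_PARALLEL=4 \
    -e OLLAMA_MAX_QUEUE=128 \
    --name ollama ollama/ollama

  # Run the model interactively in the foreground...
  docker exec -it ollama ollama run moondream:1.8b

  # ...or detached in the background (-d)
  docker exec -d ollama ollama run moondream:1.8b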