Tag: AI

AI/ML

“You’re trying to frame the request as a documentary photograph”

LLMs contain built-in policies for protecting minors, animals etc. A monkey eating a sausage should be against policy. But the model can be fooled, and eventually it stops complaining and describes what we want. Tried: to generate funny/controversial pictures. The actual image generation takes place in Stable Diffusion, not in those conversational LLMs. However, once asked to generate something dubious or funny, they tend to reject such requests, hiding behind their policies. Refusals from nexusraven and granite3-dense First I asked for a Proboscis Monkey holding a can of beer and eating a sausage. The LLM model called nexusraven refused that request: nexusraven: I cannot fulfill
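A minimal sketch of how such a comparison can be scripted: the same prompt is sent to a couple of local Ollama models and the start of each reply is printed, so refusals are easy to spot. The endpoint assumes a default Ollama install on localhost:11434; the model names are the ones mentioned in the post.

```python
# Sketch: ask several local Ollama models for the same image-prompt description
# and print how each one responds (refusals usually show up in the first sentences).
import requests

PROMPT = "Describe a photo of a Proboscis Monkey holding a can of beer and eating a sausage."
MODELS = ["nexusraven", "granite3-dense"]  # models mentioned in the post

for model in MODELS:
    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": model,
            "messages": [{"role": "user", "content": PROMPT}],
            "stream": False,
        },
        timeout=300,
    )
    answer = resp.json()["message"]["content"]
    print(f"--- {model} ---")
    print(answer[:200])
```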

AI/ML

Code generation and artifacts preview with WebUI and codegemma:7b

Generate WebGL, Three.JS, HTML, CSS, JavaScript, no Python code, single page with rotating cube, ambient lighting. Load libraries from CDN. Let ambient lighting be such that cube edges are visible. Add directional lighting also pointing at the cube. Scene needs to be navigable using arrow keys. Ensure browser compatibility. With codegemma:7b you can generate source code. If asked properly, an artifacts feature will appear in the WebUI chat, interpreting your source code immediately, just after it is generated. This feature is useful for designers, developers and marketers who would like to speed up scaffolding and migrating from brainstorm into visible
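Outside of Open WebUI the same prompt can be sent to codegemma:7b over Ollama's HTTP API and the generated page saved to disk. A minimal sketch, assuming a default Ollama install on localhost:11434 and the prompt quoted above; the output file name is just an example.

```python
# Sketch: send the prompt from the post to codegemma:7b via Ollama's /api/generate
# endpoint and save the returned page so it can be opened in a browser.
import requests

prompt = (
    "Generate WebGL, Three.JS, HTML, CSS, JavaScript, no Python code, "
    "single page with rotating cube, ambient lighting. Load libraries from CDN. "
    "Let ambient lighting be such that cube edges are visible. "
    "Add directional lighting also pointing at the cube. "
    "Scene needs to be navigable using arrow keys. Ensure browser compatibility."
)

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "codegemma:7b", "prompt": prompt, "stream": False},
    timeout=600,
)

with open("rotating_cube.html", "w") as f:
    f.write(resp.json()["response"])  # Open WebUI would instead render this as an artifact
```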

AI/ML

Custom Gemma AI system prompt to create own chatbot experience

I want to create a custom chatbot experience. I want it to be based on Google’s Gemma AI Large Language Models. I find Gemma3, especially the 27b version, very capable at problem solving. It has been trained on data that I find interesting. I will use Open WebUI to create a custom “model hat” and provide the chatbot experience TLDR In order to create your own chatbot, only 3 steps are required: To create my own chatbot experience I can use the System Prompt feature, which is a core part of the model itself. Running on Ollama, Gemma3:27b is actually a 4-bit quantized version of the full 16-bit
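The same "model hat" idea can be tried without Open WebUI by attaching a system message to a chat request against Ollama. A minimal sketch, assuming Ollama on localhost:11434 with gemma3:27b pulled; the persona text and user question are only examples.

```python
# Sketch: a custom "model hat" built purely from a system prompt,
# sent to Gemma3 through Ollama's /api/chat endpoint.
import requests

SYSTEM_PROMPT = (
    "You are a patient problem-solving assistant. "
    "Answer step by step and keep explanations short."
)

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "gemma3:27b",
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": "Help me plan a small home lab for LLMs."},
        ],
        "stream": False,
    },
    timeout=600,
)
print(resp.json()["message"]["content"])
```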

AI/ML

Generate images with Stable Diffusion, Gemma and WebUI on NVIDIA GPUs

With Ollama paired with the Gemma3 model, Open WebUI with RAG and search capabilities, and finally Automatic1111 running Stable Diffusion, you can have a quite complete set of AI features at home for the price of 2 consumer grade GPUs and some home electricity. With 500 iterations and an image size of 512×256 it took around a minute to generate a response. I find it funny to be able to generate images with AI techniques. I tried Stable Diffusion in the past, but now with the help of Gemma and the integration with Automatic1111 in WebUI, it’s damn easy. Step by step Prerequisites You can find information
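For reference, the same Automatic1111 instance that WebUI talks to can also be called directly over its txt2img API (when started with --api). A minimal sketch, assuming Automatic1111 on 127.0.0.1:7860 and the 512×256 size mentioned above; the prompt and step count are just examples.

```python
# Sketch: ask a running Automatic1111 instance to render an image directly,
# bypassing Open WebUI; the result comes back base64-encoded.
import base64
import requests

payload = {
    "prompt": "proboscis monkey drinking beer, documentary photo",
    "steps": 20,
    "width": 512,
    "height": 256,
}

resp = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload, timeout=600)
image_b64 = resp.json()["images"][0]

with open("result.png", "wb") as f:
    f.write(base64.b64decode(image_b64))
```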

AI/ML

Run DeepSeek-R1:70b on CPU and RAM

Utilize CPU, RAM and GPU computational resources With Ollama you can use not only the GPU but also the CPU with regular RAM to run LLM models, like DeepSeek-R1:70b. Of course you need both a fast CPU and fast RAM, and plenty of it. My Lab setup contains 24 vCPU (2 x 6 cores * 2 threads) and from 128 to 384 GB of RAM. Once started, Ollama allocates 22.4GB in RAM (RES) and 119GB of virtual memory. It reaches 1200% CPU utilization, causing the system load to go up to 12. However, total CPU utilization is only 50%. It
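A minimal sketch of steering such a run from Python: Ollama's num_gpu and num_thread options keep the model in system RAM and pin the thread count. The endpoint assumes a default install; the values are examples matched to the 24 vCPU machine described above, not a tuned configuration.

```python
# Sketch: push a large model onto CPU/RAM by asking Ollama to offload zero layers
# to the GPU and to use as many threads as there are vCPUs.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "deepseek-r1:70b",
        "prompt": "Summarize why CPU-only inference is slower than GPU inference.",
        "stream": False,
        "options": {
            "num_gpu": 0,      # keep all layers in system RAM
            "num_thread": 24,  # match the number of vCPUs
        },
    },
    timeout=3600,
)
print(resp.json()["response"])
```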

AI/ML

Ollama with Open WebUI on 2 x RTX 3060 12 GB

Ollama with WebUI on 2 “powerful” GPUs feels like commercial GPTs online I thought that Exo would do the job and utilize both of my Lab servers. Unfortunately, it does not work on Linux/NVIDIA with my setup, even when following the official documentation. So I went back to Ollama and found it great. I have 2 x NVIDIA RTX 3060 with 12GB VRAM each, giving me 24GB in total, which can run Gemma3:27b or DeepSeek-r1:32b. Ollama can utilize both GPUs in my system, which can be seen in nvidia-smi. How to run Ollama in Docker with GPU acceleration you can read
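Besides nvidia-smi, the same check can be scripted. A minimal sketch using the nvidia-ml-py bindings, assuming they are installed (pip install nvidia-ml-py); it only lists the visible cards and their current VRAM usage while a model is loaded.

```python
# Sketch: confirm from Python that both RTX 3060 cards are visible and how much
# VRAM each one uses (same data that nvidia-smi shows).
from pynvml import (
    nvmlInit,
    nvmlDeviceGetCount,
    nvmlDeviceGetHandleByIndex,
    nvmlDeviceGetName,
    nvmlDeviceGetMemoryInfo,
)

nvmlInit()
for i in range(nvmlDeviceGetCount()):
    handle = nvmlDeviceGetHandleByIndex(i)
    name = nvmlDeviceGetName(handle)
    if isinstance(name, bytes):  # older bindings return bytes
        name = name.decode()
    mem = nvmlDeviceGetMemoryInfo(handle)
    print(f"GPU {i}: {name}, {mem.used / 2**20:.0f} MiB / {mem.total / 2**20:.0f} MiB used")
```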

AI/ML

Object detection and scene description: various libraries/frameworks tested lately

No, you can’t use a Tesla K20xm with 6GB VRAM for modern computation, as its Compute Capability is lower than the required 7.0. Here is a table of my findings about libraries/frameworks, the required hardware and their purpose. I started with DeepStack, where I was able to run an API server for object detection; Frigate has support for it. Later on, with TensorRT on an NVIDIA GPU I can run the Yolov7x-640 model, also for object detection; Frigate works well with it. With the Google Coral TPU USB module we can run SSD MobileNet or EfficientDet models with great power efficiency for a good price. Ollama with moondream
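A minimal sketch of the Compute Capability check itself, assuming PyTorch with CUDA support is installed; it reports each visible card and whether it meets the 7.0 threshold mentioned above.

```python
# Sketch: check whether a card meets the Compute Capability >= 7.0 requirement.
import torch

if not torch.cuda.is_available():
    print("No CUDA device visible")
else:
    for i in range(torch.cuda.device_count()):
        major, minor = torch.cuda.get_device_capability(i)
        ok = "OK" if (major, minor) >= (7, 0) else "too old for this workload"
        print(f"{torch.cuda.get_device_name(i)}: compute capability {major}.{minor} ({ok})")
```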

AI/ML

AI video surveillance with DeepStack, Python and ZoneMinder

For those using ZoneMinder and trying to figure out how to detect objects, there is the deepquestai/deepstack AI model with a built-in HTTP server. You can grab video frames by using the ZoneMinder API or UI API: you need to specify the address, monitor ID, user and password. You can also specify a single frame (mode=single) or motion (mode=jpeg). ZoneMinder internally uses the /usr/lib/zoneminder/cgi-bin/nph-zms binary to grab frames from a configured IP ONVIF RTSP camera. It is probably not the quickest option, but it is a convenient one. Using OpenCV in Python you could also access the RTSP stream and grab frames manually. However, for the sake of simplicity
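A minimal sketch of that pipeline: one frame is pulled through the nph-zms endpoint with the parameters listed above (mode=single, monitor ID, user, password) and posted to DeepStack's detection endpoint. The host names, port and credentials are placeholders and must match your own installation.

```python
# Sketch: grab a single frame from ZoneMinder and send it to a DeepStack
# detection server; prints the detected object labels and confidences.
import requests

ZM_FRAME_URL = (
    "http://zoneminder.local/zm/cgi-bin/nph-zms"
    "?mode=single&monitor=1&user=admin&pass=secret"
)
DEEPSTACK_URL = "http://deepstack.local:5000/v1/vision/detection"

frame = requests.get(ZM_FRAME_URL, timeout=30).content            # JPEG bytes
result = requests.post(DEEPSTACK_URL, files={"image": frame}, timeout=60).json()

for obj in result.get("predictions", []):
    print(obj["label"], round(obj["confidence"], 2))
```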

AI/ML

Run Bielik LLM from SpeakLeash using LM Studio on your local machine

Did you know that you can use the Polish LLM Bielik from SpeakLeash locally, on your private computer? The easiest way to do this is LM Studio (from lmstudio.ai). Why use a model locally? Just for fun. When we don’t have internet access. Because we don’t want to share our data and conversations etc… You can run it on macOS, Windows and Linux. It requires support for AVX2 CPU instructions, a large amount of RAM and, preferably, a dedicated and modern graphics card. Note: for example, on a Thinkpad t460p with i5 6300HQ with a dedicated 940MX 2GB VRAM card basically
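Once the model is loaded, LM Studio can also expose it through its OpenAI-compatible local server, so it is scriptable as well. A minimal sketch, assuming the server runs on the default port 1234; the model identifier is only an example and should be taken from the model you actually downloaded in LM Studio.

```python
# Sketch: talk to a locally loaded Bielik model through LM Studio's
# OpenAI-compatible server (the api_key value is a placeholder, it is not checked).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

completion = client.chat.completions.create(
    model="speakleash/bielik-11b-v2.3-instruct",  # example identifier, check LM Studio
    messages=[{"role": "user", "content": "Introduce yourself in one sentence."}],
)
print(completion.choices[0].message.content)
```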