Your own AI: Running LLMs locally via Ollama - A simple guide for people who don't like to share (their data)
What: Session
When: 12:00 PM, Wednesday 25 Feb 2026 (25 minutes)
Theme:
Speaker:
If you’d rather not share your data with mysterious servers in faraway places, this session might be for you. We’ll start by clarifying what “open source” means and highlighting a few open-source LLMs. You’ll learn the key differences between locally run and cloud-based models, and why those differences matter for privacy and security. Then we’ll introduce Ollama, a simple tool that makes it easy to download small language models and run them locally. We’ll wrap up with a step-by-step demo of installing Ollama on Windows and running your first small, local model.
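If you’d like a taste of what the demo covers before the session, the sketch below shows one way to talk to a locally running model from Python. It assumes you’ve already installed Ollama from ollama.com, pulled a small model (for example, by running `ollama pull llama3.2` in a terminal), and that the Ollama server is listening on its default port, 11434. The model name is an illustration; pick whichever small model you’ve downloaded.

```python
# A minimal sketch of querying a locally running Ollama server.
# Assumes: Ollama is installed, a model has been pulled (e.g. llama3.2),
# and the server is on its default port, 11434.
import requests

def ask_local_model(prompt: str, model: str = "llama3.2") -> str:
    """Send one prompt to the local Ollama server and return its reply."""
    response = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": model,    # any model you've pulled locally
            "prompt": prompt,
            "stream": False,   # return one complete reply instead of chunks
        },
        timeout=120,           # local inference can be slow on modest hardware
    )
    response.raise_for_status()
    return response.json()["response"]

if __name__ == "__main__":
    # The prompt never leaves your machine: the request only goes to localhost.
    print(ask_local_model("In one sentence, why run an LLM locally?"))
```

The same local endpoint works from any language that can make an HTTP request, which is part of Ollama’s appeal: once the server is running, your prompts and the model’s answers stay on your own machine.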