Ollama provides a framework for running large language models locally, on a desktop machine or server. Cisco decided to research it because, in the words of Senior Incident Response Architect Dr. Giannis Tziakouris, Ollama has "gained popularity for its ease of use and local deployment capabilities." Using the Shodan search engine, Talos researchers found over 1,100 unsecured Ollama servers, around 20 percent of which are "actively hosting models susceptible to unauthorized access."
By default, the service binds to the localhost address at 127.0.0[.]1:11434, but a trivial configuration change, binding it to 0.0.0[.]0 or a public interface, exposes it to the public internet.
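The bind address in question is typically controlled through the OLLAMA_HOST environment variable on the host running Ollama. A minimal sketch of the misconfiguration described above (addresses un-defanged for illustration):

```shell
# Default behavior: Ollama listens only on loopback and is
# unreachable from other machines.
# OLLAMA_HOST=127.0.0.1:11434  (implicit default)

# The risky change the researchers describe: binding to all
# interfaces exposes the unauthenticated API to anything that
# can reach this machine, including the public internet if no
# firewall sits in front of it.
export OLLAMA_HOST=0.0.0.0:11434
ollama serve
```

Because the Ollama API ships without authentication, any such exposed endpoint accepts requests from arbitrary clients, which is what makes these servers discoverable and abusable via Shodan.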