<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
  xmlns:content="http://purl.org/rss/1.0/modules/content/"
  xmlns:atom="http://www.w3.org/2005/Atom"
  xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>TensorFoundry News</title>
    <link>https://tensorfoundry.io/news</link>
    <description>Latest news, updates and announcements from TensorFoundry</description>
    <language>en-AU</language>
    <copyright>Copyright © 2026 TensorFoundry</copyright>
    <lastBuildDate>Sat, 04 Apr 2026 12:09:51 GMT</lastBuildDate>
    <pubDate>Sat, 04 Apr 2026 12:09:51 GMT</pubDate>
    <docs>https://www.rssboard.org/rss-specification</docs>
    <generator>TensorFoundry RSS Generator</generator>
    <managingEditor>hello@tensorfoundry.io (TensorFoundry Team)</managingEditor>
    <webMaster>hello@tensorfoundry.io (TensorFoundry Team)</webMaster>
    <atom:link href="https://tensorfoundry.io/rss/news.xml" rel="self" type="application/rss+xml" />

    <item>
      <title>Introducing Kaizen - AI Coding Agent Early Access</title>
      <link>https://tensorfoundry.io/products/kaizen</link>
      <guid isPermaLink="true">https://tensorfoundry.io/products/kaizen</guid>
      <description>Kaizen is now available for early access — a terminal-first AI coding agent with persistent cross-session memory, multi-agent orchestration (the Scout, Cody, and Sage agents), and CAS-based undo so you can code with confidence.</description>
      <content:encoded><![CDATA[We're excited to announce that Kaizen, TensorFoundry's terminal-first AI coding agent, is now open for early access sign-ups.

Kaizen is built around the Helm orchestrator, which coordinates three specialised agents: Scout explores and maps your codebase, Cody handles code generation and editing, and Sage reviews changes for correctness and quality. They work in concert to understand your project, make targeted changes, and verify the results — all without leaving your terminal.

Persistent memory is a core part of Kaizen's design. Using a local SQLite store, Kaizen remembers context across sessions so you're never starting from scratch. Pair that with CAS-backed checkpoints and you get a full undo history for every change — roll back any edit at any point, with confidence.

Kaizen ships as a single binary with no cloud backend required. Your code stays on your machine.

Join the waitlist at tensorfoundry.io/products/kaizen/waitlist to secure early access ahead of the Q3 2026 release.]]></content:encoded>
      <pubDate>Sat, 04 Apr 2026 00:00:00 GMT</pubDate>
      <author>hello@tensorfoundry.io (TensorFoundry Team)</author>
      <category>Kaizen</category>
      <category>Product</category>
      <category>Early Access</category>
      <category>AI Coding Agent</category>
    </item>
    <item>
      <title>Olla v0.0.24: Agentic Workload Bugfixes</title>
      <link>https://tensorfoundry.io/products/olla</link>
      <guid isPermaLink="false">https://tensorfoundry.io/products/olla#v0.0.24</guid>
      <description>Olla v0.0.24 is a targeted bugfix release for agentic workloads, resolving issues with translator mode, agent tooling, Anthropic tool calls, and output configuration — plus improved logging throughout.</description>
      <content:encoded><![CDATA[Olla v0.0.24 ships a focused set of fixes aimed at users running agentic workloads. Translator mode now behaves correctly, an Anthropic tooling bug that caused incorrect tool-call behaviour has been resolved, a duplicate increment issue has been addressed, and missing output configuration in Anthropic requests is now handled properly. The Sherpa interface also gains a flush fix for more reliable streaming. Improved logging across the board makes diagnosing issues in complex multi-step pipelines considerably easier.]]></content:encoded>
      <pubDate>Sun, 22 Feb 2026 00:00:00 GMT</pubDate>
      <author>hello@tensorfoundry.io (TensorFoundry Team)</author>
      <category>Olla</category>
      <category>Release</category>
      <category>Bugfix</category>
      <category>Anthropic</category>
      <category>Agents</category>
    </item>
    <item>
      <title>Olla v0.0.23: Docker Model Runner, vLLM-MLX &amp; Anthropic Passthrough</title>
      <link>https://tensorfoundry.io/products/olla</link>
      <guid isPermaLink="false">https://tensorfoundry.io/products/olla#v0.0.23</guid>
      <description>Olla v0.0.23 is a major release adding two new backends — Docker Model Runner and vLLM-MLX — plus Anthropic Passthrough support, sensible lean-config defaults, proxy path bugfixes, and expanded integration tests.</description>
      <content:encoded><![CDATA[Olla v0.0.23 expands the backend ecosystem significantly. Docker Model Runner and vLLM-MLX are now fully supported, giving teams more flexibility in how they deploy local inference. Anthropic Passthrough is available on capable backends such as vLLM, letting you route Anthropic-style requests without a translation layer. Configuration gets friendlier too — sensible defaults mean lean config files work out of the box with less boilerplate. A long-standing proxy path bug affecting /olla/proxy routing is fixed, documentation has been refined, and additional integration tests provide greater confidence across the board. Security and dependency updates are also included.]]></content:encoded>
      <pubDate>Fri, 20 Feb 2026 00:00:00 GMT</pubDate>
      <author>hello@tensorfoundry.io (TensorFoundry Team)</author>
      <category>Olla</category>
      <category>Release</category>
      <category>Docker</category>
      <category>vLLM</category>
      <category>Anthropic</category>
      <category>Backends</category>
    </item>
    <item>
      <title>Olla v0.0.22: Model URL Resolution &amp; Profile Path Fixes</title>
      <link>https://tensorfoundry.io/products/olla</link>
      <guid isPermaLink="false">https://tensorfoundry.io/products/olla#v0.0.22</guid>
      <description>Olla v0.0.22 corrects model_url resolution from endpoint configuration, adds an alternative method for resolving profile paths, and ships maintenance fixes with dependency updates.</description>
      <content:encoded><![CDATA[Olla v0.0.22 is a maintenance release with two notable fixes. Model URL resolution from endpoint configuration now behaves correctly, preventing misrouted requests in certain setups. An alternative profile path resolution method has been added to handle environments where the primary resolution strategy falls short. Alongside these fixes, routine dependency updates keep the project current and secure.]]></content:encoded>
      <pubDate>Mon, 15 Dec 2025 00:00:00 GMT</pubDate>
      <author>hello@tensorfoundry.io (TensorFoundry Team)</author>
      <category>Olla</category>
      <category>Release</category>
      <category>Bugfix</category>
      <category>Maintenance</category>
    </item>
    <item>
      <title>Olla v0.0.21: Path Preservation &amp; OpenAI Routing Fix</title>
      <link>https://tensorfoundry.io/products/olla</link>
      <guid isPermaLink="false">https://tensorfoundry.io/products/olla#v0.0.21</guid>
      <description>Olla v0.0.21 introduces a new preserve_path setting for endpoints, making Docker Model Runner integration seamless, and fixes OpenAI-compatible routing. Refreshed OpenAI profiles are also included.</description>
      <content:encoded><![CDATA[Olla v0.0.21 adds the preserve_path setting to endpoint configuration, which passes the original request path through to the upstream backend unchanged. This is particularly useful when working with Docker Model Runner, where path fidelity is required for correct routing. A bug affecting OpenAI-compatible endpoints configured with type: openai-compatible is resolved, ensuring requests reach the right backend. OpenAI profiles have also been refreshed to reflect current best practices.]]></content:encoded>
      <pubDate>Thu, 06 Nov 2025 00:00:00 GMT</pubDate>
      <author>hello@tensorfoundry.io (TensorFoundry Team)</author>
      <category>Olla</category>
      <category>Release</category>
      <category>Docker</category>
      <category>OpenAI</category>
      <category>Routing</category>
    </item>
    <item>
      <title>Olla v0.0.20: LlamaCpp Integration &amp; Anthropic API Translation</title>
      <link>https://tensorfoundry.io/news/olla-v0-0-20-release</link>
      <guid isPermaLink="true">https://tensorfoundry.io/news/olla-v0-0-20-release</guid>
      <description>Olla v0.0.20 brings back native LlamaCpp integration and introduces experimental support for Anthropic message translation, making it easier to run local models and route Anthropic-style requests through your own infrastructure.</description>
      <content:encoded><![CDATA[Olla v0.0.20 reintroduces native LlamaCpp support, letting you run quantised models directly through Olla without an intermediary server. This release also adds experimental Anthropic message translation — send Anthropic-format requests to Olla and have them transparently forwarded to compatible local backends. Together these additions make Olla a more complete hub for local inference across a wider range of model formats and API styles.]]></content:encoded>
      <pubDate>Wed, 22 Oct 2025 00:00:00 GMT</pubDate>
      <author>hello@tensorfoundry.io (TensorFoundry Team)</author>
      <category>Olla</category>
      <category>Release</category>
      <category>LlamaCpp</category>
      <category>Anthropic</category>
    </item>
    <item>
      <title>TensorFoundry at NVIDIA AI Days Sydney 2025</title>
      <link>https://tensorfoundry.io/news/tensorfoundry-at-nvidia-ai-days-sydney-2025</link>
      <guid isPermaLink="true">https://tensorfoundry.io/news/tensorfoundry-at-nvidia-ai-days-sydney-2025</guid>
      <description>Join us at NVIDIA AI Days Sydney 2025 to see how NVIDIA&apos;s technology enables the latest AI breakthroughs, connect with peers and experts, and help create what&apos;s next.</description>
      <content:encoded><![CDATA[We're thrilled to be part of NVIDIA AI Days Sydney 2025! Learn how customers are using NVIDIA's technology in Australia, and come talk to our team about how TensorFoundry can help you leverage your NVIDIA investments to deliver exceptional local AI experiences for your team and customers.]]></content:encoded>
      <pubDate>Wed, 15 Oct 2025 00:00:00 GMT</pubDate>
      <author>hello@tensorfoundry.io (TensorFoundry Team)</author>
      <category>NVIDIA</category>
      <category>Event</category>
      <category>AI</category>
      <category>Sydney</category>
    </item>
    <item>
      <title>TensorFoundry Launches: Deploy LLMs on Your Own Infrastructure</title>
      <link>https://tensorfoundry.io/news/tensorfoundry-launch</link>
      <guid isPermaLink="true">https://tensorfoundry.io/news/tensorfoundry-launch</guid>
      <description>TensorFoundry officially launches its website and introduces its product suite — Olla, FoundryOS, and AgentOS — built around a simple mission: run large language models on your own infrastructure, without cloud lock-in.</description>
      <content:encoded><![CDATA[TensorFoundry officially launches its website and introduces its product suite — Olla, FoundryOS, and AgentOS — built around a simple mission: run large language models on your own infrastructure, without cloud lock-in.]]></content:encoded>
      <pubDate>Tue, 01 Jul 2025 00:00:00 GMT</pubDate>
      <author>hello@tensorfoundry.io (TensorFoundry Team)</author>
      <category>News</category>
      <category>Launch</category>
      <category>Website</category>
    </item>
  </channel>
</rss>