Introduction 

What does it do? 

nr-llm is the shared AI foundation for TYPO3. It lets administrators configure LLM providers once in the backend — and every AI-powered extension on the site uses them automatically.

For extension developers, it eliminates the need to build provider integrations, manage API keys, or implement caching and streaming. Add AI capabilities to your extension with three lines of dependency injection.

For administrators, it provides a single backend module to manage all AI connections, encrypted API keys, and provider configurations. Switch from OpenAI to Anthropic without touching any extension code.

For agencies, it means consistent AI architecture across client projects, no vendor lock-in, and a local-first option via Ollama for data-sensitive environments.

The extension enables developers to:

  • Access multiple AI providers through a single, consistent API.
  • Switch providers transparently without code changes.
  • Leverage specialized services for common AI tasks (translation, vision, embeddings).
  • Cache responses to reduce API costs and improve performance.
  • Stream responses for real-time user experiences.
  • Store API keys securely with sodium encryption or nr-vault envelope encryption.
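The "three lines of dependency injection" mentioned above can be sketched as follows. This is a hedged illustration, not the documented API: the namespace `Netresearch\NrLlm\Service\LlmManager` and the response accessor `getContent()` are assumptions inferred from the `$llmManager->chat()` calls shown later on this page.

```php
<?php

declare(strict_types=1);

namespace Vendor\MyExtension\Service;

// Hypothetical import: the real namespace ships with nr-llm
// and may differ from this sketch.
use Netresearch\NrLlm\Service\LlmManager;

final class SummaryService
{
    // TYPO3 v13 constructor injection: the container wires the
    // shared LLM manager into any service that type-hints it.
    public function __construct(
        private readonly LlmManager $llmManager,
    ) {}

    public function summarize(string $text): string
    {
        $messages = [
            ['role' => 'system', 'content' => 'Summarize the following text in two sentences.'],
            ['role' => 'user', 'content' => $text],
        ];

        // Which provider answers is decided by the backend
        // configuration, not by this code.
        return $this->llmManager->chat($messages)->getContent();
    }
}
```

Because the service only type-hints the manager, swapping the configured provider in the backend requires no change to this class.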

Supported providers 

OpenAI
  Models: GPT-5.x series, o-series reasoning models
  Capabilities: Chat, completions, embeddings, vision, streaming, tools.
Anthropic Claude
  Models: Claude Opus 4.5, Claude Sonnet 4.5, Claude Haiku 4.5
  Capabilities: Chat, completions, vision, streaming, tools.
Google Gemini
  Models: Gemini 3 Pro, Gemini 3 Flash, Gemini 2.5 series
  Capabilities: Chat, completions, embeddings, vision, streaming, tools.
Ollama
  Models: Local models (Llama, Mistral, etc.)
  Capabilities: Chat, embeddings, streaming (local).
OpenRouter
  Models: Multi-provider access
  Capabilities: Chat, embeddings, vision, streaming, tools.
Mistral
  Models: Mistral models
  Capabilities: Chat, embeddings, streaming.
Groq
  Models: Fast inference models
  Capabilities: Chat, streaming (fast inference).
Azure OpenAI
  Models: Same as OpenAI
  Capabilities: Same as OpenAI.
Custom
  Models: OpenAI-compatible endpoints
  Capabilities: Varies by endpoint.

Key features 

AI-powered wizards 

Built-in wizards reduce manual setup to a minimum:

  • Setup wizard guides first-time configuration in five steps (provider, connection test, model fetch, configuration, test prompt).
  • Configuration wizard generates a complete LLM configuration from a plain-language description of your use case.
  • Task wizard creates reusable one-shot prompt templates the same way.
  • Model discovery fetches available models directly from the provider API.

See AI-powered wizards for details and screenshots.

Unified provider API 

All providers implement a common interface, allowing you to:

  • Switch between providers with a single configuration change.
  • Test with different models without modifying application code.
  • Implement provider fallbacks for increased reliability.
Example: Using the provider abstraction layer
// Use database configurations for consistent settings
$config = $configRepository->findByIdentifier('blog-summarizer');
$adapter = $adapterRegistry->createAdapterFromModel($config->getModel());
$response = $adapter->chatCompletion($messages, $config->toOptions());

// Or use inline provider selection
$response = $llmManager->chat($messages, ['provider' => 'openai']);
$response = $llmManager->chat($messages, ['provider' => 'claude']);
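The provider fallback pattern listed above is not shown in the snippet. A minimal sketch, reusing the inline provider selection from the example and assuming failed calls throw a runtime exception (the exception type is an assumption, not the documented one):

```php
// Hypothetical fallback: try the primary provider, then fall back
// to the next one if the call fails.
$providers = ['openai', 'claude'];
$response = null;

foreach ($providers as $provider) {
    try {
        $response = $llmManager->chat($messages, ['provider' => $provider]);
        break; // first successful provider wins
    } catch (\RuntimeException $e) {
        // Log the failure and try the next provider in the list.
        continue;
    }
}
```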

Specialized feature services 

High-level services for common AI tasks:

CompletionService
Text generation with format control (JSON, Markdown) and creativity presets.
EmbeddingService
Text-to-vector conversion with caching and similarity calculations.
VisionService
Image analysis with specialized prompts for alt-text, titles, descriptions.
TranslationService
Language translation with formality control, domain-specific terminology, and glossaries.
PromptTemplateService
Centralized prompt management with variable substitution and versioning.
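As an illustration, alt-text generation with the VisionService might look like the following. The method name `generateAltText()` and its option keys are assumptions for this sketch, not the documented signature:

```php
// Hypothetical usage sketch: generate accessibility-compliant
// alt-text for an image. Method and option names are assumptions.
$altText = $visionService->generateAltText(
    $fileReference,
    ['language' => 'en', 'maxLength' => 125],
);
```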

Streaming support 

Real-time response streaming for better user experience:

Example: Streaming chat responses
foreach ($llmManager->streamChat($messages) as $chunk) {
    echo $chunk;
    flush();
}

Tool/function calling 

Execute custom functions based on AI decisions:

Example: Tool/function calling
$response = $llmManager->chatWithTools($messages, $tools);
if ($response->hasToolCalls()) {
    // Process tool calls
}
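The `$tools` argument is not defined in the snippet above. Assuming nr-llm accepts the OpenAI-style function schema that most of its supported providers use (an assumption, not confirmed by this page), a single tool definition could look like:

```php
// Hypothetical tool definition in OpenAI-style JSON-schema form;
// the tool name and parameters are illustrative only.
$tools = [
    [
        'type' => 'function',
        'function' => [
            'name' => 'get_weather',
            'description' => 'Get the current weather for a city.',
            'parameters' => [
                'type' => 'object',
                'properties' => [
                    'city' => [
                        'type' => 'string',
                        'description' => 'City name, e.g. "Leipzig".',
                    ],
                ],
                'required' => ['city'],
            ],
        ],
    ],
];
```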

Intelligent caching 

  • Automatic response caching using TYPO3's caching framework.
  • Deterministic embedding caching (24-hour default TTL).
  • Configurable cache lifetimes per operation type.
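How a per-call cache lifetime is configured is not shown on this page. A hedged sketch, assuming cache behavior can be passed as call options (the option names `cache` and `cacheLifetime` are hypothetical):

```php
// Hypothetical per-call cache control; option names are assumptions.
$response = $llmManager->chat($messages, [
    'cache' => true,
    'cacheLifetime' => 3600, // seconds
]);
```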

Use cases 

Content generation 

  • Generate product descriptions.
  • Create meta descriptions and SEO content.
  • Draft blog posts and articles.
  • Summarize long-form content.

Translation 

  • Translate website content.
  • Maintain consistent terminology with glossaries.
  • Preserve formatting in technical documents.

Image processing 

  • Generate accessibility-compliant alt-text.
  • Create SEO-optimized image titles.
  • Analyze and categorize image content.

Search and discovery 

  • Semantic search using embeddings.
  • Content similarity detection.
  • Recommendation systems.

Chatbots and assistants 

  • Customer support chatbots.
  • FAQ answering systems.
  • Guided navigation assistants.

Requirements 

  • PHP: 8.2 or higher.
  • TYPO3: v13.4 or higher.
  • HTTP client: PSR-18 compatible (e.g., guzzlehttp/guzzle).

Provider requirements 

To use specific providers, you need:
  • Cloud providers (OpenAI, Anthropic Claude, Google Gemini, OpenRouter, Mistral, Groq, Azure OpenAI): an account and an API key for the respective service.
  • Ollama: a locally running Ollama instance.
  • Custom: a reachable OpenAI-compatible endpoint URL.

Credits 

This extension is developed and maintained by:

Netresearch DTT GmbH
https://www.netresearch.de

Built with the assistance of modern AI development tools and following TYPO3 coding standards and best practices.