TYPO3 LLM extension 

Extension key

nr_llm

Package name

netresearch/nr-llm

Version

0.2

Language

en

Author

Netresearch DTT GmbH

License

This document is published under the GPL-2.0-or-later license.

Rendered

Sun, 01 Mar 2026 08:41:15 +0000


A unified Large Language Model (LLM) provider abstraction layer for TYPO3 v13.4+.

This extension provides a standardized interface to interact with multiple AI providers (OpenAI, Anthropic Claude, Google Gemini) through a single, consistent API. It includes specialized services for common AI tasks like text completion, translation, embeddings, and image analysis.


🚀 Quick start 

Get started quickly with installation and basic usage examples.

🔧 Configuration 

Configure API keys, providers, and extension settings.

👨‍💻 Developer guide 

Technical documentation for developers integrating LLM capabilities.

🏗️ Architecture 

Three-tier configuration architecture and service design.


Table of contents

Introduction 

What does it do? 

The TYPO3 LLM extension provides a unified abstraction layer for integrating Large Language Models (LLMs) into TYPO3 applications. It enables developers to:

  • Access multiple AI providers through a single, consistent API.
  • Switch providers transparently without code changes.
  • Leverage specialized services for common AI tasks.
  • Cache responses to reduce API costs and improve performance.
  • Stream responses for real-time user experiences.

Supported providers 

Provider Models Capabilities
OpenAI GPT-5.x series, o-series reasoning models Chat, completions, embeddings, vision, streaming, tools.
Anthropic Claude Claude Opus 4.5, Claude Sonnet 4.5, Claude Haiku 4.5 Chat, completions, vision, streaming, tools.
Google Gemini Gemini 3 Pro, Gemini 3 Flash, Gemini 2.5 series Chat, completions, embeddings, vision, streaming, tools.
Ollama Local models (Llama, Mistral, etc.) Chat, embeddings, streaming (local).
OpenRouter Multi-provider access Chat, vision, streaming, tools.
Mistral Mistral models Chat, embeddings, streaming.
Groq Fast inference models Chat, streaming (fast inference).
Azure OpenAI Same as OpenAI Same as OpenAI.
Custom OpenAI-compatible endpoints Varies by endpoint.

Key features 

Unified provider API 

All providers implement a common interface, allowing you to:

  • Switch between providers with a single configuration change.
  • Test with different models without modifying application code.
  • Implement provider fallbacks for increased reliability.
Example: Using the provider abstraction layer
// Use database configurations for consistent settings
$config = $configRepository->findByIdentifier('blog-summarizer');
$adapter = $adapterRegistry->createAdapterFromModel($config->getModel());
$response = $adapter->chatCompletion($messages, $config->toOptions());

// Or use inline provider selection
$response = $llmManager->chat($messages, ['provider' => 'openai']);
$response = $llmManager->chat($messages, ['provider' => 'claude']);
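The provider fallback mentioned above can be sketched with the documented `chat()` call. The provider identifiers and the catch-all exception handling are assumptions for illustration; adjust them to the providers configured in your installation.

```php
// Sketch: try providers in order until one succeeds.
// Provider identifiers and error handling are illustrative assumptions.
$providers = ['openai', 'claude', 'gemini'];
$response = null;

foreach ($providers as $provider) {
    try {
        $response = $llmManager->chat($messages, ['provider' => $provider]);
        break; // First successful provider wins
    } catch (\Throwable $e) {
        // Log the failure and fall through to the next provider
        continue;
    }
}

if ($response === null) {
    throw new \RuntimeException('All configured LLM providers failed');
}
```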

Specialized feature services 

High-level services for common AI tasks:

CompletionService
Text generation with format control (JSON, Markdown) and creativity presets.
EmbeddingService
Text-to-vector conversion with caching and similarity calculations.
VisionService
Image analysis with specialized prompts for alt-text, titles, descriptions.
TranslationService
Language translation with formality control, domain-specific terminology, and glossaries.
PromptTemplateService
Centralized prompt management with variable substitution and versioning.
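The feature services are obtained via dependency injection like any other service. The snippet below sketches a call into `TranslationService`; the method name `translate()` and its option keys are hypothetical, shown only to illustrate how the service layer is meant to be consumed.

```php
// Hypothetical usage sketch -- translate() and its option keys are
// assumptions for illustration, not the documented API surface.
$translated = $this->translationService->translate(
    $text,
    'de',                        // target language
    ['formality' => 'formal'],   // assumed option key
);
```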

Streaming support 

Real-time response streaming for better user experience:

Example: Streaming chat responses
foreach ($llmManager->streamChat($messages) as $chunk) {
    echo $chunk;
    flush();
}

Tool/function calling 

Execute custom functions based on AI decisions:

Example: Tool/function calling
$response = $llmManager->chatWithTools($messages, $tools);
if ($response->hasToolCalls()) {
    // Process tool calls
}

Intelligent caching 

  • Automatic response caching using TYPO3's caching framework.
  • Deterministic embedding caching (24-hour default TTL).
  • Configurable cache lifetimes per operation type.

Use cases 

Content generation 

  • Generate product descriptions.
  • Create meta descriptions and SEO content.
  • Draft blog posts and articles.
  • Summarize long-form content.

Translation 

  • Translate website content.
  • Maintain consistent terminology with glossaries.
  • Preserve formatting in technical documents.

Image processing 

  • Generate accessibility-compliant alt-text.
  • Create SEO-optimized image titles.
  • Analyze and categorize image content.

Search and discovery 

  • Semantic search using embeddings.
  • Content similarity detection.
  • Recommendation systems.
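The semantic search use case combines two documented APIs: `LlmServiceManager::embed()` and `EmbeddingResponse::cosineSimilarity()`. The ranking loop around them is a sketch; in practice you would store document vectors rather than re-embedding on every query.

```php
// Sketch: rank documents by semantic similarity to a query using the
// documented embed() call and cosineSimilarity() helper. The loop itself
// is illustrative -- persist document vectors for real workloads.
use Netresearch\NrLlm\Domain\Model\EmbeddingResponse;

$queryVector = $llmManager->embed($query)->getVector();

$scores = [];
foreach ($documents as $uid => $text) {
    $docVector = $llmManager->embed($text)->getVector(); // cached by the extension
    $scores[$uid] = EmbeddingResponse::cosineSimilarity($queryVector, $docVector);
}

arsort($scores); // Highest similarity first
$bestMatch = array_key_first($scores);
```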

Chatbots and assistants 

  • Customer support chatbots.
  • FAQ answering systems.
  • Guided navigation assistants.

Requirements 

  • PHP: 8.2 or higher.
  • TYPO3: v13.4 or higher.
  • HTTP client: PSR-18 compatible (e.g., guzzlehttp/guzzle).

Provider requirements 

To use specific providers, you need:

  • An account and API key for each cloud provider you enable (OpenAI, Anthropic, Google, Mistral, Groq, OpenRouter, Azure OpenAI).
  • A running Ollama instance for local models; no API key is required.

Credits 

This extension is developed and maintained by:

Netresearch DTT GmbH
https://www.netresearch.de

Built with the assistance of modern AI development tools and following TYPO3 coding standards and best practices.

Installation 

Quick start 

The recommended way to install this extension is via Composer:

Install via Composer
composer require netresearch/nr-llm

After installation:

  1. Activate the extension in Admin Tools > Extension Manager.
  2. Configure providers and API keys in Admin Tools > LLM > Providers.
  3. Define available models in Admin Tools > LLM > Models.
  4. Create configurations in Admin Tools > LLM > Configurations.
  5. Clear caches.

Composer installation 

Requirements 

Ensure your system meets these requirements:

  • PHP 8.2 or higher.
  • TYPO3 v13.4 or higher.
  • Composer 2.x.
  • netresearch/nr-vault ^0.4.0 (required for API key encryption; installed automatically via Composer).

Installation steps 

  1. Add the package

    Install via Composer
    composer require netresearch/nr-llm
  2. Activate the extension

    Navigate to Admin Tools > Extension Manager and activate EXT:nr_llm.

  3. Configure API keys

    See Configuration for detailed setup instructions.

  4. Clear caches

    Flush all caches
    vendor/bin/typo3 cache:flush

Manual installation 

If you cannot use Composer:

  1. Download the extension from the TYPO3 Extension Repository (TER).
  2. Extract to typo3conf/ext/nr_llm.
  3. Activate in Admin Tools > Extension Manager.
  4. Configure API keys and settings.

Database setup 

The extension creates the following database tables automatically:

Table Purpose
tx_nrllm_provider Stores API provider connections with encrypted credentials.
tx_nrllm_model Stores available LLM models with capabilities and pricing.
tx_nrllm_configuration Stores use-case-specific configurations with prompts and parameters.
tx_nrllm_task Stores one-shot prompt tasks for common operations.
tx_nrllm_prompttemplate Stores reusable prompt templates with versioning and performance tracking.
tx_nrllm_service_usage Tracks specialized service usage (translation, speech, image).

Run the database compare tool after installation:

Set up extension database tables
vendor/bin/typo3 extension:setup nr_llm

Cache configuration 

The extension uses TYPO3's caching framework. Default configuration is automatically set up, but you can customize it:

config/system/additional.php
$GLOBALS['TYPO3_CONF_VARS']['SYS']['caching']['cacheConfigurations']['nrllm_responses'] = [
    'frontend' => \TYPO3\CMS\Core\Cache\Frontend\VariableFrontend::class,
    'backend' => \TYPO3\CMS\Core\Cache\Backend\Typo3DatabaseBackend::class,
    'options' => [
        'defaultLifetime' => 3600,
    ],
    'groups' => ['nrllm'],
];

Upgrading 

From previous versions 

  1. Backup your database before upgrading.
  2. Run Composer update:

    Update the extension
    composer update netresearch/nr-llm
  3. Run database migrations:

    Update database schema
    vendor/bin/typo3 database:updateschema
  4. Clear all caches:

    Flush all caches
    vendor/bin/typo3 cache:flush

Breaking changes 

Check the Changelog for breaking changes between versions.

Uninstallation 

To remove the extension:

  1. Deactivate in Admin Tools > Extension Manager.
  2. Remove via Composer:

    Remove the extension
    composer remove netresearch/nr-llm
  3. Clean up database tables if desired:

    Drop extension database tables
    DROP TABLE IF EXISTS tx_nrllm_provider;
    DROP TABLE IF EXISTS tx_nrllm_model;
    DROP TABLE IF EXISTS tx_nrllm_configuration;
    DROP TABLE IF EXISTS tx_nrllm_configuration_begroups_mm;
    DROP TABLE IF EXISTS tx_nrllm_task;
    DROP TABLE IF EXISTS tx_nrllm_prompttemplate;
    DROP TABLE IF EXISTS tx_nrllm_service_usage;
  4. Remove any TypoScript includes referencing the extension.

Configuration 

The extension uses a database-based configuration architecture with three levels: Providers, Models, and Configurations. All management is done through the TYPO3 backend module.

Backend module 

Access the LLM management module at Admin Tools > LLM.

The backend module provides four sections:

Dashboard
Overview of registered providers, models, and configurations with status indicators.
Providers
Manage API connections with encrypted credentials. Test connections directly from the interface.
Models
Define available models with their capabilities and pricing. Fetch models from provider APIs.
Configurations
Create use-case-specific configurations with prompts and parameters.

Provider configuration 

Providers represent API connections with credentials. Create providers in Admin Tools > LLM > Providers.

Required fields 

identifier

identifier
Type
string
Required

true

Unique slug for programmatic access (e.g., openai-prod, ollama-local).

name

name
Type
string
Required

true

Display name shown in the backend.

adapter_type

adapter_type
Type
string
Required

true

The protocol to use. Available options:

  • openai - OpenAI API.
  • anthropic - Anthropic Claude API.
  • gemini - Google Gemini API.
  • ollama - Local Ollama instance.
  • openrouter - OpenRouter multi-model API.
  • mistral - Mistral AI API.
  • groq - Groq inference API.
  • azure_openai - Azure OpenAI Service.
  • custom - Custom OpenAI-compatible endpoint.

api_key

api_key
Type
string
Required

true

API key for authentication. Encrypted at rest using sodium_crypto_secretbox. Not required for local providers like Ollama.

Optional fields 

endpoint_url

endpoint_url
Type
string
Default
(adapter default)

Custom API endpoint. Leave empty to use the adapter's default URL.

organization_id

organization_id
Type
string
Default
(empty)

Organization ID for providers that support it (OpenAI, Azure).

timeout

timeout
Type
integer
Default
30

Request timeout in seconds.

max_retries

max_retries
Type
integer
Default
3

Number of retry attempts on failure.

options

options
Type
JSON
Default
{}

JSON object with additional adapter-specific options.

Testing provider connections 

Use the Test Connection button to verify provider configuration. The test makes an actual HTTP request to the provider's API and returns:

  • Connection status (success/failure).
  • Available models (if supported by the provider).
  • Error details (on failure).

Model configuration 

Models represent specific LLM models available through a provider. Create models in Admin Tools > LLM > Models.

Required fields 

identifier (model)

identifier (model)
Type
string
Required

true

Unique slug (e.g., gpt-5, claude-sonnet).

name (model)

name (model)
Type
string
Required

true

Display name (e.g., GPT-5 (128K)).

provider

provider
Type
reference
Required

true

Reference to the parent provider.

model_id

model_id
Type
string
Required

true

The API model identifier. Examples vary by provider:

  • OpenAI: gpt-5, gpt-5.2-instant, o4-mini.
  • Anthropic: claude-opus-4-5-20251101, claude-sonnet-4-5-20251101.
  • Google: gemini-3-pro-preview, gemini-3-flash-preview.

Optional fields 

context_length

context_length
Type
integer
Default
(provider default)

Maximum context window in tokens (e.g., 128000 for GPT-5).

max_output_tokens

max_output_tokens
Type
integer
Default
(model default)

Maximum output tokens (e.g., 16384).

capabilities

capabilities
Type
string (CSV)
Default
chat

Comma-separated list of supported features:

  • chat - Chat completion.
  • completion - Text completion.
  • embeddings - Text-to-vector.
  • vision - Image analysis.
  • streaming - Real-time streaming.
  • tools - Function/tool calling.

cost_input

cost_input
Type
integer
Default
0

Cost per 1M input tokens in cents (for cost tracking).

cost_output

cost_output
Type
integer
Default
0

Cost per 1M output tokens in cents.

is_default

is_default
Type
boolean
Default
false

Mark as default model for this provider.

Fetching models from providers 

Use the Fetch Models action to automatically retrieve available models from the provider's API. This populates the model list with the provider's current offerings.

LLM configuration 

Configurations define specific use cases with model selection and parameters. Create configurations in Admin Tools > LLM > Configurations.

Required fields 

identifier (config)

identifier (config)
Type
string
Required

true

Unique slug for programmatic access (e.g., blog-summarizer).

name (config)

name (config)
Type
string
Required

true

Display name (e.g., Blog Post Summarizer).

model

model
Type
reference
Required

true

Reference to the model to use.

system_prompt

system_prompt
Type
text
Required

true

System message that sets the AI's behavior and context.

Optional fields 

temperature

temperature
Type
float
Default
0.7

Creativity level from 0.0 (deterministic) to 2.0 (creative).

max_tokens (config)

max_tokens (config)
Type
integer
Default
(model default)

Maximum response length in tokens.

top_p

top_p
Type
float
Default
1.0

Nucleus sampling parameter (0.0 - 1.0).

frequency_penalty

frequency_penalty
Type
float
Default
0.0

Reduces word repetition (-2.0 to 2.0).

presence_penalty

presence_penalty
Type
float
Default
0.0

Encourages topic diversity (-2.0 to 2.0).

use_case_type

use_case_type
Type
string
Default
chat

The type of task:

  • chat - Conversational interactions.
  • completion - Text completion.
  • embedding - Vector generation.
  • translation - Language translation.

Using configurations 

Retrieve configurations programmatically:

Example: Using configurations in a controller
use Netresearch\NrLlm\Domain\Repository\LlmConfigurationRepository;
use Netresearch\NrLlm\Provider\ProviderAdapterRegistry;

class MyController
{
    public function __construct(
        private readonly LlmConfigurationRepository $configRepository,
        private readonly ProviderAdapterRegistry $adapterRegistry,
    ) {}

    public function processAction(): void
    {
        // Get configuration by identifier
        $config = $this->configRepository->findByIdentifier('blog-summarizer');

        // Get the model and provider
        $model = $config->getModel();
        $provider = $model->getProvider();

        // Create adapter and make requests
        $adapter = $this->adapterRegistry->createAdapterFromModel($model);
        $response = $adapter->chatCompletion($messages, $config->toOptions());
    }
}

TypoScript settings 

Runtime settings can be configured via TypoScript:

Constants 

Configuration/TypoScript/constants.typoscript
plugin.tx_nrllm {
    settings {
        # Default LLM provider (openai, claude, gemini)
        defaultProvider = openai

        # Enable/disable response caching
        enableCaching = 1

        # Cache lifetime in seconds
        cacheLifetime = 3600

        # Per-provider settings
        providers {
            openai {
                enabled = 1
                defaultModel = gpt-5
                temperature = 0.7
                maxTokens = 4096
            }

            claude {
                enabled = 1
                defaultModel = claude-sonnet-4-5-20251101
                temperature = 0.7
                maxTokens = 4096
            }

            gemini {
                enabled = 1
                defaultModel = gemini-3-flash-preview
                temperature = 0.7
                maxTokens = 4096
            }
        }
    }
}

Environment variables 

For deployment flexibility, use environment variables:

.env
# TYPO3 encryption key (used for API key encryption)
TYPO3_CONF_VARS__SYS__encryptionKey=your-secure-encryption-key

# Optional: Override provider settings via environment
TYPO3_NR_LLM_DEFAULT_TIMEOUT=60

Security 

API key protection 

  1. Encrypted storage: API keys are encrypted using sodium_crypto_secretbox.
  2. Database security: Ensure database backups are encrypted.
  3. Backend access: Restrict backend module access to authorized users.
  4. Key rotation: Changing the TYPO3 encryptionKey requires re-encryption.

Input sanitization 

Always sanitize user input before sending to LLM providers:

Example: Sanitizing user input
// Basic sanitization before sending user input to a provider. This guards
// against markup injection; prompt injection needs separate handling.
$sanitizedInput = trim(strip_tags($userInput));
$response = $adapter->chatCompletion([
    ['role' => 'user', 'content' => $sanitizedInput]
]);

Output handling 

Treat LLM responses as untrusted content:

Example: Escaping output
$response = $adapter->chatCompletion($messages);
$safeOutput = htmlspecialchars($response->content, ENT_QUOTES, 'UTF-8');

Logging 

Enable detailed logging for debugging:

config/system/additional.php
$GLOBALS['TYPO3_CONF_VARS']['LOG']['Netresearch']['NrLlm'] = [
    'writerConfiguration' => [
        \Psr\Log\LogLevel::DEBUG => [
            \TYPO3\CMS\Core\Log\Writer\FileWriter::class => [
                'logFileInfix' => 'nr_llm',
            ],
        ],
    ],
];

Log file location: var/log/typo3_nr_llm_*.log

Caching 

The extension uses TYPO3's caching framework:

  • Cache identifier: nrllm_responses.
  • Default TTL: 3600 seconds (1 hour).
  • Embeddings TTL: 86400 seconds (24 hours).

Clear cache via CLI:

Clear extension caches
vendor/bin/typo3 cache:flush --group=nrllm

Architecture 

This section describes the architectural design of the TYPO3 LLM extension.

Three-tier configuration architecture 

The extension uses a three-level hierarchical architecture separating concerns:

┌─────────────────────────────────────────────────────────────────────────┐
│ CONFIGURATION (Use-Case Specific)                                        │
│ "blog-summarizer", "product-description", "support-translator"          │
│                                                                          │
│ Fields: system_prompt, temperature, max_tokens, use_case_type           │
│ References: model_uid → Model                                            │
└──────────────────────────────────┬──────────────────────────────────────┘
                                   │ N:1
┌──────────────────────────────────▼──────────────────────────────────────┐
│ MODEL (Available Models)                                                 │
│ "gpt-5", "claude-sonnet-4-5", "llama-70b", "text-embedding-3-large"     │
│                                                                          │
│ Fields: model_id, context_length, capabilities, pricing                 │
│ References: provider_uid → Provider                                      │
└──────────────────────────────────┬──────────────────────────────────────┘
                                   │ N:1
┌──────────────────────────────────▼──────────────────────────────────────┐
│ PROVIDER (API Connections)                                               │
│ "openai-prod", "openai-dev", "local-ollama", "azure-openai-eu"          │
│                                                                          │
│ Fields: endpoint_url, api_key (encrypted), adapter_type, timeout        │
└─────────────────────────────────────────────────────────────────────────┘

Benefits 

  • Multiple API keys per provider type: Separate production and development accounts.
  • Custom endpoints: Azure OpenAI, Ollama, vLLM, local models.
  • Reusable model definitions: Centralized capabilities and pricing.
  • Clear separation of concerns: Connection vs capability vs use-case.

Provider layer 

Represents a specific API connection with credentials.

Database table: tx_nrllm_provider

Field Type Description
identifier string Unique slug (e.g., openai-prod, ollama-local)
name string Display name (e.g., OpenAI Production)
adapter_type string Protocol: openai, anthropic, gemini, ollama, etc.
endpoint_url string Custom endpoint (empty = default)
api_key string Encrypted API key (using sodium_crypto_secretbox)
organization_id string Optional organization ID (OpenAI)
timeout int Request timeout in seconds
max_retries int Retry count on failure
options JSON Additional adapter-specific options

Key design points:

  • One provider = one API key = one billing relationship.
  • Same adapter type can have multiple providers (prod/dev accounts).
  • Adapter type determines the protocol/client class used.
  • API keys are encrypted at rest using sodium.

Model layer 

Represents a specific model available through a provider.

Database table: tx_nrllm_model

Field Type Description
identifier string Unique slug (e.g., gpt-5, claude-sonnet)
name string Display name (e.g., GPT-5 (128K))
provider_uid int Foreign key to Provider
model_id string API model identifier (e.g., gpt-5, claude-opus-4-5-20251101)
context_length int Token limit (e.g., 128000)
max_output_tokens int Output limit (e.g., 16384)
capabilities CSV Supported features: chat,vision,streaming,tools
cost_input int Cents per 1M input tokens
cost_output int Cents per 1M output tokens
is_default bool Default model for this provider

Key design points:

  • Models belong to exactly one provider.
  • Capabilities define what the model can do.
  • Pricing stored as integers (cents/1M tokens) to avoid float issues.
  • Same logical model can exist multiple times (different providers).
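Because pricing is stored as integer cents per 1M tokens, a request cost can be computed with integer arithmetic and a single final division. A minimal sketch (the function name is illustrative, not part of the extension's API):

```php
<?php
// Sketch: compute request cost from the integer pricing fields
// (cost_input / cost_output are cents per 1M tokens, as in tx_nrllm_model).
function estimateCostCents(int $promptTokens, int $completionTokens, int $costInput, int $costOutput): float
{
    // Integer arithmetic throughout; one division at the very end
    return ($promptTokens * $costInput + $completionTokens * $costOutput) / 1_000_000;
}

// 1200 prompt tokens at 250 cents/1M plus 300 completion tokens at 1000 cents/1M:
// 1200*250 + 300*1000 = 600000 -> 0.6 cents
$cents = estimateCostCents(1200, 300, 250, 1000);
```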

Configuration layer 

Represents a specific use case with model and prompt settings.

Database table: tx_nrllm_configuration

Field Type Description
identifier string Unique slug (e.g., blog-summarizer)
name string Display name (e.g., Blog Post Summarizer)
model_uid int Foreign key to Model
system_prompt text System message for the model
temperature float Creativity: 0.0 - 2.0
max_tokens int Response length limit
top_p float Nucleus sampling
presence_penalty float Topic diversity
frequency_penalty float Word repetition penalty
use_case_type string chat, completion, embedding, translation

Key design points:

  • Configurations reference models, not providers directly.
  • All LLM parameters are tunable per use case.
  • Same model can be used by multiple configurations.

Service layer 

The extension follows a layered service architecture:

┌─────────────────────────────────────────┐
│         Your Application Code           │
└────────────────┬────────────────────────┘
                 │
┌────────────────▼────────────────────────┐
│         Feature Services                │
│  (Completion, Embedding, Vision, etc.)  │
└────────────────┬────────────────────────┘
                 │
┌────────────────▼────────────────────────┐
│         LlmServiceManager               │
│    (Provider selection & routing)       │
└────────────────┬────────────────────────┘
                 │
┌────────────────▼────────────────────────┐
│       ProviderAdapterRegistry           │
│    (Maps adapters to database providers)│
└────────────────┬────────────────────────┘
                 │
┌────────────────▼────────────────────────┐
│       Provider Adapters                 │
│  (OpenAI, Claude, Gemini, Ollama, etc.) │
└─────────────────────────────────────────┘

Feature services 

High-level services for common AI tasks:

  • CompletionService: Text generation with format control (JSON, Markdown).
  • EmbeddingService: Text-to-vector conversion with caching.
  • VisionService: Image analysis for alt-text, titles, descriptions.
  • TranslationService: Language translation with glossaries.

Provider adapters 

The extension includes adapters for multiple LLM providers:

  • OpenAI (OpenAiProvider): GPT-5.x series, o-series reasoning models.
  • Anthropic (ClaudeProvider): Claude Opus 4.5, Claude Sonnet 4.5, Claude Haiku 4.5.
  • Google (GeminiProvider): Gemini 3 Pro, Gemini 3 Flash, Gemini 2.5 series.
  • Ollama (OllamaProvider): Local model deployment.
  • OpenRouter (OpenRouterProvider): Multi-model routing.
  • Mistral (MistralProvider): Mistral models.
  • Groq (GroqProvider): Fast inference.

Security 

API key encryption 

API keys are encrypted at rest in the database using sodium_crypto_secretbox (XSalsa20-Poly1305).

  • Keys are derived from TYPO3's encryptionKey with domain separation.
  • Nonce is randomly generated per encryption (24 bytes).
  • Encrypted values are prefixed with enc: for detection.
  • Legacy plaintext values are automatically encrypted on first access.

For details, see ADR-012: API key encryption at application level.
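The scheme described above can be sketched with PHP's sodium extension. The key-derivation context string and the exact `enc:` payload layout below are assumptions for illustration; consult the extension source (or ADR-012) for the authoritative implementation.

```php
<?php
// Sketch of the described scheme using ext-sodium. The derivation context
// ('nr_llm_api_key') and the payload layout after 'enc:' are assumptions,
// not the extension's exact implementation.
function encryptApiKey(string $plaintext, string $encryptionKey): string
{
    // Derive a 32-byte key from TYPO3's encryptionKey with domain separation
    $key = sodium_crypto_generichash('nr_llm_api_key' . $encryptionKey, '', SODIUM_CRYPTO_SECRETBOX_KEYBYTES);
    // Fresh random 24-byte nonce per encryption
    $nonce = random_bytes(SODIUM_CRYPTO_SECRETBOX_NONCEBYTES);
    $cipher = sodium_crypto_secretbox($plaintext, $nonce, $key);

    return 'enc:' . base64_encode($nonce . $cipher);
}

function decryptApiKey(string $stored, string $encryptionKey): string
{
    if (!str_starts_with($stored, 'enc:')) {
        return $stored; // Legacy plaintext value, re-encrypted on first access
    }
    $raw = base64_decode(substr($stored, 4));
    $nonce = substr($raw, 0, SODIUM_CRYPTO_SECRETBOX_NONCEBYTES);
    $cipher = substr($raw, SODIUM_CRYPTO_SECRETBOX_NONCEBYTES);
    $key = sodium_crypto_generichash('nr_llm_api_key' . $encryptionKey, '', SODIUM_CRYPTO_SECRETBOX_KEYBYTES);

    return sodium_crypto_secretbox_open($cipher, $nonce, $key);
}
```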

Supported adapter types 

Adapter Type PHP Class Default Endpoint
openai OpenAiProvider https://api.openai.com/v1
anthropic ClaudeProvider https://api.anthropic.com/v1
gemini GeminiProvider https://generativelanguage.googleapis.com/v1beta
ollama OllamaProvider http://localhost:11434
openrouter OpenRouterProvider https://openrouter.ai/api/v1
mistral MistralProvider https://api.mistral.ai/v1
groq GroqProvider https://api.groq.com/openai/v1
azure_openai OpenAiProvider (custom Azure endpoint)
custom OpenAiProvider (custom endpoint)

Developer guide 

This guide covers technical details for developers integrating the LLM extension into their TYPO3 projects.

Core concepts 

Architecture overview 

The extension follows a layered architecture:

  1. Providers - Handle direct API communication.
  2. LlmServiceManager - Orchestrates providers and provides unified API.
  3. Feature services - High-level services for specific tasks.
  4. Domain models - Response objects and value types.
┌─────────────────────────────────────────┐
│         Your Application Code           │
└────────────────┬────────────────────────┘
                 │
┌────────────────▼────────────────────────┐
│         Feature Services                │
│  (Completion, Embedding, Vision, etc.)  │
└────────────────┬────────────────────────┘
                 │
┌────────────────▼────────────────────────┐
│         LlmServiceManager               │
│    (Provider selection & routing)       │
└────────────────┬────────────────────────┘
                 │
┌────────────────▼────────────────────────┐
│           Providers                     │
│    (OpenAI, Claude, Gemini, etc.)       │
└─────────────────────────────────────────┘

Dependency injection 

All services are available via dependency injection:

Example: Injecting LLM services
use Netresearch\NrLlm\Service\LlmServiceManager;
use Netresearch\NrLlm\Service\Feature\CompletionService;
use Netresearch\NrLlm\Service\Feature\EmbeddingService;
use Netresearch\NrLlm\Service\Feature\VisionService;
use Netresearch\NrLlm\Service\Feature\TranslationService;

class MyController
{
    public function __construct(
        private readonly LlmServiceManager $llmManager,
        private readonly CompletionService $completionService,
        private readonly EmbeddingService $embeddingService,
        private readonly VisionService $visionService,
        private readonly TranslationService $translationService,
    ) {}
}

Using LlmServiceManager 

Basic chat 

Example: Basic chat request
$messages = [
    ['role' => 'system', 'content' => 'You are a helpful assistant.'],
    ['role' => 'user', 'content' => 'What is TYPO3?'],
];

$response = $this->llmManager->chat($messages);

// Response properties
$content = $response->content;           // string
$model = $response->model;               // string
$finishReason = $response->finishReason; // string
$usage = $response->usage;               // UsageStatistics

// UsageStatistics
$promptTokens = $usage->promptTokens;
$completionTokens = $usage->completionTokens;
$totalTokens = $usage->totalTokens;

Chat with options 

Example: Chat with configuration options
use Netresearch\NrLlm\Service\Option\ChatOptions;

// Using ChatOptions object
$options = ChatOptions::creative()
    ->withMaxTokens(2000)
    ->withSystemPrompt('You are a creative writer.');

$response = $this->llmManager->chat($messages, $options);

// Or using array
$response = $this->llmManager->chat($messages, [
    'provider' => 'claude',
    'model' => 'claude-opus-4-5-20251101',
    'temperature' => 1.2,
    'max_tokens' => 2000,
    'top_p' => 0.9,
    'frequency_penalty' => 0.5,
    'presence_penalty' => 0.5,
]);

Simple completion 

Example: Quick completion from a prompt
// Quick completion from a prompt
$response = $this->llmManager->complete('Explain recursion in programming');

Embeddings 

Example: Generating embeddings
// Single text
$response = $this->llmManager->embed('Hello, world!');
$vector = $response->getVector(); // array<float>

// Multiple texts
$response = $this->llmManager->embed(['Text 1', 'Text 2', 'Text 3']);
$vectors = $response->embeddings; // array<array<float>>

Streaming 

Example: Streaming chat responses
$stream = $this->llmManager->streamChat($messages);

foreach ($stream as $chunk) {
    echo $chunk;
    ob_flush();
    flush();
}

Tool/function calling 

Example: Tool/function calling
$tools = [
    [
        'type' => 'function',
        'function' => [
            'name' => 'get_weather',
            'description' => 'Get current weather for a location',
            'parameters' => [
                'type' => 'object',
                'properties' => [
                    'location' => [
                        'type' => 'string',
                        'description' => 'City name',
                    ],
                    'unit' => [
                        'type' => 'string',
                        'enum' => ['celsius', 'fahrenheit'],
                    ],
                ],
                'required' => ['location'],
            ],
        ],
    ],
];

$response = $this->llmManager->chatWithTools($messages, $tools);

if ($response->hasToolCalls()) {
    foreach ($response->toolCalls as $toolCall) {
        $functionName = $toolCall['function']['name'];
        $arguments = json_decode($toolCall['function']['arguments'], true);

        // Execute your function
        $result = match ($functionName) {
            'get_weather' => $this->getWeather($arguments['location']),
            default => throw new \RuntimeException("Unknown function: {$functionName}"),
        };

        // Continue conversation with result
        $messages[] = [
            'role' => 'assistant',
            'content' => null,
            'tool_calls' => [$toolCall],
        ];
        $messages[] = [
            'role' => 'tool',
            'tool_call_id' => $toolCall['id'],
            'content' => json_encode($result),
        ];

        $response = $this->llmManager->chat($messages);
    }
}

Response objects 

CompletionResponse 

Domain/Model/CompletionResponse.php
namespace Netresearch\NrLlm\Domain\Model;

final class CompletionResponse
{
    public readonly string $content;
    public readonly string $model;
    public readonly UsageStatistics $usage;
    public readonly string $finishReason;
    public readonly string $provider;
    public readonly ?array $toolCalls;

    public function isComplete(): bool;      // finished normally
    public function wasTruncated(): bool;    // hit max_tokens
    public function wasFiltered(): bool;     // content filtered
    public function hasToolCalls(): bool;    // has tool calls
    public function getText(): string;       // alias for content
}
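The boolean helpers above reduce to simple checks on finishReason. A minimal self-contained sketch of that mapping (the class name and logic are an assumption for illustration, not the extension's source):

```php
<?php

// Hypothetical stand-in showing how the helpers likely map to finishReason.
final class CompletionResponseSketch
{
    public function __construct(
        public readonly string $content,
        public readonly string $finishReason,
        public readonly ?array $toolCalls = null,
    ) {}

    public function isComplete(): bool   { return $this->finishReason === 'stop'; }
    public function wasTruncated(): bool { return $this->finishReason === 'length'; }
    public function wasFiltered(): bool  { return $this->finishReason === 'content_filter'; }
    public function hasToolCalls(): bool { return !empty($this->toolCalls); }
    public function getText(): string    { return $this->content; }
}

$response = new CompletionResponseSketch('Hello!', 'length');
var_dump($response->wasTruncated()); // bool(true)
var_dump($response->isComplete());   // bool(false)
```

Checking `wasTruncated()` after each call is a cheap way to detect responses cut off by `max_tokens`.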

EmbeddingResponse 

Domain/Model/EmbeddingResponse.php
namespace Netresearch\NrLlm\Domain\Model;

final class EmbeddingResponse
{
    /** @var array<int, array<int, float>> */
    public readonly array $embeddings;
    public readonly string $model;
    public readonly UsageStatistics $usage;
    public readonly string $provider;

    public function getVector(): array;   // First embedding
    public static function cosineSimilarity(array $a, array $b): float;
}

UsageStatistics 

Domain/Model/UsageStatistics.php
namespace Netresearch\NrLlm\Domain\Model;

final readonly class UsageStatistics
{
    public int $promptTokens;
    public int $completionTokens;
    public int $totalTokens;
    public ?float $estimatedCost;
}

Creating custom providers 

Implement a custom provider by extending AbstractProvider:

Example: Custom provider implementation
<?php

namespace MyVendor\MyExtension\Provider;

use Netresearch\NrLlm\Domain\Model\CompletionResponse;
use Netresearch\NrLlm\Provider\AbstractProvider;
use Netresearch\NrLlm\Provider\Contract\ProviderInterface;

class MyCustomProvider extends AbstractProvider implements ProviderInterface
{
    protected string $baseUrl = 'https://api.example.com/v1';

    public function getName(): string
    {
        return 'My Custom Provider';
    }

    public function getIdentifier(): string
    {
        return 'custom';
    }

    public function isConfigured(): bool
    {
        return !empty($this->apiKey);
    }

    public function chatCompletion(array $messages, array $options = []): CompletionResponse
    {
        $payload = $this->buildChatPayload($messages, $options);
        $response = $this->sendRequest('chat', $payload);

        return new CompletionResponse(
            content: $response['choices'][0]['message']['content'],
            model: $response['model'],
            usage: $this->parseUsage($response['usage']),
            finishReason: $response['choices'][0]['finish_reason'],
            provider: $this->getIdentifier(),
        );
    }

    // Implement other required methods...
}

Register your provider in Services.yaml:

Configuration/Services.yaml
MyVendor\MyExtension\Provider\MyCustomProvider:
  arguments:
    $httpClient: '@Psr\Http\Client\ClientInterface'
    $requestFactory: '@Psr\Http\Message\RequestFactoryInterface'
    $streamFactory: '@Psr\Http\Message\StreamFactoryInterface'
    $logger: '@Psr\Log\LoggerInterface'
  tags:
    - name: nr_llm.provider
      priority: 50

Error handling 

The extension throws specific exceptions:

Example: Error handling
use Netresearch\NrLlm\Provider\Exception\ProviderException;
use Netresearch\NrLlm\Provider\Exception\ProviderConfigurationException;
use Netresearch\NrLlm\Provider\Exception\ProviderConnectionException;
use Netresearch\NrLlm\Provider\Exception\ProviderResponseException;
use Netresearch\NrLlm\Provider\Exception\UnsupportedFeatureException;
use Netresearch\NrLlm\Exception\InvalidArgumentException;

try {
    $response = $this->llmManager->chat($messages);
} catch (ProviderConfigurationException $e) {
    // Invalid or missing provider configuration
    $this->logger->error('Configuration error: ' . $e->getMessage());
} catch (ProviderConnectionException $e) {
    // Connection to provider failed
    $this->logger->error('Connection failed: ' . $e->getMessage());
} catch (ProviderResponseException $e) {
    // Provider returned an error response
    $this->logger->error('Provider response error: ' . $e->getMessage());
} catch (UnsupportedFeatureException $e) {
    // Requested feature not supported by provider
    $this->logger->warning('Unsupported feature: ' . $e->getMessage());
} catch (ProviderException $e) {
    // General provider error
    $this->logger->error('Provider error: ' . $e->getMessage());
} catch (InvalidArgumentException $e) {
    // Invalid parameters
    $this->logger->error('Invalid argument: ' . $e->getMessage());
}

Events 

Best practices 

  1. Use feature services for common tasks instead of raw LlmServiceManager.
  2. Enable caching for deterministic operations like embeddings.
  3. Handle errors gracefully with proper try-catch blocks.
  4. Sanitize input before sending to LLM providers.
  5. Validate output and treat LLM responses as untrusted.
  6. Use streaming for long responses to improve UX.
  7. Set reasonable timeouts based on expected response times.
  8. Monitor usage to control costs and prevent abuse.

Feature services 

High-level AI services for TYPO3 with prompt engineering and response parsing.

Overview 

The feature services layer provides domain-specific AI capabilities for TYPO3 extensions. Each service wraps the core LlmServiceManager with specialized prompts, response parsing, and configuration optimized for specific use cases.

Architecture 

Feature services architecture
┌─────────────────────────────────────────────────────────┐
│            Consuming Extensions                          │
│  (rte-ckeditor-image, textdb, contexts)                 │
└──────────────────────┬──────────────────────────────────┘
                       │ Dependency Injection
┌──────────────────────▼──────────────────────────────────┐
│              Feature Services                            │
│  - CompletionService                                     │
│  - VisionService                                         │
│  - EmbeddingService                                      │
│  - TranslationService                                    │
│  - PromptTemplateService                                 │
└──────────────────────┬──────────────────────────────────┘
                       │ LLM abstraction
┌──────────────────────▼──────────────────────────────────┐
│              LlmServiceManager                           │
│  (Provider routing, caching, rate limiting)             │
└──────────────────────┬──────────────────────────────────┘
                       │ Provider calls
┌──────────────────────▼──────────────────────────────────┐
│            Provider Implementations                      │
│  (OpenAI, Anthropic, Gemini, etc.)                      │
└─────────────────────────────────────────────────────────┘

CompletionService 

Purpose: Text generation and completion.

Use cases 

  • Content generation.
  • Rule generation (contexts extension).
  • Content summarization.
  • SEO meta generation.

Key features 

  • JSON response formatting.
  • Markdown generation.
  • Factual mode (low creativity).
  • Creative mode (high creativity).
  • System prompt support.

Example 

Example: Using CompletionService
use Netresearch\NrLlm\Service\Feature\CompletionService;

$completion = $completionService->complete(
    prompt: 'Explain TYPO3 in simple terms',
    options: [
        'temperature' => 0.3,
        'max_tokens' => 200,
        'response_format' => 'markdown',
    ]
);

echo $completion->content;

Methods 

CompletionService methods
// Standard completion
$response = $completionService->complete($prompt);

// JSON output
$data = $completionService->completeJson('List 5 colors as a JSON array');

// Markdown output
$markdown = $completionService->completeMarkdown('Write docs for this API');

// Factual (low creativity, high consistency)
$response = $completionService->completeFactual('What is the capital of France?');

// Creative (high creativity)
$response = $completionService->completeCreative('Write a haiku about coding');

VisionService 

Purpose: Image analysis and metadata generation.

Use cases 

  • Alt text generation (rte-ckeditor-image).
  • SEO title generation.
  • Detailed descriptions.
  • Custom image analysis.

Key features 

  • WCAG 2.1 compliant alt text.
  • SEO-optimized titles.
  • Batch processing.
  • Base64 and URL support.

Example 

Example: Using VisionService
use Netresearch\NrLlm\Service\Feature\VisionService;

// Single image
$altText = $visionService->generateAltText(
    'https://example.com/image.jpg'
);

// Batch processing
$altTexts = $visionService->generateAltText([
    'https://example.com/img1.jpg',
    'https://example.com/img2.jpg',
]);

Methods 

VisionService methods
// Generate WCAG-compliant alt text
$altText = $visionService->generateAltText('https://example.com/image.jpg');

// Generate SEO-optimized title
$title = $visionService->generateTitle('/path/to/local/image.png');

// Generate detailed description
$description = $visionService->generateDescription($imageUrl);

// Custom analysis
$analysis = $visionService->analyzeImage(
    $imageUrl,
    'What colors are prominent in this image?'
);

EmbeddingService 

Purpose: Text-to-vector conversion and similarity search.

Use cases 

  • Semantic translation memory (textdb).
  • Content similarity.
  • Duplicate detection.
  • Semantic search.

Key features 

  • Aggressive caching (deterministic).
  • Batch processing.
  • Cosine similarity calculations.
  • Top-K similarity search.

Example 

Example: Using EmbeddingService
use Netresearch\NrLlm\Service\Feature\EmbeddingService;

// Generate embedding
$vector = $embeddingService->embed('Search query text');

// Find similar
$similar = $embeddingService->findMostSimilar(
    queryVector: $vector,
    candidateVectors: $allVectors,
    topK: 5
);

Methods 

EmbeddingService methods
// Generate embedding (cached automatically)
$vector = $embeddingService->embed('Some text');

// Full response with metadata
$response = $embeddingService->embedFull('Some text');

// Batch embedding
$vectors = $embeddingService->embedBatch(['Text 1', 'Text 2']);

// Calculate cosine similarity
$similarity = $embeddingService->cosineSimilarity($vectorA, $vectorB);

// Find most similar vectors
$results = $embeddingService->findMostSimilar(
    $queryVector,
    $candidateVectors,
    topK: 5
);

// Normalize a vector
$normalized = $embeddingService->normalize($vector);

TranslationService 

Purpose: Language translation with quality control.

Use cases 

  • Translation suggestions (textdb).
  • Content localization.
  • Glossary-aware translation.

Key features 

  • Language detection.
  • Glossary support.
  • Formality levels.
  • Domain specialization.
  • Quality scoring.

Example 

Example: Using TranslationService
use Netresearch\NrLlm\Service\Feature\TranslationService;

$result = $translationService->translate(
    text: 'The TYPO3 extension is great',
    targetLanguage: 'de',
    options: [
        'glossary' => ['TYPO3' => 'TYPO3'],
        'formality' => 'formal',
        'domain' => 'technical',
    ]
);

echo $result->translation;
echo $result->confidence;

Methods 

TranslationService methods
// Basic translation
$result = $translationService->translate('Hello, world!', 'de');

// With options
$result = $translationService->translate(
    $text,
    targetLanguage: 'de',
    sourceLanguage: 'en',
    options: [
        'formality' => 'formal',
        'domain' => 'technical',
        'glossary' => [
            'TYPO3' => 'TYPO3',
            'extension' => 'Erweiterung',
        ],
        'preserve_formatting' => true,
    ]
);

// TranslationResult properties
$translation = $result->translation;
$sourceLanguage = $result->sourceLanguage;
$confidence = $result->confidence;

// Batch translation
$results = $translationService->translateBatch($texts, 'de');

// Language detection
$language = $translationService->detectLanguage($text);

// Quality scoring
$score = $translationService->scoreTranslationQuality($source, $translation, 'de');

PromptTemplateService 

Purpose: Centralized prompt management.

Key features 

  • Database-driven templates.
  • Variable substitution.
  • Conditional rendering.
  • Version control.
  • A/B testing.
  • Performance tracking.
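Variable substitution of the `{{variable}}` form (as in the template `'Custom user prompt with {{image_url}}'` shown later) can be sketched in a few lines. This illustrates the templating concept only; it is not the service's actual implementation:

```php
<?php

// Hypothetical sketch of {{variable}} substitution in prompt templates.
function renderTemplate(string $template, array $variables): string
{
    return preg_replace_callback(
        '/\{\{\s*(\w+)\s*\}\}/',
        // Unknown variables are left untouched so mistakes stay visible.
        fn(array $m): string => array_key_exists($m[1], $variables)
            ? (string)$variables[$m[1]]
            : $m[0],
        $template
    );
}

echo renderTemplate(
    'Describe the image at {{image_url}} in {{language}}.',
    ['image_url' => 'https://example.com/img.jpg', 'language' => 'English']
);
// Describe the image at https://example.com/img.jpg in English.
```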

Example 

Example: Using PromptTemplateService
use Netresearch\NrLlm\Service\PromptTemplateService;

$prompt = $promptService->render(
    identifier: 'vision.alt_text',
    variables: ['image_url' => 'https://example.com/img.jpg']
);

// Use with completion service
$response = $completionService->complete(
    prompt: $prompt->getUserPrompt(),
    options: [
        'system_prompt' => $prompt->getSystemPrompt(),
        'temperature' => $prompt->getTemperature(),
    ]
);

Installation 

Dependency injection 

Add to your extension's Configuration/Services.yaml:

Configuration/Services.yaml
services:
  Your\Extension\Service\YourService:
    public: true
    arguments:
      $visionService: '@Netresearch\NrLlm\Service\Feature\VisionService'
      $translationService: '@Netresearch\NrLlm\Service\Feature\TranslationService'
      $completionService: '@Netresearch\NrLlm\Service\Feature\CompletionService'
      $embeddingService: '@Netresearch\NrLlm\Service\Feature\EmbeddingService'

Usage in your extension 

Example: Using feature services in your extension
<?php

namespace Your\Extension\Service;

use Netresearch\NrLlm\Service\Feature\VisionService;

class YourService
{
    public function __construct(
        private readonly VisionService $visionService
    ) {}

    public function enhanceImage(string $imageUrl): array
    {
        return [
            'alt' => $this->visionService->generateAltText($imageUrl),
            'title' => $this->visionService->generateTitle($imageUrl),
            'description' => $this->visionService->generateDescription($imageUrl),
        ];
    }
}

Default prompts 

The extension includes 10 default prompts optimized for common use cases:

Vision 

  • vision.alt_text - WCAG 2.1 compliant alt text.
  • vision.seo_title - SEO-optimized titles.
  • vision.description - Detailed descriptions.

Translation 

  • translation.general - General purpose translation.
  • translation.technical - Technical documentation.
  • translation.marketing - Marketing copy.

Completion 

  • completion.rule_generation - TYPO3 contexts rules.
  • completion.content_summary - Content summarization.
  • completion.seo_meta - SEO meta descriptions.

Embedding 

  • embedding.semantic_search - Semantic search configuration.

Testing 

Unit tests 

Run feature service tests
# Run all unit tests
Build/Scripts/runTests.sh -s unit

# Alternative: Via Composer script
composer ci:test:php:unit

Mocking services 

Example: Mocking feature services in tests
use Netresearch\NrLlm\Service\Feature\VisionService;
use PHPUnit\Framework\TestCase;

class YourServiceTest extends TestCase
{
    public function testImageEnhancement(): void
    {
        $visionMock = $this->createMock(VisionService::class);
        $visionMock->method('generateAltText')
            ->willReturn('Test alt text');

        $service = new YourService($visionMock);
        $result = $service->enhanceImage('test.jpg');

        $this->assertEquals('Test alt text', $result['alt']);
    }
}

Performance 

Caching 

  • Embeddings: 24h cache (deterministic).
  • Vision: Short cache (subjective).
  • Translation: Medium cache (context-dependent).
  • Completion: Case-by-case basis.

Batch processing 

Use batch methods for better performance:

Batch processing example
// Good: Single request for multiple images
$altTexts = $visionService->generateAltText($imageUrls);

// Bad: Multiple individual requests
foreach ($imageUrls as $url) {
    $altText = $visionService->generateAltText($url);
}

Configuration 

Custom prompts 

Override default prompts via database or configuration:

Custom prompt template in database
INSERT INTO tx_nrllm_prompts (
    identifier,
    title,
    feature,
    system_prompt,
    user_prompt_template,
    temperature,
    max_tokens,
    is_active
) VALUES (
    'custom.vision.alt_text',
    'Custom Alt Text',
    'vision',
    'Custom system prompt...',
    'Custom user prompt with {{image_url}}',
    0.5,
    100,
    1
);

Service options 

All services accept configuration options:

Service options example
$result = $completionService->complete(
    prompt: 'Generate text',
    options: [
        'temperature' => 0.7,
        'max_tokens' => 1000,
        'top_p' => 0.9,
        'frequency_penalty' => 0.0,
        'presence_penalty' => 0.0,
        'response_format' => 'json',
        'system_prompt' => 'Custom instructions',
        'stop_sequences' => ["\n\n", 'END'],
    ]
);

Extension integration examples 

rte-ckeditor-image 

Example: CKEditor image integration
use Netresearch\NrLlm\Service\Feature\VisionService;

class ImageAiService
{
    public function __construct(
        private readonly VisionService $visionService
    ) {}

    public function enhanceImage(FileReference $file): array
    {
        $url = $file->getPublicUrl();
        return [
            'alt' => $this->visionService->generateAltText($url),
            'title' => $this->visionService->generateTitle($url),
        ];
    }
}

textdb 

Example: textdb translation integration
use Netresearch\NrLlm\Service\Feature\TranslationService;
use Netresearch\NrLlm\Service\Feature\EmbeddingService;

class AiTranslationService
{
    public function __construct(
        private readonly TranslationService $translationService,
        private readonly EmbeddingService $embeddingService
    ) {}

    public function suggestTranslation(string $text, string $lang): array
    {
        return [
            'translation' => $this->translationService->translate($text, $lang),
            'similar' => $this->findSimilar($text),
        ];
    }
}

contexts 

Example: Contexts rule generation
use Netresearch\NrLlm\Service\Feature\CompletionService;

class RuleGeneratorService
{
    public function __construct(
        private readonly CompletionService $completionService
    ) {}

    public function generateRule(string $description): ?array
    {
        return $this->completionService->completeJson(
            "Generate TYPO3 context rule: $description",
            ['temperature' => 0.2]
        );
    }
}

File structure 

Feature services file structure
nr-llm/
├── Classes/
│   ├── Domain/
│   │   └── Model/
│   │       ├── CompletionResponse.php
│   │       ├── VisionResponse.php
│   │       ├── TranslationResult.php
│   │       ├── EmbeddingResponse.php
│   │       ├── UsageStatistics.php
│   │       ├── PromptTemplate.php
│   │       └── RenderedPrompt.php
│   ├── Service/
│   │   ├── Feature/
│   │   │   ├── CompletionService.php
│   │   │   ├── VisionService.php
│   │   │   ├── EmbeddingService.php
│   │   │   └── TranslationService.php
│   │   └── PromptTemplateService.php
│   └── Exception/
│       ├── InvalidArgumentException.php
│       └── PromptTemplateNotFoundException.php
├── Configuration/
│   └── Services.yaml
├── Resources/
│   └── Private/
│       └── Data/
│           └── DefaultPrompts.php
└── Tests/
    └── Unit/
        └── Service/
            └── Feature/
                ├── CompletionServiceTest.php
                ├── VisionServiceTest.php
                └── EmbeddingServiceTest.php

Requirements 

  • TYPO3 v13.4+.
  • PHP 8.2+.
  • nr-llm core extension (LlmServiceManager).

API reference 

Complete API reference for the TYPO3 LLM extension.

LlmServiceManager 

The central service for all LLM operations.

class LlmServiceManager
Fully qualified name
\Netresearch\NrLlm\Service\LlmServiceManager

Orchestrates LLM providers and provides unified API access.

chat ( array $messages, array|ChatOptions $options = []) : CompletionResponse

Execute a chat completion request.

param array $messages

Array of message objects with 'role' and 'content' keys

param array|ChatOptions $options

Optional configuration

Message Format:

$messages = [
    ['role' => 'system', 'content' => 'You are a helpful assistant.'],
    ['role' => 'user', 'content' => 'Hello!'],
    ['role' => 'assistant', 'content' => 'Hi there!'],
    ['role' => 'user', 'content' => 'How are you?'],
];
Returns

CompletionResponse

complete ( string $prompt, array|ChatOptions $options = []) : CompletionResponse

Simple completion from a single prompt.

param string $prompt

The prompt text

param array|ChatOptions $options

Optional configuration

Returns

CompletionResponse

embed ( string|array $text, array $options = []) : EmbeddingResponse

Generate embeddings for text.

param string|array $text

Single text or array of texts

param array $options

Optional configuration

Returns

EmbeddingResponse

streamChat ( array $messages, array $options = []) : Generator

Stream a chat completion response.

param array $messages

Array of message objects

param array $options

Optional configuration

Returns

Generator yielding string chunks
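Consuming the returned Generator is plain iteration. The sketch below fakes the stream with a local generator (the chunk boundaries are an assumption) to show the pattern:

```php
<?php

// Stand-in for streamChat(): yields response fragments as they arrive.
function fakeStream(): Generator
{
    yield 'TYPO3 ';
    yield 'is a ';
    yield 'CMS.';
}

$buffer = '';
foreach (fakeStream() as $chunk) {
    // In a real controller you would flush each chunk to the client here.
    $buffer .= $chunk;
}

echo $buffer; // TYPO3 is a CMS.
```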

chatWithTools ( array $messages, array $tools, array $options = []) : CompletionResponse

Chat with tool/function calling capability.

param array $messages

Array of message objects

param array $tools

Array of tool definitions

param array $options

Optional configuration

Returns

CompletionResponse with potential tool calls

getProvider ( string $identifier) : ProviderInterface

Get a specific provider by identifier.

param string $identifier

Provider identifier (openai, claude, gemini)

throws

ProviderNotFoundException

Returns

ProviderInterface

getAvailableProviders ( ) : array

Get all configured and available providers.

Returns

array<string, ProviderInterface>

Feature services 

CompletionService 

class CompletionService
Fully qualified name
\Netresearch\NrLlm\Service\Feature\CompletionService

High-level text completion with format control.

complete ( string $prompt, array $options = []) : CompletionResponse

Standard text completion.

param string $prompt

The prompt text

param array $options

Optional configuration

Returns

CompletionResponse

completeJson ( string $prompt, array $options = []) : array

Completion with JSON output parsing.

param string $prompt

The prompt text

param array $options

Optional configuration

Returns

array Parsed JSON data

completeMarkdown ( string $prompt, array $options = []) : string

Completion with markdown formatting.

param string $prompt

The prompt text

param array $options

Optional configuration

Returns

string Markdown formatted text

completeFactual ( string $prompt, array $options = []) : CompletionResponse

Low-creativity completion for factual responses.

param string $prompt

The prompt text

param array $options

Optional configuration (temperature defaults to 0.1)

Returns

CompletionResponse

completeCreative ( string $prompt, array $options = []) : CompletionResponse

High-creativity completion for creative content.

param string $prompt

The prompt text

param array $options

Optional configuration (temperature defaults to 1.2)

Returns

CompletionResponse

EmbeddingService 

class EmbeddingService
Fully qualified name
\Netresearch\NrLlm\Service\Feature\EmbeddingService

Text-to-vector conversion with caching and similarity operations.

embed ( string $text) : array

Generate embedding vector for text (cached).

param string $text

The text to embed

Returns

array<float> Vector representation

embedFull ( string $text) : EmbeddingResponse

Generate embedding with full response metadata.

param string $text

The text to embed

Returns

EmbeddingResponse

embedBatch ( array $texts) : array

Generate embeddings for multiple texts.

param array $texts

Array of texts

Returns

array<array<float>> Array of vectors

cosineSimilarity ( array $a, array $b) : float

Calculate cosine similarity between two vectors.

param array $a

First vector

param array $b

Second vector

Returns

float Similarity score (-1 to 1)

findMostSimilar ( array $queryVector, array $candidates, int $topK = 5) : array

Find most similar vectors from candidates.

param array $queryVector

The query vector

param array $candidates

Array of candidate vectors

param int $topK

Number of results to return

Returns

array Sorted by similarity (highest first)

normalize ( array $vector) : array

Normalize a vector to unit length.

param array $vector

The vector to normalize

Returns

array Normalized vector
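The three vector helpers above boil down to standard array math. A self-contained sketch of the underlying formulas (illustrative only, not the service's source):

```php
<?php

/** Cosine similarity: dot(a, b) / (|a| * |b|). */
function cosine(array $a, array $b): float
{
    $dot = 0.0; $na = 0.0; $nb = 0.0;
    foreach ($a as $i => $v) {
        $dot += $v * $b[$i];
        $na  += $v * $v;
        $nb  += $b[$i] * $b[$i];
    }
    return $dot / (sqrt($na) * sqrt($nb));
}

/** Scale a vector to unit length. */
function normalizeVector(array $v): array
{
    $norm = sqrt(array_sum(array_map(fn($x) => $x * $x, $v)));
    return array_map(fn($x) => $x / $norm, $v);
}

/** Return the $topK candidates most similar to the query vector. */
function topKSimilar(array $query, array $candidates, int $topK = 5): array
{
    $scores = array_map(fn($c) => cosine($query, $c), $candidates);
    arsort($scores); // highest similarity first, candidate indices preserved
    return array_slice($scores, 0, $topK, preserve_keys: true);
}

// Keys of the result are candidate indices, values are similarity scores.
$scores = topKSimilar([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]], 2);
```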

VisionService 

class VisionService
Fully qualified name
\Netresearch\NrLlm\Service\Feature\VisionService

Image analysis with specialized prompts.

generateAltText ( string $imageUrl) : string

Generate WCAG-compliant alt text.

param string $imageUrl

URL or local path to image

Returns

string Accessibility-optimized alt text

generateTitle ( string $imageUrl) : string

Generate SEO-optimized image title.

param string $imageUrl

URL or local path to image

Returns

string SEO-friendly title

generateDescription ( string $imageUrl) : string

Generate detailed image description.

param string $imageUrl

URL or local path to image

Returns

string Detailed description

analyzeImage ( string $imageUrl, string $prompt) : string

Custom image analysis with specific prompt.

param string $imageUrl

URL or local path to image

param string $prompt

Analysis prompt

Returns

string Analysis result

TranslationService 

class TranslationService
Fully qualified name
\Netresearch\NrLlm\Service\Feature\TranslationService

Language translation with quality control.

translate ( string $text, string $targetLanguage, ?string $sourceLanguage = null, array $options = []) : TranslationResult

Translate text to target language.

param string $text

Text to translate

param string $targetLanguage

Target language code (e.g., 'de', 'fr')

param string|null $sourceLanguage

Source language code (auto-detected if null)

param array $options

Translation options

Options:

  • formality: 'formal', 'informal', 'default'
  • domain: 'technical', 'legal', 'medical', 'general'
  • glossary: array of term translations
  • preserve_formatting: bool
Returns

TranslationResult

translateBatch ( array $texts, string $targetLanguage, array $options = []) : array

Translate multiple texts.

param array $texts

Array of texts

param string $targetLanguage

Target language code

param array $options

Translation options

Returns

array<TranslationResult>

detectLanguage ( string $text) : string

Detect the language of text.

param string $text

Text to analyze

Returns

string Language code

scoreTranslationQuality ( string $source, string $translation, string $targetLanguage) : float

Score translation quality.

param string $source

Original text

param string $translation

Translated text

param string $targetLanguage

Target language code

Returns

float Quality score (0.0 to 1.0)

Domain models 

CompletionResponse 

class CompletionResponse
Fully qualified name
\Netresearch\NrLlm\Domain\Model\CompletionResponse

Response from chat/completion operations.

string content

The generated text content.

string model

The model used for generation.

UsageStatistics usage

Token usage statistics.

string finishReason

Why generation stopped: 'stop', 'length', 'content_filter', 'tool_calls'

string provider

The provider identifier.

array|null toolCalls

Tool calls if any were made.

isComplete ( ) : bool

Check if response finished normally.

wasTruncated ( ) : bool

Check if response hit max_tokens limit.

wasFiltered ( ) : bool

Check if content was filtered.

hasToolCalls ( ) : bool

Check if response contains tool calls.

getText ( ) : string

Alias for content property.

EmbeddingResponse 

class EmbeddingResponse
Fully qualified name
\Netresearch\NrLlm\Domain\Model\EmbeddingResponse

Response from embedding operations.

array embeddings

Array of embedding vectors.

string model

The model used for embedding.

UsageStatistics usage

Token usage statistics.

string provider

The provider identifier.

getVector ( ) : array

Get the first embedding vector.

static cosineSimilarity ( array $a, array $b) : float

Calculate cosine similarity between vectors.

returns

float

TranslationResult 

class TranslationResult
Fully qualified name
\Netresearch\NrLlm\Domain\Model\TranslationResult

Response from translation operations.

string translation

The translated text.

string sourceLanguage

Detected or provided source language.

string targetLanguage

The target language.

float confidence

Confidence score (0.0 to 1.0).

UsageStatistics 

class UsageStatistics
Fully qualified name
\Netresearch\NrLlm\Domain\Model\UsageStatistics

Token usage and cost tracking.

int promptTokens

Tokens in the prompt/input.

int completionTokens

Tokens in the completion/output.

int totalTokens

Total tokens used.

float|null estimatedCost

Estimated cost in USD (if available).
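An estimate of this kind is typically token counts multiplied by per-token rates. The rates below are placeholders for illustration, not real provider pricing:

```php
<?php

// Hypothetical per-million-token USD rates -- placeholders only.
const INPUT_RATE_PER_M  = 2.50;
const OUTPUT_RATE_PER_M = 10.00;

// Estimate cost from the UsageStatistics token counts.
function estimateCost(int $promptTokens, int $completionTokens): float
{
    return $promptTokens / 1_000_000 * INPUT_RATE_PER_M
         + $completionTokens / 1_000_000 * OUTPUT_RATE_PER_M;
}

// 1000 prompt tokens + 500 completion tokens:
echo estimateCost(1000, 500); // 0.0075
```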

Option classes 

ChatOptions 

class ChatOptions
Fully qualified name
\Netresearch\NrLlm\Service\Option\ChatOptions

Typed options for chat operations.

static factual ( )

Create options optimized for factual responses (temperature: 0.1).

returns

ChatOptions

static creative ( )

Create options for creative content (temperature: 1.2).

returns

ChatOptions

static balanced ( )

Create balanced options (temperature: 0.7).

returns

ChatOptions

static json ( )

Create options for JSON output format.

returns

ChatOptions

static code ( )

Create options optimized for code generation.

returns

ChatOptions

withTemperature ( float $temperature) : self

Set temperature (0.0 - 2.0).

withMaxTokens ( int $maxTokens) : self

Set maximum output tokens.

withTopP ( float $topP) : self

Set nucleus sampling parameter.

withFrequencyPenalty ( float $penalty) : self

Set frequency penalty (-2.0 to 2.0).

withPresencePenalty ( float $penalty) : self

Set presence penalty (-2.0 to 2.0).

withSystemPrompt ( string $prompt) : self

Set system prompt.

withProvider ( string $provider) : self

Set provider (openai, claude, gemini).

withModel ( string $model) : self

Set specific model.

toArray ( ) : array

Convert to array format.
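The with*() methods suggest an immutable builder: each call returns a modified copy, so shared presets like factual() are never mutated. A minimal sketch of that pattern (the real class carries more fields; this is an assumption about its design):

```php
<?php

// Minimal immutable-builder sketch of the ChatOptions pattern.
final class ChatOptionsSketch
{
    private function __construct(
        private readonly float $temperature,
        private readonly ?int $maxTokens = null,
    ) {}

    public static function factual(): self
    {
        return new self(temperature: 0.1);
    }

    public function withMaxTokens(int $maxTokens): self
    {
        // Return a copy instead of mutating, so presets stay reusable.
        return new self($this->temperature, $maxTokens);
    }

    public function toArray(): array
    {
        return array_filter(
            ['temperature' => $this->temperature, 'max_tokens' => $this->maxTokens],
            fn($v) => $v !== null
        );
    }
}

$options = ChatOptionsSketch::factual()->withMaxTokens(200);
print_r($options->toArray()); // temperature 0.1, max_tokens 200
```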

Provider interface 

interface ProviderInterface
Fully qualified name
\Netresearch\NrLlm\Provider\Contract\ProviderInterface

Contract for LLM providers.

getName ( ) : string

Get human-readable provider name.

getIdentifier ( ) : string

Get provider identifier for configuration.

isConfigured ( ) : bool

Check if provider has required configuration.

chatCompletion ( array $messages, array $options = []) : CompletionResponse

Execute chat completion.

getAvailableModels ( ) : array

Get list of available models.

interface EmbeddingCapableInterface
Fully qualified name
\Netresearch\NrLlm\Provider\Contract\EmbeddingCapableInterface

Contract for providers supporting embeddings.

embeddings ( string|array $input, array $options = []) : EmbeddingResponse

Generate embeddings.

interface VisionCapableInterface
Fully qualified name
\Netresearch\NrLlm\Provider\Contract\VisionCapableInterface

Contract for providers supporting vision/image analysis.

analyzeImage ( string $imageUrl, string $prompt, array $options = []) : CompletionResponse

Analyze an image.

interface StreamingCapableInterface
Fully qualified name
\Netresearch\NrLlm\Provider\Contract\StreamingCapableInterface

Contract for providers supporting streaming.

streamChatCompletion ( array $messages, array $options = []) : Generator

Stream chat completion.

interface ToolCapableInterface
Fully qualified name
\Netresearch\NrLlm\Provider\Contract\ToolCapableInterface

Contract for providers supporting tool/function calling.

chatWithTools ( array $messages, array $tools, array $options = []) : CompletionResponse

Chat with tool calling.

Exceptions 

class ProviderException
Fully qualified name
\Netresearch\NrLlm\Provider\Exception\ProviderException

Base exception for provider errors.

getProvider ( ) : string

Get the provider that threw the exception.

class ProviderConfigurationException
Fully qualified name
\Netresearch\NrLlm\Provider\Exception\ProviderConfigurationException

Thrown when a provider is incorrectly configured.

Extends \Netresearch\NrLlm\Provider\Exception\ProviderException

class ProviderConnectionException
Fully qualified name
\Netresearch\NrLlm\Provider\Exception\ProviderConnectionException

Thrown when a connection to the provider fails.

Extends \Netresearch\NrLlm\Provider\Exception\ProviderException

class ProviderResponseException
Fully qualified name
\Netresearch\NrLlm\Provider\Exception\ProviderResponseException

Thrown when the provider returns an unexpected or error response.

Extends \Netresearch\NrLlm\Provider\Exception\ProviderException

class UnsupportedFeatureException
Fully qualified name
\Netresearch\NrLlm\Provider\Exception\UnsupportedFeatureException

Thrown when a requested feature is not supported by the provider.

Extends \Netresearch\NrLlm\Provider\Exception\ProviderException

class InvalidArgumentException
Fully qualified name
\Netresearch\NrLlm\Exception\InvalidArgumentException

Thrown for invalid method arguments.

class ConfigurationNotFoundException
Fully qualified name
\Netresearch\NrLlm\Exception\ConfigurationNotFoundException

Thrown when a named configuration is not found.
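
Because provider errors share a common base class, consumers can catch broadly or narrowly. A sketch using the exception classes listed above (the `$llmManager` variable is illustrative):

```php
<?php

use Netresearch\NrLlm\Exception\ConfigurationNotFoundException;
use Netresearch\NrLlm\Provider\Exception\ProviderConnectionException;
use Netresearch\NrLlm\Provider\Exception\ProviderException;

try {
    $response = $llmManager->chat($messages);
} catch (ProviderConnectionException $e) {
    // Network-level failure: a retry or a fallback provider may help.
} catch (ProviderException $e) {
    // Any other provider error; getProvider() identifies the source.
    $failedProvider = $e->getProvider();
} catch (ConfigurationNotFoundException $e) {
    // The named configuration does not exist.
}
```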

Events 

Testing guide 

Comprehensive testing guide for the TYPO3 LLM extension.

Overview 

The extension includes a comprehensive test suite:

Test Type Count Purpose
Unit tests 384 Individual class and method testing.
Integration tests 39 Service interaction and provider testing.
E2E tests 11 Full workflow testing with real APIs.
Functional tests 39 TYPO3 framework integration.
Property tests 25 Fuzzy/property-based testing.

Running tests 

Prerequisites 

Install development dependencies
# Composer installs dev dependencies by default (the deprecated
# --dev flag is not needed)
composer install

Unit tests 

Run unit tests
# Recommended: Use runTests.sh (Docker-based, consistent environment)
Build/Scripts/runTests.sh -s unit

# With specific PHP version
Build/Scripts/runTests.sh -s unit -p 8.3

# Alternative: Via Composer script
composer ci:test:php:unit

Integration tests 

Run integration tests
# Run integration tests (requires mock server or API keys)
composer ci:test:php:integration

# With real API (set environment variables first)
OPENAI_API_KEY=sk-... composer ci:test:php:integration

Functional tests 

Run functional tests
# Run TYPO3 functional tests
Build/Scripts/runTests.sh -s functional

# Alternative: Via Composer script
composer ci:test:php:functional

All tests 

Run complete test suite
# Run all test suites via runTests.sh
Build/Scripts/runTests.sh -s unit
Build/Scripts/runTests.sh -s functional

# Run code quality checks
Build/Scripts/runTests.sh -s cgl
Build/Scripts/runTests.sh -s phpstan

Test structure 

Test directory structure
Tests/
├── Unit/
│   ├── Domain/
│   │   └── Model/
│   │       ├── CompletionResponseTest.php
│   │       ├── EmbeddingResponseTest.php
│   │       └── UsageStatisticsTest.php
│   ├── Provider/
│   │   ├── OpenAiProviderTest.php
│   │   ├── ClaudeProviderTest.php
│   │   ├── GeminiProviderTest.php
│   │   └── AbstractProviderTest.php
│   └── Service/
│       ├── LlmServiceManagerTest.php
│       └── Feature/
│           ├── CompletionServiceTest.php
│           ├── EmbeddingServiceTest.php
│           ├── VisionServiceTest.php
│           └── TranslationServiceTest.php
├── Integration/
│   ├── Provider/
│   │   └── ProviderIntegrationTest.php
│   └── Service/
│       └── ServiceIntegrationTest.php
├── Functional/
│   ├── Controller/
│   │   └── BackendControllerTest.php
│   └── Repository/
│       └── PromptTemplateRepositoryTest.php
└── E2E/
    └── WorkflowTest.php

Writing tests 

Unit test example 

Example: Unit test
<?php

namespace Netresearch\NrLlm\Tests\Unit\Service;

use Netresearch\NrLlm\Domain\Model\CompletionResponse;
use Netresearch\NrLlm\Domain\Model\UsageStatistics;
use Netresearch\NrLlm\Provider\Contract\ProviderInterface;
use Netresearch\NrLlm\Service\LlmServiceManager;
use PHPUnit\Framework\TestCase;

class LlmServiceManagerTest extends TestCase
{
    private LlmServiceManager $subject;

    protected function setUp(): void
    {
        parent::setUp();

        $mockProvider = $this->createMock(ProviderInterface::class);
        $mockProvider->method('getIdentifier')->willReturn('test');
        $mockProvider->method('isConfigured')->willReturn(true);

        $this->subject = new LlmServiceManager(
            providers: [$mockProvider],
            defaultProvider: 'test'
        );
    }

    public function testChatReturnsCompletionResponse(): void
    {
        $provider = $this->createMock(ProviderInterface::class);
        $provider->method('chatCompletion')->willReturn(
            new CompletionResponse(
                content: 'Hello!',
                model: 'test-model',
                usage: new UsageStatistics(10, 5, 15),
                finishReason: 'stop',
                provider: 'test'
            )
        );

        // ... test implementation
    }

    #[\PHPUnit\Framework\Attributes\DataProvider('invalidMessagesProvider')]
    public function testChatThrowsOnInvalidMessages(array $messages): void
    {
        $this->expectException(\InvalidArgumentException::class);
        $this->subject->chat($messages);
    }

    public static function invalidMessagesProvider(): array
    {
        return [
            'empty messages' => [[]],
            'missing role' => [[['content' => 'test']]],
            'missing content' => [[['role' => 'user']]],
            'invalid role' => [[['role' => 'invalid', 'content' => 'test']]],
        ];
    }
}

Integration test example 

Example: Integration test
<?php

namespace Netresearch\NrLlm\Tests\Integration\Provider;

use Netresearch\NrLlm\Provider\OpenAiProvider;
use PHPUnit\Framework\TestCase;

class OpenAiProviderIntegrationTest extends TestCase
{
    private ?OpenAiProvider $provider = null;

    protected function setUp(): void
    {
        $apiKey = getenv('OPENAI_API_KEY');
        if (!$apiKey) {
            $this->markTestSkipped('OPENAI_API_KEY not set');
        }

        $this->provider = new OpenAiProvider(
            httpClient: new \GuzzleHttp\Client(),
            requestFactory: new \GuzzleHttp\Psr7\HttpFactory(),
            streamFactory: new \GuzzleHttp\Psr7\HttpFactory(),
            apiKey: $apiKey
        );
    }

    public function testChatCompletionWithRealApi(): void
    {
        $response = $this->provider->chatCompletion([
            ['role' => 'user', 'content' => 'Say "test" and nothing else.'],
        ], [
            'max_tokens' => 10,
        ]);

        $this->assertStringContainsStringIgnoringCase('test', $response->content);
        $this->assertGreaterThan(0, $response->usage->totalTokens);
    }
}

Functional test example 

Example: Functional test
<?php

namespace Netresearch\NrLlm\Tests\Functional\Repository;

use Netresearch\NrLlm\Domain\Model\PromptTemplate;
use Netresearch\NrLlm\Domain\Repository\PromptTemplateRepository;
use TYPO3\TestingFramework\Core\Functional\FunctionalTestCase;

class PromptTemplateRepositoryTest extends FunctionalTestCase
{
    protected array $testExtensionsToLoad = [
        'netresearch/nr-llm',
    ];

    private PromptTemplateRepository $repository;

    protected function setUp(): void
    {
        parent::setUp();
        $this->repository = $this->get(PromptTemplateRepository::class);
    }

    public function testFindByIdentifierReturnsTemplate(): void
    {
        $this->importCSVDataSet(__DIR__ . '/Fixtures/prompt_templates.csv');

        $template = $this->repository->findByIdentifier('test-template');

        $this->assertInstanceOf(PromptTemplate::class, $template);
        $this->assertEquals('Test Template', $template->getName());
    }
}

Mocking providers 

Using mock provider 

Example: Mock provider
<?php

use Netresearch\NrLlm\Domain\Model\CompletionResponse;
use Netresearch\NrLlm\Domain\Model\UsageStatistics;
use Netresearch\NrLlm\Provider\Contract\ProviderInterface;

$mockProvider = $this->createMock(ProviderInterface::class);

$mockProvider
    ->method('chatCompletion')
    ->willReturn(new CompletionResponse(
        content: 'Mocked response',
        model: 'mock-model',
        usage: new UsageStatistics(100, 50, 150),
        finishReason: 'stop',
        provider: 'mock'
    ));

$mockProvider
    ->method('isConfigured')
    ->willReturn(true);

Using HTTP mock 

Example: HTTP mock
<?php

use GuzzleHttp\Client;
use GuzzleHttp\Handler\MockHandler;
use GuzzleHttp\HandlerStack;
use GuzzleHttp\Psr7\Response;

$mock = new MockHandler([
    new Response(200, [], json_encode([
        'choices' => [
            [
                'message' => ['content' => 'Test response'],
                'finish_reason' => 'stop',
            ],
        ],
        'model' => 'gpt-5',
        'usage' => [
            'prompt_tokens' => 10,
            'completion_tokens' => 5,
            'total_tokens' => 15,
        ],
    ])),
]);

$handlerStack = HandlerStack::create($mock);
$client = new Client(['handler' => $handlerStack]);

$provider = new OpenAiProvider(
    httpClient: $client,
    // ...
);

Test fixtures 

CSV fixtures 

Tests/Functional/Fixtures/prompt_templates.csv
"tx_nrllm_prompt_template"
"uid","pid","identifier","name","template","variables"
1,0,"test-template","Test Template","Hello {name}!","name"

JSON response fixtures 

Tests/Fixtures/openai_chat_response.json
{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "gpt-5",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Test response"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 10,
    "completion_tokens": 5,
    "total_tokens": 15
  }
}

Mutation testing 

The extension uses Infection for mutation testing to ensure test quality.

Running mutation tests 

Run mutation tests
# Run mutation tests via runTests.sh
Build/Scripts/runTests.sh -s mutation

# Alternative: Via Composer script
composer ci:test:php:mutation

Interpreting results 

  • MSI (Mutation Score Indicator): Percentage of mutations killed.
  • Target: >60% MSI indicates good test quality.
  • Current: 58% MSI (459 tests).
Mutation Score Indicator (MSI): 58%
Mutation Code Coverage: 85%
Covered Code MSI: 68%

CI/CD integration 

GitHub Actions 

.github/workflows/tests.yml
name: Tests

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest

    strategy:
      matrix:
        php: ['8.2', '8.3', '8.4', '8.5']
        typo3: ['13.4', '14.0']

    steps:
      - uses: actions/checkout@v4

      - name: Setup PHP
        uses: shivammathur/setup-php@v2
        with:
          php-version: ${{ matrix.php }}
          coverage: xdebug

      - name: Install dependencies
        run: composer install --prefer-dist

      - name: Run tests
        run: composer test

      - name: Upload coverage
        uses: codecov/codecov-action@v3
        with:
          files: coverage/clover.xml

GitLab CI/CD 

.gitlab-ci.yml
test:
  image: php:8.2
  script:
    - composer install
    - composer test
  coverage: '/^\s*Lines:\s*\d+.\d+\%/'

Best practices 

  1. Isolate tests: Each test should be independent.
  2. Mock external APIs: Never call real APIs in unit tests.
  3. Use data providers: For testing multiple scenarios.
  4. Test edge cases: Empty inputs, null values, boundaries.
  5. Descriptive names: Test method names should describe behavior.
  6. Arrange-Act-Assert: Follow AAA pattern.
  7. Fast tests: Unit tests should complete in milliseconds.
  8. Coverage goals: Aim for >80% line coverage.

Architecture Decision Records 

This section documents significant architectural decisions made during the development of the TYPO3 LLM Extension.

Symbol legend 

Each consequence in the ADRs is marked with severity symbols to indicate impact weight:

Symbol Meaning Weight
●● Strong Positive +2 to +3
● Medium Positive +1 to +2
◐ Light Positive +0.5 to +1
✕ Medium Negative -1 to -2
✕✕ Strong Negative -2 to -3
◑ Light Negative -0.5 to -1

Net Score indicates the overall impact of the decision (sum of weights).

Decision records 

ADR-001: Provider Abstraction Layer 

Status 

Accepted (2024-01)

Context 

We needed to support multiple LLM providers (OpenAI, Anthropic Claude, Google Gemini) while maintaining a consistent API for consumers. Each provider has different:

  • API endpoints and authentication methods
  • Request/response formats
  • Model naming conventions
  • Capability sets (vision, embeddings, streaming, tools)

Decision 

Implement a provider abstraction layer with:

  1. ProviderInterface as the core contract.
  2. Capability interfaces for optional features:

    • EmbeddingCapableInterface.
    • VisionCapableInterface.
    • StreamingCapableInterface.
    • ToolCapableInterface.
  3. AbstractProvider base class with shared functionality.
  4. LlmServiceManager as the unified entry point.

Consequences 

Positive:

  • ●● Consumers use single API regardless of provider.
  • ●● Easy to add new providers.
  • ● Capability checking via interface detection.
  • ●● Provider switching requires no code changes.

Negative:

  • ✕ Lowest common denominator for shared features.
  • ◑ Provider-specific features require direct provider access.
  • ◑ Additional abstraction layer complexity.

Net Score: +5.5 (Strong positive impact - abstraction enables flexibility and maintainability)

Alternatives considered 

  1. Single monolithic class: Rejected due to maintenance complexity.
  2. Strategy pattern only: Insufficient for capability detection.
  3. Factory pattern: Used in combination with interfaces.

ADR-002: Feature Services Architecture 

Status 

Accepted (2024-02)

Context 

Common LLM tasks (translation, image analysis, embeddings) require:

  • Specialized prompts and configurations
  • Pre/post-processing logic
  • Caching strategies
  • Quality control measures

Decision 

Create dedicated Feature Services for high-level operations:

  • CompletionService: Text generation with format control.
  • EmbeddingService: Vector operations with caching.
  • VisionService: Image analysis with specialized prompts.
  • TranslationService: Language translation with quality scoring.

Each service:

  • Uses LlmServiceManager internally.
  • Provides domain-specific methods.
  • Handles caching and optimization.
  • Returns typed response objects.
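
A consumer would typically inject one of these services rather than the manager. A hedged sketch (the `translate()` and `getTranslatedText()` names are illustrative, not taken verbatim from the extension's API):

```php
<?php

use Netresearch\NrLlm\Service\Feature\TranslationService;

// Injected via constructor DI in a real consumer; method names on
// TranslationService and its response object are assumptions.
final class ProductTeaser
{
    public function __construct(
        private readonly TranslationService $translationService,
    ) {}

    public function localize(string $text, string $targetLanguage): string
    {
        return $this->translationService
            ->translate($text, $targetLanguage)
            ->getTranslatedText();
    }
}
```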

Consequences 

Positive:

  • ●● Clear separation of concerns.
  • ● Reusable, tested implementations.
  • ●● Consistent behavior across use cases.
  • ● Built-in best practices (caching, prompts).

Negative:

  • ◑ Additional classes to maintain.
  • ◑ Potential duplication with manager methods.
  • ◑ Learning curve for service selection.

Net Score: +6.5 (Strong positive impact - services provide high-level abstractions with best practices)

ADR-003: Typed Response Objects 

Status 

Accepted (2024-01)

Context 

Provider APIs return different response structures. We needed to:

  • Provide consistent response format to consumers.
  • Enable IDE autocompletion and type checking.
  • Include relevant metadata (usage, model, finish reason).

Decision 

Use immutable value objects for responses:

Example: CompletionResponse value object
final class CompletionResponse
{
    public function __construct(
        public readonly string $content,
        public readonly string $model,
        public readonly UsageStatistics $usage,
        public readonly string $finishReason,
        public readonly string $provider,
        public readonly ?array $toolCalls = null,
    ) {}
}

Key characteristics:

  • final classes prevent inheritance issues.
  • readonly properties ensure immutability.
  • Constructor promotion for concise definition.
  • Nullable for optional data.

Consequences 

Positive:

  • ●● Strong typing with IDE support.
  • ● Immutable objects are thread-safe.
  • ●● Clear API contract.
  • ● Easy testing and mocking.

Negative:

  • ◑ Cannot extend responses.
  • ✕ Breaking changes require new properties.
  • ◑ Slight memory overhead vs arrays.

Net Score: +5.5 (Strong positive impact - type safety and immutability outweigh flexibility limitations)

ADR-004: PSR-14 Event System 

Status 

Accepted (2024-02)

Context 

Consumers need extension points for:

  • Logging and monitoring.
  • Request modification.
  • Response processing.
  • Cost tracking and rate limiting.

Decision 

Use TYPO3's PSR-14 event system with events:

  • BeforeRequestEvent: Modify requests before sending.
  • AfterResponseEvent: Process responses after receiving.

Events are dispatched by LlmServiceManager and provide:

  • Full context (messages, options, provider).
  • Mutable options (before request).
  • Response data (after response).
  • Timing information.
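
A PSR-14 listener for these events might look like the following sketch (the event class namespace and getter names are assumptions based on the event names documented above; `usage`, `provider`, and `model` follow the CompletionResponse properties):

```php
<?php

use Netresearch\NrLlm\Event\AfterResponseEvent;

// Registered as a PSR-14 event listener via Services.yaml.
final class TokenUsageLogger
{
    public function __invoke(AfterResponseEvent $event): void
    {
        $response = $event->getResponse();

        // Track cost-relevant data per provider and model.
        error_log(sprintf(
            '[nr_llm] %s/%s used %d tokens',
            $response->provider,
            $response->model,
            $response->usage->totalTokens,
        ));
    }
}
```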

Consequences 

Positive:

  • ●● Follows TYPO3 conventions.
  • ●● Decoupled extension mechanism.
  • ● Multiple listeners without modification.
  • ● Testable event handlers.

Negative:

  • ◑ Event overhead on every request.
  • ◑ Listener ordering considerations.
  • ◑ Debugging event flow complexity.

Net Score: +6.5 (Strong positive impact - standard TYPO3 integration with decoupled extensibility)

ADR-005: TYPO3 Caching Framework Integration 

Status 

Accepted (2024-03)

Context 

LLM API calls are:

  • Expensive (cost per token).
  • Relatively slow (network latency).
  • Often deterministic (embeddings, some completions).

Decision 

Integrate with TYPO3's caching framework:

  • Cache identifier: nrllm_responses.
  • Configurable backend (default: database).
  • Cache keys based on: provider + model + input hash.
  • TTL: 3600s default (configurable).

Caching strategy:

  • Always cache: Embeddings (deterministic).
  • Optional cache: Completions with temperature=0.
  • Never cache: Streaming, tool calls, high temperature.
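
The documented key scheme (provider + model + input hash) can be sketched as follows; the actual implementation in the extension may differ:

```php
<?php

// Sketch of the documented cache-key scheme, not the extension's
// internal code: hash provider, model, and the normalized input.
function buildCacheIdentifier(string $provider, string $model, array $messages): string
{
    return sha1($provider . ':' . $model . ':' . json_encode($messages));
}
```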

Consequences 

Positive:

  • ●● Reduced API costs.
  • ●● Faster responses for cached content.
  • ● Follows TYPO3 patterns.
  • ◐ Configurable per deployment.

Negative:

  • ✕ Cache invalidation complexity.
  • ◑ Storage requirements.
  • ✕ Stale responses if TTL too long.

Net Score: +4.5 (Positive impact - significant cost/performance gains with manageable cache complexity)

ADR-006: Option Objects vs Arrays 

Status 

Superseded by ADR-011 (2024-12)

Context 

Method signatures like chat(array $messages, array $options) lack:

  • Type safety and validation.
  • IDE autocompletion.
  • Documentation of available options.
  • Factory methods for common configurations.

Decision 

Introduce Option Objects (initially with array backwards compatibility):

Example: Using ChatOptions
// Option objects only
$options = ChatOptions::creative()
    ->withMaxTokens(2000)
    ->withSystemPrompt('Be creative');

$response = $llmManager->chat($messages, $options);

Implementation:

  • Pure object signatures: ?ChatOptions.
  • Factory presets: factual(), creative(), json().
  • Fluent builder pattern.
  • Validation in constructors.

Consequences 

Positive:

  • ● IDE autocompletion for options.
  • ● Built-in validation.
  • ● Convenient factory presets.
  • ●● Type safety enforced.
  • ● Single consistent API.

Negative:

  • ◑ Migration required for existing code.
  • ◑ No array syntax available.

Net Score: +5.5 (Strong positive impact - developer experience improvements through typed options)

ADR-007: Multi-Provider Strategy 

Status 

Accepted (2024-01)

Context 

Supporting multiple providers requires:

  • Dynamic provider registration.
  • Priority-based selection.
  • Configuration per provider.
  • Fallback mechanisms.

Decision 

Use tagged service collection with priority:

Configuration/Services.yaml
# Services.yaml
Netresearch\NrLlm\Provider\OpenAiProvider:
  tags:
    - name: nr_llm.provider
      priority: 100

Netresearch\NrLlm\Provider\ClaudeProvider:
  tags:
    - name: nr_llm.provider
      priority: 90

Provider selection hierarchy:

  1. Explicit provider in options.
  2. Default provider from configuration.
  3. First configured provider by priority.
  4. Throw exception if none available.
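
The selection hierarchy above can be sketched as a simple resolution function (names and signature are illustrative, not the extension's actual internals):

```php
<?php

// Sketch of the documented selection hierarchy. $providersByPriority
// is assumed to be keyed by identifier and ordered by priority.
function resolveProvider(?string $requested, ?string $default, array $providersByPriority): object
{
    // 1. Explicit provider in options
    if ($requested !== null && isset($providersByPriority[$requested])) {
        return $providersByPriority[$requested];
    }
    // 2. Default provider from configuration
    if ($default !== null && isset($providersByPriority[$default])) {
        return $providersByPriority[$default];
    }
    // 3. First configured provider by priority
    foreach ($providersByPriority as $provider) {
        if ($provider->isConfigured()) {
            return $provider;
        }
    }
    // 4. Throw exception if none available
    throw new \RuntimeException('No configured LLM provider available');
}
```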

Consequences 

Positive:

  • ● Easy provider registration.
  • ● Clear priority system.
  • ●● Supports custom providers.
  • ● Automatic fallback.

Negative:

  • ◑ Priority conflicts possible.
  • ◑ All providers instantiated.
  • ◑ Configuration complexity.

Net Score: +5.5 (Strong positive impact - flexible multi-provider support with minor overhead)

ADR-008: Error Handling Strategy 

Status 

Accepted (2024-02)

Context 

LLM operations can fail due to:

  • Authentication issues.
  • Rate limiting.
  • Network errors.
  • Content filtering.
  • Invalid inputs.

Decision 

Implement hierarchical exception system:

Exception
├── ProviderException (base for provider errors)
│   ├── AuthenticationException (invalid API key)
│   ├── RateLimitException (quota exceeded)
│   └── ContentFilteredException (blocked content)
├── InvalidArgumentException (bad inputs)
└── ConfigurationNotFoundException (missing config)

Key features:

  • All provider errors extend ProviderException.
  • RateLimitException includes getRetryAfter().
  • Exceptions include provider context.
  • HTTP status code mapping.
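
A retry strategy built on getRetryAfter() might look like this sketch (the RateLimitException namespace is assumed from the hierarchy above; `$llmManager` and `$messages` are illustrative):

```php
<?php

use Netresearch\NrLlm\Provider\Exception\RateLimitException;

// Retry once after the provider-suggested delay.
try {
    $response = $llmManager->chat($messages);
} catch (RateLimitException $e) {
    sleep($e->getRetryAfter());
    $response = $llmManager->chat($messages);
}
```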

Consequences 

Positive:

  • ●● Granular error handling.
  • ● Provider-specific recovery strategies.
  • ● Clear exception hierarchy.
  • ● Actionable error information.

Negative:

  • ◑ Many exception classes.
  • ◑ Exception handling complexity.
  • ✕ Breaking changes in new versions.

Net Score: +5.0 (Positive impact - robust error handling enables graceful recovery strategies)

ADR-009: Streaming Implementation 

Status 

Accepted (2024-03)

Context 

Streaming responses provide:

  • Better UX for long responses.
  • Lower time-to-first-token.
  • Real-time feedback.

Decision 

Use PHP Generators for streaming:

Example: Streaming chat responses
public function streamChat(array $messages, array $options = []): Generator
{
    $response = $this->sendStreamingRequest($messages, $options);

    foreach ($this->parseSSE($response) as $chunk) {
        yield $chunk;
    }
}

// Usage
foreach ($llmManager->streamChat($messages) as $chunk) {
    echo $chunk;
    flush();
}

Implementation details:

  • Server-Sent Events (SSE) parsing.
  • Chunked transfer encoding.
  • Memory-efficient iteration.
  • Provider-specific adaptations.

Consequences 

Positive:

  • ●● Memory efficient.
  • ● Natural iteration syntax.
  • ●● Real-time output.
  • ◐ Works with output buffering.

Negative:

  • ✕ No response object until complete.
  • ◑ Error handling complexity.
  • ◑ Connection management.
  • ✕ No caching possible.

Net Score: +3.5 (Positive impact - streaming UX benefits outweigh implementation complexity)

ADR-010: Tool/Function Calling Design 

Status 

Accepted (2024-04)

Context 

Modern LLMs support tool/function calling for:

  • External data retrieval.
  • Action execution.
  • Structured output generation.

Decision 

Support OpenAI-compatible tool format:

Example: Tool definition
$tools = [
    [
        'type' => 'function',
        'function' => [
            'name' => 'get_weather',
            'description' => 'Get weather for location',
            'parameters' => [
                'type' => 'object',
                'properties' => [
                    'location' => ['type' => 'string'],
                ],
                'required' => ['location'],
            ],
        ],
    ],
];

Tool calls returned in CompletionResponse::$toolCalls:

  • Array of tool call objects.
  • Includes function name and arguments.
  • JSON-encoded arguments for parsing.
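
Consuming the returned tool calls might look like the following sketch (the array shape of each entry is an assumption based on the OpenAI-compatible format; `fetchWeather()` is a hypothetical helper):

```php
<?php

$response = $llmManager->chat($messages, $options);

if ($response->toolCalls !== null) {
    foreach ($response->toolCalls as $toolCall) {
        // Arguments arrive JSON-encoded, as noted above.
        $name = $toolCall['function']['name'];
        $arguments = json_decode($toolCall['function']['arguments'], true);

        if ($name === 'get_weather') {
            // There is no automatic execution - call the tool yourself.
            $result = fetchWeather($arguments['location']); // hypothetical helper
        }
    }
}
```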

Consequences 

Positive:

  • ●● Industry-standard format.
  • ●● Cross-provider compatibility.
  • ● Flexible tool definitions.
  • ● Type-safe parameters.

Negative:

  • ◑ Complex nested structure.
  • ◑ Provider translation needed.
  • ✕ No automatic execution.
  • ◑ Testing complexity.

Net Score: +5.0 (Positive impact - OpenAI-compatible format ensures broad compatibility)

ADR-011: Object-Only Options API 

Status 

Accepted (2024-12)

Supersedes: ADR-006

Context 

ADR-006 introduced Option Objects with array backwards compatibility (union types ChatOptions|array). This dual-path approach created:

  • Unnecessary complexity in the codebase.
  • OptionsResolverTrait with 6 resolution methods.
  • fromArray() methods in all Option classes.
  • Cognitive load deciding which syntax to use.
  • Inconsistent usage patterns across the codebase.

Given that:

  • No external users exist yet (pre-release).
  • No breaking change impact on third parties.
  • Clean break is possible without migration burden.

Decision 

Remove array support entirely. Use typed Option objects only:

Example: Object-only options API
// All methods now use nullable typed parameters
public function chat(array $messages, ?ChatOptions $options = null): CompletionResponse;
public function embed(string|array $input, ?EmbeddingOptions $options = null): EmbeddingResponse;
public function vision(array $content, ?VisionOptions $options = null): VisionResponse;

// Usage with factory presets
$response = $llmManager->chat($messages, ChatOptions::creative());

// Usage with custom options
$response = $llmManager->chat($messages, new ChatOptions(
    temperature: 0.7,
    maxTokens: 2000
));

// Usage with defaults (null)
$response = $llmManager->chat($messages);

Implementation:

  • Signatures: ?ChatOptions instead of ChatOptions|array.
  • Defaults: null creates default Options in method body.
  • Removed: OptionsResolverTrait, all fromArray() methods.
  • Preserved: Factory presets, fluent builders, validation.

Consequences 

Positive:

  • ●● Type safety enforced at compile time.
  • ●● Single consistent API pattern.
  • ● Reduced codebase complexity (~250 lines removed).
  • ● No trait usage or resolution overhead.
  • ● Better IDE support without union types.
  • ◐ Cleaner method signatures.

Negative:

  • ◑ No array syntax for quick prototyping.
  • ◑ Slightly more verbose for simple cases.

Net Score: +6.0 (Strong positive - type safety and consistency outweigh minor verbosity increase)

Files changed 

Deleted:

  • Classes/Service/Option/OptionsResolverTrait.php

Modified:

  • Classes/Service/Option/AbstractOptions.php - Removed fromArray() abstract.
  • Classes/Service/Option/ChatOptions.php - Removed fromArray().
  • Classes/Service/Option/EmbeddingOptions.php - Removed fromArray().
  • Classes/Service/Option/VisionOptions.php - Removed fromArray().
  • Classes/Service/Option/ToolOptions.php - Removed fromArray().
  • Classes/Service/Option/TranslationOptions.php - Removed fromArray().
  • Classes/Service/LlmServiceManager.php - Object-only signatures.
  • Classes/Service/LlmServiceManagerInterface.php - Object-only signatures.
  • Classes/Service/Feature/*Service.php - All feature services updated.
  • Classes/Specialized/Translation/LlmTranslator.php - Uses ChatOptions objects.

ADR-012: API key encryption at application level 

Status

Superseded

Date

2024-12-27

Superseded by

nr-vault integration (2025-01)

Authors

Netresearch DTT GmbH

Context 

The nr_llm extension stores API keys for various LLM providers (OpenAI, Anthropic, etc.) in the database. These credentials are sensitive and require protection.

Problem statement 

TYPO3's TCA type=password field has two modes:

  1. Hashed mode (default): Uses bcrypt/argon2 - irreversible, suitable for user passwords
  2. Unhashed mode (hashed => false): Stores plaintext - required for API keys that must be retrieved

API keys must be retrievable to authenticate with external services, so hashing is not an option. However, storing them in plaintext exposes them to:

  • Database dumps/backups
  • SQL injection attacks
  • Unauthorized database access
  • Accidental exposure in logs

Requirements 

  1. API keys must be retrievable (not hashed).
  2. Keys must be encrypted at rest in the database.
  3. Encryption must be transparent to the application.
  4. Solution must work without external dependencies (self-contained).
  5. Must support key rotation.
  6. Backwards compatible with existing plaintext values.

Decision 

Implement application-level encryption using sodium_crypto_secretbox (XSalsa20-Poly1305) with key derivation from TYPO3's encryptionKey.

Architecture 

┌─────────────────────────────────────────────────────────────────┐
│                        Backend Form                              │
│                    (user enters API key)                         │
└─────────────────────────────┬───────────────────────────────────┘
                              │ plaintext
                              ▼
┌─────────────────────────────────────────────────────────────────┐
│                    Provider::setApiKey()                         │
│              ProviderEncryptionService::encrypt()                │
│                                                                  │
│  1. Generate random nonce (24 bytes)                             │
│  2. Derive key from TYPO3 encryptionKey via SHA-256              │
│  3. Encrypt with XSalsa20-Poly1305                               │
│  4. Prefix with "enc:" marker                                    │
│  5. Base64 encode for storage                                    │
└─────────────────────────────┬───────────────────────────────────┘
                              │ "enc:base64(nonce+ciphertext+tag)"
                              ▼
┌─────────────────────────────────────────────────────────────────┐
│                         Database                                 │
│                   tx_nrllm_provider.api_key                      │
└─────────────────────────────────────────────────────────────────┘

Key derivation 

Example: Domain-separated key derivation
// Domain-separated key derivation
$key = hash('sha256', $typo3EncryptionKey . ':nr_llm_provider_encryption', true);

The domain separator :nr_llm_provider_encryption ensures:

  • Keys are unique to this use case.
  • Same encryptionKey produces different keys for different purposes.
  • No collision with other extensions using similar patterns.

Encryption format 

enc:{base64(nonce || ciphertext || auth_tag)}

Where:
- "enc:" = 4-byte prefix marker
- nonce = 24 bytes (SODIUM_CRYPTO_SECRETBOX_NONCEBYTES)
- ciphertext = variable length
- auth_tag = 16 bytes (Poly1305 MAC, included by sodium)
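
The documented format can be reproduced with PHP's built-in sodium extension. A self-contained sketch of the scheme, not the extension's actual ProviderEncryptionService code (`$typo3EncryptionKey` is illustrative):

```php
<?php

// Encrypt: "enc:" marker + base64(nonce || ciphertext || auth tag)
function encryptApiKey(string $plaintext, string $typo3EncryptionKey): string
{
    // Domain-separated key derivation (see above)
    $key = hash('sha256', $typo3EncryptionKey . ':nr_llm_provider_encryption', true);
    $nonce = random_bytes(SODIUM_CRYPTO_SECRETBOX_NONCEBYTES); // 24 bytes

    return 'enc:' . base64_encode($nonce . sodium_crypto_secretbox($plaintext, $nonce, $key));
}

// Decrypt: strip marker, split nonce from ciphertext, verify MAC
function decryptApiKey(string $stored, string $typo3EncryptionKey): string
{
    $raw = base64_decode(substr($stored, 4)); // strip "enc:" marker
    $nonce = substr($raw, 0, SODIUM_CRYPTO_SECRETBOX_NONCEBYTES);
    $ciphertext = substr($raw, SODIUM_CRYPTO_SECRETBOX_NONCEBYTES);
    $key = hash('sha256', $typo3EncryptionKey . ':nr_llm_provider_encryption', true);

    $plaintext = sodium_crypto_secretbox_open($ciphertext, $nonce, $key);
    if ($plaintext === false) {
        throw new \RuntimeException('Decryption failed (wrong key or tampered data)');
    }

    return $plaintext;
}
```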

Implementation 

Files created/modified 

File Purpose
Classes/Service/Crypto/ProviderEncryptionServiceInterface.php Interface definition
Classes/Service/Crypto/ProviderEncryptionService.php Encryption implementation
Classes/Domain/Model/Provider.php Updated setApiKey/getDecryptedApiKey
Configuration/TCA/tx_nrllm_provider.php Added hashed => false
Configuration/Services.yaml Service registration

Key methods 

Example: Encryption service methods
// ProviderEncryptionService
public function encrypt(string $plaintext): string;
public function decrypt(string $ciphertext): string;
public function isEncrypted(string $value): bool;

// Provider Model
public function setApiKey(string $apiKey): void;      // Encrypts before storage
public function getApiKey(): string;                   // Returns raw (encrypted)
public function getDecryptedApiKey(): string;          // Returns decrypted
public function toAdapterConfig(): array;              // Uses decrypted key

Consequences 

Positive 

Encryption at rest: Database dumps no longer expose plaintext credentials.

Transparent operation: Encryption/decryption handled automatically.

No external dependencies: Uses PHP's built-in sodium extension.

Authenticated encryption: Tampering is detected (Poly1305 MAC).

Backwards compatible: Unencrypted values work without migration.

Industry standard: XSalsa20-Poly1305 is used by NaCl/libsodium.

Negative 

Single point of failure: If encryptionKey is compromised, all keys are exposed.

No key rotation: Changing encryptionKey requires re-encryption of all keys.

In-memory exposure: Decrypted keys exist briefly in memory.

Performance overhead: Encryption/decryption on every save/load (minimal).

Net Score: +4 (Strong positive)

Alternatives considered 

  1. TYPO3 Core password type with custom transformer. Rejected: TCA doesn't support custom encryption transformers for password fields.
  2. Defuse PHP Encryption library. Rejected: Adds external dependency. Sodium is built into PHP 7.2+.
  3. OpenSSL AES-256-GCM. Rejected: Sodium's API is simpler and less prone to misuse.
  4. Database-level encryption (TDE). Rejected: Requires database configuration, not portable across environments.
  5. External vault (HashiCorp, AWS KMS). Deferred: Planned for nr-vault extension. Current solution works standalone.

References 

ADR-013: Three-level configuration architecture (Provider-Model-Configuration) 

Status

Accepted

Date

2024-12-27

Authors

Netresearch DTT GmbH

Context 

The nr_llm extension needs to manage LLM configurations for various use cases (chat, translation, embeddings, etc.). Initially, configurations were stored in a single table mixing connection settings, model parameters, and use-case-specific prompts.

Problem statement 

A single-table approach creates several issues:

  1. API Key Duplication: Same API key repeated across multiple configurations.
  2. Model Redundancy: Model capabilities and pricing duplicated.
  3. Inflexible Connections: Cannot have multiple API keys for the same provider (prod/dev).
  4. Mixed Concerns: Connection details, model specs, and prompts intermingled.
  5. Maintenance Burden: Changing an API key requires updating multiple records.

Real-world scenarios not supported 

Scenario Single-Table Problem
Separate prod/dev OpenAI accounts Must duplicate all configurations
Self-hosted Ollama + cloud fallback Cannot model multiple endpoints
Cost tracking per API key No clear key-to-usage mapping
Model catalog with shared pricing Model specs repeated everywhere
Team-specific API keys No multi-tenancy support

Decision 

Implement a three-level hierarchical architecture separating concerns:

┌─────────────────────────────────────────────────────────────────────────┐
│ CONFIGURATION (Use-Case Specific)                                        │
│ "blog-summarizer", "product-description", "support-translator"          │
│                                                                          │
│ Fields: system_prompt, temperature, max_tokens, top_p, use_case_type    │
│ References: model_uid → Model                                            │
└──────────────────────────────────┬──────────────────────────────────────┘
                                   │ N:1
┌──────────────────────────────────▼──────────────────────────────────────┐
│ MODEL (Available Models)                                                 │
│ "gpt-5", "claude-sonnet-4-5", "llama-70b", "text-embedding-3-large"     │
│                                                                          │
│ Fields: model_id, context_length, capabilities, cost_input, cost_output │
│ References: provider_uid → Provider                                      │
└──────────────────────────────────┬──────────────────────────────────────┘
                                   │ N:1
┌──────────────────────────────────▼──────────────────────────────────────┐
│ PROVIDER (API Connections)                                               │
│ "openai-prod", "openai-dev", "local-ollama", "azure-openai-eu"          │
│                                                                          │
│ Fields: endpoint_url, api_key (encrypted), adapter_type, timeout        │
└─────────────────────────────────────────────────────────────────────────┘

Level 1: Provider (Connection Layer) 

Represents a specific API connection with credentials.

tx_nrllm_provider
├── identifier        -- Unique slug: "openai-prod", "ollama-local"
├── name              -- Display name: "OpenAI Production"
├── adapter_type      -- Protocol: openai, anthropic, gemini, ollama...
├── endpoint_url      -- Custom endpoint (empty = default)
├── api_key           -- Encrypted API key
├── organization_id   -- Optional org ID (OpenAI)
├── timeout           -- Request timeout in seconds
├── max_retries       -- Retry count on failure
└── options           -- JSON: additional adapter options

Key Design Points:

  • One provider = one API key = one billing relationship.
  • Same adapter type can have multiple providers (prod/dev accounts).
  • Adapter type determines the protocol/client class used.

Level 2: Model (Capability Layer) 

Represents a specific model available through a provider.

tx_nrllm_model
├── identifier        -- Unique slug: "gpt-5", "claude-sonnet"
├── name              -- Display name: "GPT-5 (128K)"
├── provider_uid      -- FK → Provider
├── model_id          -- API model identifier: "gpt-5"
├── context_length    -- Token limit: 128000
├── max_output_tokens -- Output limit: 16384
├── capabilities      -- CSV: chat,vision,streaming,tools
├── cost_input        -- Cents per 1M input tokens
├── cost_output       -- Cents per 1M output tokens
└── is_default        -- Default model for this provider

Key Design Points:

  • Models belong to exactly one provider.
  • Capabilities define what the model can do.
  • Pricing stored as integers (cents/1M tokens) to avoid float issues.
  • Same logical model can exist multiple times (different providers).
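
Storing pricing as integer cents per 1M tokens keeps the cost math exact until the final division. A hypothetical per-request calculation (the rates below are example values, not real provider pricing):

```php
// Hypothetical cost calculation; cost_input/cost_output values are
// illustrative, not actual provider pricing.
$costInput  = 250;   // cents per 1M input tokens
$costOutput = 1000;  // cents per 1M output tokens

$inputTokens  = 12000;
$outputTokens = 800;

// All-integer numerator; round up so fractional cents are not undercounted.
$cents = (int) ceil(
    ($inputTokens * $costInput + $outputTokens * $costOutput) / 1000000
);

echo $cents; // 4
```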

Level 3: Configuration (Use-Case Layer) 

Represents a specific use case with model and prompt settings.

tx_nrllm_configuration
├── identifier        -- Unique slug: "blog-summarizer"
├── name              -- Display name: "Blog Post Summarizer"
├── model_uid         -- FK → Model
├── system_prompt     -- System message for the model
├── temperature       -- Creativity: 0.0 - 2.0
├── max_tokens        -- Response length limit
├── top_p             -- Nucleus sampling
├── presence_penalty  -- Topic diversity
├── frequency_penalty -- Word repetition penalty
└── use_case_type     -- chat, completion, embedding, translation

Key Design Points:

  • Configurations reference models, not providers directly.
  • All LLM parameters are tunable per use case.
  • Same model can be used by multiple configurations.

Relationships 

┌────────────┐       ┌─────────┐       ┌───────────────┐
│ Provider   │ 1───N │ Model   │ 1───N │ Configuration │
└────────────┘       └─────────┘       └───────────────┘
     │                    │                    │
     │ api_key            │ model_id           │ system_prompt
     │ endpoint           │ capabilities       │ temperature
     │ adapter_type       │ pricing            │ max_tokens
     └────────────────────┴────────────────────┘
Entity Responsibility Changes When
Provider API authentication & connection API key rotates, endpoint changes
Model Capabilities & pricing New model version, pricing update
Configuration Use-case behavior Prompt tuning, parameter adjustment

Implementation 

Database tables 

Example: Database schema
-- Level 1: Providers (connections)
CREATE TABLE tx_nrllm_provider (
    uid int(11) PRIMARY KEY,
    identifier varchar(100) UNIQUE,
    adapter_type varchar(50),
    endpoint_url varchar(500),
    api_key varchar(500),  -- Encrypted
    ...
);

-- Level 2: Models (capabilities)
CREATE TABLE tx_nrllm_model (
    uid int(11) PRIMARY KEY,
    identifier varchar(100) UNIQUE,
    provider_uid int(11) REFERENCES tx_nrllm_provider(uid),
    model_id varchar(150),
    capabilities text,  -- CSV: chat,vision,tools
    ...
);

-- Level 3: Configurations (use cases)
CREATE TABLE tx_nrllm_configuration (
    uid int(11) PRIMARY KEY,
    identifier varchar(100) UNIQUE,
    model_uid int(11) REFERENCES tx_nrllm_model(uid),
    system_prompt text,
    temperature decimal(3,2),
    ...
);

Domain models 

Example: Domain model classes
// Provider → owns credentials
class Provider extends AbstractEntity {
    public function getDecryptedApiKey(): string;
    public function toAdapterConfig(): array;
}

// Model → belongs to Provider
class Model extends AbstractEntity {
    protected ?Provider $provider = null;
    protected int $providerUid = 0;

    public function hasCapability(string $cap): bool;
    public function getProvider(): ?Provider;
}

// Configuration → belongs to Model
class LlmConfiguration extends AbstractEntity {
    protected ?Model $model = null;
    protected int $modelUid = 0;

    public function getModel(): ?Model;
    public function getProvider(): ?Provider; // Convenience
}
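The hasCapability() signature shown above could be backed by a simple lookup against the CSV capabilities field. A minimal sketch, not the extension's actual implementation:

```php
// Minimal sketch of a CSV-backed capability check (illustrative only;
// the real Model class lives in the extension's Domain/Model namespace).
class ModelSketch
{
    public function __construct(private string $capabilities) {}

    public function hasCapability(string $cap): bool
    {
        $caps = array_map('trim', explode(',', $this->capabilities));
        return in_array($cap, $caps, true);
    }
}

$model = new ModelSketch('chat,vision,streaming,tools');
var_dump($model->hasCapability('vision'));    // bool(true)
var_dump($model->hasCapability('embedding')); // bool(false)
```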

Service layer access 

Example: Using configuration from service layer
// Getting a ready-to-use provider from a configuration
$config = $configurationRepository->findByIdentifier('blog-summarizer');
$model = $config->getModel();
$provider = $model->getProvider();

// Provider adapter handles the actual API call
$adapter = $providerAdapterRegistry->getAdapter($provider);
$response = $adapter->chat($messages, $config->toOptions());

Backend module structure 

Admin Tools → LLM
├── Dashboard      (overview, stats)
├── Providers      (CRUD, connection test)
├── Models         (CRUD, fetch from API)
└── Configurations (CRUD, prompt testing)

Consequences 

Positive 

Single Source of Truth: API key stored once per provider.

Flexible Connections: Multiple providers of same type (prod/dev/backup).

Model Catalog: Centralized model specs and pricing.

Clear Separation: Connection vs capability vs use-case concerns.

Easy Key Rotation: Update one provider, all configs inherit.

Cost Tracking: Usage attributable to specific providers.

Multi-Tenancy Ready: Different API keys per team/project.

Negative 

Increased Complexity: Three tables instead of one.

More Joins: Queries must traverse relationships.

Migration Required: Existing data needs transformation.

Learning Curve: Users must understand hierarchy.

Net Score: +5 (Strong positive)

Trade-offs 

Single Table Three-Level
Simple queries Normalized data
Data duplication Referential integrity
Faster reads Smaller storage
Harder maintenance Easier updates

Alternatives considered 

1. Two-Level (Provider → Configuration) 

Rejected: Models would be embedded in configurations, duplicating capabilities/pricing.

2. Four-Level (Provider → Model → Preset → Configuration) 

Rejected: A preset layer adds complexity without clear benefit; temperature and token settings belong with the use case.

3. Single Table with JSON Columns 

Rejected: Loses referential integrity, harder to query, no normalization.

4. Configuration Inheritance 

Rejected: Complex to implement, confusing precedence rules.

Future considerations 

  1. Model Auto-Discovery: Fetch available models from provider APIs.
  2. Cost Aggregation: Track usage and costs per provider/model.
  3. Fallback Chains: Configuration → fallback model if primary fails.
  4. Rate Limiting: Per-provider rate limit tracking.
  5. Health Monitoring: Provider availability status.

References 

Changelog 

All notable changes to the TYPO3 LLM Extension are documented here.

The format follows Keep a Changelog and the project adheres to Semantic Versioning.

Version 0.2.1 (2026-02-28) 

Changed 

  • Require netresearch/nr-vault ^0.4.0 for API key encryption.

Version 0.2.0 (2026-02-28) 

Added 

  • PHP 8.2+ and TYPO3 v13.4+ compatibility.
  • TYPO3 v13.4 ddev install command.
  • Coverage uploads and fuzz/mutation CI workflow.
  • Unit tests for enums, WizardResult DTO, providers, services, and specialized classes.
  • Coverage tests for PromptTemplateService and TranslationService.

Changed 

  • Moved phpunit.xml and phpstan-baseline.neon into Build/ directory.
  • Expanded CI matrix to PHP 8.2-8.5 and TYPO3 v13.4/v14.
  • Replaced TYPO3 v14-only APIs with v13-compatible equivalents.
  • Narrowed testing-framework to ^9.0 for PHPUnit 12 compatibility.
  • Removed dead ProviderRegistry class and orphaned phpstan baseline file.
  • Removed 55 dead translation keys.
  • Harmonized composer script naming to ci:test:php:* convention.
  • Migrated CI to centralized workflows.
  • Added SPDX copyright and license headers.
  • Replaced generic emails with GitHub references.

Fixed 

  • Resolved CI failures for PHP 8.2 and TYPO3 v13 compatibility.
  • Resolved PHPStan failures for dual TYPO3 v13/v14 support.
  • Fixed PHPUnit deprecation warnings.
  • Used CoversNothing for excluded exception and enum test classes.
  • Localized user-facing hardcoded strings in controllers.
  • Disabled functional tests in CI (environment-specific).
  • Fixed direct php-cs-fixer call in ci:test:php:cgl script.

Version 0.1.2 (2026-01-11) 

Fixed 

  • Fixed CI: use correct org secret name for TER token.
  • Simplified TER upload workflow.

Version 0.1.1 (2026-01-11) 

Fixed 

  • Fixed CI: create zip archive for TER upload.

Version 0.1.0 (2026-01-11) 

Initial release of the TYPO3 LLM Extension.

Added 

Core Features

  • Multi-provider support (OpenAI, Anthropic Claude, Google Gemini, Ollama, OpenRouter, Mistral, Groq).
  • Unified API via LlmServiceManager.
  • Provider abstraction layer with capability interfaces.
  • Typed response objects (CompletionResponse, EmbeddingResponse).
  • Three-tier configuration architecture (Providers, Models, Configurations).
  • Encrypted API key storage using sodium_crypto_secretbox.

Feature Services

  • CompletionService: Text completion with format control (JSON, Markdown).
  • EmbeddingService: Vector generation with caching and similarity calculations.
  • VisionService: Image analysis with alt-text, title, description generation.
  • TranslationService: Translation with formality control and glossary support.
  • PromptTemplateService: Centralized prompt management with database-driven templates.

Specialized Services

  • Image generation (DALL-E).
  • Text-to-speech (TTS) and speech transcription (Whisper).
  • DeepL translation integration.

Provider Capabilities

  • Chat completions across all providers.
  • Embeddings (OpenAI, Gemini).
  • Vision/image analysis (all providers).
  • Streaming responses (all providers).
  • Tool/function calling (all providers).

Infrastructure

  • TYPO3 caching framework integration.
  • Backend module for provider management and testing.
  • Prompt template management with versioning and performance tracking.
  • Comprehensive exception hierarchy.
  • Type-safe enums and DTOs for domain constants.

Developer Experience

  • Option objects with factory presets (ChatOptions).
  • Full backwards compatibility with array options.
  • Extensive PHPDoc documentation.
  • Type-safe method signatures.

Security

  • Enterprise readiness security workflows and supply chain controls.
  • SLSA Level 3 provenance, Cosign signatures, and SBOM generation.
  • OpenSSF Scorecard and Best Practices compliance.

Testing

  • Comprehensive unit and integration tests.
  • E2E testing with Playwright.
  • Property-based (fuzz) testing support.

Upgrade Guides 

Upgrading from Pre-Release 

If you used a pre-release version:

  1. Remove old extension

    composer remove netresearch/nr-llm
  2. Clear caches

    vendor/bin/typo3 cache:flush
  3. Install current version

    composer require netresearch/nr-llm:^0.2
  4. Run database migrations

    vendor/bin/typo3 database:updateschema
  5. Update configuration

    Review your TypoScript and extension configuration for any changed keys or deprecated options.

Breaking Changes Policy 

This extension follows semantic versioning:

  • Major versions (x.0.0): May contain breaking changes
  • Minor versions (0.x.0): New features, backwards compatible
  • Patch versions (0.0.x): Bug fixes only

Breaking Changes Documentation 

Each major version will document:

  1. Removed or changed public APIs
  2. Migration steps with code examples
  3. Compatibility layer availability
  4. Deprecation timeline for removed features

Deprecation Policy 

  1. Features are marked deprecated in minor versions
  2. Deprecated features remain functional for one major version
  3. Deprecated features are removed in the next major version
  4. Migration documentation provided before removal
