Configuration
The extension uses a database-driven configuration architecture with three levels: Providers, Models, and Configurations. All management happens in the TYPO3 backend module.
Backend module
Access the LLM management module at Admin Tools > LLM.
The backend module provides four sections:
- Dashboard: Overview of registered providers, models, and configurations with status indicators.
- Providers: Manage API connections with encrypted credentials. Test connections directly from the interface.
- Models: Define available models with their capabilities and pricing. Fetch models from provider APIs.
- Configurations: Create use-case-specific configurations with prompts and parameters.
Provider configuration
Providers represent API connections with credentials. Create providers in Admin Tools > LLM > Providers.
Required fields
identifier
- Type: string
- Required: true
Unique slug for programmatic access (e.g., openai-prod, ollama-local).
name
- Type: string
- Required: true
Display name shown in the backend.
adapter_type
- Type: string
- Required: true
The protocol to use. Available options:
- openai - OpenAI API.
- anthropic - Anthropic Claude API.
- gemini - Google Gemini API.
- ollama - Local Ollama instance.
- openrouter - OpenRouter multi-model API.
- mistral - Mistral AI API.
- groq - Groq inference API.
- azure_openai - Azure OpenAI Service.
- custom - Custom OpenAI-compatible endpoint.
api_key
- Type: string
- Required: true
API key for authentication. Encrypted at rest using sodium_crypto_secretbox. Not required for local providers like Ollama.
Optional fields
endpoint_url
- Type: string
- Default: (adapter default)
Custom API endpoint. Leave empty to use the adapter's default URL.
organization_id
- Type: string
- Default: (empty)
Organization ID for providers that support it (OpenAI, Azure).
timeout
- Type: integer
- Default: 30
Request timeout in seconds.
max_retries
- Type: integer
- Default: 3
Number of retry attempts on failure.
options
- Type: JSON
- Default: {}
JSON object with additional adapter-specific options.
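The accepted keys are adapter-specific and not enumerated in this manual. As a purely hypothetical illustration, a custom adapter might accept extra request headers:

{"headers": {"X-Custom-Header": "value"}}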
Testing provider connections
Use the Test Connection button to verify provider configuration. The test makes an actual HTTP request to the provider's API and returns:
- Connection status (success/failure).
- Available models (if supported by the provider).
- Error details (on failure).
Model configuration
Models represent specific LLM models available through a provider. Create models in Admin Tools > LLM > Models.
Required fields
identifier (model)
- Type: string
- Required: true
Unique slug (e.g., gpt-5, claude-sonnet).
name (model)
- Type: string
- Required: true
Display name (e.g., GPT-5 (128K)).
provider
- Type: reference
- Required: true
Reference to the parent provider.
model_id
- Type: string
- Required: true
The API model identifier. Examples vary by provider:
- OpenAI: gpt-5, gpt-5.2-instant, o4-mini.
- Anthropic: claude-opus-4-5-20251101, claude-sonnet-4-5-20251101.
- Google: gemini-3-pro-preview, gemini-3-flash-preview.
Optional fields
context_length
- Type: integer
- Default: (provider default)
Maximum context window in tokens (e.g., 128000 for GPT-5).
max_output_tokens
- Type: integer
- Default: (model default)
Maximum output tokens (e.g., 16384).
capabilities
- Type: string (CSV)
- Default: chat
Comma-separated list of supported features:
- chat - Chat completion.
- completion - Text completion.
- embeddings - Text-to-vector.
- vision - Image analysis.
- streaming - Real-time streaming.
- tools - Function/tool calling.
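A minimal sketch for gating a feature on this field, assuming a hypothetical getCapabilities() getter that returns the raw CSV string (this manual does not document the entity's getter names):

use TYPO3\CMS\Core\Utility\GeneralUtility;

// getCapabilities() is an assumed getter returning e.g. "chat,vision,streaming"
$capabilities = GeneralUtility::trimExplode(',', $model->getCapabilities(), true);

if (in_array('vision', $capabilities, true)) {
    // Safe to send image content to this model
}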
cost_input
- Type: integer
- Default: 0
Cost per 1M input tokens in cents (for cost tracking).
cost_output
- Type: integer
- Default: 0
Cost per 1M output tokens in cents.
is_default
- Type: boolean
- Default: false
Mark as default model for this provider.
Fetching models from providers
Use the Fetch Models action to automatically retrieve available models from the provider's API. This populates the model list with the provider's current offerings.
LLM configuration
Configurations define specific use cases with model selection and parameters. Create configurations in Admin Tools > LLM > Configurations.
Required fields
identifier (config)
- Type: string
- Required: true
Unique slug for programmatic access (e.g., blog-summarizer).
name (config)
- Type: string
- Required: true
Display name (e.g., Blog Post Summarizer).
model
- Type: reference
- Required: true
Reference to the model to use.
system_prompt
- Type: text
- Required: true
System message that sets the AI's behavior and context.
Optional fields
temperature
- Type: float
- Default: 0.7
Creativity level from 0.0 (deterministic) to 2.0 (creative).
max_tokens (config)
- Type: integer
- Default: (model default)
Maximum response length in tokens.
top_p
- Type: float
- Default: 1.0
Nucleus sampling parameter (0.0 - 1.0).
frequency_penalty
- Type: float
- Default: 0.0
Reduces word repetition (-2.0 to 2.0).
presence_penalty
- Type: float
- Default: 0.0
Encourages topic diversity (-2.0 to 2.0).
use_case_type
- Type: string
- Default: chat
The type of task:
- chat - Conversational interactions.
- completion - Text completion.
- embedding - Vector generation.
- translation - Language translation.
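These parameters are bundled into an options array by $config->toOptions() (see the next section). To override one for a single call, pass an adjusted array to the adapter; note that the exact option keys are an assumption here, mirroring the field names above:

// Hypothetical per-call override; option keys assumed to match the field names
$options = array_merge($config->toOptions(), [
    'temperature' => 0.2, // more deterministic than the stored default
    'max_tokens' => 500,
]);

$response = $adapter->chatCompletion($messages, $options);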
Using configurations
Retrieve configurations programmatically:
use Netresearch\NrLlm\Domain\Repository\LlmConfigurationRepository;
use Netresearch\NrLlm\Provider\ProviderAdapterRegistry;
class MyController
{
    public function __construct(
        private readonly LlmConfigurationRepository $configRepository,
        private readonly ProviderAdapterRegistry $adapterRegistry,
    ) {}

    public function processAction(): void
    {
        // Get configuration by identifier
        $config = $this->configRepository->findByIdentifier('blog-summarizer');

        // Get the model and provider
        $model = $config->getModel();
        $provider = $model->getProvider();

        // Build the chat messages to send
        $messages = [
            ['role' => 'user', 'content' => 'Summarize this blog post: ...'],
        ];

        // Create adapter and make requests
        $adapter = $this->adapterRegistry->createAdapterFromModel($model);
        $response = $adapter->chatCompletion($messages, $config->toOptions());
    }
}
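The returned response object exposes the generated text via its content property; see Output handling below for safe rendering.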
TypoScript settings
Runtime settings can be configured via TypoScript:
Constants
plugin.tx_nrllm {
    settings {
        # Default temperature (0.0-2.0)
        defaultTemperature = 0.7

        # Maximum tokens for responses
        defaultMaxTokens = 1000

        # Cache lifetime in seconds
        cacheLifetime = 3600

        # Enable/disable response caching
        enableCaching = 1

        # Enable streaming by default
        enableStreaming = 0
    }
}
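In an Extbase controller these settings can be read through the configuration manager. A minimal sketch, assuming the extension name NrLlm:

use TYPO3\CMS\Extbase\Configuration\ConfigurationManagerInterface;

// $this->configurationManager is available in Extbase ActionController subclasses
$settings = $this->configurationManager->getConfiguration(
    ConfigurationManagerInterface::CONFIGURATION_TYPE_SETTINGS,
    'NrLlm'
);

$temperature = (float)($settings['defaultTemperature'] ?? 0.7);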
Environment variables
For deployment flexibility, use environment variables:
# TYPO3 encryption key (used for API key encryption)
TYPO3_CONF_VARS__SYS__encryptionKey=your-secure-encryption-key
# Optional: Override provider settings via environment
TYPO3_NR_LLM_DEFAULT_TIMEOUT=60
Security
API key protection
- Encrypted storage: API keys are encrypted using sodium_crypto_secretbox.
- Database security: Ensure database backups are encrypted.
- Backend access: Restrict backend module access to authorized users.
- Key rotation: Changing the TYPO3 encryptionKey requires re-encryption of stored API keys.
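For background, sodium_crypto_secretbox is PHP's built-in authenticated symmetric encryption. The extension's exact key handling is not documented here, but the primitive itself works like this:

// Conceptual illustration of the primitive, not the extension's actual code
$key = sodium_crypto_secretbox_keygen();                   // 32-byte secret key
$nonce = random_bytes(SODIUM_CRYPTO_SECRETBOX_NONCEBYTES); // unique per message

$ciphertext = sodium_crypto_secretbox('sk-my-api-key', $nonce, $key);
$plaintext = sodium_crypto_secretbox_open($ciphertext, $nonce, $key);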
Input sanitization
Always sanitize user input before sending to LLM providers:
// GeneralUtility::removeXSS() was removed in TYPO3 v8; apply filtering suited to your use case.
// A minimal example that strips markup before sending the text to the provider:
$sanitizedInput = strip_tags(trim($userInput));

$response = $adapter->chatCompletion([
    ['role' => 'user', 'content' => $sanitizedInput],
]);
Output handling
Treat LLM responses as untrusted content:
$response = $adapter->chatCompletion($messages);
$safeOutput = htmlspecialchars($response->content, ENT_QUOTES, 'UTF-8');
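Fluid templates escape variables by default, so the explicit htmlspecialchars() call matters mainly when you emit markup outside of Fluid.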
Logging
Enable detailed logging for debugging:
$GLOBALS['TYPO3_CONF_VARS']['LOG']['Netresearch']['NrLlm'] = [
    'writerConfiguration' => [
        \Psr\Log\LogLevel::DEBUG => [
            \TYPO3\CMS\Core\Log\Writer\FileWriter::class => [
                'logFileInfix' => 'nr_llm',
            ],
        ],
    ],
];
Log file location: var/log/.
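To log from your own code into the same channel, request a logger whose component name starts with Netresearch.NrLlm; it inherits the writer configuration above:

use TYPO3\CMS\Core\Log\LogManager;
use TYPO3\CMS\Core\Utility\GeneralUtility;

$logger = GeneralUtility::makeInstance(LogManager::class)
    ->getLogger('Netresearch.NrLlm.MyComponent');

$logger->debug('LLM request sent', ['model' => 'gpt-5']);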
Caching
The extension uses TYPO3's caching framework:
- Cache identifier: nrllm_responses.
- Default TTL: 3600 seconds (1 hour).
- Embeddings TTL: 86400 seconds (24 hours).
Clear cache via CLI:
vendor/bin/typo3 cache:flush --group=nrllm
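The same group can also be flushed programmatically via the caching framework:

use TYPO3\CMS\Core\Cache\CacheManager;
use TYPO3\CMS\Core\Utility\GeneralUtility;

// Flush all caches registered in the "nrllm" group
GeneralUtility::makeInstance(CacheManager::class)->flushCachesInGroup('nrllm');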