AI Universe 

Extension key

ns_aiuniverse

Package name

nitsan/ns-aiuniverse

Language

en

Author

Team T3Planet

Company

T3Planet

License

GPL-2.0-or-later


AI Universe is the shared AI foundation layer for T3Planet's AI Extensions for TYPO3. It centralizes AI provider communication, model selection, request handling, statistics preparation, and utility functions so other extensions can build AI features faster and with consistent behavior.

This documentation is written for:

  • Developers integrating T3Planet's AI Extensions into their TYPO3 installations.
  • Editors and administrators configuring providers and operations.
  • Non-technical stakeholders who need a clear product overview.

Start here 

Understand what this extension does and where it fits.

Installation 

Install and activate the extension safely.

Configuration 

Configure providers, API keys, models, and defaults.

Usage 

Workflows for editors, admins, and non-technical users.

Developer guide 

Service usage, request pipeline, and integration examples.

Troubleshooting 

Fix common setup, connectivity, and provider issues.


Table of contents

Introduction 

AI Universe is a base extension for AI operations in T3Planet's AI Extensions. It is designed as shared infrastructure, not as a standalone frontend plugin.

What it is 

  • A reusable service layer for AI provider communication.
  • A central configuration point for API keys and model defaults.
  • A utility and statistics layer for T3Planet's AI-enabled TYPO3 extensions.

What it is not 

  • It does not register a frontend plugin by itself.
  • It does not ship page templates or TypoScript frontend rendering.
  • It is not an end-user chatbot UI product out of the box.

Who it is for 

Developers
Use services like AiRequestService, BaseClient, AiStatisticsService, and utilities to build AI features.
Editors / admins
Configure providers, API keys, default models, and basic auth in Extension Configuration.
Non-technical stakeholders
Get a single foundation layer that reduces duplicated AI integration work across extensions.

Key capabilities 

  • Multi-provider request preparation and response parsing.
  • Embedding request/response support for selected providers.
  • OpenAI usage statistics retrieval, transformation, and cache-backed chart data.
  • Basic authentication helper for protected URL fetches.
  • Centralized utility helpers for extension configuration access.

Supported providers 

The codebase includes provider handling for:

  • OpenAI
  • Claude / Anthropic
  • Gemini
  • Azure OpenAI
  • Mistral
  • DeepSeek
  • xAI
  • Custom LLM endpoint

Provider behavior and available options depend on configured API keys and model settings in ext_conf_template.txt.

Installation 

Requirements 

  • TYPO3: 11.0 up to 13.4
  • PHP: 7.4 up to 8.4
  • Composer-based TYPO3 setup recommended

Install with Composer 

Install extension
composer require nitsan/ns-aiuniverse

Activate extension 

  1. Open TYPO3 backend.
  2. Go to Admin Tools > Extensions.
  3. Activate EXT:ns_aiuniverse.

Post-install checks 

After activation, verify:

  • Extension is listed as active.
  • Extension configuration is available in Admin Tools > Settings > Extension Configuration.
  • Cache configuration nsaiuniverse_statistics is present (registered by ext_localconf.php).

First configuration 

At minimum, set:

  • defaultModel (provider family)
  • Provider API key (for example openai_api_key)
  • Provider default model (for example openai_model)

For details, continue with Configuration.

Configuration 

This extension is configured through TYPO3 Extension Configuration values defined in ext_conf_template.txt.

Configuration overview 

Main groups include:

  • AI engine defaults
  • Provider-specific API keys and models
  • Embedding model settings
  • Translation provider defaults
  • Basic authentication options

Minimum production configuration 

For a working setup:

  1. Set defaultModel (for example openai, gemini, claude, mistral).
  2. Add API key for the selected provider.
  3. Set provider default model.
  4. Keep token and temperature values aligned with your usage/cost policy.
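These values are normally set in the backend UI. As a sketch, they can also be overridden per environment in config/system/additional.php; the key names come from ext_conf_template.txt, while the specific values below are placeholders only:

```php
<?php
// config/system/additional.php -- environment-specific overrides for the
// extension configuration. Key names follow ext_conf_template.txt; the
// values shown are placeholders, not recommendations.
$GLOBALS['TYPO3_CONF_VARS']['EXTENSIONS']['ns_aiuniverse'] = array_replace(
    $GLOBALS['TYPO3_CONF_VARS']['EXTENSIONS']['ns_aiuniverse'] ?? [],
    [
        'defaultModel' => 'openai',
        'openai_api_key' => getenv('OPENAI_API_KEY') ?: '',
        'openai_model' => 'gpt-4o',
    ]
);
```

Reading secrets from environment variables keeps API keys out of version control.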

Provider keys 

Commonly used keys:

OpenAI
openai_api_key, openai_model, openai_temperature, openai_max_tokens, openai_embedding_model, openai_admin_api_key
Anthropic / Claude
anthropic_api_key, anthropic_model, anthropic_temperature, anthropic_max_tokens
Gemini
gemini_api_key, gemini_model, gemini_embedding_model
Azure
azure_api_key, azure_api_endpoint, azure_api_model, azure_api_version
Mistral
mistral_api_key, mistral_model, mistral_embedding_model, mistral_temperature, mistral_max_tokens
DeepSeek
deepseek_api_key, deepseek_model, deepseek_temperature, deepseek_response_format
xAI
xai_api_key, xai_model, xai_temperature, xai_response_format
Custom LLM
enable_custom_llm_model, custom_llm_api_url, custom_llm_api_key, custom_llm_model_name, custom_llm_temperature

Basic authentication 

Use these keys when your source URLs are protected:

  • basicAuthEnabled
  • basicAuthUsername
  • basicAuthPassword

The helper HttpAuthUtility retries 401/403 requests with basic auth when this is enabled and fully configured.
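For context, the header involved is a standard Basic Auth header: username and password joined by a colon and Base64-encoded. A minimal, framework-free sketch of how such a header value is formed (HttpAuthUtility builds the equivalent internally):

```php
<?php
// Minimal illustration of how an HTTP Basic Auth header value is formed.
// HttpAuthUtility adds the equivalent header when basicAuthEnabled is on
// and both credentials are configured.
function buildBasicAuthHeader(string $username, string $password): string
{
    return 'Basic ' . base64_encode($username . ':' . $password);
}

echo buildBasicAuthHeader('user', 'pass'); // Basic dXNlcjpwYXNz
```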

Security recommendations 

  • Restrict backend access to trusted administrators.
  • Rotate provider API keys regularly.
  • Avoid sharing backend screenshots that expose secrets.
  • Use dedicated provider keys per environment when possible.

Usage 

This guide focuses on practical operation for editors, administrators, and non-technical stakeholders.

For administrators 

Daily responsibilities:

  • Keep provider API keys valid.
  • Maintain default model settings.
  • Monitor OpenAI usage trends.
  • Keep credentials and access permissions under control.

Admin checklist 

  1. Confirm selected defaultModel.
  2. Confirm provider key is set (for selected provider).
  3. Test extension-dependent AI features in your connected modules.
  4. Review usage statistics regularly (cost/rate-control).

For editors 

Editors usually do not configure providers directly. They interact with features built by other extensions that depend on AI Universe.

When AI features fail in a backend module:

  • Retry once.
  • Capture exact error text.
  • Inform administrator with module/page context.

For non-technical stakeholders 

AI Universe helps organizations by:

  • Reducing duplicated AI integration work across extensions.
  • Centralizing provider and model governance.
  • Improving consistency of AI capabilities across teams.

What to expect operationally 

  • Some providers have rate limits and temporary outages.
  • Model behavior can differ between providers and versions.
  • Statistics data may be cached and not always real-time.

Known boundaries 

  • No standalone frontend plugin is provided by this extension.
  • This package is a service layer; UI features come from dependent extensions.

Developer guide 

Technical integration guide for extension developers.

Core services 

AiRequestService
Sends provider requests, handles response parsing, and logs request outcomes.
BaseClient
Builds provider-specific request payloads and extracts provider-specific responses.
AiStatisticsService
Fetches OpenAI usage data, transforms result sets, and prepares chart-ready data.
AiEngineConfiguration
Exposes configured engines and filters engines based on available API keys.
HttpAuthUtility
Adds basic auth headers and provides protected URL fetch utility behavior.
AiUniverseUtilityHelper
Utility methods for extension configuration, TYPO3 version data, and page/language helpers.

Dependency injection 

Services are autowired through Configuration/Services.yaml.

Service registration overview
services:
  _defaults:
    autowire: true
    autoconfigure: true
    public: false

  NITSAN\NsAiUniverse\:
    resource: '../Classes/*'

Request flow 

  1. Consumer calls AiRequestService::sendRequest().
  2. Service merges defaults and incoming options.
  3. BaseClient::getRequestData() builds provider-specific endpoint + body.
  4. TYPO3 RequestFactory sends request.
  5. BaseClient::getResponseData() extracts generated text.
  6. AiLogService stores status log entry.

Example integration 

Example use in custom service
use NITSAN\NsAiUniverse\Service\AiRequestService;
use TYPO3\CMS\Core\Utility\GeneralUtility;

$ai = GeneralUtility::makeInstance(AiRequestService::class);

$text = $ai->sendRequest(
    'openai',                                    // $modelType: provider family
    [['role' => 'user', 'content' => 'Summarize this page in 3 bullets.']],
    'gpt-4o',                                    // $aiSelectedModel
    ['temperature' => 0.3, 'max_tokens' => 300], // $options
    true,                                        // $logRequest: write sys_log entry
    'my_extension',                              // $module: caller identifier
    'summary'                                    // $scope
);

Embeddings 

Use BaseClient::getEmbeddingRequestData() and BaseClient::parseEmbeddingResponse() for embedding workflows.

Supported embedding request handlers in code:

  • OpenAI
  • Gemini
  • Mistral
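A rough usage sketch for an embedding round trip follows. The argument lists of both methods are assumptions here; check the BaseClient source for the actual signatures:

```php
<?php
// Sketch only: the argument lists for getEmbeddingRequestData() and
// parseEmbeddingResponse() are assumptions -- consult BaseClient directly.
use NITSAN\NsAiUniverse\Client\BaseClient;
use TYPO3\CMS\Core\Utility\GeneralUtility;

$client = GeneralUtility::makeInstance(BaseClient::class);

// 1. Build the provider-specific request payload ('openai' assumed here).
$payload = $client->getEmbeddingRequestData('openai', 'Text to embed');

// 2. Send $payload with TYPO3's RequestFactory (omitted here), then
//    extract the vector from the raw provider response body.
$vector = $client->parseEmbeddingResponse('openai', $rawResponseBody);
```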

Statistics and caching 

AiStatisticsService:

  • fetches OpenAI usage API data
  • paginates if needed
  • transforms data via AiUniverseChartHelper
  • stores processed results in nsaiuniverse_statistics cache
  • default cache TTL is 24 hours for statistics snapshots
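Using the signature documented in the API reference, a consumer can fetch chart-ready data as sketched below; the interpretation of the $date and $dateScope defaults is an assumption:

```php
<?php
use NITSAN\NsAiUniverse\Service\AiStatisticsService;
use TYPO3\CMS\Core\Utility\GeneralUtility;

$statistics = GeneralUtility::makeInstance(AiStatisticsService::class);

// Returns cached data while a snapshot is fresh (default TTL: 24 hours);
// pass true as the third argument to bypass the cache and re-fetch.
$chartData = $statistics->getOpenAiStatistics('', 0, false);
```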

Logging 

Request outcomes are written to sys_log via AiLogService. Use this for operational tracing and debugging.

Implementation notes 

  • Provider capabilities and models evolve quickly; keep defaults reviewed.
  • Error handling should be explicit around network failures and provider API errors.
  • For security-sensitive environments, review how extension configuration is managed.

Architecture 

Overview 

AI Universe is a shared foundation layer:

Consuming Extension Code
        |
        v
  AiRequestService
        |
        v
     BaseClient
        |
        v
   Provider APIs

Parallel support:

AiStatisticsService -> OpenAI Usage API -> Processed chart data cache
HttpAuthUtility    -> Protected URL fetching with optional Basic Auth

Main components 

  • Request orchestration: AiRequestService
  • Provider adapters and payload composition: BaseClient
  • Statistics processing: AiStatisticsService
  • Engine configuration filtering: AiEngineConfiguration
  • Utility and environment helpers: AiUniverseUtilityHelper
  • HTTP auth helper: HttpAuthUtility

Configuration model 

Runtime behavior is mostly driven by extension configuration keys from ext_conf_template.txt.

This includes:

  • provider keys and models
  • default engine selection
  • token/temperature values
  • basic auth settings

Caching 

The extension registers cache nsaiuniverse_statistics in ext_localconf.php.

Statistics service stores processed data in this cache to reduce repeated usage API calls.

Constraints 

  • No native frontend plugin and no Fluid frontend output in this package.
  • Primary role is reusable service infrastructure.

API reference 

Reference for major public classes used by dependent extensions.

AiRequestService 

Namespace:
NITSAN\NsAiUniverse\Service\AiRequestService

Key method:

Main request API
public function sendRequest(
    string $modelType,
    array $messages,
    string $aiSelectedModel = '',
    array $options = [],
    bool $logRequest = true,
    string $module = '',
    string $scope = ''
): string

BaseClient 

Namespace:
NITSAN\NsAiUniverse\Client\BaseClient

Important methods:

  • getRequestData()
  • getResponseData()
  • getStreamRequestData()
  • getStreamChunkText()
  • getEmbeddingRequestData()
  • parseEmbeddingResponse()
  • getOpenAiUsageData()

AiStatisticsService 

Namespace:
NITSAN\NsAiUniverse\Service\AiStatisticsService

Main method:

  • getOpenAiStatistics(string $date = '', int $dateScope = 0, bool $forceRefresh = false): array

AiEngineConfiguration 

Namespace:
NITSAN\NsAiUniverse\Configuration\AiEngineConfiguration

Main methods:

  • getTextGenerationAIEngines(bool $ignoreConfig = false): array
  • getAllAIEngines(bool $ignoreConfig = false): array
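Based on the signatures above, a consuming module could, for example, populate a provider select box from the filtered engine list (instantiation via makeInstance is an assumption; the class may equally be constructor-injected):

```php
<?php
use NITSAN\NsAiUniverse\Configuration\AiEngineConfiguration;
use TYPO3\CMS\Core\Utility\GeneralUtility;

$engineConfig = GeneralUtility::makeInstance(AiEngineConfiguration::class);

// Only engines whose API keys are configured; pass true to list every
// engine regardless of configuration.
$engines = $engineConfig->getTextGenerationAIEngines(false);
```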

HttpAuthUtility 

Namespace:
NITSAN\NsAiUniverse\Utility\HttpAuthUtility

Main methods:

  • fetchContentFromUrl(string $url): string
  • addAuthHeader(ServerRequestInterface $request): ServerRequestInterface
  • isBasicAuthEnabled(): bool

AiUniverseUtilityHelper 

Namespace:
NITSAN\NsAiUniverse\Utility\AiUniverseUtilityHelper

Main methods:

  • getExtensionConf(string $extensionKey = 'ns_aiuniverse'): array
  • setExtensionConf(array $value, string $extensionKey = 'ns_aiuniverse'): void
  • isApiKeySet(string $extensionKey = 'ns_aiuniverse', string $apiKeyName = 'openai_api_key'): bool
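A typical guard in a dependent extension, based on the signatures above (instance access via makeInstance is an assumption):

```php
<?php
use NITSAN\NsAiUniverse\Utility\AiUniverseUtilityHelper;
use TYPO3\CMS\Core\Utility\GeneralUtility;

$helper = GeneralUtility::makeInstance(AiUniverseUtilityHelper::class);

// Both arguments shown are the documented defaults of isApiKeySet().
if ($helper->isApiKeySet('ns_aiuniverse', 'openai_api_key')) {
    // Safe to offer OpenAI-backed features here.
}
```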

Troubleshooting 

Common issues 

Provider request fails 

Checklist:

  • Verify selected provider key (for example openai_api_key) is set.
  • Verify selected model value is valid for that provider.
  • Verify endpoint URL for custom/azure setups.
  • Review sys_log entries written by AiLogService.

HTTP 401 / 403 when fetching protected URL 

Checklist:

  • Enable basicAuthEnabled.
  • Set both basicAuthUsername and basicAuthPassword.
  • Re-test URL using HttpAuthUtility.

No statistics shown 

Checklist:

  • Ensure OpenAI key is configured (admin key preferred for org usage endpoint).
  • Check if API rate limit was hit.
  • Force refresh in consumer module (if available) or clear cache.
  • Confirm cache nsaiuniverse_statistics is available.

Unexpected model behavior 

  • Confirm active default model key.
  • Confirm per-provider model key.
  • Validate option values (temperature/tokens) are in expected range.

Operational guidance 

  • Use separate API keys per environment where possible.
  • Rotate keys after staff/vendor access changes.
  • Monitor usage trends and adjust limits before cost spikes.

Support information 

When escalating an issue, provide:

  • TYPO3 version
  • extension version
  • provider name and model
  • timestamp and scope/module
  • sanitized error message from logs

Glossary 

AI provider
External service that processes prompts and returns responses (for example OpenAI, Gemini, Anthropic, Mistral).
Model
Specific AI model identifier within a provider.
Embedding
Numeric vector representation of text used for similarity and semantic tasks.
Request orchestration
The process of building payloads, sending requests, and parsing responses.
Basic Auth
HTTP authentication using username and password, encoded as Authorization: Basic ....
Cache
Local storage used to reduce repeated API calls and improve response times.
Service layer
Internal reusable classes used by other extensions rather than direct frontend output.

1.0.0 - 20 March 2026 

Here is the list of features and updates introduced in the initial release:

20-03-2026 [FEATURE] Added core service layer for AI provider request and response handling.
20-03-2026 [FEATURE] Added multi-provider support paths for OpenAI, Gemini, Azure, Claude, DeepSeek, xAI, Mistral, and custom LLM.
20-03-2026 [FEATURE] Added centralized request orchestration via AiRequestService.
20-03-2026 [FEATURE] Added OpenAI usage statistics processing with cache-backed chart data generation.
20-03-2026 [FEATURE] Added AI engine filtering based on configured provider API keys.
20-03-2026 [FEATURE] Added HTTP Basic Auth utility for protected URL retrieval.
20-03-2026 [RELEASE] Released v1.0.0 stable version.