All frontends must implement the API defined in the interface TYPO3\CMS\Core\Cache\Frontend\FrontendInterface.
All operations on a specific cache must be done with these methods. The frontend object of a cache is the main object
any cache manipulation is done with; the assigned backend object should usually not be used directly.
|getIdentifier||Returns the cache identifier.|
|getBackend||Returns the backend instance of this cache. It is seldom needed in usual code.|
|set||Sets/overwrites an entry in the cache.|
|get||Returns the cache entry for the given identifier.|
|has||Checks for existence of a cache entry. Do not use this prior to get(): get() itself returns false if an entry does not exist, so calling has() first only adds an unnecessary lookup.|
|remove||Removes the entry for the given identifier from the cache.|
|flushByTag||Flushes all cache entries which are tagged with the given tag.|
|collectGarbage||Calls the garbage collection method of the backend. This is important for backends which are unable to do this internally (like the DB backend).|
|isValidEntryIdentifier||Checks if a given identifier is valid.|
|isValidTag||Checks if a given tag is valid.|
|requireOnce||PhpFrontend only Requires a cached PHP file directly.|
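The methods above can be sketched in a short usage example. This is a minimal sketch, not code from the framework documentation; the cache identifier "myext_mycache" and the function expensiveCalculation() are assumptions for illustration.

```php
<?php
// Fetch a registered cache via the cache manager and use its frontend API.
$cache = \TYPO3\CMS\Core\Utility\GeneralUtility::makeInstance(
    \TYPO3\CMS\Core\Cache\CacheManager::class
)->getCache('myext_mycache');

$identifier = 'some-entry';
// get() returns false if no entry exists, so no prior has() call is needed
if (($value = $cache->get($identifier)) === false) {
    $value = expensiveCalculation(); // placeholder for the real work
    // Store the entry with two tags and a lifetime of one hour
    $cache->set($identifier, $value, ['tag-a', 'tag-b'], 3600);
}

// Later: drop all entries tagged "tag-a"
$cache->flushByTag('tag-a');
```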
Currently, three different frontends are implemented. The main difference is the data types which can be stored using a specific frontend.
The string frontend accepts strings as data to be cached.
Strings, arrays and objects are accepted by this frontend. Data is serialized before it is passed to the backend.
Since version 4.5, the igbinary serializer is used transparently (if available in the system), which speeds up both serialization and unserialization while also reducing data size.
The variable frontend is the most frequently used frontend and handles the widest range of data types. While it can also handle string data, the string frontend should be used if the cache needs to store strings, if only to avoid the additional serialization done by the variable frontend.
This is a special frontend to cache PHP files. It extends the string frontend
with the method requireOnce(), which allows PHP files to be require_once()d
if a cache entry exists. This can be used by extensions to cache and speed up the loading
of calculated PHP code and comes in handy if a lot of reflection and
dynamic PHP class construction is done.
A backend to be used in combination with the PHP frontend must implement the interface
TYPO3\CMS\Core\Cache\Backend\PhpCapableBackendInterface. Currently the file backend and
the simple file backend fulfill this requirement.
The PHP frontend can only be used to cache PHP files. It does not work with strings, arrays or objects. It is not intended as a page content cache.
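A short sketch of how the PHP frontend can be used to cache generated code. The cache name "myext_code" is an assumption; it is presumed to be configured with PhpFrontend and a PHP-capable backend such as the simple file backend.

```php
<?php
// Fetch a code cache configured with the PHP frontend.
$codeCache = \TYPO3\CMS\Core\Utility\GeneralUtility::makeInstance(
    \TYPO3\CMS\Core\Cache\CacheManager::class
)->getCache('myext_code');

$entryIdentifier = 'generated-class';
if (!$codeCache->has($entryIdentifier)) {
    // Pass PHP source code as a string, without the opening <?php tag;
    // the frontend wraps it into a valid PHP file on disk.
    $codeCache->set($entryIdentifier, 'class GeneratedFoo {}');
}
// Includes the cached file via require_once, making the class available
$codeCache->requireOnce($entryIdentifier);
```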
A variety of storage backends exists. They have different characteristics and can be used for different caching needs. The best backend depends on a given server setup and hardware, as well as cache type and usage. A backend should be chosen wisely, as a wrong decision could end up actually slowing down a TYPO3 installation.
All backends must implement at least the interface TYPO3\CMS\Core\Cache\Backend\BackendInterface.
All operations on a specific cache must be done with these methods. There are several further interfaces that can be
implemented by backends to declare additional capabilities. Usually, extension code should not handle cache backend operations
directly, but should use the frontend object instead.
|setCache||Sets a reference to the frontend which uses this backend. This method is mostly used internally.|
|set||Save data in the cache.|
|get||Load data from the cache.|
|has||Checks if a cache entry with the specified identifier exists.|
|remove||Remove a cache entry with the specified identifier.|
|flush||Remove all cache entries.|
|collectGarbage||Does garbage collection.|
|flushByTag||TaggableBackendInterface only Removes all cache entries which are tagged by the specified tag.|
|findIdentifiersByTag||TaggableBackendInterface only Finds and returns all cache entry identifiers which are tagged by the specified tag.|
|requireOnce||PhpCapableBackendInterface only Loads PHP code from the cache and includes it via require_once right away.|
|freeze||FreezableBackendInterface only Freezes this cache backend.|
|isFrozen||FreezableBackendInterface only Tells if this backend is frozen.|
|defaultLifetime||Default lifetime in seconds of a cache entry if it is not specified for a specific entry on set()||No||integer||3600|
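A cache together with its options is registered in the cache configuration array. The following is a minimal sketch; the cache name "myext_mycache" is an assumption, and the frontend/backend class names are the core defaults.

```php
<?php
// Sketch of a cache registration (e.g. in ext_localconf.php) that
// overrides the default lifetime for all entries set without one.
$GLOBALS['TYPO3_CONF_VARS']['SYS']['caching']['cacheConfigurations']['myext_mycache'] = [
    'frontend' => \TYPO3\CMS\Core\Cache\Frontend\VariableFrontend::class,
    'backend'  => \TYPO3\CMS\Core\Cache\Backend\Typo3DatabaseBackend::class,
    'options'  => [
        // Entries without an explicit lifetime expire after two hours
        'defaultLifetime' => 7200,
    ],
];
```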
This is the main backend suitable for most storage needs. It does not require additional server daemons nor server configuration.
The database backend does not automatically perform garbage collection. Instead the Scheduler garbage collection task should be used.
It stores data in the configured database (usually MySQL) and can handle large amounts of data with reasonable performance. Data and tags are stored in two different tables, every cache needs its own set of tables. In terms of performance the database backend is already pretty well optimized and should be used as default backend if in doubt. This backend is the default backend if no backend is specifically set in the configuration.
The core takes care of creating and updating required database tables "on the fly".
However, caching framework tables which are not needed anymore are not deleted automatically. That is why the database analyzer in the install tool will propose renaming/deleting caching framework tables after you have changed the caching backend to a non-database one.
For caches with a lot of read and write operations, it is important to tune the MySQL setup.
The most important setting is
innodb_buffer_pool_size. A generic goal is to give MySQL
as much RAM as needed to have the main table space loaded completely in memory.
The database backend tends to slow down if there are many write operations and big caches which do not fit into memory, because of slow hard drive seek and write performance. If the data table grows too big to fit into memory, it is possible to compress given data transparently with this backend, which often shrinks the amount of needed space to 1/4 or less. The overhead of the compress/uncompress operation is usually not high. A good candidate for a cache with enabled compression is the core pages cache: it is only read or written once per request and the data size is pretty large. Compression should not be enabled for caches which are read or written multiple times during one request.
The database backend for MySQL uses InnoDB tables. Due to the nature of InnoDB, deleting records does not reclaim the actual disk space. E.g. if the cache uses 10GB, cleaning it will still keep 10GB allocated on the disk, even though phpMyAdmin will show 0 as the cache table size. To reclaim the space, turn on the MySQL option innodb_file_per_table, drop the cache tables and re-create them using the Install tool. This does not mean that you should skip the scheduler task: deleting records still improves performance.
|compression||Whether or not data should be compressed with gzip. This can reduce size of the cache data table, but incurs CPU overhead for compression and decompression.||No||boolean||false|
|compressionLevel||Gzip compression level (if compression is enabled). The default compression level is usually sufficient.||No||integer from -1 to 9||-1|
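As suggested above, the pages cache is a good candidate for compression. The following configuration sketch enables it; the cache name "cache_pages" matches the core pages cache of this TYPO3 generation, but verify it against your version before use.

```php
<?php
// Sketch: enable transparent gzip compression for the pages cache,
// which is read/written only once per request with large data sizes.
$GLOBALS['TYPO3_CONF_VARS']['SYS']['caching']['cacheConfigurations']['cache_pages']['options'] = [
    'compression' => true,
    // -1 lets zlib pick its default speed/size trade-off
    'compressionLevel' => -1,
];
```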
Memcached is a simple, distributed key/value RAM database. To use this backend, at least one memcached daemon must be reachable, and the PECL module "memcache" must be loaded. There are two PHP memcached implementations: "memcache" and "memcached". Currently, only memcache is supported by this backend.
Warning and design constraints
Memcached is a simple key-value store by design. Since the caching framework needs to structure it to store the identifier-data-tags relations, for each cache entry it stores an identifier->data, an identifier->tags and a tag->identifiers entry.
This leads to structural problems:
- If memcache runs out of memory but must store new entries, it will toss some other entry out of the cache (this is called an eviction in memcached speak).
- If data is shared over multiple memcache servers and some server fails, key/value pairs on this system will just vanish from cache.
Both cases lead to corrupted caches. If, for example, a tag->identifiers entry is lost,
flushByTag() will not be able to find the corresponding identifier->data entries
which should be removed, and they will not be deleted. This results in old data being delivered by the cache.
Additionally, there is currently no implementation of the garbage collection that could rebuild cache integrity.
It is important to monitor a memcached system for evictions and server outages and to clear caches if that happens.
Furthermore memcache has no sort of namespacing. To distinguish entries of multiple caches from each other, every entry is prefixed with the cache name. This can lead to very long runtimes if a big cache needs to be flushed, because every entry has to be handled separately and it is not possible to just truncate the whole cache with one call as this would clear the whole memcached data which might even hold non TYPO3 related entries.
Because of the mentioned drawbacks, the memcached backend should be used with care, in situations where cache integrity is not important, or if a cache has no need to use tags at all. Currently, the memcache backend implements the TaggableBackendInterface, so the implementation does allow tagging, even though it is not advised to use this backend together with heavy tagging.
Since memcached has no sort of namespacing and access control, this backend should not be used if other third party systems have access to the same memcached daemon for security reasons. This is a typical problem in cloud deployments where access to memcache is cheap (but could be read by third parties) and access to databases is expensive.
Array of used memcached servers. At least one server must be defined. Each server definition is a string, allowed syntaxes:
|compression||Enable memcached internal data compression. Can be used to reduce memcached memory consumption, but adds additional compression / decompression CPU overhead on the related memcached servers.||No||boolean||false|
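A configuration sketch for the memcached backend. The host names and ports are assumptions for illustration; each server definition is a plain string.

```php
<?php
// Sketch: memcached backend with two servers. Flushing or tagging this
// cache iterates entries individually (see the constraints above).
$GLOBALS['TYPO3_CONF_VARS']['SYS']['caching']['cacheConfigurations']['myext_mycache'] = [
    'frontend' => \TYPO3\CMS\Core\Cache\Frontend\VariableFrontend::class,
    'backend'  => \TYPO3\CMS\Core\Cache\Backend\MemcachedBackend::class,
    'options'  => [
        'servers' => [
            'localhost:11211',
            '192.168.1.10:11211',
        ],
        // Memcached-internal compression to reduce RAM consumption
        'compression' => true,
    ],
];
```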
Redis is a key-value storage/database. In contrast to memcached, it allows structured values. Data is stored in RAM but it allows persistence to disk and doesn't suffer from the design problems of the memcached backend implementation. The redis backend can be used as an alternative to the database backend for big cache tables and helps to reduce load on database servers this way. The implementation can handle millions of cache entries each with hundreds of tags if the underlying server has enough memory.
Redis is known to be extremely fast but very memory hungry. The implementation is an option for big caches with lots of data, because the most important operations perform in O(1) in proportion to the number of (redis) keys. This basically means that access to an entry in a cache with a million entries is not slower than in a cache with only 10 entries, at least if there is enough memory available to hold the complete set. At the moment only one redis server can be used at a time per cache, but one redis instance can handle multiple caches without performance loss when flushing a single cache.
The garbage collection task should be run every once in a while to find and delete old tags.
The implementation is based on the PHP phpredis module, which must be available on the system.
It is important to monitor the redis server and tune its settings to the specific caching needs and hardware capabilities. There are several articles on the net and the redis configuration file contains some important hints on how to speed up the system if it reaches bounds. A full documentation of available options is far beyond this documentation.
|hostname||IP address or name of redis server to connect to.||No||string||127.0.0.1|
|port||Port of the redis daemon.||No||integer||6379|
|persistentConnection||Activate a persistent connection to redis server. This could be a benefit under high load cloud setups.||No||boolean||false|
|database||Number of the database to store entries. Each cache should use its own database, otherwise all caches sharing a database are flushed if the flush operation is issued to one of them. Database numbers 0 and 1 are used and flushed by the core unit tests and should not be used if possible.||No||integer||0|
|password||Password used to connect to the redis instance if the redis server needs authentication. The password is sent to the redis server in plain text.||No||string|
|compression||Whether or not data compression with gzip should be enabled. This can reduce cache size, but adds some CPU overhead for the compression and decompression operations in PHP.||No||boolean||false|
|compressionLevel||Gzip compression level (if compression is enabled). The default compression level is usually sufficient.||No||integer from -1 to 9||-1|
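A configuration sketch for the redis backend. Host, database number and the cache name are assumptions for illustration.

```php
<?php
// Sketch: redis backend configuration. Each cache gets its own database
// number, so flushing this cache does not clear entries of other caches.
$GLOBALS['TYPO3_CONF_VARS']['SYS']['caching']['cacheConfigurations']['myext_mycache'] = [
    'frontend' => \TYPO3\CMS\Core\Cache\Frontend\VariableFrontend::class,
    'backend'  => \TYPO3\CMS\Core\Cache\Backend\RedisBackend::class,
    'options'  => [
        'hostname' => '127.0.0.1',
        'port' => 6379,
        // Avoid databases 0 and 1; they are used by core unit tests
        'database' => 3,
        'compression' => true,
    ],
];
```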
APC is mostly known as an opcode cache
for PHP source files but can be used to store user data in shared memory as well.
Its main advantage is that data can be shared between different PHP processes and requests.
All calls directly access shared memory. This makes this backend lightning fast for
set() operations. It can be an option for relatively small caches
(a few dozen megabytes) which are read and written very often, and comes in handy
if APC is used as an opcode cache anyway.
The implementation is very similar to the memcached backend implementation and suffers from the same problems if APC runs out of memory. Garbage collection is currently not implemented. In its latest version, APC will fail to store data with a PHP warning if it runs out of memory. This may change in the future. Even without using the cache backend, it is advisable to increase the memory cache size of APC to at least 64MB when working with TYPO3, simply due to the large number of PHP files to be cached. A minimum of 128MB is recommended when using the additional content cache. Cache TTL for file and user data should be set to zero (disabled) to avoid heavy memory fragmentation.
It is not advisable to use the APC backend in shared hosting environments for security reasons. The user cache in APC is not aware of different virtual hosts. Basically, every PHP script which is executed on the system can read and write any data in this shared cache; the data is not encapsulated or namespaced in any way. Only use the APC backend in environments which are completely under your control and where no third party can read or tamper with your data.
Xcache is a PHP opcode cache similar to APC. It can also store in-memory key/value user data.
The cache backend implementation is nearly identical to the implementation of APC backend and has the same design constraints.
Xcache does not work in a command-line context. The Xcache backend implementation is constructed to silently discard any cache operation in CLI context. That means if the Xcache backend is used, it has no effect in CLI.
Furthermore, it is important to set the PHP ini value
xcache.var_size to a value (e.g. 100M)
that is big enough to store the needed data. The usage of this capacity should be monitored.
(Available since TYPO3 CMS 6.1)
Wincache is a PHP opcode cache similar to APC, but dedicated to the Windows OS platform. Similar to APC, the cache can also be used as in-memory key/value cache.
The cache backend implementation is nearly identical to the implementation of the APC backend and has the same design constraints.
The file backend stores every cache entry as a single file to the file system. The lifetime and tags are added after the data part in the same file.
This backend is the big brother of the simple file backend and implements additional interfaces. Like the simple file
backend it also implements
TYPO3\CMS\Core\Cache\Backend\PhpCapableBackendInterface, so it can be used with
PhpFrontend. In contrast to
the simple file backend it furthermore implements TaggableBackendInterface and FreezableBackendInterface.
A frozen cache performs no lifetime check and has a list of all existing cache entries that is reconstituted during initialization. As a result, a frozen cache needs fewer file system lookups and less calculation time when accessing cache entries. On the other hand, a frozen cache cannot manipulate (remove, set) cache entries anymore; the complete cache must be flushed to make entries writable again. Freezing caches is currently not used in the TYPO3 CMS core. It can be an option for code logic that is able to calculate and set all possible cache entries during some initialization phase, to then freeze the cache and use those entries until the whole thing is flushed again. This can be useful especially when caching PHP code.
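The initialization pattern described above can be sketched as follows. The cache name "myext_code" and the function calculateAllEntries() are assumptions; the cache is presumed to use the (freezable) file backend.

```php
<?php
// Sketch: populate a cache during an initialization phase, then freeze it
// so that subsequent reads skip lifetime checks and file system lookups.
$cache = \TYPO3\CMS\Core\Utility\GeneralUtility::makeInstance(
    \TYPO3\CMS\Core\Cache\CacheManager::class
)->getCache('myext_code');

$backend = $cache->getBackend();
if (!$backend->isFrozen()) {
    foreach (calculateAllEntries() as $identifier => $data) { // placeholder
        $cache->set($identifier, $data);
    }
    // From now on, entries can only be read until the cache is flushed
    $backend->freeze();
}
```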
In general, the backend was specifically optimized to cache PHP code; the
set operations have low
overhead. The file backend is not very good with tagging and does not scale well with the number of tags. Do not use this
backend if cached data has many tags.
The performance of
flushByTag() is bad and scales just O(n).
On the contrary, the performance of
get() and set() is good and scales well. Of course, if many entries have to be handled, this might
still slow down after a while and a different storage strategy should be used
(e.g. RAM disks, battery backed up RAID systems or SSD hard disks).
|cacheDirectory||The directory where the cache files are stored. By default it is assumed that the directory is below
Simple File Backend
The simple file backend is the small brother of the file backend. In contrast to most
other backends, it does not implement the
TaggableBackendInterface, so cache entries can not be tagged and flushed
by tag. This improves the performance if cache entries do not need such tagging. The TYPO3 CMS core uses this backend
for its central core cache (which holds autoloader cache entries and other important cache entries). The core cache is
usually flushed completely and does not need specific cache entry eviction.
The PDO backend can be used as a native PDO interface to databases which are connected to PHP via PDO. It is an alternative to the database backend if a cache should be stored in a database which is otherwise only supported by TYPO3 dbal to reduce the parser overhead.
The garbage collection is implemented for this backend and should be called to clean up hard disk space or memory.
There is currently very little production experience with this backend, especially not with a capable database like Oracle. Any feedback for real life use cases of this cache is appreciated.
Data source name for connecting to the database. Examples:
|username||Username for the database connection.||No||string|
|password||Password to use for the database connection.||No||string|
Transient Memory Backend
The transient memory backend stores data in a PHP array. It is only valid for one request. This becomes handy if code logic needs to do expensive calculations or must look up identical information from a database over and over again during its execution. In this case it is useful to store the data in an array once and just lookup the entry from the cache for consecutive calls to get rid of the otherwise additional overhead. Since caches are available system wide and shared between core and extensions they can profit from each other if they need the same information.
Since the data is stored directly in memory, this backend is the quickest backend available. The stored data adds to
the memory consumed by the PHP process and can hit the
memory_limit PHP setting.
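The per-request memoization pattern described above can be sketched as follows. The cache name "myext_runtime" and the function loadRecordFromDatabase() are assumptions; the cache is presumed to be configured with the transient memory backend.

```php
<?php
// Sketch: avoid repeated expensive lookups within one request by
// memoizing the result in a transient memory cache.
$runtimeCache = \TYPO3\CMS\Core\Utility\GeneralUtility::makeInstance(
    \TYPO3\CMS\Core\Cache\CacheManager::class
)->getCache('myext_runtime');

$entryIdentifier = 'record-' . $uid;
if (($record = $runtimeCache->get($entryIdentifier)) === false) {
    $record = loadRecordFromDatabase($uid); // placeholder for the real lookup
    $runtimeCache->set($entryIdentifier, $record);
}
// Later calls with the same $uid in this request hit the in-memory array
```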
The null backend is a dummy backend which doesn't store any data and always returns false on
get(). This backend comes in handy in a development context to practically "switch off" a cache.
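Switching a cache off this way only requires overriding its backend in the configuration. The cache name "myext_mycache" is an assumption.

```php
<?php
// Sketch: route an existing cache to the null backend during development,
// effectively disabling it without touching the code that uses it.
$GLOBALS['TYPO3_CONF_VARS']['SYS']['caching']['cacheConfigurations']['myext_mycache']['backend']
    = \TYPO3\CMS\Core\Cache\Backend\NullBackend::class;
```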