MapServer MapCache (formerly known as mod-geocache) is a new member in the family of tile caching servers. It aims to be simple to install and configure (no need for intermediate glue such as mod-python, mod-wsgi or FastCGI), to be very fast (written in C and running as a native module under Apache), and to be capable (it serves WMTS, Google Maps, VirtualEarth, KML, TMS and WMS). When acting as a WMS server, it will also respond to untiled requests by merging its cached tiles vertically (multiple layers) and/or horizontally.
mod-geocache / mapcache - A fast tiling solution for the Apache web server
1. MapServer MapCache
(f.k.a. mod-geocache)
A Fast Tile-Caching Solution for the
Apache HTTP Server
14 / 09 / 2011 Thomas Bonfort (Terriscope)
Stephen Woodbridge (iMaptools)
3. Yet Another Tile Cache?
• Mature, feature-rich solutions already exist
• High performance needed
• Started out as a small project to validate the concept
• Integrated into the MapServer stack for the next release
4. Apache module
• Module = code run by the Apache processes that handle requests
• Native code
• No gateway overhead (e.g. CGI, FastCGI)
• Does not require spawning an interpreter per concurrent request
• Caveats:
– Thread/process synchronization
– Memory management
– Security
5. Features
• Metatiling
– Cross-process/thread locking ensures you can enable metatiling on an unseeded tileset
• Image recompression / optimization
– JPEG quality
– PNG compression level
– PNG quantization
– “Mixed” format: PNG+JPEG
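These options are set per-format and per-tileset in mapcache.xml; a minimal sketch from memory of the MapCache configuration (element names and values illustrative, not authoritative):

```xml
<!-- illustrative mapcache.xml fragment: image formats and metatiling -->
<format name="myjpeg" type="JPEG">
  <quality>85</quality>            <!-- JPEG quality -->
</format>
<format name="mypng" type="PNG">
  <compression>best</compression>  <!-- PNG compression level -->
  <colors>256</colors>             <!-- PNG quantization -->
</format>
<format name="mymixed" type="MIXED">
  <transparent>mypng</transparent> <!-- PNG for tiles with transparency -->
  <opaque>myjpeg</opaque>          <!-- JPEG for fully opaque tiles -->
</format>

<tileset name="osm">
  <metatile>5 5</metatile>         <!-- fetch 5x5 tiles per source request -->
  <format>mymixed</format>
</tileset>
```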
9. “Demo” Interface
• Lists all active services
• Easy way to check configured caches
• Simple OpenLayers slippy-map
• Cut-and-paste definitions for Layers
• http://localhost:8081/mapcache/demo
10. Grids
• Extent
• Projection
• Resolution per level
• Tile size
• Comes preconfigured with popular grids
• Supports grid aliases
• For limited areas, use grid subsets, not your own grid!
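A grid and a subset of a preconfigured grid might look like the following sketch (element and attribute names recalled from the MapCache docs; extents and resolutions illustrative):

```xml
<!-- illustrative custom grid: global WGS84, 256px tiles, 3 levels -->
<grid name="mygrid">
  <srs>EPSG:4326</srs>
  <extent>-180 -90 180 90</extent>
  <size>256 256</size>
  <resolutions>0.703125 0.3515625 0.17578125</resolutions>
</grid>

<!-- preferred for limited areas: subset a preconfigured grid -->
<tileset name="france">
  <grid restricted_extent="-5 41 10 52">GoogleMapsCompatible</grid>
</tileset>
```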
11. Data Sources
• Extensible: anything able to provide an image for a given:
– Width, Height
– Extent
– SRS
– Optionally, a dimension
• WMS is the only implemented source
– Custom query parameters
– Custom headers
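A WMS source with custom parameters and headers can be sketched as follows (URL and parameter values are placeholders, structure recalled from the MapCache docs):

```xml
<!-- illustrative WMS source declaration -->
<source name="mywms" type="wms">
  <getmap>
    <params>
      <LAYERS>basic</LAYERS>
      <FORMAT>image/png</FORMAT>
      <MAP>/path/to/mapfile.map</MAP>   <!-- custom query parameter -->
    </params>
  </getmap>
  <http>
    <url>http://example.com/wms?</url>
    <headers>
      <User-Agent>mapcache</User-Agent> <!-- custom header -->
    </headers>
  </http>
</source>
```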
12. Caches
• Extensible mechanism; a backend must provide an API to get/set a tile for a given grid, x, y, z (+ dimension)
• Higher-level locking mechanism allows on-demand cache generation
• Backends provide different performance/manageability tradeoffs
• Currently implemented: filesystem, sqlite, memcached
13. Disk based caches
• TileCache-compatible hierarchy:
– /tmp/osm/g/17/000/027/304/000/081/334.png
• Custom hierarchy:
– /tmp/{tileset}/{grid}/{x}-{y}-{z}.{ext}
• Support for symlinking blank tiles
• Watch out for filesystem limitations !
– Max files per directory
– Max number of inodes
– Blocksize
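Both layouts are declared on the cache element; a sketch under the assumption that the `<template>` and `<symlink_blank>` options behave as described above:

```xml
<!-- default hierarchy, with identical blank tiles stored once and symlinked -->
<cache name="default_disk" type="disk">
  <base>/tmp</base>
  <symlink_blank/>
</cache>

<!-- custom hierarchy: fewer directory levels, at the cost of huge directories -->
<cache name="template_disk" type="disk">
  <template>/tmp/{tileset}/{grid}/{x}-{y}-{z}.{ext}</template>
</cache>
```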
14. Sqlite caches
• Store tile data as blobs in sqlite db
• Slower than disk caches, but avoids filesystem headaches
• Flexible storage options:
– Provided default schema
– MBTiles schema
– Custom schema: provide your own queries:
• select tile_data from tiles where tile_column=:x and tile_row=:y and zoom_level=:z
• insert or replace into tiles(tile_column,tile_row,zoom_level,tile_data) values (:x,:y,:z,:data)
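The two prebuilt schemas are selected by the cache type; a sketch with assumed element names (`dbfile` path is a placeholder):

```xml
<!-- provided default schema -->
<cache name="sqlite" type="sqlite3">
  <dbfile>/tmp/tiles.sqlite3</dbfile>
</cache>

<!-- MBTiles schema, readable by other MBTiles-aware tools -->
<cache name="mbtiles" type="mbtiles">
  <dbfile>/tmp/tiles.mbtiles</dbfile>
</cache>
```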
15. Vertical Assembling
Save bandwidth, request a single tiled layer!
&LAYERS=OSM&… &LAYERS=NEXRAD&… &LAYERS=OSM,NEXRAD&…
17. Tile Assembling
• CPU-bound operation: image format (PNG/JPEG) encoding and decoding
• CPU acceleration (MMX, SSE, …) of pixel manipulation operations (scaling, blending)
• Configurable resampling
• No reprojection support
• Missing spec for TMS and WMTS support
18. Proxying support
• Transparently add tiling / fast WMS support to existing services
• Intercepts GetTile / GetMap requests
• Configurable forwarding to other services based on request parameters
19. Seeder
• Use multiple threads to load the source WMS
• Reseed tiles older than a specified date
• Seed only tiles inside a given geometry
– OGR for data access: filter based on SQL queries, e.g. FIPS_A1='USA', pop_density>1000
– GEOS Prepared Geometries for fast intersection calculation
• Delete mode
• Specify dimension values
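Typical invocations might look like the following sketch (flag names recalled from memory of mapcache_seed, paths and values illustrative; check `mapcache_seed --help` for the authoritative list):

```sh
# seed zoom levels 0-12 of tileset "osm" with 4 threads
mapcache_seed -c /etc/mapcache.xml -t osm -g GoogleMapsCompatible -z 0,12 -n 4

# delete mode: remove cached tiles for zoom levels 10-12
mapcache_seed -c /etc/mapcache.xml -t osm -z 10,12 -m delete
```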
20. Benchmarks
• Server: 4-core, 8 GB RAM, SSD storage
• ab (ApacheBench) used over Gigabit Ethernet
• "Warm" filesystem cache
• All requests hit the exact same image data