Hi Niklas,
On 25.09.2017 at 11:16, Niklas B wrote:
I've successfully set up everything on my machine and I'm now trying to generate complete file-based maps[1] up to zoom level 15 for Sweden. I've written an "offline renderer" using node-tileserver that accepts a rectangular range of coordinates (i.e. only Sweden) and iteratively generates the tiles. Generating the 715 378 unique tiles (x4 for all bitmap variants and JSON) takes roughly a week using 8 cores and 52 GB of RAM. I'm guessing I'm doing something super inefficient.
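(For context, a minimal sketch of the standard OSM slippy-map tile numbering in TypeScript; the function names and the rough Sweden bounding box below are placeholders, not taken from the actual renderer code:)

  // Standard slippy-map tile numbering: lon/lat -> tile x/y at zoom z.
  function lon2tileX(lon: number, z: number): number {
    return Math.floor(((lon + 180) / 360) * Math.pow(2, z));
  }

  function lat2tileY(lat: number, z: number): number {
    const rad = (lat * Math.PI) / 180;
    return Math.floor(
      ((1 - Math.log(Math.tan(rad) + 1 / Math.cos(rad)) / Math.PI) / 2) * Math.pow(2, z)
    );
  }

  // Enumerate every tile covering a bounding box from zoom 0 up to maxZoom.
  function* tilesForBBox(minLon: number, minLat: number,
                         maxLon: number, maxLat: number, maxZoom: number) {
    for (let z = 0; z <= maxZoom; z++) {
      const xMin = lon2tileX(minLon, z), xMax = lon2tileX(maxLon, z);
      // Tile y grows southwards, so the northern edge (maxLat) gives the smaller y.
      const yMin = lat2tileY(maxLat, z), yMax = lat2tileY(minLat, z);
      for (let x = xMin; x <= xMax; x++)
        for (let y = yMin; y <= yMax; y++)
          yield { z, x, y };
    }
  }

  // Rough Sweden bounding box (approximate values):
  // for (const t of tilesForBBox(10.5, 55.0, 24.2, 69.1, 15)) { ... }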
- Machine: Google Cloud VM, 8 CPUs with 52 GB RAM plus 200 GB of SSD disk
- Only loaded Europe, not the entire planet.
- PostgreSQL:
  - shared_buffers: 12GB
  - work_mem: 1.5GB
  - autovacuum: off
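(The PostgreSQL settings above, written as they would appear in postgresql.conf; the effective_cache_size line is not part of this setup, just a commonly recommended addition on a machine with this much RAM:)

  # Settings quoted above
  shared_buffers = 12GB
  work_mem = 1536MB            # 1.5GB expressed in MB
  autovacuum = off

  # Not from the original setup: often set to roughly half of RAM so the
  # planner knows the (only ~6GB) database fits comfortably in cache.
  effective_cache_size = 26GB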
The node worker is using Redis and Bull (https://github.com/OptimalBits/bull) and is multi-core. It's a fairly simple version[2] (see source code), but I'm lacking quite a bit of understanding of which parts of node-tileserver I need to use, so it's a bit of a hack.
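(A minimal sketch of such a Bull-based setup in TypeScript; the queue name, Redis URL, bounding box, and renderTile stub are placeholders rather than the actual code from [2]. Note that process() with a concurrency argument only runs jobs concurrently inside one Node process; using all 8 cores needs several worker processes or Bull's sandboxed processors.)

  import Queue from 'bull';

  // Queue name and Redis URL are placeholders.
  const tileQueue = new Queue('tiles', 'redis://127.0.0.1:6379');

  // Placeholder for whatever node-tileserver call actually renders one tile.
  async function renderTile(z: number, x: number, y: number): Promise<void> {
    console.log(`rendering ${z}/${x}/${y}`);
  }

  // Producer side (run once): enqueue every tile, e.g. from a tile
  // enumerator like the tilesForBBox() sketch above:
  //   for (const t of tilesForBBox(10.5, 55.0, 24.2, 69.1, 15)) tileQueue.add(t);

  // Consumer side: process up to 8 jobs concurrently in this process.
  tileQueue.process(8, async (job) => {
    const { z, x, y } = job.data;
    await renderTile(z, x, y);
  });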
My best guess is that the database queries are slow and that I need to do additional tuning. Since the DB is only 6GB for Europe, I'm thinking about moving PostgreSQL to a ramdisk.
I did not try it, but partial indexes might help (in addition to the spatial index that osm2pgsql creates): CREATE INDEX <name> ON <table> USING gist(way) WHERE <where clause of expensive query>;
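A hypothetical example, assuming the default osm2pgsql table layout (the WHERE clause has to match the filter of the slow query, e.g. one taken from config.json):

  CREATE INDEX planet_osm_polygon_building_idx
    ON planet_osm_polygon
    USING gist (way)
    WHERE building IS NOT NULL;

Running EXPLAIN ANALYZE on the query in question will show whether PostgreSQL actually picks up such an index.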
You can find the SQL queries used for the generation of the vector tiles in the config.json file.
That's the only advice I am able to give based on my knowledge, which is mainly of the classic Mapnik toolchain.
Best regards
Michael