Workshop on Long-Term Visual Localization under Changing Conditions

We will be hosting a long-term visual localization workshop at CVPR 2019, which will feature competitions for three different visual localization scenarios. Please see the main workshop page for details.

Results can be submitted to the challenge under the Submission tab above; select the relevant challenge from the dropdown menu. For the visual localization and end-to-end localization challenges, please submit results once per dataset, using the same method name for all submissions. Datasets for which a method has no results are displayed with zeroes.

The ranking is performed using the Schulze method.
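To illustrate, the Schulze method can be sketched as below. How the ballots are formed here (one ranking of methods per dataset) and the example inputs in the test are illustrative assumptions, not the workshop's exact procedure.

```python
def schulze_ranking(candidates, ballots):
    """Rank candidates with the Schulze method.

    ballots: list of rankings, each a list of candidates, best first
             (here, e.g., one ranking of methods per dataset).
    Returns candidates sorted from strongest to weakest.
    """
    n = len(candidates)
    idx = {c: i for i, c in enumerate(candidates)}
    # d[i][j]: number of ballots that prefer candidate i over candidate j
    d = [[0] * n for _ in range(n)]
    for ballot in ballots:
        pos = {c: r for r, c in enumerate(ballot)}
        for a in candidates:
            for b in candidates:
                if a != b and pos[a] < pos[b]:
                    d[idx[a]][idx[b]] += 1
    # p[i][j]: strength of the strongest path from i to j,
    # computed in Floyd-Warshall fashion over pairwise victories
    p = [[d[i][j] if d[i][j] > d[j][i] else 0 for j in range(n)]
         for i in range(n)]
    for k in range(n):
        for i in range(n):
            if i == k:
                continue
            for j in range(n):
                if j != i and j != k:
                    p[i][j] = max(p[i][j], min(p[i][k], p[k][j]))
    # A candidate's score: number of rivals it beats via strongest paths
    def wins(i):
        return sum(1 for j in range(n) if i != j and p[i][j] > p[j][i])
    return sorted(candidates, key=lambda c: wins(idx[c]), reverse=True)
```

The strongest-path comparison is what makes the Schulze method robust to methods that win on some datasets but lose badly on others.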

The pose error thresholds used to compute the fraction of correctly localized images are listed per dataset on the Benchmark page.
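As a sketch of how this metric is computed: an image counts as correctly localized at a threshold pair if both its position error and its orientation error are within the limits. The threshold values in the example are illustrative, not the official per-dataset ones from the Benchmark page.

```python
def localized_fractions(position_errors_m, rotation_errors_deg, thresholds):
    """For each (max position error in meters, max rotation error in degrees)
    threshold pair, return the percentage of images whose pose error is
    within BOTH limits."""
    n = len(position_errors_m)
    fractions = []
    for max_pos, max_rot in thresholds:
        ok = sum(1 for p, r in zip(position_errors_m, rotation_errors_deg)
                 if p <= max_pos and r <= max_rot)
        fractions.append(100.0 * ok / n)
    return fractions

# Example with hypothetical per-image errors and three illustrative
# (position, rotation) threshold pairs of increasing coarseness:
pos = [0.1, 0.4, 3.0, 20.0]   # meters
rot = [1.0, 4.0, 8.0, 30.0]   # degrees
print(localized_fractions(pos, rot, [(0.25, 2), (0.5, 5), (5, 10)]))
# -> [25.0, 50.0, 75.0]
```

The three percentages produced this way correspond to the three numbers shown per cell in the result tables below.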

Listed below are the public results for the three challenges.

Visual localization challenge

The three numbers in each cell give the percentage of correctly localized images at increasingly coarse pose error thresholds (CMU = Extended CMU Seasons, RobotCar = RobotCar Seasons).

Method | Aachen day | Aachen night | CMU urban | CMU suburban | CMU park | RobotCar day all | RobotCar night all | InLoc duc1 | InLoc duc2 | SILDa evening | SILDa snow | SILDa night
Localizing Visual Landmarks for 2D matching | 62.4 / 71.8 / 79.9 | 24.5 / 35.7 / 44.9 | 84.3 / 89.3 / 93.0 | 68.0 / 75.1 / 84.4 | 42.4 / 51.4 / 69.7 | 51.1 / 77.7 / 92.3 | 13.8 / 30.3 / 57.7 | 0.0 / 0.0 / 0.0 | 0.0 / 0.0 / 0.0 | 0.0 / 0.0 / 0.0 | 0.0 / 0.0 / 0.0 | 0.0 / 0.0 / 0.0
Localizing Visual Landmarks for Place Recognition | 0.0 / 0.2 / 20.8 | 0.0 / 1.0 / 10.2 | 17.3 / 42.5 / 89.0 | 5.8 / 19.4 / 76.1 | 6.6 / 23.1 / 73.0 | 7.9 / 30.0 / 85.9 | 4.1 / 15.7 / 59.1 | 0.0 / 0.0 / 0.0 | 0.0 / 0.0 / 0.0 | 0.3 / 7.1 / 41.0 | 0.0 / 0.0 / 0.9 | 0.0 / 2.7 / 56.9
DenseVLAD | 0.0 / 0.1 / 22.8 | 0.0 / 2.0 / 14.3 | 14.7 / 36.3 / 83.9 | 5.3 / 18.7 / 73.9 | 5.2 / 19.1 / 62.0 | 7.6 / 31.2 / 91.2 | 1.0 / 4.4 / 22.7 | 0.0 / 1.5 / 5.1 | 0.0 / 0.8 / 2.3 | 0.2 / 6.2 / 42.1 | 0.0 / 0.0 / 0.7 | 0.0 / 2.6 / 53.9
DGCNCCC | 21.0 / 44.2 / 80.6 | 15.3 / 32.7 / 57.1 | 0.0 / 0.0 / 0.0 | 0.0 / 0.0 / 0.0 | 0.0 / 0.0 / 0.0 | 7.2 / 28.3 / 87.6 | 0.0 / 1.1 / 13.4 | 0.0 / 0.0 / 0.0 | 0.0 / 0.0 / 0.0 | 0.0 / 0.0 / 0.0 | 0.0 / 0.0 / 0.0 | 0.0 / 0.0 / 0.0

Local feature challenge

Method | Aachen night
DELF - new model | 39.8 / 61.2 / 85.7
DELF - old model | 39.8 / 60.2 / 84.7
saliency_ranking_net (multi-scale) | 44.9 / 59.2 / 77.6
Upright RootSIFT (Feature Challenge Baseline) | 33.7 / 52.0 / 65.3
ELF | 13.3 / 21.4 / 30.6

End-to-end localization challenge

Values are percentages of correctly localized images at increasingly coarse pose error thresholds (CMU = Extended CMU Seasons, RobotCar = RobotCar Seasons).

Method | Aachen day | Aachen night | CMU urban | CMU suburban | CMU park | RobotCar day all | RobotCar night all | InLoc duc1 | InLoc duc2 | SILDa evening | SILDa snow | SILDa night
EffecientPairWise2D-3DMatching | 0.0 / 0.0 / 0.0 | 0.0 / 0.0 / 0.0 | 0.0 / 0.0 / 0.0 | 0.0 / 0.0 / 0.0 | 0.0 / 0.0 / 0.0 | 42.6 / 78.2 / 95.7 | 8.5 / 14.1 / 20.6 | 0.0 / 0.0 / 0.0 | 0.0 / 0.0 / 0.0 | 0.0 / 0.0 / 0.0 | 0.0 / 0.0 / 0.0 | 0.0 / 0.0 / 0.0