Uncertainty in deep learning has recently attracted considerable attention. While deep neural networks outperform competing methods in many benchmarks in terms of accuracy, they have been shown to yield wrong predictions with unreasonably high confidence. This has increased interest in methods that provide better confidence estimates for neural networks: some rely on specifically designed architectures with probabilistic building blocks, while others use a standard architecture with an additional confidence estimation step based on its output. This work proposes a confidence estimation method for Convolutional Neural Networks that fits a forest of randomized density estimation decision trees to the network activations before the final classification layer, and compares it to other confidence estimation methods based on standard architectures. Evaluation is carried out on a semantic labelling dataset of very high resolution satellite imagery. Our results show that methods based on intermediate network activations lead to better confidence estimates for novelty detection, i.e., the discovery of classes that are not present in the training set.
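The core idea can be illustrated with a minimal numpy sketch (not the paper's implementation): each randomized tree recursively partitions the activation space with random axis-aligned splits and fits a diagonal Gaussian in every leaf; the forest's average leaf log-density acts as a confidence score, so low density flags novel inputs. All names, tree parameters, and the toy "activations" below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_tree(X, depth=0, max_depth=4, min_leaf=20):
    """Grow one randomized density tree; leaves store a diagonal Gaussian."""
    if depth == max_depth or len(X) < 2 * min_leaf:
        return {"mu": X.mean(0), "var": X.var(0) + 1e-6}
    d = int(rng.integers(X.shape[1]))              # random split dimension
    t = rng.uniform(X[:, d].min(), X[:, d].max())  # random split threshold
    left, right = X[X[:, d] <= t], X[X[:, d] > t]
    if len(left) < min_leaf or len(right) < min_leaf:
        return {"mu": X.mean(0), "var": X.var(0) + 1e-6}
    return {"dim": d, "thr": t,
            "left": fit_tree(left, depth + 1, max_depth, min_leaf),
            "right": fit_tree(right, depth + 1, max_depth, min_leaf)}

def log_density(node, x):
    """Route x to its leaf and evaluate the leaf's Gaussian log-density."""
    while "dim" in node:
        node = node["left"] if x[node["dim"]] <= node["thr"] else node["right"]
    mu, var = node["mu"], node["var"]
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

def forest_confidence(trees, x):
    # Average per-tree log-density: higher means more familiar to the forest.
    return float(np.mean([log_density(t, x) for t in trees]))

# Toy stand-in for penultimate-layer activations of the training set.
A_train = rng.normal(0.0, 1.0, size=(500, 8))
trees = [fit_tree(A_train) for _ in range(10)]

in_dist = forest_confidence(trees, rng.normal(0.0, 1.0, 8))  # known-class sample
novel = forest_confidence(trees, rng.normal(6.0, 1.0, 8))    # out-of-distribution sample
```

In this sketch the novel sample, far from the training activations, receives a much lower average log-density than the in-distribution sample, which is the signal exploited for novelty detection.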