Machine learning models are increasingly contributing to the exploration of chemical compound space in many application areas, such as molecular dynamics, inverse design, and structural relaxations. However, due to the non-linear nature of these models, a major drawback is their lack of interpretability. Hence, as the scope of machine learning applications in quantum chemistry continues to broaden, the demand for interpretability becomes increasingly pronounced. Making machine learning models more transparent allows us to assess how well they align with the physical principles of quantum mechanics, increases trust in these models, and consequently leads to wider adoption in the community. In this work, we focus on three aspects of interpretability in the context of machine learning force fields. First, we aim to provide interpretable representations of learned chemical environments. Second, we emphasize the interpretability of model predictions in terms of atomic interactions. Lastly, we demonstrate that trained machine learning force fields can shed light on molecular manipulation processes in real-world experiments, which have hitherto been conducted almost blindly. For the first aim of enhancing the interpretability of machine learning force fields, we present a method that divides molecules into different moieties based on the learned local feature environments. We show a variety of applications of this method, such as the selection of representative data points, automatic coarse-graining, and the identification of reaction coordinates. The second approach, based on layer-wise relevance propagation, reveals the influence of higher-order input features on the model output. We introduce important guidelines for applying explanation methods to regression problems in quantum chemistry and demonstrate that, when applied correctly, the explanations can be linked to fundamental chemical knowledge for atomistic systems and, in particular, for coarse-grained systems. For the third aim, regarding the interpretability of molecular manipulation processes in scanning probe microscopy, we solve an inverse problem by predicting molecular conformations from sparse observations. In this study, we demonstrate a proof of concept of how incorporating the predicted atomistic structures into the analysis of experiments enables an unprecedented level of interpretability. Our proposed methods provide insights into the inner workings of machine learning force fields and illuminate experiments that were previously constrained by limited observation capabilities. We anticipate that this development will foster greater trust in machine learning force fields and enhance understanding of the underlying learned concepts. Furthermore, we believe that combining machine learning force fields with our proposed manipulation monitoring approach will significantly aid in guiding experiments conducted with the scanning probe microscope.
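To make the first aim more concrete, the following minimal sketch illustrates the general idea of grouping atoms into moieties by clustering learned per-atom feature vectors. It is not the method developed in this work: the array `atom_features`, the number of moieties, and the use of k-means are all assumptions for illustration only; in practice the features would be extracted from the representation layer of a trained machine learning force field.

```python
# Minimal, hypothetical sketch: cluster learned per-atom feature vectors
# into moieties. The features below are random placeholders standing in
# for representations taken from a trained machine learning force field.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(seed=0)

# Hypothetical learned features: one row per atom, one column per feature.
n_atoms, n_features = 12, 128
atom_features = rng.normal(size=(n_atoms, n_features))

# Cluster atoms into a chosen number of moieties (assumed here to be 3).
n_moieties = 3
kmeans = KMeans(n_clusters=n_moieties, n_init=10, random_state=0)
moiety_labels = kmeans.fit_predict(atom_features)

# Group atom indices by moiety, e.g. for inspection or coarse-graining.
moieties = {m: np.flatnonzero(moiety_labels == m) for m in range(n_moieties)}
for m, atoms in moieties.items():
    print(f"Moiety {m}: atoms {atoms.tolist()}")
```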