.. PEANUT documentation master file, created by sphinx-quickstart
   This file serves as the root for the Sphinx documentation.

PEANUT Documentation
====================

This repository documents the **PEANUT** (Predicting potential Energies with Artificial NeUral neTworks) model, a neural network potential (NNP) designed to predict molecular potential energies using a graph-based neural network architecture.

Molecules are represented as graphs with atoms as nodes and interactions as edges, enabling message passing to model interatomic effects based on distances and geometric arrangements. The network learns atom-wise representations and aggregates them into total energies, following principles used in modern NNPs such as DimeNet or MACE. The architecture aims for a balanced trade-off between expressive power and computational efficiency, combining representation learning with established components such as neighbor lists and smooth cutoff functions to ensure suitability for molecular dynamics applications.

The documentation covers brief descriptions of the model architecture, the datasets used for training and evaluation, and the current status of development.

.. toctree::
   :maxdepth: 3
   :caption: Contents:

   network_architecture/network_architecture
   dataset
   training_procedure
   work_progress

Key features
============

The model will use learned features (see the section on representation learning) to construct node features per atom. These node features are then processed through an MLP to predict per-atom potential energy contributions. The sum of all atom-wise energy contributions shall match the reference energy of the input data (e.g. the energy of a molecule).

When a model is applied in this setting, several symmetry requirements must be fulfilled to ensure that physical properties are conserved. This includes translational invariance, rotational invariance, and permutational invariance.
#. **Rotational invariance**: use distances for radial features and spherical harmonics (or other invariant angular descriptors) for angles
#. **Translational invariance**: using relative positions only ensures this
#. **Permutational invariance**: a sum or mean over neighbor messages ensures exchangeability

Conceptual workflow
===================

.. code-block:: text

   For each atom i:
     1.1. Get neighbors within the cutoff range
     1.2. Compute radial and directional features (learned) for each edge
     1.3. Compute attention weights for each neighbor (closer neighbors are
          chemically more important) (not implemented)
     1.4. Aggregate messages per scale
     1.5. Update node embedding h_i

   After N message-passing layers:
     2. Sum over all nodes to predict the molecular energy

Note: Step 1.1. relies on the previously mentioned neighbor list. Step 1.4. accounts for the possibility of using more than one edge MLP for different message-passing treatment, depending on pairwise distances ("closer atoms are more important").

The symmetry requirements are fulfilled as sketched in the following overview:

.. code-block:: text

   [Atomic positions r_i]
           |
           v
   [Neighbor list r_ij]          <-- translational invariance via relative positions
           |
           v
   [Radial features r_ij]        <-- rotational & translational invariance
           |
           v
   [Directional features Y_ij]   <-- rotational invariance
           |
           v
   [Message passing / attention] <-- permutational invariance
           |
           v
   [Node embeddings h_i]
           |
           v
   [Sum/Pooling] --> Energy (invariant)
           |
           v
   [Optional: Gradient] --> Forces

Note: The potential energy will be used to compute the forces that act on each atom, :math:`F = -\nabla U`. If forces are used during training, the energy gradients can also provide an auxiliary supervision signal.
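The invariance-by-construction idea above can be illustrated with a minimal, self-contained sketch: per-atom energy contributions are computed from pairwise distances only (inside a smooth cosine cutoff) and summed over atoms, so translational, rotational, and permutational invariance hold automatically. All names here (``smooth_cutoff``, ``atom_energy``, ``total_energy``) and the radial form are illustrative placeholders, not part of the PEANUT code base; the per-atom MLP is replaced by a fixed toy function.

.. code-block:: python

   import math
   import random

   R_CUT = 5.0  # hypothetical cutoff radius (arbitrary units)

   def smooth_cutoff(r, r_cut=R_CUT):
       """Cosine cutoff: 1 at r = 0, decaying smoothly to 0 at r = r_cut."""
       if r >= r_cut:
           return 0.0
       return 0.5 * (math.cos(math.pi * r / r_cut) + 1.0)

   def atom_energy(i, positions):
       """Toy per-atom contribution built from pairwise distances only.

       Stands in for the MLP acting on learned node features h_i.
       """
       e = 0.0
       for j, pj in enumerate(positions):
           if j == i:
               continue
           r = math.dist(positions[i], pj)
           e += smooth_cutoff(r) * math.exp(-r)  # placeholder radial feature
       return e

   def total_energy(positions):
       """Sum of atom-wise contributions -> invariant total energy."""
       return sum(atom_energy(i, positions) for i in range(len(positions)))

   # Invariance checks on a small random configuration
   random.seed(0)
   pos = [tuple(random.uniform(-2.0, 2.0) for _ in range(3)) for _ in range(4)]
   e0 = total_energy(pos)

   # Translational invariance: shift every atom by the same vector
   shifted = [(x + 1.3, y - 0.7, z + 2.1) for x, y, z in pos]
   assert abs(total_energy(shifted) - e0) < 1e-10

   # Rotational invariance: rotate about the z-axis by 30 degrees
   c, s = math.cos(math.pi / 6), math.sin(math.pi / 6)
   rotated = [(c * x - s * y, s * x + c * y, z) for x, y, z in pos]
   assert abs(total_energy(rotated) - e0) < 1e-10

   # Permutational invariance: reorder the atom list
   assert abs(total_energy(pos[::-1]) - e0) < 1e-10

Because every ingredient depends only on interatomic distances and a symmetric sum, the three assertions pass by construction; any learned radial MLP substituted for the placeholder would inherit the same invariances.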
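The final note (:math:`F = -\nabla U`) can likewise be sketched numerically. The energy function below is a toy pairwise Morse-like potential, not the PEANUT model, and the forces are obtained by central finite differences purely for illustration; in practice the gradient would come from automatic differentiation.

.. code-block:: python

   import math

   def toy_energy(positions):
       """Toy Morse-like potential (D = 1, a = 1, r0 = 1), summed over pairs."""
       e = 0.0
       n = len(positions)
       for i in range(n):
           for j in range(i + 1, n):
               r = math.dist(positions[i], positions[j])
               e += (1.0 - math.exp(-(r - 1.0))) ** 2
       return e

   def forces(positions, h=1e-5):
       """F_ia = -dU/dr_ia, approximated by central finite differences."""
       f = []
       for i in range(len(positions)):
           fi = []
           for a in range(3):
               plus = [list(p) for p in positions]
               minus = [list(p) for p in positions]
               plus[i][a] += h
               minus[i][a] -= h
               fi.append(-(toy_energy(plus) - toy_energy(minus)) / (2.0 * h))
           f.append(tuple(fi))
       return f

   # Two atoms at the potential minimum r0 = 1 -> forces vanish
   pos = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
   assert all(abs(c) < 1e-6 for fi in forces(pos) for c in fi)

   # Stretched pair (r = 2) -> attractive forces pulling the atoms together
   f = forces([(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)])
   assert f[0][0] > 0.0 and f[1][0] < 0.0

If forces enter the training loss, these gradients supply exactly the auxiliary supervision signal mentioned above: each configuration contributes 3N force targets in addition to one energy target.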