Competencies

Complex Systems

Local Climate Zones modelling: Categorizing urban areas based on land cover, structure, and function to better understand local climate variations. For example, using data on vegetation, building height, and road networks can help identify heat-prone areas within a city.
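As a minimal illustration of this kind of categorization (a sketch only; the features, thresholds, and data below are invented and do not correspond to an actual LCZ scheme), a standard classifier can map land-cover and morphology features to zone labels:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical per-cell features: vegetation fraction, mean building height (m), road density.
n = 500
veg = rng.uniform(0, 1, n)
height = rng.uniform(0, 40, n)
roads = rng.uniform(0, 1, n)
X = np.column_stack([veg, height, roads])

# Toy "ground truth": densely built, low-vegetation cells are labelled heat-prone (1).
y = ((veg < 0.3) & (height > 15)).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("feature importances (veg, height, roads):", clf.feature_importances_)
```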
Weather forecasting: Predicting future weather from atmospheric data and computational models, including standard models such as ALADIN (Aire Limitée Adaptation dynamique Développement InterNational), which provides high-resolution forecasts. ALADIN-type models are used by meteorological agencies to predict localized weather patterns, such as temperature, precipitation, and wind speed, supporting agricultural planning and disaster preparedness in specific regions.
Causal inference: Identifying cause-and-effect relationships from data. For instance, determining whether increased air pollution causes higher asthma rates requires understanding causality rather than mere correlation.
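The pollution example can be made concrete with a small simulation (a sketch with an invented data-generating process, not a real study): when a confounder drives both exposure and outcome, a naive regression overstates the effect, while adjusting for the confounder recovers it.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Hypothetical data-generating process: traffic confounds pollution and asthma,
# and pollution has a true causal effect of 0.5 on the asthma rate.
traffic = rng.normal(size=n)
pollution = 1.0 * traffic + rng.normal(size=n)
asthma = 0.5 * pollution + 2.0 * traffic + rng.normal(size=n)

# Naive regression of asthma on pollution alone overstates the effect...
naive = np.linalg.lstsq(np.column_stack([pollution, np.ones(n)]), asthma, rcond=None)[0][0]
# ...while adjusting for the confounder recovers the causal coefficient (about 0.5).
adjusted = np.linalg.lstsq(np.column_stack([pollution, traffic, np.ones(n)]), asthma, rcond=None)[0][0]
print(f"naive estimate: {naive:.2f}, adjusted estimate: {adjusted:.2f}")
```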
Causal network inference: Building on causal inference by mapping and analyzing interconnected causal relationships. In healthcare, it's used to uncover how multiple factors, like diet, genetics, and environment, jointly impact disease risk.
Neuroimaging data analysis: Processing brain scan data (e.g., MRI, fMRI) to understand brain structure and function. For example, it's used to identify brain areas associated with memory in Alzheimer's research.
Urban Climate modelling: Simulating urban climates to understand how cities affect local weather, air quality, and temperatures. For instance, it models how green spaces reduce city heat during summer.
Graph Theory: The study of relationships between objects, represented as nodes and edges in a graph. It's applied in social networks, like analyzing Facebook connections, to understand and predict social interactions.
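A minimal sketch of the graph-theoretic view of a social network, using the networkx library on a made-up friendship list (the names and edges are purely illustrative):

```python
import networkx as nx

# A tiny, made-up friendship network (nodes are people, edges are connections).
edges = [("Ana", "Ben"), ("Ben", "Cara"), ("Cara", "Ana"),
         ("Cara", "Dan"), ("Dan", "Eva"), ("Filip", "Greta")]
G = nx.Graph(edges)

# Degree centrality highlights well-connected people; components reveal isolated groups.
print(nx.degree_centrality(G))
print([sorted(c) for c in nx.connected_components(G)])
print(nx.shortest_path(G, "Ana", "Eva"))
```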
Combinatorial optimization: Finding the best solution among a finite set of possibilities. It's used in logistics, such as optimizing delivery routes for trucks to minimize travel time and fuel costs.
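The delivery-route example, reduced to its smallest form (a sketch with invented coordinates; real instances require heuristics or dedicated solvers rather than exhaustive search):

```python
from itertools import permutations
from math import dist

# Hypothetical depot and delivery stops as (x, y) coordinates.
depot = (0.0, 0.0)
stops = {"A": (2.0, 1.0), "B": (1.0, 4.0), "C": (5.0, 2.0), "D": (3.0, 3.0)}

def route_length(order):
    # Total distance: depot -> stops in the given order -> back to depot.
    points = [depot] + [stops[s] for s in order] + [depot]
    return sum(dist(p, q) for p, q in zip(points, points[1:]))

# Exhaustive search is fine for a handful of stops; larger problems need
# branch-and-bound, local search, or integer programming.
best = min(permutations(stops), key=route_length)
print(best, round(route_length(best), 2))
```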

Artificial Intelligence

Symbolic and explainable machine learning: Design of algorithms that automatically learn clear and understandable rules from data labeled as "yes" or "no." These rules are designed to work well with new, unseen data and to provide intuitive, transparent explanations for each classification decision. This is especially valuable in domains where straightforward, interpretable rules can accurately classify data, empowering users to trust and act on the insights with confidence.
Formal verification: Development of algorithms that provide a more reliable alternative to traditional testing. Given a system model (such as a technical device in simulation or a piece of source code) and a desired property (such as avoiding unsafe states), our approach can either identify a bug or mathematically prove that the property holds. This is especially useful for safety-critical systems, where the additional effort needed to achieve mathematical rigor is justified.
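A toy illustration of the verification workflow (a sketch using the z3 SMT solver; the bounds and the property are invented and stand in for a real system model): the solver searches for a counterexample to the property, and an unsatisfiable result amounts to a proof that the property holds.

```python
from z3 import Ints, Solver, And, Not, unsat

# Illustrative property: for integers x, y,
# if 0 <= x <= 100 and 0 <= y <= 100, then x + y <= 200.
x, y = Ints("x y")
pre = And(x >= 0, x <= 100, y >= 0, y <= 100)
prop = (x + y <= 200)

s = Solver()
s.add(And(pre, Not(prop)))          # search for a counterexample
if s.check() == unsat:
    print("property proved: no counterexample exists")
else:
    print("bug found:", s.model())
```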
Robot Motion Planning: Solving complex robotics tasks by generating effective, collision-free plans. For example, our algorithms can guide a team of warehouse robots to their goal positions without collisions or help an autonomous vehicle navigate from point A to point B while respecting vehicle geometry, avoiding obstacles, and optimizing performance criteria. This approach ensures that robotic systems operate smoothly, safely, and efficiently within their environments.
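In its simplest form, collision-free planning on a known map reduces to graph search (a sketch on a made-up occupancy grid; practical planners such as A* or sampling-based methods additionally handle robot geometry, kinematics, and cost functions):

```python
from collections import deque

# A toy occupancy grid: 0 = free cell, 1 = obstacle (hypothetical map).
grid = [[0, 0, 0, 1, 0],
        [1, 1, 0, 1, 0],
        [0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0]]

def bfs_path(start, goal):
    # Breadth-first search returns a shortest collision-free path on the grid,
    # or None if the goal is unreachable.
    rows, cols = len(grid), len(grid[0])
    parents = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and (nr, nc) not in parents:
                parents[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

print(bfs_path((0, 0), (4, 4)))
```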
Classification and regression problems: We specialize in training neural networks for both classification and regression tasks across diverse data types, including image classification, time series forecasting, and tabular data analysis. While this setup is standard, our expertise lies in performing nuanced data analysis to enhance the effectiveness of training, enabling us to tackle complex data scenarios with precision and accuracy.
Data augmentation: Enhancing datasets by generating new data points using tools like Stable Diffusion and other generative models. This approach can be particularly valuable for augmenting training data in image-based tasks. For example, in medical imaging, where obtaining diverse labeled images can be challenging, our data augmentation methods can produce realistic variations to improve model performance and robustness, even with limited initial data.
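A generative pipeline is beyond the scope of a snippet, so as a minimal stand-in for the idea, here is a sketch of classical augmentations applied to a synthetic grayscale image (the image and parameters are made up; each call produces a new training variant):

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image):
    # Classical augmentations as a lightweight stand-in for generative approaches:
    # random horizontal flip, small brightness shift, and mild Gaussian noise.
    out = image.copy()
    if rng.random() < 0.5:
        out = out[:, ::-1]                        # horizontal flip
    out = out + rng.uniform(-0.1, 0.1)            # brightness shift
    out = out + rng.normal(0.0, 0.02, out.shape)  # pixel noise
    return np.clip(out, 0.0, 1.0)

# A fake grayscale "scan" with values in [0, 1].
image = rng.random((64, 64))
variants = [augment(image) for _ in range(8)]
print(len(variants), variants[0].shape)
```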
Neural architecture search: Design of optimized neural network architectures tailored to specific learning problems, such as classification or regression, using labeled datasets. While traditional algorithms for Neural Architecture Search can be very time-intensive, we are developing techniques to accelerate the process, making it more feasible for real-world applications. For instance, in personalized medicine, finding the ideal neural network architecture can significantly enhance the accuracy of disease prediction models, even when computational resources are limited.
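To make the search idea tangible, here is a deliberately tiny sketch that evaluates a handful of architectures on a synthetic dataset (not our accelerated methods; real NAS explores far larger spaces and relies on shortcuts such as weight sharing, surrogate models, or early stopping):

```python
from itertools import product
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

# Toy stand-in for a labeled dataset.
X, y = make_classification(n_samples=400, n_features=20, random_state=0)

# A tiny architecture grid: number of hidden layers and their width.
depths, widths = (1, 2), (16, 64)
best = None
for depth, width in product(depths, widths):
    arch = (width,) * depth
    score = cross_val_score(
        MLPClassifier(hidden_layer_sizes=arch, max_iter=500, random_state=0),
        X, y, cv=3).mean()
    if best is None or score > best[0]:
        best = (score, arch)
print("best architecture:", best[1], "accuracy:", round(best[0], 3))
```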
Numerical and black-box optimization: Creation of advanced algorithms for large-scale numerical optimization and black-box optimization. Our algorithms frequently outperform commercial solutions and can be customized for specific problem types. Numerical optimization has countless applications throughout all areas of engineering, science, and the humanities.
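The black-box setting, where the solver sees only function values and no gradients, can be illustrated with a bare-bones (1+1) evolution strategy (a sketch on an invented objective, not one of our solvers):

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box(x):
    # Stand-in objective: the solver only queries function values.
    return np.sum((x - 3.0) ** 2) + np.sum(np.abs(x))

# A (1+1) evolution strategy with a simple step-size adaptation rule.
x, sigma = np.zeros(5), 1.0
fx = black_box(x)
for _ in range(2000):
    candidate = x + sigma * rng.normal(size=x.shape)
    fc = black_box(candidate)
    if fc < fx:
        x, fx = candidate, fc
        sigma *= 1.1      # success: widen the search
    else:
        sigma *= 0.98     # failure: narrow it
print(np.round(x, 2), round(fx, 3))
```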
Ethics of AI: Conducting thorough assessments of ethical issues surrounding AI, particularly in scenarios where AI makes safety-critical decisions. For example, our work can guide ethical frameworks for autonomous vehicles, ensuring that they make responsible and transparent decisions, taking into account public safety and societal impact.

Theoretical Computer Science

Logic: The study of correct reasoning and argumentation. Logicians use mathematical tools to analyse patterns of reasoning and develop formal systems for representing and evaluating arguments. Our work focuses on the mathematical properties of some of these formal systems, but we also examine how formal logical systems fit with pre-theoretical intuitions about correct reasoning.
Knowledge representation: The branch of artificial intelligence concerned with how knowledge can be represented symbolically using formal languages and manipulated in an automated way by reasoning programs. In particular, we focus on how different formal logic languages perform as tools for representing knowledge about different domains (such as knowledge, actions, and programs) and to what extent we can automate reasoning using these formalisms.
Graph theory: A branch of mathematics that studies graphs, abstract representations of collections of objects and the relationships between them. By studying the properties of these representations, graph theory helps us understand complex systems in areas ranging from social networks to computer networks. We focus on extremal graph theory, the branch of graph theory that studies conditions guaranteeing that a large graph contains a certain type of structure.
Random discrete structures: Randomly generated discrete structures such as graphs, permutations, trees, and hypergraphs have become ubiquitous in the applied sciences as abstract models of complex objects: networks, population dynamics, or physical systems. Focusing on the theoretical aspects of such models, we study random discrete structures via probabilistic concepts: laws of large numbers, asymptotic distributions, large deviations, concentration inequalities, and extreme values. We address questions about random discrete structures by exploring techniques from various areas of mathematics, including probability, information theory, statistical physics, analysis, and linear algebra.
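As a small numerical illustration of the concentration phenomena mentioned above (a sketch with arbitrary parameters), the degrees of an Erdős–Rényi random graph G(n, p) cluster tightly around their mean (n − 1)p:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample an Erdős–Rényi random graph G(n, p) via its adjacency matrix.
n, p = 2000, 0.01
upper = rng.random((n, n)) < p
adj = np.triu(upper, k=1)
adj = adj | adj.T

# Degrees concentrate around (n - 1) * p, as concentration inequalities
# predict for sums of independent indicators.
degrees = adj.sum(axis=1)
print("expected degree:", (n - 1) * p)
print("empirical mean:", degrees.mean(), "std:", round(degrees.std(), 2))
```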
Computational complexity: Computational complexity theory is a branch of computer science that studies the inherent difficulty of computational problems. It analyses the time and space resources required to solve problems using algorithms. By classifying problems according to their complexity, this theory helps us understand the limits of computation and identify problems that are practically impossible to solve efficiently. We focus on deepening our understanding of computational complexity using non-standard measures of complexity, such as energy complexity and other measures relevant to neurocomputing and knowledge compilation.

Statistical Modeling

Design of experiments: Developing experimental designs tailored to complex, non-standard scenarios, with a focus on managing nuisance factors and implementing effective blocking strategies. For instance, in agricultural trials, blocking can account for soil variation, while nuisance factors like weather conditions are controlled to isolate treatment effects. This approach is especially effective and reliable in environments with inherent variability.
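The agricultural example, in its simplest textbook form, is a randomized complete block design (a sketch with invented treatments and blocks): each treatment appears once per block, and the run order is randomized independently within each block.

```python
import numpy as np

rng = np.random.default_rng(0)

# Randomized complete block design: every treatment appears once per block,
# with the order randomized independently within each block (e.g., field plots).
treatments = ["control", "fertilizer A", "fertilizer B", "fertilizer C"]
blocks = ["block 1", "block 2", "block 3"]

design = {b: list(rng.permutation(treatments)) for b in blocks}
for block, layout in design.items():
    print(block, layout)
```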
Hierarchical modeling and modern mixed-effects approaches: Developing hierarchical and mixed-effects models to analyze data with multiple sources of uncertainty, allowing for detailed characterization and quantification of variability across different levels of a system. By estimating the size and impact of these variability components, it supports critical decision-making processes, incorporating both the estimates and their associated uncertainties. Both frequentist and Bayesian methods are employed, enabling flexible model structures that are well-suited to complex datasets, such as those in ecology or clinical studies where data are often nested or grouped.
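A minimal sketch of the frequentist side, fitting a random-intercept model to simulated grouped data with statsmodels (the group structure and effect sizes are invented for illustration):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated nested data: 20 groups (e.g., clinics), each with its own random intercept.
groups = np.repeat(np.arange(20), 15)
group_effect = rng.normal(0.0, 2.0, 20)[groups]   # between-group variability
x = rng.normal(size=groups.size)
y = 1.0 + 0.5 * x + group_effect + rng.normal(0.0, 1.0, groups.size)
data = pd.DataFrame({"y": y, "x": x, "group": groups})

# Random-intercept model: fixed effect of x plus a variance component per group.
model = smf.mixedlm("y ~ x", data, groups=data["group"]).fit()
print(model.summary())
```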
Bayesian modeling: Applying structured Bayesian models to analyze and predict complex physical or technological phenomena, enabling estimation of key quantities while simultaneously accounting for nuisance factors within a fully probabilistic framework. This approach leverages expert knowledge to incorporate informed priors, which improves estimation accuracy and stability when working with empirical data. For example, in engineering, Bayesian modeling can be used to predict system reliability under uncertainty, adjusting for variables like material variability and measurement error.
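The reliability example in its simplest conjugate form (a sketch; the prior and the failure counts are made up): an informative Beta prior encodes expert knowledge that failures are rare, and the posterior combines it with the observed data.

```python
from scipy import stats

# Estimating a component's failure probability with an informative prior.
# Expert knowledge (hypothetical): failures are rare, encoded as Beta(2, 50).
prior_a, prior_b = 2, 50

# Empirical data (made up): 3 failures observed in 120 trials.
failures, trials = 3, 120

# Conjugate update: the posterior is again a Beta distribution.
post = stats.beta(prior_a + failures, prior_b + trials - failures)
print("posterior mean failure probability:", round(post.mean(), 4))
print("95% credible interval:", [round(q, 4) for q in post.interval(0.95)])
```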
Spatial and spatio-temporal statistics: Modeling complex processes that vary across both space and time, often using advanced statistical techniques for precision and efficiency. Techniques such as Gaussian processes and structured models, implemented through computationally efficient methods like Integrated Nested Laplace Approximations (INLA) or Markov Chain Monte Carlo (MCMC) simulations, enable the analysis of intricate patterns. For example, these methods can track environmental changes over time, helping to identify spatial trends in climate data or predict disease spread across regions.
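At its core, Gaussian-process (kriging) prediction interpolates a spatial field from noisy observations. A minimal one-dimensional sketch with a squared-exponential covariance (the data and hyperparameters are invented; INLA or MCMC come into play for larger, structured models):

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(a, b, length=1.0, var=1.0):
    # Squared-exponential covariance between two sets of 1-D locations.
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

# Noisy observations of a spatial field at scattered locations.
x_obs = rng.uniform(0, 10, 30)
y_obs = np.sin(x_obs) + rng.normal(0, 0.1, x_obs.size)

# Gaussian-process posterior mean on a prediction grid.
x_new = np.linspace(0, 10, 5)
K = rbf(x_obs, x_obs) + 0.1**2 * np.eye(x_obs.size)
K_star = rbf(x_new, x_obs)
mean = K_star @ np.linalg.solve(K, y_obs)
print(np.round(mean, 2), np.round(np.sin(x_new), 2))  # prediction vs. true field
```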
Calibration of complex physical models against empirical data: Developing structured yet modular statistical models to align outputs from complex physical models with empirical data, enhancing predictive accuracy and model reliability. This calibration process not only improves predictions but also reveals biases or limitations within the original physical model. For instance, in climate modeling, calibration against observed temperature and precipitation data can help identify and correct systematic biases, leading to more accurate long-term climate forecasts.
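A stripped-down sketch of the idea (the "simulator", its bias, and the observations are all invented): a simple discrepancy model with a scale and offset is fitted by least squares, and the estimated offset exposes the simulator's systematic bias.

```python
import numpy as np

rng = np.random.default_rng(0)

# A stand-in "physical model": a simulator with a systematic warm bias.
def simulator(t):
    return np.sin(t) + 1.5          # biased output

# Empirical observations of the same quantity (noisy, unbiased).
t = np.linspace(0, 10, 50)
observed = np.sin(t) + rng.normal(0, 0.1, t.size)

# Calibrate a simple discrepancy model: observed ~ a * simulator(t) + b.
A = np.column_stack([simulator(t), np.ones_like(t)])
(a, b), *_ = np.linalg.lstsq(A, observed, rcond=None)
print(f"scale a = {a:.2f}, offset b = {b:.2f}")   # the offset absorbs the warm bias
calibrated = a * simulator(t) + b
print("RMSE before:", round(np.sqrt(np.mean((simulator(t) - observed) ** 2)), 3),
      "after:", round(np.sqrt(np.mean((calibrated - observed) ** 2)), 3))
```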
Hiring, promotion, and selection process analytics: Using data-driven approaches to evaluate and validate hiring, promotion, and selection processes, focusing on the effectiveness of selecting the most qualified candidates. This involves in-depth analysis of measurement errors, assessment of individual raters and rating criteria, and identification of areas for improvement in selection systems. For instance, by examining rater consistency and the predictive validity of selection criteria, organizations can enhance fairness and accuracy in their decision-making processes.
Learning analytics: Applying statistical modeling, text analysis, and machine learning to gain insights into learning outcomes and educational effectiveness. Item-level analyses can reveal common misconceptions among specific groups or demonstrate the impact of a particular study track. By identifying patterns and trends in learning behaviors and outcomes, we enable data-informed improvements in curriculum design and teaching strategies.
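As a minimal illustration of item-level analysis (a sketch on simulated responses; the item parameters and student abilities are invented), classical item difficulty and discrimination statistics can flag items that work poorly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 0/1 response matrix: 200 students x 5 test items,
# generated from a simple logistic (Rasch-like) model.
ability = rng.normal(size=200)
difficulty = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
prob = 1 / (1 + np.exp(-(ability[:, None] - difficulty[None, :])))
responses = (rng.random(prob.shape) < prob).astype(float)

total = responses.sum(axis=1)
for j in range(responses.shape[1]):
    rest = total - responses[:, j]                 # score on the remaining items
    p = responses[:, j].mean()                     # item difficulty (proportion correct)
    r = np.corrcoef(responses[:, j], rest)[0, 1]   # item discrimination (item-rest correlation)
    print(f"item {j + 1}: difficulty = {p:.2f}, discrimination = {r:.2f}")
```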