Parameter Inference

Biochemical reaction networks represent complex cellular regulatory mechanisms. These networks are typically analyzed using discrete stochastic simulation models, which may involve many reactions among a large number of chemical species, governed by highly uncertain parameters.

Given existing data pertaining to a biochemical reaction network, one is often interested in inferring the values of the model parameters that likely generated the data. The data may come from previously simulated models or from physical experiments. Approximate Bayesian Computation (ABC) is a proven approach to such parameter inference problems: it uses simulation models as a tool to find the region in parameter space corresponding to the least deviation from the given data.

The rejection sampling algorithm forms the basis of the ABC framework. Samples are drawn from a specified prior distribution, and subsequently simulated. The simulated responses are compared to existing data by means of a distance function and appropriate summary statistics. Samples that result in distance function values below a specified tolerance threshold are accepted, and the rest are rejected. The sampling algorithm proceeds until the desired number of accepted samples has been obtained. The inferred parameters are then reported as the mean parameter values over the accepted samples.
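The rejection sampling loop can be sketched in a few lines of Python. The Poisson simulator, uniform prior, and mean summary statistic below are illustrative stand-ins, not a specific model from our work:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy simulator: draws from a Poisson distribution whose rate
# is the unknown parameter; stands in for a stochastic biochemical simulator.
def simulate(theta, n=50):
    return rng.poisson(theta, size=n)

def summary(x):
    # Summary statistic: here simply the sample mean.
    return x.mean()

def distance(s_sim, s_obs):
    return abs(s_sim - s_obs)

# "Observed" data generated with a known ground-truth parameter.
true_theta = 4.0
s_obs = summary(simulate(true_theta))

def abc_rejection(prior_low, prior_high, eps, n_accept):
    accepted = []
    while len(accepted) < n_accept:
        theta = rng.uniform(prior_low, prior_high)  # draw from the prior
        s_sim = summary(simulate(theta))            # simulate and summarize
        if distance(s_sim, s_obs) < eps:            # accept if close enough
            accepted.append(theta)
    return np.array(accepted)

samples = abc_rejection(0.0, 10.0, eps=0.5, n_accept=200)
estimate = samples.mean()  # inferred parameter: mean of accepted samples
```

Tightening the tolerance `eps` improves the approximation of the posterior at the cost of a lower acceptance rate, which is exactly where the inference-time problem discussed below arises.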

Design choices such as the distance function, summary statistics, and acquisition function have a deep impact on solution quality. Furthermore, increasing problem complexity often leads to impractically long inference times with rejection sampling.

Our research explores methods to accelerate high-quality parameter inference by leveraging state-of-the-art methods from the fields of computational biology, machine learning, optimization and statistics. Some of our active research topics include investigating intelligent construction of priors, methods for automated large-scale summary statistic selection, and training fast local and global approximations or surrogate models of computationally expensive simulators.


Recent Publications:

HASTE: Hierarchical Analysis of Spatial and Temporal Data

The HASTE project, an SSF-funded project on computational science and big data, takes a holistic approach to new, intelligent ways of processing and managing very large amounts of microscopy images, to leverage the imminent explosion of image data from modern experimental setups in the biosciences. One central idea is to represent datasets as intelligently formed and maintained information hierarchies, and to prioritize data acquisition and analysis toward certain regions of the data based on automatically obtained metrics for usefulness and interestingness.

The project is a collaboration between the Wählby lab (PI) and the Hellander lab (co-PI), both at the Department of Information Technology, Uppsala University; the Spjuth lab (co-PI) at the Department of Pharmaceutical Biosciences, Uppsala University; the Nilsson lab at the Department of Biochemistry and Biophysics, Stockholm University and SciLifeLab; Vironova AB; and AstraZeneca AB.

Read more on the project webpage.


Simulation of stochastic multicellular systems

In multicellular systems, cells of different types interact in various ways, both mechanically and chemically, to regulate complex processes. There is a large computational gap between detailed models of sub-cellular, molecular processes in single cells, and models of multicellular systems comprising large numbers of interacting cells, such as bacterial colonies, tissue and tumors. In the lab we seek to bridge this gap. We also develop new simulation methodology for modeling specific biological systems together with collaborators.

Studying the scaling mechanisms of cartilage sheets

During embryo development, cartilaginous structures assemble that later densify into bone and form the basis of the embryo’s skeleton. Understanding the cellular dynamics responsible for the correct shaping and growth of the cartilage is hence highly important for modeling the full embryogenesis.

In this collaboration with the Adameyko lab at the Karolinska Institute, we study the key question of how mechanical interactions and individual behavior at the cellular level enable the accurate shaping of the cartilage sheet. To analyze the influence of different mechanisms in silico, we built a computational model of the cartilage sheet, combining a center-based model (CBM) as the mathematical framework for the cellular mechanics with rules governing cellular behavior based on biological observations. We validate the model against in vivo data, obtained from cell-lineage tracing performed by the Adameyko lab [1].
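As a toy illustration of the center-based idea (not the actual model used in the study), the sketch below treats cells as points interacting through pairwise forces: overlapping cells are pushed apart, nearby cells are pulled together. The linear-spring force law and the parameter names (`k`, `s`, `r_max`) are illustrative assumptions:

```python
import numpy as np

def pairwise_force(d, k=1.0, s=1.0, r_max=1.5):
    """Force magnitude between two cell centers at distance d.
    Negative = repulsion (overlap), positive = attraction, zero beyond r_max."""
    if d >= r_max:
        return 0.0      # beyond the interaction range
    return k * (d - s)  # linear spring around the rest length s

def relax_step(positions, dt=0.05):
    """One overdamped (force proportional to velocity) Euler step for all cells."""
    n = len(positions)
    forces = np.zeros_like(positions)
    for i in range(n):
        for j in range(i + 1, n):
            vec = positions[j] - positions[i]
            d = np.linalg.norm(vec)
            if d > 0:
                f = pairwise_force(d)
                forces[i] += f * vec / d
                forces[j] -= f * vec / d
    return positions + dt * forces

# Two overlapping cells (distance 0.5 < rest length 1.0) relax toward
# the rest separation of 1.0.
cells = np.array([[0.0, 0.0], [0.5, 0.0]])
for _ in range(200):
    cells = relax_step(cells)
```

A full CBM adds cell division, death, and behavior rules on top of this mechanical core, but the relaxation loop above is the basic building block.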

Recent publications

  1. Kaucka, M., Zikmund, T., Tesarova, M., Gyllborg, D., Hellander, A., Jaros, J., … & Dyachuk, V. (2017). Oriented clonal cell dynamics enables accurate growth and shaping of vertebrate cartilage. eLife, 6, e25902.
  2. Marketa Kaucka, Evgeny Ivashkin, Daniel Gyllborg, Tomas Zikmund, Marketa Tesarova, Jozef Kaiser, Meng Xie, Julian Petersen, Vassilis Pachnis, Silvia K. Nicolis, Tian Yu, Paul Sharpe, Ernest Arenas, Hjalmar Brismar, Hans Blom, Hans Clevers, Ueli Suter, Andrei S. Chagin, Kaj Fried, Andreas Hellander and Igor Adameyko (2016). Analysis of neural crest-derived clones reveals novel aspects of facial development. Science Advances, 2(8).


Smart systems for computational experiments

The integration between data, modeling and algorithms on the one hand, and the specification, coordination and execution of large-scale, data-intensive computational experiments on the other, poses a fundamental problem in all scientific disciplines relying on modeling and simulation. Today it is largely left to the modeler or engineer to manually tune models to fit data, choose algorithms, configure simulation workflows and analyze simulation results. This places a heavy burden on, e.g., a biologist who is mainly interested in how she can use modeling and simulation to learn new things about a biological system of interest. By utilizing machine learning and cloud computing, we are developing smart systems for scalable and efficient model exploration. An example of a workflow is shown in the image below, where a high-dimensional parameter sweep application is augmented with automated feature extraction and clustering, followed by training a model for classification based on user-defined labels (such as interesting or non-interesting realizations). With this model, the smart sweep application learns to explore interesting regions of the parameter space more efficiently.
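A minimal sketch of such a smart sweep, using a hypothetical one-parameter simulator and scikit-learn for the clustering and classification steps. The simulator, feature set and labels are illustrative assumptions, not the actual pipeline:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical stand-in for an expensive simulator: the qualitative behavior
# of the output time series (oscillating vs. flat) depends on the parameter.
def simulate(theta, t=np.linspace(0.0, 10.0, 100)):
    return np.sin(theta * t) if theta > 1.0 else np.zeros_like(t)

def extract_features(ts):
    # Simple summary features of a realization; a real pipeline would use
    # richer, possibly learned, features.
    return np.array([ts.mean(), ts.std(), ts.max() - ts.min()])

# 1. Parameter sweep: simulate over sampled parameters, extract features.
thetas = rng.uniform(0.0, 3.0, size=200)
X = np.array([extract_features(simulate(th)) for th in thetas])

# 2. Cluster realizations to group qualitatively similar behaviors.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# 3. The user labels realizations as interesting (oscillatory) or not;
#    here the labels are synthetic for illustration.
labels = (thetas > 1.0).astype(int)
clf = RandomForestClassifier(random_state=0).fit(X, labels)

# 4. The trained model steers further sweeps toward interesting regions.
candidates = rng.uniform(0.0, 3.0, size=1000)
p_interesting = clf.predict_proba(
    np.array([extract_features(simulate(c)) for c in candidates]))[:, 1]
promising = candidates[p_interesting > 0.5]
```

In a deployed system the candidate evaluation would be distributed across cloud workers, with the classifier deciding which realizations are worth simulating at full fidelity or storing.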

Software and applied cloud computing

Open source computational science and engineering (CSE) software is an integral part of methodology-oriented computational research and a priority in the group. The ongoing transformation of e-infrastructure to clouds calls for methods and workflows that promote horizontal scalability and elasticity for cloud applications, which in many cases requires rethinking how we best make use of computational resources. Other important questions include reproducibility and the handling of large and complex data.

Selected recent publications: 

Multiscale simulations of chemical kinetics

Life spans in size from small organisms consisting of single cells to complex organisms built up of billions of cells. Even single-cell organisms are challenging to fully understand and study: their function depends on a rich set of reaction networks. Important molecules inside a cell may exist in only a few copies, which makes them exceedingly difficult and costly to study.

The aim of our research is to develop algorithms and software that can assist in discoveries in basic science and medicine. We use mathematical models to describe how molecules move and interact inside cells, and then simulate these models to gain an understanding of how cells work. The multiscale nature of the problem is an interesting challenge. At the finest level we would consider single biomolecules and their exact molecular structure. There are models and methods for simulating systems at that level, but they are computationally expensive.

We couldn’t simulate the behavior of a large, complex system with such a method. Instead of considering the true structure of molecules, we could use a model that approximates them by spheres. At this level we can simulate medium-sized systems inside a cell on a time scale of seconds to minutes. An even more coarse-grained model doesn’t model individual molecules, but counts the number of molecules of different species in different parts of the domain. At this scale we can simulate bigger systems for hours.
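At this coarse-grained, well-mixed level, such models are typically simulated with Gillespie's stochastic simulation algorithm (SSA). Below is a minimal sketch for a toy birth-death process; the rate constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Gillespie SSA for a birth-death process:
#   0 -> X  at rate k1        (birth)
#   X -> 0  at rate k2 * #X   (death)
# This counts discrete molecule numbers rather than tracking individual
# molecules, which is what makes the coarse-grained level cheap.
def ssa_birth_death(k1=10.0, k2=0.1, x0=0, t_end=200.0):
    t, x = 0.0, x0
    history = []
    while t < t_end:
        a1 = k1        # propensity of the birth reaction
        a2 = k2 * x    # propensity of the death reaction
        a0 = a1 + a2
        t += rng.exponential(1.0 / a0)  # exponential time to next reaction
        if rng.uniform() * a0 < a1:     # pick which reaction fires
            x += 1
        else:
            x -= 1
        history.append((t, x))
    return history

traj = ssa_birth_death()
# At stationarity the copy number fluctuates around k1 / k2 = 100.
tail = [x for t, x in traj if t > 100.0]
mean_tail = sum(tail) / len(tail)
```

Fine-grained particle methods resolve each molecule's position as well, which is why coupling the two levels, as described below, is attractive.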

We have developed methods with the aim of coupling accurate fine-grained methods with less computationally expensive coarse-grained methods. In doing so, we obtain methods that are more accurate than the coarse-grained method, but still more efficient than the fine-grained method. These methods are called multiscale methods. By adding scales to our simulations, both more accurate models that incorporate some of the many complex internal structures vital to the function of the cell, and more coarse-grained models, we attempt to move beyond the boundaries of what is currently possible to simulate with state-of-the-art methods.

Recent publications: