In multicellular systems, cells of different types interact in various ways, both mechanically and chemically, to regulate complex processes. There is a large computational gap between detailed models of sub-cellular, molecular processes in single cells, and models of multicellular systems comprising large numbers of interacting cells, such as bacterial colonies, tissues and tumors. In the lab we seek to bridge this gap. We also develop new simulation methodology for modeling specific biological systems together with collaborators.
Studying the scaling mechanisms of cartilage sheets
During embryonic development, cartilaginous structures assemble that later ossify into bone and form the basis of the embryo’s skeleton. Understanding the cellular dynamics responsible for the correct shaping and growth of the cartilage is hence of central importance for modeling embryogenesis as a whole.
In this collaboration with the Adameyko lab at the Karolinska Institute we study the key question of how mechanical interactions and individual behavior at the cellular level enable the accurate shaping of the cartilage sheet. To analyze the influence of different mechanisms in silico, we built a computational model of the cartilage sheet, combining a center-based model (CBM) as the mathematical framework for the cellular mechanics with rules governing cellular behavior based on biological observations. We validate the model against in-vivo data obtained from cell-lineage tracing performed by the Adameyko lab.
Marketa Kaucka, Evgeny Ivashkin, Daniel Gyllborg, Tomas Zikmund, Marketa Tesarova, Jozef Kaiser, Meng Xie, Julian Petersen, Vassilis Pachnis, Silvia K Nicolis, Tian Yu, Paul Sharpe, Ernest Arenas, Hjalmar Brismar, Hans Blom, Hans Clevers, Ueli Suter, Andrei S Chagin, Kaj Fried, Andreas Hellander and Igor Adameyko (2016) Analysis of neural crest-derived clones reveals novel aspects of facial development, Science Advances 2(8).
The integration between data, modeling and algorithms on the one hand, and the specification, coordination and execution of large-scale, data-intensive computational experiments on the other, poses a fundamental problem in all scientific disciplines relying on modeling and simulation. Today it is largely left to the modeler or engineer to manually tune models to fit data, choose algorithms, configure simulation workflows and analyze simulation results. This is a big burden to place on e.g. a biologist who is mainly interested in how she can use modeling and simulation to learn new things about a biological system of interest. By utilizing machine learning and cloud computing, we are developing smart systems for scalable and efficient model exploration. An example of a workflow is shown in the image below, where a high-dimensional parameter sweep application is augmented with automated feature extraction and clustering, followed by training a classification model on user-defined labels (such as interesting or non-interesting realizations). With this model, the smart sweep application learns to explore interesting regions of the parameter space more efficiently.
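The sweep–extract–cluster–classify loop described above can be sketched end-to-end in a toy example. Here the “model” is just the logistic map, and the feature extraction, clustering and nearest-centroid classification are minimal pure-Python stand-ins (none of this is our production pipeline); the point is only to show the shape of such a workflow:

```python
import math

def simulate(r, n=300, x0=0.5):
    """Toy stand-in for a simulation: logistic map trajectory for parameter r."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs[100:]  # discard the initial transient

def features(traj):
    """Automated feature extraction: summarize a trajectory as (mean, std)."""
    m = sum(traj) / len(traj)
    var = sum((x - m) ** 2 for x in traj) / len(traj)
    return (m, math.sqrt(var))

def kmeans2(points, iters=20):
    """Minimal 2-means clustering of 2-D feature vectors."""
    c0, c1 = points[0], points[-1]
    for _ in range(iters):
        groups = ([], [])
        for p in points:
            d0 = (p[0] - c0[0]) ** 2 + (p[1] - c0[1]) ** 2
            d1 = (p[0] - c1[0]) ** 2 + (p[1] - c1[1]) ** 2
            groups[0 if d0 <= d1 else 1].append(p)
        if not groups[0] or not groups[1]:
            break
        c0 = tuple(sum(v) / len(groups[0]) for v in zip(*groups[0]))
        c1 = tuple(sum(v) / len(groups[1]) for v in zip(*groups[1]))
    return c0, c1

# 1. Parameter sweep over a 1-D grid.
grid = [2.0 + 0.1 * i for i in range(20)]   # r in [2.0, 3.9]
feats = [features(simulate(r)) for r in grid]

# 2. Cluster the feature vectors (stable vs. oscillatory dynamics).
c_stable, c_oscill = kmeans2(feats)

# 3. The user labels the clusters; new parameter points are then
#    classified by their nearest centroid.
def classify(r):
    f = features(simulate(r))
    d_s = (f[0] - c_stable[0]) ** 2 + (f[1] - c_stable[1]) ** 2
    d_o = (f[0] - c_oscill[0]) ** 2 + (f[1] - c_oscill[1]) ** 2
    return "interesting" if d_o < d_s else "not interesting"
```

In a real smart sweep the classifier would then steer where new, expensive simulations are launched, concentrating the computational budget on the regions labeled interesting.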
Open source computational science and engineering (CSE) software is an integral part of methodology-oriented computational research and a priority in the group. Due to the ongoing transformation of e-infrastructure to clouds, methods and workflows that promote horizontal scalability and elasticity for cloud applications are needed, and this may in many cases require rethinking how we best make use of computational resources. Other important questions include reproducibility and the handling of large and complex data.
Selected recent publications:
B. Drawert, A. Hellander, B. Bales, D. Banerjee, G. Bellesia, B.J. Daigle Jr., G. Douglas, M. Gu, A. Gupta, S. Hellander, C. Horuk, D. Nath, A. Takkar, S. Wu, P. Lötstedt, C. Krintz, L. R. Petzold (2016) Stochastic Simulation Service: Bridging the gap between the computational expert and the biologist, PLoS Comput. Biol. (to appear)
Life spans a vast range of sizes, from small organisms consisting of single cells to complex organisms built up of billions of cells. Even single-celled organisms are challenging to fully understand and study: their function depends on a rich set of reaction networks. Important molecules inside a cell may exist in only a few copies, which makes them exceedingly difficult and costly to study.
The aim of our research is to develop algorithms and software that can assist in discoveries in basic science and medicine. We use mathematical models to describe how molecules move and interact inside cells, and then simulate these models to gain an understanding of how cells work. The multiscale nature of the problem is an interesting challenge. At the finest level we would consider single biomolecules and their exact molecular structure. There are models and methods for simulating systems at that level, but they are computationally expensive.
We couldn’t simulate the behavior of a large, complex system with such a method. Instead of considering the true structure of molecules, we could use a model that approximates them by spheres. At this level we can simulate medium-sized systems inside a cell on a time scale of seconds to minutes. An even more coarse-grained model doesn’t model individual molecules, but counts the number of molecules of different species in different parts of the domain. At this scale we can simulate bigger systems for hours.
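To make the most coarse-grained level concrete, here is a minimal, self-contained sketch (plain Python, not our actual solvers) of Gillespie's stochastic simulation algorithm for a birth-death process, where only the copy number of a single species is tracked rather than individual molecules:

```python
import random

def ssa_birth_death(k_birth=10.0, k_death=0.1, x0=0, t_end=100.0, seed=1):
    """Exact stochastic simulation (Gillespie SSA) of a birth-death process:
    0 -> X with rate k_birth, X -> 0 with rate k_death * x.
    Only the copy number x is tracked, not individual molecules."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    history = [(t, x)]
    while t < t_end:
        a_birth = k_birth            # propensity of a birth event
        a_death = k_death * x        # propensity of a death event
        a_total = a_birth + a_death
        # The time to the next reaction is exponentially distributed.
        t += rng.expovariate(a_total)
        if t >= t_end:
            break
        # Choose which reaction fires, weighted by the propensities.
        if rng.random() * a_total < a_birth:
            x += 1
        else:
            x -= 1
        history.append((t, x))
    return history

hist = ssa_birth_death()
# The copy number fluctuates around the stationary mean k_birth / k_death = 100.
```

Each trajectory is one possible realization of the chemical master equation; averaging many such runs recovers the statistics that deterministic rate equations miss at low copy numbers.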
We have developed methods with the aim of coupling accurate fine-grained methods to less computationally expensive coarse-grained methods. In doing so, we obtain methods that are more accurate than the coarse-grained method, but still more efficient than the fine-grained one. These are called multiscale methods. By adding scales to our simulations, both more accurate models that incorporate some of the many complex internal structures vital to the function of the cell, and even more coarse-grained models, we attempt to move beyond the boundaries of what is currently possible to simulate with state-of-the-art methods.
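To illustrate the idea of coupling regimes, here is a deliberately simplified hybrid scheme for a single birth-death species (a toy sketch for illustration, not one of our published methods): the species is simulated with the exact SSA while its copy number is low, where fluctuations matter, and with a cheap deterministic rate-equation step once the copy number is high.

```python
import random

def hybrid_birth_death(k_birth=200.0, k_death=0.1, x0=0, t_end=50.0,
                       threshold=100, dt=0.01, seed=1):
    """Toy hybrid scheme: exact SSA below `threshold` copies, a
    deterministic Euler step of dx/dt = k_birth - k_death * x above it."""
    rng = random.Random(seed)
    t, x = 0.0, float(x0)
    while t < t_end:
        if x < threshold:
            # Stochastic regime: one exact SSA step.
            a_birth, a_death = k_birth, k_death * x
            t += rng.expovariate(a_birth + a_death)
            if rng.random() * (a_birth + a_death) < a_birth:
                x += 1
            else:
                x -= 1
        else:
            # Deterministic regime: one Euler step of the rate equation.
            x += dt * (k_birth - k_death * x)
            t += dt
    return x

x_final = hybrid_birth_death()
# x_final approaches the stationary mean k_birth / k_death = 2000,
# but the low-copy-number transient retains its stochasticity.
```

Real multiscale methods face the harder questions this sketch sidesteps, such as how to hand states back and forth without biasing the statistics and how to partition many interacting species between regimes on the fly.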
We are looking for a talented individual to join our efforts to create smart and scalable cloud services that support simulation-driven scientific discovery via large-scale computational experiments such as parameter sweeps. This is a classic and very important problem that we will approach in new ways, leveraging recent advances in cloud computing, data-intensive computing and machine learning. See the full advertisement here (Deadline Sept. 1):
To make it easy to try StochSS, our software for rapid model development and simulation of stochastic regulatory networks, we are now providing it as a service on http://try.stochss.org. To use it, simply follow the link to set up your account.
Since this only requires a modern browser and a valid email address (no software installation), we hope that this service will help experienced modelers evaluate the software, and importantly, that it will reduce the barrier for new modelers to explore the possibilities of stochastic simulation in systems biology.
After testing StochSS, if you think it will be useful in your research, there are multiple options for you to use it on your own resources. The simplest way to get started is to download the binary package (uses Docker).
Our trial server is deployed in the SNIC Science Cloud. If you would like to provide StochSS as a service for your research group or for a distributed collaboration, you can do this easily on your own servers, or with another cloud infrastructure provider such as Amazon EC2. MOLNs, another member of the StochSS suite of tools, can help you configure and deploy an identical setup. Please do not hesitate to reach out to us if you need help with this process.
Many of you may also want to work with the solvers directly in a programming environment. All of the tools powering StochSS are also available as standalone libraries:
PyURDME (Python API for spatial stochastic modeling and simulation)
GillesPy (Python API for well-mixed simulations, based on StochKit2)
In addition, if you have access to cloud infrastructure and would like to work in a pre-configured environment powered by a Jupyter Notebook frontend and interactive parallel computing, you should check out MOLNs:
MOLNs: Cloud platform/orchestration framework for large-scale computational experiments such as ensembles and parameter sweeps, backed by Jupyter and IPython Parallel.
Last week brought some great news: I have been awarded the Göran Gustafsson Prize 2016 from the Gustafsson Foundation (KTH/UU). In the proposed project, titled Smart Services for Scientific Discovery, we will look into new and more productive ways to combine simulation software and cloud computing infrastructure to build intelligent applications, e.g. for exploring an underlying model’s essential behavior.