We develop and apply algorithms from artificial intelligence, machine learning, and multivariate statistics to our customers' data. If there is a signal, we will find the right tools to bring it out, even if this requires inventing novel algorithms and methods.
We have a number of state-of-the-art (and beyond) methods, both supervised and unsupervised, which we can apply out of the box to get a first understanding of the data. Since every project is different, we extend our toolbox to provide whatever specific functionality is needed.
We implement our core software in C++ and offer custom graphical user interface applications written in Java for rapid data evaluation. We also offer support for installing, deploying, or embedding runtime software for use in the field.
Our specialty is the analysis of biomedical data sets, including all kinds of biomarkers as well as genomic and epigenetic data. For complex diseases, our algorithms do not require perfect data labeling: unsupervised methods can correct for uncertainties in diagnosis. Moreover, for heterogeneous diseases, we can identify sub-types and sub-classes.
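As a toy illustration of how unsupervised sub-typing works (not our production C++ code), the sketch below runs a textbook k-means on synthetic "patients" described by two invented biomarker values; the data, group sizes, and initialization are made up for the demo:

```python
# A toy sketch of sub-type discovery: a textbook k-means grouping
# synthetic "patients" by two hypothetical biomarker values (all data
# here is invented for illustration).

def kmeans2(points, iters=20):
    # Deterministic initialization for the sketch: first and last sample.
    centers = [points[0], points[-1]]
    clusters = [[], []]
    for _ in range(iters):
        clusters = [[], []]
        for p in points:
            # Assign each sample to the nearer of the two centers.
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            clusters[d.index(min(d))].append(p)
        # Move each center to the mean of its cluster.
        for i, cl in enumerate(clusters):
            if cl:
                centers[i] = tuple(sum(v) / len(cl) for v in zip(*cl))
    return centers, clusters

# Two synthetic patient groups with distinct biomarker profiles.
group_a = [(1.0 + 0.1 * i, 2.0) for i in range(5)]
group_b = [(8.0 + 0.1 * i, 9.0) for i in range(5)]
centers, clusters = kmeans2(group_a + group_b)
print([len(c) for c in clusters])  # → [5, 5]: the two sub-types separate
```

No diagnosis labels are used anywhere: the grouping emerges from the biomarker values alone, which is why label noise in the diagnosis does not hurt this kind of analysis.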
Our algorithms can deal with large data sets, such as genome-wide population re-sequencing or genome-wide analysis of DNA methylation changes. "Big Data", however, is not a requirement: even smaller data sets can hold the key to improved diagnosis or disease sub-typing. What matters here is to avoid over-training the models, and to adapt and improve them as more data is generated in the field.
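One standard way to guard against over-training on a small data set is to pick model complexity on a held-out validation split rather than on training fit. The sketch below illustrates this with a minimal k-nearest-neighbour classifier on invented data containing one deliberately mislabelled training sample; it is a generic textbook technique, not a description of our specific pipeline:

```python
# A toy sketch of avoiding over-training: model complexity (here k in a
# k-nearest-neighbour classifier) is chosen on a held-out validation
# split. Data and labels are invented; one training label is noisy.

def knn_predict(train, x, k):
    # Vote among the k training samples closest to x (squared Euclidean).
    nearest = sorted(train, key=lambda s: sum((a - b) ** 2 for a, b in zip(s[0], x)))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

def accuracy(train, test, k):
    return sum(knn_predict(train, x, k) == y for x, y in test) / len(test)

train = [((float(i), 0.0), 0) for i in range(5)]          # class 0 at x = 0..4
train += [((float(i) + 10.0, 0.0), 1) for i in range(5)]  # class 1 at x = 10..14
train.append(((10.6, 0.0), 0))                            # mislabelled sample
valid = [((i + 0.5, 0.0), 0) for i in range(5)]
valid += [((i + 10.5, 0.0), 1) for i in range(5)]

# k = 1 memorises the label noise; a larger k averages it away.
best_k = max([1, 3, 5], key=lambda k: accuracy(train, valid, k))
print(best_k, accuracy(train, valid, best_k))  # → 3 1.0
```

The same principle carries over as data accumulates: re-validating on fresh field data shows when a model should be made more (or less) complex.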
In addition to "static" analyses that are trained offline, we offer interactive systems that adapt and learn "as you go". These systems choose the next best step based on the interaction history and react to intermediate and delayed outcomes, which makes them ideal for deployment in scenarios for which static training data sets do not exist.
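A minimal example of this learn-as-you-go behaviour is an epsilon-greedy bandit, which picks the next action from the reward history observed so far. This is one simple textbook instance of the idea, not our deployed system; the actions and payoff probabilities below are made up for the simulation:

```python
# A toy sketch of an interactive system that learns "as you go": an
# epsilon-greedy bandit choosing the next action from its own history.
# Actions and payoff probabilities are invented for the demo.
import random

class EpsilonGreedy:
    def __init__(self, actions, epsilon=0.1, seed=0):
        self.rng = random.Random(seed)
        self.epsilon = epsilon
        self.counts = {a: 0 for a in actions}    # times each action was tried
        self.totals = {a: 0.0 for a in actions}  # reward accumulated per action

    def next_action(self):
        # Mostly exploit the current best estimate, occasionally explore.
        if self.rng.random() < self.epsilon:
            return self.rng.choice(list(self.counts))
        return max(self.counts, key=lambda a: self.totals[a] / self.counts[a]
                   if self.counts[a] else float("inf"))

    def record(self, action, reward):
        self.counts[action] += 1
        self.totals[action] += reward

# Simulated environment: action "b" pays off more often than "a".
payoff = {"a": 0.2, "b": 0.8}
env = random.Random(1)
agent = EpsilonGreedy(["a", "b"])
for _ in range(500):
    act = agent.next_action()
    agent.record(act, 1.0 if env.random() < payoff[act] else 0.0)

print(agent.counts)  # "b" comes to dominate as the history accumulates
```

The same loop structure handles delayed outcomes: rewards are simply recorded whenever they arrive, and the next choice reflects everything seen so far.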