Recent technological advances in genomics now allow biological data to be produced at unprecedented tera- and petabyte scales. Yet extracting useful knowledge from these voluminous data presents a significant challenge to the scientific community. Efficient mining of vast and complex data sets for biomedical research depends critically on the seamless integration of clinical, genomic, and experimental information with prior knowledge about genotype–phenotype relationships accumulated in a plethora of publicly available databases. Furthermore, such experimental data should be accessible to the variety of algorithms and analytical pipelines that drive computational analysis and data mining. Translational projects require sophisticated approaches that coordinate and perform the various analytical steps involved in extracting useful knowledge from accumulated clinical and experimental data in an orderly, semiautomated manner. This presents a number of challenges, such as (1) high-throughput data management, including data transfer, data storage, and access control; (2) scalable computational infrastructure; and (3) analysis of large-scale multidimensional data for the extraction of actionable knowledge.
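The idea of coordinating analytical steps in an orderly, semiautomated manner can be illustrated with a minimal sketch (not taken from the chapter; the step names and functions below are hypothetical stand-ins for real genomic-analysis stages): a runner applies named steps in sequence, each consuming the previous step's output, while recording provenance for later auditing.

```python
# Minimal sketch of a semiautomated analysis pipeline: ordered steps,
# each transforming the previous step's output, with a provenance log.

def run_pipeline(data, steps):
    """Apply each (name, function) step in order; return result and log."""
    provenance = []
    for name, step in steps:
        data = step(data)
        provenance.append(name)  # record which steps touched the data
    return data, provenance

# Hypothetical steps standing in for real analysis stages.
steps = [
    ("quality_filter", lambda reads: [r for r in reads if len(r) >= 4]),
    ("normalize",      lambda reads: [r.upper() for r in reads]),
]

result, log = run_pipeline(["acgt", "ac", "ttaga"], steps)
# result == ["ACGT", "TTAGA"]; log == ["quality_filter", "normalize"]
```

In practice such coordination is handled by workflow systems, but the essential pattern is the same: explicit step ordering plus a record of what was done to the data.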
Sulakhe, D., Balasubramanian, S., Xie, B., Berrocal, E., Feng, B., Taylor, A., Chitturi, B., Dave, U., Agam, G., Xu, J., Börnigen, D., Dubchak, I., Gilliam, T. C., and Maltsev, N., "High-Throughput Translational Medicine: Challenges and Solutions", in Systems Analysis of Human Multigene Disorders, N. Maltsev, A. Rzhetsky, and T. C. Gilliam, Eds. New York, NY: Springer New York, 2014, pp. 39–67.