Title: Big data, big future

With our increasing capabilities in research techniques and tools comes an inevitable increase in the amount of data and the complexity of datasets. But how do we cope with this plethora of data? The availability of big data has the potential to transform many areas of the life sciences, which have a long history of dealing with large quantities of data. Advances in experimental capabilities have greatly increased the amount of data that must be stored and analyzed, driving innovation in data handling, storage, management and visualization. When we think of big data in the life sciences, omics data – genomics, transcriptomics, proteomics, metabolomics, and so on – usually comes to mind. Researchers often fill terabytes, even petabytes, of storage with omics data; to put this into perspective, 1 petabyte of data is equivalent to roughly 212,000 DVDs. Technological advances have also made it possible to collect and merge large, heterogeneous datasets from different sources, including genome sequences, electronic health records and wearables. But what do we do with all these data?
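As a quick sanity check on the DVD comparison above, the arithmetic can be sketched in a few lines of Python. The 4.7 GB single-layer DVD capacity and the decimal convention (1 PB = 1,000,000 GB) are assumptions for illustration, not figures stated in the article.

```python
# Rough check of the "1 petabyte ~ 212,000 DVDs" comparison.
PETABYTE_GB = 1_000_000   # 1 PB in GB, using decimal (SI) units - assumption
DVD_CAPACITY_GB = 4.7     # standard single-layer DVD capacity - assumption

dvds_per_petabyte = PETABYTE_GB / DVD_CAPACITY_GB
print(f"1 PB is roughly {dvds_per_petabyte:,.0f} DVDs")  # prints ~212,766
```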

  • Application: Sequencing
  • Subjects: HighTech / Internet / AI; Data Analytics; Data Interpretation
  • Publication: BioTechniques
  • Publication Date: 6/1/2020
  • Text Descriptor: Article