
Biodiversity analysis: data architecture

The data architecture lets scientists establish how data are processed, stored and used, ensuring their protection at all times.


BIOMA activities generate a large volume of data that is not restricted to specific research lines. Most lines and activities produce data that are naturally interrelated. For example, subprojects operating in the same geographic area will produce georeferenced records linked together by location.
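A minimal sketch of how such location-based linking might work: records from two hypothetical subprojects are bucketed into lat/lon grid cells so that observations taken in the same area can be joined. The field names and grid size are illustrative assumptions, not BIOMA's actual schema.

```python
from collections import defaultdict

# Hypothetical records from two subprojects sharing a study area;
# the field names ("lat", "lon", etc.) are illustrative only.
bird_counts = [
    {"lat": 37.42, "lon": -6.01, "species": "Ciconia ciconia", "count": 4},
    {"lat": 37.88, "lon": -6.95, "species": "Aquila adalberti", "count": 1},
]
soil_samples = [
    {"lat": 37.41, "lon": -6.02, "ph": 7.1},
    {"lat": 36.50, "lon": -5.30, "ph": 6.4},
]

def grid_cell(record, cell_deg=0.1):
    """Bucket a record into a lat/lon grid cell of cell_deg degrees."""
    return (round(record["lat"] / cell_deg), round(record["lon"] / cell_deg))

# Index both datasets by grid cell, then keep cells where they co-occur.
cells = defaultdict(lambda: {"birds": [], "soil": []})
for r in bird_counts:
    cells[grid_cell(r)]["birds"].append(r)
for r in soil_samples:
    cells[grid_cell(r)]["soil"].append(r)

linked = {c: v for c, v in cells.items() if v["birds"] and v["soil"]}
```

Only the first bird record and the first soil sample fall in the same cell, so `linked` contains a single matched location.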

Crossing data between separate subprojects, in particular by correlating data series, can generate new knowledge. This line is therefore configured as a crosscutting service whose goal is to organize, coherently and consistently, the research data generated in the separate subprojects, both to facilitate their work and to use the aggregated data for basic research that can only be carried out when sufficient amounts of dispersed data are available (data-driven science).
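As a sketch of what correlating series from two subprojects could look like, the snippet below computes a Pearson correlation coefficient between two invented monthly series (rainfall and butterfly abundance at the same sites). The data are made up for illustration; only the standard library is used.

```python
from math import sqrt

# Illustrative monthly series from two hypothetical subprojects.
rainfall  = [80.0, 65.0, 50.0, 30.0, 20.0, 10.0]
abundance = [12.0, 15.0, 21.0, 30.0, 34.0, 40.0]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(rainfall, abundance)  # strongly negative for these series
```

A strong correlation between series that no single subproject holds on its own is exactly the kind of result that only becomes visible once data are aggregated.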

Efficiently managing and exploiting large volumes of data requires state-of-the-art research in data management. This involves researching, using and, where necessary, developing data storage and display applications and standards, along with management and security protocols. Data storage, a research area in itself, is implemented as a service for all BIOMA members.

Several researchers in the group have long worked on the development of data architectures and on the exploitation of massive biodiversity data repositories, as well as on computational biodiversity. Their task is to set up the service for the whole group, creating a general database and a management infrastructure, thus facilitating interaction between projects.

Among other things, the data architecture provides:

  • Data warehouse design and warehousing strategy (Data Warehousing).

  • Development of a computational biodiversity infrastructure.

  • Control of the information flow along the entire data generation path, from field experiments to final analysis, with special attention to data verification and maintenance.

  • Safety and reliability of data sets through quality control and error checking.

  • Access management.

  • Exploitation tools, including the development of methods to facilitate data visualization, distribution, access and analysis.

  • Data mining: efficient retrieval of information from external sources and archive indexing.

  • Organization of a general data repository for the research group.

  • Implementation of tools for exchanging data and results between subprojects and participants.

  • Quality control and results monitoring for the entire group.

  • Data-driven science based on the consolidated volume of data.
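The quality-control and error-checking step listed above can be sketched as a validation pass over incoming records before they enter the shared repository. The specific checks (required fields, coordinate bounds) and field names are illustrative assumptions, not BIOMA's actual rules.

```python
# Validation rules assumed for illustration only.
REQUIRED_FIELDS = {"species", "lat", "lon", "date"}

def validate(record):
    """Return a list of error messages for one record (empty if valid)."""
    errors = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
    lat, lon = record.get("lat"), record.get("lon")
    if lat is not None and not -90 <= lat <= 90:
        errors.append(f"latitude out of range: {lat}")
    if lon is not None and not -180 <= lon <= 180:
        errors.append(f"longitude out of range: {lon}")
    return errors

records = [
    {"species": "Lynx pardinus", "lat": 37.9, "lon": -6.6, "date": "2023-05-12"},
    {"species": "Lynx pardinus", "lat": 137.9, "lon": -6.6, "date": "2023-05-13"},
    {"lat": 37.9, "lon": -6.6, "date": "2023-05-14"},
]

# Keep only the records that failed, indexed by position.
report = {i: errs for i, r in enumerate(records) if (errs := validate(r))}
```

Here the first record passes, the second fails the latitude bound, and the third is missing its species field, so the report flags exactly two records.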
