Wednesday, 23 September 2009

EIDCSR workshop on 14 October

The first EIDCSR project workshop is taking place on 14 October, more details below:


Date and location

14 October at Rewley House, 1 Wellington Square, Oxford OX1 2JA

The event will start at 10.30 and finish with lunch at 13.00


Description

This workshop is organized as part of the dissemination activities of the JISC-funded EIDCSR Project. The aim of the workshop is to hear about proven practice in selected data management areas identified as challenging for researchers through the EIDCSR audit and requirements analysis exercise. Whilst the EIDCSR Project is addressing the requirements of researchers working within medical and life sciences, the event is likely to be of interest to those working in, or supporting, other disciplinary areas.

The expected audience includes researchers who generate data in labs and computing simulations and staff from service units with an interest in research data management and curation issues.


Outcomes

Participants in the workshop will have the opportunity to learn about, and contribute to discussion of, different approaches to ensuring the flow of data between laboratory and in silico experimentation. In particular, the workshop will discuss:

* methods for the capture, storage and reuse of metadata in the laboratory;

* lifecycles integrating wet lab and in silico experimental data;

* the delivery and visualisation of large-scale data.

Programme

Speakers will include:


Alan Garny, Department of Physiology, Anatomy and Genetics, University of Oxford - Alan will discuss his research group's data management workflow and the challenges it presents.

Brian Brooks, Unilever Cambridge Centre for Molecular Informatics - Brian will talk about their Chemical Laboratory Repository In/Organic Notebooks (CLARION) Project.

Angus Whyte, Digital Curation Centre - Angus will share the experiences from the DCC SCARP Project on data management best practice.

Booking

To book a place please email eidcsr@oucs.ox.ac.uk

Wednesday, 9 September 2009

Data audit and requirements analysis

One of the initial exercises conducted as part of the EIDCSR project was an audit and requirements analysis, based on the Data Audit Framework (DAF), to document the data practices and assets of the research groups participating in the project and to capture their requirements for tools and services. This exercise took place over the summer, and the report describing the results will be available soon.

As I explained in a previous post, these research groups collaborate as part of a BBSRC grant to conduct research on ventricular architecture, using novel techniques such as Magnetic Resonance Imaging (MRI) and Diffusion Tensor MRI (DTMRI) and combining them with traditional histological techniques, as well as with image processing, data registration, and computational models for bio-mathematical simulation.

Their research workflow is well described by Plank et al. (2009)* in the diagram below. It starts with the generation of complementary image stacks, which are then processed in different ways to generate meshes that can be used for computational modelling of the heart.


This complex process produces the following data outputs:
  • Histology data: large high resolution images produced by microscopes in the lab representing sections of a heart.
  • MRI and DTMRI data: stacks of TIFF images resulting from the raw data produced by the magnet in the lab.
  • Segmentation data: outputs resulting from applying image segmentation techniques to the histology and MRI data.
  • Mesh data: volumetric model produced from segmented data in a mesh generator.
  • Simulations: electrophysiological simulation using the mesh data and other input files that define the models and the parameters.
  • 3D heart atlas: an average representation of the heart ventricles obtained from the histology and MRI data.
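To make the dependencies between these outputs concrete, the pipeline above can be sketched as a small provenance graph, where each output records which upstream outputs it was derived from. This is purely an illustrative sketch: the class and field names are hypothetical and are not part of any actual EIDCSR tooling.

```python
from dataclasses import dataclass, field

@dataclass
class DataOutput:
    """One data output in the workflow, with links to its upstream sources."""
    name: str
    description: str
    derived_from: list = field(default_factory=list)  # upstream outputs

# The outputs listed above, wired together in pipeline order.
histology = DataOutput("histology", "high-resolution section images from the lab")
mri = DataOutput("mri_dtmri", "stacks of TIFF images from the magnet")
segmentation = DataOutput("segmentation", "segmented histology/MRI data",
                          derived_from=[histology, mri])
mesh = DataOutput("mesh", "volumetric model built from the segmented data",
                  derived_from=[segmentation])
simulation = DataOutput("simulation", "electrophysiological simulation on the mesh",
                        derived_from=[mesh])

def lineage(output):
    """Walk the provenance links back to the raw lab data."""
    names, stack = [], [output]
    while stack:
        current = stack.pop()
        names.append(current.name)
        stack.extend(current.derived_from)
    return names

print(lineage(simulation))  # simulation -> mesh -> segmentation -> raw data
```

Recording even this much provenance per output is the kind of metadata the project's requirements analysis (below) identifies as currently living only in printed lab-books.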
And the research group requirements can be grouped under three themes:
  • Secure storage: all the data outputs presented above are stored on a combination of desktop computers and a project NAS system, and researchers recognise the need to keep the data safe through appropriate and resilient back-up procedures.
  • Data transfer: the histology data are large and need to be accessed by researchers within the groups and beyond.
  • Metadata: currently the provenance metadata for some of the data presented above is recorded in printed lab-books. This information is crucial when making the data available to others and is required when publishing articles based on the data. Capturing it digitally could also improve searching within the NAS system.
*Gernot Plank, Rebecca A.B. Burton, Patrick Hales, Martin Bishop, Tahir Mansoori, Miguel O. Bernabeu, Alan Garny, Anton J. Prassl, Christian Bollensdorff, Fleur Mason, Fahd Mahmood, Blanca Rodriguez, Vicente Grau, Jürgen E. Schneider, David Gavaghan, and Peter Kohl. Generation of histo-anatomically representative models of the individual heart: tools and application. Phil Trans R Soc A 2009; 367: 2257–2292.
