Category Archives: Requirements

For which RDM activities will UK research funders pay?

Much RDM activity has been stimulated by requirements and expectations emerging from the main UK research funders, as usefully described in the DCC’s funder policy table.  But do funders understand what they are really asking researchers and institutions to do with their data, and how much sustainable research data management activities actually cost?  The April 2013 DCC Research Data Management Forum was a free and timely opportunity for Jisc MRD projects, DCC Institutional Engagement partners and other interested people to directly quiz representatives of some of the main UK research funders.  Graham Pryor of the DCC has written a blogpost over at the DCC website, which lays out the day’s discussion and provides funders’ responses to queries about RDM costs.

For me, some of the main take-home messages included:

– More cooperation and standardisation across research funder guidance to bidders, policy and guidance to peer review panel members would be sensible and useful for the sector as a whole.  That is to say, harmonisation of language, approach and policy would benefit bidding researchers and their institutions, but would also help funders work in a more effective and interoperable way, which has to be advantageous to them too.

– Collaborative measures by HEIs and researchers should be considered.  Who else in your area would be a good partner?  Not just for a research bid, but for shared services such as storage?  Can you achieve an economy of scale by partnering with another institution in your geographical or research area?

– Use of existing tools and services should be considered as a priority: HEIs doing their own development should be a last resort.  Anthony Beitz, amongst others, has argued this persuasively before.  Initiatives like the DCC can help with suggestions and descriptions of tools.

– We need to move forward with pragmatic measures for what researchers need right now, whilst not losing sight of modelling longer-term sustainable strategies.

So far, so sensible.  But a take-home worry for me was the importance placed again and again by funders on the key role of the peer review panel.  We don’t know how AHRC or ESRC deal with this because neither of them was present, but the funders who were there rely on their peer review panels to make decisions about ‘the science’ (for which I mentally substitute ‘the research’) and also, in the case of most of the funders present, the data management plan or statement.

Given that those of us who work solely on research data management, digital curation and digital preservation are still in the process of working this stuff out, how do we know whether peer review panel members have sufficient and appropriate knowledge of these fields to responsibly discharge their duty when judging the RDM plans of other researchers?  One funder explained that they expect a DMP to be in place at the point of bidding but that these are not peer reviewed “because peer reviewers are unlikely to have the knowledge required.”  What of the other funders?  It strikes me that knowing the limits of panel expertise – the ‘known unknowns’ – is by far the most responsible approach to any type of peer review process.

In addition to quality or level of knowledge, I’m also interested in consistency of standards applied.  One funder openly admitted on the day that he is aware that there is a troubling amount of variability in the approach to both the creation and assessment of data management plans in bids.  Other comments indicated that some bids have their data management plans or statements specifically reviewed and some don’t.

Peer review panels are largely comprised of senior researchers (by which I mean time in the field as opposed to age).  The Jisc MRD programme, like many other initiatives, often focuses training and awareness-raising efforts on early career researchers and postgraduate students, the idea being that they will take their good practice up with them through their academic careers.  But what do we do until then?  Even if we can rely on current ECRs and PGs to be well and consistently trained, we’re still in a situation where all bidding, for the next twenty-odd years, is being reviewed by senior researchers who have not been specifically targeted by RDM training and awareness-raising efforts.

A solution?  As fellow evidence gatherer Jonathan Tedds suggested in the discussion, we can learn from areas such as bidding for telescope time in astronomy, where peer review necessarily includes someone who is specifically there to provide their technical knowledge.  For consistency, research funders seem to need the presence of or input from an appropriate external body.  So this seems to be an area where the DCC and research funders can work together, for example, to produce consistent and approachable, up-to-date guidance for peer review panel members, and to ensure someone who specialises in digital curation as applied to research data management is included in their peer review panels.

Just my suggestions.  Your comments are, as always, welcome below.

Laura Molloy

e: laura.molloy AT

‘Institutional Policies, Strategies, Roadmaps’ session at JISC MRD and DCC IE workshop, Nottingham

The ‘Components of Institutional Research Data Services’ event on 24 October 2012 brought together the ongoing JISC MRD infrastructure projects as well as the institutions with which the Digital Curation Centre is running an ‘institutional engagement’.

The ‘Institutional policies, strategies, roadmaps’ session (session 1A) reflected this nicely, with two speakers from MRD projects ‘Admire’ and ‘Research360’, and two from DCC IEs, St Andrews and Edinburgh.

What is working?

Tom Parsons from Nottingham’s Admire project described further connections across this set of institutions, acknowledging the 2011 aspirational Edinburgh data policy (more on this later) as the inspiration for Nottingham’s own.  He underlined the importance of being aware not only of the requirements of the major funders at your institution but also of the institutional policies which already exist: these need to be found, understood and worked with to give researchers and support staff a coherent message about RDM.  This can be done, as he noted, by reflecting these existing messages in your data policy, but also by strengthening the data management aspects of the existing policies, and so making the most of any credibility they already have with university staff.

At Bath, RCUK funders are also important influences on progress.  Cathy Pink from Research360 has established that the biggest funder of research at her institution is the EPSRC, so Research360’s roadmap work, published earlier this year, responds particularly to the EPSRC’s expectations.  Bath has looked to the Monash University work to guide its direction in policy formation, particularly to inform strategic planning for RDM and to make a clear connection between the university’s work to advance RDM and its existing strategic aims: an intelligent way to garner senior management buy-in.

Cathy noted that the DAF and Cardio tools from the DCC were both useful in ascertaining the existing situation at Bath: these measures are important both to identify priorities for action and to be able to demonstrate the improvements (dare I say impact?) brought about by your work in policy formulation and / or training and guidance provision.

To be taken seriously at the institution and to promote awareness and buy-in, Cathy urged institutions to incorporate feedback from a wide range of relevant parties at the university: research support office, the library, IT support and the training support office where available.  This promotes a coherent approach from all these stakeholders as well as a mutually well-informed position on what each of these areas can contribute to successful RDM.

Birgit Plietzch from St Andrews also found DAF and Cardio useful for ascertaining the current data management situation at her institution, but felt the two processes could usefully be merged.  Birgit’s team likewise started by finding out who was funding research at the university (400+ funders!) and then building up their understanding of these funders’ RDM requirements to create a solid base for policy work.  Again, the Monash University work in this area was useful at her institution, and the completed EPSRC roadmap work, as at Bath, helped to demonstrate the relevance of RDM to diverse areas of institutional activity.

Edinburgh’s Stewart Lewis, too, described the value of creating relationships not only with senior management champions for RDM but also between the university mission statement or strategic aims, and RDM policy.  Stewart acknowledged that the aspirational policy published by Edinburgh in 2011 is a useful way to both instigate and lead on improved RDM at the university, but that action is also crucial.  The aspirational mode of policy gives a stable, high-level statement which is then enacted through supporting, and more volatile, documents.  So whilst action is devolved from the top-level document, it is still intrinsically important if culture change is to happen.  To this end, they have created various levels of implementation groupings to carry through specific actions.  Infrastructure specified by their policy work includes a minimum storage amount and training provision.

In accordance with the Grindley Theory of Four Things (see the – fittingly – 4th bullet point of), Edinburgh is concentrating on four high-level areas: planning, infrastructure, stewardship and, lastly, support across these three.  These areas were chosen in order to meaningfully move forward the RDM work at Edinburgh whilst still making sense to the researcher population.

Challenges and lessons learned

Tom shared some findings gathered by Admire from their survey of the institution’s researcher population, which show around 230 projects are currently funded, so storage requirements are substantial.  Most of these projects are funded by RCUK funders, so the expectations for a well-organised approach to RDM are also pretty substantial.  When c. 92% of researchers surveyed at the institution report having had no RDM training, we can understand the need for (and scale of) Admire’s work!

Cathy echoed Tom’s point: don’t attempt to simply lift one institution’s work and hope to apply it to yours.  The tailoring required is significant if a set of policies is going to work in your own context.

The first attempt at the RDM policy for Bath was rejected by the senior management group.  Inspirationally, Cathy recognised this as a great opportunity to refine their work and improve the policy using the feedback received.  It also helped clarify their ambitions for the policy and resolved the team to do better than ‘just good enough’ – tempered, of course, by the support infrastructure the institution could realistically deliver: a similar situation to Nottingham’s.

Cathy emphasised the point that good quality consultation across the institution is time-consuming but well worthwhile if you aim to build genuinely useful and effective policy or other resources.

Birgit also faced challenges in getting wider acceptance of some promising RDM policy work.  The institutional environment, including a recent reshuffle of IT provision, had hindered the smooth progress of their IE, and senior management, once again, needed compelling evidence to understand the benefits of improved RDM for the institution.

Birgit also found that academics were overextended and found it difficult to make the time to participate in the research that her team needed to undertake to develop policy in this area, but when they realised the relevance they were keen to be involved in the process and to access RDM training.  The notion of the aspirational (as opposed to the highly-specified) mode of RDM policy is popular with researchers at her institution.

Next steps for Stewart and the team at Edinburgh include attaching costs, in terms of both person-time and money, to the actions specified under their EPSRC roadmap, which will be published soon.  The team will also soon run focus groups using the DCC’s DMP Online tool, run a pilot of Datashare, establish what is needed by researchers in addition to storage, and run training for liaison librarians; these activities, however, need resources: the next challenge to meet.

Discussion picked up the balance between universities offering trustworthy storage appropriate for research data and the motivation of researchers to bid for these resources elsewhere: researchers bidding for this type of funding not only helps the university to concentrate resources in other useful areas but also helps to give a clear message to funders that if they want improved RDM, they have to be prepared to contribute financially towards it.

Costing was a popular topic: Graham Pryor (DCC) noted with interest that no speaker had said they had attached costs.  Sometimes explicitly identifying costs means this work becomes unacceptable to senior management on financial grounds.  Paul Stainthorpe at Lincoln agreed that you can spend lots of time on policy, but it won’t be accepted unless there’s a business case.  Other institutions agreed, but added that senior management want some illustrative narrative in addition to the hard figures, to tell them why this really matters.

Birgit added that there is also the problem of unfunded research, particularly in the arts.  Her team has been receiving an increasing number of enquiries relating to this area, and it is also being considered by Newcastle’s Iridium project, who have looked at research information management systems and discovered that they only track funded work, leaving unfunded research as ‘a grey area’, even though it may be generating high-impact publications.  At UAL, a partner in the KAPTUR project, many researchers do a lot of work outside the institution and not funded by it, and so for the purposes of the project they are being explicit about managing funded work.

UAL has recently launched their RDM policy as a result of their KAPTUR work and stakeholders are happy with it in principle, but the challenge now is how to implement it: John Murtagh noted that engagement and understanding mean work must continue beyond the policy launch.  I mentioned the importance of training here as an element which has to be developed at the institution alongside policy and technical infrastructure.  This was agreed by Wendy White of Southampton: policy needs to be an ongoing dialogue and the challenge is to integrate these elements.

What could the MRD programme or the DCC do to help?

– DCC: advise on whether funders are going to move the goalposts, and how realistic the risks are of this happening;

– DCC: advise on what public funding can be used to support RDM policy work;

– help with costing work;

– DCC: mediation between universities and the research councils, clarifying requirements and sharing universities’ experiences, etc.;

– DCC: providing briefings on current issues, e.g. PVCs valued briefings re. open access.

Ways of approaching an RDM Service

A few weeks back an interesting discussion took place on the JISCMRD mailing list, picked up on this blog under the main question of ‘What are the requirements an RDMI has to cover?’.

One of the conclusions was that requirements gathering, scope and application obviously depend on a number of factors, a major one being the remit of the project: implementing a system for (pilot) audiences is different from getting an institution-wide service into place and running.  The MaDAM project implemented a pilot RDMI for users from the biomedical domain between 2009 and 2011, and was followed up by MiSS (MaDAM into Sustainable Service) in autumn last year.  Whilst MiSS had all the crucial experience, findings, contacts and committed stakeholders from MaDAM to build upon, it was clear that a set of new challenges would arise in implementing an RDMI as a service for the whole of the University (and its benefit).

1) Having the main stakeholders on board for such an endeavour is of utmost importance – but we soon realised that this process basically never stops: aims, benefits and plans have to be mulled over, renegotiated and agreed upon.  There is always one more office or internal project to be included.

2) The University of Manchester also released its RDM Policy in May this year, and other internal endeavours are changing structures (research offices), infrastructure (storage, eDMP) and processes.  As a research project implementing a service we have to liaise closely with all those stakeholders to stay on the same page and coordinate our efforts (and integrate with existing systems and the IT landscape) – despite having only a small core project team.

3) Outreach, awareness raising and training become more and more important.  To this end (and taking in the points made before) we had to draw up a communication plan, collaborate, and plan for an interim service and for staff training.  We have always strived to hold as many presentations as possible for various audiences at the University about our projects – now we have to start organising dedicated events, e.g. at the moment on DMPs.  Also: tangible benefits have to be communicated to all audiences!

4) One major challenge for the development of the service, and for what it will look like, lies in balancing generic with specific needs.  Trying to gather and negotiate as many views, concerns and as much feedback as possible is a huge task, and some evolution over time is needed – especially as only a system (service!) in use reveals all the specifics required for every user, group and discipline.  The service coming to life in about a year’s time will be a ‘thin layer’ across the whole RDM lifecycle, but development is continuous and the RDM Service will become more tailored, sophisticated and mature while running.

Meik Poschen  <>
Twitter:  @MeikPoschen

What are the requirements an RDMI has to cover?

The idea for this post came through a discussion Tom Parsons (ADMIRe project, University of Nottingham) started on the JISCMRD mailing list about the ‘Requirements for an RDM system’.  As we all know, there are a lot of challenges involved in figuring out what requirements have to be covered by a Research Data Management Infrastructure.  What is it an RDMI has to deliver in the end, what functional specifications have to be defined, and on what scale?

Requirements have to be gathered at different levels from various researchers, research groups and other stakeholders.  Approaches, case studies and outcomes depend not only on projects’ remits (e.g. pilot projects vs. 2nd phase projects implementing an institution-wide service), but also on resources, buy-in from stakeholders and the varying needs of the target audiences.  The wider the field to play on, the harder it gets to balance discipline-specific and generic needs and to decide which parts of the research lifecycle can, or have to, be covered.

At the same time the institutional landscape has to be taken into account as well, especially the IT landscape and the systems already in place – Tom pointed to the danger of “straying into the territory of a research administration system when we are thinking about an overall RDM system”.  Inter-connecting with existing systems across the institution is something the MiSS project, for example, is doing at the moment, integrating existing infrastructure while adhering to the existing IT framework at the University of Manchester; this means the project initiation and accounting systems will be connected to the MiSS RDM core system (as will eScholar as the dissemination platform), and e.g. DMPs are automatically transferred into the active data stage.  The temporary downside of this approach – creating an RDMI across the whole lifecycle, but with a thin layer for a service to start with – is that certain specific needs can only be addressed over time.  Then again, such a new infrastructure needs to be able to evolve in use anyway.

To round up this post, here are some thoughts mentioned in the discussion:

Chris Morris pointed to a “concept of operations for research data management for a specific domain, molecular biology” and also offered one general suggestion: “not to start with use cases about deposition, but with retrieval. There is no scholarly value in a write-only archive.”

Joss Winn provided some examples of use cases.  He remarks that those look very high level now, and points to “more detail from our project tracker, although it may not be as easy to follow”.  “Basically, other than the usual deposit, describe, retrieve functionality, our two pilot research groups are looking to use the RDM platform for data analysis, such that it is a working tool from the start of the research project, rather than a system for depositing data at the end of the project.”

Steve Welburn addresses another important question, namely what technical framework to choose and how they came to choose DSpace:

Thanks to everyone mentioned for their links and thoughts!

Meik Poschen  <>
Twitter:  @MeikPoschen

Oxford digital infrastructure to support research workshop

The University of Oxford has impressively attempted to marshal the diverse projects ranging across disparate areas of expertise in research data management at the university.  I attended a DaMaRo workshop today to review the digital infrastructure required to meet the challenges of the multi-disciplinary and institutional research landscape as it pertains to Oxford.

First and foremost, this is no mean feat in a university as diverse and dispersed as Oxford, and Paul Jeffreys and colleagues are to be congratulated for the work to date.  It’s hard enough attempting to join up in a smaller, albeit research-intensive, university such as Leicester, and the road is long and at times tortuous.  Never mind potentially at odds with established university structures and careers…

I particularly liked the iterative approach taken during the workshop: so present key challenges to the various stakeholders present; provide an opportunity to reflect; then vote with your feet (ok, post-it notes in traffic light colours) on which areas should be prioritised. At the very least this is useful even if we may argue over which stakeholders are present or not. In this case the range was quite good but inevitably you don’t get so many active researchers (at least in terms of publishing research papers) at this kind of meeting.

In assessing the potential research services it was pointed out where a charging model was required, if not funded by the institution or externally. Turns out here at Oxford the most popular choice was the proposed DataFinder service (hence no weblink yet!) to act as a registry of data resources in the university which could be linked to wider external search. I remember during the UK Research Data Service pathfinder project that there was a clearly identified need for a service of this kind. Jean Sykes of LSE, who helped steer the UKRDS through choppy waters, was present and told me she is about to retire in a couple of months. Well done Jean and I note that UKRDS launched many an interesting and varied flower now blossoming in the bright lights of ‘data as a public good’ – an itch was more than scratched.

I also note in passing that it was one of the clear achievements of the e-science International Virtual Observatory Alliance movement, developed for astronomical research between 2000 and 2010, that it became possible to search datasets, tools and resources in general via the use of community-agreed metadata standards.  It takes medium- to long-term investment, but it can be done.  Don’t try it at home and don’t try to measure it by short-term research impact measures alone… even the Hubble Space Telescope required a decade plus before it was possible to clearly demonstrate that the number of journal papers resulting from secondary reuse of data overtook those from the originally proposed work.  Watch it climb ever upwards after that, though…

Back to the workshop: we identified key challenges around Helpdesk-type functionality to support research data services, and whom, how and when to charge in the absence of institutional funding.  I should highlight some of the initiatives gaining traction here at Oxford, but it was also pointed out that in-house services must always be designed to work with appropriate external services.  Whether in-house or external, such tools must be interoperable with research information management systems where possible.

Neil Jefferies described the DataBank service for archiving, available from Spring 2013, which provides an open-ended commitment to preservation.  The archiving is immutable (content can’t be altered once deposited) but versioned, so that it is possible to step back to an earlier version.  Meanwhile Sally Rumsey described a proposed Databank Archiving & Manuscript Submission Combined (DAMASC) model for linking data and publications.  Interestingly, there is a serious attempt to work with a university spin-off company providing the web 2.0 Colwiz collaboration platform, which should link to appropriate Oxford services where applicable.  It was noted that a friendly user interface is always welcome if a service is to be attractive to researchers.  The launch date is September 2012 and the service will be free to anyone, by the way, in or out of Oxford.
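The ‘immutable but versioned’ archiving model is easier to picture with a toy sketch.  The class below is my own illustration of the general idea, not DataBank’s actual interface: deposits only ever append a new version, nothing is overwritten, and every earlier version remains retrievable.

```python
class VersionedArchive:
    """Toy model of append-only, versioned archiving: deposits never
    overwrite earlier content, so any version can be stepped back to."""

    def __init__(self):
        self._versions = {}  # object id -> list of deposited payloads

    def deposit(self, obj_id: str, payload: bytes) -> int:
        """Append a new version; returns its 1-based version number."""
        self._versions.setdefault(obj_id, []).append(payload)
        return len(self._versions[obj_id])

    def retrieve(self, obj_id: str, version: int = 0) -> bytes:
        """Fetch a specific version (default: latest).  Old versions are
        never mutated in place, so they stay byte-identical forever."""
        versions = self._versions[obj_id]
        return versions[version - 1] if version > 0 else versions[-1]

archive = VersionedArchive()
archive.deposit("dataset-001", b"first deposit")
archive.deposit("dataset-001", b"corrected deposit")
print(archive.retrieve("dataset-001"))             # latest version
print(archive.retrieve("dataset-001", version=1))  # step back to v1
```

The design choice this illustrates is why immutability and versioning are not in tension: correction happens by adding, not by editing, which is what makes an archive citable over the long term.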

Meanwhile, for research work in progress the DataStage project offers secure storage at the research group level while allowing the addition of simple metadata as the data is stored, making that step up to reusability all the easier down the line. It’s about building good research data management practice into normal research workflows and, of course, making data reusable.

Andrew Richards described the family of supercomputing services at Oxford.  Large volumes of at-risk storage are available for use on-the-fly but are not backed up.  You’d soon run into major issues trying to store large amounts of this kind of dataset longer term.  There is also very little emphasis on metadata in the supercomputing context other than where supplied voluntarily by researchers.  I raised the issue of sustainability of the software and associated parameters in this context, where a researcher may need to be able to regenerate the data if required.

James Wilson of OUCS described the Oxford Research Database Service ORDS which will launch around November 2012 and again be run on a cost recovery basis. The service is targeted at hosting smaller sized databases used by the vast majority of researchers who don’t have in-house support or appropriate disciplinary services available to them. It has been designed to be hosted in a cloud environment over the JANET network in the same way as biomedical research database specific applications will be provided by Leicester’s BRISSkit project.

Last but not least, Sian Dodd showed the Oxford Research Data Management website which includes contact points for a range of research data lifecycle queries. It is so important to the often isolated researcher that there is a single place to go and find out more information and point to the tools needed for the job at hand.  Institutions in turn need to be able to link data management planning tools to in-house resources & costing information. To that end, the joint Oxford and Cambridge X5 project (named after the bus between the two) will go live in February 2013 and provide a tool to enable research costing, pricing & approval.

Discuss, Debate, Disseminate – PhD and Early Career Researcher data management workshop, University of Exeter, 22 June 2012

Jill and Hannah of the Open Exeter project have not been holding back with their user requirements research – not content with attracting hundreds of responses to their survey of Exeter postgraduates, they’re also augmenting this with their own research as well as running events like Friday’s, in an admirably thorough approach to gathering information on what postgraduate students and early career researchers at their institution need, how they work and where the gaps are in the current infrastructure provision.

Twenty enthusiastic participants turned up on 22 June, happily from across the sciences and humanities, and contributed with gusto to group discussion, intensive one-to-one conversations and a panel session.  The project has recruited six PhD students – Stuart from Engineering; Philip from Law; Ruth from Film Studies; Lee from Sport Sciences and Duncan from Archaeology, plus one more currently studying abroad – to help bridge the gap between project staff and their PhD peers.  These six are working intensively with the project team to sort out common PhD-level data management issues and activities in the context of their own work, which allows them to not only improve their own practice but also to share their experiences and tips with other PhD students and ECRs in their own disciplines at Exeter.  (You can see more about this at

One of the most interesting aspects of working on this programme, for me, is understanding the nuts and bolts of research data management in a specific disciplinary context, in a particular institution.  In other words, the same context in which each researcher is working.  Although funders are increasingly calling the shots with requirements and expectations for research data management, the individual researcher still has to find a way to put these requirements into practice with the infrastructure they have to hand.  That means it’s all very well for the EPSRC or AHRC or whoever to require you to do something, and you may even understand why and want to do it, but who do you ask in IT to help?  Why isn’t it OK to just put data on Dropbox?  What to do with data after you finish your PhD or project?  And what is metadata anyway?
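On that last question, a concrete example sometimes helps more than a definition.  The sketch below is illustrative only: the field names are my own, loosely modelled on Dublin Core, not any funder’s required schema.  It shows the sort of descriptive metadata that might accompany a deposited interview dataset:

```python
# A minimal descriptive metadata record for a hypothetical dataset,
# written as a plain Python dictionary.  Field names loosely follow
# Dublin Core; they are illustrative, not a mandated schema.
record = {
    "title": "Interview transcripts: PhD study on rural broadband use",
    "creator": "A. N. Researcher",
    "date_created": "2012-06-22",
    "format": "text/plain",  # tells a reuser what can open the files
    "rights": "CC-BY; embargoed until 2014-06-22",
    "description": "12 anonymised semi-structured interviews, UK South West",
    "relation": "Raw audio held on restricted university storage",
}

# Even this much answers the questions a future reuser will ask:
# what is it, who made it, when, can I open it, and may I use it?
for field, value in record.items():
    print(f"{field}: {value}")
```

The point is not the particular fields but the habit: metadata is simply the information about the data that lets someone (including your future self) find it, open it and trust it.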

Despite the generally-held view among researchers that their RDM requirements are unique to their discipline, these questions – and others like them – are actually fairly consistent across institutions when researchers are sharing concerns in an open and relaxed environment.  And this was one of the achievements of today’s event: by keeping things friendly, low-key and informal, the team got some very useful information about what PhDs and ECRs are currently doing with RDM, the challenges they’re encountering and what Exeter needs to provide to support well-planned and sustainable RDM.

Some additional detail from the event:

– Jill offered a working definition of ‘data’ for the purposes of the workshop: “What we mean by data is all inclusive.  It could be code, recordings, images, artworks, artefacts, notebooks – whatever you feel is information that has gone into the creation of your research outputs.”  This definitely seemed to aid discussion and meant we didn’t spend time in semantic debate about the nature of the term.

– Types of data used by participants:
o Paper, i.e. printouts of experiments
o Word documents
o Excel spreadsheets
o Interview transcripts
o Audio files (recordings of interviews)
o Mapping data
o PDFs
o Raw data in CSV form
o Post-processed data in text files
o Graphs
o Tables for literature review
o Search data for systematic review
o Interviews and surveys: audio files, Word transcripts
o Photographs
o Photocopies of documents from the archives
o NVivo files
o STATA files

– Common RDM challenges included: the best way to back up; use of central university storage; the number of passwords; the complexity of working online (which can make free cloud services more attractive); lack of support with queries, or uncertainty about who to contact; selection and disposal; and uncertainty over who owns the data.

– Sources of help identified during the event: subject librarians, departmental IT officers and, during the life of the project, Open Exeter staff, plus existing online resources such as guides from the Digital Curation Centre ( and the Incremental project ( and
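On the back-up challenge mentioned above, one low-tech habit worth passing on to PhD students is verifying copies rather than trusting them.  The sketch below is my own illustration (not an Exeter or DCC recommendation, and the file names are made up): it compares checksums of an original file and its backup, which catches silent corruption and incomplete copies.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks so
    large datasets don't have to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def backup_is_intact(original: Path, backup: Path) -> bool:
    """A backup only counts if it matches the original bit-for-bit."""
    return sha256_of(original) == sha256_of(backup)

if __name__ == "__main__":
    src = Path("thesis_data.csv")           # hypothetical file names
    src.write_text("id,value\n1,42\n")      # stand-in for real data
    dst = Path("thesis_data_backup.csv")
    dst.write_bytes(src.read_bytes())       # the copy step
    print(backup_is_intact(src, dst))       # True when the copy is faithful
```

The same idea scales up: record the digest alongside the deposited data, and anyone retrieving it later can confirm they have exactly what was stored.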

Commonalities & Differences: Requirements & Disciplines

Within our remit to identify themes and trends in the JISCMRD Programme and to enable collaboration and synergies between its projects, exploring commonalities and differences is a key area with a multitude of angles.  Diverse endeavours, domains, institutions and scopes on the project side entail a number of approaches, methods, user communities, research practices and cultures, data life cycles and workflows, and therefore actual needs, requirements, benefits, data infrastructures and policies.  Knowledge transfer in the programme is crucial if we are not to re-invent the wheel (at least not every time), and if we are to learn from previous experiences, discuss emerging topics, collaborate and hence (mutually) benefit from all those differences and commonalities.

In the weeks and months to come I shall focus on commonalities and differences on this blog under different aspects, starting with the requirements and disciplinary angle (although I am aware that many areas overlap: requirements gathering involves methods as well as research practice and perceived benefits, which again have an impact on costs, et cetera).  The idea would be, ideally, to start a discourse, get feedback and input from projects and people, gather documentation and discussion topics, and facilitate and provide support.  A workshop at some later stage might be an activity spawning from that, if deemed useful.

My own project-related hat is that of the user liaison and researcher, e.g. gathering requirements, including looking into the research practice and benefits of diverse communities at the University of Manchester, previously in the MaDAM project (JISCMRD phase 1; see here for outputs) and now in MiSS (JISCMRD02; see resources section).  Our requirements approach in both projects is user-driven, iterative and based on close collaboration between RDM specialists, users/researchers, other stakeholders (high-level buy-in is especially important) and the project team/developers.  In MaDAM we focussed on pilot users from the biomedical domain – in MiSS the RDMI will have to cater for the whole of the University, with the challenge of establishing a balance between a generic, easy-to-use eInfrastructure and a system open enough for discipline-specific needs (plug-in points).  We have user champions in each faculty: Life Sciences (Core Facilities and MIB – large and diverse data), Engineering and Physical Sciences (Henry Mosley Centre, Material Sciences & MIB – large data), Medical and Human Sciences (sensitive data!) and Humanities (CCSR, applied quantitative social research – data service and diversity), and will also open up a user committee to the wider University for input and feedback in a few weeks.  We have just completed our baseline requirements phase, so please watch this channel for more details and the report!

But back to you and the JISCMRD projects’ fields of interest and needs:

How do you approach your requirements process?

What are particular challenges, e.g. in specific disciplines?

What are particularly enthralling lessons learned (already)?

How to achieve benefits and synergies between projects?

What are your ideas on how we could facilitate exchange on such issues?  Any ideas are welcome!

Meik Poschen  <>
Twitter:  @MeikPoschen