What impact will a new National Data Library have on rural communities? 

We generate vast amounts of data across various sectors like health, education, crime, and financial services. This data, produced by individuals, government, universities, businesses, and voluntary and community sector organisations, holds significant value but often remains underutilised. Artificial Intelligence (AI) is revolutionising how we handle data, playing a crucial role in its preparation and analysis. The Government is planning to establish a National Data Library (NDL) to enhance public services, promote cross-government collaboration, and drive innovation. What will the NDL do, and what impact will it have on rural communities? Jessica Sellick investigates. 

………………………………………………………………………………………………..

The call for a National Data Library (NDL) originated in a blog published by Onward in May 2024, which urged the Government to establish a British Library for Data. The idea gained further momentum when, in June 2024, the Labour Party included a commitment to create a National Data Library in its manifesto.

In July 2024, building on the National Data Strategy, it was announced that the Department for Science, Innovation and Technology (DSIT) would expand to include the Government Digital Service (GDS), the Central Digital and Data Office (CDDO) and the Incubator for AI (i.AI). This expansion aims to unite efforts in the digital transformation of public services under one department, creating a new digital centre for Government.   

“Data is one of His Majesty’s Government’s (HMG) most underutilised and highly valued assets with the power to transform our public services, unlock cross government efficiencies, and boost the UK economy…There is strong opportunity to now accelerate how we deliver real change, building on the foundations that have been laid through the National Data Strategy and work on transformation to date. As part of the new digital centre for government in the Department for Science, Innovation and Technology (DSIT), we will continue the journey to improve quality, use and reuse of data across government in a responsible and ethical way. We will also turbo charge innovation, productivity, and resilience with data across the public sector, and transform the public experience of interacting with government”, Craig Suckling, Chief Data Officer for UK Government, 16 September 2024. 

The Autumn Budget 2024 included a commitment to maximise the growth benefits of the UK’s thriving digital and technology sectors. It referenced the creation of ‘a National Data Library to unlock the full value of our public data assets’: 

“This will provide simple, ethical, and secure access to public data assets, giving researchers and businesses powerful insights that will drive growth and transform people’s quality of life through better public services and cutting-edge innovation, including AI”. 

Data is fundamental to the Government’s five missions and plan for change: growing the economy, an NHS fit for the future, safer streets, opportunity for all, and making Britain a clean energy superpower. In the context of a NDL, Baringa has analysed data curation around these missions, finding that: 

  • Administrative Data Research UK (ADR UK) holds 51% of datasets for a future NHS, 21% for kickstarting economic growth, 14% for breaking down barriers, 5% for taking back our streets, and 1% for clean energy.  
  • The Integrated Data Service has 41% of its datasets against kickstarting economic growth, 19% for a future NHS, 11% for breaking down barriers, 7% for clean energy, and 2% for taking back our streets. 

What is the ambition behind the NDL? What should the key focus areas and functions of the NDL be? What lessons can we learn from previous initiatives, are there any potential drawbacks to establishing a new NDL, and will a NDL specifically benefit rural communities?  

What is the National Data Library? In a parliamentary debate in September 2024, The Rt Hon Peter Kyle MP, the Secretary of State for Science, Innovation and Technology, emphasised the need to build a state with cutting-edge digital infrastructure. This includes faster data centres, advanced AI capabilities and broadband connections, all aimed at creating opportunities for every community. The Rt Hon Peter Kyle described how ‘we must manage public sector data as a national strategic resource’: 

“For far too long, public sector data has been undervalued and underused. We must replace chaos with co-ordination, and confusion with coherence. This is what the national data library will do. With a coherent data access policy and a library and exchange service, it will transform the way we manage our public sector data. It will have a relentless focus on maximising the value of that data for public good, on growing the economy and creating new jobs, and on delivering the data-driven AI-powered public services”. 

The Government’s AI Opportunities Plan, published in January 2025, included a section on ‘unlocking data assets in the public and private sector’. It called for the creation of a National Data Library (NDL) and set out how the Government should:   

  • Rapidly identify at least 5 high-impact public datasets to make available to AI researchers and innovators. Prioritisation should consider the potential economic and social value of the data, as well as public trust, national security, privacy, ethics, and data protection considerations. Explore the use of synthetic data generation techniques to construct privacy-preserving versions of highly sensitive data sets. Government datasets are a public asset, and their valuation should be carefully considered.  
  • Strategically shape the data collection process, rather than just making existing data available. The NDL could build on the achievements of the UK Biobank to enhance research in areas such as disease recognition and the prediction of health outcomes. The NDL should run open calls to receive proposals from researchers and industry for new datasets.  
  • Develop and publish guidelines and best practices for releasing open government datasets which can be used for AI, including the development of effective data structures and data dissemination methods. 
  • Couple compute allocation with access to proprietary datasets as part of an attractive offer to researchers and start-ups choosing to establish themselves in the UK, thereby unlocking innovation. 
  • Build public sector data collection infrastructure and finance the creation of new high-value datasets that meet the needs of the public sector, academia and start-ups.  
  • Actively incentivise and reward researchers and industry for curating and unlocking private datasets.  
  • Establish a copyright-cleared British media asset training dataset, which can be licensed internationally and at scale. 

In an interview with The Financial Times in March 2025, The Rt Hon Peter Kyle MP shared that the Government had initiated a six-month scoping exercise for a NDL. This focused on the education sector and creating a “content store” to consolidate education documents, including lesson plans, anonymised pupil assessments, and curriculum guidance. This pilot initiative is part of a testbed to develop plans to monetise large government datasets within a decade. Kyle described it as “a really good example of the kind of gathering of data together and use of data that will ultimately be the National Data Library”.  

The Government’s ambition for a NDL is centred on improving the accessibility and management of public data, increasing its utilisation by researchers and industry, and monetising large datasets. There are varying interpretations of what a NDL is intended to achieve for public services and citizens. Is it about improving digital infrastructure and connectivity, delivering productivity and efficiency across Government departments, or generating new tools and services? Indeed, Icebreaker One highlights two competing visions for what a NDL could do emerging from Government: either to improve the use of government-held data for research, or to improve operational data access across public services. The difference between data for research and data for operations is significant: the two differ technically, legally, and commercially. 

“No nation has the infrastructure needed to fully harness AI [Artificial Intelligence] for public good. The National Data Library (NDL) represents an opportunity for the UK to be the first. It can help create the infrastructure needed to unlock the value of public-sector data alongside frameworks to identify and collect new types of data for breakthrough insights. Instead of a complicated web of systems and slow, one-off approvals, the NDL will establish a clear, secure way to access linked data sets, supporting AI innovation, better”, Tony Blair Institute for Global Change, (February 2025 page 3). 

What should it do, and where should it start? Since 2018, ADR UK has collaborated with academic partners and devolved governments to secure access to administrative data for research across the UK. The premise is that public sector data is a valuable resource that can benefit society. They provide examples of linking sector data with administrative data at scale, such as ECHILD (Education and Child Health Insights from Linked Data), which connects education outcomes data with health data. Another example is their collaboration with the Ministry of Justice to track cohorts of people through the criminal, family and civil courts to understand how legal problems interact with personal/household issues such as homelessness, ill-health or debt, and the relationship between offending and local levels of deprivation. 

An evaluation of ADR UK in November 2024 found that investing in data linkage across public sources can deliver £5.00 of benefits for every £1.00 of costs. The evaluation highlighted ADR UK’s contribution to supporting the acquisition, linkage and cleaning of over 200 new datasets; improving data accessibility and security by funding remote access through SafePods; building buy-in for administrative data sharing across Whitehall departments and devolved governments; and establishing closer links between researchers and policy makers. The evaluators also identified key areas for improvement, including that data-access forms are too long and complicated, accreditation can be time consuming, and some researchers have experienced delays in linked data becoming available. Commenting on the report, Dr Emma Gordon, director of ADR UK, described how “ADR UK’s strength lies in its partnership model, and how we collaborate with government data owners to open up research access to data. This evaluation provides evidence that our model works and is scalable, providing a very good return on investment…In our next business case, we will build upon these strengths to outline how we will continue to smooth the researcher journey to accessing population-level linked data”. 

The Digital Economy Act 2017 (DEA) sought to remove legal barriers to digital government while reinforcing data protection laws. The Cabinet Office and Public Sector Fraud Office issued a press release in February 2024 outlining the findings of a statutory review of more than 100 data sharing pilots across 70 local authorities and 17 government departments since 2018. This identified savings of £137 million: £99.5 million of fraud identified in COVID-19 loan schemes through the Fraud Analytics Programme, which brings together Cabinet Office, Department for Business and Trade, British Business Bank and HM Revenue and Customs data; £14.9 million of fraud identified in council tax and housing benefit systems by comparing local authority records with HM Revenue and Customs data; and £5 million of overdue council tax recovered by 29 local authorities using data from HM Revenue and Customs. 

The original proposal for a British Data Library from Onward aimed to develop ‘a centralised, secure platform to collate high-quality data for scientists and start-ups to build AI models, attracting talent and investment’. Similarly, ADR UK has implemented a model of up-front data governance, cleaning and linkage that de-identifies and creates research-ready curated datasets that can be maintained over time. They believe this approach provides a solid foundation for building a NDL, as it ensures the latest data is readily available in the library. This eliminates the need to work with data owners to create it, shortens data access times, and allows bodies of knowledge around using complex datasets to be built up over time. 
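The up-front de-identification and linkage approach described here can be illustrated with a minimal sketch: direct identifiers are replaced with a keyed hash (a pseudonym) before datasets are joined, so linked, research-ready records never carry the original identifiers. The field names, the identifier used, and the HMAC-SHA256 scheme below are illustrative assumptions, not ADR UK's actual pipeline.

```python
import hmac
import hashlib

# Illustrative only: field names and the keyed-hash scheme are assumptions,
# not ADR UK's actual de-identification process. In practice the linkage key
# would be held by a trusted third party, not the researcher.
LINKAGE_KEY = b"secret-held-by-trusted-third-party"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash (pseudonym)."""
    return hmac.new(LINKAGE_KEY, identifier.encode(), hashlib.sha256).hexdigest()

def deidentify(records: list[dict], id_field: str = "nino") -> list[dict]:
    """Strip the direct identifier from each record, adding a pseudonym instead."""
    out = []
    for r in records:
        r = dict(r)  # copy, so the source data is untouched
        r["pseudo_id"] = pseudonymise(r.pop(id_field))
        out.append(r)
    return out

# Two hypothetical administrative datasets sharing one identifier.
education = [
    {"nino": "QQ123456A", "attainment": "Level 3"},
    {"nino": "QQ654321B", "attainment": "Level 2"},
]
health = [
    {"nino": "QQ123456A", "gp_visits": 4},
]

# Link the de-identified datasets on the shared pseudonym.
edu = {r["pseudo_id"]: r for r in deidentify(education)}
linked = [{**edu[r["pseudo_id"]], **r}
          for r in deidentify(health) if r["pseudo_id"] in edu]
print(len(linked))  # one matched, de-identified record
```

Because the pseudonym is deterministic for a given key, the same person links consistently across datasets, while the records released to researchers contain no direct identifiers.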

In 2024, Wellcome and the Economic and Social Research Council (ESRC) published a White Paper Challenge, inviting technical visions and architectures from industry, academia or civil society groups for a NDL. The aim was to make public sector datasets more accessible to researchers, benefitting both the research community and the UK population. 

“A successful National Data Library programme will be a tremendous asset for the UK and global research communities. Progress in life science and health is not only driven by insights from clinical or research data. Responsibly using data from across government and all public sectors will help us all understand how to build a healthier future for everyone”, John-Arne Røttingen, CEO of Wellcome. 

Proposals were required to include key enabling elements, technical and operational architecture, and nationally funded data repositories relevant to science and health research in the UK. Five successful submissions were published and discussed at a dedicated workshop in January 2025.    

In their submission, Emrys Consortium outlined how the lack of a centralised strategy, policy and funding has meant there are now 291 million possible ways of interrogating primary care health data for 55 million people. They propose operational architecture to ensure ease of access and interoperability while maintaining public trust. They describe how “our approach offers a step change in accessibility, security, engagement and breadth of research. It will move us away from discrete silos of data that give a partial population view in a single modality to a world of ‘on demand’ integrated data across multiple providers for a given project, with standardised workflows and approaches to citizen engagement and information governance. We cannot solve all the issues with interoperable digital systems across the UK public sector. However, we cannot solely focus on de-identified population science using Section 251 approvals”. Their technical vision builds on LATTICE architecture developed by the Francis Crick Institute. To support the development of a NDL, they recommend five cross-cutting panels covering technical, social contract, information governance, research advisory, and All-Party Parliamentary Group panels. A NDL would initially focus on three instantiations: anonymous data for population-level analysis and reporting, sensitive data that is routinely collected, and research-generated data for consented cohorts.   

The Bennett Institute for Applied Data Science at the University of Oxford proposes a modular approach that includes a network of connected data centres, a federated working model where researchers ‘take only what you need’, and a network of standalone component services. To ensure successful delivery, they recommend starting with the top three datasets that researchers actually want (highlighting GP and environmental data), incorporating real user stories and real-world challenges early on, and avoiding immediate automation by having people perform some tasks manually at first. They emphasise the importance of building teams of developers who understand government data, hiring managers with deep technical skills, and not outsourcing the design and delivery of all components to a single provider. Privacy should shape how data is collected, stored, and shared from the outset. 

Both the Bennett Institute and the Emrys Consortium forgo a traditional library in one physical location in favour of a distributed fabric of data controllers and research-performing organisations working in an open data format, linked through trusted research environments (TREs).  

Icebreaker One’s submission proposes reviewing and improving existing research data infrastructures rather than bringing a lot of data together into a central place. The UK Government uses different channels to coordinate, fund and support the development of research data infrastructures, and a NDL should add something new, or improve or replace what already exists. Technically, and at its simplest, a NDL could support the discovery of data held by the public sector that is available for research by maintaining a searchable catalogue or harvesting metadata from across many public sector organisations. They highlight their work building Open Net Zero, a platform that makes net zero data discoverable, accessible and usable. This indexes nearly 60,000 datasets from more than 400 organisations. The NDL could enable new forms of curation of public sector data and data discovery that would not otherwise happen. They suggest the data infrastructure design should be driven by specific use cases. 
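The catalogue-and-harvest model described here can be made concrete with a small sketch: metadata records are pulled from several organisations into one shared index, which can then be searched without moving any of the underlying data. The organisations, dataset titles and field names below are illustrative assumptions, not Open Net Zero's actual design or a formal metadata standard such as DCAT.

```python
# A minimal sketch of a searchable metadata catalogue. The organisations,
# titles and fields are made up for illustration; a real catalogue would use
# a metadata standard (e.g. DCAT) and harvest over the network.
catalogue: list[dict] = []

def harvest(org: str, records: list[dict]) -> None:
    """Pull one organisation's dataset descriptions into the shared catalogue."""
    for rec in records:
        catalogue.append({"org": org, **rec})

harvest("Defra", [
    {"title": "Rural Urban Classification 2021", "keywords": ["rural", "geography"]},
    {"title": "Air quality monitoring sites", "keywords": ["environment"]},
])
harvest("NHS England", [
    {"title": "GP appointments by practice", "keywords": ["health", "rural"]},
])

def search(keyword: str) -> list[str]:
    """Return dataset titles whose metadata mentions the keyword."""
    return [r["title"] for r in catalogue if keyword in r["keywords"]]

print(search("rural"))  # finds datasets from both organisations
```

The point of the design is that only descriptions move: the data itself stays with its controller, which is why this is the lightest-weight version of what a NDL could do.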

The Open Data Institute and King’s College London, in their technical white paper response, propose that the NDL use open, interoperable standards and technologies to share and safely access data, identify and fix data gaps, set up clear governance structures, and give communities meaningful tools to control how their data is used and by whom. AI processes must be built in from the start. 

 “A robust diagnosis of the problem you are trying to solve is a crucial part of building this data infrastructure – what are the current issues, where are the gaps in our data ecosystem, who is currently not accessing data that they need, and what are their aspirations?” 

Their submission looks at how the NDL can be AI-ready through developing a Multilayer Interoperability Framework. This includes a technical stack to facilitate the collection of data that is AI-ready in its access and sharing; a governance model to determine who makes decisions and how the public are engaged in a user-centric approach; the legal form it should take; and the commercial model needed to maintain an AI-ready NDL: 

“AI readiness focuses on the documentation of data processes towards its use in AI, such as data preparation (labelling, outliers), data quality (bias, completeness, consistency, integrity), data documentation (metadata, previous uses, intended uses, feedback), and data access (formats, delivery, privacy, security). If the NDL is meant to become a central UK hub for cross-governmental, interoperable data that facilitates scientific discovery using safe and trustworthy AI, then it needs to be an AI-ready NDL”.  

DARE UK’s submission builds on the organisation’s Federated Architecture Blueprint (2023) and Transformational Programme (2025-2027), which for the NDL envisions a safe, collaborative network of secure data infrastructures where approved researchers can explore sensitive data to advance research for public benefit. They highlight three essential capabilities: a network of secure, trustworthy services; standardised, transparent ways to see the purposes for which data is being used; and enhancing and integrating current infrastructures to meet the functionality needs of the NDL. 

Separate to the White Paper Challenge, the Tony Blair Institute for Global Change (TBI) published a roadmap for delivery, setting out immediate steps to unlock more value from existing data, medium-term structural reforms to build a scalable system (between six months and three years), and long-term actions to ensure the NDL delivers sustained impact by reimagining what data we need in an AI-enabled world (three to five years). This includes a proposal for a National Data Trust focused on health. The TBI proposes integrating NHS and Department for Work and Pensions (DWP) datasets to develop tailored return-to-work programmes, and, similarly, linking planning and housing datasets to develop green energy solutions. The NDL could revolutionise medical research by assessing treatments using NHS and social care records. The TBI has developed an economic case for the NDL which estimates that, from an initial investment of £200 million to enhance data linking, £1 billion in benefits could be generated. As the NDL expands, reinvesting these returns could create a self-sustaining model, delivering up to £13 billion per year once fully scaled. 

More broadly, the ODI has suggested that the Government should take inspiration from other countries and organisations that curate datasets and enable data sharing for public benefit. They cite how UK Biobank data has been used to understand the impact of COVID-19 on the brain and on obesity. They also reference X-Road, developed in Estonia, which facilitates safer data sharing between different government departments. 

For me, three sets of overarching considerations emerge from the White Paper Challenge and broader work: 

1. The design of the NDL: 

  • Where should Government lead, and where do the private sector, academia, the voluntary and community sector, and philanthropists fit? 
  • Should the NDL be a single national data portal, or different platforms and applications? 
  • Should we prioritise and target interventions, or aim for a wider data exchange? 
  • What inputs are needed to deliver system-wide solutions? For example, who will staff a NDL, and what catalogues, guides, support, and communities of practice will they need to establish? What legislation, policy, regulation and standards are required to deliver a NDL? 
  • In the words of the Government’s AI Opportunities Plan, how can a NDL be an ‘AI maker’ not an ‘AI taker’? 

2. The implementation of the NDL: 

  • How can the NDL strategically acquire and curate data for research purposes? 
  • What ready-to-access environments exist that can shorten access times for researchers?  

3. Reviewing the NDL: 

  • What performance indicators and metrics are we putting in place to measure success – and how is the NDL working towards achieving these? 
  • How has the NDL built capacity and capability within the public sector’s data landscape? 
  • How are costs being managed, and is the budget being adhered to? 
  • How is the NDL increasing the resilience of public services? 
  • Where is the business plan and long-term funding coming from to guarantee the NDL’s sustainability? 

Are there any drawbacks? Anil Madhavapeddy and Sadiq Jaffar spent a year downloading millions of papers for a conservation evidence project. They describe how problematic it was to gain actual access to the datasets underpinning the papers, observing that it is difficult to deal with large, sensitive datasets where selective access control is required, and that this sort of data is not mirrored on the open web, leading them to call for cooperative solutions to open up access to data. They are currently supporting 46 UK Wildlife Trusts who are building coordination around systematic biodiversity data gathering.  

In its submission to the White Paper Challenge, the Bennett Institute for Applied Data Science describes how “[Data is] more like nuclear material. Personal data has huge potential to do good; but also intrinsic risk. Small amounts of data achieve little and pose few risks. Data becomes powerful only when brought together, and refined: then it also becomes more dangerous…Data needs secure environments because if data leaks, it can’t be unleaked. Today, sending data around to multiple locations, chaotically, for one-off projects that are often trivial, we sometimes treat national data like we treated nuclear material in the early days of the atomic age: enthusiastic amateurs, in unsafe conditions, painting glowing material on their teeth, in a clock factory”. 

In the context of the NDL, what are the costs of curating public and private datasets and ensuring correct use? Should some materials be available to everyone and others restricted to specialist researchers? Should we keep the data where it already is or bring it all together in one place? How can we enforce privacy controls on the underlying data and code? Should we take a modular approach rather than building one enormous national database?  

Will the NDL handle sensitive public data securely and safely? There are many instances where information held by a public sector organisation has been stolen or accessed without authorisation: for example, the cyber-attacks on the Legal Aid Agency, education institutions, the Ministry of Defence, and the Electoral Commission. Analysis by Baringa shows that the Government has spent at least £320 million during the past five years across its two flagship data platforms (HM Treasury’s Integrated Data Programme and ADR UK), in addition to over £180 million spent on health platforms such as the NHS Secure Data Environments and OpenSAFELY, and £330 million set aside for the NHS’s Federated Data Platform over seven years. Baringa argues that there needs to be a more cohesive approach to tracking and communicating the benefits of this investment at a time of pressure on public spending. At the same time, the Public Accounts Committee (PAC) has highlighted how out-of-date technology and poor-quality data are putting public sector adoption of AI at risk. The PAC estimated, in 2024, that 28% of central Government systems are locked away in legacy IT systems, defined in part as an ‘end of life product’ and/or out of support from the supplier and therefore impossible to update, and that 72% of the Government’s legacy systems lacked remediation funding.  

If the NDL is going to have a commercial application, it is worth noting that Government, and indeed many public sector organisations, do not have a track record of developing commercial models. For example, the NHS has an established process for providing access to data for charities, academics and medical research companies. Currently, NHS England is not allowed to sell data for profit, but it can provide data on a cost recovery basis. In practice, this means the NHS can charge for the cost of processing and delivering a data service, but not for the data itself. Individual NHS Trusts also enter into working partnerships with companies. In 2023, the NHS published a Value Sharing Framework for data partnerships which states: ‘when deciding how much to charge for access to data, NHS organisations should consider how the data will be used, as well as the type of data being requested. The cost of access should not be dependent on the nature of the partner organisation. This means NHS parties should not routinely set a relatively high charge for commercial companies and a low or no charge for non-commercial academic institutions or charities – the charge should depend on the use case’. Alongside this, Wellcome is partnering with the Government to establish a new £600 million health data research service. It aims to secure access to health data and speed up research to better prevent, diagnose and treat diseases. Through the NDL, the Government has announced plans to leverage anonymised NHS patient data to attract private sector investment. We know patient data is valuable: what are the financial considerations here (e.g. pay to access, shared equity), and how will patients be made aware that their data is being processed? What can we learn from public bodies that already charge fees to access or update data, such as Companies House and HM Land Registry?  

For me, the NDL opens up the need to balance accessibility with building and maintaining public trust and the ethical sharing of data. Global Government Forum focuses on public trust as part of calls for a UK Sovereign Data Fund to manage the monetisation of public datasets curated by a NDL. The Fund would have a broad social mission, allowing it to invest its profits in projects that work towards improved healthcare provision, greater social mobility, digital inclusion, and digital infrastructure. They suggest this model could prevent public datasets from becoming undervalued giveaways to foreign-owned entities. Research by the Bennett Institute for Public Policy has found that, for some consumers, the location of the company matters, with an organisation based in the UK, paying corporation tax and providing employment in the UK, viewed as more acceptable than one based in Silicon Valley. Companies that exploit public data to create products and services that are then sold back into the public sector at a profit are regarded by some as particularly exploitative. To address these concerns, the Bennett Institute suggests the NDL will need to put a value sharing framework in place, and have clear monitoring and communication processes. 

Hewlett Packard Enterprise’s House of Data report (2023) used the returns from 257 freedom of information requests to look at data modernisation and transformation in the public sector. 67% of respondents said they did not have enough budget, and 55% that they did not have the required skills, to create value from data, such as by making operational cost savings or improving public services. In March 2025, Laura Gilbert, the Government’s former director of data science, gave evidence to the House of Commons Science, Innovation and Technology Committee. Gilbert described how “there are a lot of people I came across in government…in quite senior digital data roles…who I don’t consider technologists or data people”. 

Commentators have also queried how much the NDL will cost, and whether it will lead to reductions in research and development budgets elsewhere. For example, Tom Forth indicates that a National Data Library “will be a large cost to taxpayers and the case should be made for it on those terms, probably as an alternative to other research and development funding”. Ellie Ashman, who worked on product registers for the Government, highlights how one of the problems for Registers was “the cost and disruption of change [being] huge in proportion to the perceived benefit”. She describes how, for any piece of national data infrastructure to be a success, it needs to be “funded like a permanent fixture, whilst also being [funded] as the complex space it is – with a skilled team and adaptive practices at the centre”. Some commentators query whether a brand new entity is needed (does this offer value for money, is it cost effective?) or whether we can build on what already exists, such as through the redesign of the Government Digital Service or through another body such as The Alan Turing Institute or The British Library.

What does this mean for rural communities? Commentators have been discussing the physical location of a NDL. Tom Forth suggests Leeds as an ideal candidate because over 90% of the UK’s GP data is held by two companies based there (TPP and EMIS). Additionally, the Leeds Innovation Arc is under construction, and The British Library is seeking to establish a presence in the city. Forth highlights how, over the last 15 years, the Government has created and funded a series of national institutions focused on data, AI and technology exclusively in London, which has incentivised growth in the South East of England. Whether the model for the NDL is one central point or a series of distributed hubs, it raises the question: can it have a physical presence in a rural area?  

Whether you are a numbers enthusiast or not, data – perfect or imperfect – matters. Many people sit on vast amounts of data without knowing how to use it, or simply do not use it at all. In academic circles, data supporting research findings is generally expected to be made openly available according to the FAIR Data Principles. Developed in 2016, these principles state that data must be Findable, Accessible, Interoperable and Re-usable. The principles were part of a broader effort to recognise the ability of machines to automatically find and use data or metadata, in addition to supporting re-use by individual researchers. 
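What a FAIR-aligned dataset description looks like in practice can be sketched briefly: a metadata record carrying a persistent identifier (Findable), a route to retrieval (Accessible), an open format (Interoperable) and a clear licence (Re-usable). The field names and values below are a simplified illustration, not a formal metadata standard.

```python
# Simplified illustration of FAIR-aligned metadata. Field names and values are
# assumptions for the sketch, not a formal standard such as DCAT or DataCite.
record = {
    "identifier": "doi:10.0000/example-rural-dataset",  # Findable: persistent ID
    "access_url": "https://example.org/data.csv",       # Accessible: retrieval route
    "format": "text/csv",                               # Interoperable: open format
    "licence": "OGL-UK-3.0",                            # Re-usable: clear terms
}

REQUIRED = {"identifier", "access_url", "format", "licence"}

def fair_check(metadata: dict) -> list[str]:
    """Report which FAIR-supporting fields are missing from a record."""
    return sorted(REQUIRED - metadata.keys())

print(fair_check(record))                   # complete record: nothing missing
print(fair_check({"format": "text/csv"}))   # incomplete record: gaps reported
```

A machine-actionable check like this is the point of the principles: software, not just a researcher, can decide whether a dataset can be found, fetched and reused.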

Defra is a data-rich Government department. What rural data is already publicly available, and how many datasets could be released? Which three to five existing rural datasets should we make available to AI researchers and innovators, aligned to the FAIR data principles? The UK Parliament’s horizon scan, which looked at the issues facing rural communities, flagged transport, education, health, social care and housing, as well as the need to support rural economic growth and skills development. 

Data is often collected by or on behalf of organisations for specific purposes without considering its broader applicability and significance. How can we bring together scattered, disparate pieces of evidence to make the case for rural areas? For instance, can we apply a rural lens [such as the Rural Urban Classification] to Government datasets that are widely held and regularly updated across various departments? Could this ‘rural reference data’ then be incorporated into the Statistical Digest of Rural England and made available on the NDL?  
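In practice, applying a rural lens of this kind often amounts to joining a dataset to a Rural Urban Classification lookup keyed by area code. The sketch below illustrates the idea in Python; the area codes, classification labels and field names are illustrative placeholders, not real ONS reference data.

```python
# Illustrative sketch: annotating a generic dataset with a Rural Urban
# Classification (RUC) label via an area-code lookup. All codes and
# labels below are made-up placeholders for demonstration only.

RUC_LOOKUP = {
    "E01000001": "Urban: Major Conurbation",       # hypothetical entry
    "E01000002": "Rural: Village and Dispersed",   # hypothetical entry
}

def apply_rural_lens(records, lookup=RUC_LOOKUP):
    """Return a copy of each record tagged with its RUC class,
    or 'Unclassified' when the area code is not in the lookup."""
    return [
        {**record, "ruc_class": lookup.get(record.get("area_code"), "Unclassified")}
        for record in records
    ]

# Example: a small (invented) dataset of GP appointment counts by area.
appointments = [
    {"area_code": "E01000002", "gp_appointments": 120},
    {"area_code": "E01000001", "gp_appointments": 450},
]

tagged = apply_rural_lens(appointments)
rural_only = [r for r in tagged if r["ruc_class"].startswith("Rural")]
```

Once tagged in this way, any departmental dataset with a common area-code field could in principle be filtered or aggregated along the rural–urban divide, which is the kind of 'rural reference data' the paragraph above envisages.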

More practically, in a health context, beyond telling us about accessibility and availability (e.g. number of GP appointments, dentistry provision), can this data be linked to information about when rural patients first present, the treatment and care they receive, and their clinical outcomes? How can rural employment data be connected to issues such as job quality, earnings, and the relationship between where people live and work? How can AI assist in categorising more data according to the Rural Urban Classification, and in processing data more quickly to provide real-time rural trends and analysis?

Which three to five datasets (big and small) do we not currently collect that would help rural areas? How can a NDL support the collection and curation of new rural data? Perhaps we need to stop collecting so much data – indeed, stop collecting some existing data – to be able to collect data that matters. How can we facilitate a discussion here that leads more public bodies, academic institutions, philanthropists and commercial bodies to ‘think rural’ in their work? 

Much of the existing discussion on the NDL focuses on government data and external organisations, but what do rural communities want to prioritise? In addition to technical solutions, we need to consider rural residents as users of the NDL and provide easy-to-access interfaces to assist with this. How can we ensure a NDL is user-centric, and that its users expand beyond researchers and innovators to rural residents and businesses? How will residents be able to navigate a NDL if they have a particular subject in which they are interested? Will the NDL include a searchable catalogue and other tools to help residents find the right data? This is also important so residents can see how they are benefitting from a NDL, rather than their data simply being traded. Ultimately, could this lead to more equitable access to services for rural residents?  

Where next? The Chancellor reported the outcome of Spending Review 2025 (SR25) on 11 June 2025. This sets day-to-day spending totals for all Government departments from 2026/27 to 2028/29, and investment spending plans for a further year (2029/30). Section 1.7 of SR25 references how the government has prioritised funding for “creating a new National Data Library to join up data across the public sector” (page 10). With the NDL on its way, will it become a vital piece of infrastructure for public service delivery and economic growth in the UK, and will it achieve this in ways that benefit rural communities? Watch this space.  

…………………………………………………………………………………………………

Jessica is a project manager at Rose Regeneration and a senior research fellow at The National Centre for Rural Health and Care (NCRHC). She is currently collecting data to highlight the positive impact of relocating NHS clinical services into community settings; developing a community masterplan; and evaluating a heritage skills programme. Jessica also sits on the board of a charity supporting rural communities across Cambridgeshire and is a member of her local Patient Participation Group. 

She can be contacted by email jessica.sellick@roseregeneration.co.uk 

Websites: http://roseregeneration.co.uk/ and https://www.ncrhc.org/ 

LinkedIn: 🌈Jessica Sellick