How to share your research code openly: Best practices for transparency and reuse

The Researcher's Source

By: Erika Pastrana, Wed Mar 11 2026

Vice President, Nature Research Journals Portfolio

Openly sharing the code that underpins your research makes your results transparent, improves reproducibility and replicability of your findings, can increase citations and visibility, and helps you meet funder, institution, and publisher requirements. Building on our previous introduction to open code sharing, this practical guide shows you how to do it responsibly for maximum reward.

Step one: preparing your code for sharing publicly

Review your content: check that your code includes only what you intend to share. Make a simple list of: 

  • The new or custom code you wrote which is important for your results. 
  • Any external packages your code needs (e.g. Python libraries). 
  • Any runnable examples or demo datasets so users can test your code.  
  • Any data or parts you can't share (because of privacy, legal, or commercial reasons). If something can't be shared, note why, and how someone could request access. 

Clean and organise your project

  • Delete clutter: remove old drafts, temporary folders, and anything that requires passwords or keys to access. 
  • Use a clear folder layout so someone new can find things quickly: directories should have meaningful names that describe their contents (e.g. ‘my-project-name-version’ vs ‘code’). 

Document your code: write down exactly what is needed to use your code. This includes: 

  • A README in the top folder that explains what the code does, how to set it up, how to run it, how to reproduce the results, and who to contact with any questions.  
  • Environment dependencies: What exactly does a user need to install to use your code? Provide a list of packages and versions (e.g. requirements.txt or environment.yml) for reproducibility.  
  • Container: It can also be helpful to include a container/capsule (e.g. Dockerfile or a Code Ocean capsule) so others can run your code in a ready-made environment. These platforms make it easy for readers to immediately run the code, and in some cases offer verification that the code conforms to best practices and that it works. 
  • Consider adding comments to the code files to make it easier for others to understand the rationale behind the code and to re-use the resource. 
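As an illustration, a minimal dependency file and container recipe might look like the following. The package names, versions, and the `run_analysis.py` entry point are placeholders, not recommendations:

```text
# requirements.txt — pin the exact versions you actually used
numpy==1.26.4
pandas==2.2.2
matplotlib==3.8.4
```

```dockerfile
# Dockerfile — a ready-made environment others can run (sketch)
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "run_analysis.py"]
```

Pinning exact versions, rather than open-ended ranges, is what makes the environment reproducible months or years later.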

Ask a colleague to test it: we strongly recommend asking colleagues who are not familiar with the code to test it, ideally on a different machine. They will quickly surface missing dependencies and unclear instructions that you can no longer see yourself.

Step two: choosing a repository

While version control with Git is a popular way to ensure a backup of your code and documentation of its development over time, a stable, permanent version of your code is also essential to ensure that future readers can find the exact version of the code you used. To help others repeat your results, put your code in a repository that gives it a permanent identifier (DOI/PID), such as Zenodo or Code Ocean. This offers long-term preservation for your code and makes it easier to find and cite, giving you greater credit for your work.
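If you archive a GitHub repository through Zenodo's GitHub integration, you can control the deposit's metadata with an optional `.zenodo.json` file in the repository root. A minimal sketch, with all values as placeholders:

```json
{
  "title": "my-analysis-tool",
  "description": "Code supporting the analyses in the accompanying paper.",
  "creators": [
    {"name": "Doe, Jane", "affiliation": "Example University"}
  ],
  "license": "MIT",
  "keywords": ["reproducibility", "research software"]
}
```

Good metadata here is what makes the archived deposit findable and correctly attributed.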

Step three: licensing your code

A licence tells others exactly what they’re allowed to do with your code, and under what conditions. Without a licence, your code is “all rights reserved” by default, which means others can’t legally use it, modify it, or share it. We recommend using a licence approved by the Open Source Initiative (OSI) to support reuse, reproducibility, and compliance. Common open source licences include MIT, Apache 2.0, and GPL. Find out more at Choose a License.

Repository Dos & Don’ts

Do

  • Choose trusted PID platforms like Zenodo, Code Ocean, or institutional repositories. 
  • Consider using a container platform (e.g. Code Ocean). 
  • Include your README for clear documentation. 
  • Add an open source licence. 
  • Include environment files (e.g. requirements.txt or environment.yml). 
  • Upload tests: provide a runnable example, such as a small demo dataset or ‘smoke test’, that users can quickly run to check your code. 
  • Add metadata to make your code discoverable.

Don’t

  • Don’t rely on a GitHub URL only (without a PID/DOI, there is a risk your code won’t be accessible in the future). 
  • Don’t skip documentation (readers won’t know how to run your code). 
  • Don’t omit the licence.
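A smoke test can be as small as the sketch below: it runs the pipeline end to end on a tiny demo input and checks only that it completes and produces the expected fields. The function and values here are hypothetical stand-ins for your own code:

```python
"""Minimal 'smoke test' sketch for a research code repository.

run_analysis() is a hypothetical stand-in for your real pipeline;
the demo list is a stand-in for a small bundled demo dataset.
"""

def run_analysis(values):
    # Stand-in for the real pipeline: returns simple summary statistics.
    n = len(values)
    mean = sum(values) / n
    return {"n": n, "mean": mean}

def test_smoke():
    # A tiny demo input a reviewer can run in seconds.
    demo = [1.0, 2.0, 3.0, 4.0]
    result = run_analysis(demo)
    # Check only that the pipeline completes and the output has the right shape.
    assert result["n"] == 4
    assert abs(result["mean"] - 2.5) < 1e-9

if __name__ == "__main__":
    test_smoke()
    print("smoke test passed")
```

The point is speed: a reviewer should be able to confirm your code runs at all without downloading the full dataset or waiting for a long computation.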

Step four: writing a code availability statement

All Springer Nature journals require a code availability statement in original research articles that have developed new code as part of the work (we’ve included an example below). The following information must be provided in this statement: 

  • What: name of the code/tool and a one-line description. 
  • Where: repository URL and persistent identifier (DOI/PID), including a citation to the code in the reference list. 
  • Which version: tag/release number (e.g. v1.0.0) that matches the analysis in the paper. 
  • Licence: e.g. MIT, Apache-2.0, GPL-3.0. 
  • How to run: point to README/environment files or capsule (e.g. Code Ocean). 
  • Any restrictions, and reasons for this: Where there are restrictions, such as privacy, legal, or commercial limitations, it should be clear how to request access or link to a controlled access version. 

Example of a code availability statement, supporting best practices for code sharing - original publication in Nature Machine Intelligence © Springer Nature 2026

Step five: citing your code

Treat code like a research output: it should be cited as a reference and in the article text (similar to data citations). The reference for the code should include the PID/DOI. This ensures credit for the team and individuals who worked on the code, makes your software findable, and lets readers retrieve the exact version you used. 
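One widely supported way to make your code citable is a CITATION.cff file in the repository root, which platforms such as GitHub and Zenodo can read. A minimal sketch, with placeholder values throughout:

```yaml
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
title: "my-analysis-tool"
authors:
  - family-names: "Doe"
    given-names: "Jane"
version: "1.0.0"
doi: "10.5281/zenodo.0000000"  # placeholder — use the DOI minted by your repository
date-released: "2026-03-11"    # placeholder
```

With this file in place, readers and citation tools can recover the exact version, DOI, and author list without hunting through the paper.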

Example of a code citation, supporting best practices for code sharing - original publication in Nature Machine Intelligence © Springer Nature 2026

Why best practices matter

Taken together, these best practices make open code sharing simpler, more effective, and more rewarding – for you and other researchers. Well-prepared, clearly documented, and properly archived code ensures:

  • Results are reproducible and replicable: others can run the same code, under the same conditions, and verify or build on your findings with confidence. 
  • Peer review is smoother: clear documentation and accessible code reduce back-and-forth queries with editors and reviewers. 
  • Compliance: best practice supports alignment with institutional, funder, and journal expectations, including Springer Nature’s unified code policy.  
  • Visibility and impact: code that is archived with a DOI/PID is easier to discover, reuse, and cite. 
  • Future-proofing: versioned, well-described code is also easier for you to revisit as projects evolve. 
  • Community benefits: open, reusable code advances science for all, reducing duplication of effort, and strengthening trust in research. 

Quick checklist for authors

If you keep the following eight checks in mind as you work through your research project, you’ll be ready to share your code openly, fully aligned with best practice and Springer Nature policy expectations, helping others understand and reproduce your work.

  1. Code is clean, organised, and descriptively named. 
  2. Any sensitive data that can’t be shared has been removed. 
  3. The README explains what the code does, how to install and run it (including dependencies and environment), the expected outputs, and who to contact with questions. 
  4. Code has been tested by another colleague on a different machine. 
  5. A versioned release (e.g. v1.0.0) has been tagged in the Git repository. 
  6. Code is archived in a PID-minting repository, with the README, descriptive metadata, and an OSI-approved licence. 
  7. Code availability statement included in my manuscript submission. 
  8. Code is cited (in-text and in the references). 

See our research code policy


Author: Erika Pastrana

Vice President, Nature Research Journals Portfolio

Erika Pastrana is the Vice President of the Nature Research and Reviews Journals, a distinguished collection of over 60 scientific publications that span diverse fields—from Nature Sustainability and Nature Reviews Psychology to Nature Medicine and Nature Reviews Genetics. Under her leadership since January 2025, these journals uphold the highest standards of scientific reproducibility, global impact, and a strong commitment to open science.

Erika began her editorial career in 2010 as an editor at Nature Methods, focusing on neuroscience. In 2014, she transitioned to Nature Communications as a Team Manager, and by 2017, she became Editorial Director of the Nature Research Journals division, overseeing editorial strategy for health and applied sciences.

She holds a degree in Biochemistry and Molecular Biology and a Ph.D. in Neuroscience from the Universidad Autónoma de Madrid, where she researched axonal regeneration in animal models of nervous system injury. Erika continued her scientific work with four years of postdoctoral research at Columbia University in New York.

In recognition of her contributions to scientific communication, Erika received the 2024 Communication Award from the Spanish Geographical Society.