MAY 18-21, 2026 AT THE HILTON SAN FRANCISCO UNION SQUARE, SAN FRANCISCO, CA

47th IEEE Symposium on
Security and Privacy

IEEE S&P '26 Call for Artifacts

IEEE S&P '26 will evaluate research artifacts for availability, functionality, and reproducibility.

All authors of accepted IEEE S&P '26 papers are strongly encouraged to openly share their research artifacts for assessment. Each submitted artifact will be reviewed by the Artifact Evaluation Committee (AEC). Before submitting your artifact, please read the Artifact Evaluation Information below. Should you have any questions or concerns, you can reach the AEC chairs at sp26-ae@ieee-security.org.

Important Dates

Cycle 1

Cycle 2

Artifact Evaluation Information

Overview

A scientific paper consists of a collection of artifacts that extend beyond the document itself: software, hardware, evaluation data and documentation, raw survey results, mechanized proofs, models, test suites, benchmarks, and so on. In some cases, the public availability and quality of these artifacts is as important as the paper itself. To emphasize the importance of these artifacts and the benefits they bring to the authors and the community as a whole, and to promote the availability and reproducibility of experimental results, the IEEE S&P Symposium runs an Artifact Evaluation (AE) process. Authors of accepted papers are strongly encouraged to submit their artifacts for availability, functionality, and reproducibility assessments. The AEC will review each submitted artifact and also grant Distinguished Artifact Awards to outstanding artifacts accepted to IEEE S&P '26.

Process

Artifact evaluation is an optional process that assesses availability, functionality, and reproducibility; it takes place after paper notifications are sent out. The submitted artifacts will be reviewed by the AEC. Artifacts must be submitted via the submission form in the same cycle as the accepted paper.

After the AE decisions are made, the final artifacts need to be made available on a platform that supports permanent access. For this purpose, we recommend Zenodo. Other valid hosting options include institutional and third-party digital repositories such as FigShare, Dryad, and Software Heritage. We cannot accept artifacts hosted on personal websites or software development repositories such as GitHub, as these cannot guarantee permanent access. Please note that, after a successful AE decision, all final artifact files must be uploaded to the permanent repository to receive the badge; it is not sufficient to simply post a pointer to GitHub or to a website. Please see the Artifact Packaging and Submission Instructions for more details. In case of any questions, please contact the AEC chairs.
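
For authors unfamiliar with depositing to a permanent repository, the sketch below shows one possible way to create a Zenodo deposition and upload a packaged artifact through Zenodo's REST API. The token, archive name, and metadata values are placeholders, and the endpoints should be checked against Zenodo's current API documentation before use.

    import os
    import requests

    # Placeholder values -- replace with your own token, archive, and metadata.
    ZENODO_API = "https://zenodo.org/api/deposit/depositions"
    TOKEN = os.environ["ZENODO_TOKEN"]   # personal access token with deposit scope
    ARCHIVE = "artifact.tar.gz"          # the packaged artifact to upload
    params = {"access_token": TOKEN}

    # 1. Create an empty deposition.
    resp = requests.post(ZENODO_API, params=params, json={})
    resp.raise_for_status()
    deposition = resp.json()

    # 2. Upload the artifact archive to the deposition's file bucket.
    bucket_url = deposition["links"]["bucket"]
    with open(ARCHIVE, "rb") as fh:
        requests.put(f"{bucket_url}/{ARCHIVE}", data=fh, params=params).raise_for_status()

    # 3. Attach minimal metadata describing the artifact (values are hypothetical).
    metadata = {"metadata": {
        "title": "Artifact for <paper title>",
        "upload_type": "software",
        "description": "Research artifact accompanying our IEEE S&P '26 paper.",
        "creators": [{"name": "Doe, Jane"}],
    }}
    requests.put(f"{ZENODO_API}/{deposition['id']}", params=params, json=metadata).raise_for_status()

    # 4. Publish the deposition to mint a DOI (this step cannot be undone).
    requests.post(f"{ZENODO_API}/{deposition['id']}/actions/publish", params=params).raise_for_status()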

Authors define the contents of their artifact submission: for example, software, hardware, data sets, survey results, test suites, mechanized (but not paper) proofs, access to special hardware, and so on. Authors choose which badges their artifact should be evaluated for, i.e., one or more of the following three categories: Artifact Available, Functional, and Results Reproduced. In general, good artifacts are expected to be: consistent with the paper, as complete as possible, well documented, and easy to (re)use. The AEC will read the authors' instructions and evaluate whether the artifact meets the criteria for each of the requested badges.
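
As a rough guide rather than a requirement of this call, a layout along the following lines tends to make an artifact easy for evaluators to navigate; the file and directory names are only suggestions.

    artifact/
      README.md       - overview, claims covered, hardware and software requirements
      LICENSE         - license permitting evaluation and reuse
      INSTALL.md      - step-by-step setup instructions
      src/            - source code of the tool or prototype
      data/           - input data sets, or scripts that fetch them
      experiments/    - one script per experiment, table, or figure in the paper
      expected/       - expected outputs for comparison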

To facilitate easy evaluation and ensure reproducibility, artifacts that are submitted for the Functional and Results Reproduced badges must be formatted following the Artifact Packaging and Submission Instructions. Please note that this includes packaging the artifact in a way that it can be easily run on a public research infrastructure of the authors' choice. We understand that this is not possible for all artifacts; we will work with authors to identify and accommodate exceptions to this rule.
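
One common way to make an artifact easy to run on infrastructure the evaluators do not control is to expose a single, self-describing entry point. The sketch below is purely illustrative; the experiment names and script paths are hypothetical.

    import argparse
    import subprocess
    import sys

    # Hypothetical experiment scripts; each one regenerates a result from the paper.
    EXPERIMENTS = {
        "table2": "experiments/run_table2.py",
        "figure5": "experiments/run_figure5.py",
    }

    def main() -> int:
        parser = argparse.ArgumentParser(description="Reproduce the paper's main results.")
        parser.add_argument("experiment", choices=["all", *EXPERIMENTS],
                            help="which experiment to run")
        args = parser.parse_args()

        selected = EXPERIMENTS if args.experiment == "all" else {
            args.experiment: EXPERIMENTS[args.experiment]}
        for name, script in selected.items():
            print(f"[reproduce] running {name} ({script})")
            if subprocess.run([sys.executable, script]).returncode != 0:
                print(f"[reproduce] {name} failed", file=sys.stderr)
                return 1
        return 0

    if __name__ == "__main__":
        sys.exit(main())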

Each artifact submission will be reviewed by at least two AEC members. The review is single-blind and strictly confidential. All AEC members are instructed to maintain this confidentiality during and after the evaluation. We expect that all artifacts submitted to the evaluation process will be made publicly available via a permanent link after the evaluation. We can support delayed release (e.g., exploits under embargo), but we will not support evaluation of proprietary artifacts that will never be publicly released.

Reviewers may communicate with authors (via HotCRP) during artifact evaluation to help resolve glitches while preserving reviewer anonymity. Please make sure that at least one of the authors is reachable and sufficiently knowledgeable to answer questions in a timely manner.

Artifact Badges

Available

To earn this badge, the AEC must judge that the artifact associated with the paper has been made available for retrieval permanently and publicly. As an artifact undergoing AE often evolves as a consequence of AEC feedback, authors can use mutable storage for the initial submission, but must commit to uploading their materials to public services (e.g., Zenodo, FigShare, Dryad) for permanent storage backed by a Digital Object Identifier (DOI). Final permanent storage is a condition for receiving this badge. Authors are welcome to report additional sources, such as GitHub and GitLab, that may ease the dissemination of the artifact and possible future updates.

Functional

To earn this badge, the AEC must judge that the artifact conforms to the expectations set by the paper for functionality, usability, and relevance. An artifact must also be usable on machines other than the authors', including when specialized hardware is required; for example, paths, addresses, and identifiers must not be hardcoded, as illustrated in the sketch below. The AEC will particularly consider three aspects:
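
As one illustration of the portability requirement above, the hypothetical snippet below resolves machine-specific locations from command-line options or environment variables with relative defaults, rather than hardcoding them. The option and variable names are placeholders.

    import argparse
    import os

    parser = argparse.ArgumentParser()
    # Resolve directories from a flag, an environment variable, or a relative default,
    # so the artifact runs unchanged on the evaluators' machines.
    parser.add_argument("--data-dir",
                        default=os.environ.get("ARTIFACT_DATA_DIR", "./data"),
                        help="directory containing the input data sets")
    parser.add_argument("--output-dir",
                        default=os.environ.get("ARTIFACT_OUTPUT_DIR", "./results"),
                        help="directory where results are written")
    args = parser.parse_args()

    os.makedirs(args.output_dir, exist_ok=True)
    print(f"Reading inputs from {args.data_dir}, writing results to {args.output_dir}")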

Reproduced

To earn this badge, the AEC must judge that they can use the submitted artifact to obtain the main results presented in the paper. In short, is it possible for the AEC to independently repeat the experiments and obtain results that support the main claims made by the paper? The goal of this effort is not to reproduce the results exactly, but instead to generate results independently within an allowed tolerance such that the main claims of the paper are validated. In the case of lengthy experiments, scaled-down versions can be proposed, provided their significance is clearly and convincingly explained.
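
Where the main results are numerical, a simple tolerance check can make the comparison between reported and reproduced numbers explicit. The metric names, values, and tolerance below are placeholders for illustration only.

    import math

    # Hypothetical values: numbers reported in the paper vs. numbers obtained by the AEC.
    reported = {"detection_rate": 0.942, "overhead_pct": 3.1}
    reproduced = {"detection_rate": 0.937, "overhead_pct": 3.4}

    TOLERANCE = 0.10  # allow up to 10% relative deviation (placeholder)

    for metric, claimed in reported.items():
        observed = reproduced[metric]
        status = "OK" if math.isclose(observed, claimed, rel_tol=TOLERANCE) else "MISMATCH"
        print(f"{metric}: reported={claimed}, reproduced={observed} -> {status}")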