Call for Full Papers

The annual SIGIR conference is the major international forum for the presentation of new research results, and the demonstration of new systems and techniques, in the broad field of information retrieval (IR). The 42nd ACM SIGIR conference, to be held in Paris, France, welcomes contributions related to any aspect of information retrieval and access, including theories, foundations, algorithms, applications, evaluation, and analysis. The conference and program chairs invite those working in areas related to IR to submit original papers for review.

Important dates

Time zone: Anywhere on Earth (AoE)

Full paper abstract registration deadline: Mon, Jan 21, 2019
Full paper submission deadline: Mon, Jan 28, 2019
Full paper notifications: Sun, Apr 14, 2019

Submission Guidelines

All papers must be original and not simultaneously submitted to another journal or conference.

Submissions of full research papers must be in English, in PDF format, and at most 9 pages (including figures) plus 1 page of references in length, in the current ACM two-column conference format. Suitable LaTeX, Word, and Overleaf templates are available from the ACM Website (use the "sigconf" proceedings template). Full research papers must describe work that has not been published previously, has not been accepted for publication elsewhere, and is not currently under review elsewhere (including as a short-paper submission for SIGIR 2019). Submissions must be anonymous and should be submitted electronically via the conference submission system.


Authors are required to take all reasonable steps to preserve the anonymity of their submission. The submitted document must not include author information and must not include citations or discussion of related work that would make the authorship apparent. Note, however, that it is acceptable to refer explicitly in the paper to the companies or organizations that provided datasets, hosted experiments, or deployed solutions. For example, instead of stating that an experiment "was conducted on the logs of a major search engine", the authors should refer to the search engine by name. The reviewers will be informed that such a mention does not necessarily imply that the authors are currently affiliated with the organization.

While authors may upload their work to institutional or other preprint repositories before reviewing is complete, we generally discourage this since it places anonymity at risk (which could result in a negative outcome of the reviewing process). Authors should carefully review ACM's authorship policy before submitting a paper. To support identification of reviewers with conflicts of interest, the full author list must be specified at submission time.

Authors should note that changes to the author list after the submission deadline are not allowed without permission from the PC Chairs. At least one author of each accepted paper is required to register for, attend, and present the work at the conference.

All full papers are to be submitted via EasyChair.

The CFP for short papers (4 pages), as well as workshops, tutorials, doctoral consortium, industry day, and other SIGIR 2019 venues will be released separately.

List of Tracks

Upon submission, authors should select one or two of the seven tracks indicated below that best represent the area of their submission.

Search and Ranking. Research on core IR algorithmic topics, including IR at scale, covering topics such as:

  • Queries and query analysis
  • Web search, including link analysis, sponsored search, search advertising, adversarial search and spam, and vertical search
  • Retrieval models and ranking, including diversity and aggregated search
  • Efficiency and scalability
  • Theoretical models and foundations of information retrieval and access

Future Directions. Research with theoretical or empirical contributions on new technical or social aspects of IR, especially in more speculative directions or with emerging technologies, covering topics such as:

  • Novel approaches to IR
  • Ethics, economics, and politics
  • Applications of search to social good
  • IR with new devices, including wearable computing, neuroinformatics, sensors, the Internet of Things, and vehicles

Domain-Specific Applications. Research focusing on domain-specific IR challenges, covering topics such as:

  • Social search
  • Search in structured data, including email search and entity search
  • Multimedia search
  • Education
  • Legal
  • Health, including genomics and bioinformatics
  • Other domains such as digital libraries, enterprise, news search, app search, archival search

Content Analysis, Recommendation and Classification. Research focusing on recommender systems, rich content representations and content analysis, covering topics such as:

  • Filtering and recommender systems
  • Document representation
  • Content analysis and information extraction, including summarization, text representation, readability, sentiment analysis, and opinion mining
  • Cross- and multilingual search
  • Clustering, classification, and topic models

Artificial Intelligence, Semantics, and Dialog. Research bridging AI and IR, especially toward deep semantics and dialog with intelligent agents, covering topics such as:

  • Question answering
  • Conversational systems and retrieval, including spoken language interfaces, dialog management systems, and intelligent chat systems
  • Semantics and knowledge graphs
  • Deep learning for IR, embeddings, and agents

Human Factors and Interfaces. Research into user-centric aspects of IR, including user interfaces, behavior modeling, privacy, and interactive systems, covering topics such as:

  • Mining and modeling search activity, including user and task models, click models, log analysis, behavioral analysis, and attention modeling
  • Interactive and personalized search
  • Collaborative search, social tagging and crowdsourcing
  • Information privacy and security

Evaluation. Research that focuses on the measurement and evaluation of IR systems, covering topics such as:

  • User-centered evaluation methods, including measures of user experience and performance, user engagement and search task design
  • Test collections and evaluation metrics, including the development of new test collections
  • Eye-tracking and physiological approaches, such as fMRI
  • Evaluation of novel information access tasks and systems such as multi-turn information access
  • Statistical methods and reproducibility issues in information retrieval evaluation

Program Chairs

Amazon Research
University of Montreal
RMIT University


All questions about full paper submissions should be emailed to sigir2019-pcchairs AT easychair DOT org.

SIGIR 2019 Sponsors

Additional Sponsors