Sponsored by the Association for the Advancement of Artificial Intelligence

November 6-8, 2025
Westin Arlington Gateway | Arlington, VA, USA
The Association for the Advancement of Artificial Intelligence is pleased to present the 2025 Fall Symposium Series, to be held Thursday-Saturday, November 6-8 at Westin Arlington Gateway, Arlington, Virginia. The Fall Symposium Series is an annual set of meetings run in parallel at a common site. It is designed to bring colleagues together in an intimate forum while at the same time providing a significant gathering point for the AI community. The two and one-half day format of the series allows participants to devote considerably more time to feedback and discussion than typical one-day workshops. It is an ideal venue for bringing together new communities in emerging fields.
Symposia generally range from 40–75 participants each. Participation is open to active participants as well as other interested individuals on a first-come, first-served basis. Each participant is expected to attend a single symposium.
The program will host the following symposia:
- AI for Social Good: Emerging Methods, Measures, Data, and Ethics
- AI Trustworthiness and Risk Assessment for Challenged Contexts (ATRACC)
- Engineering Safety-Critical AI Systems
- First AAAI Symposium on Quantum Information & Machine Learning (QIML): Bridging Quantum Computing and Artificial Intelligence
- Safe, Ethical, Certified, Uncertainty-aware, Robust, and Explainable AI for Health (SECURE-AI4H)
- Unifying Representations for Robot Application Development
AAAI Code of Conduct for Events and Conferences
All persons, organizations, and entities that attend AAAI conferences and events are subject to the standards of conduct set forth in the AAAI Code of Conduct for Events and Conferences.
Registration and General Information
Registration Fees
The conference registration fee includes admission to one symposium, access to the electronic proceedings, coffee breaks, and the opening reception.
Refund Requests
The deadline for refund requests is October 3, 2025. All refund requests must be made in writing to fssreg@aaai.org. A $50.00 processing fee will be assessed for all refunds.
Registration Fee Schedule
Registration Deadlines (all deadlines are 11:59 PM Eastern Time)

                                  Member     Nonmember   Student Member   Nonmember Student
Early (on or before October 3)    $395.00    $560.00     $225.00          $335.00
Late (after October 3)            $495.00    $660.00     $325.00          $435.00
AAAI Silver Registration
(Includes AAAI membership, plus the conference)
Registration Deadlines (all deadlines are 11:59 PM Eastern Time)

                                  Regular One-Year   Regular 3-Year   Regular 5-Year   Student (One-Year)
Early (on or before October 3)    $540.00            $830.00          $1,120.00        $300.00
Late (after October 3)            $640.00            $930.00          $1,220.00        $400.00
Visa Information
Letters of invitation can be requested by accepted FSS-25 authors or registrants with a completed registration with payment. You can access the visa letter form via the link in your registration confirmation email.
Hotel Information
For your convenience, AAAI has reserved a block of rooms at the Westin Arlington Gateway. The Westin Arlington Gateway is located in the Ballston area of Arlington. It is a short walk from the Ballston Metro Station, which allows guests to easily explore Arlington, downtown Washington, DC, Alexandria, or Georgetown. Reagan National Airport is easily accessible via the Washington Metro rapid transit.
The conference room rate per night is $219.00 (King/Double).
Rates do not include applicable state and local taxes (approximately 13.25%) or hotel fees in effect at the time of the meeting. Symposium attendees must contact the Westin Arlington Gateway directly; please request the group rate for the Association for the Advancement of Artificial Intelligence (AAAI) when reserving your room. The cut-off date for reservations is October 18, 2025 at 5:00 PM ET (local time at the hotel). Reservations after this date will be accepted based on availability at the hotel's prevailing rate. All reservations must be secured by one night's deposit per room via credit card. Reservations may be cancelled with no penalty up to 5:00 PM, 72 hours prior to the date of arrival; after that time, a penalty of one night's room and tax will be incurred. Upon check-in, the date of departure must be confirmed. Early departure will result in a fee equal to one night's guest room rate.
Westin Arlington Gateway
801 North Glebe Road,
Arlington, Virginia 22203 USA
Transportation to the Hotel
For complete transportation information and directions, please see
https://www.marriott.com/hotels/maps/travel/wasag-the-westin-arlington-gateway/ and scroll down to “Getting Here.”
Hotel Parking: The on-site parking fee is $40.00 per day.
Disclaimer
In offering the Westin Arlington Gateway (hereinafter referred to as “Supplier”), and all other service providers for the AAAI Fall Symposium Series, the Association for the Advancement of Artificial Intelligence acts only in the capacity of agent for the Supplier, which is the provider of hotel rooms and transportation. Because the Association for the Advancement of Artificial Intelligence has no control over the personnel, equipment or operations of providers of accommodations or other services included as part of the Symposium program, AAAI assumes no responsibility for and will not be liable for any personal delay, inconveniences or other damage suffered by symposium participants which may arise by reason of (1) any wrongful or negligent acts or omissions on the part of any Supplier or its employees, (2) any defect in or failure of any vehicle, equipment or instrumentality owned, operated or otherwise used by any Supplier, or (3) any wrongful or negligent acts or omissions on the part of any other party not under the control, direct or otherwise, of AAAI.
2025 Fall Symposium Student Travel Grant
The National Science Foundation offers a travel grant for US-based students attending or participating in the 2025 Fall Symposium Series. The program aims to reduce financial barriers for students from less-resourced institutions in the USA, enabling them to engage with cutting-edge AI research, network with leading researchers, and participate in mentoring opportunities in a collaborative environment. Funds should be used for necessary travel expenses, lodging, and registration costs.
Deadline: September 5th
Submission Requirements
Interested individuals should submit a paper or abstract by the deadline listed below, unless otherwise indicated by the symposium organizers on their supplemental website. Submit your work directly to the individual symposium according to its directions; do not mail submissions to AAAI. See the appropriate section in each symposium description for specific submission requirements.
In line with AAAI policy, the symposia will not permit virtual presentations: all presenters, keynote speakers, and panelists must give their talks in person. A livestream for remote attendees can be arranged upon request. Additionally, all participants (presenters, organizers, and attendees) are required to register for the event.
Submission Site
Please be sure to select the appropriate symposium when submitting your work. Please see the individual symposia for any additional submission site details.
Important Dates
- By July 11: AAAI opens registration for Fall Symposium Series
- August 1 (unless otherwise noted): Papers due to organizers
- August 15 (unless otherwise noted): Organizers send notifications to authors
- August 29 (recommended): Fall Symposium Series final papers due to organizers
- October 3: Deadline for Registration Refund Requests – Late Registration Rate Begins
Onsite Registration Schedule
Upon arrival please check in at the registration area for your badge. AAAI will release the exact location of registration closer to the event.
Registration hours will be:
Thursday, Nov 6: 8:00 AM – 5:00 PM
Friday, Nov 7: 8:30 AM – 5:00 PM
Saturday, Nov 8: 8:30 AM – 11:00 AM
General Event Schedule
Each symposium's schedule may vary slightly.
Thursday, Nov 6
9:00am – 10:30am Session
10:30am – 11:00am Break
11:00am – 12:30pm Session
12:30pm – 2:00pm Lunch
2:00pm – 3:30pm Session
3:30pm – 4:00pm Break
4:00pm – 5:00pm Session
5:30pm – 6:30pm Reception
Friday, Nov 7
9:00am – 10:30am Session
10:30am – 11:00am Break
11:00am – 12:30pm Session
12:30pm – 2:00pm Lunch
2:00pm – 3:30pm Session
3:30pm – 4:00pm Break
4:00pm – 5:00pm Session
5:30pm – 6:30pm Plenary
Saturday, Nov 8
9:00am – 10:30am Session
10:30am – 11:00am Break
11:00am – 12:30pm Session
Additional Information:
General inquiries regarding the symposium series should be directed to AAAI at fssreg@aaai.org.
AI for Social Good: Emerging Methods, Measures, Data, and Ethics
AI has demonstrated transformative potential across sectors such as aging, combating information manipulation, disaster response, education, environmental sustainability, government, healthcare, social care, transportation, and urban planning. Yet the systematic development of AI for Social Good (AI4SG) remains fragmented across those many research communities, with limited convergence around effective methodologies, equitable impact measurement, access to important data, or long-term engagement with affected populations. The main objective of this symposium is to convene researchers, practitioners, and policymakers across those disciplines, with a particular focus on speculative approaches and important early-stage work that may not find a home in traditional technical conferences. We welcome speculative work, field reports from pilot deployments, critical reflections on failed projects, and interdisciplinary contributions that push the boundaries of AI application for equitable social outcomes.
Topics
The general topics of interest in this symposium are broad, covering emerging methods, measures, data, and ethics relating to AI4SG.
Some examples are:
- What methodologies enable meaningful co-design of AI systems with underserved or marginalized populations?
- How can fairness, accountability, transparency, and ethics be operationalized and measured in AI deployments?
- What are the emerging social risks from using LLM architectures and how can they be monitored and mitigated?
- How should AI monitoring systems be designed and used across different sectors?
- What are the emerging types of and effects of misinformation on AI4SG?
Format
This two-and-one-half-day symposium will feature a diverse program, including invited keynote talks, peer-reviewed presentations of completed research, surveys of research from different sectors and areas of application, and presentations of speculative or early-stage work. We anticipate 40-60 participants.
Submission Requirements
All contributions must be original and unpublished and should not be under consideration by other conferences or journals. The evaluation criteria include relevance to the workshop, novelty, technical contribution, impact significance, clarity, and reproducibility. All presentations must be in person.
- Technical Papers: Full-length research papers of up to 7 pages (excluding references and appendices)
- Short Papers: Position or short papers of up to 4 pages (excluding references and appendices)
- All papers must be submitted in PDF format, using the AAAI-25 author kit.
- Submit papers by August 1, 2025 to https://easychair.org/my/conference?conf=fss25
Symposium Committee
- Prof. Daniel E. O’Leary (oleary@usc.edu), University of Southern California, USA (Chair, Contact Person)
- Prof. Erik Cambria (cambria@ntu.edu.sg), Nanyang Technological University, Singapore
- Prof. Michael J Prietula (mj.prietula@emory.edu), Emory University, USA
- Prof. Bo An (boan@ntu.edu.sg) Nanyang Technological University, Singapore
- Prof. Guido Geerts (geerts@udel.edu), University of Delaware, USA
- Prof. Mary Lacity (mlacity@walton.uark.edu), University of Arkansas, USA
- Dr. Rui Mao (rui.mao@ntu.edu.sg), Nanyang Technological University, Singapore
- Dr. Eunika Mercier-Laurent (eunika@innovation3d.fr) Chair of TC12 IFIP (Artificial Intelligence)
- Prof. San Murugesan (san@computer.org), Western Sydney University, Australia
- Prof. Steve Smith, (ssmith@andrew.cmu.edu), Carnegie Mellon University, USA
- Prof. Veda C. Storey, (vstorey@gsu.edu), Georgia State University, USA
- Prof. Yangin (Ben) Yoon, (ben.yangin.yoon@seoultech.ac.kr), Seoul National University of Science and Technology
- Dr. Lijun Yu (lijuny@google.com), Google, USA
AI Trustworthiness and Risk Assessment for Challenged Contexts (ATRACC)
AI systems, including those built on large language and foundational/multi-modal models, have proven their value in all aspects of human society, rapidly transforming traditional robotics and computational systems into intelligent systems with emergent, and often unanticipated, beneficial behaviors. However, the rapid embrace of AI-based critical systems introduces new dimensions of error that increase risk and limit trustworthiness. The design of AI-based critical systems requires proving their trustworthiness. Thus, AI-based critical systems must be assessed across many dimensions by different parties (researchers, developers, regulators, customers, insurance companies, end-users, etc.) for different reasons. Assessment of trustworthiness should be made at both the full-system level and the level of individual AI components. At the theoretical and foundational level, such methods must go beyond explainability to deliver uncertainty estimations and formalisms that can bound the limits of the AI, provide traceability, and quantify risk.
The focus of this symposium is on AI trustworthiness broadly and methods that help provide bounds for fairness, reproducibility, reliability, and accountability in the context of quantifying AI-system risk, spanning the entire AI lifecycle from theoretical research formulations all the way to system implementation, deployment, and operation. This symposium will bring together industry, academia, and government researchers and practitioners who are vested stakeholders in addressing these challenges in applications where a priori understanding of risk is critical.
Topics
Topics of interest include, but are not limited to:
- Agentic AI: addressing challenges related to autonomy and safety, including multi-agent systems with an emphasis on robustness, reliability, accountability, and emergent behaviors in risk-averse contexts.
- Pluralistic alignment: approaches to AI alignment for addressing the diverse and often conflicting perspectives, values, and needs of different users.
- AI benchmarking and evaluation: theoretical and empirical methods for analyzing the capabilities of foundation models, including benchmark design, formal guarantees, and multimodal AI evaluation.
- Methods and approaches for enhancing and evaluating reasoning in general purpose AI systems, e.g., causal reasoning techniques and outcome verification approaches.
- Assessment of non-functional requirements such as explainability, accountability, and privacy as well as assessment from pilot stage to systematic evaluation and monitoring.
- Approaches for verification and validation of AI systems, including evaluation of different aspects such as factuality and trustworthiness.
- Evaluation of AI systems vulnerabilities and risks, including adversarial and red-teaming approaches.
- Links between performance and trustworthiness leveraged by AI sciences, system and software engineering, metrology, and Social Sciences and Humanities.
- User studies and evaluation of governance mechanisms in organizations and communities.
Symposium Details:
- Duration: 2 1/2 days
- Features: Keynote and invited talks from accomplished experts in the field of Trustworthy AI, panel sessions, presentation of selected papers, student papers, and a poster session.
Submission Details:
- Full papers: Maximum 8 pages
- Poster/short/position papers: Maximum 4 pages
- Deadline for submission: August 1st
- Notification of acceptance or rejection: August 15th
- Camera-ready papers for symposium proceedings: August 29th
- Submission Link: https://easychair.org/my/conference?conf=fss25
All accepted papers will be included in the AAAI Fall 2025 proceedings.
For the provisional schedule, program committee, and practical information, please visit the symposium website.
Program Chairs:
Bertrand Braunschweig (bertrand.braunschweig@irt-systemx.fr) and Brian Hu (brian.hu@kitware.com)
Engineering Safety-Critical AI Systems
Artificial intelligence (AI) has increasing application to high-risk settings, but foundational practices for engineering safe AI systems remain few, and research in safety engineering for AI remains scattered across disparate fields of study. The AI community needs more discussion on identifying, assessing, and mitigating AI hazards. In this symposium, we will strike at a single fundamental question: How should we build AI systems for safety-critical applications? The symposium will bring together three communities as part of a broader effort to advance the discipline of engineering AI for safety: 1) domain experts who want to use AI in safety-critical applications, 2) AI researchers, engineers, and practitioners who build AI capabilities, and 3) safety and systems engineers who are tasked with designing, developing, and testing systems that include AI components.
Topics
While most topics around safety and AI are welcome, we are especially interested in topics that inform how to engineer AI systems now and in the future. Areas of interest include but are not limited to the following:
- Safety requirements engineering for AI and/or new safety standards for AI systems
- Software architectures for increased AI system safety
- Uncertainty quantification and/or robustness in AI components or systems
- Methods for defining AI system specifications
- Safety test and evaluation of AI components or systems
- Software tooling to support safety engineering in AI
- Reporting of high impact failure modes or cases in AI systems
- High-risk application domains of AI that require specific definitions of safety
- Case studies of safety engineering for deployed AI systems
- Formal specification and verification of AI systems and/or processes for certifying AI system safety
- Provably safe AI and safe-by-design AI
- Risk assessment of generative AI in safety-critical systems such as medical robots and autonomous vehicles
- Human-AI interaction in safety-critical systems
Format
The symposium will be two and a half days, featuring invited keynote speakers, selected paper presentations, and panels.
Submission Requirements
There are two tracks for submission:
Track #1: Research and Development – This track seeks papers articulating new AI safety research or technical reports describing new safety engineering artifacts (process, procedure, standards, software architectures, tooling, etc.). Preference will be afforded to works that are rigorous, have evidence to support real-world application, and advance the discipline of engineering safe AI systems.
Track #2: Case Studies in Engineering AI Systems – This track seeks papers that highlight the practical engineering challenges with building safe AI systems in a challenging application domain and/or present a case study in engineering safe AI systems in the real world. Preference will be afforded to works that showcase real-world problems with high impact and rigorous and well-justified engineering solutions to these problems.
For both tracks, we are seeking papers in two formats: (a) short papers (2-4 pages, excluding references) and (b) full papers (6-8 pages, excluding references). All submissions must be formatted using the AAAI-25 author kit and be submitted through the AAAI EasyChair site. All papers will undergo single-blind review (the papers do not need to be anonymized). Accepted full papers will have options to be included in the AAAI symposium proceedings.
- Paper submission deadline: August 11
- Notification of acceptance or rejection: August 22
- Camera-ready deadline: August 29
Symposium Committee
Co-chairs:
Wanyi Chen (Duke University, wc151@duke.edu)
Dr. Eric Heim (Carnegie Mellon University, etheim@sei.cmu.edu)
Organizing Committee:
Dr. Gregory Canal (Johns Hopkins University, Greg.Canal@jhuapl.edu)
Dr. Mary Cummings (George Mason University, cummings@gmu.edu)
Andrew Dolgert (Carnegie Mellon University, ajdolgert@sei.cmu.edu)
Dr. Lu Feng (University of Virginia, lf9u@virginia.edu)
Ritwik Gupta (University of Maryland and University of California, Berkeley, ritwikgupta@berkeley.edu)
Chase Midler (CrowdStrike, cmidler@gmail.com)
Dr. Sanjeev Mohindra (MIT Lincoln Laboratory, smohindra@ll.mit.edu)
Oren Wright (Carnegie Mellon University, owright@sei.cmu.edu)
First AAAI Symposium on Quantum Information & Machine Learning (QIML): Bridging Quantum Computing and Artificial Intelligence
Symposium Objectives:
Quantum Information and Machine Learning (QIML) is an emerging field that leverages quantum algorithms to potentially revolutionize AI paradigms. The goal of this Fall Symposium is to bring together participants from academia, industry, and government to explore how QIML, along with classical ML and high-performance computing research, is currently being expanded to support various fields in science and engineering. Recent advancements in quantum hardware and quantum algorithms have enabled promising applications in classification, regression, generative modeling, optimization, and reinforcement learning. The symposium will bring together leading researchers from academia, industry practitioners, and emerging scholars to explore cutting-edge theoretical advancements, novel methodologies, and practical applications, as well as to discuss current challenges. The event will foster interdisciplinary dialogue to highlight both visionary and preliminary work, identifying key challenges and opportunities that quantum technologies present. The symposium will be organized around several topics within QIML, which include, but are not limited to:
Topics:
- Quantum algorithms for classical AI (classification, regression, clustering, PCA, etc.)
- Quantum-inspired machine learning techniques
- Quantum Reinforcement Learning
- Quantum Neural Networks
- Quantum Graph Neural Networks
- Quantum Generative Models
- Variational Quantum Classifiers and Regressors
- Quantum data encoding methods
- Classical AI/ML for developing quantum algorithms, quantum security, networking etc.
- Scalability and hardware considerations for QIML applications
- Applications of QIML to finance, healthcare, climate, security, biomedical, etc.
- Hybrid Quantum Computing with classical High Performance Computing
- Ethical considerations in QIML
Format of Symposium:
Two days of invited and contributed presentations plus panel discussions with industry and academic experts. The symposium will also include an evening poster session.
Attendance:
- Keynote addresses from prominent researchers
- Technical paper presentations
- Panel discussions featuring industry, academia, and government experts
- Breakout groups for focused discussions on emerging QIML topics
- Poster sessions highlighting ongoing research
- Interactive tutorials demonstrating quantum machine learning applications (e.g., NASA Earth observational satellite data used in implementing QML models), use of quantum computing platforms, open-source libraries, tools, and educational material (e.g., IBM Qiskit Machine Learning Ecosystem, PennyLane, Google Cirq)
Submission requirements:
We are accepting position, poster, artifact, and research paper submissions in two formats: (a) short papers (2-4 pages) and (b) full papers (6-8 pages). All submissions will undergo single-blind peer review. All submissions must follow the instructions of the AAAI author kit and be submitted through the AAAI EasyChair site. Accepted submissions will be published in the AAAI symposium proceedings.
- Full papers (6-8 pages): describe work related to the topics above.
- Position papers (2-4 pages, excluding references): discuss topics outlined above.
- Artifact papers (2-4 pages, excluding references): describe artifacts (e.g., software tools or libraries) related to the topics above.
- Poster papers (2-4 pages).
Paper Submission Deadline: August 1, 2025
Decision Notification: August 15, 2025
Camera Ready Deadline: August 29, 2025
Submission Site Information: https://easychair.org/my/conference?conf=fss25
Symposium Committee
Steering Committee:
- Dr. James Hendler, Rensselaer Polytechnic Institute (RPI), USA. hendler@cs.rpi.edu
- Dr. Malik Magdon-Ismail, Rensselaer Polytechnic Institute (RPI), USA. magdon@cs.rpi.edu
- Dr. Jennifer Wei, NASA, USA. jennifer.c.wei@nasa.gov
- Dr. Ali Tajer, Rensselaer Polytechnic Institute, USA. tajer@ecse.rpi.edu
Organizing Committee:
- General Chair: Dr. James Hendler (RPI), USA
Program Chairs:
- Dr. Thilanka Munasinghe, Rensselaer Polytechnic Institute (RPI), USA. munast@rpi.edu
- Dr. Kimberly Cornell, University at Albany, USA. kacornell@albany.edu
- Dr. Malik Magdon-Ismail, Rensselaer Polytechnic Institute (RPI), USA. magdon@cs.rpi.edu
Track Chairs:
- Dr. Samuel Yen-Chi Chen, Wells Fargo, (QML + Applications), ycchen1989@gmail.com
- Dr. Walter Krawec, University of Connecticut, (Quantum Information Science, Cryptography/Security), walter.krawec@uconn.edu
- Dr. Ali Tajer, RPI, (Quantum Information Science), tajer@ecse.rpi.edu
Program Committee:
- Dr. Yulia Gel, Virginia Tech / National Science Foundation, USA, ygl@vt.edu
- Dr. Yuzhou Chen, University of California, Riverside, USA, yuzhouc@ucr.edu
- Dr. Kyo Lee, Jet Propulsion Laboratory – NASA / Caltech, USA, huikyo.lee@jpl.nasa.gov
- Dr. Nick LaHaye, Spatial Informatics Group LLC, USA, nlahaye@sig-gis.com
- Dr. Ignacio Segovia Dominguez, West Virginia University, USA, ignacio.segoviadominguez@mail.wvu.edu
- Dr. Zhiding Liang, Rensselaer Polytechnic Institute, USA, liangz9@rpi.edu
- Dr. George Berg, University at Albany, USA, gberg@albany.edu
- Dr. Oshani Seneviratne, Rensselaer Polytechnic Institute, USA, senevo@rpi.edu
- Dr. Phung Lai, University at Albany, USA, lai@albany.edu
Safe, Ethical, Certified, Uncertainty-aware, Robust, and Explainable AI for Health (SECURE-AI4H)
The SECURE-AI4H Symposium (Safe, Ethical, Certified, Uncertainty-aware, Robust, and Explainable AI for Health) brings together a multidisciplinary community focused on advancing trustworthy AI in healthcare and biomedicine. As AI technologies are increasingly deployed across diagnostics, hospital workflows, robotics, and personalized medicine, their safety, transparency, and real-world certifiability have become critical challenges. This symposium aims to foster cross-sector dialogue among AI researchers, clinicians, ethicists, engineers, cybersecurity experts, and policymakers to develop AI systems that are not only high-performing but also certifiable, interpretable, privacy-preserving, and resilient to adversarial threats.
Topics:
Areas of interest include (but are not limited to):
- certification and post-market surveillance
- uncertainty quantification
- adversarial robustness
- explainability and clinician trust
- privacy-preserving AI (federated learning, differential privacy)
- fairness and bias mitigation
- secure data governance
- LLM safety in clinical applications
- integration with telehealth, AR/VR, and robotics
- rural health innovation using lightweight AI
Format of Symposium:
SECURE-AI4H will feature a highly interactive and engaging program spanning the full 2.5-day schedule. The format includes keynote talks by global leaders in AI for health and regulation, peer-reviewed papers and poster presentations, and interactive panels.
Attendance:
The symposium welcomes researchers, practitioners, clinicians, and students across academia, industry, government, and healthcare systems. No specific invitation is required.
Submission Requirements:
We invite submissions of full papers (4–8 pages), short papers (2–4 pages), or abstracts (up to 1 page). Submissions should present original research, position papers, system demonstrations, or critical perspectives aligned with the symposium theme.
- Submission Deadline: August 12th
- Decision Notification: August 22nd
- Camera Ready Deadline: September 4th
Submissions should be made through EasyChair: https://easychair.org/conferences?conf=fss25
Submission Site Information:
Submission portal details will be updated at: https://idsl-group.github.io/secure-ai4h
Contact Email: secureai4h@gmail.com
Symposium Committee:
- Apurva Narayan (Western University)
- Hong Qin (Old Dominion University)
- Elham Dolatabadi (York University)
- Rishi Ganesan (Lawson Health Research Institute)
- Yalda Mohsenzadeh (Western University)
- Letu Qingge (North Carolina A&T)
- Laleh Seyyed-Kalantari (York University)
- Ritambhara Singh (Brown University)
- Amol Verma (University of Toronto)
Symposium External URL:
https://idsl-group.github.io/secure-ai4h
Unifying Representations for Robot Application Development
Capturing a desired task or interaction as a computational artifact (i.e., a representation) has long played a pivotal role in robotics. Many robotic subfields have traditionally employed a variety of different representational techniques, such as linear temporal logic (LTL), planning languages, social representations, natural language, and many more. These representations, however, lack cohesion in when and how they are applied.
The 3rd Symposium on Unifying Representations for Robot Application Development (UR-RAD) aims to increase representational cohesion between AI and robotics researchers. This year’s symposium additionally aims to increase interaction between junior and senior researchers, support in-progress research, and cultivate collaboration.
Topics
- Representational trends
- Natural language in robotics
- Novel representations & uses
- AI planning for robotics
- Formal methods in robotics
- Representations for robot learning
- UI representations
- EUD and programming representations
- Robot runtime/control environments
- Standardization opportunities
- Frameworks (e.g., ROS or middleware)
- Open-source & collaboration initiatives
- Identifying representation requirements
Format
The symposium will feature invited speakers, paper presentations, panels, and breakout discussions. To encourage interaction between junior and senior researchers, each paper accepted to UR-RAD 2025 (regardless of archival status or contribution type) will be paired with an expert in the field as a "mentor." Paper authors who opt in, along with junior members of the community, will have the opportunity for extensive interaction with mentors, who will guide discussions about individual papers.
Submission Requirements
We invite the following contributions, formatted using the AAAI-25 author kit:
- Full Papers (4-8 pages): for novel research, artifact submissions, or strong works in progress.
- Short Papers (2-4 pages): for positions, smaller artifacts, and early work in progress.
- Abstracts (1 page): for sharing ideas (non-archival only).
“Preferred” submission round, with the option to be included in the AAAI proceedings:
- Submission deadline: August 7th
- Camera Ready Deadline: August 29th
“Late” submission round, without the option to archive in the AAAI proceedings:
- 100-200 word abstract submission encouraged by: August 7
- Submission deadline: August 22
- Camera Ready Deadline: September 5
Regardless of archival plans, authors are encouraged to submit earlier rather than later; if a large number of submissions are received, papers submitted by August 7th will receive preference.
Submissions should be made through EasyChair: https://easychair.org/my/conference?conf=fss25
Organizing Committee
- Ruchen Wen (Co-Chair, Colgate University)
- David Porfirio (Co-Chair, Naval Research Laboratory)
- Saad Elbeleidy (Peerbots)
- Laura M. Hiatt (Naval Research Laboratory)
- Ross Mead (Semio)
- Andrew Schoen (Semio)
- Willie Wilson (Franklin & Marshall College)
- Laura Stegner (George Washington University)
Website and Contact Information
Website: ur-rad.github.io
Contact information: urrad.symposium@gmail.com
