Overview
Invited Speakers
- The speakers haven’t been finalized yet; stay tuned for updates!
Schedule
- Date: June 18, 2024
- Time: 8:30 AM - 5:30 PM
- Location: Summit 423-425
| Time (PST) | Event |
| --- | --- |
| 09:00 - 09:10 | Opening |
| 09:10 - 09:50 | Talk by Ludwig Schmidt |
| 09:50 - 10:30 | Talk by Ruslan Salakhutdinov |
| 10:30 - 10:50 | Break |
| 10:50 - 11:30 | Talk by Yale Song |
| 11:30 - 12:10 | Talk by Jia Deng |
| 12:10 - 13:30 | Lunch |
| 13:30 - 14:30 | Poster Session |
| 14:30 - 15:10 | Talk by Ani Kembhavi |
| 15:10 - 15:50 | Talk by Ming Lin |
| 15:50 - 16:10 | Break |
| 16:10 - 16:50 | Talk by Yannis Kalantidis |
| 16:50 - 17:05 | Oral Presentation: CinePile: A Long Video Question Answering Dataset and Benchmark |
| 17:05 - 17:20 | Oral Presentation: GenAI-Bench: A Holistic Benchmark for Compositional Text-to-Visual Generation |
| 17:20 - 17:30 | Closing |
Poster Session
Notice: The poster session takes place in a different location from the rest of the workshop.
- Date: June 18, 2024
- Time: 1:30 PM - 2:30 PM
- Location: Arch Building Exhibit Hall, #300 - 349
Awards
Best long paper
CinePile: A Long Video Question Answering Dataset and Benchmark. Ruchit Rawal, Khalid Saifullah, Ronen Basri, David Jacobs, Gowthami Somepalli, Tom Goldstein
Long paper honorable mention
A Benchmark Synthetic Dataset for C-SLAM in Service Environments. Harin Park, Inha Lee, Minje Kim, Hyungyu Park, Kyungdon Joo |
Best short paper
GenAI-Bench: A Holistic Benchmark for Compositional Text-to-Visual Generation. Baiqi Li, Zhiqiu Lin, Deepak Pathak, Jiayao Emily Li, Xide Xia, Graham Neubig, Pengchuan Zhang, Deva Ramanan |
Short paper honorable mention
R3DS: Reality-linked 3D Scenes for Panoramic Scene Understanding. Qirui Wu, Sonia Raychaudhuri, Daniel Ritchie, Manolis Savva, Angel X Chang |
Accepted Papers
Call for Papers
We invite papers on the use of synthetic data for training and evaluating computer vision models. We welcome submissions along two tracks:
- Full papers: Up to 8 pages, not including references/appendix.
- Short papers: Up to 4 pages, not including references/appendix.
Accepted papers will be presented as posters and displayed on the workshop website. In addition, we will offer a Best Long Paper award, a Best Paper Runner-up award, and a Best Short Paper award with an oral presentation.
Topics
Potential topics include, but are not limited to:
- Effectiveness: What is the most effective way to generate and leverage synthetic data? How "realistic" does synthetic data need to be?
- Efficiency and scalability: Can we make synthetic data generation more efficient and scalable without sacrificing quality?
- Benchmark and evaluation: What benchmark and evaluation methods are needed to assess the efficacy of synthetic data for computer vision?
- Risks and ethical considerations: What ethical questions and risks are associated with synthetic data (e.g., bias amplification), and how can we address them?
- Applications: Beyond existing attempts to leverage synthetic data for training visual recognition and vision-language models, what other tasks in computer vision or related fields (e.g., robotics, NLP) could benefit from synthetic data?
- Other open problems: How do we decide which type of data to use, synthetic or real-world? What is the optimal way to combine the two when both are available? How much real-world data do we need (in the long run)?
Submission Instructions
Submissions should be anonymized, formatted using the CVPR 2024 template, and uploaded as a single PDF.
Note that our workshop is non-archival.
Submission link: OpenReview Link
Important workshop dates
- Deadline for submission: March 30th, 11:59 PM Pacific Time (extended from March 15th)
- Notification of acceptance: April 9th, 11:59 PM Pacific Time
- Camera Ready submission deadline: April 24th, 11:59 PM Pacific Time
- Workshop date: June 18th, 2024 (Full day)
Related Workshops
- Machine Learning with Synthetic Data @ CVPR 2022
- Synthetic Data for Autonomous Systems @ CVPR 2023
- Synthetic Data Generation with Generative AI @ NeurIPS 2023