Open Research Position: Hallucination Detection & Mitigation in Vision-Language Models (VLMs)
We are looking for motivated students to join our research on hallucination detection and mitigation in Visual Question Answering (VQA) models at RIML Lab.
Project Description
Visual Question Answering (VQA) models generate text-based answers by analyzing an input image and a query. Despite their success, they still suffer from hallucination issues, where responses are incorrect, misleading, or not grounded in the image content.
This research focuses on detecting and mitigating these hallucinations to enhance the reliability and accuracy of VQA models.
Relevant Papers
"Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding"
"CODE: Contrasting Self-generated Description to Combat Hallucination in Large Multi-modal Models"
"Alleviating Hallucinations in Large Vision-Language Models through Hallucination-Induced Optimization"
Must-Have Requirements
- Strong Python programming skills
- Knowledge of deep learning (especially VLMs)
- Hands-on experience with PyTorch
- Ready to start immediately
Workload
Commitment: At least 20 hours per week
Note: Filling out the application form does not guarantee acceptance. Only shortlisted candidates will receive an email notification.
Application Deadline: March 28, 2025
Apply here: Google Form
This position is now closed. Shortlisted candidates were notified by March 30, 2025. Thank you to everyone who applied! Stay tuned for future opportunities.
For inquiries: [email protected]
Telegram: @amirezzati
@RIMLLab
#research_position #ML_research #DeepLearning #VQA