muzairkhattak/README.md

Hello! 👋

I'm Muhammad Uzair from Pakistan, currently a PhD student at EPFL, where I am affiliated with the VILAB lab. Previously, I completed my master's degree in Computer Vision at Mohamed Bin Zayed University of Artificial Intelligence (MBZUAI), Abu Dhabi, UAE.

  • 🔭 I'm currently working on multi-modal learning and using language (more recently, LLMs) as a supervision signal for vision tasks.
  • 🌱 I like to play table tennis and football, and to hang out with my friends.
  • 📫 How to reach me: [email protected]

Pinned

  1. mbzuai-oryx/CVRR-Evaluation-Suite

    Official repository of the paper "How Good is my Video LMM? Complex Video Reasoning and Robustness Evaluation Suite for Video-LMMs".

    Python · 42 stars · 3 forks

  2. ProText

    [CVPRW 2024] Official repository of the paper "Learning to Prompt with Text Only Supervision for Vision-Language Models".

    Python · 90 stars · 5 forks

  3. PromptSRC

    [ICCV'23 Main Track, WECIA'23 Oral] Official repository of the paper "Self-regulating Prompts: Foundational Model Adaptation without Forgetting".

    Python · 233 stars · 9 forks

  4. multimodal-prompt-learning

    [CVPR 2023] Official repository of the paper "MaPLe: Multi-modal Prompt Learning".

    Python · 667 stars · 53 forks

  5. ViFi-CLIP

    [CVPR 2023] Official repository of the paper "Fine-tuned CLIP models are efficient video learners".

    Python · 248 stars · 18 forks

  6. TalalWasim/Video-FocalNets

    [ICCV 2023] Official repository of the paper "Video-FocalNets: Spatio-Temporal Focal Modulation for Video Action Recognition".

    Python · 89 stars · 17 forks