Abstract: In recent years, large-scale vision-language models (VLMs) like CLIP have gained attention for their zero-shot inference using instructional text prompts. While these models excel in general ...
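As context for the zero-shot prompting the abstract describes, here is a minimal sketch of CLIP-style zero-shot classification via text prompts, written against the Hugging Face `transformers` API. The checkpoint name, image path, and class list are illustrative assumptions, not details from the paper.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Public CLIP checkpoint; the paper may use a different backbone.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Build one instructional text prompt per candidate class.
classes = ["cat", "dog", "car"]  # hypothetical label set
prompts = [f"a photo of a {c}" for c in classes]

image = Image.open("example.jpg")  # hypothetical input image
inputs = processor(text=prompts, images=image,
                   return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds image-text similarity scores; a softmax
# over the prompts yields zero-shot class probabilities.
probs = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(classes, probs[0].tolist())))
```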