Visual Prompting is a technique for teaching models to perform a visual task via in-context examples, without any additional training. In this work, we analyze the activations of MAE-VQGAN, a recent Visual Prompting model, and find task vectors, activations that encode task-specific information. Equipped with this insight, we demonstrate that it is possible to identify the task vectors and use them to guide the network towards performing different tasks without providing any input-output examples. To find task vectors, we compute the average intermediate activations per task and use the REINFORCE algorithm to search for the subset of task vectors. The resulting task vectors guide the model towards performing a task better than the original model, without the need for input-output examples.
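The recipe has two concrete steps: average the intermediate activations over prompted examples of a task, then run a REINFORCE search over which of those averaged activations to patch into the model at inference time. The sketch below illustrates that recipe in PyTorch under heavy simplification; the tiny stand-in model, the per-layer (rather than per-head or per-position) search space, the placeholder reward function, and all tensor shapes are assumptions for illustration only, not the MAE-VQGAN setup used in this work.

```python
# Minimal sketch of the task-vector recipe. NOT the authors' implementation:
# the model, search space, and reward are placeholders.
import torch
import torch.nn as nn

torch.manual_seed(0)
dim, depth = 16, 4

class TinyBlock(nn.Module):
    # Stand-in for one intermediate layer of the real model (assumption).
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
    def forward(self, x):
        return torch.relu(self.proj(x))

class TinyModel(nn.Module):
    def __init__(self, dim, depth):
        super().__init__()
        self.blocks = nn.ModuleList(TinyBlock(dim) for _ in range(depth))
        self.head = nn.Linear(dim, dim)
    def forward(self, x, patch=None):
        # patch: dict {layer_idx: activations} to overwrite at that layer
        for i, blk in enumerate(self.blocks):
            x = blk(x)
            if patch is not None and i in patch:
                x = patch[i]
        return self.head(x)

model = TinyModel(dim, depth)

# Step 1: mean intermediate activations over prompted examples of one task.
def mean_activations(model, inputs):
    means = {}
    with torch.no_grad():
        x = inputs
        for i, blk in enumerate(model.blocks):
            x = blk(x)
            means[i] = x.mean(dim=0, keepdim=True)  # one mean vector per layer
    return means

prompted_inputs = torch.randn(32, dim)              # stand-in for in-context examples
task_means = mean_activations(model, prompted_inputs)

# Step 2: REINFORCE search over which layers' mean activations act as task
# vectors. A Bernoulli mask selects layers to patch; the reward is a
# placeholder task score (negative MSE to a fake target).
logits = torch.zeros(depth, requires_grad=True)     # one inclusion logit per layer
opt = torch.optim.Adam([logits], lr=0.1)
query = torch.randn(8, dim)                         # unprompted queries
target = torch.randn(8, dim)                        # placeholder ground truth

for step in range(200):
    probs = torch.sigmoid(logits)
    mask = torch.bernoulli(probs)                   # sample which layers to patch
    patch = {i: task_means[i].expand(query.size(0), -1)
             for i in range(depth) if mask[i] > 0}
    with torch.no_grad():
        reward = -((model(query, patch=patch) - target) ** 2).mean()
    # REINFORCE: scale the log-probability of the sampled mask by its reward.
    log_prob = (mask * torch.log(probs + 1e-8)
                + (1 - mask) * torch.log(1 - probs + 1e-8)).sum()
    loss = -reward * log_prob
    opt.zero_grad()
    loss.backward()
    opt.step()

chosen = [i for i in range(depth) if torch.sigmoid(logits[i]) > 0.5]
print("layers selected as task-vector locations:", chosen)
```

At inference the selected activations are simply patched in (the `patch` argument above), so the model performs the task with no input-output examples in the prompt.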
Dataset preparation:
Our evaluation pipeline is based on Volumetric Aggregation Transformer (VAT). Please follow the dataset preparation steps for the PASCAL-5i dataset in that repository.