The GIVE Challenge has led to the collection of a number of corpora of instruction giving in virtual environments. In each of these corpora, an instruction giver (IG) guides an instruction follower (IF) through a virtual environment. The corpora consist of time-stamped logs of all written instructions the IG sent to the IF and of all actions the IF carried out in the environment (such as pushing buttons). In addition, the IF's position and orientation, together with the set of objects in their field of view, were logged every 200 msec.
In one corpus, both the IG and IF are human. In the other three corpora, human IFs are interacting with generation systems that participated in the GIVE Challenge.
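To give a feel for the kind of data these logs contain, here is a minimal sketch of a record for one 200-msec state snapshot. The field names and the tab-separated line format are illustrative assumptions, not the actual GIVE log format:

```python
from dataclasses import dataclass

# Hypothetical layout of one 200 msec snapshot; the real GIVE log
# format differs — this only illustrates what kind of data is logged.
@dataclass
class StateSnapshot:
    timestamp_ms: int           # time since the start of the interaction
    x: float                    # IF position in the world (assumed 2D here)
    y: float
    orientation_deg: float      # IF heading
    visible_objects: list       # names of objects in the IF's field of view

def parse_snapshot(line):
    """Parse one tab-separated snapshot line (illustrative format only)."""
    ts, x, y, ori, objs = line.split("\t")
    return StateSnapshot(int(ts), float(x), float(y), float(ori),
                         objs.split(",") if objs else [])

snap = parse_snapshot("1200\t3.5\t7.0\t90.0\tbutton1,lamp2")
```

The instruction and action events would be similar time-stamped records, so a full interaction can be replayed by merging all event streams in timestamp order.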
- GIVE-1 system-generated corpus: system-human interactions from the first GIVE evaluation. In this first edition, movement in the worlds was discrete: the IF moved by hopping from one tile to the next and turned in 90-degree increments. The corpus consists of over 1100 interactions.
- GIVE-2 system-generated corpus: system-human interactions from the second GIVE evaluation. In this edition of GIVE, movement in the worlds is continuous. The corpus consists of over 1800 interactions.
- GIVE-2.5 system-generated corpus: system-human interactions from the third GIVE evaluation. The corpus consists of over 650 interactions.
- GIVE-2 human-generated corpus: a corpus of human-human interactions collected using the GIVE-2 infrastructure. The corpus consists of 45 German and 63 American English interactions.
If you are interested in any of these corpora, please contact us.
For more information on the corpora and the GIVE Challenge evaluations, please see the publications page. To get an idea of what the data are like, you can use the replay tool provided on the GIVE-2 corpus page, which lets you view and replay the human-human interactions from the corpus online.