Human Action in 3D Virtual Training Environments: representation, classification and codification

Posted on 7 July 2017

A simplified definition of immersion is the degree of believability and engrossment of a virtual experience, created by various embedded stimuli. Immersion is the user's side of the virtual experience. To observe a user's performance in a 3D immersive environment, on the other hand, observers must be able to describe the performed actions as closely as possible to real-life performance. The Basic Exhibition of Human Actions in Virtual Environments (BEHAVE) taxonomy of human actions in Virtual Training Environments (VTEs) was developed to describe the actions performed in VTEs.

BEHAVE classifies human actions into six classes of 'Functional' actions, while BEHAVE 2.0 adds two further classes for multiplayer environments. Together, these classes cover an extensive variety of actions performed in any 3D virtual environment:

  1. Gestural;
  2. Responsive;
  3. Decisional;
  4. Operative;
  5. Locomotive;
  6. Constructional;
  7. Communicative; and
  8. Collaborative.

Functional Acts are the atomic actions an individual performs. At the highest level of performance sits the 'Goal Act', which indicates the particular goal to be achieved at the end of the performance and is constituted by a number of 'Constitutive Acts'. Each Constitutive Act is a set of Functional Acts that must be performed so that a certain objective is achieved towards fulfilling the Goal Act.
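The three-level hierarchy above can be sketched as a small set of data classes; this is a minimal illustration of the relationships described in the text, not the taxonomy's formal notation, and the sample acts are invented for the example:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class FunctionalAct:
    """An atomic action, e.g. 'turn valve handle' (hypothetical example)."""
    description: str

@dataclass
class ConstitutiveAct:
    """A set of Functional Acts that together achieve one objective."""
    objective: str
    functional_acts: List[FunctionalAct] = field(default_factory=list)

@dataclass
class GoalAct:
    """The top-level goal, fulfilled by its Constitutive Acts."""
    goal: str
    constitutive_acts: List[ConstitutiveAct] = field(default_factory=list)

# Hypothetical example: shutting down a pump in a training scenario
shutdown = GoalAct(
    goal="shut down pump",
    constitutive_acts=[
        ConstitutiveAct(
            objective="close inlet valve",
            functional_acts=[FunctionalAct("walk to valve"),
                             FunctionalAct("turn valve handle")],
        )
    ],
)
```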

To describe a performed action in detail, BEHAVE uses a specific syntax. The BEHAVE syntax structures an action's data in three main parts: the class, type and attributes of the action. Additional information, such as rules and timestamps, is recorded for further analysis. The attribute set used to describe an action includes preposition, object, quantity, unit, property, and location. When the syntax was used in an experiment, more than 80% of participants described it as sufficient to code every action performed in the scenario; the remaining participants suggested minor changes to the order of the attributes. BEHAVE has also been evaluated for internal validity; the results can be found in Fardinpour (2016).
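A coded action along these lines might look as follows; this is a sketch only, not the taxonomy's official encoding. The class names, the three-part structure, the attribute names, and the rule/timestamp fields come from the text above, while the function name and the concrete values are invented for illustration:

```python
# The eight BEHAVE 2.0 classes, as listed above.
BEHAVE_CLASSES = {
    "Gestural", "Responsive", "Decisional", "Operative",
    "Locomotive", "Constructional", "Communicative", "Collaborative",
}

def code_action(action_class, action_type, preposition=None, obj=None,
                quantity=None, unit=None, prop=None, location=None,
                rule=None, timestamp=None):
    """Build one coded action: class, type, and the attribute set,
    plus optional rule and timestamp kept for further analysis."""
    if action_class not in BEHAVE_CLASSES:
        raise ValueError(f"unknown BEHAVE class: {action_class}")
    return {
        "class": action_class,
        "type": action_type,
        "attributes": {
            "preposition": preposition, "object": obj, "quantity": quantity,
            "unit": unit, "property": prop, "location": location,
        },
        "rule": rule,
        "timestamp": timestamp,
    }

# Hypothetical coded action: pouring water in a training scenario
action = code_action(
    "Operative", "pour",
    preposition="into", obj="water", quantity=200, unit="ml",
    location="beaker", timestamp="00:02:31",
)
```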

An evaluated taxonomy of human actions provides a common representation of human action across different fields, which can facilitate communication and broadly applicable outputs. These fields include video action recognition, error detection systems, task analysis, performance analysis, pattern recognition, and human-computer interaction. Currently, BEHAVE 2.0 is being used in the development of the HAVEN virtual reality platform for action-based learning and assessment.

Fardinpour, A. (2016). Taxonomy of Human Actions for Action-based Learning Assessment in Virtual Training Environments (Doctoral dissertation, Curtin University).
