A game driven by emotional speech: The swimmer's game

Back to Building emotion-oriented systems

The third example system is a simple game application in which the user must use emotional speech to win the game. The game scenario is as follows. A swimmer is being pulled backwards by the stream towards a waterfall. The user can help the swimmer to move forward towards the river bank by cheering him up through high-arousal speech. Low arousal, on the other hand, discourages the swimmer and drives him more quickly to the waterfall.

The system requires the same openSMILE components as the Emotion mirror system; a component that computes the swimmer's position as time passes, taking the user's input into account; and a rendering component for the user interface. Furthermore, we will illustrate the use of TTS output in the SEMAINE API by implementing a commentator that provides input to the speech synthesis component of the SEMAINE system.

The PositionComputer combines a react() and an act() method. Messages are received via an EMMA receiver and lead to a change in the internal parameter position (l. 23): arousal values above 0.4 push the swimmer towards the river bank, lower values set him back. The act() method implements the backward drift (l. 32) and sends regular position updates (l. 33) as plain-text messages.

 1 public class PositionComputer extends Component {
 2   private Sender positionSender =
         new Sender("semaine.data.swimmer.position", "TEXT", getName());
 3   private float position = 50;
 4 
 5   public PositionComputer() throws JMSException {
 6     super("PositionComputer");
 7     receivers.add(
          new EmmaReceiver("semaine.data.state.user.emma.emotion.voice"));
 8     senders.add(positionSender);
 9   }
10 
11   @Override protected void react(SEMAINEMessage m)
                 throws MessageFormatException {
12     SEMAINEEmmaMessage emmaMessage = (SEMAINEEmmaMessage) m;
13     Element interpretation = emmaMessage.getTopLevelInterpretation();
14     if (interpretation == null) return;
15     List<Element> emotionElements =
                 emmaMessage.getEmotionElements(interpretation);
16 
17     for (Element emotion : emotionElements) {
18       List<Element> dimensions = XMLTool.getChildrenByLocalNameNS(
                 emotion, EmotionML.E_DIMENSION, EmotionML.namespaceURI);
19       for (Element dim : dimensions) {
20         if (dim.getAttribute(EmotionML.A_NAME).equals(
                 EmotionML.VOC_FSRE_DIMENSION_AROUSAL)) {
21           float arousalValue = Float.parseFloat(
                 dim.getAttribute(EmotionML.A_VALUE));
22           // Arousal influences the swimmer's position:
23           position += 10*(arousalValue-0.4f);
24           break;
25         }
26       }
27     }
28   }
29 
30   @Override protected void act() throws JMSException {
31     // The river slowly pulls back the swimmer:
32     position -= 0.1;
33     positionSender.sendTextMessage(String.valueOf(position),
                                      meta.getTime());
34   }
35 }
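
For reference, the messages arriving on the Topic semaine.data.state.user.emma.emotion.voice are EMMA documents carrying EmotionML markup. The following hand-written sketch only illustrates the structure that react() navigates; the actual messages produced by the speech analysis components contain additional metadata, and the exact namespaces and attributes depend on the SEMAINE and EmotionML versions used.

  <emma:emma version="1.0" xmlns:emma="http://www.w3.org/2003/04/emma">
    <emma:interpretation>
      <emotion xmlns="http://www.w3.org/2009/10/emotionml"
               dimension-set="http://www.w3.org/TR/emotion-voc/xml#fsre-dimensions">
        <dimension name="arousal" value="0.7"/>
      </emotion>
    </emma:interpretation>
  </emma:emma>

With the mapping on l. 23, an arousal of 0.7 would move the swimmer three units towards the river bank, whereas values below 0.4 set him back.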

The SwimmerDisplay implements the user interface shown above. Its messaging part consists of a simple text-based Receiver (l. 5) and an interpretation of the text messages as single float values (l. 10).

 1 public class SwimmerDisplay extends Component {
 2 
 3   public SwimmerDisplay() throws JMSException {
 4     super("SwimmerDisplay", false, true/*is output*/);
 5     receivers.add(new Receiver("semaine.data.swimmer.position"));
 6     setupGUI();
 7   }
 8 
 9   @Override protected void react(SEMAINEMessage m) throws JMSException {
10     float percent = Float.parseFloat(m.getText());
11     updateSwimmerPosition(percent);
12     String message = percent <= 0 ? "You lost!" : percent >= 100 ? "You won!!!" : null;
13     if (message != null) {
         ...
       }
     }
     ...
   }
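
The GUI code is omitted above because it is independent of the SEMAINE messaging logic. Purely as an illustration — the real example draws a graphical river scene, and everything below apart from the names setupGUI() and updateSwimmerPosition() is made up for this sketch — the elided parts could be filled in with a few Swing calls inside SwimmerDisplay:

   // Illustrative sketch only; assumes "import javax.swing.*;" at the top of the file.
   private JFrame frame;
   private JProgressBar riverBar; // 0 = waterfall, 100 = river bank

   private void setupGUI() {
     frame = new JFrame("Swimmer's game");
     riverBar = new JProgressBar(0, 100);
     riverBar.setValue(50); // the swimmer starts in the middle of the river
     frame.getContentPane().add(riverBar);
     frame.pack();
     frame.setVisible(true);
   }

   private void updateSwimmerPosition(float percent) {
     final int value = Math.max(0, Math.min(100, Math.round(percent)));
     // Swing components must be updated on the event dispatch thread:
     SwingUtilities.invokeLater(new Runnable() {
       public void run() { riverBar.setValue(value); }
     });
   }

In the same spirit, the if-branch at l. 13 could simply show the "You won" or "You lost" message, e.g. via JOptionPane.showMessageDialog(frame, message).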

Due to the separation of position computer and swimmer display, it is now very simple to add a Commentator component that generates comments using synthetic speech as a function of the swimmer's current position. It subscribes to the same Topic as the SwimmerDisplay (l. 7) and sends BML output (l. 2) to the Topic serving as input to the speech synthesis component of the SEMAINE system. Speech output is produced when the game starts (l. 18-20) and when the position meets certain criteria (l. 13-14). Generating the speech output consists of creating a simple BML document with a <speech> tag enclosing the text to be spoken (l. 25-28) and sending that document (l. 29).

 1 public class Commentator extends Component {
 2   private BMLSender bmlSender = new BMLSender("semaine.data.synthesis.plan", getName());
 3   private boolean started = false;
 4 
 5   public Commentator() throws JMSException {
 6     super("Commentator");
 7     receivers.add(new Receiver("semaine.data.swimmer.position"));
 8     senders.add(bmlSender);
 9   }
10 
11   @Override protected void react(SEMAINEMessage m) throws JMSException {
12     float percent = Float.valueOf(m.getText());
13     if (percent < 30 /*danger*/) say("Your swimmer needs help!");
14     else if (percent > 70 /*nearly there*/) say("Just a little more.");
15   }
16 
17   @Override protected void act() throws JMSException {
18     if (!started) {
19       started = true;
20       say("The swimmer needs your support to reach the river bank. Cheer him up!");
21     }
22   }
23 
24   private void say(String text) throws JMSException {
25     Document bml = XMLTool.newDocument(BML.ROOT_TAGNAME, BML.namespaceURI);
26     Element speech = XMLTool.appendChildElement(bml.getDocumentElement(), BML.E_SPEECH);
27     speech.setAttribute("language", "en-US");
28     speech.setTextContent(text);
29     bmlSender.sendXML(bml, meta.getTime());
30   }
31 }
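
For a comment such as "Just a little more.", the say() method assembles a BML document of roughly the following shape; the concrete namespace URI is taken from the BML constants of the SEMAINE API and is reproduced here only as an illustration:

  <bml xmlns="http://www.mindmakers.org/projects/BML">
    <speech language="en-US">Just a little more.</speech>
  </bml>

The document is sent to the Topic semaine.data.synthesis.plan, where the SpeechBMLRealiser turns it into audio for the SemaineAudioPlayer to play back.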

The complete system consists of the Java components SystemManager, PositionComputer, SwimmerDisplay, Commentator, SpeechBMLRealiser and SemaineAudioPlayer, as well as the external C++ component openSMILE. The resulting message flow graph is shown in the following figure.
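
To start the Java side of the system, the SEMAINE API expects the components to be listed in a configuration file that is passed to its component runner. The snippet below is only a rough sketch — the exact property syntax, package and class names, and the openSMILE start-up procedure should be taken from the SEMAINE distribution rather than from here:

  semaine.components = \
      |eu.semaine.components.meta.SystemManager| \
      |eu.semaine.examples.swimmer.PositionComputer| \
      |eu.semaine.examples.swimmer.SwimmerDisplay| \
      |eu.semaine.examples.swimmer.Commentator| \
      |eu.semaine.components.mary.SpeechBMLRealiser| \
      |eu.semaine.components.mary.SemaineAudioPlayer|

openSMILE runs as a separate C++ process and communicates with the Java components over the same JMS Topics.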

Back to Building emotion-oriented systems
