Musical Scores for Virtual Reality Headsets: Compositional Strategies and Performative Challenges
Digital technologies are transforming the way in which composers conceive and efficiently communicate their creative ideas to performers. In this context, the development of virtual reality devices is shaping new compositional and performative affordances. This article focuses on the use of virtual reality headsets for reading digital scores in multimedia contexts through two case studies. First, the author explains the practical reasons that led the composers to incorporate these devices and the technical solutions for their exploitation. Second, by means of surveys of performers who have taken part in these case studies, some challenging features of multimodal interaction are flagged. A final discussion opens the door to future collaboration between musicians and the scholarly community.
Virtual reality (VR) currently permeates almost any humanly devisable field of activity in which digital technologies are involved. Artists have, of course, taken advantage of these advancements to enhance the audiovisual potential of their creative work; this explains, for instance, a conspicuously increasing interest among filmmakers in learning VR techniques. The incorporation of VR practices within the live performing arts has been slower than in the field of motion pictures, but some historical roots can be traced to the 1980s [1]. More recent overviews of this topic have emerged from theater studies [2] and dance studies [3], but, among the live performing arts, music appears to have attracted the widest scholarly attention to the influence of VR practices [4].
Networked musical performances using VR headsets stand as a relevant topic from these perspectives [5]. My article approaches the use of these devices for reading scores in multimedia contexts through two case studies. The first section presents the ways in which Franco-British composer and researcher Jonathan Bell (b. Aix-en-Provence, 1982) and Spanish composer-performer Óscar Escudero (b. Alcázar de San Juan, 1992) have conceived digital scores to be read through head-mounted displays, focusing on one specific work by each author in which the performers' bodily movements are artistically relevant [6]. The second section adopts the position of the performers of these pieces in order to grasp the challenges of the resulting staged conditions when compared with more standard contexts for reading a musical score. Finally, I suggest avenues for potential empirical research with the help of these composers.
Conceiving Music for VR Headsets: Composer's Perspective
The use of VR headsets for reading musical notation is a particular case among digital scores, i.e., encoded musical interfaces that "benefit from the usability and functionality of dynamic technological environments at some level and are responsive, evolving as the performance progresses and operating on a level of interactivity more in common with gaming and immersive new-media art" [7]. VR devices can be particularly relevant for understanding agency in the context of digital scores [8], as Bell's and Escudero's compositional practices reveal.
Bell's development of the SmartVox web application during his residency at IRCAM from 2014 to 2016 [9] anticipated his subsequent attention to VR headsets. This application was aimed at delivering audiovisual scores via smartphones, allowing musicians to spread apart on stage while remaining performatively cohesive. Bell's progressive technological steps toward head-mounted displays [10,11] reflect a concern for the portability of the devices required for musical reading and for the musicians' bodily ease during performance. His audiovisual scores resort to a proportional representation of common-practice notation, as his main concern with VR is not graphic novelty but performative portability and comfort. Echoing the concept of phygital play (physical + digital) [12], Bell explains his preference for double-mirror open head-mounted displays for mixed reality (MR) by referring to his performers' feedback, since "the environment is still visible
Fig. 1. Screenshot of the visual score for the first voice of Common Ground (1:15), featuring the spatial chart below. (© Jonathan Bell)
around the score," which allows the user "to move freely on stage or elsewhere" [13]. Bell has so far composed the following pieces in which these technologies are needed for at least one person on stage: In Memoriam J.-C. Risset (2018) for flute, clarinet, violin, and cello; Mit allen Augen… (2019) for voices and instrumental ensemble; Deliciae (2019–2020) for vocal octet; Common Ground (2019–2020) for female vocal sextet; Fantaisie (2021) for a cellist and an electronic performer; Machine à sons (2021) for vocal trio and accordion; and Conductor 2.0 (2022) for nine musicians and "augmented" conductor. All these pieces also incorporate electronic sounds.
Common Ground was realized at the request of Spanish visual artist Keke Vilabelda for his eponymous immersive installation in Melbourne (Color Image C). This collaboration emerged while Vilabelda and Bell were both recipients of a residency grant from Casa de Velázquez in Madrid, supported by the French Academy. During the performance of Common Ground [14], the singers intoned verses by Bell's father while apparently wandering freely around the installation space. In fact, the singers' choreographic movements were carefully planned beforehand: Each performer received through the headset (Fig. 1) her audiovisual score part and a chart of the installation space indicating the physical position that every singer should occupy over time. The audiovisual part consists of a proportional score (in terms of rhythmic durations) with a cursor line that is synchronized with its synthetic aural reproduction; it is therefore rendered by the headset as a score-follower video [15]. In addition, some notations in blue indicate the bodily actions conceived by the composer. In line with the previous remarks on portability and comfort, Bell opted for MR devices, as they allow the singers to partially see the surrounding environment and assess the relative position of their peers. In his own words, this choice was aimed at "leaving free the lower field of view, which was appreciated as the performers need to move—sometimes rapidly—in the performance space" [16].
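Bell's publications on SmartVox [9,10] document the overall approach rather than a reference implementation, but the score-follower principle just described can be illustrated with a minimal browser sketch in TypeScript, offered here only as an assumption-laden approximation: the audio clock of the synthetic guide rendering drives the horizontal position of the cursor over the proportional score, so that image and sound cannot drift apart. The element identifiers and the pixels-per-second constant are hypothetical placeholders, not part of SmartVox.

// Minimal illustrative sketch (not Bell's SmartVox code): the playback clock of a
// synthetic guide track drives a cursor over a proportional score rendered in a web page.
const audio = document.querySelector<HTMLAudioElement>("#guide-track")!; // hypothetical audio element
const cursor = document.querySelector<HTMLDivElement>("#cursor")!;       // hypothetical playhead line
const PIXELS_PER_SECOND = 40; // proportional notation: duration maps linearly to width (assumed value)

function drawCursor(): void {
  // audio.currentTime is the single source of truth, so the visuals never drift from the sound.
  cursor.style.transform = `translateX(${audio.currentTime * PIXELS_PER_SECOND}px)`;
  if (!audio.paused) requestAnimationFrame(drawCursor);
}

audio.addEventListener("play", () => requestAnimationFrame(drawCursor));

In an actual networked setting, the same clock value would additionally have to be shared across the performers' devices; the sketch deliberately leaves that aspect aside.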
Escudero has not developed ad hoc software like Bell's SmartVox, but his interests in multimedia practices and social media have recently garnered scholarly attention [17–20]. He decided to make use of VR headsets when composing his Flat Time Trilogy, a cycle involving electronic sounds and video projection. Each piece of the trilogy is devised for a different player: POV (2017) for a saxophonist; OST (2018) for a performer with no musical instrument; and HOC (2018) for a percussionist. OST is co-signed with Belenish Moreno-Gil, with whom Escudero has developed further collaborative multimedia projects [21].
The performance of POV [22] requires a multichannel synchronization of the video projection, the electronic sounds, and the inputs the saxophonist receives through the VR headset (the video score and a click track) in order to efficiently simulate real-time interaction with the Internet on stage. As with Bell, Escudero opted for a common-practice-based notation for his score but, unlike his peer, Escudero chose a model of VR headset that almost completely precludes visual access to the surrounding physical environment. The saxophonist must remain in a fixed position during the whole performance while the video projection fills the stage space around the performer (Fig. 2). Without a closed device, the musician would often be bathed in flashing lights and moving images while trying to play music, whether reading the score or performing it by heart. Although it is unclear whether a high perceptual load dramatically affects performative attention [23,24], Escudero's choice of a VR device for reading the score is clearly aimed at
Fig. 2. Pedro Pablo Cámara performing POV at ZKM (Karlsruhe), 24 November 2018. (© ZKM | Center for Art and Media. Photo: Felix Grünschloss.)
visually isolating the saxophonist, which potentially avoids the risk of perceptual distraction by other elements of the multimedia piece. The musician's sight in POV is fully focused on a video score that, in addition to its relatively conventional musical notation, also incorporates arrows and textual information indicating precise bodily movements during performance (Fig. 3). All these actions must be executed from the waist up, often relatively fast, while the legs remain as motionless as possible. Unlike Bell's choice of a score-follower video, Escudero split his score into many short excerpts that are periodically refreshed, so that no more than two excerpts in independent staves are ever screened simultaneously; the video is doubled in order to provide a complete image for each eye (Fig. 4).
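The published materials around POV do not specify how this per-eye doubling is produced; purely as an illustration under that caveat, the following TypeScript sketch draws the current score excerpt twice, side by side, so that a simple headset shows an identical image to each eye (a biocular rather than truly stereoscopic view). The element identifiers are hypothetical placeholders.

// Minimal illustrative sketch (not Escudero's production setup): the current score-excerpt
// video is copied into both halves of a canvas, yielding one identical image per eye.
const video = document.querySelector<HTMLVideoElement>("#score-excerpt")!;  // hypothetical excerpt video
const canvas = document.querySelector<HTMLCanvasElement>("#headset-view")!; // hypothetical output canvas
const ctx = canvas.getContext("2d")!;

function renderFrame(): void {
  const half = canvas.width / 2;
  // The full excerpt frame is scaled into each half of the canvas.
  ctx.drawImage(video, 0, 0, half, canvas.height);    // left-eye image
  ctx.drawImage(video, half, 0, half, canvas.height); // right-eye image
  if (!video.paused && !video.ended) requestAnimationFrame(renderFrame);
}

video.addEventListener("play", () => requestAnimationFrame(renderFrame));

A production pipeline would more likely pre-render the doubled video offline and synchronize it with the click track and the electronic sounds from a common start cue; the sketch only shows the geometric idea of the doubling.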
Fig. 3. POV score (PDF version), page 10. (© BELOS Editions)
Fig. 4. Screenshot of the POV score (video version), measures 198–200 (6:21), matching part of Fig. 3. (© BELOS Editions)
Playing Music with VR Headsets: Performers' Perspective
Despite their different run times (Common Ground is about twice as long as POV), the choice of these case studies is based on some similarities and specific differences, summarized below, from a performative viewpoint. In both artworks, performers are asked to read a digital score through egocentric displays [25]; however, the device for Common Ground is in fact an MR device, as the singers retain a partial view of the stage space, whereas POV resorts to a fully closed VR headset. In addition, performers in both cases receive precise information for carrying out spatial movements, but with very different purposes: The singers in Common Ground wander together around the installation, while the saxophonist in POV remains alone in a fixed stage location.
On 18 December 2018, I conducted some ethnographic observation in Madrid of the rehearsals for a monographic concert of Escudero's multimedia pieces in which the Flat Time Trilogy was programmed. During a rehearsal break, I asked to try one of the VR headsets myself to better grasp the performers' perspective: After a few minutes, I started to feel unusual sensations and wondered to what extent they were caused by the particular perceptive and proprioceptive conditions imposed by the device I was wearing. This personal experience led me to contact the performers of Common Ground and POV to try to understand their interaction with VR technologies by means of a survey on this topic. In the case of Common Ground, all the singers provided me with survey responses: Judith Dodsworth, Sarah Maher, Natalie Grimmett, Angela Edgley, Alice Cleveley, and Rachael Joyce. In the case of POV, ten saxophonists fulfilled my request: Pedro Pablo Cámara, Henrique Portovedo, Joan Jordi Oliver, Alicia Camiña Ginés, Arno Roesems, Xelo Giner, Krzysztof Guńka, Juan José Faccio, Miguel Fernández de la Fuente, and Manuel Teles.
Beforehand, and guided by the main challenges related to user experience with VR [26], I reviewed the literature with an eye toward a meaningful plan for the survey. Informed by my own experience, I flagged three main issues: bodily challenges related to spatial orientation and balance, difficulties in adapting the sense of vision to the performative context, and a potential feeling of isolation on stage. Although the topic of isolation with VR technologies has mostly been considered by scholars from a societal perspective [27], there is evidence pointing to the capacity of VR to lower the feeling of presence [28] and to enhance a sense of fear in digital environments [29]. Both aspects are particularly sensitive in the context of public artistic performance, as they may lead to disconnection from a live audience. With regard to physiological issues, the rise of visual fatigue with VR devices has been quantitatively measured [30,31], and the importance of peripheral vision, which is mostly unconscious [32], has also been scrutinized in the context of VR practices. The lack of peripheral vision may lead to body imbalance; in particular, some scholars recommend peripheral simulation [33] or self-motion illusion [34] as strategies to minimize discomfort. The impact of VR practices on proprioception has been noted since the late 1990s, as it may affect spatial awareness and orientation [35,36]; more recent research is helping to quantitatively measure its effect in terms of physical balance [37,38].
Only one of the singers of Common Ground reported previous experience with VR headsets, and that was in the context of gaming, not musical performance. Two singers chiefly highlighted that the deprivation of ordinary sight led them to rely more on their aural perception to grasp spatial information, mainly for managing their own physical location and for avoiding collisions with their peers. Although they needed time to adapt to prioritizing aural feedback for managing performative motion, the schematic chart below the digital score helped them build confidence in their actions. For this purpose, they first rehearsed by holding their phones in their hands and later added the VR headset. In contrast to this potential aural focus, three singers stressed that they felt quite isolated during performance, as the earpiece provided synthetic guide tones while they were also deprived of normal sight. They highlighted that the quest for a balance of dynamics and for a sort of "collective energy" was particularly challenging in this situation. Two singers also remarked that the costumes for Common Ground, designed by visual artist Leticia Martínez Pérez, affected their hearing capacities and peripheral vision because of their thick material and unconventional head covering. In addition, two singers reported that it was not easy to adapt to the VR headset's weight, which affected the head and neck posture required for proper voice emission. All singers who pointed to isolation or to a strong sense of deprivation also felt more unsure of their spatial orientation; one of them even stressed that her balance was sometimes compromised. These kinds of sensations were often exacerbated by their wandering. Among the bodily strategies for mitigating these sensations, they mentioned very conscious foot movements to feel grounded on stage and lifting the chin to expand peripheral vision. Beyond the context of Common Ground, some singers believe that this experience has helped them become more attentive to their sensorimotor reactions while simultaneously singing and moving. Other singers also acknowledged a deeper reflection upon the balance between individual and collective performance after performing this piece.
Concerning POV, four of the ten surveyed saxophonists reported prior, albeit sporadic, experience with VR glasses in the context of videogames, artistic installations (as visitors), and/or promotional advertising. Half of the surveyed musicians likewise reported feelings of spatial disorientation or instability; two of them even recalled a loss of balance leading to some dizziness. Although all the movements required in the score are from the waist up, several saxophonists acknowledged that the lack of visual references led them to become unconsciously displaced from the strictly fixed space on stage. To minimize this risk, two of them decided to put inconspicuous marks on the floor that were visually accessible when their headset was not tightly adjusted; another player opted for very consciously avoiding any movement of the feet. Another musician suggested that the click track and the electronic sounds were extremely helpful for managing coordinated body movement. As with Common Ground, several performers (three this time) mentioned feeling isolated, but their descriptions were even more dramatic, as some of them described the feeling using terms such as "scary" and "overwhelming." However, the most persistent issue flagged by the saxophonists, explicitly mentioned by six of them, concerns ocular movement while reading the digital score of POV. Saxophonists who mentioned sight issues, mainly visual stress and extraocular muscle fatigue, highlighted several plausible, and not necessarily unrelated, causes: the extremely short distance between the eyes and the screen, the automated and relatively fast movement of the video score, blocked peripheral vision, and the upward fixation of useful information impeding sight relaxation in the center of the screen. One player even reported some cognitive trouble in mentally fusing the images provided to each eye when starting to study POV. Two musicians decided to learn the whole piece by heart with a conventional version of the score and without the VR device (one of them also mentioned playing it with closed eyes) in order to depend less on vision during the actual performance. Finally, four saxophonists flagged heat, both environmental and coming from the VR device, as an issue: Two of them sometimes finished with their screens fogged up. After having played Escudero's multimedia piece, four musicians acknowledged positive effects on the development of their careers, mostly in terms of improved body movement and spatial awareness.
Discussion and Potential Prospects
Through the performers' surveys on both Common Ground and POV, the three main issues I foresaw (spatial orientation and balance, visual comfort, and felt isolation) were all detected, with different degrees of impact in each case study. The headset model, the presence or absence of interaction with other performers, and the particular motions requested in each multimedia piece seem to explain the differences. Beyond the factors strictly linked to the VR headset, and concerning the case of POV, I believe that the restriction of the performer's stage movement inhibited more bodily responses than expected. We know, for instance, that knee flexions are distinctive, although ancillary, bodily actions for saxophonists embodying musical expression [39]; these were inhibited in POV.
In spite of the above-mentioned challenges, it is also true that a significant number of performers of both Common Ground and POV acknowledged that these experiences helped them to grow as musicians in terms of sensorimotor control and spatial awareness. These aspects are relevant in the context of musical education and training. Research on the educational impact of VR started to provide results at the end of the previous century [40]; today, large numbers of case studies have allowed a systematic review of the topic [41]—particularly within the context of VR headsets [42]. In the case of music education and training, and specifically musical performance, there is also a growing number of studies in recent years: Consider, for instance, a sample related to the content of this article, i.e., singing [43–45] and reed woodwinds [46,47].
I hope that case studies like Common Ground and POV may stimulate collaborative exchanges among composers, performers, music educators, and cognitive scientists, leading to significant and systematic improvements in the teaching and development of musical performance with the aid of VR devices. In spite of the above-mentioned risk of a sense of isolation, and sometimes fear, during performance with VR headsets, current research has shown that an immersive use of VR in the context of music education and training may significantly help to reduce stress and anxiety [48–52]. A multidisciplinary collaboration in service of the optimal conditions for reading musical scores through VR devices would therefore have a very positive impact beyond the strict boundaries of new creation.
Complutense University of Madrid, Musicology Department, Facultad de Geografía e Historia, Edif. B, c/Prof. Aranguren s/n, 28040 Madrid, Spain.
Email: jlbesada@ucm.es.

Following two postdoc appointments at IRCAM and at Strasbourg University, José L. Besada currently works at the Complutense University of Madrid. His research primarily focuses on the formal, cognitive, and technological features of both contemporary musical practices and music theory. He also conducts two weekly broadcasts on contemporary music for Spanish public radio.
Acknowledgments
I warmly thank the composers and the performers for their help during the preparation of the manuscript. This work is funded by the Ramón y Cajal program of the Research State Agency of the Spanish Ministry of Science and Innovation and the European Social Fund (ref. RYC2020-028670-I).
References and Notes
1. S. Dixon, "A History of Virtual Reality in Performance," International Journal of Performance Arts and Digital Media 2, no. 1 (2006): 23–54.
2. S. Pike, "Virtually Relevant: AR/VR and the Theater," Fusion Journal 17 (2020): 120–28.
3. S. Smith, "Dance Performance and Virtual Reality: An Investigation of Current Practice and a Suggested Tool for Analysis," International Journal of Performance Arts and Digital Media 14, no. 2 (2018): 199–214.
4. A large monograph covering a wide range of topics on this issue explains my assertion: S. Whiteley and S. Rambarran, eds., The Oxford Handbook of Music and Virtuality (Oxford University Press, 2016).
5. B. Loveridge, "Networked Music Performance in Virtual Reality: Current Perspectives," Journal of Network Music and Arts 2, no. 1 (2020), commons.library.stonybrook.edu/jonma/vol2/iss1/2.
6. Bell and Escudero are two cases among other authors showing interest in these technologies; see for instance D. Kim-Boyle, "3D Notations and the Immersive Score," Leonardo Music Journal 29 (2019): 39–41. Further examples of the use of VR headsets on stage and beyond score reading have been reported; see for instance A. Schürmer, "The Extensions of Opera: Radio, Internet, and Immersion," Contemporary Music Review 41, no. 4 (2022): 401–13.
7. C. Vear, The Digital Score: Musicianship, Creativity and Innovation (Routledge, 2019), 5.
8. B. A. Miller, "Digital Scores, Algorithmic Agents, and Encoded Ontologies: On the Objects of Musical Composition," in Material Cultures of Musical Notation: New Perspectives of Musical Inscription, ed. F. Schuiling and E. Payne (Routledge, 2022), 155–68.
9. SmartVox application, accessed 8 October 2024, github.com/belljonathan50/SmartVox0.1.
10. J. Bell, "Networked Head-Mounted Displays for Animated Notation and Audio-Scores with SmartVox," in Proceedings of the International Conference on New Interfaces for Musical Expression NIME'19 (Federal University of Rio Grande do Sul, 2019): 21–24, nime.org/proceedings/2019/nime2019_paper005.pdf.
11. J. Bell, "Improvements in bach 0.8.1, a User's Perspective," in Proceedings of the 17th Sound and Music Computing Conference SMC (University of Turin, 2020): 18–24, hal.science/hal-02773883/document.
12. M. L. Lupetti, G. Piumatti, and F. Rossetto, "Phygital Play. HRI in a New Gaming Scenario," in 7th International Conference on Intelligent Technologies for Interactive Entertainment INTETAIN (University of Turin, 2015): 17–21, eudl.eu/doi/10.4108/icst.intetain.2015.259563.
13. J. Bell and B. Carey, "Animation Notation, Score Distribution and AR-VR Environments for Spectral Mimetic Transfer in Music Composition," in Proceedings of the 5th International Conference on Technologies for Music Notation and Representation TENOR (Monash University, 2019): 11, hal.science/hal-02280057v1.
14. Jonathan Bell, Common Ground, performance recording, 12 min, 6 sec, accessed 8 October 2024, www.youtube.com/watch?v=ZrLgbBw4xfU.
15. Jonathan Bell, Common Ground, voice recording, 8 min, 25 sec, accessed 8 October 2024, www.youtube.com/watch?v=MvZPKYLXj9o. See this part for the first voice.
16. J. Bell and A. Wyatt, "Common Ground, Music and Movement Directed by Raspberry Pi," in Proceedings of the 6th International Conference on Technologies for Music Notation and Representation TENOR (Hochschule für Musik und Theater Hamburg, 2020–2021): 199, www.tenor-conference.org/proceedings/2020/26_Bell_tenor20.pdf.
17. L. Kjellsson, "Rupturas Espaciotemporales y el Sujeto Mutante en España de Manuel Vilas y POV de Óscar Escudero," in Perspectivas Sobre el Futuro de la Narrativa Hispánica: Ensayos y Testimonios, ed. R. Lefere et al. (Publicaciones de la Universidad de Alicante, 2020), 371–93.
18. J. L. Besada, "Cover, Custom, and DIY? Memetic Features in Multimedia Creative Practices," Contemporary Music Review 41, no. 4 (2022): 382–400.
19. F. Planas Pla, "Composing Social Media: The Representation of the Physicality-Virtuality Continuum in Óscar Escudero and Belenish Moreno-Gil's Works," INSAM Journal of Contemporary Music, Art and Technology 8 (2022): 80–103.
20. I. J. Piniella Grillet, "SPAM (An)Archive: Performing under Surveillance," INSAM Journal of Contemporary Music, Art and Technology 9 (2022): 99–121.
21. J. L. Besada, "Empowerment durch multimediale Praktiken: Das Duo Belenish Moreno Gil—Óscar Escudero und der digitale Wandel," Neue Zeitschrift für Musik 3 (2022): 29–32.
22. Óscar Escudero, POV, performance video recording, 9 min, 16 sec, accessed 8 October 2024, www.youtube.com/watch?v=41B1ki9cCDg. The score in the lower space of the video does not align with the performer's visualization in the VR headset.
23. J. D. Cosman and S. P. Vecera, "Attentional Capture under High Perceptual Load," Psychonomic Bulletin and Review 17, no. 6 (2010): 815–20.
24. N. Lavie, D. M. Beck, and N. Konstantinou, "Blinded by the Load: Attention, Awareness and the Role of Perceptual Load," Philosophical Transactions of the Royal Society B 369, no. 1641 (2014), royalsocietypublishing.org/doi/10.1098/rstb.2013.0205.
25. Borrowing the terminology from recent research on musical controllers: L. Atassi, "Allocentric and Egocentric Controllers: Similarities and Differences," Leonardo 55, no. 4 (2022): 394–98.
26. C. Hillmann, UX for XR: User Experience Design and Strategies for Immersive Technologies (Apress, 2021).
27. M. Tretter, M. Hahn, and P. Dabrock, "Towards Smart Glasses Society? Ethical Perspectives on Extended Realities and Augmenting Technologies," Frontiers in Virtual Reality 5 (2024), frontiersin.org/journals/virtual-reality/articles/10.3389/frvir.2024.1404890/full.
28. F. Aardema et al., "Virtual Reality Induces Dissociation and Lowers Sense of Presence in Objective Reality," Cyberpsychology, Behavior, and Social Networking 13, no. 4 (2010): 429–35.
29. G. Lambrakopoulos et al., "Experimental Evaluation of the Impact of Virtual Reality on the Sentiment of Fear," 23rd International Conference on Virtual Systems and Multimedia VSMM (University College Dublin and University of Ulster, 2017): 1–6, ieeexplore.ieee.org/document/8346251.
30. S. H. Lee et al., "Visual Fatigue Induced by Watching Virtual Reality Device and the Effect of Anisometropia," Ergonomics 64, no. 12 (2021): 1522–31.
31. J. Iskander and M. Hossny, "Measuring the Likelihood of VR Visual Fatigue through Ocular Biomechanics," Displays 70 (2021), sciencedirect.com/science/article/abs/pii/S0141938221001098.
32. E. Hitzel, Effects of Peripheral Vision on Eye Movements: A Visual Study of Gaze Allocation in Naturalistic Tasks (Springer, 2015).
33. M. Slater and M. Usoh, "Simulating Peripheral Vision in Immersive Virtual Environments," Computers & Graphics 17, no. 6 (1993): 643–53.
34. G. Bruder, F. Steinicke, and P. Wieland, "Self-Motion Illusions in Immersive Virtual Reality Environments," in Proceedings of the IEEE Virtual Reality Conference (Singapore, 2011): 39–46.
35. N. H. Bakker, P. J. Werkhoven, and P. O. Passenier, "The Effects of Proprioceptive and Visual Feedback on Geographical Orientation in Virtual Environments," Presence: Teleoperators and Virtual Environments 8, no. 1 (1999): 36–53.
36. R. A. Ruddle and P. Péruch, "Effects of Proprioceptive Feedback and Environmental Characteristics on Spatial Learning in Virtual Environments," International Journal of Human-Computer Studies 60, no. 3 (2004): 299–326.
37. H. K. Kim et al., "Virtual Reality Sickness Questionnaire (VRSQ): Motion Sickness Measurement Index in a Virtual Reality Environment," Applied Ergonomics 69 (2018): 66–73.
38. S. H. Park and G. C. Lee, "Full-Immersion Virtual Reality: Adverse Effects Related to Static Balance," Neuroscience Letters 733 (2020), pubmed.ncbi.nlm.nih.gov/32294492/.
39. N. Moura et al., "Knee Flexion of Saxophone Players Anticipates Tonal Context of Music," Nature Partner Journals: Science of Learning 8 (2023), nature.com/articles/s41539-023-00172-z.
40. J. Psotka, "Immersive Training Systems: Virtual Reality and Education and Training," Instructional Science 23, nos. 5–6 (1995): 405–31.
41. H. Hu, G. Liu, and T. Xie, "Multimodal Interaction in Virtual Reality Supported Education: A Systematic Review," Interactive Learning Environments, advance online publication, tandfonline.com/doi/abs/10.1080/10494820.2024.2342993.
42. L. Jensen and F. Konradsen, "A Review of the use of Virtual Reality Head-Mounted Displays in Education and Training," Education and Information Technologies 23 (2018): 1515–29.
43. S. Doganyigit and O. F. Islim, "Virtual Reality in Vocal Training: A Case Study," Music Education Research 23, no. 3 (2021): 391–401.
44. J. Zhang, Q. Cheng, and P. Fu, "The Effect of Virtual Reality (VR) Training on Mastery of the Five Elements of Singing," Journal of Music, Technology and Education 15, no. 2–3 (2022): 165–82.
45. P. Gao, "System Design of Vocal Music Teaching Platform Based on Virtual Reality Technology," in Frontier Computing FC 2022, ed. J. C. Hung, N. Y. Yen, and J. W. Chang (Springer, 2023), 587–94.
46. E. K. Orman, "Effect of Virtual Reality Graded Exposure on Heart Rate and Self-Reported Anxiety Levels of Performing Saxophonists," Journal of Research in Music Education 51, no. 4 (2003): 302–15.
47. H. Gao and F. Li, "The Application of Virtual Reality Technology in the Teaching of Clarinet Music Art under the Mobile Wireless Network Learning Environment," Entertainment Computing 49 (2024), sciencedirect.com/science/article/abs/pii/S1875952123000745.
48. J. Bissonnette et al., "Virtual Reality Exposure Training for Musicians: Its Effect on Performance Anxiety and Quality," Medical Problems of Performing Artists 30, no. 1 (2015): 169–77.
49. J. Bissonnette et al., "Evolution of Music Performance Anxiety and Quality of Performance During Virtual Reality Exposure Training," Virtual Reality 20 (2016): 71–81.
50. M. van Zyl, "The Effects of Virtual Reality on Music Performance Anxiety Among University-Level Music Majors," Visions of Research in Music Education 35, digitalcommons.lib.uconn.edu/vrme/vol35/iss1/15.
51. D. Bellinger et al., "The Application of Virtual Reality Exposure Versus Relaxation Training in Music Performance Anxiety: A Randomized Controlled Study," BMC Psychiatry 23 (2023), bmcpsychiatry.biomedcentral.com/articles/10.1186/s12888-023-05040-z.
52. P. Yang, "Virtual Reality Tools to Support Music Students to Cope with Anxiety and Overcome Stress," Education and Information Technologies 1 (2024), link.springer.com/article/10.1007/s10639-024-12464-x.
COLOR IMAGE C: MUSICAL SCORES FOR VIRTUAL REALITY HEADSETS: COMPOSITIONAL STRATEGIES AND PERFORMATIVE CHALLENGES
Premiere of Common Ground at Grau Projekt art gallery (Melbourne) on 6 February 2020. (© Jonathan Bell. Photo © Simon Strong.) (See the article in this issue by José L. Besada.)