Automatic note generator for Javanese gamelan music accompaniment using deep learning

(1) Arik Kurniawati (Department of Electrical Engineering, Institut Teknologi Sepuluh Nopember, Surabaya, Indonesia)
(2) * Eko Mulyanto Yuniarno (Department of Electrical Engineering and Department of Computer Engineering, Institut Teknologi Sepuluh Nopember, Surabaya, Indonesia)
(3) Yoyon Kusnendar Suprapto (Department of Electrical Engineering and Department of Computer Engineering, Institut Teknologi Sepuluh Nopember, Surabaya, Indonesia)
(4) Aditya Nur Ikhsan Soewidiatmaka (Soewidiatmaka Gamelan, Bandung, Indonesia)
*corresponding author

Abstract


Javanese gamelan is a traditional form of music from Indonesia with a variety of styles and patterns. One of these patterns is the harmony music of the Bonang Barung and Bonang Penerus instruments. When playing gamelan, the resulting patterns can vary with the rhythm or dynamics of the music, which is challenging for novice players unfamiliar with gamelan rules, because the notation system provides only the melodic notes. Unlike in modern music, where harmony notes are often the same for all instruments, harmony music in Javanese gamelan is vital in establishing the character of a song. With technological advances, musical compositions can be generated automatically without human participation, which has become a trend in music-generation research. This study proposes a method to generate accompaniment notes for harmony music using a bidirectional long short-term memory (BiLSTM) network and compares it with recurrent neural network (RNN) and long short-term memory (LSTM) models; all models use numerical notation to represent the musical data, making it easier to learn the variations of harmony music in Javanese gamelan. The method takes over the role of the gamelan composer in completing the notation for all the instruments in a song. To evaluate the generated harmony music, note distance, dynamic time warping (DTW), and cross-correlation techniques were used to measure the distance between the system-generated results and the gamelan composer's creations. In addition, audio features were extracted and used to visualize the audio. The experimental results show that all models achieved better accuracy when using all features of a song, reaching around 90%, than when using only two features (rhythm and melody notes), which reached 65-70%. Furthermore, the BiLSTM model produced musical harmonies more similar to the original music (+93%) than those generated by the LSTM (+92%) and RNN (+90%). The proposed approach can be applied to support Javanese gamelan music performance.
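As a rough illustration of the approach described above, the sketch below shows (1) a BiLSTM that predicts one accompaniment-note token per melody time step from numerical-notation features, and (2) a plain dynamic-time-warping distance for comparing a generated note sequence with a composer's reference. This is a minimal sketch, not the authors' code: the vocabulary size, sequence length, and feature count are illustrative placeholders, and the Keras layer stack is one reasonable reading of a BiLSTM sequence model rather than the paper's exact architecture.

```python
# Minimal sketch (not the paper's released code). VOCAB_SIZE, SEQ_LEN and
# N_FEATURES are illustrative assumptions, not values from the study.
import numpy as np
from tensorflow.keras import layers, models

VOCAB_SIZE = 32   # assumed number of distinct note/rest tokens
SEQ_LEN = 64      # assumed length of each training sequence
N_FEATURES = 4    # e.g., melody note and rhythm, plus other song features

def build_bilstm():
    """BiLSTM that emits one accompaniment-token distribution per time step."""
    inputs = layers.Input(shape=(SEQ_LEN, N_FEATURES))
    x = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(inputs)
    x = layers.Dropout(0.3)(x)
    outputs = layers.TimeDistributed(
        layers.Dense(VOCAB_SIZE, activation="softmax"))(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D note sequences,
    using the absolute token difference as the local cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

The cross-correlation measure mentioned in the abstract can be approximated on the same token sequences with NumPy, e.g., np.correlate(a - a.mean(), b - b.mean(), mode="full"), taking the peak value as the similarity score.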

Keywords


Bidirectional LSTM; Deep Learning; Gamelan music; Javanese melody; Musical harmonic

DOI

https://doi.org/10.26555/ijain.v9i2.1031


This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

___________________________________________________________
International Journal of Advances in Intelligent Informatics
ISSN 2442-6571  (print) | 2548-3161 (online)
Organized by UAD and ASCEE Computer Society
Published by Universitas Ahmad Dahlan
W: http://ijain.org
E: info@ijain.org (paper handling issues)
   andri.pranolo.id@ieee.org (publication issues)
