DjangoBooks.com

the Djangobot

edited June 2016 in Licks and Patterns Posts: 13
[UPDATE 06/06/2016 :: Denis Chang contributed some more transcriptions, and there are new samples in the post at the end of the thread]

Skip to the end if you just want to hear the Djangobot improvising.

I'm using RNNs (Recurrent Neural Networks) to generate sight-reading exercises for classical guitar.

The basic idea is: we train the network on a huge database of classical guitar music, and then hook the output back into the input, so that it, in a sense, starts to "dream up" an endless stream of original classical guitar music based on its own "knowledge" learned from the database (knowledge that oftentimes we don't even really understand).
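That feedback loop can be sketched in a few lines. This is only an illustration, not the actual system: a toy bigram model stands in for the trained RNN, but the generation step is the same idea - each sampled token is fed back in as the next input.

```python
import random

def train_bigram(tokens):
    """Count next-token frequencies -- a crude stand-in for the trained RNN."""
    model = {}
    for cur, nxt in zip(tokens, tokens[1:]):
        model.setdefault(cur, []).append(nxt)
    return model

def dream(model, seed, length):
    """Generate by feeding each output back in as the next input."""
    out = [seed]
    for _ in range(length - 1):
        choices = model.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))
    return out

# A tiny "database" of note tokens (pitch names for illustration only).
corpus = "C4 E4 G4 E4 C4 E4 G4 C5".split()
model = train_bigram(corpus)
melody = dream(model, "C4", 6)
```

With a real RNN the `model` is a learned distribution over thousands of pieces rather than raw bigram counts, but the sampling loop looks just like `dream`.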

It's a promising approach for generating sight-reading exercises in particular, because what it produces reflects what the RNN considers the most common patterns, abstractions, and relationships in the database - exactly the kinds of things you might want to be practicing if you were trying to learn classical guitar.

And what comes out of the RNN is usually very musically coherent - at least in small chunks (it does poorly at learning things like song structure).

Anyways, that's great, but jazz is more fun than classical guitar, so I tried using the same techniques (plus extra jazz mojo) to make a program that generates jazz guitar music. The problem is that there is significantly less jazz guitar music available in a digital format than needed to work with the RNN.

However, recently I ran across the work of Benjamin Givan, who transcribed over 200 Django solos for his PhD dissertation and made the transcriptions freely available online. Check it out if you haven't already seen these transcriptions: https://sites.google.com/site/klemjc/

He graciously provided me the original engraving files and I was able to get them into a format that would work with the RNN.

So I give you - the Djangobot, soloing over the changes to a few different standards:





The solos are in MIDI (synthesized sound) and so unfortunately sound computer-y and un-emotive.

Keep in mind that 200-ish solos is still a very small number for applying these techniques. These results are probably not good enough yet to use pedagogically (i.e., you might learn some things about Django's style from these generated solos, but you would have to be selective about it).

For comparison, the classical guitar database was around 3500 full songs. Approaching that amount of learning material, the generated output begins to take on the appearance of a huge, essentially infinite set of études, ranging through different aspects of the style. Groovy stuff.

Comments

  • adrian Amsterdam Virtuoso
    Posts: 545
    Nice project! What sorts of features are you using when training the network? In other words, how do you convert a Django transcription (or classical guitar piece) into the format that an RNN needs?

    Adrian
  • edited May 2016 Posts: 13
    Adrian - thanks!

    The main inter-conversion format is MIDI; the classical guitar database, for example, is entirely just MIDI files of the scores.

    With the jazz, though, I needed to keep the context of how the chord changes relate to the melody, so MIDI alone didn't suffice. The main inter-conversion format for the jazz project is MusicXML, because it can represent the notes and the changes together. (Though I could also build a MusicXML file out of a MIDI file with the changes in some other format - text, say.)

    For either, the MIDI or the MusicXML, before going into the RNN it gets converted down to an entirely textual format, similar to ABC notation, but customized and tokenized to make the training easier, and to make it easier to control the generation process after training. To get "hooks" into the generation process, if you will.
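    For illustration only - the thread doesn't show the actual token format, so everything below (the `CH_` prefix, the `pitch_duration` tokens) is invented - an ABC-flavored tokenization might look something like this:

```python
def tokenize(events):
    """Flatten (chord, pitch, duration) events into one text stream.

    Hypothetical format: a CH_ token whenever the chord changes,
    then one pitch_duration token per note (e.g. A4_8 = A4 eighth note).
    """
    tokens = []
    last_chord = None
    for chord, pitch, dur in events:
        if chord != last_chord:
            tokens.append("CH_" + chord.replace("#", "s"))
            last_chord = chord
        tokens.append(f"{pitch}_{dur}")
    return " ".join(tokens)

phrase = [("A7", "A4", 8), ("A7", "Cs5", 8), ("Dm", "D5", 4)]
text = tokenize(phrase)
# text == "CH_A7 A4_8 Cs5_8 CH_Dm D5_4"
```

    The point of a format like this is that chord labels become ordinary tokens in the stream, which is also what gives you the "hooks": to steer generation over a given progression, you force the `CH_` tokens and let the model fill in the notes between them.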

    Ultimately the features are just the notes (pitch+duration), the chord names, and a few things that don't seem to affect much (key sig, time sig, tempo). There is no explicit music knowledge in the network - it has no conception of chord tones vs passing tones, scales, etc. For example, it does not know explicitly what notes make up an "A7" chord, other than by seeing what notes are usually played over the sections labelled "A7".

    All the network ever sees is just an in-order list of notes and chord names - everything else is learned implicitly from the examples.
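    As a toy illustration of that implicit learning (this is not the RNN, just co-occurrence counting over a made-up stream), even simple statistics over such a token list start to recover chord spellings:

```python
from collections import Counter, defaultdict

# Toy interleaved stream of chord labels and notes, like what the network sees.
stream = ["A7", "A4", "Cs5", "E5", "G5", "Dm", "D4", "F4", "A4", "A7", "Cs5", "G4"]
chords = {"A7", "Dm"}

seen = defaultdict(Counter)
current = None
for tok in stream:
    if tok in chords:
        current = tok                 # a chord label sets the context
    elif current:
        seen[current][tok[:-1]] += 1  # strip the octave digit, count the pitch class

# The most common pitch classes under "A7" start to look like its chord tones.
top_a7 = [p for p, _ in seen["A7"].most_common(2)]
```

    An RNN does something far richer than counting, of course, but the principle is the same: "A7" never comes with a definition, only with examples.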
  • wim Chicago Moderator Barault #503 replica
    Posts: 1,457
    Uh-oh .. the robots are coming

    Are you Djohn Connor?

    I need your clothes, your boots, and your Wegen pick!



  • edited June 2016 Posts: 13
    Alright here goes with some updates. [Again if you want to listen just skip to the end]

    /u/davidlawrence and I recently had the pleasure of studying with the infamous Denis Chang for the better part of a week at his home in Montreal. (Thanks Denis! And if you're thinking at all about staying with Denis or doing lessons with him - do it! You will not regret it!)

    He (also) graciously agreed to feed the Djangobot with some of his own collection of transcriptions - a bit of Gonzalo Bergara, a bit of Stochelo Rosenberg, and a bit of Yorgui Loeffler.

    With these additions, the original dataset has been increased by about 25%. Also I guess technically now it's no longer a Djangobot. Just a gypsy jazz bot. Or maybe a Djangalo Rosenhardtler .. bot

    I've got two samples that show my favorite bits from the new run:

    Nuages -- this is just a snippet from the very end of a solo, around when the song moves to C major. This is starting to sound coherent to me, at least compared to the previous samples.



    Django's Tiger -- this one has examples of both octave voicings (near the first E7 section), and chord soloing (near the "Christophe changes") which is why I chose it.



    I'm actually sort of surprised how much difference such a small increase in the dataset seemed to make (to my ear). I notice that with Denis' transcriptions it seems to have a much better grasp of ornamentation (trills, grace notes).

    Though, as Denis pointed out, it might be that the network does better with players who are more lick-based than outside-experimental. So for example, it might be easier to create a Stochelo-bot, rather than a Django-bot or a Bireli-bot.
  • Posts: 43
    Wow!
    That definitely made a big difference. It is actually starting to sound a little musical! Haha
  • Bones Moderator
    edited June 2016 Posts: 3,319
    Yeah that made a big difference. Getting there....

    A better acoustic guitar midi sound would make it better as well.
  • rob.cuellari ✭✭✭✭
    Posts: 114
    is it preferable to submit entire solos or just licks?

    i can't write in musical notation, but would be happy to transcribe certain licks if anyone could then transpose that into musical notation for upload.

    (i really only have been transcribing django, and would prefer that for now as it's just a hobby of mine)
  • edited June 2016 Posts: 13
    @rob - entire solos are definitely preferable! If you transcribe into tab software such as GuitarPro, TuxGuitar, or (even better) MuseScore, then it's easy to convert to standard musical notation from there.