SunVox & MPE

Multi-platform modular music creation studio
cube48
Posts: 114
Joined: Tue Jun 21, 2011 10:33 am

SunVox & MPE

Post by cube48 »

MPE - MIDI Polyphonic Expression
Direct link to current MPE specification document.

Long post warning! Only for curious! :)

I'm a happy owner of a Linnstrument (LS for short) and of course I had to try how SunVox plays along with it.
The LS offers 3 dimensions for polyphonic expression. All three axes can be assigned to any MIDI CC, but it's good practice to stick to the MPE standard set of controls:
X axis - pitch bend over the horizontal rows of pads
Y axis - MIDI CC 1 (ModWheel) or 74 (Brightness) - I mostly stick to ModWheel as it is the more common modulation source for HW/SW synths
Z axis - Channel Pressure - poly pressure is also possible, but it's rare for synths to respond to it, so I stick to ChanPress

In the ideal MPE scenario your sound module is capable of multi-timbral performance, or in other words it can assign individual voices to separate MIDI channels. That way every individual held note has its own independent pitch/CC1/ChanPress controller streams.

In SunVox we can already assign MIDI CCs separated by MIDI channel (since 1.9, if I'm correct), so axes Y and Z are easy to assign to any parameter on different modules. With the latest update, 1.9.1, we even have the Glide module, so pitch bend response is also possible. The LS physically has 25 horizontal pads available for pitch slides and lets you set the pitch bend range freely in semitones. To get a linear PB response it's ideal to set 24 semitones on the LS, which covers the longest possible PB slide on it. In SunVox you need to set the Glide module's Pitch to respond to PitchBend MIDI messages on the desired MIDI channel and set the Glide Scale to 40% (which fits those 24 semitones of PB range perfectly). That way you'll get nice and clean PB slides over the full width of the LS rows.
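For reference, here's the raw math behind those numbers as a quick Python sketch (nothing SunVox-specific; the 40% Glide Scale above is simply the setting that matched in practice):

Code:

# Standard 14-bit MIDI pitch bend: values 0..16383, centre at 8192.
# With a +/-24 semitone bend range set on the controller, a full swipe
# across an LS row maps onto the whole +/-24 semitones.
def pitchbend_to_semitones(raw14, bend_range=24):
    return (raw14 - 8192) / 8192.0 * bend_range

print(pitchbend_to_semitones(0))        # -24.0 (far left of the row)
print(pitchbend_to_semitones(8192))     #   0.0 (starting pad)
print(pitchbend_to_semitones(16383))    # ~+24.0 (far right of the row)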

The only missing part is distributing the note events, separated by MIDI channel, to different SunVox modules. According to various comments by NightRadio, this feature (or MIDI channel handling in general) is planned for future SunVox versions. But I was wondering if I could achieve it with some workaround.
In the attached example I have four voice rows; each pair of Glide and Analog Generator responds to MIDI channels 2-5 respectively (4-voice MPE is enough for a start) in the following way:
Glide > Pitch > MIDI Ch X - PitchBend (axis X on LS)
AG > PulseWidth > MIDI Ch X - MIDI CC1 (axis Y on LS)
AG > FilterFreq > MIDI Ch X - ChanPress (axis Z on LS)
AG > FilterRes > MIDI Ch X - ChanPress (axis Z on LS)
AG > Volume > MIDI Ch X - ChanPress (axis Z on LS)

On the LS I use NotePerChannel mode, so each note triggered sends its data on a different MIDI channel (2-5). That way the MIDI CCs already respond polyphonically and I can see the individual Analog Generators reacting to the round-robin channel CC distribution. The problem is that all four AGs receive the note events, and as they are set to monophonic, they all re-pitch to the new note.
If I could create some sort of counter (1-4) that senses new notes and cuts the signals between the main MultiSynth module and the individual Glide modules in a cyclic way, I could get a separate response to single note events. So for MPE it doesn't necessarily have to differentiate MIDI channels.
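Something like this, in plain Python terms (just to illustrate the behaviour I'm after, nothing SunVox-specific):

Code:

# Hypothetical cyclic note distributor: every new note-on goes to the
# next voice slot in round-robin order, and the note-off follows its note.
class RoundRobinDistributor:
    def __init__(self, voices=4):
        self.voices = voices
        self.next_voice = 0
        self.note_to_voice = {}

    def note_on(self, note):
        voice = self.next_voice
        self.next_voice = (self.next_voice + 1) % self.voices
        self.note_to_voice[note] = voice
        return voice                 # send this note only to that voice block

    def note_off(self, note):
        return self.note_to_voice.pop(note, None)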
So the crucial question of this long post (sorry) is:

How to distribute individual notes to modules in a cyclic way?

Thank you if you managed to read this far :D And thanks for any ideas on note distribution.
Attachments
MPE_experiment.sunvox
(6.2 KiB) Downloaded 349 times
Last edited by cube48 on Wed Jun 22, 2016 12:32 am, edited 1 time in total.
Koekepan
Posts: 263
Joined: Thu Dec 05, 2013 4:56 am

Re: SunVox & MPE

Post by Koekepan »

Your crucial question is: "How to distribute individual notes to modules in a cyclic way?"

There is currently (to the best of my knowledge) no easy way of doing so.

However, this has already been requested as a feature. I requested it because I was working with modulations between generators, and I wanted polyphony without the modulations going crazy.

There may be an elegant way for NightRadio to incorporate channel pressure, MPE, and release velocity along with sustain and note velocity, by regarding them all as signals to the envelope captured and passed along down each chain.

But I'm guessing. It might not be worth it. Still, SunVox as the most expressive softsynth you can run on an iPad would be pretty sweet.
cube48
Posts: 114
Joined: Tue Jun 21, 2011 10:33 am

Re: SunVox & MPE

Post by cube48 »

Channel pressure is supported in SunVox as a MIDI controller. If Release Velocity is the same as Release Time, then it is MIDI CC 72, which can be assigned. Sustain is MIDI CC 64, also possible to assign. What do you mean by capturing the envelope? If I understand you right, you could just assign those MIDI CCs to the desired parameters and shape your envelope in real time by sending CCs from your controller.

I'm also patiently waiting for the day when a fuller MIDI implementation comes to SunVox. It's going to be an absolute killer (not that it isn't already).
Koekepan
Posts: 263
Joined: Thu Dec 05, 2013 4:56 am

Re: SunVox & MPE

Post by Koekepan »

cube48 wrote:Channel pressure is supported in SunVox as MIDI controller.
No complaints there.
cube48 wrote:If Release Velocity is the same as Release Time, it is then MIDI CC 72 which can be assigned.
Release Velocity is not the same as Release Time. It's a per-note MIDI performance/expression parameter based on how quickly the player lifted the key. As such, it's very much a per-note thing that, if captured, would benefit from round-robin note allocation.
cube48 wrote:Sustain is MIDI CC 64, also possible to assign.
I really meant gate time - i.e. how long a keyboard key is depressed.
cube48 wrote:What do you mean by capturing the envelope? If I understand you right, then you could just assign those MIDI CC's to desired parameters and shape your envelope in real-time by sending CC's from your controller.
I don't think that you do understand what I'm saying. Let me try in more detail:

If and when you hit a note on a keyboard, you can gather parameters. You can gather when the note was hit, how hard it was hit, how much pressure was applied while the note was held, how quickly it was released, and possibly other factors such as breath control and channel pressure that are not related to the single key's action alone.

The velocity of hitting and releasing the key, the timing of hit and release, and the timing and degree of pressure during the note's held period form an intentional/expressive envelope with respect to that one unique note in time. If you take these elements and put them through your AAHDSHRR (add letters and numbers to taste) complex envelope, you get the actual calculated envelope that the synthesis system applies to that one unique note in time. This one unique note in time shares its AAHDSHRRalantt envelope parameters with other notes put through the same synthesis chain, but it most probably does not share the exact calculated envelope, because of differences in how you played them with your thumb and ring fingers.

To make it clear: the parameters largely under discussion (not channel pressure, for example) are unique to each note that is hit. While it may be possible to gather them as MIDI signals and drive some kind of synthesis modulation with them, that is not as elegant as having each note's individual pressure, velocity and release velocity travel down the same signal path. After all, timing, velocity and release timing already go there.
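Purely as an illustration of the idea (plain Python, nothing like this exists in SunVox today), each sounded note would carry its own bundle of expression data down the chain, something like:

Code:

from dataclasses import dataclass, field
from typing import List, Optional, Tuple

# Illustrative only: the per-note expression a hypothetical
# "complex MIDI note module" could capture and pass down its chain.
@dataclass
class NoteExpression:
    note: int                                    # MIDI note number
    on_time: float                               # when the key was hit
    on_velocity: int                             # how hard it was hit
    pressure: List[Tuple[float, int]] = field(default_factory=list)  # (time, value) while held
    off_time: Optional[float] = None             # when the key was released
    off_velocity: Optional[int] = None           # how quickly it was released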
cube48 wrote:I'm also patiently looking for the day when further MIDI implementation comes to SunVox. It's going to be an absolute killer (not that it isn't already).
cube48
Posts: 114
Joined: Tue Jun 21, 2011 10:33 am

Re: SunVox & MPE

Post by cube48 »

Thanks, Koekepan, for your clarification. I really appreciate this kind of discussion and exchange of views/knowledge!

I fully get your point about each note having its own envelope etc. What is still a little unclear to me is whether parameters like gate time, release time and release velocity are part of the official MIDI CC list, or whether they are specific to some synths/controllers. I know that e.g. Kurzweil's machines with V.A.S.T. have quite a few extra modulation sources which don't fit the official MIDI CC list and are exclusive to their machines. Kurzweil has these modulation sources mapped above CC number 128, and they are not even controllable from outside - they are available for internal routing only.

How is the velocity of hit and release different from the timing? I'm not arguing, I just want to understand it. I always thought that velocity is a value calculated from two switches under each key: the finger hits the key, the first (upper) switch is triggered, the finger+key travels down until it hits the second (lower) switch, and the time between these two triggers gives the note velocity. The same but inverted for release velocity ... or?
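To illustrate what I mean (a toy sketch only; the time constants and the curve are made up, every keybed does this differently):

Code:

# Toy model of the two-switch velocity measurement.
# dt = time between the upper and lower contact closing, in seconds.
def velocity_from_travel_time(dt, fastest=0.002, slowest=0.060):
    dt = min(max(dt, fastest), slowest)
    norm = (slowest - dt) / (slowest - fastest)   # 1.0 = fastest possible press
    return max(1, round(1 + norm * 126))          # map to MIDI 1..127

print(velocity_from_travel_time(0.003))   # hard hit -> high velocity
print(velocity_from_travel_time(0.050))   # soft hit -> low velocity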

Sorry, I also don't fully understand your envelope goal. It's clear that you can assign e.g. note velocity to the A/D/R rate or Sustain of the envelope etc. These are note-on values which can affect the envelope at the trigger moment. But how do you want to apply continuous MIDI CCs (e.g. Key Pressure) to envelopes that are already running?

Yesterday I got the idea to create some sort of logical binary gate array made of Amplifier modules. If a short noise-burst signal (one per note hit) could toggle the Invert parameter in an Amplifier, we could build logic like a cyclic counter or similar. But I failed to create it - I don't know how to keep Invert toggled and then switch it back with another note.
Koekepan
Posts: 263
Joined: Thu Dec 05, 2013 4:56 am

Re: SunVox & MPE

Post by Koekepan »

cube48 wrote:I fully get your point on each note having it's own envelope etc. What is still little bit unclear to me if those parameters like gate time, release time and release velocity are part of official MIDI CC list or they are specific to some synths/controllers?
Gate time and release time are simply when things happen. There's no separate signal, aside from the observations that a note was pressed, is still pressed, and has now been released. They're based on timestamps. They are communicated when they happen (otherwise live play wouldn't really be possible).

Release velocity is a parameter of the note-off message, in the same way that note velocity is a parameter of the note-on message.
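To make that concrete, in raw MIDI bytes (channel 1, values picked arbitrarily):

Code:

# Note-on and note-off for middle C on channel 1.
# The third byte of a real note-off carries the release velocity;
# many keyboards instead send a note-on with velocity 0 and throw it away.
note_on  = bytes([0x90, 60, 100])   # note-on,  C4, strike velocity 100
note_off = bytes([0x80, 60, 64])    # note-off, C4, release velocity 64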
cube48 wrote:Sorry, I also don't understand your envelope goal fully. It's clear that you can assign i.e. note velocity to A/D/R rate or Sustain of the envelope etc. These are note-on values which can affect envelope in the trigger moment. But how do you want to apply continuous MIDI CC's (i.e. Key Pressure) to envelopes when they are already running?
In principle, any way I want. In practice, take a look at some more complex synths. It's easy to map your note expression to sustain level, but you might also map it to a filter level, filter resonance, delay time or whatever. If a complex MIDI note module had parameters for velocity, pressure, and release velocity, those could act similarly to Sound2Ctl modules to directly affect parameters down the chain. That way each note would become a deeply expressive item by itself.

It may sound rather theoretical, but I'm mostly an ambient composer, so for me this stuff is critical. Right now I fake it with a lot of parameter automation from a nanoKONTROL2.
iaon
Posts: 236
Joined: Mon Jun 02, 2014 7:56 am

Re: SunVox & MPE

Post by iaon »

Hey cube48,

If I understood your last paragraph correctly (toggling Invert by playing a note), here's a way to do it (albeit without the noise-burst signal).

If you play a note into this MetaModule effect, the signal going through it is inverted and stays inverted until the next note is played. Alternatively, you can use it without an input by setting the DC Offset.

The main principle here is that a Sound2Ctl'd controller stays in place when the S2C is muted. The playable MM quickly unmutes it and mutes it again using pattern commands. The Amplifier Sound2Ctls its own Inverse controller through a feedback loop, delayed to update after the loop is broken again.

This could also be fed into a Sound2Ctl to switch between sweet-spot values.

There are no doubt several ways of doing logical stuff with amplitudes...
Attachments
PlaySwitch.sunsynth
(4.99 KiB) Downloaded 377 times
Last edited by iaon on Thu Jun 23, 2016 7:48 pm, edited 2 times in total.
cube48
Posts: 114
Joined: Tue Jun 21, 2011 10:33 am

Re: SunVox & MPE

Post by cube48 »

Koekepan wrote:If a complex MIDI note module had parameters for velocity, pressure, and release velocity, those could act similar to Sound2Ctrl modules to directly affect parameters down the chain. This way each note would become a deeply expressive item by itself.
Yes, but you would still need to distribute the notes to individual voice blocks, each with its own envelopes etc. I get you now with the dynamic envelopes :wink:
iaon wrote: The main principle here is that a Sound2Ctl'd controller stays in place when the S2C is muted. The playable MM quickly unmutes it and mutes it again using pattern commands. The Amplifier Sound2Ctls its own Inverse controller through a feedback loop, delayed to update after the loop closes.
Thanks a lot, man! This looks really promising! Pattern commands! I thought about involving a pattern within the MM but had no clear idea how. Thanks, really!

I tried your PlaySwitch briefly. I wanted it to control (mute/unmute) the Velocity parameter on an intermediate MultiSynth module, but the PlaySwitch acts a little late because of its delay, and the note signal slips through faster. I'll have to figure out a way to cut the note signals coming from the MultiSynth with your PlaySwitch. If I manage that, the next level will be to build a chain of switches which counts to 4.
Damn, why did I leave school when we were studying logical circuit design?

Edit:
Here is the situation. Module 01 (MS) sends notes to modules 04 and 08 (MS2 and MS3). MS3 sends a static note C5 (to keep volume levels consistent) to the AG and the PlaySwitch. The AG sends its audio signal to the PlaySwitch and the Amplifier. The PlaySwitch inverts the audio signal with every note it receives, so the Amplifier plays only every other audio signal it receives (when the two inverted signals meet in the Amp they cancel out). The Amplifier's audio goes to a Sound2Ctl which controls the Velocity of MS2 (module 02). But this control signal is delayed relative to the original note, so it isn't 'muting' it. How can I delay the note signal between two MultiSynths? The regular Delay module is for audio only ... or? (at least it didn't work for note signals here)
[screenshot of the module routing]
iaon
Posts: 236
Joined: Mon Jun 02, 2014 7:56 am

Re: SunVox & MPE

Post by iaon »

The signal is inverted immediately on the first line of the MM pattern; the delay is only there to keep it from flipping more than once. But the note signals do beat the audio ones and can't be delayed, afaik. The first part of your setup with the phase cancellation works well. If you use just switches and no temporary signal, the 'note on' still beats the Sound2Ctl, but so what - a switch is a switch: it's flipped and ready to affect the next note.

In this example the Sound2Ctl'd MultiSynths are set to 'Ignore notes with zero velocity'. These chainable four-module units let through every other note signal. Two of them in series means every fourth note played successively into module 01 is sounded (regardless of timing).
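In plain Python terms the counting logic looks like this (just the logic, not the actual SunVox wiring):

Code:

# Each unit passes every other note it receives (a divide-by-two toggle).
# Chaining two of them passes every fourth note, regardless of timing.
class ToggleUnit:
    def __init__(self):
        self.open = True
    def feed(self, note):
        passed = note if self.open else None
        self.open = not self.open
        return passed

first, second = ToggleUnit(), ToggleUnit()
for n in range(1, 9):                    # eight incoming notes
    out = first.feed(n)
    if out is not None:
        out = second.feed(out)
    print(n, '->', out)                  # only notes 1 and 5 make it through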
Attachments
One234.sunvox
(16.92 KiB) Downloaded 351 times
cube48
Posts: 114
Joined: Tue Jun 21, 2011 10:33 am

Re: SunVox & MPE

Post by cube48 »

iaon, you are awesome, sir! I managed to build the array. Yet it runs into the primary issue of the Channel Per Note expression mode, which I couldn't see until the array was working. The Linnstrument rotates the channels per note in round-robin fashion (channels 2-4-3-5 in my case). My voice blocks are tied to channel-specific MPE parameters (i.e. the 1st voice takes CCs on MIDI channel #2 only, etc.). So I have to synchronize the PlaySwitch array with the voice blocks so that the rows open according to the required MIDI channels. That's no problem in itself - it's just a matter of setting the PlaySwitches' DC Offsets properly. The problem occurs when multiple notes are held together: then the array's note distribution gets out of sync with the LS channel distribution. It seems we have to wait for better MIDI channel handling in SunVox after all.
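To show why it drifts, here's a toy Python sketch (the skip-busy-channels rule is my simplification, not the LS's real allocator): the controller avoids channels whose notes are still held, while my PlaySwitch array blindly advances one step per note-on.

Code:

# Toy comparison: controller-style allocation vs. a fixed cycle.
CHANNELS = [2, 3, 4, 5]              # assumed rotation order

def compare(events):
    held = {}                        # note -> channel on the controller side
    ctl_i = arr_i = 0
    for ev, note in events:
        if ev == 'off':
            held.pop(note, None)
            continue
        while CHANNELS[ctl_i % 4] in held.values():   # controller skips busy channels
            ctl_i += 1
        held[note] = CHANNELS[ctl_i % 4]; ctl_i += 1
        arr_ch = CHANNELS[arr_i % 4]; arr_i += 1      # the array just cycles
        print(f"note {note}: controller ch {held[note]}, array voice ch {arr_ch}")

# Hold the first note while playing more: the last note ends up mismatched.
compare([('on', 60), ('on', 62), ('off', 62), ('on', 64), ('on', 65), ('on', 67)])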

Never mind, it was a great lesson, and I'm sure your contribution is already valuable for the application Koekepan is striving for, and for other users (including me).
Here is a simple 4-voice round-robin note distribution example in action. Your module is copied 4 times, and the initial configuration of the PlaySwitch DC Offsets is set so they play in sequence.
[screenshot of the 4-voice setup]

Thank you, iaon!
Attachments
4_voice_cycle.zip
(3.17 KiB) Downloaded 299 times
iaon
Posts: 236
Joined: Mon Jun 02, 2014 7:56 am

Re: SunVox & MPE

Post by iaon »

To minimize the sync problems it pays to control all the MultiSynths from the same two switches. Two exactly simultaneous notes may still go to the same generator, but running order and exclusivity are preserved.
The setup in this example kept up every time when playing one note per line at max BPM and 2 TPL. At 1 TPL it starts doubling some notes, which kind of makes sense as that's the speed the switches work at. I shortened the internal delay from 4 to 3 lines, still erring on the safe side; the Sound2Ctl only works for 1 line, so it could probably go even lower without any feedback problems...
[edit: reuploaded minus five superfluous MultiSynths]

As for polyphony, maybe for now some other software could transform the MIDI signal in a usable way before it goes into SunVox?
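For example, something like this could sit between the controller and SunVox - a rough sketch using the Python mido library; the port names are placeholders, and a complete version would also have to retag the per-channel bend/CC/pressure messages to the newly assigned channel:

Code:

import mido

# Rough sketch: force incoming notes onto a fixed round-robin channel
# order before they reach SunVox. Port names below are placeholders.
VOICES = [1, 2, 3, 4]                # mido channels are 0-based, so this is MIDI ch 2-5
note_channel, next_voice = {}, 0

with mido.open_input('LinnStrument') as inp, mido.open_output('To SunVox') as out:
    for msg in inp:
        if msg.type == 'note_on' and msg.velocity > 0:
            ch = VOICES[next_voice]
            next_voice = (next_voice + 1) % len(VOICES)
            note_channel[msg.note] = ch
            out.send(msg.copy(channel=ch))
        elif msg.type in ('note_off', 'note_on'):      # note-off, or note-on with velocity 0
            out.send(msg.copy(channel=note_channel.pop(msg.note, msg.channel)))
        else:
            out.send(msg)    # NB: per-channel CC/bend/pressure would also need retagging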

The other attachment shows a way to do some logical operations on two on/off signals. The simple switches here are 0 / 1.
Attachments
QuadriCycle.zip
(2.17 KiB) Downloaded 273 times
AND_OR_etc.zip
(1.63 KiB) Downloaded 284 times
El Nino
Posts: 299
Joined: Thu Mar 07, 2013 11:54 am

Re: SunVox & MPE

Post by El Nino »

aw man... I'm jealous.

Linnstrument looks awesome
cube48
Posts: 114
Joined: Tue Jun 21, 2011 10:33 am

Re: SunVox & MPE

Post by cube48 »

Thanks, iaon, for the further improvement! I'm still digesting it :)

El Nino, the LS is awesome indeed. I've sold most of the gear I collected over the years to fund it. It completely turned my approach to music around - I'm going in a more instrumental direction now. I'm finally able to learn and grasp music theory with it, which I was not able to crack with the classical non-isomorphic piano layout for over 20 years. I'm still at the very beginning, but it feels so good to play. It's like dipping your fingers into sound/synthesis - a really expressive experience. For me, FM synthesis is so far the most expressive, as a very small change in values leads to drastic changes in the sonic spectrum (which I like), and the SunVox module manages to keep the FM musical. I'll try to do a SunVox monophonic FM demo if time allows.
cube48
Posts: 114
Joined: Tue Jun 21, 2011 10:33 am

Re: SunVox & MPE

Post by cube48 »

Well, since MPE has been possible for some time already (since NR introduced MIDI channel mapping for modules), here's another demo of using SunVox in MPE fashion.
https://youtu.be/KWoeYTf0wJE

Edit: Used patch attached.

IMPORTANT NOTE: You need to apply Aftertouch and ModWheel movement to the voices, otherwise they will sound crappy. Depending on your MPE controller, your mileage with sensitivity may vary. Voices are mapped to MIDI channels 1-8.
Attachments
Cellisto.zip
(4.68 KiB) Downloaded 207 times