had a fun interaction with chatGPT today about sunvox

Ol sen
Posts: 20
Joined: Sat Feb 12, 2022 6:26 am

had a fun interaction with chatGPT today about sunvox

Post by Ol sen »

Out of curiosity I started a conversation with ChatGPT from OpenAI yesterday and asked about SunVox. It turns out the model knows about SunVox but basically nothing about its inner workings. I gave it a list of the available module types, and it immediately understood how, if I wanted to synthesise a trumpet sound, the comb-shaped FFT spectrum needed for such sounds could be built. Basically all the ingredients are there in SunVox.

Furthermore, the discussion ended up comparing SunVox to the legacy Quartz Composer patching environment from Apple, which was introduced more than 20 years ago and has sadly been definitively deprecated since last year. Why did QC come up with ChatGPT? The AI figured out that SunVox is missing one particular feature that could open Pandora's box up to a certain complexity: an "Iteration" module like the one available in QC. It basically resembled the functionality known in SunVox as the MetaModule, but its "Macro" content was applied multiple times up to a certain count, where each application of the psynth_net structure to the output buffer represented one iteration.

This sounds quite complex, but it gave me some hints that the MetaModule is by far not at the end of its possibilities. Let's assume the event mechanism of SunVox gets an event type that tells which iteration step of some defined maximum is currently being applied. That would allow, say, a MultiCtl residing inside to apply differently scaled parameter settings to its outputs, and with that a different set of parameter values to the substructure on each pass. That way it would become possible to create polyphonic sounds resembling a trumpet by just defining the base sound and controlling its comb peaks in the FFT spectrum. Of course this could be achieved in other ways too, like a dedicated generator module to control the blow pressure, but being able to apply iteration to an audio buffer up to a certain count (to control the CPU intensity) would be massive.
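
To sketch what I mean, here is a minimal toy version of such an iteration event. All names (iteration_event, multictl_value, render_substructure) are made up for illustration and are not part of the real psynth_net engine:

Code: Select all

#include <math.h>
#include <string.h>

#define MAX_ITERATIONS 8
#define SAMPLE_RATE 44100.0f

/* Hypothetical event carrying the current iteration step. */
typedef struct {
    int step;       /* current iteration index, 0 .. max_steps-1 */
    int max_steps;  /* total number of iterations defined by the user */
} iteration_event;

/* Stand-in for a MultiCtl: scales one exposed parameter per step. */
static float multictl_value(const iteration_event* ev, float min, float max) {
    if (ev->max_steps <= 1) return min;
    return min + (max - min) * (float)ev->step / (float)(ev->max_steps - 1);
}

/* Stand-in for one run of the MetaModule substructure: a plain sine
   whose frequency is the parameter chosen for this iteration. */
static void render_substructure(float* out, int n, float freq) {
    for (int i = 0; i < n; i++)
        out[i] = 0.1f * sinf(2.0f * 3.14159265f * freq * i / SAMPLE_RATE);
}

/* Apply the same substructure max_steps times, each time with a
   differently scaled parameter, and sum the per-iteration buffers. */
void iterate_meta(float* out, float* tmp, int n, int max_steps) {
    if (max_steps > MAX_ITERATIONS) max_steps = MAX_ITERATIONS;
    memset(out, 0, n * sizeof(float));
    for (int step = 0; step < max_steps; step++) {
        iteration_event ev = { step, max_steps };
        float freq = multictl_value(&ev, 440.0f, 880.0f); /* comb peaks */
        render_substructure(tmp, n, freq);
        for (int i = 0; i < n; i++) out[i] += tmp[i] / max_steps;
    }
}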

In Quartz Composer the Iteration module was a macro module which applied time and the iteration index to the exposed properties/controls of its hosted substructure. This Iteration module existed in parallel to the plain Macro module because some QC modules were not allowed to iterate at all, either to keep control over CPU load or to avoid repeating operations where that makes no sense, like a URL request that there is no point in making multiple times. Well, that part does not apply to SunVox, but the basic idea is to allow iterations and collect all of the slightly different sound buffers in the output for further processing: filter, EQ, etc.

Does this make sense to anyone?

By the way, ChatGPT suggested such an iteration could basically look like this:

Code: Select all

#include <math.h>   /* for sinf() */

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define MAX_ITERATIONS 32

/* The original snippet left calculate_amplitude undefined; a plain
   1/n rolloff is used here as a stand-in. */
static float calculate_amplitude(int overtone_number) {
    return 1.0f / (float)overtone_number;
}

void iteration_handler(float* buffer, int buffer_size, float base_frequency, int max_iterations) {
    if (max_iterations > MAX_ITERATIONS) max_iterations = MAX_ITERATIONS;
    for (int i = 0; i < buffer_size; i++) {
        float sample = 0;
        for (int j = 1; j <= max_iterations; j++) {
            float overtone = base_frequency * j;
            float amplitude = calculate_amplitude(j); /* amplitude falls with overtone number */
            sample += amplitude * sinf(2.0f * (float)M_PI * overtone * i / 44100.0f);
        }
        buffer[i] = sample;
    }
}
Another suggestion was this:

Code: Select all

#include <math.h>

#define PI 3.14159265f
#define NUM_HARMONICS 12
#define SAMPLE_RATE 44100.0f
#define TABLE_SIZE 256

// Function to generate the base trumpet waveform from a wavetable.
// The original snippet left the table contents elided ({...}); it is
// filled with a plain sawtooth on first use as a stand-in.
float generate_base_waveform(float phase) {
    static float wavetable[TABLE_SIZE];
    static int initialized = 0;
    if (!initialized) {
        for (int i = 0; i < TABLE_SIZE; i++)
            wavetable[i] = 2.0f * i / TABLE_SIZE - 1.0f; // placeholder waveform
        initialized = 1;
    }
    int index = (int)(phase * TABLE_SIZE) % TABLE_SIZE;
    return wavetable[index];
}

// Function to generate the trumpet sound
void generate_trumpet(float* buffer, int num_samples, float frequency, float pressure) {
    // Phase increment for the base frequency (phases are in cycles, 0..1)
    float phase_inc = frequency / SAMPLE_RATE;
    float base_phase = 0;

    // Per-overtone phases must persist across samples; in the original
    // snippet they were reset to 0 every sample, so the modulators never moved
    float overtone_phase[NUM_HARMONICS] = {0};

    // Loop through the samples
    for (int i = 0; i < num_samples; i++) {
        // Generate the base waveform
        float base_waveform = generate_base_waveform(base_phase);

        // Sum of the overtones for this sample
        float overtone_sum = 0;

        // Loop through the overtones
        for (int j = 1; j <= NUM_HARMONICS; j++) {
            // FM-style overtone: the base waveform plus a pressure-scaled
            // modulator running at the overtone frequency
            float overtone = sinf(base_waveform + pressure * sinf(2.0f * PI * overtone_phase[j - 1]));
            overtone_sum += overtone;

            // Advance this overtone's phase and wrap it
            overtone_phase[j - 1] += frequency * j / SAMPLE_RATE;
            if (overtone_phase[j - 1] >= 1.0f) overtone_phase[j - 1] -= 1.0f;
        }

        // Mix base and overtones, scaled down to avoid clipping
        buffer[i] = (base_waveform + overtone_sum) / (NUM_HARMONICS + 1);

        // Advance the base phase and wrap it to keep float precision
        base_phase += phase_inc;
        if (base_phase >= 1.0f) base_phase -= 1.0f;
    }
}
Unrelated: another thought was how to create a trumpet simulation with the help of FM2. If someone has suggestions or links inside this forum, I'd be happy to have a look at how it can be done (apart from taking a trumpet sample and playing it at different pitches). Why is that so complex? Because a trumpet simulation is not just a couple of overtones: in reality it is based on different spacing and amplitudes of its overtones in both the spectrum and the time domain, and even overtones are dropped at certain blow pressures. So simple pitch shifting does not work here, and a trumpet is quite difficult to realise with a SunVox patch as of now. I see this as a coding challenge to myself; I might come up with some useful new module type or extension for the psynth_net engine.
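
Just to make the "dropped even overtones" point concrete, a toy amplitude model could look like the following. This is pure illustration, not an accurate brass model; harmonic_amplitude is made up:

Code: Select all

#define NUM_HARMONICS 12

/* Toy model: amplitude rolls off with harmonic number, even harmonics
   fade out at low blow pressure, and the whole spectrum gets brighter
   when blown harder. pressure is expected in 0..1. */
static float harmonic_amplitude(int n, float pressure) {
    float amp = 1.0f / (float)n;      /* basic 1/n rolloff */
    if (n % 2 == 0)
        amp *= pressure;              /* even overtones drop at low pressure */
    return amp * (0.5f + 0.5f * pressure);
}

/* Fill a table with one amplitude per harmonic for the current pressure;
   this would have to be re-evaluated over time for a living sound. */
void fill_harmonic_table(float* amps, float pressure) {
    for (int n = 1; n <= NUM_HARMONICS; n++)
        amps[n - 1] = harmonic_amplitude(n, pressure);
}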

Sidenote: ChatGPT also suggested that a basic trumpet sound is made of at least 12 overtones.

Moreover, because I am developing a note2chord module for the SunVox engine that already works in prototype form, I am thinking of another approach: instead of a ready-made lookup table of defined chords, create multiple notes with relative pitches and just hammer those into a polyphonic generator, and that way use overtone spacing control and velocity as an expression of pressure into a "trumpet". One would maybe just define how many overtones shall be allowed at most, and expose them as notes whose pitches each represent one of the comb's overtones.
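
A first sketch of that mapping could look like this (overtones_to_notes is hypothetical; the real note2chord interface may end up looking different):

Code: Select all

#include <math.h>

#define MAX_OVERTONES 12

/* Map the overtones of one base note to relative pitches in semitones
   (overtone n sits 12*log2(n) semitones above the base) plus a velocity
   derived from "pressure". Note that most harmonics fall between
   equal-tempered semitones (e.g. the 3rd lands at ~19.02), so the
   receiving generator needs fine pitch control. */
void overtones_to_notes(int count, float pressure,
                        float* rel_semitones, float* velocities) {
    if (count > MAX_OVERTONES) count = MAX_OVERTONES;
    for (int n = 1; n <= count; n++) {
        rel_semitones[n - 1] = 12.0f * log2f((float)n);
        velocities[n - 1] = pressure / (float)n; /* softer for higher overtones */
    }
}
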
burij
Posts: 90
Joined: Fri Nov 08, 2019 5:23 pm
Location: Berlin, Germany
Contact:

Re: had a fun interaction with chatGPT today about sunvox

Post by burij »

This was very interesting to read. I think I understood around 80% though. I have also already had some conversations with ChatGPT about SunVox. I guess you know this one, if you are so deep into creating realistic-sounding instruments: https://www.soundonsound.com/techniques ... nstruments . I think everything described there is pretty achievable with SunVox.
You can download my exclusive Sunvox-Metamodules here: https://label.weddinger-schule.de/category/tools/
Ol sen
Posts: 20
Joined: Sat Feb 12, 2022 6:26 am

Re: had a fun interaction with chatGPT today about sunvox

Post by Ol sen »

Indeed, a nice article on the basics of a horn/trumpet. I see you are in Berlin as well. Wanna meet up for a coffee to talk SunVox nerd stuff? See PM.
Keres
Posts: 466
Joined: Mon Mar 21, 2016 9:41 am
Location: N. Tulsa Ok.
Contact:

Re: had a fun interaction with chatGPT today about sunvox

Post by Keres »

uuuuh.... here in Oklahoma we are wondering what sort of computer could "iterate" multiple copies of a MetaModule and still run in realtime. Sounds like this old proggie was made for rendering non-realtime waveform effects.
Ol sen
Posts: 20
Joined: Sat Feb 12, 2022 6:26 am

Re: had a fun interaction with chatGPT today about sunvox

Post by Ol sen »

An iterator can be understood as a collection of multiple iterations of the very same module with different parameters, reusing the model residing in the structure. So no copy is needed. Of course iterators can become CPU intensive, but with a limit set it can be handled.

Example: let's say you created one signal path to make a SAW waveform wobble, with an LFO on top, an LP filter, and lastly an amplifier feeding the output. We have basically a monophonic sound, maybe stereo at this point. To get polyphony you could copy-paste that structure multiple times and run the copies in parallel, up to your CPU power barrier of course. With the concept of an iterator you do not copy the structure; instead the very same structure is just applied multiple times with different arguments, and the output of each iteration is collected together, resulting in a polyphonic application of the given algorithm/structure. To keep CPU load in check, the "iterator" could offer a maximum of 32 iterations or fewer; even 10 or 12 would work fine.

This means that as long as the audio buffer and sample rate would allow copies of a MetaModule to run in parallel, an iterator does just that but without the need for a copy, ergo saving CPU power, in particular because the iterator concept allows constructing code that addresses optimisation to speed things up. But the main thing is that it is the most memory-efficient way of running multiple "copies" of one and the same calculation in the time domain.
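
As a rough sketch of the difference, here is the iterator idea in plain C: one voice function, applied several times with different arguments, no copies of the structure. Everything here is made up for illustration:

Code: Select all

#include <math.h>
#include <string.h>

#define MAX_ITERATIONS 32
#define SAMPLE_RATE 44100.0f

/* One "structure": a naive SAW voice, standing in for the whole
   SAW -> LFO -> LP -> Amp chain from the example above. */
static float saw_voice(float phase) {   /* phase in cycles, 0..1 */
    return 2.0f * phase - 1.0f;         /* rising ramp, -1..1 */
}

/* The iterator: the same voice code runs max_iter times with different
   arguments (here detuned frequencies). Only per-iteration state exists;
   the structure itself is never copied. */
void iterate_voices(float* out, int n, float base_freq, int max_iter) {
    if (max_iter > MAX_ITERATIONS) max_iter = MAX_ITERATIONS;
    memset(out, 0, n * sizeof(float));
    for (int it = 0; it < max_iter; it++) {
        float freq = base_freq * (1.0f + 0.01f * it); /* per-iteration argument */
        float phase = 0.0f;
        for (int i = 0; i < n; i++) {
            out[i] += saw_voice(phase) / max_iter;    /* collect the outputs */
            phase += freq / SAMPLE_RATE;
            if (phase >= 1.0f) phase -= 1.0f;
        }
    }
}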

This 8-year-old video explains the "iterator" concept that existed in Quartz Composer quite well.


What might confuse in this video is that the presenter uses the iterator in its most simple form, where it is just a step algorithm counting up from 0 in the time domain. But the iterator in QC was a module that also allowed counting up faster than the frames presented per second. There was one major invisible feature that made it work smoothly: skipping further iterations when the time for the next frame was up, then resuming counting. There was another module that worked similarly, called "Replicator", which followed the same concept but literally made internal copies of the structure to be processed, and yet still had the skipping feature built in to keep up the frame rate.
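
A rough sketch of that skipping idea; run_iterations and apply are made up, only clock_gettime is a real POSIX call:

Code: Select all

#include <time.h>

/* Current time in seconds from the monotonic clock. */
static double now_seconds(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

/* Apply as many iterations as fit into this frame's time budget and
   report where we stopped, so the next frame can carry on counting
   instead of stalling the frame rate. */
int run_iterations(int start, int total, double frame_budget_s,
                   void (*apply)(int index)) {
    double deadline = now_seconds() + frame_budget_s;
    int i = start;
    while (i < total && now_seconds() < deadline) {
        apply(i); /* one application of the hosted substructure */
        i++;
    }
    return i;
}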