CommandTracker: Mostly ideas for a music tracker sparse file format
Haha well I'm having a lot of trouble trying to do a multiquote so I'll just reference some of the points:
Multi-Voice Instruments
I actually didn't think about this, but it's a good point! Can you elaborate on the types of things you'd want to do with more than 1 voice? Chorus and supersaws came to mind here, and delay too. A lot of things I'd do as instrument automation in DAW land I'm used to doing in trackers by way of channels. There's definitely a trade-off in terms of file size though - a 3-tap delay would use 3 channels, a chorus would use at least 2, and "manually" creating these effects would use more notes and thus more bytes. Likewise, if you want a long tail on a note, you do that by way of multiple channels. Same is true if you wanted, say, a pulse-saw or tri-saw kind of thing like you'd find in a normal synth.
If channels and voices are not 1:1 mapped, my initial feeling is to let the sound engine figure it out? But for persistence-based trackers (like FamiTracker), many effects "stay on" until you turn them off or change them. This can be really convenient but might complicate things a lot here. Likewise, there isn't typically a concept of a "voice engine" in a tracker. You can run out of channels, but never voices (ignoring Renoise here for the moment).
I'll have to ponder how all that'd work methinks. The takeaway is that such a concept, while pretty new for classic trackers, could be possible. It could also be something baked into the tracker - as in there might only be 8 channels instead of 16, but they can use multiple voices. Compared to SID trackers, this would be a pretty new concept. Personally I have no problem doing all that in pattern data and actually kinda like that level of expression, since I find it easier (compared to, say, automation lanes in Ableton), but that might be a question to posit to other tracker musicians - especially ones used to something like Renoise, which really blends the idea of a DAW with a tracker. I don't think there's quite enough horsepower on the X16 for something of that magnitude (and/or that might be beyond my skillset :P).
Effects
So here I was thinking of a few things. I was coming up with a concept of envelopes. For VeraSound, LR+vol and wave+PWM could use the same envelopes (they're the same precision) and these can be assigned to things in the instrument AND/OR changed in the pattern data by way of an Envelope Effect. I'm unaware of trackers that used this concept (other than Renoise but I think it handles it differently) but that could be neat.
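To make the envelope idea a bit more concrete, here's a rough sketch in Python (purely illustrative - the Envelope type and the 2-bit/6-bit packing are just my shorthand for the VERA PSG register layout, nothing from the actual format):

# Illustrative sketch: one envelope table can drive either the LR+volume byte
# or the waveform+pulse-width byte of a VERA PSG voice, since both pack a
# 2-bit field and a 6-bit field into a single byte.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Envelope:
    steps: List[int]           # one 6-bit value (0-63) per tick
    loop_start: Optional[int]  # index to loop back to, or None for one-shot

    def value_at(self, tick: int) -> int:
        if tick < len(self.steps):
            return self.steps[tick]
        if self.loop_start is None:
            return self.steps[-1]   # one-shot: hold the final value
        loop_len = len(self.steps) - self.loop_start
        return self.steps[self.loop_start + (tick - self.loop_start) % loop_len]

def pack_psg_byte(two_bit_field: int, six_bit_value: int) -> int:
    # e.g. L/R enable bits + volume, or waveform select + pulse width
    return ((two_bit_field & 0x03) << 6) | (six_bit_value & 0x3F)

# A simple volume decay, both output channels enabled (0b11)
decay = Envelope(steps=[63, 48, 36, 27, 20, 15, 11, 8, 6, 4, 3, 2, 1, 0], loop_start=None)
print(hex(pack_psg_byte(0b11, decay.value_at(5))))   # 0xcf -> L+R on, volume 15

The same Envelope could be pointed at either register (or assigned in the instrument, or swapped by an Envelope Effect in the pattern data) since the byte layout is identical.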
Then there's pattern-level "automation" by way of effects. These would be things like the ones listed here:
https://gitlab.com/m00dawg/commander-x16-programs/-/blob/master/command_tracker/effects/index.md
Of note, different trackers handled these differently. Impulse Tracker didn't have effect persistence, so if you wanted to do a volume slide down (C04) you would have to define it over and over in the pattern data. In FamiTracker, you "enable" it once and it continues to take effect until you turn it off. I think the latter is more efficient (probably why FT does that), but the composer has to remember a bit more.
Most of those above are common tracker effects, but how they are implemented could vary wildly. Under the covers they could be driving something like a mod wheel, or interacting more directly with the sound engine - the composer doesn't have to be any the wiser.
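As a rough illustration of the FamiTracker-style persistence I mean (Python sketch, with made-up effect names - nothing final): each channel remembers the last value set for an effect and keeps applying it every tick until it's changed or cleared:

# Illustrative sketch of FamiTracker-style effect persistence: an effect stays
# active on a channel until it is changed or cleared, so a volume slide only
# has to appear once in the pattern data.

class ChannelState:
    def __init__(self):
        self.volume = 63
        self.active_effects = {}            # effect name -> parameter

    def set_effect(self, effect, param):
        if param == 0:
            self.active_effects.pop(effect, None)   # zero parameter turns it off
        else:
            self.active_effects[effect] = param

    def tick(self):
        slide = self.active_effects.get("vol_slide_down")
        if slide is not None:
            self.volume = max(0, self.volume - slide)

chan = ChannelState()
chan.set_effect("vol_slide_down", 2)   # written once in the pattern
for _ in range(8):
    chan.tick()                        # keeps sliding every tick afterwards
print(chan.volume)                     # 63 - 8*2 = 47
chan.set_effect("vol_slide_down", 0)   # explicit "off" stops the slide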
Instrument Compatibility
Yes, I think sharing instruments is indeed possible, at least once the voice-to-channel concept is figured out. I wrote up some ideas of what an instrument would be here:
https://gitlab.com/m00dawg/commander-x16-programs/-/blob/master/command_tracker/file_formats/file_format.md
That's pretty early on and incomplete heh, but my envelopes concept included looping, so if one wanted an instrument to have a default vibrato, one could either set it in the channel data (noting that, if there is effect persistence, it can be set once at the beginning of the song) or use an envelope with a loop. It also probably makes sense to have a concept of instrument effect defaults, like pre-setting a vibrato value as one would in the channel data (so it works on any channel, not just the one with vibrato set). That would certainly be more efficient than using an envelope, I think.
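A quick sketch of what I mean by an instrument-level default effect (Python again, invented field names, not the actual file format from the link above) - the default is copied into the channel's effect state at note-on, exactly as if the composer had typed the vibrato command on that row:

# Illustrative sketch: an instrument carries a default effect (e.g. vibrato)
# that is copied into the channel's persistent effect state at note-on, as if
# the composer had typed it in the pattern on that row.

from dataclasses import dataclass, field
from typing import Dict

@dataclass
class Instrument:
    name: str
    default_effects: Dict[str, int] = field(default_factory=dict)

@dataclass
class Channel:
    active_effects: Dict[str, int] = field(default_factory=dict)

    def note_on(self, instrument: Instrument):
        # Defaults seed the channel state; effects typed on the same pattern
        # row (applied afterwards) can still override them.
        self.active_effects.update(instrument.default_effects)

lead = Instrument("wobbly lead", default_effects={"vibrato": 0x47})  # e.g. speed 4, depth 7
ch = Channel()
ch.note_on(lead)
print(ch.active_effects)   # {'vibrato': 71} on whichever channel the note lands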
A lot of these concepts come from how I'm used to using FamiTracker. To your point about multi-timbral voices, not all of it may actually apply on the X16 though. E.g. if all instruments were 2-voice, that's still 17 total channels across 3 sound engines.
It's dinner time so I gotta cut this short but this is a great conversation!
Author of Dreamtracker (https://www.dreamtracker.org/)
Check Out My Band: https://music.victimcache.com/
CommandTracker: Mostly ideas for a music tracker sparse file format
I've had this in my head for a while as to how multi-voice instruments could work... if there remains a concept of 1:1 channel/voice mappings, then I wonder if this could be implemented with channel skipping.
So let's say I have something like:
ROW| VERA01 | VERA02 | VERA03 | ...
## | -------+--------+--------+
00 | C-4 01 | --- -- | --- -- |
01 | ... .. | ... .. | ... .. |
02 | C-4 02 | C-4 03 | ... .. |
...
Sort of hard to represent in text, but basically if instrument 01 allocates 3 voices, we disable channels 2 and 3 (represented by the dashes, though in a multi-color tracker, we would probably just dim or highlight them to indicate they are in use). Then when we use instrument 02 on channel 1, if it's a single voice instrument, we know channels 2 and 3 are available and can be used for other things (as in the example).
It could even be possible to override one of the voices (say by placing a note on VERA02 at row 01 in the above example, where instrument 01 might have a long tail); the new note would take precedence. Likewise, if someone used a multi-voice instrument on channel 01 at row 00 and then put another instrument on channel 02 on the same row, channel 02 should override the voice.
So in other words, the UI could provide cues to channel usage when using multi-timbral voices, but the composer can still do as they please if they want to override voices.
This seems like it would be non-trivial to implement in the tracker UI, but it provides the ability to use complex instruments while preserving the lower-level nature of trackers, adding a lot of convenience AND saving space. It also puts voice overriding in the hands of the composer. Such a concept provides a lot of benefits - at the cost of unknown complexity (I'm currently not sure how complicated this would be to implement in practice, particularly on the X16).
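For what it's worth, here's roughly how I picture the allocation working (a simplified Python sketch, not real tracker code - the per-instrument voice count and the override rule are just the assumptions from the example above): a note claims its own channel plus the next N-1 channels, and any later note placed on one of those claimed channels simply steals that voice back:

# Simplified sketch of channel-skipping voice allocation: a multi-voice
# instrument claims its own channel plus the next N-1 channels, and a later
# note placed on a claimed channel overrides (steals) that voice back.

NUM_CHANNELS = 16

class RowState:
    def __init__(self):
        # owner[ch] = channel whose instrument currently drives ch, or None if free
        self.owner = [None] * NUM_CHANNELS

    def note_on(self, channel, voices):
        self.owner[channel] = channel                    # the composer's note always wins
        for extra in range(channel + 1, min(channel + voices, NUM_CHANNELS)):
            if self.owner[extra] != extra:               # don't steal an explicit note
                self.owner[extra] = channel

    def display(self):
        return ["free" if o is None else ("note" if o == ch else "used by %02d" % o)
                for ch, o in enumerate(self.owner)]

state = RowState()
state.note_on(0, voices=3)     # instrument 01: claims channels 0, 1 and 2
state.note_on(1, voices=1)     # a later note on channel 1 steals that voice back
print(state.display()[:4])     # ['note', 'note', 'used by 00', 'free']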
I could see this being HUGELY useful for things like delay. Handling that at the instrument level means we don't have to worry about manually wrapping the delay when we cross a pattern boundary.
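To illustrate the pattern-boundary annoyance with manual delay (tiny Python sketch, assuming 64 rows per pattern purely for the example): an echo placed a few rows after the source note can spill into the next pattern, which the composer would otherwise have to wrap by hand:

# Tiny sketch: an echo placed delay_rows after a note may land in the next
# pattern, so doing delay "manually" means wrapping the echoed notes by hand.
# (Assuming 64 rows per pattern purely for the example.)

ROWS_PER_PATTERN = 64

def echo_position(pattern, row, delay_rows):
    target = row + delay_rows
    return pattern + target // ROWS_PER_PATTERN, target % ROWS_PER_PATTERN

print(echo_position(pattern=3, row=62, delay_rows=6))   # (4, 4): spills into the next pattern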
I think it could be limiting in terms of effects though - maybe for multi-voice instruments we could assign effects to each voice. So in a two-voice instrument, effect column 1 affects voice 1, and column 2 affects voice 2. That's not ideal when we want to apply effects to both though (like pitch slides).
Anyways hopefully that all makes sense!
Author of Dreamtracker (https://www.dreamtracker.org/)
Check Out My Band: https://music.victimcache.com/
CommandTracker: Mostly ideas for a music tracker sparse file format
Was talking about the YM2151 situation on the Discord. It's an aside, but I'm hoping the Rev 3 board will move the YM chips to a daughter board which can be swapped with an FPGA board (like the VERA) when 2151s start to either fail in large numbers or get hard to find.
While discussing that, we got into how the C256 solved this, and that sent me down the rabbit hole of the C256 tracker that's being worked on. It seems to be based on RAD tracker, so I pulled up their file format (available at the bottom of this doc) and it's actually not too far off from what I was thinking.
The one addition is that it tracks byte counts for each pattern. I'm not sure why just from looking at the file definition. There could be a very real reason to do that - I just don't know it yet. It also uses a different "end of row / end of pattern" flag than I do.
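My best guess (and it's only a guess) is that, with a sparse format, patterns end up variable length, so a stored byte count lets the player seek straight to pattern N without decoding every event before it. Something like this hypothetical sketch (Python, made-up layout):

# Hypothetical sketch: if each pattern is stored as a length-prefixed blob,
# the player can seek to pattern N by skipping over the stored byte counts
# instead of decoding every (variable-length, sparse) event before it.

import io
import struct

def write_pattern(buf, event_bytes):
    buf.write(struct.pack("<H", len(event_bytes)))   # 16-bit byte count first
    buf.write(event_bytes)

def seek_to_pattern(buf, index):
    buf.seek(0)
    for _ in range(index):
        (length,) = struct.unpack("<H", buf.read(2))
        buf.seek(length, io.SEEK_CUR)                # skip without decoding
    (length,) = struct.unpack("<H", buf.read(2))
    return buf.read(length)

song = io.BytesIO()
for data in (b"\x01\x02\x03", b"\xaa\xbb", b"\xff"):
    write_pattern(song, data)
print(seek_to_pattern(song, 2))   # b'\xff'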
Author of Dreamtracker (https://www.dreamtracker.org/)
Check Out My Band: https://music.victimcache.com/
- kliepatsch
CommandTracker: Mostly ideas for a music tracker sparse file format
I have finally come around to publishing the CONCERTO synth engine, along with its source. (Link to GitHub in the description).
Now you can see why I think multi-voice instruments are a useful thing. Try out some of the timbres and the demo loops (buttons at the bottom).
CommandTracker: Mostly ideas for a music tracker sparse file format
Indeed, that sounds beautiful! I was having some trouble using the mouse when running it on my workstation (Linux) and haven't had time to look into that (it looks like an emulator / desktop environment issue - nothing to do with Concerto). I was able to use it via Try It Now, and though it did skip some, it was enough to hear the demos and make some nice juicy squares and super-saws. Very lovely!
I definitely agree, multi-voice instruments have some clear benefits. The challenge is figuring out how to manage that, as I'm not aware of a tracker that has had dynamic channels, and part of the power of a tracker is allowing the composer very precise control. I wouldn't want to take that away, but multi-voice instruments add a ton of convenience and avoid a lot of monotony and repetition (e.g. with single-voice instruments, there would have to be a lot of channel duplication and in-pattern effects to do similar things).
I like the idea of UI hints with the channel-skipping thing, but it might be better to simply have some sort of UI cue as to which voices are in use. That takes away a bit of flexibility (the composer would no longer know which voices they are about to clobber if they really want to try to use all 16 channels at the same time), but that's probably ok.
The # voices thing is probably not something I'll worry about anytime soon anyway - there's a lot to do in going from zero to a full tracker, and trying to do all that while learning assembly is a very tall mountain. Given some of the KickC conversations I thought about perhaps going that route, but learning 6502 is one of the main things I want to get out of the X16. Point is, this will probably be a long project.
Anyways fantastic results with your synth! Kudos for putting together such a rich sound engine in a very short period of time! It's quite wonderful! I took a quick look through the source code but will try to take a look in earnest later today.
Author of Dreamtracker (https://www.dreamtracker.org/)
Check Out My Band: https://music.victimcache.com/
- kliepatsch
CommandTracker: Mostly ideas for a music tracker sparse file format
Thanks! I am glad that you like what you have seen so far.
Maybe you should give yourself one or several smaller toy projects before tackling such a big project in 6502. That's how I've done it, anyway, and it helps with understanding other people's source code.
Anyway, if you are going to look at the source code, take a look at the Readme first. It will give you a rough idea of where to start looking for what.
Is the problem with the mouse that you cannot reach every area of the UI? If so, simply move the mouse across the whole window. This somehow resets the "mouse coordinate offset" or something, so that you can get everywhere.
I am not worried about the decoupling of the tracker channels from the hardware channels. Because the hardware PSG channels are identical, the user won't notice it. And in the cases where you actually want to keep using the same PSG channels, you can do that by using portamento (well, I'd have to check the details in the source myself). I like the idea of somehow monitoring how many of the PSG channels are in use at any time - similar to how Synth1 does it (one of the most iconic VST synths).
CommandTracker: Mostly ideas for a music tracker sparse file format
Yep, that's the plan: making a bunch of effectively useless programs that help me figure something out. I'm at the stage with assembly where it was a treat when I was able to simulate a pattern row count that updates with VSYNC. The next big thing is really figuring out the textmode VERA stuff. I started messing with that a while back and need to again. Some heavyweight things need to work well for a tracker (like scrolling a pattern on playback). I took a look at your UI code just a moment ago to try to figure some things out. Some of it was beyond my understanding, but I'll get there. I just learned a bit about jump tables yesterday haha.
On the note of the mouse, I'll have to play with it more. The problem is compounded by my desktop environment (Linux), where I have focus-follows-mouse set and the X16 isn't grabbing the mouse, so I can't easily "wrap". I thought the X16emu had a fullscreen mode but I didn't find it as a flag (I think it's a keyboard shortcut?)
You mentioned, "I am not worried about the decoupling of the tracker channels from the hardware channels. Because the hardware PSG channels are identical, the user won't notice it." Can you elaborate?
Normally in a tracker, the channels and voices are 1:1 but when supporting multi-voice instruments, things are now not strongly coupled. Which I think is worth it, but means you could "run out" of voices. So yep having a way to see which voices are in use makes sense. Many trackers have this sort of feedback, though since voices == channels in those cases, it is a bit different than here. Something as simple as the equivalent of LEDs on a real synth showing which voices are active I think would be sufficient.
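Something like this is all I have in mind for the "LED" feedback (Python sketch with invented names) - the playback engine already knows which hardware voices it has handed out, so the UI just renders those flags each frame:

# Sketch of "LED"-style feedback: the playback engine keeps an in-use flag per
# hardware voice, and the UI simply renders those flags each frame.

NUM_PSG_VOICES = 16

class VoicePool:
    def __init__(self):
        self.in_use = [False] * NUM_PSG_VOICES

    def allocate(self):
        for i, used in enumerate(self.in_use):
            if not used:
                self.in_use[i] = True
                return i
        return None                   # out of voices - caller decides what to steal

    def release(self, i):
        self.in_use[i] = False

    def leds(self):
        return "".join("#" if used else "." for used in self.in_use)

pool = VoicePool()
for _ in range(5):
    pool.allocate()
pool.release(2)
print(pool.leds())   # ##.##...........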
Another thing to ponder is what happens when you use a 3-voice instrument followed by a 1-voice instrument on the same channel. Previously, when voices == channels, the second note immediately took over (there was no release). This might seem like a drawback but is actually a very useful feature (if you want long release tails you just use multiple channels, no biggie). But with decoupled voices, this changes things.
Definitely A LOT to think about there. For now, I think I'll focus on the fundamentals and worry about these more complex problems later.
Author of Dreamtracker (https://www.dreamtracker.org/)
Check Out My Band: https://music.victimcache.com/
- kliepatsch
CommandTracker: Mostly ideas for a music tracker sparse file format
Well, this is the first time I've published a larger coding project and I don't know... If I compare it with e.g. @Stefan's code for X16-edit (which helped me a lot with the file stuff, btw), my code looks awfully messy. Anyway, I don't expect anyone to grasp what's going on after a few minutes of looking at it (especially in the gui.asm file). I tried to explain the overall concepts at the top, which are hopefully helpful.
Talking about the voice assignment is a bit too tedious for me on the phone right now; I might do that later.
CommandTracker: Mostly ideas for a music tracker sparse file format
1 minute ago, kliepatsch said:
Talking about the voice assignment is a bit too tedious for me on the phone right now; I might do that later.
Haha, yes, it's a potentially rich topic. I can't type more than a sentence or two before I get irritated at my phone.
This is a solvable problem for sure; it's just a question of the pros and cons of various solutions. E.g. when interfacing with MIDI instruments, Renoise doesn't cut notes off within a channel (so playing one note after another simply engages the release rather than a cut). That's one option here - the problem is that when you're playing notes right after each other, there's no way to control that. But perhaps this could be controlled via an effect parameter that dictates whether a channel will use note off or note release when playing the next note. That actually might be an elegant solution, thinking about it. So, having a "Mode" effect (Mxy): say setting M10 uses note release and M11 uses note off.
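A quick sketch of how that Mode effect could behave at note-on (Python, and the M10/M11 values are just the ones from the paragraph above, nothing final):

# Sketch of the proposed "Mode" effect (Mxy):
#   M10 -> a new note lets the previous one run through its release phase
#   M11 -> a new note cuts the previous one off immediately (classic behaviour)

RELEASE, CUT = 0x10, 0x11

class Channel:
    def __init__(self):
        self.mode = CUT        # classic tracker behaviour by default
        self.sounding = []     # notes currently audible on this channel

    def set_mode(self, param):
        self.mode = param

    def note_on(self, note):
        if self.sounding:
            if self.mode == CUT:
                self.sounding.clear()                    # hard cut, voice freed now
            else:
                print("releasing", self.sounding[-1])    # tail keeps sounding
        self.sounding.append(note)

ch = Channel()
ch.note_on("C-4")
ch.set_mode(RELEASE)   # M10 in the pattern data
ch.note_on("E-4")      # prints "releasing C-4" instead of cutting it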
Author of Dreamtracker (https://www.dreamtracker.org/)
Check Out My Band: https://music.victimcache.com/
CommandTracker: Mostly ideas for a music tracker sparse file format
4 hours ago, m00dawg said:
I was having some trouble using the mouse when running it on my workstation (Linux) and haven't had time to look into that (it looks like an emulator / desktop environment issue - nothing to do with Concerto).
That's a general issue with the emulator's mouse support. Nothing you can do about it from inside the X16 code. What I find helps is to drag the mouse across the screen until it stops at the edge, then bring it back across all the way to the other side, then back to the middle. From there, do the same in the vertical direction, and the mouse should be synced with the emulator.