My work here is (badly) done.
I keep coming back to Champagne but the vessel part is throwing me off.
To err is human; to really elephant it up you need a computer.
I get to bouteille as a French vessel for wine or perfume, and it's 9 letters, but that has nothing to do with a musical third note (mediant), which I'm assuming is part of an anagram I can't decipher. In other words: I'm stuck.
@GregUtas
Where's the CCC?
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
"Common sense is so rare these days, it should be classified as a super power" - Random T-shirt
AntiTwitter: @DalekDave is now a follower!
He's struggling with ambisinistry.
Software rusts. Simon Stephenson, ca 1994. So does this signature. me, 2012
I'm working on complicated MIDI wizardry for IoT gadgets, so I can make MIDI "smart" pedals and controllers and such.
Playing a multitrack MIDI file without reading it into memory is a bear.
I have just not been able to get this code right.
The trouble is that each MIDI event has a delta attached to it, which is the offset in "MIDI ticks"* from the previous event.
*A MIDI tick is a fixed time duration based on the tempo and timebase.
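For reference, a quick sketch of that arithmetic under the standard PPQ interpretation (tempo stored as "microtempo" = microseconds per quarter note; timebase = ticks per quarter note). The function name here is just illustrative, not the sfx API:

```cpp
#include <cassert>
#include <cstdint>

// Microseconds per MIDI tick for a PPQ (pulses-per-quarter-note) file:
// microtempo is microseconds per quarter note, timebase is ticks per
// quarter note, so one tick lasts microtempo/timebase microseconds.
double us_per_tick(int32_t microtempo, int16_t timebase) {
    return (double)microtempo / (double)timebase;
}
```

At the default 120 BPM (500,000 µs per quarter note) with a timebase of 480, a tick works out to roughly 1042 µs.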
With a multitrack MIDI file, each track has its own sequence of events, and the deltas are all relative to that track.
However, in order to play them, you must merge all the tracks into one event stream, adjusting the deltas.
The actual adjusting of the deltas isn't so bad, but the logic to figure out when to pull from which track - I'm not even sure I have it right yet, because my code has other issues.
My point in all this is that MIDI is an early-1980s protocol, and multitrack MIDI isn't exactly brand spanking new. Sequencers with scant amounts of RAM were doing this.
I feel like in so many ways MIDI was designed to make it possible to do things on little devices without much RAM.
But for this particular operation - in C# I just merged the tracks in memory before playing them. I can't afford the RAM or the CPU to do that here. I have to stream everything.
And I've convinced myself I'm overcomplicating things.
I hate when I do that - it means I have tunnel vision, and/or am missing something big and important.
I don't like knowing that I don't know something I need to know, you know? It bugs me, like a song that's stuck in my head I can't remember the entire hook to.
To err is human. Fortune favors the monsters.
honey the codewitch wrote: However, in order to play them, you must merge all the tracks into one event stream, adjusting the deltas.
Why? Why not have a 'pre-stream' of events that are the absolute times of the next 'event' of each track? Then you just keep polling the track with the next playable event until its 'time' falls behind the next track with an upcoming playable event. Poll that one until ...
That's essentially what I do as far as the deltas. My pre-stream is n contexts, where n is the number of tracks. I use those to pull events out in the right order.
To err is human. Fortune favors the monsters.
Isn't this effectively a merge sort from N sources?
Each source is already sorted. I admit my ignorance as to how you calculate your context.
But I assume that EITHER you have an incoming stream of N contexts pre-sorted (which would make outputting it more trivial as you output it in the order it arrives).
Or you have an incoming stream with N tracks, where each track is at a specific offset, but the magic is that while track 1 has a small delta, track 3 could have an excessively large delta, so it is played at the right time. And you might not hear from track 3 for some time.
I often find it helpful to imagine how they implemented the player at the hardware level.
For my take, I would write something that played only 1 of N contexts correctly...
Then I would look hard at how to implement simply adding a second context to that. Based on how the data shows up.
Because by the time you get to the third or fourth, I think you usually have a decent approach.
The other comparison I would make is a multiplexer: is this similar to CDM or TDM (code- or time-division multiplexing)?
Another comparison is stereo broadcasting, where you get a sum signal (L+R) and a difference signal (L-R); taking (L+R) + (L-R) = 2L and (L+R) - (L-R) = 2R recovers the two channels.
But they did it that way to take advantage of simpler hardware.
I remember learning from that example that coding stuff versus building components you have to think differently about what is easy/hard.
==
Finally, your problem reminds me of a Computer Engineering Class, where we built circuits that were run through a simulator. The simulator used a queue design, where "events" would trigger through the queue, and the simulator was able to be fast, because it ignored the timing signals, allowing it to "not wait" any time before processing an item. (I got in trouble in the class, because I wrote obscenely inefficient but SIMPLE code, reducing the homework to a TRIVIAL problem, avoiding the timing issues others were busy coding around).
Anyway, I could envision a queue that manages N queues of inputs, and only when you take off an item do you go to the stream to pull in another item.
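One way to realize that "queue managing N queues" idea is a min-heap keyed on absolute time; the names here (`scheduled`, `scheduler`) are hypothetical, not anyone's actual code:

```cpp
#include <cassert>
#include <cstdint>
#include <functional>
#include <queue>
#include <vector>

// An entry in the scheduler: when it fires, and which source it came from.
struct scheduled {
    uint64_t when;   // absolute time, e.g. in MIDI ticks
    int source;      // index of the input queue/track it came from
    bool operator>(const scheduled& o) const { return when > o.when; }
};

// Min-heap on 'when': pop the earliest event, then (as suggested above)
// go back to that source's stream to pull in its next item and push it.
using scheduler = std::priority_queue<scheduled,
                                      std::vector<scheduled>,
                                      std::greater<scheduled>>;
```

Seed it with one entry per source; after each pop, push the popped source's next event, so only N entries are ever in the heap.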
Kirk 10389821 wrote: Finally, your problem reminds me of a Computer Engineering Class, where we built circuits that were run through a simulator. The simulator used a queue design, where "events" would trigger through the queue, and the simulator was able to be fast, because it ignored the timing signals, allowing it to "not wait" any time before processing an item. (I got in trouble in the class, because I wrote obscenely inefficient but SIMPLE code, reducing the homework to a TRIVIAL problem, avoiding the timing issues others were busy coding around).
Anyway, I could envision a queue that manages N queues of inputs, and only when you take off an item do you go to the stream to pull in another item.
You're describing the problem pretty well, and I actually solved it last night. I won't paste the implementation here, but here is how I use it with a queue q.
Forgive the grotty code; it's just test stuff I've been banging on.
#ifndef ARDUINO
#include <sfx_midi_file.hpp>
#include <sfx_midi_clock.hpp>
#include <sfx_midi_utility.hpp>
#include <queue>
#include <stdio.h>
#include <math.h>
using namespace sfx;
// Dump the file header plus a hex preview of each track
void dump_midi(stream* stm, const midi_file& file) {
    printf("Type: %d\nTimebase: %d\n", (int)file.type, (int)file.timebase);
    printf("Tracks: %d\n", (int)file.tracks_size);
    for(int i = 0; i < (int)file.tracks_size; ++i) {
        printf("\tOffset: %d, Size: %d, Preview: ", (int)file.tracks[i].offset, (int)file.tracks[i].size);
        stm->seek(file.tracks[i].offset);
        uint8_t buf[16];
        size_t tsz = file.tracks[i].size;
        size_t sz = stm->read(buf, tsz < 16 ? tsz : 16);
        for(size_t j = 0; j < sz; ++j) {
            printf("%02x", (int)buf[j]);
        }
        printf("\n");
    }
}
// Dump a single message in human-readable form
void dump_midi(const midi_message& msg) {
    switch(msg.type()) {
        case midi_message_type::note_off:
            printf("Note Off: %d, %d\n", (int)msg.lsb(), (int)msg.msb());
            break;
        case midi_message_type::note_on:
            printf("Note On: %d, %d\n", (int)msg.lsb(), (int)msg.msb());
            break;
        case midi_message_type::polyphonic_pressure:
            printf("Poly pressure: %d, %d\n", (int)msg.lsb(), (int)msg.msb());
            break;
        case midi_message_type::control_change:
            printf("Control change: %d, %d\n", (int)msg.lsb(), (int)msg.msb());
            break;
        case midi_message_type::pitch_wheel_change:
            printf("Pitch wheel change: %d, %d\n", (int)msg.lsb(), (int)msg.msb());
            break;
        case midi_message_type::song_position:
            printf("Song position: %d, %d\n", (int)msg.lsb(), (int)msg.msb());
            break;
        case midi_message_type::program_change:
            printf("Program change: %d\n", (int)msg.value8);
            break;
        case midi_message_type::channel_pressure:
            printf("Channel pressure: %d\n", (int)msg.value8);
            break;
        case midi_message_type::song_select:
            printf("Song select: %d\n", (int)msg.value8);
            break;
        case midi_message_type::system_exclusive:
            printf("Sysex data: Size of %d\n", (int)msg.sysex.size);
            break;
        case midi_message_type::reset:
            if(msg.meta.data == nullptr) {
                printf("Reset\n");
            } else {
                int32_t result;
                const uint8_t* p = midi_utility::decode_varlen(msg.meta.encoded_length, &result);
                if(p != nullptr) {
                    printf("Meta message: Type of %02x, Size of %d\n", (int)msg.meta.type, (int)result);
                } else {
                    printf("Error reading message\n");
                }
            }
            break;
        case midi_message_type::end_system_exclusive:
            printf("End sysex\n");
            break;
        case midi_message_type::active_sensing:
            printf("Active sensing\n");
            break;
        case midi_message_type::start_playback:
            printf("Start playback\n");
            break;
        case midi_message_type::stop_playback:
            printf("Stop playback\n");
            break;
        case midi_message_type::tune_request:
            printf("Tune request\n");
            break;
        case midi_message_type::timing_clock:
            printf("Timing clock\n");
            break;
        default:
            printf("Illegal message: %02x\n", (int)msg.status);
            while(true); // halt the test so the bad message stays visible
    }
}
using midi_queue = std::queue<midi_stream_event>;
int main(int argc, char** argv) {
    midi_clock mclock;
    static const char* def = "data\\sonata.mid";
    const char* sz;
    if(argc < 2) {
        sz = def;
    } else {
        sz = argv[1];
    }
    file_stream fstm(sz);
    midi_file f;
    midi_queue q;
    sfx_result r = midi_file::read(&fstm, &f);
    if(sfx_result::success != r) {
        printf("Error opening file: %d\n", (int)r);
        return (int)r;
    }
    dump_midi(&fstm, f);
    fstm.seek(0);
    midi_file_source msrc;
    // Shared state handed to the clock's tick callback
    struct mstate {
        midi_clock* clock;
        midi_queue* queue;
        midi_file_source* source;
    };
    mstate st;
    st.clock = &mclock;
    st.queue = &q;
    st.source = &msrc;
    mclock.tick_callback([](uint32_t pending, unsigned long long elapsed, void* state) {
        mstate st = *(mstate*)state;
        // Drain every queued event whose absolute time has come due
        while(true) {
            if(st.queue->size()) {
                const midi_stream_event& event = st.queue->front();
                if(event.absolute <= elapsed) {
                    // Meta event 0x51 is Set Tempo; feed it to the clock
                    if(event.message.type() == midi_message_type::meta_event && event.message.meta.type == 0x51) {
                        int32_t mt = (event.message.meta.data[0] << 16) | (event.message.meta.data[1] << 8) | event.message.meta.data[2];
                        printf("Set tempo to %f\n", midi_utility::microtempo_to_tempo(mt));
                        st.clock->microtempo(mt);
                    }
                    printf("delta: %lli - ", (long long)event.delta);
                    dump_midi(event.message);
                    event.message.~midi_message();
                    st.queue->pop();
                } else {
                    break;
                }
            } else {
                break;
            }
        }
    }, &st);
    r = midi_file_source::open(&fstm, &msrc);
    if(sfx_result::success != r) {
        printf("Error opening file: %d\n", (int)r);
        return (int)r;
    }
    mclock.timebase(msrc.file().timebase);
    mclock.start();
    while(true) {
        midi_event e;
        // Keep the queue topped off, but never more than 16 deep
        if(q.size() >= 16) {
            mclock.update();
            continue;
        }
        sfx_result r = msrc.receive(&e);
        if(r != sfx_result::success) {
            if(r == sfx_result::end_of_stream) {
                mclock.update();
                if(!q.size()) {
                    printf("Exiting\n");
                    break;
                }
                continue;
            } else {
                printf("Error receiving message: %d\n", (int)r);
            }
            printf("Exiting\n");
            break;
        } else {
            q.push({(unsigned long long)msrc.elapsed(), e.delta, e.message});
            printf("queue size: %d\n", (int)q.size());
            mclock.update();
        }
    }
}
#endif
To err is human. Fortune favors the monsters.
Even older than MIDI: mergesort on multiple mag tapes. The issue sounds awfully similar.
Software rusts. Simon Stephenson, ca 1994. So does this signature. me, 2012
It does. It really does, assuming those mag tapes are interleaved, like the "RAIDed" kind.
To err is human. Fortune favors the monsters.
They use butterflies.
"In testa che avete, Signor di Ceprano?"
-- Rigoletto
From what I recall, having done some MIDI stuff in the days before time, on a 1 MHz 8-bit CPU with 4K of RAM, the data stream was at 31.25 kbaud.
[insert math equation here]
Which was one tick every 10 milliseconds, all 16 channels combined. So all I had to do was preprocess everything in less than 10 ms.
Nothing succeeds like a budgie without teeth.
To err is human, to arr is pirate.
Yeah, it wasn't really the speed that was my problem. It was the difficulty of streaming MIDI file tracks while merging them, without loading more than I absolutely had to into RAM at once.
I got it working. It only keeps N messages in memory at a time, where N is the number of tracks. That's about as good as it gets, I think.
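A minimal sketch of that shape, with hypothetical names (`pending_event`, a `next` callback) rather than the sfx API: hold exactly one pending event per track, already converted from track-relative deltas to absolute ticks, emit the earliest, and refill only that track.

```cpp
#include <cassert>
#include <cstdint>
#include <functional>
#include <vector>

// One pending event per track, held as an absolute tick time.
struct pending_event {
    uint64_t absolute;   // track-relative deltas accumulated into absolute ticks
    int note;            // stand-in for a full MIDI message
};

// next(track, out) is a hypothetical callback yielding a track's next event
// (with its delta already accumulated into an absolute time), or false at end
// of track. Only the chosen track is refilled, so at most N events are in
// memory at once, where N is the number of tracks.
std::vector<pending_event> merge_tracks(
        int track_count,
        std::function<bool(int, pending_event*)> next) {
    std::vector<pending_event> head(track_count);
    std::vector<bool> live(track_count);
    for(int i = 0; i < track_count; ++i)
        live[i] = next(i, &head[i]);
    std::vector<pending_event> merged;
    while(true) {
        int best = -1;   // track holding the earliest pending event
        for(int i = 0; i < track_count; ++i)
            if(live[i] && (best < 0 || head[i].absolute < head[best].absolute))
                best = i;
        if(best < 0) break;                    // all tracks exhausted
        merged.push_back(head[best]);
        live[best] = next(best, &head[best]);  // refill only that track
    }
    return merged;
}
```

A real player would emit each event as it is chosen rather than collect them in a vector; the vector just makes the sketch testable.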
To err is human. Fortune favors the monsters.
I think you're overcomplicating things. You could use a simple class for each track. Initialise it with the track length, byte offset/state, and a getNextByte(offset/state) callback function. Implement getNextEventTime(), and getNextEvent(). Now you can get the minimum getNextEventTime() of all tracks, and then getNextEvent() for any track that matched that minimum time. Calling getNextEvent() will update that track's next event time. Rinse & repeat. Rinsing is optional.
How do I know which track to pull an event from next? That's where it gets weird.
To err is human. Fortune favors the monsters.
From whichever tracks have the next time equal to the minimum-next-time of all tracks:
void playNextNotes()
{
    // Find the minimum next-event time across all tracks
    int nextTime = MAX_INTVAL;
    for (auto &track : tracks)
    {
        if (track.getNextTimestamp() < nextTime)
            nextTime = track.getNextTimestamp();
    }
    waitUntil(nextTime);
    // Play every track whose next event falls at exactly that time
    for (auto &track : tracks)
    {
        if (track.getNextTimestamp() == nextTime)
        {
            auto event = track.getNextEvent();
            midi.playEvent(event);
        }
    }
}
I got it all working last night.
The trick was in implementing your "getNextEvent()" method correctly. (I don't call mine that, but same-o same-o.)
To err is human. Fortune favors the monsters.