Archive for November, 2010

Pornotracker

Posted in Uncategorized on November 30, 2010 by pcedev

Gotta give props to ‘thefox’ for not only his tracker, but this awesome cover:

Never cared for the original song, but damn – ‘pop’ makes great chiptunes 😀 Haha, bass pumping from the NES too. If you get a chance, download the tracker itself (it comes with the song). It sounds better than the youtube video.


Composite Waveform Synthesis

Posted in Audio on November 27, 2010 by pcedev

Hahaha. I’ve now coined that term, for lack of a better one, to describe a method of sound synthesis that works by compositing waveforms together. This isn’t additive synth, nor subtractive synth.

I’ve been studying every single method of synthesis that I can get info on. And there are quite a handful of them out there. I originally had the idea of trying to simulate subtractive synth on the PCE. Yeah, the idea sounds crazy, but hear me out. The very basics of subtractive synth have to do with controlling the timbre of a waveform over time, at a low frequency. This is done with a controllable filter (analog voltage-controlled filter or digital) that dulls a sound over time (or the reverse).

Now, you can simulate this with wavetable synth (not sample-based synth like mod/xm). And Bloody Wolf for the PCE tries to do this, with its brute force method. So I figured, why not use one waveform to subtract characteristics of another waveform, over a slow period of time? The idea sounds simple enough, and I had meant to try it out some time ago.

How does it work? Well, the amplitudes of two or more waveforms… add. Waveforms are signed. There are negative values. If you play a waveform that’s the inverse of a normal waveform, at the same time, same frequency, and same amplitude – they will cancel each other out. Pretty simple stuff. But what happens if you play an inverted waveform… that’s just slightly different? Every corresponding sample of the inverted wave that matches will be canceled out. For samples that line up but aren’t exactly inverted – the difference will be left over. Moreover, if you apply a volume envelope to the ‘subtractive’ waveform – you’ll slowly (relatively speaking) reduce/morph the waveform from its original form to the final subtracted form – with the leftover samples all to themselves.
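Here’s a rough sketch of that cancellation in plain Python (not PCE code – the waveform values are made up, just to show the math):

```python
# Sketch: per-sample mixing of two signed waveforms.
# An exact inverse cancels; a near-inverse leaves only the difference.

def mix(a, b):
    """Sum two waveforms sample by sample (hardware mixing does the same)."""
    return [x + y for x, y in zip(a, b)]

carrier = [0, 3, 6, 7, 6, 3, 0, -3, -6, -7, -6, -3]
inverse = [-s for s in carrier]

# Exact inverse at the same frequency/amplitude: total cancellation.
print(mix(carrier, inverse))          # all zeros -> silence

# Near-inverse: zero out one sample of the inverse, and only that
# one carrier sample survives in the mix.
near = inverse[:]
near[2] = 0
print(mix(carrier, near))             # only index 2 is non-zero
```

Swap the handful of unmatched samples around and you control exactly which part of the carrier is left standing.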

Hopefully this makes sense to you. It’s simple stuff, but you probably need a little background on waveforms, amplitudes, and mixing.

So, you now have a transition process. This gives you a timbre control system. Now for the more important details. For the sake of clarity, we’ll name the main waveform/channel the ‘carrier’ and the modifying waveform the ‘composite’. The carrier’s waveform should be defined in a single polarity range. I use the positive side… because it looks appropriate. But it doesn’t specifically matter. The composite waveform should be in the negative polarity range (all negative values). Values that don’t modify the corresponding samples of the carrier’s waveform are set to 0 (whatever the 0 line is on your system; on the PCE it can be $0f or $10). So 0 to max negative will subtract from the carrier’s sample in that slot/index position.

This is a little confusing to visualize. If you had a saw and you wanted to clip just the last part of the point/tip, you could keep all the samples leading up to it at the 0 line, then have an inverse of that peak in the negative range. If both the carrier channel and the composite channel were at max volume, you’d get the clipping you defined. By itself, this really is useless. You could just draw a waveform that looks like that. So, you create a special envelope to control the volume of the composite channel’s waveform. If you wanted a slow, somewhat drawn out change in timbre, you’d have a very slow attack ramp up (relatively slow, because attack is usually very fast). For sustain, you’d most likely want a ping-pong loop point to keep the sound/timbre wavering up and down. Then whatever style of exit ramp for key off.
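To make the envelope part concrete, here’s a little Python sketch (made-up values again – on the real hardware the ‘volume’ would be the composite channel’s volume register, driven by the envelope):

```python
# Sketch: morphing a carrier by ramping the volume of a negative-only
# 'composite' waveform played on a second channel.

carrier   = [0, 2, 4, 6, 8, 6, 4, 2]      # defined in the positive range
composite = [0, 0, 0, 0, -6, 0, 0, 0]     # negative-only: clips the peak

def output(volume):
    """Mixed result at a given composite-channel volume (0.0 .. 1.0)."""
    return [c + volume * m for c, m in zip(carrier, composite)]

for vol in (0.0, 0.5, 1.0):               # a crude slow 'attack ramp'
    print(vol, output(vol))
# At volume 0.0 you hear the raw carrier; at 1.0 the peak sample is
# fully subtracted. The timbre morphs smoothly in between.
```

A ping-pong loop on `volume` during sustain gives you the wavering timbre described above.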

I could have called this ‘subtractive’ waveform synthesis. But there’s another side to this. You can also do the opposite of subtracting or dulling a waveform. You can exaggerate specific areas of it, or create new spikes and such. Even at the same time. A much more capable morphing effect. This is why I chose ‘composite’. And the term is already out there, to describe layering of different waveforms to create more harmonics in a waveform. So it fits perfectly.

The last thing I want to add is that detune really has an incredible effect here. It really brightens the timbre and adds its own wavering effect (which is already pretty common for layered channels).

Ok, this is the last thing I want to add. If you have a really bright sound and the timbre control is fairly aggressive, you’ll hear it in parallel with other notes. More specifically, chords. This is important, because the PCE only has 6 sound channels (jerks!). You can do a 3 note chord while only using 4 channels – not all six. That leaves two channels left over for other stuff.

Finally, here’s an example: http://www.pcedev.net/audio/chord_scale.wav . This is 4 channels, 3 notes, timbre control for one of the notes. Plays through one octave of notes, to the start of the next.

If it were this easy, wouldn’t everybody be doing it? That’s the catch. Not unlike other synths, the setup is pretty touchy. Even small differences/errors in the composite waveform can give you artifacts. Artifacts not in the sense of aliasing or such, but buzzy-sounding. You have to massage both the carrier and the composite waveforms to get that perfect smooth sound. Not too soft, because the timbre variance will be small, and not too large, because the timbre variance will be buzzy or tinny. Finding just the right sound in between took me quite a bit of time.

Anyway, that’s the basics of it. Believe me, it does get a little more complicated than that. I didn’t go into detail about turning detune on, then off (and maybe multiple times during the length of the sound to make it richer) and where to put it. The good thing is, once you find a good spot, right speed, right amount, etc. of the detune stuff – it scales for you when you play notes up and down from the base note you tuned the instrument define/sound to. The bad news is, if you go too far out – you’ll have to come up with a scaling system for these parameters – or do multiple defines for different octave ranges. The choice is yours. That assumes anyone reading this blog will even attempt this, let alone on the PCE. Very few people actually do anything audio related on the PCE.

Edit: Oh yeah, I forgot to mention that you need to sync both channels’ waveform pointers (and down counters). I’ve already posted which regs do that in a previous blog entry 😉

SuperGrafx Strider

Posted in Uncategorized on November 22, 2010 by pcedev

Wow, never seen this bit of info here before:

Props go to Team Andromeda for posting this scan. I’ve seen a fair amount of sites/forums with small pieced-together info about the SGX port, among other things like theories and speculation. But I’ve never seen an article where someone working on the ACD port stated that they did in fact start developing Strider for the SGX. Not even unofficially, and this is pretty much official. Yeah, magazines had pics – but those haven’t been deemed exactly credible. Some people argued that the port never even existed and that magazines were stretching ‘rumors’. Claiming the pics were fakes. Anyway, I thought this was a cool admission about the SGX development of the famous long-lost port. People speculated, magazines reported tiny teasing bits of info, but now someone is officially quoted. I can move on now.

VCE notes

Posted in Uncategorized on November 17, 2010 by pcedev

From tests I’ve done with the mid res dot clock, I’ve noticed some things. First off is the color burst signal. This resides at the start of the line. There’s a toggle where every other scanline has the color subcarrier signal out of phase. Of course, this matches the out of phase color burst signal so that it’s still aligned. This means the errors from the Y to C frequency ratio are in phase only every other line. The phase difference is exactly 1 pixel too, in 7.159mhz mode. This is why my checker board dither pattern didn’t get alternating color errors on every line going down the screen. It remained constant, even though all the dithering in my test image was offset by 1 pixel every other scanline.

Now, this was with H-filter off. When H-filter is on, the toggle mechanism still works the same. But with one small difference. When H-filter is on, there are now 263 scanlines. This translates into the next frame having the color subcarrier offset reversed for the scanline layout. Then on the next frame, it’s back to the original again. And so on, and so forth.

Basically, it takes 30hz to correct errors in the composite signal. Scrolling at anything above 30hz breaks the pattern/filter. My test image, the cat from MooZ’s forum post, is full of checker board dither all over the place. So when you turn *on* the h-filter in mid res mode, there’s massive flicker (I’m not doing anything to the image). Because there are huge areas where interference between Y and C is being corrected by the filter. No good. The image looks best with the H-filter turned off. But… you need to take caution to adjust/make sure you’re in the right phase first. Again, this will probably require user validation on a test pattern, since you’ll never know where you are in the phase toggle/setup. Else, your image will be tinted either to the red side or the blue side. The red side looks best IMO.

Another observation: the comb filter on my capture card seems to remove the artifact from simple luma changes across a wide (scanline) pattern – max white to dark color (I noticed this with my capture card + hardware comb filter on + Ccovell’s screen test demo). It turns it to solid gray when the comb filter is on. But it doesn’t affect my test patterns of color dithered images, though I don’t have any direct white/black dithering patterns. And, because of the phase errors – you can’t simply mix colors and assume they’ll average like that. There’s a ‘tinting’ involved (depending on which phase of the VCE you’re on). It’d take a bit of work to calculate what colors you can come up with. But the more important feature out of all of this is transparency. In mid res mode, you’ll get a nice solid transparency mask with any sort of checker board overlay. More solid than what Genesis games attempt to do in similar fashion.

Video

Posted in Uncategorized on November 16, 2010 by pcedev

A few months back, I talked about the h-blur technique. This involves shifting every other scanline over 1 pixel, then reverting back, alternating between the two interleaved sets of scanlines. Rinse/repeat. But during some tests, I found some strange artifacts with the PCE’s mid res mode.

I was looking over some CoCo 3 stuff the other week (from some light web surfing). Came across this article/post by Potatohead (you might know him from the Atari 8bit scene). Long story short, he created a 256 color mode for the CoCo 3. It only works over NTSC video (composite or RF). But the idea behind it was pretty interesting. When you have exact multiples of the color subcarrier frequency, you get specific bleed over into the chroma field. When it’s not an exact multiple, you get rainbow artifacts instead (see Genesis, NES, etc). It works because the pixels are higher resolution than what the frequency can sustain. When the frequency of Y is not an exact multiple of C, you can use filters to comb through and get back some of the resolution of Y. But this isn’t the case when you have exact multiples of C.

The CoCo 3 has a 4x multiple mode of C. Potatohead uses 4 shades of grey at a 14.318mhz dot clock. The patterns accumulate and bleed over into the chroma channel. By using shades other than grey, you can ‘tint’ those 256 colors. Other systems have used this artifact as well: CGA/Apple/CoCo 1&2. Pretty cool. But how does this apply to the PCE? Easy. The PCE has a resolution that’s an exact multiple of C as well. The mid resolution (dot clock 7.159mhz). If you turn off the h-filter bit in the VCE, and use two pixel groups – you get near-perfect solid colors from those two pixels. Matter of fact, on my NTSC set (a new one at that) it IS perfect. Same on my HD set. And same on my NTSC capture card.
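You can check the multiple relationship with simple arithmetic (these are the standard NTSC figures; the PCE dot clocks are derived from the same master clock):

```python
# The NTSC color subcarrier, by definition (exact rational form).
subcarrier = 315e6 / 88            # ~3.579545 MHz colorburst

mid_res = 7.15909e6                # PCE mid-res dot clock
low_res = 5.36932e6                # PCE low-res dot clock

print(mid_res / subcarrier)        # ~2.0: two pixels per chroma cycle
print(low_res / subcarrier)        # ~1.5: no whole pixel group lines up
```

With exactly two pixels per subcarrier cycle, a two-pixel group always covers one full chroma cycle, which is why the pair blends into one solid artifact color.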

Now, what might come to mind… is the Genesis. It’s known for its crappy video output (at least for NTSC). But the Genesis isn’t an exact multiple of the chroma frequency. It’s something like 1.87x the frequency. So you see a rainbow effect because it’s out of phase. The PCE mid res mode pretty much has the Genesis high res mode beat – hahaha. Of course, you need to have the h-filter turned off.

The other thing that I noticed is that the phase of the accumulation is not the same every time. Just like the CoCo 1/2, where you had to reset the system until you saw the correct red/blue pattern – to continue along playing the game. The PCE phase offset is also random. But you can control it. By turning the h-filter on, and then off, you can have the user/person cycle through until they see the correct pattern. There might be an even better way to automate this, but only if you can force it to a specific phase first (reset it).

Ok, so what are the benefits? A larger master palette for one (4096 colors if my math is right). No flicker at all (this isn’t a color cycle thing). You aren’t restricted to only using pairs of colors. If you were, you’d be limited to a 172×224 resolution. You can restrict dither to areas of an image where you need color over resolution, horizontally.

The 10.74mhz and 5.37mhz dot clocks will *not* have this property that the 7.159mhz dot clock has. You’d figure 10.74mhz would be similar, since the resolution is so high, but it’s not. And with really good filters, you can pull the higher resolution in Y back out of the signal. My capture card does an amazing job of this.

The down side? It won’t work over RGB. It also has no effect on PAL systems. But let’s be frank, RF and composite are the only official outputs supported by the systems. Yes, RGB was technically available on the back plane of the original core systems – but it needed amps and was never officially made available. Not to mention, how are you going to tap RGB when you have a CD addon? Not without opening up the system. The later systems, the Duo units – which are more or less considered the main systems (replacing the core systems) and probably sold more units – completely lack any external means to tap the RGB lines. It requires opening up the system. PAL systems were only released as an experimental test market, and no PAL software was ever officially made (the Finnish demo group that did the NES images ran them on PAL-modified PCEs). Therefore, I think it’s safe to assume this isn’t a ‘special case’ scenario when using this mode. There’s enough of a reason why someone could use this, without paying attention to the naysayers (of course, that’s if you’re targeting the real machine, not emulation or such). And it’s not like the Genesis, where RGB *was* supported *and* used. Any NTSC artifacting used in those games was completely nullified by RGB setups. Good luck trying to get any emulators to support this type of artifacting though. I don’t know of any PCE emulators that even have NTSC filters (yet Kega has it for Genesis emulation. I guess enough games needed it to warrant its support).

Also, on another note; I think I got my h-blur demos wrong. That’s to say, the dithering pattern was wrong. It was checker board for a static image, but that turns each frame into vertical bars (because of how the dither pattern on every other line lines up). I guess the proper method would be to *not* phase shift the checker board dither by 1 pixel every other line. That way, it would appear as checker board on ALL frames (less flicker).

Edit: Oh, I also forgot to mention. If you turn off the color burst when using this artifacting res, you also get a much larger greyscale range.

Batman

Posted in Audio on November 15, 2010 by pcedev

Here’s the example bass guitar player. Play through the notes with the gamepad (directions on screen). In main.asm are the tables and include files for the samples. The TIMER section should be pretty self explanatory, as is ‘note_play’. I left all the samples as a single block, because it’s easier that way. All the samples and support tables are address independent (but you do need to include them in sequential order, like in my example, in order to use the tables). That means you need to add (OR in) the page offset when loading the pointer. I also added a generic envelope system (delay & decay) to make the samples sound more appropriate.

More thoughts on audio..

Posted in Audio on November 15, 2010 by pcedev

A few weeks back, I did a mini music player for the 8k compo. I converted the Unreal SuperHero 3 song to this mini format. It’s not particularly impressive in sound, but I’m proud to have gotten the size down as small as it is. 4300bytes before compression, ~2300bytes after compression. Includes song data and all samples.

But that’s not what I’ve been thinking of lately. I’ve been revisiting some ideas I’ve previously had but never really explored. Hard sync was one of them. When I originally did hard sync testing, I was turning the channel and volume on/off – which isn’t the best idea for non-A revisions of the 6280. After having done audio tests with Ryphecha of Borg… err, Mednafen, I was pretty excited to learn of new behavior of the audio chip – that works on both revisions and without additional sound artifacts.

Here’s a brief introduction to hard sync. The simplest way to make a musical note sound is to have a ‘carrier’ output a tone that resonates at a specific frequency. An oscillator and its waveform output. With hard sync, you have a second oscillator that runs at a different frequency from the ‘carrier’. Instead of outputting a sound, this oscillator ‘resets’ the carrier’s waveform position whenever the hard sync oscillator overflows. There are two ways to set up hard sync. One is to have the hard sync OSC slightly below the frequency of the carrier; this will cause it to drift in and out of phase. You get a bending of the carrier’s sound. You can attach low frequency envelopes to ‘sweep’ or slowly increment/decrement the frequency of either oscillator (the hard sync one, or the carrier, or both). Because of the harshness of the resetting, certain types of waveforms are more forgiving of the aliasing of hard sync. Newer synths and PC soft synths correct the sudden change, but older synths did not. Hard sync was fairly popular with older synths. It’s even used in the SID chip of the C64.

Now, the second setup of hard sync is a little more complex. A note is a waveform that resonates at a specific frequency. It’s the speed of the output divided by the size of the cycle (waveform length) that gives the final note frequency. If you run the OSC for the hard sync modulator at a higher frequency than the ‘carrier’ OSC, then the carrier OSC never has a chance to finish or complete its waveform cycle. The result is that the hard sync OSC basically becomes the new carrier frequency control. Because it now directly controls the length of the waveform, by resetting it earlier than it should. You have a new waveform or cycle size; speed/cycle = note frequency. Here’s the interesting part: if you keep the hard sync OSC fixed, but slightly change the carrier frequency (at a low rate like 20-30hz) – you change the timbre of the sound. This is also where other waveforms come into play. A waveform like sine now gives much better results than either saw or square.
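Here’s a toy Python model of that second setup (made-up waveform, nothing PCE specific – just showing how the sync period takes over as the pitch control):

```python
# Sketch of hard sync: a silent 'sync' oscillator resets the carrier's
# waveform position each time it completes its own cycle.

def hard_sync(carrier_wave, carrier_step, sync_period, n_samples):
    """Render n_samples; the carrier phase wraps early whenever the
    sync oscillator (period given in output samples) overflows."""
    out, phase = [], 0.0
    for i in range(n_samples):
        if i % sync_period == 0:      # sync OSC overflow -> reset carrier
            phase = 0.0
        out.append(carrier_wave[int(phase) % len(carrier_wave)])
        phase += carrier_step
    return out

saw = list(range(-8, 8))              # a crude 16-sample sawtooth
# Sync period (10) shorter than the carrier's natural cycle (16): the
# sync OSC now sets the pitch, the carrier step only changes the timbre.
print(hard_sync(saw, 1.0, 10, 20))
```

Nudge `carrier_step` up or down at an LFO rate and the repeating 10-sample cycle keeps its pitch while its shape (timbre) changes.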

How does this relate to PCE audio? There’s no such hard sync hardware, true. But there is something you can do to emulate it. It requires the 7khz TIMER interrupt. Use a 16bit phase accumulator with the whole 16bit value as the radix, and the overflow as the sync trigger. Phase accumulators are dead easy to do. I’ve used them to do sample-based synth on the PCE. The PCE audio hardware itself provides the carrier. You provide a frequency envelope for both the soft OSC and the PCE’s frequency/period of the channel. The rest should be obvious. Congrats, you now have single channel timbre control. But there is a drawback (isn’t there always?). 7khz is a bit coarse, relatively speaking. The phase accumulator takes care of keeping the oscillation in tune by carrying the remainder over after rollover, but that doesn’t mean you won’t get some artifacting. Since the base frequency of the PHA is 7khz, the result is obviously going to be a bit dirty. To me, it’s worth the trade off to create a more synthy sound. And the resource usage is pretty light (no samples to fetch and write, no banks to map, etc). Remember, you’re not going to be playing notes at 7khz, hell not even at 2khz. C-6 is a pretty damn high note, and that’s a 1.045khz frequency. The 7khz is there for the phase accumulator. You don’t need to play samples at each hard sync OSC overflow, nor in between. That’s pretty damn great, IMO.
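A quick Python sketch of the phase accumulator idea (the ~7khz TIMER rate here is approximate, and this is just the math – the real thing runs inside the TIMER interrupt handler and pokes the channel registers):

```python
# Sketch: a 16-bit phase accumulator clocked at the ~7khz TIMER rate.
# The carry out of 16 bits is the hard-sync trigger; keeping the
# remainder after rollover is what keeps the oscillator in tune.

TIMER_HZ = 6992                # approximate PCE TIMER interrupt rate

def accumulator_ticks(freq_hz, n_ticks):
    """Yield True on each tick where the accumulator overflows."""
    step = round(freq_hz * 65536 / TIMER_HZ)     # 16-bit fixed-point step
    acc = 0
    for _ in range(n_ticks):
        acc += step
        overflow, acc = acc > 0xFFFF, acc & 0xFFFF   # carry + remainder
        yield overflow

# One second of ticks at a 440hz sync frequency:
resets = sum(accumulator_ticks(440.0, TIMER_HZ))
print(resets)                  # ~440 resets/sec, i.e. the sync OSC pitch
```

Each overflow is where you’d reset the PCE channel’s waveform pointer; the `& 0xFFFF` remainder is the carry-over that stops the pitch from drifting.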

The above is pretty solid in approach. But there’s more. There are other synth techniques out there for creating unique and controllable timbre. One of them is wavetable synthesis. Not to be confused with sample-based synthesis (which is what XM/mod/SPC and such use). Wavetable synth is basically a system similar to the PCE’s audio. You play back small/short waveforms to create a tone. But the difference is that you change the waveform out over time. A low frequency envelope controls this. It’s a means to control the timbre of the sound. You can emulate all kinds of sounds in this manner, including some FM sounds and even some complex filter sweep sounds. Look up PPG on youtube to get an idea of how this sounds. On the PCE, this would be easily doable if it weren’t for one small thing… you don’t know where the PCE’s channel waveform pointer is – at any point in time. With the exception of when you reset it. I’ve probably mentioned this before, but Bloody Wolf attempts to emulate wavetable synthesis. But it doesn’t know the waveform pointer position, so it just resets it and uploads a new waveform. The drawback is a slight click sound at whatever frequency the low frequency envelope is swapping out channel waveforms. The sound, or instrument, in particular for Bloody Wolf – is the trumpet sound. It’s a sharp sawtooth waveform that gradually changes into a soft-edged sawtooth. The low frequency clickyness makes the sound a bit dirty, but it seems to be more forgiving as a trumpet model. That, and there are a lot of other sounds being played at the same time to help mask this.
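A tiny Python sketch of the wavetable idea (made-up 8-sample waveforms – the slow envelope just steps through precomputed in-between waveforms, the way Bloody Wolf’s trumpet morphs from sharp to soft):

```python
# Sketch of wavetable-style timbre control: a low-frequency envelope
# steps through a table of short waveforms, swapping the channel
# waveform out over time.

sharp = [-8, -6, -4, -2, 0, 2, 4, 6]      # bright sawtooth
soft  = [-4, -3, -2, -1, 0, 1, 2, 3]      # dulled sawtooth

def blend(a, b, t):
    """Intermediate waveform between two table entries (t in 0..1)."""
    return [round(x + t * (y - x)) for x, y in zip(a, b)]

wavetable = [blend(sharp, soft, i / 3) for i in range(4)]
for wave in wavetable:        # the slow envelope walks the table
    print(wave)
```

On real hardware each step is an upload of the next waveform to the channel; the click comes from resetting the waveform pointer on every swap.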

The convenient thing for us is that when you use hard sync with the cycle-driving OSC (higher frequency than the actual carrier), you always know where the waveform pointer is. Oh… how nice, ehh? What a perfect opportunity to change out the waveform – if you so wanted to. So not only could you control timbre by speeding the carrier frequency up or down, but now you also have a convenient way to update the waveform. Can you say timbre+frequency filter simulation? I can 😉

Hard sync and/or wavetable synth not your thing? Well, let me introduce you to LA synth. No, it’s not some stupid brand name product or fitness place or even a new shoe. It’s Linear Arithmetic synthesis. The name is totally misleading. The most important part of the idea behind LA synth is that the attack phase of an instrument is the most complicated part to create or replicate. With this idea in mind, the creators of LA synth came up with a system that uses PCM samples for the attack phase, and real time synth for the rest. This idea can work well for the PCE. It, too, very much lacks in the attack department of an instrument sound. The attack part of an instrument sound is also very short. This is nice in that it doesn’t take as much space. There’s also another convenient aspect about the attack phase… being that it’s short, there’s almost never any pitch modulation or sliding during this phase. This means you can have a complex attack part for an instrument, then switch over to the normal PCE channel waveform output – and still be able to do vibrato, detune, frequency slide, and slide to note. That’s pretty important. The down side is that the instrument building process becomes more complex… like a synth. Instead of just tweaking a sound/tone until you find something you like – you take more of a modeling approach. And this unfortunately requires much more experience than simple tone+vol envelope+vibrato and such. Here’s an interesting fact: Batman for the PCE uses a very similar technique to build the bass guitar sound. That’s the output of just the sample channel of Batman. It has the attack part of the guitar, and the loop/tone phase of the guitar. But in that game, they use samples for both (they have a loop point for the DDA sample). There isn’t a need for note sliding for such bass playing, so they aren’t limiting themselves. A nice strong pluck sound there too. Notice the almost complete absence of ‘noise/static’ from the sample too.
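The attack+loop split is easy to sketch in Python (made-up data – in Batman’s case both parts are DDA sample data with a loop point, while the LA-style approach would hand the loop phase over to the channel’s waveform output):

```python
# Sketch of the LA-synth idea: a short, complex PCM attack spliced onto
# a simple looped tone waveform for the sustain phase.

attack = [0, 9, -6, 12, -9, 7, -4, 5]    # short one-shot attack sample
loop   = [0, 4, 6, 4, 0, -4, -6, -4]     # simple looped tone waveform

def render(n_samples):
    """Attack plays once; the loop waveform repeats for the rest."""
    out = list(attack)
    while len(out) < n_samples:
        out.extend(loop)
    return out[:n_samples]

print(render(20))    # attack first, then the loop repeating
```

Since pitch effects only happen after the attack, vibrato/slides can all be done on the looped portion without touching the PCM part.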
I’ve ripped all the bass guitar samples with all the corresponding loop points. I’ll provide them, as well as some playback code, when I get a chance.