Original Post


With the VB20 event–or Virtualfest or Virtual Half-Decade or whatever we’re gonna call it–coming up a few months from now, I’ve decided it was high time to roll up my sleeves, squeeze out my very best effort, and see what I can do to knock the ball out of the park. It’s Guy Perfect overdrive from here on out, and I’m going to do everything in my power to make sure the gettin’s good. (-:

This thread exists as a newsletter of sorts, chronicling my progress and dumping all manner of demos into your lap between now and the main event. I’ll have something or other to present each and every month, though I can’t promise I’ll always update on the 1st. But hey, let’s get this off on the right foot!

[size=24px]Into the Mainframe

My original project idea for last year’s Virtual Fall event was a game. A new game. The design was coming along quite swimmingly, but then I realized in no uncertain terms that I am not an artist. And that was a bit of a problem, since I kinda needed some art for my game. So I couldn’t make that game. That was when F-Zero entered the picture, then exited the picture due to technical difficulties, and then came a third project that I didn’t finish designing in time. And that’s why I didn’t have a Virtual Fall submission!

Things are a bit different this round. A co-worker of mine has an astounding graphical ability and he’s agreed to help me with the artwork for my game. So I’m gonna make the game this time, using stick figures if I have to until the super-awesome graphics are ready. And the desktop wallpapers, maybe some posters… We’ll see. I’m seriously considering an original soundtrack album too.

I don’t want to spill all the beans about exactly what the game is and what it’s about, since that would ruin the surprise. There will be plenty of time for hype later on. What I can say for now is that even though the literal gameplay might place you in mountains or forests or cities or whatever, the greater lore of the game universe is actually a highly technical one, inspired by the Virtual Boy itself. There are many abstract concepts from the tech and IT industries being incorporated into the theme and placed in a lush world of critters and scenery, and the way everything meshes together… well, I for one find it fascinating. I hope you guys will too. (-:

[size=24px]Fun For All

Oh, and did I mention all of the assets are being made available to the public? That’s right: the program source code, the concept art, all source graphics and all tools developed for the project are being made available for download and royalty-free modification when the Virtual Boy celebrates its big two-oh.

It’s being developed with devkitV810, which is dasi’s little pride and joy, though I haven’t seen him stop by these parts lately. Maybe he’ll wander in sometime in the next ten months. I’d like to get the compiler and libraries out to the world before the VB20 event.

So with that all out of the way, let’s get this show started. As I said, I’ll be making an announcement and presentation every month, so stay tuned!

27 Replies

[font=Arial][size=24px]August 2014

It was on the 15th when it was brought to my attention that the Virtual Boy is turning 20 years old next year. I decided I had to do something to commemorate the event, and my thoughts turned back to my first Virtual Fall project. Surely with a whole year to get my thoughts and plans together, and a coworker with a keen eye, I’d be able to do a little better job this time around.

Well, I was right! So far. I mean, I’ve been working non-stop on this for a mere week and a half, but I’ve kinda surprised myself with the amount of work I’ve accomplished. Unfortunately, I didn’t get to the point I wanted to be at, which would have involved attaching a Virtual Boy ROM to this post showcasing what I’ve created. So that’s gonna be another week or so.

But until then, have I ever got something to tell you guys…

[size=24px]Utopia Audio

Remember how I mentioned I was working on a third Virtual Fall project after the F-Zero thing went bust, but didn’t have time to finish the design? Well, I’ve finished the design. And implemented it. It now exists, and is running on my Virtual Boy. But alas, I still can’t quite demonstrate it yet. Let me explain…

Planet Virtual Boy has seen its fair share of tools and utilities pass through the forums over the moons, but one thing that’s a bit less common is audio stuff. For Virtual Fall, I figured I could make a sound and music engine, then release the code for all to enjoy. I didn’t finish it in time. Heck, I didn’t finish designing it in time. So no audio libraries during Virtual Fall.

I’ve got it now, though. Two weeks after VB’s birthday #19, I finalized the specs and put together a couple of libraries that I’m quite proud to present to the Virtual Boy scene. There’s still that matter of them being designed for use with devkitV810, though. Really gotta find dasi. |-:

Utopia Sound

This one’s a mixer engine that exposes all of the VSU’s hardware capabilities to the application while at the same time managing state information for sound contexts.

The basic architecture of the mixer is that there are “sound” instances configured in memory that are linked with an application-defined callback function to be run each audio frame. The callback function defines the behavior of the sound, such as a sound effect or music note, and doesn’t suffer from the usual limitations of trying to pack every conceivable use into a file format.

When processing, the mixer will maintain a record of which sounds have priority on which hardware channels, and manage VSU registers accordingly. For instance, sounds with higher priority on a channel will interrupt but not stop sounds of lower priority on the same channel. Individual sounds can configure their own local registers all willy-nilly without worrying about stepping on the toes of the actual hardware and the sounds allotted to it.
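To make the priority rule concrete, here’s a minimal C sketch of how such channel arbitration might look. The struct layout and function names here are hypothetical, not the actual Utopia Sound API — only the described behavior (highest priority wins the channel; losers keep running but don’t touch the hardware) is taken from the post:

```c
#include <stddef.h>

#define NUM_CHANNELS 6 /* the VSU has six output channels */

/* A hypothetical sound context: the callback drives the sound each
   audio frame, and the mixer decides which sound owns each channel. */
typedef struct Sound Sound;
struct Sound {
    int   active;              /* is this sound currently playing?   */
    int   channel;             /* hardware channel it wants          */
    int   priority;            /* higher values win the channel      */
    int (*callback)(Sound *);  /* application-defined behavior       */
    void *data;                /* application state for the callback */
};

/* Pick the sound that currently owns a channel: the active sound with
   the highest priority. Lower-priority sounds keep processing (their
   callbacks still run), but only the winner reaches the VSU registers. */
Sound *channel_owner(Sound *sounds, size_t count, int channel) {
    Sound *winner = NULL;
    for (size_t i = 0; i < count; i++) {
        Sound *s = &sounds[i];
        if (!s->active || s->channel != channel)
            continue;
        if (winner == NULL || s->priority > winner->priority)
            winner = s;
    }
    return winner;
}
```

When the high-priority sound ends, the next call to `channel_owner()` naturally falls back to the interrupted sound, which is what lets it “interrupt but not stop” lower-priority sounds.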

Utopia Music

My little pride and joy, this is a music engine that is modeled as a sort of lightweight version of a non-VB project I’ve been planning for years and years. While I won’t say that it provides comprehensive support for Virtual Boy musical needs, I will say that it goes above and beyond, and I’m really itching to get to play with it.

Music data is structured into event lists, which are maintained by a tracker. Music notes are one of an assortment of event types, with others being things like sound changes, tempo modifications or calling other event lists. All active event lists are processed concurrently, providing dynamic and versatile scheduling of music information.
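As a sketch of what an event list might look like in memory — the encoding below is hypothetical (the real Utopia Music format is bit-packed and more compact), but the shape matches the description: a tag selects the event type, and the tracker walks each active list concurrently:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical event types mirroring the ones named in the post */
typedef enum {
    EV_NOTE,    /* play a music note                  */
    EV_SOUND,   /* switch to a different sound        */
    EV_TEMPO,   /* modify the master tempo            */
    EV_CALL     /* call (begin) another event list    */
} EventType;

typedef struct {
    EventType type;
    uint32_t  when;   /* when the event fires (beats or milliseconds) */
    uint32_t  a, b;   /* type-specific parameters                     */
} Event;

/* An active event list instance is just a pointer and a position;
   the tracker keeps one of these per list it is processing. */
typedef struct {
    const Event *events;
    uint32_t     length;
    uint32_t     position;  /* index of the next event to fire */
} EventList;

/* Fetch the next event and advance, or NULL when the list is done. */
const Event *next_event(EventList *l) {
    if (l->position >= l->length)
        return NULL;
    return &l->events[l->position++];
}
```

Because an `EV_CALL` event would simply activate another `EventList` instance, the “all active lists processed concurrently” behavior falls out of keeping an array of these little cursors.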

Envelopes can be defined to gradually modify pitch, panning, volume and tempo over time, or individual literal values can be used directly. An instance of an event list has its own panning and volume features, which recursively apply to all notes that the list generates. Identical envelopes across events can be used as presets, such as piano-like intensity graphs, reducing the size of the data.

Taking a step out of the ordinary, the engine provides two time types for all of its time-based features–beats or realtime–and they can be used independently and simultaneously by different events. When something is scheduled with beats, the time is relative to the master musical position and, ultimately, the tempo. When something is scheduled in realtime, on the other hand, it will process for some fixed number of milliseconds, at the artist’s discretion. Why is that useful? Well, you ever hear those super-fast arpeggios used by demoscene chiptune artists? Oh yes, that’s precisely what I have in mind.
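The two time types boil down to one conversion: beats go through the tempo, realtime doesn’t. A quick sketch in integer math (the fixed-point beat unit here is an assumption for illustration; the engine’s actual internal units aren’t specified in the post):

```c
#include <stdint.h>

#define BEAT_UNITS 16  /* sixteenths of a beat (an assumed fixed-point unit) */

/* Convert a duration in fractional beats to milliseconds at a given
   tempo. This is the only place tempo matters -- realtime events
   bypass this entirely and just count milliseconds. */
uint32_t beats_to_ms(uint32_t beat_units, uint32_t bpm) {
    /* one beat lasts 60000 / bpm milliseconds */
    return (beat_units * 60000u) / (bpm * BEAT_UNITS);
}

/* A realtime arpeggio: how many notes fit in a fixed-length chord? */
uint32_t arpeggio_notes(uint32_t total_ms, uint32_t note_ms) {
    return total_ms / note_ms;
}
```

So a full beat (16 units) at 120 BPM is 500 ms regardless of what the realtime events are doing, and a 2-second chord arpeggiated at 25 ms per note yields 80 notes — the demoscene-chiptune trick described above.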

Music notes are scheduled directly into the Utopia Sound mixer. In fact, there is a Utopia Music library function designated as a sound callback for use with mixer processing. It’s really quite a lovely development of one library leveraging the capabilities of another. (-:

The whole kit and kaboodle is designed to reduce data size and minimize memory usage. Decoding the file format to the point where the entire thing can be represented only requires 28 bytes in RAM–everything else can subsequently be directly accessed from read-only data. The read-only data itself makes use of redundancy elimination techniques, bit-packed data and in some cases variable-size data fields all in the interest of slimming down the file size without adding any processing overhead.

[size=24px]The Composer

This is why I don’t have a presentation. In order to actually make music to go with this awesome engine I put together, I’ll need some software designed for that purpose. And most of my week and a half went towards engine development itself, with only the past three days being dedicated to composer development. So even though I was able to hand-craft data files in a hex editor, no composer means no music, and no music means no presentation.

Among other things, I had to convert the entire Utopia Audio suite to Java (my preferred GUI platform) and emulate the VSU for preview and playback purposes, and I’m now in the process of creating the user application to tie it all together.

Just today I started working on the GUI, and when it became apparent that there was no way I’d have it ready to go for the August presentation, I decided to put it down, stretch my back, take a break and actually do some laundry and clean up around the apartment like I haven’t done for almost two weeks now. I tells ya, Planet Virtual Boy, I’m devoted to the cause!

While the data back-end is solid, the composer’s GUI isn’t far enough along to make for any good screenshots. So instead, here’s some concept art I drew of it!


Wow, this is awesome! Definitely looks like you’ve been busy. I’m very excited for… well, all of this, really. Not only does the game idea sound really cool, the whole Utopia Audio suite you’ve designed sounds very beneficial to the community. I’m definitely going to have to check it out once it’s available!

There will be many who will benefit from using your engine. The community really needs something like this, especially if it’s open source. ;)

One feature request: importing of MIDI files. There are already a lot of excellent MIDI sequencers out there (some commercial-grade), and I for one don’t relish the thought of learning a new composing tool. 😉

I can’t think of any reason importing from MIDI can’t be done, at least on a “this track plays these notes” level, but the structure of the Utopia Music file isn’t really well-suited for copying MIDI content wholesale because it’s designed to work a bit more efficiently…

Take a drum loop for example. In MIDI, you have drum notes, then the same drum notes again, and again and again and again and so-forth, throughout the entire file. Utopia lets you define a drum loop once, then play it just once, but with a duration so the engine automatically repeats it again and again and again for as long as you desire.

I guess what I’m trying to say is that composing in MIDI and then converting to Utopia poorly utilizes Utopia’s features, in a way that yields a terrible return on data size. A typical Utopia music note is 12 bytes, and an event list call is 16 bytes. The drum notes themselves notwithstanding, this means you can do your percussion in 28 bytes for the entire soundtrack. But if every single instance of every percussion note needs to be encoded as a 12-byte data structure, that’s going to add up quick.
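The arithmetic is easy to make concrete using the sizes quoted above (12-byte notes, 16-byte event list calls). This little sketch compares a naively transcribed drum loop against a looped event list call — the function names are just for illustration:

```c
#include <stdint.h>

/* Byte counts quoted in the post */
#define NOTE_BYTES 12   /* one encoded music note  */
#define CALL_BYTES 16   /* one event-list call     */

/* Cost of a drum pattern of n notes played r times when every single
   hit is encoded as its own note event (the MIDI-conversion case). */
uint32_t naive_bytes(uint32_t notes_per_loop, uint32_t repeats) {
    return notes_per_loop * repeats * NOTE_BYTES;
}

/* Cost when the loop is defined once and played via one event-list
   call with a duration, letting the engine repeat it automatically. */
uint32_t looped_bytes(uint32_t notes_per_loop) {
    return notes_per_loop * NOTE_BYTES + CALL_BYTES;
}
```

An 8-note drum pattern repeated 100 times comes to 9,600 bytes transcribed note-by-note, versus 112 bytes as a defined loop plus one call — which is the “terrible return on data size” in a nutshell.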


Attached to this post is a VB ROM that tests Utopia’s millisecond timing, looped event list calls and event list envelopes. It basically just uses a single channel to play C major while fading out and then back in. Press the A button to hear it.

The entire UM file is 152 bytes, but that includes header information like an identifier and version ID, the 32 samples for wave memory… The actual tracker data is only 76 bytes (including the note definitions). How big is that, really? Well, the following quoted sentence is 76 characters: “Hello, my name is Ultra Dude and I like donuts with colorful sprinkles atop.”

C major is being played on a single channel thanks to a fast arpeggio, where it plays each constituent note for 25 milliseconds and just repeats it over and over for 2 full seconds. That means a total of, uh… 80 music notes are being played.

In other words, for a two-second chord, UM was able to schedule more music notes than there were bytes in the data scheduling them.


Could you scan a MIDI file and convert it to a more efficient structure for your program to use?

Most programmers might not be as well versed as you at creating music from scratch, and importing a MIDI file into your program to use on the Virtual Boy might be an easier way for them to have music in their game.


This sounds awesome! And impressive that it’s so memory efficient! Is it equally cpu efficient? Say you have a game that is already struggling to perform at a decent framerate, how much will playing a song of average complexity bog it down?

I haven’t done any bench testing on it, but I can say that the CPU usage isn’t a big impact. I made use of integer operations exclusively, and function pointers where appropriate, to simplify the execution path the CPU needs to take.

For example, one of the event types is a simple operation, such as a = b + c. The tracker itself has a few bytes of memory allocated to it for the purpose of giving the music some variables to play with, and you can literally write a program inside the music file by coupling operations with the “goto” event type (which itself will not activate if the most recent operation result was zero). This kind of thing can be useful if, for instance, you want the music to sound slightly different while under water…

But I digress. Accounting for each of the operation types (both signed and unsigned), we’re looking at 27 different possible operations in the current spec (add, less than, right shift, etc.). Each operation has a numeric code, and that code is used directly as the index into an array of function pointers. So instead of saying “if code is this do this, or if code is that do that”, it just says “perform function [i]n[/i] on this data”.

The event list processor works the same way. Each event type has a numeric code, and each is indexed in an array of function pointers. When it's all said and done, there isn't much at all that needs to be done even when there [i]is[/i] something to do. In practice, most audio frames will not have any new tracker actions and will take very little CPU in total.
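Here’s a minimal sketch of that dispatch pattern in C. The opcode names and the variable/goto mechanics are hypothetical stand-ins (the real spec has 27 operations), but the structure is the one described: the code indexes straight into a function pointer table, and a “goto” only activates when the most recent operation result was nonzero:

```c
#include <stdint.h>

/* A tiny subset of hypothetical operation codes */
enum { OP_ADD, OP_SUB, OP_SHR, OP_COUNT };

typedef int32_t (*OpFunc)(int32_t, int32_t);

static int32_t op_add(int32_t b, int32_t c) { return b + c; }
static int32_t op_sub(int32_t b, int32_t c) { return b - c; }
static int32_t op_shr(int32_t b, int32_t c) { return b >> c; }

/* The opcode IS the array index: no if/else chain, just one call. */
static const OpFunc ops[OP_COUNT] = { op_add, op_sub, op_shr };

/* The result is remembered so a following "goto" event can test it. */
static int32_t last_result;

int32_t run_op(uint8_t code, int32_t b, int32_t c) {
    last_result = ops[code](b, c);
    return last_result;
}

/* A "goto" event only activates when the last result was nonzero. */
int goto_taken(void) {
    return last_result != 0;
}
```

The event list processor would use the identical shape with a second table keyed by event type, which is why “there isn’t much at all that needs to be done” even on frames that do have work.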

Hooray! A proper tracker for the VB has been a dream of mine. I was hoping to have an infinite amount of time so that I could try and learn to make one for Virtual Fall, but I just didn’t have anywhere near the time nor knowledge. :/

Anyway, the canonical tracker that I can think of is LSDj for GameBoys. This link is a bit of a walkthrough by one of my favorite chiptune artists, rainbowdragoneyes.

I’ve always thought it would be great to try and be a Virtual Boy chiptune artist. 😀 Maybe when you’re done I’ll be able to!

Sounds like a great piece of kit all around, though. Thanks for your hard work! I’m super excited to see how it progresses!

Edit:// Just in case anyone’s actually interested in the song during the tutorial, it’s Creatures ov Deception. 😀

jrronimo wrote:
I’ve always thought it would be great to try and be a Virtual Boy chiptune artist. 😀 Maybe when you’re done I’ll be able to!

Well this puts things into perspective. Thank you for the comment. (-:

This past week has been kind of humdrum. I hadn’t met my September 1 goal of having a composer ready, and as I worked through my full-time job during the week, I wasn’t especially motivated to just crank out code like I’d been doing–mostly because I didn’t have a solid plan for all the little details of the composer like I did for the music and sound engines. It really wasn’t all that productive a week.

On the other hand, I spent much more time on Saturday than I expected on what I thought would be a small little feature. The Samples window allows you to create wave patterns to use on the audio channels. The composer itself operates in a high-precision mode that uses floating-point values for everything and a very high sampling rate. In the case of waves, by default it uses 250 samples (compared to the VB’s 32), but the size is configurable. If nothing else, you can generate a list of sample values externally and paste the numbers into a text box, and it will read them in as wave data. It will also convert them to the corresponding Virtual Boy values (and display them in place of the high-precision values if you desire), and it supports importing and exporting as well as hand-drawing waves with the mouse… I guess it shouldn’t have been a surprise that it would take as long as it did. (-:

Either way, faced with the fact that every user interface element will be a project in its own right, I’m still coming to terms with the realization that in order to do this right, it’s going to take more time than I wanted to spend on it. How much time? I really can’t estimate it at this point, but I’m reeeeeeeeeally hoping to formally have the composer done by month’s end.

But what you just said right here–the thing I quoted above? That makes it all worth it. It’s people like you whom this project is for. Not for me specifically, and it’s not for the community to stare at and congratulate with a thumbs-up and a golf clap. So instead of spending my remaining hour tonight breeding a perfect-IV Clauncher that knows Entrainment, I spent it loading up a sheet of paper with my signature soft-blue ink and window designs.

I’ll be keeping in mind what you said. And now, I can’t wait to get back to work on this tomorrow once I get home. (-:

Guy, is there any way to get some of the tools you’ve been talking about? Could you make even devkitV810 available? I know gccvb is compiling at higher versions and I’d like to get a hold of that.
Also, do you have libVUE integrated into devkitV810?

I’d like to get devkitV810 out there, but it’s dasi’s project and I don’t want to put it out until he feels it’s ready. Problem is, I can’t get ahold of him. |-:

libvue isn’t packaged with devkitv810 just yet, but the plan is to put it in there for the formal release. Even without that, it’s just two files: a .S assembly file and a .h header. And a .html for documentation. (-:

For the record, devkitV810 isn’t based on the same code branch as gccvb. dasi modified it himself to make a compiler for both Virtual Boy as well as PC-FX, which it does to a T. It’s still got a few outstanding issues (standard libraries aren’t linking correctly, for instance), so I’d really rather not make it available without his blessing.

A keen observer will notice that it’s been more than a month since I’ve had anything to say. (-:

The reason for that is simple, and one I didn’t think would hold up the works quite like it did: I’m a perfectionist (surprise!). One of my habits when making software is to just tear the whole darn thing down and start over from the beginning, using what I learned and innovated last time in the new design. The upside is that the result is always really slick. The downside is that I start over a lot, and that takes hecka time.

Another problem is that my co-worker hasn’t really delivered. He went and got himself fired due to attendance issues and I haven’t heard from him since, meaning he took the character design plans with him. (-: Oh well, that’s how things go. So I’m not going to make such a push to have a game ready for VB20, but I do still want to make the game if someone wants to help with character and graphic design, so I won’t spill all the details of the project just yet.

Anyhoo, back to starting over. If I’d been keeping up with the “post a month” theme regarding the music engine and composer project, it would have been three months’ worth of “Okay, I know I said this is where I was last month, but it’s better now!” soooooo, that’s why I haven’t said exactly that. However, I feel quite firmly that I’ve got the specifics down to the point that I won’t really be making them any better through further revisions, so I’m ready to make an announcement…

As a Christmas gift to the Virtual Boy community, I’ll do everything in my power to have the music project completed in its entirety, and with the level of quality that I aim for. And then, if no other project comes up, I’ll at least work towards producing a digital album that runs on the Virtual Boy for the VB20 event. These goals are in the right level of ambition for me, and I’m confident that I’ll deliver on everything this paragraph promises. You’re invited to hold me to that. (-:

I just… need to finish the new Pokémon game first. And play some more of the new Super Smash Bros. game. So gimme a couple more days, please. (-:

Wow, I can’t believe I missed all this talk about sound and music engines. And you say it’s tracker-based? That’s awesome, trackers are totally what I use already to make music! (If you need any feedback on the GUI design, I’ll be on the ready. 😉 )

Looking forward to it seeing it! 😀

[font=Arial][size=24px]The Month of Sound

That’s right! Now that I completed the new Pokémon game and accidentally discovered that the big green thing can one-shot the twisted triangle thing, I’m ready to make that Virtual Boy Christmas gift a reality!

Starting today, and over the next four weeks (it’s 28 days ’til Christmas), I will not only be developing the promised audio architecture and composer for Virtual Boy software, but I will be chronicling the technical details here in this thread in the spirit of furthering understanding for the general public!

I will be supplying “due dates” to each post here, as though it were a school assignment and I have to get my homework done on time. Provided I don’t encounter any real-life catastrophes or other unexpected happenings, I should be able to stick to my posted schedule.

Let’s get this show on the road. Grab a seat, heat up some popcorn, and get ready to learn.


This project consists of three major components: the sound/music engine specification, the C library that will drive the Virtual Boy hardware, and the Java program that will be used to produce the music tracks in question.

Why Java? Because everyone can use it. I could go with C via SDL or somesuch, but no matter how you slice it, that’s gonna take some kind of third-party library, and Java is both trusted and secure, which makes it the natural choice. Plus, it’s going to be way easier for other people to peer into the Java code. We won’t get a bunch of questions along the lines of “How do I OpenGL” or whatever.

My first swing at the project started with the sound library, then the music specification, then the music library, then finally the composer. When I got to the composer stage, I realized I’d have to re-implement pretty much everything I’d implemented for the hardware… again. And when I started doing that, I realized that certain tweaks here and there would make the whole thing make so much more sense in Java, and there’s no reason it couldn’t work the same way in the C library. Thus began the loop of factoring and refactoring that took two and a half months to stop.

Today, I’m smarter than I once was. I will be developing the C and Java code in parallel to ensure that A) both bits of code work the same way and B) I won’t have to go back and retrace my steps later on. It should go a lot more smoothly this time around, and I invite the rest of you to peek in on what I’m doing.

[size=24px]Do You Hear What I Hear?

All of this starts at the lowest level: sound. The Virtual Boy makes it easy by providing audio hardware that Just Plain Works™ once you set a few memory values. Java, on the other hand, doesn’t work the same way. You have to prepare a PCM buffer and set up an audio stream to play it on. Java is far more flexible, of course, but it also means there has to be more overhead to get it to produce the same output.

Ah, but do we even want the same output? Not necessarily. Virtual Boy’s audio boils down to 10-bit digital stereo at 41,700 Hz using 32-sample waves at 64 steps per sample (and effectively 16 levels of volume). That’s rather restrictive, but the resolution is high enough that it can still sound pretty good. Java, on the other hand, gives us the option to, for instance, have 24-bit digital stereo at 96,000 Hz using waves of arbitrary sample count and samples with greater depth than the output stream. What do we take away from all this? Java can produce better sound than the Virtual Boy, even when following all of the same rules.

What I intend for the composer is that it can play music in two modes: high-precision mode and hardware mode. High-precision mode will construct a PCM stream with as much… well, precision… as is available, using fine-grained interpolation of samples, higher sample depth for waves, and all music engine configuration happens on a per-sample basis rather than per-frame (more on that later). This will allow for higher-quality clips to be available for soundtrack releases, whereas the lower-quality version shows the artist exactly how it will sound on the hardware.

[size=24px]Utopia Audio

When I said “the C library”, what I really meant was “the two C libraries”. Utopia Audio is a gift in and of itself that makes audio programming on Virtual Boy a whole lot easier. Utopia Music, on the other hand, is a music engine that is built on Utopia Audio. But I already covered this in an earlier post, so I’ll spare you the details here.

Utopia Audio’s abstract outline looks like this:

* The application allocates a buffer of “sound” objects, which is used to configure a “mixer” object
* The application requests an unused sound object from the mixer
* The sound is configured with application data, a priority and a handler function, then given to the mixer to play
* The mixer maintains state information for each sound until such a time that the handler functions stop them
* According to the rules of priority, the mixer will update the hardware channels with data from the appropriate sound

It’s not a complicated setup by any stretch of the imagination, but it abstracts away the nitty-gritties of configuring the hardware, and allows the application dynamic control over sound contexts instead. There’s very little actual wizardry involved, but there’s a big benefit in changing the paradigm from “how do I do this” to “what do I want to do”.

This abstraction takes the form of the Sound context itself. In its simplest form, it’s a memory structure that keeps track of all of the VSU channel properties, which are only actually written to the VSU registers when A) the sound has priority on the channel and B) the value actually changes. Sound management itself takes place in an application-defined function referenced by address, which uses its own data linked to the Sound object via a void pointer. I know that sounds kinda hocus-pocus, but I’ve used it in practice and it’s really quite nice. (-:
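A sketch of that “only write when it changes, and only when you own the channel” idea in C — the struct and function names are made up for illustration, but the shadow-register technique is the one described:

```c
#include <stdint.h>

/* A hypothetical shadow of one VSU channel register. The sound keeps
   its own local copy; the real register is only touched when this
   sound owns the channel AND the value actually changed. */
typedef struct {
    volatile uint8_t *reg;    /* memory-mapped VSU register         */
    uint8_t           shadow; /* last value written to the hardware */
    int               dirty;  /* desired value differs from shadow? */
    uint8_t           value;  /* what the sound wants written       */
} ShadowReg;

/* Sounds call this freely, "all willy-nilly" -- it never touches
   hardware, so low-priority sounds can't step on anyone's toes. */
void shadow_set(ShadowReg *r, uint8_t value) {
    r->value = value;
    r->dirty = (value != r->shadow);
}

/* The mixer calls this only for the sound with channel priority.
   Returns 1 if a hardware write was actually issued. */
int shadow_flush(ShadowReg *r) {
    if (!r->dirty)
        return 0;            /* no write needed */
    *r->reg  = r->value;
    r->shadow = r->value;
    r->dirty  = 0;
    return 1;
}
```

The payoff is exactly the paradigm shift mentioned above: the sound’s callback thinks in terms of “what do I want the channel to do”, and the flush step handles “how do I talk to the VSU”.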

[size=24px]Sample This!

Utopia Audio is pretty straightforward in C on Virtual Boy, but as mentioned before, there are differences in the audio subsystems on Java. The bridge between the two will have to be a sort of VSU emulator, but with a few key differences; namely the ones that pertain to the high-precision and hardware rendering modes…

The wave feature of the Virtual Boy VSU isn’t anything the application has to worry about, but it needs to be implemented from scratch in Java. Rather than an array of 32 bytes where only the lower 6 bits are significant, it will be an array of an arbitrary number of floats. When the samples are updated, a 32-byte array is also constructed, where the floats are converted into bytes in VB format. That way, the high-precision sampler can interpolate the floats, and the hardware renderer can sample the bytes without interpolation.
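The float-to-hardware conversion step can be sketched in C (even though the composer side is Java, the math is the same). Nearest-neighbor resampling and the -1.0..+1.0 input range are assumptions for illustration; only the 32-sample, 6-bit output format comes from the hardware:

```c
#include <stdint.h>

#define VB_WAVE_SAMPLES 32
#define VB_SAMPLE_MAX   63   /* only the lower 6 bits are significant */

/* Downsample a high-precision wave (floats in -1.0..+1.0, arbitrary
   length) to the 32 six-bit samples the VSU expects. The composer's
   high-precision mode would interpolate the floats instead; the
   hardware renderer samples these bytes without interpolation. */
void wave_to_vb(const float *in, int in_count, uint8_t out[VB_WAVE_SAMPLES]) {
    for (int i = 0; i < VB_WAVE_SAMPLES; i++) {
        int   src = i * in_count / VB_WAVE_SAMPLES; /* nearest source index */
        float f   = in[src];
        if (f < -1.0f) f = -1.0f;   /* clamp out-of-range input */
        if (f >  1.0f) f =  1.0f;
        /* map -1..+1 onto 0..63, rounding to nearest */
        out[i] = (uint8_t)((f + 1.0f) * 0.5f * VB_SAMPLE_MAX + 0.5f);
    }
}
```

Keeping both representations around — the floats for the high-precision sampler and the converted bytes for the hardware renderer — is what lets the two playback modes share one wave editor.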

This works the same way as the Sound object in the C version of the library. It has in-memory values for each attribute of an output channel, but again with higher precision where applicable. Frequency will have greater depth; volume and panning will be floats instead of 4-bit integers; etc. The rest of it will just be the same as the C library.

All of the prioritization and processing present in the C version of the Mixer object will take place in the Java version as well, but the Java version has one key additional feature: sampling. Since there is no VSU hardware to convert wave and channel data into something we can listen to, that logic will be the responsibility of the Mixer, meaning it will have to live up to its name. This will take the form of a [/font][font=Courier New]render()[/font][font=Arial] method that supplies an output buffer in the format of the output stream. The Mixer will simply render audio one sample at a time until the buffer is full.

The output of the Mixer will then either be sent to the system’s speakers, or written to a .wav file, depending on the user’s preference.

[size=24px]Due Dates

* Assignment is due by end of day November 30, 2014 for 3 points.
* If finished by end of day November 28, 2014, an additional 2 points of extra credit.
* If late, -1 point for each day after the due date.

Let’s see how many points I have by the end of this.


Wow, this is very exciting! I can’t wait to play with this on Christmas morning =P

It’s been ten years or more since the last time I did audio in Java, and I’m pleased to see that it’s the same as it’s ever been. If you’ve ever wanted to work with it but couldn’t make your way around the API, well, here’s what I know!

The Java Sound API is package javax.sound, and the PCM subset of that is javax.sound.sampled. Within sampled, there is an AudioSystem class that acts as the point of communication between Java applications and the system’s audio hardware. From AudioSystem, you can query installed audio hardware and its capabilities, or you can request some kind of a data line that supports the capabilities you desire. That’s what I’m doing with this Virtual Boy project.

Opening a PCM output stream deals with two particular data classes: AudioFormat, which does what it says on the tin; and Line.Info, which specifies stream capabilities. Through [font=Courier New]AudioSystem.getLine()[/font], you can produce a SourceDataLine, and from there you can [font=Courier New]open()[/font], [font=Courier New]start()[/font] and [font=Courier New]write()[/font] sample data as bytes.

Here’s a code excerpt I put together while reorienting myself with the API:

// Specify an audio format
AudioFormat fmt = new AudioFormat(
    48000.0f, // Rate, in samples per second, of the audio stream
    16,       // Number of bits per sample
    2,        // Number of channels (2 = stereo)
    true,     // Sample data is signed
    false     // Sample data is big-endian
);
// AudioFormat has another constructor that allows you to specify the data
// type and frame size/rate information. In addition to integer samples, it
// also supports float, A-law and u-law.

// Specify an output stream with the given audio format
DataLine.Info info = new DataLine.Info(SourceDataLine.class, fmt);

// Request an output stream with the given format
SourceDataLine line = null;
try {
    line = (SourceDataLine) AudioSystem.getLine(info);
} catch (Exception e) { line = null; }

// No appropriate playback devices are available
if (line == null) {
    System.err.println("Error opening audio output stream.");
    return;
}

// Put the output stream in playback status
line.open(fmt);   // throws LineUnavailableException
line.start();

// Read file data until the whole file has played
for (;;) {
    byte[] data = fileRead(file, 48000 * 4 * 5); // 5-second buffer
    if (data.length == 0) break;                 // Loop exit condition
    line.write(data, 0, data.length);            // Send to output
}

// Close the stream like a good programmer
line.drain();
line.close();
Development is going well! Having a Utopia Audio implementation in Java is quite refreshing, and I’m super excited about being able to hear all the pretty sounds on the desktop end without having to prepare test files for the Virtual Boy first. (-:

But… (you knew that was coming!)… while I find the work fascinating and will continue to work on it, I’m not outright obsessed with it like I was in August. Back when I first caught wind of the VB20 anniversary, every second of free time I had went to the project. It was very productive, certainly, and was also very entertaining.

I find myself sitting here thinking, “ugh, there’s more to do by tomorrow”. What I’d like to be thinking is “okay, what’s next?” but you know how it goes. (-: If I don’t let off the gas a bit, it’s going to feel like work and not play, and I’m actually rather worried about that getting in the way of doing a good job rather than a fast job.

Now I remember why I don’t do deadlines as a policy.

I’d like to nix the “homework” thing and just get this ready as I can, without feeling like the world’s going to end if I don’t have everything just-so by some arbitrary dates. I still want to do the Christmas thing, buuuut… Please understand that if I don’t have it ready by Christmas, it’s because I’ll be doing a better job rather than one filled with hype. (-:

So instead, a progress report…

What I’ve gotten ready since Thursday is a reference implementation of Utopia Audio in C for Virtual Boy, and the better part (probably 85%-90%) of the Java implementation. I keep hopping back and forth to make sure the logic is the same in both versions, because that means the output will also be the same in both versions (what’s the point of having a composer if it sounds different on the hardware?).

Since I’m also starting at the audio end instead of the application end, what I’ve pretty much constructed is a mostly-functional VSU emulator (Utopia Music doesn’t use the envelope or frequency modification features) and an audio engine to drive it. And holy heck, it sure came together fast! I’m really looking forward to where this is going.

Now that I’m not trying to get this done “on time”, watch as productivity skyrockets…

Aw, don’t let those artificial deadlines get to you! I myself am more excited to hear that development is going great on it and that you’re putting in the effort to make this in the first place!

Sound and music have been one aspect I’ve still yet to really do a lot with, and your audio system is virtually mirroring what I was planning to eventually (try to) do in the end, so I definitely appreciate this project and can patiently wait for it. I guess it helps I got about 90% of a game still ahead of me yet, so plenty to distract myself with in the meantime. 😛

Keep up the great work!

It’s that time of month again! That didn’t sound right, did it?

I’m working on the audio project, of course. I find myself wanting to work on it. Like, even after a full day at work, I come home and think, “this is how I want to spend my free time”. It’s awesome that I’m doing that again. I was doing that back in August and it was awesome then too. (-:

I’ve also been, um… doing… something else. The awesomeness of that other activity rivals the awesomeness of this one.

There’s one other thing I’m doing. I’m working on version 1.0 of the Sacred Tech Scroll. The new document is being redone from the ground up, using the original as a source but for the most part containing new material. I explain things better, the presentation is cleaner, and I’ve hopefully removed all of the ambiguity (such as tables with no column headings). Oh yeah, and I’m incorporating new information as it becomes available (Nintendo instruction cycle counts, MPYHW’s operation, etc). The changelog between v1.0 and the last published version v0.91 is huuuuuge. But it’s the Virtual Boy’s 20th birthday, so I’m gonna make sure we get that information right this time around.

Utopia Sound was, as expected, totally redone from scratch, and it’s finished and fully commented. I used a few different approaches this time, and the API as a whole became leaner and somehow the back-end code is simpler and smaller… Oh well, I won’t complain.

Since I also implemented Utopia Sound in Java, I’ve got a nice little sound rendering engine that produces bona fide, no-holds-barred identical output to the hardware when requested. This is true even so far as to make identical noise samples, which I’ve demonstrated with an attached audio file. The first sound played is a recording from the hardware, and the second sound played is the output using the same parameters from the Java rendering engine.

I spent the day researching some long-standing what-ifs on the hardware, such as the condition for integer division overflow, the exception handler addresses for the TRAP instruction, and the behavior of shift instructions by register. Some things I’d made assumptions about, but I wanted to nail it down once and for all by seeing it with my own eyes. The last thing I tested was when the VSU noise generator reinitializes its shift register, and updated the Java rendering engine to match (it resets the register not only when specifying the tap location, but any time the play control register is written).
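That reset behavior can be sketched in a few lines. This is only an illustration of the "any write to play control reinitializes the register" finding, not the real VSU: the feedback function and the tap table here are placeholders (the actual ones belong in the Sacred Tech Scroll), and the class and method names are mine:

```java
// Illustrative sketch of the VSU noise reset behavior described above.
// The feedback formula and default tap are hypothetical stand-ins.
public class NoiseLfsr {
    private int shift = 0x7FFF; // 15-bit shift register, reinitialized to all ones
    private int tap = 14;       // Currently selected feedback tap bit (placeholder)

    // Writing the tap-select location resets the shift register...
    public void writeTapSelect(int tapBit) {
        tap = tapBit;
        shift = 0x7FFF;
    }

    // ...but so does ANY write to the play control register
    public void writePlayControl(int value) {
        shift = 0x7FFF;
    }

    // Advance one step and return the output bit (generic Fibonacci LFSR)
    public int step() {
        int feedback = (shift ^ (shift >> tap)) & 1;
        shift = (shift >> 1) | (feedback << 14);
        return shift & 1;
    }
}
```

The practical consequence for the rendering engine is that two instances are sample-identical after a play-control write, no matter how far the first one had already advanced.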

My newfound zeal for Virtual Boy can be attributed to Nintendo, ultimately. Releasing new Super Smash Bros. games got me back into that series, which led me to doing some hacking on the Nintendo 64 game, which in turn led me down the right path to get in touch with some contacts that holy-cow totally got me the resources I needed to do a Nintendo 64 Sacred Tech Scroll! I’m not going anywhere near that project for a while yet, but working with N64 a bit got me to remember just how much I love this stuff. (-:

2015 is going to be an awesome year for Virtual Boy. My so-called “resolutions” with regards to Virtual Boy are as follows:

* Complete v1.0 of the Sacred Tech Scroll, making everyone experts once and for all
* Complete Utopia Sound and Utopia Music, both for development and the composer application
* Find dasi and get that silly devkitV810 out to the world
* Finally make that blessed VB emulator with all those debugging features
* Compose an album on Virtual Boy for the 20th anniversary event. Yeah, you heard me.


