It’s that time again, and attached to this post are the spoils. It’s only built for 32-bit Windows at the moment, mainly because Debian has an older version of SDL and I’ll need to build my own cross compilers for the ARM architectures, but hey, there’s the source code!
To use it, find yourself a Virtual Boy ROM file and drag it onto app.exe.
Controls:
• F6 – Step Over (works on functions and branches alike)
• F7 – Single Step (aka Step Into)
• Up – Scroll up (experimental – somewhat buggy)
• Down – Scroll down (experimental – somewhat buggy)
Eventually the emulated program will hang because it’s waiting on some hardware activity. Wario Land in particular checks for the VIP ready status, which of course never comes because that component isn’t implemented yet. If that happens during a Step Over, then the actual emulator program will hang as well, but such are the joys of alpha development.
__________
So what was done in August?
Emulation Core
This thing is every bit as delicious as I hoped it would be. There’s a single object type used to represent the entire emulation state of a virtual Virtual Boy, and a handful of API functions for working with it. You supply it with a ROM file and tell it to emulate, and off it goes.
As mentioned before, the application can hook into various control points to intercept things like bus accesses, or to pause before an instruction is executed. Clever use of these control points lets features like disassemblers and cheat engines work without bogging down the library with excess code.
The disassembler in this build doesn’t actually use these control points. It just runs the emulate command with a maximum cycle target of 1, meaning it will break after every instruction. Lazy, sure, but again, this is what we call “the alpha stage of development.”
CPU Emulation
A joint effort between myself and handsome up there, the CPU emulator is very nearly implemented in its entirety. The missing features are bit string instructions (pending research), floating-point instructions (pending development) and exception handling (NP was set, what was I to do?).
Each instruction has its own function called by the library, and the library can be compiled in one of two ways depending on the compiler and speed of the target platform: use a big switch statement, or use function pointers. Never too many options, I guess.
Every instruction takes some number of CPU cycles to complete, and bus accesses may factor into that. Cycles are tracked and enumerated, and additional features will make use of that figure in order for everything to remain properly synchronized. When the time comes, all the other hardware components will be told to “do whatever you can in this many CPU cycles.”
GUI Subsystem
You wouldn’t know it from this program, but there’s a full-featured GUI subsystem in there. It supports a hierarchy of elements, each of which can have nested child elements, and handles mouse and keyboard focus on all of them.
I ended up doing the exact opposite of what I said I would do: I wrote a wrapper around SDL for cross-platform support and I’m doing the rendering with OpenGL directly. These decisions were made because SDL’s setup isn’t really analogous to how the emulator application works, and because OpenGL gives us some useful tools that SDL’s rendering API doesn’t.
Simple font support was implemented, and will be expanded in a future build. All fonts will be bitmaps in this program, and Unicode will be supported. Right now, only block 0x0000-0x00FF is supported, but the plan is for strings internal to the application to have full support for UTF-8 up to 0xFFFF, save for some fancy things like combining diacritics and oddball control characters (right-to-left, etc).
__________
So what will be done in September?
I don’t know about you, but I’ve always felt like having “only the CPU” was kind of a small step in terms of writing an emulator, but every time I sit down to think about it, I realize just how large a percentage of the overall project it really is. The remaining system components are far less complex and won’t take as long to implement, so I think this project is going to come together a lot faster than it might seem.
That being said, I have some goals for September:
• Implement CPU exception processing.
• Implement the VIP. The whole VIP.
• Implement the game pad interface.
• Create basic GUI controls (buttons, scroll bars, etc.)
If time permits (aka “stretch goals”):
• Finish UTF-8 font support
• Create fileopen dialog
• Create disassembler window
• Create memory hex editor window
Something we here at Planet Virtual Boy can do in the meantime is toss around some designs for the user interface. What would you like to see? How would you organize things? What tools do you want to have at your disposal? Now’s the time to talk about it, because we’re nearing the time when we can make it happen.
Alright kids, see you on September 30!
This reply was modified 8 years, 11 months ago by Guy Perfect.
It’s not time to turn in our homework yet, but Gookanheimer and I have made some handsome progress.
I spent the last week considering the best way to handle GUI stuff, and settled on a disappointing conclusion: it will be most appropriate to treat SDL as the target platform, rather than try to make everything platform-independent.
When considering how to implement GUI things in a platform-independent way, the solution I kept coming back to was basically “write a generic API that wraps around SDL for one implementation and whatever else for other implementations.” Since SDL is already doing that with native windows and I/O, it just adds another layer of abstraction where there doesn’t really need to be one. If some port of the emulator application needs to implement its internal API regardless, it may as well implement a stand-in for the SDL functionality directly.
In other, more exciting news, I’ve decided to add [font=Courier New]armhf[/font] and [font=Courier New]arm64[/font] to the list of supported architectures for Linux. This benefits the general mobile market, but has particular significance for single-board development. Raspberry Pi in particular has a community with a strong emphasis on video game emulation.
__________
EDIT:
Oh yeah, forgot to mention… As part of the initiative to treat SDL as the target platform, I’ve also opted to forego OpenGL directly and use the SDL 2D rendering API instead. This was an easy decision because there are about one infinity ways to implement something, and even if my OpenGL code works in one place, it may not be suitable in another. May as well code up the GUI renderer in such a way that it’s best-suited for the platform at hand, and that would be through letting SDL decide how to best carry out its drawing instructions.
A 3DS version of this emulator will probably just wind up using the emulation core library and implementing its own GUI from scratch…
Pokémon requires 32KB of cartridge RAM, but Virtual Boy games historically only had 8KB. Not sure if that’s the reason it’s not working, but it was my first thought when considering Game Boy emulation initially.
Based on my experience with the Pokémon save data, I believe some simple LZ77 compression would be sufficient for packing it into a Flash Boy Plus’s SRAM, but I haven’t tried it. (-:
Finally got the architecture design nailed down and mostly implemented. Gookanheimer actually jumped in and is reportedly working on implementing the CPU instructions while I start on the GUI stuff. With any luck, we can actually get this done quickly without getting bored. (-:
With all that out of the way, let’s talk about the emulation core.
Emulation Core
In its simplest form, the emulation core library defines an object type representing an emulated Virtual Boy, and various functions for working with an instance of such an object. The core is self-contained and is not useful unless the application supplies input and output buffers. The application can extend the functionality of the emulation core by hooking into key control points.
I think the most appropriate name for this library is [font=Courier New]vue[/font], since that’s the acronym used by Nintendo to represent the system’s internal workings.
If the application invokes the main emulation routine without configuring any breakpoints, the program will effectively hang because the emulation will loop indefinitely. The following control points are available:
• Memory Access (Read, Write). The address and data size are passed to the control point handlers. If an instruction requested the access, a descriptor of the instruction is also passed to the handler before the instruction is executed. If instead an instruction is being fetched by the CPU, a Read handler will be triggered with some control value for the instruction parameter.
• Execute. A descriptor of the instruction is passed to the control point handler before the instruction is executed. The address can be determined by examining the CPU state, and the instruction’s size can be determined by examining the instruction data.
• Exception. The exception code is passed to the control point handler before the CPU state is modified according to the exception. Exceptions include instruction errors, interrupts, hardware breakpoints and duplexed exceptions.
• Fatal Exception. The control point handler is called before the CPU writes its debugging information to the bus. This condition only occurs if a third exception occurs during the processing of a duplexed exception.
If an emulation break isn’t required, the control point handler will return zero. Otherwise, the value returned will be ultimately given back to the application when execution stops.
The following special tasks can be performed during control point handling:
• Perform the default behavior for the memory access
• Execute the default behavior for the instruction
• Update CPU state according to the exception
• Perform fatal exception operations
• Perform a memory access that bypasses control points and CPU processing
• Advance the program counter to the next instruction
• Advance the CPU timer by some number of cycles
I’m sure a few things will come and go as things get implemented, but I think this particular approach will work well because 1) it’s simple to implement in the emulation core library and 2) it’s flexible enough that a debugger can do a lot with it. If you wanted to, you could put an Execute trap on the entire address range and filter out everything except a particular instruction (like, say, [font=Courier New]JMP [r31][/font]).
Bus Routing
Those Read and Write control points form the basis of the interconnected web of components that will become the emulation core. They’re responsible for routing accesses to the appropriate components, fetching memory or carrying out actions according to the hardware component being accessed.
Reducing the control points to just two functions keeps the core logic of routing bus addresses simple.
CPU Operations
For those who didn’t take CS101, a CPU conceptually does the following three things over and over:
• Fetch – Load the next instruction from memory
• Decode – Figure out what the instruction’s bits mean
• Execute – Perform the task specified by the instruction
Additionally, each instruction takes a certain amount of time to complete, and not all instructions take the same amount of time. For the purposes of this emulator, the key interval of time here is the cycle. Since the VB’s processor runs at 20MHz, one cycle is one twenty-millionth of a second. To be accurate, we need to keep track of these as best we can.
Virtual Boy instructions are loaded in 16-bit units. The upper 6 bits of the first 16 bits read specify the instruction’s opcode, which is a unique identifier for which instruction is being represented. Each instruction (by opcode) is stored in one of seven different formats, with formats I through III requiring only 16 bits and formats IV through VII requiring 32. Therefore, depending on the value of the opcode, a second 16-bit read may be required to fully load the instruction.
The “fetch” and “decode” stages of our virtual CPU pipeline will happen concurrently. First, a 16-bit unit is read, then the opcode is parsed from it, which in turn yields the instruction’s format. An additional read may be performed, followed by parsing all of the instruction’s operands. This descriptor of the instruction is stored in a temporary object, which is then passed to the execute processor.
The “execute” stage differs depending on the opcode. While certain common tasks can be split out into their own functions, the end result is that every opcode will result in calling a different function. Each of these execute handlers will be responsible for updating the CPU state and keeping track of how many cycles they take.
An exception can take place at four different locations within this process:
• If the hardware breakpoint is enabled and the instruction being fetched is located at the address specified by the [font=Courier New]ADTRE[/font] register, the address trap exception is processed prior to the “fetch” step.
• Illegal opcodes are detected during the “decode” step. You can’t execute an opcode that doesn’t correspond with an instruction, and VB does have a handful of those.
• General exceptions occur during the “execute” step. This includes things like division by zero, using NaNs in a floating-point operation, or trying to convert to a data type that can’t represent the result.
• Interrupts are checked after the instruction that was executing when they were raised finishes. That is to say, interrupts are accepted between the “execute” step of one instruction and the “fetch” step of the next.
Simple Disassembler
I’m a firm believer that any program function can be analyzed to the point where it can be determined with certainty that it operates correctly in all situations and is therefore free of bugs without even testing it. I’m also painfully aware that I tend to make silly mistakes while programming, so I don’t dare trust myself to do it right without sitting down and looking it over real close.
I also like to see things happen, rather than just knowing they happen. To that end, I want a way to verify the CPU emulator is working properly by using my eyeballs. This is a perfect place to implement a disassembler: it requires hooking into the emulation core’s control points, and although it’s GUI work, it’s work that needs to be done eventually anyway, so why the heck not?
A disassembler is a program that accepts compiled machine code as input and produces the equivalent human-readable assembly as output. (It’s possible to go a step further and make a decompiler that produces something like C code as output, but that’s way more work and isn’t really in the scope of this project.) In an emulator, a disassembler can be present while the guest program is executing, giving the developer some very good insight into exactly what’s going on in there.
This first-iteration disassembler won’t be fully-featured, but it will provide a reliable basis for testing and later expansion. It will have three primary jobs: 1) present the user with a disassembly of the program relative to its current execution address, 2) allow the user to advance one instruction at a time, and 3) display the current CPU state to the user.
A typical disassembler displays program code to the user one instruction per line, with multiple columns of information. This initial disassembler will have the following columns:
• Machine code bytes, in the order they appear in the ROM data
• The instruction’s mnemonic (human-readable “name”)
• The instruction’s operands
Later iterations will include a fourth column for automatic comments, such as a known function name for the instruction that calls that function. The disassembler will also use the standard V810 operand ordering, register names and SETF notation, though the final version will allow these things to be configured by the user.
__________
Phew, that was a lot of things to say. I’ll get right on it. If you don’t hear from me, then expect to see all these things in glorious compiled-o-vision come August 31.
Many thanks to those who have pitched in ideas so far. I’ve got a good idea of where to start and how to proceed, which is what I set out to do so hooray. (-:
I started this write-up before Gookanheimer came about, and I haven’t had a chance to discuss things with him yet, so I’ll just say this: until such a time that others are interested in working on the project with me, I’ll be working independently. As such, I will not be setting up a git repository at this time (just as well, since I’m no expert at git), and will be posting ZIP files from time to time containing the current progress and binaries for the project.
So hey, let’s get started.
__________
License
I’m in prime position to piss off a fair number of people by doing this, but I’m going to make another executive decision: no traces of GPL in any way, shape or form, at all, ever. I’m all for keeping open software open, but there are certain ways to take that ideology too far, and GPL’s requirement to make the source of your entire project available on request just because you used that one library is one of them. I will not reconsider my position on this matter.
This project will be released under the terms of the zlib license. This means anyone can use any part of the emulator code for any purpose–commercial or non-commercial–with or without making the source code to their software available. Heck, Nintendo themselves would be within their rights to use the Planet Virtual Boy emulator in Virtual Console releases and never tell anyone where they got it. (-:
GUI Stuff
Before getting into the emulation side of things, let’s take a moment to talk about GUI. The project will need to work with the native windowing systems of the target platforms, and in the interest of maintainability, a solution should be selected that minimizes the amount of code that will differ by platform.
And before getting into that, what are the target platforms? I specifically want to support Linux, Microsoft Windows and Apple OS X as operating systems. Exactly how far back to support kinda hinges on the dependencies of the GUI library, but it’d be cool to support as far back as Windows 95 and OS X Tiger. Even though usage share is trending toward 64-bit OSes, 32-bit releases of the emulator are still important since some people still use them and they’re not really that big a deal to build for.
With that out of the way, the GUI solution should ideally support basic window management across all three target OSes, but without hogging resources or introducing some staggering runtime footprint.
The GUI library I want to go with is Simple DirectMedia Layer: it’s free, fairly minimal, supports many architectures and is supported by many game development platforms. It’s missing a few things like directory listing and exposing the POSIX sockets API, but anything that’s missing can be implemented without too much of a hassle. And hey, maybe we can improve SDL while we’re at it, right?
SDL is so minimal, in fact, that it doesn’t actually provide a GUI library out-of-the-box. What it does is give you a system window into which you can draw pictures (and receive input, etc.). It has a built-in API for 2D accelerated graphics, but I think we should cut out the middle-man and use OpenGL directly. Let me explain… although SDL is supported on many platforms, and OpenGL on even more, there’s never going to be The One GUI Solution™ that works absolutely everywhere. By using OpenGL instead of SDL’s 2D library, that’s one part of the program that gets to be that much more portable as a result.
An additional complication is which OpenGL features to use. I’m leaning towards 1.1 since it’s supported on the oldest versions of Windows (and in software-rendered situations like remote desktop), and I myself have some tablets and netbooks that don’t support shaders (and I really like programming with shaders). When it’s all said and done, OpenGL 1.1 seems to be the lowest common denominator and should hypothetically cast the broadest net for the user base of the emulator application.
As for GUI controls themselves, well… I’d really rather just implement those myself. Sure, it’s reinventing the wheel, but the application only needs some simple buttons and scroll bars and stuff. I really don’t feel like kissing frogs trying to find the right GUI library that works with our license and bloating the code base with controls we don’t need.
So that’s what I have in mind. If you know a better way, say so now, before something gets set in stone. (-:
Portability
Portability is paramount, especially for the emulation core. If the core library is to be useful in as many applications as possible, then it has to be written in such a way as to minimize the requirements of the application trying to use it.
This section applies mainly to the emulation core, so don’t get discouraged. The following are some considerations to increase portability of this central library:
• Common compilers. A language should be chosen to maximize the likelihood someone will be able to compile the emulation core on their target system. This is a bit of a sticky subject, since using old programming languages can be seen as failure to get with the times. But at least one PVB member has expressed interest in getting it to run on DOS, so it’s worth consideration. Much to everyone’s dismay, I want to support stock C89, even so far as to require /* style comments. If it won’t compile in gcc with [font=Courier New]-std=c89 -pedantic[/font], then maybe it shouldn’t be used. I’m open to suggestions if an argument can be made for using newer C features.
• No C runtime. Not every system using the emulation core will necessarily have a C runtime to draw from. This means any standard library functions that would otherwise be used by the core (such as math, string or memory allocation functions) will need to be implemented in C code to avoid the C runtime requirement.
• Architecture-independent implementation. Certain CPU architectures process the same instructions differently, even if the C code itself is unchanged. There are two key ways this can occur in my experience: endianness and alignment. No assumptions can be made regarding the order of bytes within data types, and unaligned memory accesses need to be avoided. Has anyone else dealt with any other architecture-specific gotchas we might need to watch out for?
Again, these considerations are primarily for the emulation core library. The GUI application with the debugger and stuff won’t be held to such strict rules.
Conventions
I started writing up coding conventions, then I remembered I would probably be working independently. Basically how this is going to go down is that I’ll establish a coding convention, then if anyone contributes to the code base, consistency should be maintained. (-:
July 2016 Follow-up
• Gauge interest in a collaborative emulator project to best determine the approach to take when carrying it out.
Like before, measuring interest in the project requires a Geiger counter. While there’s some stealth interest that can only be pressed out of people with a vigorous private message regimen, the overwhelming response is in the form of “I’m going to watch Guy make this emulator, if I can remember to check in on it from time to time.”
Accordingly, the project will be structured in a way that accommodates me working on it by myself. I have a particular way I like to organize my personal projects, and that’s what I’ll be doing with this one until such a time that other people want to get involved.
• Research various version-control systems and select the one that is most accommodating for the community. Establish a public repository.
The vote quickly went to git for this one, but actually setting up a git repository isn’t one of those things I’ve experienced, so I had to look into making it happen. There are a few options out there, but I’m okay waiting on KR155E to install git-hosting software like he wants to do eventually.
Meaning yeah, for now, no git repository of any kind. And that’s just fine, since it’s me working on the project alone. (-:
• Draft up design documentation for each main section of the project. Collaborative brainstorming would be incredibly helpful.
I decided to nix this in light of working independently, since I have a pretty good picture in my head of the general points I want to hit along the way. If more people get involved, a more rigid roadmap can be established.
__________
August 2016
It’s time to start development! The primary focus of the whole project is of course the emulation core library, and that will be where development begins. Since I’m a cheeky rascal, I’ll approach the first step with a two-pronged attack: basic CPU operation and a simple stepping disassembler.
The goals for the month are as follows:
• Establish naming conventions. This is a lot like ordering toppings on a pizza, so expect a heated discussion on it.
• Specify the structure and functionality of the emulation core. This includes the features that allow debuggers and the like to hook into it.
• Implement all CPU functionality. (Except bit string instructions, since I want to put them through their paces before deciding how to best implement them).
• Implement a simple disassembler GUI that can help verify the CPU emulation works correctly. Needs to allow single-step.
As before, detailed discussion will be happening in the development thread, wherein I’ll be posting occasional source snapshots and their compiled binaries, so get over there and be a part of this!
Hey! Glad to have you on board. I came on to write up the goals for August and saw you posted. Check your PMs, I’ve got something for you. (-:
blitter wrote:
Absolutely false. (Seriously, this is the first Google result returned for “c wrapper for c++ library”– did you do any research yourself or are you blindly going by what your “associate” says?)
I understand that you have an emotional investment in using C++ (or, as the case may be, in not using C), but I remind you to think carefully before accusing someone of making unsubstantiated claims. I don’t want a constructive debate to devolve into name calling and hearsay.
The article you linked details a technique for implementing a C interface to a C++ library. I would posit that a superior approach would be to export to a DLL or something similar so that virtually any target language could load in the compiled library code at runtime. Heck, I was doing OpenGL in Visual Basic this way for years before switching to C. (-: From a literal standpoint, I will concede that this is a manner by which you can use C++ code in a C project.
That being said, as the article points out, this is not necessarily an elegant transition: every access to a C++ class’s fields or methods has to have a dedicated function for C to use. We’re left with a situation where we’re weighing a matter of convenience. Do we use C, where the source files can be used directly in a C++ project; or do we use C++, where a C interface will need to be maintained in order to use the code in a C project?
And what about porting to other languages? Many languages have classes, some have exceptions, and a few have operator overloading (the popular Java, incidentally, does not). If we leverage the powerful features that C++ brings to the party, what impact will that have on portability? It hardly needs to be said that C code will generally port with minimal modifications.
Like I said in my previous post, the main thing drawing me to C over C++ is portability. C can get tricky with indirection, but beyond that I believe a very strong case can be made that C is far more accommodating than C++ in this regard, and I consider that a significant factor in choosing which language to use.
blitter wrote:
[…] arbitrary restrictions […] preconceived notions […] less performant, top-heavy, memory-intensive […] without confirming it […] at best silly and at worst irresponsible […] recoup that .00001% […] speed or space issues […] time wasted over a mostly futile effort.
My priorities are portability and constructive discourse.
If anyone else feels that I’m being unreasonable, please say so in a reply to this thread or to me directly via PM. It’s not my intention to be stubborn or narrow-minded, and if additional feedback can put this project in a better direction, then I’d say only good can come of it.
blitter wrote:
Is it worth shooing away good C++ coders so you can recoup that .00001%?
I don’t believe that this is an actual issue: a good C++ programmer should be able to make heads or tails of a C program (and vice-versa).
Of course, there’s also the fact that C is trending nearly double what C++ is on the internet right now. If I were concerned with alienating programmers, I’d be more hesitant to decide on C++ for fear of driving away potential C developers.
That was an especially interesting speech. Thank you for sharing it. I’m tickled to death by the notion that Nintendo downloaded Super Mario Bros. from the internet and then sold it.
That said, it’s not what I had in mind. I’m more interested in shedding light on how emulators work internally.
__________
After some discussion with my associate, I’ve decided to stick with C over C++ for two prominent reasons:
1) Some of the features of C++ necessarily have more demanding runtime requirements to compensate. I mentioned the “new” keyword, which was confirmed to allocate a small block of memory every time an object is created. But it was also brought up that trapping exceptions–one of the most powerful features C++ offers over C–carries a host of overhead to make it work. The runtime library required to use a C++ program has a larger memory footprint than the C runtime. While the difference in performance is negligible, there’s still a relevant impact when using C++ instead of C.
2) It’s imperative that the project is portable–especially the emulation core. While it’s true that C code can be used in a C++ project, that’s not necessarily true if you flip it around. Even in the case of the user interface, I’d really rather design it in such a way that if someone wanted to use parts of it in some other project, they’d be able to. This point more than the previous one is what’s pulling me to the C side of things. No amount of C++ trickery is going to make the emulation core work in a C environment, performance notwithstanding.
Thanks for your feedback, everyone. I feel like I’m getting a pretty good picture of what people want to do and what everyone’s expectations are. Very helpful for planning purposes!
HorvatM wrote:
Haven’t we had this conversation already?
Pretty close, yeah. (-: But the difference this time is that I’m actually prioritizing this in my life because I feel I owe it to the PVB community to give back in a big way.
ElmerPCFX wrote:
[… I]f you just want a “better” emulator than what’s available, have you considered not throwing away the years of work that other people have done, and to actually save yourself a lot of time and build upon one of the existing emulators and improve it?
I do appreciate your insights, since if things were as quoted, the lion’s share of the work could be sidestepped. However, it’s not having an emulator that I’m after with this activity; it’s making one.
One of my desires is to help lift the veil on the internet that enshrouds emulation in an air of mysticism. An emulator isn’t some arcane box of hocus-pocus: it’s a rigidly-defined set of rules that get from point A to point B. I want people from all over the industry to be able to look at Planet Virtual Boy and, in doing so, learn a lot about how emulation works and maybe take the system a little more seriously in the process. (-:
Alongside development of the emulator for its own sake–which was the focus of the previous thread–I will be writing up articles about how different parts of the emulator operate. My hope is that someone who is interested in their own emulator endeavors can find these articles helpful.
blitter wrote:
The VB community is *tiny*– I don’t think I can stress this enough– […] A Virtual Boy-exclusive emulator such as this one only serves to isolate us from possible contributions.
I wouldn’t worry about that too much since the project isn’t trying to attract a lot of people to work on it. At minimum, I’ll still have my little monthly goals–I promise–so it will be completed eventually. Should someone choose another project to contribute to instead of this one, that’s perfectly fine. This one has its own ambitions.
If nothing else, working on an in-house emulator is better than sitting around scratching our heads waiting for the next reproduction to be sent out.
blitter wrote:
C has no advantage over C++ performance-wise, and vice versa. C++ is a superset of C, meaning that you can use some or none of the extra features provided by the language, with no penalty in the latter case.
You’re correct, and I will consider C++ for this project. Even if C is specifically desired for some portion of the code base, that code can still be used verbatim in a C++ project. The ease-of-use benefits of C++ may be conducive to better collaborative support.
I’ll need to research it a bit. I’m a big proponent of the “don’t allocate for every individual object” camp, and I have a feeling C++’s “new” keyword does exactly that upon object instantiation. I’ll get with a colleague of mine who deals more in such things and see what insights he can offer.
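For illustration, here’s a minimal sketch (in C, since that’s where I’m leaning) of the kind of allocation strategy I mean: one up-front allocation serving many objects, instead of one heap hit per object the way a plain C++ “new” does. All of the type and function names here are hypothetical.

```c
#include <stdlib.h>
#include <stddef.h>

/* Hypothetical emulation-state object -- a stand-in for illustration. */
typedef struct { int cycles; int flags; } VBState;

/* One allocation up front for N objects, instead of one heap hit each. */
typedef struct {
    VBState *slots;
    size_t   next, count;
} VBPool;

/* Returns nonzero on success. */
int vb_pool_init(VBPool *pool, size_t count) {
    pool->slots = malloc(count * sizeof *pool->slots);
    pool->next  = 0;
    pool->count = count;
    return pool->slots != NULL;
}

/* Hand out the next free slot; no per-object heap traffic. */
VBState *vb_pool_take(VBPool *pool) {
    return pool->next < pool->count ? &pool->slots[pool->next++] : NULL;
}

void vb_pool_free(VBPool *pool) { free(pool->slots); }
```

The same pattern is expressible in C++ via placement new or custom allocators, so it isn’t a deciding factor either way; it just has to be done deliberately.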
I’m so grateful we have such expertise in this community, despite its size. I’d like to think we’re all enthusiasts here, and not just in passing. (-:
That’s clearly a fake screenshot. Obviously. Everyone knows that regardless of which memory bank you’ve selected, bus address 0xC0011E36 maps to VIP memory and therefore…
…
… You know, I think it actually is possible for a program to run in VIP memory, thereby potentially throwing an exception. I’m gonna try that!
So it took 11 days to process my 1-4 day processing order. On the other hand, DHL managed to get it here from China in three days over the weekend, so there’s that.
Putting on these no-ghosting 3D glasses and covering my left eye while looking at red things, they’re very visibly red. Like, not a tad red where I can make out what color they are, but rather a red object with a cyanish tint. The net effect when looking at an anaglyph? Ghosting.
bmo registered an account with Light in the Box because he was interested in buying some no-ghosting glasses, but wanted my feedback first. Even after unsubscribing from their e-mails, they continue to spam him. So the site earns the trifecta of poor customer service: unsolicited advertising, missed due dates and products that don’t match their descriptions.
Soooo, I’m still without a pair of anaglyph glasses that is capable of reducing ghosting to a negligible amount. I’m most pleased with these generic green/magenta ones, though. I think I’ll stick with them when I have the option of choosing which colors to look at.
Still hoping those “no ghosting” glasses do the trick. If they do, I sure as heck wouldn’t recommend paying extra for expedited shipping from Light in the Box. They said processing would be 1-4 business days, I bought it on February 15, and they’re estimating it will be shipped on March 1.
Let’s see if it’s worth the wait. (-:
blitter wrote:
Any plans to RE the VIP timings so that I *can* race the beam?
I’d like to eventually. It’s an after-project of the emulator thing, but not immediately something I expect to spend a lot of time on. The initial implementation of the VIP emulator will probably just render the whole darn thing on the first CPU cycle each frame until we can get the various timing mechanics figured out.
On the other hand, if we’re going to set up a PVB task force to work on this, I’ll gladly pitch in.
KR155E wrote:
I have been thinking about design tricks and came up with a question to those who have worked with direct screen draw: is it possible (and feasible) to do “post processing” on the Virtual Boy? i.e. can we manipulate the frame buffers after the VIP has done its job and written the next frame to it?
Yes, framebuffer memory can be accessed by the CPU after VIP drawing operations but before scanning to the displays. It’s certainly possible. Feasible, like blitter shows, is another matter. (-:
The VIP bus is slower than the CPU bus, so you need to make every access count. Reading from VIP memory then writing back to the same address is two accesses, whereas direct rendering generally only requires writes. The big bottleneck here is going to be transfer speed.
KR155E wrote:
I am thinking about a full screen HBias effect as en example, where we shift some rows by some pixels to create a wavy screen.
Know that framebuffer memory is stored with column-major ordering (top-to-bottom, then left-to-right), so shifting rows of pixels horizontally is going to be a beast to process. I’d have to try it, but I’m not sure there’s enough time to pull this off even after a blank VIP drawing operation with zero windows.
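To make the cost concrete, here’s a hedged sketch of the address math, assuming the commonly-documented layout of 2 bits per pixel packed four to a byte down each 256-pixel column (64 bytes per column). The pixels of a horizontal row are never adjacent in memory, so shifting one row touches a different byte in every one of the 384 columns.

```c
#include <stdint.h>

/* Assumed layout: 2bpp pixels packed 4 per byte down each 256-pixel
   column, 64 bytes per column, columns ordered left to right. */
enum { FB_COLUMN_BYTES = 64, FB_WIDTH = 384 };

/* Byte offset of pixel (x, y) within one framebuffer. */
uint32_t fb_byte_offset(int x, int y) {
    return (uint32_t)x * FB_COLUMN_BYTES + (uint32_t)(y >> 2);
}

/* Bit position of the 2-bit pixel within that byte. */
int fb_bit_shift(int y) {
    return (y & 3) * 2;
}

/* Shifting one row horizontally means a read-modify-write on a
   different byte for every one of the 384 columns. */
int accesses_per_row_shift(void) {
    return FB_WIDTH * 2;  /* one read plus one write per column */
}
```

That’s 768 VIP bus accesses per row, multiplied by however many rows you shift, per eye, per frame.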
KR155E wrote:
Another example would be […] Yet another example […]
Since the time available after VIP processing but before display scanning is so small, I really think it would be more worthwhile to do CPU rendering before the VIP drawing procedure, uploading it to CHR memory instead of framebuffer memory. While there’s not enough CHR memory for two fullscreen blits in this manner, you could still go a long way if you’re clever with it. Writing all 2048 CHR blocks per frame takes far less time than reading and writing both framebuffers after VIP drawing but before display. Plus, you’d get to use the entire drawing and display time preparing the graphics for the next frame.
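Back-of-envelope, assuming 2 bits per pixel throughout (so 16 bytes per 8×8 character, and 64 bytes per 256-pixel framebuffer column):

```c
/* Rough access-count comparison, assuming 2bpp throughout. */
enum {
    CHR_BLOCKS      = 2048,
    CHR_BLOCK_BYTES = 16,        /* 8x8 pixels at 2bpp */
    FB_BYTES        = 384 * 64,  /* one framebuffer: 384 columns, 64 bytes each */
};

/* Writing every CHR block once per frame. */
int chr_upload_bytes(void) { return CHR_BLOCKS * CHR_BLOCK_BYTES; }

/* Reading and writing both framebuffers (left and right eyes). */
int fb_postprocess_bytes(void) { return 2 * 2 * FB_BYTES; }
```

That’s 32 KiB of pure writes versus roughly 96 KiB of mixed reads and writes, before even accounting for the slower read cycles.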
The TriOviz Inficolor 3D spec is clearly using the same approach as Anachrome. It uses green and magenta to help the eyes equalize perceived brightness, and lets a crazy amount of all three primary colors through both filters. Games designed for use with it require such an infinitesimal separation between the eyes that it can be hard to tell that an image is even 3D at all. I don’t recommend it.
The run-of-the-mill green/magenta glasses show a typical amount of ghosting, like I’ve seen in red/cyan glasses. They’re pretty good, and no doubt work great with movies, but still visually jarring when looking at Virtual Boy renders.
Here’s hoping the “no ghosting” glasses are actually that. Of course, they have to ship them out before that can happen, and the order is still “Processing…”
blitter wrote:
No.
Okay then.
3DBoyColor wrote:
Does the VIP have any polygon rendering capability, or is it all software rendered using the CPU?
The VIP is only designed to do 2D. It does have an affine feature (most often compared with the SNES “mode 7”) which can be used to simulate perspective, but the way it works still amounts to 2D processing. All “true 3D” or polygon-based graphics on Virtual Boy need to be done in software by the CPU, then transferred as completed 2D images to the VIP for display.
A common way to do this is to turn off the VIP’s drawing feature entirely and just have the CPU write to framebuffer memory directly.
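As a rough sketch of what that looks like, assuming the column-major, 2-bits-per-pixel framebuffer layout discussed earlier (the exact base address and which buffer is safe to write are hardware details I’m glossing over here):

```c
#include <stdint.h>

/* Plot one 2bpp pixel (color 0-3) into a framebuffer laid out in
   column-major order: pixels packed four per byte down each 256-pixel
   column, 64 bytes per column.  On real hardware, `fb` would point at
   the currently hidden framebuffer in VIP memory. */
void fb_plot(uint8_t *fb, int x, int y, uint8_t color) {
    uint8_t *p = fb + x * 64 + (y >> 2);
    int shift = (y & 3) * 2;  /* 2 bits per pixel, 4 pixels per byte */
    *p = (uint8_t)((*p & ~(3u << shift)) | ((color & 3u) << shift));
}
```

A software rasterizer would typically render a whole scene into a work buffer like this, then copy the result out, rather than poking VIP memory one pixel at a time.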
3DBoyColor wrote:
Would you say the VB was designed to be easy to program? Or at least that’s what the hardware layout sounds like to me. Most other game consoles have a more complicated dynamic between the CPU, GFX chipset and the rest of the system.
Absolutely. For what each of the hardware components does, the way to do it from the CPU’s perspective is fairly simple and intuitive. The only real beef I have with any piece of Virtual Boy hardware is that PCM wave buffers on the audio unit can’t be written to while audio is being generated, effectively locking you into five distinct waveforms unless you interrupt the sound…
3DBoyColor wrote:
The VIP sounds like it takes care of some of the more menial tasks involved with drawing the screens. Are programmers allowed much flexibility? Can it do raster effects? Or other unusual visual effects?
The VIP has the usual graphical setup: memory for 8×8-pixel character patterns (or “tiles”) and memory for “backgrounds”, which are gridlike canvases where you can arrange characters/tiles. It takes it a step further and lets you define individual windows, which have on-screen positions and dimensions, inside of which background segments can be arranged and scrolled around.
In addition to the usual “scrolling background” setup, windows can also shift individual lines of pixels horizontally by some amount, or use the earlier-mentioned affine mode, which samples from backgrounds along a row of pixels in some direction other than “one pixel at a time, left to right”.
There are also “objects”, or “sprites”, which are individual 8×8-pixel characters that can float around the screen at arbitrary positions. These are traditionally used for anything in the scene that needs to move with pixel precision, but VB’s windowed setup allows you to do the same thing with backgrounds as well.
3DBoyColor wrote:
In Mario’s Tennis, a landscape effect is in use, like what is seen on the SNES (and other consoles if done in software). Is that landscape effect a hardware feature? Or is it done in software too? I’ve often wondered how much more powerful the VB is over the SNES.
It’s a little of both. This is an application of sampling a window background in affine mode. The gist of it is that, for each row of pixels, there’s a starting position (precise to a fraction of a pixel), and a direction–an amount by which the source position in the background will change for each horizontal pixel in the window. This vector is expressed as the change in X and Y distances.
A simple affine effect might sample with a Y of 0 and an X of -1, which would effectively sample exactly one pixel at a time from input, but right-to-left, causing it to be horizontally flipped. Other values for X and Y can be useful for scaling, rotating or perspective effects.
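Here’s a software model of that per-row sampling loop, just to illustrate the idea. The fixed-point precision and the background layout are assumptions made for the sketch, not the VIP’s actual register format.

```c
#include <stdint.h>

enum { FX_SHIFT = 9 };  /* assumed fractional bits, for illustration */

/* Fill one output row of `width` pixels by sampling a background of
   width `bg_w`.  The source position starts at fixed-point (src_x,
   src_y) and advances by (dx, dy) for each horizontal output pixel --
   the per-row parameters described above. */
void affine_row(uint8_t *out, int width,
                const uint8_t *bg, int bg_w,
                int32_t src_x, int32_t src_y,   /* fixed-point start */
                int32_t dx, int32_t dy)         /* fixed-point step  */
{
    for (int i = 0; i < width; i++) {
        out[i] = bg[(src_y >> FX_SHIFT) * bg_w + (src_x >> FX_SHIFT)];
        src_x += dx;
        src_y += dy;
    }
}
```

With dy = 0, dx = -1.0 in fixed point, and the start position at the row’s right edge, this produces the horizontally-flipped row from the example; other values give the scaling, rotation and perspective effects.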
Specifying these affine parameters is another matter that requires the CPU to supply the needed information. The F-Zero tech I made a while back uses a lookup table to accelerate the process of getting information to the VIP, but the code that produced that table absolutely exists in the source and will run on the hardware–albeit with a lower framerate.
The same approach is used on SNES, except SNES doesn’t allow the application to specify affine parameters for each individual row of pixels as far as I know. The program actually has to wait for each row to be drawn, then reconfigure the affine parameters before the next row is processed. The original F-Zero really was a piece of work.
3DBoyColor wrote:
One other question, I know this would largely be a waste of time, but would the VB benefit from a co-processor on the cartridge? Think like the SNES. Or is it plenty powerful enough on its own (sort of like a GBA).
I’ve thought a lot about cartridge wingdings. It could absolutely benefit from in-cartridge processing, especially with the technology we have to work with nowadays. More than anything, though, I’d really love to see some better audio features. I mean, the audio lines do run through the cartridge before they go out to the speakers…