I just hit on an idea for capturing the VB display signal and putting it on a TV, and I’d like to get some thoughts and musings on it…this one may just do it with some work…
Anyway, I’ve just recently gotten into microchips and programming them (with MikroBasic and a serial interface burner). I came up with this thought. Some of it is based loosely on assumptions, so please help me out where I am off.
Assumption: The VB sends data to the LED arrays turning their individual lights on and off and at certain shades of brightness, and this together with the flipping of the mirrors makes the picture…
Concept: I envision a microchip hooked up directly to the ribbon cable, receiving the on/off/PWM brightness signal directly. This chip has a sufficient bank of memory to store a value for each possible pixel location. It takes into account the timing of where the mirror would be positioned, and writes the LED data being received to that offset in memory…for instance…
……..^–Mirror’s position, determined from crystal timing?
…………^—now the mirror would be here, so we update this section of data ONLY.
So, at any given time, the whole video display is stored in memory, with the chip updating the sections as the VB scans through drawing them horizontally.
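In rough code terms, the capture chip's job would look something like this. This is just a sketch under assumptions of mine: I'm guessing a 384-wide, 224-tall display, that the mirror timing gives you the current column, and that the serial position within the column gives you the row — none of that is confirmed here:

```c
#include <stdint.h>

#define COLS 384   /* columns swept by the mirror (assumed) */
#define ROWS 224   /* LED pixels per column (assumed)       */

/* Whole-frame store: one brightness value per pixel, so the
 * complete display image is always sitting in memory. */
static uint8_t frame[COLS][ROWS];

/* Called once per pixel as data arrives off the ribbon cable.
 * 'col' comes from the mirror position timing, 'row' from the
 * serial position within the current column. */
void capture_pixel(int col, int row, uint8_t brightness)
{
    frame[col][row] = brightness;
}
```

The point is just that the mirror position is what turns a stream of LED data into a memory offset.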
I couldn’t work out in my mind how to convert the horizontal drawing to a vertical one for TV, but with this concept I think it could work. But that is based on the next issue:
Concept #2: A second chip is connected to the first one, and at each interval determined by its crystal, it reads the entire memory contents of the first chip. It takes that whole screen image and outputs each horizontal line, adding an H-sync signal at the end of the line, and at the very end adds the V-sync signal. This would probably have to go into some kind of video converter circuit, like an AD714 chip (I think that's what I used once), to turn an RGB signal into NTSC/PAL.
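As a sketch of what that second chip does, assuming the first chip stored the frame column-by-column (the way the mirror drew it): read it back row-by-row, tack a marker on after each line, and one at the very end. The 384×224 size and the marker bytes are placeholders I made up — a real circuit would generate analog sync pulses, not bytes in a stream:

```c
#include <stdint.h>
#include <stddef.h>

#define COLS 384
#define ROWS 224
#define HSYNC 0xFE  /* placeholder marker; real hardware would */
#define VSYNC 0xFF  /* generate analog sync pulses instead     */

/* Read the captured frame (stored column-major, as the mirror
 * drew it) and emit TV-style horizontal lines: for each row, all
 * 384 pixels left to right, then an H-sync marker; V-sync at the
 * very end. Returns the number of bytes written to 'out'. */
size_t emit_tv_frame(const uint8_t frame[COLS][ROWS], uint8_t *out)
{
    size_t n = 0;
    for (int row = 0; row < ROWS; row++) {
        for (int col = 0; col < COLS; col++)
            out[n++] = frame[col][row];  /* column-major in, row-major out */
        out[n++] = HSYNC;
    }
    out[n++] = VSYNC;
    return n;
}
```

Note the inner loop is where the column-by-column drawing gets rotated into horizontal scan lines.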
One circuit per eye, and one outputs RED and one outputs BLUE.
I think this might work, but I need to understand how the pulse-width modulation is used to encode brightness, and how I can get the chip to detect and store this data.
With all this talk here about the VB dying…let’s bring it back to life with a TV output! This would rock on my 52″ widescreen TV with red/blue glasses!!
That's pretty much how I had originally planned on doing it… I've found quite a few problems with that though. First, you're gonna need to completely reverse engineer how the displays work. I hooked the displays up to my logic analyzer, and although how it works seems to be pretty simple, it appears to be VERY timing dependent. I have some notes somewhere on what I found with the displays, but IIRC it's a nasty combination of serial and parallel output, and not really clocked.
IIRC the brightness was PWM, and I think the display was split up into parallel sections, with the pixels in each section output serially. So it might have been several parallel pixel sections, each clocked out serially, with the PWM brightness also running in parallel (the only reason I'm thinking this is because when the cables start to get flaky, a lot of times it seems to only affect certain brightness levels).
IIRC an entire display period is 20ms. 5ms is used for the left eye, then there's 5ms of nothing, then 5ms for the right eye, then 5ms of nothing (and repeat). That gives the overall refresh rate of 50Hz. Also, the left and right displays share all their connections except for a single L/R display select.
Anyway, I don't remember how much parallel stuff there was, but IIRC there's 30 wires, several of which are power, and one is the select, so let's just take a guess and say the display is split into 14 sections of 16 pixels each. To draw a display, we need 16 pixel cycles per column, and 384 columns. Assuming there's no blanking time between columns, we need 1/6144th of a 5ms cycle for a pixel. That means that a single pixel is about 0.8us. And 14 of them will be happening concurrently. Now if this is PWM, it will be VERY hard to determine to any sort of precision whether this is a light or a dark pixel (you'd have to use a good input capture… er… 14 of them). Then in a really short amount of time store that data in memory and begin reading the next outputs.

If you could accomplish that, converting to the TV output wouldn't be that bad, since it could be done during the 5ms of nothing between display outputs (to go from vertical to horizontal, you could just copy the entire memory to another block of memory, rotated). If it was fast enough to capture the display output, it should be fast enough to copy a bunch of memory.
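Those timing numbers check out if you run the arithmetic on the guessed geometry (14 sections of 16 pixels, 384 columns, 5ms active window — all guesses from above, not measurements):

```c
/* Guessed display geometry from the discussion above. */
#define SECTIONS        14
#define PIXELS_PER_SECT 16
#define COLUMNS         384
#define ACTIVE_NS       5000000L  /* 5 ms active window, in ns */

/* Serial pixel cycles needed for one eye's frame:
 * 16 cycles per column x 384 columns. */
long pixel_cycles(void) { return (long)PIXELS_PER_SECT * COLUMNS; }

/* Time budget per pixel cycle, in nanoseconds (integer floor). */
long ns_per_pixel(void) { return ACTIVE_NS / pixel_cycles(); }
```

That gives 6144 cycles and roughly 813ns per pixel — the ~0.8us figure — and remember 14 of those streams run concurrently.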
Also, I know the timing isn’t identical throughout the cycle, since the swing of the mirror would cause a distorted image… either narrower or wider depending on the direction that the mirror is moving. So, you’d need to take that into account, and I think since the on time of the display can be longer or shorter because of the mirror position, the processor also increases/decreases the brightness to make it appear even across the display. So, that’s another thing you’d need to take into consideration.
You may want to check the patent for some details on the display, and I’ll try to dig up the notes I took on them, but I really think converting directly from the display output is a very tough solution. It’d at least require a pretty powerful and fast microcontroller. Of course I don’t want to discourage you from experimenting, I just wanted to give you some feedback on what I’ve looked at, and hopefully you can figure out a way around the problems.
The way I'm thinking of doing it now is basically tapping the framebuffer memory. By watching the activity on this memory, you can copy it by writing each section that gets read to a separate stored copy. A separate copy would be needed to make sure that you don't interfere with any manipulation of the original while you're reading it out to a display. The nice thing about this way is that you get the exact data in its original digital format, not converted to a pulse width and then converted back. There are 4 framebuffers: L0, L1, R0, and R1. Each is 32KB.
This way also has some problems though. For one, you know the relative brightness (whether it's a 0, 1, 2, or 3), but you don't know the absolute brightnesses, which are stored in the brightness registers, unless you also tap that memory (which is doable, but it may also be okay to manually control the brightness levels). Also, it would need a pretty fast processor to capture data writes if a processor was to handle this, but the way I think I'd do it is to basically add another chunk of memory in parallel with the existing framebuffer memory. I'd need to put a unidirectional buffer on it, though, to make sure that the memory gets the data when the framebuffer is written to, but doesn't drive the VB data bus when my processor reads the data. With the memory basically piggybacked, that would take a huge load off my processor, and I could probably use a simple microcontroller to periodically read my copy of the framebuffer and output it to the display (probably synced with the output periods of the display to ensure that the framebuffer has the correct data).

That's the other problem… the VB has two framebuffers per display, and I don't necessarily know which framebuffer is the active one without tapping that memory too.
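To make the piggyback idea concrete, here's a rough sketch of what the slow-side microcontroller might do with its shadow copies. The 2-bits-per-pixel packing, lowest-bits-first order, and the idea of reading whichever buffer isn't active are all assumptions of mine — and as said above, actually knowing which buffer is active is the unsolved part:

```c
#include <stdint.h>

#define FB_SIZE 32768  /* 32KB per framebuffer, per the post */

/* Shadow copies kept in the piggybacked RAM, filled passively
 * whenever the VB writes its framebuffers. */
static uint8_t shadow_fb0[FB_SIZE];
static uint8_t shadow_fb1[FB_SIZE];

/* Read the framebuffer that is NOT currently being drawn into,
 * so we always see a complete frame. How 'active' gets detected
 * (tapping control memory, or syncing to the display period) is
 * exactly the open problem described above. */
const uint8_t *stable_fb(int active)
{
    return (active == 0) ? shadow_fb1 : shadow_fb0;
}

/* Unpack one pixel's 2-bit relative brightness (0..3) from the
 * shadow copy. Assumes 4 pixels per byte, lowest bits first —
 * an assumed packing, not a verified one. */
uint8_t pixel_brightness(const uint8_t *fb, int index)
{
    return (fb[index / 4] >> ((index % 4) * 2)) & 0x3;
}
```

Mapping those 0–3 values to absolute brightness would still need the brightness registers (or manual control), as noted above.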
Or, you could take two cameras and change one’s color to blue, overlay them, and output that to your TV :p .
Wow, that’s a long post 🙂 .
I thought displaying the VB on a screen defeats the purpose of its TRUE 3D nature. Also, they are working on a PSP VB emulator, and it can output to TVs.