Modified Gigatron Design Ideas

Using, learning, programming and modding the Gigatron and anything related.
monsonite
Posts: 101
Joined: 17 May 2018, 07:17

Re: Modified Gigatron Design Ideas

Post by monsonite »

I would advise designing the computer by starting with the data paths; the instruction set can always follow. Many do it the other way around and end up with complex designs.
Yes - I'm just starting to find that out the hard way :(
PurpleGirl
Posts: 48
Joined: 09 Sep 2019, 08:19

Re: Modified Gigatron Design Ideas

Post by PurpleGirl »

I was talking about modifying a Gigatron. Since there are 12 NOPs, I could intercept one for use for the "video card" for it. Thus once I discover the opcode I want to intercept, I can use logic to see if the appropriate low bits are low and the high ones are high, make a single signal out of it, and use that to reset the counter chip used for counting pixels, thus resetting the sync circuitry and restarting in the top left corner. In addition to the blinking lights, unused NOPs can be used to signal other circuits.
marcelk
Posts: 488
Joined: 13 May 2018, 08:26

Re: Modified Gigatron Design Ideas

Post by marcelk »

I don't know how to modify nop instructions without adding a lot of complexity, and that isn't really needed. If the purpose is just to drive an external video circuit using a Gigatron board, there are ready-to-go ways within the current circuit. In descending order of throughput:

1. Repurpose the OUT register (160 ns per byte)
2. Repurpose the XOUT register (0.5 µs per byte)
3. Use one of the four SPI ports on the I/O expander (21 µs per byte, full duplex)

Everything else is software. No matter which route you take, the software needs to change, and most of the work will be there, not in patching the hardware. Hope that helps? I briefly discussed such a setup in my VCFB presentation last week, and why we chose to go another way.
PurpleGirl
Posts: 48
Joined: 09 Sep 2019, 08:19

Re: Modified Gigatron Design Ideas

Post by PurpleGirl »

I already mentioned how I'd do a video card reset command. One of the NOPs can be piggybacked; they already do nothing. I never proposed changing any, just adding functionality to one. So all it takes is minor decoder logic for the desired instruction: NOR the pins expected to be off, AND that with the pins expected to be on, and the result is a signal that is only active when that alternate NOP is used. That signal can hold the counter in reset. That sounds simple to me. So if you need to reset the video hardware, you would use the modified NOP like any other opcode. The software can use it when it needs to go to the top of the screen and cannot trust the location of the raster. This signal would only be for syncing Gigatron software with the top of the page.
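In software terms the decode is a one-gate-deep mask check. A sketch, where the masks are hypothetical since the real bit pattern depends on which unused NOP encoding gets picked:

```python
def nop_decode(opcode, ones_mask=0b11110000, zeros_mask=0b00001111):
    """Detect the repurposed NOP: AND of the pins expected high,
    NOR of the pins expected low. Masks are placeholders for
    whichever unused encoding is chosen."""
    high_ok = (opcode & ones_mask) == ones_mask  # AND of the "on" pins
    low_ok = (opcode & zeros_mask) == 0          # NOR of the "off" pins
    return high_ok and low_ok  # when true: hold the pixel counter in reset
```

In hardware this is just an AND gate fed by the NOR of the zero bits, so it only adds a chip or two.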

Yes, for sending the data, I could either intercept the memory (via dual-ported RAM, or by modding the execution unit to include a /HALT line) or use the OUT register to write to any memory on the video card.

* In some ways, intercepting the memory can be the simplest if using dual-ported SRAM. The pixel counter strobes the port B address lines.

* Adding a /HALT line so the card can read the memory might be harder, since you cannot add the video hardware incrementally that way; it would be all or nothing. Disrupting the program flow would break all the software timings, so you can't transfer part of the work to hardware and leave the rest for later: once the /HALT line is used to make memory reads safe, none of the software timings can be trusted. The Atari home computers used a modified 6502 that included a /HALT line so ANTIC (the video coprocessor and DMA controller) could take over without corrupting the memory.

* Using the OUT lines is good, but if you want to send complete lines without regard to syncs in software, you probably should buffer them and send them to the monitor a screen at a time. So if you add RAM to the video card (19k minimum) and send a screen at a time, you'd still have to worry about memory contention on the controller side. A /HALT line would not be feasible there, since the timings and the number of updated pixels are non-negotiable. The beauty of this approach is that the Gigatron updates a page at a time at whatever speed it can, new information overwrites old information in memory, and the pixel counter constantly strobes the address for each pixel.
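The 19k figure follows from the Gigatron's 160x120 visible pixels at one byte per pixel; a quick sanity check:

```python
# Minimum frame buffer for the video card: the Gigatron's visible
# resolution is 160x120 with one byte per pixel.
width, height = 160, 120
frame_bytes = width * height  # 19200 bytes, roughly the "19k minimum"
```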

And yes, I know the software would need to be changed, but that is immaterial to me at this stage of hardware planning.

I am after performance more than nostalgia or simplicity. I already know why you chose your approach and am not knocking that. I just have different goals.
PurpleGirl
Posts: 48
Joined: 09 Sep 2019, 08:19

Re: Modified Gigatron Design Ideas

Post by PurpleGirl »

A compromise comes to mind. The existing clock (or a clock divider if going for 12.5 MHz) could be used to drive a pixel counter. The idea of intercepting a NOP instruction to create a video reset could be used on power-up to reset the pixel counter, and at any other time the pixels being sent might not be in sync. Gates and latches would create the syncs off the pixel counter. This would allow the ROM and software to be written without updating the sync signals, though the color information would still need to be cycle exact. That, in turn, should increase performance some and reduce some of the memory overhead.
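A rough software model of that counter-plus-gates sync generator, using the standard VGA 640x480@60 totals (800 pixel clocks per line, 525 lines per frame, both syncs negative polarity). The real Gigatron counts 6.25 MHz cycles, i.e. 200 quarter-pixel ticks per line, but the structure is the same:

```python
# Hypothetical sync generator driven by a free-running pixel counter,
# with standard VGA 640x480@60 timing constants.
H_VISIBLE, H_FP, H_SYNC, H_TOTAL = 640, 16, 96, 800   # pixels per line
V_VISIBLE, V_FP, V_SYNC, V_TOTAL = 480, 10, 2, 525    # lines per frame

def syncs(pixel_count):
    """Return (hsync, vsync) for a given counter value; both are
    active-low, so False means the pulse is being asserted."""
    x = pixel_count % H_TOTAL
    y = (pixel_count // H_TOTAL) % V_TOTAL
    hsync = not (H_VISIBLE + H_FP <= x < H_VISIBLE + H_FP + H_SYNC)
    vsync = not (V_VISIBLE + V_FP <= y < V_VISIBLE + V_FP + V_SYNC)
    return hsync, vsync
```

The repurposed NOP would simply zero pixel_count, pulling the beam state back to the top-left corner.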

The NOP idea would be a workaround since we don't have hardware-generated interrupts or a /HALT line, and the video circuitry can't ask for processing or pause the CPU. If the video can't stop the CPU, then the CPU can at least reset the monitor to a known state by restarting the pixel counter.

This can be incremental. After rewriting the ROM to not handle the syncs, the next hurdle would be getting the pixel data from the CPU to the graphics circuitry via one of the previously mentioned methods (adding a /HALT line so the CPU can be paused while memory is read, using dual-ported RAM, or taking the data from the port and perhaps buffering it in dedicated SRAM).

---

A crazy idea came to mind about expanding colors: find a way to use the sync pins for additional colors and change the context of those pins dynamically. But the more I think about it, the less feasible it sounds.
Last edited by PurpleGirl on 01 Nov 2019, 11:04, edited 1 time in total.
at67
Posts: 639
Joined: 14 May 2018, 08:29

Re: Modified Gigatron Design Ideas

Post by at67 »

Have you thought about using a modern FIFO, (or even something older like the Dallas2009 which would almost fit the retro timeline), to decouple the video bitstream from the output VGA timing requirements?

In theory it would be pretty simple: the FIFO accepts the video data at whatever speed the Gigatron can generate it, then outputs it at the rate required by whatever video mode is in use, (this works as long as the Gigatron can produce the video bitstream at a higher rate than the video output consumes it).

The beauty of a decent FIFO is that not only is it dual ported and the input completely decoupled from the output, but it is also easily able to repeat output streams with simple external logic. This would allow the Gigatron to output 1 horizontal line as fast as it could and then have the FIFO + external logic repeat it any number of times required, (i.e. 4 times to emulate the fullscan line mode), but only require the processing of a 1 in 4 scanline mode.
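As a toy model of that repeat trick, (class and method names made up for illustration):

```python
from collections import deque

class LineRepeatFIFO:
    """Toy model: the CPU pushes one horizontal line into the FIFO,
    the output side replays it 'repeats' times, e.g. 4x to fake the
    full-scanline mode from 1-in-4 scanline data."""
    def __init__(self, line_len, repeats=4):
        self.line_len, self.repeats = line_len, repeats
        self.fifo = deque()

    def needs_data(self):
        # The NEED DATA signal back to the Gigatron.
        return len(self.fifo) < self.line_len

    def push(self, byte):
        self.fifo.append(byte)

    def pop_line(self):
        # Drain one line, replay it 'repeats' times on the output side.
        line = [self.fifo.popleft() for _ in range(self.line_len)]
        return line * self.repeats
```

The real part would of course do this with the FIFO's retransmit/flag pins and a small state machine, not software.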

This would also, in theory, allow you to overclock the Gigatron to any frequency it is capable of and still have the output VGA timing meet its strict requirements, (i.e. no half pixels at 12.5 MHz, no having to overclock to multiples of 6.25 MHz, etc).

In practice, adding a FIFO between the OUTPUT register and the VGA signals would require some butchering; another option is to make it a separate daughter board that plugs into the OUTPUT register's socket, providing compatibility with the old video generation method and adding its own video output ports, (VGA/HDMI/etc), HSync/VSync generators, and so on.

Something like this should in theory remain completely compatible with all current software as well; you would obviously need to change the video generation native code within the ROM, but it would be even simpler than it currently is.

e.g.

Code: Select all

Check for FIFO needs data.
Splat Horizontal line to FIFO.
Repeat

The magic would be in the state machine controlling the FIFO and the NEED DATA signal back to the Gigatron, thus keeping the required native code ROM changes as simple as possible.

P.S. There are 8 outputs with 2 being used for SYNC generation; if you generate the SYNC signals yourself then you could have the native ROM code output all 8 bits of video memory, (a trivial change to the ROM as far as I can see), and thus expand the 6 bits of colour depth to 8 bits. How you would interpret those 8 bits would be completely up to you: you could use them as raw RGB bits, e.g. R3:G3:B2 or R2:G2:B2:A2, etc; or even have them look up a higher colour depth from a traditional video palette chip, i.e. an 8-bit lookup into a 24-bit palette.
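e.g. a sketch of the raw R3:G3:B2 interpretation, expanding to 24 bits by replicating high bits into low bits, (bit replication is just one common choice, not the only one):

```python
def rgb332_to_rgb888(pixel):
    """Expand an R3:G3:B2 byte to 24-bit colour by bit replication,
    so that an all-ones field maps to full intensity (255)."""
    r = (pixel >> 5) & 0b111
    g = (pixel >> 2) & 0b111
    b = pixel & 0b11
    r8 = (r << 5) | (r << 2) | (r >> 1)  # rrr -> rrrrrrrr (approx.)
    g8 = (g << 5) | (g << 2) | (g >> 1)
    b8 = b * 0b01010101                  # bb -> bbbbbbbb (exact)
    return r8, g8, b8
```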

If you did decide to take it this far, I personally would make a list of new video modes whilst still supporting the old R2:G2:B2 mode, thus keeping software compatibility with the original video model. The current video mode leaves 2 bits of video memory unused for every single onscreen pixel, (although some applications use those 2 bits for advanced purposes).
PurpleGirl
Posts: 48
Joined: 09 Sep 2019, 08:19

Re: Modified Gigatron Design Ideas

Post by PurpleGirl »

What would be a decent FIFO to use? I never really thought of FIFOs and shift registers. So any information on those would be a good start.

Now I am not sure how such a card could tell the Gigatron that it needs data in its FIFO, or how software could read that request. There is currently no way to signal back to the Gigatron.

And there is another issue. How will the keyboard get its clock? That comes from the software-generated syncs. Would the keyboard UART need to get syncs from the video board? And would getting the syncs from there work?

And I thought about the video pinouts and figured that if the syncs could be removed from the output, I could expand two of the colors a bit. A daughterboard could have 2 separate D/A converters going to separate sockets. That would mean 256 colors. I'm not sure which 2 colors I'd want to expand if I were to do that: I could go for more reds/blues/purples, more reds/greens/yellows/browns, or more blues/greens/teals. In that case, the pics could be remastered in the ROM to provide up to 4 times the color depth. The new D/A converter would require recalculating the resistor values for 2 of the channels to provide the additional colors while still matching the impedance of the monitor. However, yeah, the 2 pins could also allow for 4 palettes. But what advanced purposes are those bits used for besides syncs?

Comments on Suite-16
But going that far, maybe a new Gigatron-like machine could be built after Suite-16 is finished. I sure hope he adds some way to add ports and to get/send their data; he's out of instructions, so a NOP would have to be carved out of other instructions. Since his new IN and OUT instructions apparently work with the accumulator, I think he should use the last 8 bits (the immediate field) to specify a port address, since that would allow up to 256 ports. Then expansion would be simplified, since peripherals could know who should speak and who should listen, using some simple address decoder logic on the peripheral side.

Using Second Gigatron for Video/Sound
Still, the idea of a 2nd dedicated Gigatron sounds pretty good. Communication could be done across shared dual-ported SRAM. Both ROMs would need to be modified for their roles. The port could be used to send more complex commands to the 2nd Gigatron (via its modified IN port, with a buffer instead of a UART), which would handle video and sound. For instance, if the memory is shared, the 2nd Gigatron could be told to blank the screen, manage sprites, play a tune from a table of notes in memory in the "background," etc. The 2nd one could run an emulator for a custom video and sound command set. The extra ROM space on the 2nd one could store more waveforms, character sets, textures, sprites, etc.
at67
Posts: 639
Joined: 14 May 2018, 08:29

Re: Modified Gigatron Design Ideas

Post by at67 »

*Note* Not sure what is up with the quoting system, but it seems broken in Mozilla and Edge under Win10 for this particular message :/

What would be a decent FIFO to use? I never really thought of FIFOs and shift registers. So any information on those would be a good start.

Here's a quick example of one that you can buy at a decent price and should have no trouble doing the job, (12ns, easily prototypable, etc).
https://au.mouser.com/ProductDetail/IDT ... FC5qLHk%3D

Now I am not sure how such a card could tell the Gigatron that it needs data in its FIFO, or how software could read that request. There is currently no way to signal back to the Gigatron.

For something like this to work you are already making reasonable modifications to both software and hardware, so adding to the Gigatron's input capability shouldn't be that difficult. The Gigatron uses the IE signal, (active low), from the '139 bus decoder to signal that data can be input on the bus, (currently from the 595 shift register, using instructions like 'st in,[15]'), so you could hack into the 595's socket in the same way as you would for the output. For a more comprehensive way of communicating with the Gigatron through an expansion bus, see this thread.
https://forum.gigatron.io/viewtopic.php?f=4&t=64

And there is another issue. How will the keyboard get its clock? That comes from the software-generated syncs. Would the keyboard UART need to get syncs from the video board? And would getting the syncs from there work?

I'm not really following this section. If I attempted what I am suggesting, I would make the entire system backward compatible with current hardware and software from vCPU on upwards, (so not at the native code level, as you would be re-writing portions of the ROM). So the current PS/2 keyboard would have to work as is; hacking into the 595's socket to add your own input would then mean emulating the current 595's functionality.

And I thought about the video pinouts and figured that if the syncs could be removed from the output, I could expand two of the colors a bit. A daughterboard could have 2 separate D/A converters going to separate sockets. That would mean 256 colors. I'm not sure which 2 colors I'd want to expand if I were to do that: I could go for more reds/blues/purples, more reds/greens/yellows/browns, or more blues/greens/teals. In that case, the pics could be remastered in the ROM to provide up to 4 times the color depth. The new D/A converter would require recalculating the resistor values for 2 of the channels to provide the additional colors while still matching the impedance of the monitor. However, yeah, the 2 pins could also allow for 4 palettes. But what advanced purposes are those bits used for besides syncs?

Generally, when choosing which colour bit depths to expand, you choose the colours based on human sensitivity and perception, (assuming you're outputting to a common display device and not something esoteric): so green first, then red and lastly blue. I mentioned some common configurations. IMO the best way to use the extra 2 bits is as an 8-bit lookup into a 24-bit palette chip, as has been done a million times before, (e.g. VGA's 256 colours out of 16.7M). There used to be many single-chip palette solutions that provided this functionality, (Brooktree used to be the master of this domain, but my graphics hardware experience has waned in the last 15 or so years, so Google is probably your best bet).
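In software terms the palette chip is just a 256-entry table of 24-bit colours indexed by the pixel byte, (the table contents here are arbitrary):

```python
# 256 palette entries chosen out of 16.7M possible 24-bit colours;
# a real RAMDAC holds this table in on-chip RAM, written by the CPU.
palette = [(i, 255 - i, (i * 3) % 256) for i in range(256)]

def dac_lookup(pixel_byte):
    """What the palette chip does every pixel clock: index the table."""
    return palette[pixel_byte & 0xFF]
```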

But going that far, maybe a new Gigatron-like machine could be built after Suite-16 is finished. I sure hope he adds some way to add ports and to get/send their data; he's out of instructions, so a NOP would have to be carved out of other instructions. Since his new IN and OUT instructions apparently work with the accumulator, I think he should use the last 8 bits (the immediate field) to specify a port address, since that would allow up to 256 ports. Then expansion would be simplified, since peripherals could know who should speak and who should listen, using some simple address decoder logic on the peripheral side.

Adding generic memory and/or IO mapped ports to an expansion bus/daughter board with an API to provide access would be a significant advancement in the Gigatron's expansion abilities, as long as it was fully backwards compatible with the current eco-system I would be all over it.
PurpleGirl
Posts: 48
Joined: 09 Sep 2019, 08:19

Re: Modified Gigatron Design Ideas

Post by PurpleGirl »

Thank you for your reply and info. The FIFO sounds interesting, though I am unfamiliar with most of the signals other than Dx and Qx. It is interesting that the one you propose has 9 bits; I think they added the last bit for a traffic sync signal, but it could easily be used as a color bit.

On the keyboard clock: that comes from the monitor syncs, which are software generated. If you remove the code for creating the video syncs, which is sort of my point in wanting a video card, then the keyboard/joystick won't work, since the Gigatron's UART won't get the syncs anymore; neither will XOUT (so no sound or lights), since that chip also uses hsync as its clock. So I'm wondering if wiring that to the video card would work. If you strip all the cycle-exact code from the ROM, the input port will not work anymore unless it gets its serial syncs from another source. That is how interconnected the Gigatron is.

I'm not familiar with palette chips or what they do. On the colors, I was just thinking of using them raw, maybe extending red and perhaps blue. I did ask how Gigatron software uses those bits in advanced ways.

And as for what I mentioned about Suite-16: while Monsonite intends it as a coprocessor, I'd love to see a Gigatron-like machine using it as the primary processor. The vCPU could be rewritten on top of the S16 instruction set, so Gigatron vCPU apps should work without modification. That would give more power, due to it being 16-bit, having a richer native instruction set, and the emulator being able to avoid expensive thunking operations. And doing all the other hardware like the Gigatron wouldn't be hard. Native apps would need to be ported. I think he'll use a single bidirectional port with 256 addresses, so adding video and sound could be done in the Gigatron way, with plenty of port expansion room. If the ports were 16-bit, they could do more (more colors, better sound resolution, or more blinkenlights), but from what I'm reading, he may stick with 8 bits there (he refers to the data as "char" type).

And I haven't forgotten my other idea about chaining 2 Gigatrons and specializing them in their ROMs.
marcelk
Posts: 488
Joined: 13 May 2018, 08:26

Re: Modified Gigatron Design Ideas

Post by marcelk »

PurpleGirl wrote: 18 Nov 2019, 11:25 It would be nice to find a way to chain 2 Gigatrons. I'm not sure how one would exactly do that. Sure, one could use dual-ported RAM to get the information out of RAM. Maybe the Out of the first one could be used to send special video and sound opcodes to a modified In of the second one, perhaps buffered through a shift register. The 2 ROMs would be modified to be fit for specialization.
[This came up in a different thread, but I prefer to share my thoughts on chaining here.]

From the first beginnings, early 2017, the OUT and IN ports were designed so that they can be connected for high-speed data transfer. It's no coincidence that the IN and OUT blocks are at the same vertical position in the block diagram. It is a bit of a coincidence that the instruction set gives such high significance to the /IE signal. But because of that, the processor core can input a byte, do an ALU operation on it, and then output the result, all in the same 160 ns clock cycle. That makes it a simple DSP.

In the kit edition, the 74LS244 non-inverting tri-state buffer 'BUS-IN' got replaced with the 74HC595 shift-register. Many hobbyists are familiar with it. The breadboard prototype still has a 74LS244 on that location. The block diagram wasn't even updated: today it still shows there are 8 parallel input bits into the system:

[Attachment: Parallel IN.png]

The only reason we have the 74HC595 now, instead of a 74HCT244, is that we found those nice game controllers with a shift register inside. That was late 2017, and at that point in the project we were really not in the mood to shoehorn software shifting in the video loop (which would have been the "proper" solution, more in Gigatron style). "Perfect is the enemy of good", so we put in the 74HC595: it was cheaper, had fewer pins, saved us a month of work and nobody would care that it really doesn't belong there (from a purist point of view). But if you look closely at that chip, it has the same 74xx244 on the inside, with a shift register next to it. (In my mind I consider only the buffer as part of the processor proper.) Even though it's not quite a UART yet, with its 16 internal flip-flops it's already an ugly and overly complex thing:

[Attachment: 74HC595.png]

Knowing that, it has become trivial to chain up two Gigatrons: replace one 74HC595 with a 74HCT244 and connect its A pins directly to the Q pins of the other system's OUT register. You may or may not want to share the system CLK lines as well, to make it a negative chip count modification. The keyboard or game controller goes on the right, where we still have a 74HCT595. But now we only trigger it whenever we need to poll it, and use two XOUT lines for that instead of the video sync signals. I think visually, so like this:

[Attachment: Chained.png]

I'm sure it will work just like that. To do the same video tricks, the left processor maintains the display list (videoTable at page 1) and the pixel memory. The right processor sends update commands over some protocol. The left one can spend all of its vCPU time listening and processing those. It can run an almost standard ROM version, with just that one application replacing the main menu. The software on the right processor can start from scratch with an empty EPROM: the world is its oyster. It has no timing responsibilities, and with that it can run native code without restrictions. It can even fake the Loader protocol and load software this way (which needs to be interpreted because of the Harvard setup, so I would keep vCPU around for that). BabelFish needs to understand there's no 60 Hz signal anymore, but those are small things.

For any dual-ported RAM concept I'd like to see some form of detail before I can join that part of the discussion: a data sheet of an actual component under consideration, and some sort of timing and layout concept. The reason is this: microcomputers and home computers were blessed with very fast RAM. Or actually they weren't; it only appears that way because they had processors made with very slow transistors inside. That made them cheap compared to "real" computers. In such a system it makes sense to connect all kinds of external support logic (video, sound) to the same memory bus: those memories could easily handle multiple data streams because the processor was not using them at the fullest. In my mind, our situation is the opposite: we have very fast BJT-based logic organised in a way that saturates the bandwidth of the 70 ns RAM chip (which is already very fast itself). Therefore expanding directly on the bus looks difficult to me.

Even if I carbon-copy all the writes above address 0x100 to a second RAM, I don't know how to get the data out in a clean way for display. I could double its speed and alternate access cycles, but then I could also just speed up the first RAM to begin with. And the one or two data sheets I've seen for dual-port memories have asymmetric access.
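In rough numbers, counting only the RAM's rated access time and none of the address setup or propagation margins:

```python
# One RAM access per 160 ns instruction cycle at 6.25 MHz. If a video
# port had to interleave its own accesses, each side would get an 80 ns
# slot: only 10 ns of headroom over the 70 ns part, before any setup
# and propagation delays are counted.
cycle_ns = 1e9 / 6.25e6      # 160.0 ns per instruction cycle
slot_ns = cycle_ns / 2       # alternating CPU/video access slots
headroom_ns = slot_ns - 70   # margin over the RAM's access time
```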

Edit: with a proper SYS extension, and in a shared clock configuration, the RAM-to-RAM transfer can be in bursts of 1 byte per clock cycle:

Code: Select all

Sender:
  ...
  ld [y,x++],out
  ld [y,x++],out
  ld [y,x++],out
  ...

Receiver:
  ...
  st in,[y,x++]
  st in,[y,x++]
  st in,[y,x++]
  ...
That is almost DMA already.