Modified Gigatron Design Ideas

marcelk
Posts: 488
Joined: 13 May 2018, 08:26

Re: Modified Gigatron Design Ideas

Post by marcelk »

A small tweak is desirable for easier synchronisation of the "almost DMA" protocol: you can remove the right-most 74HC595 and replace it with a 74xx244 as well. The need for a shift register was already gone, because the right-hand processor can poll the game controller whenever it pleases and toggle in the response at its own leisure.

This gives 7 new input lines (the 8th is for the game controller). Two of those will then be hooked up to the video sync signals. With this the application processor can learn when the video processor should be ready for data transfers.
PurpleGirl
Posts: 48
Joined: 09 Sep 2019, 08:19

Re: Modified Gigatron Design Ideas

Post by PurpleGirl »

Thank you! I only posted what I did in the other thread because of the relevance of having 1 Gigatron specialized for video, sound, and emulated interrupts. Actually implementing it is better discussed here. Software interrupts would still be used, just on the dedicated Gigatron. So one could have a dedicated video/sound unit and keep the vintage spirit for the most part.

It is interesting that Potato Semiconductors offers a high-speed (1.125 GHz) version of the 74LS244. That likely isn't too useful since they don't offer high-speed versions of the entire chipset.

What you describe sounds simpler than what I considered, though more limited and less flexible to code for. The reason I considered dual-ported RAM was ease of access, even bidirectional, and to have an additional pathway. So if you have a pixel buffer on the first one, the second one could theoretically access it without assistance from the first Gigatron. So speed wasn't the consideration, but the ease of retrofitting and having multiple data paths. That way, commands could flow across the ports while literal data or arguments come across the RAM channels. There would be no need to shadow them; just use them in situ if dealing with truly asynchronous dual-ported SRAM.

The expansion thread has more details on how to pull off using shared dual-port RAM. The high address bit on the 2nd one could be used to signal a multiplexer to create a 2nd bank to allow direct addressing of the first one.
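The banking idea could be modelled like this. This is only a hypothetical Python sketch: the 32K/32K split and the placement of the shared window are assumptions, not the actual expansion-thread design.

```python
# Model of A15 driving a multiplexer: when the 2nd Gigatron's high
# address bit is set, the upper 32K window maps onto the shared
# dual-port RAM instead of local RAM. Sizes/placement are assumed.

def route(addr: int):
    """Return (bank, offset) the multiplexer would select."""
    if addr & 0x8000:                   # A15 high -> shared bank
        return ("shared", addr & 0x7FFF)
    return ("local", addr)              # A15 low -> local 32K RAM
```

With this wiring, the 2nd Gigatron addresses the 1st one's buffer directly just by using addresses above 0x8000.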

Bidirectional could be useful since that could allow for status updates or asking for some coprocessing such as a random number generator. But if an RNG is needed for sound or graphics, it can be implemented where it's needed. So you could dedicate a fixed memory location on the first one for things such as RNG and even FP results.

Yeah, the native opcodes form a fast PIO arrangement, and wouldn't be any faster as DMA.

As for the keyboard, couldn't the syncs of the 2nd machine be the clocks? And for that to work, would that require syncing the 2 Gigatrons or otherwise deriving the clock (like a frequency divider if the first one runs at 12.5 MHz)?

And does the left one even need any vCPU beyond signal processing software?

I also wonder if there would need to be a boot order, such as boot the second one first and if there could be spurious signals on the ports.

I am not sure what the best way to do a protocol for the signal Gigatron would be. If there isn't a secondary way to access the first Gigatron, then it may require sending 16+ bits: the first byte for the "opcode" and the next byte as the operand (if needed). It probably should be done so that partial or hierarchical decoding (polling and branching in this case) can be used.
PurpleGirl
Posts: 48
Joined: 09 Sep 2019, 08:19

Opcode mods

Post by PurpleGirl »

In the past, I mentioned a way to use a spare NOP to signal an external device, such as resetting a video circuit if you don't know its state. You could make a crude decoder that NORs the pins that are supposed to be low and ANDs the result with the ones that are supposed to be high for your instruction. The NOPs do nothing, so they can trigger outside devices. Just leave one NOP as it is.
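In software terms, that crude decoder's match condition would look like this. A hypothetical Python sketch: the NOP value 0x02 is an assumed encoding for illustration, not the Gigatron's actual one.

```python
# Matching one spare NOP encoding in software, mirroring what the
# NOR/AND gate decoder would do in hardware. The pattern is assumed.

NOP_PATTERN = 0x02  # hypothetical spare NOP opcode byte

def matches_nop(ir: int) -> bool:
    """True when the instruction byte matches the strobe pattern.

    Hardware equivalent: AND the bits that must be 1, NOR the bits
    that must be 0, then AND the two results together.
    """
    must_be_one = NOP_PATTERN             # bits the AND gate checks
    must_be_zero = (~NOP_PATTERN) & 0xFF  # bits the NOR gate checks
    return (ir & must_be_one) == must_be_one and (ir & must_be_zero) == 0
```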

But then last night, an idea came to mind. There could be a way to expand the opcodes to add other instructions: use 2 NOPs to do the switching. So one NOP switches in other control circuitry and another switches the original circuitry back. Thus a group of opcodes can do different things when switched. Just preserve the ones used by video and the most commonly used IP-related instructions (like branches).
marcelk
Posts: 488
Joined: 13 May 2018, 08:26

Re: Opcode mods

Post by marcelk »

The processor has no chips with hidden state. Therefore with the right programming and sometimes some extra decoding logic, you can trigger external devices from practically any signal on the board. There are many examples out there that do just that.

The XOUT register is a good example. After all, XOUT is an external device to the processor. The RAM and I/O expander is another good one, but it needs 3 new chips. A third example is how we used A15 to drive an LED in one of the first breadboard videos.

You can also use 3 output pins from XOUT to drive a chain of 74HC595 shift registers. That gives an unlimited number of output lines, 8 for each in the chain.
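As a model of that chain, here is a small Python simulation of shifting two bytes through two chained 74HC595s via the data, shift-clock, and latch lines. The chip is real; the simulation is only an illustration of how the chain behaves.

```python
# Simulate a chain of 74HC595s driven by three lines: serial data,
# shift clock (SRCLK), and latch (RCLK). Each clock edge moves every
# bit one place; the latch edge copies the registers to the pins.

class ShiftChain:
    def __init__(self, n_chips: int):
        self.bits = [0] * (8 * n_chips)  # internal shift registers
        self.latched = list(self.bits)   # output pins (after latch)

    def shift_bit(self, b: int):
        # Rising SRCLK edge: new bit enters at the serial input end.
        self.bits = [b & 1] + self.bits[:-1]

    def latch(self):
        # Rising RCLK edge: copy registers to the output pins.
        self.latched = list(self.bits)

    def write_bytes(self, data: bytes):
        for byte in data:
            for i in range(7, -1, -1):   # MSB first
                self.shift_bit((byte >> i) & 1)
        self.latch()

chain = ShiftChain(n_chips=2)            # 2 chips -> 16 output lines
chain.write_bytes(bytes([0xA5, 0x3C]))
```

Note that the first byte sent ends up in the chip furthest down the chain, which is why 8 more outputs only cost one more chip and no extra lines.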

HG has designed a completely different decoding scheme (but not built it yet). I believe it's even part-neutral.

Anything goes.
marcelk
Posts: 488
Joined: 13 May 2018, 08:26

Re: Modified Gigatron Design Ideas

Post by marcelk »

PurpleGirl wrote: 22 Nov 2019, 00:35
As for the keyboard, couldn't the syncs of the 2nd machine be the clocks? And for that to work, would that require syncing the 2 Gigatrons or otherwise deriving the clock (like a frequency divider if the first one runs at 12.5 MHz)?
I have difficulty following. I don't understand what problem there is.
And does the left one even need any vCPU beyond signal processing software?
You can reprogram it in any way you like. Direct native code, vCPU code, 6502 code, or something new alike.
I also wonder if there would need to be a boot order, such as boot the second one first and if there could be spurious signals on the ports.
I don't follow, or at least, isn't that an existing issue? None of the flip-flops gets a hardware reset as it is, except for those in the program counter ("because you can init the rest in software").

But you can share the same MCP for a synchronised cold start if that makes you comfortable.
I am not sure what the best way to do a protocol for the signal Gigatron. If there isn't a secondary way to access the first Gigatron, then it may require sending 16+ bits. It might need the first for the "opcode" and the next byte as the operand (if needed). It probably should be done to where partial or hierarchical decoding (polling and branching in this case) can be used.
I don't follow. You can make a protocol with as little as 1 data line. If you split the outputs in groups, you can also hook them up in transputer style. Or in a hypercube.
PurpleGirl
Posts: 48
Joined: 09 Sep 2019, 08:19

Re: Opcode mods

Post by PurpleGirl »

Yeah, I thought the concept of CPU escape sequences would be interesting. So intercept 2 of the unused NOPs and piggyback onto them. One of them switches out some of the control circuitry to allow for opcodes to be recycled to add additional functionality. The other piggybacked NOP switches the original circuitry back in and returns all opcodes back to their original functionality. Of course, besides the components, the cost would be 2 cycles, 1 for setup and 1 for cleanup. Plus it could make code a tad more confusing. So it would look a bit like this.
  • ALT NOP 1; enter alternative opcode set
  • New instructions that reuse existing opcodes, perhaps 32 of them or some other multiple of 16.
  • "
  • "
  • ALT NOP 2; return to original instructions
Some caveats would be not interfering with anything used by the video, the program counter, or any "interrupts," and the added functionality should justify the cost of changing modes/contexts. Plus the branch quirk would need to be taken into account when coding to not run the affected opcodes in the wrong context after a branch.
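As a toy model, the mode switch could be simulated like this. The NOP encodings 0x02 and 0x03 are invented stand-ins; the real spare NOPs would have to be picked from the actual instruction decode.

```python
# Simulate the two-NOP "escape" idea: one NOP swaps in the alternate
# control circuitry, the other swaps the original back, so the same
# opcode byte decodes differently depending on the mode.

ALT_ENTER, ALT_EXIT = 0x02, 0x03  # assumed spare NOP encodings

def run(program):
    alt = False          # which control circuitry is switched in
    trace = []
    for op in program:
        if op == ALT_ENTER:
            alt = True   # 1st NOP: enter alternate opcode set
        elif op == ALT_EXIT:
            alt = False  # 2nd NOP: return to original instructions
        else:
            trace.append(("alt" if alt else "base", op))
    return trace
```

The two bookkeeping instructions correspond to the 2-cycle setup/cleanup cost mentioned above, and the trace makes the branch quirk concern visible: land after ALT_ENTER without executing it and every opcode decodes in the wrong bank.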

This idea is nothing new. The x86 processors did this. For instance, to do MMX and similar on the Pentium, they reused the FP register set, and switching between FP and MMX state was costly. So programmers would make sure that the performance gained by the newer instructions justified changing modes. Even in DOS programming, you could use the new 32-bit instructions of 386 and higher CPUs in real mode; prefix bytes acted as an escape sequence to reach the new instructions.
PurpleGirl
Posts: 48
Joined: 09 Sep 2019, 08:19

Re: Modified Gigatron Design Ideas

Post by PurpleGirl »

I am not understanding why you are having difficulty following. I will try again.

I asked if the syncs from the 2nd Gigatron in this arrangement (the one you put on the left) could be used to drive the keyboard shift register on the first one (the one you put on the right). Thus you wouldn't need to change the shift register, just take the otherwise software-generated signals for the keyboard from the other Gigatron. And I asked, as part of that, if the clock would have to be shared (or derived, if you overclock the first one) for the syncs to make the keyboard shift register work correctly. I didn't say there was a problem; I was just asking if the syncs generated on the 2nd one could be used on the first one, since they would no longer be generated there.

I also asked that if you chain 2 Gigatrons, would the boot order of the 2 machines matter? Would the signal processor need to be booted first? Or would that even matter?

And I still don't see how 8 bits would be enough to push complex instructions or raw data to the 2nd one in a timely fashion. You get one byte per clock cycle. If you can't access an area of memory on the first one (such as dual-ported RAM), then the first one has to push pixel data through the port. This could take some time to build a data map on the 2nd one. Plus I'm not sure what all my protocol could do, given that there are 8 bits at a time. It might require the "opcode" for a first byte, followed by its arguments on 1 or more subsequent bytes, requiring a variable length, multi-byte arrangement. Sure, sending a command to clear the screen or silence music could be done in a single byte, but other things would likely require longer. Drawing a single full line could be easy to send across, but a complex pattern of pixels on and off at different intensities would require sending the full 160 bytes. I don't know what you mean by transputer or hypercube.
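The variable-length, opcode-plus-operands arrangement described above could be sketched like this. A hypothetical Python model: the command set, opcode values, and operand lengths are invented for illustration, including the 160-byte line blit.

```python
# Byte-at-a-time command protocol: the first byte is the "opcode",
# and its value determines how many operand bytes follow. Short
# commands (clear screen) are 1 byte; a pixel line costs 160+.

CLEAR_SCREEN = 0x01          # 1 byte total
DRAW_HLINE   = 0x02          # opcode + y          (2 bytes)
BLIT_LINE    = 0x03          # opcode + y + 160 pixel bytes

LENGTHS = {CLEAR_SCREEN: 0, DRAW_HLINE: 1, BLIT_LINE: 161}

def parse(stream):
    """Split a byte stream into (opcode, operand-bytes) commands."""
    cmds, i = [], 0
    while i < len(stream):
        op = stream[i]
        n = LENGTHS[op]      # operand count decided by the opcode
        cmds.append((op, bytes(stream[i + 1:i + 1 + n])))
        i += 1 + n
    return cmds
```

Since the receiver branches on the opcode before reading operands, this is the "partial or hierarchical decoding (polling and branching)" mentioned earlier.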
marcelk
Posts: 488
Joined: 13 May 2018, 08:26

Re: Modified Gigatron Design Ideas

Post by marcelk »

PurpleGirl wrote: 22 Nov 2019, 21:07
I am not understanding why you are having difficulty following. I will try again.

I asked if the syncs from the 2nd Gigatron in this arrangement (the one you put on the left) could be used to drive the keyboard shift register on the first one (the one you put on the right). Thus you wouldn't need to change the shift register, just take the otherwise software-generated signals for the keyboard from the other Gigatron.
In the standard system, the game controller gets video sync signals only because we don't have other outputs available to drive it independently. But in the dual setup, this reason has disappeared: there's the second XOUT and that gives 8 outputs. So I don't know why the video signal and input controller should remain entangled. Programming for an entangled setup is not the easiest. And each time the video signal gets mixed up, we also lose the input. But you can keep that of course, as it has been done before. But in the dual setup it is worse, because the right-hand processor isn't even generating the video syncs. In my experience, I find it much simpler programming when you control the signals to your device directly, rather than having to respond to them. And as a rule of thumb, writing software costs many more hours than designing hardware.

But that's not the reason to remove the right-most shift register. The fundamental problem is that it's blocking 7 input lines! We need input lines for listening to the video processor. For example, when you work out some kind of "almost DMA" or message protocol, at some point you must sync up the message burst. For that you must know when the video processor is available to process data. One way is to deduce that from its sync signals. There are other ways, such as ACK signals and retry loops. But in every case you need input lines. The 74HC595 exposes only one, and it goes to the game controller. But remove it, and suddenly we have 8 inputs. And that is how it was originally designed. The input 74HC595 cripples the design rather than helping it.
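The ACK/retry idea mentioned above could look like this. A minimal Python sketch: read_ready and send_byte are hypothetical stand-ins for the real input- and output-port accesses.

```python
# Application-processor side of a handshake: poll an input line until
# the video processor signals "ready", then send one byte. The retry
# loop bounds how long we wait before giving up.

def send_with_retry(byte, read_ready, send_byte, max_polls=1000):
    for _ in range(max_polls):   # retry loop: wait for the ready line
        if read_ready():
            send_byte(byte)      # video processor is listening now
            return True
    return False                 # video processor never became ready
```

The same skeleton works whether the ready signal is an ACK line or a level deduced from the video syncs; only read_ready changes.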
And I asked as part of that if the clock would have to be shared (or derived if you overclock the first one) for the syncs to make the keyboard shift register work correctly. I didn't say there was a problem, just asking if the syncs generated from the 2nd one could be used on the first one since they would no longer be generated there.
The crystal's 6.25 MHz clock is unrelated to what the game controller needs: that is several orders of magnitude slower. Now we poll it unconditionally at 60 Hz, but it can be latched and pulsed whenever you need its value.

Avoid sending MHz-range signals into the long cable anyway. It will work, but don't forget it's also an antenna.
I also asked that if you chain 2 Gigatrons, would the boot order of the 2 machines matter? Would the signal processor need to be booted first? Or would that even matter?
I believe it only depends on how you program the system as a whole. But as long as both halves can see the other, I wouldn't worry.
And I still don't see how 8 bits would be enough to push complex instructions or raw data to the 2nd one in a timely fashion. You get one byte per clock cycle. If you can't access an area of memory on the first one (such as dual-ported RAM),
If you ask me how I would chain up two Gigatrons, I'd spend an evening drawing what I know I can make work. But this is your project, and that doesn't mean there's only one way. Best build it with the parts you're comfortable with, and in a way you know you can program. If the concept phase doesn't converge, it's too much at once. What you can then do is build smaller related projects first, to get familiar with the parts and the problem domain. We took 3 or 4 such steps for the Gigatron (some are still on my hackaday.io page).

Best is to proceed in whatever direction you're comfortable with, that's the most important factor for success. And please share diagrams and photos as you go!
PurpleGirl
Posts: 48
Joined: 09 Sep 2019, 08:19

Re: Modified Gigatron Design Ideas

Post by PurpleGirl »

I don't know if I will actually do the dual Gigatrons. Expense is an issue right now. Besides, I am the type who has to see the end before starting something.

I do have questions about XOUT (on the right one). It appears from the schematics that it is currently clocked by the H-sync, so what will it be clocked with when the sync pulses are moved to the one you drew on the left? Can its clock line then attach to Clock2? If not, where?

And do OUT and XOUT have different opcodes controlling them? I mean, I've seen a list of all possible current opcodes, and I see no distinction between OUT and XOUT.

http://www.iwriteiam.nl/PGigatron.html

So which of the ones marked there as "out" send to OUT, and which ones go to XOUT?

And, come to think of it, being able to read the syncs would be useful, and that could prevent any problems with the boot-order issue, along with addressing the main reasons for sending them back to the one on the right. For instance, if there could be issues with that, a remedy could be a read loop of those signals early in the boot ROM, so it halts until they can be read.
marcelk
Posts: 488
Joined: 13 May 2018, 08:26

Re: Modified Gigatron Design Ideas

Post by marcelk »

For understanding the 8-bit architecture, I suggest watching Walter's talk (and slides) on the Gigatron site's front page, browsing our data sheets page, checking the annotations in the instruction decoder schematics, or studying the mini emulator Docs/gtemu.c. The latter acts as the definitive instruction set reference for emulator writers. It's also printed in the manual.

In brief:

OUT is one of the 4 user registers on the ALU result bus.

XOUT isn't part of the CPU proper and there are no opcodes for it because there's no need for that.

Indeed XOUT gets its clock from one of the OUT lines. That means that in my proposal, during high speed message transfer, the game controller could receive undesirable latch and pulse signals. But if AC is prepared well in software (load AC with the last set XOUT value), and if the message transfer sends from RAM instead of through AC, there will be no such spurious signals. Like this:

Code:

  ld   <msg,x           ; X := low byte of the message address
  ld   >msg,y           ; Y := high byte of the message address
  ld   [xoutCache]      ; preload AC with the last value written to XOUT
  ld   [y,x++],out      ; stream bytes from RAM to OUT, leaving AC alone
  ld   [y,x++],out
  ld   [y,x++],out
  ...