Modified Gigatron Design Ideas

Using, learning, programming and modding the Gigatron and anything related.
marcelk
Posts: 488
Joined: 13 May 2018, 08:26

Re: Opcode mods

Post by marcelk »

Thanks for clarifying. The only point I tried to make is that if you want to create extra output signals, every flip-flop's output on the board is exposed (as "bare metal you can solder to") and available for tapping into. None are hidden behind a buffer for example, as in many other designs. Therefore you don't need to restrict yourself to the IR flip-flop outputs. That's all.

Maybe I misunderstand, but if you reset a video controller while it is active, most screens will black out for a couple of seconds. Screens don't like discontinuities in the sync signals they receive. The more modern ones even less so than older ones. Just a caution.
PurpleGirl
Posts: 48
Joined: 09 Sep 2019, 08:19

Re: Opcode mods

Post by PurpleGirl »

Thanks, now I understand. And yes, I hadn't thought of that issue. But an external device reset pin could still be of use. In the case of a video card, it could instead signal the card not to display data until the next frame. So if it has its own memory, it can reload that memory from the start without displaying anything until the next v-sync. I was trying to work out how to get by without the video device signaling back.
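To illustrate what I mean, here is a minimal sketch (plain Python, all names made up) of the behaviour I have in mind: the "reset" signal only blanks the display and flags a reload, syncs keep running, and the picture comes back at the next v-sync.

    # Hypothetical video device that blanks and reloads instead of resetting.
    class VideoDevice:
        def __init__(self):
            self.display_enabled = True     # pixels shown normally
            self.reload_pending = False     # host has requested a reload
            self.framebuffer = bytearray(160 * 120)

        def on_reset_pin(self):
            # Host pulsed the "reset" pin: stop showing stale data, but keep
            # generating syncs so the monitor never loses lock.
            self.display_enabled = False
            self.reload_pending = True

        def load(self, offset, data):
            # Host refills local memory from the start while the screen is blanked.
            self.framebuffer[offset:offset + len(data)] = data

        def on_vsync(self):
            # The next vertical sync is the safe point to show pixels again.
            if self.reload_pending:
                self.reload_pending = False
                self.display_enabled = True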

---

Most of the NOPs and AC=0 ops are ones where a load, store, or ALU op is being done on AC with itself. I'm familiar with something similar on the Intel architecture. For instance, since most memory ops took 3 to 7 cycles (with the stack ops being the fastest), using the ALU for register-only ops to set AX was much faster, so XOR AX, AX is faster than MOV AX, 0. However, such tricks are not needed here: with the Harvard architecture, loading an immediate is just as fast, so ALU ops offer no speed advantage over register loads.
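As a quick illustration (a Python doodle, not Gigatron code), here is the effect of each ALU operation applied to AC with AC itself as the operand; the identity cases are the NOP-like encodings and the XOR/SUB cases are the AC=0 ones:

    # Effect of applying each ALU operation to AC with AC as its own operand.
    ops = {
        "ld":  lambda ac: ac,                  # AC := AC       -> NOP-like
        "and": lambda ac: ac & ac,             # AC := AC & AC  -> NOP-like
        "or":  lambda ac: ac | ac,             # AC := AC | AC  -> NOP-like
        "xor": lambda ac: ac ^ ac,             # AC := AC ^ AC  -> AC = 0
        "sub": lambda ac: (ac - ac) & 0xFF,    # AC := AC - AC  -> AC = 0
        "add": lambda ac: (ac + ac) & 0xFF,    # AC := 2*AC, the exception
    }
    for name, op in ops.items():
        print(name, [op(ac) for ac in (0x00, 0x55, 0xFF)])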
finbarr_saunders
Posts: 1
Joined: 24 Jan 2020, 08:15

Re: Modified Gigatron Design Ideas

Post by finbarr_saunders »

I've made a breadboard VGA card with the same resolution and colours as a Gigatron, which I have interfaced with an Arduino Mega 2560 and Warren Toomey's CSCvon8. See the video for an example of it running: https://www.youtube.com/watch?v=I49fX14YyK8. Maybe this could be used with a Gigatron somehow.
PurpleGirl
Posts: 48
Joined: 09 Sep 2019, 08:19

Re: Opcode mods

Post by PurpleGirl »

On the ROM control unit idea, if you can't get faster than 45 ns then the theoretical maximum would be 22 MHz, assuming you use a pipeline arrangement. One could latch the ROM outputs into registers and act on them in a later cycle if necessary. One way to speed things up would be to shadow it: with 10 ns SRAM, you could get that up to 100 MHz.
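For the arithmetic behind those numbers, a quick sanity check (Python):

    # Maximum clock implied by the access time alone, ignoring other path delays.
    for t_ns in (45, 10):
        print(f"{t_ns} ns -> {1000 / t_ns:.1f} MHz")   # 45 ns -> 22.2 MHz, 10 ns -> 100.0 MHz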

Shadowing it could open the possibility of adding more instructions than the instruction bits allow. For instance, if there were a way to modify the shadow copy of the control ROM, you could add a few extra instructions on demand, but I can see some problems with that in terms of security and coding/debugging.

An example of that problem is the microcode on modern x86 CPUs. They use soft-modifiable microcode in case there are manufacturing defects, such as the botched FPU divide op in the original Pentium. So in newer CPUs, they found ways to use the system ROM or even the OS to apply microcode patches. However, you can see the problem with this type of fix: if the PC maker or OS vendor can change the microcode, then so can malware writers. Then the CPU no longer does what is expected, and on machines with protected mode, that means memory that shouldn't be accessible could be made accessible, so any execution-prevention schemes might no longer work.
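Here is a rough model of what I mean by a patchable shadow of the control store (Python; all names and values are invented for illustration):

    # Pseudo-model of a shadowed control store: the slow ROM image is copied
    # into fast writable memory at power-up, after which entries can be patched.
    CONTROL_ROM = [0x00] * 256        # stand-in for the 45 ns decode ROM contents

    control_sram = list(CONTROL_ROM)  # boot-time copy into 10 ns SRAM

    def decode(opcode):
        return control_sram[opcode]   # lookups now run at SRAM speed

    def patch_instruction(opcode, control_word):
        # Overwrite one decode entry: this is the flexibility, and also the
        # risk, since anything that can call this can redefine an instruction.
        control_sram[opcode] = control_word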

Another addition to the cached ROM-based instructions would be instruction paging. If you have an 8-bit system but use 16-bit instruction memory, an instruction could be added to change the upper byte. If one used this, one would need to make sure that instruction exists on every page, and preferably at the same relative address. It would not be hard to implement: just add a register and a line tying it to the operand register. So if you intercept a NOP encoding to use as the instruction-page instruction, it can be given an operand that goes into the upper instruction address.
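A minimal model of that paging scheme (Python; the opcode value and layout are only placeholders): a page register supplies the upper instruction-address byte, and one intercepted NOP encoding loads that register from its operand.

    # Toy fetch/step loop with an instruction-page register.
    SET_PAGE = 0x02                  # placeholder: a NOP encoding reused as "set page"

    rom = {}                         # (page, offset) -> (opcode, operand)
    page_reg = 0                     # drives the upper instruction-address byte

    def execute(opcode, operand):
        pass                         # stand-in for the normal datapath

    def step(pc_low):
        global page_reg
        opcode, operand = rom[(page_reg, pc_low)]
        if opcode == SET_PAGE:
            page_reg = operand       # operand register feeds the page register
        else:
            execute(opcode, operand)
        return (pc_low + 1) & 0xFF

    rom[(0, 0)] = (SET_PAGE, 1)      # switch to instruction page 1
    rom[(1, 1)] = (0x00, 0x00)       # an ordinary instruction on the new page
    pc = step(0)                     # page_reg becomes 1
    pc = step(pc)                    # next fetch already comes from page 1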

***

I think I know how to implement the halt line. It may be a tad simpler than I thought, if I am right. The CEP pins of the low-byte counters could likely be brought low. But from reading the datasheet, you should only bring them low while the clock is high. I don't know what would happen if you bring them low while the clock is low. My guess is that it might not disable the counter in time and it might still increment.
Last edited by PurpleGirl on 26 Jan 2020, 21:35, edited 1 time in total.
GigaMike
Posts: 7
Joined: 21 Jan 2020, 19:48

Re: Opcode mods

Post by GigaMike »

For experimentation you can use a PLD such as the ATF22V10C, which can get down to 10 or even 5 ns. Then, once you are satisfied, you can convert to TTL and diode logic. You will need two of these chips, as the Gigatron has so many control output signals. The inputs to both chips can be commoned, as you only need IR0-7, AC7 and CLK1 (which behaves as a logic signal, not a clock per se). See my related thread on pluggable control units.
PurpleGirl
Posts: 48
Joined: 09 Sep 2019, 08:19

vCPU Coprocessor

Post by PurpleGirl »

I've been sort of wondering about the possibility of making a GCL/vCPU coprocessor so user programs could run at full speed. The instruction set is pretty much a done deal, though there would be room in the opcode map to create more instructions, even ones not in the Gigatron. It would need 16-bit registers and a von Neumann architecture.

But I'm still trying to work out the logistics. How would it know when to become active or halt? What about memory access and sharing with the Gigatron? What if an instruction needs more than one cycle; how would that be handled? And I have other questions, such as whether vCPU only runs out of RAM, or whether it can work out of ROM too. That would dictate the design a bit. And how could it be interfaced with the Gigatron? Tight integration might be good.

The advantage here is that instructions should take 3 cycles or less each, not 14-28.

I guess the way to wake it up would be for the Gigatron to put something in the coprocessor's program counter. It could have a halt instruction, which could be placed at the end of open-ended code: it runs everything it is supposed to and then hits a halt instruction rather than dead or random memory. And there could be a comparator on its program counter to wake it up when the PC is changed externally. One could add the ability for an external device to explicitly wake it up, and it could even be given a "dead address" (such as 0) so that if the PC ever reaches that address, it halts.
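A behavioural sketch of that wake/halt protocol (Python; the opcode and address values are hypothetical):

    # Behavioural model of the proposed coprocessor wake/halt scheme.
    HALT_OPCODE = 0xFF      # hypothetical encoding for the halt instruction
    DEAD_ADDRESS = 0x0000   # reaching this address also halts the part

    class Coprocessor:
        def __init__(self, memory):
            self.memory = memory
            self.pc = DEAD_ADDRESS
            self.running = False

        def host_writes_pc(self, address):
            # The Gigatron stuffing a new value into the PC is the wake-up;
            # in hardware a comparator on the PC register would detect this.
            self.pc = address
            self.running = (address != DEAD_ADDRESS)

        def step(self):
            if not self.running:
                return
            opcode = self.memory[self.pc]
            if opcode == HALT_OPCODE or self.pc == DEAD_ADDRESS:
                self.running = False
                return
            self.pc += 1    # real instruction execution would go here

    copro = Coprocessor(bytearray(0x10000))
    copro.host_writes_pc(0x0100)   # host wakes the coprocessor
    copro.step()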

But the more I think about it, one could just make a new 16-bit processor and build the Gigatron around it. With 16 bits of instruction space, it wouldn't be hard to have a 256-entry page of the original instructions, a page of vCPU instructions, and so on.
ECL
Posts: 26
Joined: 01 Feb 2020, 17:56

dual core Gigatron CPU

Post by ECL »

Maybe one should follow the trend in the industry and release a dual-core CPU, as e.g. our competitor Intel does. One might also find the funds to conduct a security audit to determine whether the Gigatron CPU is vulnerable to Spectre/Meltdown attacks.

But jokes aside, the idea of a coprocessor or dual core appears intriguing. The second core should not be tasked with video output, but should rather communicate over infrared or something (such as an electrical bus, like those transputer links of yesteryear).

So essentially, the regular Gigatron would act as the video display card, or video copro, for the other core, which could run some decent Forth code or other serious workstation workloads. The second core might even be a small expansion board, ideally built from emitter-coupled logic (ECL) gates. Those might have to be scavenged from old, dumped Cray supercomputer circuit boards though, as most 10000-series gate LSI are no longer on sale, DIP ones even less so. And that is despite the "fact" that the 7400-series logic gates may well be past their prime. ;)

It seems the whole world is wrong again in preferring Schottky TTL over ECL technology. Much like the whole world flocks to Redmond softwarez instead of the great Linux desktop. Only the North Koreans (DPRK) seem to have understood this.
Last edited by ECL on 03 Feb 2020, 06:47, edited 2 times in total.
ECL
Posts: 26
Joined: 01 Feb 2020, 17:56

Coprocessor

Post by ECL »

... such a vCPU copro could already prove its viability when run emulated on an x86 PC. One might in fact consider splitting the workload between the personal computer and an attached Cray home computer.
PurpleGirl
Posts: 48
Joined: 09 Sep 2019, 08:19

Re: vCPU Coprocessor

Post by PurpleGirl »

I hadn't thought of that, but someone could write a vCPU emulator for the PC or ARM that works through the USB port. That would be one way to test the concept.
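As a sketch of what such a host-side emulator core might look like (Python; the opcode values below are placeholders, not the real vCPU encodings, and the USB link is left out):

    # Skeleton of a host-side vCPU-style interpreter loop.
    ram = bytearray(0x8000)
    vAC = 0                      # 16-bit accumulator
    vPC = 0x0200                 # where user programs normally start

    def step():
        global vAC, vPC
        op, arg = ram[vPC], ram[vPC + 1]
        vPC += 2
        if op == 0x01:           # placeholder "load immediate"
            vAC = arg
        elif op == 0x02:         # placeholder "add immediate"
            vAC = (vAC + arg) & 0xFFFF
        # ... remaining opcodes, plus the serial/USB exchange with the Gigatron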
PurpleGirl
Posts: 48
Joined: 09 Sep 2019, 08:19

Re: dual core Gigatron CPU

Post by PurpleGirl »

I've been mulling this over for a few months. For more processing power, the following ideas have come to mind:

* Discrete video card

* Doubling to 16-bit data and instructions, possibly adding some MMX-style instructions for multiple register and ALU ops per instruction.

* Adding SMP/multicore support

* Adding a vCPU coprocessor

They each have their own advantages, disadvantages, and levels of difficulty to implement.

In my 16-bit idea, I've considered allowing separate 8-bit instructions in each half of the instruction word. Maybe use one of the new bits to select the register bonding mode: whether to have one 16-bit instruction or two 8-bit instructions. That bit would also determine how the ALU works, whether to run as two small ALUs or to chain the borrow/carry and operate as one. But the problem with scaling up the ALU is that it could limit the clock speed: the longer the carry chain, the longer everything takes. That could be done in programmable logic (FPGA, PAL, CPLD, etc.) at the expense of vintage purity.

Still, it would be nice to have two 8-bit instructions while the video is being done in software, so useful work can be done while rendering pixels and creating syncs, and then to use 16-bit instructions during the ends of lines and v-syncs for code that works more efficiently as 16-bit rather than thunking 8-bit instructions to get a 16-bit result.

Now, to make things easier, and to free up an instruction bit for the bonding bit, the upper half won't have Out or jumps. Having the two instruction halves at different addresses would be next to impossible. It is technically possible if you use two 16-bit ROMs with their own separate address lines, but the coding could get very confusing, very fast.
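A decode sketch for that bonding bit (Python; the exact bit layout here is only an assumption):

    # Hypothetical 16-bit instruction word: bit 15 selects the bonding mode.
    def decode(word):
        bonded = (word >> 15) & 1
        if bonded:
            return [("wide", word & 0x7FFF)]     # one 16-bit instruction
        else:
            hi = (word >> 8) & 0x7F              # upper half: no Out, no jumps
            lo = word & 0xFF                     # lower half: the full 8-bit set
            return [("narrow", hi), ("narrow", lo)]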

Going the multicore route seems a little harder in some ways, but easier in others. The main logic can be a clone of the existing logic. But getting the data over to it might be a challenge. And where to connect the Instruction and Data registers would be something to ponder. That could go to 16-bit RAM rather than ROM.

The original core could do video, sound, ROM routines, and coprocessing, while the second one does the heavy code lifting. One idea for halting, if necessary: if the second core asks the first to do something that could get out of sequence, a less common NOP could be placed in the first core's code to tell the second core it is safe to continue. The second core could be made to read the IR of the first and watch for the instruction that tells it it is safe to resume. For instance, if the first core is asked to multiply a number, the second can wait on a specific instruction to know when it is safe to use the result from the first. That prevents a race condition.
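A toy model of that handshake (Python; the sentinel value is made up):

    # The second core polls the first core's instruction register and only
    # proceeds once an agreed-upon, otherwise-unused NOP encoding shows up.
    SYNC_NOP = 0x82              # made-up "less common NOP" used as the sentinel

    class Core1:
        def __init__(self):
            self.ir = 0x00       # instruction register, visible to the second core
        def execute(self, instruction):
            self.ir = instruction
            # ... normal execution ...

    class Core2:
        def can_proceed(self, core1):
            # Safe to read the shared result only after the sentinel appears.
            return core1.ir == SYNC_NOP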