I’ve always come unstuck when it comes to finding a balance between performance, extensibility and complexity.
Then, the other day I was sat thumbing through an electronics magazine from the late 1970s and saw an advert for the Science Of Cambridge MK14 computer. This was a single-board computer with around 256 bytes (that’s bytes, not megabytes) of RAM, a tiny ROM, a calculator-style keypad and, as it happens, a calculator-style seven-segment, 8-digit LED display. The computer was based on the SC/MP CPU (no longer manufactured), which had some interesting features: it supported serial input/output and could even happily exist on a multi-CPU bus. This was all clever stuff back then, especially when more traditional computers cost many hundreds of pounds (serious money in the ’70s). All sorts of add-ons and projects sprang up for the MK14, including a PROM programmer, a cassette interface and even a VDU, to name but a few.
I can remember wanting one of these machines, but even though the price was around £40 for the kit, as a kid on pocket money it wasn’t to be. However, in the mid-’80s I did get to work with bigger computers, and though they had more memory and more powerful CPUs, the programmer always had to be mindful of how much system resource was available for a program. To be honest, the scarcity of system resources wasn’t often a problem for the careful programmer. Yes, it would have been nicer if things had run quicker and we’d had larger-capacity disks (they were around 30 MB then), but the skill was in squeezing 110% out of the hardware: understanding exactly how to arrange files on a disk for optimum performance, or how to wring every last cycle out of the CPU.
Modern computers, even the latest Raspberry Pi, don’t have these limitations. They are great for teaching programming, with all their nice, easy-to-use programming languages and operating systems, but what they don’t teach or encourage is conservation of system resources, because it’s just so simple to add more if needed. In my opinion, modern software is bloated and too many programmers adopt the attitude that if it’s running too slowly, just throw more hardware at it. The skill of working with limited resources has been lost, and all this got me thinking about how much satisfaction was gained when you managed to do something truly remarkable with so little.
So, my SBC won’t be built for lightning performance, nor will it contain almost limitless resources; that’s what a standard PC is for. It will, however, be cheap, extensible (if you’ve got the imagination) and flexible in one very important way: the CPU core instruction set will be customisable.
Computers contain a CPU whose instruction set is fixed. The SBC contains a PIC whose instruction set is also fixed; however, it will run an emulation of a custom CPU, and this emulation can of course be replaced with a different one. Running an emulation will do nothing for performance, of course, but it will make the project more interesting and it means you have the opportunity to design your own custom CPU instruction set. Now, an FPGA or similar would probably have been a better choice for this project, but at this moment in time I don’t have the development tools, so a PIC will have to do. In the future, who knows?
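To give a flavour of what “running an emulation of a custom CPU” means in practice, here’s a minimal sketch in C of a fetch–decode–execute loop. The opcodes, register layout and 256-byte memory here are purely illustrative assumptions of mine, not the instruction set the SBC will actually use; the point is simply that swapping in a different custom CPU amounts to changing this decode table.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative only: a tiny made-up instruction set, not the SBC's real one. */
#define MEM_SIZE 256          /* a nod to the MK14's 256 bytes of RAM */

enum { OP_NOP = 0x00, OP_LDA = 0x01, OP_ADD = 0x02, OP_STA = 0x03, OP_HLT = 0xFF };

typedef struct {
    uint8_t  a;               /* accumulator            */
    uint8_t  pc;              /* program counter        */
    uint8_t  mem[MEM_SIZE];   /* unified program + data */
    int      halted;
} cpu_t;

/* One fetch-decode-execute step of the emulated CPU. */
static void step(cpu_t *c)
{
    uint8_t op = c->mem[c->pc++];                     /* fetch  */
    switch (op) {                                     /* decode */
    case OP_NOP:                                      break;
    case OP_LDA: c->a  = c->mem[c->mem[c->pc++]];     break; /* load A from address */
    case OP_ADD: c->a += c->mem[c->mem[c->pc++]];     break; /* add memory to A     */
    case OP_STA: c->mem[c->mem[c->pc++]] = c->a;      break; /* store A to address  */
    case OP_HLT: c->halted = 1;                       break;
    default:     c->halted = 1;                       break; /* unknown opcode      */
    }
}

int main(void)
{
    /* Tiny test program: A = mem[20] + mem[21]; mem[22] = A; halt. */
    cpu_t c = { .mem = { OP_LDA, 20, OP_ADD, 21, OP_STA, 22, OP_HLT } };
    c.mem[20] = 7;
    c.mem[21] = 5;

    while (!c.halted)
        step(&c);

    printf("result = %u\n", c.mem[22]);   /* prints 12 */
    return 0;
}
```

On the PIC the loop would look much the same, just without the `printf`: the emulated memory and registers live in the PIC’s RAM, and the keypad and display hang off the real I/O pins.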