Processor Types
Where did it come from?


Anybody reading this in the UK will no doubt be familiar with Acorn's BBC micro. Some, sadly, seem to feel that the company never made it beyond that "odd thing with the black and red keys", while others can cast their mind back to the moment they booted RISC OS 4 on their new Kinetic, and gloat.

Either way, Acorn made use of the 6502 processor in the Atom, some kits, and some rackmount machines in the late seventies. As 1980 rolled in, the BBC went looking for a computer to fit a series of programmes it wanted to produce. Unlike today, when the programmes are far more likely to be made to fit the computer, the BBC had in mind the sort of specification it was looking for. A number of companies well known at the time tendered their designs. Acorn revamped the Atom design, throwing into it as much as possible, and built an entire working machine from the ground up in a matter of days. That's the stuff legends are made of, and that seems to be the stuff Acorn was good at, like "Hey, guys, let's pull an all-nighter and write an operating system".


The BBC loved the machine, and the rather naffly named "The Micro Program" was released in 1982 alongside the BBC microcomputer. It filled school computer rooms. Many were sold. Not many in American terms, but staggering in European terms.
The BBC micro, like earlier Acorn machines, was based around the 6502 processor - as were other popular computers such as the Apple II.
From the outset, you could have colour graphics and text on-screen. Not to be outdone, the BBC micro offered:

  • seven screen 'modes' of varying types, ranging from high resolution monochrome to eight colours (plus eight flashing colours), including an eight colour 'teletext' mode that required only 1K of memory per screen;
  • a cassette interface for cheap and cheerful use;
  • on-board provision for a floppy disc interface (you only needed to add a couple of ICs, such as the 1772 disc controller);
  • serial, four channel analogue, and eight channel digital I/O;
  • the Tube for co-processors;
  • a 1MHz system bus for serious fiddling and for harddiscs;
  • and, by adding a couple of extra components, built-in networking.
Econet might have been slow and simple, but it was a revolution in those days - a time when, among other notable gaffes, Bill Gates is said to have asked "what's a network?" (though this may well be urban legend). In any case, Acorn users were au fait with multiple processor systems and networking all sorts of machines long before the PC marketplace caught on to such things.

However, Acorn had their sights set on the future, and between 1983 and 1985 the ARM processor was designed by Steve Furber and Sophie Wilson (or Roger Wilson, back then). This was a leap of faith and optimism: only a year previously they had released a 32K 8 bit machine, and now they were designing a 32 bit machine that could cope with up to 16Mb of RAM, and some ROM as well.


Why?

Acorn continued to produce the BBC micro and variants. Indeed, production of their most successful version of the BBC micro - the Master - only finished in May 1993. However, a decade earlier, in 1983, it was quite clear to the innovators inside Acorn that the next generation of machine should provide something far better than rehashing old ideas over and over. Therein lay the problem: which processor to use? Nothing stood out from the crowd. Acorn had produced a machine with the 16 bit 6502-alike, the 65C816, but this wasn't up to the vision that Acorn had. They tried all of the 16 and 32 bit processors available, building second processor units for the BBC micro to aid in their evaluation.

So there was one idea left: to make the processor that they were looking for. Something that kept the ideals of the 6502, but provided raw power. Something small, cheap - both to produce and to power - and something fairly simple, both internally and to program. The important early design decisions were to use a fixed instruction length (which makes it possible to accurately disassemble any random memory address simply by looking to see what is there - every instruction is word aligned), and to use a load/store model.
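To illustrate what the load/store model means in practice, here is a minimal, hypothetical sketch (the register usage is invented for illustration, not taken from any Acorn code). Memory is only ever touched by load and store instructions; all arithmetic happens in registers:

LDR R0, [R1]   ; load the word at the address held in R1
ADD R0, R0, #1 ; do the arithmetic in a register
STR R0, [R1]   ; store the result back to memory

A CISC processor would typically offer a single instruction that operates on memory directly, but each ARM instruction above is exactly one word long and word aligned, which is what makes any random address cleanly disassemblable.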

In that day, companies were talking about extending their CISC processors. The 8088 became the 80186 (briefly), the 80286, and so on to the processor it is today. RISC processors existed, but the majority of them were designed in-house as embedded controllers. Acorn took their ideas and requirements and wrote a BASIC program that emulated the ARM 1 instruction set. The designers of the processor were new to processor design, and some of the tools used were not exactly cutting edge. This prevented the processor design from being large and complex, which in its way was the best thing, and is now being spun as a 'plus' for the ARM processor - as indeed it is.


While Acorn had very clear ideas of what they wanted the processor to do, they also wanted good all-round performance, rather than something so tailored to the end design that it obsoletes itself.

So. For the processor, Acorn rolled their own.

Please, take a moment to consider this.
Not only did Acorn create an entire powerful and innovative operating system with a tiny crew (Microsoft probably employs more people to clean their toilets than Acorn employed in total); they also designed their own chipset.
So basically these guys designed an entire computer from the ground up, on a tiny budget and with a tiny workforce.

You can fault Acorn for many things - lack of development, lack of advertising - but you can never fault them for having the sheer balls to pull it off in the first place.

At the time the "Archimedes" was released, it was widely touted as the world's fastest desktop machine. It also boasted a display system that could spit out loads of different resolutions. My A5000 (same video hardware) can output 640x480 in 256 colours, or 800x600 in 16 colours. It doesn't sound impressive now, but this was hardware developed in the mid '80s. The rest of the world (save Apple Macs) was using CGA and the like - or Hercules for the truly deranged!

Not a lot was made of the fact that the machines were RISC. Maybe Acorn figured the name of the operating system (RISC OS) was a big hint. Maybe they figured they had enough going for the machine without getting all geeky.

So when, in the early '90s, Apple announced the world's first RISC desktop machine, we laughed. And Acorn ran a good-humoured advert in the Times welcoming Apple to RISC.

The chipset was:



  • ARM2
    This is the central processor, and the name originally stood for "Acorn RISC Machine" (these days it is "Advanced RISC Machines", or whatever they're calling it today).
     

  • MEMC1 (Anna)
    This was the MEMory Controller. It was very soon replaced by the MEMC1a, which I do not think had a name.
    The RiscPC generation of machines use the MMU (Memory Management Unit) built into the processor itself.
     

  • VIDC1 (Arabella)
    This was the VIDeo Controller, though given all it was capable of doing with pixels and sound, many knew it as the Very Ingenious Display Contraption. Certainly, the monitors that cannot be supported under RISC OS are few and far between. It is a trivial matter to switch from a modern 21" SVGA monitor to a television monitor.
    The RiscPC generation of machines use the VIDC20, which takes this a logical step further. Unfortunately, the VIDC is no longer able to keep up with the latest advances in display technology. Enter J. Kortink with his ViewFinder.
     

  • IOC (Albion)
    This was the Input/Output Controller, and it looked after podules and keyboards and basically anything that did I/O. In a flash of inspiration, it offered an IIC interface which is available on the expansion bus. My teletext receiver is hooked into this.
    RiscPC generation machines use the IOMD, which is like a souped-up IOC.

The ARM250 (mezzanine / macrocell) offered the ARM chipset on one piece of silicon. It was used in the A3010 and A3020 machines. It may have also been used in the A4000, but I've not seen inside such a machine.

 

The original operating system of the ARM-based machine was to be ARX, but it was taking too long and running over budget. So Arthur was designed. It has been said that Arthur's name derives from "A RISC operating system by Thursday", it being essentially a port of the BBC MOS. Sadly, it has a lot of the hang-ups of the BBC micro, such as a lack of memory protection ('modules' run in SVC mode, when really only the kernel should), a plethora of unrelated things done with the OS_Byte SWI, the service call mechanism...


From Arthur came RISC OS, which improved certain aspects of the system, but perhaps the most significant improvement was the Desktop. Instead of a bizarre looking (and horribly coloured) thing that could only run one task at a time, it introduced proper co-operative multitasking.
The debate between pre-emptive and co-operative multitasking is endless, but I feel that Acorn wanted co-operative; that it was a design decision rather than a cop-out. Because, while it makes things slightly harder to program and more liable to problems with errant tasks, it fits so beautifully into Acorn's ethos. There's no process 'protection' like on Unix. You can drop to a privileged processor mode with little more than a SWI call, and a lot of stuff (that probably shouldn't) runs in SVC mode. Because, at its heart, RISC OS is a hacker's operating system. Not the same type of 'hacking' that Linux and NetBSD come from - such things were not known in the home/office computer sector in those days - but in its way, RISC OS is practically begging for you to whip out the disassembler and start poking around its internals. The original Arthur PRMs said that any serious application would be written in assembler (a view they later changed, suggesting serious applications would be written in C).
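As a small illustration of quite how low that barrier is, here is a hedged sketch - OS_EnterOS is a real RISC OS SWI, but the surrounding fragment is illustrative only:

SWI "OS_EnterOS" ; ask RISC OS to put us in SVC mode - no ceremony
                 ; we are now in a privileged mode, free to poke at
                 ; hardware or kernel workspace directly
...
TEQP PC, #0      ; the 26 bit idiom for dropping back to USR mode
MOV R0, R0       ; no-op while the banked registers settle

That's it. No capability checks, no real kernel boundary to speak of.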

 

When the ARM processor team split off into ARM Ltd, they adopted a new numbering system for the processors. Originally, the numerical suffix reflected the revision of the device, the ARM 1, the ARM 2, the ARM 3 ... followed by the ARM two-and-a-half, which is 250 in the tradition of multiplying version numbers by a hundred.



Now, a single number reflects the macrocell itself - ARM6, ARM7...

A processor with a twin number denotes a self-contained processor plus basic interface circuitry, like the ARM60 and the VIDC20 (the VIDC is not strictly a processor, but part of the ARM chipset).

A processor with a triple number denotes the processor macrocell combined with other macrocells or custom logic, like the ARM610 and the ARM710. Because of the simplicity of the designs, and the predefined parts, the ARM610 went from specification to silicon in under four months. Short development times are invaluable for custom devices, where every development day matters... It also matters that ARM's designs arrive on time, so you don't end up with your computer or PDA (or whatever) sitting there awaiting its processor. Within ARM's converted barn, a line of opened champagne bottles lines the staircase - a testament to how many of their designs worked from the very first silicon implementation, which is virtually every single one of them.

 

 



So there you have it.

From an idea to a global leader in microprocessors (Intel has said recently it is making more ARM silicon than x86 silicon), the ARM processor's birth is wrapped in spectacular innovation.

While it is not entirely certain where RISC OS is heading, one thing is for sure. The beautiful processor in our RISC OS machines is going from strength to strength.

We at Heyrick wish ARM Ltd all the best...




Copyright © 2004 Richard Murray



Where might you find an ARM?




 

The ARM processor is a powerful, low-cost, efficient, low-power (consumption, that is) RISC processor. Its design was originally for the Archimedes desktop computer, but somewhat ironically numerous factors of its design make it unsuitable for use in a desktop machine (for example, the MMU and cache are the wrong way around), while those same factors make it an exceptional choice for embedded applications.

So while many PCs can scream "Intel(R) inside", there is a steadily increasing number of devices that could scream "ARM inside" - only ARM doesn't have an ego anywhere near as large as Intel's. Oh, and yes, I am aware that Intel fabricates ARM processors. Oh, what a tangled web we weave...

 

[last updated January 2002]



  • Gameboy Advance games console

  • Daewoo inet.top.box

  • Bush Internet TV / box

  • Datcom 2000 digital satellite receiver

  • Pace digital satellite receiver (supplied as part of the Sky package)

  • Numerous other digital cable / satellite receivers

  • Hauppauge WinTV DVB-S PC TV card

  • Oracle NC

  • LG Java computer

  • Millipede Apex Imager video board

  • Paradise AiTV set top box

  • Sony MZ-R90 minidisc

  • Win-Jam

  • JVC's digital camera 'Pixstar'

  • Lexmark Z12/22/32/42/52 colour Jetprinter

  • Samsung office laser printer

  • Samsung SmartJet MFP (printer/scanner/copier/fax)

  • Xerox colour inkjet printer

  • Digital logic analyzers from Controlware

  • IHU-2 Experimental Space Flight Computer

  • Siemens video phone

  • Wizcom's Quicktionary

  • Various GSM handsets, from the likes of Alcatel, AEG, Ericsson, Kenwood, NEC, Nokia...

  • Cable/ADSL modems, by manufacturers such as Cayman Systems, D-Link, and Zoom.

  • 3Com 3CD990-TX-97 10/100 PCI NIC with 3XP processor

  • Routers, bus adaptors, servers, crypto, gateways...

  • POS systems

  • Smart cards

  • Adaptec PCI to Ultra2 SCSI 64 bit RAID controller

  • ATA drive electronics controller systems (bare)

  • Iomega HipZip digital audio player

  • C pen, with OCR and IrDA

  • HP/Ericsson/Compaq pocket PCs

  • Psion series 5 hand-held PC (5mx used 36MHz ARM710T)

  • Various PDAs

  • And, of course, all of us using Archimedes / BBC (A30x0) / NetStation / RiscPC / A7000 / Mico / RiscStation computers!!!

This is not a complete list. Visit http://www.arm.com/ for a full list, with links to each item.

Some of the above may not use ARM processors as such, but other hardware designed by ARM. It is rather difficult to discover what is actually inside half of these things without owning one and taking it apart!



 

One site that gives images and interesting background information is the details page for the IHU-2 experimental space flight computer.




Copyright © 2004 Richard Murray



RISC vs CISC




 


 

In the early days of computing, you had a lump of silicon which performed a number of instructions. As time progressed, more and more facilities were required, so more and more instructions were added. However, according to the 20-80 rule, 20% of the available instructions are likely to be used 80% of the time, with some instructions only used very rarely. Some of these instructions are very complex, so creating them in silicon is a very arduous task. Instead, the processor designer uses microcode. To illustrate this, we shall consider a modern CISC processor (such as a Pentium or 68000 series processor). The core, the base level, is a fast RISC processor. On top of that is an interpreter which 'sees' the CISC instructions, and breaks them down into simpler RISC instructions.



Already, we can see a pretty clear picture emerging. If the core is a simple RISC unit, why don't we just use that? Well, the answer lies more in politics than design. Acorn, however, saw this, and not being constrained by the need to remain totally compatible with earlier technologies, they decided to implement their own RISC processor.

Up until now, we've not really considered the real differences between RISC and CISC, so...

A Complex Instruction Set Computer (CISC) provides a large and powerful range of instructions, which is correspondingly less flexible to implement in silicon. For example, the 8086 microprocessor family has these instructions:

JA   Jump if Above
JAE  Jump if Above or Equal
JB   Jump if Below
...
JPO  Jump if Parity Odd
JS   Jump if Sign
JZ   Jump if Zero

There are 32 jump instructions in the 8086, and the 80386 adds more. I've not read a spec sheet for the Pentium-class processors, but I suspect it (and MMX) would give me a heart attack!

By contrast, the Reduced Instruction Set Computer (RISC) concept is to identify the sub-components of those instructions and implement those. As these are much simpler, they can be implemented directly in silicon, so will run at the maximum possible speed. Nothing is 'translated'. There are only two jump instructions in the ARM processor - Branch and Branch with Link. The "if equal, if carry set, if zero" type of selection is handled by condition codes, so for example:

BLNV  Branch with Link NeVer (useful!)
BLEQ  Branch with Link if EQual

and so on. The BL part is the instruction, and the part that follows is the condition. This is made more powerful by the fact that conditional execution can be applied to most instructions! This has the benefit that you can test something, then only execute the next few instructions if the criteria of the test matched. There's no branching off; you simply add condition codes to the instructions you require to be conditional:

SWI "OS_DoSomethingOrOther" ; call the SWI

MVNVS R0, #0 ; If failed, set R0 to -1

MOVVC R0, #0 ; Else set R0 to 0

Or, for the 80486:

        INT $...whatever...  ; call the interrupt
        CMP AX, 0            ; did it return zero?
        JE failed            ; if so, it failed, jump to fail code
        MOV DX, 0            ; else set DX to 0
return
        RET                  ; and return
failed
        MOV DX, 0FFFFH       ; failed - set DX to -1
        JMP return

The odd flow in that example is designed to allow the fastest non-branching path through the 'did not fail' case, at the expense of two branches in the 'failed' case.
I am not, however, an x86 coder, so it can possibly be optimised - mail me if you have any suggestions...

 

Most modern CISC processors, such as the Pentium, use a fast RISC core with an interpreter sitting between the core and the instruction stream. So when you are running Windows 95 on a PC, it is not that much different to running Windows 95 under a software PC emulator. Just imagine the power hidden inside the Pentium...



Another benefit of RISC is that it contains a large number of registers, most of which can be used as general purpose registers.

This is not to say that CISC processors cannot have a large number of registers - some do. However, for its workload, a typical RISC processor requires more registers to give it additional flexibility. Gone are the days when you had two general purpose registers and an 'accumulator'.

One thing RISC does offer, though, is register independence. As you have seen above, the ARM register set defines, at minimum, R15 as the program counter and R14 as the link register (although, after saving the contents of R14, you can use this register as you wish). R0 to R13 can be used in any way you choose, although the Operating System defines R13 as a stack pointer. You can, if you don't require a stack, use R13 for your own purposes. APCS applies firmer rules and assigns more functions to registers (such as a stack limit). However, none of these - with the exception of R15, and sometimes R14 - is a constraint applied by the processor. You do not need to worry about saving your accumulator in long instruction sequences; you simply make good use of the available registers, as the sketch below shows.
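Here is a minimal, hypothetical sketch (the routine itself is invented for illustration) of a leaf routine claiming R14 as a scratch register once its contents are safe, with R13 used as the conventional full descending stack:

STMFD R13!, {R4, R14} ; preserve a work register and the link register
ADD   R4, R0, R1      ; use R4 however we like
MOV   R14, R4, LSL #1 ; R14 is now just another register
ADD   R0, R4, R14     ; result goes in R0 - convention, not hardware
LDMFD R13!, {R4, PC}  ; restore R4 and return in one instruction

Only the use of R15 (and R14 at the instant of a BL) is imposed by the processor; everything else is convention.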

The 8086 offers you fourteen registers, but with caveats:


  • The first four (A, B, C, and D) are Data registers (a.k.a. scratch-pad registers). They are 16bit, accessed as pairs of 8bit registers, so register A is really AH (A, high-order byte) and AL (A, low-order byte). These can be used as general purpose registers, but they also have dedicated functions - Accumulator, Base, Count, and Data.
  • The next four are Segment registers, for Code, Data, Extra, and Stack.
  • Then come the five Offset registers: the Instruction Pointer (PC), SP and BP for the stack, and SI and DI for indexing data.
  • Finally, the flags register holds the processor state.
As you can see, most of the registers are tied up with the bizarre memory addressing scheme used by the 8086. So only four general purpose registers are available, and even they are not as flexible as ARM registers.

The ARM processor differs again in that it has a reduced number of instruction classes (Data Processing, Branching, Multiplying, Data Transfer, Software Interrupts).

A final example of minimal registers is the 6502 processor, which offers you:
  Accumulator - for results of arithmetic instructions
  X register  - First general purpose register
  Y register  - Second general purpose register
  PC          - Program Counter
  SP          - Stack Pointer, offset into page one (at &01xx).
  PSR         - Processor Status Register - the flags.
While it might seem like utter madness to have only two general purpose registers, the 6502 was a very popular processor in the '80s. Many famous computers were built around it.
For the Europeans: consider the Acorn BBC Micro, Master, Electron...
For the Americans: consider the Apple II and the Commodore PET.
The Oric uses a 6502, and the C64 uses a variant of it.
(In case you were wondering, the Speccy uses the other popular processor of the era - the ever bizarre and freaky Z80.)

So if entire systems could be created with a 6502, imagine the flexibility of the ARM processor.


It has been said that the 6502 is a bridge between CISC design and RISC. Acorn chose the 6502 for their original machines, such as the Atom and the System series. From there, they went on to design their own processor - the ARM.

 

To summarise the above, the advantages of a RISC processor are:



  • Quicker time-to-market. A smaller processor will have fewer instructions, and the design will be less complicated, so it may be produced more rapidly.
     

  • Smaller 'die size' - the RISC processor requires fewer transistors than comparable CISC processors...
    This in turn leads to a smaller silicon size (I once asked Russell King of ARMLinux fame where the StrongARM processor was - and I was looking right at it, it is that small!)
    ...which, in turn again, leads to less heat dissipation. Most of the heat of my ARM710 is actually generated by the 80486 in the slot beside it (and that's when it is supposed to be in 'standby').
     

  • Related to all of the above, it is a much lower power chip. ARM designs its processors in static form, so that the processor clock can be stopped completely rather than simply slowed down. The Solo computer (designed for use in third world countries) is a system that will run from a 12V battery, charging from a solar panel.
     

  • Internally, a RISC processor has a number of hardwired instructions.
    This was also true of the early CISC processors, but these days a typical CISC processor has a heart which executes microcode instructions which correlate to the instructions passed into the processor. Ironically, this 'heart' tends to be RISC. :-)
     

  • As touched on by Matthias below, a RISC processor's simplicity does not necessarily mean a simple instruction set.
    He quotes LDREQ R0,[R1,R2,LSR #16]!, though I would prefer to quote the 26 bit instruction LDMEQFD R13!, {R0,R2-R4,PC}^ which restores R0, R2, R3, R4, and R15 from the fully descending stack pointed to by R13, adjusting the stack pointer accordingly. The '^' restores the processor flags into R15 along with the return address. And it is conditionally executed. This allows a tidy 'exit from routine' to be performed in a single instruction (see the sketch after this list).
    Powerful, isn't it?
    The RISC concept, however, does not state that all the instructions are simple. If that were true, the ARM would not have a MUL, as you can do the exact same thing by looping with ADD. No, the RISC concept means the silicon is simple. It is a simple processor to implement.
    I'll leave it as an exercise for the reader to figure out the power of Matthias' example instruction. It is exactly on par with my example, if not slightly more so!
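Here is a hedged sketch of how that pays off - a hypothetical routine (the name and the R0 test are invented for illustration) whose early exit, flag restore and all, really is a single instruction:

DoSomething
STMFD R13!, {R0,R2-R4,R14}   ; preserve workspace and return address
TEQ R0, #0                   ; anything to do?
LDMEQFD R13!, {R0,R2-R4,PC}^ ; no - restore the lot, flags included,
                             ; and return: one conditional instruction
... ; (the real work would go here)
LDMFD R13!, {R0,R2-R4,PC}^   ; normal exit, equally tidy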

To complete this summary, and for some very good points regarding the ARM processor, keep reading...

 

In response to the original version of this text, Matthias Seifert replied with a more specific and detailed analysis. He has kindly allowed me to reproduce his message here...



 

RISC vs ARM


You shouldn't call it "RISC vs CISC" but "ARM vs CISC". For example, conditional execution of (almost) any instruction isn't a typical feature of RISC processors, but can only(?) be found on ARMs. Furthermore, there are quite a few people claiming that the ARM isn't really a RISC processor, as it doesn't provide only a simple instruction set; i.e. you'll hardly find any CISC processor which provides a single instruction as powerful as a

LDREQ R0,[R1,R2,LSR #16]!

Today it is wrong to claim that CISC processors execute complex instructions more slowly; modern processors can execute most complex instructions in one cycle. They may need very long pipelines to do so (up to 25 stages or so with a Pentium III), but nonetheless they can. And complex instructions provide a big potential for optimisation, i.e. if you have an instruction which took 10 cycles on the old model and get the new model to execute it in 5 cycles, you end up with a speed increase of 100% (without a higher clock frequency). On the other hand, ARM processors executed most instructions in a single cycle right from the start, and thus don't have this optimisation potential (except for the MUL instruction).

The argument that RISC processors provide more registers than CISC processors isn't right. Just take a look at the (good old) 68000 - it has about the same number of registers as the ARM. And that 80x86 compatible processors don't provide more registers is just a matter of compatibility (I guess). But this argument isn't completely wrong: RISC processors are much simpler than CISC processors and thus take up much less space, which leaves room for additional functionality like more registers. On the other hand, a RISC processor with only three or so registers would be a pain to program; i.e. RISC processors simply need more registers than CISC processors for the same job.

And the argument that RISC processors have pipelining whereas CISCs don't is plainly wrong - i.e. the ARM2 hadn't, whereas the Pentium has...

Today, the advantages of RISC over CISC are these:



  • RISC processors are much simpler to build, which again results in the following advantages:

    • easier to build, i.e. you can use already existing production facilities

    • much less expensive - just compare the price of an XScale with that of a Pentium III at 1 GHz...

    • less power consumption, which again gives two advantages:

      • much longer use of battery driven devices

      • no need for cooling of the device, which again gives two advantages:

        • smaller design of the whole device

        • no noise

 


  • RISC processors are much simpler to program which doesn't only help the assembler programmer, but the compiler designer, too. You'll hardly find any compiler which uses all the functions of a Pentium III optimally...

And then there are the benefits of the ARM processors:

  • Conditional execution of most instructions, which is a very powerful thing, especially with long pipelines, as you have to refill the whole pipeline every time a branch is taken - that's why CISC processors make such a huge effort at branch prediction
     

  • The shifting of registers as part of other instructions, which means that shifts take no extra time at all (the 68000 took one cycle per bit shifted)
     

  • The optional setting of flags, i.e. ADD versus ADDS, which becomes extremely powerful together with the conditional execution of instructions (see the sketch at the end of this reply)
     

  • The free use of offsets when accessing memory, i.e.

LDR R0,[R1,#16]
LDR R0,[R1,#16]!
LDR R0,[R1],#16
LDR R0,[R1,R2]
LDR R0,[R1,R2]!
LDR R0,[R1],R2

...

The 68000 could only increase the address register by the size of the data read (i.e. by 1, 2 or 4). Just imagine how much better an ARM processor can be programmed to draw (not only) a vertical line on the screen.
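To make that concrete, here is a hedged sketch (the screen address, colour value, and row count are invented assumptions) of a vertical line drawn with post-indexed addressing, where the pointer walks down the screen for free:

; R0 = pixel colour byte, R1 = screen address of the top of the line,
; R2 = bytes per screen row, R3 = number of rows (all assumed set up)
Line
STRB R0, [R1], R2 ; store a pixel, then add one row to the pointer -
                  ; the address update costs nothing extra
SUBS R3, R3, #1   ; one row fewer to go (and set the flags)
BNE Line          ; loop until done

One store per pixel, with no separate pointer arithmetic at all.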


 

  • The (almost) free use of all registers with all instructions (which may well be an advantage of any RISC processor). It simply is great to be able to use

ADD PC,PC,R0,LSL #2
MOV R0,R0
B R0is0
B R0is1
B R0is2
B R0is3

...

or even


ADD PC,PC,R0,LSL #3
MOV R0,R0
MOV R1,#1
B Continue
MOV R1,#2
B Continue
MOV R1,#4
B Continue
MOV R1,#8
B Continue

...

I used this technique even more extensively when programming my C64 emulator, to emulate the 6510. There the shift is 8, which gives 256 bytes for each instruction to emulate. Within those 256 bytes there is not only the code for the emulation of the instruction, but also the code to react to interrupts, the fetching of the next instruction, and the jump to the emulation code of that instruction; i.e. the code to emulate CLC (clear C flag) looks like this:



ADD R10,R10,#1      ; increment PC of 6510 to point to next instruction
BIC R6,R6,#1        ; clear C flag of 6510 status register
LDR R0,[R12,#64]    ; read 6510 interrupt state
CMP R0,#0           ; interrupt occurred?
BNE &00018040       ; yes -> jump to interrupt handler
LDRB R1,[R4,#1]!    ; read next instruction
ADD PC,R5,R1,LSL #8 ; jump to emulation code
MOV R0,R0           ; lots of these to fill up the 256 bytes

This means that there is only one single jump for each instruction emulated. By this (and a bit more) the emulator is able to reach 76% of the speed of the original C64 on an A3000, 116% on an A4000, 300% on an A5000, and 3441% on my RiscPC (StrongARM at 287 MHz). The code may look hard to handle, but its source looks much better:

;-----------;
; $18 - CLC ;
;-----------;
ADD R10,R10,#1       ; increment PC of 6510
BIC R6,R6,#%00000001 ; clear C flag of 6510 status register
FNNextCommand        ; do next command
FNFillFree           ; fill remaining space
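To round off Matthias' points: combining the optional setting of flags with conditional execution lets you write entirely branch-free code. A hedged sketch (mine, not from his message) computing the absolute difference of two registers:

CMP R0, R1       ; set the flags, destroying nothing
SUBGT R0, R0, R1 ; if R0 > R1: R0 = R0 - R1
SUBLE R0, R1, R0 ; else:      R0 = R1 - R0
; |R0 - R1| in three instructions, with no branch
; to disturb the pipeline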

 


