
As painful as it is, I found dealing with an interrupt-rich environment much easier in assembler than in other languages. The ability to use coroutines significantly reduced our efforts in building interrupt-based programs. The ability to segregate tasks by interrupt level also helped a bit.

Besides the yield statement in Python, very few other higher-level languages seem to have anything resembling a coroutine. Bliss/36 had it, but one could argue about how high-level it is.



> As painful as it is, I found dealing with an interrupt-rich environment much easier in assembler than in other languages. The ability to use coroutines significantly reduced our efforts in building interrupt-based programs. The ability to segregate tasks by interrupt level also helped a bit.

I'd like to hear more. What were you trying to do? Why did it need coroutines/interrupts? How did you manage the much-lower-levelness of the language? Also, how is it that coroutines are easier in assembler in the first place? Is it because you can save the state of an execution path and jump to it later?

I've become interested lately in the idea of building a limited language (i.e. specific to the problem of one application), Lisp style, that expands directly to assembly language, as opposed to the normal way of doing it which is to expand to a high-level language that is then handed off to a general-purpose compiler. The idea is that because the language wouldn't be general-purpose, you could exercise much more control over the assembly language that is generated. What do you think of this?


This was a real-time data-acquisition process for a medical application that took ECG data transmitted over the phone line and returned an automatically-generated English-language diagnosis of that ECG to the hospital within ten minutes.

The incoming A/D data from three channels per phone line for ten or twenty phone lines came in every 2 ms. One level of interrupt was assigned to that device, and a task was attached to that interrupt. Its job was to store each sample in the appropriate buffer, properly multiplexed. When a buffer was full, this task would trigger a software interrupt for the routine writing the buffers to disk, which was its own task as well.
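In modern terms, the A/D task and the disk writer form a producer/consumer pair. A minimal Python-generator sketch of the pattern (all names here, like SAMPLES_PER_BUFFER, are hypothetical; the original was assembler) might look like this:

    SAMPLES_PER_BUFFER = 512          # hypothetical; the real size isn't given

    def disk_writer():
        # Stands in for the task woken by the software interrupt.
        while True:
            line_id, buf = yield      # suspend until a full buffer is handed over
            print("writing %d samples from line %d to disk" % (len(buf), line_id))

    def sample_collector(writer):
        # Stands in for the 2 ms A/D interrupt task: demultiplex samples
        # into per-phone-line buffers, hand full buffers to the writer.
        buffers = {}
        while True:
            line_id, sample = yield   # one sample per "interrupt"
            buf = buffers.setdefault(line_id, [])
            buf.append(sample)
            if len(buf) >= SAMPLES_PER_BUFFER:
                writer.send((line_id, buf))   # the "software interrupt"
                buffers[line_id] = []

    writer = disk_writer()
    next(writer)                      # prime both generators
    collector = sample_collector(writer)
    next(collector)
    for n in range(1024):             # samples would arrive every 2 ms
        collector.send((0, n % 256))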

In addition to the analog data coming in, there were also touch-tone style tones coming in that a) gave patient identification numbers, b) signalled the end of groups of data (data was transmitted in four groups of three leads), and c) signalled the end of the phone call. This had its own interrupt task as well.

Similarly, when data was written to disk, it was simultaneously written to tape. That had its own task as well. When the phone call was complete, it was put in a queue and fed to the analysis system. Reading the data and properly assembling it was another interrupt-driven task. Then output was either punched to paper tape or, later, sent out over phone lines, and that had its own set of tasks as well.

Interrupts were used because that is pretty much how you did things in that environment. There was a lot of stuff going on all at the same time, and nobody had better be busy-waiting on anything. It all needed to be overlapped.

We eventually figured out that these tasks were often pairwise synchronous tasks that used interrupts to "hand over" control to their cooperating partners. So the idea of a coroutine was implemented to make the code clearer. You could think of each of these tasks as a loop. When it came to the point of needing input, we developed a convention that essentially translated to "get me an interrupt": the current IP and registers would be saved in the task's context area, and the task suspended. Then, when it woke up, you would be at the next instruction following the "get me an interrupt" line.

With this technique, the system that made the phone calls and sent the messages (tasks for dialing, sending characters, reading from disk, managing tape logging) was developed with only one single-thread error and no multi-thread errors. I helped with the design of that part but not the coding.
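Here's a minimal sketch of that convention in Python, using generators as the tasks; the interrupt names, the dispatcher, and the dialer task are all hypothetical stand-ins, not the 1969 code:

    from collections import defaultdict

    # A task suspends itself by yielding the name of the interrupt it
    # wants; the dispatcher saves the generator (its "context area") and
    # resumes it when that interrupt is delivered.
    waiting = defaultdict(list)   # interrupt name -> suspended tasks

    def start(task):
        waiting[next(task)].append(task)   # run to the first "get me an interrupt"

    def interrupt(name):
        tasks, waiting[name] = waiting[name], []
        for task in tasks:
            try:
                waiting[task.send(None)].append(task)  # resume right after the yield
            except StopIteration:
                pass

    def dialer():
        while True:
            yield "dial tone"          # "get me an interrupt"; wakes up right here
            print("dialing the hospital")
            yield "call connected"
            print("sending the diagnosis")

    start(dialer())
    interrupt("dial tone")        # prints: dialing the hospital
    interrupt("call connected")   # prints: sending the diagnosis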

So for hard real time, you kind of need to deal with the interrupts.

Not sure what you mean by "How did you manage the much-lower-levelness of the language." At that time the alternative was Fortran, so we were before C, Bliss, and the like. There just wasn't an alternative.

Yes, the reason that coroutines work in assembler (see Knuth's description) is that you can save the state of the execution path and start right back up where you left off.

Well, I think your proposal has merit, but the control you mention is also available in languages like Python, C#, and Ruby with the 'yield' statement.

Lisp to bare metal is good, but stuff today is so much faster that it changes the equation entirely. We were doing this on a machine with 3.5 microsecond instruction times (at best), and the whole thing resided in 36k of 32-bit words. So small was important too, although that seemed big at the time.

So the question is how much control do you really need with machines today that will give you a metric truckload of instructions in a microsecond and you have to pay more money to get memory smaller than a gig? Heck I even get microseconds with SBCL.

A smaller language is good, but the selection of features is kind of key, 'cause you can give yourself a combinatorial rash if you aren't careful. Not sure what language I would choose these days--likely I'd start with something you could express the problem in nicely, and see if the timing works out. I learned a rule then which is "first make it right, then make it fast".


> So the question is how much control do you really need with machines today

Today the limiting factor is the OS, not the hardware. If your deadlines are long enough, it's surprising how well NT/XP works for low-level tasks. The problem we've had is that when we need deadlines below, say, 50 ms, even a multi-GHz Windows box can't guarantee that (XP can't guarantee anything!), so we need a mechanism to detect the missed deadline and continue without a hard failure.
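Something like this sketch captures the detect-and-continue idea; the 50 ms figure is from above, and do_periodic_work is a hypothetical placeholder:

    import time

    DEADLINE = 0.050   # the 50 ms deadline from above

    def do_periodic_work():
        pass           # hypothetical workload

    next_deadline = time.monotonic() + DEADLINE
    for _ in range(1000):
        do_periodic_work()
        now = time.monotonic()
        if now > next_deadline:
            # Deadline missed: note it, resynchronize, and keep going
            # rather than failing hard.
            missed = int((now - next_deadline) / DEADLINE) + 1
            print("missed %d deadline(s); resynchronizing" % missed)
            next_deadline += missed * DEADLINE
        else:
            time.sleep(next_deadline - now)
            next_deadline += DEADLINE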

Programming on a consumer OS like Windows has so many benefits that we prefer to use that as the main platform and offload the sub-millisecond-latency hard-realtime stuff to smaller processors running code on bare metal.

Sometimes it's hard coming to grips with the fact that a 4MHz PIC is outperforming a 2+ GHz Windows box, but silicon is cheap!


Quite correct.

In the scenario I described above, with each new release of their operating system (RT/M for the Sigma 5) I had to go through and change any code that masked out the timer interrupts and verify that it did not cause harm in any part of the OS. And this was designed to be a real-time OS, which it otherwise was.

The delay in the 2 ms interrupt caused by masking out the timer interrupt was estimated to be several times the quantization noise, an unacceptable result.

I don't know what the numbers are for Linux, but I would suspect there is a release that provides something workable. I would imagine that there are real-time OSes out there now.

Yes, but keep in mind that a 4 MHz PIC is faster than the hardware we were working on then.

And yes, silicon is cheap. The multiplexing A/D converters we were using cost $40,000 each. Today you get the same thing for maybe a dollar, retail, quantity one.

And XP is so bad that it can't key Morse code reliably through the printer port, for just the reasons you mention.


That sounds like a pretty cool system; thanks for sharing!


Yer welcome.

Done in 1969.


What is special about Python's yield statement compared to the ones in Ruby, C#, and other languages?


More significantly, continuations are a strict superset of coroutines, so standard Python is actually less powerful in this regard than languages with full continuation support, which, if memory serves me, includes Ruby, Smalltalk, and Scheme, among others.


Python has coroutines as of 2.5: http://www.python.org/dev/peps/pep-0342/
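A quick illustration of what PEP 342 added (shown in Python 3 syntax; running_average is just a toy example): yield becomes an expression, so the caller can send values into the generator, which is the essence of a coroutine:

    def running_average():
        total, count = 0.0, 0
        while True:
            # yield out the current average, suspend, and resume with
            # whatever the caller passes to send()
            value = yield (total / count if count else 0.0)
            total += value
            count += 1

    avg = running_average()
    next(avg)            # prime it: run to the first yield
    print(avg.send(10))  # 10.0
    print(avg.send(20))  # 15.0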


Unfortunately, continuations are not supported in Ruby as of 1.9. They might come back in 2.0 though.


Er, limitations of my knowledge, as I have done more in python than the other two. My bad.


Also Lua


I'm not sure why you were downvoted. An excellent overview of coroutines in general, and specifically as they are used in Lua, is http://www.inf.puc-rio.br/~roberto/docs/MCC15-04.pdf


Perl and C have coroutines. Haskell and Scheme have continuations.

(Contingent upon library use, of course.)



