<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Strict//EN">
<html>
<head>
<title>Synthesis: An Efficient Implementation of Fundamental Operating System Services - Chapter 3</title>
<link rel="stylesheet" type="text/css" href="../css/style.css">
<link rel="stylesheet" type="text/css" href="style.css">
</head>
<body>

<div id="nav">
<a class=home href="../index.html">Alexia's Home</a>
<a href="index.html">Dissertation</a>
<a href="abs.html">Abstract</a>
<a href="ack.html">Acknowledgements</a>
<a href="toc.html">Contents</a>
<a href="ch1.html">Chapter 1</a>
<a href="ch2.html">Chapter 2</a>
<a class=here href="ch3.html">Chapter 3</a>
<a href="ch4.html">Chapter 4</a>
<a href="ch5.html">Chapter 5</a>
<a href="ch6.html">Chapter 6</a>
<a href="ch7.html">Chapter 7</a>
<a href="ch8.html">Chapter 8</a>
<a href="bib.html">Bibliography</a>
<a href="app-A.html">Appendix A</a>
</div>

<div id="running-title">
Synthesis: An Efficient Implementation of Fundamental Operating System Services - Chapter 3
</div>

<div id="content">

<h1>3. Kernel Code Generator</h1>

<div id="chapter-quote">
For, behold, I create new heavens and a new earth.<br>
-- The Bible, Isaiah
</div>

<h2>3.1 Fundamentals</h2>

<p>Kernel code synthesis is the name given to the idea of creating executable machine code at runtime as a means of improving operating system performance. This idea distinguishes Synthesis from all other operating systems research efforts, and is what helps make Synthesis efficient.

<p>Runtime code generation is the process of creating executable machine code during program execution for use later during the same execution [16]. This is in contrast to the usual way, where all the code that a program runs has been created at compile time, before program execution starts. In the case of an operating system kernel like Synthesis, the "program" is the operating system kernel, and the term "program execution" refers to the kernel's execution, which lasts from the time the system is started to the time it is shut down.

<p>There are performance benefits in doing runtime code generation because there is more information available at runtime. Special code can be created based on the particular data to be processed, rather than relying on general-purpose code that is slower. Runtime code generation can extend the benefits of detailed compile-time analysis by allowing certain data-dependent optimizations to be postponed to runtime, where they can be done more effectively because there is more information about the data. We want to make the best possible use of the information available at compile time, and use runtime code generation to optimize data-dependent execution.

<p>The goal of runtime code generation can be stated simply:

<blockquote>Never evaluate something more than once.</blockquote>

<p>For example, suppose that the expression <em>A * A + A * B + B * B</em> is to be evaluated for many different values of A while holding B = 1. It is more efficient to evaluate the reduced expression obtained by replacing B with 1: <em>A * A + A + 1</em>. Finding opportunities for such optimizations and performing them is the focus of this chapter.

<p>The problem is one of knowing how soon we can know what value a variable has, and how that information can be used to improve the program's code. In the previous example, if it can be deduced at compile time that B = 1, then a good compiler can perform precisely the reduction shown. But usually we can not know ahead of time what value a variable will have. B might be the result of a long calculation whose value is hard, if not impossible, to predict until the program is actually run. But when it is run, and we know B, runtime code generation allows us to use the newly-acquired information to reduce the expression.

<p>Specifically, we create specialized code once the value of B becomes known, using an idea called partial evaluation [15]. Partial evaluation is the building of simpler, easier-to-evaluate expressions from complex ones by substituting variables that have a known, constant value with that constant. When two or more of these constants are combined in an arithmetic or logical operation, or when one of the constants is an identity for the operation, the operation can be eliminated. In the previous example, we no longer have to compute B * B, since we know it is 1, and we do not need to compute A * B, since we know it is A.

<p>There are strong parallels between runtime code generation and compiler code generation, and many of the ideas and terminology carry over from one to the other. Indeed, anything that a compiler does to create executable code can also be performed at runtime. But because compilation is an off-line process, there is usually less concern about the cost of code generation and therefore one has a wider palette of techniques to choose from. A compiler can afford to use powerful, time-consuming analysis methods and perform sophisticated optimizations - a luxury not always available at runtime.

<p>Three optimizations are of special interest to us, not only because they are easy to do, but because they are also effective in improving code quality. They are: <em>constant folding</em>, <em>constant propagation</em>, and <em>procedure inlining</em>. Constant folding replaces constant expressions like 5 * 4 with the equivalent value, 20. Constant propagation replaces variables that have a known, constant value with that constant. For example, the fragment <em>x = 5; y = 4 * x;</em> becomes <em>x = 5; y = 4 * 5;</em> through constant propagation; <em>4 * 5</em> then becomes <em>20</em> through constant folding. Procedure inlining substitutes the body of a procedure, with its local variables appropriately renamed to avoid conflicts, in place of its call.

<p>There are three costs associated with runtime code generation: creation cost, paid each time a piece of code is created; execution cost, paid each time the code is used; and management cost, paid to keep track of where the code is and how it is being used. The hope is to use the information available at runtime to create better code than would otherwise be possible. In order to win, the savings of using the runtime-created code must exceed the cost of creating and managing that code. This means that for many applications, a fast code generator that creates good code will be superior to a slow code generator that creates excellent code. (The management problem is analogous to keeping track of ordinary, heap-allocated data structures, and the costs are similar, so they will not be considered further.)

<p>Synthesis focuses on techniques for implementing very fast runtime code generation. The goal is to broaden its applicability and extend its benefits, making it cheap enough so that even expressions and procedures that are not re-used often still benefit from having their code custom-created at runtime. To this end, the places where runtime code generation is used are limited to those where it is clear at compile time what the possible reductions will be. The following paragraphs describe the idea, while the next section describes the specific techniques.

<p>A fast runtime code generator can be built by making full use of the information available at compile time. In our example, we know at compile time that B will be held constant, but we do not know what the constant will be. But we can predict at compile time what form the reduced expression will have: <em>A * A + C1 * A + C2</em>. Using this knowledge, we can build a simple code generator for the expression that copies a code template representing <em>A * A + C1 * A + C2</em> into newly allocated memory and computes and fills in the constants: <em>C1 = B</em> and <em>C2 = B * B</em>. A code template is a fragment of code which has been compiled but contains "holes" for key values.
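
<p>The effect of such a template can be sketched in C. The following is a minimal illustration, modeling the "holes" as constants that a create-time step fills in once; the names <em>expr_create</em> and <em>expr_eval</em> are hypothetical, and a real implementation would patch immediate operands in freshly copied machine code rather than fill a structure:

<div class=code>
<pre>
#include &lt;stdio.h&gt;

/* Compiled "template" for A*A + C1*A + C2, with holes C1 and C2. */
struct expr_code {
    long c1, c2;                /* the holes, filled at create time */
};

/* F_create: specialize the expression once B becomes known.
   Computes C1 = B and C2 = B*B exactly once. */
static struct expr_code expr_create(long b)
{
    struct expr_code code = { b, b * b };
    return code;
}

/* F_small: the specialized expression, reused for many values of A. */
static long expr_eval(const struct expr_code *code, long a)
{
    return a * a + code->c1 * a + code->c2;
}

int main(void)
{
    struct expr_code f = expr_create(1);    /* B = 1 */
    for (long a = 0; a < 4; a++)            /* many different A */
        printf("%ld\n", expr_eval(&f, a));  /* computes A*A + A + 1 */
    return 0;
}
</pre>
</div>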

<p>Optimizations to the runtime-created code can also be pre-computed. In this example, interesting optimizations occur when B is 0, 1, or a power of two. Separate templates for each of these cases allow the most efficient code possible to be generated. The point is that there is plenty of information available at compile time to allow not just simple substitution of variables by constants, but also interesting and useful optimizations to happen at runtime with minimal analysis.

<p>The general idea is: treat runtime code generation as if it were just another "function" to be optimized, and apply the idea of partial evaluation recursively. That is, just as in the previous example we partially-evaluate the expression <em>A * A + A * B + B * B</em> with respect to the variable held constant, we can partially-evaluate the optimizations with respect to the parameters that the functions will be specialized under, with the result being specialized code-generator functions.

<p>Looking at a more complex example, suppose that the compiler knows, either through static control-flow analysis, or simply by the programmer telling it through some directives, that the function <em>f(p1, ...) = 4 * p1 + ...</em> will be specialized at runtime for constant p1. The compiler can deduce that the expression <em>4 * p1</em> will reduce to a constant, but it does not know what particular value that constant will have. It can capture this knowledge in a custom code generator for f that computes the value <em>4 * p1</em> when p1 becomes known and stores it in the correct spot in the machine code of the specialized function f, bypassing the need for analysis at runtime. In another example, consider the function g, <em>g(p1, ...) = if(p1 != 10) S1; else S2;</em>, also to be specialized for constant parameter p1. Since parameter p1 will be constant, we know at compile time that the if-statement will be either always true or always false. We just don't know which. But again, we can create a specialized generator for g, one that evaluates the conditional when it becomes known and emits either S1 or S2 depending on the result.

<p>The idea applies recursively. For example, once we have a code generator for a particular kind of expression or statement, that same generator can be used each time that kind of expression occurs, even if it is in a different part of the program. Doing this limits the proliferation of code generators and keeps the program size small. The resulting runtime code generator has a hierarchical structure, with generators for the large functions calling sub-generators to create the individual statements, which in turn call yet lower-level generators, and so on, until at the bottom we have very simple generators that, for example, move a constant into a machine register in the most efficient way possible.

<h2>3.2 Methods of Runtime Code Generation</h2>

<p>The three methods Synthesis uses to create machine code are: <em>factoring invariants</em>, <em>collapsing layers</em>, and <em>executable data structures</em>.

<h3>3.2.1 Factoring Invariants</h3>

<p>The factoring invariants method is equivalent to partial evaluation where the variables over which a function will be partially evaluated are known at compile time. It is based on the observation that a functional restriction is usually easier to calculate than the original function. Consider a general function:

<blockquote><em>
F<sub>big</sub>(p1, p2, ... , pn)
</em></blockquote>

If we know that parameter p1 will be held constant over a set of invocations, we can factor it out to obtain an equivalent composite function:

<blockquote><em>
[ F<sup>create</sup>(p1) ] (p2, ... , pn) ≡ F<sub>big</sub>(p1, p2, ... , pn)
</em></blockquote>

F<sup>create</sup> is a second-order function. Given the parameter p1, F<sup>create</sup> returns another function, F<sub>small</sub>, which is the restriction of F<sub>big</sub> that has absorbed the constant argument p1:

<blockquote><em>F<sub>small</sub>(p2, ... , pn) ⊂ F<sub>big</sub>(p1, p2, ... , pn)</em></blockquote>

If F<sup>create</sup> is independent of global data, then for a given p1, F<sup>create</sup> will always compute the same F<sub>small</sub> regardless of global state. This allows F<sup>create</sup>(p1) to be evaluated once and the resulting F<sub>small</sub> used thereafter. If F<sub>small</sub> is executed m times, generating and using it pays off when

<blockquote><em>
Cost(F<sup>create</sup>) + m * Cost(F<sub>small</sub>) < m * Cost(F<sub>big</sub>)
</em></blockquote>
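
Rearranging, the specialized code pays off once it is used more than Cost(F<sup>create</sup>) / (Cost(F<sub>big</sub>) - Cost(F<sub>small</sub>)) times: the larger the gap between the general and the specialized function, the fewer executions are needed to amortize the cost of creating F<sub>small</sub>.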

As the "factoring invariants" name suggests, this method resembles the constant propagation and constant folding optimizations done by compilers. The analogy is strong, but the difference is also significant. Constant folding eliminates static code and calculations. Factoring Invariants can also simplify dynamic data structure traversals that depend on the constant parameter p1.

<p>For example, we can apply this idea to improve the performance of the read system function. When reading a particular file, constant parameters include the device that the file resides on, the address of the kernel buffers, and the process performing the read. We can use file open as F<sup>create</sup>; the F<sub>small</sub> it generates becomes our read function. F<sup>create</sup> consists of many small procedure templates, each of which knows how to generate code for a basic operation such as "read disk block", "process TTY input", or "enqueue data." The parameters passed to F<sup>create</sup> determine which of these code-generating procedures are called and in what order. The final F<sub>small</sub> is created by filling these templates with addresses of the process table, device registers, and the like.
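
<p>A schematic C rendering of this factoring, with the invariants stored in a structure rather than embedded in generated code, might look as follows. The names <em>file_open</em>, <em>file_read</em>, and <em>read_block</em> are hypothetical; a real F<sup>create</sup> would emit machine code with these values hard-wired as immediate operands:

<div class=code>
<pre>
#include &lt;stdio.h&gt;

/* State that is invariant across all reads of one open file. */
struct open_file {
    int (*read_block)(int dev, long blkno, char *dst);  /* device-specific */
    int   dev;                  /* device the file resides on */
    char *kbuf;                 /* kernel buffer address      */
};

/* F_create: "open" factors the invariants out of every future read. */
static struct open_file file_open(int dev,
        int (*read_block)(int, long, char *), char *kbuf)
{
    struct open_file f = { read_block, dev, kbuf };
    return f;
}

/* F_small: the specialized read; no per-call lookup of device,
   driver, or buffer. */
static int file_read(struct open_file *f, long blkno)
{
    return f->read_block(f->dev, blkno, f->kbuf);
}

/* Stand-in for a real disk driver. */
static int disk_read_block(int dev, long blkno, char *dst)
{
    return sprintf(dst, "dev %d, block %ld", dev, blkno);
}

int main(void)
{
    char kbuf[64];
    struct open_file f = file_open(1, disk_read_block, kbuf);
    file_read(&f, 7);
    puts(kbuf);
    return 0;
}
</pre>
</div>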

<h3>3.2.2 Collapsing Layers</h3>

<p>The collapsing layers method is equivalent to procedure inlining where it is known at compile time which procedures might be inlined. It is based on the observation that in a layered design, separation between layers is a part of specification, not implementation. In other words, procedure calls and context switches between functional layers can be bypassed at execution time. Let us consider an example from the layered OSI model:

<blockquote><em>
F<sub>big</sub>(p1, p2, ... , pn) ≡ F<sub>applica</sub>(p1, F<sub>present</sub>(p2, F<sub>session</sub>( ... F<sub>datalnk</sub>(pn) ... )))
</em></blockquote>

F<sub>applica</sub> is a function at the Application layer that calls successive lower layers to send a message. Through in-line code substitution of F<sub>present</sub> in F<sub>applica</sub>, we can obtain an equivalent flat function by eliminating the procedure call from the Application to the Presentation layer:

<blockquote><em>
F<sub>flatapplica</sub>(p1, p2, F<sub>session</sub>( ... )) ≡ F<sub>applica</sub>(p1, F<sub>present</sub>(p2, F<sub>session</sub>( ... )))
</em></blockquote>

The process to eliminate the procedure call can be embedded into two second-order functions. F<sup>create</sup><sub>present</sub> returns code equivalent to F<sub>present</sub> and suitable for in-line insertion. F<sup>create</sup><sub>applica</sub> incorporates that code to generate F<sub>flatapplica</sub>:

<blockquote><em>
F<sup>create</sup><sub>applica</sub>(p1, F<sup>create</sup><sub>present</sub>(p2, ... )) ≡ F<sub>flatapplica</sub>(p1, p2, ... )
</em></blockquote>

This technique is analogous to in-line code substitution for procedure calls in compiler code generation. In addition to the elimination of procedure calls, the resulting code typically exhibits opportunities for further optimization, such as Factoring Invariants and elimination of data copying.

<p>By induction, F<sup>create</sup><sub>present</sub> can eliminate the procedure call to the Session layer, and down through all layers. When we execute F<sup>create</sup><sub>applica</sub> to establish a virtual circuit, the F<sub>flatapplica</sub> code used thereafter to send and receive messages may consist of only sequential code. The performance gain analysis is similar to the one for factoring invariants.
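
<p>The following C sketch illustrates the payoff under simplifying assumptions (each layer merely adds its header size; all names are hypothetical). In the layered version, each layer calls the next one down; in the collapsed version the calls are gone, and constant folding has merged the per-layer work:

<div class=code>
<pre>
#include &lt;stdio.h&gt;

/* Layered: each layer adds its header size and calls the next. */
static int send_datalnk (int n) { return n + 4; }
static int send_session (int n) { return send_datalnk(n + 8); }
static int send_present (int n) { return send_session(n + 12); }
static int send_applica (int n) { return send_present(n + 16); }

/* Collapsed: the layer boundaries survive only in the specification.
   The procedure calls are eliminated, and folding the four constants
   leaves a single addition. */
static int send_flat(int n) { return n + 40; }

int main(void)
{
    printf("%d %d\n", send_applica(100), send_flat(100));  /* 140 140 */
    return 0;
}
</pre>
</div>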

<h3>3.2.3 Executable Data Structures</h3>

<p>The executable data structures method reduces the traversal time of data structures that are frequently traversed in a preferred way. It works by storing node-specific traversal code along with the data in each node, making the data structure self-traversing.

<p>Consider an active job queue managed by a simple round-robin scheduler. Each element in the queue contains two short sequences of code: <em>stopjob</em> and <em>startjob</em>. The <em>stopjob</em> saves the registers and branches into the next job's <em>startjob</em> routine (in the next element in the queue). The <em>startjob</em> restores the new job's registers, installs the address of its own <em>stopjob</em> in the timer interrupt vector table, and resumes processing.

<p>An interrupt causing a context switch will execute the current program's <em>stopjob</em>, which saves the current state and branches directly into the next job's <em>startjob</em>. Note that the scheduler has been taken out of the loop. It is the queue itself that does the context switch, with a critical path on the order of ten machine instructions. The scheduler intervenes only to insert and delete elements from the queue.
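
<p>In the kernel these are real machine-code sequences, with <em>stopjob</em> ending in a direct jump to the next node's <em>startjob</em>. A C model can only approximate this with function pointers (keeping one level of indirection that the generated code avoids), but the structure is visible:

<div class=code>
<pre>
#include &lt;stdio.h&gt;

/* Each queue element carries its own traversal code. */
struct job {
    void (*startjob)(struct job *self);   /* restore state, resume  */
    struct job *next;                     /* circular ready queue   */
    int reg;                              /* stand-in for registers */
};

static void startjob(struct job *self)
{
    /* restore registers, install own stopjob in the timer vector */
    printf("resuming job %d\n", self->reg);
}

/* stopjob: save state, then fall directly into the next job's
   startjob - no scheduler on the critical path. */
static void stopjob(struct job *self)
{
    /* registers would be saved here */
    self->next->startjob(self->next);
}

int main(void)
{
    struct job a = { startjob, 0, 1 }, b = { startjob, &a, 2 };
    a.next = &b;
    stopjob(&a);        /* "context switch" from job 1 to job 2 */
    return 0;
}
</pre>
</div>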

<h3>3.2.4 Performance Gains</h3>

<p>Runtime code generation and partial evaluation can be thought of as a way of caching frequently visited states. It is interesting to contrast this type of caching with the caching that existing systems do using ordinary data structures. Generally, systems use data structures to capture state and remember expensive-to-compute values. For example, when a file is opened, a data structure is built to describe the file, including its location on disk and a pointer to the procedure to be used to read it. The read procedure interprets state stored in the data structure to determine what work is to be done and how to do it.

<p>In contrast, code synthesis encodes state directly into generated procedures. The resulting performance gains extend beyond just saving the cost of interpreting a data structure. To see this, let us examine the performance gains obtained from hard-wiring a constant directly into the code compared to fetching it from a data structure. Hard-wiring embeds the constant in the instruction stream, so there is an immediate savings that comes from eliminating one or two levels of indirection and obviating the need to pass the structure pointer. These can be attributed to "saving the cost of interpretation." But hard-wiring also opens up the possibility of further optimizations, such as constant folding, while fetching from a data structure admits no such optimizations. Constant folding becomes possible because once it is known that a parameter will be, say, 2, all pure functions of that parameter will likewise be constant and can be evaluated once and the constant result used thereafter. A similar flavor of optimization arises with IF-statements. In the code fragment "if(C) S1; else S2;", where the conditional, C, depends only on constant parameters, the generated code will contain either S1 or S2, never both, and no test. It is with this cascade of optimization possibilities that code synthesis obtains its most significant performance gains. The following section illustrates some of the places in the kernel where runtime code generation is used to advantage.
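
<p>Both effects are easy to see in C under the (hypothetical) assumption that the parameter <em>width</em> is the value being hard-wired. The general routine must recompute everything that depends on <em>width</em> on every call; in the specialized routine those pure functions have folded into constants:

<div class=code>
<pre>
/* General: recomputes mask and shift from 'width' on every call. */
unsigned pack_general(unsigned width, unsigned x, unsigned y)
{
    unsigned mask = (1u << width) - 1;
    return (x << width) | (y & mask);
}

/* Specialized for width == 2: every pure function of the constant
   parameter folds away, leaving a shift, a mask, and an OR. */
unsigned pack_w2(unsigned x, unsigned y)
{
    return (x << 2) | (y & 3u);
}
</pre>
</div>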

<h2>3.3 Uses of Code Synthesis in the Kernel</h2>

<h3>3.3.1 Buffers and Queues</h3>

<p>Buffers and queues can be implemented more efficiently with runtime code generation than without.

<div class=code>
<pre>
char buf[100], *bufp = &buf[0], *endp = &buf[100];
Put(c)
{
    *bufp++ = c;
    if(bufp == endp)
        flush();
}

Put:                         // (character is passed in register d0)
    move.l  (bufp),a0        // (1) Load buffer pointer into register a0
    move.b  d0,(a0)+         // (2) Store the character and increment the a0 register
    move.l  a0,(bufp)        // (3) Update the buffer pointer
    cmp.l   (endp),a0        // (4) Test for end-of-buffer
    beq     flush            // ... if end, jump to flush routine
    rts                      // ... otherwise return
</pre>
<p class=caption>Figure 3.1: Hand-crafted assembler implementation of a buffer</p>
</div>

<p>Figure 3.1 shows a good, hand-written 68030 assembler implementation of a buffer.

<p>The C language code illustrates the intended function, while the 68030 assembler code shows the work involved. The work consists of: (1) loading the buffer pointer into a machine register; (2) storing the character in memory while incrementing the pointer register; (3) updating the buffer pointer in memory; and (4) testing for the end-of-buffer condition. This fragment executes in 28 machine cycles not counting the procedure call overhead.

<div class=code>
<pre>
Put:                         // (character is passed in register d0)
    move.l  (P),a0           // Load buffer pointer into register a0
    move.b  d0,(a0,D)        // Store the character
    addq.w  #1,(P+2)         // Update the buffer pointer and test if reached end
    beq     flush            // ... if end, jump to flush routine
    rts                      // ... otherwise return
</pre>
<p class=caption>Figure 3.2: Better buffer implementation using code synthesis</p>
</div>

<table class=table>
<caption>
Table 3.1: CPU Cycles for Buffer-Put<br>
<small>68030 CPU, 25MHz, 1-wait-state main memory</small>
</caption>
<tr class=head><th><th>Cold cache<th>Warm cache
<tr><th>Code-synthesis (CPU cycles)<td class=number>29<td class=number>20
<tr><th>Hand-crafted assembly (CPU cycles)<td class=number>37<td class=number>28
<tr><th>Speedup<td class=number>1.4<td class=number>1.4
</table>

<p>Figure 3.2 shows the code-synthesis implementation of a buffer, which is 40% faster. Table 3.1 gives the actual measurements. The improvement comes from the elimination of the cmp instruction, for a savings of 8 cycles. The code relies on the implicit test for zero that occurs at the end of every arithmetic operation. Specifically, we arrange that the lower 16 bits of the pointer variable be zero when the end of buffer is reached, so that incrementing the pointer also implicitly tests for end-of-buffer.

<p>This is done for a general pointer as follows. The original bufp pointer is represented as the sum of two quantities: a pointer-like variable, <em>P</em>, and a constant displacement, <em>D</em>. Their sum, <em>P + D</em>, gives the current position in the buffer, and takes the place of the original bufp pointer. The character is stored in the buffer using the "<em>move.b d0,(a0,D)</em>" instruction, which is just as fast as a simple register-indirect store. The displacement, <em>D</em>, is chosen so that when <em>P + D</em> points to the end of the buffer, P is 0 modulo 2<sup>16</sup>, that is, the least significant 16 bits of <em>P</em> are zero. The "<em>addq.w #1,(P+2)</em>" instruction then increments the lower 16 bits of the buffer pointer and also implicitly tests for end-of-buffer, which is indicated by a 0 result. For buffer sizes greater than 2<sup>16</sup> bytes, the flush routine can propagate the carry-out to the upper bits, flushing the buffer when the true end is reached.
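
<p>The arithmetic can be mimicked in portable C using unsigned 16-bit wraparound, though the real point - that <em>D</em> is an immediate operand inside generated code - is lost in the model. All names here are illustrative:

<div class=code>
<pre>
#include &lt;stdio.h&gt;

#define BUFSIZE 100

static char buf[BUFSIZE];
static unsigned short p;        /* low 16 bits of P                */
static unsigned short D;        /* displacement "embedded in code" */

static void flush(void)
{
    fwrite(buf, 1, BUFSIZE, stdout);
    p = (unsigned short)(0u - BUFSIZE);   /* reset for the next pass */
}

static void buf_create(void)
{
    /* Choose P and D so that P == 0 (mod 2^16) exactly at the end. */
    p = (unsigned short)(0u - BUFSIZE);
    D = BUFSIZE;
}

static void put(char c)
{
    buf[(unsigned short)(p + D)] = c;   /* store at P + D            */
    if (++p == 0)                       /* incrementing the low 16   */
        flush();                        /* bits of P tests for end   */
}

int main(void)
{
    buf_create();
    for (int i = 0; i < 2 * BUFSIZE; i++)   /* fills and flushes twice */
        put("0123456789"[i % 10]);
    return 0;
}
</pre>
</div>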

<table class=table>
<caption>
Table 3.2: Comparison of C-Language "stdio" Libraries
</caption>
<tr class=head><th>10<sup>7</sup> Executions of:<th>Execution time, seconds<th>Size: Bytes/Invocation
<tr><th><span class=smallcaps>Unix</span> "putchar" macro<td>21.4 user; 0.1 system<td class=number>132
<tr><th>Synthesis "putchar" macro<td>13.0 user; 0.1 system<td class=number>30
<tr><th>Synthesis "putchar" function<td>19.0 user; 0.1 system<td class=number>8
</table>

<p>This performance gain can only be had using runtime code generation, because <em>D</em> must be a constant, embedded in the buffer's machine code, to take advantage of the fast memory-reference instruction. Were <em>D</em> a variable, the loss of fetching its value and indexing would offset the gain from eliminating the compare instruction. The 40% savings is significant because buffers and queues are used often. Another advantage is improved locality of reference: code synthesis puts both code and data in the same page of memory, increasing the likelihood of cache hits in the memory management unit's address translation cache.

<p>Outside the kernel, the Synthesis implementation of the C-language I/O library, "stdio," uses code-synthesized buffers at the user level. In a simple experiment, I replaced the <span class=smallcaps>Unix</span> stdio library with the Synthesis version. I compiled and ran a simple test program that invokes the putchar macro ten million times, using first the native <span class=smallcaps>Unix</span> stdio library supplied with the Sony NEWS workstation, and then the Synthesis version. Table 3.2 shows the Synthesis macro version is 1.6 times faster, and over 4 times smaller, than the <span class=smallcaps>Unix</span> version.

<p>The drastic reduction in code size comes about because code synthesis can take advantage of the extra knowledge available at runtime to eliminate execution paths that cannot be taken. The putchar operation, as defined in the C library, actually supports three kinds of buffering: block-buffered, line-buffered and unbuffered. Even though only one of these can be in effect at any one time, the C putchar macro must include code to handle all of them, since it cannot know ahead of time which one will be used. In contrast, code synthesis creates only the code handling the kind of buffering actually desired for the particular file being written to. Since putchar, being a macro, is expanded in-line every time it appears in the source code, the savings accumulate rapidly.
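
<p>The contrast can be sketched in C (the structures and names here are illustrative, not the actual Synthesis or <span class=smallcaps>Unix</span> stdio internals):

<div class=code>
<pre>
#include &lt;stdio.h&gt;

enum bufmode { UNBUFFERED, LINEBUF, BLOCKBUF };

struct xfile {
    enum bufmode mode;
    char buf[512];
    int  n;
};

static void xflush(struct xfile *f)
{
    fwrite(f->buf, 1, f->n, stdout);
    f->n = 0;
}

/* General putchar: carries code for all three buffering modes,
   although only one can be in effect for any given file. */
static void putc_general(struct xfile *f, char c)
{
    switch (f->mode) {
    case UNBUFFERED:
        fwrite(&c, 1, 1, stdout);
        break;
    case LINEBUF:
        f->buf[f->n++] = c;
        if (c == '\n' || f->n == (int)sizeof f->buf)
            xflush(f);
        break;
    case BLOCKBUF:
        f->buf[f->n++] = c;
        if (f->n == (int)sizeof f->buf)
            xflush(f);
        break;
    }
}

/* What code synthesis creates for a block-buffered file: the other
   two paths are never emitted, so the code is a fraction of the size. */
static void putc_blockbuf(struct xfile *f, char c)
{
    f->buf[f->n++] = c;
    if (f->n == (int)sizeof f->buf)
        xflush(f);
}
</pre>
</div>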

<p>Table 3.2 also shows that the Synthesis "putchar" function is slightly faster than the <span class=smallcaps>Unix</span> macro - a dramatic result: even paying the overhead of a procedure call, code synthesis is still faster than conventional code expanded in-line by a macro.

<h3>3.3.2 Context Switches</h3>

<p>One reason that context switches are expensive in traditional systems like <span class=smallcaps>Unix</span> is that they always save and restore the entire CPU context, even though that may not be necessary. For example, a process that has not used floating point since it was switched in does not need to have its floating-point registers saved when it is switched out. Another reason is that saving context is often implemented as a two-step procedure: the CPU registers are first placed in a holding area, freeing them so they can be used to perform calculations and traverse data structures to find out where the context should be stored; the context is then copied there from the holding area.

<p>A Synthesis context switch takes less time because only the part of the context being used is preserved, not all of it, and because the critical path traversing the ready queue is minimized with an executable data structure.

<p>The first step is to know how much context to preserve. Context switches can happen synchronously or asynchronously with thread execution. Asynchronous context switches are the result of external events forcing preemption of the processor, for example, at the end of a CPU quantum. Since they can happen at any time, it is hard to know in advance how much context is being used, so we preserve all of it. Synchronous context switches, on the other hand, happen as a result of the thread requesting them, for example, when relinquishing the CPU to wait for an I/O operation to finish. Since they occur at specific, well-defined points in the thread's execution, we can know exactly how much context will be needed and therefore can arrange to preserve only that much. For example, suppose a read procedure needs to block and wait for I/O to finish. Since it has already saved some registers on the stack as part of the normal procedure-call mechanism, there is no need to preserve them again as they will only be overwritten upon return.

<div class=code>
<pre>
proc:
    :
    :
    {Save necessary context}
    bsr     swtch
res:
    {Restore necessary context}
    :
    :

swtch:
    move.l  (Current),a0       // (1) Get address of current thread's TTE
    move.l  sp,(a0)            // (2) Save its stack pointer
    bsr     find_next_thread   // (3) Find another thread to run
    move.l  a0,(Current)       // (4) Make that one current
    move.l  (a0),sp            // (5) Load its stack pointer
    rts                        // (6) Go run it!
</pre>
<p class=caption>Figure 3.3: Context Switch</p>
</div>

<p>Figure 3.3 illustrates the general idea. When a kernel thread executes code that decides that it should block, it saves whatever context it wishes to preserve on the active stack. It then calls the scheduler, swtch; doing so places the thread's program counter on the stack. At this point, the top of stack contains the address where the thread is to resume execution when it unblocks, with the machine registers and the rest of the context below that. In other words, the thread's context has been reduced to a single register: its stack pointer. The scheduler stores the stack pointer into the thread's control block, known as the thread table entry (TTE), which holds the thread state when it is not executing. It then selects another thread to run, shown as a call to the <em>find_next_thread</em> procedure in the figure, but actually implemented as an executable data structure as discussed later. The variable Current is updated to reflect the new thread and its stack pointer is loaded into the CPU. A return-from-subroutine (rts) instruction starts the thread running. It continues where it had left off (at label res), where it pops the previously-saved state off the stack and proceeds with its work.

<p>Figure 3.4 shows two TTEs. Each TTE contains code fragments that help with context switching: <em>sw_in</em> and <em>sw_in_mmu</em>, which load the processor state from the TTE; and <em>sw_out</em>, which stores processor state back into the TTE. These code fragments are created specially for each thread. To switch in a thread for execution, the processor executes the thread's <em>sw_in</em> or <em>sw_in_mmu</em> procedure. To switch out a thread, the processor executes the thread's <em>sw_out</em> procedure.

<!-- FIGURE (IMG) GOES HERE - - FINISH -->
<img src="finish.png">
<p class=caption>Figure 3.4: Thread Context</p>

<p>Notice how the ready-to-run threads (waiting for CPU) are chained in an executable circular queue. A <em>jmp</em> instruction at the end of the <em>sw_out</em> procedure of the preceding thread points to the <em>sw_in</em> procedure of the following thread. Assume thread-0 is currently running. When its time quantum expires, the timer interrupt is vectored to thread-0's <em>sw_out</em>. This procedure saves the CPU registers into thread-0's register save area (TT0.reg). The jmp instruction then directs control flow to one of two entry points of the next thread's (thread-1) context-switch-in procedure, <em>sw_in</em> or <em>sw_in_mmu</em>. Control flows to <em>sw_in_mmu</em> when a change of address space is required; otherwise control flows to <em>sw_in</em>. The switch-in procedure then loads the CPU's vector base register with the address of thread-1's vector table, restores the processor's general registers, and resumes execution of thread-1. The entire switch takes 10.5 microseconds to switch integer-only contexts between threads in the same address space, or 56 microseconds including the floating point context and a change in address space.<sup>1</sup>

<div class=footnote><sup>1</sup> Previous papers incorrectly cite a floating-point context switch time of 15 µs [25] [18]. This error is believed to have been caused by a bug in the Synthesis assembler, which incorrectly filled the operand field of the floating-point move-multiple-registers instruction, causing it to preserve just one register instead of all eight. Since very few Synthesis applications use floating point, this bug remained undetected for a long time.</div>

<p>Table 3.3 summarizes the time taken by the various types of context switches in Synthesis, saving and restoring all the integer registers. These times include the hardware interrupt service overhead -- they show the elapsed time from the execution of the last instruction in the suspended thread to the first instruction in the next thread. Previously published papers report somewhat lower figures [25] [18]. This is because they did not include the interrupt-service overhead, and because of some extra overhead incurred in handling the 68882 floating point unit on the Sony NEWS workstation that does not occur on the Quamachine, as discussed later. For comparison, a call to a null procedure in the C language takes 1.4 microseconds, and the Sony <span class=smallcaps>Unix</span> context switch takes 170 microseconds.

<table class=table>
<caption>
Table 3.3: Cost of Thread Scheduling and Context Switch<br>
<small>68030 CPU, 25MHz, 1-wait-state main memory, cold cache</small>
</caption>
<tr class=head><th>Type of context switch<th>Time (µs)
<tr><th>Integer registers only<td class=number>10.5
<tr><th>Floating-point<td class=number>52
<tr><th>Integer, change address space<td class=number>16
<tr><th>Floating-point, change address space<td class=number>56
<tr><th>Null procedure call (C language)<td class=number>1.4
<tr><th>Sony NEWS, <span class=smallcaps>Unix</span><td class=number>170
<tr><th>NeXT Machine, Mach<td class=number>510
</table>

<p>In addition to reducing ready-queue traversal time, specialized context-switch code enables further optimizations that move only the data that is needed. The previous paragraph already touched on one of the optimizations: bypassing the MMU address space switch when it is not needed. The other optimizations occur in the handling of floating point registers, described now, and in the handling of interrupts, described in the next section.

<p>Switching the floating point context is expensive because of the large amount of state that must be saved. The registers are 96 bits wide; moving all eight registers requires 24 transfers of 32 bits each. The 68882 coprocessor compounds this cost, because each word transferred requires two bus cycles: one to fetch it from the coprocessor, and one to write it to memory. The result is that it takes about 50 microseconds just to save and restore the hundred-plus bytes of information comprising the floating point coprocessor state. This is more than five times the cost of doing an entire context switch without the floating point.

<p>Since preserving floating point context is so expensive, we use runtime tests to detect whether floating point has been used and avoid saving state that is not needed. Threads start out assuming floating point will not be used, and their context-switch code is created without it. When a thread is switched out, the context-save code checks whether the floating point unit has been used, using the fsave instruction of the Motorola 68882 floating point coprocessor, which saves only the internal microcode state of the floating point processor [20]. If the saved state is not null, the user-visible floating-point state is saved as well, and the context-switch code is re-created to include the floating-point context in subsequent context switches. Since the majority of threads in Synthesis do not use floating point, the savings are significant.
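
<p>In outline, the switch-out path looks like the following C model, where <em>fsave_null()</em> is a hypothetical wrapper around the 68882 fsave instruction and its null-state test:

<div class=code>
<pre>
struct tte { int fp_in_switch_code; };   /* thread table entry (model) */

static int fpu_used;                     /* stands in for the FPU state */
static int fsave_null(void) { return !fpu_used; }

static void switch_out(struct tte *t)
{
    /* ... integer registers saved here ... */
    if (fsave_null())
        return;              /* FPU untouched: nothing more to save */
    /* First floating-point use detected: save the user-visible FP
       registers, and regenerate this thread's context-switch code so
       that future switches include the floating-point context. */
    t->fp_in_switch_code = 1;
}
</pre>
</div>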

<p>Unfortunately, after a thread executes its first floating point instruction, floating point context will have to be preserved from that point on, even if no further floating-point instructions are issued. The context must be restored upon switch-in because a floating point instruction might be executed. The context must be saved upon switch-out even if no floating point instructions had been executed since switch-in, because the 68882 cannot detect a lack of instruction execution. It can only tell us if its state is completely null. This is bad because sometimes a thread may use floating-point at first, for example, to initialize a table, and then not again. But with the 68882, we can only optimize the case when floating point is never used.

<p>The Quamachine has hardware to alleviate the problem. Its floating-point unit - also a 68882 - can be enabled and disabled by software command, allowing a lazy evaluation of floating-point context switches. Switching in a thread for execution loads its integer state and disables the floating-point unit. When a thread executes its first floating point instruction since the switch, it takes an illegal instruction trap. The kernel then loads the necessary state, first saving any prior state that may have been left there, re-enables the floating-point unit, and the thread resumes with the interrupted instruction. The trap is taken only on the first floating-point instruction following a switch, and adds only 3 µs to the overhead of restoring the state. This is more than compensated for by the other savings: integer context-switch becomes 1.5 µs faster because there is no need for an fsave instruction to test for possible floating-point use; and even floating-point threads benefit when they block without a floating point instruction being issued since they were switched in, saving the cost of restoring and then saving that context. Indeed, if only a single thread is using floating point, the floating point context is never switched, remaining in the coprocessor.

<h3>3.3.3 Interrupt Handling</h3>

<p>A special case of context switching occurs in interrupt handling. Many systems, such as <span class=smallcaps>Unix</span>, perform a full context switch on each interrupt. For example, an examination of the running Sony <span class=smallcaps>Unix</span> kernel reveals that not only are all integer registers saved on each interrupt, but the active portion of the floating-point context as well. This is one of the reasons that interrupt handling is expensive on a traditional system, and the reason why the designers of those systems try hard to avoid frequent interrupts. As shown earlier, preserving the floating-point state can be very expensive. Doing so is superfluous unless the interrupt handler uses floating point; most do not.

<p>Synthesis interrupt handling is faster because it saves and restores only the part of the context that will be used by the service routine, not all of it. Code synthesis allows partial context to be saved efficiently. Since different interrupt procedures use different amounts of context, we can not, in general, know how much context to preserve until the interrupt is linked to its service procedure. Furthermore, it may be desirable to change service procedures, for example, when changing or installing new I/O drivers in the running kernel. Without code synthesis, we would have to save the union of all contexts used by all procedures that could be called from the interrupt, slowing down all because of the needs of a few.

<p>Examples taken from the Synthesis Sound-IO device driver illustrate the ideas and provide performance numbers. The Sound-IO device is a general-purpose, high-quality audio input and output device with stereo, 16-bit analog-to-digital and digital-to-analog converters, and a direct-digital input channel from a CD player. This device interrupts the processor once for every sound sample - 44100 times per second - a very high rate by conventional measures. It is normally inconceivable to attach such high-rate interrupt sources to the main processor. Sony <span class=smallcaps>Unix</span>, for example, can service a maximum of 20,000 interrupts per second, and such a device could not be handled at all.<sup>2</sup> Efficient interrupt handling is mandatory, and the rest of this section shows how Synthesis can service high interrupt rates efficiently.

<div class=footnote><sup>2</sup> The Sony workstation has two processors, one of which is dedicated to I/O, including servicing I/O interrupts using a somewhat lighter-weight mechanism. They solve the overhead problem with specialized processors -- a trend that appears to be increasingly common. But this solution compounds latency, and does not negate my point, which is that existing operating systems have high overheads that discourage frequent invocations.</div>

<p>Several benefits of runtime code generation combine to improve the efficiency of interrupt handling in Synthesis: the use of the high-speed buffering code described in Section 3.3.1, the ability to create interrupt routines that save and restore only the part of the context being used, and the use of layer-collapsing to merge separate functions together.

<div class=code>
<pre>
intr:   move.l  a0,-(sp)           // Save register a0
        move.l  (P),a0             // Get buffer pointer into reg. a0
        move.l  (cd_port),(a0,D)   // Store CD data into address P+D
        addq.w  #4,(P+2)           // Increment low 16 bits of P
        beq     cd_done            // ... flush buffer if full
        move.l  (sp)+,a0           // Restore register a0
        rte                        // Return from interrupt
</pre>
<p class=caption>Figure 3.5: Synthesized Code for Sound Interrupt Processing - CD Active</p>
</div>

<p>Figure 3.5 shows the actual Synthesis code created to handle the Sound-IO interrupts when only the CD-player is active. It begins by saving a single register, a0, since that is the only one used. This is followed by the code for the specific sound I/O channels, in this case, the CD-player. The code is similar to the fast buffer described in Section 3.3.1, synthesized to move data from the CD port directly into the user's buffer. If the other input sources (such as the A-to-D input) also become active, the interrupt routine is re-written, placing their buffer code immediately following the CD-player's. The code ends by restoring the a0 register and returning from interrupt.

<div class=code>
<pre>
s.intr:
    move.l  a0,-(sp)           // Save register a0
    tst.b   (cd_active)        // Is the CD device active?
    beq     cd_no              // ... no, jump
    move.l  (cd_buf),a0        // Get CD buffer pointer into reg. a0
    move.l  (cd_port),(a0)+    // Store CD data; increment pointer
    move.l  a0,(cd_buf)        // Update CD buffer pointer
    subq.l  #1,(cd_cnt)        // Decrement buffer count
    beq     cd_flush           // ... jump if buffer full
cd_no:
    tst.b   (ad_active)        // Is the AD device active?
    beq     ad_no              // ... no, jump
    :
    : [handle AD device, similar to CD code]
    :
ad_no:
    tst.b   (da_active)        // Is the DA device active?
    beq     da_no              // ... no, jump
    :
    : [handle DA device, similar to CD code]
    :
da_no:
    move.l  (sp)+,a0           // Restore register a0
    rte                        // Return from interrupt
</pre>
<p class=caption>Figure 3.6: Sound Interrupt Processing, Hand-Assembler</p>
</div>

<p>Figure 3.6 shows the best I have been able to achieve using hand-written assembly language, without the use of code synthesis. Like the Synthesis version, this uses only a single register, so we save and restore only that one.<sup>3</sup> But without code synthesis, we must include code for all the Sound-IO sources -- CD, AD, and DA -- testing and branching around the parts for the currently inactive channels. In addition, we can no longer use the fast buffer implementation of Section 3.3.1 because that requires code synthesis.

<div class=footnote><sup>3</sup> Most existing systems neglect even this simple optimization. They save and restore all the registers, all the time.</div>

<p>Figure 3.7 shows another version, this one written in C, and invoked by a short assembly-language dispatch routine. It preserves only those registers clobbered by C procedure calls, and is representative of a carefully-written interrupt routine in C.

<div class=code>
<pre>
s_intr:
    movem.l &lt;d0-d2,a0-a2&gt;,-(sp)
    bsr     _sound_intr
    movem.l (sp)+,&lt;d0-d2,a0-a2&gt;
    rte

sound_intr()
{
    if(cd_active) {
        *cd_buf++ = *cd_port;
        if(--cd_cnt < 0)
            cd_flush();
    }
    if(ad_active) {
        ...
    }
    if(da_active) {
        ...
    }
}
</pre>
<p class=caption>Figure 3.7: Sound Interrupt Processing, C Code</p>
</div>

<p>The performance differences are summarized in Table 3.4. Measurements are divided into three groups. The first group, consisting of just a single row, shows the time taken by the hardware to process an interrupt and immediately return from it, without doing anything else. The second group shows the time taken by the various implementations of the interrupt handler when just the CD-player input channel is active. The third group is like the second, but with two active sources: the CD-player and AD channels.

<table class=table>
<caption>
Table 3.4: Processing Time for Sound-IO Interrupts<br>
<small>68030 CPU, 25MHz, 1-wait-state main memory, cold cache</small>
</caption>
<tr class=head><th><th>Time in µs<th>Speedup
<tr><th>Null Interrupt<td>2.0<td class=number>--
<tr><th>CD-in, code-synth<td>3.7<td class=number>--
<tr><th>CD-in, assembler<td>6.0<td class=number>2.4
<tr><th>CD-in, C<td>9.7<td class=number>4.5
<tr><th>CD-in, C &amp; <span class=smallcaps>Unix</span><td>17.1<td class=number>8.9
<tr><th>CD+DA, code-synth<td>5.1<td class=number>--
<tr><th>CD+DA, assembler<td>7.7<td class=number>1.8
<tr><th>CD+DA, C<td>11.3<td class=number>3.0
<tr><th>CD+DA, C &amp; <span class=smallcaps>Unix</span><td>18.9<td class=number>5.5
</table>

<p>Within each group of measurements, there are four rows. The first three rows show the time taken by the code synthesis, hand-assembler, and C implementations of the interrupt handler, in that order. The code fragments measured are those of Figures 3.5, 3.6, and 3.7; the C code was compiled on the Sony NEWS workstation using "cc -O". The last row shows the time taken by the C version of the handler, but dispatched the way that Sony <span class=smallcaps>Unix</span> does it, preserving all the machine's registers prior to the call. The left column gives the elapsed execution time, in microseconds. The right column gives the ratio of times between the code synthesis implementation and the others. The null-interrupt time is subtracted before computing the ratio to give a better picture of the implementation-specific performance increases.

<p>As can be seen from the table, the performance gains of using code synthesis are impressive. With only one channel active, the code-synthesis handler is more than twice as fast as hand-written assembly language, almost five times faster than well-written C, and very nearly an order of magnitude faster than traditional <span class=smallcaps>Unix</span> interrupt service. Furthermore, the non-code-synthesis versions of the driver support only the two-channel, 16-bit linear-encoding sound format. Extending support, as Synthesis does, to other sound formats, such as µ-Law, either requires more tests in the sound interrupt handler or an extra level of format conversion code between the application and the sound driver. Either option adds overhead that is not present in the code synthesis version, and would increase the times shown.

<p>With two channels active, the gain is still significant though somewhat less than that for one channel. The reason is that the overhead-reducing optimizations of code synthesis -- collapsing layers and preserving only context that is used -- become less important as the amount of work increases. But other optimizations of code synthesis, such as the fast buffer, continue to be effective and scale with the work load. In the limit, as the number of active channels becomes large, the C and assembly versions perform equally well, and the code synthesis version is about 40% faster.

<h3>3.3.4 System Calls</h3>

<p>Another use of code synthesis is to minimize the overhead of invoking system calls. In Synthesis the term "system call" is somewhat of a misnomer because the Synthesis system interface is based on procedure calls. A Synthesis system call is really a procedure call that happens to cross the protection boundary between user and kernel. This is important because, as we will see in Chapter 4, each Synthesis service has a set of procedures associated with it that delivers that service. Since the set of services provided is extensible, we need a more general way of invoking them. Combining procedure calls with runtime code generation lets us do this efficiently.

<div class=code>
<pre>
// --- User-level stub procedure ---
proc:
    moveq   #N,d2              // Load procedure index
    trap    #15                // Trap to kernel
    rts                        // Return

// --- Dispatch to kernel procedure ---
trap15:
    cmp.w   #MAX,d2            // Check that procedure index is in range
    bhs     bad_call           // ... jump if not
    move.l  (tab$,pc,d2*4),a2  // Get the procedure address
    jsr     (a2)               // Call it
    rte                        // Return to user-level

    .align  4                  // Table of kernel procedure addresses...
tab$:
    dc.l    fn0, fn1, fn2, fn3, ..., fnN
</pre>
<p class=caption>Figure 3.8: User-to-Kernel Procedure Call</p>
</div>

<p>Figure 3.8 shows how. The generated code consists of two parts: a user part, shown at the top of the figure, and a kernel part, shown at the bottom. The user part loads the procedure index number into the <em>d2</em> register and executes the trap instruction, switching the processor into kernel mode where it executes the kernel part of the code, beginning at label <em>trap15</em>. The kernel part begins with a limit check on the procedure index number, ensuring that the index is inside the table area and preventing cheating by buggy or malicious user code that may pass a bogus number. It then indexes the table and calls the kernel procedure. The kernel procedure typically performs its own checks, such as verifying that all pointers are valid, before proceeding with the work. It returns with the rte instruction, which takes the thread back into user mode, where it returns control to the caller. Since the user program can only specify an index into the procedure table, and not the procedure address itself, only the allowed procedures may be called, and only at the appropriate entry points. Even if the user part of the generated code is overwritten either accidentally or maliciously, it can never cause the kernel to do something that could not have been done through some other, untampered, sequence of calls.

<p>Runtime code generation gives the following advantages: each thread has its own table of vectors for exceptions and interrupts, including <em>trap 15</em>. This means that each thread's kernel calls vector directly to the correct dispatch procedure, saving a level of indirection that would otherwise have been required. This dispatch procedure, since it is thread-specific, can hard-wire certain constants, such as MAX and the base address of the kernel procedure table, saving the time of fetching them from a data structure.

<p>Furthermore, by thinking of kernel invocation not as a system call - which conjures up thoughts of heavyweight processing and large overheads - but as a procedure call, many other optimizations become easier to see. For example, ordinary procedures preserve only those registers which they use; kernel procedures can do likewise. Procedure calling conventions also do not require that all the registers be preserved across a call. Often, a number of registers are allowed to be "trashed" by the call, so that simple procedures can execute without preserving anything at all. Kernel procedures can follow this same convention. The fact that kernel procedures are called from user level does not make them special; one merely has to properly address the issues of protection, which are discussed further in Section 3.4.2.

<p>Besides dispatch, we also need to address the problem of how to move data between user space and kernel as efficiently as possible. There are two kinds of moves required: passing procedure arguments and return values, and passing large buffers of data. For passing arguments, the user-level stub procedures are generated to pass as many arguments as possible in the CPU registers, bypassing the expense of accessing the user stack from kernel mode. Return values are likewise passed in registers, and moved elsewhere as needed by the user-level stub procedure. This is similar in idea to using CPU registers for passing short messages in the V system [9].

<p>Passing large data buffers is made efficient using virtual memory tricks. The main idea is: when the kernel is invoked, it has the user address space mapped in. Synthesis reserves part of each address space for the kernel. This part is normally inaccessible from user programs. But when the processor executes the trap and switches into kernel mode, the kernel part of the address space becomes accessible in addition to the user part, and the kernel procedure can easily move data back and forth using the ordinary machine instructions. Prior to beginning such a move, the kernel procedure checks that no pointer refers to locations outside the user's address space - an easy check due to the way the virtual addresses are chosen: a single limit-comparison (two instructions) suffices.
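
<p>A C rendering of that check, under the assumption (hypothetical here) that user addresses form one contiguous range starting at zero and ending at USER_LIMIT:

<div class=code>
<pre>
#include &lt;stdint.h&gt;

#define USER_LIMIT 0x80000000UL    /* illustrative layout constant */

/* Nonzero if [p, p+len) lies entirely in user space.  With the user
   range starting at 0, a single unsigned comparison of the end
   address suffices - the "two instructions" mentioned above (the
   first test merely guards against address wraparound). */
static int user_range_ok(uintptr_t p, uintptr_t len)
{
    return p + len >= p && p + len <= USER_LIMIT;
}
</pre>
</div>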
|
||
|
|
||
|
<p>Further optimizations are also possible. Since the user-level stub is a real procedure, it can be in-line substituted into its caller. This can be done lazily -- the stub is written so that each time a call happens, it fetches the return address from the stack and modifies that point in the caller. Since the stubs are small, space expansion is minimal. Besides being effective, this mechanism requires minimal support from the language system to identify potential in-lineable procedure calls.
|
||
|
|
||
|
<p>Another optimization bypasses the kernel procedure dispatcher. There are 16 possible traps on the 68030. Three of these are already used, leaving 13 free for other purposes, such as to directly call heavily-used kernel procedures. If a particular kernel procedure is expected to be used often, an application can invoke the cache procedure call, and Synthesis will allocate an unused trap, set it to call the kernel procedure directly, and re-create the user-level stub to issue this trap. Since this trap directly calls the kernel procedure, there is no longer any need for a limit check or a dispatch table. Pre-assigned traps can also be used to import execution environments. Indeed, the Synthesis equivalent of the <span class=smallcaps>Unix</span> concept of "stdin" and "stdout" is implemented with cached kernel procedure calls. Specifically, <em>trap 1</em> writes to stdout, and trap 2 reads from stdin.
<p>Combining both optimizations results in a kernel procedure call that costs just a little more than a trap instruction. The various costs are summarized in Table 3.5. The middle block of measurements shows the cost of various Synthesis null kernel procedure calls: the general-dispatched, non-inlined case; the general-dispatched, with the user-level stub inlined into the application's code; cached-kernel-trap, non-inlined; and cached-kernel-trap, inlined. For comparison, the cost of a null trap and a null procedure call in the C language is shown on the top two lines, and the cost of the trivial getpid system call in <span class=smallcaps>Unix</span> and Mach is shown on the bottom two lines.
<h2>3.4 Other Issues</h2>

<h3>3.4.1 Kernel Size</h3>
<p>Kernel size inflation is an important concern in Synthesis due to the potential redundancy in the many F<sup>small</sup> and F<sup>flat</sup> programs generated by the same F<sup>create</sup>. This could be particularly bad if layer collapsing were used too enthusiastically. To limit memory use, F<sup>create</sup> can generate either in-line code or subroutine calls to shared code. The decision of when to expand in-line is made by the programmer writing F<sup>create</sup>. Full, memory-hungry in-line expansion is usually reserved for specific uses where its benefits are greatest: the performance-critical, frequently-executed paths of a function, where the performance gains justify increased memory use. Less frequently executed parts of a function are stored in a common area, shared by all instances through subroutine calls.
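<p>As an illustration of the choice, a hypothetical F<sup>create</sup> fragment might look like the sketch below; the emitter helpers and template names are invented.

<pre>
/* Assumed code-generation helpers: copy a code template into the
 * buffer in line, or emit a subroutine call to one shared copy. */
extern unsigned char *emit_template(unsigned char *buf,
                                    const void *tmpl, unsigned long len);
extern unsigned char *emit_jsr(unsigned char *buf, void (*target)(void));

extern const unsigned char read_fastpath_tmpl[];
extern const unsigned long read_fastpath_len;
extern void shared_read_slowpath(void);

unsigned char *gen_read(unsigned char *buf)
{
    /* Hot path: expanded in line, trading memory for speed. */
    buf = emit_template(buf, read_fastpath_tmpl, read_fastpath_len);
    /* Cold path: a call into code shared by every instance. */
    buf = emit_jsr(buf, shared_read_slowpath);
    return buf;
}
</pre>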
<table class=table>
<caption>
Table 3.5: Cost of Null System Call<br>
<small>68030 CPU, 25 MHz, 1-wait-state main memory</small>
</caption>
<tr class=head><th><th>µs, cold cache<th>µs, warm cache
<tr><th>C procedure call<td class=number>1.2<td class=number>1.0
<tr><th>Null trap<td class=number>1.9<td class=number>1.6
<tr><th>Kernel call, general dispatch<td class=number>4.2<td class=number>3.5
<tr><th>Kernel call, general, in-lined<td class=number>3.5<td class=number>2.9
<tr><th>Kernel call, cached-trap<td class=number>3.5<td class=number>2.7
<tr><th>Kernel call, cached and in-lined<td class=number>2.7<td class=number>2.1
<tr><th><span class=smallcaps>Unix</span>, getpid<td class=number>40<td class=number>--
<tr><th>Mach, getpid<td class=number>88<td class=number>--
</table>
<p>In-line expansion does not always cost memory. If a function is small enough, expanding it in-line can take the same amount of space as calling it, or less. Examples of functions that are small enough include character-string comparisons and buffer-copy. For functions with many runtime-invariant parameters, the size expansion of inlining is offset by a size decrease that comes from not having to pass as many parameters.
<p>In practice, the actual memory needs are modest. Table 3.6 shows the total memory used by a full kernel -- including I/O buffers, virtual memory, network support, and a window system with two memory-resident fonts.
<table class=table>
<caption>
Table 3.6: Kernel Memory Requirements
</caption>
<tr class=head><th>System Activity<th>Memory Use, as code + data (Kbytes)
<tr><th>Boot image for full kernel<td class=number>140
<tr><th>One thread running<td class=number>Boot + 0.4 + 8
<tr><th>File system and disk buffers<td class=number>Boot + 6 + 400
<tr><th>100 threads, 300 open files<td class=number>Boot + 80 + 1400
</table>
<h3>3.4.2 Protecting Synthesized Code</h3>
<p>The classic solutions used by other systems to protect their kernels from unauthorized tampering by user-level applications also work in the presence of synthesized code. Like many other systems, Synthesis needs at least two hardware-supported protection domains: a privileged mode that allows access to all the machine's resources, and a restricted mode that lets ordinary calculations happen but restricts access to resources. The privileged mode is called supervisor mode, and the restricted mode, user mode.
<p>Kernel data and code - both synthesized and not - are protected using memory management to make the kernel part of each address space inaccessible to user-level programs. Synthesized routines run in supervisor mode, so they can perform privileged operations such as accessing protected buffer pages.
<p>User-level programs enter supervisor mode using the trap instruction. This instruction provides a controlled - and the only - way for user-level programs to enter supervisor mode. The synthesized routine implementing the desired system service is accessed through a jump table in the protected area of the address space. The user program specifies an index into this table, ensuring the synthesized routines are always entered at the proper entry points. This protection mechanism is similar to Hydra's use of C-lists to prevent the forgery of capabilities [34].
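<p>A minimal sketch of the dispatch step, with assumed names, table size, and error convention, reads:

<pre>
/* The kernel procedure table (KPT) lives in the protected part of the
 * address space; its size and the error value are assumptions. */
#define KPT_SIZE 256

typedef long (*kproc_t)(long, long, long);
extern kproc_t kpt[KPT_SIZE];

long dispatch(unsigned long index, long a1, long a2, long a3)
{
    if (index >= KPT_SIZE)   /* the user supplies an index, never a      */
        return -1;           /* pointer, so entry points cannot be forged */
    return kpt[index](a1, a2, a3);
}
</pre>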
<p>Once in kernel mode, the synthesized code handling the requested service can begin to do its job. Further protection is unnecessary because, by design, the kernel code generator only creates code that touches data the application is allowed to touch. For example, were a file inaccessible, its read procedure would never have been generated. Just before returning control to the caller, the synthesized code reverts to the previous (user-level) mode.
<p>There is still the question of invalidating the code when the operation it performs is no longer valid -- for example, invalidating the read procedure after a file has been closed. Currently, this is done by setting the corresponding function pointers in the KPT to an invalid address, preventing further calls to the function. The function's reference counter is then decremented, and its memory freed when the count reaches zero.
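<p>Continuing the dispatcher sketch above with the same assumed names, invalidation might read as follows; the always-failing routine stands in for the invalid address mentioned in the text, and the reference-count helpers are invented.

<pre>
/* Assumed helpers: a per-routine reference count and an allocator for
 * synthesized code.  invalid_kproc simply returns an error. */
extern long invalid_kproc(long, long, long);
extern unsigned long *refcount_of(kproc_t f);
extern void code_free(kproc_t f);

void invalidate_kproc(unsigned long index)
{
    kproc_t f = kpt[index];
    kpt[index] = invalid_kproc;    /* stale user stubs now fail cleanly */
    if (--*refcount_of(f) == 0)
        code_free(f);              /* release the synthesized code */
}
</pre>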
<h3>3.4.3 Non-coherent Instruction Cache</h3>
<p>A common assumption in the design of processors is that a program's instructions will not change as the program runs. For that reason, most processors' instruction caches are not coherent - writes to memory are not reflected in the cache. Runtime code generation violates this assumption, requiring that the instruction cache be flushed whenever code changes happen. Too much cache flushing reduces performance, both because programs execute slower when the needed instructions are not in cache and because flushing itself may be an expensive operation.
<p>The performance of self-modifying code, like that found in executable data structures, suffers the most from an incoherent instruction cache. This is because the ratio of code modification to use tends to be high. Ideally, we would like to flush with cache-line granularity to avoid losing good entries, but some caches provide only an all-or-nothing flush. Even line-at-a-time granularity has its disadvantages: it needs machine registers to hold the parameters - registers that may not be available during interrupt service without incurring the cost of saving and restoring them. Unfortunately for Synthesis, most cases of self-modifying code occur inside interrupt service routines, where small amounts of data (e.g., one character for a TTY line) must be processed with minimal overhead. Fortunately, in all the important cases the cost has been reduced to zero through careful layout of the code in memory, using knowledge of the 68030 cache architecture so that a subsequent instruction fetch replaces the very cache line that needs flushing. But that trick is neither general nor portable.
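<p>For reference, a line-granularity flush on the 68030 can be written with the CACR and CAAR control registers, as in the sketch below. It must run in supervisor mode, the bit position follows the 68030's documented CACR layout, and the wrapper name is invented.

<pre>
/* Clear the single instruction-cache entry that maps addr: CEI (bit 2
 * of CACR) clears the entry addressed by CAAR.  The movec instruction
 * is privileged, so this only works in supervisor mode. */
static void flush_icache_line(void *addr)
{
    unsigned long cacr;
    asm volatile("movec %%cacr,%0" : "=r"(cacr));  /* keep enable bits */
    asm volatile("movec %0,%%caar\n\t"
                 "movec %1,%%cacr"
                 : /* no outputs */
                 : "r"(addr), "r"(cacr | 0x4));    /* set CEI */
}
</pre>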
<p>For the vast majority of code synthesis applications, an incoherent cache is not a big problem. The cost of flushing even a large cache contributes relatively little compared to the cost of allocating memory and creating the code. If code generation happens infrequently relative to the code's use, as is usually the case, the performance hit is small.
<p>Besides the performance hit from a cold cache, cache flushing itself may be slow. On the 68030 processor, for example, the instruction to flush the cache is privileged. Although this causes no special problems for the Synthesis kernel, it does force user-level programs that modify code to make a system call to flush the cache. I do not see any protection-related reason why that instruction must be privileged; perhaps making it so simplified processor design.
<h2>3.5 Summary</h2>
<p>This chapter showed:
<ol>
<li>that code synthesis allows important operating system functions such as buffering, context switching, interrupt handling, and system call dispatch to be implemented 1.4 to 2.4 times more efficiently than is possible using the best assembly-language implementation without code synthesis, and 1.5 to 5 times better than well-written C code;
<li>that code synthesis is also effective at the user-level, achieving an 80% improvement for basic operations such as putchar; and
<li>that the anticipated size penalty does not, in fact, happen.
</ol>
<p>Before leaving this section, I want to call a moment's more attention to the interrupt handlers of Section 3.3.3. At first glance - and even on the second and third - the C-language code looks as minimal as it can get. There does not seem to be any fat to cut. Table 3.4 has shown otherwise. The point is that sometimes, sources of overhead are hidden, not so easy to spot and optimize. They are a result of the assumptions made and the programming language used, whether in the form of a common calling convention for procedures, or in conventions followed to simplify linking routines to interrupts. This section has shown that code synthesis is an important technique that enables general procedure interfacing while preserving -- and often bettering -- the efficiency of custom-crafted code.
<p>The next chapter shows how Synthesis is structured, and how the synergy between kernel code synthesis and good software engineering leads to a system that is general and easily expandable, yet at the same time efficient.
</div>
</body>
</html>