-Unlike its distinguished ancestor CMU CL, SBCL currently runs only on X86
-(Linux, FreeBSD, and OpenBSD) and Alpha (Linux). For information on
-other ongoing ports, see the sbcl-devel mailing list, and/or the
-web site.
+Unlike its distinguished ancestor CMU CL, SBCL currently runs only on
+X86 (Linux, FreeBSD, and OpenBSD), Alpha (Linux), and SPARC (Linux).
+For information on other ongoing and possible ports, see the
+sbcl-devel mailing list, and/or the web site.
memory. To plagiarize a sadly apt description of a language not
renowned for the production of bulletproof software, "[The current
SBCL implementation of] Common Lisp makes it harder for you to shoot
yourself in the foot, but when you do, the entire universe explodes."
.TP 3
\--
Like CMU CL, the SBCL system overcommits memory at startup. On typical
Unix-alikes like Linux and FreeBSD, this means that if the SBCL system
turns out to use more virtual memory than the system has available for
-SBCL, like most implementations of Common Lisp, has trouble
-passing floating point numbers around efficiently, because
-they're larger than a machine word. (Thus, they get "boxed" in
+SBCL, like most (maybe all?) implementations of Common Lisp on
+stock hardware, has trouble
+passing floating point numbers around efficiently, because a floating
+point number, plus a few extra bits to identify its type,
+is larger than a machine word. (Thus, such numbers get "boxed" in
heap-allocated storage, causing GC overhead.) Within
a single compilation unit,
or when doing built-in operations like SQRT and AREF,
or some special operations like structure slot accesses,
this is avoidable: see the user manual for some
efficiency hints. But for general function calls across
-the boundaries of compilation units, passing a floating point
-number as a function argument (or returning a floating point
-number as a function value) is a fundamentally slow operation.
+the boundaries of compilation units, passing the result of
+a floating point calculation
+as a function argument (or returning a floating point
+result as a function value) is a fundamentally slow operation.