Gauche Devlog

2021/08/26

Negative zero

Gauche supports IEEE754 negative zero -0.0. A flonum simply wraps an IEEE754 double as a Scheme object, so mostly it just works as specified in IEEE754 (and as supported by the underlying math library). Or so we thought.

Let's recap the behavior of -0.0. It's numerically indistinguishable from 0.0 (so, it is not "an infinitely small value less than zero"):

(= -0.0 0.0) ⇒ #t

(< -0.0 0.0) ⇒ #f

(zero? -0.0) ⇒ #t

But it can make a difference when there's a function f(x) that is discontinuous at x = 0, where f(x) approaches different values depending on whether x approaches zero from the positive side or the negative side.

(/ 0.0)  ⇒ +inf.0
(/ -0.0) ⇒ -inf.0

For arithmetic primitive procedures, we simply pass the unboxed double to the underlying math functions, so we didn't think we needed to handle -0.0 specially.


The first wakeup call was this article via HackerNews:

One does not simply calculate the absolute value

It talks about writing abs in Java, but whenever I see an article like this I just try it out on Gauche, and alas!

;; Gauche 0.9.10
(abs -0.0) ⇒ -0.0    ; Ouch!

Yeah, the culprit was the C implementation of abs, the gist of which was:

   if (x < 0.0) return -x;
   else return x;

-0.0 doesn't satisfy x < 0.0, so it was returned without negation.

The easy fix is to use signbit.

   if (signbit(x)) return -x;
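
With this fix, the negative zero is negated as expected:

;; after the fix
(abs -0.0) ⇒ 0.0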

I reported the fix on Twitter, then somebody raised an issue: What about (eqv? -0.0 0.0)?

My initial reaction was that it should be #t, since (= -0.0 0.0) is #t. In fact, R5RS states this:

The eqv? procedure returns #t if: ... obj1 and obj2 are both numbers, are numerically equal (see = ...), and are either both exact or both inexact.

However, I realized that R7RS has a more subtle definition.

The eqv? procedure returns #f if: ... obj1 and obj2 are both inexact numbers such that either they are numerically unequal (in the sense of =), or they do not yield the same results (...) when passed as arguments to any other procedure that can be defined as a finite composition of Scheme's standard arithmetic procedures, ...

Clearly, -0.0 and 0.0 don't yield the same results when passed to /, so it should return #f. (It is also mentioned in 6.2.4 that -0.0 is distinct from 0.0 in the sense of eqv?.)

The fix for this is a bit involved. When I fixed eqv?, a bunch of tests started failing. It looks like some inexact integer division routines in the tests yield -0.0, which is then compared to 0.0 with equal?; equal? must follow eqv? when the arguments are numbers.

It turned out that the root cause was rounding primitives returning -0.0:

;; Gauche 0.9.10
(ceiling -0.5) ⇒ -0.0

Although this itself is plausible, in most cases when you're thinking of integers (exact or inexact) you want to treat zero as zero. Certainly you don't want to deal with two distinct zeros in quotients or remainders. The choices were either to leave the rounding primitives as they are and fix the integer divisions, or to change the rounding primitives altogether. I chose the latter.
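
Conceptually, the latter choice just normalizes a -0.0 result of the rounding primitives to 0.0. Here is a user-level sketch of that idea (the actual change lives in the C runtime; normalize-zero is a hypothetical helper, not a Gauche API):

(define (normalize-zero x)
  (if (eqv? x -0.0) 0.0 x))   ; map -0.0 to 0.0, leave everything else intact

(normalize-zero -0.0) ⇒ 0.0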


The fixes are in the HEAD now.

;; Gauche 0.9.11
(eqv? -0.0 0.0) ⇒ #f
(ceiling -0.5) ⇒ 0.0

Tags: 0.9.11, Flonums

2021/06/09

Cookbook: Running commands remotely

(I frequently write throw-away scripts in Gauche, and it occurred to me that they can be a nice source of cookbook recipe. I'll write them up as I come to useful snippets.)

With run-process or do-process, you can invoke external commands (ref:gauche.process). One of their interesting features is that you can run the commands on a remote host, if you have ssh access to it with public-key authentication.

Just add :host keyword argument.

(do-process '(ls) :host "remote.example.com")
 ;=> you see a listing of your home directory at remote.example.com

Stdio is forwarded to the local process, as is the process exit status. The :directory keyword argument also works, though it is interpreted relative to your home directory on the remote machine. So you can mix local execution and remote execution pretty much seamlessly.
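
For instance, to run a command in a subdirectory of your remote home directory (the host and directory names here are just placeholders):

(do-process '(ls -l) :host "remote.example.com" :directory "work/gauche")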

The following snippet pushes local commits to the repo, pulls them on the remote machine, then rebuilds and restarts the service. The return value of do-process is a boolean indicating the command's success or failure, so combining the calls with and works like && in shell scripts.

(and (do-process '(git push) :directory *local-dir*)
     (do-process '(git pull) :host *host* :directory *remote-dir*)
     (do-process '(make)     :host *host* :directory *remote-dir*)
     (do-process '(make restart) :host *host* :directory *remote-dir*))

Tags: Cookbook, gauche.process

2021/05/31

Functional and linear-updating interfaces

A recent trend in data structure SRFIs is to provide two flavors of updating procedures:

  • Functional updaters never mutate the input structure, and always return a newly allocated structure.
  • Linear updaters are allowed to reuse the storage of the input structure to produce the output, given that the caller guarantees the input structure will never be used again.

The functional interface has a nice property--it doesn't create hidden dependencies, so the code is easy to reason about. It also plays nicely with concurrent execution, for you don't need to worry about your operations stepping on other threads' toes.

The linear-updating interface lets the user express opportunities for optimization. The implementation can take advantage of it to reduce allocations.

So it appears to be a nice combination---except that, I think, the way they are currently specified actually works against both and reduces their merits.


Performance-sensitive users often frown on functional data structures, for they seem to copy everything every time. "It won't be that bad," functionally-minded users reply, "for it is often the case that the input structure and the updated structure can share most of their internals; the updated structure just allocates enough to store the updated parts. In the extreme case, the updater can just return the input as is, when it finds the structure isn't altered at all (e.g. adjoining an item to a set that already has the item). The beauty of functional programming is that nobody cares whether it is shared or not---only the content matters."

That is true if everything is written functionally. However, to use the linear-updating interface, the caller needs to know that the structure it passes in isn't shared. If the functional interface may return a (partially) shared structure, it's hard to guarantee that "no-share" condition. Thus the SRFIs state that the functional interface always copies the input, even if there's no change at all. The functional interface can't take advantage of partial sharing either, if the linear-updating version mutates internal structure.

This takes away the opportunity for optimization in the functional interface. The implementation has to choose either (1) make the functional version slow, in order to provide an efficient linear-updating version, or (2) make the linear-updating version never mutate the input, and apply the functional optimizations.


I think we can do better. One idea is this (a sketch follows the list):

  • The data structure has a mutability flag internally.
  • The functional interface always returns immutable data. It may return the input as is, or return a structure that partially shares the input, if the input structure is flagged immutable. If the input is flagged mutable, it always returns a freshly copied immutable structure.
  • The linear-updating interface may mutate the input structure if it is flagged mutable, and copies if the input structure is flagged immutable.
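
Here's a minimal sketch of the idea using a single-slot record; the record type and procedures are hypothetical (not part of any SRFI or of Gauche), and a real implementation would also share substructure where it can:

(define-record-type <box>
  (%make-box value mutable?) box?
  (value box-value %box-set-value!)
  (mutable? box-mutable?))

;; Functional update: always returns an immutable box.  It may return the
;; input as is when the input is immutable and nothing changes; if the
;; input is mutable, it returns a freshly copied immutable box.
(define (box-set x v)
  (if (and (not (box-mutable? x)) (eqv? v (box-value x)))
      x
      (%make-box v #f)))

;; Linear update: may mutate the input only when it is flagged mutable;
;; otherwise it returns a copy (here flagged mutable, an arbitrary choice).
(define (box-set! x v)
  (if (box-mutable? x)
      (begin (%box-set-value! x v) x)
      (%make-box v #t)))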

If the SRFI does not provide an explicitly-mutating interface, it is actually almost indistinguishable from the existing SRFI spec, except when you compare input and output structures with eq?.

Given that explicitly-mutating interfaces (such as vector-set!) aren't popular in the SRFIs, I think it's good to allow the implementation to take the latter choice.


Discussion on srfi-discuss

Tags: srfi, Immutability, DataStructures

2021/04/18

Two concurrency utilities

Since I haven't written this blog for a while, it's a good time to catch up recent changes in Gauche HEAD.

Lately, we added a couple of utility modules that help to write concurrent programs.


control.future (draft:control.future) - A typical future: it evaluates the given expression concurrently in a separate thread. The result can be retrieved with future-get.

(use control.future)
(use rfc.http)

(let1 f (future (http-get "example.com" "/"))
  ... some computation ...
  (receive (code headers body) (future-get f)
    ...))

Like most other synchronization operations in Gauche, future-get can take a timeout parameter.
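
For example, assuming the usual Gauche convention of an optional timeout followed by a value to return on timeout (check the draft document for the exact signature):

(future-get f 1.0 'timed-out)  ; ⇒ 'timed-out if the result isn't ready within 1s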


control.cseq (draft:control.cseq) - It's a lazy sequence but the data generator runs in a separate thread. It can abstract producer-consumer type concurrency.

The same concurrency can be achieved with <mtqueue> (ref:data.queue), and in fact control.cseq uses mtqueue internally, but what characterizes cseq is that you can very easily turn lazy-sequence code into concurrent code.

(generator->lseq gen) returns an lseq, which looks like an ordinary list, but when you walk down its cdr, the generator gen is called to generate more items.

(generator->cseq gen) also returns an lseq, but in this case gen runs concurrently in a separate thread. In many cases you can simply replace lseq with cseq to get the benefit of concurrency.
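
As a toy example, here's a deliberately slow generator consumed both ways; slow-numbers is just for illustration (only control.cseq is from the module described here):

(use control.cseq)

(define (slow-numbers)            ; yields 0,1,...,4, pausing 0.2s before each
  (let ((n 0))
    (lambda ()
      (if (< n 5)
          (begin (sys-nanosleep 2e8)
                 (let ((v n)) (set! n (+ n 1)) v))
          (eof-object)))))

(map (^x (* x x)) (generator->lseq (slow-numbers)))  ; lazy; items produced on demand
(map (^x (* x x)) (generator->cseq (slow-numbers)))  ; items produced concurrently in another thread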

The module also provides coroutine->cseq, which uses a coroutine, run in a separate thread, to generate the items.

Tags: 0.9.11, Concurrency

2021/04/18

mtqueue and channel

Thread-safe queues, <mtqueue> (ref:data.queue), can naturally be used as a ``channel'', a communication primitive between producer thread(s) and consumer thread(s). In fact, I totally assumed the two were equivalent, and didn't bother creating a ``channel'' datatype specifically.

But I realized there was one difference--a channel can be closed.

Suppose I have an in-process ``server''---a thread looping over requests sent via an mtqueue. Other parts of the program insert requests with enqueue/wait!, and the server thread reads and processes them. All the synchronization is done in the queue, and that's the beauty of it.
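
A minimal version of such a server might look like this (handle-request is a placeholder for the application logic):

(use data.queue)
(use gauche.threads)

(define *requests* (make-mtqueue :max-length 64))

(define *server*
  (thread-start!
   (make-thread
    (lambda ()
      (let loop ()
        (handle-request (dequeue/wait! *requests*))
        (loop))))))

;; elsewhere in the program:
(enqueue/wait! *requests* '(some request))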

Now, sometimes you may want to shut down such a server. Once it is shut down, we don't want to allow callers to put new requests into the queue, for they would sit in the queue unprocessed forever. How do we implement that?

We could keep a separate flag in our <server> object and ask callers to check it before queuing a request. But such a check must be done atomically with the queue insertion, for another thread may set the flag after you check it but before you call enqueue. Hence you need a separate lock covering both the flag and the queue, even though the queue itself is already thread-safe. It's kind of a waste.

With a channel, an attempt to put a request on a closed channel is rejected, and that check is done atomically inside the channel.


For Gauche, I decided to enhance <mtqueue> to have a ``closed'' state. It's a bit of a divergence from a queue in the original sense, but it's simpler than building a separate channel class on top of <mtqueue>; the locking and the flag check are intertwined so deeply that it's difficult to separate them cleanly.

An mtqueue can only be closed via the synchronized queue operations such as enqueue/wait!. Usually, if you want to shut down the service, you need a special message anyway, so it's reasonable to close the queue at the same time you send such a termination message.

(If you think of a channel as a pipe, then such a termination message isn't necessary; the input end can simply be closed, and the output end reads something like #<eof>. A queue, on the other hand, is expected to yield only what was explicitly put into it.)

The feature is available in the next release, 0.9.11.

Tags: data.queue, mtqueue, Concurrency

More entries ...