2021/05/31
Functional and linear-updating interfaces
A recent trend in data structure SRFIs is to provide two flavors of updating procedures:
- Functional updaters never mutate the input structure, and always return a newly allocated structure.
- Linear updaters are allowed to reuse the storage of the input structure to produce the output, provided that the caller guarantees the input structure will never be used afterwards.
The functional interface has a nice property--it creates no hidden dependencies, so the code is easy to reason about. It also plays nicely with concurrent execution, for you don't need to worry about your operations stepping on other threads' toes.
The linear-updating interface lets the user express opportunities for optimization. The implementation can take advantage of it to reduce allocations.
So, it appears to be a nice combination---except that, I think, the way they are currently specified actually undermines each interface and reduces their merits.
Performance-sensitive users often frown on functional data structures, for they seem to copy everything every time. "It won't be that bad," functionally-minded users reply, "for it is often the case that the input structure and the updated structure can share most of their internals; the updated structure only allocates enough to store the updated parts. In the extreme case, the updater can just return the input as is, when it finds the structure isn't altered at all (e.g. adjoining an item to a set that already contains the item). The beauty of functional programming is that nobody cares whether storage is shared or not---only the content matters."
That is true if everything is written functionally. However, to use the linear-updating interface, the caller needs to know that the structure it passes in isn't shared. If the functional interface may return a (partially) shared structure, it's hard to guarantee that "no-share" condition. Thus, the SRFIs state that the functional interface always copies the input, even if nothing is changed at all. Nor can it take advantage of partial sharing, since the linear-updating version may mutate the internal structure.
This removes the opportunity for optimization from the functional interface. The implementation has to choose either to (1) make a slow functional version in order to provide an efficient linear-updating version, or (2) make the linear-updating version never mutate the input at all, and apply the functional optimizations.
I think we can do better. One idea is this:
- The data structure has a mutability flag internally.
- The functional interface always returns immutable data. It may return the input as is, or return a structure that partially shares the input, if the input structure is flagged immutable. If the input is flagged mutable, it always returns a freshly copied immutable structure.
- The linear-updating interface may mutate the input structure if it is flagged mutable, and copies if the input structure is flagged immutable.
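Here's a minimal runnable sketch of this scheme, using a toy integer-set type backed by a list. All the names are made up for illustration, and a real implementation would of course use a more suitable backing structure:

(use gauche.record)

(define-record-type <iset>
  (%make-iset elts mutable?) iset?
  (elts iset-elts set-iset-elts!)
  (mutable? iset-mutable?))

(define (make-iset . elts) (%make-iset elts #t)) ; fresh sets allow linear update

;; Functional adjoin: always returns an immutable set.
(define (iset-adjoin s x)
  (cond ((iset-mutable? s)
         ;; The caller may destructively update S later, so take a fresh copy.
         ;; (In this toy list representation sharing would happen to be safe,
         ;; but a structure mutated in place by linear updaters would not be.)
         (%make-iset (if (memv x (iset-elts s))
                       (list-copy (iset-elts s))
                       (cons x (list-copy (iset-elts s))))
                     #f))
        ((memv x (iset-elts s)) s)  ; immutable and unchanged: return as is
        (else (%make-iset (cons x (iset-elts s)) #f)))) ; immutable: share tail

;; Linear adjoin: may reuse the input's storage only when it's flagged mutable.
(define (iset-adjoin! s x)
  (cond ((not (iset-mutable? s)) (iset-adjoin s x))  ; fall back to copying
        ((memv x (iset-elts s)) s)
        (else (set-iset-elts! s (cons x (iset-elts s))) s)))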
If the SRFI does not provide an explicitly-mutating interface, this is actually almost indistinguishable from the existing SRFI spec, except when you compare input and output structures with eq?.
Given that explicitly-mutating interfaces (such as vector-set!) aren't popular in the SRFIs, I think it's good to allow the implementation to take the latter choice.
Tags: srfi, Immutability, DataStructures
2021/04/18
Two concurrency utilities
Since I haven't written this blog for a while, it's a good time to catch up on recent changes in Gauche HEAD.
Lately, we added a couple of utility modules that help with writing concurrent programs.
control.future (draft:control.future) - A typical future; that is, it evaluates the given expression concurrently in a separate thread. The result can be retrieved with future-get.
(use control.future)
(use rfc.http)

(let1 f (future (http-get "example.com" "/"))
  ... some computation ...
  (receive (code headers body) (future-get f)
    ...))
Like most other synchronization operations in Gauche, future-get can take a timeout parameter.
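Presumably it follows the convention of Gauche's other synchronization operations--an optional timeout and a value to return on timeout--so the usage would be along these lines (treat the exact signature as an assumption):

;; Wait at most 5 seconds for the result; if it isn't ready by then,
;; return 'timed-out instead (optional-argument convention assumed).
(future-get f 5.0 'timed-out)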
control.cseq (draft:control.cseq) - A lazy sequence whose data generator runs in a separate thread. It abstracts producer-consumer style concurrency.
The same concurrency can be achieved with <mtqueue> (ref:data.queue), and in fact control.cseq uses mtqueue internally, but what characterizes cseq is that you can very easily turn lazy-sequence code into concurrent code.
(generator->lseq gen) returns an lseq, which looks like an ordinary list, but when you walk down its cdrs, the generator gen is called to generate more items.
(generator->cseq gen) also returns an lseq, but in this case, gen runs concurrently in a separate thread. In many cases, you can simply replace lseq with cseq to get the benefit of concurrency.
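For example, something like this should work (a sketch; slow-numbers is a made-up generator standing in for real work):

(use gauche.generator)
(use control.cseq)   ; draft module

;; A generator that produces numbers slowly, simulating expensive work.
(define (slow-numbers)
  (gmap (lambda (n) (sys-nanosleep #e2e8) n)  ; ~0.2s per item
        (giota 10)))

;; Lazy and single-threaded: each generator step runs on demand,
;; when the consumer walks down the sequence.
(define xs (generator->lseq (slow-numbers)))

;; Concurrent: the generator runs ahead in a separate thread;
;; the consumer-side code stays exactly the same.
(define ys (generator->cseq (slow-numbers)))

(fold + 0 ys)  ; consume it like an ordinary list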
The module also provides coroutine->cseq, which uses a coroutine to generate the items, run in a separate thread.
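If I understand the interface correctly, the coroutine version receives a yield procedure (the exact signature is my assumption):

;; The body runs in a separate thread; each (yield v) delivers one item
;; to the consumer of the returned lseq.
(coroutine->cseq
 (lambda (yield)
   (dotimes (i 5)
     (yield (* i i)))))
;; => (0 1 4 9 16), produced concurrently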
Tags: 0.9.11, Concurrency
2021/04/18
mtqueue and channel
Thread-safe queues, <mtqueue> (ref:data.queue), can naturally be used as a ``channel'', a communication primitive between producer thread(s) and consumer thread(s). In fact, I had totally assumed the two were equivalent, and didn't bother creating a ``channel'' datatype specifically.
But I realized there was one difference--a channel can be closed.
Suppose I have an in-process ``server''---a thread looping over requests sent via an mtqueue. Other parts of the program insert requests with enqueue/wait!. The server thread reads and processes them. All the synchronization is done in the queue, and that's the beauty of it.
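A minimal sketch of such a server (process-request is a hypothetical handler):

(use data.queue)
(use gauche.threads)

(define request-queue (make-mtqueue :max-length 64))

;; The server thread: block until a request arrives, then process it.
(define server
  (thread-start!
   (make-thread
    (lambda ()
      (let loop ()
        (let1 req (dequeue/wait! request-queue)
          (unless (eq? req 'shutdown)
            (process-request req)   ; hypothetical handler
            (loop))))))))

;; Clients just enqueue; the queue does all the synchronization.
(enqueue/wait! request-queue '(do-something 42))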
Now, sometimes you may want to shut down such a server. Once it is shut down, we don't want to allow callers to put new requests into the queue, for they would sit there unprocessed forever. How do we implement that?
We could keep a separate flag in our <server> object and ask the caller to check it before queuing a request. But such a check must be done atomically with the queue insertion, for another thread may set the flag after you checked it but before you call enqueue. Hence you need a separate lock covering both the flag and the queue, even though the queue itself is thread-safe. It's kind of a waste.
With a channel, an attempt to put a request on a closed channel is rejected, and that check is done atomically inside the channel.
For Gauche, I decided to enhance <mtqueue> to have a ``closed'' state. It's a bit of a divergence from a queue in the original sense, but it's simpler than building a separate channel class on top of <mtqueue>. The lock operation and the flag check are intertwined so deeply that it's difficult to separate them cleanly.
An mtqueue can only be closed via the synchronized queue operations such as enqueue/wait!. Usually, if you want to shut down the service, you need a special message anyway, so it's reasonable to close the queue at the same time you send such a termination message.
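I won't vouch for the exact API here, but the usage would be along these lines (the trailing argument requesting the close is my assumption):

;; Send the termination message and close the queue in one atomic step.
;; (Assumed signature: optional timeout, timeout-val, then a close flag.)
(enqueue/wait! request-queue 'shutdown #f #f #t)

;; Any further attempt to enqueue is rejected, and the check happens
;; atomically inside the queue's own lock.
(enqueue/wait! request-queue '(one-more-request)) ; => error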
(If you think of a channel as a pipe, then such a termination message isn't necessary; the input end can simply be closed, and the output end reads something like #<eof>. A queue, on the other hand, is expected to yield only what was explicitly put into it.)
The feature is available in the next release, 0.9.11.
Tags: data.queue, mtqueue, Concurrency
2020/07/22
A curious case of rational to flonum conversion
Tanaka Akira filed a curious bug report in Ruby: https://bugs.ruby-lang.org/issues/17037
He computed 1 + (k/100)ε in rational arithmetic, then converted it to a floating-point number, for k=0 to 100, where ε is DBL_EPSILON (2^-52).
If k=0 it's 1, and if k=100 it's nextfloat(1), so somewhere in between the value switches from 1 to nextfloat(1). However, what he saw was that the value flips more than once:
...
[31, 1.0, (450359962737049631/450359962737049600)]
[32, 1.0, (14073748835532801/14073748835532800)]
[33, 1.0000000000000002, (450359962737049633/450359962737049600)]
[34, 1.0000000000000002, (225179981368524817/225179981368524800)]
[35, 1.0, (90071992547409927/90071992547409920)]
[36, 1.0000000000000002, (112589990684262409/112589990684262400)]
[37, 1.0000000000000002, (450359962737049637/450359962737049600)]
[38, 1.0000000000000002, (225179981368524819/225179981368524800)]
[39, 1.0000000000000002, (450359962737049639/450359962737049600)]
[40, 1.0, (11258999068426241/11258999068426240)]
[41, 1.0000000000000002, (450359962737049641/450359962737049600)]
[42, 1.0000000000000002, (225179981368524821/225179981368524800)]
[43, 1.0000000000000002, (450359962737049643/450359962737049600)]
...
Intrigued, I ran the same computation in Gauche. To my surprise, I got exactly the same result (0.9.9):
gosh> (dotimes (k 100)
        (let1 d (+ 1 (* k 1/100 (exact (flonum-epsilon))))
          (print `(,k ,(inexact d) ,d))))
(0 1.0 1)
(1 1.0 450359962737049601/450359962737049600)
(2 1.0 225179981368524801/225179981368524800)
...
(31 1.0 450359962737049631/450359962737049600)
(32 1.0 14073748835532801/14073748835532800)
(33 1.0000000000000002 450359962737049633/450359962737049600)
(34 1.0000000000000002 225179981368524817/225179981368524800)
(35 1.0 90071992547409927/90071992547409920)
(36 1.0000000000000002 112589990684262409/112589990684262400)
(37 1.0000000000000002 450359962737049637/450359962737049600)
(38 1.0000000000000002 225179981368524819/225179981368524800)
(39 1.0000000000000002 450359962737049639/450359962737049600)
(40 1.0 11258999068426241/11258999068426240)
(41 1.0000000000000002 450359962737049641/450359962737049600)
(42 1.0000000000000002 225179981368524821/225179981368524800)
(43 1.0000000000000002 450359962737049643/450359962737049600)
(44 1.0000000000000002 112589990684262411/112589990684262400)
(45 1.0000000000000002 90071992547409929/90071992547409920)
(46 1.0000000000000002 225179981368524823/225179981368524800)
(47 1.0000000000000002 450359962737049647/450359962737049600)
(48 1.0000000000000002 28147497671065603/28147497671065600)
(49 1.0000000000000002 450359962737049649/450359962737049600)
(50 1.0 9007199254740993/9007199254740992)
(51 1.0000000000000002 450359962737049651/450359962737049600)
(52 1.0000000000000002 112589990684262413/112589990684262400)
...
To my knowledge, Gauche and Ruby don't share code regarding this feature. Whatever the issue is, it should be in the logic.
What Gauche did was basically convert the numerator and the denominator of the rational number to doubles, then do a floating-point division:
double dnumer = Scm_GetDouble(SCM_RATNUM_NUMER(obj));
double ddenom = Scm_GetDouble(SCM_RATNUM_DENOM(obj));
return dnumer/ddenom;
There are cases where dnumer and/or ddenom overflow, which require special handling, and we also have to make sure the floating-point division is done in IEEE double precision (as opposed to extended precision). I guess Ruby does the same, and that is the cause of the weird behavior. What's going on?
Let's look at k=33. 1 + (33/100)ε is 450359962737049633/450359962737049600. This is closer to 1 than to nextfloat(1), so it should be rounded down.
The numerator, 450359962737049633, consists of the following bits:
gosh> (format "~,,,5:b" 450359962737049633)
"1100,10000,00000,00000,00000,00000,00000,00000,00000,00000,00001,00001"
;; 59 bits in total; only the first 53 fit in a double's significand
Note that it requires more than 53 bits to represent accurately. If we convert it to a double, the last 6 bits are rounded off---and since the value is above the midpoint, it is rounded up:
1100,10000,00000,00000,00000,00000,00000,00000,00000,00000,00001,00001
  ↓  (round off the last 6 bits)
1100,10000,00000,00000,00000,00000,00000,00000,00000,00000,0001 * 2^6
Effectively, we now have 1 + (64/100)ε. So we get 1.0000000000000002 rather than 1.0. We can say this is another case of double rounding.
When k=32, the last 6 bits fall exactly on the midpoint, so the value is rounded down by the round-to-even rule.
k=35, 40 and 50 are anomalies---those rationals are simplified to lower terms, so their numerators carry fewer excess bits beyond 53, and the rounding of the numerator happened to go in the same direction as the correct rounding of the whole rational, so no precision was lost in the end.
The fix I've implemented is as follows:
- Scale the numerator by a factor S so that numerator * S > 2^54 * denominator.
- Compute the integer division. The quotient has at least 54 bits.
- Look at the 54th bit (counting from the most significant bit):
  - If it is 0, we can round the quotient down.
  - If it is 1, and there's any '1' in the following bits or the remainder of the division is nonzero, we can round the quotient up.
  - Otherwise, the quotient is exactly on the midpoint; we look at the 53rd bit and round to even.
- Convert the quotient into a double, then scale it back by S.
If the result falls in the denormalized range, we have to adjust the number of significant digits accordingly, otherwise we'd get another double rounding.
This solution involves lots of bignum allocations, so we'll want to optimize it eventually.
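Here's the core of the algorithm sketched in Scheme (a sketch only: it ignores signs, zero, overflow and the denormalized range, all of which the real implementation must handle):

(define (rational->double r)
  (let* ((n (numerator r))
         (d (denominator r))
         ;; scale N so that the quotient has at least 54 bits
         (shift (max 0 (- (+ 54 (integer-length d)) (integer-length n))))
         (q (quotient (ash n shift) d))
         (rem (remainder (ash n shift) d))
         (excess (- (integer-length q) 53))     ; bits beyond the 53 we keep
         (top (ash q (- excess)))               ; the 53 significant bits
         (guard (logbit? (- excess 1) q))       ; the first dropped bit
         (sticky (or (> rem 0)                  ; anything below the guard bit?
                     (> (logand q (- (ash 1 (- excess 1)) 1)) 0)))
         (rounded (cond ((not guard) top)           ; round down
                        (sticky (+ top 1))          ; round up
                        ((odd? top) (+ top 1))      ; midpoint: round to even
                        (else top))))
    ;; scale back: the true value is rounded * 2^(excess - shift)
    (* (exact->inexact rounded) (expt 2.0 (- excess shift)))))

With this, the k=33 case above should yield 1.0 as desired.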
2020/05/30
C API and promise
This ate up my whole afternoon, so I'm writing it down in order not to fall into it again.
I've got a really weird bug. We have a parameter (say P). P has a default value, but the value may not be available at initialization time. Basically, what we want is to delay the evaluation of EXPR below until the value of P is actually taken:
(define P (make-parameter EXPR))
Simply wrapping EXPR with delay didn't cut it, for the users of P expected it to contain a value that wasn't a promise, and we couldn't go to every place where P was used and wrap it with force.
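To illustrate the problem (expensive-default is a hypothetical procedure standing in for EXPR):

(define P (make-parameter (delay (expensive-default))))

(+ 1 (P))          ; error: (P) yields the promise itself, not the value
(+ 1 (force (P)))  ; works, but every use site would need the force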
So we added a special flag to the parameter, which applies force to the value whenever the value is taken. The feature isn't available from the Scheme world, though; it's only accessible through the C API, for we're not sure yet whether such a feature is a good idea.
Anyway, P got such a flag, so we could also say (P (delay EXPR)) to alter the value of P, with the actual computation of EXPR delayed. And it seemed to be working.
However, we ran into an issue when some code took the value of P from the C API. The internals of the parameter object are a bit complicated, but you can assume there's a C API function that retrieves the value of a given parameter.
Through the C API, however, P's value looked like #<closure ...>, whereas when I took P's value from the Scheme world, it returned the value of EXPR.
I started tracking it down, and it was like a rabbit hole. The Scheme interface eventually calls the internal Scheme procedure %primitive-parameter-ref, which directly calls the C API function Scm_PrimitiveParameterRef. I inserted a debug stub to show the result of the C call. The C API returned the mysterious closure, yet in the Scheme world it returned the desired value.
Does the Gauche runtime intercept the return value on its way from the C world to the Scheme world? Nope. It's returned directly to the Scheme world. I had no idea where this #<closure...> came from, nor how the value changed into the desired one.
Furthermore, I found that if I evaluated (P) a second time, the C API returned the desired value. But no code had been called that actually replaced P's value!
I poked around the C stub generators, the VM code, the parameter code, ... all in vain.
Finally, I opened up the source of Scm_Force, the C API for force. And BANG! The answer was there.
The C runtime doesn't like call/cc. C procedures return either exactly once, or never. So, when you call back Scheme code from C, you have to choose one of these two strategies:
- Restrict the called Scheme code to return at most once. If a continuation captured within the Scheme code is invoked again later and tries to return to the C code a second time, an error is thrown.
- Split your C code in two: the part before the callback (A) and the part after it (B). Both A and B are ordinary C functions. A arranges for B to be called after the Scheme callback returns; effectively, you write it in continuation-passing style. With this, a continuation captured within the Scheme callback can be re-invoked, which just calls B again.
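To see what the latter strategy has to cope with, here's the troublesome behavior demonstrated in pure Scheme (imagine c-side were a C function):

gosh> (define k #f)
gosh> (define (callback)
        (call/cc (lambda (c) (set! k c) 1)))
gosh> (define (c-side)
        (let1 v (callback)
          (print "callback returned " v)))
gosh> (c-side)
callback returned 1
gosh> (k 2)   ; the callback "returns" a second time
callback returned 2

In C, c-side's stack frame would be long gone by the time (k 2) runs; strategy B sidesteps this by packaging "the rest of c-side" as a separate function that the VM can invoke any number of times.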
Most of the Gauche runtime in C adopts the latter strategy, so that call/cc works seamlessly. By convention, the C API functions that use this strategy are named Scm_VM***. The caller of such a function can't expect to get the final result as the C return value, since the function may need further computation (Scheme code and the B part) to produce it.
Scm_Force is that type of function, too. I had merely forgotten to name it Scm_VMForce.
Scm_PrimitiveParameterRef casually called Scm_Force when the parameter had the delayed-evaluation flag, expecting it to return the final value. But in fact, Scm_Force can only be used in conjunction with the Scheme VM to obtain the final result.
Tag: BugStories