Lua Lanes - multithreading in Lua
Copyright © 2007-23 Asko Kauppi, Benoit Germain. All rights reserved.
Lua Lanes is published under the same MIT license as Lua 5.1, 5.2, 5.3 and 5.4.
This document was revised on 23-Feb-24, and applies to version 3.16.3.
Lua Lanes is a Lua extension library providing the possibility to run multiple Lua states in parallel. It is intended to be used for optimizing performance on multicore CPUs and to study ways to make Lua programs naturally parallel to begin with.
Lanes is included into your software by the regular require "lanes" method. No C side programming is needed; all APIs are Lua side, and most existing extension modules should work seamlessly together with the multiple lanes.
Starting with version 3.1.6, Lanes should build and run identically with either Lua 5.1 or Lua 5.2. Version 3.10.0 supports Lua 5.3.
See comparison of Lua Lanes with other Lua multithreading solutions.
Lua Lanes supports the following operating systems:
The underlying threading code can be compiled either towards Win32 API or Pthreads. Unfortunately, thread prioritization under Pthreads is a JOKE, requiring OS specific tweaks and guessing undocumented behaviour. Other features should be portable to any modern platform.
Lua Lanes is built simply by make on the supported platforms (make-vc for Visual C++). See README for system specific details and limitations.
To install Lanes, all you need are the lanes.lua and lanes/core.so|dll files to be reachable by Lua (see LUA_PATH, LUA_CPATH). Or use Lua Rocks package management.
> luarocks search lanes
... output listing Lua Lanes is there ...
> luarocks install lanes
... output ...
When Lanes is embedded, it is possible to statically initialize with
extern void LANES_API luaopen_lanes_embedded( lua_State* L, lua_CFunction _luaopen_lanes);
luaopen_lanes_embedded leaves the module table on the stack. lanes.configure() must still be called in order to use Lanes.
If _luaopen_lanes is NULL, a default loader will simply attempt the equivalent of luaL_dofile( L, "lanes.lua").
To embed Lanes, compile its source files into your application. In any Lua state where you want to use Lanes, initialize it as follows:
#include "lanes.h"

int load_lanes_lua( lua_State* L)
{
    // retrieve lanes.lua from wherever it is stored and return the result of its execution
    // trivial example 1:
    luaL_dofile( L, "lanes.lua");
    // trivial example 2:
    luaL_dostring( L, bin2c_lanes_lua);
}

void embed_lanes( lua_State* L)
{
    // we need base libraries for Lanes to work
    luaL_openlibs( L);
    ...
    // will attempt luaL_dofile( L, "lanes.lua");
    luaopen_lanes_embedded( L, NULL);
    lua_pop( L, 1);
    // another example with a custom loader
    luaopen_lanes_embedded( L, load_lanes_lua);
    lua_pop( L, 1);
    // a little test to make sure things work as expected
    luaL_dostring( L, "local lanes = require 'lanes'.configure{with_timers = false}; local l = lanes.linda()");
}
The following sample shows how to initialize the Lanes module.
local lanes = require "lanes".configure()
Starting with version 3.0-beta, requiring the module follows Lua 5.2 rules: the module is not available under the global name "lanes", but has to be accessed through require's return value.
After lanes is required, it is necessary to call lanes.configure(), which is the only function exposed by the module at this point. Calling configure() will perform one-time initializations and make the rest of the API available.
At the same time, configure() itself will be replaced by another function that raises an error if called again with differing arguments, if any.
Also, once Lanes is initialized, require() is replaced by another one that wraps it inside a mutex, both in the main state and in all created lanes. This prevents multiple thread-unsafe module initializations from several lanes to occur simultaneously. It remains to be seen whether this is actually useful or not: If a module is already threadsafe, protecting its initialization isn't useful. And if it is not, any parallel operation may crash without Lanes being able to do anything about it.
IMPORTANT NOTE: Starting with version 3.7.0, only the first occurrence of require "lanes" must be followed by a call to .configure(). From this point on, a simple require "lanes" will do wherever you need to require lanes again.
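As a minimal sketch of this pattern (file layout and module names are hypothetical):

```lua
-- main.lua: the first require must call configure()
local lanes = require "lanes".configure()

-- some_other_module.lua, loaded afterwards:
-- a plain require suffices, configure() must not be called again with different arguments
local lanes = require "lanes"
```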
lanes.configure( [opt_tbl])
lanes.configure accepts an optional options table as sole argument.
name | value | definition
---|---|---
.nb_keepers | integer >= 1 | Controls the number of keeper states used internally by lindas to transfer data between lanes (see below). Default is 1.
.with_timers | nil/false/true | If equal to false or nil, Lanes doesn't start the timer service, and the associated API will be absent from the interface (see below). Default is true.
.verbose_errors | nil/false/true | (Since v3.6.3) If equal to true, Lanes will collect more information when transferring data across Lua states to help identify errors (at a cost). Default is false.
.protect_allocator | nil/false/true | REPLACED BY allocator="protected" AS OF VERSION v3.13.0. (Since v3.5.2) If equal to true, Lanes wraps all calls to the state's allocator function inside a mutex. Since v3.6.3, when left unset, Lanes attempts to autodetect this value for LuaJIT (the guess might be wrong if "ffi" isn't loaded, though). Default is true when Lanes detects it is run by LuaJIT, else nil.
.allocator | nil/"protected"/function | (Since v3.13.0) If nil, Lua states are created with lua_newstate() and reuse the allocator from the master state. If "protected", the default allocator obtained from lua_getallocf() in the master state is wrapped inside a critical section and used in all newly created states. If a function, this function is called prior to creating the state. It should return a full userdata containing the following structure:
.internal_allocator | "libc"/"allocator" | (Since v3.16.1) Controls which allocator is used for Lanes' internal allocations (for keeper and deep userdata management). If "libc", Lanes uses realloc and free. If "allocator", Lanes uses whatever was obtained from the "allocator" setting. This option is mostly useful for embedders that want to control all memory allocations, but have issues when Lanes tries to use the Lua state allocator for internal purposes (especially with LuaJIT).
.demote_full_userdata | nil/false/true | (Since v3.7.5) If equal to false or nil, Lanes raises an error when attempting to transfer a non-deep full userdata, else it will be demoted to a light userdata in the destination. Default is false (set to true to get the legacy behaviour).
.track_lanes | nil/false/anything | Any non-nil/false value instructs Lanes to keep track of all lanes, so that lanes.threads() can list them. If false, lanes.threads() will raise an error when called. Default is false.
.on_state_create | function/nil | If provided, will be called in every created Lua state right after initializing the base libraries. Keeper states will call it as well, but only if it is a C function (keeper states are not able to execute any user Lua code). Typical usage is twofold: (Since version 3.7.6) If on_state_create() is a Lua function, it will be transferred normally before the call. If it is a C function, a C closure will be reconstructed in the created state from the C pointer. Lanes will raise an error if the function has upvalues.
.shutdown_timeout | number >= 0 | (Since v3.3.0) Sets the duration in seconds Lanes will wait for graceful termination of running lanes at application shutdown. Irrelevant for builds using pthreads. Default is 0.25.
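As a sketch, a configuration enabling a few of these options might look like the following (the particular values are chosen purely for illustration):

```lua
local lanes = require "lanes".configure{
    nb_keepers = 4,           -- spread linda traffic over 4 keeper states
    with_timers = false,      -- don't start the timer service
    track_lanes = true,       -- allow lanes.threads() to list running lanes
    shutdown_timeout = 1.0,   -- wait up to 1 second for lanes at shutdown
}
```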
(Since v3.5.0) Once Lanes is configured, one should register with Lanes the modules exporting functions that will be transferred either during lane generation or through lindas.
Use lanes.require() for this purpose. This will call the original require(), then add the result to the lookup databases.
(Since version 3.11) It is also possible to register a given module with lanes.register(). This function will raise an error if the registered module is not a function or table.
local m = lanes.require "modname"
lanes.register( "modname", module)
The following sample shows preparing a function for parallel calling, and calling it with varying arguments. Each of the two results is calculated in a separate OS thread, parallel to the calling one. Reading the results joins the threads, waiting for any results not already there.
local lanes = require "lanes".configure()

f = lanes.gen( function( n) return 2 * n end)
a = f( 1)
b = f( 2)
print( a[1], b[1] )    -- 2 4
func = lanes.gen( [libs_str | opt_tbl [, ...],] lane_func)
lane_h = func( ...)
The function returned by lanes.gen() is a "generator" for launching any number of lanes. They will share code, options, initial globals, but the particular arguments may vary. Only calling the generator function actually launches a lane, and provides a handle for controlling it.
Alternatively, lane_func may be a string, in which case it will be compiled in the lane. This makes it possible to launch lanes with older versions of LuaJIT, which did not support lua_dump, used internally to transfer functions to the lane.
Lanes automatically copies upvalues over to the new lanes, so you need not wrap all the required elements into one 'wrapper' function. If lane_func uses some local values, or local functions, they will be there also in the new lanes.
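A minimal sketch of upvalue capture (names are illustrative only):

```lua
local lanes = require "lanes".configure()

local factor = 3                          -- captured as an upvalue
local function scale( n) return factor * n end

local gen = lanes.gen( function( n)
    -- 'scale' (and its upvalue 'factor') are copied into the lane automatically
    return scale( n)
end)
print( gen( 7)[1])    -- 21
```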
libs_str
defines the standard libraries made available to the new Lua state:
(nothing) | no standard libraries (default)
"base" or "" | root level names, print, assert, unpack etc.
"bit" | bit.* namespace (LuaJIT)
"bit32" | bit32.* namespace (Lua 5.1 and 5.2)
"coroutine" | coroutine.* namespace (part of base in Lua 5.1 and 5.2)
"debug" | debug.* namespace
"ffi" | ffi.* namespace (LuaJIT)
"io" | io.* namespace
"jit" | jit.* namespace (LuaJIT)
"math" | math.* namespace
"os" | os.* namespace
"package" | package.* namespace and require
"string" | string.* namespace
"table" | table.* namespace
"utf8" | utf8.* namespace (Lua 5.3 and above)
"*" | All standard libraries (including those specific to LuaJIT and not listed above), as well as lanes.core. This must be used alone.
Initializing the standard libs takes a bit of time at each lane invocation. This is the main reason why "no libraries" is the default.
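For illustration, libraries can be requested individually (comma-separated) or all at once; this sketch assumes Lanes has already been configured:

```lua
-- lane with only the base and table libraries
local g1 = lanes.gen( "base,table", function( t)
    return table.concat( t, "+")
end)

-- lane with all standard libraries (and lanes.core); "*" must be used alone
local g2 = lanes.gen( "*", function()
    return os.time()
end)
```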
opt_tbl
is a collection of named options to control the way lanes are run:
name | value | definition
---|---|---
.globals | table | Sets the globals table for the launched threads. This can be used for giving them constants. The key/value pairs of table are transferred in the lane globals after the libraries have been loaded and the modules required. The global values of different lanes are in no manner connected; modifying one will only affect the particular lane.
.required | table | Lists modules that have to be required in order to be able to transfer functions they exposed. Starting with Lanes 3.0-beta, non-Lua functions are no longer copied by recreating a C closure from a C pointer, but are searched in lookup tables. These tables are built from the modules listed here. required must be a list of strings, each one being the name of a module to be required. Each module is required with require() before the lane function is invoked. So, from the required module's point of view, requiring it manually from inside the lane body or having it required this way doesn't change anything. From the lane body's point of view, the only difference is that a module not creating a global won't be accessible. Therefore, a lane body will also have to require a module manually, but this won't do anything more (see Lua's require documentation). ATTEMPTING TO TRANSFER A FUNCTION REGISTERED BY A MODULE NOT LISTED HERE WILL RAISE AN ERROR.
.gc_cb | function | (Since version 3.8.2) Callback that gets invoked when the lane is garbage collected. The function receives two arguments (the lane name and a string, either "closed" or "selfdestruct").
.priority | integer | The priority of generated lanes, in the range -3..+3 (default is 0). These values are a mapping over the actual priority range of the underlying implementation. Implementation and dependability of priorities varies by platform. In particular, Linux kernel 2.6 does not support priorities in user mode. A lane can also change its own thread priority dynamically with lanes.set_thread_priority().
.package | table | Introduced at version 3.0. Specifying it when libs_str doesn't cause the package library to be loaded will generate an error. If not specified, the created lane will receive the current values of package. Only path, cpath, preload and loaders (Lua 5.1)/searchers (Lua 5.2) are transferred.
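A sketch combining several of these options (the module name "socket" and the constant are hypothetical, for illustration only):

```lua
local gen = lanes.gen( "base,package",
    {
        globals = { MAX_RETRIES = 5 },   -- becomes a global inside the lane
        required = { "socket" },         -- so functions from 'socket' can be transferred
        priority = 1,
    },
    function()
        -- MAX_RETRIES was injected through .globals
        return MAX_RETRIES
    end)
```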
Each lane also gets a global function set_debug_threadname() that it can use anytime to do as the name says. Supported debuggers are Microsoft Visual Studio (for the C side) and Decoda (for the Lua side).
Starting with version 3.8.1, the lane has a new method lane:get_debug_threadname() that gives access to that name from the caller side (returns "<unnamed>" if unset, "<closed>" if the internal Lua state is closed).
If a lane body pulls a C function imported by a module required before Lanes itself (thus not through a hooked require), the lane generator creation will raise an error. The function name it shows is a path where it was found by scanning _G. As a utility, the name guessing functionality is exposed as such:
"type", "name" = lanes.nameof( o)
Starting with version 3.8.3, lanes.nameof() searches the registry as well.
The lane handles are allowed to be 'let loose'; in other words you may execute a lane simply by:
lanes.gen( function( params) ... end ) ( ...)
lanes.set_thread_priority( prio)
Besides setting a default priority in the generator settings, each thread can change its own priority at will. This is also true for the main Lua state.
The priority must be in the range [-3,+3].
lanes.set_thread_affinity( affinity)
Each thread can change its own affinity at will. This is also true for the main Lua state.
str = lane_h.status
The current execution state of a lane can be read via its status member, providing one of these values:
"pending" | not started yet; shouldn't stay very long in that state
"running" | running, not suspended on a Linda call
"waiting" | waiting at a Linda :receive() or :send()
"done" | finished executing (results are ready)
"error" | met an error (reading results will propagate it)
"cancelled" | received cancellation and finished itself
"killed" | was forcefully killed by lane_h:cancel() (since v3.3.0)
This is similar to coroutine.status, which has: "running" / "suspended" / "normal" / "dead". Not using the exact same names is intentional.
{{name = "name", status = "status", ...}|nil = lanes.threads()
Only available if lane tracking feature is compiled (see HAVE_LANE_TRACKING in lanes.c) and track_lanes is set.
Returns an array table where each entry is a table containing a lane's name and status. Returns nil if no lane is running.
set_error_reporting( "basic"|"extended")
Sets the error reporting mode. "basic" is selected by default.
A lane can be waited upon by simply reading its results. This can be done in two ways.
[val] = lane_h[1]
Makes sure lane has finished, and gives its first (maybe only) return value. Other return values will be available in other lane_h indices.
If the lane ended in an error, it is propagated to master state at this place.
[...]|[nil,err,stack_tbl] = lane_h:join( [timeout_secs] )
Waits until the lane finishes, or timeout seconds have passed. Returns nil, "timeout" on timeout (since v3.13), nil,err,stack_tbl if the lane hit an error, nil, "killed" if forcefully killed (starting with v3.3.0), or the return values of the lane. Unlike in reading the results in table fashion, errors are not propagated.
stack_tbl is a table describing where the error was thrown.
In "extended" mode, stack_tbl is an array of tables containing info gathered with lua_getinfo() ("source","currentline","name","namewhat","what").
In "basic" mode, stack_tbl is an array of "<filename>:<line>" strings. Use table.concat() to format it to your liking (or just ignore it).
If you use :join, make sure your lane main function returns a non-nil value so you can tell timeout and error cases apart from a successful return (using the .status property may be risky, since it might change between a timed out join and the moment you read it).
require "lanes".configure()

f = lanes.gen( function() error "!!!" end)
a = f( 1)
--print( a[1])    -- propagates error
v, err = a:join()    -- no propagation
if v == nil then
    error( "'a' faced error"..tostring(err))    -- manual propagation
end
If you want to wait for multiple lanes to finish (any of a set of lanes), use a Linda object. Give each lane a specific id, and send that id over a Linda once that thread is done (as the last thing you do).
require "lanes".configure()

local sync_linda = lanes.linda()

f = lanes.gen( function() dostuff() sync_linda:send( "done", true) end)
a = f()
b = f()
c = f()

sync_linda:receive( nil, sync_linda.batched, "done", 3) -- wait for 3 lanes to write something in "done" slot of sync_linda
bool[,reason] = lane_h:cancel( "soft" [, timeout] [, wake_bool])
bool[,reason] = lane_h:cancel( "hard" [, timeout] [, force [, forcekill_timeout]])
bool[,reason] = lane_h:cancel( [mode, hookcount] [, timeout] [, force [, forcekill_timeout]])
cancel() sends a cancellation request to the lane.
The first argument is a mode, which can be one of "hard", "soft", "count", "line", "call", "ret".
If mode is not specified, it defaults to "hard".
If mode is "soft", cancellation will only cause cancel_test() to return true, so that the lane can cleanup manually.
If wake_bool is true, the lane is also signalled so that execution returns from any pending linda operation. Linda operations detecting the cancellation request return lanes.cancel_error.
If mode is "hard", waits for the request to be processed, or a timeout to occur. Linda operations detecting the cancellation request will raise a special cancellation error (meaning they won't return in that case).
timeout defaults to 0 if not specified.
Other values of mode will asynchronously install the corresponding hook, then behave as "hard".
If force is true, forcekill_timeout can be set to tell how long lanes will wait for the OS thread to terminate before raising an error. Windows threads always terminate immediately, but it might not always be the case with some pthread implementations.
Returns true, lane_h.status if lane was already done (in "done", "error" or "cancelled" status), or the cancellation was fruitful within timeout_secs timeout period.
Returns false, "timeout" otherwise.
If the lane is still running after the timeout expired and force is true, the OS thread running the lane is forcefully killed. This means no GC, probable OS resource leaks (thread stack, locks, DLL notifications), and should generally be the last resort.
Cancellation is tested before going to sleep in receive() or send() calls and after executing cancelstep Lua statements. Starting with version 3.0-beta, a pending receive() or send() call is awakened.
This means the execution of the lane will resume although the operation has not completed, to give the lane a chance to detect cancellation (even in the case the code waits on a linda with infinite timeout).
The code should be able to handle this situation appropriately if required (in other words, it should gracefully handle the fact that it didn't receive the expected values).
It is also possible to manually test for cancel requests with cancel_test().
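A sketch of a lane cooperating with soft cancellation via the cancel_test() global (the work loop is illustrative):

```lua
local lanes = require "lanes".configure()

local gen = lanes.gen( "base", function()
    while true do
        -- ... do a chunk of work ...
        if cancel_test() then
            -- a soft cancel was requested: clean up and leave gracefully
            break
        end
    end
    return "stopped"
end)

local h = gen()
h:cancel( "soft")    -- the lane notices on its next cancel_test() call
```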
set_finalizer( finalizer_func)

void = finalizer_func( [err, stack_tbl])
The error call is used for throwing exceptions in Lua. What Lua does not offer, however, is scoped finalizers that would get called when a certain block of instructions gets exited, whether through peaceful return or abrupt error.
Since 2.0.3, Lanes registers a function set_finalizer in the lane's Lua state for doing this. Any functions given to it will be called in the lane Lua state, just prior to closing it. It is possible to set more than one finalizer. They are not called in any particular order.
An error in a finalizer itself overrides the state of the regular chunk (in practice, it would be highly preferable not to have errors in finalizers). If one finalizer errors, the others may not get called. If a finalizer error occurs after an error in the lane body, then this new error replaces the previous one (including the full stack trace).
local lane_body = function()
    set_finalizer( function( err, stk)
        if err and type( err) ~= "userdata" then
            -- no special error: true error
            print( " error: "..tostring(err))
        elseif type( err) == "userdata" then
            -- lane cancellation is performed by throwing a special userdata as error
            print( "after cancel")
        else
            -- no error: we just got finalized
            print( "finalized")
        end
    end)
end
Communication between lanes is completely detached from the lane handles themselves. By itself, a lane can only provide return values once it's finished, or throw an error. The need to communicate during runtime is handled by Linda objects, which are deep userdata instances. They can be provided to a lane as startup parameters, upvalues or in some other Linda's message.
Access to a Linda object means a lane can read or write to any of its data slots. Multiple lanes can be accessing the same Linda in parallel. No application level locking is required; each Linda operation is atomic.
require "lanes".configure()

local linda = lanes.linda()

local function loop( max)
    for i = 1, max do
        print( "sending: " .. i)
        linda:send( "x", i)    -- linda as upvalue
    end
end

a = lanes.gen( "", loop)( 10000)

while true do
    local key, val = linda:receive( 3.0, "x")    -- timeout in seconds
    if val == nil then
        print( "timed out")
        break
    end
    print( tostring( linda) .. " received: " .. val)
end
Characteristics of the Lanes implementation of Lindas are:
h = lanes.linda( [opt_name, [opt_group]])

[true|lanes.cancel_error] = h:send( [timeout_secs,] [h.null,] key, ...)
[key, val]|[lanes.cancel_error] = h:receive( [timeout_secs,] key [, ...])
[key, val [, ...]]|[lanes.cancel_error] = h:receive( timeout, h.batched, key, n_uint_min[, n_uint_max])
[true|lanes.cancel_error] = h:limit( key, n_uint)
The send() and receive() methods use Linda keys as FIFO queues (first in, first out). Timeouts are given in seconds (millisecond accuracy). If using numbers as the first Linda key, one must explicitly give nil as the timeout parameter to avoid ambiguities.
By default, stack sizes are unlimited but limits can be enforced using the limit() method. This can be useful to balance execution speeds in a producer/consumer scenario. Any negative value removes the limit.
A limit of 0 is allowed to block everything.
(Since version 3.7.7) if the key was full but the limit change added some room, limit() returns true and the linda is signalled so that send()-blocked threads are awakened.
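A producer/consumer sketch using limit() to throttle the producer (the slot name and counts are arbitrary):

```lua
local lanes = require "lanes".configure()

local linda = lanes.linda()
linda:limit( "job", 2)    -- at most 2 pending items in the "job" slot

local producer = lanes.gen( "base", function()
    for i = 1, 10 do
        linda:send( "job", i)    -- blocks whenever 2 items are already queued
    end
end)()

for i = 1, 10 do
    local key, val = linda:receive( "job")    -- each receive frees room, waking the producer
    print( val)
end
```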
Note that any number of lanes can be reading or writing a Linda. There can be many producers, and many consumers. It's up to you.
Hard cancellation will cause pending linda operations to abort execution of the lane through a cancellation error. This means that you have to install a finalizer in your lane if you want to run some code in that situation.
send() returns true if the sending succeeded, and false if the queue limit was met, and the queue did not empty enough during the given timeout.
(Since version 3.7.8) send() returns lanes.cancel_error if interrupted by a soft cancel request.
If no data is provided after the key, send() raises an error. Since version 3.9.3, if provided with linda.null before the actual key and there is no data to send, send() sends a single nil.
Also, if linda.null is sent as data in a linda, it will be read as a nil.
Equally, receive() returns a key and the value extracted from it, or nothing for timeout. Note that nils can be sent and received; the key value will tell it apart from a timeout.
Version 3.4.0 introduces an API change in the returned values: receive() returns the key followed by the value(s), in that order, and not the other way around.
(Since version 3.7.8) receive() returns lanes.cancel_error if interrupted by a soft cancel request.
Multiple values can be sent to a given key at once, atomically (the send will fail unless all the values fit within the queue limit). This can be useful for multiple producer scenarios, if the protocols used are giving data in streams of multiple units. Atomicity prevents the producers from garbling each other's messages, which could happen if the units were sent individually.
When receiving from multiple slots, the keys are checked in order, which can be used for making priority queues.
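For illustration, a simple two-level priority queue built on this ordering (slot names are arbitrary):

```lua
local lanes = require "lanes".configure()
local linda = lanes.linda()

-- "urgent" is checked before "normal" on every call,
-- so urgent messages are always served first
local key, val = linda:receive( 1.0, "urgent", "normal")
```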
bool|lanes.cancel_error = linda_h:set( key [, val [, ...]])
[[val [, ...]]|lanes.cancel_error] = linda_h:get( key [, count = 1])
The table access methods are for accessing a slot without queuing or consuming. They can be used for making shared tables of storage among the lanes.
Writing to a slot never blocks because it ignores the limit. It overwrites existing value and clears any possible queued entries.
Reading doesn't block either because get() returns whatever is available (which can be nothing), up to the specified count.
Table access and send()/receive() can be used together; reading a slot essentially peeks at the next outgoing value of the queue.
set() signals the linda for write if a value is stored. If nothing special happens, set() returns nothing.
Since version 3.7.7, if the key was full but the new data count of the key after set() is below its limit, set() returns true and the linda is also signaled for read so that send()-blocked threads are awakened.
Since version 3.8.0, set() can write several values at the specified key, writing nil values is now possible, and clearing the contents at the specified key is done by not providing any value.
Also, get() can read several values at once. If the key contains no data, get() returns no value. This can be used to separate the case when reading stored nil values.
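A sketch of the table-access behaviour (slot name and values are arbitrary; assumes version 3.8.0 or later):

```lua
local lanes = require "lanes".configure()
local linda = lanes.linda()

linda:set( "config", "fast", 3)    -- store two values, clearing any queued entries
print( linda:get( "config", 2))    -- reads both values without consuming them
linda:set( "config")               -- no value: clears the contents of the slot
print( linda:get( "config"))       -- no data stored: returns no value
```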
Since version 3.8.4, trying to send or receive data through a cancelled linda does nothing and returns lanes.cancel_error.
[val] = linda_h:count( [key[,...]])
Returns some information about the contents of the linda.
If no key is specified, and the linda is empty, returns nothing.
If no key is specified, and the linda is not empty, returns a table of key/count pairs that counts the number of items in each of the existing keys of the linda. This count can be 0 if the key has been used but is empty.
If a single key is specified, returns the number of pending items, or nothing if the key is unknown.
If more than one key is specified, return a table of key/count pairs for the known keys.
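For illustration (slot names arbitrary):

```lua
local lanes = require "lanes".configure()
local linda = lanes.linda()

linda:send( "x", 1)
linda:send( "x", 2)
print( linda:count( "x"))    -- 2

-- no key: a table of key/count pairs for all existing keys, e.g. { x = 2 }
local all = linda:count()
```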
[table] = linda_h:dump()
Returns a table describing the full contents of a linda, or nil if the linda wasn't used yet.
void = linda_h:cancel("read"|"write"|"both"|"none")
(Starting with version 3.8.4) Signals the linda so that lanes waiting for read, write, or both, wake up.
All linda operations (including get() and set()) will return lanes.cancel_error as when the calling lane is soft-cancelled as long as the linda is marked as cancelled.
"none" resets the linda's cancel status, but doesn't signal it.
If not void, the lane's cancel status overrides the linda's cancel status.
A linda is a gateway to read and write data inside some hidden Lua states, called keeper states. Lindas are hashed to a fixed number of keeper states, which act as the locking entities.
The data sent through a linda is stored inside the associated keeper state in a Lua table where each linda slot is the key to another table containing a FIFO for that slot.
Each keeper state is associated with an OS mutex, to prevent concurrent access to the keeper state. The linda itself uses two signals to be made aware of operations occurring on it.
Whenever Lua code reads from or writes to a linda, the mutex is acquired. If linda limits don't block the operation, it is fulfilled, then the mutex is released.
If the linda has to block, the mutex is released and the OS thread sleeps, waiting for a linda operation to be signalled. When an operation occurs on the same linda, possibly fulfilling the condition, or a timeout expires, the thread wakes up.
If the thread is woken but the condition is not yet fulfilled, it goes back to sleep, until the timeout expires.
When a lane is cancelled, the signal it is waiting on (if any) is signalled. In that case, the linda operation will return lanes.cancel_error.
A single Linda object provides an infinite number of slots, so why would you want to use several?
There are some important reasons:
Actually, you can. Make separate lanes to wait on each, and then multiplex those events to a common Linda, but... :).
void = lanes.timer( linda_h, key, date_tbl|first_secs [,period_secs])
Timers are implemented as a lane. They can be disabled by setting "with_timers" to nil or false in lanes.configure().
Timers can be run once, or in a reoccurring fashion (period_secs > 0). The first occurrence can be given either as a date or as a relative delay in seconds. The date table is like what os.date("*t") returns, in the local time zone.
Once a timer expires, the key is set with the current time (in seconds, same offset as os.time() but with millisecond accuracy). The key can be waited upon using the regular Linda :receive() method.
A timer can be stopped simply with first_secs=0|nil and no period.
local lanes = require "lanes"
lanes.configure()

local linda = lanes.linda()

-- First timer once a second, not synchronized to wall clock
--
lanes.timer( linda, "sec", 1, 1)

-- Timer to a future event (next even minute); wall clock synchronized
--
local t = os.date( "*t", os.time() + 60)    -- now + 1min
t.sec = 0

lanes.timer( linda, "min", t, 60)    -- reoccur every minute (sharp)

while true do
    local key, v = linda:receive( "sec", "min")
    print( "Timer "..key..": "..v)
end
NOTE: Timer keys are set, not queued, so missing a beat is possible especially if the timer cycle is extremely small. The key value can be used to know the actual time passed.
Having the API as lanes.timer() is intentional. Another alternative would be linda_h:timer() but timers are not traditionally seen to be part of Lindas. Also, it would mean any lane getting a Linda handle would be able to modify timers on it. A third choice could be abstracting the timers out of the Linda realm altogether (timer_h = lanes.timer( date|first_secs, period_secs )) but that would mean separate waiting functions for timers and lindas. Even if a linda object and key was returned, that key couldn't be waited upon simultaneously with one's general linda events. The current system gives maximum capabilities with minimum API, and any smoothenings can easily be crafted in Lua at the application level.
{[{linda, slot, when, period}[,...]]} = lanes.timers()
The full list of active timers can be obtained. Obviously, this is a snapshot, and non-repeating timers might no longer exist by the time the results are inspected.
void = lanes.sleep( [seconds|false])
(Since version 3.9.7) A very simple way of sleeping when nothing else is available. It is implemented by attempting to read some data from an unused channel of the internal linda used for timers (this linda exists even when timers aren't enabled). The default duration is 0, which should only cause a thread context switch.
Lanes does not generally require locks or critical sections to be used, at all. If necessary, a limited queue can be used to emulate them. lanes.lua offers some sugar to make it easy:
lock_func|lanes.cancel_error = lanes.genlock( linda_h, key [,N_uint=1])

bool|lanes.cancel_error = lock_func( M_uint [, "try"] )    -- acquire
..
bool|lanes.cancel_error = lock_func( -M_uint)    -- release
The generated function acquires M tokens from the N available, or releases them if the value is negative. The acquiring call will suspend the lane, if necessary. Use M=N=1 for a critical section lock (only one lane allowed to enter).
When passing "try" as the second argument when acquiring, lock_func operates on the linda with a timeout of 0 to emulate a TryLock() operation. If locking fails, lock_func returns false. "try" is ignored when releasing (as releasing is not expected to ever have to wait, unless acquisition/release pairs are not properly matched).
Upon successful lock/unlock, lock_func returns true (always the case when block-waiting for completion).
Note: The generated locks are not recursive (A single lane locking several times will consume tokens at each call, and can therefore deadlock itself). That would need another kind of generator, which is currently not implemented.
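As a usage sketch (the linda and key names here are illustrative), a critical section protecting a shared resource might look like:

```lua
local lanes = require "lanes".configure()
local linda = lanes.linda()

-- one token -> a classic critical section: only one lane inside at a time
local lock = lanes.genlock( linda, "my_lock", 1)

lock( 1)                  -- acquire; blocks until the token is available
-- ... critical section ...
lock( -1)                 -- release

-- non-blocking variant
if lock( 1, "try") then
    -- got the lock without waiting
    lock( -1)
end
```

The generated lock_func can be passed to other lanes, so all parties contend on the same tokens.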
Similar sugar exists for atomic counters:
atomic_func|lanes.cancel_error = lanes.genatomic( linda_h, key [,initial_num=0.0])

new_num|lanes.cancel_error = atomic_func( [diff_num=+1.0])
Each time it is called, the generated function changes linda[key] atomically, without other lanes being able to interfere. The new value is returned. You can use either a diff of 0.0, or "get", to just read the current value.
Note that the generated functions can be passed on to other lanes.
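A brief sketch of a shared counter (key name illustrative):

```lua
local lanes = require "lanes".configure()
local linda = lanes.linda()

-- an atomic counter starting at 0
local counter = lanes.genatomic( linda, "hits", 0.0)

counter()                     -- default diff of +1.0
counter( 5.0)                 -- add 5
local current = counter( 0.0) -- zero diff: read the value without changing it
```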
Data passed between lanes (either as starting parameters, return values, upvalues or via Lindas) must conform to the following:
Originally, a C function was copied from one Lua state to another as follows:
// expects a C function on top of the source Lua stack
copy_func( lua_State *dest, lua_State* source)
{
    // extract the C function pointer from the source
    lua_CFunction func = lua_tocfunction( source, -1);
    // transfer upvalues; the dest Lua stack then contains a copy of all upvalues
    int nup = transfer_upvalues( dest, source);
    lua_pushcclosure( dest, func, nup);
}
This has the main drawback of not being LuaJIT-compatible, because some functions registered by LuaJIT are not regular C functions, but specially optimized implementations. As a result, lua_tocfunction() returns NULL for them.
Therefore, Lanes no longer transfers functions that way. Instead, functions are transferred as follows (more or less):
// expects a C function on top of the source Lua stack
copy_func( lua_State *dest, lua_State* source)
{
    // fetch the function's 'name' from the source lookup database
    char const* funcname = lookup_func_name( source, -1);
    // lookup a function bound to this name in the destination state, and push it on the stack
    push_resolved_func( dest, funcname);
}
The devil lies in the details: what does "function lookup" mean?
Since functions are first class values, they don't have a name. All we know for sure is that when a C module registers some functions, they are accessible to the script that required the module through some exposed variables.
For example, loading the string base library creates a table accessible when indexing the global environment with key "string". Indexing this table with "match", "gsub", etc. will give us a function.
When a lane generator creates a lane and performs initializations described by the list of base libraries and the list of required modules, it recursively scans the table created by the initialisation of the module, looking for all values that are C functions.
Each time a function is encountered, the sequence of keys that reached it is concatenated into a (hopefully) unique name. The [name, function] and [function, name] pairs are both stored in a lookup table in all involved Lua states (the main Lua state and the lanes' states).
Then, when a function is transferred from one state to another, all we have to do is retrieve the name associated with the function in the source Lua state, then with that name retrieve the equivalent function that already exists in the destination state.
Note that there is no need to transfer upvalues, as they are already bound to the function registered in the destination state. (And in any event, it is not possible to create a closure from a C function pushed on the stack, it can only be created with a lua_CFunction pointer).
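The lookup idea can be illustrated in plain Lua (this is an illustration of the principle, not Lanes' actual internals):

```lua
-- walk a module table, deriving a name from the key sequence, and store
-- both directions: [name] = function and [function] = name
local func_of, name_of = {}, {}

local function scan( t, prefix, seen)
    seen = seen or {}
    if seen[t] then return end  -- guard against cycles
    seen[t] = true
    for k, v in pairs( t) do
        if type( k) == "string" then
            local name = prefix .. "." .. k
            if type( v) == "function" then
                func_of[name] = v
                name_of[v] = name
            elseif type( v) == "table" then
                scan( v, name, seen)
            end
        end
    end
end

scan( string, "string")
assert( func_of["string.match"] == string.match)
assert( name_of[string.gsub] == "string.gsub")
```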
There are several issues here:
string2 = string
Most Lua extension modules should work unaltered with Lanes. If the module simply ties C side features to Lua, everything is fine without alterations. The luaopen_...() entry point will be called separately for each lane, where the module is require'd from.
If, however, it also performs one-time C side initializations, these should be wrapped in a one-time-only construct such as the one below.
int luaopen_module( lua_State *L )
{
    static char been_here;  /* 0 by ANSI C */

    // Calls to 'require' are serialized by Lanes; this is safe.
    if (!been_here)
    {
        been_here = 1;
        ... one-time initializations ...
    }

    ... binding to Lua ...
}
Starting with version 3.13.0, a new way of passing full userdata across lanes uses a new __lanesclone metamethod. When a deep userdata is cloned, Lanes calls __lanesclone once, in the context of the source lane. The call receives the clone and original as light userdata, plus the actual userdata size, as in clone:__lanesclone(original,size), and should perform the actual cloning. A typical implementation would look like (BEWARE, THIS CHANGED WITH VERSION 3.16.0):
static int clonable_lanesclone( lua_State* L)
{
    switch( lua_gettop( L))
    {
        case 3:
        {
            struct s_MyClonableUserdata* self = lua_touserdata( L, 1);
            struct s_MyClonableUserdata* from = lua_touserdata( L, 2);
            size_t len = lua_tointeger( L, 3);
            assert( len == sizeof(struct s_MyClonableUserdata));
            *self = *from;
            return 0;
        }

        default:
            (void) luaL_error( L, "Lanes called clonable_lanesclone with unexpected parameters");
    }
    return 0;
}
NOTE: In the event the source userdata has uservalues, it is not necessary to create them for the clone, Lanes will handle their cloning.
Of course, more complex objects may require smarter cloning behavior than a simple memcpy. Also, the module initialisation code should make each metatable accessible from the module table itself as in:
int luaopen_deep_test( lua_State* L)
{
    luaL_newlib( L, deep_module);

    // preregister the metatables for the types we can instantiate so that Lanes can know about them
    if( luaL_newmetatable( L, "clonable"))
    {
        luaL_setfuncs( L, clonable_mt, 0);
        lua_pushvalue( L, -1);
        lua_setfield( L, -2, "__index");
    }
    lua_setfield( L, -2, "__clonableMT"); // actual name is not important

    if( luaL_newmetatable( L, "deep"))
    {
        luaL_setfuncs( L, deep_mt, 0);
        lua_pushvalue( L, -1);
        lua_setfield( L, -2, "__index");
    }
    lua_setfield( L, -2, "__deepMT"); // actual name is not important

    return 1;
}
Then a new clonable userdata instance can be created just like any non-Lanes-aware userdata, as long as its metatable contains the aforementioned __lanesclone method.
int luaD_new_clonable( lua_State* L)
{
    lua_newuserdata( L, sizeof( struct s_MyClonableUserdata));
    luaL_setmetatable( L, "clonable");
    return 1;
}
The mechanism Lanes uses for sharing Linda handles between separate Lua states can be used for custom userdata as well. Here's what to do.
void* idfunc( lua_State* L, DeepOp op_);
Deep userdata management will take care of tying to __gc methods, and doing reference counting to see how many proxies are still there for accessing the data. Once there are none, the data will be freed through a call to the idfunc you provided.
Deep userdata in transit inside keeper states (sent through a linda but not yet consumed) don't trigger a call to idfunc(eDO_delete) and aren't considered by reference counting. The rationale is the following:
If some non-keeper state holds a deep userdata for some deep object, then even if the keeper collects its own deep userdata, it shouldn't be cleaned up since the refcount is not 0.
OTOH, if a keeper state holds the last deep userdata for some deep object, then no lane can do actual work with it. Deep userdata's idfunc() is never called from a keeper state.
Therefore, Lanes can just call idfunc(eDO_delete) when the last non-keeper-held deep userdata is collected, as long as it doesn't do the same in a keeper state after that, since any remaining deep userdata in keeper states now hold stale pointers.
NOTE: The lifespan of deep userdata may exceed that of the Lua state that created it. The allocation of the data storage should not be tied to the Lua state used. In other words, use malloc()/free() or similar memory handling mechanism.
Lane handles are not implemented as deep userdata, and thus cannot be copied across lanes. This is intentional; problems would occur at least when multiple lanes were to wait upon one to get ready. It is also a matter of design simplicity.
The same benefits can be achieved by having a single worker lane spawn all the sublanes, and keep track of them. Communications to and from this lane can be handled via a Linda.
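A sketch of that pattern (the lane bodies, linda keys and messages here are illustrative, not a fixed API):

```lua
local lanes = require "lanes".configure()
local linda = lanes.linda()

-- the supervisor is the only lane holding sublane handles
local supervisor = lanes.gen( "*", function( l)
    local sublanes = {}
    while true do
        local key, task = l:receive( "task", "quit")
        if key == "quit" then break end
        -- spawn a sublane for the task and keep its handle locally
        sublanes[#sublanes + 1] = lanes.gen( "*", task)()
    end
    -- wait for all sublanes to finish before exiting
    for _, h in ipairs( sublanes) do h:join() end
    return true
end)( linda)

-- any lane holding the linda can request work or shut the supervisor down
linda:send( "task", function() return 2 + 2 end)
linda:send( "quit", true)
```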
In multithreaded scenarios, giving multiple parameters to print() or file:write() may cause them to be overlapped in the output, something like this:
A: print( 1, 2, 3, 4 )
B: print( 'a', 'b', 'c', 'd' )

1 a b 2 3 c d 4
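One common workaround (a sketch, not part of Lanes) is to build the complete line first and emit it with a single write, since one write of one string cannot be split in the middle the way print()'s separate writes of each argument can:

```lua
-- join all arguments into one tab-separated string
local function format_line( ...)
    local n = select( "#", ...)
    local parts = {}
    for i = 1, n do
        parts[i] = tostring( (select( i, ...)))
    end
    return table.concat( parts, "\t")
end

-- a print() replacement that performs a single write per line
local function print_line( ...)
    io.write( format_line( ...), "\n")
end

print_line( 1, 2, 3, 4)   -- emitted as one contiguous line
```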
Lanes is about making multithreading easy, and natural in the Lua state of mind. Expect performance not to be an issue, if your program is logically built. Here are some things one should consider, if best performance is vital:
Cancellation of lanes uses the Lua error mechanism with a special lightuserdata error sentinel. If you use pcall in code that needs to be cancellable from the outside, the sentinel might not get through to Lanes, preventing the lane from being cleanly cancelled. You should re-throw any lightuserdata error you catch.
This system can actually be used by the application to detect cancellation, perform its own cancellation duties, and pass on the error so Lanes will get it. If Lanes does not get a clean cancellation from a lane in due time, it may forcefully kill the lane.
The sentinel is exposed as lanes.cancel_error, if you wish to use its actual value.
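A sketch of a cancellation-aware pcall wrapper ('work' stands for your own function):

```lua
local lanes = require "lanes".configure()

local function protected( work, ...)
    local ok, err = pcall( work, ...)
    if not ok then
        if err == lanes.cancel_error then
            -- perform any local cleanup here, then re-throw the sentinel
            -- so Lanes sees the cancellation and can complete it cleanly
            error( err)
        end
        -- a regular error: handle it, or re-throw as appropriate
        print( "caught: " .. tostring( err))
    end
    return ok
end
```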
See CHANGES.
For feedback, questions and suggestions: