Planet GNU

Aggregation of development blogs from the GNU Project

December 07, 2022

FSF Blogs

Join the FSF and support the tech team

FSF tech team member Michael McMahon discusses the team's year-round jobs and responsibilities, and how it is all done in freedom, to support and strengthen the freedom of the free software community.

07 December, 2022 11:25PM

December 06, 2022

Fall Bulletin: Fully shareable, fully lovable

The fall 2022 "Free Software Foundation Bulletin" is here! Read about how to protect your privacy, a reflection on this year's GNU Hackers' Meeting, what's new in Trisquel 11, and more!

06 December, 2022 09:23PM

December 05, 2022

GNUnet News

GNUnet 0.19.0

GNUnet 0.19.0 released

We are pleased to announce the release of GNUnet 0.19.0.
GNUnet is an alternative network stack for building secure, decentralized and privacy-preserving distributed applications. Our goal is to replace the old insecure Internet protocol stack. Starting from an application for secure publication of files, it has grown to include all kinds of basic protocol components and applications towards the creation of a GNU internet.

This is a new major release. It breaks protocol compatibility with the 0.18.x versions. Please be aware that Git master is thus henceforth (and has been for a while) INCOMPATIBLE with the 0.18.x GNUnet network, and interactions between old and new peers will result in issues. 0.18.x peers will be able to communicate with Git master or 0.19.x peers, but some services will not be compatible.
In terms of usability, users should be aware that there are still a number of known open issues, in particular with respect to ease of use, but also some critical privacy issues, especially for mobile users. Also, the nascent network is tiny and thus unlikely to provide good anonymity or extensive amounts of interesting information. As a result, the 0.19.0 release is still only suitable for early adopters with some reasonable pain tolerance.

Download links

The GPG key used to sign is: 3D11063C10F98D14BD24D1470B0998EF86F59B6A

Note that due to mirror synchronization, not all links may be functional early after the release. For direct access try http://ftp.gnu.org/gnu/gnunet/

Noteworthy changes in 0.19.0 (since 0.18.2)

  • UTIL: Moved GNUNET_BIO_MetaData handling into FS.
  • BUILD: platform.h removed, as it should not be used by third parties anyway. gnunet_config.h is renamed to gnunet_private_config.h, and the new replacement gnunet_config.h is added to provide build information for components linking against or using GNUnet.
  • UTIL: Components that are part of gnunet_util_lib.h must now be included through gnunet_util_lib.h, and through that header only.
  • NAMESTORE: gnunet-namestore can now parse a list of records into zones from stdin in the new recordline format.
  • GTK: Added an identity selector to the search to accommodate the previously deprecated "default" identities for subsystems.
  • Other: Postgres plugin implementations modernized and previous regressions fixed.

A detailed list of changes can be found in the ChangeLog and the bug tracker.

Known Issues

  • There are known major design issues in the TRANSPORT, ATS and CORE subsystems which will need to be addressed in the future to achieve acceptable usability, performance and security.
  • There are known moderate implementation limitations in CADET that negatively impact performance.
  • There are known moderate design issues in FS that also impact usability and performance.
  • There are minor implementation limitations in SET that create unnecessary attack surface for availability.
  • The RPS subsystem remains experimental.
  • Some high-level tests in the test-suite fail non-deterministically due to the low-level TRANSPORT issues.

In addition to this list, you may also want to consult our bug tracker at bugs.gnunet.org which lists about 190 more specific issues.

Thanks

This release was the work of many people. The following people contributed code and were thus easily identified: Christian Grothoff, Tristan Schwieren, madmurphy, t3sserakt, TheJackiMonster and Martin Schanzenbach.

05 December, 2022 11:00PM

December 01, 2022

FSF Blogs

FSF Events

Free Software Directory meeting on IRC: Friday, December 30, starting at 12:00 EST (17:00 UTC)

Join the FSF and friends on Friday, December 30, from 12:00 to 15:00 EST (17:00 to 20:00 UTC) to help improve the Free Software Directory.

01 December, 2022 05:39PM

Free Software Directory meeting on IRC: Friday, December 23, starting at 12:00 EST (17:00 UTC)

Join the FSF and friends on Friday, December 23, from 12:00 to 15:00 EST (17:00 to 20:00 UTC) to help improve the Free Software Directory.

01 December, 2022 05:36PM

Free Software Directory meeting on IRC: Friday, December 16, starting at 12:00 EST (17:00 UTC)

Join the FSF and friends on Friday, December 16, from 12:00 to 15:00 EST (17:00 to 20:00 UTC) to help improve the Free Software Directory.

01 December, 2022 05:32PM

Free Software Directory meeting on IRC: Friday, December 09, starting at 12:00 EST (17:00 UTC)

Join the FSF and friends on Friday, December 09, from 12:00 to 15:00 EST (17:00 to 20:00 UTC) to help improve the Free Software Directory.

01 December, 2022 05:29PM

November 30, 2022

FSF Blogs

Fifteen years of LibrePlanet: Register now to join us on March 18 and 19

The fifteenth edition of the Free Software Foundation's (FSF) annual conference is only a couple of months away. Registration is open now.

30 November, 2022 09:38PM

texinfo @ Savannah

Texinfo 7.0.1 released

We have released version 7.0.1 of Texinfo, the GNU documentation format. This is a minor bug-fix release.

It's available via a mirror (xz is much smaller than gz, but gz is available too just in case):
http://ftpmirror.gnu.org/texinfo/texinfo-7.0.1.tar.xz
http://ftpmirror.gnu.org/texinfo/texinfo-7.0.1.tar.gz

Please send any comments to bug-texinfo@gnu.org.

Full announcement:
https://lists.gnu.org/archive/html/bug-texinfo/2022-11/msg00237.html

30 November, 2022 06:38PM by Gavin D. Smith

November 28, 2022

Andy Wingo

are ephemerons primitive?

Good evening :) A quick note, tonight: I've long thought that ephemerons are primitive and can't be implemented with mark functions and/or finalizers, but today I think I have a counterexample.

For context, one of the goals of the GC implementation I have been working on is to replace Guile's current use of the Boehm-Demers-Weiser (BDW) conservative collector. Of course, changing a garbage collector for a production language runtime is risky, and for Guile one of the mitigation strategies for this work is that the new collector is behind an abstract API whose implementation can be chosen at compile-time, without requiring changes to user code. That way we can first switch to BDW-implementing-the-new-GC-API, then switch the implementation behind that API to something else.

Abstracting GC is a tricky problem to get right, and I thank the MMTk project for showing that this is possible -- you have user-facing APIs that need to be implemented by concrete collectors, but also extension points so that the user can provide some compile-time configuration too, for example to provide field-tracing visitors that take into account how a user wants to lay out objects.

Anyway. As we discussed last time, ephemerons usually have explicit support from the GC, so we need an ephemeron abstraction as part of the abstract GC API. The question is, can BDW-GC provide an implementation of this API?

I think the answer is "yes, but it's very gnarly and will kill performance so bad that you won't want to do it."

the contenders

Consider that the primitives that you get with BDW-GC are custom mark functions, run on objects when they are found to be live by the mark workers; disappearing links, a kind of weak reference; and finalizers, which receive the object being finalized, can allocate, and indeed can resurrect the object.

BDW-GC's finalizers are a powerful primitive, but not one that is useful for implementing the "conjunction" aspect of ephemerons, as they cannot constrain the marker's idea of graph connectivity: a finalizer can only prolong the life of an object subgraph, not cut it short. So let's put finalizers aside.

Weak references have a tantalizingly close kind of conjunction property: if the weak reference itself is alive, and the referent is also otherwise reachable, then the weak reference can be dereferenced. However this primitive only involves the two objects E and K; there's no way to then condition traceability of a third object V to E and K.

We are left with mark functions. These are an extraordinarily powerful interface in BDW-GC, but also somewhat expensive: not inlined, and going against the grain of what BDW-GC is really about (heaps in which the majority of all references are conservative). But, OK. The way they work is, your program allocates a number of GC "kinds", and associates mark functions with those kinds. Then when you allocate objects, you use those kinds. BDW-GC will call your mark functions when tracing an object of those kinds.

Let's assume firstly that you have a kind for ephemerons; then when you go to mark an ephemeron E, you mark the value V only if the key K has been marked. Problem solved, right? Only halfway: you also have to handle the case in which E is marked first, then K. So you publish E to a global hash table, and... well. You would mark V when you mark a K for which there is a published E. But, for that you need a hook into marking K, and K can be any object...

So now we assume additionally that all objects are allocated with user-provided custom mark functions, and that all mark functions check if the marked object is in the published table of pending ephemerons, and if so mark the corresponding values. This is essentially what a proper ephemeron implementation would do, though there are some optimizations one can do to avoid checking the table for each object before the mark stack runs empty for the first time. In this case, yes, you can do it! Additionally, if you register disappearing links for the K field in each E, you can know if an ephemeron E was marked dead in a previous collection. Add a pre-mark hook (something BDW-GC provides) to clear the pending ephemeron table, and you are in business.
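
Concretely, the mark function for the ephemeron kind might look like the following sketch. It is illustrative only: GC_is_marked and GC_MARK_AND_PUSH are real BDW-GC interfaces (declared in gc/gc_mark.h), and the kind registration would go through GC_new_proc and GC_new_kind, but the pending-table helper is hypothetical, and a real version would have to avoid allocating from within the marker.

  #include <gc/gc.h>
  #include <gc/gc_mark.h>

  struct ephemeron { void *key; void *value; };

  /* Hypothetical allocation-free table of ephemerons whose key has not
     yet been marked.  */
  extern void pending_table_add (struct ephemeron *e);

  /* Mark function for the ephemeron kind, registered via
     GC_new_proc/GC_new_kind.  The key is deliberately not traced.  */
  static struct GC_ms_entry *
  mark_ephemeron (GC_word *addr, struct GC_ms_entry *mark_stack_ptr,
                  struct GC_ms_entry *mark_stack_limit, GC_word env)
  {
    struct ephemeron *e = (struct ephemeron *) addr;
    if (e->key && GC_is_marked (e->key))
      /* Key already known to be live: trace the value as usual.  */
      mark_stack_ptr = GC_MARK_AND_PUSH (e->value, mark_stack_ptr,
                                         mark_stack_limit,
                                         (void **) &e->value);
    else
      /* Key not yet marked: defer.  Every other object's mark function
         must then check the pending table and mark the values of any
         ephemerons whose key it has just marked.  */
      pending_table_add (e);
    return mark_stack_ptr;
  }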

yes, but no

So, it is possible to implement ephemerons with just custom mark functions. I wouldn't want to do it, though: missing the mostly-avoid-pending-ephemeron-check optimization would be devastating, and really what you want is support in the GC implementation. I think that for the BDW-GC implementation in whippet I'll just implement weak-key associations, in which the value is always marked strongly unless the key was dead on a previous collection, using disappearing links on the key field. That way a (possibly indirect) reference from a value V to a key K can indeed keep K alive, but oh well: it's a conservative approximation of what should happen, and not worse than what Guile has currently.

Good night and happy hacking!

28 November, 2022 09:11PM by Andy Wingo

November 24, 2022

hyperbole @ Savannah

Installing Hyperbole from GNU-devel ELPA Packages

Installing the latest development version of Hyperbole

The latest development version of Hyperbole can be installed directly from the GNU-devel ELPA packages using the built-in Emacs package manager.

The Elpa GNU-devel package repository provides a development version of Hyperbole. It pulls from the latest Hyperbole development branch to get the tip version and makes an installable package; this is done on a daily basis. Installing it does not require any new package manager software. Since Hyperbole is a mature package, this version is usually fine to use. But new features are tested on this branch, and once in a while it may break for a short time before a fix is pushed.

To download and install this version of Hyperbole, add the following lines to your personal Emacs initialization file, typically "~/.emacs". (For further details, see info page "(emacs)Init File", or Init-File).

;; Hyperbole supports Emacs 27 or newer.
(when (< emacs-major-version 27)
  (error "Hyperbole requires Emacs 27 or above; you are running version %d" emacs-major-version))
;; Make the GNU-devel ELPA archive available to the package system.
(require 'package)
(add-to-list 'package-archives '("gnu-devel" . "https://elpa.gnu.org/devel/"))
;; Download and install Hyperbole if it is not already installed.
(unless (package-installed-p 'hyperbole)
  (package-refresh-contents)
  (package-install 'hyperbole))
;; Enable Hyperbole's global minor mode.
(hyperbole-mode 1)

Now save the file and restart Emacs.  Hyperbole will then be downloaded and compiled for use with your version of Emacs; give it a minute or two. You may see a bunch of compilation warnings, but these can be safely ignored.

24 November, 2022 10:33PM by Mats Lidell

November 22, 2022

parallel @ Savannah

GNU Parallel 20221122 ('Херсо́н') released

GNU Parallel 20221122 ('Херсо́н') has been released. It is available for download at: lbry://@GnuParallel:4

Quote of the month:

  [GNU Parallel] is the most amazing tool ever invented for bioinformatics!
    -- Istvan Albert https://www.ialbert.me/

New in this release:

  • Support for IPv6 addresses and _ in hostnames in --sshlogin.
  • Use --total-jobs for --eta/--bar if generating jobs is slow.
  • A lot of bugs fixed in --latest-line.
  • Better support for MSYS2.
  • Better Text::CSV error messages.
  • --bar supports UTF8.
  • GNU Parallel is now on Mastodon: @GNU_Parallel@hostux.social
  • Bug fixes and man page updates.

GNU Parallel - For people who live life in the parallel lane.

If you like GNU Parallel record a video testimonial: Say who you are, what you use GNU Parallel for, how it helps you, and what you like most about it. Include a command that uses GNU Parallel if you feel like it.

About GNU Parallel

GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.

For example you can run this to convert all jpeg files into png and gif files and have a progress bar:

  parallel --bar convert {1} {1.}.{2} ::: *.jpg ::: png gif

Or you can generate big, medium, and small thumbnails of all jpeg files in sub dirs:

  find . -name '*.jpg' |
    parallel convert -geometry {2} {1} {1//}/thumb{2}_{1/} :::: - ::: 50 100 200

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with:

    $ (wget -O - pi.dk/3 || lynx -source pi.dk/3 || curl pi.dk/3/ || \
       fetch -o - http://pi.dk/3 ) > install.sh
    $ sha1sum install.sh | grep 883c667e01eed62f975ad28b6d50e22a
    12345678 883c667e 01eed62f 975ad28b 6d50e22a
    $ md5sum install.sh | grep cc21b4c943fd03e93ae1ae49e28573c0
    cc21b4c9 43fd03e9 3ae1ae49 e28573c0
    $ sha512sum install.sh | grep ec113b49a54e705f86d51e784ebced224fdff3f52
    79945d9d 250b42a4 2067bb00 99da012e c113b49a 54e705f8 6d51e784 ebced224
    fdff3f52 ca588d64 e75f6033 61bd543f d631f592 2f87ceb2 ab034149 6df84a35
    $ bash install.sh

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your command line will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2018): GNU Parallel 2018, March 2018, https://doi.org/10.5281/zenodo.1146014.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/ Identi.ca/Google+/Twitter/Facebook/Linkedin/mailing lists
  • Get the merchandise https://gnuparallel.threadless.com/designs/gnu-parallel
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference

If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)

If GNU Parallel saves you money:

  • (Have your company) donate to FSF https://my.fsf.org/donate/

About GNU SQL

GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.

About GNU Niceload

GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.

22 November, 2022 07:03PM by Ole Tange

November 21, 2022

Luca Saiu

Announcing make-gallery, a simple web image gallery generator

I wrote a script generating an image gallery suitable for inclusion in web pages. Since it can be generally useful I cleaned it up and published it, of course as free software (https://www.gnu.org/philosophy/free-sw.html); you are welcome to download a copy of ‘make-gallery’. The software is released under the GNU General Public Licence (https://www.gnu.org/licenses/gpl-3.0.html) version 3 or later; the generated code is in the public domain. I hate the web: I have never made a mystery of my personal dislike for the web with its gratuitous ever-growing complexity, inefficiency, lack of expressivity, hostility to the developer and to ... [Read more]

21 November, 2022 12:33AM by Luca Saiu (positron@gnu.org)

November 20, 2022

hyperbole @ Savannah

GNU Hyperbole 8.0.0, the Epiphany release, is now available on GNU ELPA

========================================================================

  • Overview

========================================================================

GNU Hyperbole 8.0.0, the Epiphany release, is now available on GNU ELPA.
Hyperbole is a unique hypertextual information management Emacs package
that works across all Emacs modes, letting the computer do the hard work
while you benefit from its sophisticated context-sensitive linking and
navigation capabilities.  Hyperbole has always been one of the best
documented Emacs packages.  With Version 8 comes excellent test coverage:
over 200 automated tests to ensure quality. We hope you'll give it a try.

What's new in this release is described here:

  www.gnu.org/s/hyperbole/HY-NEWS.html

  Everything back until release 7.1.3 is new since the last major
  release announcement (over a year ago), so updates are extensive.

If you prefer video introductions, visit the videos linked to below; otherwise,
skip to the next section.

GNU Hyperbole Videos

========================================================================

  • Introduction

========================================================================

Hyperbole is like Markdown for hypertext.  Hyperbole automatically
recognizes dozens of common patterns in any buffer regardless of mode
and can instantly activate them as hyperbuttons with a single key:
email addresses, URLs, grep -n outputs, programming backtraces,
sequences of Emacs keys, programming identifiers, Texinfo and Info
cross-references, Org links, Markdown links and on and on.  All you do
is load Hyperbole and then your text comes to life with no extra
effort or complex formatting.

Hyperbole interlinks all your working information within Emacs for
fast access and editing, not just within special modes.  Every button
is automatically assigned a type and new types can be developed for
your own buttons with simple function definitions.  You can create
your own buttons by simply dragging between two buffers.

But Hyperbole is also a hub controller for your information, supplying
built-in capabilities for contact management and hierarchical record
lookup, legal-numbered outlines with hyperlinkable views, and a unique
window and frame manager.  It is even Org-compatible so you can use
all of Org's capabilities together with Hyperbole.

Hyperbole is unique, powerful, extensively documented, and free.  Like
Emacs, Org, Counsel and Helm, Hyperbole has many different uses all
based around the theme of reducing cognitive load and improving your
everyday information management.  It reduces cognitive load by using
a single Action Key, {M-RET}, across many different contexts
which automatically chooses the best action in each context.

Then as you grow with it across time, it helps you build new capabilities
that continue to speed your work.

========================================================================

  • Installing and Using Hyperbole

========================================================================

To install within GNU Emacs, use:

   {M-x package-install RET hyperbole RET}

   Hyperbole installs in less than a minute and can be uninstalled even
   faster if ever need be.  Give it a try.

Then to invoke its minibuffer menu, use:

   {C-h h} or {M-x hyperbole RET}

The best way to get a feel for many of its capabilities is to invoke the
all new, interactive DEMO and explore sections of interest:

   {C-h h d d}

To permanently activate Hyperbole in your Emacs initialization file, add
the line:

   (hyperbole-mode 1)

Hyperbole is a minor mode that may be disabled at any time with:

   {C-u 0 hyperbole-mode RET}

The Hyperbole home page with screenshots is here:

   www.gnu.org/s/hyperbole

For use cases, see:

   www.gnu.org/s/hyperbole/HY-WHY.html

For what users think about Hyperbole, see:

   www.gnu.org/s/hyperbole/hyperbole.html#user-quotes

Enjoy,

The Hyperbole Team

20 November, 2022 10:28PM by Mats Lidell

gnulib @ Savannah

Gnulib helps you get away from fork() + exec()

Spawning a new process has traditionally been coded by a fork() call, followed by an execv/execl/execlp/execvp call in the child process. This is often referred to as the fork + exec idiom.

In 90% of the cases, there is something better: the posix_spawn/posix_spawnp functions.

Why is that better?

First, it's faster. The glibc implementation of posix_spawn, on Linux, uses a specialized system call (clone3) with a custom child-process stack, which makes it outperform the fork + exec idiom already today. And another speedup of 30% is being considered; see https://lwn.net/Articles/908268/ .

Second, it's more portable. While most Unix-like operating systems nowadays have both fork and posix_spawn, there are platforms which don't have fork(), namely Windows (excluding Cygwin). This is where Gnulib comes in: Gnulib provides a posix_spawn implementation not only for the Unix platforms which lack it (today, that's only HP-UX), but also for Windows. In fact, Gnulib's posix_spawn implementation is the world's first for Windows platforms; the mingw libraries don't have one.

Why only in 90% of the cases?

Typically, between the fork and the exec part, the application code will set up or configure some things in the child process, such as closing file descriptors (necessary when pipes are involved), changing the current directory, and things like that.

posix_spawn supports a certain set of setup / configuration "actions". Namely: searching for the program in $PATH, opening files, shuffling around or closing file descriptors, and setting the tty-related process group. If that's all that the application code needs, then posix_spawn fits the bill. That should be 90% of the cases in practice.

How to do the change?

Before you replace a bit of fork + exec code with posix_spawn, you need to understand the main difference: in the old approach, the setup / configuration "actions" are encoded as C system calls, whereas with posix_spawn they are specified declaratively, by constructing an "actions" object in memory.
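
For illustration, here is a sketch of what such a change can look like (this example is mine, not from the Gnulib documentation, and error handling is abbreviated). A typical fork + exec fragment that wires a pipe to the child's stdin becomes a declarative list of file actions:

  #include <spawn.h>
  #include <unistd.h>

  extern char **environ;

  /* Spawn "grep foo" with its stdin connected to the read end of PIPEFD.
     The fork + exec version would call dup2 and close between fork ()
     and execvp (); here those steps become file actions.  */
  pid_t
  spawn_grep (int pipefd[2])
  {
    pid_t pid;
    char *argv[] = { "grep", "foo", NULL };
    posix_spawn_file_actions_t actions;

    posix_spawn_file_actions_init (&actions);
    posix_spawn_file_actions_adddup2 (&actions, pipefd[0], 0);
    posix_spawn_file_actions_addclose (&actions, pipefd[0]);
    posix_spawn_file_actions_addclose (&actions, pipefd[1]);

    if (posix_spawnp (&pid, "grep", &actions, NULL, argv, environ) != 0)
      pid = -1;  /* abbreviated error handling */
    posix_spawn_file_actions_destroy (&actions);
    return pid;
  }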

Once you have made this change, test it on a glibc system.

And finally, for portability, import the Gnulib modules corresponding to all the posix_spawn* functions that you need.

20 November, 2022 04:27PM by Bruno Haible

November 16, 2022

lightning @ Savannah

GNU lightning 2.2.0 release

GNU lightning 2.2.0 released!

GNU lightning is a library to aid in making portable programs
that compile assembly code at run time.

Development:
http://git.savannah.gnu.org/cgit/lightning.git

Download release:
ftp://ftp.gnu.org/gnu/lightning/lightning-2.2.0.tar.gz

  GNU Lightning 2.2.0 extends the 2.1.4 release by adding support for
Darwin aarch64, tested on Apple M1.

  There is also a new --enable-devel-strong-type-checking configure
option, not enabled by default; code that works with that option
will work on Apple M1.

  This release required significant rework, as the Apple ABI on aarch64
requires arguments to be truncated and zero/sign extended, unlike all
other ports. JIT generation now understands this and uses the system ABI,
avoiding redundant truncation and zero/sign extension.

  Due to the significant rework, the library major number was bumped,
and the opportunity was used to reorder the jit_code_t enumeration.
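
  For readers new to the library, here is a minimal usage sketch,
adapted from the incr example in the GNU lightning manual; it
JIT-compiles a function that increments its integer argument:

  #include <stdio.h>
  #include <lightning.h>

  static jit_state_t *_jit;

  typedef int (*pifi) (int);   /* pointer to function of int returning int */

  int
  main (int argc, char *argv[])
  {
    jit_node_t *in;
    pifi incr;

    init_jit (argv[0]);
    _jit = jit_new_state ();

    jit_prolog ();                 /* function prolog */
    in = jit_arg ();               /* declare one argument */
    jit_getarg (JIT_R0, in);       /* R0 = argument */
    jit_addi (JIT_R0, JIT_R0, 1);  /* R0 = R0 + 1 */
    jit_retr (JIT_R0);             /* return R0 */

    incr = (pifi) jit_emit ();     /* generate the code */
    jit_clear_state ();

    printf ("%d + 1 = %d\n", 5, incr (5));  /* call the generated code */

    jit_destroy_state ();
    finish_jit ();
    return 0;
  }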

16 November, 2022 03:15PM by Paulo César Pereira de Andrade

November 13, 2022

GNUnet News

NGI Zero Entrust: "GNS to DNS Migration and Zone Management"

We are happy to announce that we have successfully acquired funding for further GNS development and polishing!

The GNU Name System specification is in its final stages. Migration paths and large-scale testing, as well as generating interest in running GNS zones and registrars, are the next logical steps. Hence, this project aims to

  1. Facilitate the management of GNS zones by administrators.
  2. Provide users with means to resolve real-world names by (partially) mirroring the DNS root zone.

Ad 1.: To ease adoption, a framework for GNS registrars will be developed for zone management. The registrar framework will allow GNS zone administrators to provide a web interface for subdomain registration by other users. The services may also be provided for a fee, similar to how DNS domain registrars operate, in order to cover running costs. The framework is envisioned to support integration of privacy-friendly payments with GNU Taler.

Ad 2.: We are already hosting and shipping a zone for gnunet.org as part of our GNS implementation. To demonstrate how existing DNS registrars could migrate zones from DNS to GNS, we plan to run multiple GNS zones ourselves which contain the zone information from real-world DNS top-level domains. This will also show how GNS, when used in parallel, can secure the existing DNS namespace against censorship and outages. A selection of existing top-level domains for which open data exists will be hosted and served through GNS in order to facilitate the daily use of the name system. We are planning to integrate at least three DNS zones and publish them through GNS for users to resolve in a default GNUnet installation.

Watch this space and the mailing list for updates!

This work is generously funded by NLnet as part of their NGI Zero Entrust Programme.

13 November, 2022 11:00PM

November 10, 2022

GNU Taler news

Richard Stallman's Business Pitch for Taler Systems SA

To fund further development of GNU Taler, Taler Systems SA is still looking for investors. Our chief moral officer has recorded a special business pitch for those who are interested.

10 November, 2022 11:00PM

November 08, 2022

poke @ Savannah

Binary Tools devroom @ FOSDEM 2023

GNU poke will be part of the Binary Tools devroom at the next edition of FOSDEM, to be held on the 4th and 5th of February 2023 in Brussels.

Below is the Call For Proposals for the devroom.  Hope to see you there, it's gonna be fun! :)

Dates
=====

   25th November     CFP deadline
   15th December     Announcement of selected activities
   4 & 5th February  Conference dates

About the devroom
=================

  The Binary Tools Devroom at FOSDEM 2023 is an informal, technical
  event oriented to authors, users and enthusiasts of FLOSS
  programs that deal with binary data.

  This includes binary editors, libraries to encode and decode data,
  parser generators, binary data description languages and frameworks,
  binary formats and encodings, assemblers, debuggers, reverse
  engineering suites, and the like.

  The goal of the devroom is for developers to get in touch with each
  other and with users of their tools, have interesting and hopefully
  productive discussions, and finally what is most important: to have
  fun.

Suggested Topics
================

  Here is a non-exhaustive list of binary tools about which we would
  like to have activities:

   - GNU poke
   - fq
   - radare2
   - kaitai struct
   - binwalk
   - wireshark

   Both using (like a nice hack) and developing the tools are on-topic.
   Activities on increasing collaboration between the tools are
   particularly encouraged.

Proposals
=========

   Proposals should be made through the FOSDEM Pentabarf submission tool. You
   do not need to create a new Pentabarf account if you already have one from a
   past year.

   https://penta.fosdem.org/submission/FOSDEM23

   Please select the "Binary Tools Devroom" as the track and ensure
   you include the following information when submitting a proposal:

   - The name of the person, or persons, doing the proposed activity.
   - A short bio (one paragraph) for each person.
   - If desired, a photo 8-)
   - The title of the activity.
   - Activity abstract.
   - Duration of the activity: 15 minutes or 30 minutes.

   The deadline for submissions is November 25th, 2022. FOSDEM will be
   held on the weekend of February 4-5, 2023 and the Binary Tools
   devroom will take place on Sunday, February 5, 2023 in Brussels,
   Belgium.

Contact
=======

  The organizers of the devroom can be reached by sending email to
  binary-devroom-manager@fosdem.org.

  We are also in the #binary-tools IRC channel at irc.libera.chat.

  Please do not hesitate to contact us if you have any inquiry or
  suggestion for the devroom.

08 November, 2022 07:48PM by Jose E. Marchesi

November 07, 2022

texinfo @ Savannah

Texinfo 7.0 released

We have released version 7.0 of Texinfo, the GNU documentation format.

It's available via a mirror (xz is much smaller than gz, but gz is available too just in case):

http://ftpmirror.gnu.org/texinfo/texinfo-7.0.tar.xz
http://ftpmirror.gnu.org/texinfo/texinfo-7.0.tar.gz

Please send any comments to bug-texinfo@gnu.org.

Full announcement:
https://lists.gnu.org/archive/html/bug-texinfo/2022-11/msg00036.html

07 November, 2022 09:15PM by Gavin D. Smith

November 06, 2022

sed @ Savannah

sed-4.9 released [stable]

This is to announce sed-4.9, a stable release.

There have been 51 commits by 9 people in the nearly three years since 4.8.

See the NEWS below for a brief summary.

Thanks to everyone who has contributed!
The following people contributed changes to this release:

  Antonio Diaz Diaz (1)
  Assaf Gordon (5)
  Chris Marusich (1)
  Jim Meyering (28)
  Marvin Schmidt (1)
  Oğuz (1)
  Paul Eggert (11)
  Renaud Pacalet (1)
  Tobias Stoeckmann (2)

Jim [on behalf of the sed maintainers]
==================================================================

Here is the GNU sed home page:
    http://gnu.org/s/sed/

For a summary of changes and contributors, see:
  http://git.sv.gnu.org/gitweb/?p=sed.git;a=shortlog;h=v4.9
or run this command from a git-cloned sed directory:
  git shortlog v4.8..v4.9

To summarize the 2383 gnulib-related changes, run these commands
from a git-cloned sed directory:
  git checkout v4.9
  git submodule summary v4.8

==================================================================
Here are the compressed sources:
  https://ftp.gnu.org/gnu/sed/sed-4.9.tar.gz   (2.2MB)
  https://ftp.gnu.org/gnu/sed/sed-4.9.tar.xz   (1.4MB)

Here are the GPG detached signatures:
  https://ftp.gnu.org/gnu/sed/sed-4.9.tar.gz.sig
  https://ftp.gnu.org/gnu/sed/sed-4.9.tar.xz.sig

Use a mirror for higher download bandwidth:
  https://www.gnu.org/order/ftp.html

Here are the SHA1 and SHA256 checksums:

69ad1f6be316fff4b23594287f16dfd14cd88093  sed-4.9.tar.gz
0UeKGPAzpzrBaCKQH2Uz0wtr5WG8vORv/Xq86TYCKC4  sed-4.9.tar.gz
8ded1b543f1f558cbd5d7b713602f6a8ee84bde4  sed-4.9.tar.xz
biJrcy4c1zlGStaGK9Ghq6QteYKSLaelNRljHSSXUYE  sed-4.9.tar.xz

The SHA256 checksum is base64 encoded, instead of the
hexadecimal encoding that most checksum tools default to.

Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact.  First, be sure to download both the .sig file
and the corresponding tarball.  Then, run a command like this:

  gpg --verify sed-4.9.tar.gz.sig

The signature should match the fingerprint of the following key:

  pub   rsa4096/0x7FD9FCCB000BEEEE 2010-06-14 [SCEA]
        Key fingerprint = 155D 3FC5 00C8 3448 6D1E  EA67 7FD9 FCCB 000B EEEE
  uid                   [ unknown] Jim Meyering <jim@meyering.net>
  uid                   [ unknown] Jim Meyering <meyering@fb.com>
  uid                   [ unknown] Jim Meyering <meyering@gnu.org>

If that command fails because you don't have the required public key,
or that public key has expired, try the following commands to retrieve
or refresh it, and then rerun the 'gpg --verify' command.

  gpg --locate-external-key jim@meyering.net

  gpg --recv-keys 7FD9FCCB000BEEEE

  wget -q -O- 'https://savannah.gnu.org/project/release-gpgkeys.php?group=sed&download=1' | gpg --import -

As a last resort to find the key, you can try the official GNU
keyring:

  wget -q https://ftp.gnu.org/gnu/gnu-keyring.gpg
  gpg --keyring gnu-keyring.gpg --verify sed-4.9.tar.gz.sig


This release was bootstrapped with the following tools:
  Autoconf 2.72a.65-d081
  Automake 1.16i
  Gnulib v0.1-5550-g0524746392

NEWS

* Noteworthy changes in release 4.9 (2022-11-06) [stable]

** Bug fixes

  'sed --follow-symlinks -i' no longer loops forever when its operand
  is a symbolic link cycle.
  [bug introduced in sed 4.2]

  A program with an execution line longer than 2GB can no longer trigger
  an out-of-bounds memory write.

  Using the R command to read an input line of length longer than 2GB
  can no longer trigger an out-of-bounds memory read.

  In locales using UTF-8 encoding, the regular expression '.' no
  longer sometimes fails to match Unicode characters U+D400 through
  U+D7FF (some Hangul Syllables, and Hangul Jamo Extended-B) and
  Unicode characters U+108000 through U+10FFFF (half of Supplemental
  Private Use Area plane B).
  [bug introduced in sed 4.8]

  I/O errors involving temp files no longer confuse sed into using a
  FILE * pointer after fclosing it, which has undefined behavior in C.

** New Features

  The 'r' command now accepts address 0, allowing inserting a file before
  the first line.

** Changes in behavior

   Sed now prints the less-surprising variant in a corner case of
   POSIX-unspecified behavior.  Before, this would print "n".
   Now, it prints "X":

    printf n | sed 'sn\nnXn'; echo

06 November, 2022 10:47PM by Jim Meyering

Parabola GNU/Linux-libre

systemd encrypted boot may be broken by upgrade to openssl v3 (systemd-cryptsetup), and various libcrypto.so.1.1 errors - suggest to postpone upgrading

until https://bugs.archlinux.org/task/76440 is resolved

FS#76440 : systemd-cryptsetup still refers to libcrypto.so.1.1 after upgrading to openssl3

see: https://labs.parabola.nu/issues/3368

UPDATE 2022-11-08: fixed in cryptsetup 2.5.0-4

06 November, 2022 11:33AM by bill auger

November 04, 2022

GNU Taler news

GNU Taler v0.9 released

We are happy to announce the release of GNU Taler v0.9.0.

04 November, 2022 11:00PM

lightning @ Savannah

GNU lightning 2.1.4 release

GNU lightning 2.1.4 released!

GNU lightning is a library to aid in making portable programs
that compile assembly code at run time.

Development:
http://git.savannah.gnu.org/cgit/lightning.git

Download release:
ftp://ftp.gnu.org/gnu/lightning/lightning-2.1.4.tar.gz

  The main features of 2.1.4 are the new LoongArch port, currently
supporting only 64-bit Linux, and a rewrite of the register liveness and
unknown-state logic. Code generation should now be faster.

The matrix of built and tested environments is:
aarch64 Linux
alpha Linux (QEMU)
armv7l Linux (QEMU)
armv7hl Linux (QEMU)
hppa Linux (32 bit, QEMU)
i686 Linux, FreeBSD, NetBSD, OpenBSD and Cygwin/MingW
ia64 Linux
mips Linux
powerpc32 AIX
powerpc64 AIX
powerpc64le Linux
riscv Linux
s390 Linux
s390x Linux
sparc Linux
sparc64 Linux
x32 Linux
x86_64 Linux and Cygwin/MingW


  Highlights are:

  • Faster JIT generation.
  • New LoongArch port.
  • New skip instruction and rework of the align instruction.
  • New bswapr_us, bswapr_ui, bswapr_ul byte swap instructions.
  • New movzr and movnr conditional move instructions.
  • New casr and casi atomic compare and swap instructions.
  • Use short unconditional jumps and calls to forward, not-yet-defined labels.
  • And several bug fixes and optimizations.

04 November, 2022 12:43PM by Paulo César Pereira de Andrade

November 03, 2022

GNUnet News

GNUnet 0.18.1

This is a bugfix release for GNUnet 0.18.0.

Download links

The GPG key used to sign is: 3D11063C10F98D14BD24D1470B0998EF86F59B6A

Note that due to mirror synchronization, not all links may be functional early after the release. For direct access try http://ftp.gnu.org/gnu/gnunet/

Noteworthy changes in 0.18.1 (since 0.18.0)

  • IDENTITY:
    • Major internal API cleanup with respect to key serialization.
    • Removed deprecated default subsystem API.
  • TESTING: Fix broken tests.
  • GTK: Update with recent changes to IDENTITY.

03 November, 2022 11:00PM

October 31, 2022

Andy Wingo

ephemerons and finalizers

Good day, hackfolk. Today we continue the series on garbage collection with some notes on ephemerons and finalizers.

conjunctions and disjunctions

First described in a 1997 paper by Barry Hayes, which attributes the invention to George Bosworth, ephemerons are a kind of weak key-value association.

Thinking about the problem abstractly, consider that the garbage collector's job is to keep live objects and recycle memory for dead objects, making that memory available for future allocations. Formally speaking, we can say:

  • An object is live if it is in the root set

  • An object is live if it is referenced by any live object.

This circular definition uses the word any, indicating a disjunction: a single incoming reference from a live object is sufficient to mark a referent object as live.

Ephemerons augment this definition with a conjunction:

  • An object V is live if, for an ephemeron E containing an association between objects K and V, both E and K are live.

This is a more annoying property for a garbage collector to track. If you happen to mark K as live and then you mark E as live, then you can just continue to trace V. But if you see E first and then you mark K, you don't really have a direct edge to V. (Indeed this is one of the main purposes for ephemerons: associating data with an object, here K, without actually modifying that object.)

During a trace of the object graph, you can know if an object is definitely alive by checking if it was visited already, but if it wasn't visited yet that doesn't mean it's not live: we might just have not gotten to it yet. Therefore one common implementation strategy is to wait until tracing the object graph is done before tracing ephemerons. But then we have another annoying problem, which is that tracing ephemerons can result in finding more live ephemerons, requiring another tracing cycle, and so on. Mozilla's Steve Fink wrote a nice article on this issue earlier this year, with some mitigations.
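
In C-like pseudocode, the strategy of deferring ephemerons until after the main trace looks something like this (every name here is hypothetical, not any particular collector's API):

  struct ephemeron { struct ephemeron *next; void *key, *value; };
  extern struct ephemeron *all_ephemerons;
  extern int is_marked (void *obj);
  extern void mark_and_trace (void *obj);
  extern void trace_from_roots (void);

  void
  trace_with_ephemerons (void)
  {
    trace_from_roots ();      /* marks everything strongly reachable */
    int progress = 1;
    while (progress)          /* fixpoint: marking a value may make */
      {                       /* further ephemeron keys live */
        progress = 0;
        for (struct ephemeron *e = all_ephemerons; e; e = e->next)
          if (is_marked (e) && is_marked (e->key) && !is_marked (e->value))
            {
              mark_and_trace (e->value);
              progress = 1;
            }
      }
    /* Ephemerons whose key is still unmarked can now be cleared.  */
  }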

finalizers aren't quite ephemerons

All that is by way of introduction. If you just have an object graph with strong references and ephemerons, our definitions are clear and consistent. However, if we add some more features, we muddy the waters.

Consider finalizers. The basic idea is that you can attach one or a number of finalizers to an object, and that when the object becomes unreachable (not live), the system will invoke a function. One way to imagine this is a global association from finalizable object O to finalizer F.

As it is, this definition is underspecified in a few ways. One, what happens if F references O? It could be a GC-managed closure, after all. Would that prevent O from being collected?

Ephemerons solve this problem, in a way; we could trace the table of finalizers like a table of ephemerons. In that way F would only be traced if O is live already, so that by itself it wouldn't keep O alive. But then if O becomes dead, you'd want to invoke F, so you'd need it to be live, so reachability of finalizers is not quite the same as ephemeron-reachability: indeed logically all F values in the finalizer table are live, because they all will be invoked at some point.

In the end, if F references O, then F actually keeps O alive. Whether this prevents O from being finalized depends on our definition for finalizability. We could say that an object is finalizable if it is found to be unreachable after a full trace, and the finalizers F are in the root set. Or we could say that an object is finalizable if it is unreachable after a partial trace, in which finalizers are not themselves in the initial root set, and instead we trace them after determining the finalizable set.

Having finalizers in the initial root set is unfortunate: there's no quick check you can make when adding a finalizer to signal this problem to the user, and it's very hard to convey to a user exactly how it is that an object is referenced. You'd have to add lots of gnarly documentation on top of the already unavoidable gnarliness that you already had to write. But, perhaps it is a local maximum.

Incidentally, you might think that you can get around these issues by saying "don't reference objects from their finalizers", and that's true in a way. However it's not uncommon for finalizers to receive the object being finalized as an argument; after all, it's that object which probably encapsulates the information necessary for its finalization. Of course this can lead to the finalizer prolonging the longevity of an object, perhaps by storing it to a shared data structure. This is a risk for correct program construction (the finalized object might reference live-but-already-finalized objects), but not really a burden for the garbage collector, except in that it's a serialization point in the collection algorithm: you trace, you compute the finalizable set, then you have to trace the finalizables again.
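
To make that serialization point concrete, here is the second definition of finalizability in the same kind of hypothetical C-like pseudocode, with finalizer closures left out of the initial root set and resuscitation permitted:

  struct finalizer_entry
  {
    struct finalizer_entry *next;
    void *object;   /* O: the finalizable object */
    void *closure;  /* F: the finalizer, possibly a GC-managed closure */
  };
  extern struct finalizer_entry *finalizer_table;
  extern int is_marked (void *obj);
  extern void mark_and_trace (void *obj);
  extern void trace_from_roots (void);
  extern void enqueue_finalizer (struct finalizer_entry *f);

  void
  compute_finalizables (void)
  {
    trace_from_roots ();            /* finalizers NOT in the root set */
    for (struct finalizer_entry *f = finalizer_table; f; f = f->next)
      if (!is_marked (f->object))   /* unreachable, hence finalizable */
        {
          enqueue_finalizer (f);    /* F (O) will be invoked later */
          mark_and_trace (f->object);   /* resuscitate O for the call... */
          mark_and_trace (f->closure);  /* ...and keep F itself live */
        }
  }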

ephemerons vs finalizers

The gnarliness continues! Imagine that O is associated with a finalizer F, and also, via ephemeron E, some auxiliary data V. Imagine that at the end of the trace, O is unreachable and so will be dead. Imagine that F receives O as an argument, and that F looks up the association for O in E. Is the association to V still there?

Guile's documentation on guardians, a finalization-like facility, specifies that weak associations (i.e. ephemerons) remain in place when an object becomes collectable, though I think in practice this has been broken since Guile switched to the BDW-GC collector some 20 years ago or so, and I would like to fix it.

One nice solution falls out if you prohibit resuscitation by not including finalizer closures in the root set and not passing the finalizable object to the finalizer function. In that way you will never be able to look up the association E×O→V, because you don't have O. This is the path that JavaScript has taken, for example, with WeakMap and FinalizationRegistry.

However if you allow for resuscitation, for example by passing finalizable objects as an argument to finalizers, I am not sure that there is an optimal answer. Recall that with resuscitation, the trace proceeds in three phases: first trace the graph, then compute and enqueue the finalizables, then trace the finalizables. When do you perform the conjunction for the ephemeron trace? You could do so after the initial trace, which might augment the live set, protecting some objects from finalization, but possibly missing ephemeron associations added in the later trace of finalizable objects. Or you could trace ephemerons at the very end, preserving all associations for finalizable objects (and their referents), which would allow more objects to be finalized at the same time.

Probably if you trace ephemerons early you will also want to trace them later: you would do so because you think ephemeron associations are important and want them to prevent objects from being finalized, and it would be weird if they were not present for finalizable objects. This adds more serialization to the trace algorithm, though:

  1. (Add finalizers to the root set?)

  2. Trace from the roots

  3. Trace ephemerons?

  4. Compute finalizables

  5. Trace finalizables (and finalizer closures if not done in 1)

  6. Trace ephemerons again?

These last few paragraphs are the reason for today's post. It's not clear to me that there is an optimal way to compose ephemerons and finalizers in the presence of resuscitation. If you add finalizers to the root set, you might prevent objects from being collected. If you defer them until later, you lose the optimization that you can skip steps 5 and 6 if there are no finalizables. If you trace (not-yet-visited) ephemerons twice, that's overhead; if you trace them only once, the user could get what they perceive as premature finalization of otherwise reachable objects.

In Guile I think I am going to try to add finalizers to the root set, pass the finalizable to the finalizer as an argument, and trace ephemerons twice if there are finalizable objects. I think this will minimize incoming bug reports. I am bummed though that I can't eliminate them by construction.

Until next time, happy hacking!

31 October, 2022 12:21PM by Andy Wingo

make @ Savannah

GNU Make 4.4 Released!

The next stable version of GNU Make, version 4.4, has been released and is available for download from https://ftp.gnu.org/gnu/make/

Please see the NEWS file that comes with the GNU make distribution for details on user-visible changes.

31 October, 2022 07:06AM by Paul D. Smith

October 29, 2022

GNUnet News

libgnunetchat 0.1.1

libgnunetchat 0.1.1 released

This is mostly a bugfix release for libgnunetchat 0.1.0. It also updates the build process of libgnunetchat to use GNU Automake and ensures compatibility with the latest changes in GNUnet 0.18.0.

Download links

The GPG key used to sign is: 3D11063C10F98D14BD24D1470B0998EF86F59B6A

Note that due to mirror synchronization, not all links may be functional early after the release. For direct access try http://ftp.gnu.org/gnu/gnunet/

29 October, 2022 10:00PM

October 26, 2022

GNUnet 0.18.0

GNUnet 0.18.0 released

We are pleased to announce the release of GNUnet 0.18.0.
GNUnet is an alternative network stack for building secure, decentralized and privacy-preserving distributed applications. Our goal is to replace the old insecure Internet protocol stack. Starting from an application for secure publication of files, it has grown to include all kinds of basic protocol components and applications towards the creation of a GNU internet.

This is a new major release. It breaks protocol compatibility with the 0.17.x versions. Please be aware that Git master is thus henceforth (and has been for a while) INCOMPATIBLE with the 0.17.x GNUnet network, and interactions between old and new peers will result in issues. 0.17.x peers will be able to communicate with Git master or 0.18.x peers, but some services - in particular the DHT - will not be compatible.
In terms of usability, users should be aware that there are still a number of known open issues, in particular with respect to ease of use, but also some critical privacy issues, especially for mobile users. Also, the nascent network is tiny and thus unlikely to provide good anonymity or extensive amounts of interesting information. As a result, the 0.18.0 release is still only suitable for early adopters with some reasonable pain tolerance.

Download links

The GPG key used to sign is: 3D11063C10F98D14BD24D1470B0998EF86F59B6A

Note that due to mirror synchronization, not all links may be functional early after the release. For direct access try http://ftp.gnu.org/gnu/gnunet/

Noteworthy changes in 0.18.0 (since 0.17.6)

  • UTIL: Added enum GNUNET_ErrorCode for better error handling throughout the API.
  • NAMESTORE:
    • Moved namecache updates out of namestore and into zonemaster. This fixes issues from version 0.17.6 with respect to premature namestore monitor update messages and zone propagation. [ #7378 ]
    • Added a new API for bulk imports: GNUNET_NAMESTORE_records_store2. The API can be combined with the transactional API in order to significantly improve namestore performance for large zones. For postgres databases, storing records is around 20x faster than with the old API. [ #7379 ]
    • New database setup utility gnunet-namestore-dbtool. Databases can be initialized and reset using this new CLI. Currently, database plugins still allow initializing databases automatically as well by setting INIT_ON_CONNECT (Default: YES). [ #7204 ]
    • There are new APIs for zone iterations and monitoring which support filtering of records using GNUNET_GNSRECORD_Filter. By default, maintenance records such as TOMBSTONEs are filtered. [ #7193 ]
    • New zonefile import utility gnunet-namestore-zonefile for DNS zone files. [ #7396 ]
    • Make use of the new enum GNUNET_ErrorCode in the C and REST APIs. [ #7399 ]
    • Included handling of orphaned GNS records. Records become orphaned if Egos are (accidentally) deleted, which makes operations on the records difficult while the existing records are still published. [ #7401 , #7402 ]
    • Updated the C API documentation to reflect the above changes.
    • Updated the user documentation to reflect the above changes and included various tutorials on zone management.
    • Updated the REST API and its documentation to reflect the above changes.
  • ZONEMASTER: Zonemaster now uses worker threads for record signing.
  • DHT:
    • The specification ( LSD0004 ) has been updated to reflect the changes.
  • BUILD:
    • Fix mysql/mariadb detection (again). [ #7356 ]
  • PACKAGING: Revamped the RPM package available through Fedora COPR and submitted it.

A detailed list of changes can be found in the ChangeLog and the bug tracker .

Known Issues

  • There are known major design issues in the TRANSPORT, ATS and CORE subsystems which will need to be addressed in the future to achieve acceptable usability, performance and security.
  • There are known moderate implementation limitations in CADET that negatively impact performance.
  • There are known moderate design issues in FS that also impact usability and performance.
  • There are minor implementation limitations in SET that create unnecessary attack surface for availability.
  • The RPS subsystem remains experimental.
  • Some high-level tests in the test-suite fail non-deterministically due to the low-level TRANSPORT issues.

In addition to this list, you may also want to consult our bug tracker at bugs.gnunet.org which lists about 190 more specific issues.

Thanks

This release was the work of many people. The following people contributed code and were thus easily identified: Bernd Fix, Christian Grothoff, Tristan Schwieren, madmurphy, Willow Liquorice, t3sserakt, TheJackiMonster and Martin Schanzenbach. We are grateful for funding from NGI Zero DISCOVERY that has supported several developers over the last four years to work on the GNU Name System and related subsystems.

26 October, 2022 10:00PM

October 24, 2022

gnuastro @ Savannah

Gnuastro 0.19 released

The 19th release of GNU Astronomy Utilities (Gnuastro) is now available. See the full announcement for all the new features in this release and the many bugs that have been found and fixed: https://lists.gnu.org/archive/html/info-gnuastro/2022-10/msg00001.html

24 October, 2022 11:16AM by Mohammad Akhlaghi

October 23, 2022

Luca Saiu

SMTP, OrangeWebsite and using your own computing resources

I have had a personal server with the domain ‘ageinghacker.net’ since 2010. At the beginning I was sharing hosting costs with two or three other people, each of us running a virtual machine inside a Virtual Private Server. By 2016 my requirements had grown, I wanted stability and so decided to rent a VPS by myself. Around that time I had also decided to run a Tor exit node for the benefit of the global community, and more in general wanted my server to be in a country that allowed some freedom of speech; since I did not, then like ... [Read more]

23 October, 2022 10:35PM by Luca Saiu (positron@gnu.org)

October 22, 2022

parallel @ Savannah

GNU Parallel 20221022 ('Nord Stream') released

GNU Parallel 20221022 ('Nord Stream') has been released. It is available for download at: lbry://@GnuParallel:4

Quote of the month:

  If used properly, #gnuparallel actually enables time travel.
    -- Dr. James Wasmuth @jdwasmuth@twitter

New in this release:

  • --latest-line chops line length at terminal width.
  • Determine max command length faster on Microsoft Windows.
  • Bug fixes and man page updates.

GNU Parallel - For people who live life in the parallel lane.

If you like GNU Parallel record a video testimonial: Say who you are, what you use GNU Parallel for, how it helps you, and what you like most about it. Include a command that uses GNU Parallel if you feel like it.

About GNU Parallel

GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.

For example you can run this to convert all jpeg files into png and gif files and have a progress bar:

  parallel --bar convert {1} {1.}.{2} ::: *.jpg ::: png gif

Or you can generate big, medium, and small thumbnails of all jpeg files in sub dirs:

  find . -name '*.jpg' |
    parallel convert -geometry {2} {1} {1//}/thumb{2}_{1/} :::: - ::: 50 100 200

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with:

    $ (wget -O - pi.dk/3 || lynx -source pi.dk/3 || curl pi.dk/3/ || \
       fetch -o - http://pi.dk/3 ) > install.sh
    $ sha1sum install.sh | grep 883c667e01eed62f975ad28b6d50e22a
    12345678 883c667e 01eed62f 975ad28b 6d50e22a
    $ md5sum install.sh | grep cc21b4c943fd03e93ae1ae49e28573c0
    cc21b4c9 43fd03e9 3ae1ae49 e28573c0
    $ sha512sum install.sh | grep ec113b49a54e705f86d51e784ebced224fdff3f52
    79945d9d 250b42a4 2067bb00 99da012e c113b49a 54e705f8 6d51e784 ebced224
    fdff3f52 ca588d64 e75f6033 61bd543f d631f592 2f87ceb2 ab034149 6df84a35
    $ bash install.sh

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your command line will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2018): GNU Parallel 2018, March 2018, https://doi.org/10.5281/zenodo.1146014.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/Identi.ca/Google+/Twitter/Facebook/LinkedIn/mailing lists
  • Get the merchandise https://gnuparallel.threadless.com/designs/gnu-parallel
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference

If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)

If GNU Parallel saves you money:

  • (Have your company) donate to FSF https://my.fsf.org/donate/

About GNU SQL

GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.
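
For example (a hedged sketch; the host and credentials are hypothetical):

  # run a query against a MySQL database addressed by a DBURL:
  sql mysql://user:password@db.example.com/mydb "SELECT COUNT(*) FROM people;"

  # with no command given, you get the database's interactive shell:
  sql mysql://user:password@db.example.com/mydb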

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.

About GNU Niceload

GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.
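
For example (a hedged sketch; the load limit and the job are made up):

  # run a backup job, but suspend it whenever the load average goes above 2:
  niceload -l 2 tar czf /tmp/home-backup.tgz /home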

22 October, 2022 07:13PM by Ole Tange

Andy Wingo

the sticky mark-bit algorithm

Good day, hackfolk!

The Sticky Mark-Bit Algorithm

Also an intro to mark-sweep GC

7 Oct 2022 – Igalia

Andy Wingo

A funny post today; I gave an internal presentation at work recently describing the so-called "sticky mark bit" algorithm. I figured I might as well post it here, as a gift to you from your local garbage human.

Automatic Memory Management

“Don’t free, the system will do it for you”

Eliminate a class of bugs: use-after-free

Relative to bare malloc/free, qualitative performance improvements

  • cheap bump-pointer allocation
  • cheap reclamation/recycling
  • better locality

Continuum: bmalloc / tcmalloc grow towards GC

Before diving in though, we start with some broad context about automatic memory management. The term mostly means "garbage collection" these days, but really it describes a component of a system that provides fresh memory for new objects and automatically reclaims memory for objects that won't be needed in the program's future. This stands in contrast to manual memory management, which relies on the programmer to free their objects.

Of course, automatic memory management ensures some valuable system-wide properties, like lack of use-after-free vulnerabilities. But also by enlarging the scope of the memory management system to include full object lifetimes, we gain some potential speed benefits, for example eliminating any cost for free, in the case of e.g. a semi-space collector.

Automatic Memory Management

Two strategies to determine live object graph

  • Reference counting
  • Tracing

What to do if you trace

  • Mark, and then sweep or compact
  • Evacuate

Tracing O(n) in live object count

I should mention that reference counting is a form of automatic memory management. It's not enough on its own; unreachable cycles in the object reference graph have to be detected either by a heap tracer or broken by weak references.

It used to be that we GC nerds made fun of reference counting as being an expensive, half-assed solution that didn't work very well, but there have been some fundamental advances in the state of the art in the last 10 years or so.

But this talk is more about the other kind of memory management, which involves periodically tracing the graph of objects in the heap. Generally speaking, as you trace you can do one of two things: mark the object, simply setting a bit indicating that an object is live, or evacuate the object to some other location. If you mark, you may choose to then compact by sliding all objects down to lower addresses, squeezing out any holes, or you might sweep all holes into a free list for use by further allocations.

Mark-sweep GC (1/3)

freelist := []

allocate():
  if freelist is empty: collect()
  return freelist.pop()

collect():
  mark()
  sweep()
  if freelist is empty: abort

Concretely, let's look closer at mark-sweep. Let's assume for the moment that all objects are the same size. Allocation pops fresh objects off a freelist, and collects if there is none. Collection does a mark and then a sweep, aborting if sweeping yielded no free objects.

Mark-sweep GC (2/3)

mark():
  worklist := []
  for ref in get_roots():
    if mark_one(ref):
      worklist.add(ref)
  while worklist is not empty:
    for ref in trace(worklist.pop()):
      if mark_one(ref):
        worklist.add(ref)

sweep():
  for ref in heap:
    if marked(ref):
      unmark_one(ref)
    else:
      freelist.add(ref)

Going a bit deeper, here we have some basic implementations of mark and sweep. Marking starts with the roots: edges from outside the automatically-managed heap indicating a set of initial live objects. You might get these by maintaining a stack of objects that are currently in use. Then it traces references from these roots to other objects, until there are no more references to trace. It will visit each live object exactly once, and so is O(n) in the number of live objects.

Sweeping requires the ability to iterate the heap. With the precondition here that collect is only ever called with an empty freelist, it will clear the mark bit from each live object it sees, and otherwise add newly-freed objects to the global freelist. Sweep is O(n) in total heap size, but some optimizations can amortize this cost.

Mark-sweep GC (3/3)

marked := 1

get_tag(ref):
  return *(uintptr_t*)ref
set_tag(ref, tag):
  *(uintptr_t*)ref = tag

marked(ref):
  return (get_tag(ref) & 1) == marked
mark_one(ref):
  if marked(ref): return false;
  set_tag(ref, (get_tag(ref) & ~1) | marked)
  return true
unmark_one(ref):
  set_tag(ref, (get_tag(ref) ^ 1))

Finally, some details on how you might represent a mark bit. If a ref is a pointer, we could store the mark bit in the first word of the object, as we do here. You can choose instead to store mark bits in a side table, but it doesn't matter for today's example.

Observations

Freelist implementation crucial to allocation speed

Non-contiguous allocation suboptimal for locality

World is stopped during collect(): “GC pause”

mark O(n) in live data, sweep O(n) in total heap size

Touches a lot of memory

The salient point is that these O(n) operations happen when the world is stopped. This can be noticeable, even taking seconds for the largest heap sizes. It sure would be nice to have the benefits of GC, but with lower pause times.

Optimization: rotate mark bit

flip():
  marked ^= 1

collect():
  flip()
  mark()
  sweep()
  if freelist is empty: abort

unmark_one(ref):
  pass

Avoid touching mark bits for live data

Incidentally, before moving on, I should mention an optimization to mark bit representation: instead of clearing the mark bit for live objects during the sweep phase, we could just choose to flip our interpretation of what the mark bit means. This allows unmark_one to become a no-op.

Reducing pause time

Parallel tracing: parallelize mark. Clear improvement, but speedup depends on object graph shape (e.g. linked lists).

Concurrent tracing: mark while your program is running. Tricky, and not always a win (“Retrofitting Parallelism onto OCaml”, ICFP 2020).

Partial tracing: mark only a subgraph. Divide space into regions, record inter-region links, collect one region only. Overhead to keep track of inter-region edges.

Now, let's revisit the pause time question. What can we do about it? In general there are three strategies.

Generational GC

Partial tracing

Two spaces: nursery and oldgen

Allocations in nursery (usually)

Objects can be promoted/tenured from nursery to oldgen

Minor GC: just trace the nursery

Major GC: trace nursery and oldgen

“Objects tend to die young”

Overhead of old-to-new edges offset by less amortized time spent tracing

Today's talk is about partial tracing. The basic idea is that instead of tracing the whole graph, just trace a part of it, ideally a small part.

A simple and effective strategy for partitioning a heap into subgraphs is generational garbage collection. The idea is that objects tend to die young, and that therefore it can be profitable to focus attention on collecting objects that were allocated more recently. You therefore partition the heap graph into two parts, young and old, and you generally try to trace just the young generation.

The difficulty with partitioning the heap graph is that you need to maintain a set of inter-partition edges, and you do so by imposing overhead on the user program. But a generational partition minimizes this cost: because you never do an old-generation-only collection, you don't need to remember new-to-old edges, and mutations of old objects are less common than mutations of new ones.

Generational GC

Usual implementation: semispace nursery and mark-compact oldgen

Tenuring via evacuation from nursery to oldgen

Excellent locality in nursery

Very cheap allocation (bump-pointer)

But... evacuation requires all incoming edges to an object to be updated to new location

Requires precise enumeration of all edges

Usually the generational partition is reflected in the address space: there is a nursery and it is in these pages and an oldgen in these other pages, and never the twain shall meet. To tenure an object is to actually move it from the nursery to the old generation. But moving objects requires that the collector be able to enumerate all incoming edges to that object, and then to have the collector update them, which can be a bit of a hassle.

JavaScriptCore

No precise stack roots, neither in generated nor C++ code

Compare to V8’s Handle<> in C++, stack maps in generated code

Stack roots conservative: integers that happen to hold addresses of objects treated as object graph edges

(Cheaper implementation strategy, can eliminate some bugs)

Specifically in JavaScriptCore, the JavaScript engine of WebKit and the Safari browser, we have a problem. JavaScriptCore uses a technique known as "conservative root-finding": it just iterates over the words in a thread's stack to see if any of those words might reference an object on the heap. If they do, JSC conservatively assumes that it is indeed a reference, and keeps that object live.

Of course a given word on the stack could just be an integer which happens to be an object's address. In that case we would hold on to too much data, but that's not so terrible.

Conservative root-finding is again one of those things that GC nerds like to make fun of, but the pendulum seems to be swinging back its way; perhaps another article on that some other day.

JavaScriptCore

Automatic memory management eliminates use-after-free...

...except when combined with manual memory management

Prevent type confusion due to reuse of memory for object of different shape

addrof/fakeobj primitives: phrack.org/issues/70/3.html

Type-segregated heaps

No evacuation: no generational GC?

The other thing about JSC is that it is constantly under attack by malicious web sites, and any bug in it is a step towards hackers taking over your phone. Besides bugs inside JSC itself, there are also bugs in the objects exposed to JavaScript from the web UI. Although use-after-free bugs are impossible with a fully traceable object graph, references to and from DOM objects might not be traceable by the collector, instead referencing GC-managed objects via reference counting or weak references or even manual memory management. Bugs in these interfaces are a source of exploitable vulnerabilities.

In brief, there seems to be a decent case for trying to mitigate use-after-free bugs. Beyond the nuclear option of not freeing, one step we could take would be to avoid re-using memory between objects of different shapes. So you have one heap for objects with 3 fields, another for objects with 4 fields, and so on.

But it would seem that this mitigation is at least somewhat incompatible with the usual strategy of generational collection, where we use a semi-space nursery. The nursery memory gets re-used all the time for all kinds of objects. So does that rule out generational collection?

Sticky mark bit algorithm

collect(is_major=false):
  if is_major: flip()
  mark(is_major)
  sweep()
  if freelist is empty:
    if is_major: abort
    collect(true)

mark(is_major):
  worklist := []
  if not is_major:
    worklist += remembered_set
    remembered_set := []
  ...

Turns out, you can generationally partition a mark-sweep heap.

Recall that to visit each live object, you trace the heap, setting mark bits. To visit them all again, you have to clear the mark bit between traces. Our first collect implementation did so in sweep, via unmark_one; then with the optimization we instead effectively cleared them all at once before the next trace, by flipping the interpretation of the bit in flip().

Here, then, the trick is that you just don't clear the mark bit between traces for a minor collection (tracing just the nursery). In that way all objects that were live at the previous collection are considered the old generation. Marking an object is tenuring, in-place.

There are just two tiny modifications to mark-sweep to implement sticky mark bit collection: one, flip the mark bit only on major collections; and two, include a remembered set in the roots for minor collections.

Sticky mark bit algorithm

Mark bit from previous trace “sticky”: avoid flip for minor collections

Consequence: old objects not traced, as they are already marked

Old-to-young edges: the “remembered set”

Write barrier

write_field(object, offset, value):
  remember(object)
  object[offset] = value

The remembered set is maintained by instrumenting each write that the program makes with a little call out to code from the garbage collector. This code is the write barrier, and here we use it to add to the set of objects that might reference new objects. There are many ways to implement this write barrier but that's a topic for another day.

JavaScriptCore

Parallel GC: Multiple collector threads

Concurrent GC: mark runs while JS program running; “riptide”; interaction with write barriers

Generational GC: in-place, non-moving GC generational via sticky mark bit algorithm

Alan Demers, “Combining generational and conservative garbage collection: framework and implementations”, POPL ’90

So returning to JavaScriptCore and the general techniques for reducing pause times, I can summarize to note that it does them all. It traces both in parallel and concurrently, and it tries to trace just newly-allocated objects using the sticky mark bit algorithm.

Conclusions

A little-used algorithm

Motivation for JSC: conservative roots

Original motivation: conservative roots; write barrier enforced by OS-level page protections

Revived in “Sticky Immix”

Better than nothing, not quite as good as semi-space nursery

I find that people that are interested in generational GC go straight for the semispace nursery. There are some advantages to that approach: allocation is generally cheaper in a semispace than in a mark space, locality among new objects is better, locality after tenuring is better, and you have better access locality during a nursery collection.

But if for some reason you find yourself unable to enumerate all roots, you can still take advantage of generational collection via the sticky mark-bit algorithm. It's a simple change that improves performance, as long as you are able to insert write barriers on all heap object mutations.

The challenge with a sticky-mark-bit approach to generations is avoiding the O(n) sweep phase. There are a few strategies, but more on that another day perhaps.

And with that, presentation done. Until next time, happy hacking!

22 October, 2022 08:42AM by Andy Wingo

October 12, 2022

GNU Health

Happy birthday, GNU Health!

On a day like this, October 12th, 2008, I registered the “Medical” project at SourceForge. Fourteen years later, GNU Health has become the Libre digital health ecosystem used by governments, hospitals, laboratories, research institutions and health professionals around the globe.

I want to sincerely thank all the professionals who have believed in the project from early on… from small clinics in the African rain forest, to many public primary care institutions in Argentina, to the largest hospital in India and Asia (AIIMS).

GNU Health, the Libre digital health ecosystem

Institutions such as the University of Entre Rios in Argentina, Leibniz University Hanover, the United Nations Institute for Global Health, the World Health Organization, the European Bioinformatics Institute (EBI), and the Digital Public Goods Alliance have helped the GNU Health project by providing training, implementations, or valuable resources in areas related to coding standards and medical genetics.

Many thanks to our sponsors, particularly Thymbra and openSUSE, who have been supporting GNU Health since day one, sponsoring our annual congress (GNUHealthCon). In addition, openSUSE has donated Raspberry Pi devices for development and for implementation projects, as well as packaging GNU Health for their distribution. Thank you, Fosshost, for all these years of hosting the GNU Health HMIS and the BigBlueButton instance for our conferences!

Thank you, European Open Source Observatory Repository (OSOR) / Joinup and Free Software Foundation Europe, for your work in making GNU Health a reality in Europe, especially in the public health sector.

Immense gratitude to the GNU operating system, and particularly to Richard Stallman, father of the Free Software movement, who in 2011 declared GNU Health an official GNU project. Since that day, all the components of the GNU Health ecosystem have been hosted on Savannah.

GNU Health is an official GNU Package

The GNU Health ecosystem would not exist today without the Libre Software community. Excellent Libre projects like Tryton, LibreOffice, PostgreSQL, Flask, Python, GNUPG, Apache, and many others make GNU Health a reality. We’re so happy to count on our sister community Orthanc, a great Libre medical imaging project that makes the perfect GNU Health partner in hospital settings and diagnostic imaging.

Last but not least: thank you to the core team and to the community around the world: developers, testers, translators, artists, the documentation team, podcasters and journalists… I cannot name you all… but the success of GNU Health belongs to you.

On a day like this, 14 years ago, the revolution for freedom and equity in healthcare began. And this is just starting… At GNU Solidario, we’ll keep on advancing Social Medicine, and fighting so that health remains a non-negotiable human right, no matter where you live. After all, GNU Health is a social project with a little bit of technology behind it.

Happy and Healthy hacking!

Luis Falcón

(Original document: https://my.gnusolidario.org/2022/10/12/happy-birthday-gnu-health/)

12 October, 2022 06:37PM by Luis Falcon

October 10, 2022

Luca Saiu

A personal reflection on the GNU Hackers' Meeting 2022

According to the definition on the web site (https://www.gnu.org/ghm/2022/) “The GNU Hackers’ Meetings or ‘GHMs’ are a venue to discuss technical topics related to GNU and free software”. And GHMs are in fact events structured as conferences, with talks and presentation slides and all; very technical indeed, the way we like them and the way they should be. But if we have taken the time to attend every year since 2007 or so, and to organise, it is mostly for the fun of spending time with our GNU friends in a relaxed environment. After many years in which most GNU Hackers’ Meetings ... [Read more]

10 October, 2022 10:30PM by Luca Saiu (positron@gnu.org)

October 03, 2022

Andy Wingo

on "correct and efficient work-stealing for weak memory models"

Hello all, a quick post today. Inspired by Rust as a Language for High Performance GC Implementation by Yi Lin et al, a few months ago I had a look to see how the basic Rust concurrency facilities that they used were implemented.

One of the key components that Lin et al used was a Chase-Lev work-stealing double-ended queue (deque). The 2005 article Dynamic Circular Work-Stealing Deque by David Chase and Yossi Lev is a nice read defining this data structure. It's used when you have a single producer of values, but multiple threads competing to claim those values. This is useful when implementing per-CPU schedulers or work queues; each CPU pushes on any items that it has to its own deque, and pops them also, but when it runs out of work, it goes to see if it can steal work from other CPUs.

The 2013 paper Correct and Efficient Work-Stealing for Weak Memory Models by Nhat Minh Lê et al updates the Chase-Lev paper by relaxing the concurrency primitives from the original big-hammer sequential-consistency operations used in the Chase-Lev paper to an appropriate mix of C11 relaxed, acquire/release, and sequentially-consistent operations. The paper therefore has a C11 translation of the original algorithm, and a proof of correctness. It's quite pleasant. Here's a version in Rust's crossbeam crate, and here's the same thing in C.

I had been using this updated C11 Chase-Lev deque implementation for a while with no complaints in a parallel garbage collector. Each worker thread would keep a local unsynchronized work queue, which when it grew too large would donate half of its work to a per-worker Chase-Lev deque. Then if it ran out of work, it would go through all the workers, seeing if it could steal some work.

My use of the deque was thus limited to only the push and steal primitives, but not take (using the language of the Lê et al paper). take is like steal, except that it takes values from the producer end of the deque, and it can't run concurrently with push. In practice take is only used by the thread that also calls push. Cool.

Well I thought, you know, before a worker thread goes to steal from some other thread, it might as well see if it can do a cheap take on its own deque to see if it could take back some work that it had previously offloaded there. But here I ran into a bug. A brief internet search didn't turn up anything, so here we are to mention it.

Specifically, there is a bug in the Lê et al paper that is not in the Chase-Lev paper. The original paper is in Java, and the C11 version is in, well, C11. The issue is.... integer overflow! In brief, push increments bottom, and steal increments top. take, on the other hand, can decrement bottom, which is represented as a size_t. I think you see where this is going; if you take on an empty deque in the initial state, decrementing bottom from 0 wraps around to the maximum size_t value, creating a situation that looks just like a deque with (size_t)-1 elements, causing garbage reads and all kinds of delightful behavior.

The funny thing is that I looked at the proof and I looked at the industrial applications of the deque and I thought well, I just have to transcribe the algorithm exactly and I'll be golden. But it just goes to show that proving one property of an algorithm doesn't necessarily imply that the algorithm is correct.

03 October, 2022 08:24AM by Andy Wingo

September 28, 2022

GNU Guix

Wrapping up Ten Years of Guix in Paris

Two weeks ago, some of us were in Paris, France, to celebrate ten years of Guix! The event included 22 talks and 12 lightning talks, covering topics ranging from reproducible research on Friday to Guix hacking on Saturday and Sunday.

If you couldn’t make it to Paris, and if you missed the live stream, we have some good news: videos of the talks and supporting material are now available from the program page!

If you weren’t there, there are things you definitely missed though: more than 60 participants from a diverse range of backgrounds (a rare opportunity for scientists and hackers to meet!), impromptu discussions and encounters, and of course not one but two crazy birthday cakes (yup! on one day it was vanilla/blueberry-flavored, and on the other day it was chocolate/passion fruit, but both were equally beautiful!).

Picture of the Guix birthday cake.

There are a few more pictures on the web site.

It might seem a bit of a stretch at first, but there is a connection between, say, bioinformatics pipelines, OCaml bootstrapping, and Guix Home: it’s about deploying complex software stacks in a way that is not only convenient but also transparent and reproducible. It’s about retaining control, both collectively and individually, over the “software supply chain” at a time when the most popular option is to give up.

We have lots of people to thank, starting with the speakers and participants: thanks for sharing your knowledge and enthusiasm, and thank you for making it a warm and friendly event! Thanks to the sponsors of the event without which all this would have been impossible.

Special thanks to Nicolas Dandrimont of the Debian video team for setting up the video equipment, tirelessly working during all three days and even afterwards to prepare the “final cut”—you rock!! Thanks to Leo Famulari for setting up the live streaming server on short notice, and to Luis Felipe for designing the unanimously acclaimed Ten Years of Guix graphics, the kakemono, and the video intros and outros (check out the freely-licensed SVG source!), all that under pretty tight time constraints. Thanks also to Andreas Enge with their Guix Europe hat on for addressing last-minute hiccups behind the scenes.

Organizing this event has certainly been exhausting, but seeing it come true and meeting both new faces and old-timers was a great reward for us. Despite the occasional shenanigans—delayed talks, one talk cancellation, and worst of all: running out of coffee and tea after lunch—we hope it was enjoyable for all.

For those in Europe, our next in-person meeting is probably going to be FOSDEM. And maybe this will inspire some to organize events in other regions of the world and/or on-line meetups!

About GNU Guix

GNU Guix is a transactional package manager and an advanced distribution of the GNU system that respects user freedom. Guix can be used on top of any system running the Hurd or the Linux kernel, or it can be used as a standalone operating system distribution for i686, x86_64, ARMv7, AArch64, and POWER9 machines.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. When used as a standalone GNU/Linux distribution, Guix offers a declarative, stateless approach to operating system configuration management. Guix is highly customizable and hackable through Guile programming interfaces and extensions to the Scheme language.

28 September, 2022 04:30PM by Ludovic Courtès, Tanguy Le Carrour, Simon Tournier

September 25, 2022

Parabola GNU/Linux-libre

[From Arch] Removing python2 from the repositories

Python 2 went end of life in January 2020. Since then Arch has been actively cutting down the number of projects depending on python2 in their repositories, and they have finally been able to drop it from their distribution, making it disappear from Parabola too. If you still have python2 installed on your system, consider removing it and any python2 packages.
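
One way to find and remove leftover python2 packages, sketched here with pacman (the exact package names on your system may differ):

pacman -Qq | grep '^python2'   # list installed packages whose names start with python2
pacman -Rsn python2            # remove python2 along with its now-unneeded dependencies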

If you still require the python2 package you can keep it around, but please be aware that there will be no security updates.

25 September, 2022 08:23PM by David P.

September 22, 2022

parallel @ Savannah

GNU Parallel 20220922 ('Elizabeth') released

GNU Parallel 20220922 ('Elizabeth') has been released. It is available for download at: lbry://@GnuParallel:4

Quote of the month:

  reduced our backend test pipelines from 4 to 1.30 hrs. gnu parallel for the win!!!
     -- Swapnil Sahu @CaffeinatedWryy@twitter

New in this release:

  • --colour-failed only changes output for failing jobs (see the example after this list).
  • Password for --sshlogin can be put in $SSHPASS.
  • Examples are moved from `man parallel` to `man parallel_examples`.
  • Bug fixes and man page updates.
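
As a quick illustration of --colour-failed (a hedged sketch; 'exit {}' makes the second and third jobs fail on purpose):

  # only the output of failing jobs gets coloured:
  parallel --colour-failed 'echo job {}; exit {}' ::: 0 1 2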

News about GNU Parallel:

GNU Parallel - For people who live life in the parallel lane.

If you like GNU Parallel, record a video testimonial: Say who you are, what you use GNU Parallel for, how it helps you, and what you like most about it. Include a command that uses GNU Parallel if you feel like it.

About GNU Parallel

GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.

For example you can run this to convert all jpeg files into png and gif files and have a progress bar:

  parallel --bar convert {1} {1.}.{2} ::: *.jpg ::: png gif

Or you can generate big, medium, and small thumbnails of all jpeg files in sub dirs:

  find . -name '*.jpg' |
    parallel convert -geometry {2} {1} {1//}/thumb{2}_{1/} :::: - ::: 50 100 200

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with:

    $ (wget -O - pi.dk/3 || lynx -source pi.dk/3 || curl pi.dk/3/ || \
       fetch -o - http://pi.dk/3 ) > install.sh
    $ sha1sum install.sh | grep 883c667e01eed62f975ad28b6d50e22a
    12345678 883c667e 01eed62f 975ad28b 6d50e22a
    $ md5sum install.sh | grep cc21b4c943fd03e93ae1ae49e28573c0
    cc21b4c9 43fd03e9 3ae1ae49 e28573c0
    $ sha512sum install.sh | grep ec113b49a54e705f86d51e784ebced224fdff3f52
    79945d9d 250b42a4 2067bb00 99da012e c113b49a 54e705f8 6d51e784 ebced224
    fdff3f52 ca588d64 e75f6033 61bd543f d631f592 2f87ceb2 ab034149 6df84a35
    $ bash install.sh

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your command line will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2018): GNU Parallel 2018, March 2018, https://doi.org/10.5281/zenodo.1146014.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/Identi.ca/Google+/Twitter/Facebook/LinkedIn/mailing lists
  • Get the merchandise https://gnuparallel.threadless.com/designs/gnu-parallel
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference

If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)

If GNU Parallel saves you money:

  • (Have your company) donate to FSF https://my.fsf.org/donate/

About GNU SQL

GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.

About GNU Niceload

GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.

22 September, 2022 07:49PM by Ole Tange

September 20, 2022

Simon Josefsson

Privilege separation of GSS-API credentials for Apache

To protect web resources with Kerberos you may use Apache HTTPD with mod_auth_gssapi — however, all web scripts (e.g., PHP) run under Apache will have access to the Kerberos long-term symmetric secret credential (keytab). If someone can get it, they can impersonate your server, which is bad.

The gssproxy project makes it possible to introduce privilege separation to reduce the attack surface. There is a tutorial for RPM-based distributions (Fedora, RHEL, AlmaLinux, etc), but I wanted to get this to work on a DPKG-based distribution (Debian, Ubuntu, Trisquel, PureOS, etc) and found it worthwhile to document the process. I’m using Ubuntu 22.04 below, but have tested it on Debian 11 as well. I have adopted the gssproxy package in Debian, and testing this setup is part of the scripted autopkgtest/debci regression testing.

First install the required packages:

root@foo:~# apt-get update
root@foo:~# apt-get install -y apache2 libapache2-mod-auth-gssapi gssproxy curl

This should give you a working and running web server. Verify that it is operational under the proper hostname; I’ll use foo.sjd.se in this writeup.

root@foo:~# curl --head http://foo.sjd.se/
HTTP/1.1 200 OK

The next step is to create a keytab containing the Kerberos V5 secrets for your host. The exact steps depend on your environment (usually kadmin ktadd or ipa-getkeytab); one possibility is sketched below. Use the string “HTTP/foo.sjd.se”, and then confirm the result using something like the klist listing that follows.
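
For example, with MIT Kerberos kadmin (a sketch only; the admin principal is hypothetical and depends on your KDC setup):

root@foo:~# kadmin -p admin/admin -q "ktadd -k /etc/gssproxy/httpd.keytab HTTP/foo.sjd.se"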

root@foo:~# ls -la /etc/gssproxy/httpd.keytab
-rw------- 1 root root 176 Sep 18 06:44 /etc/gssproxy/httpd.keytab
root@foo:~# klist -k /etc/gssproxy/httpd.keytab -e
Keytab name: FILE:/etc/gssproxy/httpd.keytab
KVNO Principal
---- --------------------------------------------------------------------------
   2 HTTP/foo.sjd.se@GSSPROXY.EXAMPLE.ORG (aes256-cts-hmac-sha1-96) 
   2 HTTP/foo.sjd.se@GSSPROXY.EXAMPLE.ORG (aes128-cts-hmac-sha1-96) 
root@foo:~# 

The file should be owned by root and not be in the default /etc/krb5.keytab location, so Apache’s libapache2-mod-auth-gssapi will have to use gssproxy to use it.

Then configure gssproxy to find the credential and use it with Apache.

root@foo:~# cat<<EOF > /etc/gssproxy/80-httpd.conf
[service/HTTP]
mechs = krb5
cred_store = keytab:/etc/gssproxy/httpd.keytab
cred_store = ccache:/var/lib/gssproxy/clients/krb5cc_%U
euid = www-data
process = /usr/sbin/apache2
EOF

For debugging, it may be useful to enable more gssproxy logging:

root@foo:~# cat<<EOF > /etc/gssproxy/gssproxy.conf
[gssproxy]
debug_level = 1
EOF
root@foo:~#

Restart gssproxy so it finds the new configuration, and monitor syslog as follows:

root@foo:~# tail -F /var/log/syslog &
root@foo:~# systemctl restart gssproxy

You should see something like this in the log file:

Sep 18 07:03:15 foo gssproxy[4076]: [2022/09/18 05:03:15]: Exiting after receiving a signal
Sep 18 07:03:15 foo systemd[1]: Stopping GSSAPI Proxy Daemon…
Sep 18 07:03:15 foo systemd[1]: gssproxy.service: Deactivated successfully.
Sep 18 07:03:15 foo systemd[1]: Stopped GSSAPI Proxy Daemon.
Sep 18 07:03:15 foo gssproxy[4092]: [2022/09/18 05:03:15]: Debug Enabled (level: 1)
Sep 18 07:03:15 foo systemd[1]: Starting GSSAPI Proxy Daemon…
Sep 18 07:03:15 foo gssproxy[4093]: [2022/09/18 05:03:15]: Kernel doesn't support GSS-Proxy (can't open /proc/net/rpc/use-gss-proxy: 2 (No such file or directory))
Sep 18 07:03:15 foo gssproxy[4093]: [2022/09/18 05:03:15]: Problem with kernel communication! NFS server will not work
Sep 18 07:03:15 foo systemd[1]: Started GSSAPI Proxy Daemon.
Sep 18 07:03:15 foo gssproxy[4093]: [2022/09/18 05:03:15]: Initialization complete.

The NFS-related errors are due to a default gssproxy configuration file; they are harmless, and if you don’t use NFS with GSS-API you can silence them like this:

root@foo:~# rm /etc/gssproxy/24-nfs-server.conf
root@foo:~# systemctl try-reload-or-restart gssproxy

The log should now indicate that it loaded the keytab:

Sep 18 07:18:59 foo systemd[1]: Reloading GSSAPI Proxy Daemon…
Sep 18 07:18:59 foo gssproxy[4182]: [2022/09/18 05:18:59]: Received SIGHUP; re-reading config.
Sep 18 07:18:59 foo gssproxy[4182]: [2022/09/18 05:18:59]: Service: HTTP, Keytab: /etc/gssproxy/httpd.keytab, Enctype: 18
Sep 18 07:18:59 foo gssproxy[4182]: [2022/09/18 05:18:59]: New config loaded successfully.
Sep 18 07:18:59 foo systemd[1]: Reloaded GSSAPI Proxy Daemon.

To instruct Apache — or actually, the MIT Kerberos V5 GSS-API library used by mod_auth_gssapi loaded by Apache — to use gssproxy instead of using /etc/krb5.keytab as usual, Apache needs to be started in an environment that has GSS_USE_PROXY=1 set. The background is covered by the gssproxy-mech(8) man page and explained by the gssproxy README.

When systemd is used the following can be used to set the environment variable, note the final command to reload systemd.

root@foo:~# mkdir -p /etc/systemd/system/apache2.service.d
root@foo:~# cat<<EOF > /etc/systemd/system/apache2.service.d/gssproxy.conf
[Service]
Environment=GSS_USE_PROXY=1
EOF
root@foo:~# systemctl daemon-reload

The next step is to configure a GSS-API protected Apache resource:

root@foo:~# cat<<EOF > /etc/apache2/conf-available/private.conf
<Location /private>
  AuthType GSSAPI
  AuthName "GSSAPI Login"
  Require valid-user
</Location>
EOF

Enable the configuration and restart Apache — the suggested use of reload is not sufficient, because then Apache won’t be restarted with the newly introduced GSS_USE_PROXY variable. This only applies the first time; after the first restart you may use reload again.

root@foo:~# a2enconf private
Enabling conf private.
To activate the new configuration, you need to run:
systemctl reload apache2
root@foo:~# systemctl restart apache2

When you have debug messages enabled, the log may look like this:

Sep 18 07:32:23 foo systemd[1]: Stopping The Apache HTTP Server…
Sep 18 07:32:23 foo gssproxy[4182]: [2022/09/18 05:32:23]: Client [2022/09/18 05:32:23]: (/usr/sbin/apache2) [2022/09/18 05:32:23]: connected (fd = 10)[2022/09/18 05:32:23]: (pid = 4651) (uid = 0) (gid = 0)[2022/09/18 05:32:23]:
Sep 18 07:32:23 foo gssproxy[4182]: message repeated 4 times: [ [2022/09/18 05:32:23]: Client [2022/09/18 05:32:23]: (/usr/sbin/apache2) [2022/09/18 05:32:23]: connected (fd = 10)[2022/09/18 05:32:23]: (pid = 4651) (uid = 0) (gid = 0)[2022/09/18 05:32:23]:]
Sep 18 07:32:23 foo systemd[1]: apache2.service: Deactivated successfully.
Sep 18 07:32:23 foo systemd[1]: Stopped The Apache HTTP Server.
Sep 18 07:32:23 foo systemd[1]: Starting The Apache HTTP Server…
Sep 18 07:32:23 foo gssproxy[4182]: [2022/09/18 05:32:23]: Client [2022/09/18 05:32:23]: (/usr/sbin/apache2) [2022/09/18 05:32:23]: connected (fd = 10)[2022/09/18 05:32:23]: (pid = 4657) (uid = 0) (gid = 0)[2022/09/18 05:32:23]:
root@foo:~# Sep 18 07:32:23 foo gssproxy[4182]: message repeated 8 times: [ [2022/09/18 05:32:23]: Client [2022/09/18 05:32:23]: (/usr/sbin/apache2) [2022/09/18 05:32:23]: connected (fd = 10)[2022/09/18 05:32:23]: (pid = 4657) (uid = 0) (gid = 0)[2022/09/18 05:32:23]:]
Sep 18 07:32:23 foo systemd[1]: Started The Apache HTTP Server.

Finally, set up a dummy test page on the server:

root@foo:~# echo OK > /var/www/html/private

To verify that the server is working properly you may acquire tickets locally and then use curl to retrieve the GSS-API protected resource. The "--negotiate" option enables SPNEGO, and "--user :" asks curl to take the username from the environment.

root@foo:~# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: jas@GSSPROXY.EXAMPLE.ORG

Valid starting Expires Service principal
09/18/22 07:40:37 09/19/22 07:40:37 krbtgt/GSSPROXY.EXAMPLE.ORG@GSSPROXY.EXAMPLE.ORG
root@foo:~# curl --negotiate --user : http://foo.sjd.se/private
OK
root@foo:~#

The log should contain something like this:

Sep 18 07:56:00 foo gssproxy[4872]: [2022/09/18 05:56:00]: Client [2022/09/18 05:56:00]: (/usr/sbin/apache2) [2022/09/18 05:56:00]: connected (fd = 10)[2022/09/18 05:56:00]: (pid = 5042) (uid = 33) (gid = 33)[2022/09/18 05:56:00]:
Sep 18 07:56:00 foo gssproxy[4872]: [CID 10][2022/09/18 05:56:00]: gp_rpc_execute: executing 6 (GSSX_ACQUIRE_CRED) for service "HTTP", euid: 33,socket: (null)
Sep 18 07:56:00 foo gssproxy[4872]: [CID 10][2022/09/18 05:56:00]: gp_rpc_execute: executing 6 (GSSX_ACQUIRE_CRED) for service "HTTP", euid: 33,socket: (null)
Sep 18 07:56:00 foo gssproxy[4872]: [CID 10][2022/09/18 05:56:00]: gp_rpc_execute: executing 1 (GSSX_INDICATE_MECHS) for service "HTTP", euid: 33,socket: (null)
Sep 18 07:56:00 foo gssproxy[4872]: [CID 10][2022/09/18 05:56:00]: gp_rpc_execute: executing 6 (GSSX_ACQUIRE_CRED) for service "HTTP", euid: 33,socket: (null)
Sep 18 07:56:00 foo gssproxy[4872]: [CID 10][2022/09/18 05:56:00]: gp_rpc_execute: executing 9 (GSSX_ACCEPT_SEC_CONTEXT) for service "HTTP", euid: 33,socket: (null)

The Apache log will look like this; notice the authenticated username shown.

127.0.0.1 - jas@GSSPROXY.EXAMPLE.ORG [18/Sep/2022:07:56:00 +0200] "GET /private HTTP/1.1" 200 481 "-" "curl/7.81.0"

Congratulations, and happy hacking!

20 September, 2022 06:40AM by simon

September 19, 2022

poke @ Savannah

[VIDEO] Testing the toolchain with GNU poke: assembling RISC-V instructions

Don't miss this little talk from Mohammad-Reza Nabipoor about leveraging GNU poke as a testing tool for the assembler.  He uses RISC-V to explore how to better write pickles for instruction sets.  Looks promising!

https://www.youtube.com/watch?v=n09mhw4-m_E

19 September, 2022 11:52AM by Jose E. Marchesi

September 13, 2022

unifont @ Savannah

Unifont 15.0.01 Released

13 September 2022 Unifont 15.0.01 is now available.  This is a major release corresponding to today's Unicode 15.0.0 release.

Download this release from GNU server mirrors at:

     https://ftpmirror.gnu.org/unifont/unifont-15.0.01/

or if that fails,

     https://ftp.gnu.org/gnu/unifont/unifont-15.0.01/

or, as a last resort,

     ftp://ftp.gnu.org/gnu/unifont/unifont-15.0.01/

These files are also available on the unifoundry.com website:

     https://unifoundry.com/pub/unifont/unifont-15.0.01/

Font files are in the subdirectory

     https://unifoundry.com/pub/unifont/unifont-15.0.01/font-builds/

A more detailed description of font changes is available at

      https://unifoundry.com/unifont/index.html

and of utility program changes at

      http://unifoundry.com/unifont/unifont-utilities.html

13 September, 2022 06:03PM by Paul Hardy

September 12, 2022

Luca Saiu

GNU Hackers' Meeting 2022: Call for presentations, even remote

The GNU Hackers’ Meetings or “GHMs” are a friendly and informal venue to discuss technical topics related to GNU (https://www.gnu.org) and free software (https://www.gnu.org/philosophy/free-sw.html); anybody is welcome to register and attend. The GNU Hackers’ Meeting 2022 will take place on October 1st and October 2nd in İzmir, Turkey; see the event home page at https://www.gnu.org/ghm/2022. We decided to help students who wish to attend by contributing 50€ out of their 60€ attendance fee (required by the hotel for use of the conference room, coffee and snacks), so that students will only need to pay 10€, upon presenting proof of ... [Read more]

12 September, 2022 04:05PM by Luca Saiu (positron@gnu.org)

September 04, 2022

The GNU Hackers' Meeting 2022 is less than one month away

The GNU Hackers’ Meetings are a venue to discuss technical topics related to GNU and free software. GNU Hackers’ Meetings have been taking place since 2007: you may want to look at the pages documenting most past editions (https://www.gnu.org/ghm/previous.html) which in many cases also include presentation slides and video recordings. The event atmosphere is always friendly and informal. Anybody is welcome to register and attend, including newcomers. The next GNU Hackers’ Meeting will take place in İzmir, Turkey on Saturday 1st and Sunday 2nd October 2022. We updated the GHM 2022 web page (https://www.gnu.org/ghm/2022) with information about the venue, accommodation ... [Read more]

04 September, 2022 06:21PM by Luca Saiu (positron@gnu.org)

September 03, 2022

grep @ Savannah

grep-3.8 released [stable]

This is to announce grep-3.8, a stable release.
Special thanks to Carlo Arenas for adding PCRE2 support
and to Paul Eggert for his many fine changes.

There have been 104 commits by 6 people in the 55 weeks since 3.7.
See the NEWS below for a brief summary.

Thanks to everyone who has contributed!
The following people contributed changes to this release:

  Carlo Marcelo Arenas Belón (2)
  Helge Kreutzmann (1)
  Jim Meyering (27)
  Ondřej Fiala (1)
  Paul Eggert (71)
  Ulrich Eckhardt (2)

Jim [on behalf of the grep maintainers]
==================================================================

Here is the GNU grep home page:
    http://gnu.org/s/grep/

For a summary of changes and contributors, see:
  http://git.sv.gnu.org/gitweb/?p=grep.git;a=shortlog;h=v3.8
or run this command from a git-cloned grep directory:
  git shortlog v3.7..v3.8

To summarize the 432 gnulib-related changes, run these commands
from a git-cloned grep directory:
  git checkout v3.8
  git submodule summary v3.7

==================================================================
Here are the compressed sources:
  https://ftp.gnu.org/gnu/grep/grep-3.8.tar.gz   (2.8MB)
  https://ftp.gnu.org/gnu/grep/grep-3.8.tar.xz   (1.7MB)

Here are the GPG detached signatures:
  https://ftp.gnu.org/gnu/grep/grep-3.8.tar.gz.sig
  https://ftp.gnu.org/gnu/grep/grep-3.8.tar.xz.sig

Use a mirror for higher download bandwidth:
  https://www.gnu.org/order/ftp.html

Here are the SHA1 and SHA256 checksums:
eb3bf741fefb2d64e67d9ea6d74c723ea0efddb6  grep-3.8.tar.gz
jeYKUWnAwf3YFwvZO72ldbh7/Pp95jGbi9YNwgvi+5c  grep-3.8.tar.gz
6d0d32cabaf44efac9e1d2c449eb041525c54b2e  grep-3.8.tar.xz
SY18wbT7CBkE2HND/rtzR1z3ceQk+35hQa/2YBOrw4I  grep-3.8.tar.xz

Each SHA256 checksum is base64 encoded, preferred over the much
longer hexadecimal encoding that most checksum tools default to.

Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact.  First, be sure to download both the .sig file
and the corresponding tarball.  Then, run a command like this:

  gpg --verify grep-3.8.tar.gz.sig

The signature should match the fingerprint of the following key:

  pub   rsa4096/0x7FD9FCCB000BEEEE 2010-06-14 [SCEA]
        Key fingerprint = 155D 3FC5 00C8 3448 6D1E  EA67 7FD9 FCCB 000B EEEE
  uid   Jim Meyering <jim@meyering.net>

If that command fails because you don't have the required public key,
or that public key has expired, try the following commands to retrieve
or refresh it, and then rerun the 'gpg --verify' command.

  gpg --locate-external-key jim@meyering.net
  gpg --recv-keys 7FD9FCCB000BEEEE
  wget -q -O- 'https://savannah.gnu.org/project/release-gpgkeys.php?group=grep&download=1' | gpg --import -

As a last resort to find the key, you can try the official GNU
keyring:

  wget -q https://ftp.gnu.org/gnu/gnu-keyring.gpg
  gpg --keyring gnu-keyring.gpg --verify grep-3.8.tar.gz.sig

This release was bootstrapped with the following tools:
  Autoconf 2.72a.55-bc66c
  Automake 1.16i
  Gnulib v0.1-5279-g19435dc207

==================================================================
NEWS

* Noteworthy changes in release 3.8 (2022-09-02) [stable]

** Changes in behavior

  The -P option is now based on PCRE2 instead of the older PCRE,
  thanks to code contributed by Carlo Arenas.

  The egrep and fgrep commands, which have been deprecated since
  release 2.5.3 (2007), now warn that they are obsolescent and should
  be replaced by grep -E and grep -F.

  The confusing GREP_COLOR environment variable is now obsolescent.
  Instead of GREP_COLOR='xxx', use GREP_COLORS='mt=xxx'.  grep now
  warns if GREP_COLOR is used and is not overridden by GREP_COLORS.
  Also, grep now treats GREP_COLOR like GREP_COLORS by silently
  ignoring it if it attempts to inject ANSI terminal escapes.

  Regular expressions with stray backslashes now cause warnings, as
  their unspecified behavior can lead to unexpected results.
  For example, '\a' and 'a' are not always equivalent
  <https://bugs.gnu.org/39678>.  Similarly, regular expressions or
  subexpressions that start with a repetition operator now also cause
  warnings due to their unspecified behavior; for example, *a(+b|{1}c)
  now has three reasons to warn.  The warnings are intended as a
  transition aid; they are likely to be errors in future releases.

  Regular expressions like [:space:] are now errors even if
  POSIXLY_CORRECT is set, since POSIX now allows the GNU behavior.

** Bug fixes

  In locales using UTF-8 encoding, the regular expression '.' no
  longer sometimes fails to match Unicode characters U+D400 through
  U+D7FF (some Hangul Syllables, and Hangul Jamo Extended-B) and
  Unicode characters U+108000 through U+10FFFF (half of Supplemental
  Private Use Area plane B).
  [bug introduced in grep 3.4]

  The -s option no longer suppresses "binary file matches" messages.
  [Bug#51860 introduced in grep 3.5]

** Documentation improvements

  The manual now covers unspecified behavior in patterns like \x, (+),
  and range expressions outside the POSIX locale.

03 September, 2022 08:04AM by Jim Meyering

September 02, 2022

remotecontrol @ Savannah

September 01, 2022

Andy Wingo

new month, new brainworm

Today, a brainworm! I had a thought a few days ago and can't get it out of my head, so I need to pass it on to another host.

So, imagine a world in which there is a drive to build a kind of Kubernetes on top of WebAssembly. Kubernetes nodes are generally containers, associated with additional metadata indicating their place in overall system topology (network connections and so on). (I am not a Kubernetes specialist, as you can see; corrections welcome.) Now in a WebAssembly cloud, the nodes would be components, probably also with additional topological metadata. VC-backed companies will duke it out for dominance of the WebAssembly cloud space, and in a couple years we will probably emerge with an open source project that has become a de-facto standard (though it might be dominated by one or two players).

In this world, Kubernetes and Spiffy-Wasm-Cloud will coexist. One of the success factors for Kubernetes was that you can just put your old database binary inside a container: it's the same ABI as when you run your database in a virtual machine, or on (so-called!) bare metal. The means of composition are TCP and UDP network connections between containers, possibly facilitated by some kind of network fabric. In contrast, in Spiffy-Wasm-Cloud we aren't starting from the kernel ABI, with processes and such: instead there's WASI, which is more of a kind of specialized and limited libc. You can't just drop in your database binary, you have to write code to get it to conform to the new interfaces.

One consequence of this situation is that I expect WASI and the component model to develop a rich network API, to allow WebAssembly components to interoperate not just with end-users but also other (micro-)services running in the same cloud. Likewise there is room here for a company to develop some complicated network fabrics for linking these things together.

However, WebAssembly-to-WebAssembly links are better expressed via typed functional interfaces; it's more expressive and can be faster. Not only can you end up having fine-grained composition that looks more like lightweight Erlang processes, you can also string together components in a pipeline with communications overhead approaching that of a simple function call. Relative to Kubernetes, there are potential 10x-100x improvements to be had, in throughput and in memory footprint, at least in some cases. It's the promise of this kind of improvement that can drive investment in this area, and eventually adoption.

But, you still have some legacy things running in containers. What to do? Well... Maybe recompile them to WebAssembly? That's my brain-worm.

A container is a file system image containing executable files and data. Starting with the executable files, they are in machine code, generally x64, and interoperate with system libraries and the run-time via an ABI. You could compile them to WebAssembly instead. You could interpret them as data, or JIT-compile them as webvm does, or directly compile them to WebAssembly. This is the sort of thing you hire Fabrice Bellard to do ;) Then you have the filesystem. Let's assume it is stateless: any change to the filesystem at runtime doesn't need to be preserved. (I understand this is a goal, though I could be wrong.) So you could put the filesystem in memory, as some kind of addressable data structure, and you make the libc interface access that data structure. It's something like the microkernel approach. And then you translate whatever topological connectivity metadata you had for Kubernetes to your Spiffy-Wasm-Cloud's format.

Anyway in the end you have a WebAssembly module and some metadata, and you can run it in your WebAssembly cloud. Or on the more basic level, you have a container and you can now run it on any machine with a WebAssembly implementation, even on other architectures (coucou RISC-V!).

Anyway, that's the tweet. Have fun, whoever gets to work on this :)

01 September, 2022 10:12AM by Andy Wingo

August 31, 2022

freeipmi @ Savannah

FreeIPMI 1.6.10 Released

o Support IPv6 Lan configuration in ipmi-config.  IPv6
  configuration is supported in the new Lan6_Conf section (see the
  sketch after this list).
o Fix static compilation issues by renaming a number of internal
  functions.
o Misc documentation corrections.
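
A checkout of the new section might look like the following sketch (host and credential options omitted; see ipmi-config(8) for the full invocation):

ipmi-config --checkout --section=Lan6_Conf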

https://ftp.gnu.org/gnu/freeipmi/freeipmi-1.6.10.tar.gz

31 August, 2022 05:13PM by Albert Chu

August 30, 2022

Parabola GNU/Linux-libre

Grub bootloader upgrade and configuration incompatibilities

2022-08-30 - Christian Hesse

Recent changes in grub added a new command option to fwsetup and changed the way the command is invoked in the generated boot configuration. Depending on your system hardware and setup, this could cause an unbootable system due to incompatibilities between the installed bootloader and the generated configuration. After a grub package update it is advised to run both the bootloader installation and the configuration regeneration:

grub-install ...
grub-mkconfig -o /boot/grub/grub.cfg

30 August, 2022 09:22PM by bill auger

August 22, 2022

parallel @ Savannah

GNU Parallel 20220822 ('Rushdie') released

GNU Parallel 20220822 ('Rushdie') has been released. It is available for download at: lbry://@GnuParallel:4

Quote of the month:

  Parallel is Good Stuff (tm)
    -- bloopernova@ycombinator

New in this release:

  • --header 0 allows using {filename} as a replacement string.
  • Bug fixes and man page updates.

GNU Parallel - For people who live life in the parallel lane.

If you like GNU Parallel, record a video testimonial: Say who you are, what you use GNU Parallel for, how it helps you, and what you like most about it. Include a command that uses GNU Parallel if you feel like it.

About GNU Parallel

GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.

For example you can run this to convert all jpeg files into png and gif files and have a progress bar:

  parallel --bar convert {1} {1.}.{2} ::: *.jpg ::: png gif

Or you can generate big, medium, and small thumbnails of all jpeg files in sub dirs:

  find . -name '*.jpg' |
    parallel convert -geometry {2} {1} {1//}/thumb{2}_{1/} :::: - ::: 50 100 200

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with:

    $ (wget -O - pi.dk/3 || lynx -source pi.dk/3 || curl pi.dk/3/ || \
       fetch -o - http://pi.dk/3 ) > install.sh
    $ sha1sum install.sh | grep 883c667e01eed62f975ad28b6d50e22a
    12345678 883c667e 01eed62f 975ad28b 6d50e22a
    $ md5sum install.sh | grep cc21b4c943fd03e93ae1ae49e28573c0
    cc21b4c9 43fd03e9 3ae1ae49 e28573c0
    $ sha512sum install.sh | grep ec113b49a54e705f86d51e784ebced224fdff3f52
    79945d9d 250b42a4 2067bb00 99da012e c113b49a 54e705f8 6d51e784 ebced224
    fdff3f52 ca588d64 e75f6033 61bd543f d631f592 2f87ceb2 ab034149 6df84a35
    $ bash install.sh

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your command line will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2018): GNU Parallel 2018, March 2018, https://doi.org/10.5281/zenodo.1146014.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/Identi.ca/Google+/Twitter/Facebook/LinkedIn/mailing lists
  • Get the merchandise https://gnuparallel.threadless.com/designs/gnu-parallel
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference

If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)

If GNU Parallel saves you money:

  • (Have your company) donate to FSF https://my.fsf.org/donate/

About GNU SQL

GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL of the form [sql:]vendor://[[user][:password]@][host][:port]/[database] (for example, mysql://user:pass@host/mydb). If commands are left out, you will get that database's interactive shell.

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.

About GNU Niceload

GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.

22 August, 2022 08:25PM by Ole Tange

August 11, 2022

a2ps @ Savannah

a2ps 4.14.92 released [alpha]

This alpha release reverts extensive whitespace changes to --help output, so
as not to annoy translators (thanks, Benno Schulenberg!).  These will be
restored before the next stable release in a “whitespace-only” change.

11 August, 2022 03:06PM by Reuben Thomas

August 09, 2022

GNU Taler news

Zero-Knowledge Age Restriction for GNU Taler

We propose a design for a privacy-friendly method of age restriction in e-commerce that is aligned with the principle of subsidiarity. The design is presented as an extension of a privacy-friendly payment protocol with a zero-knowledge scheme that cryptographically augments coins for this purpose. Our scheme enables buyers to prove that they are of sufficient age for a particular transaction without disclosing their age. Our modification preserves the privacy and security properties of the payment system, such as the anonymity of minors as buyers and the unlinkability of transactions. We show how our scheme can be instantiated with ECDSA as well as with a variant of EdDSA, and how it can be integrated with the GNU Taler payment system. We provide formal proofs and an implementation of our proposal. Key performance measurements for various CPU architectures and implementations are presented.

09 August, 2022 10:00PM

GNU Health

Cirugía Solidaria chooses GNU Health

The GNU Health community keeps growing, and that makes us very proud! This time, the Spanish non-profit organization Cirugía Solidaria has chosen GNU Health as their Hospital and Lab Management system.

Cirugía Solidaria was founded in 2000 by a team of surgeons, anesthetists and nurses from “Virgen de la Arrixaca Hospital” in Murcia, Spain, with the goal of providing medical assistance and performing surgeries for underprivileged populations and those at risk of social exclusion. Currently, Cirugía Solidaria has a multi-disciplinary team of health professionals around Spain, and it recently marked its 20th anniversary of cooperation.

GNUHealth Hospital Management client for Cirugía Solidaria

Around a month ago I received a message from Dr. Cerezuela, expressing their willingness to be part of the GNU Health community. Their main missions are currently focused on, but not limited to, the African continent.

After several conferences and meetings, on August 1st, 2022, Cirugía Solidaria and GNU Solidario signed an agreement to cooperate in the implementation, training and maintenance of the GNU Health Hospital Management and Lab Information System in those countries and health institutions where Cirugía Solidaria will be present.

This is very exciting. We have many projects in different African countries, and working with Cirugía Solidaria will help generate more local capacity to cover the needs of those health professionals and their populations.

This is not just about surgeries or health informatics. GNU Health will allow Cirugía Solidaria to create sustainable projects. They will have unified clinical and surgical histories and telemedicine, and they will be able to assess the nutritional and educational status of the population, along with many other socioeconomic determinants of health and disease.

I want to give our warmest welcome to the team of Cirugía Solidaria, and we are very much looking forward to cooperating with this great organization, for the betterment of our societies, and for those that need it most.

About GNU Health

The GNU Health project provides the tools for individuals, health professionals, institutions and governments to proactively assess and improve the underlying determinants of health, from the socioeconomic agents to the molecular basis of disease. From primary health care to precision medicine.

GNU Health is a Libre, community driven project from GNU Solidario, a non-profit humanitarian organization focused on Social Medicine. Our project has been adopted by public and private health institutions and laboratories, multilateral organizations and national public health systems around the world.

The following are the main components that make up the GNU Health ecosystem:

  • Social Medicine and Public Health
  • Hospital Management (HMIS)
  • Laboratory Management (Occhiolino)
  • Personal Health Record (MyGNUHealth)
  • Bioinformatics and Medical Genetics
  • Thalamus and Federated health networks
  • GNU Health embedded on Single Board devices

GNU Health is a GNU (www.gnu.org) official package, awarded with the Free Software Foundation award of Social benefit, among others. GNU Health has been adopted by many hospitals, governments and multilateral organizations around the globe.

See also:

GNU Health : https://www.gnuhealth.org

GNU Solidario : https://www.gnusolidario.org

Digital Public Good Alliance: https://digitalpublicgoods.net/

Original post : https://my.gnusolidario.org/2022/08/09/cirugia-solidaria-chooses-gnu-health/

09 August, 2022 05:22PM by Luis Falcon

August 08, 2022

a2ps @ Savannah

a2ps 4.14.91 released [alpha]

This alpha release marks the return of GNU a2ps to the Translation Project.

Some other minor issues have also been fixed.


Here are the compressed sources and a GPG detached signature:
  https://alpha.gnu.org/gnu/a2ps/a2ps-4.14.91.tar.gz
  https://alpha.gnu.org/gnu/a2ps/a2ps-4.14.91.tar.gz.sig

Use a mirror for higher download bandwidth:
  https://www.gnu.org/order/ftp.html

Here are the SHA1 and SHA256 checksums:

36c2514304132eb2eb8921252145ced28f209182  a2ps-4.14.91.tar.gz
1LQ+pPTsYhMbt09CdSMrTaMP55VIi0MP7oaa+zDvRG0  a2ps-4.14.91.tar.gz

The SHA256 checksum is base64 encoded, instead of the
hexadecimal encoding that most checksum tools default to.

Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact.  First, be sure to download both the .sig file
and the corresponding tarball.  Then, run a command like this:

  gpg --verify a2ps-4.14.91.tar.gz.sig

The signature should match the fingerprint of the following key:

  pub   rsa2048 2013-12-11 [SC]
        2409 3F01 6FFE 8602 EF44  9BB8 4C8E F3DA 3FD3 7230
  uid   Reuben Thomas <rrt@sc3d.org>
  uid   keybase.io/rrt <rrt@keybase.io>

If that command fails because you don't have the required public key,
or that public key has expired, try the following commands to retrieve
or refresh it, and then rerun the 'gpg --verify' command.

  gpg --locate-external-key rrt@sc3d.org

  gpg --recv-keys 4C8EF3DA3FD37230

  wget -q -O- 'https://savannah.gnu.org/project/release-gpgkeys.php?group=a2ps&download=1' | gpg --import -

As a last resort to find the key, you can try the official GNU
keyring:

  wget -q https://ftp.gnu.org/gnu/gnu-keyring.gpg
  gpg --keyring gnu-keyring.gpg --verify a2ps-4.14.91.tar.gz.sig


This release was bootstrapped with the following tools:
  Autoconf 2.69
  Automake 1.16.1
  Gnulib v0.1-5347-gc0c72120f0

NEWS

* Noteworthy changes in release 4.14.91 (2022-08-08) [alpha]
 * Build:
   - Re-add a2ps to the Translation Project, and remove po files from git.
 * Bug fixes:
   - Remove reference to @COM_distill@ variable in a2ps_cfg.in.
 * Documentation:
   - Format --help output consistently to 80 columns.
   - Fix a couple of message typos.

08 August, 2022 07:47PM by Reuben Thomas

August 06, 2022

Christine Lemmer-Webber

A Guile Steel smelting pot

Last month I made a blogpost titled Guile Steel: A Proposal for a Systems Lisp. It got more attention than I anticipated, which is both a blessing and a curse. I mean, mostly the former; the curse isn't so serious, it's mostly that the post was aimed at a specific community and got more coverage than that, and funny things happen when things leave their intended context.

The blessing is that real, actual progress has happened, in terms of organization, actual development (thanks to others mostly!), and a compilation of potential directions. In many ways "Guile Steel" was meant to be a meta project, somewhat biased around Guile but more so a clever name to start brewing some ideas (and gathering intelligence) around, a call-to-arms for those who are likeminded, a test even to see if there are enough likeminded people out there. The answer to that one is: yes, and there's actually a lot that's happening or has happened historically. I actually think Lisp is going through a quiet renaissance and is on the verge of a major revival, but that's a topic for another post. The goal of this post is to give a lay of the landscape, as I've seen it since then. There's a lot out there.

If you enjoy this post by the way, there's an IRC channel: #guile-steel on irc.libera.chat. It's surprisingly well populated given that people have only shown up through word of mouth.

First, an aside on language (again)

Also by-the-way, it's debatable what "systems language" even means, and the previous post spent some time debating that. Language is mostly fuzzy, and subject to the constant battle between fuzzy and crisp systems, and "systems language" is something people invoke to make themselves sound like very crispy people, even though the term could hardly be fuzzier.

We're embracing the hand-waviness here; I've previously mentioned that "Blockchain" is to "Bitcoin" what "Roguelike" is to "Rogue". Similarly, "Systems Language" is to "C/Rust" what "Roguelike" is to "Rogue".

My friend Technomancy put things about as well or as succinctly as you could: "low-level enough to bootstrap a runtime". We'll extend "runtime" to not only mean "programming language runtime" but also "operating system runtime", and that's the scope of this post.

With that, let's start diving into directions.

Carp and Scopes

I was unaware, at the time of writing the previous post, of two remarkable "systems lisps" that already exist, here, now, today, and that you can and maybe should use: Carp and Scopes. Both of them are statically typed, and both perform automatic memory management without the overhead of a garbage collector or reference counting, in a style familiar from Rust.

They are also both kind of similar yet different. Carp is written in Haskell and looks a lot like Clojure in style. Scopes is written in C++, looks a lot like Scheme, and has an optional non-parenthetical whitespace syntax which reminds me a lot of Wisp, so it is maybe more amenable to the type of people who fear parentheses or who must work with people who do.

I can't make a judgement about either; I would like to find some time to try each of them. Scopes looks more up my alley of the two. If someone packaged either of these languages for Guix I would try it in a heartbeat.

Anyway, Carp and Scopes already are systems lisps of sorts you can try today. (If you've made something cool with either of them, let me know.)

Pre-Scheme

There's a lot to say on this one, despite its obscurity, enough that I'm going to give it several sub-headings. I'll get the big one up front: Andrew Whatson is doing an incredible job porting Pre-Scheme to Guile. But I'll get more to that below.

What the heck is Pre-Scheme anyway?

PreScheme (or is it Pre-Scheme or prescheme or what? nobody is consistent, and I won't be either) is a "systems lisp" that is used to bootstrap the incredible but "sleeper hit" (or shall we say "cult classic"?) of programming language runtimes, Scheme48. PreScheme compiles to C, is statically typed with type inference based on a modified version of Hindley-Milner, and uses manual memory management for heap-allocated resources (much like C) rather than garbage collection. (C is just the current main target, compiling directly to native architectures or WebAssembly is also possible.)

The wild things about PreScheme are that unlike C or Rust, you can hack on it live at the REPL just like Scheme, and you can even incorporate a certain amount of Scheme, and it mostly looks like Scheme. But it still compiles down efficiently to low-level code.

It's used to implement Scheme48's virtual machine and garbage collector, and is bootstrappable from a working Scheme48, but there's also apparently a version sitting around somewhere on top of some metacircular Scheme which Jonathan Rees wrote on top of Common Lisp, giving it a good bootstrapping story. While used for Scheme48, and usable from it today, there's no reason you can't use it for other things, and a few smaller projects have.

What's more wild about PreScheme is how incredibly good of an idea it is, how long it's sat around (since the 80s, with a lot of work happening in the 90s!), and how little attention it's gotten. PreScheme's thoughtful design actually follows from Richard Kelsey's amazing PhD dissertation, Compilation By Program Transformation, which really feels like the kind of obscure CS thing that, if you've made it this far in this writeup, you probably would love reading. (Thank you to Olin Shivers for reviving this dissertation in LaTeX, which otherwise would have been lost to history.)

guile-prescheme

Now, I did mention prescheme and how I thought it was a fairly interesting starting point in the last Guile Steel blogpost, and I actually got several people reaching out to me saying they wanted to take up this initiative. A few of them suggested maybe they should start porting PreScheme to Guile, and I said "yes you should!" to all of them, but one person took up the initiative quickly and has been doing a straight and faithful port to Guile named guile-prescheme.

The emulator (which isn't too much code, really) has already worked for a couple of weeks (which means you can already hack PreScheme at Guile's REPL), and Andrew Whatson says that the "compile to C" compiler is already well on its way, and will likely be there in about a month.

The main challenge, apparently, is the conversion of macros, which are stuck in the r4rs era of Scheme. Andrew has been slowly converting everything to syntax-case, which is standardized in r6rs, and that raises the question: how general of a port is this? Does it really have to be just to Guile? And that brings us to our next subsection...

The Secret Society of PreScheme Revivalists

Okay there's not really a secret society, we just have an email thread going and I organized a video call recently, and we're likely to do another one (I hope). This one was really good, very productive. (We didn't record it, sadly. Maybe we should have.)

On said video call we had Andrew Whatson, of course, who's doing the current porting effort, but also Richard Kelsey (the original brain behind PreScheme and co-author of much of Scheme48), Michael Sperber (current maintainer of Scheme48, and someone who has previously used PreScheme commercially, doing some Monte Carlo simulation things for a financial firm or something curious like that), and Jonathan Rees (co-author of Scheme48, and one of my friends who I like to call up to talk about all sorts of curious topics). There were a few others, all cool people, and also me, hand-waving excitedly as usual.

As an aside, my wife Morgan says my superpower is that I'm good at "showing up in a room and being excited and getting everyone else excited", and she's right, I think. And... well there's just a lot of interesting stuff in computer science history, amazing people whose work has just been mostly ignored, stuff left on the shelf. It's not what the Tech Influencers (TM) are currently touting, it's not what a FAANG company is going to currently hire you to do, but if you're trying to solve the hard problems people don't even realize they have, your best bet is to scour history.

I don't know if it's true or not, but this felt like one of those times where the folks who have worked on PreScheme historically seemed kind of surprised, but also happy, that here we had a gathering of people who are extremely interested in the stuff they've done. Anyway, that was my reading; I like to think so, anyway. Andrew (I think?) said some nice things about how it was just exciting to be able to talk to the people who have done these things, and I agree. It is cool stuff. We are grateful to be able to talk about it.

The conversation was really nice; we got some interesting historical information (some of which I've conveyed here), and Richard Kelsey indicated that he's been doing work on microcontrollers and wishes he could be using PreScheme, but the thing clients/employers get nervous about is "will we be able to actually hire anyone to work on this stuff who isn't just you?" I'd like to think that we're building up enough enthusiasm that we can demonstrate the affirmative, but that's going to take some time.

Anyway, I hinted in the last part that some of the more interesting conversation came down to: just how portable is this port? Andrew indicated that he thought the port to Guile, as he was doing it, was already helping to make things more portable. Andrew is focusing on Guile first, but is avoiding the Guile-specific ice-9 namespace of Guile modules (a name which in this case, from a standardization perspective, becomes a little bit too appropriate) and is using as much generic Scheme, and as many SRFI extensions, as possible. Once the Guile version gets working, the goal is to try porting to a more standardized form of Scheme (probably r7rs-small), which would mean that any Scheme following that standard could use the same version of PreScheme. Michael Sperber seemed to indicate that maybe Scheme48 could use this version too.

This would actually be pretty incredible because it would mean that any version of Scheme following the Scheme standard would suddenly have access to PreScheme, and any of those could also be used to bootstrap a PreScheme based Scheme.

A PreScheme JIT?

I thought Andrew Whatson (flatwhatson here) said this well enough himself so I'm just going to quote it verbatim:

<flatwhatson> re: pre-scheme interest for bootstrapping, i think it's more
              interesting than just "compiling to C"
<flatwhatson> Michael Sperber's rejected paper "A Tractable Native-Code Scheme
              System" describes repurposing the pre-scheme compiler (more
              accurately called the transformational compiler) as a jit
              byte-code optimizer and native-code emitter
<flatwhatson> the prescheme compiler basically lowers prescheme code to a
              virtual machine-code and then emits that as C
<flatwhatson> i think it would be feasible to directly emit native code at
              that point instead
<flatwhatson> https://www.deinprogramm.de/sperber/papers/tractable-native-code-scheme-system.pdf
<flatwhatson> Kelsey's dissertation describes transforming a high-level
              language to a low-level language, not specifically scheme to C.
<flatwhatson> > The machine language is an assembly language written in the
              syntax of the intermediate language and has a much simpler
              semantics. The machine is assumed to be a Von Neumann machine
              with a store and register-to-register instructions. Identifiers
              represent the machine’s registers and primitive procedures are
              the machine’s instructions.
<flatwhatson> Also, we have code for the unreleased byte-code jit-compiling
              native-emitting version of Scheme 48:
              https://www.s48.org/cgi-bin/hgwebdir.cgi/s48-compiler/

(How the hell that paper was rejected btw, I have no idea. It's great.)

Future directions for PreScheme

One obvious improvement to PreScheme is: compile to WebAssembly (aka WASM)! This would be pretty neat and maybe, maybe, maybe could mean a good path to getting more Schemes in the browser without using Emscripten (which is a bit heavy-handed of an approach). Andrew and I both think this is a fun idea, worth exploring. I think once the "compile to C" part of the port to Guile is done, it's worth beginning to look at in earnest.

Relatedly, it would also, I think, be pretty neat if guile-prescheme was compelling enough for more of Guile to be rewritten in it. This would improve Guile's already-better-than-most bootstrapping story and also make hacking on certain parts of Guile's internals more accessible and pleasant to a larger part of Guile's existing userbase.

The other obvious improvement to PreScheme is exploring (handwave handwave handwave) the kinds of automated memory management which have become popular with Rust's borrow checker and also appear in Carp and Scopes, as discussed above.

3L: The Computing System of the Future (???)

I mentioned that an appealing use of PreScheme might be to write not just a language runtime, but also an operating system. A very interesting project called 3L exists and is real and does just that. In fact, it's also a capability-secure operating system, and it cites all the right stuff and has all the right ideas going for it. And it's using PreScheme!

Now the problem is, seemingly nobody I know who would be interested in exactly this kind of project had even heard of it before (except the incredible and incredibly nice hacker pukkamustard, who made me aware of it by mentioning it in the #guile-steel chatroom), and I couldn't even find the actual code on the main webpage. But there actually is source code, not a lot of it, but it's there, and in a way "not a lot of it" is not a bad thing here, because what's there looks stunningly similar to a very familiar metacircular evaluator, which raises the question: is that really enough, though?

And actually maybe it is, because hey look there's a demo video and a nice talk. And it's using Scheme48!

(As a complete aside: I'd be much more likely to play with Scheme48 if someone added Geiser support for it... that's something I've poked at doing every now and then, but I haven't had enough of a dedicated block of time. If you, dear reader, feel inspired enough to add such support, or actually if you give 3L a try, let me know.)

Anyway, cool stuff, I've been meaning to reach out to the author, maybe I will after I post this. I wonder what's come of it. (It's also missing a license file or any indicators, but maybe we could get that fixed easily enough?)

WebAssembly

I had a call with someone recently who said WebAssembly was really only useful for C/Rust users, which I found fairly surprising/confusing, but maybe that's because I think WebAssembly is pretty cool and have hand-coded a small amount of it for fun. Its text-based syntax is S-expression based, which makes it appealing for lispy-type folks, and just easy to parse and work with in general.

It's stretching it a bit to call WebAssembly a Lisp; it's really just something that's designed to be an intermediate language (e.g. as in GCC), a place where compiler authors often deem it okay/acceptable to use s-expressions because they don't fear that they'll scare off non-lispers or PLT people, because hey, most users aren't going to touch this stuff anyway, right?

I dunno, I consider it a win at least that s-expressions have survived here. I showed up to an in-person WebAssembly meeting once and talked to one of the developers about it, praised them for this choice, and they said "Oh... yeah, well, we initially did it because it was the easiest thing to start with, and then eventually we came to like it, which I guess is the usual thing that happens with S-Expressions." (Literally true, look up the history of M-Expressions vs S-Expressions.)

At any rate, most people aren't coding WebAssembly by hand. However, you could, and if you're going to, a Lisp based environment is actually a really good choice. wasm-adventure is a really cool little demo game (try it!), all hand-written in WebAssembly kinda sorta. The README gives its motivations as "Is it possible (and enjoyable) to write a game directly in web assembly's text format? Eventually, would it be cool to generate wat from Scheme code using the Racket lang system?", and the answer becomes an emphatic "yes". What's interesting is that Lisp's venerable quasiquote does much of the heavy lifting to make, without too much work, a little DSL for authoring WebAssembly which results in some surprisingly easy to read code compared to generic WebAssembly. (The author, Zoé Martin, is another one of those quiet geniuses you run into on the internet; she has a lovely homebrew computer design too.)

So what I'd really like is to see more languages compiling directly to WebAssembly without Emscripten as an intermediate hack. Guile especially, of course. Andy Wingo gave an amazing little talk on this where he does a little (quasi, pre-recorded) live coding demo of compiling to WebAssembly, and I thought "YES!!! Compiling to WASM is probably right around the corner." It turns out that's probably not the case, because Wingo would like to see some important extensions to WASM land first, and I guess, yes, that probably makes sense. He's also working on a new garbage collector which seems damn cool, like it'll be really good for Guile, and maybe it could even help the compiling-to-WASM story before the much-desired WASM-GC extension we all want lands, but who knows. I mean, it would also be nice to have, like, the tail call elimination extension, etc etc etc. But see also Wingo's writeup about targeting the web from last year. (And on that note, I mean, is WebAssembly the new Kubernetes?)

As another aside, there are two interesting Schemes which are actually written in WebAssembly: one written directly in hand-coded WASM named scheme.wasm, and one which compiles itself to WebAssembly called Schism (which has a cool paper, but sadly hasn't been updated in a couple of years).

As another another aside, I was on a video call with Douglas Crockford at one point and mentioned WebAssembly and how cool I thought it was, and Crock kinda went "meh" at it, and I was like, what? I mean, it has ocap people (you and I have both collaborated with them on it), overall its design seems pretty good, better than most of the things of its ilk that have been tried before; why are you meh'ing WebAssembly? And Crock said that, well, it's not that WebAssembly is bad, it's just that it felt like an opportunity to do something impactful, and it's "just another von Neumann architecture". Like, boring, can't we do better? But when I asked for specific alternatives, Crock didn't have a specific design in mind, just thought that maybe we could do better, maybe it could even incorporate actors at a more fundamental level.

Well... it turns out we both know someone who did just that, and (so I hear) both recently got nerdsniped by that very same person who had just such an architecture...

Mycelia and uFork

So I had a call with a sorta-colleague, sorta-friend I guess? I'm talking about Dale Schumacher, and I don't know him super well, we don't get to talk that much, but I've enjoyed the amount we have. Dale has been trying to get me to have a video call for a while; we finally did, and I was expecting us to talk about our respective actor'y system projects, and we did... but the big surprise was hearing about Mycelia, Dale's ocap-secure hybrid-actor-model-lisp-machine-lambda-calculus operating system, and its equally astounding, actually maybe more astounding, virtual machine and maybe potentially CPU architecture design, uFork. We're going to take a major digression, but I promise that it ties back in.

This isn't the first time Dale's reached out and it's resulted in me being surprised and nerdsniped. A few years ago Dale reached out to me to talk about this programming language he wrote called Humus. What's personally astounding about Humus is that it has an eerie amount of similarity to Spritely Goblins, the ocap distributed object architecture I've been working on for the last few years, even though we designed our systems fully independently. Dale beat me to it, but it was an independent reinvention in the sense that I simply wasn't aware of Humus until Dale started emailing me.

The eerie similarity is because I think Dale's and my systems are the most seriously true-to-form implementations of the "Classic Actor Model" that have been implemented in recent times (more true than, say, Erlang, which does some other things, and "Classic" thrown on there because Carl Hewitt has some new ideas that he feels strongly should now be associated with "Actor Model", which can be layered on our systems but are not there at the base layer). (Actually, Goblins supports one other thing that makes it more the vat model of computation, but that isn't important for this post.) The Classic Actor Model says (hand-waving past pre-determinism in the general case, at least from the perspective of a single actor, due to ordering concerns... but those too can be layered on) that you can do pretty much all computation in terms of just actors, which are these funky distributed objects which handle messages one at a time, and while handling them are only able to do some combination of three things: (1) send messages to actors they know about, (2) create new actors (and get their address in the process, which they can share with other actors should they choose... argument passing, basically), and (3) designate their behavior for the next time they handle a message. It's pretty common to use "become" for that last operation, but the curious thing that both Dale and I did was use lambdas as the thing you become. (By the way, those familiar with Scheme history should notice something interesting and familiar here, and for that same reason Dale and I are also in the shared company of being scolded by Carl Hewitt for saying our actors are made out of lambdas, despite him liking our systems otherwise, I think...)

I remarked off-hand that "well I guess one of the main differences between our systems, and maybe a thing you might not like, is that mine is lispy / based on Scheme, and..."

Dale waved his hand. "That's mostly surface..."

"Surface syntax, yeah I know. So I guess it doesn't..."

"No wait it does matter. What I'm trying to show you is that I actually do like that kind of stuff. In fact I have some projects which use it at a fundamental level. Here... let me show you..." And that's when we started talking about uFork, the instruction architecture he was working on, which I later found was actually part of a larger thing called Mycelia.

Well, I'm glad I got the lecture directly from Dale because, let's see, how does the Mycelia project brand itself (at the time of writing)? "A bare-metal actor operating system for Raspberry Pi." Well, maybe this underwhelming self-description is why seemingly nobody I know (yes, like 3L above) has heard about it, despite it being extremely up the alley of the kind of programming people I tend to hang out with.

Mycelia is not just some throwaway Raspberry Pi project (which is certainly the impression I would have gotten from lazily scanning that page); most of those are, like, some cute repackaging of an existing FOSS POSIX'y thing. Mycelia is an actually-working, you-can-actually-boot-it-on-real-hardware open source operating system (under Apache v2) with a ton of novel ideas, which happens to target the Raspberry Pi but could be ported to run on anything.

Anyway, there's a lot of interesting stuff in there, but here's a bulleted list summary. For Mycelia:

  • It is an object-capability-secure operating system
  • It has a Lisp-like language for coding, pretty much Scheme-like, to hack around on
  • The Kernel language / Vau calculus show up, which is... wild
  • It encodes the Actor model and the Lambda calculus in a way that is sensible and coherent afaict
  • It is a "Lisp Machine" in many senses of the term.

But the uFork virtual machine / abstract idea for a CPU is also curious on its own. I dunno, I spent the other night reading about it kind of wildly after our call. It also encodes the lambda calculus / actor model in fundamental instructions.

Dale was telling me he'd like to build an actual, physical CPU, but of course that takes a lot of resources, so he might settle for an FPGA for now. The architecture, should it be built, also somehow encodes a hardware garbage collector, which I haven't seen anything do since the actual physical Lisp Machines died out.

At any rate, Dale was really excited to tell me why his system encodes instructions operating on memory split into quads. He asked me why I thought that would be; I'm honestly not sharp enough in this kind of area to know, sadly, though I said "I hope it's not because you're planning on encoding RDF at the CPU layer". Thankfully it's not that, but then he started mentioning how his system encodes a stream of continuations...

Wait, that sounds familiar. "Have you ever heard of something called sectorlisp?" I asked, with a raised eyebrow.

"Scroll to the bottom of the document," Dale said, grinning.

Oh, there it was. Of course.

sectorlisp

The most technically impressive thing I think I've ever seen is John McCarthy's "Lisp implemented in Lisp", also known as a "metacircular evaluator". If you aren't familiar with it, it's been summarized well in the talk The Most Beautiful Program Ever Written by William Byrd. I think the best way to understand it, really, and (I'm biased) the easiest-to-read version of things, is the Scheme in Scheme section of A Scheme Primer (though I wrote that for my work, and as said, I'm biased... I don't think I did anything new there, just explained ideas as simply as I could).

The second most technically impressive thing I've ever seen is sectorlisp, and the two are directly related. According to its README, "sectorlisp is a 512-byte implementation of LISP that's able to bootstrap John McCarthy's meta-circular evaluator on bare metal." Where traditional metacircular evaluator examples can be misconstrued as being the stuff of pure abstractlandia, sectorlisp gets brutally direct about things. In one sector (half a kilobyte!!!), sectorlisp manages to encode a whole-ass lisp system that actually runs. And yet, the nature of the metacircular evaluator persists. (Take that, person on Hacker News who called metacircular evaluators "cheating"! Even if you think mine was, I don't think you can accuse sectorlisp's of that.)

If you do nothing else, watch the sectorlisp blinkenlights demo, even just as an astounding visual demo alone. (Blinkenlights is another project of Justine's, and also wildly impressive.) I highly recommend the following blogposts of Justine's: SectorLISP Now Fits in One Sector, Lisp with GC in 436 bytes, and especially Lambda Calculus in 383 Bytes. Hikaru Ikuta (woodrush) has also written some amazing blogposts, including Extending SectorLISP to Implement BASIC REPLs and Games and Building a Neural Network in Pure Lisp Without Built-In Numbers Using Only Atoms and Lists (and related, but not part of sectorlisp: A Lisp Interpreter Implemented in Conway's Game of Life, which gave off strong Wireworld Computer vibes for me). If you are only going to lazily scan through one of those blogposts, I recommend it be Lambda Calculus in 383 Bytes, which has some wild representations of the ideas (including visually), a bit too advanced for me at present admittedly, though I stare at them in wonder.

I had a bunch more stuff here, partly because the author is someone I find both impressive technically but who has also said some fairly controversial things... to say the least. But I think it was too much of a digression for this article. The short version is that Justine's stuff is probably the smartest, most mind-blowing tech I've ever seen, kinda scarily and intimidatingly smart, and it's hard to mentally reconcile that with some of those statements. I don't know, maybe she wants to move past that phase, I'd like to think so. I think she hasn't said anything like that in a long time, and it feels out of phase with the rest of this post but... it feels like something that needs to be acknowledged.

GOOL, GOAL, and OpenGOAL

Every now and then, when people say Lisp couldn't possibly be performant, Lisp people like to bring up that Naughty Dog famously had its own Lisp implementations for most of its earlier games. Andy Gavin has written about GOOL, which was a mix of lisp and assembly (and of course lisp generating assembly), and I don't think much was written about its follow-up GOAL until OpenGOAL came along, which... I haven't looked at too much, tbh. I guess it's getting interesting for some people for the ability to play Jak and Daxter on modern hardware (which I've never played, but it looked fun), but I'm more curious whether someone's gonna poke at it to do something completely different.

But I do know some of the vague legends. I don't remember if this is true or where I read it, but one of them is that Sony accused Naughty Dog of accessing some devkit or APIs they weren't supposed to have access to, because they were pulling off a bunch of features performantly in Crash Bandicoot that were assumed otherwise not possible. But nope, just lisp nerds making DSLs that pull off some crazy shit, I guess.

BONUS: Shoutout to Kandria

Oh, speaking of games written in Lisp: I guess Kandria's engine is gonna be FOSS, and that game looks wicked cool; maybe support their Kickstarter while you still can. It's not really in the theme of this post given the definition of "systems lisp" I gave earlier, but this blogpost about its lispy internals is really neat.

Okay here's everything else we're done

This was a long-ass post. There are other things that could maybe be discussed, so I'll just dump them briefly:

Okay, that's it. Hopefully you found something interesting out of this post. Meanwhile, I was just gonna spend an hour on this. I spent my whole day! Eeek!

06 August, 2022 06:35PM by Christine Lemmer-Webber (cwebber@dustycloud.org)

August 02, 2022

libc @ Savannah

The GNU C Library version 2.36 is now available

The GNU C Library
=================

The GNU C Library version 2.36 is now available.

The GNU C Library is used as the C library in the GNU system and
in GNU/Linux systems, as well as many other systems that use Linux
as the kernel.

The GNU C Library is primarily designed to be a portable
and high performance C library.  It follows all relevant
standards including ISO C11 and POSIX.1-2017.  It is also
internationalized and has one of the most complete
internationalization interfaces known.

The GNU C Library webpage is at http://www.gnu.org/software/libc/

Packages for the 2.36 release may be downloaded from:
        http://ftpmirror.gnu.org/libc/
        http://ftp.gnu.org/gnu/libc/

The mirror list is at http://www.gnu.org/order/ftp.html

NEWS for version 2.36
=====================

Major new features:

  • Support for DT_RELR relative relocation format has been added to
    glibc.  This is a new ELF dynamic tag that improves the size of
    relative relocations in shared object files and position independent
    executables (PIE).  DT_RELR generation requires linker support for
    -z pack-relative-relocs option, which is supported for some targets
    in recent binutils versions.  Lazy binding doesn't apply to DT_RELR.

  • On Linux, the pidfd_open, pidfd_getfd, and pidfd_send_signal functions
    have been added.  The pidfd functionality provides access to a process
    while avoiding the issue of PID reuse on traditional Unix systems.
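
  As a hedged illustration (not part of the release announcement), the
  following sketch forks a child and signals it through a PID file
  descriptor, avoiding the PID-reuse race of plain kill(); it assumes
  the declarations glibc 2.36 installs in <sys/pidfd.h>:

    #define _GNU_SOURCE
    #include <sys/pidfd.h>   /* pidfd_open, pidfd_send_signal */
    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    int main (void)
    {
      pid_t child = fork ();
      if (child == 0) {       /* child: sleep until signalled */
        pause ();
        _exit (0);
      }
      int pidfd = pidfd_open (child, 0);  /* stable handle to the child */
      if (pidfd < 0) { perror ("pidfd_open"); return 1; }
      if (pidfd_send_signal (pidfd, SIGTERM, NULL, 0) < 0)
        perror ("pidfd_send_signal");
      close (pidfd);
      return 0;
    }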

  • On Linux, the process_madvise function has been added.  It has the
    same functionality as madvise but alters the target process identified
    by the pidfd.
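
  A further hedged sketch: given a pidfd and the address range of a
  mapping in the target process (both assumptions here), a supervisor
  could mark that range as cold on the target's behalf:

    #define _GNU_SOURCE
    #include <stddef.h>
    #include <sys/mman.h>   /* process_madvise, MADV_COLD */
    #include <sys/uio.h>    /* struct iovec */
    #include <stdio.h>

    int advise_cold (int pidfd, void *addr, size_t len)
    {
      struct iovec iov = { .iov_base = addr, .iov_len = len };
      /* Advise the kernel that the target's range is unlikely to be
         needed soon; returns the number of bytes advised or -1.  */
      if (process_madvise (pidfd, &iov, 1, MADV_COLD, 0) < 0) {
        perror ("process_madvise");
        return -1;
      }
      return 0;
    }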

  • On Linux, the process_mrelease function has been added.  It allows a
    caller to release the memory of a dying process.  The release of the
    memory is carried out in the context of the caller, using the caller's
    CPU affinity and priority, with CPU usage accounted to the caller.

  • The “no-aaaa” DNS stub resolver option has been added.  System
    administrators can use it (for example, via an “options no-aaaa” line
    in /etc/resolv.conf) to suppress AAAA queries made by the stub
    resolver, including AAAA lookups triggered by NSS-based interfaces
    such as getaddrinfo.  Only DNS lookups are affected: IPv6 data in
    /etc/hosts is still used, getaddrinfo with AI_PASSIVE will still
    produce IPv6 addresses, and configured IPv6 name servers are still
    used.  To produce correct Name Error (NXDOMAIN) results, AAAA queries
    are translated to A queries.  The new resolver option is intended
    primarily for diagnostic purposes, to rule out that AAAA DNS queries
    have adverse impact.  It is incompatible with EDNS0 usage and DNSSEC
    validation by applications.

  • On Linux, the fsopen, fsmount, move_mount, fsconfig, fspick, open_tree,
    and mount_setattr functions have been added.  They are part of the new
    Linux kernel mount APIs that allow applications to more flexibly
    configure and operate on filesystem mounts.  The new mount APIs are
    specifically designed to work with namespaces.
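
  To make the flow concrete, a hedged sketch (not from the release
  announcement) that mounts a fresh tmpfs on /mnt with the new API; it
  needs root and assumes the constants glibc 2.36 exposes in
  <sys/mount.h>:

    #define _GNU_SOURCE
    #include <sys/mount.h>  /* fsopen, fsconfig, fsmount, move_mount */
    #include <fcntl.h>      /* AT_FDCWD */
    #include <stdio.h>
    #include <unistd.h>

    int main (void)
    {
      int fsfd = fsopen ("tmpfs", FSOPEN_CLOEXEC);  /* pick a filesystem */
      if (fsfd < 0) { perror ("fsopen"); return 1; }
      fsconfig (fsfd, FSCONFIG_SET_STRING, "size", "16m", 0); /* option  */
      fsconfig (fsfd, FSCONFIG_CMD_CREATE, NULL, NULL, 0); /* create sb  */
      int mfd = fsmount (fsfd, FSMOUNT_CLOEXEC, 0);  /* detached mount   */
      if (mfd < 0) { perror ("fsmount"); return 1; }
      if (move_mount (mfd, "", AT_FDCWD, "/mnt",     /* attach at /mnt   */
                      MOVE_MOUNT_F_EMPTY_PATH) < 0)
        perror ("move_mount");
      close (mfd);
      close (fsfd);
      return 0;
    }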

  • localedef now accepts locale definition files encoded in UTF-8.
    Previously, input bytes not within the ASCII range resulted in
    unpredictable output.

  • Support for the mbrtoc8 and c8rtomb multibyte/UTF-8 character conversion
    functions has been added per the ISO C2X N2653 and C++20 P0482R6
    proposals.  Support for the char8_t typedef has been added per the ISO
    C2X N2653 proposal.  The functions are declared in uchar.h in C2X mode
    or when the _GNU_SOURCE macro or C++20 __cpp_char8_t feature test macro
    is defined.  The char8_t typedef is declared in uchar.h in C2X mode or
    when the _GNU_SOURCE macro is defined and the C++20 __cpp_char8_t
    feature test macro is not defined (if __cpp_char8_t is defined, then
    char8_t is a builtin type).
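
  A hedged sketch of mbrtoc8 in use, assuming a UTF-8 locale; as with
  mbrtoc16, a return of (size_t)-3 means a code unit was produced from
  conversion state without consuming further input:

    #define _GNU_SOURCE
    #include <uchar.h>    /* mbrtoc8, char8_t */
    #include <locale.h>
    #include <stdio.h>
    #include <string.h>

    int main (void)
    {
      setlocale (LC_ALL, "");     /* use the environment's encoding */
      const char *in = "héllo";   /* assumes UTF-8 source and locale */
      size_t len = strlen (in);
      mbstate_t st = { 0 };
      for (size_t i = 0; i <= len; ) {
        char8_t c8;
        size_t n = mbrtoc8 (&c8, in + i, len - i + 1, &st);
        if (n == 0)
          break;                /* converted the terminating null */
        if (n == (size_t) -1 || n == (size_t) -2)
          break;                /* invalid or incomplete sequence */
        printf ("0x%02x ", (unsigned) c8);
        if (n != (size_t) -3)   /* -3: code unit emitted from state only */
          i += n;
      }
      putchar ('\n');
      return 0;
    }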

  • The functions arc4random, arc4random_buf, and arc4random_uniform have
    been added.  The functions wrap getrandom and/or /dev/urandom to return
    high-quality randomness from the kernel.
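
  For instance (a hedged sketch; the declarations are in <stdlib.h>):

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>  /* arc4random, arc4random_uniform, arc4random_buf */

    int main (void)
    {
      uint32_t r = arc4random ();             /* uniform 32-bit value     */
      uint32_t die = arc4random_uniform (6);  /* unbiased value in [0, 6) */
      unsigned char key[16];
      arc4random_buf (key, sizeof key);       /* fill a buffer            */
      printf ("r=%u die=%u\n", (unsigned) r, (unsigned) (die + 1));
      return 0;
    }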

  • Support for LoongArch running on Linux has been added.  This port
    requires at least binutils 2.38, GCC 12, and Linux 5.19.  Currently
    only the hard-float ABI is supported:

    - loongarch64-linux-gnu

  The LoongArch ABI is 64-bit little-endian.

Deprecated and removed features, and other changes affecting compatibility:

  • Support for prelink will be removed in the next release; this includes
    removal of the LD_TRACE_PRELINKING and LD_USE_LOAD_BIAS environment
    variables and their functionality in the dynamic loader.

  • The Linux kernel version check has been removed along with the
    LD_ASSUME_KERNEL environment variable.  The minimum kernel used to
    build glibc is still provided through the NT_GNU_ABI_TAG ELF note and
    also printed when libc.so is executed directly.

  • On Linux, the LD_LIBRARY_VERSION environment variable has been removed.

The following bugs are resolved with this release:

  [14932] dynamic-link: dlsym(handle, "foo") and dlsym(RTLD_NEXT, "foo")
    return different result with versioned "foo"
  [16355] libc: syslog.h's SYSLOG_NAMES namespace violation and utter
    mess
  [23293] dynamic-link: aarch64: getauxval is broken when run as ld.so
    ./exe and ld.so adjusts argv on the stack
  [24595] nptl: [2.28 Regression]: Deadlock in atfork handler which
    calls dlclose
  [25744] locale: mbrtowc with Big5-HKSCS returns 2 instead of 1 when
    consuming the second byte of certain double byte characters
  [25812] stdio: Libio vtable protection is sometimes only partially
    enforced
  [27054] libc: pthread_atfork handlers that call pthread_atfork
    deadlock
  [27924] dynamic-link: ld.so: Support DT_RELR relative relocation
    format
  [28128] build: declare_symbol_alias doesn't work for assembly codes
  [28566] network: getnameinfo with NI_NOFQDN is not thread safe
  [28752] nss: Segfault in getpwuid when stat fails
  [28815] libc: realpath should not copy to resolved buffer on error
  [28828] stdio: fputwc crashes
  [28838] libc: FAIL: elf/tst-p_align3
  [28845] locale: ld-monetary.c should be updated to match ISO C and
    other standards.
  [28850] libc: linux: __get_nprocs_sched reads uninitialized memory
    from the stack
  [28852] libc: getaddrinfo leaks memory with AI_ALL
  [28853] libc: tst-spawn6 changes current foreground process group
    (breaks test isolation)
  [28857] libc: FAIL: elf/tst-audit24a
  [28860] build: --enable-kernel=5.1.0 build fails because of missing
    __convert_scm_timestamps
  [28865] libc: linux: _SC_NPROCESSORS_CONF and _SC_NPROCESSORS_ONLN are
    inaccurate without /sys and /proc
  [28868] dynamic-link: Dynamic loader DFS algorithm segfaults on
    missing libraries
  [28880] libc: Program crashes if date beyone 2038
  [28883] libc: sysdeps/unix/sysv/linux/select.c: __select64
    !__ASSUME_TIME64_SYSCALLS && !__ASSUME_PSELECT fails on Microblaze
  [28896] string: strncmp-avx2-rtm and wcsncmp-avx2-rtm fallback on non-
    rtm variants when avoiding overflow
  [28922] build: The .d dependency files aren't always generated
  [28931] libc: hosts lookup broken for SUCCESS=CONTINUE and
    SUCCESS=MERGE
  [28936] build: nm: No such file
  [28950] localedata: Add locale for ISO code "tok" (Toki Pona)
  [28953] nss: NSS lookup result can be incorrect if function lookup
    clobbers errno
  [28970] math: benchtest: libmvec benchmark doesn't build with make
    bench.
  [28991] libc: sysconf(_SC_NPROCESSORS_CONF) should read
    /sys/devices/system/cpu/possible
  [28993] libc: closefrom() iterates until max int if no access to
    /proc/self/fd/
  [28996] libc: realpath fails to copy partial result to resolved buffer
    on ENOENT and EACCES
  [29027] math: [ia64] fabs fails with sNAN input
  [29029] nptl: poll() spuriously returns EINTR during thread
    cancellation and with cancellation disabled
  [29030] string: GLIBC 2.35 regression - Fortify crash on certain valid
    uses of mbsrtowcs (*** buffer overflow detected ***: terminated)
  [29062] dynamic-link: Memory leak in _dl_find_object_update if object
    is promoted to global scope
  [29069] libc: fstatat64_time64_statx wrapper broken on MIPS N32 with
    -D_FILE_OFFSET_BITS=64 and -D_TIME_BITS=64
  [29071] dynamic-link: m68k: Removal of ELF_DURING_STARTUP optimization
    broke ld.so
  [29097] time: fchmodat does not handle 64 bit time_t for
    AT_SYMLINK_NOFOLLOW
  [29109] libc: posix_spawn() always returns 1 (EPERM) on clone()
    failure
  [29141] libc: _FORTIFY_SOURCE=3 fail for gcc 12/glibc 2.35
  [29162] string: [PATCH] string.h syntactic error:
    include/bits/string_fortified.h:110: error: expected ',' or ';'
    before '__fortified_attr_access'
  [29165] libc: [Regression] broken argv adjustment
  [29187] dynamic-link: [regression] broken argv adjustment for nios2
  [29193] math: sincos produces a different output than sin/cos
  [29197] string: __strncpy_power9() uses uninitialised register vs18
    value for filling after \0
  [29203] libc: daemon is not y2038 aware
  [29204] libc: getusershell is not 2038 aware
  [29207] libc: posix_fallocate fallback implementation is not y2038
    aware
  [29208] libc: fpathconf(_PC_ASYNC_IO) is not y2038 aware
  [29209] libc: isfdtype is not y2038 aware
  [29210] network: ruserpass is not y2038 aware
  [29211] libc: __open_catalog is not y2038 aware
  [29213] libc: gconv_parseconfdir is not y2038 aware
  [29214] nptl: pthread_setcanceltype fails to set type
  [29225] network: Mistyped define statement in socket/sys/socket.h in
    line 184
  [29274] nptl: __read_chk is not a cancellation point
  [29279] libc: undefined reference to `mbstowcs_chk' after
    464d189b9622932a75302290625de84931656ec0
  [29304] libc: mq_timedreceive does not handle 64 bit syscall return
    correct for !__ASSUME_TIME64_SYSCALLS
  [29403] libc: st_atim, st_mtim, st_ctim stat struct members are
    missing on microblaze with largefile

Release Notes
=============

https://sourceware.org/glibc/wiki/Release/2.36

Contributors
============

This release was made possible by the contributions of many people.
The maintainers are grateful to everyone who has contributed
changes or bug reports.  These include:

=Joshua Kinard
Adhemerval Zanella
Adhemerval Zanella Netto
Alan Modra
Andreas Schwab
Arjun Shankar
Arnout Vandecappelle (Essensium/Mind)
Carlos O'Donell
Cristian Rodríguez
DJ Delorie
Danila Kutenin
Darius Rad
Dmitriy Fedchenko
Dmitry V. Levin
Emil Soleyman-Zomalan
Fangrui Song
Florian Weimer
Gleb Fotengauer-Malinovskiy
Guilherme Janczak
H.J. Lu
Ilyahoo Proshel
Jason A. Donenfeld
Joan Bruguera
John David Anglin
Jonathan Wakely
Joseph Myers
José Bollo
Kito Cheng
Maciej W. Rozycki
Mark Wielaard
Matheus Castanho
Max Gautier
Michael Hudson-Doyle
Nicholas Guriev
Noah Goldstein
Paul E. Murphy
Raghuveer Devulapalli
Ricardo Bittencourt
Sam James
Samuel Thibault
Sergei Trofimovich
Siddhesh Poyarekar
Stafford Horne
Stefan Liebler
Steve Grubb
Su Lifan
Sunil K Pandey
Szabolcs Nagy
Tejas Belagod
Tom Coldrick
Tom Honermann
Tulio Magno Quites Machado Filho
WANG Xuerui
Wangyang Guo
Wilco Dijkstra
Xi Ruoyao
Xiaoming Ni
Yang Yanchao
caiyinyu

02 August, 2022 01:17AM by Carlos O'Donell

July 31, 2022

gcide @ Savannah

GCIDE version 0.53

GCIDE version 0.53 is available for download.

31 July, 2022 07:13AM by Sergey Poznyakoff