Planet GNU

Aggregation of development blogs from the GNU Project

June 18, 2021

GNU Guix

Substitutes now also available from bordeaux.guix.gnu.org

There have been a number of different project-operated sources of substitutes; for the last couple of years, the default source of substitutes has been ci.guix.gnu.org (with a few different URLs).

Now, in addition to ci.guix.gnu.org, bordeaux.guix.gnu.org is a default substitute server.

Put that way, this development maybe doesn't sound particularly interesting. Why is a second substitute server useful? There are some thoughts on that exact question in the next section. If you're just interested in how to use (or how not to use) substitutes from bordeaux.guix.gnu.org, you can skip ahead to the last section.

Why a second source of substitutes?

This change is an important milestone, following on from the work that started on the Guix Build Coordinator towards the start of 2020.

Back in 2020, the substitute availability from ci.guix.gnu.org was often an issue. There seemed to be a number of contributing factors, including some parts of the architecture. Without going too much into the details of the issues, aspects of the design of the Guix Build Coordinator were specifically meant to avoid some of them.

While there were some very positive results from testing back in 2020, it's taken so long to bring the substitute availability benefits to general users of Guix that ci.guix.gnu.org has changed and improved significantly in the meantime. This means that any benefits in terms of substitute availability are less significant now.

One clear benefit of just having two independent sources of substitutes is redundancy. While the availability of ci.guix.gnu.org has been very high (in my opinion), having a second independent substitute server should mean that if there's a future issue with users accessing either source of substitutes, the disruption will be reduced.

I'm also excited about the new possibilities offered by having a second substitute server, particularly one using the Guix Build Coordinator to manage the builds.

Substitutes for the Hurd have already been prototyped, so I'm hopeful that bordeaux.guix.gnu.org can start using childhurd VMs to build things soon.

Looking a bit further forward, I think there are benefits to be had in doing further work on how the nar and narinfo files used for substitutes are managed. There are already some rough plans on how to address the retention of nars, and how to approach high-performance mirrors.

Having two substitute servers is one step towards stronger trust policies for substitutes (as discussed on guix-devel), where you would only use a substitute if both ci.guix.gnu.org and bordeaux.guix.gnu.org have built it exactly the same. This would help protect against the compromise of a single substitute server.

Using substitutes from bordeaux.guix.gnu.org

If you're using Guix System, and haven't altered the default substitute configuration, updating guix (via guix pull), reconfiguring using the updated guix, and then restarting the guix-daemon should enable substitutes from bordeaux.guix.gnu.org.

If the ACL is being managed manually, you might need to add the public key for bordeaux.guix.gnu.org manually as well.

When using Guix on a foreign distribution with the default substitute configuration, you'll need to run guix pull as root, then restart the guix-daemon. You'll then need to add the public key for bordeaux.guix.gnu.org to the ACL.

guix archive --authorize < /root/.config/guix/current/share/guix/bordeaux.guix.gnu.org.pub

If you want to use just ci.guix.gnu.org, or just bordeaux.guix.gnu.org for that matter, you'll need to adjust the substitute URLs configuration for the guix-daemon to refer only to the substitute servers you want to use.
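
On Guix System, the substitute URLs live in the guix-daemon's service configuration. As a rough sketch (not taken from this post), customizing guix-service-type in an operating system declaration built on %desktop-services might look like this:

(operating-system
  ;; …
  (services
   (modify-services %desktop-services
     (guix-service-type
      config => (guix-configuration
                 (inherit config)
                 ;; Only fetch substitutes from this server.
                 (substitute-urls
                  (list "https://bordeaux.guix.gnu.org")))))))

On a foreign distribution, the equivalent is to pass --substitute-urls to the guix-daemon when it is started.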

18 June, 2021 12:00PM by Christopher Baines

June 17, 2021

gdbm @ Savannah

Version 1.20

Version 1.20 is available for download.

Changes in this version:

New bucket cache

The bucket cache support has been rewritten from scratch.  The new code provides a significant speed-up of search operations.

Change in the mmap prereading strategy

Pre-reading of the memory-mapped regions, introduced in version 1.19, can be advantageous only when doing intensive look-ups on a read-only database.  It degrades performance otherwise, especially when doing multiple inserts.  Therefore, this version introduces a new flag to gdbm_open: GDBM_PREREAD.  When given, it enables pre-reading of memory-mapped regions.

17 June, 2021 11:07AM by Sergey Poznyakoff

June 16, 2021

GNU Taler news

How to issue a privacy-preserving central bank digital currency

We are happy to announce the publication of our policy brief on "How to issue a privacy-preserving central bank digital currency" by The European Money and Finance Forum.

16 June, 2021 10:00PM

FSF Blogs

June 15, 2021

Listen to LibrePlanet 2021 audio in your podcast app!

15 June, 2021 05:34PM

FSF News

June 11, 2021

FSF and GNU move official IRC channels to Libera.Chat network

11 June, 2021 08:40PM

GNU Guix

Reproducible data processing pipelines

Last week, we at Guix-HPC published videos of a workshop on reproducible software environments we organized on-line. The videos are well worth watching—especially if you’re into reproducible research, and especially if you speak French or want to practice. This post, though, is more of a meta-post: it’s about how we processed these videos. “A workshop on reproducibility ought to have a reproducible video pipeline”, we thought. So this is what we did!

From BigBlueButton to WebM

Over the last year and half, perhaps you had the “opportunity” to participate in an on-line conference, or even to organize one. If so, chances are that you already know BigBlueButton (BBB), the free software video conferencing suite initially designed for on-line teaching. In a nutshell, it allows participants to chat (audio, video, and keyboard), and speakers can share their screen or a PDF slide deck. Organizers can also record the session.

BBB then creates a link to recorded sessions with a custom JavaScript player that replays everything: typed chat, audio and video (webcams), shared screens, and slide decks. This BBB replay is a bit too rough, though, and often not the thing you’d like to publish after the conference. Instead, you’d rather do a bit of editing: adjusting the start and end time of each talk, removing live chat from what’s displayed (which allows you to remove info that personally identifies participants, too!), and so forth. Turns out this kind of post-processing is a bit of work, primarily because BBB does “the right thing” of recording each stream separately, in the most appropriate form: webcam and screen shares are recorded as separate videos, chat is recorded as text with timings, slide decks are recorded as a bunch of PNGs plus timings, and then there’s a bunch of XML files with metadata putting it all together.

Anyway, with a bit of searching, we quickly found the handy bbb-render tool, which can first download all these files and then assemble them using the Python interface to the GStreamer Editing Services (GES). Good thing: we don’t have to figure out all these things; we “just” have to run these two scripts in an environment with the right dependencies. And guess what: we know of a great tool to control execution environments!

A “deployment-aware Makefile”

So we have a process that takes input files—those PNGs, videos, and XML files—and produces output files—WebM video files. As developers we immediately recognize a pattern and the timeless tool to deal with it: make. The web already seems to contain countless BBB post-processing makefiles (and shell scripts, too). We were going to contribute to this when we suddenly realized that we know of another great tool to express such processes: Guix! Bonus: while a makefile would address just the tip of the iceberg—running bbb-render—Guix can also take care of the tedious task of deploying the right environment to run bbb-render in.

What we did was to write some sort of a deployment-aware makefile. It’s still a relatively unconventional way to use Guix, but one that’s very convenient. We’re talking about videos, but really, you could use the same approach for any kind of processing graph where you’d be tempted to just use make.

The end result here is a Guix file that returns a manifest—a list of videos to “build”. You can build the videos with:

guix build -m render-videos.scm

Overall, the file defines a bunch of functions (procedures in traditional Scheme parlance), each of which takes input files and produces output files. More accurately, these functions return objects that describe how to build their output from the input files—similar to how a makefile rule describes how to build its target(s) from its prerequisite(s). (The reader familiar with functional programming may recognize a monad here, and indeed, those build descriptions can be thought of as monadic values in a hypothetical “Guix build” monad; technically though, they’re regular Scheme values.)

Let’s take a guided tour of this 300-line file.

Rendering

The first step in this file describes where bbb-render can be found and how to run it to produce a GES “project” file, which we’ll use later to render the video:

(define bbb-render
  (origin
    (method git-fetch)
    (uri (git-reference (url "https://github.com/plugorgau/bbb-render")
                        (commit "a3c10518aedc1bd9e2b71a4af54903adf1d972e5")))
    (file-name "bbb-render-checkout")
    (sha256
     (base32 "1sf99xp334aa0qgp99byvh8k39kc88al8l2wy77zx7fyvknxjy98"))))

(define rendering-profile
  (profile
   (content (specifications->manifest
             '("gstreamer" "gst-editing-services" "gobject-introspection"
               "gst-plugins-base" "gst-plugins-good"
               "python-wrapper" "python-pygobject" "python-intervaltree")))))

(define* (video-ges-project bbb-data start end
                            #:key (webcam-size 25))
  "Return a GStreamer Editing Services (GES) project for the video,
starting at START seconds and ending at END seconds.  BBB-DATA is the raw
BigBlueButton directory as fetched by bbb-render's 'download.py' script.
WEBCAM-SIZE is the percentage of the screen occupied by the webcam."
  (computed-file "video.ges"
                 (with-extensions (list (specification->package "guile-gcrypt"))
                  (with-imported-modules (source-module-closure
                                          '((guix build utils)
                                            (guix profiles)))
                    #~(begin
                        (use-modules (guix build utils) (guix profiles)
                                     (guix search-paths) (ice-9 match))

                        (define search-paths
                          (profile-search-paths #+rendering-profile))

                        (for-each (match-lambda
                                    ((spec . value)
                                     (setenv
                                      (search-path-specification-variable
                                       spec)
                                      value)))
                                  search-paths)

                        (invoke "python"
                                #+(file-append bbb-render "/make-xges.py")
                                #+bbb-data #$output
                                "--start" #$(number->string start)
                                "--end" #$(number->string end)
                                "--webcam-size"
                                #$(number->string webcam-size)))))))

First it defines the source code location of bbb-render as an “origin”. Second, it defines rendering-profile as a “profile” containing all the packages needed to run bbb-render’s make-xges.py script. The specification->manifest procedure creates a manifest from a set of package specs, and likewise specification->package returns the package that matches a given spec. You can try these things at the guix repl prompt:

$ guix repl
GNU Guile 3.0.7
Copyright (C) 1995-2021 Free Software Foundation, Inc.

Guile comes with ABSOLUTELY NO WARRANTY; for details type `,show w'.
This program is free software, and you are welcome to redistribute it
under certain conditions; type `,show c' for details.

Enter `,help' for help.
scheme@(guix-user)> ,use(guix profiles)
scheme@(guix-user)> ,use(gnu)
scheme@(guix-user)> (specification->package "guile@2.0")
$1 = #<package guile@2.0.14 gnu/packages/guile.scm:139 7f416be776e0>
scheme@(guix-user)> (specifications->manifest '("guile" "gstreamer" "python"))
$2 = #<<manifest> entries: (#<<manifest-entry> name: "guile" version: "3.0.7" …> #<<manifest-entry> name: "gstreamer" version: "1.18.2" …> …)>

Last, it defines video-ges-project as a function that takes the BBB raw data, a start and end time, and produces a video.ges file. There are three key elements here:

  1. computed-file is a function to produce a file, video.ges in this case, by running the code you give it as its second argument—the recipe, in makefile terms.
  2. The recipe passed to computed-file is a G-expression (or “gexp”), introduced by this fancy #~ (hash tilde) notation. G-expressions are a way to stage code, to mark it for eventual execution. Indeed, that code will only be executed if and when we run guix build (without --dry-run), and only if the result is not already in the store.
  3. The gexp refers to rendering-profile, to bbb-render, to bbb-data and so on by escaping with the #+ or #$ syntax (they’re equivalent, unless doing cross-compilation). During build, these reference items in the store, such as /gnu/store/…-bbb-render, which is itself the result of “building” the origin we’ve seen above. The #$output reference corresponds to the build result of this computed-file, the complete file name of video.ges under /gnu/store. (A minimal standalone example follows this list.)
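
To see these pieces in isolation, consider this minimal computed-file example (a standalone sketch, not part of render-videos.scm):

(computed-file "hello.txt"
               ;; This gexp runs at build time; #$output is the file
               ;; name of "hello.txt" under /gnu/store.
               #~(call-with-output-file #$output
                   (lambda (port)
                     (display "Hello, world!" port))))

Building this object produces a /gnu/store/…-hello.txt file containing that string; the gexp only runs if that result isn’t already in the store.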

That’s quite a lot already! Of course, this real-world example is more intimidating than the toy examples you’d find in the manual, but really, pretty much everything’s there. Let’s see in more detail at what’s inside this gexp.

The gexp first imports a bunch of helper modules with build utilities and tools to manipulate profiles and search path environment variables. The for-each call iterates over search path environment variables—PATH, PYTHONPATH, and so on—setting them so that the python command is found and so that the needed Python modules are found.

The with-imported-modules form above indicates that the (guix build utils) and (guix profiles) modules, which are part of Guix, along with their dependencies (their closure), need to be imported in the build environment. What about with-extensions? Those (guix …) modules indirectly depend on additional modules, provided by the guile-gcrypt package, hence this spec.

Next comes the ges->webm function which, as the name implies, takes a .ges file and produces a WebM video file by invoking ges-launch-1.0. The end result is a video containing the recording’s audio, the webcam and screen share (or slide deck), but not the chat.
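
The post doesn’t show ges->webm itself, but its shape follows from video-ges-project. Here’s a hypothetical sketch; the ges-launch-1.0 invocation and the elided search-path setup are assumptions, not the actual code:

(define (ges->webm ges file-name)
  ;; Hypothetical sketch: render the GES project to WebM.
  (computed-file file-name
                 (with-imported-modules '((guix build utils))
                   #~(begin
                       (use-modules (guix build utils))
                       ;; …GStreamer environment variables would be set
                       ;; from 'rendering-profile' here, exactly as in
                       ;; 'video-ges-project' above…
                       (invoke #+(file-append
                                  (specification->package
                                   "gst-editing-services")
                                  "/bin/ges-launch-1.0")
                               "--load" #+ges
                               "--outputuri"
                               (string-append "file://" #$output))))))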

Opening and closing

We have a WebM video, so we’re pretty much done, right? But… we’d also like to have an opening, showing the talk title and the speaker’s name, as well as a closing. How do we get that done?

Perhaps it’s a bit of a sledgehammer, but we chose to produce those still images with LaTeX/Beamer, from these templates.

We again need several processing steps:

  1. We first define the latex->pdf function that takes a template .tex file, a speaker name and title. It copies the template, replaces placeholders with the speaker name and title, and runs pdflatex to produce the PDF.
  2. The pdf->bitmap function takes a PDF and returns a suitably-sized JPEG.
  3. image->webm takes that JPEG and invokes ffmpeg to render it as WebM, with the right resolution, frame rate, and audio track (see the sketch after this list).
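
As an illustration of that last step, here’s a hypothetical sketch of image->webm; the exact ffmpeg flags and structure are assumptions, not the post’s code:

(define* (image->webm image file-name #:key (duration 5))
  ;; Hypothetical sketch: loop a still image for DURATION seconds,
  ;; add a silent audio track, and encode it as VP8/Vorbis WebM.
  (computed-file file-name
                 (with-imported-modules '((guix build utils))
                   #~(begin
                       (use-modules (guix build utils))
                       (invoke #+(file-append
                                  (specification->package "ffmpeg")
                                  "/bin/ffmpeg")
                               "-loop" "1" "-i" #+image
                               "-f" "lavfi" "-i" "anullsrc"
                               "-t" #$(number->string duration)
                               "-c:v" "libvpx" "-c:a" "libvorbis"
                               "-pix_fmt" "yuv420p"
                               #$output)))))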

With that in place, we define a sweet and small function that produces the opening WebM file for a given talk:

(define (opening title speaker)
  (image->webm
   (pdf->bitmap (latex->pdf (local-file "opening.tex") "opening.pdf"
                            #:title title #:speaker speaker)
                "opening.jpg")
   "opening.webm" #:duration 5))

We need one last function, video-with-opening/closing, which, given a talk video, an opening, and a closing, concatenates them by invoking ffmpeg.

Putting it all together

Now we have all the building blocks!

We use local-file to refer to the raw BBB data, taken from disk:

(define raw-bbb-data/monday
  ;; The raw BigBlueButton data as returned by './download.py URL', where
  ;; 'download.py' is part of bbb-render.
  (local-file "bbb-video-data.monday" "bbb-video-data"
              #:recursive? #t))

(define raw-bbb-data/tuesday
  (local-file "bbb-video-data.tuesday" "bbb-video-data"
              #:recursive? #t))

No, the raw data is not in the Git repository (it’s too big and contains personally-identifying information about participants), so this assumes that there’s a bbb-video-data.monday and a bbb-video-data.tuesday in the same directory as render-videos.scm.

For good measure, we define a <talk> data type:

(define-record-type <talk>
  (talk title speaker start end cam-size data)
  talk?
  (title     talk-title)
  (speaker   talk-speaker)
  (start     talk-start)           ;start time in seconds
  (end       talk-end)             ;end time
  (cam-size  talk-webcam-size)     ;percentage used for the webcam
  (data      talk-bbb-data))       ;BigBlueButton data

… such that we can easily define talks, along with talk->video, which takes a talk and returns a complete, final video:

(define (talk->video talk)
  "Given a talk, return a complete video, with opening and closing."
  (define file-name
    (string-append (canonicalize-string (talk-speaker talk))
                   ".webm"))

  (let ((raw (ges->webm (video-ges-project (talk-bbb-data talk)
                                           (talk-start talk)
                                           (talk-end talk)
                                           #:webcam-size
                                           (talk-webcam-size talk))
                        file-name))
        (opening (opening (talk-title talk) (talk-speaker talk))))
    (video-with-opening/closing file-name raw
                                opening closing.webm)))

The very last bit iterates over the talks and returns a manifest containing all the final videos.
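
Hypothetically, that last bit could look something like this (a sketch; the actual entry construction in render-videos.scm may differ, and talks stands for the list of <talk> records defined earlier):

(manifest
 (map (lambda (talk)
        (manifest-entry
          ;; One entry per talk, named after the speaker.
          (name (canonicalize-string (talk-speaker talk)))
          (version "0")
          (item (talk->video talk))))
      talks))

Now we can build the ready-to-be-published videos, all at once: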

$ guix build -m render-videos.scm
[… time passes…]
/gnu/store/…-emmanuel-agullo.webm
/gnu/store/…-francois-rue.webm
…

Voilà!

Image of an old TV screen showing a video opening.

Why all the fuss?

OK, maybe you’re thinking “this is just another hackish script to fiddle with videos”, and that’s right! It’s also worth mentioning another approach: Racket’s video language, which is designed to manipulate video abstractions, similar to GES but with a sweet high-level functional interface.

But look, this one’s different: it’s self-contained, it’s reproducible, and it has the right abstraction level. Self-contained is a big thing; it means you can run it and it knows what software to deploy, what environment variables to set, and so on, for each step of the pipeline. Granted, it could be simplified with appropriate high-level interfaces in Guix. But remember: the alternative is a makefile (“deployment-unaware”) completed by a README file giving a vague idea of the dependencies needed. The reproducible bit is pretty nice too (especially for a workshop on reproducibility). It also means there’s caching: videos or intermediate byproducts already in the store don’t need to be recomputed. Last, we have access to a general-purpose programming language where we can build abstractions, such as the <talk> data type, that makes the whole thing more pleasant to work with and more maintainable.

Hopefully that’ll inspire you to have a reproducible video pipeline for your next on-line event, or maybe that’ll inspire you to replace your old makefile and shelly habits for data processing!

High-performance computing (HPC) people might be wondering how to go from here and build “computing-resource-aware” or “storage-resource-aware” pipelines where each computing step could be submitted to the job scheduler of an HPC cluster and use distributed file systems for intermediate results rather than /gnu/store. If you’re one of these folks, do take a look at how the Guix Workflow Language addresses these issues.

Acknowledgments

Thanks to Konrad Hinsen for valuable feedback on an earlier draft.

About GNU Guix

GNU Guix is a transactional package manager and an advanced distribution of the GNU system that respects user freedom. Guix can be used on top of any system running the Hurd or the Linux kernel, or it can be used as a standalone operating system distribution for i686, x86_64, ARMv7, AArch64 and POWER9 machines.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. When used as a standalone GNU/Linux distribution, Guix offers a declarative, stateless approach to operating system configuration management. Guix is highly customizable and hackable through Guile programming interfaces and extensions to the Scheme language.

11 June, 2021 05:00PM by Ludovic Courtès

June 09, 2021

www-zh-cn @ Savannah

Welcome our new member - jiderlesi

Dear www-zh-cn-translators:

It's a good time to welcome our new member:

User Details:
 -------------
Name:    Yuqi Feng
Login:   jiderlesi
Email:   jiderlesi@outlook.de

We thank jiderlesi for their commitment to contributing to GNU Chinese Translation.
We wish jiderlesi a wonderful and successful journey in free software.

09 June, 2021 07:41AM by Wensheng XIE

June 07, 2021

GNU Health

IFMSA Bangladesh joins the GNU Health Alliance

The non-profit organization with 3500+ medical students and 65 universities across the country is now part of the GNU Health Alliance of Academic and Research Institutions

It’s a great day for Bangladesh. It’s a great day for public health! Today, GNU Solidario and the International Federation of Medical Students Association, IFMSA Bangladesh, have signed an initial 5-year partnership on the grounds of the GNU Health Alliance of Academic and Research Institutions.

IFMSA Bangladesh is a not-for-profit, non-political organization that comprises 3500+ medical students from over 65 schools of medicine across Bangladesh. They are a solid organization, very well organized, with different standing committees and support divisions.

IFMSA’s vision and mission fit very well with GNU Solidario’s mission of advancing Social Medicine. IFMSA has projects on Public Health (reproductive health; personal hygiene; cardiovascular disease and cancer prevention, …) and on Human Rights and Peace (campaigns to end violence against women; protection of underprivileged elders and children, …). I am positive the GNU Health ecosystem will help them reach their goals in each of their projects!

The GNU Health Alliance of Academic and Research Institutions is extremely happy to have IFMSA Bangladesh as a member. IFMSA Bangladesh joins now a group of outstanding researchers and institutions that have made phenomenal advancements in health informatics and contributions to public health. Some examples:

  • The National University of Entre Ríos (UNER) has been awarded, by the Government of Argentina, the project to use GNU Health as a real-time observatory for the COVID-19 pandemic. In the context of the GNU Health Alliance, UNER has also developed the oral health package for GNU Health and implemented the GNU Health Hospital Management Information System component in many public health care institutions in the country. The UNER team has traveled to Cameroon to implement GNU Health HMIS in several health facilities there, as well as to train local health professionals.
  • Thymbra Healthcare (R&D Labs) has contributed the medical genetics and precision medicine functionality. Currently, Thymbra is focused on MyGNUHealth, the GNU Health Personal Health Record (PHR) for KDE Plasma mobile and desktop devices, and is working on the integration of MyGNUHealth with the PinePhone.
  • Khadas has signed an agreement to work with the GNU Health community on Artificial Intelligence and medical imaging, as well as on integrating Single Board Computers (SBCs) with GNU Health (the GNU Health in a Box project).

The fact that an association of 3500+ medical students embraces GNU Health means that all these bright future doctors from Bangladesh will also bring the ethics and philosophy of Libre Software to their communities. Public Health cannot be run by private corporations, nor by proprietary software.

IFMSA has 5 years ahead to make a wonderful revolution in the public health care system. Health institutions will be able to implement state-of-the-art health informatics. Medical students can learn GNU Health inside out, and conduct workshops across the country on the Libre digital health ecosystem. Most importantly, I am positive GNU Health will provide a wonderful opportunity to improve health promotion and disease prevention campaigns in Bangladesh.

As the president of GNU Solidario, I am truly honored, and I look forward to collaborating with our colleagues from Bangladesh and, when the pandemic is over, to meeting them in person.

My most sincere appreciation to IFMSA Bangladesh for becoming part of the GNU Health community. To the 3500+ members, a very warm welcome!

Let’s keep building communities that foster universal health care, freedom and social medicine around the world.

For further information about the GNU Health Alliance of Academic and Research Institutions, please contact us at:

GNU Health Alliance: alliance@gnuhealth.org

Press: press@gnuhealth.org

General information: info@gnuhealth.org

07 June, 2021 06:44PM by Luis Falcon

edma @ Savannah

GNU/EDMA 0.19.1. Alpha Release

GNU/EDMA 0.19.1 has been released as an alpha version. This version tries to fix the long-standing issue with 64-bit platforms.

In order to fix that problem, this version adds a dependency on `libffi`.

This is an alpha release; it is still under test. It can be downloaded from:

http://alpha.gnu.org/gnu/edma/

Any feedback or comments are welcome.

Best Regards
David

07 June, 2021 07:14AM by David Martínez Oliveira

June 05, 2021

gsl @ Savannah

GNU Scientific Library 2.7 released

Version 2.7 of the GNU Scientific Library (GSL) is now available. GSL provides a large collection of routines for numerical computing in C.

This release introduces some new features and fixes several bugs. The full NEWS file entry is appended below.

The file details for this release are:

ftp://ftp.gnu.org/gnu/gsl/gsl-2.7.tar.gz
ftp://ftp.gnu.org/gnu/gsl/gsl-2.7.tar.gz.sig

The GSL project homepage is http://www.gnu.org/software/gsl/

GSL is free software distributed under the GNU General Public License.

Thanks to everyone who reported bugs and contributed improvements.

Patrick Alken

-------------------------------

  • What is new in gsl-2.7:
    • fixed doc bug for gsl_histogram_min_bin (lhcsky at 163.com)
    • fixed bug #60335 (spmatrix test failure, J. Lamb)
    • clarified documentation on interpolation accelerators (V. Krishnan)
    • fixed bug #45521 (erroneous GSL_ERROR_NULL in ode-initval2, thanks to M. Sitte)
    • added support for native C complex number types in gsl_complex when using a C11 compiler
    • upgraded to autoconf 2.71, automake 1.16.3, libtool 2.4.6
    • updated exponential fitting example for nonlinear least squares
    • added banded LU decomposition and solver (gsl_linalg_LU_band)
    • New functions added to the library:

    - gsl_matrix_norm1
    - gsl_spmatrix_norm1
    - gsl_matrix_complex_conjtrans_memcpy
    - gsl_linalg_QL: decomp, unpack
    - gsl_linalg_complex_QR_* (thanks to Christian Krueger)
    - gsl_vector_sum
    - gsl_matrix_scale_rows
    - gsl_matrix_scale_columns
    - gsl_multilarge_linear_matrix_ptr
    - gsl_multilarge_linear_rhs_ptr
    - gsl_spmatrix_dense_add (renamed from gsl_spmatrix_add_to_dense)
    - gsl_spmatrix_dense_sub
    - gsl_linalg_cholesky_band: solvem, svxm, scale, scale_apply
    - gsl_linalg_QR_UD: decomp, lssolve
    - gsl_linalg_QR_UU: decomp, lssolve, QTvec
    - gsl_linalg_QR_UZ: decomp
    - gsl_multifit_linear_lcurvature
    - gsl_spline2d_eval_extrap

    • bug fix in checking vector lengths in gsl_vector_memcpy (dieggsy@pm.me)
    • made gsl_sf_legendre_array_index() inline and documented gsl_sf_legendre_nlm()

05 June, 2021 03:02PM by Patrick Alken

poke @ Savannah

GNU poke 1.3 released

I am happy to announce a new release of GNU poke, version 1.3.

This is a bug fix release in the poke 1.x series.

See the file NEWS in the released tarball for a detailed list of
changes in this release.

The tarball poke-1.3.tar.gz is now available at
https://ftp.gnu.org/gnu/poke/poke-1.3.tar.gz.

  GNU poke (http://www.jemarch.net/poke) is an interactive, extensible
  editor for binary data.  Not limited to editing basic entities such
  as bits and bytes, it provides a full-fledged procedural,
  interactive programming language designed to describe data
  structures and to operate on them.

This release is the product of a month of work resulting in 41
commits, made by 4 contributors.

Thanks to the people who contributed with code and/or documentation to
this release.  In a certain but not significant order, they are:

  Mohammad-Reza Nabipoor <m.nabipoor@yahoo.com>
  Egeyar Bagcioglu <egeyar@gmail.com>
  Konstantinos Chasialis <sdi1600195@di.uoa.gr> 

As always, thank you all!

And this is all for now.
Happy poking!

--
Jose E. Marchesi
Frankfurt am Main
5 June 2021

05 June, 2021 10:55AM by Jose E. Marchesi

June 04, 2021

GNU Health

Liberating our mobile computing

Last week I got the PineTime, a free/libre smartwatch. In the past months, I’ve been working on MyGNUHealth and porting it to the PinePhone.

Why do so? Because running free/libre operating systems and having control of the applications on your mobile phones and wearables is the right thing to do.

Yesterday, I told myself: “This is the day to move away from Android and take control over my phone”. And I made the switch. Now I am using a PinePhone running KDE Plasma Mobile on Manjaro. I have also switched my smartwatch to the PineTime.

The mobile phone and smartwatch were the last pieces of hardware and software to liberate. All my computing is now libre. No proprietary operating systems, no closed-source applications. Not on my laptop, not on my desktop, not on my phone.

Facing and overcoming the social pressure

The moment I ditched Android, I felt an immense sense of relief and happiness. It took me back 30 years, to the early FreeBSD and GNU/Linux times, being in control of every component of my computer.

We cannot put our daily life activities, electronic transactions and data in the hands of corporations. Android phones shipped today are full of “bloatware” and closed-source applications. We can safely call most of those applications spyware.

The PinePhone is a libre computer, with a phone. All the applications are Libre Software. I have SSH and most of the cool KDE Plasma applications I enjoy on the desktop; I can have them now in my pocket. Again, most importantly, I am free.

Of course, freedom comes with a price: the price of facing social and corporate pressure. For instance, somebody asked me yesterday how to deal with banking without the app. My answer was that I never used an app for banking. Running a proprietary financial application is shooting at the heart of your privacy. If your bank does not let you do your transactions from any standard web browser, then change your bank. Quick digression… the financial system and the big technological corporations are desperately trying to get rid of good old coins and bills. This is yet another attack on our privacy. Nobody needs to know when, where and what I buy.

A brighter future depends on us

Some people might argue that this technology might not be ready for prime time yet. I would say that I am OK with it, and the more we join, the more feedback we provide, and the better the end result we’ll get.

The Pine64 project is mainly a community-oriented ecosystem. Its hardware, operating system and applications are from the community and for the community. I am developing the MyGNUHealth Personal Health Record to run on KDE Plasma, both for the desktop and for the PinePhone and other Libre mobile devices. It is my commitment to freedom, privacy and universal healthcare to deliver health informatics on Libre, privacy-focused platforms that anyone can adopt.

MyGNUHealth Personal Health Record running on the desktop and on the PinePhone, with the PineTime smartwatch as the next companion for MyGNUHealth. All these components are privacy-focused, Free/Libre software and hardware.

It is up to you whether to be a prisoner of the corporations and massive surveillance systems, or to be in full control of your programming, health information and life. It takes commitment to achieve it… some components might be too bleeding-edge, or the camera might not have the highest resolution, and you won’t have the WhatsApp “app” (removing that application would actually be a blessing). It’s a very small price to pay for freedom and privacy. It’s a very small price to pay for the advancement of our society.

InfiniTime firmware upgrade using Siglo.

It’s been many years since I’ve been in the look out for a truly libre phone. After many projects that succumbed, the PinePhone is the first one that has gained momentum. Please support the PinePhone project. Support KDE plasma mobile. Support Arch, Manjaro, openSUSE, FreeBSD or your favorite Libre operating system. Support those who make Libre convergent applications that can be run on mobile devices, like Kirigami. Support InfiniTime and any free/libre firmware for smartwatches, as well as their companions as Siglo or Amazfish.

The future of Libre mobile computing is now, more than ever, in your hands.

Happy and healthy hacking.

04 June, 2021 03:45PM by Luis Falcon

May 30, 2021

gnuastro @ Savannah

Gnuastro 0.15 released

The 15th release of Gnuastro is now available. See the full announcement for more.

30 May, 2021 11:41PM by Mohammad Akhlaghi

May 27, 2021

FSF Blogs

May GNU Spotlight with Mike Gerwitz: 11 new GNU releases!

11 new GNU releases in the last month (as of May 25, 2021).

27 May, 2021 04:59PM

FSF News

Apply to be the FSF's next executive director

The Free Software Foundation (FSF), a Massachusetts 501(c)(3) charity with a worldwide mission to protect computer user freedom, seeks a principled, compassionate, and capable leader to be its new executive director. This position can be remote or based in our Boston office.

27 May, 2021 04:25PM

May 22, 2021

parallel @ Savannah

GNU Parallel 20210522 ('Gaza') released

GNU Parallel 20210522 ('Gaza') has been released. It is available for download at: lbry://@GnuParallel:4

Please help spread GNU Parallel by making a testimonial video like Juan Sierra Pons: http://www.elsotanillo.net/wp-content/uploads/GnuParallel_JuanSierraPons.mp4

It does not have to be as detailed as Juan's. It is perfectly fine if you just say your name, and what field you are using GNU Parallel for.

Quote of the month:

  If you work with lots of files at once
  Take a good look at GNU parallel
  Change your life for the better
    -- French @notareverser@twitter

New in this release:

  • --plus includes {%%regexp} and {##regexp}.
  • Bug fixes and man page updates.

News about GNU Parallel:

Get the book: GNU Parallel 2018 http://www.lulu.com/shop/ole-tange/gnu-parallel-2018/paperback/product-23558902.html

GNU Parallel - For people who live life in the parallel lane.

If you like GNU Parallel record a video testimonial: Say who you are, what you use GNU Parallel for, how it helps you, and what you like most about it. Include a command that uses GNU Parallel if you feel like it.

About GNU Parallel

GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.

For example you can run this to convert all jpeg files into png and gif files and have a progress bar:

  parallel --bar convert {1} {1.}.{2} ::: *.jpg ::: png gif

Or you can generate big, medium, and small thumbnails of all jpeg files in sub dirs:

  find . -name '*.jpg' |
    parallel convert -geometry {2} {1} {1//}/thumb{2}_{1/} :::: - ::: 50 100 200

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with:

    $ (wget -O - pi.dk/3 || lynx -source pi.dk/3 || curl pi.dk/3/ || \
       fetch -o - http://pi.dk/3 ) > install.sh
    $ sha1sum install.sh | grep c82233e7da3166308632ac8c34f850c0
    12345678 c82233e7 da316630 8632ac8c 34f850c0
    $ md5sum install.sh | grep ae3d7aac5e15cf3dfc87046cfc5918d2
    ae3d7aac 5e15cf3d fc87046c fc5918d2
    $ sha512sum install.sh | grep dfc00d823137271a6d96225cea9e89f533ff6c81f
    9c5198d5 31a3b755 b7910ece 3a42d206 c804694d fc00d823 137271a6 d96225ce
    a9e89f53 3ff6c81f f52b298b ef9fb613 2d3f9ccd 0e2c7bd3 c35978b5 79acb5ca
    $ bash install.sh

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your command line will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2018): GNU Parallel 2018, March 2018, https://doi.org/10.5281/zenodo.1146014.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/ Identi.ca/Google+/Twitter/Facebook/Linkedin/mailing lists
  • Get the merchandise https://gnuparallel.threadless.com/designs/gnu-parallel
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference

If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)

If GNU Parallel saves you money:

  • (Have your company) donate to FSF: https://my.fsf.org/donate/

About GNU SQL

GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.
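
For example, one might run something along these lines (the DBURL here is made up):

  sql mysql://user:password@mysql.example.com/mydb "SELECT * FROM users;"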

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.

About GNU Niceload

GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.

22 May, 2021 08:20PM by Ole Tange

May 20, 2021

freeipmi @ Savannah

FreeIPMI 1.6.8 Released

o Fix incorrect sensor read corner case on BMCs that use non-default LUNs (LP#1926299).
o Remove hard-coded paths from system config files (i.e., mostly files in /etc).  Paths are now set based on options passed to configure.

https://ftp.gnu.org/gnu/freeipmi/freeipmi-1.6.8.tar.gz

20 May, 2021 04:28AM by Albert Chu

May 13, 2021

Andy Wingo

cross-module inlining in guile

Greetings, hackers of spaceship Earth! Today's missive is about cross-module inlining in Guile.

a bit of history

Back in the day... what am I saying? I always start these posts with loads of context. Probably you know it all already. 10 years ago, Guile's partial evaluation pass extended the macro-writer's bill of rights to Schemers of the Guile persuasion. This pass makes local function definitions free in many cases: if they should be inlined and constant-folded, you are confident that they will be. peval lets you write clear programs with well-factored code and still have good optimization.

The peval pass did have a limitation, though, which wasn't its fault. In Guile, modules have historically been a first-order concept: modules are a kind of object with a hash table inside, which you build by mutating. I speak crassly but that's how it is. In such a world, it's hard to reason about top-level bindings: what module do they belong to? Could they ever be overridden? When you have a free reference to a, and there's a top-level definition of a in the current compilation unit, is that the a that's being referenced, or could it be something else? Could the binding be mutated in the future?

During the Guile 2.0 and 2.2 cycles, we punted on this question. But for 3.0, we added the notion of declarative modules. For these modules, bindings which are defined once in a module and which are not mutated in the compilation unit are declarative bindings, which can be reasoned about lexically. We actually translate them to a form of letrec*, which then enables inlining via peval, contification, and closure optimization -- in descending order of preference.

The upshot is that with Guile 3.0, top-level bindings are no longer optimization barriers, in the case of declarative modules, which are compatible enough with historic semantics and usage that they are on by default.

However, module boundaries have still been an optimization barrier. Take (srfi srfi-1), a little utility library on lists. One definition in the library is xcons, which is cons with arguments reversed. It's literally (lambda (cdr car) (cons car cdr)). But does the compiler know that? Would it know that (car (xcons x y)) is the same as y? Until now, no, because no part of the optimizer will look into bindings from outside the compilation unit.

mr compiler, tear down this wall

But no longer! Guile can now inline across module boundaries -- in some circumstances. This feature will be part of a future Guile 3.0.8.

There are actually two parts to this. One is that the compiler can identify a set of "inlinable" values from a declarative module. An inlinable value is a small copyable expression. A copyable expression has no identity (it isn't a fresh allocation), and doesn't reference any module-private binding. Notably, lambda expressions can be copyable, depending on what they reference. The compiler then extends the module definition that's residualized in the compiled file to include a little procedure that, when passed a name, will return the Tree-IL representation of that binding. The design of that was a little tricky; we want to avoid overhead when using the module outside of the compiler, even relocations. See compute-encoding in that module for details.

With all of that, we can call ((module-inlinable-exports (resolve-interface '(srfi srfi-1))) 'xcons) and get back the Tree-IL equivalent of (lambda (cdr car) (cons car cdr)). Neat!

The other half of the facility is the actual inlining. Here we lean on peval again, causing <module-ref> forms to trigger an attempt to copy the term from the imported module to the residual expression, limited by the same effort counter as the rest of peval.

The end result is that we can be absolutely sure that constants in imported declarative modules will inline into their uses, and fairly sure that "small" procedures will inline too.

caveat: compiled imported modules

There are a few caveats about this facility, and they are sufficiently sharp that I should probably fix them some day. The first one is that for an imported module to expose inlinable definitions, the imported module needs to have been compiled already, not loaded from source. When you load a module from source using the interpreter instead of compiling it first, the pipeline is optimized for minimizing the latency between when you ask for the module and when it is available. There's no time to analyze the module to determine which exports are inlinable and so the module exposes no inlinable exports.

This caveat is mitigated by automatic compilation, enabled by default, which will compile imported modules as needed.

It could also be fixed for modules by topologically ordering the module compilation sequence; this would allow some parallelism in the build but less than before, though for module graphs with cycles (these exist!) you'd still have some weirdness.

caveat: abi fragility

Before Guile supported cross-module inlining, there was only explicit inlining across modules in Guile, facilitated by macros. If you write a module that has a define-inlinable export and you think about its ABI, then you know to consider any definition referenced by the inlinable export, and you know by experience that its code may be copied into other compilation units. Guile doesn't automatically recompile a dependent module when a macro that it uses changes, currently anyway. Admittedly this situation leans more on culture than on tools, which could be improved.
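
For reference, explicit inlining with define-inlinable looks like this (a generic example, not taken from Guile's source):

(define-module (my list-utils)
  #:export (xcons))

;; The body of 'xcons' may be copied into any compilation unit that
;; calls it, so changing it here won't update already-compiled users.
(define-inlinable (xcons cdr car)
  (cons car cdr))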

However, with automatically inlinable exports, this changes. Any definition in a module could be inlined into its uses in other modules. This may alter the ABI of a module in unexpected ways: you think that module C depends on module B, but after inlining it may depend on module A as well. Updating module B might not update the inlined copies of values from B into C -- as in the case of define-inlinable, but less lexically apparent.

At higher optimization levels, even private definitions in a module can be inlinable; these may be referenced if an exported macro from the module expands to a term that references a module-private variable, or if an inlinable exported binding references the private binding. But these optimization levels are off by default precisely because I fear the bugs.

Probably what this cries out for is some more sensible dependency tracking in build systems, but that is a topic for another day.

caveat: reproducibility

When you make a fresh checkout of Guile from git and build it, the build proceeds in the following way.

Firstly, we build libguile, the run-time library implemented in C.

Then we compile a "core" subset of Scheme files at optimization level -O1. This subset should include the evaluator, reader, macro expander, basic run-time, and compilers. (There is a bootstrap evaluator, reader, and macro expander in C, to start this process.) Say we have source files S0, S1, S2 and so on; generally speaking, these files correspond to Guile modules M0, M1, M2 etc. This first build produces compiled files C0, C1, C2, and so on. When compiling a file S2 implementing module M2, which happens to import M1 and M0, it may be M1 and M0 are provided by compiled files C1 and C0, or possibly they are loaded from the source files S1 and S0, or C1 and S0, or S1 and C0.

The bootstrap build uses make for parallelism, with each compile process starting afresh, importing all the modules that comprise the compiler and then using them to compile the target file. As the build proceeds, more and more module imports can be "serviced" by compiled files instead of source files, making the build go faster and faster. However, this introduces system-specific nondeterminism as to the set of compiled files available when compiling any other file. This strategy works because it doesn't really matter whether module M1 is provided by compiled file C1 or source file S1; the compiler and the interpreter implement the same language.

Once the compiler is compiled at optimization level -O1, Guile then uses that freshly built compiler to build everything at -O2. We do it in this way because building some files at -O1 then all files at -O2 takes less time than going straight to -O2. If this sounds weird, that's because it is.

The resulting build is reproducible... mostly. There is a bug in which some unique identifiers generated as part of the implementation of macros can be non-reproducible in some cases; disabling parallel builds seems to solve the problem. The issue is that gensym (or equivalent) might be called a different number of times depending on whether you are loading a compiled module, or whether you need to read and macro-expand it. The resulting compiled files are equivalent under alpha-renaming but not bit-identical. This is a bug to fix.

Anyway, at optimization level -O1, Guile will record inlinable definitions. At -O2, Guile will actually try to do cross-module inlining. We run into two issues when compiling Guile; one is if we are in the -O2 phase, and we compile a module M which uses module N, and N is not in the set of "core" modules. In that case depending on parallelism and compile order, N may be loaded from source, in which case it has no inlinable exports, or from a compiled file, in which case it does. This is not a great situation for the reliability of this optimization. I think probably in Guile we will switch so that all modules are compiled at -O1 before compiling at -O2.

The second issue is more subtle: inlinable bindings are recorded after optimization of the Tree-IL. This is more optimal than recording inlinable bindings before optimization, as a term that is not inlinable due to its size in its initial form may become small enough to inline after optimization. However, at -O2, optimization includes cross-module inlining! A term that is inlinable at -O1 may become not inlinable at -O2 because it gets slightly larger, or vice-versa: terms that are too large at -O1 could shrink at -O2. We don't even have a guarantee that we will reach a fixed point even if we repeatedly recompile all inputs at -O2, because we allow non-shrinking optimizations.

I think this probably calls for a topological ordering of module compilation inside Guile and perhaps in other modules. That would at least give us reproducibility, provided we avoid the feedback loop of keeping around -O2 files compiled from a previous round, even if they are "up to date" (their corresponding source file didn't change).

and for what?

People who have worked on inliners will know what I mean that a good inliner is like a combine harvester: ruthlessly efficient, a qualitative improvement compared to not having one, but there is a pointy end with lots of whirling blades and it's important to stop at the end of the row. You do develop a sense of what will and won't inline, and I think Dybvig's "Macro writer's bill of rights" encompasses this sense. Luckily people don't lose fingers or limbs to inliners, but inliners can maim expectations, and cross-module inlining more so.

Still, what it buys us is the freedom to be abstract. I can define a module like:

(define-module (elf)
  #:export (ET_NONE ET_REL ET_EXEC ET_DYN ET_CORE))

(define ET_NONE		0)		; No file type
(define ET_REL		1)		; Relocatable file
(define ET_EXEC		2)		; Executable file
(define ET_DYN		3)		; Shared object file
(define ET_CORE		4)		; Core file

And if a module uses my (elf) module and references ET_DYN, I know that the module boundary doesn't prevent the value from being inlined as a constant (and possibly unboxed, etc).

I took a look, and on our usual microbenchmark suite, cross-module inlining doesn't make a difference. But that's both a historical oddity and a bug: firstly, the benchmark suite comes from an old Scheme world that didn't have modules, and so won't benefit from cross-module inlining. Secondly, Scheme definitions from the "default" environment that aren't explicitly recognized as primitives aren't inlined, as the (guile) module isn't declarative. (Probably we should fix the latter at some point.)

But still, I'm really excited about this change! Guile developers use modules heavily and have been stepping around this optimization boundary for years. I count 100 direct uses of define-inlinable in Guile, a number of them inside macros, and many of these are to explicitly hack around the optimization barrier. I really look forward to seeing if we can remove some of these over time, to go back to plain old define and just trust the compiler to do what's needed.

by the numbers

I ran a quick analysis of the modules included in Guile to see what the impact was. Of the 321 files that define modules, 318 of them are declarative, and 88 contain inlinable exports (27% of the total). Of the 6519 total bindings exported by declarative modules, 566 are inlinable (8.7%). Of the inlinable exports, 388 (69%) are functions (lambda expressions), 156 (28%) are constants, and 22 (4%) are "primitives" referenced by value and not by name, meaning definitions like (define head car) (instead of re-exporting car as head).

On the use side, 90 declarative modules import inlinable bindings (29%), resulting in about 1178 total attempts to copy inlinable bindings. 902 of those attempts are to copy a lambda expression in operator position, which means that peval will attempt to inline its code. 46 of these attempts fail, perhaps due to size or effort constraints. 191 other attempts end up inlining constant values. 20 inlining attempts fail, perhaps because a lambda is used for a value. Somehow, 19 copied inlinable values get elided because they are evaluated only for their side effects, probably to clean up let-bound values that become unused due to copy propagation.

All in all, an interesting endeavor, and one to improve on going forward. Thanks for reading, and catch you next time!

13 May, 2021 11:25AM by Andy Wingo

May 11, 2021

GNU Guix

GNU Guix 1.3.0 released

Image of a Guix test pilot.

We are pleased to announce the release of GNU Guix version 1.3.0!

The release comes with ISO-9660 installation images, a virtual machine image, and with tarballs to install the package manager on top of your GNU/Linux distro, either from source or from binaries. Guix users can update by running guix pull.

It’s been almost 6 months since the last release, during which 212 people contributed code and packages, and a number of people contributed to other important tasks—code review, system administration, translation, web site updates, Outreachy mentoring, and more.

There’s been more than 8,300 commits in that time frame, which we’ll humbly try to summarize in these release notes.

User experience

A distinguishing Guix feature is its support for declarative deployment: instead of running a bunch of guix install and guix remove commands, you run guix package --manifest=manifest.scm, where manifest.scm lists the software you want to install in a snippet that looks like this:

;; This is 'manifest.scm'.
(specifications->manifest
  (list "emacs" "guile" "gcc-toolchain"))

Doing that installs exactly the packages listed. You can have that file under version control and share it with others, which is convenient. Until now, one would have to write the manifest by hand—not insurmountable, but still a barrier for someone wishing to migrate to the declarative model.

The new guix package --export-manifest command (and its companion --export-channels option) produces a manifest based on the contents of an existing profile. That makes it easy to transition from the classic “imperative” model, where you run guix install as needed, to the more formal declarative model. This was long awaited!
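
Concretely, migrating an existing profile can be as simple as this (the manifest file name is arbitrary):

$ guix package --export-manifest > manifest.scm
$ guix package --manifest=manifest.scm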

Users who like to always run the latest and greatest pieces of the free software commons will love the new --with-latest package transformation option. Using the same code as guix refresh, this option looks for the latest upstream release of a package, fetches it, authenticates it, and builds it. This is useful in cases where the new version is not yet packaged in Guix. For example, the command below, if run today, will (attempt to) install QEMU 6.0.0:

$ guix install qemu --with-latest=qemu 
The following package will be upgraded:
   qemu 5.2.0 → 6.0.0

Starting download of /tmp/guix-file.eHO6MU
From https://download.qemu.org//qemu-6.0.0.tar.bz2...
 …0.tar.bz2  123.3MiB  28.2MiB/s 00:04 [##################] 100.0%

Starting download of /tmp/guix-file.9NRlvT
From https://download.qemu.org//qemu-6.0.0.tar.bz2.sig...
 …tar.bz2.sig  310B  1.2MiB/s 00:00 [##################] 100.0%
gpgv: Signature made Thu 29 Apr 2021 09:28:25 PM CEST
gpgv:                using RSA key CEACC9E15534EBABB82D3FA03353C9CEF108B584
gpgv: Good signature from "Michael Roth <michael.roth@amd.com>"
gpgv:                 aka "Michael Roth <mdroth@utexas.edu>"
gpgv:                 aka "Michael Roth <flukshun@gmail.com>"
The following derivation will be built:
   /gnu/store/ypz433vzsbg3vjp5374fr9lhsm7jjxa4-qemu-6.0.0.drv

…

There’s one obvious caveat: this is not guaranteed to work. If the new version has a different build system, or if it requires extra dependencies compared to the version currently packaged, the build process will fail. Yet, it provides users with additional flexibility which can be convenient at times. For developers, it’s also a quick way to check whether a given package successfully builds against the latest version of one of its dependencies.

Several changes were made here and there to improve user experience. As an example, a new --verbosity level was added. By default (--verbosity=1), fewer details about downloads get printed, which matches the expectation of most users.

Another handy improvement is suggestions when making typos:

$ guix package --export-manifests
guix package: error: export-manifests: unrecognized option
hint: Did you mean `export-manifest'?

$ guix remve vim
guix: remve: command not found
hint: Did you mean `remove'?

Try `guix --help' for more information.

People setting up build offloading over SSH will enjoy the simplified process, where the guile executable no longer needs to be in PATH, with appropriate GUILE_LOAD_PATH settings, on target machines. Instead, offloading now channels all its operations through guix repl.

The Guix reference manual is fully translated into French, German, and Spanish, with preliminary translations in Russian, Chinese, and other languages. Guix itself is fully translated in French, German, and Slovak, and partially translated in almost twenty other languages. Translations are now handled on Weblate, and you can help!

Developer tools

We have good news for packagers! First, guix import comes with a new Go recursive importer, that can create package definitions or templates thereof for whole sets of Go packages. The guix import crate command, for Rust packages, now honors “semantic versioning” when used in recursive mode.
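As a sketch of what the recursive Go importer looks like in practice — assuming it takes the same --recursive flag as the other recursive importers; the module path is just an illustration:

$ guix import go --recursive golang.org/x/sys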

The guix refresh command now includes new “updaters”: sourceforge, for code hosted on SourceForge, and generic-html which, as the name implies, is a generic updater that works by scanning package home pages. This greatly improves guix refresh coverage.

Packagers and developers may also like the new --with-patch package transformation option, which provides a way to build a bunch of packages with a patch applied to one or several of them.
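A hedged example, assuming --with-patch follows the PACKAGE=FILE convention of the other transformation options; the patch file here is hypothetical:

$ guix build hello --with-patch=hello=./fix-greeting.patch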

Building on the Guix System image API introduced in v1.2.0, the guix system vm-image and guix system disk-image commands are superseded by a unified guix system image command. For example,

guix system vm-image --save-provenance config.scm

becomes

guix system image -t qcow2 --save-provenance config.scm

while

guix system disk-image -t iso9660 gnu/system/install.scm

becomes

guix system image -t iso9660 gnu/system/install.scm

This brings performance benefits; while a virtual machine used to be involved in the production of the image artifacts, the low-level bits are now handled by the dedicated genimage tool. Another benefit is that the qcow2 format is now compressed, which removes the need to manually compress the images by post-processing them with xz or another compressor. To learn more about the guix system image command, you can refer to its documentation.

Last but not least, the introduction of the GUIX_EXTENSIONS_PATH Guix search path should make it possible for Guix extensions, such as the Guix Workflow Language, to have their Guile modules automatically discovered, simplifying their deployments.

Performance

One thing you will hopefully notice is that substitute installation (downloading pre-built binaries) became faster, as we explained before. This is in part due to the opportunistic use of zstd compression, which has a high decompression throughput. The daemon and guix publish support zstd as an additional compression method, next to gzip and lzip.

Another change that can help fetch substitutes more quickly is local substitute server discovery. The new --discover option of guix-daemon instructs it to discover and use substitute servers on the local-area network (LAN) advertised with the mDNS/DNS-SD protocols, using Avahi. Similarly, guix publish has a new --advertise option to advertise itself on the LAN.

On Guix System, you can run herd discover guix-daemon on to turn discovery on temporarily, or you can enable it in your system configuration. Opportunistic use of neighboring substitute servers is entirely safe, thanks to reproducible builds.
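Putting the two sides together, here is a sketch of what this can look like on a LAN, using only the options mentioned above. On the machine serving substitutes:

$ guix publish --advertise

And on a consuming Guix System machine, as root:

# herd discover guix-daemon on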

In other news, guix system init has been optimized, which contributes to making Guix System installation faster.

On some machines with limited resources, building the Guix modules is an expensive operation. A new procedure, channel-with-substitutes-available from the (guix ci) module, can now be used to pull Guix to the latest commit which has already been built by the build farm. Refer to the documentation for an example.
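As a hedged sketch, a channels.scm along these lines should do it, mirroring the documented example as far as I recall it:

;; channels.scm: follow Guix only up to the latest commit that
;; the build farm at ci.guix.gnu.org has already built.
(use-modules (guix ci) (guix channels))

(list (channel-with-substitutes-available
       %default-guix-channel
       "https://ci.guix.gnu.org"))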

POWER9 support, packages, services, and more!

POWER9 support is now available as a technology preview, thanks to the tireless work of those who helped port Guix to that platform. There aren't many POWER9 binary substitutes available yet, due to the limited POWER9 capacity of our build farm, but if you are not afraid of building many packages from source, we'd be thrilled to hear back about your experience!

2,000 packages were added, for a total of more than 17K packages; 3,100 were updated. The distribution comes with GNU libc 2.31, GCC 10.3, Xfce 4.16.0, Linux-libre 5.11.15, LibreOffice 6.4.7.2, and Emacs 27.2, to name a few. Among the many packaging changes, one that stands out is the new OCaml bootstrap: the OCaml package is now built entirely from source via camlboot. Package updates also include Cuirass 1.0, the service that powers our build farm.

The services catalog has also seen new additions such as wireguard, syncthing, ipfs, a simplified and more convenient service for Cuirass, and more! You can search for services via the guix system search facility.

The NEWS file lists additional noteworthy changes and bug fixes you may be interested in.

Try it!

The installation script has been improved to allow for more automation. For example, if you are in a hurry, you can run it with:

# yes | ./install.sh

to proceed to install the Guix binary on your system without any prompt!

You may also be interested in trying the Guix System demonstration VM image which now supports clipboard integration with the host and dynamic resizing thanks to the SPICE protocol, which we hope will improve the user experience.

To review all the installation options at your disposal, consult the download page and don't hesitate to get in touch with us.

Enjoy!

Credits

Luis Felipe (illustration)

About GNU Guix

GNU Guix is a transactional package manager and an advanced distribution of the GNU system that respects user freedom. Guix can be used on top of any system running the Hurd or the Linux kernel, or it can be used as a standalone operating system distribution for i686, x86_64, ARMv7, AArch64 and POWER9 machines.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. When used as a standalone GNU/Linux distribution, Guix offers a declarative, stateless approach to operating system configuration management. Guix is highly customizable and hackable through Guile programming interfaces and extensions to the Scheme language.

11 May, 2021 10:00PM by Ludovic Courtès, Maxim Cournoyer

m4 @ Savannah

GNU M4 1.4.18d released [beta]

More details at https://lists.gnu.org/archive/html/m4-discuss/2021-05/msg00002.html
This beta release will give translators time to prepare .po files before the stable release of 1.4.19 later this month.

11 May, 2021 05:15PM by Eric Blake

May 10, 2021

GNU Guile

GNU Guile 3.0.7 released

We are humbled to announce the release of GNU Guile 3.0.7. This release fixes a number of bugs, a couple of which were introduced in the previous release. For full details, see the NEWS entry. See the release note for signatures, download links, and all the rest. Happy hacking!

10 May, 2021 09:00AM by Andy Wingo (guile-devel@gnu.org)

May 09, 2021

GNUnet News

DISSENS: Decentralized Identities for Self-sovereign End-users

DISSENS: Decentralized Identities for Self-sovereign End-users (NGI TRUST)

Since mid 2020, a consortium of Taler Systems S.A., the Bern University of Applied Sciences, and Fraunhofer AISEC has been working on bringing privacy-friendly payments using GNU Taler and self-sovereign identity using GNUnet's re:claimID together in an e-commerce framework.

Content

Registering an account before receiving services online is the standard process for commercial offerings on the Internet, which depend on two cornerstones of the Web: payment processing and digital identities. The use of third-party identity provider services (IdPs) is practical as it delegates the task of verifying and storing personal information. The use of payment processors is convenient for the customer as it provides one-click payments. However, the quasi-oligopoly of service providers in those areas includes Google and Facebook for identities and PayPal or Stripe for payment processing. Those corporations are not only based in privacy-unfriendly jurisdictions, but also exploit private data for profit.

DISSENS makes the case that what is urgently needed are fundamentally different, user-centric, privacy-friendly alternatives to the above. Self-sovereign identity (SSI) management is the way to replace IdPs with a user-centric, decentralized mechanism where data and access control is fully under the control of the data subject. In combination with a privacy-friendly payment system, DISSENS aims to achieve the same one-click user experience that is currently achieved by privacy-invasive account-based Web shops, but without the users having to set up accounts.

To achieve this, DISSENS integrates re:claimID with the GNU Taler payment system in a pilot in order to demonstrate the practical feasibility and benefits of privacy enhancing technologies for users and commercial service providers. DISSENS also implements a reference scenario which includes credentials issued by the partners Fraunhofer AISEC and BFH for employees and students, respectively. Users are able to access and use a pilot service developed by Taler Systems S.A. while being able to claim specific discounts for students and researchers.

This approach offers significant benefits over existing solutions built using other SSI systems such as Sovrin or serto (formerly uPort):

No gatekeepers; No vendor lock-in:

The approach is completely open to issuers and does not impose any registration restrictions (such as registration fees) in order to define domain specific credentials. Further, the system does not impose a consortium-based governance model — which tend to eventually be driven by commercial interests and not consumer interests. The design enables all participants in the ecosystem to participate without prior onboarding while at the same time being offered full transparency and control regarding their personal data and processes involved.

Support for non-interactive business processes:

At the same time, unlike the SSI systems cited above, re:claimID offers a way to access user information without online interaction with the user. Offline access of shared identity data is a crucial requirement in almost any business process as such processes often occur after direct interaction with the user. For example, customer information such as billing addresses are required in — possibly recurring — back office billing processes which occur well after interaction with a customer.

Scalability and sustainability:

Finally, neither re:claimID as the SSI system nor Taler suffers from the usual predicament blockchain-based systems find themselves in: neither requires a decentralized, public ledger. This eliminates the need for consensus mechanisms, which do not scale and are ecologically unsustainable. In fact, DISSENS employs decentralization only where it provides the most value and uses more efficient technology stacks where needed: re:claimID builds on top of the GNU Name System, which makes use of a DHT, an efficient (O(log n)) peer-to-peer data structure. For payments, GNU Taler uses centralized infrastructure operated by audited and regulated exchange providers and facilitates account-less end-to-end interactions between customers and services where all parties have O(1) transaction costs.

The result of DISSENS will provide businesses and credential issuers with ready-to-use and standards-compliant templates to build privacy-friendly services in the Web. The aim of the DISSENS project was to design a technology stack which combines privacy-friendly online payments with self-sovereign personal data management. The result enables users to be in complete control over their digital identity and personal information while at the same time being able to selectively share information necessary to use commercial services. The pilot demonstrates a sustainable, user-centric, standard-compliant and accessible use case for public service employees and students in the domain of commercial food delivery. It serves as an easy-to-adapt template for the integration of other scenarios and use cases.

Future work

GNUnet is working on maturing the underlying components to the point that Taler+re:claimID can be recommended to operators to enable account-less shopping with or without verified credentials. This will also require the continuation of our work on the low-level transport rewrite, as it is a core component of GNS, which in turn is what makes re:claimID spin.

Links

This work is generously funded by the EC's Next Generation Internet (NGI) initiative as part of their NGI TRUST programme.

09 May, 2021 10:00PM

May 08, 2021

m4 @ Savannah

GNU M4 1.4.18b released [beta]

More details at https://lists.gnu.org/archive/html/m4-discuss/2021-05/msg00001.html
This beta release will give translators time to prepare .po files before the stable release of 1.4.19 later this month.

08 May, 2021 10:56AM by Eric Blake

May 06, 2021

health @ Savannah

Beta testers for MyGNUHealth Personal Health Record

Dear community

I am very happy to announce that the documentation for MyGNUHealth beta is now online.

We would love beta testers both on the desktop (KDE Plasma) and on the PinePhone; if everything goes well, we will shortly be able to publish the first stable release.

We would also like to have **translators** for the documentation and the application itself. We are working with the KDE community in these areas.

For those of you interested, I suggest the first step is to read the current MyGNUHealth documentation and take it from there :)

https://www.gnuhealth.org/docs

Axel and other friends from the KDE community have been early testers since alpha stage. Now we need to go further and check it with a larger audience.

If you want to become a MyGNUHealth beta tester, send us a mail at info@gnuhealth.org.

All the best
Luis

06 May, 2021 09:38PM by Luis Falcon

remotecontrol @ Savannah

Nest Thermostat bug disables Google Home app control

https://9to5google.com/2021/05/04/nest-thermostat-migration-google-home-app-bug/

Nest Thermostat bug disables Google Home app control with endless account migration loop.

06 May, 2021 10:36AM by Stephen H. Dawson DSL

May 05, 2021

poke @ Savannah

Pokology: a community-driven website about GNU poke

We are happy to announce the availability of a new website, https://pokology.org.

Pokology is a community-driven live repository of knowledge relative to GNU poke, maintained by the poke developers, users and friends.

The site is similar to a wiki, collectively maintained as a git repository.  The contents are written in comfortable org-mode files which get automatically published to HTML.

Happy poking!

05 May, 2021 12:30AM by Jose E. Marchesi

April 30, 2021

FSF News

Free Software Wireless-N Mini Router v3 from ThinkPenguin, Inc. now FSF-certified to Respect Your Freedom

BOSTON, Massachusetts, USA -- Friday, April 30th, 2021 -- The Free Software Foundation (FSF) today awarded Respects Your Freedom (RYF) certification to the Free Software Wireless-N Mini Router v3 (TPE-R1300) from ThinkPenguin, Inc. The RYF certification mark means that these products meet the FSF's standards in regard to users' freedom, control over the product, and privacy.

30 April, 2021 03:45PM

April 28, 2021

FSF board frequently asked questions (FAQ)

28 April, 2021 05:50PM

GNU Guile

GNU Guile 3.0.6 released

We are pleased to announce the release of GNU Guile 3.0.6. This release improves source-location information for compiled code, removes the dependency on libltdl, fixes some important bugs, adds an optional bundled "mini-gmp" library, as well as the usual set of minor optimizations and bug fixes. For full details, see the NEWS entry. See the release note for signatures, download links, and all the rest. Happy hacking!

28 April, 2021 07:20AM by Andy Wingo (guile-devel@gnu.org)

April 27, 2021

dico @ Savannah

Version 2.11

Version 2.11 of GNU dico is available for download.  This version fixes several bugs and inconsistencies in the gcide module and the gcider utility.  Also, this version drops the support for Python 2.7.

27 April, 2021 07:40AM by Sergey Poznyakoff

April 24, 2021

Christopher Allan Webber

Beyond the shouting match: what is a blockchain, really?

If there's one thing that's true about the word "blockchain", it's that these days people have strong opinions about it. Open your social media feed and you'll see people either heaping praises on blockchains, calling them the saviors of humanity, or condemning them as destroying and burning down the planet and making the rich richer and the poor poorer and generally all the other kinds of fights that people like to have about capitalism (also a quasi-vague word occupying some hotly contested mental real estate).

There are good reasons to hold opinions about various aspects of what are called "blockchains", and I too have some pretty strong opinions I'll be getting into in a followup article. The followup article will be about "cryptocurrencies", which many people also seem to think of as synonymous with "blockchains"; this isn't particularly true either, but we'll deal with that one then.

In the meanwhile, some of the fighting on the internet is kind of confusing, but even more importantly, kind of confused. Some of it might be what I call "sportsballing": for whatever reason, for or against blockchains has become part of your local sportsball team, and we've all got to be team players or we're gonna let the local team down already, right? And the thing about sportsballing is that it's kind of arbitrary and it kind of isn't, because you might pick a sportsball team because you did all your research or you might have picked it because that just happens to be the team in your area or the team your friends like, but god almighty once you've picked your sportsball team let's actually not talk against it because that might be giving in to the other side. But sportsballing kind of isn't arbitrary either because it tends to be initially connected to real communities of real human beings and there's usually a deeper cultural web than appears at surface level, so when you're poking at it, it appears surface-level shallow but there are some real intricacies beneath the surface. (But anyway, go sportsball team.)

But I digress. There are important issues to discuss, yet people aren't really discussing them, partly because people mean different things. "Blockchain" is a strange term that encompasses a wide idea space, and what people consider or assume essential to it vary just as widely, and thus when two people are arguing they might not even be arguing about the same thing. So let's get to unpacking.

"Blockchain" as handwaving towards decentralized networks in general

Years ago I was at a conference about decentralized networked technology, and I was having a conversation with someone I had just met. This person was telling me how excited they were about blockchains... finally we have decentralized network designs, and so this seems really useful for society!

I paused for a moment and said yes, blockchains can be useful for some things, though they tend to have significant costs or at least tradeoffs. It's good that we also have other decentralized network technology; for example, the ActivityPub standard I was involved in had no blockchains but did rely on the much older "classic actor model."

"Oh," the other person said, "I didn't know there were other kinds of decentralized network designs. I thought that 'blockchain' just meant 'decentralized network technology'."

It was as if a light had turned on and illuminated the room for me. Oh! This explained so many conversations I had been having over the years. Of course... for many people, blockchains like Bitcoin were the first ever exposure they had (aside from email, which maybe they never gave much thought to as being decentralized) of something that involved a decentralized protocol. So for many people, "blockchain" and "decentralized technology" are synonyms, if not in technical design, but in terms of understanding of a space.

Mark S. Miller, who was standing next to me, smiled and gave a very interesting followup: "There is only one case in which you need a blockchain, and that is in a decentralized system which needs to converge on a single order of events, such as a public ledger dealing with the double spending problem."

Two revelations at once. It was a good conversation... it was a good start. But I think there's more.

Blockchains are the "cloud" of merkle trees

As time has gone on, the discourse over blockchains has gotten more dramatic. This is partly because what a "blockchain" is hasn't been well defined.

All terminology exists in an ever-present battle between fuzziness and crispness, with some terms being much clearer than others. The term "boolean" has a fairly crisp definition in computer science, but if I ask you to show me your "stove", the device you show me today may be incomprehensible to someone's definition a few centuries ago, particularly in that today it might not involve fire. Trying to define a term in terms of its functionality can also cause confusion: if I asked you to show me a stove, and you showed me a computer processor or a car engine, I might be fairly confused, even though technically people enjoy showing off that they can cook eggs on both of these devices when they get hot enough. (See also: Identity is a Katamari, language is a Katamari explosion.)

Still, some terms are fuzzier than others, and as far as terms go, "blockchain" is quite fuzzy. Hence my joke: "Blockchains are the 'cloud' of merkle trees."

This ~joke tends to get a lot of laughs out of a particular kind of audience, and confused looks from others, so let me explain. The one thing everyone seems to agree on is that it's a "chain of blocks", but all that really seems to mean is that it's a merkle tree... really, just an immutable datastructure where one node points at the parent node which points at the parent node all the way up. The joke then is not that this merkle tree runs on a cloud, but that "cloud computing" means approximately nothing: it's marketing speak for some vague handwavey set of "other peoples' computers are doing computation somewhere, possibly on your behalf sometimes." Therefore, "cloud of merkle trees" refers to the vagueness of the situation. (As everyone knows, jokes are funnier when fully explained, so I'll turn on my "STUDIO LAUGHTER" sign here.)

So, a blockchain is a chain of blocks, ie a merkle tree, and I mean, technically speaking, that means that Git is a blockchain (especially if the commits are signed), but when you see someone arguing on the internet about whether or not blockchains are "good" or "bad", they probably weren't thinking about git, which aside from having a high barrier of entry in its interface and some concerns about the hashing algorithm used, isn't really something likely to drag you into an internet flamewar.
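To make the "chain of blocks" reading concrete, here's a toy sketch in Scheme; (sha256 ...) is a hypothetical string-hashing procedure standing in for whatever hash function you like:

;; Toy chain of blocks: each block records its payload and the hash of
;; its parent, so rewriting history changes every later hash.
(define (make-block payload parent-hash)
  (list (sha256 (string-append payload parent-hash)) ; this block's hash
        payload
        parent-hash))

(define (block-hash block)
  (car block))

;; Extending the chain: a new block always points at the current tip.
(define (extend-chain tip payload)
  (make-block payload (block-hash tip)))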

"Blockchain" is to "Bitcoin" what "Roguelike" is to "Rogue"

These days it's common to see people either heaping praises on blockchains or criticizing them, and those people tend to be shouting past one another. I'll save unpacking that for another post. In the meanwhile though, it's worth noting that people might not be talking about the same things.

What isn't in doubt is whether or not Bitcoin is a blockchain... trying to understand and then explore the problem space around Bitcoin is what created the term "blockchain". It's a bit like the video game genre of roguelikes, which started with the game Rogue, particularly explored and expanded upon in NetHack, and then suddenly exploding into the indie game scene as a "genre" of its own. Except the genre has become fuzzier and fuzzier as people have explored the surrounding space. What is essential? Is a grid based layout essential? Is a non-euclidean grid acceptable? Do you have to provide an ascii or ansi art interface so people can play in their terminals? Dare we allow unicode characters? What if we throw out terminals altogether and just play on a grid of 2d pixelart? What about 3d art? What about permadeath? What about the fantasy theme? What about random level generation? What are the key features of a roguelike?

Well now we're at the point where I pick up a game like Blazing Beaks and it calls itself a "roguelite", which I guess is embracing the point that terminology has gotten extremely fuzzy... this game feels more like Robotron than Rogue.

So... if "blockchain" is to Bitcoin what "roguelike" is to Rogue, then what's essential to a blockchain? Does the blockchain have to be applied to a financial instrument, or can it be used to store updateable information about eg identity? Is global consensus required? Or what about a "trusted quorum" of nodes, such as in Hyperledger? Is "mining" some kind of asset a key part of the system? Is proof of work acceptable, or is proof of stake okay? What about proof of space, proof of space-time, proof of pudding?

On top of all this, some of the terms around blockchains have been absorbed as if into them. For instance, I think to many people, "smart contract" means something like "code which runs on a blockchain" thanks to Ethereum's major adoption of the term, but the E programming language described "smart contracts" as the "likely killer app of distributed capabilities" all the way back in 1999, and was borrowing the term from Nick Szabo, but really the same folks working on E had described many of those same ideas in the Agoric Papers back in 1988. Bitcoin wasn't even a thing at all until at least 2008, so depending on how you look at it, "smart contracts" precede "blockchains" by one or two decades. So "blockchain" has somehow even rolled up terms outside of its space as if within it. (By the way, I don't think anyone has given a good and crisp definition for "smart contract" either despite some of these people trying to give me one, so let me give you one that I think is better and embraces its fuzziness: "Smart contracts allow you to do the kinds of things you might do with legal contracts, but relying on networked computation instead of a traditional state-based legal system." It's too bad more people also don't know about the huge role that Mark Miller's "split contracts" idea plays into this space because that's what finally makes the idea make sense... but that's a conversation for another time.) (EDIT: Well, after I wrote this, Kate Sills lent me her definition, which I think is the best one: "Smart contracts are credible commitments using technology, and outside a state-provided legal system." I like it!)

So anyway, the point of this whole section is to say that, kind of like with roguelikes, people are thinking of different things as essential to blockchains. Everyone roughly agrees on the jumping-off point of ideas but since not everyone agrees from there, it's good to check in when we're having the conversation. Wait, you do/don't like this game because it's a roguelike? Maybe we should check in on what features you mean. Likewise for blockchains. Because if you're blaming blockchains for burning down the planet, more than likely you're not condemning signed git repositories (or at least, if you're condemning them, you're probably doing so from an aspect that isn't the fundamental datastructure... probably).

This is an "easier said than done" kind of thing though, because of course, I'm kind of getting into some "in the weeds" level of details here... but it's the "in the weeds" where all the substance of the disagreements really are. The person you are talking with might not actually even know or consider the same aspects to be essential that you consider essential though, so taking some time to ask which things we mean can help us lead to a more productive conversation sooner.

"Blockchain" as an identity signal

First, a digression. One thing that's kind of curious about the term "virtue signal" is that in general it tends to be used as a kind of virtue signal. It's kind of like the word hipster in the previous decade, which weirdly seemed to be obsessively and pejoratively used by people who resembled hipsters more than anyone else. Hence I used to make a joke called "hipster recursion", which is that since hipsters seem more obsessed with pejorative labeling of hipsterism than anyone else, there's no way to call someone a "hipster" without yourself taking on hipster-like traits, and so inevitably even this conversation is N-levels deep into hipster recursion for some numerical value of N.

"Virtue signaling" appears similar, but even more ironically so (which is a pretty amazing feat given how much of hipsterdom seems to surround a kind of inauthentic irony). When I hear someone say "virtue signaling" with a kind of sneer, part of that seems to be acknowledging that other people are sending signals merely to impress others that they are some kind of the same group but it seems as if it's being raised as in a you-know-and-I-know-that-by-me-acknowledging-this-I'm-above-virtue-signaling kind of way. Except that by any possible definition of virtue signaling, the above appears to be a kind of virtue signaling, so now we're into virtue signaling recursion.

Well, one way to claw our way out of the rabbithole of all this is to drop the pejorative aspect of it and just acknowledge that signaling is something that everyone does. Hence me saying "identity signaling" here. You can't really escape identity signaling, or even sportsballing, but you can acknowledge that it's a thing that we all do, and there's a reason for it: people only have so much time to find out information about each other, so they're searching for clues that they might align and that, if they introduce you to their peer group, that you might align with them as well, without access to a god-like view of the universe where they know exactly what you think and exactly what kinds of things you've done and exactly what way you'll behave in the future or whether or not you share the same values. (After all, what else is virtue ethics but an ethical framework that takes this in its most condensed form as its foundation?) But it's true that at its worst, this seems to result in shallow, quick, judgmental behavior, usually based on stereotypes of the other side... which can be unfortunate or unfair to whomever is being talked about. But also on the flip side, people also do identity signal to each other because they want to create a sense of community and bonding. That's what a lot of culture is. It's worth acknowledging then that this occurs, recognizing its use and limitations, without pretending that we are above it.

So wow, that's quite a major digression, so now let's get back to "identity signaling". There is definitely a lot of identity signaling that tends to happen around the word "blockchain", for or against. Around the critiques of the worst of this, I tend to agree: I find much of the machismo hyper-white-male-privilege that surrounds some of the "blockchain" space uncomfortable or cringey.

But I also have some close friends who are not male and/or are people of color and those ones tend to actually suffer the worst of it from these communities internally, but also seem to find things of value in them, but particularly seem to feel squeezed externally when the field is reduced to these kinds of (anti?-)patterns. There's something sad about that, where I see on the one hand friends complaining about blockchain from the outside on behalf of people who on the inside seem to be both struggling internally but then kind of crushed by being lumped into the same identified problems externally. This is hardly a unique problem but it's worth highlighting for a moment I think.

But anyway, I've taken a bunch of time on this, more than I care to, maybe because (irony again?) I feel that too much of public conversation is also hyperfocusing on this aspect... whether there's a subculture around blockchain, whether or not that subculture is good or bad, etc. There's a lot worthwhile in unpacking this discourse-wise, but some of the criticisms of blockchains as a technology (to the extent it even is coherently one) seem to get lumped up into all of this. It's good to provide thoughtful cultural critique, particularly one which encourages healthy social change. And we can't escape identity signaling. But as someone who's trying to figure out what properties of networked systems we do and don't want, I feel like I'm trying to navigate the machine and for whatever reason, my foot keeps getting caught in the gears here. Well, maybe that itself is pointing to some architectural mistakes, but socially architectural ones. But it's useful to also be able to draw boundaries around it so that we know where this part of the conversation begins and ends.

"Blockchain" as "decentralized centralization" (or "decentralized convergence")

One of the weird things about people having the idea of "blockchains" as being synonymous with "decentralization" is that it's kind of both very true and very untrue, depending on what abstraction layer you're looking at.

For a moment, I'm going to frame this in harsh terms: blockchains are decentralized centralization.

What? How dare I! You'll notice that this section is in harsh contrast to the "blockchain as handwaving towards decentralized networks in general" section... well, I am acknowledging the decentralized aspect of it, but the weird thing about a blockchain is that it's a decentralized set of nodes converging on (creating a centrality of!) a single abstract machine.

Contrast with classic actor model systems like CapTP in Spritely Goblins, or as less good examples (because they aren't quite as behavior-oriented as they are correspondence-oriented, usually) ActivityPub or SMTP (ie, email). All of these systems involve decentralized computation and collaboration stemming from sending messages to actors (aka "distributed objects"). Of CapTP this is especially clear and extreme: computations happen in parallel across many collaborating machines (and even better, many collaborating objects on many collaborating machines), and the behavior of other machines and their objects is often even opaque to you. (CapTP survives this in a beautiful way, being able to do well on anonymous, peer to peer, "mutually suspicious" networks. But maybe read my rambling thoughts about CapTP elsewhere.)

While to some degree there are some very clever tricks in the world of cryptography where you may be able to get back some of the opacity, this tends to be very expensive, adding an expensive component to the already inescapable additional expenses of a blockchain. A multi-party blockchain with some kind of consensus will always, by definition, be slower than a single machine operating alone.

If you are irritated by this framing: good. It's probably good to be irritated by it at least once, if you can recognize the portion of truth in it. But maybe that needs some unpacking to get there. It might be better to say "blockchains are decentralized convergence", but I have some other phrasing that might be helpful.

"Blockchain" as "a single machine that many people run"

There's value in having a single abstract machine that many people run. The most famous source of value is in the "double spending problem". How do we make sure that when someone has money, they don't spend that money twice?

Traditional accounting solves this with a linear, sequential ledger, and it turns out that the right solution boils down to the same thing in computers. Emphasis on sequential: in order to make sure money balances out right, we really do have to be able to order things.

Here's the thing though: the double spending problem was in a sense solved in terms of single-computers a long time ago in the object capability security community. Capability-based Financial Instruments was written about a decade before blockchains even existed and showed off how to make a "mint" (kind of like a fiat-currency bank) that can be implemented in about 25 lines of code in the right architecture (I've ported it to Goblins, for instance) and yet has both distributed accounts and is robust against corruption on errors.

However, this seems to be running on a "single-computer based machine", and again operates like a fiat currency. Anyone can create their own fiat currency like this, and they are cheap, cheap, cheap (and fast!) to make. But it does rely on sequentiality to some degree to operate correctly (avoiding a class of attacks called "re-entrancy attacks").
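For flavor, here's a drastically simplified sketch of that mint pattern in Scheme. It omits the sealer/unsealer trick that keeps the decrement authority private to sibling purses, so treat it as an illustration of the shape rather than a secure implementation; all the names are invented:

;; A toy mint: the mint makes purses; moving money means decrementing
;; one purse and incrementing another.  A real ocap mint seals the
;; decrement operation so only purses of the same mint can reach it.
(define (make-mint)
  (lambda (initial-balance)                 ; the mint makes purses
    (let ((balance initial-balance))
      (define (decrement! amount)
        (unless (<= 0 amount balance)
          (error "insufficient funds"))
        (set! balance (- balance amount)))
      (define (deposit! amount source-purse)
        ((source-purse 'decrement!) amount) ; take from the source first
        (set! balance (+ balance amount)))
      (lambda (msg)                         ; the purse itself
        (case msg
          ((balance) balance)
          ((decrement!) decrement!)
          ((deposit!) deposit!))))))

;; Usage: a currency is just a mint; accounts are purses.
(define carol-bucks (make-mint))
(define alice-purse (carol-bucks 100))
(define bob-purse (carol-bucks 0))
((bob-purse 'deposit!) 30 alice-purse)  ; alice: 70, bob: 30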

But this "single-computer based machine" might bother you for a couple reasons:

  • We might be afraid the server might crash and service will be interrupted, or worse yet, we will no longer be able to access our accounts.

  • Or, even if we could trade these on an open market, and maybe diversify our portfolio, maybe we don't want to have to trust a single operator or even some appointed team of operators... maybe we have a lot of money in one of these systems and we want to be sure that it won't suddenly vanish due to corruption.

Well, if our code operates deterministically, then what if from the same initial conditions (or saved snapshot of the system) we replay all input messages to the machine? Functional programmers know: we'll end up with the same result.

So okay, we might want to be sure this doesn't accidentally get corrupted, maybe for backup reasons. So maybe we submit the input messages to two computers, and then if one crashes, we just continue on with the second one until the other comes up, and then we can restore the first one from the progress the second machine made while the first one was down.

Oh hey, this is already technically a blockchain. Except our trust model is that we implicitly trust both machines.

Hm. Maybe we're now worried that we might have top-down government pressure to coerce some behavior on one of our nodes, or maybe we're worried that someone at a local datacenter is going to flip some bits to make themselves rich. So we actually want to spread this abstract machine out over three countries. So okay, we do that, and now we set a rule agreeing on what all the series of input messages are... if two of three nodes agree, that's good enough. Oh hey look, we've just invented the "small-quorum-style" blockchain/ledger!

(And yes, you can wire up Goblins to do just this; a hint as to how is seen in the Terminal Phase time travel demo. Actually, let's come back to that later.)

Well, okay. This is probably good enough for a private financial asset, but what about if we want to make something more... global? Where nobody is in charge!

Well, we could do that too. Here's what we do.

  • First, we need to prevent a "swarming attack" (okay, this is generally called a "sybil attack" in the literature, but for a multitude of reasons I won't get into, I don't like that term). If a global set of peers are running this single abstract machine, we need to make sure there aren't invocations filling up the system with garbage, since we all basically have to keep that information around. Well... this is exactly where those proof-of-foo systems come in for the first time; in fact Proof of Work's origin is in something called Hashcash which was designed to add "friction" to disincentivize spam for email-like systems. If we don't do something friction-oriented in this category, our ledger is going to be too easily filled with garbage too fast. We also need to agree on what the order of messages is, so we can use this mechanism in conjunction with a consensus algorithm.

  • When are new units of currency issued? Well, in our original mint example, the person who set up the mint was the one given the authority to make new money out of thin air (and they can hand out attenuated versions of that authority to others as they see fit). But what if instead of handing this capability out to individuals we handed it out to anyone who can meet an abstract requirement? For instance, in zcap-ld an invoker can be any kind of entity which is specified with linked data proofs, meaning those entities can be something other than a single key... for instance, what if we delegated to an abstract invoker that was specified as being "whoever can solve the state of the machine's current proof-of-work puzzle"? Oh my gosh! We just took our 25-line mint and extended it for mining-style blockchains. And the fundamental design still applies!

With these two adjustments, we've created a "public blockchain" akin to bitcoin. And we don't need to use proof-of-work for either technically... we could swap in different mechanisms of friction / qualification.

If the set of inputs are stored as a merkle tree, then all of the system types we just looked at are technically blockchains:

  • A second machine as failover in a trusted environment

  • Three semi-trusted machines with small-scale private consensus

  • A public blockchain without global trust, with swarming-attack resistance and an interesting abstract capability accessible to anyone who can meet the abstract requirement (in this case, to issue some new currency).

The difference for choosing any of the above is really a question of: "what is your trust/failover requirements?"

Blockchains as time travel plus convergent inputs

If this doesn't sound believable to you, that you could create something like a "public blockchain" on top of something like Goblins so easily, consider how we might extend time travel in Terminal Phase to add multiplayer. As a reminder, here's an image:

Time travel in Spritely Goblins shown through Terminal Phase

Now, a secret thing about Terminal Phase is that the gameplay is deterministic (the random starfield in the background is not, but the gameplay is) and runs on a fixed frame-rate. This means that given the same set of keyboard inputs, the game will always play the same, every time.

Okay, well let's say we wanted some way for someone to replay our last game. Chess games can be fully replayed with a very condensed syntax, meaning that merely by handing someone a short list of codes they can precisely replay the same game, every time, deterministically.

Well okay, as a first attempt at thinking this through, what if for some game of Terminal Phase I played we wrote down each keystroke I entered on my keyboard, on every tick of the game? Terminal Phase runs at 30 ticks per second. So okay, if you replay these, each one at 30 ticks per second, then yeah, you'd end up with the same gameplay every time.

It would be simple enough for me to encode these as a linked list (cons, cons, cons!) and hand them to you. You could descend all the way to the root of the list and start playing them back up (ie, play the list in reverse order) and you'd get the same result as I did. I could even stream new events to you by giving you new items to tack onto the front of the list, and you could "watch" a game I was playing live.
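In code, the replay idea is tiny. Here's a sketch, where (apply-input! game input) is a hypothetical procedure that feeds one tick's worth of input into the deterministic game loop:

;; The log is a cons-list with the newest event at the head, so
;; reverse it and feed each recorded input back in, oldest first.
(define (replay! game input-log)
  (for-each (lambda (input)
              (apply-input! game input))
            (reverse input-log)))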

So now imagine that you and I want to play Terminal Phase together now, over the network. Let's imagine there are two ships, and for simplicity, we're playing cooperatively. (The same ideas can be extended to competitive play, but for narrating how real-time games work it's easier to start with a cooperative assumption.)

We could start out by wiring things up on the network so that I am allowed to press certain keys for player 1 and you are allowed to press certain keys for player 2. (Now it's worth noting that a better way to do this doesn't involve keys on the keyboard but capability references, and really that's how we'd do things if we were to bring this multiplayer idea live, but I'm trying to provide a metaphor that's easy to think about without introducing the complicated sounding kinds of terms like "c-lists" and "vat turns" that we ocap people seem to like.) So, as a first attempt, maybe if we were playing on a local area network or something, we could synchronize at every game tick: I share my input with you and you share yours, and then and only then do both of our systems actually input them into that game-tick's inputs. We'll have achieved a kind of "convergence" as to the current game state on every tick. (EDIT: I wrote "a kind of consensus" instead of "a kind of convergence" originally, and that was an error, because it misleads on what consensus algorithms tend to do.)

Except this wouldn't work very well if you and I were living far away from each other and playing over the internet... the lag time for doing this for every game tick might slow the system to a crawl... our computers wouldn't get each others' inputs as fast as the game was moving along, and would have to pause until we received each others' moves.

So okay, here's what we'll do. Remember the time-travel GUI above? As you can see, we're effectively restoring from an old snapshot. Oh! So okay. We could save a snapshot of the game every second, and then both get each other our inputs to each other as fast as we can, but knowing it'll lag. So, without having seen your inputs yet, I could move my ship up and to the right and fire (and send that I did that to you). My game would be in a "dirty state"... I haven't actually seen what you've done yet. Now suddenly I get the last set of moves you did over the network... in the last five frames, you move down and to the left and fire. Now we've got each others' inputs... what our systems can do is secretly time travel behind the scenes to the last snapshot, then fast forward, replaying both of our inputs on each tick up until the latest state where we've both seen each others' moves (but we wouldn't show the fast forward process, we'd just show the result with the fast forward having been applied). This can happen fast enough that I might see your ship jump forward a little, and maybe your bullet will kill the enemy instead of mine and the scores shift so that you actually got some points that for a moment I thought I had, but this can all happen in realtime and we don't need to slow down the game at all to do it.

Again, all the above can be done, but with actual wiring of capabilities instead of the keystroke metaphor... and actually, the same set of ideas can be done with any kind of system, not just a game.

And oh hey, technically, technically, technically if we both hashed each of our previous messages in the linked list and signed each one, then this would qualify as a merkle tree and then this would also qualify as a blockchain... but wait, this doesn't have anything to do with cryptocurrencies! So is it really a blockchain?

"Blockchain" as synonym for "cryptocurrency" but this is wrong and don't do this one

By now you've probably gotten the sense that I really was annoyed with the first section of "blockchain" as a synonym for "decentralization" (especially because blockchains are decentralized centralization/convergence) and that is completely true. But even more annoying to me is the synonym of "blockchain" with "cryptocurrency".

"Cryptocurrency" means "cryptographically based currency" and it is NOT synonymous with blockchains. Digicash precedes blockchains by a dramatic amount, but it is a cryptocurrency. The "simple mint" type system also precedes blockchains and while it can be run on a blockchain, it can also run on a solo computer/machine.

But as we saw, we could perceive multiplayer Terminal Phase as technically, technically a blockchain, even though it has nothing to do with currencies whatsoever.

So again a blockchain is just a single, abstract, sequential machine, run by multiple parties. That's it. It's more general than cryptocurrencies, and it's not exclusive to implementing them either. One is a kind of programming-plus-cryptography-use-case (cryptocurrencies), the other one is a kind of abstracted machine (blockchains).

So please. They are frequently combined, but don't treat them as the same thing.

Blockchains as single abstract machines on a wider network

One of my favorite talks is Mark Miller's Programming Secure Smart Contracts talk. Admittedly, I like it partly because it well illustrates some of the low-level problems I've been working on, and that might not be as useful to everyone else. But it has this lovely diagram in it:

Machines / Vats / Ocaps / Erights layers of abstractions

This is better understood by watching the video, but the abstraction layers described here are basically as follows:

  • "Machines" are the lowest layer of abstraction on the network, but there a variety of kinds of machines. Public blockchains are one, quorum blockchains are another, solo computer machines yet another (and the simplest case, too). What's interesting then is that we can see public chains and quorums abstractly demonstrated as machines in and of themselves... even though they are run by many parties.

  • Vats are the next layer of abstraction, these are basically the "communicating event loops"... actors/objects live inside them, and more or less these things run sequentially.

  • Replace "JS ocaps" with "language ocaps" and you can see actors/objects in both Javascript and Spritely living here.

  • Finally, at the top are "erights" and "smart contracts", which feed into each other... "erights" are "exclusive electronic rights", and "smart contracts" are generally patterns of cooperation involving achieving mutual goals despite suspicion, generally involving the trading of these erights things (but not necessarily).

Okay, well cool! This finally explains the worldview I see blockchains on. And we can see a few curious things:

  • The "public chain" and "quorum" kinds of machines still boil down to a single, sequential abstract machine.

  • Object connections exist between the machines... ocap security. No matter whether it's run by a single computer or multiple.

  • Public blockchains, quorum blockchains, solo-computer machines all talk to each other, and communicate between object references on each other.

Blockchains are not magical things. They are abstracted machines on the network. Some of them have special rules that let whoever can prove they qualify for them access some well-known capabilities, but really they're just abstracted machines.

And here's an observation: you aren't ever going to move all computation to a single blockchain. Agoric's CEO, Dean Tribble, explained beautifully why on a recent podcast:

One of the problems with Ethereum is it is as tightly coupled as possible. The entire world is a single sequence of actions that runs on a computer with about the power of a cell phone. Now, that's obviously hugely valuable to be able to do commerce in a high-integrity fashion, even if you can only share a cell phone's worth of compute power with the entire rest of the world. But that's clearly gonna hit a brick wall. And we've done lots of large-scale distributed systems whether payments or cyberspace or coordination, and the fundamental model that covers all of those is islands of sequential programming in a sea of asynchronous communication. That is what the internet is about, that's what the interchain is about, that's what physics requires you to do if you want a system to scale.

Put this way, it should be obvious: are we going to replace the entire internet with something that has the power of a cell phone? To ask the question is to know the answer: of course not. Even when we do admit blockchain'y systems into our system, we're going to have to have many of them communicating with each other.

Blockchains are just machines that many people/agents run. That's it.

Some of these are encoded with some nice default programming to do some useful things, but all of them can be done in non-blockchain systems because communicating islands of sequential processes is the generalization. You might still want a blockchain, ie you might want multiple parties running one of those machines as a shared abstract machine, but how you configure that blockchain from there might depend on your trust and integrity requirements.

What do I think of blockchains?

I've covered a wide variety of perspectives of "what is a blockchain" in this article.

On the worse end of things are the parts involving hand-wavey confusion about decentralization, mistaken ideas of them being tied to cryptocurrencies, marketing hype, cultural assumptions, and some real, but not intrinsic, cultural problems.

In the middle, I am particularly keen on highlighting the similarity between the term "blockchain" and the term "roguelike", how both of them might boil down to some key ideas or not, but more importantly they're both a rough family of ideas that diverge from one highly influential source (Bitcoin and Rogue respectively). This is also the source of much of the "shouting past each other", because many people are referring to different components that they view as essential or inessential. Many of these pieces may be useful or harmful in isolation, in small amounts, in large amounts, but much of the arguing (and posturing) involves highlighting different things.

On the better end of things is a revelation, that blockchains are just another way of abstracting a computer so that multiple parties can run it. The particular decisions and use cases layered on top of this fundamental design are highly variant.

Having made the waters clear again, we could muddy them. A friend once tried to convince me that all computers are technically blockchains, that blockchains are the generalization of computing, and the case of a solo computer is merely one where a blockchain is run only by one party and no transaction history or old state is kept around. Maybe, but I don't think this is very useful. You can go in either direction, and I think the time travel and Terminal Phase section maybe makes that clear to me, but I'm not so sure how it lands with others I suppose. But a term tends to be useful in terms of what it introduces, and calling everything a blockchain seems to make the term even less useful than it already is. While a blockchain could be one or more parties running a sequential machine as the generalization, I suggest we stick to two or more.

Blockchains are not magic pixie dust, putting something on a blockchain does not make it work better or more decentralized... indeed, what a blockchain really does is converge (or re-centralize) a machine from a decentralized set of computers. And it always does so with some cost, some set of overhead... but what those costs and overheads are varies depending on what the configuration decisions are. Those decisions should always stem from some careful thinking about what those trust and integrity needs are... one of the more frustrating things about blockchains being a technology of great hype and low understanding is that such care is much less common than it should be.

Having a blockchain, as a convergent machine, can be useful. But how that abstracted convergent machine is arranged can diverge dramatically; if we aren't talking about the same choices, we might shout past each other. Still, it may be an unfair ask to request that those without a deep technical background go into technical specifics, and I recognize that; in a sense there can be something gained from speaking towards broad-sweeping, fuzzy sets and the patterns they seem to carry. A gut-sense assertion from a set of loosely observed behaviors can be a useful starting point. But to get at the root of what those gut senses actually map to, we will have to be specific, and we should encourage that specificity where we can (without being rude about it) and help others see those components as well.

But ultimately, as convergent machines, blockchains will not operate alone. I think the system that will hook them all together should be CapTP. But no matter the underlying protocol abstraction, blockchains are just abstract machines on the network.

Having finally disentangled what blockchains are, I think soon I would like to move onto what cryptocurrencies are. Knowing that they are not necessarily tied to blockchains opens us up to considering an ecosystem, even an interoperable and exchangeable one, of varying cryptographically based financial instruments, and the different roles and uses they might play. But that is another post of its own, for whenever I can get to it, I suppose.

ADDENDUM: After writing this post, I had several conversations with several blockchain-oriented people. Each of them seemed to agree that Bitcoin was roughly the prototypical "blockchain", but each of them also seemed to highlight different things they thought were "essential" to what a "blockchain" is: some kinds of consensus algorithms being better than others, the kinds of social arrangements that are enabled, whether transferable assets are encoded on the chain, etc. To start with, I feel like this does confirm some of the premise of this post: that Bitcoin is the starting point, but, like Rogue and "roguelikes", "blockchains" are an exploration space stemming from a particular influential technical piece.

However my friend Kate Sills (who also gave me a much better definition for "smart contracts", added above) highlighted something that I hadn't talked about much in my article so far, which I do agree deserves expansion. Kate said: "I do think there is something huge missing from your piece. Bitcoin is amazing because it aligns incentives among actors who otherwise have no goals in common."

I agree that there's something important here, and this definition of "blockchain" maybe does explain why, even though from a computer science perspective signed git trees do resemble blockchains, they don't seem to fit within the realm of what most people are thinking about... while git might be a tool used by several people with aligned incentives, it is not generally itself the layer of incentive-alignment.

24 April, 2021 08:30PM by Christopher Lemmer Webber

April 23, 2021

GNU Guix

Building derivations, how complicated can it be?

Derivations are key to Guix; they're the low-level build instructions used for things like packages, disk images, and most things that end up in the store.

Around a year ago, the established approach to build derivations across multiple machines was daemon offloading. This offloading approach is mostly static in terms of the machines involved and uses SSH to communicate and move things between machines.

The Guix Build Coordinator project set out to provide an alternative approach, both to explore what's possible and to provide a usable tool to address two specific use cases.

The first use case was building things (mostly packages) for the purpose of providing substitutes. At the time, the daemon offloading approach was used on ci.guix.gnu.org, which is the default source of substitutes. This approach was not scaling particularly well, so there was room for improvement.

The second use case was more aspirational: supporting various quality assurance tasks, like building packages changed by patches, regularly testing fixed-output derivations, or building the same derivations across different machines to test for hardware-specific differences.

While both these tasks have quite a lot in common, there are still quite a lot of differences; this in part led to a lot of flexibility in the design of the Guix Build Coordinator.

Architecture

Like offloading, the Guix Build Coordinator works in a centralised manner. There's one coordinator process which manages state, and agent processes run on the machines that perform builds. The coordinator plans which builds to allocate to which agents, and agents ask the coordinator for builds, which they then attempt.

Once agents complete a build, they send the log file and any outputs back to the coordinator. This is shown in the diagram below. Note that the Guix Build Coordinator doesn't actually take care of building the individual derivations; that's still left to the guix-daemons on the machines involved.

[Diagram: Guix Build Coordinator sequence diagram]

The builds to perform are worked through methodically; a build won't start unless all its inputs are available. This behaviour replicates what guix-daemon does, but across all the machines involved.

If agents can't set up to perform a build, they report this back to the coordinator, which may then perform other builds to produce the required inputs.

Currently HTTP is used when agents communicate with the coordinator, although additional approaches could be implemented in the future. Similarly, SQLite is used as the database, though there has been a plan from the start to support PostgreSQL; that's yet to be implemented.

Comparison to offloading

There's lots more to the internal workings of the Guix Build Coordinator, but how does this compare to the daemon offloading approach?

Starting from the outside and working in, the API for the Guix Build Coordinator is all asynchronous. When you submit a build, you get an ID back, which you can use to track the progress of that build. This is in contrast to the way the daemon is used, where you keep a connection established while builds are progressing.

Offloading is tightly integrated with the daemon, which can be useful, as offloading can transparently apply to anything that would otherwise be built locally. However, this can also be a limitation, since the daemon is one of the harder components to modify.

With offloading, guix-daemon reaches out to another machine, copies over all the inputs and the derivation, and then starts the build. Rather than doing this, the Guix Build Coordinator agent pulls in the inputs and derivation using substitutes.

This pull approach has a few advantages. Firstly, it removes the need to keep a large store on the machine running the coordinator; this large store requirement became a scalability problem for the offloading approach. Another advantage is that it makes deploying agents easier, as they don't need to be reachable from the coordinator over the network, which can be an issue with NATs or virtual machines.

When offloading builds, the outputs are copied back to the store on build success. The Guix Build Coordinator agent instead sends the outputs back to the coordinator as nar files, which the coordinator then processes to make substitutes available. This helps distribute the work of generating the nar files, which can be quite expensive.

These differences may be subtle, but the architecture makes a big difference: it's much easier to store and serve nars at scale if this doesn't require a large store managed by a single guix-daemon.

There are also quite a few things in common with the daemon offloading approach. Builds are still methodically performed across multiple machines, and load is taken into account when starting new builds.

A basic example

Getting the Guix Build Coordinator up and running does require some work; the following commands should get the coordinator and an agent running on a single machine. First, you start the coordinator.

  guix-build-coordinator

Then, in another terminal, you interact with the running coordinator to create an agent in its database.

  guix-build-coordinator agent new

Note the UUID of the generated agent.

  guix-build-coordinator agent <AGENT ID> password new

Note the generated password for the agent. With the UUID and password, the agent can then be started.

  guix-build-coordinator-agent --uuid=<AGENT ID> --password=<AGENT PASSWORD>

At this point, both processes should be running and the guix-build-coordinator should be logging requests from the agent.

In a third terminal, also at the root of the repository, generate a derivation, and then instruct the coordinator to have it built.

  guix build --no-grafts -d hello

Note the derivation that is output.

  guix-build-coordinator build <DERIVATION FILE>

This will return a randomly generated UUID that represents the build.

While running from the command line is useful for development and experimentation, there are services in Guix for running the coordinator and agents.

Applications of the Guix Build Coordinator

While I think the Guix Build Coordinator is a better choice than daemon offloading in some circumstances, it doesn't currently replace it.

At a high level, the Guix Build Coordinator is useful where there's a need to build derivations and do something with the outputs or build results, more than just having the outputs in the local store. This could be serving substitutes or testing derivations, for example.

At small scales, the additional complexity of the coordinator is probably unnecessary, but when it's useful to use multiple machines, either because of the resources that provides or because of a more diverse range of hardware, it makes much more sense to use the Guix Build Coordinator to coordinate what's going on.

Looking forward

The Guix Build Coordinator isn't just an alternative to daemon offloading, it's more a general toolkit for coordinating the building of derivations.

Much of the functionality in the Guix Build Coordinator happens in hooks: bits of code that run when certain events happen, like a build being submitted, or a build finishing successfully. It's these hooks that are responsible for doing things like processing nars to be served as substitutes, or submitting retries for a failed build.

The hook that automatically retries building particular derivations is particularly useful when trying to provide substitutes, where you want to lessen the impact of random failures, or for quality assurance purposes, where you want more data to better identify problems.

There are also more features such as build and agent tags and build priorities that can be really useful in some scenarios.

My hope is that the Guix Build Coordinator will enable a better substitute experience for Guix users, as well as enabling a whole new range of quality assurance tasks. It's already possible to see some impact from the Guix Build Coordinator, but there's still much work to do!

Additional material

About GNU Guix

GNU Guix is a transactional package manager and an advanced distribution of the GNU system that respects user freedom. Guix can be used on top of any system running the Hurd or the Linux kernel, or it can be used as a standalone operating system distribution for i686, x86_64, ARMv7, and AArch64 machines.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. When used as a standalone GNU/Linux distribution, Guix offers a declarative, stateless approach to operating system configuration management. Guix is highly customizable and hackable through Guile programming interfaces and extensions to the Scheme language.

23 April, 2021 08:00PM by Christopher Baines

April 22, 2021

parallel @ Savannah

GNU Parallel 20210422 ('Ever Given') released [stable]

GNU Parallel 20210422 ('Ever Given') [stable] has been released. It is available for download at: lbry://@GnuParallel:4

No new functionality was introduced so this is a good candidate for a stable release.

An easy way to support GNU Parallel is to tip on LBRY.

Please help spread GNU Parallel by making a testimonial video like Juan Sierra Pons: http://www.elsotanillo.net/wp-content/uploads/GnuParallel_JuanSierraPons.mp4

It does not have to be as detailed as Juan's. It is perfectly fine if you just say your name, and what field you are using GNU Parallel for.

Quote of the month:

  GNU Parallel is your friend.
  Can shorten that time by X cores.
    -- iRODS @irods@twitter

New in this release:

  • Bug fixes and man page updates.

News about GNU Parallel:

Get the book: GNU Parallel 2018 http://www.lulu.com/shop/ole-tange/gnu-parallel-2018/paperback/product-23558902.html

GNU Parallel - For people who live life in the parallel lane.

If you like GNU Parallel record a video testimonial: Say who you are, what you use GNU Parallel for, how it helps you, and what you like most about it. Include a command that uses GNU Parallel if you feel like it.

About GNU Parallel

GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.
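
To make the loop-replacement claim above concrete, here is a small sketch (the file names are placeholders); the -k flag keeps the output in the same order as the input:

  # sequential shell loop
  for f in *.log; do wc -l "$f"; done

  # the same jobs run in parallel; -k keeps output in input order
  parallel -k wc -l ::: *.log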

For example you can run this to convert all jpeg files into png and gif files and have a progress bar:

  parallel --bar convert {1} {1.}.{2} ::: *.jpg ::: png gif

Or you can generate big, medium, and small thumbnails of all jpeg files in sub dirs:

  find . -name '*.jpg' |
    parallel convert -geometry {2} {1} {1//}/thumb{2}_{1/} :::: - ::: 50 100 200

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with:

    $ (wget -O - pi.dk/3 || lynx -source pi.dk/3 || curl pi.dk/3/ || \
       fetch -o - http://pi.dk/3 ) > install.sh
    $ sha1sum install.sh | grep c82233e7da3166308632ac8c34f850c0
    12345678 c82233e7 da316630 8632ac8c 34f850c0
    $ md5sum install.sh | grep ae3d7aac5e15cf3dfc87046cfc5918d2
    ae3d7aac 5e15cf3d fc87046c fc5918d2
    $ sha512sum install.sh | grep dfc00d823137271a6d96225cea9e89f533ff6c81f
    9c5198d5 31a3b755 b7910ece 3a42d206 c804694d fc00d823 137271a6 d96225ce
    a9e89f53 3ff6c81f f52b298b ef9fb613 2d3f9ccd 0e2c7bd3 c35978b5 79acb5ca
    $ bash install.sh

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your command line will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2018): GNU Parallel 2018, March 2018, https://doi.org/10.5281/zenodo.1146014.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/ Identi.ca/Google+/Twitter/Facebook/Linkedin/mailing lists
  • Get the merchandise https://gnuparallel.threadless.com/designs/gnu-parallel
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference

If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)

If GNU Parallel saves you money:

  • (Have your company) donate to FSF https://my.fsf.org/donate/

About GNU SQL

GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.
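
As an illustrative sketch (the credentials, host, and database names are placeholders), a query can be passed after the DBURL, and leaving the commands off opens the interactive client:

  sql mysql://user:pass@db.example.com/mydb "SELECT * FROM users;"

  # no commands given, so this drops into the database's interactive shell
  sql mysql://user:pass@db.example.com/mydb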

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.

About GNU Niceload

GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.
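
For example (a minimal sketch; the file name is a placeholder), the following runs gzip but lets niceload suspend it whenever the system load rises above the limit:

  niceload gzip bigfile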

22 April, 2021 04:24PM by Ole Tange

April 21, 2021

www @ Savannah

A Student Manages to Graduate Using Exclusively Free Software

A student of Computer Science in Poland fights back against proprietary software at his university and manages to graduate using only free software.

How I Fought To Graduate Without Using Nonfree Software

by Wojciech Kosior

As a university student, I have struggled during the pandemic like everyone else. Many have experienced deaths in their families, or have lost their jobs. While studying informatics at the AGH University of Science and Technology in Kraków, Poland, I have been fighting another, seemingly less important battle, but one I passionately feel is vital to our future freedoms. I describe my fight below, so as to encourage and inspire others.

Gentle persuasion

I completed the fifth semester of my studies without serious problems. At the beginning of the sixth semester, the pandemic began. Universities closed their physical facilities, so most students returned home and professors started organizing remote classes. Unsurprisingly, they all chose proprietary platforms. Cisco Webex, Microsoft Teams, ClickMeeting, and Skype were popular choices. I could not find a free software client for any of those. Also, not realizing the problem of nonfree JavaScript, professors expected everyone to be able to easily join the video sessions using any web interface.

How did I handle these requirements? I would very politely email every single professor who announced something would be done using a problematic platform, explaining the lack of a suitable free software client. I often included a link to a popular online explanation of the issues of software freedom and universities, the “Costumed Heroes” video created by the Free Software Foundation (FSF), along with some other links to free videoconferencing programs like Jami and Jitsi Meet.

Although there are many documented surveillance and security issues on these centralized platforms, I explained that, for me, software freedom was the troubling factor. Replies urging me to “run the program in a virtual machine” or saying that I “don't need the source code to use the service” made it clear that some of my professors didn't understand, or understood only part of, the issues. Had I been studying anything other than informatics, I suspect the fraction of those who understood the problem would be far smaller.

Read more...

21 April, 2021 03:16PM by Dora Scilipoti

April 20, 2021

Riccardo Mottola

ArcticFox to browse on an iBook

I did quite some work to have "--enable-altivec" work in ArcticFox. The Firefox AltiVec test did not work because it relies on GCC rejecting it if it's not supported by the CPU.

Most of the work was getting the 32-bit AltiVec code to actually work during a 64-bit compile on a PPC970. But what about a non-AltiVec build? With some #ifdef's imported from TenFourFox, I was able to produce, while compiling on a G4, a usable G3-optimized binary for Linux.

Result? A quite current browser for a 21-year-old vintage computer! Fun! Not very fast, and the beautiful tangerine clamshell has only 160MB of RAM, but still, one can browse Wikipedia!

20 April, 2021 09:39PM by Riccardo (noreply@blogger.com)

April 18, 2021

poke @ Savannah

GNU poke 1.2 released

I am happy to announce a new release of GNU poke, version 1.2.

This is a bug fix release in the poke 1.x series, and is the
result of all the user feedback we have received since we did
the last release.  Our big thanks to everyone who provided
feedback :)

See the file NEWS in the released tarball for a detailed list
of changes in this release.

The tarball poke-1.2.tar.gz is now available at
https://ftp.gnu.org/gnu/poke/poke-1.2.tar.gz.

  GNU poke (http://www.jemarch.net/poke) is an interactive,
  extensible editor for binary data.  Not limited to editing basic
  entities such as bits and bytes, it provides a full-fledged
  procedural, interactive programming language designed to describe
  data structures and to operate on them.

This release is the product of a month of work resulting in 37
commits, made by 5 contributors.

Thanks to the people who contributed with code and/or
documentation to this release.  In no significant order
they are:

   Mohammad-Reza Nabipoor
   David Faust
   Egeyar Bagcioglu
   Konstantinos Chasialis

Thank you all!  It is a real pleasure to hack with you.

And this is all for now.
Happy poking!

--
Jose E. Marchesi
Frankfurt am Main
18 April 2021

18 April, 2021 05:21PM by Jose E. Marchesi

April 15, 2021

remotecontrol @ Savannah

How Amazon Strong-Arms Partners Using Its Power Across Multiple Businesses

https://www.wsj.com/articles/amazon-strong-arms-partners-across-multiple-businesses-11618410439

"Amazon.com Inc. last year told smart-thermostat maker Ecobee it had to give the tech giant data from its voice-enabled devices even when customers weren’t using them."

"Amazon responded that if Ecobee didn’t serve up its data, the refusal could affect Ecobee’s ability to sell on Amazon’s retail platform..."

15 April, 2021 12:17PM by Stephen H. Dawson DSL

April 12, 2021

GNU Guix

New Supported Platform: powerpc64le-linux

It is a pleasure to announce that support for powerpc64le-linux (PowerISA v.2.07 and later) has now been merged to the master branch of GNU Guix!

This means that GNU Guix can be used immediately on this platform from a Git checkout. Starting with the next release (Guix v1.2.1), you will also be able to download a copy of Guix pre-built for powerpc64le-linux. Regardless of how you get it, you can run the new powerpc64le-linux port of GNU Guix on top of any existing powerpc64le GNU/Linux distribution.

This new platform is available as a "technology preview". This means that although it is supported, substitutes are not yet available from the build farm, and some packages may fail to build. Although powerpc64le-linux support is nascent, the Guix community is actively working on improving it, and this is a great time to get involved!

Why Is This Important?

This is important because it means that GNU Guix now works on the Talos II, Talos II Lite, and Blackbird mainboards sold by Raptor Computing Systems. This modern, performant hardware uses IBM POWER9 processors, and it is designed to respect your freedom. The Talos II and Talos II Lite have recently received Respects Your Freedom (RYF) certification from the FSF, and Raptor Computing Systems is currently pursuing RYF certification for the more affordable Blackbird, too. All of this hardware can run without any non-free code, even the bootloader and firmware. In other words, this is a freedom-friendly hardware platform that aligns well with GNU Guix's commitment to software freedom.

How is this any different from existing RYF hardware, you might ask? One reason is performance. The existing RYF laptops, mainboards, and workstations can only really be used with Intel Core Duo or AMD Opteron processors. Those processors were released over 15 years ago. Since then, processor performance has increased drastically. People should not have to choose between performance and freedom, but for many years that is exactly what we were forced to do. However, the POWER9 machines sold by Raptor Computing Systems have changed this: the free software community now has an RYF-certified option that can compete with the performance of modern Intel and AMD systems.

Although the performance of POWER9 processors is competitive with modern Intel and AMD processors, the real advantage of the Talos II, Talos II Lite, and Blackbird is that they were designed from the start to respect your freedom. Modern processors from both Intel and AMD include back doors over which you are given no control. Even though the back doors can be removed with significant effort on older hardware in some cases, this is an obstacle that nobody should have to overcome just to control their own computer. Many of the existing RYF-certified options (e.g., the venerable Lenovo x200) use hardware that can only be considered RYF-certified after someone has gone through the extra effort of removing those back doors. No such obstacles exist when using the Talos II, Talos II Lite, or Blackbird. In fact, although Intel and AMD both go out of their way to keep you from understanding what is going on in your own computer, Raptor Computing Systems releases all of the software and firmware used in their boards as free software. They even include circuit diagrams when they ship you the machine!

Compared to the existing options, the Talos II, Talos II Lite, and Blackbird are a breath of fresh air that the free software community really deserves. Raptor Computing Systems' commitment to software freedom and owner control is an inspiring reminder that it is possible to ship a great product while still respecting the freedom of your customers. And going forward, the future looks bright for the open, royalty-free Power ISA stewarded by the OpenPOWER Foundation, which is now a Linux Foundation project (see also: the same announcement from the OpenPOWER Foundation).

In the rest of this blog post, we will discuss the steps we took to port Guix to powerpc64le-linux, the issues we encountered, and the steps we can take going forward to further solidify support for this exciting new platform.

Bootstrapping powerpc64le-linux: A Journey

To build software, you need software. How can one port Guix to a platform before support for that platform exists? This is a bootstrapping problem.

In Guix, all software for a given platform (e.g., powerpc64le-linux) is built starting from a small set of "bootstrap binaries". These are binaries of Guile, GCC, Binutils, libc, and a few other packages, pre-built for the relevant platform. It is intended that the bootstrap binaries are the only pieces of software in the entire package collection that Guix cannot build from source. In practice, additional bootstrap roots are possible, but introducing them in Guix is highly discouraged, and our community actively works to reduce our overall bootstrap footprint. There is one set of bootstrap binaries for each platform that Guix supports.

This means that to port Guix to a new platform, you must first build the bootstrap binaries for that platform. In theory, you can do this in many ways. For example, you might try to manually compile them on an existing system. However, Guix has package definitions that you can use to build them - using Guix, of course!

Commonly, the first step in porting Guix to a new platform is to use Guix to cross-compile the bootstrap binaries for that new platform from a platform on which Guix is already supported. This can be done by running a command like the following on a system where Guix is already installed:

guix build --target=powerpc64le-linux-gnu bootstrap-tarballs

This is the route that we took when building the powerpc64le-linux bootstrap binaries, as described in commit 8a1118a. You might wonder why the target above is "powerpc64le-linux-gnu" even though the new Guix platform is called "powerpc64le-linux". This is because "powerpc64le-linux-gnu" is a GNU triplet identifying the new platform, but "powerpc64le-linux" is the name of a "system" (i.e., a platform) in Guix. Guix contains code that converts between the two as needed (see nix-system->gnu-triplet and gnu-triplet->nix-system in guix/utils.scm). When cross-compiling, you only need to specify the GNU triplet.
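
As a quick sketch of that conversion (assuming nix-system->gnu-triplet is exported by the (guix utils) module, as its location suggests), it can be checked from a Git checkout:

  # expected to print the GNU triplet: powerpc64le-linux-gnu
  ./pre-inst-env guile -c '(use-modules (guix utils)) (display (nix-system->gnu-triplet "powerpc64le-linux"))'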

Note that before you can even do this, you must first update the glibc-dynamic-linker and system->linux-architecture procedures in Guix's code, as described in Porting. In addition, the versions of packages in Guix that make up the GNU toolchain (gcc, glibc, etc.) must already support the target platform. This pre-existing toolchain support needs to be good enough so that Guix can (1) build, on some already-supported platform, a cross-compilation toolchain for the target platform, (2) use, on the already-supported platform, the cross-compilation toolchain to cross-compile the bootstrap binaries for the target platform, and (3) use, on the target platform, the bootstrap binaries to natively build the rest of the Guix package collection. The above guix build command takes care of steps (1) and (2) automatically.

Step (3) is a little more involved. Once the bootstrap binaries for the target platform have been built, they must be published online for anyone to download. After that, Guix's code must be updated so that (a) it recognizes the "system" name (e.g., "powerpc64le-linux") that will be used to identify the new platform and (b) it fetches the new platform's bootstrap binaries from the right location. After all that is done, you just have to try building things and see what breaks. For example, you can run ./pre-inst-env guix build hello from your Git checkout to try building GNU Hello.

The actual bootstrap binaries for powerpc64le-linux are stored on the alpha.gnu.org FTP server. Chris Marusich built these bootstrap binaries in an x86_64-linux Guix System VM which was running on hardware owned by Léo Le Bouter. Chris then signed the binaries and provided them to Ludovic Courtès, who in turn verified their authenticity, signed them, and uploaded them to alpha.gnu.org. After that, we updated the code to use the newly published bootstrap binaries in commit 8a1118a. Once all that was done, we could begin bootstrapping the rest of the system - or trying to, at least.

There were many stumbling blocks. For example, to resolve some test failures, we had to update the code in Guix that enables it to make certain syscalls from scheme. In another example, we had to patch GCC so that it looks for the 64-bit libraries in /lib, rather than /lib64, since that is where Guix puts its 64-bit libraries by convention. In addition, some packages required in order to build Guix failed to build, so we had to debug those build failures, too.

For a list of all the changes, see the patch series or the actual commits, which are:

$ git log --oneline --no-decorate 8a1118a96c9ae128302c3d435ae77cb3dd693aea^..65c46e79e0495fe4d32f6f2725d7233fff10fd70
65c46e79e0 gnu: sed: Make it build on SELinux-enabled kernels.
93f21e1a35 utils: Fix target-64bit? on powerpc64le-linux.
8d9aece8c4 ci: %cross-targets: Add powerpc64le-linux-gnu.
c29bfbfc78 syscalls: Fix RNDADDTOENTCNT on powerpc64le-linux.
b57de27d03 syscalls: Fix clone on powerpc64le-linux.
a16eb6c5f9 Add powerpc64le-linux as a supported Guix architecture.
b50f426803 gnu: libelf: Fix compilation for powerpc64le-linux.
1a0f4013d3 gnu: texlive-latex-base: Fix compilation on powerpc64le*.
e9938dc8f0 gnu: texlive-bin: Fix compilation on powerpc64le*.
69b3907adf gnu: guile-avahi: Fix compilation on powerpc64le-linux.
4cc2d2aa59 gnu: bdb-4.8: Fix configure on powerpc64le-linux.
be4b1cf53b gnu: binutils-final: Support more Power architectures.
060478c32c gnu: binutils-final: Provide bash for binary on powerpc-linux.
b2135b5d57 gnu: gcc-boot0: Enable 128-bit long double for POWER9.
6e98e9ca92 gnu: glibc: Fix ldd path on powerpc*.
cac88b28b8 gnu: gcc-4.7: On powerpc64le, fix /lib64 references.
fc7cf0c1ec utils: Add target-powerpc? procedure.
8a1118a96c gnu: bootstrap: Add support for powerpc64le-linux.

In the end, through the combined efforts of multiple people, we slowly worked through the issues until we reached a point where we could do all of the following things successfully:

  • Build Guix manually on a Debian GNU/Linux ppc64el machine (this is Debian's name for a system using the powerpc64le-linux-gnu triplet), and verify that its make check tests passed.
  • Build GNU Hello using Guix and run it.
  • Run guix pull to build and install the most recent version of Guix, with powerpc64le-linux support.
  • Build a release binary tarball for powerpc64le-linux via: make guix-binary.powerpc64le-linux.tar.xz
  • Use that binary to install a version of Guix that could build/run GNU Hello and run guix pull successfully.

This was an exciting moment! But there was still more work to be done.

Originally, we did this work on the wip-ppc64le branch, with the intent of merging it into core-updates. By convention, the "core-updates" branch in Guix is where changes are made if they cause too many rebuilds. Since we were updating package definitions so deep in the dependency graph of the package collection, we assumed it wouldn't be possible to avoid rebuilding the world. For this reason, we had based the wip-ppc64le branch on core-updates.

However, Efraim Flashner proved us wrong! He created a separate branch, wip-ppc64le-for-master, where he adjusted some of the wip-ppc64le commits to avoid rebuilding the world on other platforms. Thanks to his work, we were able to merge the changes directly to master! This meant that we would be able to include it in the next release (Guix v.1.2.1).

In short, the initial porting work is done, and it is now possible for anyone to easily try out Guix on this new platform. Because guix pull works, too, it is also easy to iterate on what we have and work towards improving support for the platform. It took a lot of cooperation and effort to get this far, but there are multiple people actively contributing to this port in the Guix community who want to see it succeed. We hope you will join us in exploring the limits of this exciting new freedom-friendly platform!

Other Porting Challenges

Very early in the porting process, there were some other problems that stymied our work.

First, we actually thought we would try to port to powerpc64-linux (big-endian). However, this did not prove to be any easier than the little-endian port. In addition, other distributions (e.g., Debian and Fedora) have recently dropped their big-endian powerpc64 ports, so the little-endian variant is more likely to be tested and supported in the community. For these reasons, we decided to focus our efforts on the little-endian variant, and so far we haven't looked back.

In both the big-endian and little-endian case, we were saddened to discover that the bootstrap binaries are not entirely reproducible. This fact is documented in bug 41669, along with our extensive investigations.

In short, if you build the bootstrap binaries on two separate machines without using any substitutes, you will find that the derivation which cross-compiles %gcc-static (the bootstrap GCC, version 5.5.0) produces different output on the two systems. However, if you build %gcc-static twice on the same system, it builds reproducibly. This suggests that something in the transitive closure of inputs of %gcc-static is perhaps contributing to its non-reproducibility. There is an interesting graph toward the end of the bug report, shown below:

[Graph: derivations that produce differing outputs across two Guix System machines]

This graph shows the derivations that produce differing outputs across two Guix System machines, when everything is built without substitutes. It starts from the derivation that cross-compiles %gcc-static for powerpc64-linux-gnu (from x86_64-linux) using Guix at commit 1ced8379c7641788fa607b19b7a66d18f045362b. Then, it walks the graph of derivation inputs, recording only those derivations which produce differing output on the two different machines. If the non-reproducibility (across systems) of %gcc-static is caused by a non-reproducible input, then it is probably caused by one or more of the derivations shown in this graph.

At some point, you have to cut your losses and move on. After months of investigation without resolving the reproducibility issue, we finally decided to move forward with the bootstrap binaries produced earlier. If necessary, we can always go back and try to fix this issue. However, it seemed more important to get started with the bootstrapping work.

Anyone who is interested in solving this problem is welcome to comment on the bug report and help us to figure out the mystery. We are very interested in solving it, but at the moment we are more focused on building the rest of the Guix package collection on the powerpc64le-linux platform using the existing bootstrap binaries.

Next Steps

It is now possible to install Guix on a powerpc64le-linux system and use it to build some useful software - in particular, Guix itself. So Guix is now "self-hosted" on this platform, which gives us a comfortable place to begin further work.

The following tasks still need to be done. Anyone can help, so please get in touch if you want to contribute!

12 April, 2021 12:00AM by Chris Marusich and Léo Le Bouter

April 11, 2021

Andy Wingo

guile's reader, in guile

Good evening! A brief(ish?) note today about some Guile nargery.

the arc of history

Like many language implementations that started life when you could turn on the radio and expect to hear Def Leppard, Guile has a bottom half and a top half. The bottom half is written in C and exposes a shared library and an executable, and the top half is written in the language itself (Scheme, in the case of Guile) and somehow loaded by the C code when the language implementation starts.

Since 2010 or so we have been working at replacing bits written in C with bits written in Scheme. Last week's missive was about replacing the implementation of dynamic-link from using the libltdl library to using Scheme on top of a low-level dlopen wrapper. I've written about rewriting eval in Scheme, and more recently about how the road to getting the performance of C implementations in Scheme has been sometimes long.

These rewrites have a quixotic aspect to them. I feel something in my gut about rightness and wrongness and I know at a base level that moving from C to Scheme is the right thing. Much of it is completely irrational and can be out of place in a lot of contexts -- like if you have a task to get done for a customer, you need to sit and think about minimal steps from here to the goal and the gut doesn't have much of a role to play in how you get there. But it's nice to have a project where you can do a thing in the way you'd like, and if it takes 10 years, that's fine.

But besides the ineffable motivations, there are concrete advantages to rewriting something in Scheme. I find Scheme code to be more maintainable, yes, and more secure relative to the common pitfalls of C, obviously. It decreases the amount of work I will have when one day I rewrite Guile's garbage collector. But also, Scheme code gets things that C can't have: tail calls, resumable delimited continuations, run-time instrumentation, and so on.

Taking delimited continuations as an example, five years ago or so I wrote a lightweight concurrency facility for Guile, modelled on Parallel Concurrent ML. It lets millions of fibers exist on a system. When a fiber would need to block on an I/O operation (read or write), instead it suspends its continuation, and arranges to restart it when the operation becomes possible.
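
As a rough sketch of the flavor of that API (assuming the separately distributed Fibers library is installed; run-fibers, spawn-fiber, and the #:drain? option are as its documentation describes them):

  # #:drain? makes run-fibers wait for all fibers before returning
  guile -c '(use-modules (fibers))
            (run-fibers
             (lambda ()
               (spawn-fiber (lambda () (display "hello from a fiber\n"))))
             #:drain? #t)'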

A lot had to change in Guile for this to become a reality. Firstly, delimited continuations themselves. Later, a complete rewrite of the top half of the ports facility in Scheme, to allow port operations to suspend and resume. Many of the barriers to resumable fibers were removed, but the Fibers manual still names quite a few.

Scheme read, in Scheme

Which brings us to today's note: I just rewrote Guile's reader in Scheme too! The reader is the bit that takes a stream of characters and parses it into S-expressions. It was in C, and now is in Scheme.

One of the primary motivators for this was to allow read to be suspendable. With this change, read-eval-print loops are now implementable on fibers.

Another motivation was to finally fix a bug in which Guile couldn't record source locations for some kinds of datums. It used to be that Guile would use a weak-key hash table to associate datums returned from read with source locations. But this only works for fresh values, not for immediate values like small integers or characters, nor does it work for globally unique non-immediates like keywords and symbols. So for these, we just wouldn't have any source locations.

A robust solution to that problem is to return annotated objects rather than using a side table. Since Scheme's macro expander is already set to work with annotated objects (syntax objects), a new read-syntax interface would do us a treat.
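
For instance (a sketch, assuming a Guile recent enough to provide read-syntax):

  # read returns a bare datum; read-syntax returns an annotated syntax object
  guile -c '(write (read-syntax (open-input-string "(+ 1 2)")))'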

With read in C, this was hard to do. But with read in Scheme, it was no problem to implement. Adapting the expander to expect source locations inside syntax objects was a bit fiddly, though, and the resulting increase in source location information makes the output files bigger by a few percent -- due somewhat to the increased size of the .debug_lines DWARF data, but also due to serialized source locations for syntax objects in macros.

Speed-wise, switching to read in Scheme is a regression, currently. The old reader could parse around 15 or 16 megabytes per second when recording source locations on this laptop, or around 22 or 23 MB/s with source locations off. The new one parses more like 10.5 MB/s, or 13.5 MB/s with positions off, when in the old mode where it uses a weak-key side table to record source locations. The new read-syntax runs at around 12 MB/s. We'll be noodling at these in the coming months, but unlike when the original reader was written, at least now the reader is mainly used only at compile time. (It still has a role when reading s-expressions as data, so there is still a reason to make it fast.)

As is the case with eval, we still have a C version of the reader available for bootstrapping purposes, before the Scheme version is loaded. Happily, with this rewrite I was able to remove all of the cruft from the C reader related to non-default lexical syntax, which simplifies maintenance going forward.

An interesting aspect of attempting to make a bug-for-bug rewrite is that you find bugs and unexpected behavior. For example, it turns out that since the dawn of time, Guile always read #t and #f without requiring a terminating delimiter, so reading "(#t1)" would result in the list (#t 1). Weird, right? Weirder still, when the #true and #false aliases were added to the language, Guile decided to support them by default, but in an oddly backwards-compatible way... so "(#false1)" reads as (#f 1) but "(#falsa1)" reads as (#f alsa1). Quite a few more things like that.
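
These quirks are easy to poke at from a shell, assuming a Guile whose reader still has the historical behavior described above:

  guile -c '(write (read (open-input-string "(#t1)")))'      # prints (#t 1)
  guile -c '(write (read (open-input-string "(#falsa1)")))'  # prints (#f alsa1)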

All in all it would seem to be a successful rewrite, introducing no new behavior, even producing the same errors. However, this is not the case for backtraces, which can expose the guts of read in cases where that previously wouldn't happen because the C stack was opaque to Scheme. Probably we will simply need to add more sensible error handling around callers to read, as a backtrace isn't a good user-facing error anyway.

OK enough rambling for this evening. Happy hacking to all and to all a good night!

11 April, 2021 07:51PM by Andy Wingo

April 08, 2021

Andy Wingo

sign of the times

Hello all! There is a mounting backlog of things that landed in Guile recently and to avoid having to eat the whole plate in one bite, I'm going to try to send some shorter missives over the next weeks.

Today's is about a silly thing, dynamic-link. This interface is dlopen, but "portable". See, back in the day -- like, 1998 -- there were lots of kinds of systems and how to make and load a shared library portably was hard. You'd have people with AIX and Solaris and all kinds of weird compilers and linkers filing bugs on your project if you hard-coded a GNU toolchain invocation when creating loadable extensions, or hard-coded dlopen or similar to use them.

Libtool provided a solution to create portable loadable libraries, which involved installing .la files alongside the .so files. You could use libtool to link them to a library or an executable, or you could load them at run-time via the libtool-provided libltdl library.

But, the .la files were a second source of truth, and thus a source of bugs. If a .la file is present, so is an .so file, and you could always just use the .so file directly. For linking against an installed shared library on modern toolchains, the .la files are strictly redundant. Therefore, all GNU/Linux distributions just delete installed .la files -- Fedora, Debian, and even Guix do so.

Fast-forward to today: there has been a winnowing of platforms, and a widening of the GNU toolchain (in which I include LLVM as well as it has a mostly-compatible interface). The only remaining toolchain flavors are GNU and Windows, from the point of view of creating loadable shared libraries. Whether you use libtool or not to create shared libraries, the result can be loaded either way. And from the user side, dlopen is the universally supported interface, outside of Windows; even Mac OS fixed their implementation a few years back.

So in Guile we have been in an unstable equilibrium: creating shared libraries by including a probably-useless libtool into the toolchain, and loading them by using a probably-useless libtool-provided libltdl.

But the use of libltdl has not been without its costs. Because libltdl intends to abstract over different platforms, it encourages you to leave off the extension when loading a library, instead promising to try a platform-specific set such as .so, .dll, .dylib etc as appropriate. In practice the abstraction layer was under-maintained and we always had some problems on Mac OS, for example.

Worse, as ltdl would search through the path for candidates, it would only report the last error it saw from the underlying dlopen interface. It was almost always the case that if A and B were in the search path, and A/foo.so failed to load because of a missing dependency, the error you would get as a user would instead be "file not found", because ltdl swallowed the first error and kept trucking to try to load B/foo.so which didn't exist.

In summary, this is a case where the benefits of an abstraction layer decline over time. For a few years now, libltdl hasn't been paying for itself. Libtool is dead, for all intents and purposes (last release in 2015); best to make plans to migrate away, somehow.

In the case of the dlopen replacement, in Guile we ended up rewriting the functionality in Scheme. The underlying facility is now just plain dlopen, for which we shim a version of dlopen on Windows, inspired by the implementation in cygwin. There are still platform-specific library extensions, but that is handled easily on the Scheme layer.

Looking forward, I think it's probably time to replace Guile's use of libtool to create its libraries and executables. I loathe the fact that libtool puts shell scripts in the place of executables in build directories and stashes the actual executables elsewhere -- like, visceral revulsion. There is no need for that nowadays. Not sure what to replace it with, nor on what timeline.

And what about autotools? That, my friends, would be a whole nother blog post. Until then, & probably sooner, happy hacking!

08 April, 2021 07:09PM by Andy Wingo

April 06, 2021

poke @ Savannah

[VIDEO] Terminal Hyperlinks in GNU poke

Terminal Hyperlinks are a new way for programs to print click-able text in terminals and terminal emulators.  Along with the app:// protocol, the hyperlinks can be used to greatly enhance the experience of using CLI (command-line interface) programs.  We are making extensive usage of this novel feature in GNU poke and we are very happy with the results.

This short video introduces the terminal hyperlinks, the app protocol and shows a few examples on how poke uses them.

https://www.youtube.com/watch?v=9Eg9Zn8AtY8

06 April, 2021 05:58PM by Jose E. Marchesi

libredwg @ Savannah

libredwg-0.12.4 released

Fixed many more minor fuzzing errors.
See https://www.gnu.org/software/libredwg/ and https://github.com/LibreDWG/libredwg/blob/0.12.4/NEWS

Here are the compressed sources:
http://ftp.gnu.org/gnu/libredwg/libredwg-0.12.4.tar.gz (17.4MB)
http://ftp.gnu.org/gnu/libredwg/libredwg-0.12.4.tar.xz (9MB)

Here are the GPG detached signatures[*]:
http://ftp.gnu.org/gnu/libredwg/libredwg-0.12.4.tar.gz.sig
http://ftp.gnu.org/gnu/libredwg/libredwg-0.12.4.tar.xz.sig

Use a mirror for higher download bandwidth:
https://www.gnu.org/order/ftp.html

Here are more binaries:
https://github.com/LibreDWG/libredwg/releases/tag/0.12.4

Here are the SHA256 checksums:

```
081e9a70be529542b905b04be73e3e7590d60b1e976c0227f47004f3373ed9b1  libredwg-0.12.4.tar.gz
918857f119c34d9bef17321b646c4ba0fbfaa93dcaced403bae1933e1d9a6517  libredwg-0.12.4.tar.xz
```

[*] Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact. First, be sure to download both the .sig file
and the corresponding tarball. Then, run a command like this:

gpg --verify libredwg-0.12.4.tar.gz.sig

If that command fails because you don't have the required public key,
then run this command to import it:

gpg --keyserver keys.gnupg.net --recv-keys B4F63339E65D6414

and rerun the gpg --verify command.

06 April, 2021 12:40PM by Reini Urban

April 05, 2021

Parabola GNU/Linux-libre

i686 desktop users should refrain from upgrading

Users of GTK-based desktops (LXDE, MATE, possibly others) on i686 systems should refrain from upgrading for some time, or else the desktop may not start properly. This bug does not affect x86_64.

05 April, 2021 11:22AM by bill auger

Sylvain Beucler

planet.gnu.org is looking for a new host and maintainer

Around 3 years ago I revamped planet.gnu.org and hosted it myself, as the previous host was defunct.

I won't have the energy to host it for much longer, so planet.gnu.org is now looking for a new host and maintainer.
In any case I'll shut down the service when I upgrade to Debian 11 "bullseye" in a few months.

Everything needed to run the service is documented and stored in the infra and config repositories; there is no private data. A previous blog post discusses upcoming tasks. The planet@gnu.org contact point is now managed by GNU's Mailman instance. The DNS alias is managed by the FSF sysadmins.

05 April, 2021 10:48AM

April 03, 2021

GNUnet News

GNUnet 0.14.1

Continuing to "release early / release often", we present GNUnet 0.14.1. This is a bugfix release for gnunet 0.14.0.

Download links

The GPG key used to sign is: 3D11063C10F98D14BD24D1470B0998EF86F59B6A

Note that due to mirror synchronization, not all links may be functional early after the release. For direct access try http://ftp.gnu.org/gnu/gnunet/

Noteworthy changes in 0.14.1 (since 0.14.0)

  • TNG: Various improvements to communicators. #6361, #5550
  • GNS: Use autogenerated records header file from GANA.
  • FS: Improve modularity of FS structs. #6743
  • SETU: Various improvements as part of the ongoing work on LSD0003.
  • IDENTITY: Fix wrong key construction for anonymous ECDSA identity.
  • RPS: Code cleanup, mostly addressing warnings.
  • UTIL:
    • Added a Base32 en/decoder CLI, gnunet-base32.
    • Use timeflakes as UUIDs. #6716
  • Buildsystem: Fix libunistring detection. #6485

A detailed list of changes can be found in the ChangeLog and the 0.14.1 bugtracker.

Thanks

This release was the work of many people. The following people contributed code and were thus easily identified: Christian Grothoff, Florian Dold, t3sserakt, TheJackiMonster, Elias Summermatter, Julius Bünger and Thien-Thi Nguyen.

03 April, 2021 10:00PM

March 31, 2021

Christopher Allan Webber

The hurt of this moment, hopes for the future

Of the deeper thoughts I might give to this moment, I have given them elsewhere. For this blogpost, I just want to speak of feelings... feelings of hurt and hope.

I am reaching out, collecting the feelings of those I see around me, writing them in my mind's journal. Though I hold clear positions in this moment, there are few roots of feeling and emotion about the moment I feel I haven't steeped in myself at some time. Sometimes I tell this to friends, and they think maybe I am drifting from a mutual position, and this is painful for them. Perhaps they fear this could constitute or signal some kind of betrayal. I don't know what to say: I've been here too long to feel just one thing, even if I can commit to one position.

So I open my journal of feelings, and here I share some of the pages collecting the pain I see around me:

The irony of a movement wanting to be so logical and above feelings being drowned in them.

The feelings of those who found a comfortable and welcoming home in a world of loneliness, and the split between despondence and outrage for that unraveling.

The feelings of those who wanted to join that home too, but did not feel welcome.

The pent up feelings of those unheard for so long, uncorked and flowing.

The weight and shadow of a central person who seems to feel things so strongly but cannot, and does not care to learn to, understand the feelings of those around them.

I flip a few pages ahead. The pages are blank, and I interpret this as new chapters for us to write, together.

I hope we might re-discover the heart of our movement.

I hope we can find a place past the pain of the present, healing to build the future.

I hope we can build a new home, strong enough to serve us and keep us safe, but without the walls, moat, and throne of a fortress.

I hope we can be a movement that lives up to our claims: of justice, of freedom, of human rights, to bring these to everyone, especially those we haven't reached.

31 March, 2021 06:42PM by Christopher Lemmer Webber

March 30, 2021

Mike Gran

Guile Potluck 2021 Part 1: Genshou and Anguish

Preface

Guile Potluck 2021 was an event where hackers got to advertise their exciting new projects.  It wrapped up a few weeks ago, and at that time it was my intention to blog my way through the entrants right away.  Well, I have not been expedient on that front.
I'm so very sorry it has taken me so long to get back to Guile Potluck 2021.  Somewhere between family, kids, the day job, actually working on Guile, and the vague depression that quarantine seems to instill in me, it all got away from me.
But hey, let's see what we've got. I'll start from the end, and work my way back to the beginning.

Genshou, by Walter Lewis

https://git.sr.ht/~wklew/genshou

Here, wklew implements an extensible effects system that allows stateful computation without any mutation.  Thought-provoking stuff, especially if you like pondering monads, denotational semantics, and other such things.

Honestly, it is projects like this that activate my impostor syndrome with regard to Scheme. Many Scheme hackers approach it from a deep interest in Computer Science, and mostly I look at their work in awe whilst carrying on with my second-hand understanding of programming that somehow I've built a career on.

Anguish, by Rutger van Beusekom

https://gitlab.com/rutger.van.beusekom/anguish

rutger.van.beusekom has written a parser for the POSIX sh language using a PEG grammar, which he hopes to develop into a full-fledged shell in the future. At the moment, it converts shell statements into an SXML-like representation.

I am excited to see someone exercise Guile's PEG parser, which is both powerful and under-utilized.

30 March, 2021 03:51AM by Mike (noreply@blogger.com)

March 29, 2021

poke @ Savannah

[VIDEO] Hacking the BPF Type Format with GNU poke

I just uploaded the pilot test of what will be a series of videos intended to show how to use poke, write pickles, and the like.  It is very improvised and I have never made a screencap/video of this kind before, so please bear with me ;)

https://youtu.be/n3ErUiLwFN4

Happy poking!

29 March, 2021 02:24AM by Jose E. Marchesi

March 28, 2021

GNU Taler news

Why a Digital Euro should be Online-first and Bearer-based

We are happy to announce the publication of our paper on "Why a Digital Euro should be Online-first and Bearer-based".

28 March, 2021 10:00PM

March 25, 2021

Andy Wingo

here we go again

Around 18 months ago, Richard Stallman was forced to resign from the Free Software Foundation board of directors and as president. It could have been anything -- at that point he already had a history of behaving in a way that was particularly alienating to women -- but in the end it was his insinuation that it was somehow OK if his recently-deceased mentor Marvin Minsky, then in his 70s or 80s, had sex with a 17-year-old on Jeffrey Epstein's private island. A weird choice of hill to stake one's reputation on, to say the least.

At the time I was relieved that we would finally be getting some leadership renewal at the FSF, and hopeful that we could get some mission renewal as well. I was also looking forward to seeing what the practical implications would be for the GNU project, as more people agreed that GNU was about software freedom and not about its founder.

But now we're back! Not only has RMS continued through this whole time to insist that he runs the GNU project -- something that is simply not the case, in my estimation -- but this week, a majority of a small self-selected group of people, essentially a subset of current and former members of the FSF board of directors and including RMS himself, elected to reinstate RMS to the board of the Free Software Foundation. Um... read the room, FSF voting members? What kind of message are you sending?

In this context I can only agree with the calls for the entire FSF board to resign. The board is clearly not fit for purpose, if it can make choices like this.

dissociation?

I haven't (yet?) signed the open letter because I would be in an inconsistent position if I did so. The letter enjoins people to "refuse to contribute to projects related to the FSF and RMS"; as a co-maintainer of GNU Guile, which has its origins in the heady 1990s of the FSF but no longer has anything to do with RMS, whose copyrights are entirely held by the FSF, which is hosted on FSF-run servers, and which is even obliged (GPLv3 §5d, as referenced by LGPLv3) to print out Copyright (C) 1995-2021 Free Software Foundation, Inc. when it starts, I must admit that I contribute to a project that is "related to the FSF". But I don't see how Guile could continue this association if the FSF board continues as it is. It's bad for contributors and for the future of the project.

It would be very tricky to disentangle Guile from the FSF -- consider hosting, for example -- so it's not the work of a day, but it's something to think about.

Of course I would rather that the FSF wouldn't allow itself to be seen as an essentially misogynist organization. So clean house, FSF!

on the nature of fire

Reflecting on how specifically we could have gotten here -- I don't know. I don't know the set of voting members at the FSF, what discussions were had, or who voted for what. But, having worked as a volunteer on GNU projects for almost two decades now, I have a guess. RMS and his closest supporters see themselves as guardians of the flame of free software -- a lost world of the late-70s MIT AI lab, reborn in a flurry of mid-80s hacking, but for the last 25 years or so slipping further and further away. These are dark times, in their view, and having the principled founder in a leadership role can only be a good thing.

(Of course, the environment in the AI lab was only good for some. The treatment of Margaret Hamilton as recounted in Levy's Hackers shows that not all were welcome. If this were just one story, I would discount it, but looking back, it does seem to be part of a pattern.)

But is that what the FSF is for today? If so, Guile should certainly leave. I'm not here for software as performative nostalgia -- I'm here to have fun with friends and start a fire. The FSF should do the same -- look at the world we are in, look where the energy is now, and engage in real conversations about success and failure and tactics. There is a world to win, and doubling down on RMS won't get us there from here.

25 March, 2021 12:22PM by Andy Wingo

March 24, 2021

www-zh-cn @ Savannah

Letter to support RMS

This is the email I sent to <info@fsf.org>, cc <directors@fsf.org>

Dear FSF,

I write this letter to support Dr. Richard Stallman as a board member of FSF.

Dr. Richard Stallman has been a strong leader in the Free Software Movement ever since the beginning of free software. He has always been thinking from the free software point of view, and has been constantly promoting free software and the free software community. This is what we need in the Free Software Foundation. This is what we need to inspire people to work for the goal of the Free Software Foundation.

I am expecting Dr. Richard Stallman to do his best in the FSF and wish him all the best.

best regards,
wxie

24 March, 2021 07:03AM by Wensheng XIE

March 22, 2021

parallel @ Savannah

GNU Parallel 20210322 ('2002-01-06') released [stable]

GNU Parallel 20210322 ('2002-01-06') [stable] has been released. It is available for download at: http://ftpmirror.gnu.org/parallel/

No new functionality was introduced, so this is a good candidate for a stable release.

Please help spread GNU Parallel by making a testimonial video like Juan Sierra Pons: http://www.elsotanillo.net/wp-content/uploads/GnuParallel_JuanSierraPons.mp4

It does not have to be as detailed as Juan's. It is perfectly fine if you just say your name, and what field you are using GNU Parallel for.

Quote of the month:

  GNU Parallel is my new favorite thing
    -- Will Tejeda @thewilltejeda

New in this release:

  • Bug fixes and man page updates.

News about GNU Parallel:

  • The very first version of Parallel dated 2002-01-06 was found in an old backup:

  #!/usr/bin/perl
  # The number of parallel processes is the first argument;
  # the commands to run are read from stdin, one per line.
  $processes=shift;

  chomp(@jobs=<>);
  # Turn each command into a phony make target...
  for (@jobs) {
      $jobnr++;
      push @makefile,
      (".PHONY : job$jobnr\n",
       "job$jobnr :\n",
       "\t$_\n");
  }
  # ...and make "all" depend on every job.
  unshift @makefile, "all : ",(map { "job$_ " } 1 .. $jobnr),"\n";

  # Pipe the generated makefile to make -j, which does the
  # actual parallelization.
  open (MAKE, "| make -k -f - -j $processes") || die;
  print MAKE @makefile;
  close MAKE;

Get the book: GNU Parallel 2018 http://www.lulu.com/shop/ole-tange/gnu-parallel-2018/paperback/product-23558902.html

GNU Parallel - For people who live life in the parallel lane.

If you like GNU Parallel record a video testimonial: Say who you are, what you use GNU Parallel for, how it helps you, and what you like most about it. Include a command that uses GNU Parallel if you feel like it.

About GNU Parallel

GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.
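As a rough sketch of the pipe case (the file name is just an example): --pipe splits stdin into blocks, here of 10 MB via --block, and runs one grep per block in parallel:

  cat access.log | parallel --pipe --block 10M grep ERROR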

If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.
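As a hedged illustration of replacing a loop (the file names and the convert command are only placeholders), here is a sequential loop and a GNU Parallel one-liner doing the same work in parallel:

  # sequential shell loop:
  for f in *.jpg; do convert "$f" "${f%.jpg}.png"; done

  # possible GNU Parallel equivalent; {.} is the input with its extension removed:
  parallel convert {} {.}.png ::: *.jpg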

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.
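A small sketch of that property: with -k (--keep-order) the output is printed in the order of the input, even though the jobs finish in a different order:

  parallel -k 'sleep {}; echo slept {}' ::: 3 1 2
  # prints: slept 3 / slept 1 / slept 2 -- the input order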

For example you can run this to convert all jpeg files into png and gif files and have a progress bar:

  parallel --bar convert {1} {1.}.{2} ::: *.jpg ::: png gif

Or you can generate big, medium, and small thumbnails of all jpeg files in sub dirs:

  find . -name '*.jpg' |
    parallel convert -geometry {2} {1} {1//}/thumb{2}_{1/} :::: - ::: 50 100 200

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with:

    $ (wget -O - pi.dk/3 || lynx -source pi.dk/3 || curl pi.dk/3/ || \
       fetch -o - http://pi.dk/3 ) > install.sh
    $ sha1sum install.sh | grep c82233e7da3166308632ac8c34f850c0
    12345678 c82233e7 da316630 8632ac8c 34f850c0
    $ md5sum install.sh | grep ae3d7aac5e15cf3dfc87046cfc5918d2
    ae3d7aac 5e15cf3d fc87046c fc5918d2
    $ sha512sum install.sh | grep dfc00d823137271a6d96225cea9e89f533ff6c81f
    9c5198d5 31a3b755 b7910ece 3a42d206 c804694d fc00d823 137271a6 d96225ce
    a9e89f53 3ff6c81f f52b298b ef9fb613 2d3f9ccd 0e2c7bd3 c35978b5 79acb5ca
    $ bash install.sh

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your command line will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2018): GNU Parallel 2018, March 2018, https://doi.org/10.5281/zenodo.1146014.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/Identi.ca/Google+/Twitter/Facebook/LinkedIn/mailing lists
  • Get the merchandise https://gnuparallel.threadless.com/designs/gnu-parallel
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference

If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)

If GNU Parallel saves you money:

  • (Have your company) donate to FSF https://my.fsf.org/donate/

About GNU SQL

GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.
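As a hedged sketch (the host, credentials, and database here are made up for illustration):

  # run a single query against a MySQL database addressed by a DBURL:
  sql mysql://user:pass@dbhost/mydb "SELECT COUNT(*) FROM users;"

  # with the command left out, you get the database's interactive shell:
  sql mysql://user:pass@dbhost/mydb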

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.

About GNU Niceload

GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.
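A couple of hedged examples (the default limit and the --load option are as described in man niceload; the file names are placeholders):

  # suspend gzip whenever the load average rises above the default limit:
  niceload gzip bigfile

  # assumed: --load 4 only slows the backup when the load average exceeds 4:
  niceload --load 4 tar czf backup.tar.gz /home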

22 March, 2021 10:17PM by Ole Tange

March 21, 2021

www @ Savannah

The WWWorst App Store

by Alexandre Oliva

Picture the most abusive app store.

Programs in it are meant to run on your own computer.

However, you have to be online to run them.

Every time you start them, they contact the app store.

If there is an updated version, it's installed automatically, no questions asked. You'd rather run the earlier version? Tough.

If the app store decides you're no longer welcome, the program won't start any more.

If the app store servers are offline, or if you are, it won't start either.

Read more...

21 March, 2021 01:32PM by Dora Scilipoti

Amin Bandali

LibrePlanet 2021: Jami and how it empowers users

I am giving my very first LibrePlanet talk today, March 20th. I will be talking about Jami, the GNU package for universal communication that respects the freedoms and privacy of its users. I'll be giving an introduction to Jami and its architecture; sharing important and exciting development news from the Jami team about rendezvous points, JAMS, the plugin SDK, Swarm chats, and more; and explaining how these features each help empower users to communicate with their loved ones without sacrificing their privacy or freedom.

Here is the abstract for my talk, also available on the LibrePlanet 2021's speakers page:

Jami is free software for universal communication that respects the freedoms and privacy of its users. Jami is an official GNU package with a main goal of providing a framework for virtual communications, along with a series of end-user applications for audio/video calling and conferencing, text messaging, and file transfer.

With the outbreak of the COVID-19 pandemic, working from home has become the norm for many workers around the world. More and more people are using videoconferencing tools to work or communicate with their loved ones. The emergence of these tools has been followed by many questions and scandals concerning the privacy and freedom of users.

This talk gives an introduction to Jami, a free/libre, truly distributed, and peer-to-peer solution, and explains why and how it differs from all other existing solutions and how it empowers users.

I have been an attendee of LibrePlanet for some years, and am very excited to be giving my first ever talk at LibrePlanet 2021 this year! You can watch my talk and other speakers' talks live this weekend, from the LibrePlanet 2021 - Live page. Attendance is gratis (no cost), and you can register at https://u.fsf.org/lp21-sp.

Presentation slides: pdf (with notes) | bib
sources: tar.gz | zip

I hope to see you around this year's all-online LibrePlanet conference this weekend!

LibrePlanet is a conference about software freedom, happening March 20 through 21, 2021. The event is hosted by the Free Software Foundation (FSF), and brings together software developers, law and policy experts, activists, students, and computer users to learn skills, celebrate free software accomplishments, and face upcoming challenges. Newcomers are always welcome, and LibrePlanet 2021 will feature programming for all ages and experience levels.

21 March, 2021 05:15AM by bandali