Planet GNU

Aggregation of development blogs from the GNU Project

June 24, 2025

FSF Blogs

FSF Events

Free Software Directory meeting on IRC: Friday, July 11, starting at 12:00 EDT (16:00 UTC)

Join the FSF and friends on Friday, July 11 from 12:00 to 15:00 EDT (16:00 to 19:00 UTC) to help improve the Free Software Directory.

24 June, 2025 06:05PM

Free Software Directory meeting on IRC: Friday, June 27, starting at 12:00 EDT (16:00 UTC)

Join the FSF and friends on Friday, June 27 from 12:00 to 15:00 EDT (16:00 to 19:00 UTC) to help improve the Free Software Directory.

24 June, 2025 05:55PM

GNU Guix

Privilege Escalation Vulnerabilities (CVE-2025-46415, CVE-2025-46416)

Two security issues, known as CVE-2025-46415 and CVE-2025-46416, have been identified in guix-daemon. They allow a local user to gain the privileges of any of the build users, use this to manipulate the output of any build, and subsequently gain the privileges of the daemon user. You are strongly advised to upgrade your daemon now (see instructions below), especially on multi-user systems.

Both exploits require the ability to start a derivation build. CVE-2025-46415 requires the ability to create files in /tmp in the root mount namespace on the machine the build occurs on, and CVE-2025-46416 requires the ability to run arbitrary code in the root PID and network namespaces on the machine the build occurs on. As such, this represents an increased risk primarily to multi-user systems, but also more generally to any system in which untrusted code may be able to access guix-daemon's socket, which is usually located at /var/guix/daemon-socket/socket.
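
As a quick way to gauge exposure, you can check who can reach that socket on your machine; a minimal check, assuming the default location given above:

ls -l /var/guix/daemon-socket/socket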

Vulnerability

One of the longstanding oversights of Guix's build environment isolation is what has become known as the abstract Unix-domain socket hole: a Linux-specific feature that enables any two processes in the same network namespace to communicate via Unix-domain sockets, regardless of all other namespace state. Unix-domain sockets are perhaps the single most powerful form of interprocess communication (IPC) that Unix-like systems have to offer, for the reason that they allow file descriptors to be passed between processes.
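
To see the mechanism in isolation, here is an illustrative sketch (not from the advisory) using socat; the socket name demo-hole is arbitrary. Two processes that share only a network namespace can talk over an abstract socket, because no filesystem path is involved:

# shell 1: listen on an abstract Unix-domain socket (Linux-only)
socat ABSTRACT-LISTEN:demo-hole -

# shell 2: connect to it by name alone; no file exists in any mount namespace
echo hello | socat - ABSTRACT-CONNECT:demo-hole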

This behavior played a crucial role in CVE-2024-27297, in which it was possible to smuggle a writable file descriptor for one of the output files of a fixed-output derivation out to a process beyond the build environment sandbox. More specifically, this would use a fixed-output derivation that doesn't use a builtin builder; examples of this class of derivation include derivations produced by origins using svn-fetch and hg-fetch, but not git-fetch or url-fetch, since those are implemented using builtin builders. The process could then wait for the daemon to validate the hash and register the output, and subsequently modify the file to contain any contents it desired.

The fix for CVE-2024-27297 seems to have made the assumption that once the build was finished, no more processes could be running as that build user. This is unfortunately incorrect: the builder could also smuggle out the file descriptor of a setuid program, which could subsequently be executed either using /proc/self/fd/N or execveat to gain the privileges of the build user. This assumption was likely believed to hold in Nix because Nix had a seccomp filter that attempted to forbid the creation of setuid programs entirely by blocking the necessary chmod calls. The security researchers who reported CVE-2025-46415 and CVE-2025-46416 found ways around Nix's seccomp filter, but Guix never had any such filter to begin with. It was therefore possible to run arbitrary code as the build user outside of the isolated build environment at any time.

Because it is possible to run arbitrary code as the build user even after the build has finished, many assumptions made in the design of the build daemon — not only in fixing CVE-2024-27297 but going way back — can be violated and exploited. One such assumption is that directories being deleted by deletePath — for instance the build tree of a build that has just failed — won't be modified while it is recursing through them. By violating this assumption, it is possible to exploit race conditions in deletePath to get the daemon to delete arbitrary files. One such file is a build directory of the form /tmp/guix-build-PACKAGE-X.Y.drv-0. If this is done between when the build directory is created and when it is chowned to the build user, an attacker can put a symbolic link in the appropriate place and get it to chown any file owned by the daemon's user to now be owned by the build user. In the case of a daemon running as root, that includes files such as /etc/passwd. The build users, as mentioned before, are easily compromised, so an attacker can at this point write to the target file.

When guix-daemon is not running as root, the attacker would gain the privileges of the guix-daemon user, giving write access to the store but nothing else.

In short, there are two separate problems here:

  1. It is possible to take over build users by exfiltrating setuid programs (CVE-2025-46416).
  2. Race conditions in the daemon make it possible to elevate privileges when other processes can concurrently modify files it operates on (CVE-2025-46415).

Mitigation

This security issue has been fixed by 6 commits (7173c2c0ca, be8aca0651, fb42611b8f, c659f977bb, 0e79d5b655, and 30a5d140aa as part of pull request #788). Users should make sure they have upgraded to commit 30a5d140aa or any later commit to be protected from this vulnerability. Upgrade instructions are in the following section.

The fix was accomplished primarily by closing the "abstract Unix-domain socket hole" entirely. To do this, the daemon was modified so that all builds, even fixed-output ones, occur in a fresh network namespace. To keep networking functional despite the separate network namespace, a userspace networking stack, slirp4netns, is used. Additionally, some of the daemon's file deletion and copying helper procedures were modified to use the openat family of system calls, so that even in cases where build users can be taken over (for example, when the daemon is run with --disable-chroot), those particular helper procedures can't be exploited to escalate privileges.
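
To give a feel for the underlying technique (an illustrative sketch, not the daemon's actual code): a freshly created network namespace contains only a loopback device, and slirp4netns can then supply userspace networking to a process inside it:

# shell 1: enter fresh user and network namespaces; only "lo" exists here
unshare --user --map-root-user --net sh
echo $$    # note this shell's PID

# shell 2: attach a userspace network stack to that namespace,
# substituting the PID noted above for <PID>
slirp4netns --configure --mtu=65520 <PID> tap0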

A test for the presence of the abstract Unix-domain socket hole is available at the end of this post. One can run this code with:

guix repl -- abstract-socket-vuln-check.scm

This will report whether the guix-daemon currently in use is vulnerable. If it is not vulnerable, the last line will contain "Abstract Unix-domain socket hole is CLOSED"; otherwise, the last line will contain "Abstract Unix-domain socket hole is OPEN, guix-daemon is VULNERABLE!".

Note that this will properly report that the hole is still open for daemons running with --disable-chroot, which is, as before, still insecure wherever untrusted users can access the daemon's socket.

Upgrading

Due to the severity of this security advisory, we strongly recommend that all users upgrade guix-daemon immediately.

For Guix System, the procedure is to reconfigure the system after a guix pull, either restarting guix-daemon or rebooting. For example:

guix pull
sudo guix system reconfigure /run/current-system/configuration.scm
sudo herd restart guix-daemon

where /run/current-system/configuration.scm is the current system configuration but could, of course, be replaced by a system configuration file of a user's choice.

For Guix on another distribution, one needs to guix pull with sudo, as guix-daemon runs as root, and then restart the guix-daemon service, as documented. For example, on a system using systemd to manage services, run:

sudo --login guix pull
sudo systemctl restart guix-daemon.service

Note that if you use your distro's package of Guix (as opposed to having used the install script), you may need to take other steps, or upgrade the Guix package as you would other packages on your distro. Please consult the relevant documentation from your distro or contact the package maintainer for additional information or questions.

Timeline

On March 27th, the NixOS/Nixpkgs security team forwarded a detailed report about two vulnerabilities from Snyk Security Labs to the Guix security team and to Ludovic Courtès and Reepca Russelstein (as contributors to guix-daemon). A 90-day disclosure timeline was agreed upon with Snyk and all the affected projects: Nix, Lix, and Guix.

During that time, development of the fixes in Guix was led by Reepca Russelstein, with peer review happening on the private guix-security mailing list. Coordination with the other projects, and of this security advisory, was managed by the Guix security team.

A pre-disclosure announcement was sent by the NixOS/Nixpkgs and the Guix security teams on June 19th–20th, giving June 24th as the full public disclosure date.

Some other CVEs that were included in the report were CVE-2025-52991, CVE-2025-52992, and CVE-2025-52993. These don't represent direct vulnerabilities so much as missed opportunities to mitigate the attack the researchers identified — that is, it has to be possible to do things like exfiltrate file descriptors (for CVE-2025-52992) and trick the daemon into deleting arbitrary files (for CVE-2025-52991 and CVE-2025-52993) before these start mattering.

Conclusion

More information concerning the fix for this vulnerability and the design choices made for it will be provided in a follow-up blog post.

We thank the Security Labs team at Snyk for discovering similar-but-not-quite-the-same vulnerabilities in Nix, and the NixOS/Nixpkgs security team for sharing this information with the Guix security team, which led us to realize we had related vulnerabilities of our own.

Test for presence of vulnerability

Below is code to check whether your guix-daemon is vulnerable to this exploit. Save this file as abstract-socket-vuln-check.scm and run it following the instructions above, in "Mitigation."

;; Checking for CVE-2025-46415 and CVE-2025-46416.

(use-modules (guix)
             (gcrypt hash)
             ((rnrs bytevectors) #:select (string->utf8))
             (ice-9 match)
             (ice-9 threads)
             (srfi srfi-34))

(define nonce
  (string-append "-" (number->string (car (gettimeofday)) 16)
                 "-" (number->string (getpid))))

(define socket-name
  (string-append "\0" nonce))

(define test-message nonce)

(define check
  (computed-file
   "check-abstract-socket-hole"
   #~(begin
       (use-modules (ice-9 textual-ports))

       (let ((sock (socket AF_UNIX SOCK_STREAM 0)))
         ;; Attempt to connect to the abstract Unix-domain socket outside.
         (connect sock AF_UNIX #$socket-name)

         ;; If we reach this line, then we successfully managed to connect to
         ;; the abstract Unix-domain socket.
         (call-with-output-file #$output
           (lambda (port)
             (display (get-string-all sock) port)))))
   #:options
   `(#:hash-algo sha256
     #:hash ,(sha256 (string->utf8 test-message))
     #:local-build? #t)))

(define build-result
  ;; Listen on the abstract Unix-domain socket at SOCKET-NAME and build
  ;; CHECK.  If CHECK succeeds, then it managed to connect to SOCKET-NAME.
  (let ((sock (socket AF_UNIX SOCK_STREAM 0)))
    (bind sock AF_UNIX socket-name)
    (listen sock 1)
    (call-with-new-thread
     (lambda ()
       (match (accept sock)
         ((connection . peer)
          (format #t "accepted connection on abstract Unix-domain socket~%")
          (display test-message connection)
          (close-port connection)))))
    (with-store store
      (let ((drv (run-with-store store (lower-object check))))
        (guard (c ((store-protocol-error? c) c))
          (build-derivations store (list drv))
          #t)))))

(if (store-protocol-error? build-result)
    (format (current-error-port)
            "Abstract Unix-domain socket hole is CLOSED, build failed with ~S.~%"
            (store-protocol-error-message build-result))
    (format (current-error-port)
            "Abstract Unix-domain socket hole is OPEN, guix-daemon is VULNERABLE!~%"))

24 June, 2025 02:00PM by Caleb Ristvedt

June 23, 2025

GNUnet News

GNUnet 0.24.3

This is a bugfix release for gnunet 0.24.2. It fixes some regressions and minor bugs.

Links

The GPG key used to sign is: 3D11063C10F98D14BD24D1470B0998EF86F59B6A

Note that due to mirror synchronization, not all links may be functional early after the release. For direct access try https://ftp.gnu.org/gnu/gnunet/

23 June, 2025 10:00PM

June 22, 2025

parallel @ Savannah

GNU Parallel 20250622 ('Павутина') released

GNU Parallel 20250622 ('Павутина') has been released. It is available for download at: lbry://@GnuParallel:4

Quote of the month:

  GNU Parallel is a seriously underrated tool, at least based on how little I hear people talk about it (and how often I possibly over-use it)
    -- Byron Alley @byronalley

New in this release:

  • No new features.
  • Bug fixes and man page updates.


News about GNU Parallel:

  • Maîtriser la commande parallel (Mastering the parallel command)

  https://blog.stephane-robert.info/docs/admin-serveurs/linux/parallel/


GNU Parallel - For people who live life in the parallel lane.

If you like GNU Parallel, record a video testimonial: Say who you are, what you use GNU Parallel for, how it helps you, and what you like most about it. Include a command that uses GNU Parallel if you feel like it.


About GNU Parallel


GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.
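
For instance, a sequential shell loop and its parallel equivalent (lame is just an example command):

  # sequential shell loop
  for f in *.wav; do lame "$f"; done

  # the same work, with several jobs running at once
  parallel lame ::: *.wav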

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.

For example you can run this to convert all jpeg files into png and gif files and have a progress bar:

  parallel --bar convert {1} {1.}.{2} ::: *.jpg ::: png gif

Or you can generate big, medium, and small thumbnails of all jpeg files in sub dirs:

  find . -name '*.jpg' |
    parallel convert -geometry {2} {1} {1//}/thumb{2}_{1/} :::: - ::: 50 100 200

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with:

    $ (wget -O - pi.dk/3 || lynx -source pi.dk/3 || curl pi.dk/3/ || \
       fetch -o - http://pi.dk/3 ) > install.sh
    $ sha1sum install.sh | grep c555f616391c6f7c28bf938044f4ec50
    12345678 c555f616 391c6f7c 28bf9380 44f4ec50
    $ md5sum install.sh | grep 707275363428aa9e9a136b9a7296dfe4
    70727536 3428aa9e 9a136b9a 7296dfe4
    $ sha512sum install.sh | grep b24bfe249695e0236f6bc7de85828fe1f08f4259
    83320d89 f56698ec 77454856 895edc3e aa16feab 2757966e 5092ef2d 661b8b45
    b24bfe24 9695e023 6f6bc7de 85828fe1 f08f4259 6ce5480a 5e1571b2 8b722f21
    $ bash install.sh

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your command line will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2018): GNU Parallel 2018, March 2018, https://doi.org/10.5281/zenodo.1146014.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/Identi.ca/Google+/Twitter/Facebook/LinkedIn/mailing lists
  • Get the merchandise https://gnuparallel.threadless.com/designs/gnu-parallel
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference


If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)



About GNU SQL


GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.
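
For example (a hedged sketch; the user, password, host, and database names are placeholders):

  # run a query against a MySQL database
  sql mysql://user:password@host/database "SELECT * FROM users;"

  # with no command given, drop into the database's interactive shell
  sql postgresql://user@host/database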

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.


About GNU Niceload


GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.
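
For example (a minimal sketch; the load limit of 4 and the backup command are arbitrary):

  # suspend the backup whenever the load average is above 4
  niceload -l 4 tar czf backup.tar.gz /home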

22 June, 2025 10:46PM by Ole Tange

June 20, 2025

FSF Blogs

GNU Press Shop is open! FSF 40 gear, books & more -- now until July 28

The Free Software Foundation's (FSF) summer fundraiser is underway, and that means the GNU Press Shop is open!

20 June, 2025 08:40PM

June 18, 2025

From Chennai to Lviv: A recap of the LibreLocal meetups, part two

18 June, 2025 04:25PM

June 16, 2025

FSF News

Free software can defy dystopia (in French)

16 June, 2025 09:55PM

Free software can defy dystopia (in Spanish)

16 June, 2025 09:55PM

Free software can defy dystopia

16 June, 2025 09:55PM

FSF Events

Free Software Directory meeting on IRC: Friday, June 20, starting at 12:00 EDT (16:00 UTC)

Join the FSF and friends on Friday, June 20 from 12:00 to 15:00 EDT (16:00 to 19:00 UTC) to help improve the Free Software Directory.

16 June, 2025 08:02PM

June 12, 2025

Free Software Licensing 101 with FSF copyright & licensing associate Craig Topham

This free software licensing 101 talk is intended to cover as many aspects of free software licensing as possible. The talk is broad in scope and geared toward beginner and intermediate audiences.

12 June, 2025 07:25PM

June 11, 2025

FSF News

Citations

Below are citations to research noted in the printed edition of the 46th issue of the Free Software Bulletin.

11 June, 2025 12:51PM

June 10, 2025

GNU Health

Contributions of the libre GNU Health system to healthcare: experiences in Honduras and Argentina

Session: "Contributions of the libre GNU Health system to healthcare: experiences in Honduras and Argentina"

📅 Friday, June 13
📍 Centro de Innovación, Emprendimiento y Vinculación
🏛 Facultad de Ingeniería – UNER

In this session we will explore practical experiences and progress in the implementation of GNU Health at the primary care level.
Within the framework of the "GNU Health – UNER" Academic Alliance, we will be joined by guest lecturers from UNITEC (Universidad Tecnológica Centroamericana, Honduras), thanks to the International Teaching Mobility Program (PROMID) of the Universidad Nacional de Entre Ríos (UNER).

Jornada "Aportes del sistema libre GNU Health al cuidado de la salud: Experiencias en Honduras y Argentina"  En el marco de la Alianza Académica "GNU Health", con la presencia de profesores invitados de la UNITEC (Universidad Tecnológica Centroamericana) en el contexto del Programa de Movidlidad Internacional Docente (PROMID) de la Universidad Nacional de Entre Rïos


This session is an opportunity to:
✅ Learn about the latest GNU Health updates directly from its creator, Dr. Luis Falcón (GNU Solidario).
✅ Discover real-world deployments in Argentina (CAPS D'Angelo) and Honduras (the Municipality of Níspero).
✅ Take part in hands-on workshops to learn how to install and use the system.
✅ Connect with professionals and academics committed to public health and libre technologies.

🔗 Reservations by e-mail: saludpublica@ingenieria.uner.edu.ar

Program

Opening
  8:45   Authorities / Organizers

Experiences using GNU Health
  9:00   Contributions of GNU Health to the primary care level: a retrospective. Bioeng. Carlos Scotta and Bioeng. Ingrid Spessotti (Grupo de Estudios en Salud Pública y Tecnologías Aplicadas, FIUNER)
  9:45   The experience at CAPS D'Angelo: history and present of GNU Health as a tool for managing healthcare. Lic. Teresita Calzia, Claudia Gudiño and the CAPS team
  10:30  The experience in the Municipality of Níspero, Honduras. Eng. Lucy Rodas and Eng. Elvin Deras (UNITEC, Honduras)
  11:15  Presentation of the new version of GNU Health. Dr. Luis Falcón (GNU Solidario)
  12:00  Break

System implementation workshop
  14:00  First steps with GNU Health: installing and setting up the system. Eng. Elvin Deras (UNITEC, Honduras)
  15:00  Getting to know GNU Health: main features. Eng. Lucy Rodas (UNITEC, Honduras), Bioeng. Maia Iturain, Bioeng. Ingrid Spessotti and Bioeng. Francisco Moyano (Grupo de Estudios en Salud Pública y Tecnologías Aplicadas, FIUNER)
  16:00  Break

Ongoing projects
  16:20  Empowering the community: developing a Patient Portal for GNU Health. Ana Roskopf
  16:40  PID UNER: adapting GNU Health for mental health nursing. Aldana Gagliardi
  17:10  Incorporating systems for support and follow-up in the field. Bioeng. Maia Iturain

Closing
  17:30  The dream of an interoperable system: technical and political aspects of digital health. Lic. Mario Puntín, Bioeng. Francisco Moyano and Dr. Fernando Sassetti

10 June, 2025 09:34PM by GNU Solidario

FSF Events

Community meetup: Software Freedom Day

A group of us from the US will be getting together to discuss how to get local groups started in our areas.

10 June, 2025 08:33PM

FSF Blogs

IRS tax filing software released to the people as free software

The majority of Direct File's source code is now public, in part thanks to free software advocates.

10 June, 2025 04:10PM

June 07, 2025

GNU Guix

A New Rust Packaging Model

If you've ever struggled with Rust packaging, here's some good news!

We have changed to a simplified Rust packaging model that is easier to automate while allowing modification, replacement, and deletion of dependencies. The new model will significantly reduce our Rust packaging time and will help us to improve both package availability and quality.

Those changes are currently on the rust-team branch, slated to be merged in the coming weeks.

How good is the news? Migrating our current Rust package collection (150+ applications with 3600+ dependency libraries) took only two weeks, all by one person! :)

See #387 if you want to track the current progress and give feedback; I'll request merging the rust-team branch once that pull request is merged. After the branch is merged, a news entry will be issued for guix pull.

Upcoming changes

The previous packaging model for Rust in Guix would map one crate (Rust package) to one Guix package. This seemed to make sense but there's a fundamental mismatch here: while Guix packages—from applications like GIMP and Inkscape to C libraries like GnuTLS and Nettle—are meant to be compiled independently, Rust applications are meant to be compiled as a single unit together with all the crates they depend on, recursively. That mismatch meant that Guix would build each crate independently, but that build output was of no use at all.
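
Concretely, with Cargo every crate pinned in Cargo.lock is compiled within the application's own build, so there is no separate per-crate artifact worth sharing:

  # builds the application *and* all the crates from Cargo.lock under ./target
  cargo build --release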

The new model instead focuses on defining origins for crates, with actual builds happening only on the "leaves" of the graph—Rust applications. This is a major change with many implications, as we will see below.

  1. Importer

    guix import crate will support importing from Cargo.lock using the new --lockfile / -f option.

    guix import --insert=gnu/packages/rust-crates.scm \
         crate --lockfile=/path/to/Cargo.lock PACKAGE
    
    guix import -i gnu/packages/rust-crates.scm \
         crate -f /path/to/Cargo.lock PACKAGE

    To avoid conflicts with the new lockfile importer, the crates.io importer will be altered so it will no longer support importing dependencies.

    A new procedure, cargo-inputs-from-lockfile, will be added for use in the guix.scm of Rust projects. Note that Cargo workspaces in dependencies require manual intervention and are therefore not handled by this procedure.

    (use-modules (guix import crate))
    
    (package
      ...
      (inputs (cargo-inputs-from-lockfile "Cargo.lock")))
  2. Build system

    cargo-build-system will support directory inputs and Cargo workspaces.

    The check-for-pregenerated-files build phase will scan all unpacked sources and print out non-empty binary files.

    We won't accept contributions using the old packaging approach (#:cargo-inputs and #:cargo-development-inputs) anymore. Support for the old approach is deprecated and will be removed after Dec. 31, 2026.

  3. Packages

    Rust libraries will be stored in two new modules and will be hidden from the user interface:

    • (gnu packages rust-sources)

      Rust libraries that require a build process, or complex modifications involving external dependencies, in order to unbundle dependencies.

    • (gnu packages rust-crates)

      Rust libraries imported using the lockfile importer. This module exports a lookup-cargo-inputs interface, providing an identifier -> libraries mapping.

      Libraries defined in this module can be modified via snippets and patches, replaced by changing their definitions to point to other variables, or removed by changing their definitions to #f. The importer will skip existing libraries to avoid overwriting modifications.

      A template file for this module will be provided as etc/teams/rust/rust-crates.tmpl in the Guix source tree, for use in external channels.

    All other libraries (those currently in (gnu packages crates-...)) will be moved to an external channel. If you have packages depending on them, please add the channel shown below and use its (past-crates packages crates-io) module to avoid possible breakage. Once the branch is merged, you can migrate your packages and then safely remove the channel.

    (channel
      (name 'guix-rust-past-crates)
      (url "https://codeberg.org/guix/guix-rust-past-crates.git")
      (branch "trunk")
      (introduction
       (make-channel-introduction
        "1db24ca92c28255b28076792b93d533eabb3dc6a"
        (openpgp-fingerprint
         "F4C2D1DF3FDEEA63D1D30776ACC66D09CA528292"))))
  4. Documentation

    API references for cargo-build-system and packaging guidelines for Rust crates will be updated. A packaging workflow built on the new features will be added under the Packaging chapter of the Guix Cookbook.

Background

Currently, our Rust packaging uses the traditional approach, treating each application and library equally.

This brings several issues. First, packaging and maintenance: there is a large number of libraries and only a few people working on them, and we can't reuse the built libraries anyway; instead of build outputs, crate sources are extracted and used in the build process. As a result, the packaging experience is not very smooth, although the crates.io importer has helped mitigate this to some extent.

Second, the user interface: thousands of Rust libraries that users can't actually install show up in search results, and, understandably, documentation for all of these packages can't be well maintained.

Lastly, there is an inconsistency in the packaging interface. Our dependency model cannot map perfectly onto Rust's, and circular dependencies are possible. To solve this, the build system arguments #:cargo-inputs and #:cargo-development-inputs were introduced for specifying Rust libraries, instead of the regular propagated-inputs and native-inputs. Additionally, input propagation logic had to be reimplemented for them, which added performance overhead.

Approaches have been proposed to improve the situation, notably the antioxidant build system developed by Maxime Devos, and the cargo2guix tool developed by Murilo and Luis Guilherme Coelho:

  1. Antioxidant

    The antioxidant build system builds Rust packages without Cargo; instead, the build process is fully managed by Guix, invoking rustc directly.

    This build system would allow Guix to produce and share build artifacts for Rust libraries. It's a step towards making our work on the current approach more reasonable.

    However, there is a downside: since this is not what the Rust community expects, we'd also have to heavily patch many Rust packages, which would make it even harder for us to move forward.

  2. cargo2guix

    This tool parses Cargo.lock and outputs package definitions. It's more reliable than the crates.io importer, since dependencies are already known offline. It should be the most efficient improvement for the current approach. The upcoming importer update integrates a modified version of this tool.

    Murilo also proposes packaging Rust applications in self-contained modules, each containing a Rust application with all its dependencies, in order to reduce merge conflicts. However, the same library would then be defined in multiple modules, duplicating the effort to check and manage them.

  3. Let Cargo download dependencies

    This is the "vendoring" approach, used in some distributions and can be implemented as a fixed-output derivation.

    We don't use this approach because the dependency information is completely hidden from us: we can't easily locate a library when we want to modify or replace it, and if we made a mistake when checking dependencies, it could be very difficult to find out later.

    Another downside is that the download of a single library can't be deduplicated: since we use an isolated build environment, commonly used libraries will be downloaded repeatedly, despite already being available in the store.

After reading the recent discussion, I thought about these existing approaches in the hope of finding one that does only the minimum necessary: since users can't use our packaged libraries, there's no reason to insist on the traditional approach -> libraries can be hidden from the user interface -> user-facing documentation is not needed -> since metadata is not used at this stage, why bother defining a package for the library at all?

Actually, cargo2guix is more suitable for importing sources than packages: it has issues handling licenses, and Cargo.lock only contains enough information to construct the source representation in Guix, which supports simple patching.

Since the vendoring approach exists, packaging all Rust libraries as sources only has already been proven effective. However, when switching from packages to sources we lose important information from our representation: licenses and dependencies. Thanks to the awesome cargo-license tool, only the latter required further consideration.

The implementation has been changed a few times in the review process, but the idea remains: make automation and manual intervention coexist. As a result, the importer:

  1. outputs definitions with full versions.
  2. skips existing definitions.
  3. maintains an identifier -> libraries mapping, along with an accessing interface that handles modifications made to the libraries.

Despite having proposed it, I was a bit worried about the mapping, which references all dependency libraries directly, but the result turned out quite well: with compact source definitions, this migration reduced the 153k lines of definitions for Rust libraries to 42k.

  • Imported libraries; these are what the importer creates:

    (define rust-unindent-0.2.4
      (crate-source "unindent" "0.2.4"
                    "1wvfh815i6wm6whpdz1viig7ib14cwfymyr1kn3sxk2kyl3y2r3j"))
    
    (define rust-ureq-2.10.0.1cad58f
      (origin
        (method git-fetch)
        (uri (git-reference (url "https://github.com/algesten/ureq")
                            (commit "1cad58f5a4f359e318858810de51666d63de70e8")))
        (file-name (git-file-name "rust-ureq" "2.10.0.1cad58f"))
        (sha256 (base32 "1ryn499kbv44h3lzibk9568ln13yi10frbpjjnrn7dz0lkrdin2w"))))
  • Library with modification:

    (define rust-libmimalloc-sys-0.1.24
      (crate-source "libmimalloc-sys" "0.1.24"
                    "0s8ab4nc33qgk9jybpv0zxcb75jgwwjb7fsab1rkyjgdyr0gq1bp"
                    #:snippet
                    '(begin
                       (delete-file-recursively "c_src")
                       (delete-file "build.rs")
                       (with-output-to-file "build.rs"
                         (lambda _
                           (format #t "fn main() {~@
                            println!(\"cargo:rustc-link-lib=mimalloc\");~@
                            }~%"))))))
  • Library with replacement, for those requiring a build process with dependencies:

    (define rust-pipewire-0.8.0.fd3d8f7 rust-pipewire-for-niri)
  • Deleted library:

    (define rust-unrar-0.5.8 #f)
  • Accessing interface and identifier -> libraries mapping:

    (define-cargo-inputs lookup-cargo-inputs
      (rust-deunicode-1
       => (list rust-any-ascii-0.3.2
                rust-emojis-0.6.4
                rust-itoa-1.0.15
                ...))
      (rust-pcre2-utf32-0.2
       => (list rust-bitflags-2.9.0
                rust-cc-1.2.18
                rust-cfg-if-1.0.0
                ...))
      (zoxide
       => (list rust-aho-corasick-1.1.3
                rust-aliasable-0.1.3
                rust-anstream-0.6.18
                ...)))
  • Dependency library lookup; module selection is supported:

    (cargo-inputs 'rust-pcre2-utf32-0.2)
    (define (my-cargo-inputs name)
      (cargo-inputs name #:module '(my packages rust-crates)))
    
    (my-cargo-inputs ...)

Since we have all the dependency information, it is possible to unpack any libraries we want to a directory and then run more common tools to check them (some scripts are provided under etc/teams/rust, yet to be rewritten in Guile). You're encouraged to share yours, check the libraries after the merge, and help improve the collection ;)

Next steps

One issue with this model is that all libraries are stored and referenced in one module, making merge conflicts harder to resolve.

I'm considering creating a separate repository to manage this module. Whenever there's a change, it will be applied to this repository first and then synced back to Guix.

We can also store dependency specifications and lockfiles in that separate repository to make the packaging process, which may require changing the specifications, more transparent. This may also allow automation in updating dependency libraries.

Thanks for reading! Happy hacking :)

07 June, 2025 09:00PM by Hilton Chain

June 03, 2025

www @ Savannah

Malware in Proprietary Software - May 2025 Additions

The initial injustice of proprietary software often leads to further injustices: malicious functionalities.

The introduction of unjust techniques in nonfree software, such as back doors, DRM, tethering, and others, has become ever more frequent. Nowadays, it is standard practice.

We at the GNU Project show examples of malware that has been introduced in a wide variety of products and dis-services people use every day, and of companies that make use of these techniques.

Here are our latest additions:

May 2025

Malware in Appliances

  • Synology forces users to install self-branded hard drives in some of its recent NAS systems on the pretext of reliability, by blocking critical functions of drives purchased from other sources and by cutting down on support. Synology does this by replacing the original firmware with custom firmware that acts like DRM.



As a general precaution, users should make sure their printer can't connect to the manufacturer's server, for example by shielding it from the internet with a firewall. This will not restore the ability of printers to use third-party toner if they have already lost it, but it will prevent any future downgrades. This applies to all printer manufacturers, not only Brother.

Microsoft's Software is Malware


Enough is enough!

[*] Why “useds”? Because running Windows is not you using Windows; it is Windows using you.

  • Microsoft Teams has been collecting voice and face data from students at an Australian school to feed the Copilot chatbot. It took the school's network administrators a whole month to realize what was happening and disable this malfeature. It was obviously beyond their imagination that Microsoft could have made biometric data collection the default in Teams!


Let's hope legislators and regulatory agencies all over the world will quickly put a stop to this sort of outrageous practice.

In any case people would be better off switching to a free-software replacement such as Jitsi Meet for medium-size groups, or Big Blue Button for larger ones. Many public instances are available, and groups of users can also set up their own servers.

Apple's Operating Systems Are Malware

  • Apple has been labeling various third-party files and programs as “damaged”, preventing users from opening them, and implying that software from third-party sources is dangerous. While these restrictions can be circumvented, they violate users' freedom to do their computing as they wish. Most of the time, the purpose of warnings such as “damaged” is to scare users into sticking with Apple's proprietary programs for no good reason.


Amazon's Software Is Malware

  • Amazon has removed the “Do Not Send Voice Recordings” option from Echo devices, including from devices that support local processing of these recordings. All private conversations are now used to train Alexa's “artificial intelligence.” Moreover, if users choose not to save recordings, they will lose some advanced functions of Alexa that they paid for.


This wouldn't happen if software in the Echo were free. Users would be able to restore the “Do Not Send Voice Recordings” option.

Malware in Mobile Devices


Malware in Games


In addition, Nintendo can record audio and video chats for moderation purposes. Users' consent is required, but there is no guarantee that the recordings will not be sent to third parties. In short, there is no privacy in these chats.

If you ever consider buying a Switch, think twice, because you will not own it. Nintendo will.

03 June, 2025 03:02PM by Rob Musial

June 01, 2025

Bruno Haible

Major German science prize for open-source firmware developer

This year's first prize of the German young scientists competition ("Jugend forscht"), in the category mathematics + computer science, was awarded to Simon Neuenhausen for writing open-source firmware for the Wi-Fi of the ESP32 SoC. It replaces the closed-source Wi-Fi driver.

Project description (in German): https://www.jugend-forscht.de/virtuelle-ausstellung/detailseite/Open_Source_WLAN_auf_dem_ESP32.html

Award: https://www.jugend-forscht.de/fileadmin/user_upload/Downloadcenter/Bundeswettbewerb/Bundeswettbewerb_2025/Preistraegerbroschuere_Bundeswettbewerb_Jugend_forscht_2025.pdf, page 16.

01 June, 2025 12:39PM

The ideal application profiler

The usual tool for optimizing a program's execution speed is a profiler.

I've seen and tried various profilers over the years, and each of them had drawbacks: some require root privileges, some produce only per-function profiles (no insight into what is expensive inside a function), some work only on unoptimized or specially compiled binaries, and some slow the program down considerably while it runs.

For the first time, there is a profiler without any of these drawbacks. Plus, it is easy to use.

It is gprofng, part of GNU binutils in versions from 2025-05-22 or newer, together with gprofng-gui, an optional GUI that makes it very easy to use.

For more details, see this wiki: https://gitlab.com/ghwiki/gnow-how/-/wikis/Profiling/with_sampling.
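
As a quick taste (a minimal sketch; ./myprog stands for any binary you want to profile):

  # record a profiling experiment (stored in test.1.er by default)
  gprofng collect app ./myprog

  # print the functions ranked by CPU time
  gprofng display text -functions test.1.er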

Congratulations to the GNU binutils team, and to Vladimir Mezentsev in particular!

01 June, 2025 12:25PM

May 31, 2025

unifont @ Savannah

Unifont 16.0.04 Released

31 May 2025 Unifont 16.0.04 is now available.  This is a minor release with many glyph improvements.  See the ChangeLog file for details.

Download this release from GNU server mirrors at:

     https://ftpmirror.gnu.org/unifont/unifont-16.0.04/

or if that fails,

     https://ftp.gnu.org/gnu/unifont/unifont-16.0.04/

or, as a last resort,

     ftp://ftp.gnu.org/gnu/unifont/unifont-16.0.04/

These files are also available on the unifoundry.com website:

     https://unifoundry.com/pub/unifont/unifont-16.0.04/

Font files are in the subdirectory

     https://unifoundry.com/pub/unifont/unifont-16.0.04/font-builds/

A more detailed description of font changes is available at

      https://unifoundry.com/unifont/index.html

and of utility program changes at

      https://unifoundry.com/unifont/unifont-utilities.html

Information about Hangul modifications is at

      https://unifoundry.com/hangul/index.html

and

      http://unifoundry.com/hangul/hangul-generation.html

Enjoy!

31 May, 2025 10:52PM by Paul Hardy

GNUnet News

GNUnet 0.24.2

This is a bugfix release for gnunet 0.24.1. It fixes some regressions and minor bugs.

Links

The GPG key used to sign is: 3D11063C10F98D14BD24D1470B0998EF86F59B6A

Note that due to mirror synchronization, not all links may be functional early after the release. For direct access try https://ftp.gnu.org/gnu/gnunet/

31 May, 2025 10:00PM


May 21, 2025

parallel @ Savannah

GNU Parallel 20250522 ('Leif Tange') released [stable]

GNU Parallel 20250522 ('Leif Tange') has been released. It is available for download at: lbry://@GnuParallel:4

Quote of the month:

  gnu parallel is my new favorite toy
    -- Eytan Adar @eytan.adar.prof
 
New in this release:

  • No new features. This is a candidate for a stable release.
  • Bug fixes and man page updates.



21 May, 2025 08:15PM by Ole Tange

GNU Parallel 20250422 ('Tariffs') released [stable]

GNU Parallel 20250422 ('Tariffs') has been released. It is available for download at: lbry://@GnuParallel:4

Quote of the month:

  Man, GNU Parallel is very cool.
    -- jeeger @jeeger@mastodon.social

New in this release:

  • No new features. This is a candidate for a stable release.
  • Bug fixes and man page updates.



21 May, 2025 08:12PM by Ole Tange

May 20, 2025

Amin Bandali

Don't "buy" e-books from Oxford University Press

Last month I watched the book talk Music Copyright, Creativity, and Culture by Jennifer Jenkins with James Boyle facilitating the discussion, co-hosted by the Internet Archive and the Authors Alliance.

Looking to get a copy of the book, I found the book’s page on the publisher’s website, Oxford University Press. Seeing it available as an e-book, I opted to go with that as a more eco-friendly option and to save some physical space. I worked my way through the checkout and payment steps, under the impression that I would be purchasing a copy of the book that I could download and do with as I wished. The use of the words “buy” and “purchase” throughout the book page on the publisher’s website certainly did not suggest otherwise.

Screenshot from book page on OUP

In hindsight, there were red flags I failed to notice at the time, such as confusing and seemingly redundant, if not contradictory, information on the book page:

“Downloaded copy on your device does not expire.”

Um, okay? I’d sure hope and expect as much about any file I download.

“Includes 4 years of Bookshelf Online.”

Whatever — as long as I could download, store, and use the book offline I’d be happy.

It’s only upon hovering the small and generic, if not misleading, “E-book purchasing help” link that one would be presented with this vaguely informative eyebrow-raising sentence:

E-book purchase

E-books are granted under the terms of a single-user, non-transferable license, and may be accessed online from any location.

“E-books are granted” (??) is news to me. I thought I would have rightful access to something I bought and paid for, rather than being “granted” (read “allowed”) access to it by and through some overlord. Oh but of course, we live in a time where vendors get to redefine well-established words like “purchase” and “buy” on page N of their terms and conditions.

I obviously did not see that “E-book purchasing help” before giving Oxford University Press my money: being a tech-savvy person, I didn’t think I needed any help “purchasing” an e-book.

Everything became clear shortly after I completed the “purchase” and was redirected to VitalSource to access the book: the VitalSource “Bookshelf” user interface offered no way to download a copy of the book I thought I bought and paid for. It is instead a glorified pile of proprietary JavaScript DRM (Digital Restrictions Management) that wraps around the underlying representation of the book in VitalSource’s possession. The only other option for accessing the book would be through VitalSource’s proprietary application available only for certain versions of certain proprietary operating systems.

At this point, the only method I could think of to try and obtain a copy of the book that I could read without subjecting myself to the shackles of DRM or proprietary software was trying to print the book to PDF. Given that VitalSource’s DRM interface is a proprietary wrapper around VitalSource’s likely ePub-based underlying representation (guessing from the presence of epubcfi in the URL of their book renderer page), the book pages are not exposed all at once, practically forcing one to use the interface’s Print function to get all the pages in one go. After waiting what felt like an eternity for the website to prepare a printable version of the book, I was presented with this abomination (click image for sample in original PDF form):

Sample print output from VitalSource

That is a sample of the output generated by the interface’s Print function: an utterly useless, inferior copy of the book that has giant watermarks on every single page, with the only selectable text in the whole book being the repugnant threat at the top of each page — the actual body text of the book is converted to low-resolution, blurry images, and is therefore neither selectable nor searchable.

Going forward, I will NEVER “purchase” anything from Oxford University Press (and most definitely not from VitalSource), so long as they have no problem “selling” [access to] DRM-infested copies of books with no way to download a usable copy of what I paid for.

The key takeaway for me from this whole experience is that due to the sad and sorry status quo of our current times, this kind of insulting (mal)treatment of users is all but common, and really can happen to any one of us. Therefore it is all the more important for us to band together in protest of this, rather than dividing and isolating ourselves through misguided better-than-thou sentiments toward each other.

For Music Copyright, Creativity, and Culture, I ordered and a few days later received a paper copy from the local bookstore. It’s a copy I truly own, and can read whenever, wherever, and however I please.

Take care, and so long for now.


20 May, 2025 11:34PM

May 19, 2025

hello @ Savannah

hello 2.12.2 released [stable]

This release has no code changes since 2.12; it mostly contains updates and improvements to the build system.

Some out-of-date and only tenuously relevant files were also removed from the distribution, including the entire contrib directory.

19 May, 2025 12:13PM by Reuben Thomas

May 17, 2025

www-ru @ Savannah

Community meeting in Moscow

As part of the celebrations of the FSF's 40th anniversary, Gleb Erofeev is holding a meetup on May 24 at 18:00 local time.

Everyone interested is welcome, along with anyone they manage to bring.

17 May, 2025 02:19PM by Ineiev

May 14, 2025

GNU Health

Suriname Public Healthcare System embraces GNU Health

Suriname has adopted the GNU Health Hospital and Health Information System for its public healthcare system.

The adoption of GNU Health was announced at a press conference held last Friday, May 9th, in Paramaribo, in the context of the country’s healthcare digitization campaign. They described GNU Health as “An open source system that is both accessible and scalable”1. During the event, the Suriname Patient Portal and My Health App were also announced.

Press conference. From left to right: Prof. Dr Jerry Toelsie, Minister Amar Ramadhin, Dr Aloysius Koendjbihari and Mr. Richard Mendes

The Minister of Health, Dr. Amar Ramadhin, emphasized the benefits of this digital transformation: “We move away from paper files and work towards greater efficiency and a patient-oriented care experience”.

Digitization is supported by the IS4HIT (Information Systems for Health Information Technology) program, an initiative of the Pan American Health Organization (PAHO), whose delegates were at the conference together with local health professionals.

The GNU Health Hospital and Health Information System – from the socioeconomic determinants of health to the molecular basis of disease

The GNU Health rollout will be done in phases throughout the different public health centers, starting at the Regional Service Centers (RGDs). The main focus is on primary care. The initial phase will cover demographics, patient management, appointments, medical encounters, prescriptions, complementary test orders, and reporting. Training sessions for the local health professionals and technical team are being conducted, as well as the localization of the system for Suriname.

Minister Ramadhin declared: “[Healthcare] Digitization is not an end in itself but a powerful means to make care more human-oriented, safer and more efficient.” That’s where GNU Health fits right in. The Hospital and Health Information System of GNU Health has Social Medicine and primary care at its core. It excels in health promotion and disease prevention. When properly implemented and used, GNU Health is far more than just an Electronic Medical Record or a Hospital Management Information System. It empowers health professionals to assess the socioeconomic determinants of health and disease, taking a proactive approach to prevent and tackle the root of disease at the individual, family and society levels. The world is facing a pandemic of non-communicable diseases. Obesity, diabetes, depression, cancer and neurodegenerative conditions are on the rise, with an appalling impact on the underprivileged. GNU Health will be a great ally for the nurses, physicians, nutritionists and social workers of Suriname to find and engage those at higher risk in the community.

The fact that GNU Health is Free/Libre software allows Suriname to download the system, study it, and adapt it to its needs and legislation, free of any kind of vendor lock-in. After all, health is, or should be, a non-negotiable human right.

GNU Health is now part of Suriname’s effort to deliver a sustainable, interoperable, standards-based, privacy-oriented, scalable digital healthcare solution for the country’s public health system.

A Digital Public Good. In 2022 GNU Health was declared a Digital Public Good by the Digital Public Goods Alliance (DPGA). By definition, a Digital Public Good is open-source software, open data, open AI models, open standards, and open content that adhere to privacy and other applicable best practices, do no harm by design and are of high relevance for attainment of the United Nations 2030 Sustainable Development Goals (SDGs). This definition stems from the UN Secretary-General’s Roadmap for Digital Cooperation.

We are very proud and excited to see GNU Health deployed in Suriname’s national public health system, and we wish them the very best embracing the system as we envision it: a social project with some technology behind it.

About GNU Health

GNU Health is an open science, community-driven project from GNU Solidario, a non-profit humanitarian organization focused on Social Medicine. Our project has been adopted by public hospitals, research and academic institutions, governments and multilateral organizations around the world.

GNU Health is an official GNU package, awarded the Free Software Foundation Award for Projects of Social Benefit, and declared a Digital Public Good.

See also:

GNU Health : https://www.gnuhealth.org

GNU Solidario: https://www.gnusolidario.org

  1. https://www.surinametimes.com/artikel/suriname-zet-grote-stappen-richting-digitale-gezondheidszorg ↩

14 May, 2025 06:57PM by GNU Solidario

May 12, 2025

screen @ Savannah

GNU Screen v.5.0.1 is released

Screen is a full-screen window manager that multiplexes a physical terminal between several processes, typically interactive shells.

5.0.1 is a security-fix release. It includes only a few code fixes addressing typos and security issues; it doesn't include any new features.

  • CVE-2025-46805: do NOT send signals with root privileges
  • CVE-2025-46804: avoid file existence test information leaks
  • CVE-2025-46803: apply safe PTY default mode of 0620
  • CVE-2025-46802: prevent temporary 0666 mode on PTYs in attacher
  • CVE-2025-23395: reintroduce lf_secreopen() for logfile
  • buffer overflow due to bad strncpy()
  • uninitialized variables warnings
  • typos
  • combining char handling that could lead to a segfault


The release (official tarball) will soon be available for download from:
https://ftp.gnu.org/gnu/screen/

Please report any bugs or regressions.
Thanks to everyone who contributed to this release.

Cheers,
Alex

12 May, 2025 07:38PM by Alexander Naumov

May 11, 2025

GNU Guix

Migrating to Codeberg

The Guix project will be migrating all its repositories along with bug tracking and patch tracking to Codeberg within a month. This decision is the result of a collective consensus-building process that lasted several months. This post shows the upcoming milestones in that migration and discusses what it will change for people using Guix and for contributors.

Codeberg logo.

Context

For those who haven’t heard about it, Codeberg is a source code collaboration platform. It is run by Codeberg e.V., a non-profit registered in Germany. The software behind Codeberg is Forgejo, a free software forge (licensed under GPLv3) supporting the “merge request” style of workflow familiar to many developers.

Since its inception, Guix has been hosting its source code on Savannah, with bug reports and patches handled by email, tracked by a Debbugs instance, and visible on the project’s tracker. Debbugs and Savannah are hosted by the Free Software Foundation (FSF); all three services are administered by volunteers who have been supportive over these 13 years—thanks!

The motivation and the main parts of the migration are laid out in the second Guix Consensus Document (GCD). The GCD process itself was adopted just a few months ago; it’s a major milestone for the project that we’ll discuss in more detail in a future post. Suffice to say that this GCD was discussed and improved publicly for two months, after which deliberation among members of Guix teams led to acceptance.

Milestones

Migration to Codeberg will happen gradually. To summarize the GCD, the key milestones are the following:

  1. By June 7th, and probably earlier, Git repositories will all have migrated to Codeberg—some have already moved.

  2. On May 25th, the Guix repository itself will be migrated.

  3. From there on and until at least May 25th, 2026, https://git.savannah.gnu.org/git/guix.git will be a mirror of https://codeberg.org/guix/guix.git.

  4. Until December 31st, 2025, bug reports and patches will still be accepted by email, in addition to Codeberg (issues and pull requests).

Of course, this is just the beginning. Our hope is that the move can help improve much-needed tooling such as the QA infrastructure, following work on Forgejo/Cuirass integration started earlier this year, and possibly develop new tools and services to assist in the maintenance of this huge package collection that Guix provides.

What this will change for you

As a user, the main change is that your channels.scm configuration files, if they refer to the git.savannah.gnu.org URL, should be changed to refer to https://codeberg.org/guix/guix.git once the migration is complete. But don’t worry: guix pull will tell you if/when you need to update your config files, and the old URL will remain a mirror for at least a year anyway.
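
For instance, a minimal channels.scm pointing the 'guix channel at Codeberg could look like the following sketch; the channel introduction shown is the standard one from the Guix manual, and you would add any extra channels as usual:

;; ~/.config/guix/channels.scm
(list (channel
        (name 'guix)
        (url "https://codeberg.org/guix/guix.git")
        (branch "master")
        (introduction
          (make-channel-introduction
            "9edb3f66fd807b096b48283debdcddccfea34bad"
            (openpgp-fingerprint
              "BBB0 2DDF 2CEA F6A8 0D1D  E643 A2A0 6DF2 A33A 54FA")))))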

Also, channel files produced by guix describe to pin Guix to a specific revision and to re-deploy it later anytime with time-machine will always work, even if they refer to the git.savannah.gnu.org URL, and even when that repository eventually vanishes, thanks to automatic fallback to Software Heritage.
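
Concretely, pinning today’s revision and replaying it later looks like this (the package built here is just an example):

guix describe -f channels > my-channels.scm
guix time-machine -C my-channels.scm -- build hello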

As a contributor, nothing changes for bug reports and patches that you already submitted by email: just keep going!

Once the Guix repository has migrated though, you’ll be able to report bugs at Codeberg and create pull requests for changes. The latter is a relief for many—no need to fiddle with admittedly intricate email setups and procedures—but also a pain point for those who had come to master and appreciate the email workflow.

For this reason, the “User Interfaces” section of the GCD describes the options available besides the Web interface—command-line and Emacs interfaces in particular. Some are still work-in-progress, but it’s exciting to see, for example, that over the past few months many improvements landed in fj.el and that a Forgejo-capable branch of Magit-Forge saw the light. Check it out!

A concern brought up during the discussion is that of having to create an account on Codeberg to be able to contribute—sometimes seen as a hindrance compared to the open-for-all and distributed nature of cooperation by email. This remains an open issue, though hopefully one that will become less acute as support for federation in Forgejo develops. In the meantime, as the GCD states, occasional bug reports and patches sent by email to guix-devel will be accepted.

Moving forward

This was a summary of what is to come; check out the GCD for more info, and reach out to the guix-devel mailing list if you have any questions!

Real work begins now. We hope the migration to Codeberg will be smooth and enjoyable for all. For one thing, it already proved our ability to collectively decide on the project’s future, which is no small feat. There’s a lot to expect from the move in improving the project’s ability to work flawlessly at this scale—more than 100 code contributors and 2,000 commits each month, and more than 33,000 packages available in Guix proper. Let’s make the best of it, and until then, happy hacking!

11 May, 2025 06:30PM by Ludovic Courtès

May 09, 2025

GNU Taler news

GNU Taler 1.0 released

We are happy to announce the release of GNU Taler v1.0.

09 May, 2025 10:00PM

May 08, 2025

health @ Savannah

GNU Health becomes an organization in the Python Package Index - PyPI

We're proud to announce that #GNUHealth is now an organization in the Python Package Index (#PyPI).

The organization makes it easy to find and explore our projects and packages.

This is the URL for the GNU Health organization on PyPI:

https://pypi.org/org/GNUHealth/

We are very grateful to the Python Software Foundation for making GNU Health a community organization within PyPI!

Get this and the latest news about GNU Health from our official Mastodon account:

https://mastodon.social/@gnuhealth

08 May, 2025 11:06AM by Luis Falcon

May 07, 2025

gettext @ Savannah

GNU gettext 0.25 released

Download from https://ftp.gnu.org/pub/gnu/gettext/gettext-0.25.tar.gz

New in this release:


  • Programming languages support:
    • Go:
      • xgettext now supports Go.
      • 'msgfmt -c' now verifies the syntax of translations of Go format strings.
      • New examples 'hello-go' and 'hello-go-http' have been added.
    • TypeScript:
      • xgettext now supports TypeScript and TSX (= TypeScript with JSX  extensions).
    • D:
      • A new library libintl_d.a contains the runtime for using GNU gettext message catalogs in the D programming language.
      • xgettext now supports D.
      • 'msgfmt -c' now verifies the syntax of translations of D format strings.
      • A new example 'hello-d' has been added.
    • Modula-2:
      • A new library libintl_m2.so contains the runtime for using GNU gettext message catalogs in the Modula-2 programming language.
      • xgettext now supports Modula-2.
      • 'msgfmt -c' now verifies the syntax of translations of Modula-2 format strings.
      • A new example 'hello-modula2' has been added.


  • Improvements for maintainers:
    • xgettext has two new options, '--no-git' and '--generated', that customize the way the 'POT-Creation-Date' in the POT file is computed.
    • Fixed bad interactions between autoreconf and autopoint.

07 May, 2025 05:15PM by Bruno Haible

May 02, 2025

GNU gettext 0.24.1 released

Download from https://ftp.gnu.org/pub/gnu/gettext/gettext-0.24.1.tar.gz

New in this release:

  • Bug fixes:
    • Fix bad interactions between autoreconf and autopoint.
    • xgettext: Creating the POT file of a package under Git version control is now faster. Also, the use of Git can be turned off by specifying the option --no-git.

02 May, 2025 06:18PM by Bruno Haible

May 01, 2025

www @ Savannah

Malware in Proprietary Software - April 2025 Additions

The initial injustice of proprietary software often leads to further injustices: malicious functionalities.

The introduction of unjust techniques in nonfree software, such as back doors, DRM, tethering, and others, has become ever more frequent. Nowadays, it is standard practice.

We at the GNU Project show examples of malware that has been introduced in a wide variety of products and dis-services people use every day, and of companies that make use of these techniques.

Here are our latest additions

April 2025

Malware in Games

Malware in Appliances

  • The company making a “smart” bassinet called Snoo has locked the most advanced functionalities of the Snoo behind a paywall. This unexpected change mainly affects users who received the appliance as a gift, or bought it second-hand on the assumption that all these functionalities would be available to them, as they used to be. This is another example of the deceptive behavior of proprietary software developers who take advantage of their power over users to change rules at will.

Another malicious feature of the Snoo is the fact that users need to create an account with the company, which thus has access to personal data, location (SSID), appliance log, etc., as well as manual notes about baby history.

01 May, 2025 05:33PM by Rob Musial

GNU Health

GNU Health Hospital Information System 5.0 enters alpha

We are very happy to announce that the upcoming version of the GNU Health Hospital Information System has entered the feature-complete alpha stage. GNU Health HIS 5.0 represents over a year of work and is the largest release yet in terms of functionality and refactoring.

GNU Health HIS 5.0 is expected to be released by the end of June.

This new release comes after over a year of development to deliver state-of-the-art libre technology and user experience. In a nutshell:

  • Tryton 7.0 LTS support
  • New functionality for patient procedures and medical interventions
  • Improved reporting and analytics
  • Enhanced the Laboratory Information System (GNU LIMS – Occhiolino)
  • New features on patient obstetric history and pregnancy related evaluations
  • Improved ergonomics and views on demographics and patient related information.
  • Improved medical genetics and family history taking. Update to the latest genes, proteins and natural variants datasets from UniProt and HUGO
  • Enhanced socioeconomic and family functionality assessment
  • Extensively revised Medical Imaging, DICOM worklists and Orthanc packages
  • Reorganized nursing and ambulatory care packages
  • Enhanced patient body composition and anthropometrics
  • Enhanced “Focus on” patient section, including automated settings and mental health
  • New insurance and billing features for medical interventions and insurance plans.
  • Improved patient safety and allergic conditions checks and prescription writing

On the technical side we have worked on:

  • Migration to Python Poetry and pyproject.toml from setuptools
  • Increased modularity and minimized dependencies among packages
  • Simplified installation and administration (Virtual machine images, pip, ansible)
  • Improved stability through use of virtual environments in the installation
  • Over 30 localization and language teams at Codeberg.

At this point, our focus is on testing, translation, packaging and documentation. In the coming days we’ll migrate our community server so we can all test the upcoming version.

For those of you on GNU Health 4.4, please start planning the migration to GNU Health HIS 5.0. This new version is a major leap that delivers many benefits, so we highly encourage you to upgrade. As always, the migration methods and tools are included.

We’d like to invite you to translate GNU Health at the Codeberg Weblate translation instance and to report any issues you may find during this period.

Don’t forget to follow us on Mastodon (https://mastodon.social/@gnuhealth) to get the latest on this and other GNU Health news!

Stay tuned and happy hacking!

About GNU Health

GNU Health is a libre, community-driven project from GNU Solidario, a non-profit humanitarian organization focused on Social Medicine. Our project has been adopted by public and private health institutions and laboratories, multilateral organizations and national public health systems around the world.

The GNU Health project provides the tools for individuals, health professionals, institutions and governments to proactively assess and improve the underlying determinants of health, from the socioeconomic agents to the molecular basis of disease. From primary health care to precision medicine.

The following are the main components that make up the GNU Health ecosystem:

  • Hospital Management (HMIS)
  • Social Medicine and Public Health
  • Laboratory Management (Occhiolino)
  • Personal Health Record (MyGNUHealth)
  • Bioinformatics and Medical Genetics
  • Thalamus and Federated health networks
  • GNU Health embedded on Single Board devices

GNU Health is an official GNU (www.gnu.org) package, awarded the Free Software Foundation Award for Projects of Social Benefit. GNU Health has been declared a Digital Public Good, and has been adopted by many hospitals, governments and multilateral organizations around the globe.

01 May, 2025 11:59AM by Luis Falcon

April 30, 2025

Simon Josefsson

Building Debian in a GitLab Pipeline

After thinking about multi-stage Debian rebuilds I wanted to implement the idea. Recall the illustration from that earlier post.

Earlier I rebuilt all packages that make up the difference between Ubuntu and Trisquel; about 42% of them turned out to be bit-by-bit identical. To check the generality of my approach, I rebuilt the difference between Debian and Devuan too. That was the debdistreproduce project. It “only” had to orchestrate building up to around 500 packages for each distribution and architecture.

Differential reproducible rebuilds don’t give you the full picture: they ignore the packages shared between the distributions, which make up over 90% of the packages. So I felt a desire to do full archive rebuilds. The motivation is that in order to trust Trisquel binary packages, I need to trust Ubuntu binary packages (because they make up 90% of the Trisquel packages), and many of those Ubuntu binaries are derived from Debian source packages. How to approach all of this? Last year I created the debdistrebuild project, and did top-50 popcon package rebuilds of Debian bullseye, bookworm, trixie, and Ubuntu noble and jammy, on a mix of amd64 and arm64. The amount of reproducibility was lower; primarily the differences were caused by using different build inputs.

Last year I spent (too much) time creating a mirror of snapshot.debian.org, to be able to have older packages available for use as build inputs. I have two copies hosted at different datacentres for reliability and archival safety. At the time, snapshot.d.o had serious rate-limiting, making it pretty unusable for massive rebuild usage or even basic downloads. Watching the multi-month download complete last year had a meditating effect. The completion of my snapshot download coincided with me realizing something about the nature of rebuilding packages. Let me give a recap below of the idempotent rebuilds idea, because it motivates my work to build all of Debian from a GitLab pipeline.

One purpose of my effort is to be able to trust the binaries that I use on my laptop. I believe that without building binaries from source code, there is no practically feasible way to trust binaries. To trust any binary you receive, you could disassemble the bits and audit the assembler instructions for the CPU you will execute it on; doing that at an OS-wide level is impractical. A more practical approach is to audit the source code, and then confirm that the binary is 100% bit-by-bit identical to one that you can build yourself (from the same source) on your own trusted toolchain. This is similar to a reproducible build.

My initial goal with debdistrebuild was to get to 100% bit-by-bit identical rebuilds, and then I would have trustworthy binaries. Or so I thought. This also appears to be the goal of reproduce.debian.net. They want to reproduce the official Debian binaries. That is a worthy and important goal. They achieve this by building packages using the build inputs that were used to build the binaries. The build inputs are earlier versions of Debian packages (not necessarily from any public Debian release), archived at snapshot.debian.org.

I realized that these rebuilds would not be sufficient for me: they don’t solve the problem of how to trust the toolchain. Let’s assume the reproduce.debian.net effort succeeds and is able to 100% bit-by-bit identically reproduce the official Debian binaries, which appears to be within reach. To have trusted binaries we would “only” have to audit the source code for the latest version of the packages AND audit the toolchain used. There is no escaping from auditing all the source code — that’s what I think we all would prefer to focus on, to be able to improve upstream source code.

The trouble is auditing the toolchain. With the reproduce.debian.net approach, that is a recursive problem going back to really ancient Debian packages, some of which may no longer build or work, or even be legally distributable. Auditing all those old packages is a LARGER effort than auditing all current packages! Auditing old packages is also of less use for making contributions: those releases are old, and chances are any improvements have already been implemented and released, or are no longer applicable because the projects have evolved since the earlier version.

See where this is going now? I reached the conclusion that reproducing official binaries using the same build inputs is not what I’m interested in. I want to be able to build the binaries that I use from source using a toolchain that I can also build from source. And preferably that all of this is using latest version of all packages, so that I can contribute and send patches for them, to improve matters.

The toolchain that reproduce.debian.net is using is not trustworthy unless all those ancient packages are audited or rebuilt bit-by-bit identically, and I don’t see any practical way forward to achieve that goal. Nor have I seen anyone working on that problem. It is possible to do, though, but I think there are simpler ways to achieve the same goal.

My approach to reach trusted binaries on my laptop appears to be a three-step effort:

  • Encourage an idempotently rebuildable Debian archive, i.e., a Debian archive that can be 100% bit-by-bit identically rebuilt using Debian itself.
  • Construct a smaller number of binary *.deb packages based on Guix binaries that, when used as build inputs (potentially iteratively), lead to 100% bit-by-bit identical packages as in step 1.
  • Encourage a freedom-respecting distribution, similar to Trisquel, derived from this idempotently rebuildable Debian.

How to go about achieving this? Today’s Debian build architecture is something that lacks transparency and end-user control. The build environment and signing keys are managed by, or influenced by, unidentified people following undocumented (or at least not public) security procedures, under unknown legal jurisdictions. I always wondered why none of the Debian derivatives have adopted a modern GitDevOps-style approach as a method to improve binary build transparency; maybe I missed some project?

If you want to contribute to some GitHub or GitLab project, you click the ‘Fork’ button and get a CI/CD pipeline running which rebuilds artifacts for the project. This makes it easy for people to contribute, and you get good QA control because the entire chain up until the artifact release is produced and tested. At least in theory. Many projects are behind on this, but it seems like a useful goal for all projects. This is also liberating: all users are able to reproduce artifacts. There is no longer any magic involved in preparing release artifacts. As we’ve seen with many software supply-chain security incidents over the past years, where the “magic” happens is a good place to introduce malicious code.

To allow me to continue with my experiment, I thought the simplest way forward was to set up a GitDevOps-centric and user-controllable way to build the entire Debian archive. Let me introduce the debdistbuild project.

Debdistbuild is a re-usable GitLab CI/CD pipeline, similar to the Salsa CI pipeline. It provides one “build” job definition and one “deploy” job definition. The pipeline can run on GitLab.org Shared Runners or you can set up your own runners, like my GitLab riscv64 runner setup. I have concerns about relying on GitLab (both as software and as a service), but my ideas are easy to transfer to some other GitDevSecOps setup such as Codeberg.org. Self-hosting GitLab, including self-hosted runners, is common today, and Debian relies increasingly on Salsa for this. All of the build infrastructure could be hosted on Salsa eventually.
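
For illustration, a project reusing the pipeline might include it from its .gitlab-ci.yml roughly as follows; the project path, file name and variables here are placeholders, not Debdistbuild’s actual interface:

include:
  - project: 'debdistutils/debdistbuild'
    file: '/debdistbuild.yml'

variables:
  PACKAGE: 'hello'
  VERSION: '2.10-3'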

The build job is simple. From within an official Debian container image, it builds packages using dpkg-buildpackage, essentially by invoking the following commands:

sed -i 's/ deb$/ deb deb-src/' /etc/apt/sources.list.d/*.sources
apt-get -o Acquire::Check-Valid-Until=false update
apt-get dist-upgrade -q -y
apt-get install -q -y --no-install-recommends build-essential fakeroot
env DEBIAN_FRONTEND=noninteractive \
    apt-get build-dep -y --only-source $PACKAGE=$VERSION
useradd -m build
DDB_BUILDDIR=/build/reproducible-path
chgrp build $DDB_BUILDDIR
chmod g+w $DDB_BUILDDIR
su build -c "apt-get source --only-source $PACKAGE=$VERSION" > ../$PACKAGE_$VERSION.build
cd $DDB_BUILDDIR
su build -c "dpkg-buildpackage"
cd ..
mkdir out
mv -v $(find $DDB_BUILDDIR -maxdepth 1 -type f) out/

The deploy job is also simple. It commits artifacts to a Git project, using Git-LFS to handle large objects, essentially something like this:

if ! grep -q '^pool/**' .gitattributes; then
    git lfs track 'pool/**'
    git add .gitattributes
    git commit -m"Track pool/* with Git-LFS." .gitattributes
fi
POOLDIR=$(if test "$(echo "$PACKAGE" | cut -c1-3)" = "lib"; then C=4; else C=1; fi; echo "$PACKAGE" | cut -c1-$C)
mkdir -pv pool/main/$POOLDIR/
rm -rfv pool/main/$POOLDIR/$PACKAGE
mv -v out pool/main/$POOLDIR/$PACKAGE
git add pool
git commit -m"Add $PACKAGE." -m "$CI_JOB_URL" -m "$VERSION" -a
if test "${DDB_GIT_TOKEN:-}" = ""; then
    echo "SKIP: Skipping git push due to missing DDB_GIT_TOKEN (see README)."
else
    git push -o ci.skip
fi

That’s it! The actual implementation is a bit longer, but the main differences are in logging and error handling.

You may review the source code of the base Debdistbuild pipeline definition, the base Debdistbuild script and the rc.d/-style scripts implementing the build.d/ process and the deploy.d/ commands.

There was one complication related to artifact size. GitLab.org job artifacts are limited to 1GB. Several packages in Debian produce artifacts larger than this. What to do? GitLab supports up to 5GB for files stored in its package registry, but this limit is too close for my comfort, having seen some multi-GB artifacts already. I made the build job optionally upload artifacts to an S3 bucket using a SHA256-hashed file hierarchy. I’m using Hetzner Object Storage but there are many S3 providers around, including self-hosting options. This hierarchy is compatible with the Git-LFS .git/lfs/objects/ hierarchy, and it is easy to set up a separate Git-LFS object URL to allow Git-LFS object downloads from the S3 bucket. In this mode, only Git-LFS stubs are pushed to the git repository. It should have no trouble handling the large number of files, since I have earlier experience with Apt mirrors in Git-LFS.
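
As a sketch of that setup, a committed .lfsconfig file can point Git-LFS at a separate object server; the URL below is a placeholder:

# .lfsconfig at the repository root: 'git lfs pull' will then fetch
# objects from this URL instead of the default LFS endpoint.
[lfs]
    url = https://objects.example.com/debdistbuild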

To speed up job execution, and to guarantee a stable build environment, instead of installing build-essential packages on every build job execution, I prepare some build container images. The project responsible for this is tentatively called stage-N-containers. Right now it creates containers suitable for rolling builds of trixie on amd64, arm64, and riscv64, and a container intended for use as the stage-0, based on the 20250407 Docker images of bookworm on amd64 and arm64 using the snapshot.d.o 20250407 archive. Or actually, I’m using snapshot-cloudflare.d.o because of download speed and reliability. I would have preferred to use my own snapshot mirror with Hetzner bandwidth; alas, the Debian snapshot team has concerns about me publishing the list of (SHA1 hash) filenames publicly, and I haven’t bothered to set up non-public access.

Debdistbuild has built around 2,500 packages for bookworm on amd64 and bookworm on arm64. To confirm the generality of my approach, it also builds trixie on amd64, trixie on arm64 and trixie on riscv64. The riscv64 builds are all on my own hosted runners. For amd64 and arm64, my own runners are only used for large packages where the GitLab.com shared runners run into the 3-hour time limit.

What’s next in this venture? Some ideas include:

  • Optimize the stage-N build process by identifying the transitive closure of build dependencies from some initial set of packages.
  • Create a build orchestrator that launches pipelines based on the previous list of packages, as necessary to fill the archive with the needed packages. Currently I’m using a basic /bin/sh for loop around curl to trigger GitLab CI/CD pipelines with names derived from https://popcon.debian.org/ (see the sketch after this list).
  • Create and publish a dists/ sub-directory, so that it is possible to use the newly built packages in the stage-1 build phase.
  • Produce diffoscope-style differences of built packages, both stage0 against official binaries and between stage0 and stage1.
  • Create the stage-1 build containers and stage-1 archive.
  • Review build failures. On amd64 and arm64 the list is small (below 10 out of ~5,000 builds), but on riscv64 there is an icache-related problem affecting the Java JVM that triggers build failures.
  • Provide GitLab-pipeline-based builds of the Debian docker container images, cloud images, debian-live CD and debian-installer ISOs.
  • Provide integration with Sigstore and Sigsum for signing of Debian binaries with transparency-safe properties.
  • Implement a simple replacement for dpkg and apt using /bin/sh, for use during bootstrapping when neither packaging tool is available.
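
For illustration, the trigger loop mentioned in the second item could look roughly like this, using GitLab’s pipeline trigger API; the project ID, trigger token and variable name are placeholders, not the actual Debdistbuild setup:

# Trigger one pipeline per package, taking names from popcon's by_inst list.
for p in $(curl -s https://popcon.debian.org/by_inst |
           awk '/^[0-9]/ {print $2}' | head -100); do
  curl -s -X POST \
       -F "token=$TRIGGER_TOKEN" \
       -F "ref=main" \
       -F "variables[PACKAGE]=$p" \
       "https://gitlab.com/api/v4/projects/$PROJECT_ID/trigger/pipeline"
done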

What do you think?

30 April, 2025 09:25AM by simon

April 28, 2025

libsigsegv @ Savannah

GNU libsigsegv 2.15 is released

GNU libsigsegv version 2.15 is released.

New in this release:

  • Added support for Linux/PowerPC (32-bit) with musl libc.
  • Added support for Hurd/x86_64.
  • Added support for macOS/x86_64 with clang 15 or newer.
  • Optimize distinction between stack overflow and other fault on AIX 7.


Download: https://ftp.gnu.org/gnu/libsigsegv/libsigsegv-2.15.tar.gz

28 April, 2025 04:45PM by Bruno Haible

April 27, 2025

GNU Taler news

Taler Mailbox and Directory service released

We are happy to announce the release of two new GNU Taler components: the Taler Directory (TalDir) and Mailbox services. Future versions of the Taler Wallet will integrate with the Taler Directory and Mailbox in order to deliver a smooth user experience for peer-to-peer payments.

27 April, 2025 10:00PM

April 26, 2025

remotecontrol @ Savannah

April 23, 2025

GNU Taler news

Taler iOS wallet independent security audit report published

RadicallyOpenSecurity performed an external crystal-box security audit of the GNU Taler iOS wallet (excluding wallet-core), funded by NGI. You can find the final report here. We have already addressed all significant findings, except for enabling FaceID/TouchID for using the app, which remains a feature on our roadmap to be addressed in the next few months. We thank RadicallyOpenSecurity for their work and the European Commission's Horizon 2020 NGI initiative for funding the development of the iOS wallet, including the security review.

23 April, 2025 10:00PM

April 21, 2025

Jose E. Marchesi

SUPPER, a "modern" stropping regime for Algol 68

A draft of a proposed GNU extension to the Algol 68 programming language has been published today at https://algol68-lang.org/docs/GNU68-2025-004-supper.pdf.

SUPPER stropping in Algol 68

This new stropping regime aims to be more appealing to contemporary programmers, and more convenient to use in today's computing systems, while at the same time retaining the full expressive power of a stropped language and remaining 100% backwards compatible as a super-extension.

The stropping regime has already been implemented in the GCC Algol 68 front-end (https://gcc.gnu.org/wiki/Algol68FrontEndGCC) and in the Emacs a68-mode, which provides full automatic indentation and syntax highlighting.

The sources of the godcc program have already been transitioned to the new regime, and the result is quite satisfactory. Check it out!

Comments and suggestions for the draft are very welcome, and would help to move the draft forward to a final state. Please send them to algol68@gcc.gnu.org.

Salud, and happy Easter everyone!

21 April, 2025 12:00AM

April 20, 2025

gperf @ Savannah

GNU gperf 3.3 released

Download from https://ftp.gnu.org/gnu/gperf/gperf-3.3.tar.gz

New in this release:

  • Speedup: gperf is now between 2x and 2.5x faster.

20 April, 2025 12:43PM by Bruno Haible

April 19, 2025

unifont @ Savannah

Unifont 16.0.03 Released

19 April 2025 Unifont 16.0.03 is now available.  This is a minor release with many glyph improvements.  See the ChangeLog file for details.

Download this release from GNU server mirrors at:

     https://ftpmirror.gnu.org/unifont/unifont-16.0.03/

or if that fails,

     https://ftp.gnu.org/gnu/unifont/unifont-16.0.03/

or, as a last resort,

     ftp://ftp.gnu.org/gnu/unifont/unifont-16.0.03/

These files are also available on the unifoundry.com website:

     https://unifoundry.com/pub/unifont/unifont-16.0.03/

Font files are in the subdirectory

     https://unifoundry.com/pub/unifont/unifont-16.0.03/font-builds/

A more detailed description of font changes is available at

      https://unifoundry.com/unifont/index.html

and of utility program changes at

      https://unifoundry.com/unifont/unifont-utilities.html

Information about Hangul modifications is at

      https://unifoundry.com/hangul/index.html

and

      http://unifoundry.com/hangul/hangul-generation.html

Enjoy!

19 April, 2025 04:08PM by Paul Hardy

April 17, 2025

Simon Josefsson

Verified Reproducible Tarballs

Remember the XZ Utils backdoor? One factor that enabled the attack was poor auditing of the release tarballs for differences compared to the Git version controlled source code. This proved to be a useful place to distribute malicious data.

The differences between release tarballs and upstream Git sources are typically vendored and generated files. Lots of them. Auditing all source tarballs in a distribution for similar issues is hard and boring work for humans. Wouldn’t it be better if that human auditing time could be spent auditing the actual source code stored in upstream version control instead? That’s where auditing time would help the most.

Are there better ways to address the concern about differences between version control sources and tarball artifacts? Let’s consider some approaches:

  • Stop publishing (or at least stop building from) source tarballs that differ from version control sources.
  • Create recipes for how to derive the published source tarballs from version control sources. Verify that independently from upstream.

While I like the properties of the first solution, and have made an effort to support that approach, I don’t think normal source tarballs are going away any time soon. I am also concerned that it may not even be a desirable complete solution to this problem. We may need tarballs with pre-generated content in them for various reasons that aren’t entirely clear to us today.

So let’s consider the second approach. It could help while waiting for more experience with the first approach, to see if there are any fundamental problems with it.

How do you know that the XZ release tarballs were actually derived from their version control sources? The same for Gzip? Coreutils? Tar? Sed? Bash? GCC? We don’t know this! I am not aware of any automated or collaborative effort to perform this independent confirmation, nor am I aware of anyone attempting to do this on a regular basis. We would want to be able to do this in the year 2042 too. I think the best way to get there is to do the verification continuously in a pipeline, fixing bugs as time passes. The current state of the art seems to be that people audit the differences manually and hope to find something. I suspect many package maintainers ignore the problem, take the release source tarballs, and trust upstream about this.

We can do better.

I have launched a project to set up a GitLab pipeline that invokes per-release scripts to rebuild each release artifact from git sources. Currently it only contains recipes for projects that I released myself: releases which were done in a controlled way, with considerable care to make reproducing the tarballs possible. The project homepage is here:

https://gitlab.com/debdistutils/verify-reproducible-releases

The project is able to reproduce the release tarballs for Libtasn1 v4.20.0, InetUtils v2.6, Libidn2 v2.3.8, Libidn v1.43, and GNU SASL v2.2.2. You can see this in a recent successful pipeline. All of those releases were prepared using Guix, and I’m hoping the Guix time-machine will make it possible to keep re-generating these tarballs for many years to come.

I spent some time trying to reproduce the current XZ release tarball for version 5.8.1. That would have been a nice example, wouldn’t it? First I had to somehow mimic upstream’s build environment. The XZ release tarball contains GNU Libtool files that are identified with version 2.5.4.1-baa1-dirty. I initially assumed this was due to the maintainer having installed libtool from git locally (after making some modifications) and having made the XZ release using it. Later I learned that it may actually be coming from ArchLinux, which ships this particular libtool version. It seems weird for a distribution to use libtool built from a non-release tag, and furthermore to apply patches to it, but things are what they are. I made some effort to set up an ArchLinux build environment; however, the now-current Gettext version in ArchLinux seems to be more recent than the one that was used to prepare the XZ release. I don’t know enough about ArchLinux to set up an environment corresponding to an earlier version of ArchLinux, which would be required to finish this. I gave up; maybe the XZ release wasn’t prepared on ArchLinux after all. Actually, XZ became a good example for this writeup anyway: while you would think this should be trivial, the fact is that it isn’t! (There is another aspect here: fingerprinting the versions used to prepare release tarballs allows you to infer what kind of OS maintainers are using to make releases on, which is interesting on its own.)

I made some small attempts to reproduce the tarball for GNU Shepherd version 1.0.4 too, but I still haven’t managed to complete it.

Do you want a supply-chain challenge for the Easter weekend? Pick some well-known software and try to re-create the official release tarballs from the corresponding Git checkout. Is anyone able to reproduce anything these days? Bonus points for wrapping it up as a merge request to my project.
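
As a minimal sketch of the exercise, with a placeholder project and the usual autotools steps (real projects will need their exact bootstrap tooling and tool versions):

# Rebuild a release tarball from its git tag, then compare its checksum
# against that of the official tarball, byte for byte.
git clone https://example.org/foo.git && cd foo
git checkout v1.2.3
./bootstrap && ./configure && make dist
sha256sum foo-1.2.3.tar.gz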

Happy Supply-Chain Security Hacking!

17 April, 2025 07:24PM by simon

April 11, 2025

gcl @ Savannah

Small release errata

Greetings!  While these tiny issues will likely not affect many users, if
any, there are alas a few tiny errata in the 2.7.1 tarball release, posted
here just for those interested.  They will of course be incorporated in
the next release.


modified   gcl/debian/rules
@@ -138,7 +138,7 @@ clean: debian/control debian/gcl.templates
  rm -rf $(INS) debian/substvars debian.upstream
  rm -rf *stamp build-indep
  rm -f  debian/elpa-gcl$(EXT).elpa debian/gcl$(EXT)-pkg.el
- rm -rf $(EXT_TARGS) info/gcl$(EXT)*.info*
+ rm -rf $(EXT_TARGS) info/gcl$(EXT)*.info* gcl_pool
 
 debian-clean: debian/control debian/gcl.templates
  dh_testdir
modified   gcl/git.tag
@@ -1,2 +1,2 @@
-"Version_2_7_0"
+"Version_2_7_1"
 
modified   gcl/o/alloc.c
@@ -707,6 +707,7 @@ empty_relblock(void) {
   for (;!rb_emptyp();) {
     tm_table[t_relocatable].tm_adjgbccnt--;
     expand_contblock_index_space();
+    expand_contblock_array();
     GBC(t_relocatable);
   }
   sSAleaf_collection_thresholdA->s.s_dbind=o;

11 April, 2025 10:06PM by Camm Maguire

GCL 2.7.1 has been released

Greetings!  The GCL team is happy to announce the release of version
2.7.1, the culmination of many years of work and a major development
in the evolution of GCL.  Please see http://www.gnu.org/software/gcl for
downloading information.

11 April, 2025 02:31PM by Camm Maguire

Gary Benson

Python antipattern: Close in finally

Don’t do this:

thing = Thing()
try:
    thing.do_stuff()
finally:
    thing.close()

Do do this:

from contextlib import closing

with closing(Thing()) as thing:
    thing.do_stuff()

Why is the second better? Using contextlib.closing() ties closing the item to its creation. These baby examples are about equally easy to reason about, with only a single line in the try block, but consider what happens if/when more lines get added in the future: in the first example, the close moves away, potentially offscreen, but that doesn’t happen in the second.
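
For reference, contextlib.closing() can be thought of as roughly the following generator-based context manager (the real implementation is a small class, but the behavior is equivalent):

from contextlib import contextmanager

@contextmanager
def closing(thing):
    # Hand the thing to the with-block, and guarantee close() runs
    # afterwards, whether the block completed normally or raised.
    try:
        yield thing
    finally:
        thing.close()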

11 April, 2025 10:27AM by gbenson

April 10, 2025

GNUnet News

GNUnet 0.24.1

This is a bugfix release for GNUnet 0.24.0. It fixes some regressions and minor bugs.

Links

The GPG key used to sign is: 3D11063C10F98D14BD24D1470B0998EF86F59B6A

Note that due to mirror synchronization, not all links may be functional early after the release. For direct access try https://ftp.gnu.org/gnu/gnunet/

10 April, 2025 10:00PM

grep @ Savannah

grep-3.12 released [stable]


This is to announce grep-3.12, a stable release.

It's been nearly two years! There have been two bug fixes and many
harder-to-see improvements via gnulib. Thanks to Paul Eggert for doing
so much of the work and Bruno Haible for all the testing and all he does
to make gnulib a paragon of portable, reliable, top-notch code.

There have been 77 commits by 6 people in the 100 weeks since 3.11.

See the NEWS below for a brief summary.

Thanks to everyone who has contributed!
The following people contributed changes to this release:

  Bruno Haible (5)
  Carlo Marcelo Arenas Belón (1)
  Collin Funk (1)
  Grisha Levit (1)
  Jim Meyering (31)
  Paul Eggert (38)

Jim
 [on behalf of the grep maintainers]
==================================================================

Here is the GNU grep home page:
    https://gnu.org/s/grep/

Here are the compressed sources:
  https://ftp.gnu.org/gnu/grep/grep-3.12.tar.gz   (3.1MB)
  https://ftp.gnu.org/gnu/grep/grep-3.12.tar.xz   (1.9MB)

Here are the GPG detached signatures:
  https://ftp.gnu.org/gnu/grep/grep-3.12.tar.gz.sig
  https://ftp.gnu.org/gnu/grep/grep-3.12.tar.xz.sig

Use a mirror for higher download bandwidth:
  https://www.gnu.org/order/ftp.html

Here are the SHA1 and SHA256 checksums:

  025644ca3ea4f59180d531547c53baeb789c6047  grep-3.12.tar.gz
  ut2lRt/Eudl+mS4sNfO1x/IFIv/L4vAboenNy+dkTNw=  grep-3.12.tar.gz
  4b4df79f5963041d515ef64cfa245e0193a33009  grep-3.12.tar.xz
  JkmyfA6Q5jLq3NdXvgbG6aT0jZQd5R58D4P/dkCKB7k=  grep-3.12.tar.xz

Verify the base64 SHA256 checksum with cksum -a sha256 --check
from coreutils-9.2 or OpenBSD's cksum since 2007.

Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact.  First, be sure to download both the .sig file
and the corresponding tarball.  Then, run a command like this:

  gpg --verify grep-3.12.tar.gz.sig

The signature should match the fingerprint of the following key:

  pub   rsa4096/0x7FD9FCCB000BEEEE 2010-06-14 [SCEA]
        Key fingerprint = 155D 3FC5 00C8 3448 6D1E  EA67 7FD9 FCCB 000B EEEE
  uid                   [ unknown] Jim Meyering <jim@meyering.net>
  uid                   [ unknown] Jim Meyering <meyering@fb.com>
  uid                   [ unknown] Jim Meyering <meyering@gnu.org>

If that command fails because you don't have the required public key,
or that public key has expired, try the following commands to retrieve
or refresh it, and then rerun the 'gpg --verify' command.

  gpg --locate-external-key jim@meyering.net

  gpg --recv-keys 7FD9FCCB000BEEEE

  wget -q -O- 'https://savannah.gnu.org/project/release-gpgkeys.php?group=grep&download=1' | gpg --import -

As a last resort to find the key, you can try the official GNU
keyring:

  wget -q https://ftp.gnu.org/gnu/gnu-keyring.gpg
  gpg --keyring gnu-keyring.gpg --verify grep-3.12.tar.gz.sig

This release is based on the grep git repository, available as

  git clone https://git.savannah.gnu.org/git/grep.git

with commit 3f8c09ec197a2ced82855f9ecd2cbc83874379ab tagged as v3.12.

For a summary of changes and contributors, see:

  https://git.sv.gnu.org/gitweb/?p=grep.git;a=shortlog;h=v3.12

or run this command from a git-cloned grep directory:

  git shortlog v3.11..v3.12

This release was bootstrapped with the following tools:
  Autoconf 2.72.76-2f64
  Automake 1.17.0.91
  Gnulib 2025-04-04 3773db653242ab7165cd300295c27405e4f9cc79

NEWS

* Noteworthy changes in release 3.12 (2025-04-10) [stable]

** Bug fixes

  Searching a directory with at least 100,000 entries no longer fails
  with "Operation not supported" and exit status 2. Now, this prints 1
  and no diagnostic, as expected:
    $ mkdir t && cd t && seq 100000|xargs touch && grep -r x .; echo $?
    1
  [bug introduced in grep 3.11]

  -mN where 1 < N no longer mistakenly lseeks to end of input merely
  because standard output is /dev/null.

** Changes in behavior

  The --unix-byte-offsets (-u) option is gone. In grep-3.7 (2021-08-14)
  it became a warning-only no-op. Before then, it was a Windows-only no-op.

  On Windows platforms and on AIX in 32-bit mode, grep in some cases
  now supports Unicode characters outside the Basic Multilingual Plane.


10 April, 2025 05:04PM by Jim Meyering

gzip @ Savannah

gzip-1.14 released [stable]


This is to announce gzip-1.14, a stable release.

Most notable: "gzip -d" is up to 40% faster on x86_64 CPUs with pclmul
support. Why? Because about half of its time was spent computing a CRC
checksum, and that code is far more efficient now.  Even on 10-year-old
CPUs lacking pclmul support, it's ~20% faster.  Thanks to Lasse Collin
for alerting me to this very early on, to Sam Russell for contributing
gnulib's new crc module and to Bruno Haible and everyone else who keeps
the bar so high for all of gnulib. And as usual, thanks to Paul Eggert
for many contributions everywhere.

There have been 58 commits by 7 people in the 85 weeks since 1.13.

See the NEWS below for a brief summary.

Thanks to everyone who has contributed!
The following people contributed changes to this release:

  Bruno Haible (1)
  Collin Funk (4)
  Jim Meyering (26)
  Lasse Collin (1)
  Paul Eggert (24)
  Sam Russell (1)
  Simon Josefsson (1)

Jim
 [on behalf of the gzip maintainers]
==================================================================

Here is the GNU gzip home page:
    https://gnu.org/s/gzip/

Here are the compressed sources:
  https://ftp.gnu.org/gnu/gzip/gzip-1.14.tar.gz   (1.4MB)
  https://ftp.gnu.org/gnu/gzip/gzip-1.14.tar.xz   (868KB)

Here are the GPG detached signatures:
  https://ftp.gnu.org/gnu/gzip/gzip-1.14.tar.gz.sig
  https://ftp.gnu.org/gnu/gzip/gzip-1.14.tar.xz.sig

Use a mirror for higher download bandwidth:
  https://www.gnu.org/order/ftp.html

Here are the SHA1 and SHA256 checksums:

  27f9847892a1c59b9527469a8a3e5d635057fbdd  gzip-1.14.tar.gz
  YT1upE8SSNc3DHzN7uDdABegnmw53olLPG8D+YEZHGs=  gzip-1.14.tar.gz
  05f44a8a589df0171e75769e3d11f8b11d692f58  gzip-1.14.tar.xz
  Aae4gb0iC/32Ffl7hxj4C9/T9q3ThbmT3Pbv0U6MCsY=  gzip-1.14.tar.xz

Verify the base64 SHA256 checksum with cksum -a sha256 --check
from coreutils-9.2 or OpenBSD's cksum since 2007.

Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact.  First, be sure to download both the .sig file
and the corresponding tarball.  Then, run a command like this:

  gpg --verify gzip-1.14.tar.gz.sig

The signature should match the fingerprint of the following key:

  pub   rsa4096/0x7FD9FCCB000BEEEE 2010-06-14 [SCEA]
        Key fingerprint = 155D 3FC5 00C8 3448 6D1E  EA67 7FD9 FCCB 000B EEEE
  uid                   [ unknown] Jim Meyering <jim@meyering.net>
  uid                   [ unknown] Jim Meyering <meyering@fb.com>
  uid                   [ unknown] Jim Meyering <meyering@gnu.org>

If that command fails because you don't have the required public key,
or that public key has expired, try the following commands to retrieve
or refresh it, and then rerun the 'gpg --verify' command.

  gpg --locate-external-key jim@meyering.net

  gpg --recv-keys 7FD9FCCB000BEEEE

  wget -q -O- 'https://savannah.gnu.org/project/release-gpgkeys.php?group=gzip&download=1' | gpg --import -

As a last resort to find the key, you can try the official GNU
keyring:

  wget -q https://ftp.gnu.org/gnu/gnu-keyring.gpg
  gpg --keyring gnu-keyring.gpg --verify gzip-1.14.tar.gz.sig

This release is based on the gzip git repository, available as

  git clone https://git.savannah.gnu.org/git/gzip.git

with commit fbc4883eb9c304a04623ac506dd5cf5450d055f1 tagged as v1.14.

For a summary of changes and contributors, see:

  https://git.sv.gnu.org/gitweb/?p=gzip.git;a=shortlog;h=v1.14

or run this command from a git-cloned gzip directory:

  git shortlog v1.13..v1.14

This release was bootstrapped with the following tools:
  Autoconf 2.72.76-2f64
  Automake 1.17.0.91
  Gnulib 2025-01-31 553ab924d2b68d930fae5d3c6396502a57852d23

NEWS

* Noteworthy changes in release 1.14 (2025-04-09) [stable]

** Bug fixes

  'gzip -d' no longer omits the last partial output buffer when the
  input ends unexpectedly on an IBM Z platform.
  [bug introduced in gzip-1.11]

  'gzip -l' no longer misreports lengths of multimember inputs.
  [bug introduced in gzip-1.12]

  'gzip -S' now rejects suffixes containing '/'.
  [bug present since the beginning]

** Changes in behavior

  The GZIP environment variable is now silently ignored except for the
  options -1 (--fast) through -9 (--best), --rsyncable, and --synchronous.
  This brings gzip into line with more-cautious compressors like zstd
  that limit environment variables' effect to relatively innocuous
  performance issues.  You can continue to use scripts to specify
  whatever gzip options you like.

  'zmore' is no longer installed on platforms lacking 'more'.

** Performance improvements

  gzip now decompresses significantly faster by computing CRCs via a
  slice by 8 algorithm, and faster yet on x86-64 platforms that
  support pclmul instructions.


10 April, 2025 04:34AM by Jim Meyering

April 09, 2025

coreutils @ Savannah

coreutils-9.7 released [stable]


This is to announce coreutils-9.7, a stable release.

There have been 63 commits by 11 people in the 12 weeks since 9.6,
with a focus on bug fixing and stabilization.

See the NEWS below for a brief summary.

Thanks to everyone who has contributed!
The following people contributed changes to this release:

  Bruno Haible (1)                Jim Meyering (2)
  Collin Funk (2)                 Lukáš Zaoral (1)
  Daniel Hofstetter (1)           Mike Swanson (1)
  Frédéric Yhuel (1)              Paul Eggert (21)
  G. Branden Robinson (1)         Pádraig Brady (32)
  Grisha Levit (1)

Pádraig [on behalf of the coreutils maintainers]
==================================================================

Here is the GNU coreutils home page:
    https://gnu.org/s/coreutils/

Here are the compressed sources:
  https://ftp.gnu.org/gnu/coreutils/coreutils-9.7.tar.gz   (15MB)
  https://ftp.gnu.org/gnu/coreutils/coreutils-9.7.tar.xz   (5.9MB)

Here are the GPG detached signatures:
  https://ftp.gnu.org/gnu/coreutils/coreutils-9.7.tar.gz.sig
  https://ftp.gnu.org/gnu/coreutils/coreutils-9.7.tar.xz.sig

Use a mirror for higher download bandwidth:
  https://www.gnu.org/order/ftp.html

Here are the SHA1 and SHA256 checksums:

  File: coreutils-9.7.tar.gz
  SHA1 sum:   bfebebaa1aa59fdfa6e810ac07d85718a727dcf6
  SHA256 sum: 0898a90191c828e337d5e4e4feb71f8ebb75aacac32c434daf5424cda16acb42

  File: coreutils-9.7.tar.xz
  SHA1 sum:   920791e12e7471479565a066e116a087edcc0df9
  SHA256 sum: e8bb26ad0293f9b5a1fc43fb42ba970e312c66ce92c1b0b16713d7500db251bf

Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact.  First, be sure to download both the .sig file
and the corresponding tarball.  Then, run a command like this:

  gpg --verify coreutils-9.7.tar.gz.sig

The signature should match the fingerprint of the following key:

  pub   rsa4096/0xDF6FD971306037D9 2011-09-23 [SC]
        Key fingerprint = 6C37 DC12 121A 5006 BC1D  B804 DF6F D971 3060 37D9
  uid                   [ultimate] Pádraig Brady <P@draigBrady.com>
  uid                   [ultimate] Pádraig Brady <pixelbeat@gnu.org>

If that command fails because you don't have the required public key,
or that public key has expired, try the following commands to retrieve
or refresh it, and then rerun the 'gpg --verify' command.

  gpg --locate-external-key P@draigBrady.com

  gpg --recv-keys DF6FD971306037D9

  wget -q -O- 'https://savannah.gnu.org/project/release-gpgkeys.php?group=coreutils&download=1' | gpg --import -

As a last resort to find the key, you can try the official GNU
keyring:

  wget -q https://ftp.gnu.org/gnu/gnu-keyring.gpg
  gpg --keyring gnu-keyring.gpg --verify coreutils-9.7.tar.gz.sig

This release is based on the coreutils git repository, available as

  git clone https://git.savannah.gnu.org/git/coreutils.git

with commit 8e075ff8ee11692c5504d8e82a48ed47a7f07ba9 tagged as v9.7.

For a summary of changes and contributors, see:

  https://git.sv.gnu.org/gitweb/?p=coreutils.git;a=shortlog;h=v9.7

or run this command from a git-cloned coreutils directory:

  git shortlog v9.6..v9.7

This release was bootstrapped with the following tools:
  Autoconf 2.72.70-9ff9
  Automake 1.16.5
  Gnulib 2025-04-07 41e7b7e0d159d8ac0eb385964119f350ac9dfc3f
  Bison 3.8.2

NEWS

* Noteworthy changes in release 9.7 (2025-04-09) [stable]

** Bug fixes

  'cat' would fail with "input file is output file" if input and
  output are the same terminal device and the output is append-only.
  [bug introduced in coreutils-9.6]

  'cksum -a crc' misbehaved on aarch64 with 32-bit uint_fast32_t.
  [bug introduced in coreutils-9.6]

  dd with the 'nocache' flag will now detect all failures to drop the
  cache for the whole file.  Previously it might have erroneously succeeded.
  [bug introduced with the "nocache" feature in coreutils-8.11]

  'ls -Z dir' would crash on all systems, and 'ls -l' could crash
  on systems like Android with SELinux but without xattr support.
  [bug introduced in coreutils-9.6]

  'ls -l' could output spurious "Not supported" errors in certain cases,
  like with dangling symlinks on Cygwin.
  [bug introduced in coreutils-9.6]

  timeout would fail to time out commands given infinitesimal timeouts.
  For example, 'timeout 1e-5000 sleep inf' would never time out.
  [bug introduced with timeout in coreutils-7.0]

  sleep, tail, and timeout would sometimes sleep for slightly less
  time than requested.
  [bug introduced in coreutils-5.0]

  'who -m' now outputs entries for remote logins.  Previously login
  entries prefixed with the service (like "sshd") were not matched.
  [bug introduced in coreutils-9.4]

** Improvements

  'logname' now correctly returns the user who logged in to the session
  on more systems.  Previously, on musl or uClibc it would merely
  output the LOGNAME environment variable.


09 April, 2025 11:36AM by Pádraig Brady

diffutils @ Savannah

diffutils-3.12 released [stable]


This is to announce diffutils-3.12, a stable bug-fix release.
Thanks to Paul Eggert and Collin Funk for the bug fixes.

There have been 13 commits by 4 people in the 9 weeks since 3.11.

See the NEWS below for a brief summary.

Thanks to everyone who has contributed!
The following people contributed changes to this release:

  Collin Funk (1)
  Jim Meyering (6)
  Paul Eggert (5)
  Simon Josefsson (1)

Jim [on behalf of the diffutils maintainers]
==================================================================

Here is the GNU diffutils home page:
    https://gnu.org/s/diffutils/

Here are the compressed sources:
  https://ftp.gnu.org/gnu/diffutils/diffutils-3.12.tar.gz   (3.3MB)
  https://ftp.gnu.org/gnu/diffutils/diffutils-3.12.tar.xz   (1.9MB)

Here are the GPG detached signatures:
  https://ftp.gnu.org/gnu/diffutils/diffutils-3.12.tar.gz.sig
  https://ftp.gnu.org/gnu/diffutils/diffutils-3.12.tar.xz.sig

Use a mirror for higher download bandwidth:
  https://www.gnu.org/order/ftp.html

Here are the SHA1 and SHA256 checksums:

  e3f3e8ef171fcb54911d1493ac6066aa3ed9df38  diffutils-3.12.tar.gz
  W+GBsn7Diq0kUAgGYaZOShdSuym31QUr8KAqcPYj+bI=  diffutils-3.12.tar.gz
  c2f302726d2709c6881c4657430a671abe5eedfa  diffutils-3.12.tar.xz
  fIt/n8hgkUH96pzs6FJJ0whiQ5H/Yd7a9Sj8szdyff0=  diffutils-3.12.tar.xz

Verify the base64 SHA256 checksums with 'cksum -a sha256 --check',
available in coreutils since 9.2 and in OpenBSD's cksum since 2007.
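
For example (a sketch; run in the directory holding the tarball):

  echo 'W+GBsn7Diq0kUAgGYaZOShdSuym31QUr8KAqcPYj+bI=  diffutils-3.12.tar.gz' \
    | cksum -a sha256 --check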

Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact.  First, be sure to download both the .sig file
and the corresponding tarball.  Then, run a command like this:

  gpg --verify diffutils-3.12.tar.gz.sig

The signature should match the fingerprint of the following key:

  pub   rsa4096/0x7FD9FCCB000BEEEE 2010-06-14 [SCEA]
        Key fingerprint = 155D 3FC5 00C8 3448 6D1E  EA67 7FD9 FCCB 000B EEEE
  uid                   [ unknown] Jim Meyering <jim@meyering.net>
  uid                   [ unknown] Jim Meyering <meyering@fb.com>
  uid                   [ unknown] Jim Meyering <meyering@gnu.org>

If that command fails because you don't have the required public key,
or that public key has expired, try the following commands to retrieve
or refresh it, and then rerun the 'gpg --verify' command.

  gpg --locate-external-key jim@meyering.net

  gpg --recv-keys 7FD9FCCB000BEEEE

  wget -q -O- 'https://savannah.gnu.org/project/release-gpgkeys.php?group=diffutils&download=1' | gpg --import -

As a last resort to find the key, you can try the official GNU
keyring:

  wget -q https://ftp.gnu.org/gnu/gnu-keyring.gpg
  gpg --keyring gnu-keyring.gpg --verify diffutils-3.12.tar.gz.sig

This release is based on the diffutils git repository, available as

  git clone https://git.savannah.gnu.org/git/diffutils.git

with commit 16681a3cbcea47e82683c713b0dac7d59d85a6fa tagged as v3.12.

For a summary of changes and contributors, see:

  https://git.sv.gnu.org/gitweb/?p=diffutils.git;a=shortlog;h=v3.12

or run this command from a git-cloned diffutils directory:

  git shortlog v3.11..v3.12

This release was bootstrapped with the following tools:
  Autoconf 2.72.76-2f64
  Automake 1.17.0.91
  Gnulib 2025-04-04 3773db653242ab7165cd300295c27405e4f9cc79

NEWS

* Noteworthy changes in release 3.12 (2025-04-08) [stable]

** Bug fixes

  diff -r no longer merely summarizes when comparing an empty regular
  file to a nonempty regular file (a reproduction sketch follows below).
  [bug#76452 introduced in 3.11]

  diff -y no longer crashes when given nontrivial differences.
  [bug#76613 introduced in 3.11]
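
A minimal sketch reproducing the 'diff -r' fix (directory and file
names are illustrative):

  mkdir a b
  : > a/f          # empty regular file
  echo data > b/f  # nonempty regular file
  diff -r a b      # 3.12 now prints the actual differences rather
                   # than a one-line summary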


09 April, 2025 03:16AM by Jim Meyering

April 08, 2025

www @ Savannah

Malware in Proprietary Software - Latest Additions

The initial injustice of proprietary software often leads to further injustices: malicious functionalities.

The introduction of unjust techniques in nonfree software, such as back doors, DRM, tethering, and others, has become ever more frequent. Nowadays, it is standard practice.

We at the GNU Project show examples of malware that has been introduced in a wide variety of products and dis-services people use every day, and of companies that make use of these techniques.

Here are our latest additions.

March 2025

Microsoft's Software is Malware

  • Windows Recall is a feature of Microsoft's Copilot tool that comes preinstalled on AI-specialized computers. Recall records everything users do on their computer and allows them to search the recordings, but it has numerous security flaws and poses a risk to privacy. As Recall cannot be completely uninstalled, disabling it doesn't eliminate the risk because it can be reactivated by malware or misconfiguration. Microsoft says that Recall will not take screenshots of digitally restricted media. Meanwhile, it stores sensitive user information such as passwords and bank account numbers, showing that whereas Microsoft worries somewhat about corporate interests, it couldn't care less about user privacy.
  • Windows Defender deletes downloaded files that it considers malware as soon as they are saved to disk, without requesting permission to do so. Many angry users have complained about this unacceptable behavior over the last few years, and even suggested fixes, but Microsoft has ignored them. It is high time for Windows users to escape Microsoft's tyranny by migrating to a free/libre system.
  • Microsoft has started to show ads in the “Recommended” section of the Windows 11 Start menu. Previously, this section only included recently used documents and images. Now it also contains the icons of apps Microsoft wants to advertise, in the hope that the user will click on one of them, and buy the app. So far, the user can disable the ads, but this doesn't make them more legitimate.
  • In its default configuration, Windows 11 now uploads users' files and personal information to Microsoft's “cloud” without asking permission to do so. This is presented as a convenient backup method, but if the allotted storage capacity is exceeded, the user will need to buy more space, increasing Microsoft's profit. However, this small profit is probably not the company's major reason for making cloud storage the default. Here is an excerpt from the Microsoft Services Agreement (Section 2b): “To the extent necessary to provide the Services to you and others, to protect you and the Services, and to improve Microsoft products and services, you grant to Microsoft a worldwide and royalty-free intellectual property license to use Your Content, for example, to make copies of, retain, transmit, reformat, display, and distribute via communication tools Your Content on the Services.” We strongly suspect that the backed-up material is used to feed Microsoft's greedy “AI.” In addition, it is most likely analysed to better profile users in order to flood them with targeted ads, thereby generating more profit. Users, on the other hand, are at the mercy of any entity that demands their data, let alone of any cracker that breaks into Microsoft's servers. They must escape from this sick environment, and install a sane free/libre system.
  • Outlook has become a “data collection and ad delivery service”. Since Outlook is now integrated with Microsoft “cloud” services, and doesn't support end-to-end encryption, the company has full access to users' emails, contacts, and calendar events. Microsoft may also retrieve credentials associated with any third-party services that are synchronized with Outlook. This trove of personal data enables Microsoft, as well as its commercial partners, to flood users with targeted ads, and possibly to train “artificial intelligences.” Even worse, this data is available to any government that can force Microsoft to hand it over.
  • Microsoft is shutting down Skype on May 5th, 2025. As with other tethered proprietary programs, users have to rely on servers that are controlled by the developer. When these servers shut down, the service disappears. Instead of migrating to the service that Microsoft suggests as a replacement, Skype users should regain control of their communications by switching to one that is based on free software. Jitsi Meet, for example, is appropriate for small video meetings. Anyone can set up a Jitsi server and let other people use it, and indeed many of these are available around the world.
  • A critical vulnerability in Windows systems that support IPv6 was discovered in 2024, 16 years after the first affected system was released. Unless the relevant patch is applied, an attacker can remotely execute arbitrary code on these systems. Microsoft considers exploits “likely.” The same sort of vulnerability in a free/libre operating system would probably be discovered sooner, since many more people would be able to look at the source code.

Adobe's Software is Malware

  • In its terms of service, Adobe gives itself permission to spy on material that people upload to its servers, supposedly for moderation purposes. In spite of Adobe's denial, we can expect that sooner or later it will use this material to train its so-called “artificial intelligence,” and will claim that by agreeing to the terms of service users gave it the right to do so.

Proprietary Sabotage

  • Ubisoft is facing a fraud lawsuit for shutting down the proprietary video game The Crew, which was tethered to its servers. As this game can't be played offline, people who used to think they owned a copy of it are now realizing they only bought a license that could be revoked at will by the developer. This is one more example of what tethering of a proprietary program leads to. If The Crew were free software, its users would be able to set up another server, and keep on playing.


February 2025

Proprietary Back Doors

  • Eclypsium discovered an insecure universal back door on many computers using Gigabyte mainboards. Gigabyte designed their nonfree firmware so they could add a program to Windows to download additional software from the Internet, and run it behind the user's back. To add injury to injury, the back-door program was insecure, and opened ways for crackers to run their own programs on the affected systems, also behind the user's back.

    Gigabyte's “solution” was to ensure the back door would only run programs from Gigabyte. In this case, the back door required the connivance of Windows accepting the program, and running it behind the user's back. Free operating systems rightly ignore such “Greek gifts,” so users of GNU (including GNU/Linux) are safe from this particular back door, even on affected hardware.

    Nonfree software does not make your computer secure—it does the opposite: it prevents you from trying to secure it. When nonfree programs are required for booting and impossible to replace, they are, in effect, a low-level rootkit. All the things that the industry has done to make its power over you secure against you also protect firmware-level rootkits against you.

    Instead of allowing Intel, AMD, Apple and perhaps ARM to impose security through tyranny, we should demand laws that require them to allow users to install their choice of startup software and make available the information needed to develop such. Think of this as right-to-repair at the initialization stage.

    Note: Eclypsium at least mentions the problem of “unwanted behavior within official firmware,” but does not seem to recognize that the only real solution is for firmware to be free, so users can fix these problems without having to rely on the vendor.

08 April, 2025 05:31PM by Rob Musial

April 07, 2025

gperf @ Savannah

GNU gperf 3.2 released

Download from https://ftp.gnu.org/gnu/gperf/gperf-3.2.tar.gz

New in this release:

  • The generated code avoids several types of warnings:
    • "implicit fallthrough" warnings in 'switch' statements.
    • "unused parameter" warnings regarding 'str' or 'len'.
    • "missing initializer for field ..." warnings.
    • "zero as null pointer constant" warnings.


  • The input file may now use Windows line terminators (CR/LF) instead of Unix line terminators (LF). Note: This is an incompatible change. If you want to use a keyword that ends in a CR byte, such as xyz<CR>, write it as "xyz\r".
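
  As a minimal sketch of the new escape (the file name keywords.gperf
  and the keywords are illustrative), a keyword ending in a CR byte is
  written as a quoted string in the keyword section of the input file:

    %%
    "xyz\r"
    abc
    %%

  and the file is processed as usual:

    gperf keywords.gperf > hash.c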

07 April, 2025 10:50AM by Bruno Haible

April 04, 2025

datamash @ Savannah

GNU Datamash 1.9 released

This is to announce datamash-1.9, a stable release.

Home page: https://www.gnu.org/software/datamash

GNU Datamash is a command-line program which performs basic numeric,
textual and statistical operations on input textual data files.

It is designed to be portable and reliable, and to help researchers
easily automate analysis pipelines without writing code or even
short scripts.  It is very friendly to GNU Bash and GNU Make pipelines.
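
For example, an illustrative sketch (input data inline for brevity):

  # mean of column 2, grouped by column 1 (whitespace-delimited)
  printf 'A 10\nA 20\nB 5\n' | datamash -W -g 1 mean 2
  # -> A  15
  #    B  5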

There have been 52 commits by 5 people in the 141 weeks since 1.8.

See the NEWS below for a brief summary.

The following people contributed changes to this release:

  Dima Kogan (1)
  Erik Auerswald (14)
  Georg Sauthoff (4)
  Shawn Wagner (6)
  Timothy Rice (27)

Thanks to everyone who has contributed!

Please report any problem you may experience to the bug-datamash@gnu.org
mailing list.

Happy Hacking!
- Tim

==================================================================

Here is the GNU datamash home page:
    https://gnu.org/s/datamash/

Here are the compressed sources and a GPG detached signature:
  https://ftpmirror.gnu.org/datamash/datamash-1.9.tar.gz
  https://ftpmirror.gnu.org/datamash/datamash-1.9.tar.gz.sig

Use a mirror for higher download bandwidth:
  https://www.gnu.org/order/ftp.html

Here are the SHA1 and SHA256 checksums:

  File: datamash-1.9.tar.gz
  SHA1 sum:   935c9f24a925ce34927189ef9f86798a6303ec78
  SHA256 sum: f382ebda03650dd679161f758f9c0a6cc9293213438d4a77a8eda325aacb87d2

Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact.  First, be sure to download both the .sig file
and the corresponding tarball.  Then, run a command like this:

  gpg --verify datamash-1.9.tar.gz.sig

The signature should match the fingerprint of the following key:

  pub   ed25519 2022-04-05 [SC]
        3338 2C8D 6201 7A10 12A0  5B35 BDB7 2EC3 D3F8 7EE6
  uid   Timothy Rice (Yubikey 5 Nano 13139911) <trice@posteo.net>

If that command fails because you don't have the required public key,
or that public key has expired, try the following command to retrieve
or refresh it, and then rerun the 'gpg --verify' command.

  wget -q https://ftp.gnu.org/gnu/gnu-keyring.gpg
  gpg --keyring gnu-keyring.gpg --verify datamash-1.9.tar.gz.sig

This release is based on the datamash git repository, available as

  git clone https://git.savannah.gnu.org/git/datamash.git

with commit 39101c367a07f2c1aea8f3b540fc490735596e6a tagged as v1.9.

For a summary of changes and contributors, see:

  https://git.sv.gnu.org/gitweb/?p=datamash.git;a=shortlog;h=v1.9

or run this command from a git-cloned datamash directory:

  git shortlog v1.8..v1.9

This release was bootstrapped with the following tools:
  Autoconf 2.72
  Automake 1.17
  Gnulib 2025-03-27 54fc57c23dcd833819a7adbdfcc3bd1c805103a8

NEWS

* Noteworthy changes in release 1.9 (2025-04-05) [stable]


** Changes in Behavior

  datamash(1), decorate(1): Add short options -h and -V for --help and --version
  respectively.

  datamash(1): the rand operation now uses getrandom(2) for generating a random
  seed, instead of relying on date/time/pid mixing.

** New Features

  datamash(1): add operation dotprod for calculating the scalar product of two
  columns.

  datamash(1): Add option -S/--seed to set a specific seed for pseudo-random
  number generation.

  datamash(1): Add option --vnlog to enable experimental support for the vnlog
  format. More about vnlog is at https://github.com/dkogan/vnlog.

  datamash(1): -g/groupby now accepts ranges of columns (e.g. 1-4).
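
  For instance, illustrative sketches of the new features (the exact
  dotprod invocation is assumed here to follow datamash's existing
  pairwise-operation syntax, and data.txt is a hypothetical input file):

    # scalar product of columns 1 and 2: 1*4 + 2*5 + 3*6 = 32
    printf '1 4\n2 5\n3 6\n' | datamash -W dotprod 1:2

    # reproducible pseudo-random operations via -S/--seed
    datamash -S 42 rand 1 < data.txt

    # group by a range of columns
    datamash -W -g 1-2 sum 3 < data.txt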

** Bug Fixes

  datamash(1) now correctly calculates the "antimode" for a sequence
  of numbers.  Problem reported by Kingsley G. Morse Jr. in
  <https://lists.gnu.org/archive/html/bug-datamash/2023-12/msg00003.html>.

  When using the locale's decimal separator as field separator, numeric
  datamash(1) operations now work correctly.  Problem reported by Jérémie
  Roquet in
  <https://lists.gnu.org/archive/html/bug-datamash/2018-09/msg00000.html>
  and by Jeroen Hoek in
  <https://lists.gnu.org/archive/html/bug-datamash/2023-11/msg00000.html>.

  datamash(1): The "getnum" operation now stays inside the specified field.

04 April, 2025 08:52PM by Tim Rice