Aggregation of development blogs from the GNU Project
The initial injustice of proprietary software often leads to further injustices: malicious functionalities.
The introduction of unjust techniques in nonfree software, such as back doors, DRM, tethering, and others, has become ever more frequent. Nowadays, it is standard practice.
We at the GNU Project show examples of malware that has been introduced in a wide variety of products and dis-services people use every day, and of companies that make use of these techniques.
The server will figure out what (if anything) someone asked it to do.
What else will it do with that recording? There are no limits except management's will. It might save some of the utterances it hears, and present them years later to the political police.
Proprietary Surveillance
Although Meta and Yandex have discontinued this type of spying, they may resume it in the future, possibly with other methods, and we don't know which other companies might follow their example. A foolproof way to avoid this sort of tracking is to refrain from installing any proprietary apps on a “smart”phone, especially if the app has a way of identifying users. To avoid proprietary apps, we recommend using the F-Droid store instead of Google Play.
Since most trackers, including the Meta Pixel and Yandex Metrica, are nonfree JavaScript programs, it is also good practice to prevent nonfree JavaScript from running in the browser, with an add-on such as GNU LibreJS.
Malware in Games
Of course, gamers hate Denuvo. But hate is useless. They should go one step further, and stop buying games that use DRM.
03 July, 2025 03:13AM by Rob Musial
Dear community:
I am very happy to announce the release of the 5.0 series of the GNU Health Hospital Information System (HIS). This release is the result of a tremendous amount of work spanning almost the last two years!
Series 5.0 represents a major leap in functionality, the underlying technology, and project development.
Currently we have the vanilla version ready to be downloaded, via gnuhealth-control (see https://docs.gnuhealth.org/his/techguide/installation/vanilla.html#installation-with-gnu-health-control)
Specific GNU/Linux and FreeBSD packages, Ansible packages for HIS 5.0, and virtual machine images will follow in the coming days and weeks.
The following paragraphs summarize the changes and features included in GNU Health HIS 5.0. More features and details have been left out of this document for the sake of brevity; you can consult the Changelog at Codeberg.
Some of the new features include:
Thank you to the GNU and GNU Health community, for delivering freedom, privacy and equity in healthcare around the world ♥
You can find a short PDF presentation I gave some weeks ago at the University of Entre Ríos, Argentina, about the new features in GNU Health HIS 5.0:
https://www.gnuhealth.org/downloads/media/new_features_gnuhealth_50.pdf
PS: In the coming days / weeks, we'll be polishing the documentation for this release. If you have any questions or issues with the installation and/or upgrade, don't hesitate to send us a note at health @ gnu.org. Make sure you subscribe to the list ( https://savannah.gnu.org/mail/?group=health), otherwise your email will be automatically discarded to avoid spam.
We also invite you to join us on Mastodon for the latest news about the GNU Health ecosystem.
https://mastodon.social/@gnuhealth
Happy hacking
Luis
30 June, 2025 12:15AM by Luis Falcon
Two security issues, known as CVE-2025-46415 and CVE-2025-46416, have been identified in guix-daemon, which allow a local user to gain the privileges of any of the build users and subsequently use this to manipulate the output of any build, as well as to subsequently gain the privileges of the daemon user. You are strongly advised to upgrade your daemon now (see instructions below), especially on multi-user systems.
Both exploits require the ability to start a derivation build. CVE-2025-46415 requires the ability to create files in /tmp in the root mount namespace on the machine the build occurs on, and CVE-2025-46416 requires the ability to run arbitrary code in the root PID and network namespaces on the machine the build occurs on. As such, this represents an increased risk primarily to multi-user systems, but also more generally to any system in which untrusted code may be able to access guix-daemon's socket, which is usually located at /var/guix/daemon-socket/socket.
One of the longstanding oversights of Guix's build environment isolation is what has become known as the abstract Unix-domain socket hole: a Linux-specific feature that enables any two processes in the same network namespace to communicate via Unix-domain sockets, regardless of all other namespace state. Unix-domain sockets are perhaps the single most powerful form of interprocess communication (IPC) that Unix-like systems have to offer, for the reason that they allow file descriptors to be passed between processes.
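As an illustration, here is a minimal Guile sketch of that behavior (the socket name is made up, and the snippet is purely illustrative, using only the plain POSIX socket procedures; it is not taken from the advisory): a socket bound to an abstract address never appears on the file system, so mount-namespace isolation alone cannot prevent another process in the same network namespace from connecting to it.

;; Illustrative sketch: abstract Unix-domain sockets live only in the
;; network namespace, not on the file system.
(define name "\0example-abstract-socket")   ; leading NUL byte = abstract address

(define server (socket AF_UNIX SOCK_STREAM 0))
(bind server AF_UNIX name)                  ; creates no file anywhere
(listen server 1)

;; Any process sharing the network namespace can connect by name alone,
;; regardless of mount, PID, or user namespaces:
(define client (socket AF_UNIX SOCK_STREAM 0))
(connect client AF_UNIX name)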
This behavior had played a crucial role in CVE-2024-27297, in which it was possible to smuggle a writable file descriptor to one of the output files of a fixed-output derivation to a process outside of the build environment sandbox. More specifically, this would use a fixed-output derivation that doesn't use a builtin builder; examples of this class of derivation include derivations produced by origins using svn-fetch and hg-fetch, but not git-fetch or url-fetch, since those are implemented using builtin builders. The process could then wait for the daemon to validate the hash and register the output, and subsequently modify the file to contain any contents it desired.
The fix for CVE-2024-27297 seems to have made the assumption that once the build was finished, no more processes could be running as that build user. This is unfortunately incorrect: the builder could also smuggle out the file descriptor of a setuid program, which could subsequently be executed either using /proc/self/fd/N or execveat to gain the privileges of the build user. This assumption was likely believed to hold in Nix because Nix had a seccomp filter that attempted to forbid the creation of setuid programs entirely by blocking the necessary chmod calls. The security researchers who discovered CVE-2025-46415 and CVE-2025-46416 found ways around Nix's seccomp filter, but Guix never had any such filter to begin with. It was therefore possible to run arbitrary code as the build user outside of the isolated build environment at any time.
Because it is possible to run arbitrary code as the build user even after the build has finished, many assumptions made in the design of the build daemon — not only in fixing CVE-2024-27297 but going way back — can be violated and exploited. One such assumption is that directories being deleted by deletePath — for instance the build tree of a build that has just failed — won't be modified while it is recursing through them. By violating this assumption, it is possible to exploit race conditions in deletePath to get the daemon to delete arbitrary files. One such file is a build directory of the form /tmp/guix-build-PACKAGE-X.Y.drv-0. If this is done between when the build directory is created and when it is chowned to the build user, an attacker can put a symbolic link in the appropriate place and get it to chown any file owned by the daemon's user to now be owned by the build user. In the case of a daemon running as root, that includes files such as /etc/passwd. The build users, as mentioned before, are easily compromised, so an attacker can at this point write to the target file.
When guix-daemon is not running as root, the attacker would gain the privileges of the guix-daemon user, giving write access to the store and nothing else.
In short, there are two separate problems here: the abstract-socket hole lets a builder smuggle resources out of the isolated build environment, so arbitrary code can run as a build user even after a build has finished; and the daemon's own file-handling code, such as deletePath, assumes this cannot happen, so its race conditions can be exploited to escalate further.
This security issue has been fixed by 6 commits (7173c2c0ca, be8aca0651, fb42611b8f, c659f977bb, 0e79d5b655, and 30a5d140aa as part of pull request #788). Users should make sure they have upgraded to commit 30a5d140aa or any later commit to be protected from this vulnerability. Upgrade instructions are in the following section.
The fix was accomplished primarily by closing the "abstract Unix-domain socket hole" entirely. To do this, the daemon was modified so that all builds — even fixed-output ones — occur in a fresh network namespace. To keep networking functional despite the separate network namespace, a userspace networking stack, slirp4netns, is used.
Additionally, some of the daemon's file deletion and copying helper procedures were modified to use the openat family of system calls, so that even in cases where build users can be taken over (for example, when the daemon is run with --disable-chroot), those particular helper procedures can't be exploited to escalate privileges.
A test for the presence of the abstract Unix-domain socket hole is available at the end of this post. One can run this code with:
guix repl -- abstract-socket-vuln-check.scm
This will output whether the current guix-daemon being used is vulnerable or not. If it is not vulnerable, the last line will contain "Abstract Unix-domain socket hole is CLOSED"; otherwise the last line will contain "Abstract Unix-domain socket hole is OPEN, guix-daemon is VULNERABLE".
Note that this will properly report that the hole is still open for daemons running with --disable-chroot, which is, as before, still insecure wherever untrusted users can access the daemon's socket.
Due to the severity of this security advisory, we strongly recommend that all users upgrade guix-daemon immediately.
For Guix System, the procedure is to reconfigure the system after a guix pull, either restarting guix-daemon or rebooting. For example:
guix pull
sudo guix system reconfigure /run/current-system/configuration.scm
sudo herd restart guix-daemon
where /run/current-system/configuration.scm is the current system configuration but could, of course, be replaced by a system configuration file of a user's choice.
For Guix on another distribution, one needs to guix pull with sudo, as the guix-daemon runs as root, and restart the guix-daemon service, as documented.
For example, on a system using systemd to manage services, run:
sudo --login guix pull
sudo systemctl restart guix-daemon.service
Note that if you use your distro's package of Guix (as opposed to having used the install script), you may need to take other steps or upgrade the Guix package as you would other packages on your distro. Please consult the relevant documentation from your distro or contact the package maintainer for additional information or questions.
On March 27th, the NixOS/Nixpkgs security team forwarded a detailed report about two vulnerabilities from Snyk Security Labs to the Guix security team and to Ludovic Courtès and Reepca Russelstein (as contributors to guix-daemon). A 90-day disclosure timeline was agreed upon with Snyk and all the affected projects: Nix, Lix, and Guix. During that time, development of the fixes in Guix was led by Reepca Russelstein with peer review happening on the private guix-security mailing list. Coordination with the other projects and for this security advisory was managed by the Guix security team.
A pre-disclosure announcement was sent by the NixOS/Nixpkgs and the Guix security teams on June 19th–20th, giving June 24th as the full public disclosure date.
Some other CVEs that were included in the report were CVE-2025-52991, CVE-2025-52992, and CVE-2025-52993. These don't represent direct vulnerabilities so much as missed opportunities to mitigate the attack the researchers identified — that is, it has to be possible to do things like exfiltrate file descriptors (for CVE-2025-52992) and trick the daemon into deleting arbitrary files (for CVE-2025-52991 and CVE-2025-52993) before these start mattering.
More information concerning the fix for this vulnerability and the design choices made for it will be provided in a follow-up blog post.
We thank the Security Labs team at Snyk for discovering similar-but-not-quite-the-same vulnerabilities in Nix, and the NixOS/Nixpkgs security team for sharing this information with the Guix security team, which led us to realize our own related vulnerabilities.
Below is code to check if your guix-daemon is vulnerable to this exploit. Save this file as abstract-socket-vuln-check.scm and run following the instructions above, in "Mitigation."
;; Checking for CVE-2025-46415 and CVE-2025-46416.
(use-modules (guix)
(gcrypt hash)
((rnrs bytevectors) #:select (string->utf8))
(ice-9 match)
(ice-9 threads)
(srfi srfi-34))
(define nonce
(string-append "-" (number->string (car (gettimeofday)) 16)
"-" (number->string (getpid))))
(define socket-name
(string-append "\0" nonce))
(define test-message nonce)
(define check
(computed-file
"check-abstract-socket-hole"
#~(begin
(use-modules (ice-9 textual-ports))
(let ((sock (socket AF_UNIX SOCK_STREAM 0)))
;; Attempt to connect to the abstract Unix-domain socket outside.
(connect sock AF_UNIX #$socket-name)
;; If we reach this line, then we successfully managed to connect to
;; the abstract Unix-domain socket.
(call-with-output-file #$output
(lambda (port)
(display (get-string-all sock) port)))))
#:options
`(#:hash-algo sha256
#:hash ,(sha256 (string->utf8 test-message))
#:local-build? #t)))
(define build-result
;; Listen on the abstract Unix-domain socket at SOCKET-NAME and build
;; CHECK. If CHECK succeeds, then it managed to connect to SOCKET-NAME.
(let ((sock (socket AF_UNIX SOCK_STREAM 0)))
(bind sock AF_UNIX socket-name)
(listen sock 1)
(call-with-new-thread
(lambda ()
(match (accept sock)
((connection . peer)
(format #t "accepted connection on abstract Unix-domain socket~%")
(display test-message connection)
(close-port connection)))))
(with-store store
(let ((drv (run-with-store store (lower-object check))))
(guard (c ((store-protocol-error? c) c))
(build-derivations store (list drv))
#t)))))
(if (store-protocol-error? build-result)
(format (current-error-port)
"Abstract Unix-domain socket hole is CLOSED, build failed with ~S.~%"
(store-protocol-error-message build-result))
(format (current-error-port)
"Abstract Unix-domain socket hole is OPEN, guix-daemon is VULNERABLE!~%"))
24 June, 2025 02:00PM by Caleb Ristvedt
This is a bugfix release for gnunet 0.24.2. It fixes some regressions and minor bugs.
The GPG key used to sign is: 3D11063C10F98D14BD24D1470B0998EF86F59B6A
Note that due to mirror synchronization, not all links may be functional early after the release. For direct access try https://ftp.gnu.org/gnu/gnunet/
GNU Parallel 20250622 ('Павутина') has been released. It is available for download at: lbry://@GnuParallel:4
Quote of the month:
GNU Parallel is a seriously underrated tool, at least based on how little I hear people talk about it (and how often I possibly over-use it)
-- Byron Alley @byronalley
New in this release:
News about GNU Parallel:
https://blog.stephane-robert.info/docs/admin-serveurs/linux/parallel/
GNU Parallel - For people who live life in the parallel lane.
If you like GNU Parallel record a video testimonial: Say who you are, what you use GNU Parallel for, how it helps you, and what you like most about it. Include a command that uses GNU Parallel if you feel like it.
GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.
If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.
GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.
For example you can run this to convert all jpeg files into png and gif files and have a progress bar:
parallel --bar convert {1} {1.}.{2} ::: *.jpg ::: png gif
Or you can generate big, medium, and small thumbnails of all jpeg files in sub dirs:
find . -name '*.jpg' |
parallel convert -geometry {2} {1} {1//}/thumb{2}_{1/} :::: - ::: 50 100 200
You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/
You can install GNU Parallel in just 10 seconds with:
$ (wget -O - pi.dk/3 || lynx -source pi.dk/3 || curl pi.dk/3/ || \
fetch -o - http://pi.dk/3 ) > install.sh
$ sha1sum install.sh | grep c555f616391c6f7c28bf938044f4ec50
12345678 c555f616 391c6f7c 28bf9380 44f4ec50
$ md5sum install.sh | grep 707275363428aa9e9a136b9a7296dfe4
70727536 3428aa9e 9a136b9a 7296dfe4
$ sha512sum install.sh | grep b24bfe249695e0236f6bc7de85828fe1f08f4259
83320d89 f56698ec 77454856 895edc3e aa16feab 2757966e 5092ef2d 661b8b45
b24bfe24 9695e023 6f6bc7de 85828fe1 f08f4259 6ce5480a 5e1571b2 8b722f21
$ bash install.sh
Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
Walk through the tutorial (man parallel_tutorial). Your command line will love you for it.
When using programs that use GNU Parallel to process data for publication please cite:
O. Tange (2018): GNU Parallel 2018, March 2018, https://doi.org/10.5281/zenodo.1146014.
If you like GNU Parallel:
If you use programs that use GNU Parallel for research:
If GNU Parallel saves you money:
GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.
The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.
When using GNU SQL for a publication please cite:
O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.
GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.
22 June, 2025 10:46PM by Ole Tange
Open day: "Contributions of the libre GNU Health system to healthcare: experiences in Honduras and Argentina"
Friday, June 13
Centro de Innovación, Emprendimiento y Vinculación
Facultad de Ingeniería – UNER
In this session we will explore practical experiences and progress in implementing GNU Health at the primary care level.
Within the framework of the "GNU Health – UNER" Academic Alliance, we will be joined by invited faculty from UNITEC (Universidad Tecnológica Centroamericana, Honduras), thanks to the International Faculty Mobility Program (PROMID) of the Universidad Nacional de Entre Ríos (UNER).
This event is an opportunity to: Learn about the latest GNU Health updates directly from its creator, Dr. Luis Falcón (GNU Solidario).
Discover real-world deployments in Argentina (CAPS D'Angelo) and Honduras (municipality of El Níspero).
Take part in hands-on workshops to learn how to install and use the system.
Connect with professionals and academics committed to public health and libre technologies.
Reservations by e-mail: saludpublica@ingenieria.uner.edu.ar
Program
Section | Time | Title |
---|---|---|
Opening | 8:45 | Authorities / Organizers |
Experiences using GNU Health | 9:00 | Contributions of GNU Health to primary healthcare: a retrospective look. Bioeng. Carlos Scotta and Bioeng. Ingrid Spessotti (Grupo de Estudios en Salud Pública y Tecnologías Aplicadas, FIUNER) |
 | 9:45 | Experience at CAPS D'Angelo. History and present of GNU Health as a healthcare management tool – Lic. Teresita Calzia, Claudia Gudiño and the CAPS team |
 | 10:30 | Experience in the municipality of El Níspero, Honduras – Eng. Lucy Rodas and Eng. Elvin Deras (UNITEC, Honduras) |
 | 11:15 | Presentation of the new version of GNU Health – Dr. Luis Falcón (GNU Solidario) |
 | 12:00 | Break |
System deployment workshop | 14:00 | First steps with GNU Health: installing and setting up the system – Eng. Elvin Deras (UNITEC, Honduras) |
 | 15:00 | Getting to know GNU Health: main features. Eng. Lucy Rodas (UNITEC, Honduras), Bioeng. Maia Iturain, Bioeng. Ingrid Spessotti, Bioeng. Francisco Moyano (Grupo de Estudios en Salud Pública y Tecnologías Aplicadas, FIUNER) |
 | 16:00 | Break |
Ongoing projects | 16:20 | Empowering the community: developing a Patient Portal for GNU Health – Ana Roskopf |
 | 16:40 | PID UNER: adapting the GNU Health system for mental health nursing – Aldana Gagliardi |
 | 17:10 | Adding systems for follow-up and support in the field – Bioeng. Maia Iturain |
Closing | 17:30 | The dream of an interoperable system: technical and political aspects of digital health – Lic. Mario Puntín, Bioeng. Francisco Moyano, Dr. Fernando Sassetti |
10 June, 2025 09:34PM by GNU Solidario
If you've ever struggled with Rust packaging, here's some good news!
We have changed to a simplified Rust packaging model that is easier to automate and allows for modification, replacement and deletion of dependencies at the same time. The new model will significantly reduce our Rust packaging time and will help us to improve both package availability and quality.
Those changes are currently on the rust-team branch, slated to be merged in the coming weeks.
How good is the news? Migration of our current Rust package collection, 150+ applications with 3600+ dependency libraries, only took two weeks, all by one person! :)
See #387 if you want to track the current progress and give feedback. I'll request merging the rust-team branch when the pull request is merged. After merging the branch, a news entry will be issued for guix pull.
The previous packaging model for Rust in Guix would map one crate (Rust package) to one Guix package. This seemed to make sense but there's a fundamental mismatch here: while Guix packages—from applications like GIMP and Inkscape to C libraries like GnuTLS and Nettle—are meant to be compiled independently, Rust applications are meant to be compiled as a single unit together with all the crates they depend on, recursively. That mismatch meant that Guix would build each crate independently, but that build output was of no use at all.
The new model instead focuses on defining origins for crates, with actual builds happening only on the "leaves" of the graph—Rust applications. This is a major change with many implications, as we will see below.
Importer
guix import crate will support importing from Cargo.lock using the new --lockfile / -f option.
guix import --insert=gnu/packages/rust-crates.scm \
crate --lockfile=/path/to/Cargo.lock PACKAGE
guix import -i gnu/packages/rust-crates.scm \
crate -f /path/to/Cargo.lock PACKAGE
To avoid conflicts with the new lockfile importer, the crates.io importer will be altered so it will no longer support importing dependencies.
A new procedure, cargo-inputs-from-lockfile, will be added for use in the guix.scm of Rust projects. Note that Cargo workspaces in dependencies require manual intervention and are therefore not handled by this procedure.
(use-modules (guix import crate))
(package
...
(inputs (cargo-inputs-from-lockfile "Cargo.lock")))
Build system
cargo-build-system will support directory inputs and Cargo workspaces.
Build phase check-for-pregenerated-files will scan all unpacked sources and print out non-empty binary files.
We won't accept contributions using the old packaging approach (#:cargo-inputs and #:cargo-development-inputs) anymore. Its support is deprecated and will be removed after Dec. 31, 2026.
Packages
Rust libraries will be stored in two new modules and will be hidden from the user interface:
(gnu packages rust-sources)
Rust libraries that require a build process or complex modification involving external dependencies to unbundle dependencies.
(gnu packages rust-crates)
Rust libraries imported using the lockfile importer. This module exports a lookup-cargo-inputs interface, providing an identifier -> libraries mapping.
Libraries defined in this module can be modified via snippets and patches, replaced by changing their definitions to point to other variables, or removed by changing their definitions to #f. The importer will skip existing libraries to avoid overwriting modifications.
A template file for this module will be provided as etc/teams/rust/rust-crates.tmpl in the Guix source tree, for use in external channels.
All other libraries (those currently in (gnu packages crates-...)) will be moved to an external channel. If you have packages depending on them, please add this channel and use its (past-crates packages crates-io) module to avoid possible breakage. Once merged, you can migrate your packages and safely remove the channel.
(channel
(name 'guix-rust-past-crates)
(url "https://codeberg.org/guix/guix-rust-past-crates.git")
(branch "trunk")
(introduction
(make-channel-introduction
"1db24ca92c28255b28076792b93d533eabb3dc6a"
(openpgp-fingerprint
"F4C2D1DF3FDEEA63D1D30776ACC66D09CA528292"))))
Documentation
API references for cargo-build-system and packaging guidelines for Rust crates will be updated. A packaging workflow built upon the new features will be added under the Packaging chapter of the Guix Cookbook.
Currently, our Rust packaging uses the traditional approach, treating each application and library equally.
This brings issues. The first is packaging and maintenance: there is a large number of libraries and only a few people working on them, and we can't reuse those packaged libraries anyway; instead of the built libraries, their sources are extracted and used in the build process. As a result, the packaging experience is not very smooth, although the crates.io importer has helped mitigate this to some extent.
The second is the user interface: thousands of Rust libraries that users can't actually use appear in search results, and, understandably, documentation can't be well maintained for all of these packages.
Lastly, there is the inconsistency in the packaging interface. Our dependency model cannot perfectly map to Rust's, and circular dependencies are possible. To solve this, the build system arguments #:cargo-inputs and #:cargo-development-inputs were introduced and used for specifying Rust libraries, instead of the regular propagated-inputs and native-inputs. Additionally, input propagation logic had to be reimplemented for them, which resulted in additional performance overhead.
Approaches have been proposed to improve the situation, notably the antioxidant build system developed by Maxime Devos, and the cargo2guix tool developed by Murilo and Luis Guilherme Coelho:
The antioxidant build system builds Rust packages without Cargo; instead, the build process is fully managed by Guix by invoking rustc directly.
This build system would allow Guix to produce and share build artifacts for Rust libraries. It's a step towards making our work on the current approach more reasonable.
However, there's a downside. Since this is not what the Rust community expects, we'd also have to heavily patch many Rust packages, which would make it even harder for us to move forward.
This tool parses Cargo.lock and outputs package definitions. It's more reliable than the crates.io importer, since dependencies are already known offline. It should be the most efficient improvement for the current approach. The upcoming importer update integrates a modified version of this tool.
Murilo also proposes packaging Rust applications in self-contained modules, each module containing a Rust application with all its dependencies, in order to reduce merge conflicts. However, the same library would then be defined in multiple modules, duplicating the effort to check and manage them.
Let Cargo download dependencies
This is the "vendoring" approach, used in some distributions; it can be implemented as a fixed-output derivation.
We don't use this approach since the dependency information is completely hidden from us. We can't easily locate a library when we want to modify or replace it, and if we made a mistake when checking dependencies, it could be very difficult to find out later.
Another downside is that the download of a single library can't be deduplicated. Since we use an isolated build environment, commonly used libraries will be downloaded repeatedly, despite already being available in the store.
After reading the recent discussion, I thought about these existing approaches in the hope of finding one that does only the minimum necessary: since users can't use our packaged libraries, there's no reason to insist on the traditional approach -> libraries can be hidden from the user interface -> user-facing documentation is not needed -> since metadata is not used at this stage, why bother defining a package for the library?
Actually cargo2guix is more suitable for importing sources rather than packages, as it has issues handling licenses, and Cargo.lock only contains enough information to construct the source representation in Guix, which has support for simple patching.
Since the vendoring approach exists, packaging all Rust libraries as sources only has already been proven effective. However, we'll lose important information in our representation when switching from packages to sources: licenses and dependencies. Thanks to the awesome cargo-license tool, only the latter required further consideration.
The implementation has been changed a few times in the review process, but the idea remains: make automation and manual intervention coexist. As a result, the importer:
Despite proposing it, I was a bit worried about the mapping, which references all dependency libraries directly, but the result went quite well: with compact source definitions, we reduced 153k lines of definitions for Rust libraries to 42k after this migration.
Imported libraries (these are what the importer creates):
(define rust-unindent-0.2.4
(crate-source "unindent" "0.2.4"
"1wvfh815i6wm6whpdz1viig7ib14cwfymyr1kn3sxk2kyl3y2r3j"))
(define rust-ureq-2.10.0.1cad58f
(origin
(method git-fetch)
(uri (git-reference (url "https://github.com/algesten/ureq")
(commit "1cad58f5a4f359e318858810de51666d63de70e8")))
(file-name (git-file-name "rust-ureq" "2.10.0.1cad58f"))
(sha256 (base32 "1ryn499kbv44h3lzibk9568ln13yi10frbpjjnrn7dz0lkrdin2w"))))
Library with modification:
(define rust-libmimalloc-sys-0.1.24
(crate-source "libmimalloc-sys" "0.1.24"
"0s8ab4nc33qgk9jybpv0zxcb75jgwwjb7fsab1rkyjgdyr0gq1bp"
#:snippet
'(begin
(delete-file-recursively "c_src")
(delete-file "build.rs")
(with-output-to-file "build.rs"
(lambda _
(format #t "fn main() {~@
println!(\"cargo:rustc-link-lib=mimalloc\");~@
}~%"))))))
Library with a replacement, for those requiring a build process with dependencies:
(define rust-pipewire-0.8.0.fd3d8f7 rust-pipewire-for-niri)
Deleted library:
(define rust-unrar-0.5.8 #f)
The access interface and identifier -> libraries mapping:
(define-cargo-inputs lookup-cargo-inputs
(rust-deunicode-1
=> (list rust-any-ascii-0.3.2
rust-emojis-0.6.4
rust-itoa-1.0.15
...))
(rust-pcre2-utf32-0.2
=> (list rust-bitflags-2.9.0
rust-cc-1.2.18
rust-cfg-if-1.0.0
...))
(zoxide
=> (list rust-aho-corasick-1.1.3
rust-aliasable-0.1.3
rust-anstream-0.6.18
...)))
Dependency library lookup; module selection is supported:
(cargo-inputs 'rust-pcre2-utf32-0.2)
(define (my-cargo-inputs name)
(cargo-inputs name #:module '(my packages rust-crates)))
(my-cargo-inputs ...)
Since we have all the dependency information, unpacking any libraries we want to a directory and then running more common tools to check them is possible (some scripts are provided under etc/teams/rust, yet to be rewritten in Guile). You're encouraged to share yours and check the libraries after the merge, and help improve the collection ;)
One issue for this model is that all libraries are stored and referenced in one module, making merge conflicts harder to resolve.
I'm considering creating a separate repository to manage this module. Whenever there's a change, it will be applied to this repository first and then synced back to Guix.
We can also store dependency specifications and lockfiles in that separate repository to make the packaging process, which may require changing the specifications, more transparent. This may also allow automation in updating dependency libraries.
Thanks for reading! Happy hacking :)
07 June, 2025 09:00PM by Hilton Chain
The initial injustice of proprietary software often leads to further injustices: malicious functionalities.
The introduction of unjust techniques in nonfree software, such as back doors, DRM, tethering, and others, has become ever more frequent. Nowadays, it is standard practice.
We at the GNU Project show examples of malware that has been introduced in a wide variety of products and dis-services people use every day, and of companies that make use of these techniques.
As a general precaution, users should make sure their printer can't connect to the manufacturer's server, for example by shielding it from the internet with a firewall. This will not restore the ability of printers to use third-party toner if they have already lost it, but it will prevent any future downgrades. This applies to all printer manufacturers, not only Brother.
Microsoft's Software is Malware
Enough is enough!
[*] Why “useds”? Because running Windows is not you using Windows; it is Windows using you.
Let's hope legislators and regulatory agencies all over the world will quickly put a stop to this sort of outrageous practice.
In any case people would be better off switching to a free-software replacement such as Jitsi Meet for medium-size groups, or Big Blue Button for larger ones. Many public instances are available, and groups of users can also set up their own servers.
Apple's Operating Systems Are Malware
This wouldn't happen if software in the Echo were free. Users would be able to restore the “Do Not Send Voice Recordings” option.
Malware in Mobile Devices
In addition, Nintendo can record audio and video chats for moderation purposes. The user's consent is required, but there is no guarantee that the recordings will not be sent to third parties. In short, there is no privacy in these chats.
If you ever consider buying a Switch, think twice, because you will not own it. Nintendo will.
03 June, 2025 03:02PM by Rob Musial
This year's first prize in the German young scientists competition (Jugend forscht), in the mathematics and computer science category, was awarded to Simon Neuenhausen for writing open-source firmware for the wifi of the ESP32 SoC. It replaces the closed-source wifi driver.
Project description (in German): https://www.jugend-forscht.de/virtuelle-ausstellung/detailseite/Open_Source_WLAN_auf_dem_ESP32.html
The usual tool for optimizing a program's execution speed is a profiler.
I've seen and tried various profilers over the years, and each of them had some drawbacks: some require root privileges, some produce only per-function profiling (no insight into what is expensive inside a function), some work only on unoptimized or specially compiled binaries, and some are very slow during program execution.
For the first time, there is a profiler without any of these drawbacks. Plus, it is easy to use.
It is gprofng, part of GNU binutils in versions from 2025-05-22 or newer, together with gprofng-gui, an optional GUI that makes it very easy to use.
For more details, see this wiki: https://gitlab.com/ghwiki/gnow-how/-/wikis/Profiling/with_sampling.
Congratulations to the GNU binutils team, and to Vladimir Mezentsev in particular!
31 May 2025 Unifont 16.0.04 is now available. This is a minor release with many glyph improvements. See the ChangeLog file for details.
Download this release from GNU server mirrors at:
https://ftpmirror.gnu.org/unifont/unifont-16.0.04/
or if that fails,
https://ftp.gnu.org/gnu/unifont/unifont-16.0.04/
or, as a last resort,
ftp://ftp.gnu.org/gnu/unifont/unifont-16.0.04/
These files are also available on the unifoundry.com website:
https://unifoundry.com/pub/unifont/unifont-16.0.04/
Font files are in the subdirectory
https://unifoundry.com/pub/unifont/unifont-16.0.04/font-builds/
A more detailed description of font changes is available at
https://unifoundry.com/unifont/index.html
and of utility program changes at
https://unifoundry.com/unifont/unifont-utilities.html
Information about Hangul modifications is at
https://unifoundry.com/hangul/index.html
and
http://unifoundry.com/hangul/hangul-generation.html
Enjoy!
31 May, 2025 10:52PM by Paul Hardy
This is a bugfix release for gnunet 0.24.1. It fixes some regressions and minor bugs.
The GPG key used to sign is: 3D11063C10F98D14BD24D1470B0998EF86F59B6A
Note that due to mirror synchronization, not all links may be functional early after the release. For direct access try https://ftp.gnu.org/gnu/gnunet/
Automake 1.18 released. Announcement:
https://lists.gnu.org/archive/html/autotools-announce/2025-05/msg00001.html
27 May, 2025 09:19PM by Karl Berry
GNU Parallel 20250522 ('Leif Tange') has been released. It is available for download at: lbry://@GnuParallel:4
Quote of the month:
gnu parallel is my new favorite toy
-- Eytan Adar @eytan.adar.prof
New in this release:
News about GNU Parallel:
GNU Parallel - For people who live life in the parallel lane.
If you like GNU Parallel record a video testimonial: Say who you are, what you use GNU Parallel for, how it helps you, and what you like most about it. Include a command that uses GNU Parallel if you feel like it.
GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.
If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.
GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.
For example you can run this to convert all jpeg files into png and gif files and have a progress bar:
parallel --bar convert {1} {1.}.{2} ::: *.jpg ::: png gif
Or you can generate big, medium, and small thumbnails of all jpeg files in sub dirs:
find . -name '*.jpg' |
parallel convert -geometry {2} {1} {1//}/thumb{2}_{1/} :::: - ::: 50 100 200
You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/
You can install GNU Parallel in just 10 seconds with:
$ (wget -O - pi.dk/3 || lynx -source pi.dk/3 || curl pi.dk/3/ || \
fetch -o - http://pi.dk/3 ) > install.sh
$ sha1sum install.sh | grep c555f616391c6f7c28bf938044f4ec50
12345678 c555f616 391c6f7c 28bf9380 44f4ec50
$ md5sum install.sh | grep 707275363428aa9e9a136b9a7296dfe4
70727536 3428aa9e 9a136b9a 7296dfe4
$ sha512sum install.sh | grep b24bfe249695e0236f6bc7de85828fe1f08f4259
83320d89 f56698ec 77454856 895edc3e aa16feab 2757966e 5092ef2d 661b8b45
b24bfe24 9695e023 6f6bc7de 85828fe1 f08f4259 6ce5480a 5e1571b2 8b722f21
$ bash install.sh
Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
Walk through the tutorial (man parallel_tutorial). Your command line will love you for it.
When using programs that use GNU Parallel to process data for publication please cite:
O. Tange (2018): GNU Parallel 2018, March 2018, https://doi.org/10.5281/zenodo.1146014.
If you like GNU Parallel:
If you use programs that use GNU Parallel for research:
If GNU Parallel saves you money:
GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.
The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.
When using GNU SQL for a publication please cite:
O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.
GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.
21 May, 2025 08:15PM by Ole Tange
GNU Parallel 20250422 ('Tariffs') has been released. It is available for download at: lbry://@GnuParallel:4
Quote of the month:
Man, GNU Parallel is very cool.
-- jeeger @jeeger@mastodon.social
New in this release:
News about GNU Parallel:
GNU Parallel - For people who live life in the parallel lane.
If you like GNU Parallel record a video testimonial: Say who you are, what you use GNU Parallel for, how it helps you, and what you like most about it. Include a command that uses GNU Parallel if you feel like it.
GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.
If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.
GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.
For example you can run this to convert all jpeg files into png and gif files and have a progress bar:
parallel --bar convert {1} {1.}.{2} ::: *.jpg ::: png gif
Or you can generate big, medium, and small thumbnails of all jpeg files in sub dirs:
find . -name '*.jpg' |
parallel convert -geometry {2} {1} {1//}/thumb{2}_{1/} :::: - ::: 50 100 200
You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/
You can install GNU Parallel in just 10 seconds with:
$ (wget -O - pi.dk/3 || lynx -source pi.dk/3 || curl pi.dk/3/ || \
fetch -o - http://pi.dk/3 ) > install.sh
$ sha1sum install.sh | grep c555f616391c6f7c28bf938044f4ec50
12345678 c555f616 391c6f7c 28bf9380 44f4ec50
$ md5sum install.sh | grep 707275363428aa9e9a136b9a7296dfe4
70727536 3428aa9e 9a136b9a 7296dfe4
$ sha512sum install.sh | grep b24bfe249695e0236f6bc7de85828fe1f08f4259
83320d89 f56698ec 77454856 895edc3e aa16feab 2757966e 5092ef2d 661b8b45
b24bfe24 9695e023 6f6bc7de 85828fe1 f08f4259 6ce5480a 5e1571b2 8b722f21
$ bash install.sh
Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
Walk through the tutorial (man parallel_tutorial). Your command line will love you for it.
When using programs that use GNU Parallel to process data for publication please cite:
O. Tange (2018): GNU Parallel 2018, March 2018, https://doi.org/10.5281/zenodo.1146014.
If you like GNU Parallel:
If you use programs that use GNU Parallel for research:
If GNU Parallel saves you money:
GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.
The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.
When using GNU SQL for a publication please cite:
O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.
GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.
21 May, 2025 08:12PM by Ole Tange
Last month I watched the book talk Music Copyright, Creativity, and Culture by Jennifer Jenkins with James Boyle facilitating the discussion, co-hosted by the Internet Archive and the Authors Alliance:
Looking to get a copy of the book, I found the book’s page on the publisher’s website, Oxford University Press. Seeing it available as an e-book, I opted to go with that as a more eco-friendly option and to save some physical space. I worked my way through the checkout and payment steps, under the impression that I would be purchasing a copy of the book that I could download and do with as I wished. The use of the words “buy” and “purchase” throughout the book page on the publisher’s website certainly did not suggest otherwise.
In hindsight, there were red flags I failed to notice at the time, such as confusing and seemingly redundant, if not contradictory, information on the book page:
“Downloaded copy on your device does not expire.”
Um, okay? I’d sure hope and expect as much about any file I download.
“Includes 4 years of Bookshelf Online.”
Whatever — as long as I could download, store, and use the book offline I’d be happy.
It’s only upon hovering the small and generic, if not misleading, “E-book purchasing help” link that one would be presented with this vaguely informative eyebrow-raising sentence:
E-book purchase
E-books are granted under the terms of a single-user, non-transferable license, and may be accessed online from any location.
“E-books are granted” (??) is news to me. I thought I would have rightful access to something I bought and paid for, rather than being “granted” (read “allowed”) access to it by and through some overlord. Oh but of course, we live in a time where vendors get to redefine well-established words like “purchase” and “buy” on page N of their terms and conditions.
I obviously did not see that “E-book purchasing help” before giving Oxford University Press my money: being a tech-savvy person, I didn’t think I needed any help “purchasing” an e-book.
Everything became clear shortly after I completed the “purchase” and was redirected to VitalSource to access the book: the VitalSource “Bookshelf” user interface offered no way to download a copy of the book I thought I bought and paid for. It is instead a glorified pile of proprietary JavaScript DRM (Digital Restrictions Management) that wraps around the underlying representation of the book in VitalSource’s possession. The only other option for accessing the book would be through VitalSource’s proprietary application available only for certain versions of certain proprietary operating systems.
At this point, the only method I could think of to try and obtain a copy of the book that I could read without subjecting myself to the shackles of DRM or proprietary software was trying to print the book to PDF. Given that VitalSource's DRM interface is a proprietary wrapper around VitalSource's likely ePub-based underlying representation (guessing from the presence of epubcfi in the URL of their book renderer page), the book pages are not exposed all at once, practically forcing one to use the interface's Print function to get all the pages in one go. After waiting what felt like an eternity for the website to prepare a printable version of the book, I was presented with this abomination (click image for sample in original PDF form):
That is a sample of the output generated by the interface’s Print function: an utterly useless, inferior copy of the book that has giant watermarks on every single page, with the only selectable text in the whole book being the repugnant threat at the top of each page — the actual body text of the book is converted to low-resolution, blurry images, and is therefore neither selectable nor searchable.
Going forward, I will NEVER “purchase” anything from Oxford University Press (and most definitely not from VitalSource), so long as they have no problem “selling” [access to] DRM-infested copies of books with no way to download a usable copy of what I paid for.
The key takeaway for me from this whole experience is that due to the sad and sorry status quo of our current times, this kind of insulting (mal)treatment of users is all but common, and really can happen to any one of us. Therefore it is all the more important for us to band together in protest of this, rather than dividing and isolating ourselves through misguided better-than-thou sentiments toward each other.
For Music Copyright, Creativity, and Culture, I ordered and a few days later received a paper copy from the local bookstore. It’s a copy I truly own, and can read whenever, wherever, and however I please.
Take care, and so long for now.
References and related links:
This release has no code changes since 2.12; it mostly contains updates and improvements to the build system.
Some out-of-date and only tenuously relevant files were also removed from the distribution, eliminating the contrib directory.
19 May, 2025 12:13PM by Reuben Thomas
As part of the celebration of the FSF's 40th anniversary, Gleb Yerofeyev is holding a meetup on May 24 at 18:00 local time.
All who wish to attend are invited, along with anyone they manage to bring.
17 May, 2025 02:19PM by Ineiev
Suriname has adopted GNU Health Hospital and Health Information System for their Public Healthcare system.
The adoption of GNU Health was announced during the press conference held last Friday, May 9th, in Paramaribo, in the context of the country's healthcare digitization campaign. They defined GNU Health as “An open source system that is both accessible and scalable”1. During the event, the Suriname Patient Portal and My Health App were also announced.
The Minister of Health, Dr. Amar Ramadhin, emphasized the benefits of this digital transformation: “We move away from paper files and work towards greater efficiency and a patient-oriented care experience”.
Digitization is supported by the IS4HIT (Information Systems for Health Information Technology) program, an initiative of the Pan American Health Organization (PAHO), whose delegates were at the conference, together with local health professionals.
The GNU Health rollout will be done in phases throughout the different public health centers, starting at the Regional Service Centers (RGDs). The main focus is on primary care. Some of the tasks in the initial phase will be demographics, patient management, appointments, medical encounters, prescriptions, complementary test orders, and reporting. Training sessions for the local health professionals and technical team are being conducted, as well as the localization to Suriname.
Minister Ramadhin declared: “[Healthcare] Digitization is not an end in itself but a powerful means to make care more human-oriented, safer and more efficient.” That's where GNU Health fits right in. The GNU Health Hospital and Health Information System has Social Medicine and primary care at its core. It excels in health promotion and disease prevention. When properly implemented and used, GNU Health is way more than just an Electronic Medical Record or a Hospital Management Information System. It empowers health professionals to assess the socioeconomic determinants of health and disease, taking a proactive approach to prevent and tackle the root of disease at the individual, family and society level. The world is facing a pandemic of non-transmissible diseases. Obesity, diabetes, depression, cancer and neurodegenerative conditions are on the rise, with an appalling impact on the underprivileged. GNU Health will be a great ally for the nurses, physicians, nutritionists and social workers of Suriname to find and engage those at higher risk in the community.
The fact that GNU Health is Free/Libre software allows Suriname to download and study the system and adapt it to their needs and legislation, free of any kind of vendor lock-in. After all, health is, or should be, a non-negotiable human right.
GNU Health is now part of Suriname's effort to deliver a sustainable, interoperable, standards-based, privacy-oriented, scalable digital healthcare solution for the country's public health system.
A Digital Public Good. In 2022 GNU Health was declared a Digital Public Good by the Digital Public Goods Alliance (DPGA). By definition, a Digital Public Good is open-source software, open data, open AI models, open standards, and open content that adhere to privacy and other applicable best practices, do no harm by design and are of high relevance for attainment of the United Nations 2030 Sustainable Development Goals (SDGs). This definition stems from the UN Secretary-General’s Roadmap for Digital Cooperation.
We are very proud and excited to see GNU Health deployed in Suriname's national public health system and wish them the very best embracing the system as we envision it: a social project with some technology behind it.
GNU Health is an open science, community driven project from GNU Solidario, a non-profit humanitarian organization focused on Social Medicine. Our project has been adopted by public hospitals, research and academic institutions, governments and multilateral organizations around the world.
GNU Health is a GNU official package, awarded with the Free Software Foundation award of Social benefit and declared a Digital Public Good.
GNU Health : https://www.gnuhealth.org
GNU Solidario: https://www.gnusolidario.org
14 May, 2025 06:57PM by GNU Solidario
Screen is a full-screen window manager that multiplexes a physical terminal between several processes, typically interactive shells.
5.0.1 is a security fix release. It includes only a few fixes for code issues, typos, and security issues; it doesn't include any new features.
The release (official tarball) will be available soon for download:
https://ftp.gnu.org/gnu/screen/
Please report any bugs or regressions.
Thanks to everyone who contributed to this release.
Cheers,
Alex
12 May, 2025 07:38PM by Alexander Naumov
The Guix project will be migrating all its repositories along with bug tracking and patch tracking to Codeberg within a month. This decision is the result of a collective consensus-building process that lasted several months. This post shows the upcoming milestones in that migration and discusses what it will change for people using Guix and for contributors.
For those who haven’t heard about it, Codeberg is a source code collaboration platform. It is run by Codeberg e.V., a non-profit registered in Germany. The software behind Codeberg is Forgejo, a free software forge (licensed under GPLv3) supporting the “merge request” style of workflow familiar to many developers.
Since its inception, Guix has been hosting its source code on Savannah, with bug reports and patches handled by email, tracked by a Debbugs instance, and visible on the project’s tracker. Debbugs and Savannah are hosted by the Free Software Foundation (FSF); all three services are administered by volunteers who have been supportive over these 13 years—thanks!
The motivation and the main parts of the migration are laid out in the second Guix Consensus Document (GCD). The GCD process itself was adopted just a few months ago; it’s a major milestone for the project that we’ll discuss in more detail in a future post. Suffice to say that this GCD was discussed and improved publicly for two months, after which deliberation among members of Guix teams led to acceptance.
Migration to Codeberg will happen gradually. To summarize the GCD, the key milestones are the following:
By June 7th, and probably earlier, Git repositories will all have migrated to Codeberg—some have already moved.
On May 25th, the Guix repository itself will be migrated.
From there on and until at least May 25th, 2026, https://git.savannah.gnu.org/git/guix.git will be a mirror of https://codeberg.org/guix/guix.git.
Until December 31st, 2025, bug reports and patches will still be accepted by email, in addition to Codeberg (issues and pull requests).
Of course, this is just the beginning. Our hope is that the move can help improve much needed tooling such as the QA infrastructure following work on Forgejo/Cuirass integration started earlier this year, and possibly develop new tools and services to assist in the maintenance of this huge package collection that Guix provides.
As a user, the main change is that your channels.scm configuration files, if they refer to the git.savannah.gnu.org URL, should be changed to refer to https://codeberg.org/guix/guix.git once migration is complete. But don’t worry: guix pull will tell you if/when you need to update your config files, and the old URL will remain a mirror for at least a year anyway.
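For instance, a minimal one-off way to do that update might look like this (assuming the default location ~/.config/guix/channels.scm; adjust the path if your channels file lives elsewhere):

sed -i 's|git\.savannah\.gnu\.org/git/guix\.git|codeberg.org/guix/guix.git|' \
    ~/.config/guix/channels.scm
guix pull    # guix pull will also warn you if a channel URL still needs updating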
Also, channel files produced by guix describe to pin Guix to a specific revision and to re-deploy it later anytime with time-machine will always work, even if they refer to the git.savannah.gnu.org URL, and even when that repository eventually vanishes, thanks to automatic fallback to Software Heritage.
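As a short sketch of that pinning workflow (the file name here is arbitrary):

guix describe -f channels > my-guix-pin.scm        # record the exact Guix revision in use
guix time-machine -C my-guix-pin.scm -- build hello    # replay a command with that revision later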
As a contributor, nothing changes for bug reports and patches that you already submitted by email: just keep going!
Once the Guix repository has migrated though, you’ll be able to report bugs at Codeberg and create pull requests for changes. The latter is a relief for many—no need to fiddle with admittedly intricate email setups and procedures—but also a pain point for those who had come to master and appreciate the email workflow.
For this reason, the “User Interfaces” section of the GCD describes the options available besides the Web interface—command-line and Emacs interfaces in particular. Some are still work-in-progress, but it’s exciting to see, for example, that over the past few months many improvements landed in fj.el and that a Forgejo-capable branch of Magit-Forge saw the light. Check it out!
A concern brought up during the discussion is that of having to create an account on Codeberg to be able to contribute—sometimes seen as a hindrance compared to the open-for-all and distributed nature of cooperation by email. This remains an open issue, though hopefully one that will become less acute as support for federation in Forgejo develops. In the meantime, as the GCD states, occasional bug reports and patches sent by email to guix-devel will be accepted.
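If you do take the occasional-email route, a minimal sketch of sending a single patch (assuming git send-email is already configured for your mail setup) is:

git format-patch -1 HEAD                          # produce 0001-*.patch for the latest commit
git send-email --to=guix-devel@gnu.org 0001-*.patch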
This was a summary of what is to come; check out the GCD for more info, and reach out to the guix-devel mailing list if you have any questions!
Real work begins now. We hope the migration to Codeberg will be smooth and enjoyable for all. For one thing, it already proved our ability to collectively decide on the project’s future, which is no small feat. There’s a lot to expect from the move in improving the project’s ability to work flawlessly at this scale—more than 100 code contributors and 2,000 commits each month, and more than 33,000 packages available in Guix proper. Let’s make the best of it, and until then, happy hacking!
11 May, 2025 06:30PM by Ludovic Courtès
We're proud to announce that #GNUHealth is now an organization in the Python Package Index (#PyPI).
The organization makes it easy to find and explore our projects and packages.
This is the URL for the GNU Health organization on PyPI:
https://pypi.org/org/GNUHealth/
We are very grateful to the Python Software Foundation for making GNU Health a community organization within PyPI!
Get this and the latest news about GNU Health from our official Mastodon account:
https://mastodon.social/@gnuhealth
08 May, 2025 11:06AM by Luis Falcon
Download from https://ftp.gnu.org/pub/gnu/gettext/gettext-0.25.tar.gz
New in this release:
07 May, 2025 05:15PM by Bruno Haible
Download from https://ftp.gnu.org/pub/gnu/gettext/gettext-0.24.1.tar.gz
New in this release:
02 May, 2025 06:18PM by Bruno Haible
The initial injustice of proprietary software often leads to further injustices: malicious functionalities.
The introduction of unjust techniques in nonfree software, such as back doors, DRM, tethering, and others, has become ever more frequent. Nowadays, it is standard practice.
We at the GNU Project show examples of malware that has been introduced in a wide variety of products and dis-services people use everyday, and of companies that make use of these techniques.
Another malicious feature of the Snoo is the fact that users need to create an account with the company, which thus has access to personal data, location (SSID), appliance log, etc., as well as manual notes about baby history.
01 May, 2025 05:33PM by Rob Musial
We are very happy to announce that the upcoming version of the GNU Health Hospital Information System has entered the feature-complete alpha stage. This upcoming version, GNU Health HIS 5.0, represents over a year of work and is the largest release so far in terms of functionality and refactoring.
GNU Health HIS 5.0 is expected to be released by the end of June.
This new release comes after over a year of development to deliver state-of-the-art libre technology and user experience. In a nutshell:
On the technical side we have worked on:
At this point, our focus is on testing, translation, packaging and documentation. In the coming days we’ll migrate our community server so we can all test the upcoming version.
For those of you on GNU Health 4.4, please start planning the migration to GNU Health HIS 5.0. This new version is a major leap that delivers many benefits, so we highly encourage you to upgrade. As always, the migration methods and tools are included.
We’d like to invite you to translate GNU Health at the Codeberg Weblate translation instance and to report any issues you may find during this period.
Don’t forget to follow us on Mastodon (https://mastodon.social/@gnuhealth) to get the latest on this and other GNU Health news!
Stay tuned and happy hacking!
GNU Health is a Libre, community driven project from GNU Solidario, a non-profit humanitarian organization focused on Social Medicine. Our project has been adopted by public and private health institutions and laboratories, multilateral organizations and national public health systems around the world.
The GNU Health project provides the tools for individuals, health professionals, institutions and governments to proactively assess and improve the underlying determinants of health, from the socioeconomic agents to the molecular basis of disease. From primary health care to precision medicine.
The following are the main components that make up the GNU Health ecosystem:
GNU Health is an official GNU (www.gnu.org) package, awarded the Free Software Foundation award for social benefit. GNU Health has been declared a Digital Public Good, adopted by many hospitals, governments and multilateral organizations around the globe.
01 May, 2025 11:59AM by Luis Falcon
After thinking about multi-stage Debian rebuilds I wanted to implement the idea. Recall my illustration:
Earlier I rebuilt all packages that make up the difference between Ubuntu and Trisquel. It turned out that about 42% were bit-by-bit identical. To check the generality of my approach, I rebuilt the difference between Debian and Devuan too. That was the debdistreproduce project. It “only” had to orchestrate building up to around 500 packages for each distribution and per architecture.
Differential reproducible rebuilds don’t give you the full picture: they ignore the packages shared between the distributions, which make up over 90% of the packages. So I felt a desire to do full archive rebuilds. The motivation is that in order to trust Trisquel binary packages, I need to trust Ubuntu binary packages (because they make up 90% of the Trisquel packages), and many of those Ubuntu binaries are derived from Debian source packages. How to approach all of this? Last year I created the debdistrebuild project, and did top-50 popcon package rebuilds of Debian bullseye, bookworm, trixie, and Ubuntu noble and jammy, on a mix of amd64 and arm64. The amount of reproducibility was lower; the differences were primarily caused by using different build inputs.
Last year I spent (too much) time creating a mirror of snapshot.debian.org, to be able to have older packages available for use as build inputs. I have two copies hosted at different datacentres for reliability and archival safety. At the time, snapshot.d.o had serious rate-limiting, making it pretty unusable for massive rebuild usage or even basic downloads. Watching the multi-month download complete last year had a meditative effect. The completion of my snapshot download coincided with me realizing something about the nature of rebuilding packages. Let me below give a recap of the idempotent rebuilds idea, because it motivates my work to build all of Debian from a GitLab pipeline.
One purpose for my effort is to be able to trust the binaries that I use on my laptop. I believe that without building binaries from source code, there is no practically feasible way to trust binaries. In principle, to trust any binary you receive, you could disassemble it and audit the assembler instructions for the CPU you will execute it on; doing that on an OS-wide level is impractical. A more practical approach is to audit the source code, and then confirm that the binary is 100% bit-by-bit identical to one that you can build yourself (from the same source) on your own trusted toolchain. This is similar to a reproducible build.
My initial goal with debdistrebuild was to get to 100% bit-by-bit identical rebuilds, and then I would have trustworthy binaries. Or so I thought. This also appears to be the goal of reproduce.debian.net. They want to reproduce the official Debian binaries. That is a worthy and important goal. They achieve this by building packages using the build inputs that were used to build the binaries. The build inputs are earlier versions of Debian packages (not necessarily from any public Debian release), archived at snapshot.debian.org.
I realized that these rebuilds would not be sufficient for me: they don’t solve the problem of how to trust the toolchain. Let’s assume the reproduce.debian.net effort succeeds and is able to 100% bit-by-bit identically reproduce the official Debian binaries, which appears to be within reach. To have trusted binaries we would “only” have to audit the source code for the latest version of the packages AND audit the toolchain used. There is no escaping from auditing all the source code — that’s what I think we all would prefer to focus on, to be able to improve upstream source code.
The trouble is auditing the toolchain. With the reproduce.debian.net approach, that is a recursive problem back to really ancient Debian packages, some of which may no longer build or work, or even be legally distributable. Auditing all those old packages is a LARGER effort than auditing all current packages! Auditing old packages is also of less use for making contributions: those releases are old, and chances are any improvements have already been implemented and released, or the improvements are no longer applicable because the projects have evolved since the earlier version.
See where this is going now? I reached the conclusion that reproducing official binaries using the same build inputs is not what I’m interested in. I want to be able to build the binaries that I use from source, using a toolchain that I can also build from source. And preferably all of this using the latest versions of all packages, so that I can contribute and send patches for them, to improve matters.
The toolchain that reproduce.debian.net is using is not trustworthy unless all those ancient packages are audited or rebuilt bit-by-bit identically, and I don’t see any practical way forward to achieve that goal. Nor have I seen anyone working on that problem. It is possible to do, but I think there are simpler ways to achieve the same goal.
My approach to reach trusted binaries on my laptop appears to be a three-step effort, the last step being to build *.deb packages based on Guix binaries that, when used as build inputs (potentially iteratively), lead to 100% bit-by-bit identical packages as in step 1.
How to go about achieving this? Today’s Debian build architecture is something that lacks transparency and end-user control. The build environment and signing keys are managed by, or influenced by, unidentified people following undocumented (or at least not public) security procedures, under unknown legal jurisdictions. I have always wondered why none of the Debian derivatives has adopted a modern GitDevOps-style approach as a method to improve binary build transparency; maybe I missed some project?
If you want to contribute to some GitHub or GitLab project, you click the ‘Fork’ button and get a CI/CD pipeline running which rebuilds artifacts for the project. This makes it easy for people to contribute, and you get good QA control because the entire chain, up until the released artifact, is produced and tested. At least in theory. Many projects are behind on this, but it seems like a useful goal for all projects. This is also liberating: all users are able to reproduce artifacts. There is no longer any magic involved in preparing release artifacts. As we’ve seen with many software supply-chain security incidents over the past years, wherever “magic” is involved is a good place to introduce malicious code.
To allow me to continue with my experiment, I thought the simplest way forward was to setup a GitDevOps-centric and user-controllable way to build the entire Debian archive. Let me introduce the debdistbuild project.
Debdistbuild is a re-usable GitLab CI/CD pipeline, similar to the Salsa CI pipeline. It provides one “build” job definition and one “deploy” job definition. The pipeline can run on GitLab.org Shared Runners or you can set up your own runners, like my GitLab riscv64 runner setup. I have concerns about relying on GitLab (both as software and as a service), but my ideas are easy to transfer to some other GitDevSecOps setup such as Codeberg.org. Self-hosting GitLab, including self-hosted runners, is common today, and Debian relies increasingly on Salsa for this. All of the build infrastructure could be hosted on Salsa eventually.
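As a rough sketch of how such a pipeline could be triggered for a single package (the project ID, the token variable, and the PACKAGE/VERSION variable names are my assumptions here, not a documented interface of Debdistbuild), GitLab's generic pipeline trigger API can be used like this:

curl -X POST \
  -F "token=$TRIGGER_TOKEN" \
  -F "ref=main" \
  -F "variables[PACKAGE]=hello" \
  -F "variables[VERSION]=2.10-3" \
  "https://gitlab.com/api/v4/projects/PROJECT_ID/trigger/pipeline"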
The build job is simple. From within an official Debian container image, it builds packages using dpkg-buildpackage, essentially by invoking the following commands.
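# Enable deb-src entries so "apt-get source" and "apt-get build-dep" work,
# then refresh package lists; Check-Valid-Until=false is needed because
# archived snapshot repositories ship expired Release metadata.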
sed -i 's/ deb$/ deb deb-src/' /etc/apt/sources.list.d/*.sources
apt-get -o Acquire::Check-Valid-Until=false update
apt-get dist-upgrade -q -y
apt-get install -q -y --no-install-recommends build-essential fakeroot
env DEBIAN_FRONTEND=noninteractive \
apt-get build-dep -y --only-source $PACKAGE=$VERSION
useradd -m build
DDB_BUILDDIR=/build/reproducible-path
chgrp build $DDB_BUILDDIR
chmod g+w $DDB_BUILDDIR
su build -c "apt-get source --only-source $PACKAGE=$VERSION" > ../$PACKAGE_$VERSION.build
cd $DDB_BUILDDIR
su build -c "dpkg-buildpackage"
cd ..
mkdir out
mv -v $(find $DDB_BUILDDIR -maxdepth 1 -type f) out/
The deploy job is also simple. It commits artifacts to a Git project, using Git-LFS to handle large objects, essentially something like this:
if ! grep -q '^pool/**' .gitattributes; then
git lfs track 'pool/**'
git add .gitattributes
git commit -m"Track pool/* with Git-LFS." .gitattributes
fi
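# Compute the Debian pool sub-directory: package names starting with "lib"
# use a four-character prefix (e.g. pool/main/libf/libfoo), all others the
# first character only.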
POOLDIR=$(if test "$(echo "$PACKAGE" | cut -c1-3)" = "lib"; then C=4; else C=1; fi; echo "$DDB_PACKAGE" | cut -c1-$C)
mkdir -pv pool/main/$POOLDIR/
rm -rfv pool/main/$POOLDIR/$PACKAGE
mv -v out pool/main/$POOLDIR/$PACKAGE
git add pool
git commit -m"Add $PACKAGE." -m "$CI_JOB_URL" -m "$VERSION" -a
if test "${DDB_GIT_TOKEN:-}" = ""; then
echo "SKIP: Skipping git push due to missing DDB_GIT_TOKEN (see README)."
else
git push -o ci.skip
fi
That’s it! The actual implementation is a bit longer, but the major difference is additional log and error handling.
You may review the source code of the base Debdistbuild pipeline definition, the base Debdistbuild script, and the rc.d/-style scripts implementing the build.d/ process and the deploy.d/ commands.
There was one complication related to artifact size. GitLab.org job artifacts are limited to 1GB. Several packages in Debian produce artifacts larger than this. What to do? GitLab supports up to 5GB for files stored in its package registry, but this limit is too close for my comfort, having seen some multi-GB artifacts already. I made the build job optionally upload artifacts to an S3 bucket using a SHA256-hashed file hierarchy. I’m using Hetzner Object Storage but there are many S3 providers around, including self-hosting options. This hierarchy is compatible with the Git-LFS .git/lfs/objects/ hierarchy, and it is easy to set up a separate Git-LFS object URL to allow Git-LFS object downloads from the S3 bucket. In this mode, only Git-LFS stubs are pushed to the git repository. It should have no trouble handling the large number of files, since I have earlier experience with Apt mirrors in Git-LFS.
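As a rough illustration of that two-level layout (the artifact name below is made up), the object path can be derived from a file's SHA256 hash like this:

OID=$(sha256sum hello_2.10-3_amd64.deb | cut -d' ' -f1)
printf '%s/%s/%s\n' "$(echo "$OID" | cut -c1-2)" "$(echo "$OID" | cut -c3-4)" "$OID"
# prints e.g. ab/cd/abcdef... -- the same layout Git-LFS uses under .git/lfs/objects/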
To speed up job execution, and to guarantee a stable build environment, instead of installing build-essential packages on every build job execution I prepare some build container images. The project responsible for this is tentatively called stage-N-containers. Right now it creates containers suitable for rolling builds of trixie on amd64, arm64, and riscv64, and a container intended for use as the stage-0, based on the 20250407 Docker images of bookworm on amd64 and arm64 using the snapshot.d.o 20250407 archive. Or actually, I’m using snapshot-cloudflare.d.o because of download speed and reliability. I would have preferred to use my own snapshot mirror with Hetzner bandwidth, alas the Debian snapshot team has concerns about me publishing the list of (SHA1 hash) filenames publicly and I haven’t bothered to set up non-public access.
Debdistbuild has built around 2,500 packages for bookworm on amd64 and bookworm on arm64. To confirm the generality of my approach, it also builds trixie on amd64, trixie on arm64 and trixie on riscv64. The riscv64 builds are all on my own hosted runners. For amd64 and arm64 my own runners are only used for large packages where the GitLab.com shared runners run into the 3-hour time limit.
What’s next in this venture? Some ideas include:
- A for loop around curl to trigger GitLab CI/CD pipelines with names derived from https://popcon.debian.org/.
- A dists/ sub-directory, so that it is possible to use the newly built packages in the stage-1 build phase.
- dpkg and apt implemented using /bin/sh, for use during bootstrapping when neither packaging tool is available.
What do you think?
30 April, 2025 09:25AM by simon
GNU libsigsegv version 2.15 is released.
New in this release:
Download: https://ftp.gnu.org/gnu/libsigsegv/libsigsegv-2.15.tar.gz
28 April, 2025 04:45PM by Bruno Haible
A draft of a proposed GNU extension to the Algol 68 programming language has been published today at https://algol68-lang.org/docs/GNU68-2025-004-supper.pdf.
This new stropping regime aims to be more appealing to contemporary programmers and more convenient to use in today's computing systems, while retaining the full expressive power of a stropped language and being 100% backwards compatible as a super-extension.
The stropping regime has already been implemented in the GCC Algol 68 front-end (https://gcc.gnu.org/wiki/Algol68FrontEndGCC) and also in the Emacs a68-mode, which provides full automatic indentation and syntax highlighting.
The sources of the godcc program have already been transitioned to the new regime, and the result is quite satisfactory. Check it out!
Comments and suggestions for the draft are very welcome, and would help to move the draft forward to a final state. Please send them to algol68@gcc.gnu.org.
Salud, and happy Easter everyone!
Download from https://ftp.gnu.org/gnu/gperf/gperf-3.3.tar.gz
New in this release:
20 April, 2025 12:43PM by Bruno Haible
19 April 2025: Unifont 16.0.03 is now available. This is a minor release with many glyph improvements. See the ChangeLog file for details.
Download this release from GNU server mirrors at:
https://ftpmirror.gnu.org/unifont/unifont-16.0.03/
or if that fails,
https://ftp.gnu.org/gnu/unifont/unifont-16.0.03/
or, as a last resort,
ftp://ftp.gnu.org/gnu/unifont/unifont-16.0.03/
These files are also available on the unifoundry.com website:
https://unifoundry.com/pub/unifont/unifont-16.0.03/
Font files are in the subdirectory
https://unifoundry.com/pub/unifont/unifont-16.0.03/font-builds/
A more detailed description of font changes is available at
https://unifoundry.com/unifont/index.html
and of utility program changes at
https://unifoundry.com/unifont/unifont-utilities.html
Information about Hangul modifications is at
https://unifoundry.com/hangul/index.html
and
http://unifoundry.com/hangul/hangul-generation.html
Enjoy!
19 April, 2025 04:08PM by Paul Hardy
Remember the XZ Utils backdoor? One factor that enabled the attack was poor auditing of the release tarballs for differences compared to the Git version controlled source code. This proved to be a useful place to distribute malicious data.
The differences between release tarballs and upstream Git sources typically consist of vendored and generated files. Lots of them. Auditing all source tarballs in a distribution for similar issues is hard and boring work for humans. Wouldn’t it be better if that human auditing time could be spent auditing the actual source code stored in upstream version control instead? That’s where auditing time would help the most.
Are there better ways to address the concern about differences between version control sources and tarball artifacts? Let’s consider some approaches:
- Stop publishing release tarballs with pre-generated content, and release directly from version control instead.
- Keep publishing tarballs, but independently and continuously verify that they can be derived from the version-controlled sources.
While I like the properties of the first solution, and have made efforts to support that approach, I don’t think normal source tarballs are going away any time soon. I am concerned that it may not even be a desirable complete solution to this problem. We may need tarballs with pre-generated content in them for various reasons that aren’t entirely clear to us today.
So let’s consider the second approach. It could help while waiting for more experience with the first approach, to see if there are any fundamental problems with it.
How do you know that the XZ release tarball was actually derived from its version control sources? The same for Gzip? Coreutils? Tar? Sed? Bash? GCC? We don’t know this! I am not aware of any automated or collaborative effort to perform this independent confirmation. Nor am I aware of anyone attempting to do this on a regular basis. We would want to be able to do this in the year 2042 too. I think the best way to reach that is to do the verification continuously in a pipeline, fixing bugs as time passes. The current state of the art seems to be that people audit the differences manually and hope to find something. I suspect many package maintainers ignore the problem, take the release source tarballs, and trust upstream about this.
We can do better.
I have launched a project to set up a GitLab pipeline that invokes per-release scripts to rebuild that release artifact from git sources. Currently it only contains recipes for projects that I released myself, releases which were done in a controlled way with considerable care to make reproducing the tarballs possible. The project homepage is here:
https://gitlab.com/debdistutils/verify-reproducible-releases
The project is able to reproduce the release tarballs for Libtasn1 v4.20.0, InetUtils v2.6, Libidn2 v2.3.8, Libidn v1.43, and GNU SASL v2.2.2. You can see this in a recent successful pipeline. All of those releases were prepared using Guix, and I’m hoping the Guix time-machine will make it possible to keep re-generating these tarballs for many years to come.
I spent some time trying to reproduce the current XZ release tarball for version 5.8.1. That would have been a nice example, wouldn’t it? First I had to somehow mimic upstream’s build environment. The XZ release tarball contains GNU Libtool files that are identified with version 2.5.4.1-baa1-dirty. I initially assumed this was due to the maintainer having installed libtool from git locally (after making some modifications) and made the XZ release using it. Later I learned that it may actually be coming from ArchLinux, which ships with this particular libtool version. It seems weird for a distribution to use libtool built from a non-release tag, and furthermore to apply patches to it, but things are what they are. I made some effort to set up an ArchLinux build environment, however the now-current Gettext version in ArchLinux seems to be more recent than the one that was used to prepare the XZ release. I don’t know enough ArchLinux to set up an environment corresponding to an earlier version of ArchLinux, which would be required to finish this. I gave up; maybe the XZ release wasn’t prepared on ArchLinux after all. Actually XZ became a good example for this writeup anyway: while you would think this should be trivial, the fact is that it isn’t! (There is another aspect here: fingerprinting the versions used to prepare release tarballs allows you to infer what kind of OS maintainers are using to make releases on, which is interesting on its own.)
I made some small attempts to reproduce the tarball for GNU Shepherd version 1.0.4 too, but I still haven’t managed to complete it.
Do you want a supply-chain challenge for the Easter weekend? Pick some well-known software and try to re-create the official release tarballs from the corresponding Git checkout. Is anyone able to reproduce anything these days? Bonus points for wrapping it up as a merge request to my project.
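A minimal sketch of such an attempt might look like the following (the project, tag, and tarball name are placeholders; it assumes a GNU-style project with a gnulib ./bootstrap script, and an exact match will typically also require recreating the original build environment, e.g. with Guix as described above):

git clone https://git.savannah.gnu.org/git/libtasn1.git
cd libtasn1
git checkout v4.20.0
./bootstrap && ./configure && make dist
sha256sum libtasn1-4.20.0.tar.gz    # compare with the checksum of the official tarball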
Happy Supply-Chain Security Hacking!
17 April, 2025 07:24PM by simon
Greetings! While these tiny issues will likely not affect many, if any, users, there are alas a few tiny errata with the 2.7.1 tarball release, posted here just for those interested. They will of course be incorporated in the next release.
modified gcl/debian/rules
@@ -138,7 +138,7 @@ clean: debian/control debian/gcl.templates
rm -rf $(INS) debian/substvars debian.upstream
rm -rf *stamp build-indep
rm -f debian/elpa-gcl$(EXT).elpa debian/gcl$(EXT)-pkg.el
- rm -rf $(EXT_TARGS) info/gcl$(EXT)*.info*
+ rm -rf $(EXT_TARGS) info/gcl$(EXT)*.info* gcl_pool
debian-clean: debian/control debian/gcl.templates
dh_testdir
modified gcl/git.tag
@@ -1,2 +1,2 @@
-"Version_2_7_0"
+"Version_2_7_1"
modified gcl/o/alloc.c
@@ -707,6 +707,7 @@ empty_relblock(void) {
for (;!rb_emptyp();) {
tm_table[t_relocatable].tm_adjgbccnt--;
expand_contblock_index_space();
+ expand_contblock_array();
GBC(t_relocatable);
}
sSAleaf_collection_thresholdA->s.s_dbind=o;
11 April, 2025 10:06PM by Camm Maguire
Greetings! The GCL team is happy to announce the release of version
2.7.1, the culmination of many years of work and a major development
in the evolution of GCL. Please see http://www.gnu.org/software/gcl for
downloading information.
11 April, 2025 02:31PM by Camm Maguire
Don’t do this:
thing = Thing()
try:
    thing.do_stuff()
finally:
    thing.close()
Do do this:
from contextlib import closing

with closing(Thing()) as thing:
    thing.do_stuff()
Why is the second better? Using contextlib.closing() ties closing the item to its creation. These baby examples are about equally easy to reason about, with only a single line in the try block, but consider what happens if (or when) more lines get added in the future. In the first example the close moves away, potentially offscreen, but that doesn’t happen in the second.
11 April, 2025 10:27AM by gbenson
This is a bugfix release for gnunet 0.24.0. It fixes some regressions and minor bugs.
The GPG key used to sign is: 3D11063C10F98D14BD24D1470B0998EF86F59B6A
Note that due to mirror synchronization, not all links may be functional early after the release. For direct access try https://ftp.gnu.org/gnu/gnunet/
This is to announce grep-3.12, a stable release.
It's been nearly two years! There have been two bug fixes and many
harder-to-see improvements via gnulib. Thanks to Paul Eggert for doing
so much of the work and Bruno Haible for all the testing and all he does
to make gnulib a paragon of portable, reliable, top-notch code.
There have been 77 commits by 6 people in the 100 weeks since 3.11.
See the NEWS below for a brief summary.
Thanks to everyone who has contributed!
The following people contributed changes to this release:
Bruno Haible (5)
Carlo Marcelo Arenas Belón (1)
Collin Funk (1)
Grisha Levit (1)
Jim Meyering (31)
Paul Eggert (38)
Jim
[on behalf of the grep maintainers]
==================================================================
Here is the GNU grep home page:
https://gnu.org/s/grep/
Here are the compressed sources:
https://ftp.gnu.org/gnu/grep/grep-3.12.tar.gz (3.1MB)
https://ftp.gnu.org/gnu/grep/grep-3.12.tar.xz (1.9MB)
Here are the GPG detached signatures:
https://ftp.gnu.org/gnu/grep/grep-3.12.tar.gz.sig
https://ftp.gnu.org/gnu/grep/grep-3.12.tar.xz.sig
Use a mirror for higher download bandwidth:
https://www.gnu.org/order/ftp.html
Here are the SHA1 and SHA256 checksums:
025644ca3ea4f59180d531547c53baeb789c6047 grep-3.12.tar.gz
ut2lRt/Eudl+mS4sNfO1x/IFIv/L4vAboenNy+dkTNw= grep-3.12.tar.gz
4b4df79f5963041d515ef64cfa245e0193a33009 grep-3.12.tar.xz
JkmyfA6Q5jLq3NdXvgbG6aT0jZQd5R58D4P/dkCKB7k= grep-3.12.tar.xz
Verify the base64 SHA256 checksum with cksum -a sha256 --check
from coreutils-9.2 or OpenBSD's cksum since 2007.
Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact. First, be sure to download both the .sig file
and the corresponding tarball. Then, run a command like this:
gpg --verify grep-3.12.tar.gz.sig
The signature should match the fingerprint of the following key:
pub rsa4096/0x7FD9FCCB000BEEEE 2010-06-14 [SCEA]
Key fingerprint = 155D 3FC5 00C8 3448 6D1E EA67 7FD9 FCCB 000B EEEE
uid [ unknown] Jim Meyering <jim@meyering.net>
uid [ unknown] Jim Meyering <meyering@fb.com>
uid [ unknown] Jim Meyering <meyering@gnu.org>
If that command fails because you don't have the required public key,
or that public key has expired, try the following commands to retrieve
or refresh it, and then rerun the 'gpg --verify' command.
gpg --locate-external-key jim@meyering.net
gpg --recv-keys 7FD9FCCB000BEEEE
wget -q -O- 'https://savannah.gnu.org/project/release-gpgkeys.php?group=grep&download=1' | gpg --import -
As a last resort to find the key, you can try the official GNU
keyring:
wget -q https://ftp.gnu.org/gnu/gnu-keyring.gpg
gpg --keyring gnu-keyring.gpg --verify grep-3.12.tar.gz.sig
This release is based on the grep git repository, available as
git clone https://git.savannah.gnu.org/git/grep.git
with commit 3f8c09ec197a2ced82855f9ecd2cbc83874379ab tagged as v3.12.
For a summary of changes and contributors, see:
https://git.sv.gnu.org/gitweb/?p=grep.git;a=shortlog;h=v3.12
or run this command from a git-cloned grep directory:
git shortlog v3.11..v3.12
This release was bootstrapped with the following tools:
Autoconf 2.72.76-2f64
Automake 1.17.0.91
Gnulib 2025-04-04 3773db653242ab7165cd300295c27405e4f9cc79
NEWS
* Noteworthy changes in release 3.12 (2025-04-10) [stable]
** Bug fixes
Searching a directory with at least 100,000 entries no longer fails
with "Operation not supported" and exit status 2. Now, this prints 1
and no diagnostic, as expected:
$ mkdir t && cd t && seq 100000|xargs touch && grep -r x .; echo $?
1
[bug introduced in grep 3.11]
-mN where 1 < N no longer mistakenly lseeks to end of input merely
because standard output is /dev/null.
** Changes in behavior
The --unix-byte-offsets (-u) option is gone. In grep-3.7 (2021-08-14)
it became a warning-only no-op. Before then, it was a Windows-only no-op.
On Windows platforms and on AIX in 32-bit mode, grep in some cases
now supports Unicode characters outside the Basic Multilingual Plane.
10 April, 2025 05:04PM by Jim Meyering
This is to announce gzip-1.14, a stable release.
Most notable: "gzip -d" is up to 40% faster on x86_64 CPUs with pclmul
support. Why? Because about half of its time was spent computing a CRC
checksum, and that code is far more efficient now. Even on 10-year-old
CPUs lacking pclmul support, it's ~20% faster. Thanks to Lasse Collin
for alerting me to this very early on, to Sam Russell for contributing
gnulib's new crc module and to Bruno Haible and everyone else who keeps
the bar so high for all of gnulib. And as usual, thanks to Paul Eggert
for many contributions everywhere.
There have been 58 commits by 7 people in the 85 weeks since 1.13.
See the NEWS below for a brief summary.
Thanks to everyone who has contributed!
The following people contributed changes to this release:
Bruno Haible (1)
Collin Funk (4)
Jim Meyering (26)
Lasse Collin (1)
Paul Eggert (24)
Sam Russell (1)
Simon Josefsson (1)
Jim
[on behalf of the gzip maintainers]
==================================================================
Here is the GNU gzip home page:
https://gnu.org/s/gzip/
Here are the compressed sources:
https://ftp.gnu.org/gnu/gzip/gzip-1.14.tar.gz (1.4MB)
https://ftp.gnu.org/gnu/gzip/gzip-1.14.tar.xz (868KB)
Here are the GPG detached signatures:
https://ftp.gnu.org/gnu/gzip/gzip-1.14.tar.gz.sig
https://ftp.gnu.org/gnu/gzip/gzip-1.14.tar.xz.sig
Use a mirror for higher download bandwidth:
https://www.gnu.org/order/ftp.html
Here are the SHA1 and SHA256 checksums:
27f9847892a1c59b9527469a8a3e5d635057fbdd gzip-1.14.tar.gz
YT1upE8SSNc3DHzN7uDdABegnmw53olLPG8D+YEZHGs= gzip-1.14.tar.gz
05f44a8a589df0171e75769e3d11f8b11d692f58 gzip-1.14.tar.xz
Aae4gb0iC/32Ffl7hxj4C9/T9q3ThbmT3Pbv0U6MCsY= gzip-1.14.tar.xz
Verify the base64 SHA256 checksum with cksum -a sha256 --check
from coreutils-9.2 or OpenBSD's cksum since 2007.
Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact. First, be sure to download both the .sig file
and the corresponding tarball. Then, run a command like this:
gpg --verify gzip-1.14.tar.gz.sig
The signature should match the fingerprint of the following key:
pub rsa4096/0x7FD9FCCB000BEEEE 2010-06-14 [SCEA]
Key fingerprint = 155D 3FC5 00C8 3448 6D1E EA67 7FD9 FCCB 000B EEEE
uid [ unknown] Jim Meyering <jim@meyering.net>
uid [ unknown] Jim Meyering <meyering@fb.com>
uid [ unknown] Jim Meyering <meyering@gnu.org>
If that command fails because you don't have the required public key,
or that public key has expired, try the following commands to retrieve
or refresh it, and then rerun the 'gpg --verify' command.
gpg --locate-external-key jim@meyering.net
gpg --recv-keys 7FD9FCCB000BEEEE
wget -q -O- 'https://savannah.gnu.org/project/release-gpgkeys.php?group=gzip&download=1' | gpg --import -
As a last resort to find the key, you can try the official GNU
keyring:
wget -q https://ftp.gnu.org/gnu/gnu-keyring.gpg
gpg --keyring gnu-keyring.gpg --verify gzip-1.14.tar.gz.sig
This release is based on the gzip git repository, available as
git clone https://git.savannah.gnu.org/git/gzip.git
with commit fbc4883eb9c304a04623ac506dd5cf5450d055f1 tagged as v1.14.
For a summary of changes and contributors, see:
https://git.sv.gnu.org/gitweb/?p=gzip.git;a=shortlog;h=v1.14
or run this command from a git-cloned gzip directory:
git shortlog v1.13..v1.14
This release was bootstrapped with the following tools:
Autoconf 2.72.76-2f64
Automake 1.17.0.91
Gnulib 2025-01-31 553ab924d2b68d930fae5d3c6396502a57852d23
NEWS
* Noteworthy changes in release 1.14 (2025-04-09) [stable]
** Bug fixes
'gzip -d' no longer omits the last partial output buffer when the
input ends unexpectedly on an IBM Z platform.
[bug introduced in gzip-1.11]
'gzip -l' no longer misreports lengths of multimember inputs.
[bug introduced in gzip-1.12]
'gzip -S' now rejects suffixes containing '/'.
[bug present since the beginning]
** Changes in behavior
The GZIP environment variable is now silently ignored except for the
options -1 (--fast) through -9 (--best), --rsyncable, and --synchronous.
This brings gzip into line with more-cautious compressors like zstd
that limit environment variables' effect to relatively innocuous
performance issues. You can continue to use scripts to specify
whatever gzip options you like.
'zmore' is no longer installed on platforms lacking 'more'.
** Performance improvements
gzip now decompresses significantly faster by computing CRCs via a
slice by 8 algorithm, and faster yet on x86-64 platforms that
support pclmul instructions.
10 April, 2025 04:34AM by Jim Meyering
This is to announce coreutils-9.7, a stable release.
There have been 63 commits by 11 people in the 12 weeks since 9.6,
with a focus on bug fixing and stabilization.
See the NEWS below for a brief summary.
Thanks to everyone who has contributed!
The following people contributed changes to this release:
Bruno Haible (1) Jim Meyering (2)
Collin Funk (2) Lukáš Zaoral (1)
Daniel Hofstetter (1) Mike Swanson (1)
Frédéric Yhuel (1) Paul Eggert (21)
G. Branden Robinson (1) Pádraig Brady (32)
Grisha Levit (1)
Pádraig [on behalf of the coreutils maintainers]
==================================================================
Here is the GNU coreutils home page:
https://gnu.org/s/coreutils/
Here are the compressed sources:
https://ftp.gnu.org/gnu/coreutils/coreutils-9.7.tar.gz (15MB)
https://ftp.gnu.org/gnu/coreutils/coreutils-9.7.tar.xz (5.9MB)
Here are the GPG detached signatures:
https://ftp.gnu.org/gnu/coreutils/coreutils-9.7.tar.gz.sig
https://ftp.gnu.org/gnu/coreutils/coreutils-9.7.tar.xz.sig
Use a mirror for higher download bandwidth:
https://www.gnu.org/order/ftp.html
Here are the SHA1 and SHA256 checksums:
File: coreutils-9.7.tar.gz
SHA1 sum: bfebebaa1aa59fdfa6e810ac07d85718a727dcf6
SHA256 sum: 0898a90191c828e337d5e4e4feb71f8ebb75aacac32c434daf5424cda16acb42
File: coreutils-9.7.tar.xz
SHA1 sum: 920791e12e7471479565a066e116a087edcc0df9
SHA256 sum: e8bb26ad0293f9b5a1fc43fb42ba970e312c66ce92c1b0b16713d7500db251bf
Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact. First, be sure to download both the .sig file
and the corresponding tarball. Then, run a command like this:
gpg --verify coreutils-9.7.tar.gz.sig
The signature should match the fingerprint of the following key:
pub rsa4096/0xDF6FD971306037D9 2011-09-23 [SC]
Key fingerprint = 6C37 DC12 121A 5006 BC1D B804 DF6F D971 3060 37D9
uid [ultimate] Pádraig Brady <P@draigBrady.com>
uid [ultimate] Pádraig Brady <pixelbeat@gnu.org>
If that command fails because you don't have the required public key,
or that public key has expired, try the following commands to retrieve
or refresh it, and then rerun the 'gpg --verify' command.
gpg --locate-external-key P@draigBrady.com
gpg --recv-keys DF6FD971306037D9
wget -q -O- 'https://savannah.gnu.org/project/release-gpgkeys.php?group=coreutils&download=1' | gpg --import -
As a last resort to find the key, you can try the official GNU
keyring:
wget -q https://ftp.gnu.org/gnu/gnu-keyring.gpg
gpg --keyring gnu-keyring.gpg --verify coreutils-9.7.tar.gz.sig
This release is based on the coreutils git repository, available as
git clone https://git.savannah.gnu.org/git/coreutils.git
with commit 8e075ff8ee11692c5504d8e82a48ed47a7f07ba9 tagged as v9.7.
For a summary of changes and contributors, see:
https://git.sv.gnu.org/gitweb/?p=coreutils.git;a=shortlog;h=v9.7
or run this command from a git-cloned coreutils directory:
git shortlog v9.6..v9.7
This release was bootstrapped with the following tools:
Autoconf 2.72.70-9ff9
Automake 1.16.5
Gnulib 2025-04-07 41e7b7e0d159d8ac0eb385964119f350ac9dfc3f
Bison 3.8.2
NEWS
* Noteworthy changes in release 9.7 (2025-04-09) [stable]
** Bug fixes
'cat' would fail with "input file is output file" if input and
output are the same terminal device and the output is append-only.
[bug introduced in coreutils-9.6]
'cksum -a crc' misbehaved on aarch64 with 32-bit uint_fast32_t.
[bug introduced in coreutils-9.6]
dd with the 'nocache' flag will now detect all failures to drop the
cache for the whole file. Previously it may have erroneously succeeded.
[bug introduced with the "nocache" feature in coreutils-8.11]
'ls -Z dir' would crash on all systems, and 'ls -l' could crash
on systems like Android with SELinux but without xattr support.
[bug introduced in coreutils-9.6]
`ls -l` could output spurious "Not supported" errors in certain cases,
like with dangling symlinks on cygwin.
[bug introduced in coreutils-9.6]
timeout would fail to timeout commands with infinitesimal timeouts.
For example `timeout 1e-5000 sleep inf` would never timeout.
[bug introduced with timeout in coreutils-7.0]
sleep, tail, and timeout would sometimes sleep for slightly less
time than requested.
[bug introduced in coreutils-5.0]
'who -m' now outputs entries for remote logins. Previously login
entries prefixed with the service (like "sshd") were not matched.
[bug introduced in coreutils-9.4]
** Improvements
'logname' correctly returns the user who logged in the session,
on more systems. Previously on musl or uclibc it would have merely
output the LOGNAME environment variable.
09 April, 2025 11:36AM by Pádraig Brady
This is to announce diffutils-3.12, a stable bug-fix release.
Thanks to Paul Eggert and Collin Funk for the bug fixes.
There have been 13 commits by 4 people in the 9 weeks since 3.11.
See the NEWS below for a brief summary.
Thanks to everyone who has contributed!
The following people contributed changes to this release:
Collin Funk (1)
Jim Meyering (6)
Paul Eggert (5)
Simon Josefsson (1)
Jim
[on behalf of the diffutils maintainers]
==================================================================
Here is the GNU diffutils home page:
https://gnu.org/s/diffutils/
Here are the compressed sources:
https://ftp.gnu.org/gnu/diffutils/diffutils-3.12.tar.gz (3.3MB)
https://ftp.gnu.org/gnu/diffutils/diffutils-3.12.tar.xz (1.9MB)
Here are the GPG detached signatures:
https://ftp.gnu.org/gnu/diffutils/diffutils-3.12.tar.gz.sig
https://ftp.gnu.org/gnu/diffutils/diffutils-3.12.tar.xz.sig
Use a mirror for higher download bandwidth:
https://www.gnu.org/order/ftp.html
Here are the SHA1 and SHA256 checksums:
e3f3e8ef171fcb54911d1493ac6066aa3ed9df38 diffutils-3.12.tar.gz
W+GBsn7Diq0kUAgGYaZOShdSuym31QUr8KAqcPYj+bI= diffutils-3.12.tar.gz
c2f302726d2709c6881c4657430a671abe5eedfa diffutils-3.12.tar.xz
fIt/n8hgkUH96pzs6FJJ0whiQ5H/Yd7a9Sj8szdyff0= diffutils-3.12.tar.xz
Verify the base64 SHA256 checksum with cksum -a sha256 --check
from coreutils-9.2 or OpenBSD's cksum since 2007.
Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact. First, be sure to download both the .sig file
and the corresponding tarball. Then, run a command like this:
gpg --verify diffutils-3.12.tar.gz.sig
The signature should match the fingerprint of the following key:
pub rsa4096/0x7FD9FCCB000BEEEE 2010-06-14 [SCEA]
Key fingerprint = 155D 3FC5 00C8 3448 6D1E EA67 7FD9 FCCB 000B EEEE
uid [ unknown] Jim Meyering <jim@meyering.net>
uid [ unknown] Jim Meyering <meyering@fb.com>
uid [ unknown] Jim Meyering <meyering@gnu.org>
If that command fails because you don't have the required public key,
or that public key has expired, try the following commands to retrieve
or refresh it, and then rerun the 'gpg --verify' command.
gpg --locate-external-key jim@meyering.net
gpg --recv-keys 7FD9FCCB000BEEEE
wget -q -O- 'https://savannah.gnu.org/project/release-gpgkeys.php?group=diffutils&download=1' | gpg --import -
As a last resort to find the key, you can try the official GNU
keyring:
wget -q https://ftp.gnu.org/gnu/gnu-keyring.gpg
gpg --keyring gnu-keyring.gpg --verify diffutils-3.12.tar.gz.sig
This release is based on the diffutils git repository, available as
git clone https://git.savannah.gnu.org/git/diffutils.git
with commit 16681a3cbcea47e82683c713b0dac7d59d85a6fa tagged as v3.12.
For a summary of changes and contributors, see:
https://git.sv.gnu.org/gitweb/?p=diffutils.git;a=shortlog;h=v3.12
or run this command from a git-cloned diffutils directory:
git shortlog v3.11..v3.12
This release was bootstrapped with the following tools:
Autoconf 2.72.76-2f64
Automake 1.17.0.91
Gnulib 2025-04-04 3773db653242ab7165cd300295c27405e4f9cc79
NEWS
* Noteworthy changes in release 3.12 (2025-04-08) [stable]
** Bug fixes
diff -r no longer merely summarizes when comparing an empty regular
file to a nonempty regular file.
[bug#76452 introduced in 3.11]
diff -y no longer crashes when given nontrivial differences.
[bug#76613 introduced in 3.11]
09 April, 2025 03:16AM by Jim Meyering