Planet GNU

Aggregation of development blogs from the GNU Project

May 16, 2025

FSF Events

Community meetup in Moscow, Russia

We invite you to discuss the state of the free software movement, both in general and specifically in the Russian-speaking world, at a meetup celebrating the FSF's 40th anniversary. Participants in the movement will present its history and goals to those who have just begun, or want to begin, getting acquainted with it, and will discuss among themselves a strategy for its development.

16 May, 2025 02:39PM

May 15, 2025

Community meetup in Tunis, Tunisia, and online

Round table with FSF executive director Zoë Kooyman to celebrate FSF40

15 May, 2025 03:55PM

May 14, 2025

GNU Health

Suriname Public Healthcare System embraces GNU Health

Suriname has adopted the GNU Health Hospital and Health Information System for its public healthcare system.

The adoption of GNU Health was announced at a press conference held last Friday, May 9th, in Paramaribo, in the context of the country's healthcare digitization campaign. They described GNU Health as “an open source system that is both accessible and scalable”1. During the event, the Suriname Patient Portal and the My Health App were also announced.

Press release. From left to right: Prof. Dr Jerry Toelsie, Minister Amar Ramadhin, Dr Aloysius Koendjbihari and Mr. Richard Mendes

The Minister of Health, Dr. Amar Ramadhin, emphasized the benefits of this digital transformation: “We move away from paper files and work towards greater efficiency and a patient-oriented care experience”.

Digitization is supported by the IS4HIT (Information Systems for Health Information Technology) program, an initiative of the Pan American Health Organization (PAHO), whose delegates were at the conference together with local health professionals.

The GNU Health Hospital and Health Information System – from the socioeconomic determinants of health to the molecular basis of disease

The GNU Health rollout will proceed in phases across the different public health centers, starting at the Regional Service Centers (RGDs). The main focus is on primary care. Tasks in the initial phase include demographics, patient management, appointments, medical encounters, prescriptions, complementary test orders, and reporting. Training sessions for the local health professionals and technical team are being conducted, as is the localization to Suriname.

Minister Ramadhin declared: “[Healthcare] digitization is not an end in itself but a powerful means to make care more human-oriented, safer and more efficient.” That's where GNU Health fits right in. The GNU Health Hospital and Health Information System has social medicine and primary care at its core, and it excels in health promotion and disease prevention. When properly implemented and used, GNU Health is far more than an Electronic Medical Record or a Hospital Management Information System: it empowers health professionals to assess the socioeconomic determinants of health and disease, taking a proactive approach to prevent and tackle the roots of disease at the individual, family and societal levels. The world is facing a pandemic of non-communicable diseases: obesity, diabetes, depression, cancer and neurodegenerative conditions are on the rise, with an appalling impact on the underprivileged. GNU Health will be a great ally for the nurses, physicians, nutritionists and social workers of Suriname in finding and engaging those at higher risk in the community.

The fact that GNU Health is Free/Libre software allows Suriname to download and study the system and adapt it to its needs and legislation, free of any kind of vendor lock-in. After all, health is, or should be, a non-negotiable human right.

GNU Health is now part of Suriname's effort to deliver a sustainable, interoperable, standards-based, privacy-oriented, scalable digital healthcare solution for the country's public health system.

A Digital Public Good. In 2022, GNU Health was declared a Digital Public Good by the Digital Public Goods Alliance (DPGA). By definition, Digital Public Goods are open-source software, open data, open AI models, open standards, and open content that adhere to privacy and other applicable best practices, do no harm by design, and are of high relevance for the attainment of the United Nations 2030 Sustainable Development Goals (SDGs). This definition stems from the UN Secretary-General's Roadmap for Digital Cooperation.

We are very proud and excited to see GNU Health deployed in Suriname's national public health system, and we wish them the very best in embracing the system as we envision it: a social project with some technology behind it.

About GNU Health

GNU Health is an open science, community-driven project from GNU Solidario, a non-profit humanitarian organization focused on Social Medicine. Our project has been adopted by public hospitals, research and academic institutions, governments and multilateral organizations around the world.

GNU Health is an official GNU package, has received the Free Software Foundation's Award for Social Benefit, and has been declared a Digital Public Good.

See also:

GNU Health : https://www.gnuhealth.org

GNU Solidario: https://www.gnusolidario.org

  1. https://www.surinametimes.com/artikel/suriname-zet-grote-stappen-richting-digitale-gezondheidszorg ↩

14 May, 2025 06:57PM by GNU Solidario

May 13, 2025

FSF Events

Free Software Directory meeting on IRC: Friday, May 16, starting at 12:00 EDT (16:00 UTC)

Join the FSF and friends on Friday, May 16 from 12:00 to 15:00 EDT (16:00 to 19:00 UTC) to help improve the Free Software Directory.

13 May, 2025 08:48PM

May 12, 2025

screen @ Savannah

GNU Screen v.5.0.1 is released

Screen is a full-screen window manager that multiplexes a physical terminal between several processes, typically interactive shells.

5.0.1 is a security fix release. It includes only a few code fixes, typo corrections, and security fixes; it doesn't include any new features.

  • CVE-2025-46805: do NOT send signals with root privileges
  • CVE-2025-46804: avoid file existence test information leaks
  • CVE-2025-46803: apply safe PTY default mode of 0620
  • CVE-2025-46802: prevent temporary 0666 mode on PTYs in attacher
  • CVE-2025-23395: reintroduce lf_secreopen() for logfile
  • buffer overflow due to bad strncpy()
  • uninitialized variables warnings
  • typos
  • combining char handling that could lead to a segfault


The release (official tarball) will be available soon for download:
https://ftp.gnu.org/gnu/screen/

Please report any bugs or regressions.
Thanks to everyone who contributed to this release.

Cheers,
Alex

12 May, 2025 07:38PM by Alexander Naumov

May 11, 2025

GNU Guix

Migrating to Codeberg

The Guix project will be migrating all its repositories along with bug tracking and patch tracking to Codeberg within a month. This decision is the result of a collective consensus-building process that lasted several months. This post shows the upcoming milestones in that migration and discusses what it will change for people using Guix and for contributors.

Codeberg logo.

Context

For those who haven’t heard about it, Codeberg is a source code collaboration platform. It is run by Codeberg e.V., a non-profit registered in Germany. The software behind Codeberg is Forgejo, a free software forge (licensed under GPLv3) supporting the “merge request” style of workflow familiar to many developers.

Since its inception, Guix has been hosting its source code on Savannah, with bug reports and patches handled by email, tracked by a Debbugs instance, and visible on the project’s tracker. Debbugs and Savannah are hosted by the Free Software Foundation (FSF); all three services are administered by volunteers who have been supportive over these 13 years—thanks!

The motivation and the main parts of the migration are laid out in the second Guix Consensus Document (GCD). The GCD process itself was adopted just a few months ago; it’s a major milestone for the project that we’ll discuss in more detail in a future post. Suffice to say that this GCD was discussed and improved publicly for two months, after which deliberation among members of Guix teams led to acceptance.

Milestones

Migration to Codeberg will happen gradually. To summarize the GCD, the key milestones are the following:

  1. By June 7th, and probably earlier, Git repositories will all have migrated to Codeberg—some have already moved.

  2. On May 25th, the Guix repository itself will be migrated.

  3. From there on and until at least May 25th, 2026, https://git.savannah.gnu.org/git/guix.git will be a mirror of https://codeberg.org/guix/guix.git.

  4. Until December 31st, 2025, bug reports and patches will still be accepted by email, in addition to Codeberg (issues and pull requests).

Of course, this is just the beginning. Our hope is that the move can help improve much needed tooling such as the QA infrastructure following work on Forgejo/Cuirass integration started earlier this year, and possibly develop new tools and services to assist in the maintenance of this huge package collection that Guix provides.

What this will change for you

As a user, the main change is that your channels.scm configuration files, if they refer to the git.savannah.gnu.org URL, should be changed to refer to https://codeberg.org/guix/guix.git once the migration is complete. But don't worry: guix pull will tell you if/when you need to update your config files, and the old URL will remain a mirror for at least a year anyway.
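As a hedged sketch of that one-line change: the file location and channel contents below are illustrative examples, not an official migration script, so adapt them to your actual configuration.

```shell
# Illustrative: rewrite the channel URL in a sample channels.scm.
# A real file usually lives at ~/.config/guix/channels.scm.
mkdir -p /tmp/guix-demo
cat > /tmp/guix-demo/channels.scm <<'EOF'
(list (channel
        (name 'guix)
        (url "https://git.savannah.gnu.org/git/guix.git")))
EOF
sed -i 's|git.savannah.gnu.org/git/guix.git|codeberg.org/guix/guix.git|' \
    /tmp/guix-demo/channels.scm
grep -c codeberg /tmp/guix-demo/channels.scm    # prints: 1
```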

Also, channel files produced by guix describe to pin Guix to a specific revision and to re-deploy it later anytime with time-machine will always work, even if they refer to the git.savannah.gnu.org URL, and even when that repository eventually vanishes, thanks to automatic fallback to Software Heritage.

As a contributor, nothing changes for bug reports and patches that you already submitted by email: just keep going!

Once the Guix repository has migrated though, you’ll be able to report bugs at Codeberg and create pull requests for changes. The latter is a relief for many—no need to fiddle with admittedly intricate email setups and procedures—but also a pain point for those who had come to master and appreciate the email workflow.

For this reason, the “User Interfaces” section of the GCD describes the options available besides the Web interface—command-line and Emacs interfaces in particular. Some are still work-in-progress, but it’s exciting to see, for example, that over the past few months many improvements landed in fj.el and that a Forgejo-capable branch of Magit-Forge saw the light. Check it out!

A concern brought up during the discussion is that of having to create an account on Codeberg to be able to contribute—sometimes seen as a hindrance compared to the open-for-all and distributed nature of cooperation by email. This remains an open issue, though hopefully one that will become less acute as support for federation in Forgejo develops. In the meantime, as the GCD states, occasional bug reports and patches sent by email to guix-devel will be accepted.

Moving forward

This was a summary of what is to come; check out the GCD for more info, and reach out to the guix-devel mailing list if you have any questions!

Real work begins now. We hope the migration to Codeberg will be smooth and enjoyable for all. For one thing, it already proved our ability to collectively decide on the project’s future, which is no small feat. There’s a lot to expect from the move in improving the project’s ability to work flawlessly at this scale—more than 100 code contributors and 2,000 commits each month, and more than 33,000 packages available in Guix proper. Let’s make the best of it, and until then, happy hacking!

11 May, 2025 06:30PM by Ludovic Courtès

May 09, 2025

GNU Taler news

GNU Taler 1.0 released

We are happy to announce the release of GNU Taler v1.0.

09 May, 2025 10:00PM

May 08, 2025

health @ Savannah

GNU Health becomes an organization in the Python Package Index - PyPI

We're proud to announce that #GNUHealth is now an organization in the Python Package Index (#PyPI).

The organization makes it easy to find and explore our projects and packages.

This is URL for the GNU Health organization in PyPI:

https://pypi.org/org/GNUHealth/

We are very grateful to the Python Software Foundation for making GNU Health a community organization within PyPI!

Get this and the latest news about GNU Health from our official Mastodon account:

https://mastodon.social/@gnuhealth

08 May, 2025 11:06AM by Luis Falcon

May 07, 2025

gettext @ Savannah

GNU gettext 0.25 released

Download from https://ftp.gnu.org/pub/gnu/gettext/gettext-0.25.tar.gz

New in this release:


  • Programming languages support:
    • Go:
      • xgettext now supports Go.
      • 'msgfmt -c' now verifies the syntax of translations of Go format strings.
      • New examples 'hello-go' and 'hello-go-http' have been added.
    • TypeScript:
      • xgettext now supports TypeScript and TSX (= TypeScript with JSX extensions).
    • D:
      • A new library libintl_d.a contains the runtime for using GNU gettext message catalogs in the D programming language.
      • xgettext now supports D.
      • 'msgfmt -c' now verifies the syntax of translations of D format strings.
      • A new example 'hello-d' has been added.
    • Modula-2:
      • A new library libintl_m2.so contains the runtime for using GNU gettext message catalogs in the Modula-2 programming language.
      • xgettext now supports Modula-2.
      • 'msgfmt -c' now verifies the syntax of translations of Modula-2 format strings.
      • A new example 'hello-modula2' has been added.


  • Improvements for maintainers:
    • xgettext has two new options, '--no-git' and '--generated', that customize the way the 'POT-Creation-Date' in the POT file is computed.
    • Fixed bad interactions between autoreconf and autopoint.

07 May, 2025 05:15PM by Bruno Haible

May 06, 2025

FSF Events

Free Software Directory meeting on IRC: Friday, May 9, starting at 12:00 EDT (16:00 UTC)

Join the FSF and friends on Friday, May 9 from 12:00 to 15:00 EDT (16:00 to 19:00 UTC) to help improve the Free Software Directory.

06 May, 2025 06:47PM

Community meetup in Glasgow, Scotland, United Kingdom

A weekend-long open doors event where people can seek help and discuss where free software may fit in their lives.

06 May, 2025 06:30PM

May 02, 2025

gettext @ Savannah

GNU gettext 0.24.1 released

Download from https://ftp.gnu.org/pub/gnu/gettext/gettext-0.24.1.tar.gz

New in this release:

  • Bug fixes:
    • Fix bad interactions between autoreconf and autopoint.
    • xgettext: Creating the POT file of a package under Git version control is now faster. Also, the use of Git can be turned off by specifying the option --no-git.

02 May, 2025 06:18PM by Bruno Haible

May 01, 2025

FSF Blogs

April GNU Spotlight with Amin Bandali: Twenty-one new GNU releases!

Twenty-one new GNU releases in the last month (as of April 30, 2025):

01 May, 2025 07:30PM

www @ Savannah

Malware in Proprietary Software - April 2025 Additions

The initial injustice of proprietary software often leads to further injustices: malicious functionalities.

The introduction of unjust techniques in nonfree software, such as back doors, DRM, tethering, and others, has become ever more frequent. Nowadays, it is standard practice.

We at the GNU Project show examples of malware that has been introduced in a wide variety of products and dis-services people use every day, and of companies that make use of these techniques.

Here are our latest additions

April 2025

Malware in Games

Malware in Appliances

  • The company making a “smart” bassinet called Snoo has locked the most advanced functionalities of the Snoo behind a paywall. This unexpected change mainly affects users who received the appliance as a gift, or bought it second-hand on the assumption that all these functionalities would be available to them, as they used to be. This is another example of the deceptive behavior of proprietary software developers who take advantage of their power over users to change rules at will.

Another malicious feature of the Snoo is the fact that users need to create an account with the company, which thus has access to personal data, location (SSID), appliance log, etc., as well as manual notes about baby history.

01 May, 2025 05:33PM by Rob Musial

GNU Health

GNU Health Hospital Information System 5.0 enters alpha

We are very happy to announce that the upcoming version of the GNU Health Hospital Information System has entered feature-complete alpha stage. GNU Health HIS 5.0 represents over a year of work and is the largest release to date in terms of functionality and refactoring.

GNU Health HIS 5.0 is expected to be released by the end of June.

This new release comes after over a year of development to deliver state-of-the-art libre technology and user experience. In a nutshell:

  • Tryton 7.0 LTS support
  • New functionality for patient procedures and medical interventions
  • Improved reporting and analytics
  • Enhanced the Laboratory Information System (GNU LIMS – Occhiolino)
  • New features on patient obstetric history and pregnancy related evaluations
  • Improved ergonomics and views on demographics and patient related information.
  • Improved medical genetics and family history taking. Update to the latest genes, proteins and natural variants datasets from UniProt and HUGO
  • Enhanced socioeconomic and family functionality assessment
  • Extensively revised Medical Imaging, DICOM worklists and Orthanc packages
  • Reorganized nursing and ambulatory care packages
  • Enhanced patient body composition and anthropometrics
  • Enhanced “Focus on” patient section, including automated settings and mental health
  • New insurance and billing features for medical interventions and insurance plans.
  • Improved patient safety and allergic conditions checks and prescription writing

On the technical side we have worked on:

  • Migration to Python Poetry and pyproject.toml from setuptools
  • Increased modularity and minimized dependencies among packages
  • Simplified installation and administration (Virtual machine images, pip, ansible)
  • Improved stability by using a virtual environment in the installation
  • Over 30 localization and language teams at Codeberg.
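For readers unfamiliar with the packaging change in the first item, here is a minimal sketch of what a Poetry-managed pyproject.toml looks like. All names, versions and dependencies below are placeholders for illustration, not GNU Health's actual metadata.

```toml
# Illustrative pyproject.toml skeleton for a Poetry-managed package;
# every value here is a placeholder, not GNU Health's real metadata.
[tool.poetry]
name = "gnuhealth-example-module"
version = "5.0.0"
description = "Placeholder metadata for illustration"
license = "GPL-3.0-or-later"

[tool.poetry.dependencies]
python = ">=3.9"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
```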

At this point, our focus is on testing, translation, packaging and documentation. In the coming days we'll migrate our community server so we can all test the upcoming version.

For those of you on GNU Health 4.4, please start planning the migration project to GH HIS 5.0. This new version is a major leap that delivers many benefits, so we highly encourage you to upgrade. As always, the migration methods and tools are included.

We'd like to invite you to translate GNU Health at the Codeberg Weblate translation instance and to report any issues you may find during this period.

Don't forget to follow us on Mastodon (https://mastodon.social/@gnuhealth) to get the latest on this and other GNU Health news!

Stay tuned and happy hacking!

About GNU Health

GNU Health is a Libre, community-driven project from GNU Solidario, a non-profit humanitarian organization focused on Social Medicine. Our project has been adopted by public and private health institutions and laboratories, multilateral organizations and national public health systems around the world.

The GNU Health project provides the tools for individuals, health professionals, institutions and governments to proactively assess and improve the underlying determinants of health, from the socioeconomic agents to the molecular basis of disease. From primary health care to precision medicine.

The following are the main components that make up the GNU Health ecosystem:

  • Hospital Management (HMIS)
  • Social Medicine and Public Health
  • Laboratory Management (Occhiolino)
  • Personal Health Record (MyGNUHealth)
  • Bioinformatics and Medical Genetics
  • Thalamus and Federated health networks
  • GNU Health embedded on Single Board devices

GNU Health is an official GNU (www.gnu.org) package, awarded the Free Software Foundation's Award for Social Benefit. GNU Health has been declared a Digital Public Good and has been adopted by many hospitals, governments and multilateral organizations around the globe.

01 May, 2025 11:59AM by Luis Falcon

April 30, 2025

FSF News

Simon Josefsson

Building Debian in a GitLab Pipeline

After thinking about multi-stage Debian rebuilds I wanted to implement the idea. Recall my illustration:

Earlier I rebuilt all packages that make up the difference between Ubuntu and Trisquel. It turned out that 42% of them were bit-by-bit identical. To check the generality of my approach, I rebuilt the difference between Debian and Devuan too. That was the debdistreproduce project. It “only” had to orchestrate building up to around 500 packages for each distribution and architecture.

Differential reproducible rebuilds don't give you the full picture: they ignore the packages shared between the distributions, which make up over 90% of the packages. So I felt a desire to do full archive rebuilds. The motivation is that in order to trust Trisquel binary packages, I need to trust Ubuntu binary packages (because those make up 90% of the Trisquel packages), and many of those Ubuntu binaries are derived from Debian source packages. How to approach all of this? Last year I created the debdistrebuild project and did top-50 popcon package rebuilds of Debian bullseye, bookworm, trixie, and Ubuntu noble and jammy, on a mix of amd64 and arm64. The amount of reproducibility was lower; the differences were primarily caused by using different build inputs.

Last year I spent (too much) time creating a mirror of snapshot.debian.org, to be able to have older packages available for use as build inputs. I have two copies hosted at different datacentres for reliability and archival safety. At the time, snapshot.d.o had serious rate limiting, making it pretty unusable for massive rebuild usage or even basic downloads. Watching the multi-month download complete last year had a meditative effect. The completion of my snapshot download coincided with me realizing something about the nature of rebuilding packages. Let me give a recap of the idempotent rebuilds idea below, because it motivates my work to build all of Debian from a GitLab pipeline.

One purpose of my effort is to be able to trust the binaries that I use on my laptop. I believe that without building binaries from source code, there is no practically feasible way to trust binaries. To trust a binary you receive, you could disassemble the bits and audit the assembler instructions for the CPU you will execute it on, but doing that at an OS-wide level is impractical. A more practical approach is to audit the source code, and then confirm that the binary is 100% bit-by-bit identical to one that you can build yourself (from the same source) on your own trusted toolchain. This is similar to a reproducible build.
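The bit-by-bit comparison itself is mechanically trivial; here is a sketch, with throwaway placeholder files standing in for an official binary and your own rebuild.

```shell
# Placeholder artifacts standing in for a vendor binary and a rebuild;
# real usage would compare an official .deb against your own build.
printf 'same bytes' > official.deb
printf 'same bytes' > rebuilt.deb
# Hashes give a quick summary; cmp does the byte-for-byte check.
sha256sum official.deb rebuilt.deb
if cmp -s official.deb rebuilt.deb; then
    echo "bit-by-bit identical"
else
    echo "differs"
fi
```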

My initial goal with debdistrebuild was to get to 100% bit-by-bit identical rebuilds, and then I would have trustworthy binaries. Or so I thought. This also appears to be the goal of reproduce.debian.net. They want to reproduce the official Debian binaries. That is a worthy and important goal. They achieve this by building packages using the build inputs that were used to build the binaries. The build inputs are earlier versions of Debian packages (not necessarily from any public Debian release), archived at snapshot.debian.org.

I realized that these rebuilds would not be sufficient for me: they don't solve the problem of how to trust the toolchain. Let's assume the reproduce.debian.net effort succeeds and is able to 100% bit-by-bit identically reproduce the official Debian binaries, which appears to be within reach. To have trusted binaries we would “only” have to audit the source code for the latest version of the packages AND audit the toolchain used. There is no escaping auditing all the source code; that's what I think we all would prefer to focus on, to be able to improve upstream source code.

The trouble is auditing the toolchain. With the reproduce.debian.net approach, that is a recursive problem reaching back to really ancient Debian packages, some of which may no longer build or work, or even be legally distributable. Auditing all those old packages is a LARGER effort than auditing all current packages! Auditing old packages is also of less use for making contributions: those releases are old, and chances are any improvements have already been implemented and released, or are no longer applicable because the projects have evolved since the earlier versions.

See where this is going now? I reached the conclusion that reproducing official binaries using the same build inputs is not what I'm interested in. I want to be able to build the binaries that I use from source, using a toolchain that I can also build from source. And preferably all of this using the latest versions of all packages, so that I can contribute and send patches to improve matters.

The toolchain that reproduce.debian.net is using is not trustworthy unless all those ancient packages are audited or rebuilt bit-by-bit identically, and I don't see any practical way forward to achieve that goal. Nor have I seen anyone working on that problem. It is possible to do, but I think there are simpler ways to achieve the same goal.

My approach to reach trusted binaries on my laptop appears to be a three-step effort:

  • Encourage an idempotently rebuildable Debian archive, i.e., a Debian archive that can be 100% bit-by-bit identically rebuilt using Debian itself.
  • Construct a smaller number of binary *.deb packages based on Guix binaries that when used as build inputs (potentially iteratively) leads to 100% bit-by-bit identical packages as in step 1.
  • Encourage a freedom respecting distribution, similar to Trisquel, from this idempotently rebuildable Debian.

How to go about achieving this? Today's Debian build architecture lacks transparency and end-user control. The build environment and signing keys are managed by, or influenced by, unidentified people following undocumented (or at least not public) security procedures, under unknown legal jurisdictions. I have always wondered why none of the Debian derivatives have adopted a modern GitDevOps-style approach as a method to improve binary build transparency; maybe I missed some project?

If you want to contribute to some GitHub or GitLab project, you click the ‘Fork’ button and get a CI/CD pipeline that rebuilds artifacts for the project. This makes it easy for people to contribute, and you get good QA control because the entire chain up to the artifact release is produced and tested. At least in theory. Many projects are behind on this, but it seems like a useful goal for all projects. It is also liberating: all users are able to reproduce artifacts, and there is no longer any magic involved in preparing release artifacts. As we've seen with many software supply-chain security incidents over the past years, wherever the “magic” is involved is a good place to introduce malicious code.

To continue my experiment, I figured the simplest way forward was to set up a GitDevOps-centric and user-controllable way to build the entire Debian archive. Let me introduce the debdistbuild project.

Debdistbuild is a re-usable GitLab CI/CD pipeline, similar to the Salsa CI pipeline. It provides one “build” job definition and one “deploy” job definition. The pipeline can run on GitLab.org Shared Runners, or you can set up your own runners, like my GitLab riscv64 runner setup. I have concerns about relying on GitLab (both as software and as a service), but my ideas are easy to transfer to some other GitDevSecOps setup such as Codeberg.org. Self-hosting GitLab, including self-hosted runners, is common today, and Debian relies increasingly on Salsa for this. All of the build infrastructure could eventually be hosted on Salsa.

The build job is simple. From within an official Debian container image, it builds packages using dpkg-buildpackage, essentially by invoking the following commands:

# Enable source repositories and bring the build environment up to date.
sed -i 's/ deb$/ deb deb-src/' /etc/apt/sources.list.d/*.sources
apt-get -o Acquire::Check-Valid-Until=false update
apt-get dist-upgrade -q -y
apt-get install -q -y --no-install-recommends build-essential fakeroot
env DEBIAN_FRONTEND=noninteractive \
    apt-get build-dep -y --only-source $PACKAGE=$VERSION
# Fetch and build the source as an unprivileged user in a fixed path.
useradd -m build
DDB_BUILDDIR=/build/reproducible-path
mkdir -p $DDB_BUILDDIR
chgrp build $DDB_BUILDDIR
chmod g+w $DDB_BUILDDIR
su build -c "apt-get source --only-source $PACKAGE=$VERSION" > ../${PACKAGE}_${VERSION}.build
cd $DDB_BUILDDIR
su build -c "dpkg-buildpackage"
cd ..
mkdir out
mv -v $(find $DDB_BUILDDIR -maxdepth 1 -type f) out/

The deploy job is also simple. It commits artifacts to a Git project, using Git-LFS to handle large objects; essentially something like this:

# Track large build artifacts with Git-LFS the first time around.
if ! grep -q '^pool/\*\*' .gitattributes; then
    git lfs track 'pool/**'
    git add .gitattributes
    git commit -m"Track pool/* with Git-LFS." .gitattributes
fi
# Debian pool layout: lib* packages are filed under their first four
# characters, everything else under the first character.
POOLDIR=$(if test "$(echo "$PACKAGE" | cut -c1-3)" = "lib"; then C=4; else C=1; fi; echo "$PACKAGE" | cut -c1-$C)
mkdir -pv pool/main/$POOLDIR/
rm -rfv pool/main/$POOLDIR/$PACKAGE
mv -v out pool/main/$POOLDIR/$PACKAGE
git add pool
git commit -m"Add $PACKAGE." -m "$CI_JOB_URL" -m "$VERSION" -a
if test "${DDB_GIT_TOKEN:-}" = ""; then
    echo "SKIP: Skipping git push due to missing DDB_GIT_TOKEN (see README)."
else
    git push -o ci.skip
fi

That's it! The actual implementation is a bit longer, but the major difference is log and error handling.
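The POOLDIR computation in the deploy job encodes the Debian archive convention that lib* packages are filed under their first four characters and all other packages under their first character. As a standalone sketch (the function name is mine, not from the real script):

```shell
# Compute the Debian pool subdirectory for a package name: first four
# characters for lib* packages, first character otherwise.
pooldir_for() {
    case "$1" in
        lib*) echo "$1" | cut -c1-4 ;;
        *)    echo "$1" | cut -c1-1 ;;
    esac
}
pooldir_for hello      # prints: h
pooldir_for libssl3    # prints: libs
```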

You may review the source code of the base Debdistbuild pipeline definition, the base Debdistbuild script and the rc.d/-style scripts implementing the build.d/ process and the deploy.d/ commands.

There was one complication related to artifact size. GitLab.org job artifacts are limited to 1 GB, and several packages in Debian produce artifacts larger than this. What to do? GitLab supports up to 5 GB for files stored in its package registry, but that limit is too close for comfort, having seen some multi-GB artifacts already. I made the build job optionally upload artifacts to an S3 bucket using a SHA256-hashed file hierarchy. I'm using Hetzner Object Storage, but there are many S3 providers around, including self-hosting options. This hierarchy is compatible with the Git-LFS .git/lfs/objects/ hierarchy, and it is easy to set up a separate Git-LFS object URL to allow Git-LFS object downloads from the S3 bucket. In this mode, only Git-LFS stubs are pushed to the Git repository. It should have no trouble handling the large number of files, since I have earlier experience with Apt mirrors in Git-LFS.
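The SHA256-hashed hierarchy follows the Git-LFS object layout, where an object is stored under the first two and next two hex digits of its digest. A sketch, using a throwaway file to produce a digest:

```shell
# Git-LFS-style object path: objects/<first 2>/<next 2>/<full digest>.
printf 'example artifact' > /tmp/artifact.bin
oid=$(sha256sum /tmp/artifact.bin | cut -d' ' -f1)
echo "objects/$(echo "$oid" | cut -c1-2)/$(echo "$oid" | cut -c3-4)/$oid"
```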

To speed up job execution, and to guarantee a stable build environment, instead of installing build-essential packages on every build job execution, I prepare some build container images. The project responsible for this is tentatively called stage-N-containers. Right now it creates containers suitable for rolling builds of trixie on amd64, arm64, and riscv64, and a container intended for use as the stage-0, based on the 20250407 Docker images of bookworm on amd64 and arm64 using the snapshot.d.o 20250407 archive. Or actually, I'm using snapshot-cloudflare.d.o because of download speed and reliability. I would have preferred to use my own snapshot mirror with Hetzner bandwidth; alas, the Debian snapshot team has concerns about me publishing the list of (SHA1 hash) filenames publicly, and I haven't bothered to set up non-public access.

Debdistbuild has built around 2,500 packages for bookworm on amd64 and bookworm on arm64. To confirm the generality of my approach, it also builds trixie on amd64, trixie on arm64 and trixie on riscv64. The riscv64 builds all run on my own hosted runners; for amd64 and arm64, my own runners are only used for large packages where the GitLab.com shared runners run into the 3-hour time limit.

What’s next in this venture? Some ideas include:

  • Optimize the stage-N build process by identifying the transitive closure of build dependencies from some initial set of packages.
  • Create a build orchestrator that launches pipelines based on the previous list of packages, as necessary to fill the archive with necessary packages. Currently I’m using a basic /bin/sh for loop around curl to trigger GitLab CI/CD pipelines with names derived from https://popcon.debian.org/.
  • Create and publish a dists/ sub-directory, so that it is possible to use the newly built packages in the stage-1 build phase.
  • Produce diffoscope-style differences of built packages, both stage0 against official binaries and between stage0 and stage1.
  • Create the stage-1 build containers and stage-1 archive.
  • Review build failures. On amd64 and arm64 the list is small (below 10 out of ~5000 builds), but on riscv64 there is an icache-related problem affecting the Java JVM that triggers build failures.
  • Provide GitLab pipeline based builds of the Debian docker container images, cloud-images, debian-live CD and debian-installer ISO’s.
  • Provide integration with Sigstore and Sigsum for signing of Debian binaries with transparency-safe properties.
  • Implement a simple replacement for dpkg and apt using /bin/sh for use during bootstrapping, when neither packaging tool is available.
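The basic trigger loop mentioned in the orchestrator item above can be sketched as follows. This is a hypothetical dry-run reconstruction: the token, project ID, and package list file name are made up, and the real loop would issue the curl calls against GitLab's pipeline trigger API rather than echoing them.

```shell
# Dry-run sketch of triggering one GitLab CI/CD pipeline per package.
# TRIGGER_TOKEN, PROJECT_ID, and popcon-packages.txt are hypothetical.
TRIGGER_TOKEN=example-token
PROJECT_ID=12345
printf '%s\n' bash coreutils grep > popcon-packages.txt
while read -r pkg; do
  echo curl --silent --request POST \
       --form "token=$TRIGGER_TOKEN" \
       --form "ref=main" \
       --form "variables[PACKAGE]=$pkg" \
       "https://gitlab.com/api/v4/projects/$PROJECT_ID/trigger/pipeline"
done < popcon-packages.txt
```

Passing the package name as a CI/CD variable lets a single generic pipeline definition build any package from the list.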

What do you think?

30 April, 2025 09:25AM by simon

April 29, 2025

FSF News

FSF to hold free software hackathon in honor of its fortieth anniversary

BOSTON, Massachusetts, USA (Tuesday, April 29, 2025), The Free Software Foundation (FSF) today announced its plans for a hackathon to improve free/libre software in honor of its fortieth anniversary. Free software projects and hackers at any stage of their development are invited to participate.

29 April, 2025 06:55PM

April 28, 2025

libsigsegv @ Savannah

GNU libsigsegv 2.15 is released

GNU libsigsegv version 2.15 is released.

New in this release:

  • Added support for Linux/PowerPC (32-bit) with musl libc.
  • Added support for Hurd/x86_64.
  • Added support for macOS/x86_64 with clang 15 or newer.
  • Optimized the distinction between stack overflow and other faults on AIX 7.


Download: https://ftp.gnu.org/gnu/libsigsegv/libsigsegv-2.15.tar.gz

28 April, 2025 04:45PM by Bruno Haible

April 27, 2025

GNU Taler news

Taler Mailbox and Directory service released

We are happy to announce the release of two new GNU Taler components: The Taler Directory (TalDir) and Mailbox services. The Taler Wallet will be integrated in future versions to interact with the Taler Directory and Mailbox in order to deliver a smooth user experience for Peer-to-Peer payments.

27 April, 2025 10:00PM

April 26, 2025

remotecontrol @ Savannah

April 23, 2025

GNU Taler news

Taler iOS wallet independent security audit report published

RadicallyOpenSecurity performed an external crystal-box security audit of the GNU Taler iOS wallet (excluding wallet-core), funded by NGI. You can find the final report here. We have already addressed all significant findings except enabling FaceID/TouchID to unlock the app, which remains a feature on our roadmap to be addressed in the next few months. We thank RadicallyOpenSecurity for their work and the European Commission's Horizon 2020 NGI initiative for funding the development of the iOS wallet, including the security review.

23 April, 2025 10:00PM

April 21, 2025

Jose E. Marchesi

SUPPER, a "modern" stropping regime for Algol 68

A draft of a proposed GNU extension to the Algol 68 programming language has been published today at https://algol68-lang.org/docs/GNU68-2025-004-supper.pdf.

SUPPER stropping in Algol 68

This new stropping regime aims to be more appealing to contemporary programmers and more convenient to use in today's computing systems, while at the same time retaining the full expressive power of a stropped language and being 100% backwards compatible as a super-extension.

The stropping regime has already been implemented in the GCC Algol 68 front-end (https://gcc.gnu.org/wiki/Algol68FrontEndGCC) and also in the Emacs a68-mode, which provides full automatic indentation and syntax highlighting.

The sources of the godcc program have already been transitioned to the new regime, and the result is quite satisfactory. Check it out!

Comments and suggestions for the draft are very welcome, and would help to move the draft forward to a final state. Please send them to algol68@gcc.gnu.org.

Salud, and happy Easter everyone!

21 April, 2025 12:00AM

April 20, 2025

gperf @ Savannah

GNU gperf 3.3 released

Download from https://ftp.gnu.org/gnu/gperf/gperf-3.3.tar.gz

New in this release:

  • Speedup: gperf is now between 2x and 2.5x faster.

20 April, 2025 12:43PM by Bruno Haible

April 19, 2025

unifont @ Savannah

Unifont 16.0.03 Released

19 April 2025 Unifont 16.0.03 is now available.  This is a minor release with many glyph improvements.  See the ChangeLog file for details.

Download this release from GNU server mirrors at:

     https://ftpmirror.gnu.org/unifont/unifont-16.0.03/

or if that fails,

     https://ftp.gnu.org/gnu/unifont/unifont-16.0.03/

or, as a last resort,

     ftp://ftp.gnu.org/gnu/unifont/unifont-16.0.03/

These files are also available on the unifoundry.com website:

     https://unifoundry.com/pub/unifont/unifont-16.0.03/

Font files are in the subdirectory

     https://unifoundry.com/pub/unifont/unifont-16.0.03/font-builds/

A more detailed description of font changes is available at

      https://unifoundry.com/unifont/index.html

and of utility program changes at

      https://unifoundry.com/unifont/unifont-utilities.html

Information about Hangul modifications is at

      https://unifoundry.com/hangul/index.html

and

      http://unifoundry.com/hangul/hangul-generation.html

Enjoy!

19 April, 2025 04:08PM by Paul Hardy

April 17, 2025

FSF Blogs

US Social Security Administration reverses freedom-impeding identity verification policy

In a win for free software activists, the United States Social Security Administration reversed its policy plan that would require using a freedom-disrespecting website or traveling to an in-person office.

17 April, 2025 07:55PM

Simon Josefsson

Verified Reproducible Tarballs

Remember the XZ Utils backdoor? One factor that enabled the attack was poor auditing of the release tarballs for differences compared to the Git version controlled source code. This proved to be a useful place to distribute malicious data.

The differences between release tarballs and upstream Git sources are typically vendored and generated files. Lots of them. Auditing all source tarballs in a distribution for similar issues is hard and boring work for humans. Wouldn’t it be better if that human auditing time could be spent auditing the actual source code stored in upstream version control instead? That’s where auditing time would help the most.

Are there better ways to address the concern about differences between version control sources and tarball artifacts? Let’s consider some approaches:

  • Stop publishing (or at least stop building from) source tarballs that differ from version control sources.
  • Create recipes for how to derive the published source tarballs from version control sources. Verify that independently from upstream.

While I like the properties of the first solution, and have made an effort to support that approach, I don’t think normal source tarballs are going away any time soon. I am concerned that it may not even be a desirable complete solution to this problem. We may need tarballs with pre-generated content in them for various reasons that aren’t entirely clear to us today.

So let’s consider the second approach. It could help while waiting for more experience with the first approach, to see if there are any fundamental problems with it.

How do you know that the XZ release tarballs were actually derived from their version control sources? The same for Gzip? Coreutils? Tar? Sed? Bash? GCC? We don’t know this! I am not aware of any automated or collaborative effort to perform this independent confirmation. Nor am I aware of anyone attempting to do this on a regular basis. We would want to be able to do this in the year 2042 too. I think the best way to reach that is to do the verification continuously in a pipeline, fixing bugs as time passes. The current state of the art seems to be that people audit the differences manually and hope to find something. I suspect many package maintainers ignore the problem and take the release source tarballs, trusting upstream about this.

We can do better.
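A first step of the manual audit described above is easy to mechanize: list the files that exist in a release tarball but not in a tarball generated from git. This is only a toy sketch (real recipes must also compare file contents and metadata), demonstrated here on two tiny tarballs built on the spot:

```python
import io
import os
import tarfile
import tempfile

def members(path):
    """File paths in a tarball, with the leading 'name-version/' prefix stripped."""
    with tarfile.open(path) as tar:
        return {m.name.split("/", 1)[-1] for m in tar.getmembers() if m.isfile()}

def extra_in_release(release_tar, git_tar):
    """Files shipped in the release artifact but absent from the git sources."""
    return sorted(members(release_tar) - members(git_tar))

def make_tar(path, names):
    """Build a tiny demonstration tarball containing the given file names."""
    with tarfile.open(path, "w:gz") as tar:
        for name in names:
            data = b"x"
            info = tarfile.TarInfo("pkg-1.0/" + name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))

# The "release" ships an extra generated configure script.
tmp = tempfile.mkdtemp()
rel = os.path.join(tmp, "release.tar.gz")
git = os.path.join(tmp, "git.tar.gz")
make_tar(rel, ["src/main.c", "configure"])
make_tar(git, ["src/main.c"])
print(extra_in_release(rel, git))  # ['configure']
```

Running such a comparison continuously, rather than once by hand, is exactly what a pipeline is good at.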

I have launched a project to set up a GitLab pipeline that invokes per-release scripts to rebuild each release artifact from git sources. Currently it only contains recipes for projects that I released myself: releases that were done in a controlled way with considerable care to make reproducing the tarballs possible. The project homepage is here:

https://gitlab.com/debdistutils/verify-reproducible-releases

The project is able to reproduce the release tarballs for Libtasn1 v4.20.0, InetUtils v2.6, Libidn2 v2.3.8, Libidn v1.43, and GNU SASL v2.2.2. You can see this in a recent successful pipeline. All of those releases were prepared using Guix, and I’m hoping the Guix time-machine will make it possible to keep re-generating these tarballs for many years to come.

I spent some time trying to reproduce the current XZ release tarball for version 5.8.1. That would have been a nice example, wouldn’t it? First I had to somehow mimic upstream’s build environment. The XZ release tarball contains GNU Libtool files that are identified with version 2.5.4.1-baa1-dirty. I initially assumed this was due to the maintainer having installed libtool from git locally (after making some modifications) and made the XZ release using it. Later I learned that it may actually come from ArchLinux, which ships with this particular libtool version. It seems weird for a distribution to use libtool built from a non-release tag, and furthermore to apply patches to it, but things are what they are. I made some effort to set up an ArchLinux build environment; however, the now-current Gettext version in ArchLinux seems to be more recent than the one that was used to prepare the XZ release. I don’t know enough ArchLinux to set up an environment corresponding to an earlier version of ArchLinux, which would be required to finish this. I gave up; maybe the XZ release wasn’t prepared on ArchLinux after all. Actually XZ became a good example for this writeup anyway: while you would think this should be trivial, the fact is that it isn’t! (There is another aspect here: fingerprinting the versions used to prepare release tarballs allows you to infer what kind of OS maintainers are using to make releases on, which is interesting on its own.)

I made some small attempts to reproduce the tarball for GNU Shepherd version 1.0.4 too, but I still haven’t managed to complete it.

Do you want a supply-chain challenge for the Easter weekend? Pick some well-known software and try to re-create the official release tarballs from the corresponding Git checkout. Is anyone able to reproduce anything these days? Bonus points for wrapping it up as a merge request to my project.

Happy Supply-Chain Security Hacking!

17 April, 2025 07:24PM by simon

April 15, 2025

FSF News

More than fifteen free software socials to be held globally

BOSTON, Massachusetts, USA (Tuesday, April 15, 2025), The Free Software Foundation (FSF) today announced that more than fifteen free software socials will be held around the world this year with the help of the FSF.

15 April, 2025 04:23PM

April 11, 2025

gcl @ Savannah

Small release errata

Greetings!  While these tiny issues will likely not affect many users, if any,
there are alas a few tiny errata with the 2.7.1 tarball release.  Posted
here just for those interested.  They will of course be incorporated in the
next release.


modified   gcl/debian/rules
@@ -138,7 +138,7 @@ clean: debian/control debian/gcl.templates
  rm -rf $(INS) debian/substvars debian.upstream
  rm -rf *stamp build-indep
  rm -f  debian/elpa-gcl$(EXT).elpa debian/gcl$(EXT)-pkg.el
- rm -rf $(EXT_TARGS) info/gcl$(EXT)*.info*
+ rm -rf $(EXT_TARGS) info/gcl$(EXT)*.info* gcl_pool
 
 debian-clean: debian/control debian/gcl.templates
  dh_testdir
modified   gcl/git.tag
@@ -1,2 +1,2 @@
-"Version_2_7_0"
+"Version_2_7_1"
 
modified   gcl/o/alloc.c
@@ -707,6 +707,7 @@ empty_relblock(void) {
   for (;!rb_emptyp();) {
     tm_table[t_relocatable].tm_adjgbccnt--;
     expand_contblock_index_space();
+    expand_contblock_array();
     GBC(t_relocatable);
   }
   sSAleaf_collection_thresholdA->s.s_dbind=o;

11 April, 2025 10:06PM by Camm Maguire

GCL 2.7.1 has been released

Greetings!  The GCL team is happy to announce the release of version
2.7.1, the culmination of many years of work and a major development
in the evolution of GCL.  Please see http://www.gnu.org/software/gcl for
downloading information.

11 April, 2025 02:31PM by Camm Maguire

Gary Benson

Python antipattern: Close in finally

Don’t do this:

thing = Thing()
try:
    thing.do_stuff()
finally:
    thing.close()

Do do this:

from contextlib import closing

with closing(Thing()) as thing:
    thing.do_stuff()

Why is the second better? Using contextlib.closing() ties closing the item to its creation. These baby examples are about equally easy to reason about, with only a single line in the try block, but consider what happens when more lines get added in the future. In the first example, the close moves away, potentially offscreen; that doesn’t happen in the second.
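A quick way to convince yourself that the with-statement version really does close the resource even when the body raises. Thing here is a hypothetical stand-in for any object with a close() method:

```python
from contextlib import closing

class Thing:
    """Hypothetical resource with a close() method."""
    def __init__(self):
        self.closed = False

    def do_stuff(self):
        raise RuntimeError("simulated failure")

    def close(self):
        self.closed = True

thing = Thing()
try:
    with closing(thing) as t:
        t.do_stuff()
except RuntimeError:
    pass

print(thing.closed)  # True: close() ran despite the exception
```

closing() works with any object that has a close() method, which makes it handy for third-party classes that don't implement the context-manager protocol themselves.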

11 April, 2025 10:27AM by gbenson

April 10, 2025

GNUnet News

GNUnet 0.24.1

This is a bugfix release for GNUnet 0.24.0. It fixes some regressions and minor bugs.

Links

The GPG key used to sign is: 3D11063C10F98D14BD24D1470B0998EF86F59B6A

Note that due to mirror synchronization, not all links may be functional early after the release. For direct access try https://ftp.gnu.org/gnu/gnunet/

10 April, 2025 10:00PM

grep @ Savannah

grep-3.12 released [stable]


This is to announce grep-3.12, a stable release.

It's been nearly two years! There have been two bug fixes and many
harder-to-see improvements via gnulib. Thanks to Paul Eggert for doing
so much of the work and Bruno Haible for all the testing and all he does
to make gnulib a paragon of portable, reliable, top-notch code.

There have been 77 commits by 6 people in the 100 weeks since 3.11.

See the NEWS below for a brief summary.

Thanks to everyone who has contributed!
The following people contributed changes to this release:

  Bruno Haible (5)
  Carlo Marcelo Arenas Belón (1)
  Collin Funk (1)
  Grisha Levit (1)
  Jim Meyering (31)
  Paul Eggert (38)

Jim
 [on behalf of the grep maintainers]
==================================================================

Here is the GNU grep home page:
    https://gnu.org/s/grep/

Here are the compressed sources:
  https://ftp.gnu.org/gnu/grep/grep-3.12.tar.gz   (3.1MB)
  https://ftp.gnu.org/gnu/grep/grep-3.12.tar.xz   (1.9MB)

Here are the GPG detached signatures:
  https://ftp.gnu.org/gnu/grep/grep-3.12.tar.gz.sig
  https://ftp.gnu.org/gnu/grep/grep-3.12.tar.xz.sig

Use a mirror for higher download bandwidth:
  https://www.gnu.org/order/ftp.html

Here are the SHA1 and SHA256 checksums:

  025644ca3ea4f59180d531547c53baeb789c6047  grep-3.12.tar.gz
  ut2lRt/Eudl+mS4sNfO1x/IFIv/L4vAboenNy+dkTNw=  grep-3.12.tar.gz
  4b4df79f5963041d515ef64cfa245e0193a33009  grep-3.12.tar.xz
  JkmyfA6Q5jLq3NdXvgbG6aT0jZQd5R58D4P/dkCKB7k=  grep-3.12.tar.xz

Verify the base64 SHA256 checksum with cksum -a sha256 --check
from coreutils-9.2 or OpenBSD's cksum since 2007.

Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact.  First, be sure to download both the .sig file
and the corresponding tarball.  Then, run a command like this:

  gpg --verify grep-3.12.tar.gz.sig

The signature should match the fingerprint of the following key:

  pub   rsa4096/0x7FD9FCCB000BEEEE 2010-06-14 [SCEA]
        Key fingerprint = 155D 3FC5 00C8 3448 6D1E  EA67 7FD9 FCCB 000B EEEE
  uid                   [ unknown] Jim Meyering <jim@meyering.net>
  uid                   [ unknown] Jim Meyering <meyering@fb.com>
  uid                   [ unknown] Jim Meyering <meyering@gnu.org>

If that command fails because you don't have the required public key,
or that public key has expired, try the following commands to retrieve
or refresh it, and then rerun the 'gpg --verify' command.

  gpg --locate-external-key jim@meyering.net

  gpg --recv-keys 7FD9FCCB000BEEEE

  wget -q -O- 'https://savannah.gnu.org/project/release-gpgkeys.php?group=grep&download=1' | gpg --import -

As a last resort to find the key, you can try the official GNU
keyring:

  wget -q https://ftp.gnu.org/gnu/gnu-keyring.gpg
  gpg --keyring gnu-keyring.gpg --verify grep-3.12.tar.gz.sig

This release is based on the grep git repository, available as

  git clone https://git.savannah.gnu.org/git/grep.git

with commit 3f8c09ec197a2ced82855f9ecd2cbc83874379ab tagged as v3.12.

For a summary of changes and contributors, see:

  https://git.sv.gnu.org/gitweb/?p=grep.git;a=shortlog;h=v3.12

or run this command from a git-cloned grep directory:

  git shortlog v3.11..v3.12

This release was bootstrapped with the following tools:
  Autoconf 2.72.76-2f64
  Automake 1.17.0.91
  Gnulib 2025-04-04 3773db653242ab7165cd300295c27405e4f9cc79

NEWS

* Noteworthy changes in release 3.12 (2025-04-10) [stable]

** Bug fixes

  Searching a directory with at least 100,000 entries no longer fails
  with "Operation not supported" and exit status 2. Now, this prints 1
  and no diagnostic, as expected:
    $ mkdir t && cd t && seq 100000|xargs touch && grep -r x .; echo $?
    1
  [bug introduced in grep 3.11]

  -mN where 1 < N no longer mistakenly lseeks to end of input merely
  because standard output is /dev/null.

** Changes in behavior

  The --unix-byte-offsets (-u) option is gone. In grep-3.7 (2021-08-14)
  it became a warning-only no-op. Before then, it was a Windows-only no-op.

  On Windows platforms and on AIX in 32-bit mode, grep in some cases
  now supports Unicode characters outside the Basic Multilingual Plane.


10 April, 2025 05:04PM by Jim Meyering

gzip @ Savannah

gzip-1.14 released [stable]


This is to announce gzip-1.14, a stable release.

Most notable: "gzip -d" is up to 40% faster on x86_64 CPUs with pclmul
support. Why? Because about half of its time was spent computing a CRC
checksum, and that code is far more efficient now.  Even on 10-year-old
CPUs lacking pclmul support, it's ~20% faster.  Thanks to Lasse Collin
for alerting me to this very early on, to Sam Russell for contributing
gnulib's new crc module and to Bruno Haible and everyone else who keeps
the bar so high for all of gnulib. And as usual, thanks to Paul Eggert
for many contributions everywhere.

There have been 58 commits by 7 people in the 85 weeks since 1.13.

See the NEWS below for a brief summary.

Thanks to everyone who has contributed!
The following people contributed changes to this release:

  Bruno Haible (1)
  Collin Funk (4)
  Jim Meyering (26)
  Lasse Collin (1)
  Paul Eggert (24)
  Sam Russell (1)
  Simon Josefsson (1)

Jim
 [on behalf of the gzip maintainers]
==================================================================

Here is the GNU gzip home page:
    https://gnu.org/s/gzip/

Here are the compressed sources:
  https://ftp.gnu.org/gnu/gzip/gzip-1.14.tar.gz   (1.4MB)
  https://ftp.gnu.org/gnu/gzip/gzip-1.14.tar.xz   (868KB)

Here are the GPG detached signatures:
  https://ftp.gnu.org/gnu/gzip/gzip-1.14.tar.gz.sig
  https://ftp.gnu.org/gnu/gzip/gzip-1.14.tar.xz.sig

Use a mirror for higher download bandwidth:
  https://www.gnu.org/order/ftp.html

Here are the SHA1 and SHA256 checksums:

  27f9847892a1c59b9527469a8a3e5d635057fbdd  gzip-1.14.tar.gz
  YT1upE8SSNc3DHzN7uDdABegnmw53olLPG8D+YEZHGs=  gzip-1.14.tar.gz
  05f44a8a589df0171e75769e3d11f8b11d692f58  gzip-1.14.tar.xz
  Aae4gb0iC/32Ffl7hxj4C9/T9q3ThbmT3Pbv0U6MCsY=  gzip-1.14.tar.xz

Verify the base64 SHA256 checksum with cksum -a sha256 --check
from coreutils-9.2 or OpenBSD's cksum since 2007.

Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact.  First, be sure to download both the .sig file
and the corresponding tarball.  Then, run a command like this:

  gpg --verify gzip-1.14.tar.gz.sig

The signature should match the fingerprint of the following key:

  pub   rsa4096/0x7FD9FCCB000BEEEE 2010-06-14 [SCEA]
        Key fingerprint = 155D 3FC5 00C8 3448 6D1E  EA67 7FD9 FCCB 000B EEEE
  uid                   [ unknown] Jim Meyering <jim@meyering.net>
  uid                   [ unknown] Jim Meyering <meyering@fb.com>
  uid                   [ unknown] Jim Meyering <meyering@gnu.org>

If that command fails because you don't have the required public key,
or that public key has expired, try the following commands to retrieve
or refresh it, and then rerun the 'gpg --verify' command.

  gpg --locate-external-key jim@meyering.net

  gpg --recv-keys 7FD9FCCB000BEEEE

  wget -q -O- 'https://savannah.gnu.org/project/release-gpgkeys.php?group=gzip&download=1' | gpg --import -

As a last resort to find the key, you can try the official GNU
keyring:

  wget -q https://ftp.gnu.org/gnu/gnu-keyring.gpg
  gpg --keyring gnu-keyring.gpg --verify gzip-1.14.tar.gz.sig

This release is based on the gzip git repository, available as

  git clone https://git.savannah.gnu.org/git/gzip.git

with commit fbc4883eb9c304a04623ac506dd5cf5450d055f1 tagged as v1.14.

For a summary of changes and contributors, see:

  https://git.sv.gnu.org/gitweb/?p=gzip.git;a=shortlog;h=v1.14

or run this command from a git-cloned gzip directory:

  git shortlog v1.13..v1.14

This release was bootstrapped with the following tools:
  Autoconf 2.72.76-2f64
  Automake 1.17.0.91
  Gnulib 2025-01-31 553ab924d2b68d930fae5d3c6396502a57852d23

NEWS

* Noteworthy changes in release 1.14 (2025-04-09) [stable]

** Bug fixes

  'gzip -d' no longer omits the last partial output buffer when the
  input ends unexpectedly on an IBM Z platform.
  [bug introduced in gzip-1.11]

  'gzip -l' no longer misreports lengths of multimember inputs.
  [bug introduced in gzip-1.12]

  'gzip -S' now rejects suffixes containing '/'.
  [bug present since the beginning]

** Changes in behavior

  The GZIP environment variable is now silently ignored except for the
  options -1 (--fast) through -9 (--best), --rsyncable, and --synchronous.
  This brings gzip into line with more-cautious compressors like zstd
  that limit environment variables' effect to relatively innocuous
  performance issues.  You can continue to use scripts to specify
  whatever gzip options you like.

  'zmore' is no longer installed on platforms lacking 'more'.

** Performance improvements

  gzip now decompresses significantly faster by computing CRCs via a
  slice by 8 algorithm, and faster yet on x86-64 platforms that
  support pclmul instructions.


10 April, 2025 04:34AM by Jim Meyering

April 09, 2025

coreutils @ Savannah

coreutils-9.7 released [stable]


This is to announce coreutils-9.7, a stable release.

There have been 63 commits by 11 people in the 12 weeks since 9.6,
with a focus on bug fixing and stabilization.

See the NEWS below for a brief summary.

Thanks to everyone who has contributed!
The following people contributed changes to this release:

  Bruno Haible (1)                Jim Meyering (2)
  Collin Funk (2)                 Lukáš Zaoral (1)
  Daniel Hofstetter (1)           Mike Swanson (1)
  Frédéric Yhuel (1)              Paul Eggert (21)
  G. Branden Robinson (1)         Pádraig Brady (32)
  Grisha Levit (1)

Pádraig [on behalf of the coreutils maintainers]
==================================================================

Here is the GNU coreutils home page:
    https://gnu.org/s/coreutils/

Here are the compressed sources:
  https://ftp.gnu.org/gnu/coreutils/coreutils-9.7.tar.gz   (15MB)
  https://ftp.gnu.org/gnu/coreutils/coreutils-9.7.tar.xz   (5.9MB)

Here are the GPG detached signatures:
  https://ftp.gnu.org/gnu/coreutils/coreutils-9.7.tar.gz.sig
  https://ftp.gnu.org/gnu/coreutils/coreutils-9.7.tar.xz.sig

Use a mirror for higher download bandwidth:
  https://www.gnu.org/order/ftp.html

Here are the SHA1 and SHA256 checksums:

  File: coreutils-9.7.tar.gz
  SHA1 sum:   bfebebaa1aa59fdfa6e810ac07d85718a727dcf6
  SHA256 sum: 0898a90191c828e337d5e4e4feb71f8ebb75aacac32c434daf5424cda16acb42

  File: coreutils-9.7.tar.xz
  SHA1 sum:   920791e12e7471479565a066e116a087edcc0df9
  SHA256 sum: e8bb26ad0293f9b5a1fc43fb42ba970e312c66ce92c1b0b16713d7500db251bf

Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact.  First, be sure to download both the .sig file
and the corresponding tarball.  Then, run a command like this:

  gpg --verify coreutils-9.7.tar.gz.sig

The signature should match the fingerprint of the following key:

  pub   rsa4096/0xDF6FD971306037D9 2011-09-23 [SC]
        Key fingerprint = 6C37 DC12 121A 5006 BC1D  B804 DF6F D971 3060 37D9
  uid                   [ultimate] Pádraig Brady <P@draigBrady.com>
  uid                   [ultimate] Pádraig Brady <pixelbeat@gnu.org>

If that command fails because you don't have the required public key,
or that public key has expired, try the following commands to retrieve
or refresh it, and then rerun the 'gpg --verify' command.

  gpg --locate-external-key P@draigBrady.com

  gpg --recv-keys DF6FD971306037D9

  wget -q -O- 'https://savannah.gnu.org/project/release-gpgkeys.php?group=coreutils&download=1' | gpg --import -

As a last resort to find the key, you can try the official GNU
keyring:

  wget -q https://ftp.gnu.org/gnu/gnu-keyring.gpg
  gpg --keyring gnu-keyring.gpg --verify coreutils-9.7.tar.gz.sig

This release is based on the coreutils git repository, available as

  git clone https://git.savannah.gnu.org/git/coreutils.git

with commit 8e075ff8ee11692c5504d8e82a48ed47a7f07ba9 tagged as v9.7.

For a summary of changes and contributors, see:

  https://git.sv.gnu.org/gitweb/?p=coreutils.git;a=shortlog;h=v9.7

or run this command from a git-cloned coreutils directory:

  git shortlog v9.6..v9.7

This release was bootstrapped with the following tools:
  Autoconf 2.72.70-9ff9
  Automake 1.16.5
  Gnulib 2025-04-07 41e7b7e0d159d8ac0eb385964119f350ac9dfc3f
  Bison 3.8.2

NEWS

* Noteworthy changes in release 9.7 (2025-04-09) [stable]

** Bug fixes

  'cat' would fail with "input file is output file" if input and
  output are the same terminal device and the output is append-only.
  [bug introduced in coreutils-9.6]

  'cksum -a crc' misbehaved on aarch64 with 32-bit uint_fast32_t.
  [bug introduced in coreutils-9.6]

  dd with the 'nocache' flag will now detect all failures to drop the
  cache for the whole file.  Previously it may have erroneously succeeded.
  [bug introduced with the "nocache" feature in coreutils-8.11]

  'ls -Z dir' would crash on all systems, and 'ls -l' could crash
  on systems like Android with SELinux but without xattr support.
  [bug introduced in coreutils-9.6]

  'ls -l' could output spurious "Not supported" errors in certain cases,
  like with dangling symlinks on Cygwin.
  [bug introduced in coreutils-9.6]

  timeout would fail to time out commands with infinitesimal timeouts.
  For example 'timeout 1e-5000 sleep inf' would never time out.
  [bug introduced with timeout in coreutils-7.0]

  sleep, tail, and timeout would sometimes sleep for slightly less
  time than requested.
  [bug introduced in coreutils-5.0]

  'who -m' now outputs entries for remote logins.  Previously login
  entries prefixed with the service (like "sshd") were not matched.
  [bug introduced in coreutils-9.4]

** Improvements

  'logname' correctly returns the user who logged in the session,
  on more systems.  Previously on musl or uclibc it would have merely
  output the LOGNAME environment variable.


09 April, 2025 11:36AM by Pádraig Brady

diffutils @ Savannah

diffutils-3.12 released [stable]


This is to announce diffutils-3.12, a stable bug-fix release.
Thanks to Paul Eggert and Collin Funk for the bug fixes.

There have been 13 commits by 4 people in the 9 weeks since 3.11.

See the NEWS below for a brief summary.

Thanks to everyone who has contributed!
The following people contributed changes to this release:

  Collin Funk (1)
  Jim Meyering (6)
  Paul Eggert (5)
  Simon Josefsson (1)

Jim
 [on behalf of the diffutils maintainers]
==================================================================

Here is the GNU diffutils home page:
    https://gnu.org/s/diffutils/

Here are the compressed sources:
  https://ftp.gnu.org/gnu/diffutils/diffutils-3.12.tar.gz   (3.3MB)
  https://ftp.gnu.org/gnu/diffutils/diffutils-3.12.tar.xz   (1.9MB)

Here are the GPG detached signatures:
  https://ftp.gnu.org/gnu/diffutils/diffutils-3.12.tar.gz.sig
  https://ftp.gnu.org/gnu/diffutils/diffutils-3.12.tar.xz.sig

Use a mirror for higher download bandwidth:
  https://www.gnu.org/order/ftp.html

Here are the SHA1 and SHA256 checksums:

  e3f3e8ef171fcb54911d1493ac6066aa3ed9df38  diffutils-3.12.tar.gz
  W+GBsn7Diq0kUAgGYaZOShdSuym31QUr8KAqcPYj+bI=  diffutils-3.12.tar.gz
  c2f302726d2709c6881c4657430a671abe5eedfa  diffutils-3.12.tar.xz
  fIt/n8hgkUH96pzs6FJJ0whiQ5H/Yd7a9Sj8szdyff0=  diffutils-3.12.tar.xz

Verify the base64 SHA256 checksum with cksum -a sha256 --check
from coreutils-9.2 or OpenBSD's cksum since 2007.

Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact.  First, be sure to download both the .sig file
and the corresponding tarball.  Then, run a command like this:

  gpg --verify diffutils-3.12.tar.gz.sig

The signature should match the fingerprint of the following key:

  pub   rsa4096/0x7FD9FCCB000BEEEE 2010-06-14 [SCEA]
        Key fingerprint = 155D 3FC5 00C8 3448 6D1E  EA67 7FD9 FCCB 000B EEEE
  uid                   [ unknown] Jim Meyering <jim@meyering.net>
  uid                   [ unknown] Jim Meyering <meyering@fb.com>
  uid                   [ unknown] Jim Meyering <meyering@gnu.org>

If that command fails because you don't have the required public key,
or that public key has expired, try the following commands to retrieve
or refresh it, and then rerun the 'gpg --verify' command.

  gpg --locate-external-key jim@meyering.net

  gpg --recv-keys 7FD9FCCB000BEEEE

  wget -q -O- 'https://savannah.gnu.org/project/release-gpgkeys.php?group=diffutils&download=1' | gpg --import -

As a last resort to find the key, you can try the official GNU
keyring:

  wget -q https://ftp.gnu.org/gnu/gnu-keyring.gpg
  gpg --keyring gnu-keyring.gpg --verify diffutils-3.12.tar.gz.sig

This release is based on the diffutils git repository, available as

  git clone https://git.savannah.gnu.org/git/diffutils.git

with commit 16681a3cbcea47e82683c713b0dac7d59d85a6fa tagged as v3.12.

For a summary of changes and contributors, see:

  https://git.sv.gnu.org/gitweb/?p=diffutils.git;a=shortlog;h=v3.12

or run this command from a git-cloned diffutils directory:

  git shortlog v3.11..v3.12

This release was bootstrapped with the following tools:
  Autoconf 2.72.76-2f64
  Automake 1.17.0.91
  Gnulib 2025-04-04 3773db653242ab7165cd300295c27405e4f9cc79

NEWS

* Noteworthy changes in release 3.12 (2025-04-08) [stable]

** Bug fixes

  diff -r no longer merely summarizes when comparing an empty regular
  file to a nonempty regular file.
  [bug#76452 introduced in 3.11]

  diff -y no longer crashes when given nontrivial differences.
  [bug#76613 introduced in 3.11]


09 April, 2025 03:16AM by Jim Meyering

April 08, 2025

www @ Savannah

Malware in Proprietary Software - Latest Additions

The initial injustice of proprietary software often leads to further injustices: malicious functionalities.

The introduction of unjust techniques in nonfree software, such as back doors, DRM, tethering, and others, has become ever more frequent. Nowadays, it is standard practice.

We at the GNU Project show examples of malware that has been introduced in a wide variety of products and dis-services people use every day, and of companies that make use of these techniques.

Here are our latest additions:

March 2025

Microsoft's Software is Malware

  • Windows Recall is a feature of Microsoft's Copilot tool that comes preinstalled on AI-specialized computers. Recall records everything users do on their computer and allows them to search the recordings, but it has numerous security flaws and poses a risk to privacy. As Recall cannot be completely uninstalled, disabling it doesn't eliminate the risk because it can be reactivated by malware or misconfiguration. Microsoft says that Recall will not take screenshots of digitally restricted media. Meanwhile, it stores sensitive user information such as passwords and bank account numbers, showing that whereas Microsoft worries somewhat about corporate interests, it couldn't care less about user privacy.
  • Windows Defender deletes downloaded files that it considers malware as soon as they are saved to disk, without requesting permission to do so. Many angry users have complained about this unacceptable behavior over the last few years, and even suggested fixes, but Microsoft has ignored them. It is high time for Windows users to escape Microsoft's tyranny by migrating to a free/libre system.
  • Microsoft has started to show ads in the “Recommended” section of the Windows 11 Start menu. Previously, this section only included recently used documents and images. Now it also contains the icons of apps Microsoft wants to advertise, in the hope that the user will click on one of them, and buy the app. So far, the user can disable the ads, but this doesn't make them more legitimate.
  • In its default configuration, Windows 11 now uploads users' files and personal information to Microsoft's “cloud” without asking permission to do so. This is presented as a convenient backup method, but if the allotted storage capacity is exceeded, the user will need to buy more space, increasing Microsoft's profit. However, this small profit is probably not the company's major reason for making cloud storage the default. Here is an excerpt from the Microsoft Services Agreement (Section 2b): “To the extent necessary to provide the Services to you and others, to protect you and the Services, and to improve Microsoft products and services, you grant to Microsoft a worldwide and royalty-free intellectual property license to use Your Content, for example, to make copies of, retain, transmit, reformat, display, and distribute via communication tools Your Content on the Services.” We strongly suspect that the backed-up material is used to feed Microsoft's greedy “AI.” In addition, it is most likely analysed to better profile users in order to flood them with targeted ads, thereby generating more profit. Users, on the other hand, are at the mercy of any entity that demands their data, let alone of any cracker that breaks into Microsoft's servers. They must escape from this sick environment and install a sane free/libre system.
  • Outlook has become a “data collection and ad delivery service”. Since Outlook is now integrated with Microsoft “cloud” services, and doesn't support end-to-end encryption, the company has full access to users' emails, contacts, and calendar events. Microsoft may also retrieve credentials associated with any third-party services that are synchronized with Outlook. This trove of personal data enables Microsoft, as well as its commercial partners, to flood users with targeted ads, and possibly to train “artificial intelligences.” Even worse, this data is available to any government that can force Microsoft to hand it over.
  • Microsoft is shutting down Skype on May 5th, 2025. As with other tethered proprietary programs, users have to rely on servers that are controlled by the developer. When these servers shut down, the service disappears. Instead of migrating to the service that Microsoft suggests as a replacement, Skype users should regain control of their communications by switching to one that is based on free software. Jitsi Meet, for example, is appropriate for small video meetings. Anyone can set up a Jitsi server and let other people use it, and indeed many of these are available around the world.
  • A critical vulnerability in Windows systems that support IPv6 was discovered in 2024, 16 years after the first affected system was released. Unless the relevant patch is applied, an attacker can remotely execute arbitrary code on these systems. Microsoft considers exploits “likely.” The same sort of vulnerability in a free/libre operating system would probably be discovered sooner, since many more people would be able to look at the source code.

Google's Software is Malware

Proprietary Censorship

Adobe's Software is Malware

  • In its terms of service, Adobe gives itself permission to spy on material that people upload to its servers, supposedly for moderation purposes. In spite of Adobe's denial, we can expect that sooner or later it will use this material to train its so-called “artificial intelligence,” and will claim that by agreeing to the terms of service users gave it the right to do so.

Proprietary Sabotage

  • Ubisoft is facing a fraud lawsuit for shutting down the proprietary video game The Crew, which was tethered to its servers. As this game can't be played offline, people who used to think they owned a copy of it are now realizing they only bought a license that could be revoked at will by the developer. This is one more example of what tethering of a proprietary program leads to. If The Crew were free software, its users would be able to set up another server, and keep on playing.

Proprietary Tethers

Apple's Operating Systems Are Malware

Proprietary Subscriptions


February 2025

Google's Software is Malware

Proprietary Back Doors

  • Eclypsium discovered an insecure universal back door on many computers using Gigabyte mainboards. Gigabyte designed their nonfree firmware so they could add a program to Windows to download additional software from the Internet, and run it behind the user's back. To add injury to injury, the back-door program was insecure, and opened ways for crackers to run their own programs on the affected systems, also behind the user's back.

    Gigabyte's “solution” was to ensure the back door would only run programs from Gigabyte. In this case, the back door required the connivance of Windows accepting the program, and running it behind the user's back. Free operating systems rightly ignore such “Greek gifts,” so users of GNU (including GNU/Linux) are safe from this particular back door, even on affected hardware.

    Nonfree software does not make your computer secure—it does the opposite: it prevents you from trying to secure it. When nonfree programs are required for booting and impossible to replace, they are, in effect, a low-level rootkit. All the things that the industry has done to make its power over you secure against you also protect firmware-level rootkits against you.

    Instead of allowing Intel, AMD, Apple and perhaps ARM to impose security through tyranny, we should demand laws that require them to allow users to install their choice of startup software and make available the information needed to develop such software. Think of this as right-to-repair at the initialization stage.

    Note: Eclypsium at least mentions the problem of “unwanted behavior within official firmware,” but does not seem to recognize that the only real solution is for firmware to be free, so users can fix these problems without having to rely on the vendor.

08 April, 2025 05:31PM by Rob Musial

April 07, 2025

gperf @ Savannah

GNU gperf 3.2 released

Download from https://ftp.gnu.org/gnu/gperf/gperf-3.2.tar.gz

New in this release:

  • The generated code avoids several types of warnings:
    • "implicit fallthrough" warnings in 'switch' statements.
    • "unused parameter" warnings regarding 'str' or 'len'.
    • "missing initializer for field ..." warnings.
    • "zero as null pointer constant" warnings.


  • The input file may now use Windows line terminators (CR/LF) instead of Unix line terminators (LF). Note: This is an incompatible change. If you want to use a keyword that ends in a CR byte, such as xyz<CR>, write it as "xyz\r".
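For illustration, here is a minimal sketch of a gperf input file using this escape. The keywords are hypothetical; only the quoted form of the CR-terminated keyword follows the release note above:

```
%%
alpha
"beta\r"
gamma
```

gperf would previously have accepted a literal CR byte at the end of a keyword line; with CR/LF input now treated as line terminators, the escaped string form is required instead.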

07 April, 2025 10:50AM by Bruno Haible

April 04, 2025

datamash @ Savannah

GNU Datamash 1.9 released

This is to announce datamash-1.9, a stable release.

Home page: https://www.gnu.org/software/datamash

GNU Datamash is a command-line program which performs basic numeric,
textual and statistical operations on input textual data files.

It is designed to be portable and reliable, and to help researchers
easily automate analysis pipelines without writing code or even short
scripts. It is very friendly to GNU Bash and GNU Make pipelines.

There have been 52 commits by 5 people in the 141 weeks since 1.8.

See the NEWS below for a brief summary.

The following people contributed changes to this release:

  Dima Kogan (1)
  Erik Auerswald (14)
  Georg Sauthoff (4)
  Shawn Wagner (6)
  Timothy Rice (27)

Thanks to everyone who has contributed!

Please report any problem you may experience to the bug-datamash@gnu.org
mailing list.

Happy Hacking!
- Tim

==================================================================

Here is the GNU datamash home page:
    https://gnu.org/s/datamash/

Here are the compressed sources and a GPG detached signature:
  https://ftpmirror.gnu.org/datamash/datamash-1.9.tar.gz
  https://ftpmirror.gnu.org/datamash/datamash-1.9.tar.gz.sig

Use a mirror for higher download bandwidth:
  https://www.gnu.org/order/ftp.html

Here are the SHA1 and SHA256 checksums:

  File: datamash-1.9.tar.gz
  SHA1 sum:   935c9f24a925ce34927189ef9f86798a6303ec78
  SHA256 sum: f382ebda03650dd679161f758f9c0a6cc9293213438d4a77a8eda325aacb87d2

Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact.  First, be sure to download both the .sig file
and the corresponding tarball.  Then, run a command like this:

  gpg --verify datamash-1.9.tar.gz.sig

The signature should match the fingerprint of the following key:

  pub   ed25519 2022-04-05 [SC]
        3338 2C8D 6201 7A10 12A0  5B35 BDB7 2EC3 D3F8 7EE6
  uid   Timothy Rice (Yubikey 5 Nano 13139911) <trice@posteo.net>

If that command fails because you don't have the required public key,
or that public key has expired, try the following command to retrieve
or refresh it, and then rerun the 'gpg --verify' command.

  wget -q https://ftp.gnu.org/gnu/gnu-keyring.gpg
  gpg --keyring gnu-keyring.gpg --verify datamash-1.9.tar.gz.sig

This release is based on the datamash git repository, available as

  git clone https://git.savannah.gnu.org/git/datamash.git

with commit 39101c367a07f2c1aea8f3b540fc490735596e6a tagged as v1.9.

For a summary of changes and contributors, see:

  https://git.sv.gnu.org/gitweb/?p=datamash.git;a=shortlog;h=v1.9

or run this command from a git-cloned datamash directory:

  git shortlog v1.8..v1.9

This release was bootstrapped with the following tools:
  Autoconf 2.72
  Automake 1.17
  Gnulib 2025-03-27 54fc57c23dcd833819a7adbdfcc3bd1c805103a8

NEWS

* Noteworthy changes in release 1.9 (2025-04-05) [stable]

** Changes in Behavior

  datamash(1), decorate(1): Add short options -h and -V for --help and --version
  respectively.

  datamash(1): the rand operation now uses getrandom(2) for generating a random
  seed, instead of relying on date/time/pid mixing.

** New Features

  datamash(1): add operation dotprod for calculating the scalar product of two
  columns.

  datamash(1): Add option -S/--seed to set a specific seed for pseudo-random
  number generation.

  datamash(1): Add option --vnlog to enable experimental support for the vnlog
  format. More about vnlog is at https://github.com/dkogan/vnlog.

  datamash(1): -g/groupby now accepts ranges of columns (e.g., 1-4).

** Bug Fixes

  datamash(1) now correctly calculates the "antimode" for a sequence
  of numbers.  Problem reported by Kingsley G. Morse Jr. in
  <https://lists.gnu.org/archive/html/bug-datamash/2023-12/msg00003.html>.

  When using the locale's decimal separator as field separator, numeric
  datamash(1) operations now work correctly.  Problem reported by Jérémie
  Roquet in
  <https://lists.gnu.org/archive/html/bug-datamash/2018-09/msg00000.html>
  and by Jeroen Hoek in
  <https://lists.gnu.org/archive/html/bug-datamash/2023-11/msg00000.html>.

  datamash(1): The "getnum" operation now stays inside the specified field.

04 April, 2025 08:52PM by Tim Rice

April 02, 2025

GNU Artanis

April 01, 2025

FSF Blogs

March GNU Spotlight with Amin Bandali

Eighteen new GNU releases in the last month (as of March 31, 2025):

01 April, 2025 03:55PM

March 29, 2025

patch @ Savannah

GNU patch 2.8 released

I am pleased to announce the release of GNU patch 2.8.

The project page is at https://savannah.gnu.org/projects/patch

The sources can be downloaded from http://ftpmirror.gnu.org/patch/

The sha256sum checksums are:

  308a4983ff324521b9b21310bfc2398ca861798f02307c79eb99bb0e0d2bf980  patch-2.8.tar.gz
  7f51814e85e780b39704c9b90d264ba3515377994ea18a2fabd5d213e5a862bc  patch-2.8.tar.bz2
  f87cee69eec2b4fcbf60a396b030ad6aa3415f192aa5f7ee84cad5e11f7f5ae3  patch-2.8.tar.xz

This release is also GPG signed. You can download the signature by appending '.sig' to the URL. If the 'gpg --verify' command fails because you don't have the required public key, then run this command to import it:

  gpg --recv-keys D5BF9FEB0313653A

Key fingerprint = 259B 3792 B3D6 D319 212C  C4DC D5BF 9FEB 0313 653A

NEWS since v2.7.6 (2018-02-03):

  • The --follow-symlinks option now applies to output files as well as input.
  • 'patch' now supports file timestamps after 2038 even on traditional
    GNU/Linux platforms where time_t defaults to 32 bits.
  • 'patch' no longer creates files with names containing newlines,
    as encouraged by POSIX.1-2024.
  • Patches can no longer contain NUL ('\0') bytes in diff directive lines.
    These bytes would otherwise cause unpredictable behavior.
  • Patches can now contain sequences of spaces and tabs around line numbers
    and in other places where POSIX requires support for these sequences.
  • --enable-gcc-warnings no longer uses expensive static checking.
    Use --enable-gcc-warnings=expensive if you still want it.
  • Fix undefined or ill-defined behavior in unusual cases, such as very
    large sizes, possible stack overflow, I/O errors, memory exhaustion,
    races with other processes, and signals arriving at inopportune moments.
  • Remove old "Plan B" code, designed for machines with 16-bit pointers.
  • Assume C99 or later; previously it assumed C89 or later.
  • Port to current GCC, Autoconf, Gnulib, etc.
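The basic workflow the tool supports can be exercised with a minimal round trip (a generic diff/patch sketch, not specific to 2.8; file names are hypothetical):

```shell
set -e
# Work in a scratch directory.
tmp=$(mktemp -d); cd "$tmp"
printf 'hello\nworld\n' > orig.txt
printf 'hello\nthere\n' > new.txt
# diff exits with status 1 when the files differ, so tolerate it.
diff -u orig.txt new.txt > change.diff || true
# Apply the patch to the original file.
patch orig.txt < change.diff
# The patched original should now match the modified file.
cmp -s orig.txt new.txt && echo 'patched OK'
```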


The following people contributed changes to this release:
  Andreas Gruenbacher (34)
  Bruno Haible (5)
  Collin Funk (2)
  Eli Schwartz (1)
  Jean Delvare (2)
  Jim Meyering (1)
  Kerin Millar (1)
  Paul Eggert (166)
  Petr Vaněk (1)
  Sam James (1)
  Takashi Iwai (1)

Special thanks to Paul Eggert for doing the vast majority of the work.

Regards,
Andreas Gruenbacher

29 March, 2025 06:41PM by Andreas Gruenbacher

March 28, 2025

FSF Blogs

"Free" filing should be free as in freedom

A modern free society has an obligation to offer electronic tax filing that respects user freedom, and the United States is no exception to this responsibility.

28 March, 2025 05:15PM

March 24, 2025

gnuboot @ Savannah

New GNU Boot 0.1 RC6 release.

This release is meant to fix multiple security issues that are present
in the GRUB version we use (2.06+).

Users who replaced the GNU Boot picture/logo with untrusted images
could be affected if the images they used were specially crafted to
exploit a vulnerability in GRUB and take full control of the computer.
In general, it is a good idea to avoid using untrusted images in GRUB
or other boot software to limit such risks, because software can have
bugs (a similar issue also happened in a free software UEFI
implementation).

Users who implemented a user-respecting flavor of secure boot, either
with GPG signatures and/or with a GRUB password combined with full
disk encryption, are also affected, as these security vulnerabilities
could enable people to bypass such secure-boot schemes.

In addition, there are security vulnerabilities in GRUB's file system
support that also enable execution of code. When booting, GRUB has to
load files (like the Linux or Linux-libre kernel) that are executed
anyway, but in some cases users could still be affected: for example,
when booting from one USB key while another attached USB key carries a
file system crafted to take control of the computer.

At this time, no exploits are known to the GNU Boot maintainers.

Why it took so long.
--------------------

On 18 February, the GRUB maintainer posted patches on the grub-devel
mailing list to notify people that several security vulnerabilities in
GRUB had been fixed, and which commits fixed them.

One of the GNU Boot maintainers saw these patches but didn't read the
mails, assumed that a new GRUB release was near, and decided to wait
for it, as that would make things easier since GRUB releases are
tested in many different situations.

However, the thread posting these patches also mentioned that a new
release would take too much time and that the GRUB contributors and
maintainers already had a lot to deal with.

It took a while to realize the problem: a second GNU Boot maintainer
saw the GRUB security vulnerabilities later on, realized that nothing
had happened yet on the GRUB side, and looked into the issue.

In addition, the computer of one of the GNU Boot maintainers broke,
which further delayed the review of the GNU Boot patches meant to fix
these security issues.

These patches also contain fixes for the GNU Boot build system, to
ensure that users building GNU Boot themselves really do get an
updated GRUB version.

As this is a new release candidate, we also need help with reports on
which computers and/or configurations it works or doesn't work,
especially because we had to update to an unreleased GRUB version to
get the fixes (see below for more details).

Other affected distributions?
-----------------------------

We started telling the Canoeboot and Libreboot maintainer about the
issue, only to find out that it had been fixed there since 18
February, and that their users had been notified via a news entry, so
everything is good on that side.

For most 100% free distributions, using GRUB from git would be
a significant effort in testing and/or in packaging.

We notified Trisquel, Parabola and Guix, and the ones who responded
are not comfortable updating GRUB to a not-yet-released git revision.
In the case of Parabola, though, nothing prevents adding a new
grub-git package without known vulnerabilities alongside the existing
grub package, so patches for that are welcome.

As for the other distributions, most of them do support secure boot
(by supporting UEFI secure boot), but they are probably aware of the
issue, as maintainers of distributions like Debian or Arch Linux
either responded to the thread on the GRUB mailing list or were
mentioned in it as having fixed the issue.

At the time of writing, the affected GRUB versions seem not to be
blacklisted yet by UEFI implementations, so there is still some time
to fix the issue; and things like a GRUB password can usually be
bypassed easily unless people use distributions like GNU Boot or
Canoeboot and configure both the hardware and the software for a
secure-boot scheme that respects users' freedoms.

As for PureOS, we just notified them in a bug report, as they support
UEFI secure boot; on the other hand, we don't know whether they are
aware of that (they are based on Debian, so it could be inherited from
Debian rather than something supported/advertised). And because Purism
(the company behind PureOS) ships computers with its own secure-boot
scheme (PureBoot), which works in a very different way, we are not
sure whether their users could be affected.

References and details
----------------------

This release should fix the following CVEs affecting previous GRUB
2.06: CVE-2025-0690, CVE-2025-0622, CVE-2024-45775, CVE-2024-45777,
CVE-2024-45778, CVE-2024-45779, CVE-2024-45781, CVE-2024-45782,
CVE-2024-45783, CVE-2025-0624, CVE-2025-0677, CVE-2025-0684,
CVE-2025-0685, CVE-2025-0686, CVE-2025-0689, CVE-2025-1125.

More details are available in the "[SECURITY PATCH 00/73] GRUB2
vulnerabilities - 2025/02/18" thread from Daniel Kiper (Tuesday, 18
February 2025, archived online at
https://lists.gnu.org/archive/html/grub-devel/2025-02/msg00024.html).

24 March, 2025 09:11PM by GNUtoo

FSF News

GNU Head, Stallman's katana, and Internet Hall of Fame medal auctioned off to free software community members

BOSTON, Massachusetts, USA (Monday, March 24, 2025), The Free Software Foundation (FSF) today announced that, among other historical free software artifacts, the GNU Head found a new home through an unprecedented memorabilia auction.

24 March, 2025 08:15PM

Simon Josefsson

Reproducible Software Releases

Around a year ago I discussed two concerns with software release archives (tarball artifacts) that could be improved to increase confidence in the supply-chain security of software releases. Repeating the goals for simplicity:

  • Release artifacts should be built in a way that can be reproduced by others
  • It should be possible to build a project from a source tarball that doesn’t contain any generated or vendor files (e.g., in the style of git-archive).

While implementing these ideas for a small project was accomplished within weeks – see my announcement of Libntlm version 1.8 – addressing this in complex projects uncovered tooling concerns that had to be resolved first, and things stalled for many months pending that work.

I had the notion that these two goals would be easy to accomplish. I still believe that, but have had to realize that improving tooling to support these goals takes time. It seems clear that these concepts are not universally agreed on or widely implemented.

I’m now happy to recap some of the work that led to releases of libtasn1 v4.20.0, inetutils v2.6, libidn2 v2.3.8, and libidn v1.43. These releases all achieve these goals. I am working on a bunch more projects to support these ideas too.

What have the obstacles been so far? It may help others who are in the same process of addressing these concerns to have a high-level introduction to the issues I encountered. Source code for the projects above is available, and anyone can look at the solutions to learn how the problems are addressed.

First let’s look at the problems we need to solve to make “git-archive” style tarballs usable:

Version Handling

To build usable binaries from a minimal tarball, the build needs to know which version number it is for. Traditionally this information was stored inside configure.ac in git. However I use gnulib’s git-version-gen to infer the version number from the git tag or git commit instead. The git tag information is not available in a git-archive tarball. My solution to this was to make use of the export-subst feature of the .gitattributes file. I store the file .tarball-version-git in git containing the magic cookie like this:

$Format:%(describe)$

With this, git-archive will replace it with a useful version identifier on export, see the libtasn1 patch to achieve this. To make use of this information, the git-version-gen script was enhanced to read it, see the gnulib patch. This is invoked by ./configure to figure out which version number the package is for.
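Concretely, the export-subst mechanism is enabled by a one-line .gitattributes entry for that file (a sketch; the file name follows the text above):

```
# .gitattributes: expand $Format:...$ cookies in this file on git-archive export
.tarball-version-git  export-subst
```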

Translations

We want translations to be included in the minimal source tarball for it to be buildable. Traditionally these files are retrieved by the maintainer from the Translation Project when running ./bootstrap, however there are two problems with this. The first one is that there is no strong authentication or versioning information on this data; the tools just download and place whatever wget fetched into your source tree (printf-style injection attack, anyone?). We could improve this (e.g., publish GnuPG-signed translation messages with clear versioning), however I did not work on that further. The second is that I want to support offline builds of packages: downloading random things from the Internet during builds does not work when building a Debian package, for example.

The Translation Project could solve this by making a monthly tarball with their translations available, for distributors to pick up and provide as a separate package that could be used as a build dependency. However that is not how these tools and projects are designed.

Instead I reverted to storing translations in git, something that I did for most projects back when I was using CVS 20 years ago. Hooking this into the ./bootstrap and gettext workflow can be tricky (ideas for improvement most welcome!), but I used a simple approach: store all directly downloaded po/*.po files as po/*.po.in and have the ./bootstrap tool move them into place, see the libidn2 commit followed by the actual ‘make update-po’ commit with all the translations, where one essential step is:

# Prime po/*.po from fall-back copy stored in git.
for poin in po/*.po.in; do
    po=$(echo $poin | sed 's/\.in$//')
    test -f $po || cp -v $poin $po
done
ls po/*.po | sed 's|.*/||; s|\.po$||' > po/LINGUAS

Fetching vendor files like gnulib

Most build dependencies are in the shape of “You need a C compiler”. However some come in the shape of “source-code files intended to be vendored”, and gnulib is a huge repository of such files. The latter is a problem when building from a minimal git archive. It is possible to consider translation files as a class of vendor files, since they need to be copied verbatim into the project build directory for things to work. The same goes for *.m4 macros from the GNU Autoconf Archive. However I’m not confident that the solution for all vendor files must be the same.

For translation files and for Autoconf Archive macros, I have decided to put these files into git and merge them manually occasionally. For gnulib files, in some projects like OATH Toolkit I also store all gnulib files in git, which effectively resolves this concern. (Incidentally, the reason for doing so was originally that running ./bootstrap took forever since there are five gnulib instances used, which is no longer the case since gnulib-tool was rewritten in Python.)

For most projects, however, I rely on ./bootstrap to fetch a gnulib git clone when building. I like this model, but it doesn’t work offline. One way to resolve this is to make the gnulib git repository available for offline use, and I’ve made some effort to make this happen via a Gnulib Git Bundle and have explained how to implement this approach for Debian packaging. I don’t think that is sufficient as a generic solution though; it is mostly applicable to building old releases that use old gnulib files. It won’t work when building from CI/CD pipelines, for example, where I have settled on a crude way of fetching and unpacking a particular gnulib snapshot, see this Libntlm patch. This is much faster than working with git submodules and cloning gnulib during ./bootstrap. Essentially this is doing:

GNULIB_REVISION=$(. bootstrap.conf >&2; echo $GNULIB_REVISION)
wget -nv https://gitlab.com/libidn/gnulib-mirror/-/archive/$GNULIB_REVISION/gnulib-mirror-$GNULIB_REVISION.tar.gz
gzip -cd gnulib-mirror-$GNULIB_REVISION.tar.gz | tar xf -
rm -fv gnulib-mirror-$GNULIB_REVISION.tar.gz
export GNULIB_SRCDIR=$PWD/gnulib-mirror-$GNULIB_REVISION
./bootstrap --no-git
./configure
make

Test the git-archive tarball

This goes without saying, but if you don’t test that building from a git-archive style tarball works, you are likely to regress at some point. Use CI/CD techniques to continuously test that a minimal git-archive tarball leads to a usable build.

Mission Accomplished

So that wasn’t hard, was it? You should now be able to publish a minimal git-archive tarball and users should be able to build your project from it.

I recommend naming these archives PROJECT-vX.Y.Z-src.tar.gz, replacing PROJECT with your project name and X.Y.Z with your version number. The archive should have only one sub-directory, named PROJECT-vX.Y.Z/, containing all the source-code files. This differentiates it from traditional PROJECT-X.Y.Z.tar.gz tarballs in that it embeds the git tag (which typically starts with v) and contains a wildcard-friendly -src substring. Alas there is no consistency around this naming pattern, and GitLab, GitHub, Codeberg etc. all seem to use their own slightly incompatible variants.
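A sketch of producing such an artifact with git-archive, using a hypothetical project name "demo" and tag v1.2.3 (a throwaway repository stands in for a real project):

```shell
set -e
# Throwaway repository for demonstration only.
tmp=$(mktemp -d); cd "$tmp"
git init -q demo && cd demo
echo 'int main(void) { return 0; }' > main.c
git add main.c
git -c user.name=demo -c user.email=demo@example.org \
    commit -q -m 'initial import'
git tag v1.2.3
# PROJECT-vX.Y.Z-src.tar.gz with a single PROJECT-vX.Y.Z/ prefix.
git archive --format=tar.gz --prefix=demo-v1.2.3/ \
    -o ../demo-v1.2.3-src.tar.gz v1.2.3
tar tzf ../demo-v1.2.3-src.tar.gz
```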

Let’s go on to see what is needed to achieve reproducible “make dist” source tarballs. This is the release artifact that most users use, and they often contain lots of generated files and vendor files. These files are included to make it easy to build for the user. What are the challenges to make these reproducible?

Build dependencies causing different generated content

The first part is to realize that if you use tool X with version A to generate a file that goes into the tarball, version B of that tool may produce different output. This is a generic concern and it cannot be solved completely: we want our build tools to evolve and produce better output over time. What can be addressed is avoiding needless differences. For example, many tools store timestamps and versioning information in the generated files. This causes needless differences, which makes audits harder. I have worked on some of these, like Autoconf Archive timestamps, but solving all of these examples will take a long time, and some upstreams are reluctant to incorporate the changes.

My approach meanwhile is to build things in similar environments and compare the outputs for differences. I’ve found that the various closely related forks of GNU/Linux distributions are useful for this. Trisquel 11 is based on Ubuntu 22.04, and building my projects with both and comparing the outputs gives me only the relevant differences to improve. This can be extended to compare AlmaLinux with Rocky Linux (for both versions 8 and 9), Devuan 5 against Debian 12, PureOS 10 with Debian 11, and so on.
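As a low-level illustration of eliminating needless differences (my own sketch, not from the post; assumes GNU tar): pin file ordering, timestamps, and ownership, and suppress gzip's embedded timestamp, so two runs of the same packaging step produce byte-identical archives:

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
mkdir src && printf 'hello\n' > src/file.txt
# Deterministic tar: fixed sort order, epoch mtimes, numeric 0:0
# ownership; gzip -n omits the input name and timestamp.
mktar() {
  tar --sort=name --mtime=@0 --owner=0 --group=0 --numeric-owner \
      -cf - src | gzip -n > "$1"
}
mktar a.tar.gz
sleep 1          # a different wall-clock time must not matter
mktar b.tar.gz
cmp a.tar.gz b.tar.gz && echo identical
```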

Timestamps

Sometimes tools store timestamps in files in a way that is harder to fix. Two notable examples of this are *.po translation files and Texinfo manuals. For translation files, I have resolved this by making sure the files use a predictable POT-Creation-Date timestamp, which I set to the modification timestamp of the NEWS file in the repository (itself set, elsewhere, to the date of the latest git commit) like this:

dist-hook: po-CreationDate-to-mtime-NEWS
.PHONY: po-CreationDate-to-mtime-NEWS
po-CreationDate-to-mtime-NEWS: mtime-NEWS-to-git-HEAD
  $(AM_V_GEN)for p in $(distdir)/po/*.po $(distdir)/po/$(PACKAGE).pot; do \
    if test -f "$$p"; then \
      $(SED) -e 's,POT-Creation-Date: .*\\n",POT-Creation-Date: '"$$(env LC_ALL=C TZ=UTC0 stat --format=%y $(srcdir)/NEWS | cut -c1-16,31-)"'\\n",' < $$p > $$p.tmp && \
      if cmp $$p $$p.tmp > /dev/null; then \
        rm -f $$p.tmp; \
      else \
        mv $$p.tmp $$p; \
      fi \
    fi \
  done

Similarly, I set a predictable modification time of the texinfo source file like this:

dist-hook: mtime-NEWS-to-git-HEAD
.PHONY: mtime-NEWS-to-git-HEAD
mtime-NEWS-to-git-HEAD:
  $(AM_V_GEN)if test -e $(srcdir)/.git \
                && command -v git > /dev/null; then \
    touch -m -t "$$(git log -1 --format=%cd \
      --date=format-local:%Y%m%d%H%M.%S)" $(srcdir)/NEWS; \
  fi

However, I’ve realized that this needs to happen earlier, and probably has to run at ./configure time, because the doc/version.texi file is generated during the first build, before ‘make dist‘ is run, and for some reason the file is not rebuilt at release time. The Automake Texinfo integration is a bit inflexible about providing hooks to extend the dependency tracking.
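One untested direction, sketched here as a hypothetical configure.ac fragment, would be to perform the same NEWS mtime normalization at configure time, so that doc/version.texi inherits a predictable timestamp already on the first build:

```m4
dnl Hypothetical sketch: normalize the NEWS mtime before Automake's
dnl version.texi rule first runs; mirrors the mtime-NEWS-to-git-HEAD
dnl dist-hook above.  Untested against Automake's dependency tracking.
AS_IF([test -e "$srcdir/.git" && command -v git > /dev/null], [
  touch -m -t "$(git -C "$srcdir" log -1 --format=%cd \
    --date=format-local:%Y%m%d%H%M.%S)" "$srcdir/NEWS"
])
```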

The method to address these differences isn’t really important, and they change over time depending on preferences. What is important is that the differences are eliminated.

ChangeLog

Traditionally ChangeLog files were prepared manually, and still are for some projects. I maintain git2cl, but recently I’ve settled on gnulib’s gitlog-to-changelog because doing so avoids another build dependency (although the output formatting is different and arguably worse for my git commit style). So the ChangeLog files are generated from git history. This means a shallow clone will not produce the same ChangeLog file, depending on how deeply it was cloned. For Libntlm I simply disabled the generated ChangeLog because I wanted to support an even more extreme form of reproducibility: I wanted to be able to reproduce the full “make dist” source archives from a minimal “git-archive” source archive. For other projects, however, I’ve settled on a middle ground. I realized that for ‘git describe‘ to produce reproducible output, the shallow clone needs to include the last release tag. So it felt acceptable to assume that the clone is not minimal, but instead has some, though not all, of the history. I settled on the following recipe to produce ChangeLogs covering all changes since the last release.

dist-hook: gen-ChangeLog
.PHONY: gen-ChangeLog
gen-ChangeLog:
  $(AM_V_GEN)if test -e $(srcdir)/.git; then			\
    LC_ALL=en_US.UTF-8 TZ=UTC0					\
    $(top_srcdir)/build-aux/gitlog-to-changelog			\
       --srcdir=$(srcdir) --					\
       v$(PREV_VERSION)~.. > $(distdir)/cl-t &&			\
       { printf '\n\nSee the source repo for older entries\n'	\
         >> $(distdir)/cl-t &&					\
         rm -f $(distdir)/ChangeLog &&				\
         mv $(distdir)/cl-t $(distdir)/ChangeLog; }		\
  fi
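Since this recipe assumes the previous release tag is reachable in the clone, a clone that is too shallow would silently produce a truncated ChangeLog. A hypothetical guard (using a throwaway repository here to demonstrate both outcomes) could check the tag explicitly before generating the file:

```shell
#!/bin/bash
# Illustrative: guard ChangeLog generation on the presence of the
# previous release tag.  A throwaway repo is set up to show both sides;
# in a real Makefile the check would run against $(srcdir).
set -e
repo=$(mktemp -d)
git init -q "$repo"
git -C "$repo" -c user.email=you@example.org -c user.name=you \
    commit -q --allow-empty -m 'initial commit'
git -C "$repo" tag v1.0

check_tag() {  # check_tag REPO VERSION: succeed if tag vVERSION exists
  git -C "$1" rev-parse -q --verify "v$2^{commit}" > /dev/null
}

if check_tag "$repo" 1.0; then
  echo 'v1.0 present: safe to generate ChangeLog'
fi
if ! check_tag "$repo" 9.9; then
  echo 'v9.9 missing: clone too shallow, refuse to generate ChangeLog'
fi
```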

I’m undecided about the usefulness of generated ChangeLog files within ‘make dist‘ archives. Until stable and secure archival of git repositories is widely implemented, I can see some utility in them in case we lose all copies of the upstream git repositories. I can also sympathize with the view that the concept of ChangeLog files died when we started to generate them from git logs: the files no longer serve any purpose, and we can ask people to look at the git log instead of reading these generated non-source files.

Long-term reproducible trusted build environment

Distributions come and go, and old releases of them go out of support and often stop working. Which build environment should I choose to build the official release archives? To my knowledge, only Guix offers a reliable way to re-create an older build environment (guix time-machine), with bootstrappable properties for additional confidence. However, I had two difficult problems here. The first was that I needed Guix container images that were usable in GitLab CI/CD pipelines, and this side-tracked me for a while. The second delayed my effort for many months, and I was inclined to give up. Libidn distributes a C# implementation. Some of the C# source code files included in the release tarball are generated. By what? You guessed it: by a C# program, with the source code included in the distribution. This means nobody could reproduce the source tarball of Libidn without trusting someone else’s C# compiler binaries, which were built from binaries of earlier releases, chaining back into something that nobody attempts to build any more and that likely fails to build due to bit-rot. I had two basic choices: either remove the C# implementation from Libidn (which may be a good idea for other reasons, since the C and C# are unrelated implementations), or build the source tarball on some binary-only distribution like Trisquel. Neither felt appealing to me, but a late Christmas gift came to Guix in the form of a reproducible Mono, which resolved this.
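To make the chosen Guix environment reconstructible years later, the channel commit can be pinned in a channels.scm file and replayed with guix time-machine. A minimal sketch (the commit hash below is a placeholder, not a real pin):

```scheme
;; channels.scm -- pin Guix itself to an exact commit (hash is illustrative).
(list (channel
        (name 'guix)
        (url "https://git.savannah.gnu.org/git/guix.git")
        (commit "0123456789abcdef0123456789abcdef01234567")))
```

Then something along the lines of guix time-machine -C channels.scm -- shell gcc-toolchain gnu-make -- make dist re-creates the same build environment on demand, with the bootstrappable properties mentioned above.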

Embedded images in Texinfo manual

For Libidn, one section of the manual has an image illustrating some concepts. The PNG, PDF and EPS outputs were generated via fig2dev from a *.fig file (hello 1985!) that I had stored in git. Over time, I had also started to store the generated outputs because of build issues. At some point it was possible to post-process the PDF outputs with grep to remove some timestamps, but with compression this is no longer possible, and the grep command I used actually resulted in a 0-byte output file. So my embedded binaries in git were no longer reproducible. I first set out to fix this by post-processing things properly, but then realized that the *.fig file is not really easy to work with in a modern world. I wanted to create an image from some text-file description of it. Eventually, via the Guix manual on guix graph, I came to re-discover the graphviz language and its tool dot (hello 1993!). All well then? Oh no, the PDF output embeds timestamps. Binary editing of PDFs no longer works through simple grep, remember? I was back where I started, and after some (soul- and web-) searching I discovered that Ghostscript (hello 1988!) pdfmarks could be used to modify things here. Cooperating with Automake’s Texinfo rules related to make dist proved once again a worthy challenge, and eventually I ended up with a Makefile.am snippet to build images that could be condensed into:

info_TEXINFOS = libidn.texi
libidn_TEXINFOS += libidn-components.png
imagesdir = $(infodir)
images_DATA = libidn-components.png
EXTRA_DIST += components.dot
DISTCLEANFILES = \
  libidn-components.eps libidn-components.png libidn-components.pdf
libidn-components.eps: $(srcdir)/components.dot
  $(AM_V_GEN)$(DOT) -Nfontsize=9 -Teps < $< > $@.tmp
  $(AM_V_at)! grep %%CreationDate $@.tmp
  $(AM_V_at)mv $@.tmp $@
libidn-components.pdf: $(srcdir)/components.dot
  $(AM_V_GEN)$(DOT) -Nfontsize=9 -Tpdf < $< > $@.tmp
# A simple sed on CreationDate is no longer possible due to compression.
# 'exiftool -CreateDate' is alternative to 'gs', but adds ~4kb to file.
# Ghostscript add <1kb.  Why can't 'dot' avoid setting CreationDate?
  $(AM_V_at)printf '[ /ModDate ()\n  /CreationDate ()\n  /DOCINFO pdfmark\n' > pdfmarks
  $(AM_V_at)$(GS) -q -dBATCH -dNOPAUSE -sDEVICE=pdfwrite -sOutputFile=$@.tmp2 $@.tmp pdfmarks
  $(AM_V_at)rm -f $@.tmp pdfmarks
  $(AM_V_at)mv $@.tmp2 $@
libidn-components.png: $(srcdir)/components.dot
  $(AM_V_GEN)$(DOT) -Nfontsize=9 -Tpng < $< > $@.tmp
  $(AM_V_at)mv $@.tmp $@
pdf-recursive: libidn-components.pdf
dvi-recursive: libidn-components.eps
ps-recursive: libidn-components.eps
info-recursive: $(top_srcdir)/.version libidn-components.png
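For reference, the components.dot source feeding these rules can be as small as the following (hypothetical content, not Libidn’s actual diagram):

```dot
// Hypothetical graphviz source: the text description from which dot
// renders the PNG/EPS/PDF outputs built by the rules above.
digraph components {
  node [shape=box];
  "application" -> "libidn2" -> "libunistring";
}
```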

Surely this can be improved, but I’m not yet certain which way forward is best. I like having a text representation as the source of the image. I’m sad that the new image size is ~48kb compared to the old image size of ~1kb. I tried exiftool -CreateDate as an alternative to Ghostscript, but using it to remove the timestamp added ~4kb to the file size, and naturally I was appalled by this ignorance of impending doom.

Test reproducibility of tarball

Again, you need to continuously test the properties you desire. This means building your project twice in different environments and comparing the results. I’ve settled on a small GitLab CI/CD pipeline job that performs bit-by-bit comparison of the generated ‘make dist’ archives. It also performs bit-by-bit comparison of the generated ‘git-archive’ artifacts. See the Libidn2 .gitlab-ci.yml 0-compare job, which essentially is:

0-compare:
  image: alpine:latest
  stage: repro
  needs: [ B-AlmaLinux8, B-AlmaLinux9, B-RockyLinux8, B-RockyLinux9, B-Trisquel11, B-Ubuntu2204, B-PureOS10, B-Debian11, B-Devuan5, B-Debian12, B-gcc, B-clang, B-Guix, R-Guix, R-Debian12, R-Ubuntu2404, S-Trisquel10, S-Ubuntu2004 ]
  script:
  - cd out
  - sha256sum */*.tar.* */*/*.tar.* | sort | grep    -- -src.tar.
  - sha256sum */*.tar.* */*/*.tar.* | sort | grep -v -- -src.tar.
  - sha256sum */*.tar.* */*/*.tar.* | sort | uniq -c -w64 | sort -rn
  - sha256sum */*.tar.* */*/*.tar.* | grep    -- -src.tar. | sort | uniq -c -w64 | grep -v '^      1 '
  - sha256sum */*.tar.* */*/*.tar.* | grep -v -- -src.tar. | sort | uniq -c -w64 | grep -v '^      1 '
# Confirm modern git-archive tarball reproducibility
  - cmp b-almalinux8/src/*.tar.gz b-almalinux9/src/*.tar.gz
  - cmp b-almalinux8/src/*.tar.gz b-rockylinux8/src/*.tar.gz
  - cmp b-almalinux8/src/*.tar.gz b-rockylinux9/src/*.tar.gz
  - cmp b-almalinux8/src/*.tar.gz b-debian12/src/*.tar.gz
  - cmp b-almalinux8/src/*.tar.gz b-devuan5/src/*.tar.gz
  - cmp b-almalinux8/src/*.tar.gz r-guix/src/*.tar.gz
  - cmp b-almalinux8/src/*.tar.gz r-debian12/src/*.tar.gz
  - cmp b-almalinux8/src/*.tar.gz r-ubuntu2404/src/*v2.*.tar.gz
# Confirm old git-archive (export-subst but long git describe) tarball reproducibility
  - cmp b-trisquel11/src/*.tar.gz b-ubuntu2204/src/*.tar.gz
# Confirm really old git-archive (no export-subst) tarball reproducibility
  - cmp b-debian11/src/*.tar.gz b-pureos10/src/*.tar.gz
# Confirm 'make dist' generated tarball reproducibility
  - cmp b-almalinux8/*.tar.gz b-rockylinux8/*.tar.gz
  - cmp b-almalinux9/*.tar.gz b-rockylinux9/*.tar.gz
  - cmp b-pureos10/*.tar.gz b-debian11/*.tar.gz
  - cmp b-devuan5/*.tar.gz b-debian12/*.tar.gz
  - cmp b-trisquel11/*.tar.gz b-ubuntu2204/*.tar.gz
  - cmp b-guix/*.tar.gz r-guix/*.tar.gz
# Confirm 'make dist' from git-archive tarball reproducibility
  - cmp s-trisquel10/*.tar.gz s-ubuntu2004/*.tar.gz

Notice that I discovered that ‘git archive’ outputs differ over time too, which is natural but a bit of a nuisance. The output of the job is illuminating in that all SHA256 checksums of the generated tarballs are included, for example in the libidn2 v2.3.8 job log:

$ sha256sum */*.tar.* */*/*.tar.* | sort | grep -v -- -src.tar.
368488b6cc8697a0a937b9eb307a014396dd17d3feba3881e6911d549732a293  b-trisquel11/libidn2-2.3.8.tar.gz
368488b6cc8697a0a937b9eb307a014396dd17d3feba3881e6911d549732a293  b-ubuntu2204/libidn2-2.3.8.tar.gz
59db2d045fdc5639c98592d236403daa24d33d7c8db0986686b2a3056dfe0ded  b-debian11/libidn2-2.3.8.tar.gz
59db2d045fdc5639c98592d236403daa24d33d7c8db0986686b2a3056dfe0ded  b-pureos10/libidn2-2.3.8.tar.gz
5bd521d5ecd75f4b0ab0fc6d95d444944ef44a84cad859c9fb01363d3ce48bb8  s-trisquel10/libidn2-2.3.8.tar.gz
5bd521d5ecd75f4b0ab0fc6d95d444944ef44a84cad859c9fb01363d3ce48bb8  s-ubuntu2004/libidn2-2.3.8.tar.gz
7f1dcdea3772a34b7a9f22d6ae6361cdcbe5513e3b6485d40100b8565c9b961a  b-almalinux8/libidn2-2.3.8.tar.gz
7f1dcdea3772a34b7a9f22d6ae6361cdcbe5513e3b6485d40100b8565c9b961a  b-rockylinux8/libidn2-2.3.8.tar.gz
8031278157ce43b5813f36cf8dd6baf0d9a7f88324ced796765dcd5cd96ccc06  b-clang/libidn2-2.3.8.tar.gz
8031278157ce43b5813f36cf8dd6baf0d9a7f88324ced796765dcd5cd96ccc06  b-debian12/libidn2-2.3.8.tar.gz
8031278157ce43b5813f36cf8dd6baf0d9a7f88324ced796765dcd5cd96ccc06  b-devuan5/libidn2-2.3.8.tar.gz
8031278157ce43b5813f36cf8dd6baf0d9a7f88324ced796765dcd5cd96ccc06  b-gcc/libidn2-2.3.8.tar.gz
8031278157ce43b5813f36cf8dd6baf0d9a7f88324ced796765dcd5cd96ccc06  r-debian12/libidn2-2.3.8.tar.gz
acf5cbb295e0693e4394a56c71600421059f9c9bf45ccf8a7e305c995630b32b  r-ubuntu2404/libidn2-2.3.8.tar.gz
cbdb75c38100e9267670b916f41878b6dbc35f9c6cbe60d50f458b40df64fcf1  b-almalinux9/libidn2-2.3.8.tar.gz
cbdb75c38100e9267670b916f41878b6dbc35f9c6cbe60d50f458b40df64fcf1  b-rockylinux9/libidn2-2.3.8.tar.gz
f557911bf6171621e1f72ff35f5b1825bb35b52ed45325dcdee931e5d3c0787a  b-guix/libidn2-2.3.8.tar.gz
f557911bf6171621e1f72ff35f5b1825bb35b52ed45325dcdee931e5d3c0787a  r-guix/libidn2-2.3.8.tar.gz

I’m sure I have forgotten or suppressed some challenges (sprinkling LANG=C TZ=UTC0 helps) related to these goals, but my hope is that this discussion of solutions will inspire you to implement these concepts for your software project too. Please share your thoughts and additional insights in a comment below. Enjoy Happy Hacking in the course of practicing this!

24 March, 2025 11:09AM by simon

March 23, 2025

parallel @ Savannah

GNU Parallel 20250322 ('Have you said thank you') released

GNU Parallel 20250322 ('Have you said thank you') has been released. It is available for download at: lbry://@GnuParallel:4

Quote of the month:

  te amo gnu parallel (“I love you, GNU Parallel”)
    -- Ayleen I. C. @ayleen_ic

New in this release:

  • When hitting a --milestone, wait until running jobs are done before starting more jobs.
  • Append 'auto' to --jobs and GNU Parallel will lower the number of jobs if jobs fail and raise it up to the given number if they succeed.
  • --unsafe now treats UTF-8 as safe and only warns.
  • Bug fixes and man page updates.


News about GNU Parallel:


GNU Parallel - For people who live life in the parallel lane.

If you like GNU Parallel, record a video testimonial: say who you are, what you use GNU Parallel for, how it helps you, and what you like most about it. Include a command that uses GNU Parallel if you feel like it.


About GNU Parallel


GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.

For example you can run this to convert all jpeg files into png and gif files and have a progress bar:

  parallel --bar convert {1} {1.}.{2} ::: *.jpg ::: png gif

Or you can generate big, medium, and small thumbnails of all jpeg files in sub dirs:

  find . -name '*.jpg' |
    parallel convert -geometry {2} {1} {1//}/thumb{2}_{1/} :::: - ::: 50 100 200

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with:

    $ (wget -O - pi.dk/3 || lynx -source pi.dk/3 || curl pi.dk/3/ || \
       fetch -o - http://pi.dk/3 ) > install.sh
    $ sha1sum install.sh | grep c555f616391c6f7c28bf938044f4ec50
    12345678 c555f616 391c6f7c 28bf9380 44f4ec50
    $ md5sum install.sh | grep 707275363428aa9e9a136b9a7296dfe4
    70727536 3428aa9e 9a136b9a 7296dfe4
    $ sha512sum install.sh | grep b24bfe249695e0236f6bc7de85828fe1f08f4259
    83320d89 f56698ec 77454856 895edc3e aa16feab 2757966e 5092ef2d 661b8b45
    b24bfe24 9695e023 6f6bc7de 85828fe1 f08f4259 6ce5480a 5e1571b2 8b722f21
    $ bash install.sh

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your command line will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2018): GNU Parallel 2018, March 2018, https://doi.org/10.5281/zenodo.1146014.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/ Identi.ca/Google+/Twitter/Facebook/Linkedin/mailing lists
  • Get the merchandise https://gnuparallel.threadless.com/designs/gnu-parallel
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference


If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)


If GNU Parallel saves you money:



About GNU SQL


GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.


About GNU Niceload


GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.

23 March, 2025 12:19PM by Ole Tange

March 22, 2025

mailutils @ Savannah

GNU mailutils version 3.19

Version 3.19 of GNU mailutils is available for download.  This is a bug-fixing release. Noteworthy changes are:

  • mail: part specifier accepted after any message designator
  • filters: revise buffer requirements when requesting more input/output
  • libproto tests: link using the libtool archives
  • Fix testsuite (mda & mail) to work with arbitrary default mailbox type

22 March, 2025 03:58PM by Sergey Poznyakoff

gdbm @ Savannah

GDBM version 1.25

GNU DBM version 1.25 is available for download.  New in this release:

New function: gdbm_open_ext


This function provides a general-purpose interface for opening and creating GDBM files. It combines the possibilities of gdbm_open and gdbm_fd_open and provides detailed control over database file locking.

New gdbmtool command: collisions


The command prints the collision chains for the current bucket, or for buckets identified by its arguments.

Pipelines in gdbmtool


The output of a gdbmtool command can be connected to the input of a shell command using the traditional pipeline syntax.

Bugfixes


  • Fix a bug in block coalescing code.
  • Other minor fixes.

22 March, 2025 02:38PM by Sergey Poznyakoff

GNU Artanis

How to make i18n properly

22 March, 2025 02:17PM

March 20, 2025

Jose E. Marchesi

Homepage for Algol 68

The Algol 68 programming language got a new homepage: https://www.algol68-lang.org.

20 March, 2025 08:00AM

March 13, 2025

GNUnet News

GNUnet 0.24.0

GNUnet 0.24.0 released

We are pleased to announce the release of GNUnet 0.24.0.
GNUnet is an alternative network stack for building secure, decentralized and privacy-preserving distributed applications. Our goal is to replace the old insecure Internet protocol stack. Starting from an application for secure publication of files, it has grown to include all kinds of basic protocol components and applications towards the creation of a GNU internet.

This is a new major release. Major versions may break protocol compatibility with the 0.23.0X versions. Please be aware that Git master is thus henceforth (and has been for a while) INCOMPATIBLE with the 0.23.0X GNUnet network, and interactions between old and new peers will result in issues. In terms of usability, users should be aware that there are still a number of known open issues, in particular with respect to ease of use, but also some critical privacy issues, especially for mobile users. Also, the nascent network is tiny and thus unlikely to provide good anonymity or extensive amounts of interesting information. As a result, the 0.24.0 release is still only suitable for early adopters with some reasonable pain tolerance.

After almost a year of testing we believe that the meson build system is stable enough that it can be used as the default build system. In order to reduce maintenance overhead, we are planning to phase out the autotools build until the next major release. Meson shows up to 10x better development build times. It also facilitates building a single libgnunet.so for future requirements of a monolithic build on other platforms such as Android.

Download links

The GPG key used to sign is: 3D11063C10F98D14BD24D1470B0998EF86F59B6A

Note that due to mirror synchronization, not all links might be functional early after the release. For direct access try http://ftp.gnu.org/gnu/gnunet/

Changes

A detailed list of changes can be found in the git log, the NEWS file, and the bug tracker. Noteworthy highlights are:

  • Build system: after almost a year of testing, the meson build system is now considered stable enough to be the default. To reduce maintenance overhead, the autotools build is planned to be phased out by the next major release.

Known Issues

  • There are known major design issues in the CORE subsystems which will need to be addressed in the future to achieve acceptable usability, performance and security.
  • There are known moderate implementation limitations in CADET that negatively impact performance.
  • There are known moderate design issues in FS that also impact usability and performance.
  • There are minor implementation limitations in SET that create unnecessary attack surface for availability.
  • The RPS subsystem remains experimental.

In addition to this list, you may also want to consult our bug tracker at bugs.gnunet.org which lists about 190 more specific issues.

Thanks

This release was the work of many people. The following people contributed code and were thus easily identified: Christian Grothoff, Florian Dold, dvn, TheJackiMonster, oec, ch3, and Martin Schanzenbach.

13 March, 2025 11:00PM

March 11, 2025

FSF News

FSF launches pre-bid phase for silent memorabilia auction

BOSTON, Massachusetts, USA (March 11, 2025) -- The Free Software Foundation (FSF) has published the memorabilia items for bidding in the silent auction on the LibrePlanet wiki. Starting March 17, the FSF will unlock items each day for bidding on the LibrePlanet wiki at 12:00 EDT until March 20. Bidding on all items will conclude at 15:00 EDT on March 21, 2025.

11 March, 2025 08:50PM

March 10, 2025

poke @ Savannah

GNU poke 4.3 released

I am happy to announce a new release of GNU poke, version 4.3.

This is a bugfix release in the 4.x series.

See the file NEWS in the distribution tarball for a list of issues
fixed in this release.

The tarball poke-4.3.tar.gz is now available at
https://ftp.gnu.org/gnu/poke/poke-4.3.tar.gz.

> GNU poke (http://www.jemarch.net/poke) is an interactive, extensible
> editor for binary data.  Not limited to editing basic entities such
> as bits and bytes, it provides a full-fledged procedural,
> interactive programming language designed to describe data
> structures and to operate on them.


Thanks to the people who contributed with code and/or documentation to this release.

Happy poking!

Mohammad-Reza Nabipoor

10 March, 2025 11:05PM by Mohammad-Reza Nabipoor

March 02, 2025

www @ Savannah

Richard Stallman Interviewed in Bolzano, Italy

Richard Stallman was interviewed during his visit to the University of Bozen-Bolzano, Italy, in February. Clear questions with short, simply worded answers suitable for students and newcomers to the free software world.

02 March, 2025 08:29PM by Dora Scilipoti

February 26, 2025

Trisquel GNU/Linux

February 25, 2025

gettext @ Savannah

GNU gettext 0.24 released

Download from https://ftp.gnu.org/pub/gnu/gettext/gettext-0.24.tar.gz

New in this release:

  • Programming languages support:
    • JavaScript:
      • xgettext now parses recursive JSX expressions correctly.
    • Rust:
      • xgettext now supports Rust.
      • 'msgfmt -c' now verifies the syntax of translations of Rust format strings.
      • A new example 'hello-rust' has been added.
    • C:
      • A new example 'hello-c-http' has been added, showing the use of GNU gettext in a multithreaded web server.
    • C++:
      • A new example 'hello-c++-gnome3' has been added.
    • Ruby:
      • A new example 'hello-ruby' has been added.


  • Improvements for maintainers:
    • When xgettext creates the POT file of a package under Git version control, the 'POT-Creation-Date' in the POT file usually no longer changes gratuitously each time the POT file is regenerated.


  • Caveat maintainers:
    • Building the po/ directory now requires GNU make on specific platforms: macOS, Solaris, AIX.

25 February, 2025 02:05PM by Bruno Haible

February 22, 2025

parallel @ Savannah

GNU Parallel 20250222 ('Grete Tange') released [stable]

GNU Parallel 20250222 ('Grete Tange') has been released. It is available for download at: lbry://@GnuParallel:4

Quote of the month:

  Use GNU Parallel and thank me later
    -- pratikbin | NodeOps @pratikbin
 
New in this release:

  • No new features. This is a candidate for a stable release.
  • Bug fixes and man page updates.


22 February, 2025 05:28PM by Ole Tange

February 21, 2025

remotecontrol @ Savannah