Planet GNU

Aggregation of development blogs from the GNU Project

November 28, 2023

FSF News

Worldwide community of activists protest OverDrive and others forcing DRM upon libraries

BOSTON, Massachusetts, USA -- Tuesday, November 28, 2023 -- The Free Software Foundation (FSF) has announced its Defective by Design campaign's 17th annual International Day Against DRM (IDAD). It will protest Digital Restrictions Management (DRM) technology's hold over public libraries around the world, exemplified by corporations like OverDrive and Follett Destiny. IDAD will take place digitally and worldwide on December 8, 2023.

28 November, 2023 09:22PM

FSF Events

Free Software Directory meeting on IRC: Friday, December 01, starting at 12:00 EST (17:00 UTC)

Join the FSF and friends on Friday, December 01, from 12:00 to 15:00 EST (17:00 to 20:00 UTC) to help improve the Free Software Directory.

28 November, 2023 07:10PM

Greg Casamento

Objective-C end of life?? Not a chance...

Recently, I saw this article from JetBrains regarding ObjC's "end of life".

The TIOBE index seems to disagree. It's also important to remember that JetBrains recently had to take down their AppCode application (which sucked) since it didn't sell. JetBrains is the creator of the Kotlin language, so they have a vested interest in their Android customers. I would take their "index" with a grain of salt, to say the least.

While it is certain that Apple won't be investing in anything beyond ObjC 2.0, it is foolhardy to think that ObjC is going away anytime soon, since there is an enormous installed base of stable code, not the least of which is Foundation and AppKit themselves. Also consider CocoaPods.

So, no, I'm not worried about it. Also… look at Java and COBOL. For years people have declared the end of both languages. Java is still popular, though not in vogue, and COBOL, while not one of the "cool kids," has literally billions of lines of code being maintained and new code being written every year. This article (admittedly biased, as it is by the CTO of Micro Focus) gives some reasons why…

Here is the article about COBOL...

Plus… Apple already has a mechanism for automatically allowing Objective-C and Swift to work together. Take a look at the frameworks in Xcode and you'll notice some files called *.apinotes. These are YAML files used by the compiler to allow easy integration into Swift projects. So, essentially, if Apple writes an ObjC version of a framework, they get the Swift version for absolutely free (minus the cost of writing the YAML file). If they write a Swift-only version, they don't get that benefit.
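To give a feel for what such a file looks like, here is a tiny hypothetical sketch — every name in it (framework, class, selector) is made up for illustration; the real files ship inside Apple's framework bundles:

```yaml
# MyKit.apinotes (hypothetical): tells the compiler how the ObjC API
# should surface in Swift.
Name: MyKit
Classes:
- Name: MKWidget
  SwiftName: Widget
  Methods:
  - Selector: "setTitle:"
    MethodKind: Instance
    SwiftName: "setTitle(_:)"
```

The compiler reads this alongside the ObjC headers, so the Swift-facing names come essentially for free.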

So, yeah, in conclusion… no, ObjC is NOT on the rise, but reports of its demise have been greatly exaggerated! ;)

PS. That being said, Apple dumping ObjC might spell a boom for us, as all of the people with established ObjC codebases would suddenly need support for it, either on macOS (on which we don't currently work) or on other platforms. Something to think about…

PPS. All of the above being said, I admit I wouldn't be terribly shocked to hear from Apple that "we have dropped support for the legacy ObjC language to provide you with the best support for our new Swift language to make it the 'greatest developer experience in the world'" or some grotesque BS like that. Lol

GC

28 November, 2023 04:41AM by Unknown (noreply@blogger.com)

November 27, 2023

FSF Events

FSF Free Software Community Meetup on December 15, 2023

We are inviting you to the first ever FSF Free Software Community Meetup on Friday, December 15, 2023, from 18:45 to 21:00 (6:45 PM to 9:00 PM) EST.

27 November, 2023 07:30PM

November 24, 2023

GNU Guix

Write package definitions in a breeze

More than 28,000 packages are available in Guix today, not counting third-party channels. That’s a lot—the 5th largest GNU/Linux distro! But it’s nothing if the one package you care about is missing. So even you, dear reader, may one day find yourself defining a package for your beloved deployment tool. This post introduces a new tool poised to significantly lower the barrier to writing new packages.

Introducing Guix Packager

Defining packages for Guix is not all that hard but, as always, it's much harder the first time you do it, especially when starting from a blank page and/or when you're not familiar with the programming environment of Guix. Guix Packager is a new web user interface to get you started—try it! It arrived just in time as an aid for the packaging tutorial given last week at the Workshop on Reproducible Software Environments.

Screenshot showing the Guix Packager interface.

The interface aims to be intuitive: fill in forms on the left and it produces a correct, ready-to-use package definition on the right. Importantly, it helps you avoid pitfalls that trip up many newcomers:

  • When you add a dependency in one of the “Inputs” fields, it adds the right variable name in the generated code and imports the right package module.
  • Likewise, you can choose a license and be sure the license field will refer to the right variable representing that license.
  • You can turn tests on and off, and add configure flags. These translate to a valid arguments field of your package, letting you discover the likes of keyword arguments and G-expressions without having to first dive into the manual.

Pretty cool, no?
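For reference, here is the general shape of the definition such a workflow produces for a simple GNU-build-system package. This is a sketch, not the tool's literal output: the module name is illustrative and the hash is a placeholder to be replaced with the value printed by guix download.

```scheme
(define-module (my-channel packages hello)   ; illustrative module name
  #:use-module (guix packages)
  #:use-module (guix download)
  #:use-module (guix build-system gnu)
  #:use-module ((guix licenses) #:prefix license:))

(define-public hello
  (package
    (name "hello")
    (version "2.12.1")
    (source
     (origin
       (method url-fetch)
       (uri (string-append "mirror://gnu/hello/hello-" version ".tar.gz"))
       ;; Placeholder: substitute the hash printed by 'guix download'.
       (sha256
        (base32 "0000000000000000000000000000000000000000000000000000"))))
    (build-system gnu-build-system)
    (synopsis "Hello, GNU world: an example GNU package")
    (description "GNU Hello prints a friendly greeting.")
    (home-page "https://www.gnu.org/software/hello/")
    (license license:gpl3+)))
```

Note how the license and the module imports must agree—exactly the sort of bookkeeping the web interface does for you.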

Implementation

All the credit for this tool goes to co-worker and intrepid hacker Philippe Virouleau. A unique combination of paren aversion and web development superpowers—unique in the Guix community—led Philippe to develop the whole thing in the blink of an eye (says Ludovic!).

The goal was to provide a single view for editing a package recipe, so the application is a single-page application (SPA) written using the UI library Philippe is most comfortable with: React, with Material UI for styling the components. It's built with TypeScript, and the library part actually defines all the types needed to manipulate Guix packages and their components (such as build systems or package sources). One of the more challenging parts was providing fast and helpful “search as you type” results over the 28k+ packages. It required a combination of Material UI's virtualized inputs and caching the package data in the browser's local storage when possible (the packaging metadata itself is fetched from https://guix.gnu.org/packages.json, a generic representation of the current package set).

While the feature set provides a great starting point, there are still a few things that may be worth implementing. For instance, only the GNU and CMake build systems are supported so far; it would make sense to include a few others (Python-related ones might be good candidates).

Running a local (development) version of the application can be done on top of Guix, since—obviously—it's been developed with the Node version packaged in Guix, using the quite standard package.json for JavaScript dependencies installed through npm. Contributions welcome!

Lowering the barrier to entry

This neat tool complements a set of steps we’ve taken over time to make packaging in Guix approachable. Indeed, while package definitions are actually code written in the Scheme language, the package “language” was designed from the get-go to be fully declarative—think JSON with parens instead of curly braces and semicolons. More recently we simplified the way package inputs are specified with an eye on making package definitions less intimidating.

The guix import command also exists to simplify packaging: it can generate a package definition for anything available in other package repositories such as PyPI, CRAN, Crates.io, and so forth. If your preference goes to curly braces rather than parens, it can also convert a JSON package description to Scheme code. Once you have your first .scm file, guix build prints hints for common errors such as missing module imports (those #:use-module stanzas). We also put effort into providing reference documentation, a video tutorial, and a tutorial for more complex packages.

Do share your experience with us and until then, happy packaging!

Acknowledgments

Thanks to Felix Lechner and Timothy Sample for providing feedback on an earlier draft of this post.

About GNU Guix

GNU Guix is a transactional package manager and an advanced distribution of the GNU system that respects user freedom. Guix can be used on top of any system running the Hurd or the Linux kernel, or it can be used as a standalone operating system distribution for i686, x86_64, ARMv7, AArch64 and POWER9 machines.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. When used as a standalone GNU/Linux distribution, Guix offers a declarative, stateless approach to operating system configuration management. Guix is highly customizable and hackable through Guile programming interfaces and extensions to the Scheme language.

24 November, 2023 02:30PM by Ludovic Courtès, Philippe Virouleau

Andy Wingo

tree-shaking, the horticulturally misguided algorithm

Let's talk about tree-shaking!

looking up from the trough

But first, I need to talk about WebAssembly's dirty secret: despite the hype, WebAssembly has had limited success on the web.

There is Photoshop, which does appear to be a real success. 5 years ago there was Figma, though they don't talk much about Wasm these days. There are quite a number of little NPM libraries that use Wasm under the hood, usually compiled from C++ or Rust. I think Blazor probably gets used for a few in-house corporate apps, though I could be fooled by their marketing.

You might recall the hyped demos of 3D first-person-shooter games with the Unreal Engine, again from 5 years ago, but that was the previous major release of Unreal and was always experimental; the current Unreal 5 does not support targeting WebAssembly.

Don't get me wrong, I think WebAssembly is great. It is having fine success in off-the-web environments, and I think it is going to be a key and growing part of the Web platform. I suspect, though, that we are only just now getting past the trough of disillusionment.

It's worth reflecting a bit on the nature of web Wasm's successes and failures. Taking Photoshop as an example, I think we can say that Wasm does very well at bringing large C++ programs to the web. I know that it took quite some work, but I understand the end result to be essentially the same source code, just compiled for a different target.

Similarly for the JavaScript module case, Wasm finds success in getting legacy C++ code to the web, and as a way to write new web-targeting Rust code. These are often tasks that JavaScript doesn't do very well at, or which need a shared implementation between client and server deployments.

On the other hand, WebAssembly has not been a Web success for DOM-heavy apps. Nobody is talking about rewriting the front-end of wordpress.com in Wasm, for example. Why is that? It may sound like a silly question to you: Wasm just isn't good at that stuff. But why? If you dig down a bit, I think it's that the programming models are just too different: the Web's primary programming model is JavaScript, a language with dynamic typing and managed memory, whereas WebAssembly 1.0 was about static typing and linear memory. Getting to the DOM from Wasm was a hassle that was overcome only by the most ardent of the true Wasm faithful.

Relatedly, Wasm has also not really been a success for languages that aren't, like, C or Rust. I am guessing that wordpress.com isn't written mostly in C++. One of the sticking points for this class of languages is that C#, for example, will want to ship with a garbage collector, and it is annoying to have to do this. Check my article from March this year for more details.

Happily, this restriction is going away, as all browsers are going to ship support for reference types and garbage collection within the next months; Chrome and Firefox already ship Wasm GC, and Safari shouldn't be far behind thanks to the efforts from my colleague Asumu Takikawa. This is an extraordinarily exciting development that I think will kick off a whole 'nother Gartner hype cycle, as more languages start to update their toolchains to support WebAssembly.

if you don't like my peaches

Which brings us to the meat of today's note: web Wasm will win where compilers create compact code. If your language's compiler toolchain can manage to produce useful Wasm in a file that is less than a handful of over-the-wire kilobytes, you can win. If your compiler can't do that yet, you will have to instead rely on hype and captured audiences for adoption, which at best results in an unstable equilibrium until you figure out what's next.

In the JavaScript world, managing bloat and deliverable size is a huge industry. Bundlers like esbuild are a ubiquitous part of the toolchain, compiling down a set of JS modules to a single file that should include only those functions and data types that are used in a program, and additionally applying domain-specific size-squishing strategies such as minification (making monikers more minuscule).

Let's focus on tree-shaking. The visual metaphor is that you write a bunch of code, and you only need some of it for any given page. So you imagine a tree whose, um, branches are the modules that you use, and whose leaves are the individual definitions in the modules, and you then violently shake the tree, probably killing it and also annoying any nesting birds. The only thing that's left still attached is what is actually needed.

This isn't how trees work: holding the trunk doesn't give you information as to which branches are somehow necessary for the tree's mission. It also primes your mind to look for the wrong fixed point, removing unneeded code instead of keeping only the necessary code.

But, tree-shaking is an evocative name, and so despite its horticultural and algorithmic inaccuracies, we will stick to it.

The thing is that maximal tree-shaking for languages with a thicker run-time has not been a huge priority. Consider Go: according to the golang wiki, the most trivial program compiled to WebAssembly from Go is 2 megabytes, and adding imports can make this go to 10 megabytes or more. Or look at Pyodide, the Python WebAssembly port: the REPL example downloads about 20 megabytes of data. These are fine sizes for technology demos or, in the limit, very rich applications, but they aren't winners for web development.

shake a different tree

To be fair, the built-in Wasm support for Go and the Pyodide port of Python both derive from the upstream toolchains, where producing small binaries is nice but not necessary: on a server, who cares how big the app is? And indeed when targeting smaller devices, we tend to see alternate implementations of the toolchain, for example MicroPython or TinyGo. TinyGo has a Wasm back-end that can apparently go down to less than a kilobyte, even!

These alternate toolchains often come with some restrictions or peculiarities, and although we can consider this to be an evil of sorts, it is to be expected that the target platform exhibits some co-design feedback on the language. In particular, running in the sea of the DOM is sufficiently weird that a Wasm-targeting Python program will necessarily be different than a "native" Python program. Still, I think as toolchain authors we aim to provide the same language, albeit possibly with a different implementation of the standard library. I am sure that the ClojureScript developers would prefer to remove their page documenting the differences with Clojure if they could, and perhaps if Wasm becomes a viable target for ClojureScript, they will.

on the algorithm

To recap: now that it supports GC, Wasm could be a winner for web development in Python and other languages. You would need a different toolchain and an effective tree-shaking algorithm, so that user experience does not degrade. So let's talk about tree shaking!

I work on the Hoot Scheme compiler, which targets Wasm with GC. We manage to get down to 70 kB or so right now, in the minimal "main" compilation unit, and are aiming for lower; auxiliary compilation units that import run-time facilities (the current exception handler and so on) from the main module can be sub-kilobyte. Getting here has been tricky though, and I think it would be even trickier for Python.

Some background: like Whiffle, the Hoot compiler prepends a prelude onto user code. Tree-shaking happens in a number of places.

Generally speaking, procedure definitions (functions / closures) are the easy part: you just include only those functions that are referenced by the code. In a language like Scheme, this gets you a long way.
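That reachability pass can be sketched in a few lines. This is a toy stand-in, not Hoot's actual code: the "call graph" here is handed to us as a dictionary, whereas a compiler would compute it from the program text.

```python
def shake(defs, entry_points):
    """Keep only the definitions reachable from the entry points.
    `defs` maps each definition name to the names it references."""
    live = set()
    worklist = list(entry_points)
    while worklist:
        name = worklist.pop()
        if name in live:
            continue
        live.add(name)
        worklist.extend(defs.get(name, ()))
    return live

# Toy program: main uses display, which uses write-string; the
# bitvector printer is defined but never referenced from main.
defs = {
    "main": ["display"],
    "display": ["write-string"],
    "write-string": [],
    "write-bitvector": ["write-string"],
}
print(sorted(shake(defs, ["main"])))
# → ['display', 'main', 'write-string']
```

The unreferenced write-bitvector falls away for free—this is the part that works well for procedures.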

However there are three immediate challenges. One is that the evaluation model for the definitions in the prelude is letrec*: the scope is recursive but ordered. Binding values can call or refer to previously defined values, or capture values defined later. If evaluating the value of a binding requires referring to a value only defined later, then that's an error. Again, for procedures this is trivially OK, but as soon as you have non-procedure definitions, sometimes the compiler won't be able to prove this nice "only refers to earlier bindings" property. In that case the fixing letrec (reloaded) algorithm will end up residualizing bindings that are set!, which then require a delicate dead-code-elimination (DCE) pass to remove.

Worse, some of those non-procedure definitions are record types, which have vtables that define how to print a record, how to check if a value is an instance of this record, and so on. These vtable callbacks can end up keeping a lot more code alive even if they are never used. We'll get back to this later.

Similarly, say you print a string via display. Well now not only are you bringing in the whole buffered I/O facility, but you are also calling a highly polymorphic function: display can print anything. There's a case for bitvectors, so you pull in code for bitvectors. There's a case for pairs, so you pull in that code too. And so on.

One solution is to instead call write-string, which only writes strings and not general data. You'll still get the generic buffered I/O facility (ports), though, even if your program only uses one kind of port.

This brings me to my next point, which is that optimal tree-shaking is a flow analysis problem. Consider display: if we know that a program will never have bitvectors, then any code in display that works on bitvectors is dead and we can fold the branches that guard it. But to know this, we have to know what kind of arguments display is called with, and for that we need higher-level flow analysis.
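Concretely, the folding step looks something like this—an illustrative sketch, with made-up type and helper names, of what a compiler could do once flow analysis has delivered the set of types that can actually reach display:

```python
# Dispatch table of a polymorphic printer: each branch guards a type
# and pulls in a helper (names are illustrative, not Hoot's).
DISPLAY_BRANCHES = {
    "string": "write-string",
    "pair": "write-pair",
    "bitvector": "write-bitvector",
}

def fold_branches(branches, inferred_types):
    """Keep only the dispatch branches whose guard can succeed,
    given the types flow analysis says can reach this call."""
    return {t: helper for t, helper in branches.items()
            if t in inferred_types}

# If no bitvector ever flows to display, its branch (and the whole
# bitvector printer behind it) is dead and can be shaken away:
print(fold_branches(DISPLAY_BRANCHES, {"string", "pair"}))
```

Dropping a branch this way is what lets the reachability pass then discard the helper it referenced.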

The problem is exacerbated for Python in a few ways. One is that object-oriented dispatch is higher-order programming: how do you know what foo.bar actually means? It depends on foo, which means you have to thread around representations of what foo might be everywhere, and to everywhere's caller, and everywhere's caller's caller, and so on.

Secondly, lookup in Python is generally more dynamic than in Scheme: you have __getattr__ methods (is that it?; been a while since I've done Python) everywhere and users might indeed use them. Maybe this is not so bad in practice and flow analysis can exclude this kind of dynamic lookup.

Finally, and perhaps relatedly, the object of tree-shaking in Python is a mess of modules, rather than a big term with lexical bindings. This is like JavaScript, but without the established ecosystem of tree-shaking bundlers; Python has its work cut out for some years to go.

in short

With GC, Wasm makes it thinkable to do DOM programming in languages other than JavaScript. It will only be feasible for mass use, though, if the resulting Wasm modules are small, and that means significant investment on each language's toolchain. Often this will take the form of alternate toolchains that incorporate experimental tree-shaking algorithms, and whose alternate standard libraries facilitate the tree-shaker.

Welp, I'm off to lunch. Happy wassembling, comrades!

24 November, 2023 11:41AM by Andy Wingo

November 23, 2023

parallel @ Savannah

GNU Parallel 20231122 ('Grindavík') released

GNU Parallel 20231122 ('Grindavík') has been released. It is available for download at: lbry://@GnuParallel:4

Quote of the month:

  Got around to using GNU parallel for the first time from a suggestion by @jdwasmuth ... now I'm wishing I started using this years ago
    -- Stefan Gavriliuc @GavriliucStefan@twitter

New in this release:

  • -a file1 -a +file2 will link file2 to file1 similar to ::::+
  • --bar shows total time when all jobs are done.
  • Bug fixes and man page updates.


News about GNU Parallel:


GNU Parallel - For people who live life in the parallel lane.

If you like GNU Parallel record a video testimonial: Say who you are, what you use GNU Parallel for, how it helps you, and what you like most about it. Include a command that uses GNU Parallel if you feel like it.


About GNU Parallel


GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.

For example you can run this to convert all jpeg files into png and gif files and have a progress bar:

  parallel --bar convert {1} {1.}.{2} ::: *.jpg ::: png gif

Or you can generate big, medium, and small thumbnails of all jpeg files in sub dirs:

  find . -name '*.jpg' |
    parallel convert -geometry {2} {1} {1//}/thumb{2}_{1/} :::: - ::: 50 100 200

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with:

    $ (wget -O - pi.dk/3 || lynx -source pi.dk/3 || curl pi.dk/3/ || \
       fetch -o - http://pi.dk/3 ) > install.sh
    $ sha1sum install.sh | grep 883c667e01eed62f975ad28b6d50e22a
    12345678 883c667e 01eed62f 975ad28b 6d50e22a
    $ md5sum install.sh | grep cc21b4c943fd03e93ae1ae49e28573c0
    cc21b4c9 43fd03e9 3ae1ae49 e28573c0
    $ sha512sum install.sh | grep ec113b49a54e705f86d51e784ebced224fdff3f52
    79945d9d 250b42a4 2067bb00 99da012e c113b49a 54e705f8 6d51e784 ebced224
    fdff3f52 ca588d64 e75f6033 61bd543f d631f592 2f87ceb2 ab034149 6df84a35
    $ bash install.sh

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your command line will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2018): GNU Parallel 2018, March 2018, https://doi.org/10.5281/zenodo.1146014.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/ Identi.ca/Google+/Twitter/Facebook/Linkedin/mailing lists
  • Get the merchandise https://gnuparallel.threadless.com/designs/gnu-parallel
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference


If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)


If GNU Parallel saves you money:



About GNU SQL


GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.


About GNU Niceload


GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.

23 November, 2023 10:50PM by Ole Tange

November 22, 2023

gnuastro @ Savannah

Gnuastro development job at CEFCA/Spain for ESA's ARRAKIHS mission

A (scientific) software developer position has just opened up at CEFCA for the development of Gnuastro for the data reduction pipeline of the European Space Agency's (ESA) newly approved ARRAKIHS mission (to be launched in 2030), as well as for other data from our Astronomical Observatory of Javalambre (OAJ):

https://www.cefca.es/cefca_en/reference_0119
(For 2 years, deadline: January 15th 2024)

ARRAKIHS is expected to last until ~2035 and we will be applying for future grants to keep the core pipeline team until the end of the project.

The job will be based in Teruel, Spain, a beautiful city (recognized as a UNESCO World Heritage site for its "Mudejar" architecture). Teruel is just 1.5 hours from Valencia by car, and with a population of 35,000 people, everything is nicely within reach: you will not waste hours every day in traffic or long commutes as in large cities! Our observatory (OAJ) is also just 1.5 hours away by car (we have one of the darkest skies with the fewest cloudy nights in continental Europe)!

As the ARRAKIHS pipeline engineer, the successful applicant will also be visiting other ARRAKIHS consortium members: IFCA/Santander, ESAC/Madrid; UCM/Madrid, IAA/Granada, EPFL/Switzerland, Univ. Lund/Sweden, Univ. Innsbruck/Austria.

The job will involve major developments in Gnuastro: adding missing features, and improving existing ones, for low-surface-brightness-optimized reduction pipelines and the high-level science derived from them (Gnuastro's MakeCatalog, for example).

Once tested in the ARRAKIHS/OAJ pipelines, all those features will be brought into the core of Gnuastro for everyone to use in any pipeline! This is thus a major development in Gnuastro's history!

Anyone with a B.Sc. degree or higher can apply! So please share this announcement with anyone you think may be interested. People with an M.Sc. or PhD are also welcome to apply; it is a "scientific"/research software engineer position, after all, and we expect to publish many papers on the algorithms/tools that we develop.

If you can't wait to get your hands dirty, and want to improve your profile for the application, there is a nice checklist in our Google Summer of Code guidelines to help you get started and fix a bug or two until the deadline to include in the application (you have almost two months from now):

https://savannah.gnu.org/support/?110827#comment0

Please don't hesitate to ask the contact person in the main announcement above any questions; we'd be happy to help clarify any doubts.

22 November, 2023 11:23PM by Mohammad Akhlaghi

FSF Blogs

FSF Giving Guide: Tech changes, freedom doesn't

This year's FSF Giving Guide is here: Make freedom your gift!

22 November, 2023 09:35PM

November 20, 2023

GNUnet News

RFC 9498: The GNU Name System

We are happy to announce that our GNU Name System (GNS) specification is now published as RFC 9498.

GNS addresses long-standing security and privacy issues in the ubiquitous Domain Name System (DNS). Previous attempts to secure DNS (DNSSEC) fail to address critical security issues such as end-to-end security, query privacy, censorship, and centralization of root zone governance. After 40 years of patching, it is time for a new beginning.

The GNU Name System is our contribution towards a decentralized and censorship-resistant domain name resolution system that provides a privacy-enhancing alternative to the Domain Name System (DNS).

As part of our work on RFC 9498, we have also contributed to the specification of the .alt top-level domain to be used by alternative name resolution systems and have established the GANA registry for ".alt".

GNS is implemented according to RFC 9498 in GNUnet 0.20.0. It is also implemented as part of GNUnet-Go.

We thank all reviewers for their comments. In particular, we thank D. J. Bernstein, S. Bortzmeyer, A. Farrel, E. Lear, and R. Salz for their insightful and detailed technical reviews. We thank J. Yao and J. Klensin for the internationalization reviews. We thank Dr. J. Appelbaum for suggesting the name "GNU Name System" and Dr. Richard Stallman for approving its use. We thank T. Lange and M. Wachs for their earlier contributions to the design and implementation of GNS. We thank NLnet and NGI DISCOVERY for funding work on the GNU Name System.

The work does not stop here: we encourage further implementations of RFC 9498 so that we can learn more, both in terms of technical documentation and actual deployment experience. Further, we are currently working on the specification of the R5N DHT and BFT Set Reconciliation, which are underlying building blocks of GNS in GNUnet and not covered by RFC 9498.

20 November, 2023 11:00PM

November 19, 2023

gettext @ Savannah

GNU gettext 0.22.4 released

Download from https://ftp.gnu.org/pub/gnu/gettext/gettext-0.22.4.tar.gz

This is a bug-fix release.

New in this release:

  • Bug fixes:
    • AM_GNU_GETTEXT now recognizes a statically built libintl on macOS and AIX.
    • Build fixes on AIX.

19 November, 2023 09:32PM by Bruno Haible

November 16, 2023

Andy Wingo

a whiff of whiffle

A couple nights ago I wrote about a superfluous Scheme implementation and promised to move on from sheepishly justifying my egregious behavior in my next note, and finally mention some results from this experiment. Well, no: I am back on my bullshit. Tonight I write about a couple of implementation details that discerning readers may find of interest: value representation, the tail call issue, and the standard library.

what is a value?

As a Lisp, Scheme is one of the early "dynamically typed" languages. These days when you say "type", people immediately think propositions as types, mechanized proof of program properties, and so on. But "type" has another denotation which is all about values and almost not at all about terms: one might say that vector-ref has a type, but it's not part of a proof; it's just that if you try to vector-ref a pair instead of a vector, you get a run-time error. You can imagine values as being associated with type tags: annotations that can be inspected at run-time for, for example, the sort of error that vector-ref will throw if you call it on a pair.

Scheme systems usually have a finite set of type tags: there are fixnums, booleans, strings, pairs, symbols, and such, and they all have their own tag. Even a Scheme system that provides facilities for defining new disjoint types (define-record-type et al) will implement these via a secondary type tag layer: for example, all record instances have the same primary tag, and you have to retrieve their record type descriptor to discriminate instances of different record types.

Anyway. In Whiffle there are immediate types and heap types. All values have a low-bit tag which is zero for heap objects and nonzero for immediates. For heap objects, the first word of the heap object has tagging in the low byte as well. The 3-bit heap tag for pairs is chosen so that pairs can just be two words, with no header word. There is another 3-bit heap tag for forwarded objects, which is used by the GC when evacuating a value. Other objects put their heap tags in the low 8 bits of the first word. Additionally there is a "busy" tag word value, used to prevent races when evacuating from multiple threads.

Finally, for generational collection of objects that can be "large" -- the definition of large depends on the collector implementation, and is not nicely documented, but is more than, like, 256 bytes -- anyway these objects might need to have space for a "remembered" bit in the objects themselves. This is not the case for pairs but is the case for, say, vectors: even though they are prolly smol, they might not be, and they need space for a remembered bit in the header.
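The tagging scheme above can be sketched in C. This is a hedged illustration under stated assumptions, not Whiffle's actual code: the concrete tag values and names here are made up, but the shape matches the description -- immediates carry a nonzero low-bit tag in the value itself, while heap objects are aligned pointers whose low bit is zero and whose first word holds a heap tag in its low bits.

```c
#include <assert.h>
#include <stdint.h>

typedef uintptr_t value_t;

/* Immediates: shift the payload left and set the low bit. */
static inline value_t make_fixnum(intptr_t n) {
  return ((uintptr_t)n << 1) | 1u;
}
static inline int is_fixnum(value_t v) { return (v & 1) != 0; }
static inline intptr_t fixnum_value(value_t v) { return (intptr_t)v >> 1; }

/* Heap objects: the low bit of the value is zero (aligned pointer);
   the first word of the object carries the heap tag.  Pairs get a
   3-bit tag so they can be just two words, with no header word.
   These tag constants are illustrative, not Whiffle's. */
enum heap_tag { TAG_PAIR = 1, TAG_FORWARDED = 2 /* set by the GC */ };

static inline int is_heap_object(value_t v) { return (v & 1) == 0; }
```

A pair allocated under this scheme really is just two words; anything needing the 8-bit tag (or a remembered bit) pays for a header word instead.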

tail calls

When I started Whiffle, I thought, let's just compile each Scheme function to a C function. Since all functions have the same type, clang and gcc will have no problem turning any tail call into a proper tail call.

This intuition was right and wrong: at optimization level -O2, this works great. We don't even do any kind of loop recognition / contification: loop iterations are tail calls and all is fine. (Not the most optimal implementation technique, but the assumption is that for our test cases, GC costs will dominate.)

However, when something goes wrong, I will need to debug the program to see what's up, and so you might think to compile at -O0 or -Og. In that case, somehow gcc does not compile to tail calls. One time while debugging a program I was flummoxed at a segfault during the call instruction; turns out it was just stack overflow, and the call was trying to write the return address into an unmapped page. For clang, I could use the musttail attribute; perhaps I should, to allow myself to debug properly.

Not being able to debug at -O0 with gcc is annoying. I feel like if GNU were an actual thing, we would have had the equivalent of a musttail attribute 20 years ago already. But it's not, and we still don't.
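The strategy described above can be seen in miniature with a mutually recursive pair of functions sharing one signature; this even/odd pair is a stand-in example, not actual Whiffle output. At -O2, gcc and clang turn each `return f(...)` into a jump; at -O0 they become real calls, so a deep enough chain overflows the stack, which is exactly the debugging hazard described. On clang, `__attribute__((musttail))` on the return statement would force the jump at any optimization level.

```c
#include <assert.h>

static int is_even(long n);

static int is_odd(long n) {
  if (n == 0) return 0;
  return is_even(n - 1);  /* tail call: compiled to a jump at -O2 */
}

static int is_even(long n) {
  if (n == 0) return 1;
  return is_odd(n - 1);   /* tail call: compiled to a jump at -O2 */
}
```

Compiled at -O2, `is_even(1000000000)` runs in constant stack space; compiled at -O0 with gcc, it is a billion-deep call chain and a segfault waiting at the `call` instruction.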

stdlib

So Whiffle makes C, and that C uses some primitives defined as inline functions. Whiffle actually lexically embeds user Scheme code with a prelude, having exposed a set of primitives to that prelude and to user code. The assumption is that the compiler will open-code all primitives, so that the conceit of providing a primitive from the Guile compilation host to the Whiffle guest magically works out, and that any reference to a free variable is an error. This works well enough, and it's similar to what we currently do in Hoot as well.

This is a quick and dirty strategy but it does let us grow the language to something worth using. I think I'll come back to this local maximum later if I manage to write about what Hoot does with modules.

coda

So, that's Whiffle: the Guile compiler front-end for Scheme, applied to an expression that prepends a user's program with a prelude, in a lexical context of a limited set of primitives, compiling to very simple C, in which tail calls are just return f(...), relying on the C compiler to inline and optimize and all that.

Perhaps next up: some results on using Whiffle to test Whippet. Until then, good night!

16 November, 2023 09:11PM by Andy Wingo

FSF Blogs

November 15, 2023

Now's your chance to submit your session and nominations

LibrePlanet Call for Sessions ends Friday, November 17, and the deadline for Free Software Awards nominations is Tuesday, November 21.

15 November, 2023 05:00PM

November 14, 2023

Protecting free software against confusing additional restrictions

While we are pleased when people use GNU licenses to distribute and license software, we condemn the use of unauthorized, confusing derivatives of the licenses. In this article, we explain how users are protected against restrictive terms introduced by people using GNU licenses' terms in drafting their own, new licenses.

14 November, 2023 10:49PM

Andy Wingo

whiffle, a purpose-built scheme

Yesterday I promised an apology but didn't actually get past the admission of guilt. Today the defendant takes the stand, in the hope that an awkward cross-examination will persuade the jury to take pity on a poor misguided soul.

Which is to say, let's talk about Whiffle: what it actually is, what it is doing for me, and why on earth it is that [I tell myself that] writing a new programming language implementation is somehow preferable to re-using an existing one.

garbage collection is my passion

Whiffle is purpose-built to test the Whippet garbage collection library.

Whiffle lets me create Whippet test cases in C, without actually writing C. C is fine and all, but the problem with it and garbage collection is that you have to track all stack roots manually, and this is an error-prone process. Generating C means that I can more easily ensure that each stack root is visitable by the GC, which lets me make test cases with more confidence; if there is a bug, it is probably not because of an untraced root.

Also, Whippet is mostly meant for programming language runtimes, not for direct use by application authors. In this use-case, probably you can use less "active" mechanisms for ensuring root traceability: instead of eagerly recording live values in some kind of handlescope, you can keep a side table that is only consulted as needed during garbage collection pauses. In particular since Scheme uses the stack as a data structure, I was worried that using handle scopes would somehow distort the performance characteristics of the benchmarks.

Whiffle is not, however, a high-performance Scheme compiler. It is not for number-crunching, for example: garbage collectors don't care about that, so let's not. Also, Whiffle should not go to any effort to remove allocations (sroa / gvn / cse); creating nodes in the heap is the purpose of the test case, and eliding them via compiler heroics doesn't help us test the GC.

I settled on a baseline-style compiler, in which I re-use the Scheme front-end from Guile to expand macros and create an abstract syntax tree. I do run some optimizations on that AST; in the spirit of the macro writer's bill of rights, it does make sense to provide some basic reductions. (These reductions can be surprising, but I am used to Guile's flavor of cp0 (peval), and this project is mostly for me, so I thought it was risk-free; I was almost right!).

Anyway the result is that Whiffle uses an explicit stack. A safepoint for a thread simply records its stack pointer: everything between the stack base and the stack pointer is live. I do have a lingering doubt about the representativity of this compilation strategy; would a conclusion drawn from Whippet apply to Guile, which uses a different stack allocation strategy? I think probably so but it's an unknown.
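The explicit-stack approach can be sketched as follows. This is a hedged sketch with illustrative names, not Whiffle's actual code: the compiled program pushes every live value onto its own value stack, so a safepoint only needs to record the stack pointer, and the collector can visit everything between the base and the pointer as a root, with no handle scopes required.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef uintptr_t value_t;

struct thread {
  value_t *stack_base;
  value_t *sp;          /* everything in [stack_base, sp) is live */
};

static inline void push(struct thread *t, value_t v) { *t->sp++ = v; }
static inline value_t pop(struct thread *t) { return *--t->sp; }

/* At a safepoint, the collector traces every slot between the stack
   base and the recorded stack pointer; returns how many roots it saw. */
static inline size_t trace_roots(struct thread *t,
                                 void (*visit)(value_t *root, void *data),
                                 void *data) {
  size_t n = 0;
  for (value_t *slot = t->stack_base; slot < t->sp; slot++, n++)
    visit(slot, data);
  return n;
}

/* A trivial visitor that just counts the roots it is shown. */
static inline void count_visit(value_t *root, void *data) {
  (void)root;
  (*(size_t *)data)++;
}
```

Because the generated code pushes values eagerly, an untraced root is structurally impossible: if a value is live, it is on the stack, and the scan between base and sp finds it.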

what's not to love

Whiffle also has a number of design goals that are better formulated in the negative. I mentioned compiler heroics as one thing to avoid, and in general the desire for a well-understood correspondence between source code and run-time behavior has a number of other corollaries: Whiffle is a pure ahead-of-time (AOT) compiler, as just-in-time (JIT) compilation adds noise. Worse, speculative JIT would add unpredictability, which while good on the whole would be anathema to understanding an isolated piece of a system like the GC.

Whiffle also should produce stand-alone C files, without a thick run-time. I need to be able to understand and reason about the residual C programs, and depending on third-party libraries would hinder this goal.

Oddly enough, users are also an anti-goal: as a compiler that only exists to test a specific GC library, there is no sense in spending too much time making Whiffle nicer for other humans, humans whose goal is surely not just to test Whippet. Whiffle is an interesting object, but is not meant for actual use or users.

corners: cut

Another anti-goal is completeness with regards to any specific language standard: the point is to test a GC, not to make a useful Scheme. Therefore Whiffle gets by just fine without flonums, fractions, continuations (delimited or otherwise), multiple return values, ports, or indeed any library support at all. All of that just doesn't matter for testing a GC.

That said, it has been useful to be able to import standard Scheme garbage collection benchmarks, such as earley or nboyer. These have required very few modifications to run in Whiffle, mostly related to Whippet's test harness that wants to spawn multiple threads.

and so?

I think this evening we have elaborated a bit more about the "how", complementing yesterday's note about the "what". Tomorrow (?) I'll see if I can dig in more to the "why": what questions does Whiffle let me ask of Whippet, and how good of a job does it do at getting those answers? Until then, may all your roots be traced, and happy hacking.

14 November, 2023 10:10PM by Andy Wingo

November 13, 2023

i accidentally a scheme

Good evening, dear hackfriends. Tonight's missive is an apology: not quite in the sense of expiation, though not quite not that, either; rather, apology in the sense of explanation, of exegesis: apologia. See, I accidentally made a Scheme. I know I have enough Scheme implementations already, but I went and made another one. It's for a maybe good reason, though!

one does not simply a scheme

I feel like we should make this the decade of leaning into your problems, and I have a Scheme problem, so here we are. See, I co-maintain Guile, and have been noodling on a new garbage collector (GC) for Guile, Whippet. Whippet is designed to be embedded in the project that uses it, so one day I hope it will be just copied into Guile's source tree, replacing the venerable BDW-GC that we currently use.

The thing is, though, that GC implementations are complicated. A bug in a GC usually manifests itself far away in time and space from the code that caused the bug. Language implementations are also complicated, for similar reasons. Swapping one GC for another is something to be done very carefully. This is even more the case when the switching cost is high, which is the case with BDW-GC: as a collector written as a library to link into "uncooperative" programs, there is more cost to moving to a conventional collector than in the case where the embedding program is already aware that (for example) garbage collection may relocate objects.

So, you need to start small. First, we need to prove that the new GC implementation is promising in some way, that it might improve on BDW-GC. Then... embed it directly into Guile? That sounds like a bug farm. Is there not any intermediate step that one might take?

But also, how do you actually test that a GC algorithm or implementation is interesting? You need a workload, and you need the ability to compare the new collector to the old, for that workload. In Whippet I had been writing some benchmarks in C (example), but this approach wasn't scaling: besides not sparking joy, I was starting to wonder if what I was testing would actually reflect usage in Guile.

I had an early approach to rewrite a simple language implementation like the other Scheme implementation I made to demonstrate JIT code generation in WebAssembly, but that soon foundered against what seemed to me an unlikely rock: the compiler itself. In my wasm-jit work, the "compiler" itself was in C++, using the C++ allocator for compile-time allocations, and the result was a tree of AST nodes that were interpreted at run-time. But to embed the benchmarks in Whippet itself I needed something in C, which is less amenable to abstraction of any kind... Here I think I could have made a different choice: to somehow allow C++ or something as a dependency to write tests, or to do more mallocation in the "compiler"...

But that wasn't fun. A lesson I learned long ago is that if something isn't fun, I need to turn it into a compiler. So I started writing a compiler to a little bytecode VM, initially in C, then in Scheme because C is a drag and why not? Why not just generate the bytecode C file from Scheme? Same dependency set, once the C file is generated. And then, as long as you're generating C, why go through bytecode at all? Why not just, you know, generate C?

after all, why not? why shouldn't i keep it?

And that's how I accidentally made a Scheme, Whiffle. Tomorrow I'll write a little more on what Whiffle is and isn't, and what it's doing for me. Until then, happy hacking!

13 November, 2023 09:36PM by Andy Wingo

November 10, 2023

poke @ Savannah

Debuggers and Analysis Tools CfP @ FOSDEM 2024

Guinevere Piazera Larsen sent us the CfP for the upcoming Debuggers and Analysis Tools devroom at FOSDEM 2024:

We are excited to announce that the call for proposals is now open for the Debuggers and Analysis Tools developer room at the upcoming FOSDEM 2024, to be hosted on Saturday, February 3rd 2024 in Brussels, Belgium.

This devroom is a collaborative effort and is organized by dedicated people from projects such as GDB, SystemTap, Valgrind, GNU poke, Elfutils, binutils, Libabigail, and the like.

Important Dates:

   1st December 2023    Submission deadline
   8th December 2023    Acceptance notifications
   15th December 2023   Final Schedule announcement
   3rd February 2024    Conference dates

## CFP Introduction

This devroom is geared towards authors, users and enthusiasts of Free Software programs involved with debugging and analyzing ELF programs using all the binary information available (including DWARF data).

The goal of the devroom is for developers to get in touch with each other and with users of their tools, have interesting and hopefully productive discussions, and finally what is most important: to have fun.

The format of the expected presentations will cover a range that includes:

    * Talks oriented towards developers of these projects
    * Presentation / Introductory Workshops for users of these programs
    * Activities oriented towards collaboration & standardization between these programs


All presentation slots are 25 minutes, with 15 minutes recommended for presentations, and 10 minutes for Q&A. This way we can have 8 slots and bio breaks, covering many topics!

### What is FOSDEM?

*FOSDEM is a free event for software developers to meet, share ideas and collaborate.*

Every year, thousands of developers of free and open source software from all over the world gather at the event in Brussels.

FOSDEM 2024 will take place on Saturday 3 and Sunday 4 February 2024.

It will be an in-person event at the *ULB Solbosch Campus, Brussels, Belgium, Europe.* If you aren't there, you can watch the live streams from the main tracks and developer rooms.

### Important stuff:

- FOSDEM is free to attend. There is no registration.
- [FOSDEM website](https://fosdem.org/)
- [FOSDEM code of conduct](https://fosdem.org/2024/practical/conduct/)
- [FOSDEM Schedule](https://fosdem.org/2024/schedule/)

### Desirable topics:

In this devroom, we're interested in any project that reads binary files directly and analyses or manipulates them. Some projects and tools that fit the devroom are:

- GDB
- LLDB
- Ghidra
- Valgrind
- Dyninst
- GNU poke
- Binutils
- SystemTap
- Elfutils
- Libabigail
- Radare2

And any other debugging or binary analysis tool or framework project that might be of interest to Free Software enthusiasts.

### Topic overlap

There are over 50 developer rooms this year, and multiple main tracks, so there may be overlap with some other events.
If the topic would fit more than one devroom, you are free to choose which to submit to, but we encourage you to consider the focus of the talk, and see which devroom is best matched.

Full list of devrooms
[here](https://fosdem.org/2024/news/2023-11-08-devrooms-announced/).

### How to submit your proposal

To submit a talk, please visit the [FOSDEM 2024 Pretalx
website](https://pretalx.fosdem.org/fosdem-2024/cfp).

Click *Submit a proposal*. Make sure you choose the "Debuggers and analysis tools" devroom in the track drop-down menu (so that we see it rather than another devroom's organisers).

### What should be in your submission

- name
- short bio
- contact info
- title
- abstract (what you're going to talk about, supports markdown)

Optional:

- description (more detailed description, supports markdown)
- submission notes (for the organiser only, not made public)
- session image (if you have an illustration to go with the talk)
- additional speaker/s (if more than one person presenting)

Afterwards, click "Continue".
The second page will include fields for:

- Extra materials for reviewers (optional, for organisers only)
- Additional information about the speaker (optional).
- Resources to be used during the talk

All presentations will be recorded and made available under Creative Commons licenses. In the Submission notes field, please indicate that you agree that your presentation will be licensed under the CC-By-SA-4.0 or CC-By-4.0 license and that you agree to have your presentation recorded.

For example:

    "If my presentation is accepted for FOSDEM, I hereby agree to
license all recordings, slides, and other associated materials under the Creative Commons Attribution Share-Alike 4.0 International License.

    Sincerely,

    <NAME>."

Once you've completed the required sections, submit and we'll get
back to you soon!

### Things to be aware of

  • The reference time will be Brussels local time (CET):
    <https://www.timeanddate.com/worldclock/belgium/brussels>

  • There will be a Q/A session after the talk is over. Please make
    sure that you will be available on the day of the event.

  • If you're not able to attend the talk in person, live stream links
    will be available on the FOSDEM schedule page:
    <https://fosdem.org/2024/schedule/>.

  • FOSDEM Matrix channels are specific to each devroom; the general
    link is: <https://matrix.to/#/#fosdem:fosdem.org>

  • *Matrix bridge to the LibreSOC IRC channel*:
    <https://matrix.to/#/#_oftc_#libre-soc:matrix.org>


### Contact

If you have any question about this devroom, please send a message to debuggers-and-analysis-devroom-manager at fosdem dot org.

Organizers of the devroom can also be reached on IRC at #dadevroom@irc.libera.chat

10 November, 2023 03:14PM by Jose E. Marchesi

November 03, 2023

FSF Blogs

Software that supports your body should always respect your freedom

Software that controls your body should always respect your freedom. This article is a recap of scandals involving medical devices, like hearing aids, insulin pumps, bionic eyes, and pacemakers, and what we can learn from them. It's astonishing: you wouldn't expect these devices to be run by software in such a way that they can leave you completely helpless.

03 November, 2023 09:46PM

November 02, 2023

pspp @ Savannah

PSPP 2.0.0-pre3 has been released

I'm very pleased to announce the release of a new version of GNU PSPP.  PSPP is a program for statistical analysis of sampled data.  It is a free replacement for the proprietary program SPSS.

Changes from 2.0.0-pre2 to 2.0.0-pre3:

  • Testsuite fix.
  • User string fix.
  • Arabic translation update.

Please send PSPP bug reports to bug-gnu-pspp@gnu.org.

02 November, 2023 09:36PM by Ben Pfaff

November 01, 2023

PSPP 2.0.0-pre2 has been released

I'm very pleased to announce the release of a new version of GNU PSPP.  PSPP is a program for statistical analysis of sampled data.  It is a free replacement for the proprietary program SPSS.

Changes from 1.6.2 to 2.0.0-pre2:

  • The CTABLES command is now implemented.
  • FREQUENCIES now honors the LAYERED setting on SPLIT FILE.
  • AGGREGATE:
    • New aggregation functions CGT, CLT, CIN, and COUT.
    • Break variables are now optional.
  • ADD FILES, MATCH FILES, and UPDATE now allow string variables with the same name to have different widths.
  • CROSSTABS now calculates significance of Pearson and Spearman correlations in symmetric measures.
  • DISPLAY MACROS is now implemented.
  • SET SUMMARY is now implemented.
  • SHOW ENVIRONMENT is now implemented.
  • Removed the MODIFY VARS command, which is not in SPSS.
  • Building from a Git repository, which previously required GIMP, now requires rsvg-convert from librsvg2 instead.
  • The pspp-dump-sav program is no longer installed by default.
  • Improved the search options in the syntax editor.
  • Localisations for the ar (Arabic) and ta (Tamil) locales have been added.
  • Journaling is now enabled by default when PSPP or PSPPIRE is started interactively.  In PSPPIRE, use Edit|Options to override the default.

Please send PSPP bug reports to bug-gnu-pspp@gnu.org.

01 November, 2023 03:34AM by Ben Pfaff

October 31, 2023

gprofng-gui @ Savannah

gprofng GUI 1.0 released

We are happy to announce the first release of GNU gprofng-gui, version 1.0.

gprofng-gui is a full-fledged graphical interface for the gprofng
profiler, which is part of the GNU binutils.

The tarball gprofng-gui-1.0.tar.gz is now available at
https://ftp.gnu.org/gnu/gprofng-gui/gprofng-gui-1.0.tar.gz.

--
Vladimir Mezentsev
Jose E. Marchesi
31 October 2023

31 October, 2023 09:18AM by Jose E. Marchesi

October 30, 2023

GNU Guix

A build daemon in Guile

When using Guix, you might be aware of the daemon. It runs in the background but it's a key component in Guix. Whenever you've been using Guix to operate on the store, whether that's building something or downloading some substitutes, it's the daemon managing that operation.

The daemon also is a key part of the history of Guix. The Guix project started mixing Guile with ideas from the Nix project, and the guix-daemon is a fork of the nix-daemon with some tweaks made over the years. Rather than being implemented in Guile though, the daemon is implemented in C++ with some helpers written in Guile. Given the focus on Guile in Guix, this is unusual, and I believe it's made working on the daemon less desirable, especially since rewriting it in Guile has been discussed for many years now. It has been the topic of a Google Summer of Code internship by Caleb Ristvedt back in 2017, which helped clarify implementation details and led to some preliminary code.

What would a build daemon in Guile bring?

Guix already has code written in Guile for doing some of what the daemon does internally, so being able to use this Guile code inside and outside the daemon would simplify Guix and allow removing the C++ code.

There isn't Guile code yet for everything the daemon does though, so getting to this point will make new exciting features easier to implement. That could be things like making it easier to use Guix in environments where running the daemon in the usual way is inconvenient or infeasible. It may also help with portability, for example helping to run Guix on the Hurd and on new architectures.

As someone who's more experienced writing Guile than C++, I'm also hoping it'll generally make hacking on the daemon more accessible. This in turn might lead to new features. For example, I think having a build daemon written in Guile will simplify implementing a way to jump in to a build and inspect the environment.

With that in mind, I'm excited to announce that support from NLNet will allow me to focus for the next year on getting a Guile implementation of the build daemon written and adopted.

A technical plan

Building on the recent discussion of this topic on the guix-devel@gnu.org mailing list, here's some technical thoughts on how I'm approaching this.

While I think there's a substantial amount of work to do, progress towards a Guile guix-daemon has already been made. Given that things in Guix have probably changed since this work has happened, I plan to carefully review that existing work (most of which can be found on the guile-daemon branch).

The priority for the Guile daemon is backwards compatibility, so the plan is to allow switching between the C++ implementation and Guile implementation without any issues. This'll require not making changes to the database schema, and generally doing things in a way which the current C++ daemon will understand.

Like the Guix Build Coordinator, I'm planning to make the daemon a single process using Fibers for concurrency. This is in contrast to the forking model used by the C++ daemon. Even though it's not a priority to address feature issues with the current daemon, this approach might help to reduce database contention issues experienced with the current daemon, and allow for less locking, like not having the big GC lock for example.

I'm planning on publishing more blog posts as the project progresses, so keep an eye on the Guix blog for future updates.

Acknowledgments

Thanks to Simon Tournier and Ludovic Courtès for providing feedback on an earlier draft of this post.

About GNU Guix

GNU Guix is a transactional package manager and an advanced distribution of the GNU system that respects user freedom. Guix can be used on top of any system running the Hurd or the Linux kernel, or it can be used as a standalone operating system distribution for i686, x86_64, ARMv7, AArch64 and POWER9 machines.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. When used as a standalone GNU/Linux distribution, Guix offers a declarative, stateless approach to operating system configuration management. Guix is highly customizable and hackable through Guile programming interfaces and extensions to the Scheme language.

30 October, 2023 09:30AM by Christopher Baines

October 29, 2023

unifont @ Savannah

Unifont 15.1.04 Released

29 October 2023 Unifont 15.1.04 is now available.  This release adds the CJK Unified Ideographs Extension I glyphs (U+2EBF0..U+2EE5D).  It also includes other minor glyph and software updates.

This release no longer builds TrueType fonts by default, as announced over the past year.  They have been replaced with their OpenType equivalents.  TrueType fonts can still be built manually by typing "make truetype" in the font directory.

This release also includes a new Hangul Syllables Johab 6/3/1 encoding proposed by Ho-Seok Ee.  New Hangul supporting software for this encoding allows formation of all double-width Hangul syllables, including those with ancient letters that are outside the Unicode Hangul Syllables range.  Details are in the ChangeLog file.

Download this release from GNU server mirrors at:

     https://ftpmirror.gnu.org/unifont/unifont-15.1.04/

or if that fails,

     https://ftp.gnu.org/gnu/unifont/unifont-15.1.04/

or, as a last resort,

     ftp://ftp.gnu.org/gnu/unifont/unifont-15.1.04/

These files are also available on the unifoundry.com website:

     https://unifoundry.com/pub/unifont/unifont-15.1.04/

Font files are in the subdirectory

     https://unifoundry.com/pub/unifont/unifont-15.1.04/font-builds/

A more detailed description of font changes is available at

      https://unifoundry.com/unifont/index.html

and of utility program changes at

      https://unifoundry.com/unifont/unifont-utilities.html

Information about Hangul modifications is at

      https://unifoundry.com/hangul/index.html

and

      http://unifoundry.com/hangul/hangul-generation.html

29 October, 2023 10:11PM by Paul Hardy

October 25, 2023

poke @ Savannah

[VIDEO] Writing GNU poke pickles: GCOV data files

We have published a video that shows a small but interesting real-life example of a poke pickle to poke at GCOV data files.  The purpose of the video is to help learning useful idioms to write your own pickles.  Hope you find it useful!

https://youtu.be/95f5bB4ls7w

25 October, 2023 11:56AM by Jose E. Marchesi

October 24, 2023

parallel @ Savannah

GNU Parallel 20231022 ('Al-Aqsa Deluge') released [stable]

GNU Parallel 20231022 ('Al-Aqsa Deluge') has been released. It is available for download at: lbry://@GnuParallel:4

Quote of the month:

  Love to make a dual processor workstation absolutely whir running dozens of analysis scripts at once
    -- Best Catboy Key Grip @alamogordoglass@twitter
 
New in this release:

  • Bug fixes and man page updates.


News about GNU Parallel:


GNU Parallel - For people who live life in the parallel lane.

If you like GNU Parallel record a video testimonial: Say who you are, what you use GNU Parallel for, how it helps you, and what you like most about it. Include a command that uses GNU Parallel if you feel like it.


About GNU Parallel


GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.

For example you can run this to convert all jpeg files into png and gif files and have a progress bar:

  parallel --bar convert {1} {1.}.{2} ::: *.jpg ::: png gif

Or you can generate big, medium, and small thumbnails of all jpeg files in sub dirs:

  find . -name '*.jpg' |
    parallel convert -geometry {2} {1} {1//}/thumb{2}_{1/} :::: - ::: 50 100 200

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with:

    $ (wget -O - pi.dk/3 || lynx -source pi.dk/3 || curl pi.dk/3/ || \
       fetch -o - http://pi.dk/3 ) > install.sh
    $ sha1sum install.sh | grep 883c667e01eed62f975ad28b6d50e22a
    12345678 883c667e 01eed62f 975ad28b 6d50e22a
    $ md5sum install.sh | grep cc21b4c943fd03e93ae1ae49e28573c0
    cc21b4c9 43fd03e9 3ae1ae49 e28573c0
    $ sha512sum install.sh | grep ec113b49a54e705f86d51e784ebced224fdff3f52
    79945d9d 250b42a4 2067bb00 99da012e c113b49a 54e705f8 6d51e784 ebced224
    fdff3f52 ca588d64 e75f6033 61bd543f d631f592 2f87ceb2 ab034149 6df84a35
    $ bash install.sh

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your command line will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2018): GNU Parallel 2018, March 2018, https://doi.org/10.5281/zenodo.1146014.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/ Identi.ca/Google+/Twitter/Facebook/Linkedin/mailing lists
  • Get the merchandise https://gnuparallel.threadless.com/designs/gnu-parallel
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference


If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)


If GNU Parallel saves you money:



About GNU SQL


GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.
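A DBURL packs the protocol, login information, host, port, and database into one string. As a rough illustration of the idea (a Python sketch, not GNU sql's actual parser; consult the sql man page for the real DBURL grammar):

```python
from urllib.parse import urlparse

def parse_dburl(dburl):
    """Split a DBURL into the login pieces a command line client needs."""
    u = urlparse(dburl)
    return {
        "protocol": u.scheme,             # e.g. mysql, pg, sqlite (illustrative)
        "username": u.username,
        "password": u.password,
        "hostname": u.hostname,
        "port": u.port,
        "database": u.path.lstrip("/") or None,
    }

info = parse_dburl("mysql://user:secret@db.example.com:3306/mydb")
```

With all of the pieces in one string, the same invocation works regardless of which database's client is ultimately run.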

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.


About GNU Niceload


GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.
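The suspend/resume cycle described above can be sketched as follows (a hypothetical Python illustration of the mechanism using SIGSTOP/SIGCONT, not niceload's actual implementation):

```python
import os
import signal
import time

def load_over(limit):
    """True when the 1-minute load average is above the limit."""
    return os.getloadavg()[0] > limit

def niceload_sketch(pid, limit, soft=True, slice_s=1.0, suspend_s=1.0):
    """Throttle process `pid` against a load-average limit.
    Soft limit: the program still gets short run slices while over the limit.
    Hard limit: the program runs only while the system is below the limit."""
    while True:
        if load_over(limit):
            os.kill(pid, signal.SIGSTOP)     # suspend the program for a while
            time.sleep(suspend_s)
            if soft:                          # soft limit: brief run slice
                os.kill(pid, signal.SIGCONT)
                time.sleep(slice_s)
        else:
            os.kill(pid, signal.SIGCONT)     # below the limit: let it run
            time.sleep(slice_s)
```

The real tool also supports other activity measures (I/O, memory, battery) as limits.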

24 October, 2023 09:15PM by Ole Tange

October 22, 2023

unifont @ Savannah

Unifont 15.1.03 Released

*21 October 2023* Unifont 15.1.03 is now available.  This release adds the CJK Unified Ideographs Extension I glyphs (U+2EBF0..U+2EE5D).  It also includes other minor glyph and software updates.

This release no longer builds TrueType fonts by default, as announced over the past year.  They have been replaced with their OpenType equivalents.  TrueType fonts can still be built manually by typing "make truetype" in the font directory.

This release also includes a new Hangul Syllables Johab 6/3/1 encoding proposed by Ho-Seok Ee.  New Hangul supporting software for this encoding allows formation of all double-width Hangul syllables, including those with ancient letters that are outside the Unicode Hangul Syllables range.  Details are in the ChangeLog file.

Download this release from GNU server mirrors at:

     https://ftpmirror.gnu.org/unifont/unifont-15.1.03/

or if that fails,

     https://ftp.gnu.org/gnu/unifont/unifont-15.1.03/

or, as a last resort,

     ftp://ftp.gnu.org/gnu/unifont/unifont-15.1.03/

These files are also available on the unifoundry.com website:

     https://unifoundry.com/pub/unifont/unifont-15.1.03/

Font files are in the subdirectory

     https://unifoundry.com/pub/unifont/unifont-15.1.03/font-builds/

A more detailed description of font changes is available at

      https://unifoundry.com/unifont/index.html

and of utility program changes at

      https://unifoundry.com/unifont/unifont-utilities.html

Information about Hangul modifications is at

      https://unifoundry.com/hangul/index.html

and

      http://unifoundry.com/hangul/hangul-generation.html

22 October, 2023 12:40AM by Paul Hardy

October 20, 2023

gnuastro @ Savannah

Gnuastro 0.21 released

The 21st release of GNU Astronomy Utilities (Gnuastro) is now available. See the full announcement for all the new features in this release and the many bugs that have been found and fixed: https://lists.gnu.org/archive/html/info-gnuastro/2023-10/msg00000.html

20 October, 2023 08:23PM by Mohammad Akhlaghi

October 19, 2023

Andy Wingo

requiem for a stringref

Good day, comrades. Today's missive is about strings!

a problem for java

Imagine you want to compile a program to WebAssembly, with the new GC support for WebAssembly. Your WebAssembly program will run on web browsers and render its contents using the DOM API: Document.createElement, Document.createTextNode, and so on. It will also use DOM interfaces to read parts of the page and read input from the user.

How do you go about representing your program in WebAssembly? The GC support gives you the ability to define a number of different kinds of aggregate data types: structs (records), arrays, and functions-as-values. Earlier versions of WebAssembly gave you 32- and 64-bit integers, floating-point numbers, and opaque references to host values (externref). This is what you have in your toolbox. But what about strings?

WebAssembly's historical answer has been to throw its hands in the air and punt the problem to its user. This isn't so bad: the direct user of WebAssembly is a compiler developer and can fend for themself. Using the primitives above, it's clear we should represent strings as some kind of array.

The source language may impose specific requirements regarding string representations: for example, in Java, you will want to use an (array i16), because Java's strings are specified as sequences of UTF-16¹ code units, and Java programs are written assuming that random access to a code unit is constant-time.

Let's roll with the Java example for a while. It so happens that JavaScript, the main language of the web, also specifies strings in terms of 16-bit code units. The DOM interfaces are optimized for JavaScript strings, so at some point, our WebAssembly program is going to need to convert its (array i16) buffer to a JavaScript string. You can imagine that a high-throughput interface between WebAssembly and the DOM is going to involve a significant amount of copying; could there be a way to avoid this?

Similarly, Java is going to need to perform a number of gnarly operations on its strings, for example, locale-specific collation. This is a hard problem whose solution basically amounts to shipping a copy of libICU in their WebAssembly module; that's a lot of binary size, and it's not even clear how to compile libICU in such a way that works on GC-managed arrays rather than linear memory.

Thinking about it more, there's also the problem of regular expressions. A high-performance regular expression engine is a lot of investment, and not really portable from the native world to WebAssembly, as the main techniques require just-in-time code generation, which is unavailable on Wasm.

This is starting to sound like a terrible system: big binaries, lots of copying, suboptimal algorithms, and a likely ongoing functionality gap. What to do?

a solution for java

One observation is that in the specific case of Java, we could just use JavaScript strings in a web browser, instead of implementing our own string library. We may need to make some shims here and there, but the basic functionality from JavaScript gets us what we need: constant-time UTF-16¹ code unit access from within WebAssembly, and efficient access to browser regular expression, internationalization, and DOM capabilities that doesn't require copying.

A sort of minimum viable product for improving the performance of Java compiled to Wasm/GC would be to represent strings as externref, which is WebAssembly's way of making an opaque reference to a host value. You would operate on those values by importing the equivalent of String.prototype.charCodeAt and friends; to get the receivers right you'd need to run them through Function.call.bind. It's a somewhat convoluted system, but a WebAssembly engine could be taught to recognize such a function and compile it specially, using the same code that JavaScript compiles to.

(Does this sound too complicated or too distasteful to implement? Disabuse yourself of the notion: it's happening already. V8 does this and other JS/Wasm engines will be forced to follow, as users file bug reports that such-and-such an app is slow on e.g. Firefox but fast on Chrome, and so on and so on. It's the same dynamic that led to asm.js adoption.)

Getting properly good performance will require a bit more, though. String literals, for example, would have to be loaded from e.g. UTF-8 in a WebAssembly data section, then transcoded to a JavaScript string. You need a function that can convert UTF-8 to JS string in the first place; let's call it fromUtf8Array. An engine can now optimize the array.new_data + fromUtf8Array sequence to avoid the intermediate array creation. It would also be nice to tighten up the typing on the WebAssembly side: having everything be externref imposes a dynamic type-check on each operation, which is something that can't always be elided.

beyond the web?

"JavaScript strings for Java" has two main limitations: JavaScript and Java. On the first side, this MVP doesn't give you anything if your WebAssembly host doesn't do JavaScript. Although it's a bit of a failure for a universal virtual machine, to an extent, the WebAssembly ecosystem is OK with this distinction: there are different compiler and toolchain options when targetting the web versus, say, Fastly's edge compute platform.

But does that mean you can't run Java on Fastly's cloud? Does the Java compiler have to actually implement all of those things that we were trying to avoid? Will Java actually implement those things? I think the answer to all of those questions is "no", but I also expect a pretty crappy outcome.

First of all, it's not technically required that Java implement its own strings in terms of (array i16). A Java-to-Wasm/GC compiler can keep the strings-as-opaque-host-values paradigm, and instead have these string routines provided by an auxiliary WebAssembly module that itself probably uses (array i16), effectively polyfilling what the browser would give you. The effort of creating this module can be shared between e.g. Java and C#, and the run-time costs for instantiating the module can be amortized over a number of Java users within a process.

However, I don't expect such a module to be of good quality. It doesn't seem possible to implement a good regular expression engine that way, for example. And, absent a very good run-time system with an adaptive compiler, I don't expect the low-level per-codepoint operations to be as efficient with a polyfill as they are on the browser.

Instead, I could see non-web WebAssembly hosts being pressured into implementing their own built-in UTF-16¹ module which has accelerated compilation, a native regular expression engine, and so on. It's nice to have a portable fallback but in the long run, first-class UTF-16¹ will be everywhere.

beyond java?

The other drawback is Java, by which I mean, Java (and JavaScript) is outdated: if you were designing them today, their strings would not be UTF-16¹.

I keep this little "¹" sigil when I mention UTF-16 because Java (and JavaScript) don't actually use UTF-16 to represent their strings. UTF-16 is a standard Unicode encoding form: it encodes a sequence of Unicode scalar values (USVs), using one or two 16-bit code units per USV. A USV is a codepoint: an integer in the range [0,0x10FFFF], but excluding surrogate codepoints: codepoints in the range [0xD800,0xDFFF].

Surrogate codepoints are an accident of history, and occur either when accidentally slicing a two-code-unit UTF-16-encoded-USV in the middle, or when treating an arbitrary i16 array as if it were valid UTF-16. They are annoying to detect, but in practice are here to stay: no amount of wishing will make them go away from Java, JavaScript, C#, or other similar languages from those heady days of the mid-90s. Believe me, I have engaged in some serious wishing, but if you, the virtual machine implementor, want to support Java as a source language, your strings have to be accessible as 16-bit code units, which opens the door (eventually) to surrogate codepoints.
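Python's strings exhibit the same accident of history: they admit lone surrogates as codepoints, but strict UTF-8 refuses to encode them. A short illustration (standard Python; the "surrogatepass" handler gives the generalized, WTF-8-style encoding):

```python
lone = chr(0xD800)          # a lone high surrogate: a codepoint, but not a USV
assert len(lone) == 1       # Python strings admit it...

try:                        # ...but strict UTF-8 refuses to encode it
    lone.encode("utf-8")
    raised = False
except UnicodeEncodeError:
    raised = True
assert raised

# "surrogatepass" yields the generalized encoding (WTF-8, in effect):
wtf8 = lone.encode("utf-8", errors="surrogatepass")
assert wtf8 == b"\xed\xa0\x80"
assert wtf8.decode("utf-8", errors="surrogatepass") == lone
```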

So when I say UTF-16¹, I really mean WTF-16: sequences of any 16-bit code units, without the UTF-16 requirement that surrogate code units be properly paired. In this way, WTF-16 encodes a larger language than UTF-16: not just USV codepoints, but also surrogate codepoints.

The existence of WTF-16 is a consequence of a kind of original sin, originating in the choice to expose 16-bit code unit access to the Java programmer, and which everyone agrees should be somehow firewalled off from the rest of the world. The usual way to do this is to prohibit WTF-16 from being transferred over the network or stored to disk: a message sent via an HTTP POST, for example, will never include a surrogate codepoint, and will either replace it with the U+FFFD replacement codepoint or throw an error.

But within a Java program, and indeed within a JavaScript program, there is no attempt to maintain the UTF-16 requirements regarding surrogates, because any change from the current behavior would break programs. (How many? Probably very, very few. But productively deprecating web behavior is hard to do.)

If it were just Java and JavaScript, that would be one thing, but WTF-16 poses challenges for using JS strings from non-Java languages. Consider that any JavaScript string can be invalid UTF-16: if your language defines strings as sequences of USVs, which excludes surrogates, what do you do when you get a fresh string from JS? Passing your string to JS is fine, because WTF-16 encodes a superset of USVs, but when you receive a string, you need to have a plan.

You only have a few options. You can eagerly check that a string is valid UTF-16; this might be a potentially expensive O(n) check, but perhaps this is acceptable. (This check may be faster in the future.) Or, you can replace surrogate codepoints with U+FFFD, when accessing string contents; lossy, but preserves your language's semantic domain. Or, you can extend your language's semantics to somehow deal with surrogate codepoints.
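The three options map neatly onto Python's codec error handlers, which makes for a compact sketch (the handler names are Python's, not anything from a Wasm proposal):

```python
def from_js_string(units16):
    """Interpret a list of 16-bit code units (a WTF-16 string) three ways."""
    raw = b"".join(u.to_bytes(2, "little") for u in units16)
    results = {}
    # 1. Eager validation: an O(n) check that errors on lone surrogates.
    try:
        results["strict"] = raw.decode("utf-16-le")
    except UnicodeDecodeError:
        results["strict"] = None
    # 2. Lossy: replace lone surrogates with U+FFFD.
    results["replace"] = raw.decode("utf-16-le", errors="replace")
    # 3. Pass through: admit surrogate codepoints into your semantic domain.
    results["passthrough"] = raw.decode("utf-16-le", errors="surrogatepass")
    return results

r = from_js_string([0x68, 0x69, 0xD800])   # "hi" plus a lone high surrogate
```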

My point is that if you want to use JS strings in a non-Java-like language, your language will need to define what to do with invalid UTF-16. Ideally the browser will give you a way to put your policy into practice: replace with U+FFFD, error, or pass through.

beyond java? (reprise) (feat: snakes)

With that detail out of the way, say you are compiling Python to Wasm/GC. Python's language reference says: "A string is a sequence of values that represent Unicode code points. All the code points in the range U+0000 - U+10FFFF can be represented in a string." This corresponds to the domain of JavaScript's strings; great!

On second thought, how do you actually access the contents of the string? Surely not via the equivalent of JavaScript's String.prototype.charCodeAt; Python strings are sequences of codepoints, not 16-bit code units.

Here we arrive at the second, thornier problem, which is less about domain and more about idiom: in Python, we expect to be able to access strings by codepoint index. This is the case not only to access string contents, but also to refer to positions in strings, for example when extracting a substring. These operations need to be fast (or fast enough anyway; CPython doesn't have a very high performance baseline to meet).

However, the web platform doesn't give us O(1) access to string codepoints. Usually a codepoint just takes up one 16-bit code unit, so the (zero-indexed) 5th codepoint of JS string s may indeed be at s.codePointAt(5), but it may also be at offset 6, 7, 8, 9, or 10. You get the point: finding the nth codepoint in a JS string requires a linear scan from the beginning.
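That linear scan can be sketched directly over a WTF-16 code-unit sequence (illustrative Python; engines do the equivalent in optimized native code):

```python
def codepoint_at(units, n):
    """Return the nth codepoint of a WTF-16 code-unit sequence.
    Linear time: an astral codepoint occupies two code units, so
    there is no way to jump straight to codepoint n."""
    i = count = 0
    while i < len(units):
        u = units[i]
        is_pair = (0xD800 <= u <= 0xDBFF and i + 1 < len(units)
                   and 0xDC00 <= units[i + 1] <= 0xDFFF)
        if is_pair:   # high surrogate + low surrogate => one astral codepoint
            cp = 0x10000 + ((u - 0xD800) << 10) + (units[i + 1] - 0xDC00)
        else:
            cp = u    # BMP codepoint, or a lone surrogate passed through
        if count == n:
            return cp
        count += 1
        i += 2 if is_pair else 1
    raise IndexError(n)

# "a💩b": U+1F4A9 encodes as the surrogate pair D83D DCA9
units = [0x61, 0xD83D, 0xDCA9, 0x62]
assert codepoint_at(units, 1) == 0x1F4A9
```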

More generally, all languages will want to expose O(1) access to some primitive subdivision of strings. For Rust, this is bytes; 8-bit bytes are the code units of UTF-8. For others like Java or C#, it's 16-bit code units. For Python, it's codepoints. When targetting JavaScript strings, there may be a performance impedance mismatch between what the platform offers and what the language requires.

Languages also generally offer some kind of string iteration facility, which doesn't need to correspond to how a JavaScript host sees strings. In the case of Python, one can implement for char in s: print(char) just fine on top of JavaScript strings, by decoding WTF-16 on the fly. Iterators can also map between, say, UTF-8 offsets and WTF-16 offsets, allowing e.g. Rust to preserve its preferred "strings are composed of bytes that are UTF-8 code units" abstraction.

Our O(1) random access problem remains, though. Are we stuck?

what does the good world look like

How should a language represent its strings, anyway? Here we depart from a precise gathering of requirements for WebAssembly strings, but in a useful way, I think: we should build abstractions not only for what is, but also for what should be. We should favor a better future; imagining the ideal helps us design the real.

I keep returning to Henri Sivonen's authoritative article, It’s Not Wrong that "🤦🏼‍♂️".length == 7, But It’s Better that "🤦🏼‍♂️".len() == 17 and Rather Useless that len("🤦🏼‍♂️") == 5. It is so good and if you have reached this point, pop it open in a tab and go through it when you can. In it, Sivonen argues (among other things) that random access to codepoints in a string is not actually important; he thinks that if you were designing Python today, you wouldn't include this interface in its standard library. Users would prefer extended grapheme clusters, which is variable-length anyway and a bit gnarly to compute; storage wants bytes; array-of-codepoints is just a bad place in the middle. Given that UTF-8 is more space-efficient than either UTF-16 or array-of-codepoints, and that it embraces the variable-length nature of encoding, programming languages should just use that.
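The three numbers in that title are easy to reproduce: Python's len counts codepoints, and the other two views are one encode away:

```python
# The facepalm emoji, spelled out: man facepalming + skin tone modifier
# + zero-width joiner + male sign + variation selector.
s = "\U0001F926\U0001F3FC\u200D\u2642\uFE0F"

assert len(s) == 5                           # codepoints (Python's len())
assert len(s.encode("utf-8")) == 17          # UTF-8 code units (Rust's .len())
assert len(s.encode("utf-16-le")) // 2 == 7  # UTF-16 code units (JS .length)
```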

As a model for how strings are represented, array-of-codepoints is outdated, as indeed is UTF-16. Outdated doesn't mean irrelevant, of course; there is lots of Python code out there and we have to support it somehow. But, if we are designing for the future, we should nudge our users towards other interfaces.

There is even a case that a JavaScript engine should represent its strings as UTF-8 internally, despite the fact that JS exposes a UTF-16 view on strings in its API. The pitch is that UTF-8 takes less memory, is probably what we get over the network anyway, and is probably what many of the low-level APIs that a browser uses will want; it would be faster and lighter-weight to pass UTF-8 to text shaping libraries, for example, compared to passing UTF-16 or having to copy when going to JS and when going back. JavaScript engines already have a dozen internal string representations or so (narrow or wide, cons or slice or flat, inline or external, interned or not, and the product of many of those); adding another is just a Small Matter Of Programming that could show benefits, even if some strings have to be later transcoded to UTF-16 because JS accesses them in that way. I have talked with JS engine people in all the browsers and everyone thinks that UTF-8 has a chance at being a win; the drawback is that actually implementing it would take a lot of effort for uncertain payoff.

I have two final data-points to indicate that UTF-8 is the way. One is that Swift used to use UTF-16 to represent its strings, but was able to switch to UTF-8. To adapt to the newer performance model of UTF-8, Swift maintainers designed new APIs to allow users to request a view on a string: treat this string as UTF-8, or UTF-16, or a sequence of codepoints, or even a sequence of extended grapheme clusters. Their users appear to be happy, and I expect that many languages will follow Swift's lead.

Secondly, as a maintainer of the Guile Scheme implementation, I also want to switch to UTF-8. Guile has long used Python's representation strategy: array of codepoints, with an optimization if all codepoints are "narrow" (less than 256). The Scheme language exposes codepoint-at-offset (string-ref) as one of its fundamental string access primitives, and array-of-codepoints maps well to this idiom. However, we do plan to move to UTF-8, with a Swift-like breadcrumbs strategy for accelerating per-codepoint access. We hope to lower memory consumption, simplify the implementation, and have general (but not uniform) speedups; some things will be slower but most should be faster. Over time, users will learn the performance model and adapt to prefer string builders / iterators ("string ports") instead of string-ref.
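The breadcrumbs idea can be sketched as a side table recording the byte offset of every K-th codepoint in a UTF-8 buffer, so that a string-ref costs one table lookup plus a short bounded scan (an illustrative Python sketch of the strategy, not Guile's implementation):

```python
K = 4  # breadcrumb spacing; a real implementation would use a larger stride

def make_breadcrumbs(data: bytes):
    """Byte offsets of codepoints 0, K, 2K, ... in a UTF-8 buffer.
    A UTF-8 codepoint starts at every byte that is not a continuation byte."""
    crumbs, count = [], 0
    for off, b in enumerate(data):
        if b & 0xC0 != 0x80:          # not 10xxxxxx: a codepoint starts here
            if count % K == 0:
                crumbs.append(off)
            count += 1
    return crumbs

def string_ref(data, crumbs, n):
    """Codepoint n: jump to the nearest breadcrumb, scan at most K-1 forward."""
    off = crumbs[n // K]
    for _ in range(n % K):
        off += 1
        while off < len(data) and data[off] & 0xC0 == 0x80:
            off += 1                  # skip continuation bytes
    end = off + 1
    while end < len(data) and data[end] & 0xC0 == 0x80:
        end += 1
    return data[off:end].decode("utf-8")

buf = "héllo wörld".encode("utf-8")
crumbs = make_breadcrumbs(buf)
assert string_ref(buf, crumbs, 7) == "ö"
```

Access stays O(1) for a fixed K, at the price of one stored offset per K codepoints; tuning K trades memory for scan length.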

a solution for webassembly in the browser?

Let's try to summarize: it definitely makes sense for Java to use JavaScript strings when compiled to WebAssembly/GC, when running on the browser. There is an OK-ish compilation strategy for this use case involving externref, String.prototype.charCodeAt imports, and so on, along with some engine heroics to specially recognize these operations. There is an early proposal to sand off some of the rough edges, to make this use-case a bit more predictable. However, there are two limitations:

  1. Focussing on providing JS strings to Wasm/GC is only really good for Java and friends; the cost of mapping charCodeAt semantics to, say, Python's strings is likely too high.

  2. JS strings are only present on browsers (and Node and such).

I see the outcome being that Java will have to keep its implementation that uses (array i16) when targetting the edge, and use JS strings on the browser. I think that polyfills will not have acceptable performance. On the edge there will be a binary size penalty and a performance and functionality gap, relative to the browser. Some edge Wasm implementations will be pushed to implement fast JS strings by their users, even though they don't have JS on the host.

If the JS string builtins proposal were a local maximum, I could see putting some energy into it; it does make the Java case a bit better. However I think it's likely to be an unstable saddle point; if you are going to infect the edge with WTF-16 anyway, you might as well step back and try to solve a problem that is a bit more general than Java on JS.

stringref: a solution for webassembly?

I think WebAssembly should just bite the bullet and try to define a string data type, for languages that use GC. It should support UTF-8 and UTF-16 views, like Swift's strings, and support some kind of iterator API that decodes codepoints.

It should be abstract as regards the concrete representation of strings, to allow JavaScript strings to stand in for WebAssembly strings, in the context of the browser. JS hosts will use UTF-16 as their internal representation. Non-JS hosts will likely prefer UTF-8, and indeed an abstract API favors migration of JS engines away from UTF-16 over the longer term. And, such an abstraction should give the user control over what to do for surrogates: allow them, throw an error, or replace with U+FFFD.

What I describe is what the stringref proposal gives you. We don't yet have consensus on this proposal in the Wasm standardization group, and we may never reach there, although I think it's still possible. As I understand them, the objections are two-fold:

  1. WebAssembly is an instruction set, like AArch64 or x86. Strings are too high-level, and should be built on top, for example with (array i8).

  2. The requirement to support fast WTF-16 code unit access will mean that we are effectively standardizing JavaScript strings.

I think the first objection is a bit easier to overcome. Firstly, WebAssembly now defines quite a number of components that don't map to machine ISAs: typed and extensible locals, memory.copy, and so on. You could have defined memory.copy in terms of primitive operations, or required that all local variables be represented on an explicit stack or in a fixed set of registers, but WebAssembly defines higher-level interfaces that instead allow for more efficient lowering to machine primitives, in this case SIMD-accelerated copies or machine-specific sets of registers.

Similarly with garbage collection, there was a very interesting "continuation marks" proposal by Ross Tate that would give a low-level primitive on top of which users could implement root-finding of stack values. However when choosing what to include in the standard, the group preferred a more high-level facility in which a Wasm module declares managed data types and allows the WebAssembly implementation to do as it sees fit. This will likely result in more efficient systems, as a Wasm implementation can more easily use concurrency and parallelism in the GC implementation than a guest WebAssembly module could do.

So, the criterion for what to include in the Wasm standard is not "what is the most minimal primitive that can express this abstraction", or even "what looks like an ARMv8 instruction", but rather "what makes Wasm a good compilation target". Wasm is designed for its compiler-users, not for the machines that it runs on, and if we manage to find an abstract definition of strings that works for Wasm-targetting toolchains, we should think about adding it.

The second objection is trickier. When you compile to Wasm, you need a good model of what the performance of the Wasm code that you emit will be. Different Wasm implementations may use different stringref representations; requesting a UTF-16 view on a string that is already UTF-16 will be cheaper than doing so on a string that is UTF-8. In the worst case, requesting a UTF-16 view on a UTF-8 string is a linear operation on one system but constant-time on another, which in a loop over string contents makes the former system quadratic: a real performance failure that we need to design around.

The stringref proposal tries to reify as much of the cost model as possible with its "view" abstraction; the compiler can reason that any conversion cost is paid when a view is created rather than on each access. But this abstraction can leak, from a performance perspective. What to do?

Looking back at the expected outcome of the JS-strings-for-Java proposal, I believe that if Wasm succeeds as a target for Java, we will probably end up with WTF-16 everywhere anyway. We might as well admit this, I think, and if we do, then this objection goes away. Likewise on the Web I see UTF-8 as potentially advantageous in the medium-to-long term for JavaScript, and certainly better for other languages, and so I expect JS implementations to also grow support for fast UTF-8.

i'm on a horse

I may be off in some of my predictions about where things will go, so who knows. In the meantime, in the time that it takes other people to reach the same conclusions, stringref is in a kind of hiatus.

The Scheme-to-Wasm compiler that I work on does still emit stringref, but it is purely a toolchain concept now: we have a post-pass that lowers stringref to WTF-8 via (array i8), and which emits calls to host-supplied conversion routines when passing these strings to and from the host. When compiling to Hoot's built-in Wasm virtual machine, we can leave stringref in, instead of lowering it down, resulting in more efficient interoperation with the host Guile than if we had to bounce through byte arrays.

So, we wait for now. Not such a bad situation; at least we have GC coming soon to all the browsers. Happy hacking to all my stringfolk, and until next time!

19 October, 2023 10:33AM by Andy Wingo

October 18, 2023

texinfo @ Savannah

Texinfo 7.1 released

We have released version 7.1 of Texinfo, the GNU documentation format.

It's available via a mirror (xz is much smaller than gz, but gz is available too just in case):

http://ftpmirror.gnu.org/texinfo/texinfo-7.1.tar.xz
http://ftpmirror.gnu.org/texinfo/texinfo-7.1.tar.gz

Please send any comments to bug-texinfo@gnu.org.

Full announcement:

https://lists.gnu.org/archive/html/info-gnu/2023-10/msg00003.html

18 October, 2023 02:57PM by Gavin D. Smith

poke @ Savannah

GNU poke now available in PTXdist

Alexander Dahl has packaged poke [1] for PTXdist, which is a build system for embedded GNU/Linux images.  It is interesting to think about possible applications of having poke in embedded systems such as routers, multimedia players and the like...

Thank you Alexander!

[1] https://git.pengutronix.de/cgit/ptxdist/commit/?id=e6ec9f74f041658f3453f3b7ec4ba4b90e854f26

18 October, 2023 01:25PM by Jose E. Marchesi

Gary Benson

sudo tee >/dev/null

Need to redirect to a file from sudo? Use sudo tee >/dev/null:

$ sudo ls -l /root >/root/files.list
-bash: /root/files.list: Permission denied
$ sudo ls -l /root | sudo tee /root/files.list >/dev/null
$ sudo ls -l /root/files.list
-rw-r--r-- 1 root root 94 Oct 18 09:31 /root/files.list

18 October, 2023 08:40AM by gbenson

GNU MediaGoblin

MediaGoblin 0.13.0

We're pleased to announce the release of MediaGoblin 0.13.0. See the release notes for full details and upgrading instructions.

This minor release adds support for Python 3.10 and 3.11 and drops support for Python versions prior to 3.7. It also upgrades a number of Python dependencies and adds a few small bug fixes and improvements.

This version has been tested on Debian Bullseye (11), Debian Bookworm (12), Ubuntu 20.04, Ubuntu 22.04 and Fedora 39.

Thanks go to Olivier Mehani, Michael McMahon, and Andrew Dudash for their contributions to this release!

To join us and help improve MediaGoblin, please visit our getting involved page.

18 October, 2023 05:00AM by Ben Sturmfels

October 16, 2023

German Arias

FisicaLab 0.4.0 is out!

After many years, FisicaLab 0.4.0 is available. It comes with a new module for thermodynamics and a bug fix in the chalkboard. Most changes are internal: a reorganization of the different solvers that will make it easy, in future versions, to save and load physics problems from a file. Due to a bug in GNUstep with tooltips, there is now an alternative interface based on IUP. Tooltips are used in FisicaLab to show information about each element on the chalkboard. For the moment there are no binary packages, but maybe later.

16 October, 2023 07:09AM by Germán Arias

October 10, 2023

FSF News

LibrePlanet 2024: "Cultivating Community" will feature a keynote by David Wilson

BOSTON, Massachusetts, USA -- October 10, 2023 -- The Free Software Foundation (FSF) today announced free software developer and video creator David Wilson, as its first keynote speaker for LibrePlanet 2024, the sixteenth edition of the FSF's conference on ethical technology and user freedom. LibrePlanet will be held in March in the Boston area as well as online.

10 October, 2023 09:30PM

October 06, 2023

poke @ Savannah

New GNU poke co-maintainer

I am happy to announce that Mohammad-Reza Nabipoor has just been appointed as GNU co-maintainer of poke.  When Mohammad joined the development he brought his enthusiasm with him, and has contributed lots of good code and ideas; he has also helped organize events, given great talks, and even printed a batch of the mighty poke t-shirts.

Starting with poke 4, I plan to delegate to Mohammad the maintenance of the stable series: backporting fixes to the corresponding maintenance branches and doing releases whenever appropriate.  That will allow me more time to focus on the development master branch.

Congrats and thank you Mohammad!

--
Jose E. Marchesi
Friday 6 October 2023
Frankfurt am Main

06 October, 2023 11:14AM by Jose E. Marchesi

October 05, 2023

gettext @ Savannah

GNU gettext 0.22.3 released

Download from https://ftp.gnu.org/pub/gnu/gettext/gettext-0.22.3.tar.gz

This is a bug-fix release.

New in this release:

  • Portability:
    • The libintl library now works on macOS 14.  (Older versions of libintl crash on macOS 14, due to an incompatible change in macOS.)

05 October, 2023 08:00AM by Bruno Haible

October 03, 2023

FSF News

The third round of FSF board candidate discussions has begun

All eligible associate members are invited to participate in the final round of this cycle's FSF board candidate discussions, as part of its board process.

03 October, 2023 08:48PM

September 27, 2023

FSF celebrates forty years of GNU with a hackday for families, hackers, and hackers-to-be

BOSTON, Massachusetts, USA -- Wednesday, September 27, 2023 -- Today, the GNU Project turned forty years old. To celebrate this, the Free Software Foundation (FSF) is hosting a hack day for families, students, and anyone interested in hacking.

27 September, 2023 06:02PM

September 24, 2023

parallel @ Savannah

GNU Parallel 20230922 ('Derna') released [stable]

GNU Parallel 20230922 ('Derna') has been released. It is available for download at: lbry://@GnuParallel:4

Quote of the month:

  Parallel is so damn good! You’ve got to use it.
    -- @ThePrimeTimeagen@youtube.com

New in this release:

  • No new features. This is a candidate for a stable release.
  • Bug fixes and man page updates.


News about GNU Parallel:


GNU Parallel - For people who live life in the parallel lane.

If you like GNU Parallel record a video testimonial: Say who you are, what you use GNU Parallel for, how it helps you, and what you like most about it. Include a command that uses GNU Parallel if you feel like it.


About GNU Parallel


GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today, you will find GNU Parallel very easy to use, as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.
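As a sketch of that loop replacement (the demo files and the use of gzip are purely illustrative; the parallel invocation is guarded in case GNU Parallel is not installed):

```shell
# Create two small demo files to compress.
mkdir -p demo
printf 'x\n' > demo/a.log
printf 'y\n' > demo/b.log
# Sequential shell loop: one gzip at a time (-k keeps the originals).
for f in demo/*.log; do gzip -kf "$f"; done
# GNU Parallel equivalent: same xargs-style options, but the jobs run
# concurrently (one per CPU core by default).
if command -v parallel >/dev/null 2>&1; then
  parallel gzip -kf ::: demo/*.log
fi
ls demo
```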

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.

For example you can run this to convert all jpeg files into png and gif files and have a progress bar:

  parallel --bar convert {1} {1.}.{2} ::: *.jpg ::: png gif

Or you can generate big, medium, and small thumbnails of all jpeg files in sub dirs:

  find . -name '*.jpg' |
    parallel convert -geometry {2} {1} {1//}/thumb{2}_{1/} :::: - ::: 50 100 200

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with:

    $ (wget -O - pi.dk/3 || lynx -source pi.dk/3 || curl pi.dk/3/ || \
       fetch -o - http://pi.dk/3 ) > install.sh
    $ sha1sum install.sh | grep 883c667e01eed62f975ad28b6d50e22a
    12345678 883c667e 01eed62f 975ad28b 6d50e22a
    $ md5sum install.sh | grep cc21b4c943fd03e93ae1ae49e28573c0
    cc21b4c9 43fd03e9 3ae1ae49 e28573c0
    $ sha512sum install.sh | grep ec113b49a54e705f86d51e784ebced224fdff3f52
    79945d9d 250b42a4 2067bb00 99da012e c113b49a 54e705f8 6d51e784 ebced224
    fdff3f52 ca588d64 e75f6033 61bd543f d631f592 2f87ceb2 ab034149 6df84a35
    $ bash install.sh

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your command line will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2018): GNU Parallel 2018, March 2018, https://doi.org/10.5281/zenodo.1146014.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/ Identi.ca/Google+/Twitter/Facebook/Linkedin/mailing lists
  • Get the merchandise https://gnuparallel.threadless.com/designs/gnu-parallel
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference


If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)


If GNU Parallel saves you money:



About GNU SQL


GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.
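As a sketch of the DBURL addressing scheme described above (the vendor, user, host, and database name here are all hypothetical):

```shell
# A DBURL has the form vendor://user[:password]@host[:port]/database.
user="alice"
host="db.example.com"
dbname="clinic"
dburl="mysql://${user}@${host}/${dbname}"
echo "$dburl"
# With GNU sql installed, passing only the DBURL drops you into that
# database's interactive shell; appending a command runs it instead, e.g.:
#   sql "$dburl" "SELECT 1;"
```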

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.


About GNU Niceload


GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.
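A minimal sketch of the idea (the load limit of 4 and the tar command are illustrative, and the niceload invocation itself is shown as a comment in case it is not installed; reading /proc/loadavg assumes Linux):

```shell
# GNU niceload suspends a job when the system load is above a limit.
# The 1-minute load average it consults can be read directly on Linux:
awk '{print $1}' /proc/loadavg
# A hedged invocation: suspend the backup whenever the load average
# exceeds 4, letting it resume when the load drops again:
#   niceload -l 4 tar czf /tmp/home-backup.tar.gz /home
```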

24 September, 2023 07:37PM by Ole Tange

health @ Savannah

Release of GNU Health HMIS 4.2.3 patchset

Dear community

GNU Health 4.2.3 patchset has been released!

Priority: High

Table of Contents


  • About GNU Health Patchsets
  • Updating your system with the GNU Health control Center
  • Installation notes
  • List of other issues related to this patchset


About GNU Health Patchsets


We provide "patchsets" to stable releases. Patchsets allow applying bug fixes and updates on production systems. Always try to keep your production system up-to-date with the latest patches.

Patches and patchsets maximize uptime on production systems and keep them updated, without the need for a whole reinstallation.

NOTE: Patchsets are applied on previously installed systems only. For new, fresh installations, download and install the whole tarball (i.e., gnuhealth-4.2.3.tar.gz)

Updating your system with the GNU Health control Center


Starting with the GNU Health 3.x series, you can do automatic updates of the GNU Health HMIS kernel and modules using the GNU Health control center program.

Please refer to the administration manual section ( https://en.wikibooks.org/wiki/GNU_Health/Control_Center )

The GNU Health control center works on standard installations (those done following the installation manual on wikibooks). Don't use it if you use an alternative method or if your distribution does not follow the GNU Health packaging guidelines.

Installation Notes


You must apply previous patchsets before installing this patchset. If your patchset level is 4.2.2, then just follow the general instructions. You can find the patchsets at GNU Health main download site at GNU.org (https://ftp.gnu.org/gnu/health/)

In most cases, GNU Health Control center (gnuhealth-control) takes care of applying the patches for you. 

Pre-requisites for upgrade to 4.2.3: None

Now follow the general instructions at
 https://en.wikibooks.org/wiki/GNU_Health/Control_Center

 
After applying the patches, make a full update of your GNU Health database as explained in the documentation.

When running "gnuhealth-control" for the first time, you will see the following message: "Please restart now the update with the new control center". Please do so: restart the process and the update will continue.
 

  • Restart the GNU Health server


List of other issues and tasks related to this patchset


  • bug #64712: Bug afer installing patcheset 4.2.2
  • bug #64706: Error saving party with photo due to PIL deprecation of ANTIALIAS


For detailed information about each issue, you can visit:
 https://savannah.gnu.org/bugs/?group=health
 
About each task, you can visit:
 https://savannah.gnu.org/task/?group=health

For detailed information you can read about Patches and Patchsets

24 September, 2023 04:20PM by Luis Falcon

September 23, 2023

GNUnet News

GNUnet 0.20.0

GNUnet 0.20.0 released

We are pleased to announce the release of GNUnet 0.20.0.
GNUnet is an alternative network stack for building secure, decentralized and privacy-preserving distributed applications. Our goal is to replace the old insecure Internet protocol stack. Starting from an application for secure publication of files, it has grown to include all kinds of basic protocol components and applications towards the creation of a GNU internet.

This is a new major release. It breaks protocol compatibility with the 0.19.x versions. Please be aware that Git master is thus henceforth (and has been for a while) INCOMPATIBLE with the 0.19.x GNUnet network, and interactions between old and new peers will result in issues. 0.19.x peers will be able to communicate with Git master or 0.20.x peers, but some services will not be compatible.
In terms of usability, users should be aware that there are still a number of known open issues in particular with respect to ease of use, but also some critical privacy issues especially for mobile users. Also, the nascent network is tiny and thus unlikely to provide good anonymity or extensive amounts of interesting information. As a result, the 0.20.0 release is still only suitable for early adopters with some reasonable pain tolerance.

Download links

The GPG key used to sign is: 3D11063C10F98D14BD24D1470B0998EF86F59B6A

Note that due to mirror synchronization, not all links might be functional early after the release. For direct access try http://ftp.gnu.org/gnu/gnunet/

Changes

A detailed list of changes can be found in the git log , the NEWS and the bug tracker .

Known Issues

  • There are known major design issues in the TRANSPORT, ATS and CORE subsystems which will need to be addressed in the future to achieve acceptable usability, performance and security.
  • There are known moderate implementation limitations in CADET that negatively impact performance.
  • There are known moderate design issues in FS that also impact usability and performance.
  • There are minor implementation limitations in SET that create unnecessary attack surface for availability.
  • The RPS subsystem remains experimental.
  • Some high-level tests in the test-suite fail non-deterministically due to the low-level TRANSPORT issues.

In addition to this list, you may also want to consult our bug tracker at bugs.gnunet.org which lists about 190 more specific issues.

Thanks

This release was the work of many people. The following people contributed code and were thus easily identified: Christian Grothoff, t3sserakt, TheJackiMonster, Marshall Stone, Özgür Kesim and Martin Schanzenbach.

23 September, 2023 10:00PM

September 22, 2023

unifont @ Savannah

Unifont 15.1.02 Released

21 September 2023 Unifont 15.1.02 is now available.
This is a minor release.  It adjusts 46 glyphs in the Wen Quan Yi range, and adds Plane 3 ideographs.  This release no longer builds TrueType fonts by default, as announced over the past year.  They have been replaced with their OpenType equivalents.  TrueType fonts can still be built manually by typing "make truetype" in the font directory.

This release also includes a new Hangul Syllables Johab 6/3/1 encoding proposed by Ho-Seok Ee.  New Hangul supporting software for this encoding allows formation of all double-width Hangul syllables, including those with ancient letters that are outside the Unicode Hangul Syllables range.  Details are in the ChangeLog file.

Download this release from GNU server mirrors at:

     https://ftpmirror.gnu.org/unifont/unifont-15.1.02/

or if that fails,

     https://ftp.gnu.org/gnu/unifont/unifont-15.1.02/

or, as a last resort,

     ftp://ftp.gnu.org/gnu/unifont/unifont-15.1.02/

These files are also available on the unifoundry.com website:

     https://unifoundry.com/pub/unifont/unifont-15.1.02/

Font files are in the subdirectory

     https://unifoundry.com/pub/unifont/unifont-15.1.02/font-builds/

A more detailed description of font changes is available at

      https://unifoundry.com/unifont/index.html

and of utility program changes at

      https://unifoundry.com/unifont/unifont-utilities.html

Information about Hangul modifications is at

      https://unifoundry.com/hangul/index.html

and

      http://unifoundry.com/hangul/hangul-generation.html

22 September, 2023 04:05AM by Paul Hardy

September 19, 2023

gettext @ Savannah

GNU gettext 0.22.1 released

Download from https://ftp.gnu.org/pub/gnu/gettext/gettext-0.22.1.tar.gz

This is a bug-fix release.

New in this release:

  • Bug fixes:
    • The libintl shared library now exports again some symbols that were accidentally missing.
    • xgettext's processing of large Perl files may have led to errors.
    • "xgettext --join-existing" could encounter errors.


  • Portability:
    • Building on Android is now supported.

19 September, 2023 08:35AM by Bruno Haible

September 18, 2023

FSF News

Forty years of GNU and the free software movement

BOSTON, Massachusetts, USA -- September 18, 2023 -- On September 27, the Free Software Foundation (FSF) celebrates the 40th anniversary of the GNU operating system and the launch of the free software movement. Free software advocates, tinkerers, and hackers all over the world will celebrate this event, which was a turning point in the history of computing. Forty years later, GNU and free software are even more relevant. While software has become deeply ingrained into everyday life, the vast majority of users do not have full control over it.

18 September, 2023 09:55PM

September 17, 2023

health @ Savannah

GNUHealth Hospital Management 4.2.2 patchset released

Dear community

GNU Health 4.2.2 patchset has been released!

Priority: High

Table of Contents


  • About GNU Health Patchsets
  • Updating your system with the GNU Health control Center
  • Installation notes
  • List of other issues related to this patchset


About GNU Health Patchsets


We provide "patchsets" to stable releases. Patchsets allow applying bug fixes and updates on production systems. Always try to keep your production system up-to-date with the latest patches.

Patches and patchsets maximize uptime on production systems and keep them updated, without the need for a whole reinstallation.

NOTE: Patchsets are applied on previously installed systems only. For new, fresh installations, download and install the whole tarball (i.e., gnuhealth-4.2.2.tar.gz)

Updating your system with the GNU Health control Center


Starting with the GNU Health 3.x series, you can do automatic updates of the GNU Health HMIS kernel and modules using the GNU Health control center program.

Please refer to the administration manual section ( https://en.wikibooks.org/wiki/GNU_Health/Control_Center )

The GNU Health control center works on standard installations (those done following the installation manual on wikibooks). Don't use it if you use an alternative method or if your distribution does not follow the GNU Health packaging guidelines.

Installation Notes


You must apply previous patchsets before installing this patchset. If your patchset level is 4.2.1, then just follow the general instructions. You can find the patchsets at GNU Health main download site at GNU.org (https://ftp.gnu.org/gnu/health/)

In most cases, GNU Health Control center (gnuhealth-control) takes care of applying the patches for you. 

Pre-requisites for upgrade to 4.2.2: None

Now follow the general instructions at
 https://en.wikibooks.org/wiki/GNU_Health/Control_Center

 
After applying the patches, make a full update of your GNU Health database as explained in the documentation.

When running "gnuhealth-control" for the first time, you will see the following message: "Please restart now the update with the new control center". Please do so: restart the process and the update will continue.
 

  • Restart the GNU Health server


List of bugs and tasks related to this patchset


  • bug #64269: get_serial function of LabTest class in health_crypto_lab.py need conside add units.name
  • bug #64386: Automatically update the appointment sequence when state is confirm
  • bug #64432: Gestational weeks show floating point instead of weeks
  • bug #64457: Patient automatic critical information entries should be unique
  • bug #64530: traceback on evaluation page of life if no institution is given
  • bug #64665: Product cost_price needs to be passed as an argument in stock moves



For detailed information about each issue, you can visit:
 https://savannah.gnu.org/bugs/?group=health
 
About each task, you can visit:
 https://savannah.gnu.org/task/?group=health

For detailed information you can read about Patches and Patchsets

17 September, 2023 02:03PM by Luis Falcon

September 15, 2023

GNU Health

Caminos Cruzados and GNU Solidario join forces in healthcare for Gambia with GNU Health

We’re very happy to announce that “Caminos Cruzados”, a Spanish not-for-profit organization, has signed an agreement with GNU Solidario to implement GNU Health in health institutions in the Gambia.

María Eugenia Ramos, president of the organization, traveled to Gran Canaria to meet with Luis Falcón, president of GNU Solidario, to formalize the agreement and plan the next actions in the Gambia.

During these three days we went through the main functionality of the GNU Health HMIS, with a focus on the areas of Social Medicine, Primary Care and Public Health.

Classroom before the renovation (source: Caminos Cruzados)

Caminos Cruzados and GNU Solidario share the conviction that health and education are indivisible. In fact, education is health: there is no healthy person or society without education.

Current state of some classrooms after renovation process (Source: Caminos Cruzados)

In the area of health, the NGO collaborates with the Ndungu Kebbeh health center, which cares for a population of around 8,000 people, in addition to the surrounding villages.

Ndungu Kebbeh health center (Source: Caminos Cruzados)

We are very excited about this agreement, and can’t wait to start collaborating with Caminos Cruzados and the local team in Gambia to implement GNU Health in the different health institutions. GNU Health will definitely help optimize the internal processes and resources of the health institutions and, most importantly, will help strengthen the health promotion and disease prevention programs for the betterment of their people.

Resources:

Caminos Cruzados: http://www.caminoscruzados.org

GNU Solidario: https://www.gnusolidario.org

GNU Health: https://www.gnuhealth.org

15 September, 2023 07:52PM by GNU Solidario

September 12, 2023

unifont @ Savannah

Unifont 15.1.01 Released

12 September 2023 Unifont 15.1.01 is now available.
This is a major release.  This release no longer builds TrueType fonts by default, as announced over the past year.  They have been replaced with their OpenType equivalents.  TrueType fonts can still be built manually by typing "make truetype" in the font directory.

This release also includes a new Hangul Syllables Johab 6/3/1 encoding proposed by Ho-Seok Ee.  New Hangul supporting software for this encoding allows formation of all double-width Hangul syllables, including those with ancient letters that are outside the Unicode Hangul Syllables range.  Details are in the ChangeLog file.

Download this release from GNU server mirrors at:

     https://ftpmirror.gnu.org/unifont/unifont-15.1.01/

or if that fails,

     https://ftp.gnu.org/gnu/unifont/unifont-15.1.01/

or, as a last resort,

     ftp://ftp.gnu.org/gnu/unifont/unifont-15.1.01/

These files are also available on the unifoundry.com website:

     https://unifoundry.com/pub/unifont/unifont-15.1.01/

Font files are in the subdirectory

     https://unifoundry.com/pub/unifont/unifont-15.1.01/font-builds/

A more detailed description of font changes is available at

      https://unifoundry.com/unifont/index.html

and of utility program changes at

      https://unifoundry.com/unifont/unifont-utilities.html

Information about Hangul modifications is at

      https://unifoundry.com/hangul/index.html

and

      http://unifoundry.com/hangul/hangul-generation.html

12 September, 2023 06:48PM by Paul Hardy

GNU Guix

A new Quality Assurance tool for Guix

Maintaining and expanding Guix's collection of packages can be complicated. In a distribution with around 22,000 packages, spanning around seven architectures and supporting cross-compilation, it's quite common for problems to occur when making changes.

Quality Assurance (QA) is a general term for the approach taken to try to ensure that something meets expectations. When applied to software, the term testing is normally used. While Guix is software, and has tests, much more than those tests is needed to maintain Guix as a distribution.

So what might quality relate to in the context of Guix as a distribution? This will differ from person to person, but these are some common concerns:

  • Packages successfully building (both now, and without any time bombs for the future)
  • The packaged software functioning correctly
  • Packages building on or for a specific architecture
  • Packages building reproducibly
  • Availability of translations for the package definitions

Tooling to help with Quality Assurance

There's a range of tools to help maintain Guix. The package linters are a set of simple tools; they range from basic checks, such as the naming of packages, to more complicated checkers that look for security issues, for example.

The guix weather tool looks at substitute availability information and can indicate how many substitutes are available for the current Guix and system. The guix challenge tool is similar, but it highlights package reproducibility issues, which is when the substitutes and local store items (if available) differ.

For translations, Guix uses Weblate which can provide information on how many translations are available.

The QA front-page

Then there's the relatively new Quality Assurance (QA) front-page, the aim of which is to bring together some of the existing Quality Assurance related information, as well as being a good place to do additional QA tasks.

The QA front-page started as a service to coordinate automated testing for patches. When a patch or patch series is submitted to guix-patches@gnu.org, it is automatically applied to create a branch; then, once information about this branch is available from the Data Service, the QA front-page web interface lets you view which packages were modified and submits builds for these changes to the Build Coordinator behind bordeaux.guix.gnu.org, to provide build information about the modified packages.

QA issue page

A very similar process applies to branches other than master: the QA front-page queries issues.guix.gnu.org to find out which branch is going to be merged next, then follows the same process as for patches.

For both patches and branches the QA front-page displays information about the effects of the changes. When this information is available, it can assist with reviewing the changes and help get patches merged more quickly. This is a work in progress though, and there's much more that the QA front-page should be able to do, such as providing clearer descriptions of the changes or of any other problems that should be addressed.

QA package changes page

How to get involved?

There's plenty of ways to get involved or contribute to the QA front-page.

If you submit patches to Guix, the QA front-page will attempt to apply the patches and show what's changed. You can click through from issues.guix.gnu.org to qa.guix.gnu.org via the QA badge next to the status of the issue.

From the QA front-page, you can also view the list of branches which includes the requests for merging if they exist. Similar to the patch series, for the branch the QA front-page can display information about the package changes and substitute availability.

There's also plenty of ways to contribute to the QA front-page and connected tools. You can find some ideas and information on how to run the service in the README and if you have any questions or patches, please email guix-devel@gnu.org.

Acknowledgments

Thanks to Simon Tournier and Ludovic Courtès for providing feedback on an earlier draft of this post.

About GNU Guix

GNU Guix is a transactional package manager and an advanced distribution of the GNU system that respects user freedom. Guix can be used on top of any system running the Hurd or the Linux kernel, or it can be used as a standalone operating system distribution for i686, x86_64, ARMv7, AArch64 and POWER9 machines.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. When used as a standalone GNU/Linux distribution, Guix offers a declarative, stateless approach to operating system configuration management. Guix is highly customizable and hackable through Guile programming interfaces and extensions to the Scheme language.

12 September, 2023 02:30PM by Christopher Baines

September 08, 2023

gnuboot @ Savannah

Testers needed for GNU Boot 0.1 RC1

Hi,

GNU Boot has published its first release candidate, and we need help
with testing, at first from people who are able to recover computers
that no longer boot.

This is because, while we have made only very minimal changes on top of
the code used by the last Libreboot release that didn't contain nonfree
software, we haven't tested all the images ourselves yet, so there are
still risks of ending up with computers that don't boot anymore.

If the code works fine, we will most likely be able to release it as-is
but we (the current maintainers) still have a lot of work to do before
the release.

For instance we still need to integrate the code from the website, find
good ways to deploy it, make sure that the installation documentation
works (for instance by asking for help from testers and fixing it), etc.

As for accepting patches, we're not ready to do that yet, but we plan
to have it working by the first release, or earlier depending on how
things go.

For reporting what images work, you can reply to this mail (or open a
bug report).

The GNU Boot maintainers.

08 September, 2023 09:09PM by Adrien Bourmault

September 01, 2023

Simon Josefsson

Trisquel on ppc64el: Talos II

The release notes for Trisquel 11.0 “Aramo” mention support for POWER and ARM architectures, however the download area only contains links for x86, and forum posts suggest there is a lack of instructions on how to run Trisquel on non-x86 hardware.

Since the release of Trisquel 11 I have been busy migrating x86 machines from Debian to Trisquel. One would think that I would be finished by now, but re-installing and migrating machines is really time consuming, especially if you allow yourself to be distracted every time you notice something that Really Ought to be improved. Rabbit holes all the way down. One of my production machines is running Debian 11 “bullseye” on a Talos II Lite machine from Raptor Computing Systems, and migrating the virtual machines running on that host (including the VM that serves this blog) to an x86 machine running Trisquel felt unsatisfying to me. I want to migrate my computing towards hardware that harmonizes with FSF’s Respects Your Freedom and not away from it. Here I had to choose between the non-free software present in newer Debian and the non-free software implied by most x86 systems: not an easy choice. So I have ignored the dilemma for some time. After all, the machine was running Debian 11 “bullseye”, which was released before Debian started to require the use of non-free software. With the end-of-life date for bullseye approaching, that no longer seems like a sustainable choice.

There is a report open about providing ppc64el ISOs that was created by Jason Self shortly after the release, but for many months nothing happened. About a month ago, Luis Guzmán mentioned an initial ISO build and I started testing it. The setup has worked well for a month, and with this post I want to contribute instructions on how to get it up and running, since these are still missing.

The setup of my soon-to-be new production machine:

  • Talos II Lite
  • POWER9 18-core v2 CPU
  • Inter-Tech 4U-4410 rack case with ASPOWER power supply
  • 8x32GB DDR4-2666 ECC RDIMM
  • HighPoint SSD7505 (the Rocket 1504 or 1204 would be a more cost-effective choice, but I re-used a component I had laying around)
  • PERC H700 aka LSI MegaRAID 2108 SAS/SATA (also found laying around)
  • 2x1TB NVMe
  • 3x18TB disks

According to the notes in issue 14 the ISO image is available at https://builds.trisquel.org/debian-installer-images/ and the following commands download, integrity check and write it to a USB stick:

wget -q https://builds.trisquel.org/debian-installer-images/debian-installer-images_20210731+deb11u8+11.0trisquel14_ppc64el.tar.gz
tar xfa debian-installer-images_20210731+deb11u8+11.0trisquel14_ppc64el.tar.gz ./installer-ppc64el/20210731+deb11u8+11/images/netboot/mini.iso
echo '6df8f45fbc0e7a5fadf039e9de7fa2dc57a4d466e95d65f2eabeec80577631b7  ./installer-ppc64el/20210731+deb11u8+11/images/netboot/mini.iso' | sha256sum -c
sudo wipefs -a /dev/sdX
sudo dd if=./installer-ppc64el/20210731+deb11u8+11/images/netboot/mini.iso of=/dev/sdX conv=sync status=progress

Sadly, no hash checksums or OpenPGP signatures are published.

Power off your device, insert the USB stick, and power it up; you will see a Petitboot menu offering to boot from the USB stick. For some reason, "Expert Install" was the default in the menu, so instead I selected "Default Install" for the regular experience. For this post, I will ignore BMC/IPMI, as interacting with it is not necessary. Make sure not to connect the BMC/IPMI ethernet port unless you are willing to enter that dungeon. The VGA console works fine with a normal USB keyboard, and you can choose to use only the second enP4p1s0f1 network card in the network card selection menu.

If you are familiar with Debian netinst ISOs, the installation is straightforward. I complicate the setup by creating two RAID1 partitions on the two NVMe sticks: one RAID1 for a 75GB ext4 root filesystem (discard,noatime) and one RAID1 for a 900GB LVM volume group for virtual machines, plus two 20GB swap partitions, one on each NVMe stick (to silence a warning about lack of swap; I’m not sure swap is still a good idea?). The 3x18TB disks use DM-integrity with RAID1; however, the installer does not support DM-integrity, so I had to create it after the installation.

There are two additional matters worth mentioning:

  • The apt mirror selection step does not offer the list of well-known Trisquel mirrors which the x86 installer provides. Instead I have to input the archive mirror manually; fortunately, the archive.trisquel.org hostname and path values are available as defaults, so I just press enter and fix this after the installation has finished. You may want to have the hostname/path of your local mirror handy, to speed things up.
  • The installer asks me which kernel to use, which the x86 installer does not do. I believe older Trisquel/Ubuntu installers asked this question, but it was gone in aramo on x86. I select the default “linux-image-generic”, which gives me a predictable 5.15 Linux-libre kernel, although you may want to choose “linux-image-generic-hwe-11.0” for a more recent 6.2 Linux-libre kernel. Maybe this is intentional debian-installer behaviour on non-x86 platforms?

I have re-installed the machine a couple of times, and have now finished installing the production setup. I haven’t run into any serious issues, and the system has been stable. Time to wrap up, and celebrate that I now run an operating system aligned with the Free System Distribution Guidelines on hardware that aligns with Respects Your Freedom — Happy Hacking indeed!

01 September, 2023 03:37PM by simon

August 29, 2023

coreutils @ Savannah

coreutils-9.4 released [stable]


This is to announce coreutils-9.4, a stable release.
This is a stabilization release coming about 19 weeks after the 9.3 release.
See the NEWS below for a summary of changes.

There have been 162 commits by 10 people in the 19 weeks since 9.3.
Thanks to everyone who has contributed!
The following people contributed changes to this release:

  Andreas Schwab (1)      Jim Meyering (1)
  Bernhard Voelker (3)    Paul Eggert (60)
  Bruno Haible (11)       Pádraig Brady (80)
  Dragan Simic (3)        Sylvestre Ledru (2)
  Jaroslav Skarvada (1)   Ville Skyttä (1)

Pádraig [on behalf of the coreutils maintainers]
==================================================================

Here is the GNU coreutils home page:
    http://gnu.org/s/coreutils/

For a summary of changes and contributors, see:
  http://git.sv.gnu.org/gitweb/?p=coreutils.git;a=shortlog;h=v9.4
or run this command from a git-cloned coreutils directory:
  git shortlog v9.3..v9.4

Here are the compressed sources:
  https://ftp.gnu.org/gnu/coreutils/coreutils-9.4.tar.gz   (15MB)
  https://ftp.gnu.org/gnu/coreutils/coreutils-9.4.tar.xz   (5.8MB)

Here are the GPG detached signatures:
  https://ftp.gnu.org/gnu/coreutils/coreutils-9.4.tar.gz.sig
  https://ftp.gnu.org/gnu/coreutils/coreutils-9.4.tar.xz.sig

Use a mirror for higher download bandwidth:
  https://www.gnu.org/order/ftp.html

Here are the SHA1 and SHA256 checksums:

  7dce42b8657e333ce38971d4ee512c4313b8f633  coreutils-9.4.tar.gz
  X2ANkJOXOwr+JTk9m8GMRPIjJlf0yg2V6jHHAutmtzk=  coreutils-9.4.tar.gz
  7effa305c3f4bc0d40d79f1854515ebf5f688a18  coreutils-9.4.tar.xz
  6mE6TPRGEjJukXIBu7zfvTAd4h/8O1m25cB+BAsnXlI=  coreutils-9.4.tar.xz

Verify the base64 SHA256 checksum with cksum -a sha256 --check
from coreutils-9.2 or OpenBSD's cksum since 2007.
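For systems whose cksum lacks the -a option, the same style of verification works with hex digests via sha256sum, available in far older coreutils. A sketch with a throwaway file (the release tarballs above would take the place of sample.txt):

```shell
# Build a checksum file and verify it; the hex-digest analogue of
# `cksum -a sha256 --check` on the base64 sums above:
printf 'hello\n' > sample.txt
sha256sum sample.txt > sample.sums
sha256sum --check sample.sums    # prints "sample.txt: OK"
```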

Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact.  First, be sure to download both the .sig file
and the corresponding tarball.  Then, run a command like this:

  gpg --verify coreutils-9.4.tar.gz.sig

The signature should match the fingerprint of the following key:

  pub   rsa4096/0xDF6FD971306037D9 2011-09-23 [SC]
        Key fingerprint = 6C37 DC12 121A 5006 BC1D  B804 DF6F D971 3060 37D9
  uid                   [ unknown] Pádraig Brady <P@draigBrady.com>
  uid                   [ unknown] Pádraig Brady <pixelbeat@gnu.org>

If that command fails because you don't have the required public key,
or that public key has expired, try the following commands to retrieve
or refresh it, and then rerun the 'gpg --verify' command.

  gpg --locate-external-key P@draigBrady.com

  gpg --recv-keys DF6FD971306037D9

  wget -q -O- 'https://savannah.gnu.org/project/release-gpgkeys.php?group=coreutils&download=1' | gpg --import -

As a last resort to find the key, you can try the official GNU
keyring:

  wget -q https://ftp.gnu.org/gnu/gnu-keyring.gpg
  gpg --keyring gnu-keyring.gpg --verify coreutils-9.4.tar.gz.sig

This release was bootstrapped with the following tools:
  Autoconf 2.72c.32-cb6fb
  Automake 1.16.5
  Gnulib v0.1-6658-gbb5bb43a1e
  Bison 3.8.2

NEWS

* Noteworthy changes in release 9.4 (2023-08-29) [stable]

** Bug fixes

  On GNU/Linux s390x and alpha, programs like 'cp' and 'ls' no longer
  fail on files with inode numbers that do not fit into 32 bits.
  [This bug was present in "the beginning".]

  'b2sum --check' will no longer read unallocated memory when
  presented with malformed checksum lines.
  [bug introduced in coreutils-9.2]

  'cp --parents' again succeeds when preserving mode for absolute directories.
  Previously it would have failed with a "No such file or directory" error.
  [bug introduced in coreutils-9.1]

  'cp --sparse=never' will avoid copy-on-write (reflinking) and copy offloading,
  to ensure no holes are present in the destination copy.
  [bug introduced in coreutils-9.0]

  cksum again diagnoses read errors in its default CRC32 mode.
  [bug introduced in coreutils-9.0]

  'cksum --check' now ensures filenames with a leading backslash character
  are escaped appropriately in the status output.
  This also applies to the standalone checksumming utilities.
  [bug introduced in coreutils-8.25]

  dd again supports more than two multipliers for numbers.
  Previously numbers of the form '1024x1024x32' gave "invalid number" errors.
  [bug introduced in coreutils-9.1]
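The fix above concerns dd's chained multiplier syntax; a quick sketch of what that syntax means (2x3x4 evaluates to 24):

```shell
# dd accepts chained 'x' multipliers in numeric operands;
# count=2x3x4 means 2*3*4 = 24 blocks of bs=1 byte each:
dd if=/dev/zero bs=1 count=2x3x4 2>/dev/null | wc -c    # prints 24
```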

  factor, numfmt, and tsort now diagnose read errors on the input.
  [This bug was present in "the beginning".]

  'install --strip' now supports installing to files with a leading hyphen.
  Previously such file names would have caused the strip process to fail.
  [This bug was present in "the beginning".]

  ls now shows symlinks specified on the command line that can't be traversed.
  Previously a "Too many levels of symbolic links" diagnostic was given.
  [This bug was present in "the beginning".]

  pinky, uptime, users, and who no longer misbehave on 32-bit GNU/Linux
  platforms like x86 and ARM where time_t was historically 32 bits.
  Also see the new --enable-systemd option mentioned below.
  [bug introduced in coreutils-9.0]

  'pr --length=1 --double-space' no longer enters an infinite loop.
  [This bug was present in "the beginning".]

  shred again operates on Solaris when built for 64 bits.
  Previously it would have exited with a "getrandom: Invalid argument" error.
  [bug introduced in coreutils-9.0]

  tac now handles short reads on its input.  Previously it could exit
  erroneously, especially with large input files containing no separators.
  [This bug was present in "the beginning".]

  'uptime' no longer incorrectly prints "0 users" on OpenBSD,
  and is being built again on FreeBSD and Haiku.
  [bugs introduced in coreutils-9.2]

  'wc -l' and 'cksum' no longer crash with an "Illegal instruction" error
  on x86 Linux kernels that disable XSAVE YMM.  This was seen on Xen VMs.
  [bug introduced in coreutils-9.0]

** Changes in behavior

  'cp -v' and 'mv -v' will no longer output a message for each file skipped
  due to -i or -u.  Instead they only output this information with --debug.
  I.e., 'cp -u -v' etc. will have the same verbosity as before coreutils-9.3.
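The change is easy to observe with a throwaway pair of files (a sketch; with 9.4 the skip is silent unless --debug is given, while 9.3 printed a message either way; the destination content is unchanged in both cases):

```shell
# Make the destination strictly newer than the source so that -u skips the copy
# (hypothetical file names):
mkdir -p cpdemo
printf 'old\n' > cpdemo/src
printf 'new\n' > cpdemo/dst
touch -d '2030-01-01' cpdemo/dst
cp -u -v cpdemo/src cpdemo/dst    # copy is skipped; dst still contains "new"
```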

  'cksum -b' no longer prints base64-encoded checksums.  Rather, that
  short option is reserved to better support emulation of the standalone
  checksum utilities with cksum.

  'mv dir x' now complains differently if x/dir is a nonempty directory.
  Previously it said "mv: cannot move 'dir' to 'x/dir': Directory not empty",
  where it was unclear whether 'dir' or 'x/dir' was the problem.
  Now it says "mv: cannot overwrite 'x/dir': Directory not empty".
  Similarly for other renames where the destination must be the problem.
  [problem introduced in coreutils-6.0]
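The failing rename is easy to reproduce with throwaway directories (a sketch; the exact wording of the diagnostic depends on your coreutils version, as described above):

```shell
# x/dir is nonempty, so renaming dir over it must fail:
mkdir -p x/dir dir
touch x/dir/file
mv dir x 2>mv.err || echo "mv failed as expected"
grep -o 'Directory not empty' mv.err
```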

** Improvements

  cp, mv, and install now avoid copy_file_range on linux kernels before 5.3
  irrespective of which kernel version coreutils is built against,
  reinstating that behavior from coreutils-9.0.

  comm, cut, join, od, and uniq will now exit immediately upon receiving a
  write error, which is significant when reading large / unbounded inputs.

  split now uses more tuned access patterns for its potentially large input.
  This was seen to improve throughput by 5% when reading from SSD.

  split now supports a configurable $TMPDIR for handling any temporary files.

  tac now falls back to '/tmp' if a configured $TMPDIR is unavailable.

  'who -a' now displays the boot time on Alpine Linux, OpenBSD,
  Cygwin, Haiku, and some Android distributions

  'uptime' now succeeds on some Android distributions, and now counts
  VM saved/sleep time on GNU (Linux, Hurd, kFreeBSD), NetBSD, OpenBSD,
  Minix, and Cygwin.

  On GNU/Linux platforms where utmp-format files have 32-bit timestamps,
  pinky, uptime, and who can now work for times after the year 2038,
  so long as systemd is installed, you configure with a new, experimental
  option --enable-systemd, and you use the programs without file arguments.
  (For example, with systemd 'who /var/log/wtmp' does not work because
  systemd does not support the equivalent of /var/log/wtmp.)


29 August, 2023 03:16PM by Pádraig Brady

August 26, 2023

GNUnet News

GSoC Work Product: GNUnet over QUIC

GSoC Work Product: GNUnet over QUIC

Hi, my name is Marshall and throughout the summer of 2023 I worked on developing a new communicator for the GNUnet transport service. I learned a lot about GNUnet through my development experience. Here are some details about the journey!

Goals of the Project.

The goal of this project was to develop a new transport, QUIC, for the Transport Next Generation (TNG) service. TNG is a successor to the previous transport plugins and will be running in the fall 2023 GNUnet release. At the time of writing, GNUnet supports transports over TCP, UDP, and UNIX sockets. I chose to implement a QUIC transport communicator due to the rising popularity and speed of this protocol. Because of this popularity, QUIC will be a great transport protocol for GNUnet traffic to sit on top of. QUIC is intended to be a faster alternative to TCP and tries to address some of the issues that TLS has.

What I completed.

One of the first steps was deciding on a library that can process QUIC packets and is available to users running different operating systems. We chose Cloudflare's Quiche library because its C API seemed simpler than that of other available libraries. Installing cloudflare-quiche via the Homebrew package manager (macOS) did not actually install the libraries properly for linking with other C programs, so I made a pull request in the Homebrew repository and fixed the formula. After this, I worked on the receiving functionality of the communicator, which involved reading from the socket and then processing the QUIC packets using the Quiche library. Then I implemented the ability to send messages in a similar manner. One of the last steps involved connecting everything together with the transport service so that the communicator can receive information about peers and relay messages. Once I finished these tasks, the QUIC communicator was merged upstream and is currently an experimental feature. This is due to the packaging situation with Quiche, as it is difficult for some users to install the library, and there may still be bugs lingering in the QUIC communicator. More testing and refinement is needed to offer a truly robust and reliable communicator. Link to source code: QUIC communicator.

The current state.

The QUIC communicator currently functions and passes basic communicator tests. That being said, there are some latency issues that need to be addressed. Since the communicator suite is designed to run alongside the new TNG service, it is not yet usable, as TNG is still under development (as mentioned previously). Some other things that have yet to be implemented in the QUIC communicator, and will be addressed in the future, are mentioned below.

Future Work.

We still need to develop a more permanent solution for certificate generation so that the Quiche API functions properly; this has been done in previous implementations (for example the HTTPS plugin). Currently, we are using static example certificates. We also need to add timers to each QUIC connection so that a timeout closes the connection. Finally, we should look into lowering latency by finding the points where the communicator is too slow and optimizing them.

Challenges I Encountered.

One of the challenges was reverse engineering the Quiche C API because it has very limited documentation. I learned how to use the API by studying the simple client and server examples provided in the Quiche repository. The Rust API, which is documented and operates quite similarly, was also helpful at times. I overcame this challenge with the help and guidance of my mentor Martin Schanzenbach.

Final notes.

Overall, my experience with GNUnet was fantastic. My mentors were friendly and consistently available when I needed help, and I thank them for that. I'm thankful for the GNUnet community for being welcoming and understanding toward new open source developers like myself. I had a lot of fun learning how GNUnet works while developing my project. I am looking forward to contributing to GNUnet in the future!

26 August, 2023 10:00PM

August 24, 2023

parallel @ Savannah

GNU Parallel 20230822 ('Chandrayaan') released [stable]

GNU Parallel 20230822 ('Chandrayaan') has been released. It is available for download at: lbry://@GnuParallel:4

Quote of the month:

  GNU parallel is your friend. Unleash your cores! #GNU
    -- Blake L @BlakeDL@twitter

New in this release:

  • Bug fixes and man page updates.

News about GNU Parallel:

GNU Parallel - For people who live life in the parallel lane.

If you like GNU Parallel, record a video testimonial: Say who you are, what you use GNU Parallel for, how it helps you, and what you like most about it. Include a command that uses GNU Parallel if you feel like it.


About GNU Parallel


GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today, you will find GNU Parallel very easy to use, as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.
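The xargs correspondence can be seen side by side (a sketch; the commented line assumes GNU Parallel is installed and acts as a drop-in replacement for this simple case):

```shell
# Run one echo per input line with xargs:
printf 'a\nb\nc\n' | xargs -n1 echo
# The same invocation style with GNU Parallel, jobs running concurrently:
# printf 'a\nb\nc\n' | parallel -n1 echo
```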

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.

For example you can run this to convert all jpeg files into png and gif files and have a progress bar:

  parallel --bar convert {1} {1.}.{2} ::: *.jpg ::: png gif

Or you can generate big, medium, and small thumbnails of all jpeg files in sub dirs:

  find . -name '*.jpg' |
    parallel convert -geometry {2} {1} {1//}/thumb{2}_{1/} :::: - ::: 50 100 200

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with:

    $ (wget -O - pi.dk/3 || lynx -source pi.dk/3 || curl pi.dk/3/ || \
       fetch -o - http://pi.dk/3 ) > install.sh
    $ sha1sum install.sh | grep 883c667e01eed62f975ad28b6d50e22a
    12345678 883c667e 01eed62f 975ad28b 6d50e22a
    $ md5sum install.sh | grep cc21b4c943fd03e93ae1ae49e28573c0
    cc21b4c9 43fd03e9 3ae1ae49 e28573c0
    $ sha512sum install.sh | grep ec113b49a54e705f86d51e784ebced224fdff3f52
    79945d9d 250b42a4 2067bb00 99da012e c113b49a 54e705f8 6d51e784 ebced224
    fdff3f52 ca588d64 e75f6033 61bd543f d631f592 2f87ceb2 ab034149 6df84a35
    $ bash install.sh

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your command line will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2018): GNU Parallel 2018, March 2018, https://doi.org/10.5281/zenodo.1146014.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/ Identi.ca/Google+/Twitter/Facebook/Linkedin/mailing lists
  • Get the merchandise https://gnuparallel.threadless.com/designs/gnu-parallel
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference


If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)


If GNU Parallel saves you money:



About GNU SQL


GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out, you will get that database's interactive shell.
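A DBURL packs the login information into a single string; its pieces can be pulled apart with ordinary shell parameter expansion (a sketch with hypothetical credentials; `sql` itself is not invoked here):

```shell
# DBURL layout: protocol://user:password@host:port/database
dburl='mysql://alice:secret@dbhost:3306/inventory'
proto=${dburl%%://*}                  # mysql
rest=${dburl#*://}                    # alice:secret@dbhost:3306/inventory
db=${rest##*/}                        # inventory
host=${rest#*@}; host=${host%%[:/]*}  # dbhost
echo "$proto $host $db"
```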

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.


About GNU Niceload


GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.
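The limit check niceload performs can be sketched as a single comparison (hypothetical numbers; the real tool samples the load repeatedly and suspends/resumes the job with signals):

```shell
load=0.42   # sample one-minute load average (hypothetical; see /proc/loadavg)
limit=3     # the configured load limit
# awk does the floating-point comparison the shell cannot:
if awk -v l="$load" -v max="$limit" 'BEGIN { exit !(l < max) }'; then
  echo "below limit: let the job run"
else
  echo "above limit: suspend the job"
fi
```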

24 August, 2023 04:31AM by Ole Tange

August 20, 2023

poke @ Savannah

GNU poke 3.3 released

I am happy to announce a new release of GNU poke, version 3.3.

This is a bugfix release in the 3.x series.

See the file NEWS in the distribution tarball for a list of issues
fixed in this release.

The tarball poke-3.3.tar.gz is now available at
https://ftp.gnu.org/gnu/poke/poke-3.3.tar.gz.

  GNU poke (http://www.jemarch.net/poke) is an interactive, extensible
  editor for binary data.  Not limited to editing basic entities such
  as bits and bytes, it provides a full-fledged procedural,
  interactive programming language designed to describe data
  structures and to operate on them.

Thanks to the people who contributed with code and/or documentation to
this release.

Happy poking!

--
Jose E. Marchesi
Frankfurt am Main
20 August 2023

20 August, 2023 03:41PM by Jose E. Marchesi

gzip @ Savannah

gzip-1.13 released [stable]


This is to announce gzip-1.13, a stable release.
Thanks to Paul and Bruno for contributing.

There have been 50 commits by 3 people in the 71 weeks since 1.12.

See the NEWS below for a brief summary.

Thanks to everyone who has contributed!
The following people contributed changes to this release:

  Bruno Haible (4)
  Jim Meyering (15)
  Paul Eggert (31)

Jim
 [on behalf of the gzip maintainers]
==================================================================

Here is the GNU gzip home page:
    http://gnu.org/s/gzip/

For a summary of changes and contributors, see:
  http://git.sv.gnu.org/gitweb/?p=gzip.git;a=shortlog;h=v1.13
or run this command from a git-cloned gzip directory:
  git shortlog v1.12..v1.13

Here are the compressed sources:
  https://ftp.gnu.org/gnu/gzip/gzip-1.13.tar.gz   (1.3MB)
  https://ftp.gnu.org/gnu/gzip/gzip-1.13.tar.xz   (820KB)

Here are the GPG detached signatures:
  https://ftp.gnu.org/gnu/gzip/gzip-1.13.tar.gz.sig
  https://ftp.gnu.org/gnu/gzip/gzip-1.13.tar.xz.sig

Use a mirror for higher download bandwidth:
  https://www.gnu.org/order/ftp.html

Here are the SHA1 and SHA256 checksums:

  9cc4f2220c8028823433e9d869dc07610aefefb5  gzip-1.13.tar.gz
  IPyBiu666Hzb8gnTUUGtnTzzErNaXmvmG/z7+e3dISo=  gzip-1.13.tar.gz
  a793e107a54769576adc16703f97c39ee7afdd4e  gzip-1.13.tar.xz
  dFTraTXbF8ZlVXbC4bD6vv04tNCTbg+H9IzQYs6RoFc=  gzip-1.13.tar.xz

Verify the base64 SHA256 checksum with cksum -a sha256 --check
from GNU coreutils-9.2 or OpenBSD's cksum since 2007.

Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact.  First, be sure to download both the .sig file
and the corresponding tarball.  Then, run a command like this:

  gpg --verify gzip-1.13.tar.gz.sig

The signature should match the fingerprint of the following key:

  pub   rsa4096/0x7FD9FCCB000BEEEE 2010-06-14 [SCEA]
        Key fingerprint = 155D 3FC5 00C8 3448 6D1E  EA67 7FD9 FCCB 000B EEEE
  uid                   [ unknown] Jim Meyering <jim@meyering.net>
  uid                   [ unknown] Jim Meyering <meyering@fb.com>
  uid                   [ unknown] Jim Meyering <meyering@gnu.org>

If that command fails because you don't have the required public key,
or that public key has expired, try the following commands to retrieve
or refresh it, and then rerun the 'gpg --verify' command.

  gpg --locate-external-key jim@meyering.net

  gpg --recv-keys 7FD9FCCB000BEEEE

  wget -q -O- 'https://savannah.gnu.org/project/release-gpgkeys.php?group=gzip&download=1' | gpg --import -

As a last resort to find the key, you can try the official GNU
keyring:

  wget -q https://ftp.gnu.org/gnu/gnu-keyring.gpg
  gpg --keyring gnu-keyring.gpg --verify gzip-1.13.tar.gz.sig

This release was bootstrapped with the following tools:
  Autoconf 2.72c.32-cb6fb
  Automake 1.16i
  Gnulib v0.1-6631-g5651802c60

NEWS

* Noteworthy changes in release 1.13 (2023-08-19) [stable]

** Changes in behavior

  zless now diagnoses gzip failures, if using less 623 or later.

  When SIGPIPE is ignored, gzip now exits with status 2 (warning)
  instead of status 1 (error) when writing to a broken pipe.  This is
  more useful with programs like 'less' that treat gzip exit status 2
  as a non-failure.

** Bug fixes

  'gzip -d' no longer fails to report invalid compressed data
  that uses a dictionary distance outside the input window.
  [bug present since the beginning]

  Port to C23, which does not allow K&R-style function definitions
  with parameters, and which does not define __alignas_is_defined.


20 August, 2023 12:33AM by Jim Meyering

August 17, 2023

screen @ Savannah

GNU Screen v.4.9.1

I'm announcing availability of GNU Screen v.4.9.1

Screen is a full-screen window manager that multiplexes a physical terminal between several processes, typically interactive shells.

This release:

  • Support stop/parity bits on serial port
  • Add needed system headers in checks and return values for implicit function declarations
  • Fixes:

    - Avoid zombies after shell exit
    - Missed signal sending permission check on failed query messages (CVE-2023-24626)
    - Manpage fixes
    - Source code fixes during cleanup
    - UTF-8 encoding could emit invalid UTF-8 sequences for out-of-range Unicode values


For full list of changes see
https://git.savannah.gnu.org/cgit/screen.git/log/?h=v.4.9.1

Release is available for download at:
https://ftp.gnu.org/gnu/screen/
or your closest mirror (may have some delay)
https://ftpmirror.gnu.org/screen/

Please report any bugs or regressions.
Thanks to everyone who contributed to this release.

Cheers,
Alex

17 August, 2023 02:24PM by Alexander Naumov

August 14, 2023

www-zh-cn @ Savannah

Welcome our new member - Jing Luo

Hi, all

I am very glad that we have a new member.

Real Name: Jing Luo
Login Name: jing
Id: #346988
Email Address: szmun.luoj@gmail.com

Jing Luo will show their passion for Free Software, and try to let more people know about GNU.

I wish Jing Luo a pleasant journey to a better world.

Let's welcome Jing Luo in this big family.
wxie

14 August, 2023 11:31PM by Wensheng XIE

gnucobol @ Savannah

Release of GnuCOBOL 3.2

GnuCOBOL 3.2 includes many new features compared to the previous release, while maintaining full source compatibility.

The amount of change from GnuCOBOL 3.1 to 3.2 is huge; here are some of the highlights:

  • improved dialect handling, including changed defaults to better match the selected dialect (see NEWS if you compile with any `-std` to learn more about the implications), a complete new GCOS dialect, and support for more COBOL statements, intrinsic functions and syntax from both "old" and new dialects
  • greatly improved run times for several statements, along with lower memory usage, especially if runtime checks are enabled
  • fileio changes to support LINE-SEQUENTIAL per COBOL2023 and runtime options to change the way files are handled, see NEWS and runtime.cfg
  • improvements for source-level debugging via GDB and coredump support
  • output of context for diagnostics
  • improvements for reproducible builds


For a much more complete list, have a look at the NEWS file.

14 August, 2023 09:43PM by Simon Sobisch

August 11, 2023

Parabola GNU/Linux-libre

[From Arch]: budgie-desktop >= 10.7.2-6 update requires manual intervention

When upgrading from budgie-desktop 10.7.2-5 to 10.7.2-6, the package mutter43 must be replaced with magpie-wm, which currently depends on mutter. As mutter43 conflicts with mutter, manual intervention is required to complete the upgrade.

First remove mutter43, then immediately perform the upgrade. Do not relog or reboot between these steps.

  pacman -Rdd mutter43
  pacman -Syu

11 August, 2023 03:07PM by bill auger