Planet GNU

Aggregation of development blogs from the GNU Project

June 25, 2022

June 24, 2022

FSF Events

Mastodon Hour on Mastodon: Friday, July 8, starting at 16:00 EDT (20:00 UTC)

Join the FSF for discussions around "Helping others find their reason to support free software" and "on the freedom ladder" on Friday, July 8, from 16:00 to 17:00 EDT (20:00 to 21:00 UTC).

24 June, 2022 09:35PM

FSF Blogs

FSD meeting recap 2022-06-24

Check out the great work our volunteers accomplished at today's Free Software Directory (FSD) meeting.

24 June, 2022 08:06PM

pspp @ Savannah

PSPP 1.6.1 has been released.

I'm very pleased to announce the release of a new version of GNU PSPP.  PSPP is a program for statistical analysis of sampled data.  It is a free replacement for the proprietary program SPSS.

Changes from 1.6.0 to 1.6.1:

  • The SET command now supports LEADZERO for controlling output of a leading zero in F, COMMA, and DOT format.
  • Bug fixes and translation updates.

Please send PSPP bug reports to bug-gnu-pspp@gnu.org.

24 June, 2022 04:57PM by Ben Pfaff

June 23, 2022

parallel @ Savannah

GNU Parallel 20220622 ('Bongbong') released

GNU Parallel 20220622 ('Bongbong') has been released. It is available for download at: http://ftpmirror.gnu.org/parallel/

Quote of the month:

  Parallel has been (and still is) super useful and simple tool for speeding up all kinds of shell tasks during my career.
    -- ValtteriL@ycombinator

New in this release:

  • , can be used in --sshlogin if quoted as \, or ,,
  • --plus {/#regexp/str} replace ^regexp with str.
  • --plus {/%regexp/str} replace regexp$ with str.
  • --plus {//regexp/str} replace every regexp with str.
  • 'make install' installs bash+zsh completion files.
  • Bug fixes and man page updates.

GNU Parallel - For people who live life in the parallel lane.

If you like GNU Parallel, record a video testimonial: Say who you are, what you use GNU Parallel for, how it helps you, and what you like most about it. Include a command that uses GNU Parallel if you feel like it.

About GNU Parallel

GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.

For example you can run this to convert all jpeg files into png and gif files and have a progress bar:

  parallel --bar convert {1} {1.}.{2} ::: *.jpg ::: png gif

Or you can generate big, medium, and small thumbnails of all jpeg files in sub dirs:

  find . -name '*.jpg' |
    parallel convert -geometry {2} {1} {1//}/thumb{2}_{1/} :::: - ::: 50 100 200

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with:

    $ (wget -O - pi.dk/3 || lynx -source pi.dk/3 || curl pi.dk/3/ || \
       fetch -o - http://pi.dk/3 ) > install.sh
    $ sha1sum install.sh | grep 883c667e01eed62f975ad28b6d50e22a
    12345678 883c667e 01eed62f 975ad28b 6d50e22a
    $ md5sum install.sh | grep cc21b4c943fd03e93ae1ae49e28573c0
    cc21b4c9 43fd03e9 3ae1ae49 e28573c0
    $ sha512sum install.sh | grep ec113b49a54e705f86d51e784ebced224fdff3f52
    79945d9d 250b42a4 2067bb00 99da012e c113b49a 54e705f8 6d51e784 ebced224
    fdff3f52 ca588d64 e75f6033 61bd543f d631f592 2f87ceb2 ab034149 6df84a35
    $ bash install.sh

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your command line will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2018): GNU Parallel 2018, March 2018, https://doi.org/10.5281/zenodo.1146014.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/ Identi.ca/Google+/Twitter/Facebook/Linkedin/mailing lists
  • Get the merchandise https://gnuparallel.threadless.com/designs/gnu-parallel
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference

If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)

If GNU Parallel saves you money:

  • (Have your company) donate to FSF https://my.fsf.org/donate/

About GNU SQL

GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.

About GNU Niceload

GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.

23 June, 2022 08:29AM by Ole Tange

June 22, 2022

www @ Savannah

New Article by Richard Stallman

The GNU Education Team has published a new article by Richard Stallman on the threats of Big Tech in the field of education.

Many Governments Encourage Schools to Let Companies Snoop on Students

Human Rights Watch studied 164 software programs and web sites recommended by various governments for schools to make students use. It found that 146 of them gave data to advertising and tracking companies.

The researchers were thorough and checked for various snooping methods, including fingerprinting of devices to identify users. The targets of the investigation were not limited to programs and sites specifically “for education;” they included, for instance, Zoom and Microsoft Teams.

I expect that each program collected personal data for its developer. I'm not sure whether the results counted that, but they should. Once the developer company gets personal data, it can provide that data to advertising profilers, as well as to other companies and governments, and it can engage directly in manipulation of students and teachers.

The recommendations Human Rights Watch makes follow the usual approach of regulating the use of data once collected.  This is fundamentally inadequate; personal data, once collected, will surely be misused.

The only approach that makes it possible to end massive surveillance starts with demanding that the software be free. Then users will be able to modify the software to avoid giving real data to companies.

More at gnu/education...

22 June, 2022 01:42PM by Dora Scilipoti

June 21, 2022

Andy Wingo

an optimistic evacuation of my wordhoard

Good morning, mallocators. Last time we talked about how to split available memory between a block-structured main space and a large object space. Given a fixed heap size, making a new large object allocation will steal available pages from the block-structured space by finding empty blocks and temporarily returning them to the operating system.

Today I'd like to talk more about nothing, or rather, why might you want nothing rather than something. Given an Immix heap, why would you want it organized in such a way that live data is packed into some blocks, leaving other blocks completely free? How bad would it be if instead the live data were spread all over the heap? When might it be a good idea to try to compact the heap? Ideally we'd like to be able to translate the answers to these questions into heuristics that can inform the GC when compaction/evacuation would be a good idea.

lospace and the void

Let's start with one of the more obvious points: large object allocation. With a fixed-size heap, you can't allocate new large objects if you don't have empty blocks in your paged space (the Immix space, for example) that you can return to the OS. To obtain these free blocks, you have four options.

  1. You can continue lazy sweeping of recycled blocks, to see if you find an empty block. This is a bit time-consuming, though.

  2. Otherwise, you can trigger a regular non-moving GC, which might free up blocks in the Immix space but which is also likely to free up large objects, which would result in fresh empty blocks.

  3. You can trigger a compacting or evacuating collection. Immix can't actually compact the heap all in one go, so you would preferentially select evacuation-candidate blocks by choosing the blocks with the least live data (as measured at the last GC), hoping that little data will need to be evacuated.

  4. Finally, for environments in which the heap is growable, you could just grow the heap instead. In this case you would configure the system to target a heap size multiplier rather than a heap size, which would scale the heap to be e.g. twice the size of the live data, as measured at the last collection.
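These four options can be thought of as a cascade, tried in order of increasing cost. A compilable toy sketch of the control flow, in which every function is a hypothetical stub (this is not actual collector code; each stub stands in for one of the options above):

```c
#include <stddef.h>

/* Stubbed-out collector hooks; each returns an empty block or NULL.
 * All names here are hypothetical, standing in for options 1-4 above. */
typedef struct block block_t;
static block_t *lazy_sweep_for_empty(void) { return NULL; }
static block_t *nonmoving_collect(void)    { return NULL; }
static block_t *evacuating_collect(void)   { return NULL; }
static block_t *grow_heap(int growable)    { (void)growable; return NULL; }

/* Try the four options in order of increasing cost. */
block_t *obtain_empty_block(int heap_growable) {
    block_t *b;
    if ((b = lazy_sweep_for_empty())) return b;  /* 1. lazy sweep    */
    if ((b = nonmoving_collect()))    return b;  /* 2. regular GC    */
    if ((b = evacuating_collect()))   return b;  /* 3. evacuation    */
    return grow_heap(heap_growable);             /* 4. last resort   */
}
```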

If you have a growable heap, I think you will rarely choose to compact rather than grow the heap: you will either collect or grow. Under constant allocation rate, the rate of empty blocks being reclaimed from freed lospace objects will be equal to the rate at which they are needed, so if collection doesn't produce any, then that means your live data set is increasing and so growing is a good option. Anyway let's put growable heaps aside, as heap-growth heuristics are a separate gnarly problem.

The question becomes, when should large object allocation force a compaction? Absent growable heaps, the answer is clear: when allocating a large object fails because there are no empty pages, but the statistics show that there is actually ample free memory. Good! We have one heuristic, and one with an optimum: you could compact in other situations but from the point of view of lospace, waiting until allocation failure is the most efficient.

shrinkage

Moving on, another use of empty blocks is when shrinking the heap. The collector might decide that it's a good idea to return some memory to the operating system. For example, I enjoyed this recent paper on heuristics for optimum heap size, which advocates sizing the heap in proportion to the square root of the allocation rate and, as a consequence, promptly returning memory to the OS when/if the application reaches a dormant state.

Here, we have a similar heuristic for when to evacuate: when we would like to release memory to the OS but we have no empty blocks, we should compact. We use the same evacuation candidate selection approach as before, also, aiming for maximum empty block yield.

fragmentation

What if you go to allocate a medium object, say 4kB, but there is no hole that's 4kB or larger? In that case, your heap is fragmented. The smaller your heap size, the more likely this is to happen. We should compact the heap to make the maximum hole size larger.

side note: compaction via partial evacuation

The evacuation strategy of Immix is... optimistic. A mark-compact collector will compact the whole heap, but Immix will only be able to evacuate a fraction of it.

It's worth dwelling on this a bit. As described in the paper, Immix reserves around 2-3% of overall space for evacuation overhead. Let's say you decide to evacuate: you start with 2-3% of blocks being empty (the target blocks), and choose a corresponding set of candidate blocks for evacuation (the source blocks). Since Immix is a one-pass collector, it doesn't know how much data is live when it starts collecting. It may not know that the blocks that it is evacuating will fit into the target space. As specified in the original paper, if the target space fills up, Immix will mark in place instead of evacuating; an evacuation candidate block with marked-in-place objects would then be non-empty at the end of collection.

In fact if you choose a set of evacuation candidates hoping to maximize your empty block yield, based on an estimate of live data instead of limiting to only the number of target blocks, I think it's possible to actually fill the targets before the source blocks empty, leaving you with no empty blocks at the end! (This can happen due to inaccurate live data estimations, or via internal fragmentation with the block size.) The only way to avoid this is to never select more evacuation candidate blocks than you have in target blocks. If you are lucky, you won't have to use all of the target blocks, and so at the end you will end up with more free blocks than not, so a subsequent evacuation will be more effective. The defragmentation result in that case would still be pretty good, but the yield in free blocks is not great.
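The safe policy described above might be sketched like this: sort blocks by live bytes as measured at the last GC, and never mark more candidates than there are target blocks, so the sources are guaranteed to fit even if every candidate turns out to be fully live (all names hypothetical):

```c
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical per-block record: live bytes as measured at the last GC. */
struct block { size_t live_bytes; int is_candidate; };

static int by_live(const void *a, const void *b) {
    const struct block *x = a, *y = b;
    return (x->live_bytes > y->live_bytes) - (x->live_bytes < y->live_bytes);
}

/* Mark at most `targets` blocks as evacuation candidates, preferring the
 * sparsest, so that the sources always fit in the target blocks. */
size_t select_candidates(struct block *blocks, size_t n, size_t targets) {
    qsort(blocks, n, sizeof blocks[0], by_live);
    size_t chosen = n < targets ? n : targets;
    for (size_t i = 0; i < chosen; i++)
        blocks[i].is_candidate = 1;
    return chosen;
}
```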

In a production garbage collector I would still be tempted to be optimistic and select more evacuation candidate blocks than available empty target blocks, because it will require fewer rounds to compact the whole heap, if that's what you wanted to do. It would be a relatively rare occurrence to start an evacuation cycle. If you ran out of space while evacuating, in a production GC I would just temporarily commission some overhead blocks for evacuation and release them promptly after evacuation is complete. If you have a small heap multiplier in your Immix space, occasional partial evacuation in a long-running process would probably reach a steady state with blocks being either full or empty. Fragmented blocks would represent newer objects and evacuation would periodically sediment these into longer-lived dense blocks.

mutator throughput

Finally, the shape of the heap has its inverse in the shape of the holes into which the mutator can allocate. It's most efficient for the mutator if the heap has as few holes as possible: ideally just one large hole per block, which is the limit case of an empty block.

The opposite extreme would be having every other "line" (in Immix terms) be used, so that free space is spread across the heap in a vast spray of one-line holes. Even if fragmentation is not a problem, perhaps because the application only allocates objects that pack neatly into lines, having to stutter all the time to look for holes is overhead for the mutator. Also, the result is that contemporaneous allocations are more likely to be placed farther apart in memory, leading to more cache misses when accessing data. Together, allocator overhead and access overhead lead to lower mutator throughput.

When would this situation get so bad as to trigger compaction? Here I have no idea. There is no clear maximum. If compaction were free, we would compact all the time. But it's not; there's a tradeoff between the cost of compaction and mutator throughput.

I think here I would punt. If the heap is being actively resized based on allocation rate, we'll hit the other heuristics first, and so we won't need to trigger evacuation/compaction based on mutator overhead. You could measure this, though, in terms of average or median hole size, or average or maximum number of holes per block. Since evacuation is partial, all you need to do is to identify some "bad" blocks and then perhaps evacuation becomes attractive.

gc pause

Welp, that's some thoughts on when to trigger evacuation in Immix. Next time, we'll talk about some engineering aspects of evacuation. Until then, happy consing!

21 June, 2022 12:21PM by Andy Wingo

June 20, 2022

GNU Taler news

A digital euro and the future of cash

The Central Bank of Austria has published a report in the context of a workshop celebrating 20 years of Euro-denominated cash. The report discusses the future of cash, including account- and blockchain-based designs, as well as GNU Taler.

20 June, 2022 10:00PM

Andy Wingo

blocks and pages and large objects

Good day! In a recent dispatch we talked about the fundamental garbage collection algorithms, also introducing the Immix mark-region collector. Immix mostly leaves objects in place but can move objects if it thinks it would be profitable. But when would it decide that this is a good idea? Are there cases in which it is necessary?

I promised to answer those questions in a followup article, but I didn't say which followup :) Before I get there, I want to talk about paged spaces.

enter the multispace

We mentioned that Immix divides the heap into blocks (32kB or so), and that no object can span multiple blocks. "Large" objects -- defined by Immix to be more than 8kB -- go to a separate "large object space", or "lospace" for short.

Though the implementation of a large object space is relatively simple, I found that it has some points that are quite subtle. Probably the most important of these points relates to heap size. Consider that if you just had one space, implemented using mark-compact maybe, then the procedure to allocate a 16 kB object would go:

  1. Try to bump the allocation pointer by 16kB. Is it still within range? If so we are done.

  2. Otherwise, collect garbage and try again. If after GC there isn't enough space, the allocation fails.

In step (2), collecting garbage could decide to grow or shrink the heap. However, when evaluating collector algorithms, you generally want to avoid dynamically-sized heaps.
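As a toy sketch of that allocate-then-collect procedure (all names hypothetical; `collect` here is a stub that reclaims nothing, whereas a real one would mark and compact):

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical single-space allocator state: a bump pointer and a limit. */
struct space {
    uintptr_t hp;     /* next free address */
    uintptr_t limit;  /* end of the space  */
};

/* Placeholder for a full collection; returns nonzero if it freed memory. */
static int collect(struct space *s) { (void)s; return 0; }

/* Step 1: try to bump the allocation pointer; step 2: collect and retry. */
void *allocate(struct space *s, size_t size) {
    for (int attempt = 0; attempt < 2; attempt++) {
        if (s->limit - s->hp >= size) {
            void *obj = (void *)s->hp;
            s->hp += size;
            return obj;
        }
        if (attempt == 0 && !collect(s))
            break;                 /* GC reclaimed nothing; give up */
    }
    return NULL;                   /* the allocation fails */
}
```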

cheatery

Here is where I need to make an embarrassing admission. In my role as co-maintainer of the Guile programming language implementation, I have long noodled around with benchmarks, comparing Guile to Chez, Chicken, and other implementations. It's good fun. However, I only realized recently that I had a magic knob that I could turn to win more benchmarks: simply make the heap bigger. Make it start bigger, make it grow faster, whatever it takes. For a program that does its work in some fixed amount of total allocation, a bigger heap will require fewer collections, and therefore generally take less time. (Some amount of collection may be good for performance as it improves locality, but this is a marginal factor.)

Of course I didn't really go wild with this knob but it now makes me doubt all benchmarks I have ever seen: are we really using benchmarks to select for fast implementations, or are we in fact selecting for implementations with cheeky heap size heuristics? Consider even any of the common allocation-heavy JavaScript benchmarks, DeltaBlue or Earley or the like; to win these benchmarks, web browsers are incentivised to have large heaps. In the real world, though, a more parsimonious policy might be more appreciated by users.

Java people have known this for quite some time, and are therefore used to fixing the heap size while running benchmarks. For example, people will measure the minimum amount of memory that can allow a benchmark to run, and then configure the heap to be a constant multiplier of this minimum size. The MMTK garbage collector toolkit can't even grow the heap at all currently: it's an important feature for production garbage collectors, but as they are just now migrating out of the research phase, heap growth (and shrinking) hasn't yet been a priority.

lospace

So now consider a garbage collector that has two spaces: an Immix space for allocations of 8kB and below, and a large object space for, well, larger objects. How do you divide the available memory between the two spaces? Could the balance between immix and lospace change at run-time? If you never had large objects, would you be wasting space at all? Conversely is there a strategy that can also work for only large objects?

Perhaps the answer is obvious to you, but it wasn't to me. After much reading of the MMTK source code and pondering, here is what I understand the state of the art to be.

  1. Arrange for your main space -- Immix, mark-sweep, whatever -- to be block-structured, and able to dynamically decommission or recommission blocks, perhaps via MADV_DONTNEED. This works if the blocks are even multiples of the underlying OS page size.

  2. Keep a counter of however many bytes the lospace currently has.

  3. When you go to allocate a large object, increment the lospace byte counter, and then round up to the number of blocks to decommission from the main paged space. If this is more than are currently decommissioned, find some empty blocks and decommission them.

  4. If no empty blocks were found, collect, and try again. If the second try doesn't work, then the allocation fails.

  5. Now that the paged space has shrunk, lospace can allocate. You can use the system malloc, but probably better to use mmap, so that if these objects are collected, you can just MADV_DONTNEED them and keep them around for later re-use.

  6. After GC runs, explicitly return the memory for any object in lospace that wasn't visited when the object graph was traversed. Decrement the lospace byte counter and possibly return some empty blocks to the paged space.
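The byte counter and block accounting in steps 2 through 6 might be sketched like this. It's a toy model: the names are hypothetical, and the actual decommissioning (e.g. madvise) is elided, as is the collection that a real allocator would trigger before failing:

```c
#include <stddef.h>

#define BLOCK_SIZE (32 * 1024)    /* Immix-style block size */

/* Hypothetical bookkeeping for balancing lospace against the paged space. */
struct heap {
    size_t lospace_bytes;         /* step 2: bytes currently in lospace   */
    size_t blocks_decommissioned; /* blocks on loan from the paged space  */
    size_t empty_blocks;          /* empty paged-space blocks available   */
};

static size_t blocks_needed(const struct heap *h) {
    return (h->lospace_bytes + BLOCK_SIZE - 1) / BLOCK_SIZE; /* round up */
}

/* Return surplus blocks to the paged space (recommission). */
static void recommission(struct heap *h) {
    while (h->blocks_decommissioned > blocks_needed(h)) {
        h->blocks_decommissioned--;
        h->empty_blocks++;
    }
}

/* Steps 3-4: account for a large allocation.  Returns 0 if, even after a
 * (here elided) collection, not enough empty blocks can be decommissioned. */
int lospace_reserve(struct heap *h, size_t bytes) {
    h->lospace_bytes += bytes;
    while (h->blocks_decommissioned < blocks_needed(h)) {
        if (h->empty_blocks == 0) {   /* real code would GC and retry */
            h->lospace_bytes -= bytes;
            recommission(h);
            return 0;
        }
        h->empty_blocks--;
        h->blocks_decommissioned++;   /* e.g. madvise(MADV_DONTNEED) */
    }
    return 1;
}

/* Step 6: bytes of an unvisited lospace object come back after GC. */
void lospace_release(struct heap *h, size_t bytes) {
    h->lospace_bytes -= bytes;
    recommission(h);
}
```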

There are some interesting aspects about this strategy. One is, the memory that you return to the OS doesn't need to be contiguous. When allocating a 50 MB object, you don't have to find 50 MB of contiguous free space, because any set of blocks that adds up to 50 MB will do.

Another aspect is that this adaptive strategy can work for any ratio of large to non-large objects. The user doesn't have to manually set the sizes of the various spaces.

This strategy does assume that address space is larger than heap size, but only by a factor of 2 (modulo fragmentation for the large object space). Therefore our risk of running afoul of user resource limits and kernel overcommit heuristics is low.

The one underspecified part of this algorithm is... did you see it? "Find some empty blocks". If the main paged space does lazy sweeping -- only scanning a block for holes right before the block will be used for allocation -- then after a collection we don't actually know very much about the heap, and notably, we don't know what blocks are empty. (We could know it, of course, but it would take time; you could traverse the line mark arrays for all blocks while the world is stopped, but this increases pause time. The original Immix collector does this, however.) In the system I've been working on, instead I have it so that if a mutator finds an empty block, it puts it on a separate list, and then takes another block, only allocating into empty blocks once all blocks are swept. If the lospace needs blocks, it sweeps eagerly until it finds enough empty blocks, throwing away any nonempty blocks. This causes the next collection to happen sooner, but that's not a terrible thing; this only occurs when rebalancing lospace versus paged-space size, because if you have a constant allocation rate on the lospace side, you will also have a complementary rate of production of empty blocks by GC, as they are recommissioned when lospace objects are reclaimed.
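The eager-sweep-for-lospace part of that scheme might look like the following toy sketch, in which per-block live-byte counts stand in for real line mark scanning, and a cursor tracks how far sweeping has advanced (names hypothetical):

```c
#include <stddef.h>

/* Eagerly sweep unswept blocks until `wanted` empties are found; nonempty
 * blocks encountered on the way are passed over (left for the allocator).
 * `live_bytes[i]` is block i's live data; zero means the block is empty.
 * `*next_unswept` is the sweep cursor, advanced as blocks are consumed. */
size_t sweep_for_empties(const size_t *live_bytes, size_t n,
                         size_t wanted, size_t *next_unswept) {
    size_t found = 0;
    while (*next_unswept < n && found < wanted) {
        if (live_bytes[(*next_unswept)++] == 0)
            found++;
    }
    return found;
}
```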

What if your main paged space has ample space for allocating a large object, but there are no empty blocks, because live objects are equally peppered around all blocks? In that case, often the application would be best served by growing the heap, but maybe not. In any case in a strict-heap-size environment, we need a solution.

But for that... let's pick up another day. Until then, happy hacking!

20 June, 2022 02:59PM by Andy Wingo

June 16, 2022

FSF Blogs

Beat the heat with GNU summer swag

16 June, 2022 10:25PM

health @ Savannah

GNU Health Hospital Management 4.0.4 patchset released

Dear community

GNU Health 4.0.4 patchset has been released !

Priority: High

Table of Contents

  • About GNU Health Patchsets
  • Updating your system with the GNU Health control Center
  • Summary of this patchset
  • Installation notes
  • List of other issues related to this patchset

About GNU Health Patchsets

We provide "patchsets" to stable releases. Patchsets allow applying bug fixes and updates on production systems. Always try to keep your production system up-to-date with the latest patches.

Patches and Patchsets maximize uptime for production systems, and keep your system updated, without the need to do a whole installation.

NOTE: Patchsets are applied on previously installed systems only. For new, fresh installations, download and install the whole tarball (ie, gnuhealth-4.0.4.tar.gz)

Updating your system with the GNU Health control Center

Starting with the GNU Health 3.x series, you can do automatic updates on the GNU Health HMIS kernel and modules using the GNU Health control center program.

Please refer to the administration manual section (https://en.wikibooks.org/wiki/GNU_Health/Control_Center )

The GNU Health control center works on standard installations (those done following the installation manual on wikibooks). Don't use it if you use an alternative method or if your distribution does not follow the GNU Health packaging guidelines.

Installation Notes

You must apply previous patchsets before installing this patchset. If your patchset level is 4.0.3, then just follow the general instructions. You can find the patchsets at the GNU Health main download site at GNU.org (https://ftp.gnu.org/gnu/health/)

In most cases, GNU Health Control center (gnuhealth-control) takes care of applying the patches for you. 

Pre-requisites for upgrade to 4.0.4: None

Now follow the general instructions at

After applying the patches, make a full update of your GNU Health database as explained in the documentation.

When running "gnuhealth-control" for the first time, you will see the following message: "Please restart now the update with the new control center." Please do so: restart the process and the update will continue.

  • Restart the GNU Health server

List of other issues and tasks related to this patchset

  • bug #62598: Payment term search stops in party
  • bug #62596: Traceback if there is no Account Receivable defined neither on party or default acct config
  • bug #62555: Too many decimals error when generating the invoice with certain discounts
  • bug #62439: Error in sequence when generating Dx Imaging order
  • bug #62428: complete blood count report takes two pages
  • bug #62427: Typo in health_services exceptions

 For detailed information about each issue, you can visit https://savannah.gnu.org/bugs/?group=health
 For detailed information about each task, you can visit https://savannah.gnu.org/task/?group=health

 For detailed information you can read about Patches and Patchsets

16 June, 2022 12:42PM by Luis Falcon

June 15, 2022

GNU Taler news

GNU Taler Scalability

Anonymity loves company. Hence, to provide the best possible anonymity to GNU Taler users, the scalability of individual installations of a Taler payment service matters. While our design scales nicely on paper, NGI Fed4Fire+ enabled us to evaluate the transaction rates that could be achieved with the actual implementation. Experiments were conducted by Marco Boss for his Bachelor's thesis at the Bern University of Applied Sciences to assess bottlenecks and suggest avenues for further improvement.

15 June, 2022 10:00PM

Andy Wingo

defragmentation

Good morning, hackers! Been a while. It used to be that I had long blocks of uninterrupted time to think and work on projects. Now I have two kids; the longest such time-blocks are on trains (too infrequent, but it happens) and in a less effective but more frequent fashion, after the kids are sleeping. As I start writing this, I'm in an airport waiting for a delayed flight -- my first since the pandemic -- so we can consider this to be the former case.

It is perhaps out of mechanical sympathy that I have been using my reclaimed time to noodle on a garbage collector. Managing space and managing time have similar concerns: how to do much with little, efficiently packing different-sized allocations into a finite resource.

I have been itching to write a GC for years, but the proximate event that pushed me over the edge was reading about the Immix collection algorithm a few months ago.

on fundamentals

Immix is a "mark-region" collection algorithm. I say "algorithm" rather than "collector" because it's more like a strategy or something that you have to put into practice by making a concrete collector, the other fundamental algorithms being copying/evacuation, mark-sweep, and mark-compact.

To build a collector, you might combine a number of spaces that use different strategies. A common choice would be to have a semi-space copying young generation, a mark-sweep old space, and maybe a treadmill large object space (a kind of copying collector, logically; more on that later). Then you have heuristics that determine what object goes where, when.

On the engineering side, there's quite a number of choices to make there too: probably you make some parts of your collector to be parallel, maybe the collector and the mutator (the user program) can run concurrently, and so on. Things get complicated, but the fundamental algorithms are relatively simple, and present interesting fundamental tradeoffs.


figure 1 from the immix paper

For example, mark-compact is most parsimonious regarding space usage -- for a given program, a garbage collector using a mark-compact algorithm will require less memory than one that uses mark-sweep. However, mark-compact algorithms all require at least two passes over the heap: one to identify live objects (mark), and at least one to relocate them (compact). This makes them less efficient in terms of overall program throughput and can also increase latency (GC pause times).

Copying or evacuating spaces can be more CPU-efficient than mark-compact spaces, as reclaiming memory avoids traversing the heap twice; a copying space copies objects as it traverses the live object graph instead of after the traversal (mark phase) is complete. However, a copying space's minimum heap size is quite high, and it only reaches competitive efficiencies at large heap sizes. For example, if your program needs 100 MB of space for its live data, a semi-space copying collector will need at least 200 MB of space in the heap (a 2x multiplier, we say), and will only run efficiently at something more like 4-5x. It's a reasonable tradeoff to make for small spaces such as nurseries, but as a mature space, it's so memory-hungry that users will be unhappy if you make it responsible for a large portion of your memory.

Finally, mark-sweep is quite efficient in terms of program throughput, because like copying it traverses the heap in just one pass, and because it leaves objects in place instead of moving them. But! Unlike the other two fundamental algorithms, mark-sweep leaves the heap in a fragmented state: instead of having all live objects packed into a contiguous block, memory is interspersed with live objects and free space. So the collector can run quickly but the allocator stops and stutters as it accesses disparate regions of memory.

allocators

Collectors are paired with allocators. For mark-compact and copying/evacuation, the allocator consists of a pointer to free space and a limit. Objects are allocated by bumping the allocation pointer, a fast operation that also preserves locality between contemporaneous allocations, improving overall program throughput. But for mark-sweep, we run into a problem: say you go to allocate a 1 kilobyte byte array, do you actually have space for that?

Generally speaking, mark-sweep allocators solve this problem via freelist allocation: the allocator has an array of lists of free objects, one for each "size class" (say 2 words, 3 words, and so on up to 16 words, then more sparsely up to the largest allocatable size), and services allocations from the appropriate size class's freelist. This prevents the 1 kB of free space that we need from being "used up" by a 16-byte allocation that could just as well have gone elsewhere. However, freelists prevent objects allocated around the same time from being deterministically placed in nearby memory locations. This increases variance and decreases overall throughput, both for allocation operations and for pointer-chasing in the course of the program's execution.

Also, in a mark-sweep collector, we can still reach a situation where there is enough space on the heap for an allocation, but that free space is broken up into too many pieces: the heap is fragmented. For this reason, many systems that perform mark-sweep collection can choose to compact, if heuristics show it might be profitable. Because the usual strategy is mark-sweep, though, they still use freelist allocation.

on immix and mark-region

Mark-region collectors are like mark-sweep collectors, except that they do bump-pointer allocation into the holes between survivor objects.

Sounds simple, right? To my mind, though, the fundamental challenge in implementing a mark-region collector is how to handle fragmentation. Let's take a look at how Immix solves this problem.


part of figure 2 from the immix paper

Firstly, Immix partitions the heap into blocks, which might be 32 kB in size or so. No object can span a block. Block size should be chosen to be a nice power-of-two multiple of the system page size, and not so small that common object allocations wouldn't fit. "Large" objects -- greater than 8 kB, for Immix -- go to a separate space that is managed in a different way.

Within a block, Immix divides space into lines -- maybe 128 bytes long. Objects can span lines. Any line that does not contain (a part of) an object that survived the previous collection is part of a hole. A hole is a contiguous span of free lines in a block.

On the allocation side, Immix does bump-pointer allocation into holes. If a mutator doesn't currently have a hole, it scans the current block (obtaining one if needed) for the next hole, via a side table of per-line mark bits: one bit per line. Lines without the mark are in holes. Scanning for holes is fairly cheap, because the line size is not too small. Note that there are per-object mark bits as well; just because you've marked a line doesn't mean that you've traced all objects on that line.

Allocating into a hole has good expected performance as well, as it's bump-pointer, and the minimum size isn't tiny. In the worst case of a hole consisting of a single line, you have 128 bytes to work with. This size is large enough for the majority of objects, given that most objects are small.

mitigating fragmentation

Immix still has some challenges regarding fragmentation. There is some waste, in that a single (piece of an) object can keep a line marked, consuming any free space remaining on that line. Also, when an object can't fit into a hole, any space left in that hole is lost, at least until the next collection. This loss can recur for the next hole, and the next, and so on, until Immix finds a hole that's big enough. In a mark-sweep collector with lazy sweeping, these free extents could instead be placed on freelists and used when needed, but in Immix there is no such facility (by design).

One mitigation for fragmentation risks is "overflow allocation": when allocating an object larger than a line (a medium object), and Immix can't find a hole before the end of the block, Immix allocates into a completely free block. So actually mutator threads allocate into two blocks at a time: one for small objects and medium objects if possible, and the other for medium objects when necessary.

Another mitigation is that large objects are allocated into their own space, so an Immix space will never be asked to hold objects larger than, say, 8 kB.

The other mitigation is that Immix can choose to evacuate instead of mark. How does this work? Is it worth it?

stw

This question about the practical tradeoffs involving evacuation is the one I wanted to pose when I started this article; I have gotten to the point of implementing this part of Immix and I have some doubts. But, this article is long enough, and my plane is about to land, so let's revisit this on my return flight. Until then, see you later, allocators!

15 June, 2022 12:47PM by Andy Wingo

June 13, 2022

GNU Guix

Celebrating 10 years of Guix in Paris, 16–18 September

It’s been ten years of GNU Guix! To celebrate, and to share knowledge and enthusiasm, a birthday event will take place on September 16–18th, 2022, in Paris, France. The program is being finalized, but you can already register!

10 year anniversary artwork

This is a community event with several twists to it:

  • Friday, September 16th, is dedicated to reproducible research workflows and high-performance computing (HPC)—the focuses of the Guix-HPC effort. It will consist of talks and experience reports by scientists and practitioners.
  • Saturday targets Guix and free software enthusiasts, users and developers alike. We will reflect on ten years of Guix, show what it has to offer, and present on-going developments and future directions.
  • On Sunday, users, developers, developers-to-be, and other contributors will discuss technical and community topics and join forces for hacking sessions, unconference style.

Check out the web site and consider registering as soon as possible so we can better estimate the size of the birthday cake!

If you’re interested in presenting a topic, in facilitating a session, or in organizing a hackathon, please get in touch with the organizers at guix-birthday-event@gnu.org and we’ll be happy to make room for you. We’re also looking for people to help with logistics, in particular during the event; please let us know if you can give a hand.

Whether you’re a scientist, an enthusiast, or a power user, we’d love to see you in September. Stay tuned for updates!

About GNU Guix

GNU Guix is a transactional package manager and an advanced distribution of the GNU system that respects user freedom. Guix can be used on top of any system running the Hurd or the Linux kernel, or it can be used as a standalone operating system distribution for i686, x86_64, ARMv7, AArch64 and POWER9 machines.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. When used as a standalone GNU/Linux distribution, Guix offers a declarative, stateless approach to operating system configuration management. Guix is highly customizable and hackable through Guile programming interfaces and extensions to the Scheme language.

13 June, 2022 03:00PM by Ludovic Courtès, Tanguy Le Carrour, Simon Tournier

June 12, 2022

GNUnet News

DHT Specification Milestones 1-3/5

DHT Technical Specification Milestones 1-3/5

We are happy to announce the completion of the following milestones for the DHT specification. The objective is to provide a detailed and comprehensive guide for implementors of the GNUnet DHT "R 5 N". The milestones consist of documenting the base data structures and processes of the protocol. This includes the specification of the DHT message wire and serialization formats.

Completed milestones overview:

  1. Defined base data structures and processes that form the foundation of the protocol: Routing table, distance metrics, infrastructure messages, bootstrapping and base functions for block processing.
  2. Defined the core data structures and processes that are specific to the R 5 N protocol: Block and peer filtering, routing table management and lookup algorithms.
  3. The protocol was extended to support path signatures. This enables optional integrity protection of the paths that result messages have taken in a potentially rogue environment.

The current protocol is implemented as part of GNUnet 0.17.x and gnunet-go, as previously announced on the mailing list.

We invite any interested party to read the document and provide critical review and feedback. This greatly helps us improve the protocol and will help future implementations. Contact us on the gnunet-developers mailing list. As part of the remaining milestones, the specification will be updated and interoperability testing will be conducted. Further, we aim to present the draft specification at the IETF.

This work is generously funded by NLnet as part of their NGI Assure fund.

12 June, 2022 10:00PM

GNUnet 0.17.1

GNUnet 0.17.1

This is a bugfix release for gnunet 0.17.0.

Download links

The GPG key used to sign is: 3D11063C10F98D14BD24D1470B0998EF86F59B6A

Note that due to mirror synchronization, not all links may be functional early after the release. For direct access try http://ftp.gnu.org/gnu/gnunet/

Noteworthy changes in 0.17.1 (since 0.17.0)

  • DHT : Bugfix in HELLO message format. LSD0004 compliance.
  • RECLAIM : OpenID Connect plugin now needs (optional) jose dependency.

A detailed list of changes can be found in the ChangeLog and the bug tracker.

12 June, 2022 10:00PM

June 09, 2022

Luca Saiu

GNU Hackers' Meeting 2022 proposal: İzmir, Turkey

The GNU Hackers Meetings (https://www.gnu.org/ghm) are a friendly and informal venue to discuss technical issues concerning GNU (https://www.gnu.org) and free software (https://www.gnu.org/philosophy/free-sw.html). The time we proposed for GHM 2022 is approaching but unfortunately we only received three replies expressing interest. If we are to hold the event then we need more participants; at this stage a simple informal expression of interest is enough. The event is planned for an extended weekend (with talks from Friday to Saturday) in October 2022 in İzmir, Turkey. For the time being all the infamous entry barriers or restrictions are lifted in Turkey, with the ... [Read more]

09 June, 2022 11:55PM by Luca Saiu (positron@gnu.org)

June 05, 2022

GNUnet News

GNUnet 0.17.0

GNUnet 0.17.0 released

We are pleased to announce the release of GNUnet 0.17.0.
GNUnet is an alternative network stack for building secure, decentralized and privacy-preserving distributed applications. Our goal is to replace the old insecure Internet protocol stack. Starting from an application for secure publication of files, it has grown to include all kinds of basic protocol components and applications towards the creation of a GNU internet.

This is a new major release. It breaks protocol compatibility with the 0.16.x versions. Please be aware that Git master is thus henceforth (and has been for a while) INCOMPATIBLE with the 0.16.x GNUnet network, and interactions between old and new peers will result in issues. 0.16.x peers will be able to communicate with Git master or 0.17.x peers, but some services - in particular the DHT - will not be compatible.
In terms of usability, users should be aware that there are still a number of known open issues, in particular with respect to ease of use, but also some critical privacy issues, especially for mobile users. Also, the nascent network is tiny and thus unlikely to provide good anonymity or extensive amounts of interesting information. As a result, the 0.17.0 release is still only suitable for early adopters with some reasonable pain tolerance.

Download links

The GPG key used to sign is: 3D11063C10F98D14BD24D1470B0998EF86F59B6A

Note that due to mirror synchronization, not all links might be functional early after the release. For direct access try http://ftp.gnu.org/gnu/gnunet/

Noteworthy changes in 0.17.0 (since 0.16.3)

  • GNS :
    • FCFSD: Allow configuration of relative expiration time of added records.
    • Aligned with breaking changes in specification. LSD0001
  • DHT :
    • Aligned and reordered message formats. LSD0004
    • Moved block type definitions to GANA
    • The specification has been updated to reflect the changes. LSD0004
  • UTIL :
    • Fix scheduler bug with same-priority immediately-ready tasks possibly hogging the scheduler.
    • Fix mysql/mariadb detection.

A detailed list of changes can be found in the ChangeLog and the bug tracker.

Known Issues

  • There are known major design issues in the TRANSPORT, ATS and CORE subsystems which will need to be addressed in the future to achieve acceptable usability, performance and security.
  • There are known moderate implementation limitations in CADET that negatively impact performance.
  • There are known moderate design issues in FS that also impact usability and performance.
  • There are minor implementation limitations in SET that create unnecessary attack surface for availability.
  • The RPS subsystem remains experimental.
  • Some high-level tests in the test-suite fail non-deterministically due to the low-level TRANSPORT issues.

In addition to this list, you may also want to consult our bug tracker at bugs.gnunet.org which lists about 190 more specific issues.

Thanks

This release was the work of many people. The following people contributed code and were thus easily identified: Christian Grothoff, Tristan Schwieren, Florian Dold, Thien-Thi Nguyen, t3sserakt, TheJackiMonster and Martin Schanzenbach.

05 June, 2022 10:00PM

unifont @ Savannah

Unifont 14.0.04 Released

4 June 2022 Unifont 14.0.04 is now available.  This is a minor release to fix an issue with parallel font builds.  It also contains updates to some glyphs, notably in the Runic script.

Download this release from GNU server mirrors at:

     https://ftpmirror.gnu.org/unifont/unifont-14.0.04/

or if that fails,

     https://ftp.gnu.org/gnu/unifont/unifont-14.0.04/

or, as a last resort,

     ftp://ftp.gnu.org/gnu/unifont/unifont-14.0.04/

These files are also available on the unifoundry.com website:

     https://unifoundry.com/pub/unifont/unifont-14.0.04/

Font files are in the subdirectory

     https://unifoundry.com/pub/unifont/unifont-14.0.04/font-builds/

A more detailed description of font changes is available at

      https://unifoundry.com/unifont/index.html

and of utility program changes at

      http://unifoundry.com/unifont/unifont-utilities.html

05 June, 2022 01:05AM by Paul Hardy

June 01, 2022

freedink @ Savannah

New Maintainer

Note that a new maintainer, Kjharcombe, has recently been put in place. We are hoping to be able to continue development with this package.

01 June, 2022 12:32PM by Keiran Harcombe

pspp @ Savannah

PSPP 1.6.0 has been released.

I'm very pleased to announce the release of a new version of GNU PSPP.  PSPP is a program for statistical analysis of sampled data.  It is a free replacement for the proprietary program SPSS.

Changes from 1.4.1 to 1.6.0:

  • In the Kruskal-Wallis test, a misleading result could occur if the lower bound specified by the user was in fact higher than the upper bound specified.  This has been fixed.
  • The DEFINE, MATRIX, MCONVERT, and MATRIX DATA commands are now implemented.
  • An error in the displayed significance of one-way ANOVA contrast tests has been corrected.
  • Added Drag-N-Drop in output view.
  • The Explore GUI dialog supports the "Plots" subdialog. Boxplots, Q-Q Plots and Spreadlevel plots are now also available via the GUI.
  • The graphical user interface for importing spreadsheets has been improved.  The new interface provides the user with a preview of the data to be imported and interactive methods to select the desired ranges.
  • The user manual, in its Info and HTML versions, now includes graphical output examples and screenshots.
  • New command SHOW SYSTEM to easily print system information useful in bug reports.
  • Build changes:
    • Perl is no longer required to build.
    • Build now requires Python 3.4 or later.  (Building PSPP 1.4.0 also required Python, but it wasn't properly documented at the time.)
    • The Cairo and Pango libraries are now required.
    • gettext 0.20 or later is now required.
    • gtksourceview 4.x is now supported (3.x also remains supported).
  • Output improvements:
    • New drivers for output to TeX source files and to PNG files.
    • Table output styles may now be set with the new option --table-look and the new SET TLOOK command.
    • New driver option "trim" to remove empty space from PDF, PostScript, SVG, and PNG output files.
  • The PDF output driver now adds an outline, allowing PDF viewers to display a "table of contents" for the file.
    • The HTML output driver has a new option "bare".
  • New features in pspp-output:
    • New --table-look and --nth-commands options.
    • New get-table-look and convert-table-look commands.

Please send PSPP bug reports to bug-gnu-pspp@gnu.org.

01 June, 2022 05:53AM by Ben Pfaff

May 29, 2022

hello @ Savannah

hello-2.12.1 released [stable]

This release has no code changes since 2.12, but just a minor documentation fix and updated translations.

29 May, 2022 11:08PM by Reuben Thomas

May 28, 2022

www-zh-cn @ Savannah

Happy 20th Birthday GNU CTT

20 Year ago, May the 28th, GNU Chinese Translators Team was registered at Savannah.

I joined the project via Help GNU. My original intention was to support the project and maybe help myself understand more of GNU. At that time it was only me working actively on translating the GNU web pages into Simplified Chinese. I even had to approve my own translations, which was not the correct approach. I really wanted some other people to join the project, like the project creators, and I started to really understand that being a volunteer in a Free Software project means being persistent and stubborn.

Gradually we had newcomers joining the project, like hagb, hahawang, psiace, shankangke, shi, wind, etc. I am very excited whenever there is a newcomer, because I know I am the person to introduce them to the project and to encourage them to contribute their time and talent to it.

Today, the translation project is going on smoothly. Our progress is often at the top of the list of all language teams. Thanks to all the translators. Well, there is still a long way to go, because our goal is not only to translate the articles, but also to promote the idea of Free Software, for its care for computer users and the community. I do hope our effort is a step forward toward a world where all software is free as in freedom.

Today, GNU CTT is just 20 years young. It is still a baby who needs care from all of us. Let's work together to make it stronger.

Dear translators, dear free software lovers, dear friends, please light your candles, please make your wish, please make your contribution to make the world brighter.

Happy 20th Birthday, GNU CTT.
wxie

28 May, 2022 02:56AM by Wensheng XIE

May 27, 2022

Trisquel GNU/Linux

Trisquel 10.0.1 LTS "Nabia" incremental update

Today we publish a new set of live and installation media for the 10.0 series, that applies all package upgrades and security fixes to date, and corrects bugs in the installer applications and package managers. If you are already using Trisquel 10 you can upgrade without reinstalling, simply by using your package manager or update application of choice, or by running these two commands on a terminal:

sudo apt update
sudo apt dist-upgrade

The new release iso images are available in the downloads page.

27 May, 2022 04:53PM by quidam

May 24, 2022

GNU Taler news

Who comes after us? The correct mindset for designing a Central Bank Digital Currency

The title of the paper refers to the former DIRNSA, who claimed that "nobody comes after us" just before the NSA lost control of its data on Afghanistan collaborators to the Taliban. The paper urges for this cautionary tale to be considered when central banks are creating digital currencies.

24 May, 2022 10:00PM

May 22, 2022

parallel @ Savannah

GNU Parallel 20220522 ('NATO') released

GNU Parallel 20220522 ('NATO') has been released. It is available for download at: lbry://@GnuParallel:4

Quote of the month:

  It's amazing how fast you can get with bash pipelines and GNU Parallel.
    -- Eric Pauley @EricPauley_

New in this release:

  • --latest-line shows only the latest line of running jobs.
  • --color colors output in different colors per job (this obsoletes --ctag).
  • xargs compatibility: --process-slot-var foo sets $foo to jobslot-1.
  • xargs compatibility: --open-tty opens the terminal on stdin (standard input).
  • Bug fixes and man page updates.

News about GNU Parallel:

Get the book: GNU Parallel 2018 http://www.lulu.com/shop/ole-tange/gnu-parallel-2018/paperback/product-23558902.html

GNU Parallel - For people who live life in the parallel lane.

If you like GNU Parallel record a video testimonial: Say who you are, what you use GNU Parallel for, how it helps you, and what you like most about it. Include a command that uses GNU Parallel if you feel like it.

About GNU Parallel

GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.

For example you can run this to convert all jpeg files into png and gif files and have a progress bar:

  parallel --bar convert {1} {1.}.{2} ::: *.jpg ::: png gif

Or you can generate big, medium, and small thumbnails of all jpeg files in sub dirs:

  find . -name '*.jpg' |
    parallel convert -geometry {2} {1} {1//}/thumb{2}_{1/} :::: - ::: 50 100 200

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with:

    $ (wget -O - pi.dk/3 || lynx -source pi.dk/3 || curl pi.dk/3/ || \
       fetch -o - http://pi.dk/3 ) > install.sh
    $ sha1sum install.sh | grep 883c667e01eed62f975ad28b6d50e22a
    12345678 883c667e 01eed62f 975ad28b 6d50e22a
    $ md5sum install.sh | grep cc21b4c943fd03e93ae1ae49e28573c0
    cc21b4c9 43fd03e9 3ae1ae49 e28573c0
    $ sha512sum install.sh | grep ec113b49a54e705f86d51e784ebced224fdff3f52
    79945d9d 250b42a4 2067bb00 99da012e c113b49a 54e705f8 6d51e784 ebced224
    fdff3f52 ca588d64 e75f6033 61bd543f d631f592 2f87ceb2 ab034149 6df84a35
    $ bash install.sh

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your command line will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2018): GNU Parallel 2018, March 2018, https://doi.org/10.5281/zenodo.1146014.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/ Identi.ca/Google+/Twitter/Facebook/Linkedin/mailing lists
  • Get the merchandise https://gnuparallel.threadless.com/designs/gnu-parallel
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference

If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)

If GNU Parallel saves you money:

About GNU SQL

GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.

About GNU Niceload

GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.

22 May, 2022 01:25PM by Ole Tange

May 18, 2022

www @ Savannah

The History of GNU

Richard Stallman at the First Hackers Conference

The first Hackers Conference was held in Sausalito, California, in November 1984. The makers of the documentary Hackers: Wizards of the Electronic Age interviewed Richard Stallman at the event. They included only parts of the interviews in the film, but made some other footage available. Stallman's statements at the conference went beyond what he had written in the initial announcement of GNU.

It was at this conference that Richard Stallman first publicly and explicitly stated the idea that all software should be free, and made it clear that "free" refers to freedom, not price, by saying that software should be freely accessible to everyone. This was probably the first time he made that distinction to the public.

Stallman continues by explaining why it is wrong to agree to accept a program on condition of not sharing it with others. So what can one say about a business based on developing nonfree software and luring others into accepting that condition? Such things are bad for society and shouldn't be done at all. (In later years he used stronger condemnation.)

Here are the things he said:

Video
Video
Video

18 May, 2022 08:50AM by Dora Scilipoti

May 16, 2022

Luca Saiu

Hackers getting married

On May 14th E. and I got married, here in Zürich. I do not normally share very personal information here; but people who knew me before January 2021 will remember me before and since that time. How she changed me for the better. E. is my joy. [Hugging photo] E. and I hugging under the cloister next to the Stadthaus. Photo by Gloria Bressan (http://www.byphotoz.com). For the occasion we invited our friends and relatives, most of whom live as émigrés in one country or another, like us. We had several of our old-time friends from the GNU Project, and some ... [Read more]

16 May, 2022 08:05PM by Luca Saiu (positron@gnu.org)

May 15, 2022

libiconv @ Savannah

libiconv 1.17 released

GNU libiconv 1.17 is released.

New in this release:

  • The libiconv library is now licensed under the LGPL version 2.1, instead of the LGPL version 2.0. The iconv program continues to be licensed under GPL version 3.
  • Added converters for many single-byte EBCDIC encodings: IBM-{037,273,277,278,280,282,284,285,297,423,424,425,500,838,870,871,875}, IBM-{880,905,924,1025,1026,1047,1097,1112,1122,1123,1130,1132,1137,1140}, IBM-{1141,1142,1143,1144,1145,1146,1147,1148,1149,1153,1154,1155,1156,1157}, IBM-{1158,1160,1164,1165,1166,4971,12712,16804}. They are available through the configure option '--enable-extra-encodings'.

15 May, 2022 03:31PM by Bruno Haible

May 10, 2022

FSF News

May 04, 2022

GNUnet News

Messenger-GTK 0.7.0

Messenger-GTK 0.7.0 released

We are pleased to announce the release of the Messenger-GTK application.
The application is a convergent GTK messaging application using the GNUnet Messenger service. The goal is to provide private and secure communication between any group of devices. The interface is also designed in a way to scale down to mobile and small screen devices like phones or tablets.

The application provides the following features:

  • Creating direct chats and group chats
  • Managing your contacts and groups
  • Invite contacts to a group
  • Sending text messages
  • Sending voice recordings
  • Sharing files privately
  • Deleting messages with any custom delay
  • Renaming contacts
  • Exchanging contact details physically
  • Verifying contact identities
  • Switching between different accounts

The application utilizes the previously released library "libgnunetchat" in a convergent graphical user interface. More information about that can be found here.

It is also possible to install and try the application as a flatpak. The application is already available on flathub.org. Otherwise, you will find the source code ready to compile below as well.

Download links

The GPG key used to sign is: 3D11063C10F98D14BD24D1470B0998EF86F59B6A

Note that due to mirror synchronization, not all links might be functional early after the release. For direct access try http://ftp.gnu.org/gnu/gnunet/

Noteworthy changes in 0.7.0

  • The version iteration is inherited from cadet-gtk, of which this application is the logical successor.
  • It is possible to create direct chats and group chats via physical or virtual exchange.
  • Groups and contacts can be named, left, verified or deleted.
  • Existing contacts can be invited to any private or public group.
  • Chats allow sending text messages, voice recordings or files.
  • Messages can be deleted with a custom delay or automatically.
  • Switching between different accounts can be done during runtime.

A detailed list of changes can be found in the ChangeLog.

Known Issues

  • It is still difficult to get reliable chats between different devices. This might change with the upcoming changes to the GNUnet transport layer, though.
  • It might happen that the FS service is not connected, which might stop any file upload or stall it forever.
  • The webcam/camera to scan QR codes might not get picked up properly (for example it doesn't work yet with the Pinephone).
  • The application might crash at times. So consider it still being in development.

In addition to this list, you may also want to consult our bug tracker at bugs.gnunet.org.

04 May, 2022 10:00PM

May 03, 2022

education @ Savannah

Along: an app to collect students' data for marketing purposes

The nonfree app Along, developed by a company controlled by Zuckerberg, leads students to reveal to their teacher personal information about themselves and their families. Conversations are recorded and the collected data sent to the company, which grants itself the right to sell it.

03 May, 2022 08:08PM by Dora Scilipoti

May 01, 2022

www @ Savannah

Along: an app to collect students' data for marketing purposes

The nonfree app Along, developed by a company controlled by Zuckerberg, leads students to reveal to their teacher personal information about themselves and their families. Conversations are recorded and the collected data sent to the company, which grants itself the right to sell it.

01 May, 2022 10:55AM by Dora Scilipoti

April 29, 2022

New free program needed

The world urgently needs a free program that can subtract background music from a field recording.

The purpose is to prevent censorship of people's video recordings of how cops treat the public.

29 April, 2022 05:59AM by Ineiev

April 27, 2022

FSF News

FSF job opportunity: Licensing and compliance manager

The Free Software Foundation (FSF), a Massachusetts 501(c)(3) charity with a worldwide mission to protect computer user freedom, seeks a motivated and talented individual to be our full-time licensing and compliance manager.

27 April, 2022 07:10PM

April 23, 2022

remotecontrol @ Savannah

parallel @ Savannah

GNU Parallel 20220422 ('Буча') released

GNU Parallel 20220422 ('Буча') has been released. It is available for download at: lbry://@GnuParallel:4

Quote of the month:

  Immensely useful which I am forever grateful that it exists.
    -- AlexDragusin@ycombinator

New in this release:

  • sash is no longer supported as shell.
  • --retries 0 is an alias for --retries 2147483647.
  • --shell-completion returns shell completion code.
  • --ssh-login-file reloads every second.
  • --parset is replaced with --_parset because it is only used internally.
  • sem --pipe passes STDIN (standard input) to the command.
  • Bug fixes and man page updates.
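
Two of the new options above could be exercised like this (a hedged sketch; the completion-file path and host names are placeholders):

```shell
# Generate completion code for your shell and install it (bash shown;
# per the release notes zsh is also supported):
parallel --shell-completion bash > ~/.parallel-completion.bash
# then source it, e.g. from your ~/.bashrc:
#   . ~/.parallel-completion.bash

# --retries 0 is now an alias for --retries 2147483647, i.e. keep
# retrying each job (practically) forever until it succeeds:
parallel --retries 0 ping -c 1 {} ::: host1 host2
```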

Get the book: GNU Parallel 2018 http://www.lulu.com/shop/ole-tange/gnu-parallel-2018/paperback/product-23558902.html

GNU Parallel - For people who live life in the parallel lane.

If you like GNU Parallel record a video testimonial: Say who you are, what you use GNU Parallel for, how it helps you, and what you like most about it. Include a command that uses GNU Parallel if you feel like it.

About GNU Parallel

GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.

For example you can run this to convert all jpeg files into png and gif files and have a progress bar:

  parallel --bar convert {1} {1.}.{2} ::: *.jpg ::: png gif

Or you can generate big, medium, and small thumbnails of all jpeg files in sub dirs:

  find . -name '*.jpg' |
    parallel convert -geometry {2} {1} {1//}/thumb{2}_{1/} :::: - ::: 50 100 200

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with:

    $ (wget -O - pi.dk/3 || lynx -source pi.dk/3 || curl pi.dk/3/ || \
       fetch -o - http://pi.dk/3 ) > install.sh
    $ sha1sum install.sh | grep 883c667e01eed62f975ad28b6d50e22a
    12345678 883c667e 01eed62f 975ad28b 6d50e22a
    $ md5sum install.sh | grep cc21b4c943fd03e93ae1ae49e28573c0
    cc21b4c9 43fd03e9 3ae1ae49 e28573c0
    $ sha512sum install.sh | grep ec113b49a54e705f86d51e784ebced224fdff3f52
    79945d9d 250b42a4 2067bb00 99da012e c113b49a 54e705f8 6d51e784 ebced224
    fdff3f52 ca588d64 e75f6033 61bd543f d631f592 2f87ceb2 ab034149 6df84a35
    $ bash install.sh

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your command line will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2018): GNU Parallel 2018, March 2018, https://doi.org/10.5281/zenodo.1146014.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/Identi.ca/Google+/Twitter/Facebook/Linkedin/mailing lists
  • Get the merchandise https://gnuparallel.threadless.com/designs/gnu-parallel
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference

If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)

If GNU Parallel saves you money:

About GNU SQL

GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.
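
The DBURL idea can be sketched like this (hedged; the vendor, credentials, host, database, and table names below are all placeholders):

```shell
# DBURL shape: vendor://user:password@host[:port]/database
# Run a query non-interactively against a MySQL database:
sql mysql://scott:tiger@dbhost/mydb "SELECT COUNT(*) FROM users;"

# Leave the command out to get that database's interactive shell:
sql postgresql://scott:tiger@dbhost/mydb
```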

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.

About GNU Niceload

GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.
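
The soft and hard limits described above could be used like this (a hedged sketch; the job commands and paths are placeholders):

```shell
# Soft limit (default): suspend the job whenever the load average rises
# above 6, letting it run in short bursts until the load drops:
niceload -l 6 tar czf /tmp/backup.tar.gz /home/user

# Hard limit: only let the job run while the system is below the limit:
niceload --hard -l 6 updatedb
```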

23 April, 2022 11:19AM by Ole Tange

April 19, 2022

health @ Savannah

The Free Software community mourns the loss of Pedro Francisco (MasGNULinux)

Dear friends

These are very sad days for the Free Software and Social Justice movements. Our beloved friend Pedro Francisco has passed away.
Pedro fought relentlessly for equity in our society. He was also a Free/Libre Software activist.

Pedro created and managed MasGNULinux, a Spanish blog with news about Free Software and GNU/Linux. MasGNULinux was the best reference in the latest Free Software projects for the Spanish speaking community.

Thank you for your integrity, your honesty and your dedication to make this world a better place for this and future generations. Pedro's legacy will live on forever, in every line of code of each Free Software project.

From the GNU Health community, we send our deepest condolences to his family and friends.

PS: Rumor has it that God has switched to GNU/Linux.

19 April, 2022 08:27AM by Luis Falcon

April 18, 2022

parted @ Savannah

parted-3.5 released [stable]

I have released parted v3.5, the only change from the previous alpha was
updating gnulib to the current version.

Here are the compressed sources and a GPG detached signature[*]:
  https://ftp.gnu.org/gnu/parted/parted-3.5.tar.xz
  https://ftp.gnu.org/gnu/parted/parted-3.5.tar.xz.sig

Use a mirror for higher download bandwidth:
  https://www.gnu.org/order/ftp.html

Here are the SHA256 checksums:

4938dd5c1c125f6c78b1f4b3e297526f18ee74aa43d45c248578b1d2470c05a2  parted-3.5.tar.xz
1b4a381f344435baf69616a985fac6f411d740de9eebd91e4cccdf046332366a  parted-3.5.tar.xz.sig

[*] Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact.  First, be sure to download both the .sig file
and the corresponding tarball.  Then, run a command like this:

  gpg --verify parted-3.5.tar.xz.sig

If that command fails because you don't have the required public key,
or that public key has expired, try the following commands to update
or refresh it, and then rerun the 'gpg --verify' command.

  gpg --locate-external-key bcl@redhat.com

  gpg --recv-keys 117E8C168EFE3A7F

  wget -q -O- 'https://savannah.gnu.org/project/release-gpgkeys.php?group=parted&download=1' | gpg --import -

This release was bootstrapped with the following tools:
  Autoconf 2.71
  Automake 1.16.5
  Gettext 0.21
  Gnulib v0.1-5201-g0cda5beb79
  Gperf 3.1

NEWS

  • Noteworthy changes in release 3.5 (2022-04-18) [stable]
    • New Features

  Update to latest gnulib for 3.5 release

  • Noteworthy changes in release 3.4.64.2 (2022-04-05) [alpha]
    • Bug Fixes

  usage: remove the mention of "a particular partition"

  • Noteworthy changes in release 3.4.64 (2022-03-30) [alpha]
    • New Features

  Add --fix to --script mode to automatically fix problems like the backup
  GPT header not being at the end of a disk.

  Add use of the swap partition flag to msdos disk labeled disks.

  Allow the partition name to be an empty string when set in script mode.

  Add --json command line switch to output the details of the disk as JSON.

  Add support for the Linux home GUID using the linux-home flag.

    • Bug Fixes

  Decrease disk sizes used in tests to make it easier to run the test suite
  on systems with less memory. Largest filesystem is now 267MB (fat32). The
  rest are only 10MB.

  Add aarch64 and mips64 as valid machines for testing.

  Escape colons and backslashes in the machine output. Device path,
  model, and partition name could all include these. They are now
  escaped with a backslash.

  Use libdevmapper's retry remove option when the device is BUSY. This
  prevents libdevmapper from printing confusing output when trying to
  remove a busy partition.

  Keep GUID specific attributes when writing the GPT header. Previously
  they were set to 0.
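
The new --fix and --json switches from the NEWS entries above could be invoked like this (illustrative only; /dev/sdX is a placeholder device and both commands would need root):

```shell
# Print the partition table as JSON (new --json switch):
parted --script --json /dev/sdX unit MiB print

# In script mode, automatically fix problems such as the backup GPT
# header not being at the end of the disk (new --fix switch):
parted --script --fix /dev/sdX print
```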

18 April, 2022 08:26PM by Brian C. Lane

GNU Guix

10 years of stories behind Guix

It’s been ten years today since the very first commit to what was already called Guix—the unimaginative name is a homage to Guile and Nix, which Guix started by blending together. On April 18th, 2012, there was very little to see and no actual “project”. The project formed in the following months and became a collective adventure around a shared vision.

Ten years later, it’s amazing to see what more than 600 people achieved, with 94K commits, countless hours of translation, system administration, web design work, and no less than 175 blog posts to share our enthusiasm at each major milestone. It’s been quite a ride!

What follows is a series of personal accounts by some of the contributors who offered their time and energy and made it all possible. Read their stories and perhaps you too will be inspired to join all the nice folks on this journey?

10 year anniversary artwork

Alice Brenon

As a conclusion, Guix is a really neat system and I hope you enjoy it as much as I do!

My story with Guix is a bit topsy-turvy so I thought I might as well start by the end :) I first ranked it last among systems I wanted to test, then was a bit puzzled by it when I had the chance to test it after all the others had disappointed me, and we finally got to know each other once I installed it on my work laptop, because when you need a good, stable system you know you can rely on, why not use the most surprising one? Strangely, the alchemy worked and it has never let me down so far.

Like all good computer things, it looked way scarier from a distance than it really was, and seemed to be very much about ethics and theory while it's actually very pragmatic. I had struggled for years with the myriad of incompatible package formats for systems, then for each specific language, and was thrilled to discover at last what seemed to be a reasonable universal format. That's probably what I like best about it: the ability to use potentially any software I want without trashing my system. The welcoming community eager to help and valuing my contributions made it even better, and submitting patches came naturally. I mostly use it for development, and to keep my sanity in spite of all the data science tools I have to use for work. I sometimes wish it were easier to tweak the core of the system, but I blame my lack of free time at least as much as its design. I would absolutely love to see my Guix system using the runit init system one day, but it just works, and I finally learnt that that was all that mattered if you wanted to get things done in the end.

Andreas Enge

When I think of Guix, I always kid myself into believing that I had the idea — I remember chatting with Ludovic around a GNU hackers' meeting about Nix; I joked that since Guile is the GNU language, Nix should be rewritten in Guile. But it turned out that Ludovic had already started such a project in earnest... Luckily there is the git history to refresh our memories. Apparently I installed Guix, at the time only the package manager, with Ludovic's help in December 2012, and immediately reported a few bugs. My next action was to update the package for my own software. Learning Scheme was quite difficult, but I fondly remember discussing quotes and quasiquotes with Ludovic. After that, I mostly added packages to Guix, which was possible without knowing much of functional programming; the most tricky packages that stayed in my mind were ImageMagick and TeX Live. I came to appreciate the GNU Autotools — with all their shortcomings, having a uniform (and usually reliable) way of compiling and installing a software makes creating a Guix package almost trivial.

The most compelling feature of Guix was (and still is, I think) the ability to roll back package installations, and now complete system installations — no more fear of updating a package to a non-working state! And on a completely different level, the nice and welcoming atmosphere in the community, in no small part thanks to Ludovic's initial efforts of creating an inclusive environment.

Many formidable adventures are attached to working on Guix. Buying our first server for the build farm was difficult, since we wanted to use a machine with Libreboot that would work well with the GNU system. Eventually we succeeded, and it is still hosted with the non-profit Aquilenet in Bordeaux, so we managed to start our own infrastructure in accordance with our values.

Writing the bylaws of the Guix Europe non-profit was another exciting adventure; again we tried to create a structure in line with our values, where the decisions were taken as collectively as possible.

And personally I have fond memories of taking part in Guix meetings at FOSDEM and co-organising the Guix Days in Brussels; these are such nice occasions to meet people passionate about Guix and free software! I will never forget the Guix Days 2020 when I had just returned from a Reproducible Build Summit where I admired their way of facilitating the workshop, which I then tried to copy for our own meeting.

The Guix system meets all my daily needs now, so I have no technical wishes for the future — but I trust the many creative minds working on advancing the project to come up with nice new ideas. And I wish the human adventure and community building around Guix to continue!

Andrew Tropin

It's all lisp, no distraction, pure consistency! Every few years I migrate to a different workhorse, and it has always been a pain to bring my software setup with me: forget a pam/udev rule here, a package there, some tiny hack somewhere else, and everything is messed up; it's easier to reinstall everything from the ground up. With the declarative and reproducible nature of Guix System, Guix Home and the rde project, it's just a pure pleasure: write the configuration once, use it everywhere! Daemon configurations, package build phases, cron tasks, everything described in Guile Scheme. The one language to rule them all! I look forward to a little more: wider adoption of Guix for managing development environments and infrastructures.

GNU Guix respects you and your freedoms: anyone can explore and study, tweak and adjust, and share everything they want; every program is available. Moreover every package is bootstrapped, what a miracle! Yes, some hardware isn't supported, but for a good reason, and for this price you get a lot. I look forward to a little more: decentralized substitutes and RISC-V laptops running GNU Guix.

Thank you to the community for all the hard work. I have enjoyed Guix for more than a year already and look forward to more exciting things to appear!

Arun Isaac

I was introduced to Guix in 2016. That was around the time I was learning lisp and having my mind blown. As soon as I heard that Guix was written in a lisp and that it was a FSDG compliant distro, I knew that this was going to be the best distro ever and immediately jumped ship.

While I was immediately smitten by the perfection and elegance of Guix, I have stayed this long not for the technical excellence but for the people. The Guix community is the warmest and most caring free software community I have ever been a part of. I honestly did not believe such people could exist online until I saw Guix. In the future, I would love for this community to grow and thrive with a smoother issue tracking and contribution process so that potential contributors, especially newcomers, don't turn away frustrated.

Björn Höfling

In 2016, I was looking for a GNU/Linux distribution for a friend's laptop and chose GuixSD (now Guix System), inspired by a LibrePlanet video from MediaGoblin. The laptop failed to digest this OS, as it demanded binary, non-free drivers. In contrast, my wetware is 100% GNU FSDG-compatible, and so generation 1 of Guix burned successfully into my brain, without any headaches. Or at least, they went away with help from the very friendly, supportive, tolerant (only to the tolerant, thanks to the code of conduct) community.

My contributions started with not understanding Guix, and asking stupid questions on the mailing lists. Sometimes during that process I found bugs and analyzed them further, which helped Ludovic and other developers to fix them. I reviewed mostly Java packages and added some on my own. I very much enjoyed co-mentoring for Outreachy, and learned more on automated video/screencast generation. I should be really more active in the community again!

For the future of Guix, I would like to see more Java and Go packages (ahem, this wish does not come from me alone; I should review and contribute more). Internally I wish for a more intuitive bug and patch tracker instead of Debbugs+Mumi. Externally I wish for much greater awareness in the commercial "Open Source" world about software freedom, bootstrapping your environment and dependencies from free source code in a truly reproducible way, instead of relying on opaque, binary containers. I wish people would take much more care of their dependencies (with Guix, of course!), put much more thought into their usage of dependencies, break up dependency hell, and support third parties in building their sources with free software (instead of relying on binary dependencies and opaque containerized dev-environments).

Blake Shaw

New media artists and designers suffer from the following dilemma: our work, with its primary medium being code, is at once perhaps the simplest medium to distribute — requiring little more than copying a directory of text files from one hard drive to another — yet the works themselves remain a total nightmare to faithfully reproduce across varying machines at different points in time. Among other reasons, this is because our works are often composed of disparate parts with accompanying technical debt: an audio-visual installation may use the C++ library openFrameworks for high-performance interactive graphics, Haskell's TidalCycles for realtime sequencing, the fantastic FAUST signal processing language for zero-delay audio DSP, plus the usual dependencies: openCV, libfreenect, cairo, gstreamer, ffmpeg and so on. Time and its corresponding ABI changes intensify our predicament; not only is it often an error-prone and laborious task to get all of this running correctly across many machines, but the nature of technical debt means that getting an installation from 2014 up and running in 2022 is often more trouble than it's worth. Sadly, these seemingly immaterial works of art that are the simplest to copy are simultaneously some of the most difficult to reproduce and the quickest to depreciate.

Guix, on the other hand, offers its users provenance-preserving bit-reproducible builds of their entire operating systems: using Guix's implementation of the functional software deployment model, I should be able to reproduce, bit-for-bit, the exact same results across equivalent hardware. Suddenly our artworks can be deterministically produced not only at the level of the source code's changelog but also at the level of the build, offering the guarantee that the usually glued-together towers of systems that power our installations can be revisited and reproduced in the future, and the deployment process becomes as simple as packing your system into a container to be deployed to a remote target. These guarantees mean that scaling our works becomes simpler: if you want to do an installation that involves 100 Raspberry Pis communicating with one another in a dynamic system, you can focus on working on just a small parameterized subset, and then generate their varying configurations using the infrastructure that Guix provides. I'm currently in the early phases of developing re::producer, a "creative plumber's toolkit" that seeks to simplify this process for artists, tailoring the tools provided by Guix to the domain-specific needs of media art and allowing artists to declaratively define complex media art systems using one of the greatest programming languages of all time, Scheme.

This is still new terrain, so there is plenty of work to do. But that’s no excuse to keep up with your old habits, so roll up your sleeves and come hack the good hack!

Cyril Roelandt

Back in early 2013, I was lucky enough to be unemployed for a few months. This gave me a lot of time to try out GNU Guix. Ludovic had told me about it a few months earlier, while we were still working at the same place. It was so easy to add new packages that I naturally ended up submitting a few patches and quickly trolled the whole project by adding my editor of choice, vim. Debugging package definitions was also very simple since the builds were reproducible by default.

I also had some fun writing the first version of the linter and improving the importer/updater for Python packages. I even hacked tox to make it use Guix instead of virtualenv and gave a talk about this at FOSDEM. Even though I left the project a few years ago, I'm glad to see it's doing well and is used in science and has joined forces with Software Heritage.

Efraim Flashner

Back in 2015 or so I had been using GNU/Linux on the desktop for a number of years and I wanted to contribute somehow. I had just finished a university course using Lisp and Prolog, and then I heard about Guix having its 0.8.3 (or so) release, and it looked like something that I could try to contribute to. I certainly made a number of mistakes in the beginning; I didn't know that git revert was an actual command and I tried to revert a commit by hand, leaving a dangling parenthesis and breaking the repo. Another time I added Java as a dependency to an image library and broke the graphics stack for half the architectures until I reverted that! I even had a stint as a failed GSoC student. I was working on Bournish, a Gash/Gash-utils-like utility to make debugging in the early boot process far easier by providing common CLI utilities. I had some issues with time management and ended up spending more time than I should have updating packages in the repository; as a result I didn't spend enough time working on Bournish, and it has languished since then.

Currently, I enjoy working on troublesome packages and expanding the number of packages available on non-popular architectures. Sometimes it's removing compiler flags or ‘ifdef gating’ architecture-specific includes, and other times certain parts of programs need to be disabled. Then everything needs to be double-checked for cross-compiling. Right now I'm working on riscv64-linux support in Guix; it has a lot of potential, but powerful boards are hard to come by. There are also some lingering bugs with guix show showing different supported-systems for packages depending on which architecture you run it from: on x86_64-linux only two are shown, while from aarch64-linux all 9 architectures are shown.

Ekaitz Zarraga

A friend of mine introduced me to Nix and Guix a while ago, but I was hesitant to try them because I hate configuring stuff and Guix didn't look like an easy distribution to use. Once I discovered we could have separate environments and how easy it is to write a package (despite all the other difficulties Guix has), I was completely into it. I installed Guix on my laptop and never looked back. In the 10 years I've been using GNU/Linux distributions I had never interacted that directly with my packages: creating custom ones, sending them upstream, making fixes… That's also freedom! Now, a couple of years later, I am working on improving the bootstrap process for RISC-V and using Guix as a mechanism that provides reproducible builds and an easy way to manage all the actors I have to deal with: very old software with old dependencies, colliding libraries, environment variables, custom patches in source code… This would be a pain to build in any other environment, but Guix makes hard things easy. Guix also makes easy things hard sometimes, but we are working on that!

Eric Bavier

As a young(ish) computer programmer, I had been running GNU/Linux systems for about 7 years but wanted to find a project I could contribute back to. Fortunately, I came upon a release announcement for Guix after having poked around the GNU Hurd and Guile spheres. To me at the time Guix had the exact mix of upstart energy, optimism, and long-term vision that I was hoping to find. Over the years I've been able to contribute packages I use in both my personal and work lives, and I'm proud to have implemented the first version of guix refresh --list-dependents. I've really loved how Guix allows me to easily move my environments around to different systems, and "rollback" gives me much peace of mind knowing that I can tinker with the system and recover should something go wrong. But probably my favorite part of Guix is the fantastic community I've seen grow around the project. It exemplifies the sort of caring, kind, supportive group I wish many other projects had. Together I know we'll be able to make advances on many fronts. In particular, I'd like to see further work on peer-to-peer substitutes delivery, a native build daemon, additional tools for managing relocatable pack collections, and continued leadership in bootstrapping.

Florian Pelz (pelzflorian)

GNU Guix to me is a group that cares about its software and about the people involved. I got to know Guix by reading discussions on how to do sandboxing properly. But actually, Guix convinced me with its clarity, approachability, and principles. Guix opened my eyes to how parts of GNU fit together. Thanks to all who give us this freedom to understand and decide. By contributing to the translation, I hope to make it reach more users and developers.

Guillaume Le Vaillant

Before I started using Guix in 2019, I was using Gentoo because I liked how I could easily package software and make package variants with some custom patches. However one day an upgrade didn't go well and many packages ended up in a bad state. I realized I would have to reinstall the whole system to get things to work again. Before recompiling the whole system, I tried Nix and Guix, because I had read somewhere that they used functional package management, which gives the possibility to roll back to a working state when an upgrade causes problems. I chose Guix because I thought it was going in the right direction by using only free software and trying to achieve reproducible builds. The fact that package definitions use the Scheme language was a bonus point, as I like Lisp languages. And there was even a build system for Common Lisp packages, which is rarely the case in the GNU/Linux distributions I have tried over time. So I started using Guix, and packaging software I wanted that was not yet in Guix. One day someone asked me if I would be interested in having commit access, and I accepted. I also found a way to improve the build system for Common Lisp packages that simplified package definitions. In the future, I think it would be nice to add an importer fetching information from Quicklisp, as it would make packaging Common Lisp software even easier.

Hartmut Goebel

Christian Grothoff (GNU Taler) pointed me to Guix early 2016, saying “This will become the new Debian!” and asking me to look at it for GNU Taler. Well, quickly I was attracted by the ideas of reproducible build and the ease of packaging software. I also love the one-time usage of programs without littering my system.

Curiously, even though I'm a Python developer, my first contributions were about Java packaging. I also spent quite some time trying to build Maven, a challenge I gave up after two (or three? can't remember) attempts. I'm glad Julien Lepiller continued the endeavor and created the Maven build system.

Nowadays I still use Guix on a foreign distro only, as KDE desktop and some of my main applications are still not here. Guix keeps my main system tidy, while I can have different development environments without dependency conflicts.

As you can imagine, I'd like to see KDE desktop in Guix as well as some guix compose for managing compound containers.

Jan (janneke) Nieuwenhuizen

At FOSDEM 2016 there were seven talks about GNU Guix: A talk about the Hurd by Manolis Ragkousis, about functional package management by Ricardo Wurmus and that was just what I needed to hear: Finally a viable promise for the GNU System and much more innovative than I could have hoped for. At the time I also worked on a project where building binary releases was becoming more unmanageable with every release because of conflicting requirements. We were slowly migrating away from C++ to GNU Guile, so while not directly applicable the talk “Your distro is a Scheme library” by Ludovic Courtès also made me feel: Go Guix!

Using Guix, my intricate dependency problems building binary packages quickly and easily disappeared. That gave me the confidence that I needed and I wanted to get involved. My first contributions were a programming game called Laby and its dependencies and a few more packages that I missed. After running Guix on top of Debian GNU/Linux for three months I switched to what we now call Guix System. Guix did not have log rotation yet in those days, so I created a package.

This is how I found how amazingly helpful and friendly the community was. I created the MinGW cross build for Guile 2.0 and then "found out" about the bootstrap binaries: The only packages in Guix that are not built from source. Something just did not feel right. The manual said: “These big chunks of binary code are practically non-auditable which breaks the source to binary transparency that we get in the rest of the package dependency graph.” So, I wrote GNU Mes and started working on solving this problem. Twice we halved the size of the bootstrap binaries and the work is still ongoing.

What possibly started somewhat as an April Fools' joke in 2020 about the Hurd—this is still unclear—was (mis?)taken by some as a real project and led to a fun hacking frenzy of several months, finally producing the "Childhurd": a Guix Shepherd service that gives access to GNU/Hurd in a VM. My wish for the near future would be to see an up-to-date Hurd including the Debian rumpkernel patches that may finally enable running the Hurd on real hardware again.

John Kehayias

All I wanted to do was to try out a new status bar, but the author only officially supported Nix for building. That finally got me to look at Nix, which I had heard about in passing before. I was intrigued by the focus on reproducible and declarative builds. The language, not so much. My brother mentioned another project in the same vein but built on a Lisp. As a lover of all things Lisp, that was basically enough for me to dive right in. Beyond the banner features of the powerful package and system management, reproducible builds, system configuration, and, of course, Guile, I quickly found perhaps the biggest and most important: the GNU Guix community. They have been nothing short of amazing: helpful, intelligent, supportive, and fun to hang out with on the #guix channel. In less than a year, my journey so far has taken me through the (is it infamous yet?) recent big core-updates branch and merge, submitting patches for random libraries and key desktop features I use, and participating in the motivating Guix Days 2022. Looking to the future, I hope we can better harness the energy and resources of the growing Guix community. It is already a great project to get involved with and make your own, but with better and quicker patch review, further building out and using our development tools and ecosystem, and continuing to smooth out the rough edges for new users/contributors, I'm sure the next 10 years of GNU Guix will be very bright indeed.

Konrad Hinsen

In my work as a computational scientist, my first encounter with reproducibility issues happened in 1995, when a quantum chemistry package produced different results on two almost identical Silicon Graphics workstations. This was the beginning of a long quest for better computational reproducibility, in the course of which I discovered in 2014 Nix and Guix as two implementations of the same promising idea: the fully automated construction of a complete reproducible software stack. Of the two, Guix was more aligned with my lispy past, and already had a burgeoning computational science user community. I started playing with Guix in 2016, in a virtual machine under macOS, but only fully adopted Guix for my everyday work in 2021, when I left macOS for Linux. During those five years, I also learned to appreciate the Guix community, which is friendly, competent, and refreshingly low-ceremony in spite of continuous growth. That makes for an easy transition from newbie to contributor (mostly contributing packages, but also the time-machine command that matters for reproducibility). The anniversary is a good occasion to express my thanks to all those who answered my many questions, ranging from conceptual to technical, and to the maintainer team that does an important but not very visible work by critically examining all submitted packages and code enhancements. My main wish for the future is a lower barrier to adoption for my colleagues in computational science, and I hope to contribute to making this happen.

Lars-Dominik Braun

Around the end of 2019 we were looking for a way to provide reproducible software environments to researchers in(?) psychology and I was researching software to accomplish that. Binder/repo2docker was the obvious and most mature solution at that time and a colleague of mine had set up a proof of concept server already. But it could only handle public projects out-of-the-box and setting up an entire Kubernetes cluster didn’t seem particularly appealing at that time, because no other project was moving in that direction yet. So I set out to look for alternatives. Another idea was based around OpenStack and one virtual machine per project with preinstalled software, which we would keep for eternity. Also not ideal and OpenStack is very hard to master too. So I looked further at Nix, which – at that time – lacked an obvious way to spawn ad-hoc environments with a certain set of packages. Thankfully I stumbled upon GNU Guix by mere accident, which had exactly that feature. And so in December 2019 my first code contribution was merged.

Prior to that I had never written a single line of Scheme or Lisp, and even now it’s still a steep hill. GNU Guix still powers our project and allows us to easily share software environments while providing excellent application startup times. I also started contributing software that I run on my own machines, but I’m not running Guix System, because Shepherd is quite limited on the desktop compared to systemd, and because Guix lacks first-class support for the non-free drivers/firmware I need to even boot my machine.

Ludovic Courtès

It all started as a geeky itch-scratching experiment: a tiny bit of Guile code to make remote procedure calls (RPCs) to the Nix build daemon. Why? As I was involved in and excited about Guile and Nix, it felt natural to try and bridge them. Guile had just had its 2.0.0 release, which broadened its scope, and I wanted to take advantage of it. Whether to go beyond the mere experiment is a decision I made sometime after a presentation at the 2012 GNU Hackers Meeting.

It was far from obvious that this would lead us anywhere—did the world really need another package manager? The decisive turn of events, for me, was to see that, at the time Guix officially became part of GNU in November 2012, it had already become a group effort; there was, it seems, a shared vision of why such a crazy-looking project made sense not just technically but also socially—for GNU, for user freedom. I remember Nikita Karetnikov as the first heroic contributor at a time when Guix could barely install packages.

One of my “ah ha!” moments was when I built the first bootable image a year later. G-expressions, the service framework, and checkout authentication are among my favorite hacks. What’s mind-blowing to me though is what others have achieved over the years: the incredible bootstrapping work, Disarchive, Emacs-Guix, the installer, Hurd support, Guix Home, supporting tools like Cuirass, the Data Service, and mumi. There’s also the less visible but crucial work: Outreachy and GSoC mentoring, support on IRC and the mailing lists, build farm administration, translation, dealing with the occasional incident on communication channels, organizing events such as the Guix Days or FOSDEM, and more.

As much as I love hacking the good hack, I think Guix’s main asset is its community: a friendly, productive, and creative group with a sense of attention to the other. I started clueless about what it means “to build a community” and learned a lot from everyone met on the way. We did it, we built this! Thumbs up, Guix!

Luis Felipe

When I found that Guix existed, I saw it could make it easier for GNU to release its Operating System and reach a wider audience. I intended to propose some graphic designs related to this, and sent a message to GNU in order to test the waters. Things didn't go as I expected, so, instead, I decided to direct my contributions towards GNU Guix and its distribution of GNU.

Since then, I've contributed graphics (Guix and Guile logos and website designs, desktop backgrounds, release and promotional artwork), testing, bug reporting, packaging, and Spanish translations.

It's been about 8 years of Guix for me (the heck!). I started using the package manager on Debian, gradually switched the provenance of my software from Debian to Guix, and, once GNOME became available, I moved to Guix’s distribution of the GNU operating system, which I've been using as my main system for about 3 years now (and I don't see that changing anytime soon).

Right now, I'm enjoying developing software using Guix's reproducible environments and containers, and using one single package manager for every dependency.

I hope this system reaches a wider audience and brings science to home computing along the way. Homes should be capable of producing scientific work too.

Manolis Ragkousis

When I think how I started with Guix, I use one word to describe it: luck! It was early 2014 when I encountered Guix by luck, while I was still a student in Crete, Greece. I remember there was a strike during that time and I had plenty of free time for a week, so I decided that I would try to start working on this. Then an idea came to mind: why not try porting Guix to GNU/Hurd and build a system with it? One thing led to another and it also became a GSoC project in 2015 and 2016. In 2016 I also gave a FOSDEM talk about this, which somehow ended up being the start of me helping out with the GNU Guile devroom in 2017 and 2018, and then what became the Minimalistic Languages until today. Thinking about Guix is like thinking about the story of me growing up and the people I met through all these years, whom I consider family! Guix is a big part of my life, I use it everywhere and even though I am not able to help much nowadays I am following the project as much as I can. Here's to another 10 years!

Marius Bakke

I originally got interested in Guix after facing shortcomings in traditional configuration management tools. A fully declarative and immutable system that cleans out old user accounts and packages, and also offers reproducibility, rollbacks, and the ability to generate virtual machines and containers from the same code. Where do I sign up?

It turns out, signing up was easy, and I soon found myself contributing the pieces I needed to make it a daily driver. Watching the community grow from a handful of contributors to 100 monthly has been astonishing. I have learned a lot from this community and am proud to be a part of it. Can't wait to see what the next decade brings. Happy birthday Guix!

Mathieu Othacehe

I was introduced to GNU Guix by a colleague, Clément Lassieur, in 2016. At first I found the concept overwhelming. Writing Guile wrappers for each and every Linux service out there and keeping them up to date seemed impossible. However, I quickly fell in love with the package manager, the distribution, and the community behind it. A few months later, GNU Guix was running on all my machines and I started hacking on the continuous integration tool: Cuirass.

Since then GNU Guix has been an important part of my life. I wrote most of the Guix System installer while traveling by bike to China in 2018. During the 2020 lockdown, I worked with janneke on the new image API and the Hurd port. At that time, I was offered a co-maintainer position on the project. In 2021, thanks to an NGI sponsorship, I dedicated 6 months to improving our continuous integration process and overall substitutes coverage.

Recently it has been harder to dedicate as much effort to the project, but I'm sure this is a transient phase. I can't wait to start working again with the incredibly talented people making this piece of software so special to me.

Paul Garlick

I began using and contributing to the Guix project in 2016. I had been searching for a way to preserve software environments that are used for numerical simulation. The applications that run in these environments often comprise a combination of specialised code and building blocks drawn from an underlying framework. There are many moving parts and changes to a low-level library can block the operation of the high-level application. How much better things would be if one could specify the exact components of the environment and re-create it whenever it is needed. I discovered that Guix provides the machinery to do just that. Scheme was new to me then so I had some learning to do before contributing. This included a detour via Vonnegut/Cat's Cradle, of course, to discover the meaning of ice-9. Suitably informed I returned to add a number of finite volume and finite element frameworks to the Guix package collection. Keeping these packages up-to-date and welcoming new simulation-related packages is the next target. Looking ahead to the next ten years, an important task is to popularise the use of the Guix tools. Many more engineers and scientists stand to benefit from the use of the dependable software environments that are now made possible.

Ricardo Wurmus

In 2014 I became responsible for building and installing scientific software at the Max Delbrück Centre, a research institute in Berlin. We used CentOS, so I built custom RPMs, installing applications to ad-hoc prefix directories. After a few weeks I took a minute to consider the horrific implications of maintaining a growing collection of custom software with RPM. As I tried to remember what life choices had led me to this moment, I recalled an announcement email of a quirky GNU package manager written in Scheme. A short web search later I was playing around with Guix.

After an encouraging chat on IRC I realized that I could probably replace our custom RPM repository and build different variants of scientific software on much more solid ground—all the while contributing to a project that felt like a new and exciting take on GNU. We're building the GNU system!

Guix only had very few of the packages I needed, so I got busy. I packaged and bootstrapped the JDK because I was under the mistaken assumption that I would need it for R (turns out Java is optional). Many more foolish adventures followed, and some of them have actually been useful for others.

I had found my tribe of fellow hackers who cared about the vision of the GNU system, encouraged playful experimentation, and were rooting for each other to succeed in building a better system that made software freedom a practical reality, blurring the lines between developers and users. In the decades to come I hope many more people will get to experience what I did and end up calling this community their home.

Simon Tournier

Back in 2014, I watched the video “Growing a GNU with Guix” at FOSDEM, but the real revelation had been in 2015 with “GNU Guix: The Emacs of Distros”, again at FOSDEM. Then, I was following the development but not using Guix yet. In 2016, a new job where I was spending my time fighting against dependencies and Modulefiles. Then I totally jumped into Guix in December 2018. My first interaction with the project — and not yet running Guix — was an in-person event in Paris before the Reproducible Builds workshop. Back home, I proofread the French manual cover to cover — my first contribution — and installed Guix on top of my Debian GNU/Linux system. So amazing! Guix fixes many issues I had at work — and introduces new ones^W challenges. Plus, thanks to people around, I am learning a lot, both about technical details and about inter-personal interactions. My wish for the near future is a community more structured: more events and meetups, more process for smoothing the contributions (“teams” for improving the reviewing by sharing the load, RFC for discussing new features, regular releases, etc.), and more materials for using Guix in various configurations.

In a scientific context, transparency — being able to audit the whole computational environment, from the source code to the production of binaries — is one of the keys to truly reproducible research. Since Guix is transparent by design, it appears to me to be part of a solution for tackling the computational side of the replication crisis. For the near future, I wish more scientific practitioners would employ Guix.

Thiago Jung Bauermann

I learned about Guix when I was looking for alternative, safe ways of installing an up-to-date Rust toolchain on my machine (at the time rustup didn't verify signatures of downloaded binaries, and it still doesn't do the full job). Guix is a great way to have the latest and greatest software on top of your slower-moving Linux distribution. I love how easy it makes creating instant, ad hoc environments with the packages you need for a specific task. Or to temporarily try out some new app or tool, leaving Guix to garbage-collect it and its dependencies. The Guix community is amazing as well! It's a pleasure to participate on the mailing lists. And I've been enjoying learning Scheme! For the future, I hope Guix can get even better test coverage so that every update of the master branch is guaranteed to not introduce regressions. And that the project gets more committers, to help with the constant influx of patches.
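The ad hoc environments mentioned above are created with `guix shell` (formerly `guix environment`); the package names below are only examples:

```shell
# Enter a throwaway environment containing gcc and make; nothing is
# installed into your profile, and `guix gc` reclaims the space later.
guix shell gcc-toolchain make

# Or run a single command inside such an environment, then discard it:
guix shell python -- python3 --version
```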

raingloom

There are multiple reasons I started using Guix. On the tech side, I'd been playing around with 9front for a while at that time, but kept running into issues where the imperative structure of namespaces was getting in my way. I like Haskell a lot and heard about the many benefits of a pure functional approach to build systems, et cetera. I ran Guix on top of Arch for a while and liked it a lot. Using package transformations still feels magical. But yall already know about this cool stuff from Ambrevar's blog post. On the social side, I saw that one of my favorite compsci people — Christine Lemmer Webber — was involved with the project, so I knew it probably has a nice community, which turned out to be very true. This is one of the best communities centered around a piece of tech that I've been in and yall inspire me to be better with each interaction. Huge thank you for that. My favorite Guix memory is when someone CC'd me in a patch for the egg importer, which was built on top of the Chicken Scheme build system I contributed. Seeing others build on top of my work is an amazing feeling. For the future, I hope the service management improvements will keep coming, but what I'd like to see the most is Guix running on old and slow devices. There is a lot of work to be done to make it more bandwidth and space efficient and to support development on systems with little RAM. If I could use it instead of PostmarketOS/Alpine, I'd be elated. On the human side, I hope we can keep contributors from burnout, while increasing the software's quality. I think the way Blender development is structured could be a source of inspiration. On that note, using Guix for reproducible art workflows would be rad. Okay that's it, bye yall lovely people.

Vagrant Cascadian

I think I first heard of Guix in 2016, triggering a late-night session trying to wrap my head around the crazy symlink farms at the heart of Guix. By late 2017 I was filing bug reports and eventually patches!

I am deeply fascinated that Guix has Reproducible Builds built right in, with normalized, containerized build environments and the "guix challenge" tool to verify reproducibility. I had heard of Nix as an interesting model, but I valued Guix's strong commitment to Free Software.

Eventually I even grew crazy enough to package Guix in Debian... which indirectly led to one of my most creative contributions to a Free Software project, a typo poem embedded in!

I really appreciate the community around Guix and the process, values and thoughtfulness that work proactively to maintain a healthy community, even in the face of inevitable and occasional conflict. Guix balances formal and informal in a way that works for me.

I look forward to the day when Guix has a full source bootstrap!

10 Years of Guix artwork by Luis Felipe.

About GNU Guix

GNU Guix is a transactional package manager and an advanced distribution of the GNU system that respects user freedom. Guix can be used on top of any system running the Hurd or the Linux kernel, or it can be used as a standalone operating system distribution for i686, x86_64, ARMv7, AArch64 and POWER9 machines.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. When used as a standalone GNU/Linux distribution, Guix offers a declarative, stateless approach to operating system configuration management. Guix is highly customizable and hackable through Guile programming interfaces and extensions to the Scheme language.
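The transactional upgrades and roll-backs described above can be sketched with a few commands from the Guix manual (the package name is only an example):

```shell
guix install hello            # each change creates a new profile generation
guix package --list-generations
guix package --roll-back      # atomically return to the previous generation
guix gc                       # delete store items no generation references
```

Because a roll-back simply switches a symlink to the previous generation, it succeeds even when the upgrade itself left packages in a bad state.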

18 April, 2022 06:45PM by Guix Hackers

April 17, 2022

unifont @ Savannah

Unifont 14.0.03 Released

17 April 2022 Unifont 14.0.03 is now available.  This release adds the new hex2otf program, which can convert Unifont .hex format files into OpenType fonts, as well as TrueType and other formats.  See the hex2otf documentation for details.

The font files just add several new Under ConScript Unicode Registry (UCSUR) scripts: Xaîni (U+E2D0..U+E2FF), Ophidian (U+E5E0..U+E5FF), Niji (U+ED40..U+ED5F), Sitelen Pona (U+F1900..U+F19FF), and Shidann (U+F1B00..U+F1C3F).

Download this release from GNU server mirrors at:

     https://ftpmirror.gnu.org/unifont/unifont-14.0.03/

or if that fails,

     https://ftp.gnu.org/gnu/unifont/unifont-14.0.03/

or, as a last resort,

     ftp://ftp.gnu.org/gnu/unifont/unifont-14.0.03/

These files are also available on the unifoundry.com website:

     https://unifoundry.com/pub/unifont/unifont-14.0.03/

Font files are in the subdirectory

     https://unifoundry.com/pub/unifont/unifont-14.0.03/font-builds/

A more detailed description of font changes is available at

      https://unifoundry.com/unifont/index.html

and of utility program changes at

      http://unifoundry.com/unifont/unifont-utilities.html

17 April, 2022 08:13PM by Paul Hardy

mailutils @ Savannah

Version 3.15

Version 3.15 is released today. New in this version:

  • mbox format: don't count terminating empty line as part of the message
  • Improve performance of the Sieve fileinto action
  • Improve efficiency of operations on flat mailboxes in append mode
  • Bugfixes in quoted-printable and fromrd filters
  • Various fixes in mbox and dotmail format libraries
  • Fix compilation with flex version 2.6.1

17 April, 2022 07:22PM by Sergey Poznyakoff

April 15, 2022

coreutils @ Savannah

coreutils-9.1 released [stable]

This is to announce coreutils-9.1, a stable release.
See the NEWS below for details.

Thanks to everyone who has contributed!
There have been 210 commits by 10 people in the 29 weeks since 9.0.

  Bernhard Voelker (3)            Max Filippov (1)
  Bruno Haible (1)                Paul Eggert (136)
  Christian Hesse (1)             Pádraig Brady (64)
  Daniel Knittl-Frank (1)         Rohan Sable (1)
  Jim Meyering (4)                Ville Skyttä (1)

Pádraig [on behalf of the coreutils maintainers]

==================================================================

Here is the GNU coreutils home page:
    https://gnu.org/software/coreutils/

For a summary of changes and contributors, see:
    https://git.sv.gnu.org/gitweb/?p=coreutils.git;a=shortlog;h=v9.1
or run this command from a git-cloned coreutils directory:
    git shortlog v9.0..v9.1

To summarize the 259 gnulib-related changes, run these commands
from a git-cloned coreutils directory:
    git checkout v9.1
    git submodule summary v9.0

==================================================================

Here are the compressed sources:
  https://ftp.gnu.org/gnu/coreutils/coreutils-9.1.tar.gz   (14MB)
  https://ftp.gnu.org/gnu/coreutils/coreutils-9.1.tar.xz   (5.5MB)

Here are the GPG detached signatures[*]:
  https://ftp.gnu.org/gnu/coreutils/coreutils-9.1.tar.gz.sig
  https://ftp.gnu.org/gnu/coreutils/coreutils-9.1.tar.xz.sig

Use a mirror for higher download bandwidth:
  https://www.gnu.org/order/ftp.html

Here are the SHA1 and SHA256 checksums:

cab498ddc655fd3c7da553d80436d28bc9b17283  coreutils-9.1.tar.gz
YFXfkmhgPoI5pcnB1kyyW5qZJTDfZuM7jXimYO2zezU  coreutils-9.1.tar.gz
aa7bf0be95eef29d98eb5c76d4455698b3b705b3  coreutils-9.1.tar.xz
YaH0ENeLp+fzelpPUObRMgrKMzdUhKMlXt3xejhYBCM  coreutils-9.1.tar.xz

The SHA256 checksum is base64 encoded, instead of the
hexadecimal encoding that most checksum tools default to.
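Since most checksum tools print hexadecimal digests, comparing against the base64 checksums above takes one conversion step. A minimal sketch using only coreutils (`basenc` has been part of coreutils since 8.31; the helper function name is ours):

```shell
# Convert a lowercase hex SHA256 digest to the base64 form listed above.
hex_to_base64() {
  printf '%s' "$1" | tr 'a-f' 'A-F' | basenc --base16 -d | base64
}

# To verify a downloaded tarball against the published base64 checksum:
#   hex_to_base64 "$(sha256sum coreutils-9.1.tar.xz | cut -d' ' -f1)"

# Self-contained demonstration on the SHA256 of the empty string:
hex_to_base64 "$(printf '' | sha256sum | cut -d' ' -f1)"
# → 47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=
```

Alternatively, `cksum -a sha256` in coreutils 9.x can print and check digests itself.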

[*] Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact.  First, be sure to download both the .sig file
and the corresponding tarball.  Then, run a command like this:

  gpg --verify coreutils-9.1.tar.gz.sig

If that command fails because you don't have the required public key,
or that public key has expired, try the following commands to update
or refresh it, and then rerun the 'gpg --verify' command.

  gpg --locate-external-key P@draigBrady.com

  gpg --recv-keys DF6FD971306037D9

  wget -q -O- 'https://savannah.gnu.org/project/release-gpgkeys.php?group=coreutils&download=1' | gpg --import -

This release was bootstrapped with the following tools:
  Autoconf 2.71
  Automake 1.16.4
  Gnulib v0.1-5194-g58c597d13
  Bison 3.7.4

NEWS

* Noteworthy changes in release 9.1 (2022-04-15) [stable]

** Bug fixes

  chmod -R no longer exits with error status when encountering symlinks.
  All files would be processed correctly, but the exit status was incorrect.
  [bug introduced in coreutils-9.0]

  If 'cp -Z A B' checks B's status and some other process then removes B,
  cp no longer creates B with a too-generous SELinux security context
  before adjusting it to the correct value.
  [bug introduced in coreutils-8.17]

  'cp --preserve=ownership A B' no longer ignores the umask when creating B.
  Also, 'cp --preserve-xattr A B' is less likely to temporarily chmod u+w B.
  [bug introduced in coreutils-6.7]

  On macOS, 'cp A B' no longer miscopies when A is in an APFS file system
  and B is in some other file system.
  [bug introduced in coreutils-9.0]

  On macOS, fmt no longer corrupts multi-byte characters
  by misdetecting their component bytes as spaces.
  [This bug was present in "the beginning".]

  'id xyz' now uses the name 'xyz' to determine groups, instead of xyz's uid.
  [bug introduced in coreutils-8.22]

  'ls -v' and 'sort -V' no longer mishandle corner cases like "a..a" vs "a.+"
  or lines containing NULs.  Their behavior now matches the documentation
  for file names like ".m4" that consist entirely of an extension,
  and the documentation has been clarified for unusual cases.
  [bug introduced in coreutils-7.0]

  On macOS, 'mv A B' no longer fails with "Operation not supported"
  when A and B are in the same tmpfs file system.
  [bug introduced in coreutils-9.0]

  'mv -T --backup=numbered A B/' no longer miscalculates the backup number
  for B when A is a directory, possibly inflooping.
  [bug introduced in coreutils-6.3]

** Changes in behavior

  cat now uses the copy_file_range syscall if available, when doing
  simple copies between regular files.  This may be more efficient, by avoiding
  user space copies, and possibly employing copy offloading or reflinking.

  chown and chroot now warn about usages like "chown root.root f",
  which have the nonstandard and long-obsolete "." separator that
  causes problems on platforms where user names contain ".".
  Applications should use ":" instead of ".".

  cksum no longer allows abbreviated algorithm names,
  so that forward compatibility and robustness is improved.

  date +'%-N' now suppresses excess trailing digits, instead of always
  padding them with zeros to 9 digits.  It uses clock_getres and
  clock_gettime to infer the clock resolution.

  dd conv=fsync now synchronizes output even after a write error,
  and similarly for dd conv=fdatasync.

  dd now counts bytes instead of blocks if a block count ends in "B".
  For example, 'dd count=100KiB' now copies 100 KiB of data, not
  102,400 blocks of data.  The flags count_bytes, skip_bytes and
  seek_bytes are therefore obsolescent and are no longer documented,
  though they still work.

  ls no longer colors files with capabilities by default, as file-based
  capabilities are very rarely used, and lookup increases processing per file by
  about 30%.  It's best to use getcap [-r] to identify files with capabilities.

  ls no longer tries to automount files, reverting to the behavior
  before the statx() call was introduced in coreutils-8.32.

  stat no longer tries to automount files by default, reverting to the
  behavior before the statx() call was introduced in coreutils-8.32.
  Only `stat --cached=never` will continue to automount files.

  timeout --foreground --kill-after=... will now exit with status 137
  if the kill signal was sent, which is consistent with the behavior
  when the --foreground option is not specified.  This allows users to
  distinguish if the command was more forcefully terminated.

** New Features

  dd now supports the aliases iseek=N for skip=N, and oseek=N for seek=N,
  like FreeBSD and other operating systems.

  dircolors takes a new --print-ls-colors option to display LS_COLORS
  entries, on separate lines, colored according to the entry color code.

  dircolors will now also match COLORTERM in addition to TERM environment
  variables.  The default config will apply colors with any COLORTERM set.

** Improvements

  cp, mv, and install now use openat-like syscalls when copying to a directory.
  This avoids some race conditions and should be more efficient.

  On macOS, cp creates a copy-on-write clone if source and destination
  are regular files on the same APFS file system, the destination does
  not already exist, and cp is preserving mode and timestamps (e.g.,
  'cp -p', 'cp -a').

  The new 'date' option --resolution outputs the timestamp resolution.

  With conv=fdatasync or conv=fsync, dd status=progress now reports
  any extra final progress just before synchronizing output data,
  since synchronizing can take a long time.

  printf now supports printing the numeric value of multi-byte characters.
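
  For example (the multi-byte case assumes coreutils >= 9.1 and a UTF-8
  locale; the ASCII case works on any version):

```shell
# A leading-quote argument to %d yields the character's numeric value;
# with coreutils >= 9.1 this also works for multi-byte characters in a
# UTF-8 locale, e.g.:  printf '%d\n' '"€'   prints 8364
# 'env' forces the coreutils binary, bypassing the shell's builtin printf:
env printf '%d\n' '"A'    # prints 65
```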

  sort --debug now diagnoses issues with --field-separator characters
  that conflict with characters possibly used in numbers.

  'tail -f file | filter' now exits on Solaris when filter exits.

  Coreutils invoked by root, when built and run in single-binary mode,
  now adjust /proc/$pid/cmdline to name the specific utility being run,
  rather than using the general "coreutils" binary name.

** Build-related

  AIX builds no longer fail because some library functions are not found.
  [bug introduced in coreutils-8.32]

15 April, 2022 10:34PM by Pádraig Brady

April 11, 2022

GNUnet News

libgnunetchat 0.1.0

libgnunetchat 0.1.0 released

We are pleased to announce the release of the client side library libgnunetchat 0.1.0.
This library brings an abstraction layer using the client API of different GNUnet services to provide the functionality of a typical messenger application. The goal is to make developing such applications easier and independent of the GUI toolkit, so people can develop different interfaces that are compatible with each other despite visual differences, a few missing features, or differences in overall design.
The library relies on multiple services from GNUnet to implement its features. More information about that can be found here.

Download links

The GPG key used to sign is: 3D11063C10F98D14BD24D1470B0998EF86F59B6A

Note that due to mirror synchronization, not all links might be functional early after the release. For direct access try http://ftp.gnu.org/gnu/gnunet/

Noteworthy changes in 0.1.0

  • This release requires the GNUnet Messenger Service 0.1!
  • It allows account management (creation, listing and deletion).
  • Clients are able to switch between accounts during runtime.
  • The client can rename an account or update its key.
  • Contact exchange is possible via lobbies, in the form of URIs that can be shared as text or potentially encoded as QR codes.
  • Each resource allows handling a user pointer for the client application.
  • Contacts and groups can be managed individually and given a custom nick name.
  • It is possible to request and open a direct chat with any contact.
  • Groups allow listing their members with custom user pointers related to the group memberships.
  • Chats can be left explicitly.
  • Each chat will be represented as context resource abstracting the variant of chat.
  • It is possible to send text messages, send files, share files and send read receipts explicitly.
  • Received messages allow checking for a read receipt status.
  • Messages can be deleted with a custom delay.
  • Files in a chat can be fully managed (they can be uploaded, downloaded, unindexed and provide a decrypted temporary preview if necessary) while being encrypted individually.
  • The status of each operation (upload, download, unindex) regarding files can be tracked.
  • Received invitations to new chats can be accepted.

A detailed list of changes can be found in the ChangeLog .

Known Issues

  • The test cases are not yet complete, and they may erratically fail because of timeouts.

In addition to this list, you may also want to consult our bug tracker at bugs.gnunet.org .

11 April, 2022 10:00PM

April 07, 2022

gzip @ Savannah

gzip-1.12 released [stable]

Thanks to Paul Eggert and Lasse Collin for all the work
on fixing the exploitable zgrep bug, and to Paul for
handling most of the other changes.

Here are the compressed sources:
  https://ftp.gnu.org/gnu/gzip/gzip-1.12.tar.gz   (1.3MB)
  https://ftp.gnu.org/gnu/gzip/gzip-1.12.tar.xz   (808KB)

Here are the GPG detached signatures[*]:
  https://ftp.gnu.org/gnu/gzip/gzip-1.12.tar.gz.sig
  https://ftp.gnu.org/gnu/gzip/gzip-1.12.tar.xz.sig

Use a mirror for higher download bandwidth:
  https://www.gnu.org/order/ftp.html

Here are the SHA1 and SHA256 checksums:

91fa501ada319c4dc8f796208440d45a3f48ed13  gzip-1.12.tar.gz
W0+xTTgxTgny/IocUQ581UCj6g4+ubBCAEa4LDv0EIU  gzip-1.12.tar.gz
318107297587818c8f1e1fbb55962f4b2897bc0b  gzip-1.12.tar.xz
zl4D5Rn2N+H4FAEazjXE+HszwLur7sNbr1+9NHnpGVY  gzip-1.12.tar.xz

The SHA256 checksum is base64 encoded, instead of the
hexadecimal encoding that most checksum tools default to.
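
To compare the values above against sha256sum's hexadecimal output, the hex
digest can be re-encoded as base64. A sketch (assuming basenc from
coreutils >= 8.31, and using a stand-in file rather than the real tarball;
note the published values also drop the trailing '=' padding):

```shell
# Compute the hex digest, then re-encode it as (unpadded) base64:
printf 'hello' > sample.txt
hex=$(sha256sum sample.txt | cut -d' ' -f1)
printf '%s' "$hex" | tr a-f A-F | basenc --base16 -d | basenc --base64 | tr -d '=\n'
echo
```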

[*] Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact.  First, be sure to download both the .sig file
and the corresponding tarball.  Then, run a command like this:

  gpg --verify gzip-1.12.tar.gz.sig

If that command fails because you don't have the required public key,
or that public key has expired, try the following commands to update
or refresh it, and then rerun the 'gpg --verify' command.

  gpg --locate-external-key jim@meyering.net

  gpg --recv-keys 7FD9FCCB000BEEEE

  wget -q -O- 'https://savannah.gnu.org/project/release-gpgkeys.php?group=gzip&download=1' | gpg --import -

This release was bootstrapped with the following tools:
  Autoconf 2.71
  Automake 1.16d
  Gnulib v0.1-5194-g58c597d13b

NEWS

* Noteworthy changes in release 1.12 (2022-04-07) [stable]

** Changes in behavior

  'gzip -l' no longer misreports file lengths 4 GiB and larger.
  Previously, 'gzip -l' output the 32-bit value stored in the gzip
  header even though that is the uncompressed length modulo 2**32.
  Now, 'gzip -l' calculates the uncompressed length by decompressing
  the data and counting the resulting bytes.  Although this can take
  much more time, nowadays the correctness pros seem to outweigh the
  performance cons.
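
  For example (for files under 4 GiB the reported length is unchanged;
  only larger files needed the new decompression-based count, and the
  file name hw.gz is illustrative):

```shell
# gzip -l reports compressed size, uncompressed length, and ratio:
printf 'hello world\n' | gzip > hw.gz
gzip -l hw.gz    # the "uncompressed" column reads 12 (bytes)
```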

  'zless' is no longer installed on platforms lacking 'less'.

** Bug fixes

  zgrep applied to a crafted file name with two or more newlines
  can no longer overwrite an arbitrary, attacker-selected file.
  [bug introduced in gzip-1.3.10]

  zgrep now names input file on error instead of mislabeling it as
  "(standard input)", if grep supports the GNU -H and --label options.

  'zdiff -C 5' no longer misbehaves by treating '5' as a file name.
  [bug present since the beginning]

  Configure-time options like --program-prefix now work.

07 April, 2022 05:07PM by Jim Meyering

April 04, 2022

GNU Health

GNU Health declared Digital Public Good

We are very proud to announce that the GNU Health project has been declared a Digital Public Good by the Digital Public Goods Alliance (DPGA). GNU Solidario received the announcement this Sunday, April 3rd 2022.

The Digital Public Goods Alliance is a multi-stakeholder initiative endorsed by the United Nations Secretary-General, working to accelerate the attainment of the Sustainable Development Goals in low-and middle-income countries by facilitating the discovery, development, use of, and investment in digital public goods.

Digital Public Good Alliance board members (2022)

Current board members of the DPGA include the German Federal Ministry for Economic Cooperation and Development (BMZ), the Government of Sierra Leone, the Norwegian Agency for Development Cooperation (Norad), iSPIRT, the United Nations Development Program (UNDP), and the United Nations Children’s Fund (UNICEF).

The goal of the DPGA and its registry is to promote digital public goods in order to create a more equitable world. Being recognized as a DPG increases the visibility, support for, and prominence of open projects that have the potential to tackle global challenges.

GNU Health is now at the Digital Public Goods registry

After its nomination to become a Digital Public Good project, GNU Health had to pass the requirements of the DPGA standards. As the DPGA states:

The Digital Public Goods Standard is a set of specifications and guidelines designed to maximise consensus about whether a digital solution conforms to the definition of a digital public good: open-source software, open data, open AI models, open standards, and open content that adhere to privacy and other applicable best practices, do no harm by design and are of high relevance for attainment of the United Nations 2030 Sustainable Development Goals (SDGs). This definition stems from the UN Secretary-General’s Roadmap for Digital Cooperation.

Digital Public Good Standards

At GNU Solidario and GNU Health we are humbled and very happy with this recognition, and accept it with profound commitment, responsibility and determination. It makes us work even harder to keep on fighting for the advancement of Social Medicine, and to give voice to the voiceless around the world.

About GNU Health

GNU Health is a Libre, community driven project from GNU Solidario, a non-profit humanitarian organization focused on Social Medicine. Our project has been adopted by public and private health institutions and laboratories, multilateral organizations and national public health systems around the world.

The GNU Health project provides the tools for individuals, health professionals, institutions and governments to proactively assess and improve the underlying determinants of health, from the socioeconomic agents to the molecular basis of disease. From primary health care to precision medicine.

The following are the main components that make up the GNU Health ecosystem:

  • Social Medicine and Public Health
  • Hospital Management (HMIS)
  • Laboratory Management (Occhiolino)
  • Personal Health Record (MyGNUHealth)
  • Bioinformatics and Medical Genetics
  • Thalamus and Federated health networks
  • GNU Health embedded on Single Board devices

GNU Health is a Free/Libre, community-driven project from GNU Solidario with a large and friendly international community. Every year GNU Solidario holds GNU Health Con and the International Workshop on e-Health in Emerging Economies (IWEEE), which gather GNU Health and social medicine advocates from around the world.

GNU Health is an official GNU (www.gnu.org) package, recipient of the Free Software Foundation Award for Projects of Social Benefit, among others. GNU Health has been adopted by many hospitals, governments and multilateral organizations around the globe.

See also:

GNU Solidario : https://www.gnusolidario.org

Digital Public Good Alliance: https://digitalpublicgoods.net/

Source : https://my.gnusolidario.org/2022/04/04/gnu-health-declared-digital-public-good/

04 April, 2022 09:12PM by Luis Falcon

April 02, 2022

health @ Savannah

GNU Health Hospital Management 4.0.3 patchset released

Dear community

GNU Health 4.0.3 patchset has been released !

Priority: High

Table of Contents

  • About GNU Health Patchsets
  • Updating your system with the GNU Health control Center
  • Summary of this patchset
  • Installation notes
  • List of other issues related to this patchset

About GNU Health Patchsets

We provide "patchsets" to stable releases. Patchsets allow applying bug fixes and updates on production systems. Always try to keep your
production system up-to-date with the latest patches.

Patches and Patchsets maximize uptime for production systems, and keep your system updated, without the need to do a whole installation.

NOTE: Patchsets are applied on previously installed systems only. For new, fresh installations, download and install the whole tarball (ie,
gnuhealth-4.0.3.tar.gz)

Updating your system with the GNU Health control Center

Starting GNU Health 3.x series, you can do automatic updates on the GNU Health HMIS kernel and modules using the GNU Health control center
program.

Please refer to the administration manual section (https://en.wikibooks.org/wiki/GNU_Health/Control_Center )

The GNU Health control center works on standard installations (those done following the installation manual on wikibooks). Don't use it if you use an alternative method or if your distribution does not follow the GNU Health packaging guidelines.

Summary of this patchset

Update Python dependencies

  • vobject: We have removed the version pinning on this library, using the current, latest version.
  • Update gnuhealth-control and gnuhealth-setup
  • Fixed an issue in stock and nursing caused by an old method for determining the default health professional.
 

Installation Notes

You must apply previous patchsets before installing this patchset. If your patchset level is 4.0.2, then just follow the general instructions. You can find the patchsets at the GNU Health main download site on GNU.org (https://ftp.gnu.org/gnu/health/)

In most cases, GNU Health Control center (gnuhealth-control) takes care of applying the patches for you.

Pre-requisites for upgrade to 4.0.3: None

Now follow the general instructions at

 

After applying the patches, make a full update of your GNU Health database as explained in the documentation.

When running "gnuhealth-control" for the first time, you will see the following message: "Please restart now the update with the new control center". Please do so: restart the process and the update will continue.

  • Restart the GNU Health server

List of other issues and tasks related to this patchset

  • bug #62240: trytond-console: ValueError due to old vobject version
  • bug #62235: Traceback on default health professional

 Detailed information about each issue: https://savannah.gnu.org/bugs/?group=health

 Information about each task: https://savannah.gnu.org/task/?group=health

 For detailed information you can read about Patches and Patchsets

Happy and healthy hacking!

02 April, 2022 02:48PM by Luis Falcon

Amin Bandali

LibrePlanet 2022: The Net beyond the Web

Today I gave a talk at LibrePlanet 2022 about the internet and the web, giving a brief account of the web's past, its current state, and ideas for better futures.

In this talk I go over the old web (of the 1990s and early 2000s) and how websites looked back then, fast-forwarding to the present day and the sad current state of the web, and some possibilities on where we could go from here if we would like to have a better net/web in the future for user freedom, privacy, and control.

Here is the abstract for my talk, also available on the LibrePlanet 2022's speakers page:

The modern web is filled to the brim with complexity, no shortage of nonfree software, and malware. Many, many people have written and spoken at length on these issues and their implications and negative effects on users' freedom, privacy, and digital autonomy. With the advent of technologies like WebAssembly, the modern day web browser has effectively become an operating system of its own, along with all the issues and complexities of operating systems and then some. Opening arbitrary websites with a typical web browser amounts to downloading and executing [mostly nonfree] software on your machine. But is all of this complexity really necessary? Is all of this needed to achieve the web's original purpose, an information system for relaying documents (and now media)? What if there was a way to do away with all of these complexities and go back to the basics?

In this talk we will examine the Internet beyond the modern web, some possibilities of what that might look like with concrete examples from protocols like Gopher from time immemorial, and more recent experiments and reimaginations of it in today's world, such as Gemini and Spartan. The talk will give a brief tour of these protocols and their histories, what they have to offer, and why one might want to use them in the 21st century.

Presentation slides: txt | pdf | bib
Speaker notes: txt

The conference recordings have now been processed and published by the Free Software Foundation. You can watch the presentation video below:

02 April, 2022 04:30AM

LibrePlanet 2021: Jami and how it empowers users

I am giving my very first LibrePlanet talk today on March 20th. I will be talking about Jami, the GNU package for universal communication that respects the freedoms and privacy of its users. I'll be giving an introduction to Jami and its architecture, sharing important and exciting development news from the Jami team about rendezvous points, JAMS, the plugin SDK, Swarm chats, and more; and how these features each help empower users to communicate with their loved ones without sacrificing their privacy or freedom.

Here is the abstract for my talk, also available on the LibrePlanet 2021's speakers page:

Jami is free software for universal communication that respects the freedoms and privacy of its users. Jami is an official GNU package with a main goal of providing a framework for virtual communications, along with a series of end-user applications for audio/video calling and conferencing, text messaging, and file transfer.

With the outbreak of the COVID-19 pandemic, working from home has become the norm for many workers around the world. More and more people are using videoconferencing tools to work or communicate with their loved ones. The emergence of these tools has been followed by many questions and scandals concerning the privacy and freedom of users.

This talk gives an introduction to Jami, a free/libre, truly distributed, and peer-to-peer solution, and explains why and how it differs from all other existing solutions and how it empowers users.

I have been an attendee of LibrePlanet for some years, and am very excited to be giving my first ever talk at LibrePlanet 2021 this year! You can watch my talk and other speakers' talks live this weekend, from the LibrePlanet 2021 - Live page. Attendance is gratis (no cost), and you can register at https://u.fsf.org/lp21-sp.

Presentation slides: pdf (with notes) | bib
LaTeX sources: tar.gz | zip
Video: webm

You can watch the presentation video below:

02 April, 2022 04:30AM

March 31, 2022

poke @ Savannah

GNU poke 2.3 released

I am not that happy to announce a new release of GNU poke, version 2.3.

This fixes a little bug in diagnostics that broke the 32-bit
testsuite, making 2.2 the shortest lived poke release to date.

See the file NEWS in the distribution tarball for a list of issues fixed in this release.

The tarball poke-2.3.tar.gz is now available at
https://ftp.gnu.org/gnu/poke/poke-2.3.tar.gz.

  GNU poke (http://www.jemarch.net/poke) is an interactive,
  extensible editor for binary data.  Not limited to editing
  basic entities such as bits and bytes, it provides a full-
  fledged procedural, interactive programming language designed
  to describe data structures and to operate on them.

Happy poking!

--
Jose E. Marchesi
Frankfurt am Main
31 March 2022

31 March, 2022 06:22AM by Jose E. Marchesi

March 30, 2022

parted @ Savannah

parted-3.4.64 released [alpha]

Here are the compressed sources and a GPG detached signature[*]:
  http://alpha.gnu.org/gnu/parted/parted-3.4.64.tar.xz
  http://alpha.gnu.org/gnu/parted/parted-3.4.64.tar.xz.sig

Use a mirror for higher download bandwidth:
  https://www.gnu.org/order/ftp.html

Here are the SHA256 checksums:

parted-3.4.64.tar.xz = 00b686e9cb536a14b5a2831077903fb573f0e7d644d1ba8bbb1b255b767560af
parted-3.4.64.tar.xz.sig = b309bcb6630d004e76452e2dcaf71e0f84d41f19abe7564a39f14d55bc9d9ee9

[*] Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact.  First, be sure to download both the .sig file
and the corresponding tarball.  Then, run a command like this:

  gpg --verify parted-3.4.64.tar.xz.sig

If that command fails because you don't have the required public key,
or that public key has expired, try the following commands to update
or refresh it, and then rerun the 'gpg --verify' command.

  gpg --locate-external-key bcl@redhat.com

  gpg --recv-keys 117E8C168EFE3A7F

  wget -q -O- 'https://savannah.gnu.org/project/release-gpgkeys.php?group=parted&download=1' | gpg --import -

This release was bootstrapped with the following tools:
  Autoconf 2.71
  Automake 1.16.5
  Gettext 0.21
  Gnulib v0.1-5192-gc386ed6eb0
  Gperf 3.1

NEWS

  • Noteworthy changes in release 3.4.64 (2022-03-30) [alpha]
    • New Features

  Add --fix to --script mode to automatically fix problems like the backup
  GPT header not being at the end of a disk.

  Add use of the swap partition flag to msdos disk labeled disks.

  Allow the partition name to be an empty string when set in script mode.

  Add --json command line switch to output the details of the disk as JSON.

  Add support for the Linux home GUID using the linux-home flag.

    • Bug Fixes

  Decrease disk sizes used in tests to make it easier to run the test suite
  on systems with less memory. Largest filesystem is now 267MB (fat32). The
  rest are only 10MB.

  Add aarch64 and mips64 as valid machines for testing.

  Escape colons and backslashes in the machine output. Device path,
  model, and partition name could all include these. They are now
  escaped with a backslash.

  Use libdevmapper's retry remove option when the device is BUSY. This
  prevents libdevmapper from printing confusing output when trying to
  remove a busy partition.

  Keep GUID specific attributes when writing the GPT header. Previously
  they were set to 0.

30 March, 2022 05:01PM by Brian C. Lane

March 29, 2022

poke @ Savannah

[VIDEO] Introduction to GNU poke at LP2022

The good Libreplanet people have just published the recordings of the presentations made at the Libreplanet 2022 conference earlier in March.  (kudos to them.)

One of the talks was an introduction to GNU poke, that may be useful to watch.  The video can be watched here:

https://framatube.org/w/eMXntiH1syB81a2Tk2zhJX

and downloaded from here:

https://media.libreplanet.org/u/libreplanet/tag/libreplanet-2022-video/

PS: The video also shows the very new (and still very experimental) Emacs interface to poke ;)

29 March, 2022 10:02PM by Jose E. Marchesi

GNU poke 2.2 released

I am happy to announce a new release of GNU poke, version 2.2.

This is a bugfix release in the 2.x series.

See the file NEWS in the distribution tarball for a list of issues fixed in this release.

The tarball poke-2.2.tar.gz is now available at
https://ftp.gnu.org/gnu/poke/poke-2.2.tar.gz.

  GNU poke (http://www.jemarch.net/poke) is an interactive,
  extensible editor for binary data.  Not limited to editing
  basic entities such as bits and bytes, it provides a full-
  fledged procedural, interactive programming language designed
  to describe data structures and to operate on them.

Happy poking!

--
Jose E. Marchesi
Frankfurt am Main
29 March 2022

29 March, 2022 07:01PM by Jose E. Marchesi

March 22, 2022

parallel @ Savannah

GNU Parallel 20220322 ('Маріу́поль')

GNU Parallel 20220322 ('Маріу́поль') has been released. It is available for download at: lbry://@GnuParallel:4

Quote of the month:

  My favorite software, ever. Keep the good work.
    -- Federico Alves @federicoalves@twitter

New in this release:

  • --sshlogin user:password@host is now supported by using sshpass.
  • Bug fixes and man page updates.

News about GNU Parallel:

Get the book: GNU Parallel 2018 http://www.lulu.com/shop/ole-tange/gnu-parallel-2018/paperback/product-23558902.html

GNU Parallel - For people who live life in the parallel lane.

If you like GNU Parallel record a video testimonial: Say who you are, what you use GNU Parallel for, how it helps you, and what you like most about it. Include a command that uses GNU Parallel if you feel like it.

About GNU Parallel

GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.

For example you can run this to convert all jpeg files into png and gif files and have a progress bar:

  parallel --bar convert {1} {1.}.{2} ::: *.jpg ::: png gif

Or you can generate big, medium, and small thumbnails of all jpeg files in sub dirs:

  find . -name '*.jpg' |
    parallel convert -geometry {2} {1} {1//}/thumb{2}_{1/} :::: - ::: 50 100 200

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with:

    $ (wget -O - pi.dk/3 || lynx -source pi.dk/3 || curl pi.dk/3/ || \
       fetch -o - http://pi.dk/3 ) > install.sh
    $ sha1sum install.sh | grep 883c667e01eed62f975ad28b6d50e22a
    12345678 883c667e 01eed62f 975ad28b 6d50e22a
    $ md5sum install.sh | grep cc21b4c943fd03e93ae1ae49e28573c0
    cc21b4c9 43fd03e9 3ae1ae49 e28573c0
    $ sha512sum install.sh | grep ec113b49a54e705f86d51e784ebced224fdff3f52
    79945d9d 250b42a4 2067bb00 99da012e c113b49a 54e705f8 6d51e784 ebced224
    fdff3f52 ca588d64 e75f6033 61bd543f d631f592 2f87ceb2 ab034149 6df84a35
    $ bash install.sh

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your command line will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2018): GNU Parallel 2018, March 2018, https://doi.org/10.5281/zenodo.1146014.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/ Identi.ca/Google+/Twitter/Facebook/Linkedin/mailing lists
  • Get the merchandise https://gnuparallel.threadless.com/designs/gnu-parallel
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference

If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)

If GNU Parallel saves you money:

About GNU SQL

GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.

About GNU Niceload

GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.

22 March, 2022 10:29PM by Ole Tange