Planet GNU

Aggregation of development blogs from the GNU Project

January 16, 2020

GNU Guile

GNU Guile 3.0.0 released

We are delighted to announce the release of GNU Guile 3.0.0. This is the first release in the new stable 3.0 release series.

See the release announcement for full details and a download link.

The principal new feature in Guile 3.0 is just-in-time (JIT) native code generation, which speeds up all Guile programs. Compared to 2.2, microbenchmark performance is around twice as good on the whole, though some individual benchmarks are up to 32 times as fast.

Comparison of microbenchmark performance for Guile 3.0 versus 2.2

Notably, for larger use cases, this finally makes "eval" as written in Scheme faster than the "eval" written in C of the Guile 1.8 days.

Other new features in 3.0 include support for interleaved definitions and expressions in lexical contexts, native support for structured exceptions, better support for the R6RS and R7RS Scheme standards, along with a pile of optimizations. See the NEWS file for a complete list of user-visible changes.
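As a small illustration of the first of those features, here is a sketch (not taken from the release notes) of a body that mixes an expression and a following definition, which earlier Guile versions would have rejected:

(define (checked-square x)
  (unless (number? x)
    (error "not a number" x))
  ;; In Guile 3.0 this definition may follow the expression above;
  ;; previously all definitions had to appear first in a body.
  (define squared (* x x))
  squared)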

Guile 3.0.0 and all future releases in the 3.0.x series are parallel-installable with other stable release series (e.g. 2.2). As the first release in a new stable series, we anticipate that Guile 3.0.0 might have build problems on uncommon platforms; bug reports are very welcome. Send any bug reports you might have by email to bug-guile@gnu.org.

Happy hacking with Guile 3!

16 January, 2020 11:30AM by Andy Wingo (guile-devel@gnu.org)

Parabola GNU/Linux-libre

[From Arch] rsync compatibility

Our rsync package was shipped with bundled zlib to provide compatibility with the old-style --compress option up to version 3.1.0. Version 3.1.1 was released on 2014-06-22 and is shipped by all major distributions now.

So we decided to finally drop the bundled library and ship a package with system zlib. This also fixes security issues, both present and future ones. Go and blame those running old versions if you encounter errors with rsync 3.1.3-3.

16 January, 2020 04:44AM by Isaac David

[From Arch] Now using Zstandard instead of xz for package compression

As announced on the Arch-dev mailing list, on Friday, Dec 27 2019, the package compression scheme has changed from xz (.pkg.tar.xz) to zstd (.pkg.tar.zst).

zstd and xz trade blows in their compression ratio. Recompressing all packages to zstd with our options yields a total ~0.8% increase in package size on all of our packages combined, but the decompression time for all packages saw a ~1300% speedup.

We already have hundreds of zstd-compressed packages in our repositories, and as packages get updated more will keep rolling in. No user-facing issues have been found as of yet, so things appear to be working.

No manual intervention is required from end users, assuming that you have read and followed the news post from late last year.

16 January, 2020 04:34AM by Isaac David

January 15, 2020

FSF News

First LibrePlanet 2020 keynote announcement: Internet Archive founder Brewster Kahle

BOSTON, Massachusetts, USA -- Wednesday, January 15, 2020 -- The Free Software Foundation (FSF) today announced Brewster Kahle as its first keynote speaker for LibrePlanet 2020. The annual technology and social justice conference will be held in the Boston area on March 14 and 15, 2020, with the theme "Free the Future." Attendees can register at https://my.fsf.org/civicrm/event/info?id=87&reset=1.

Internet archivist, digital librarian, and Internet Hall of Famer Brewster Kahle has been announced as the first of multiple keynote speakers for the FSF's annual LibrePlanet conference. Kahle is renowned as the founder of the Internet Archive, a nonprofit dedicated to preserving the cultural history of the Web.

With its mission to provide "universal access to all knowledge," the Internet Archive is an inspiration to digital activists from all over the world. Through its "Wayback Machine," the Internet Archive provides historically indexed versions of millions of Web pages. For his work as an Internet activist and digital librarian, Brewster was inducted into the Internet Hall of Fame in 2012.

Commenting on his selection as a LibrePlanet keynote speaker, Kahle said, "Free software is crucial in building a digital ecosystem with many winners. The Internet Archive is completely dependent, as are millions of others, on free software but also free content. I look forward to presenting at LibrePlanet, but mostly to learning from those attending as to where free software is going."

FSF executive director John Sullivan welcomed the announcement of Kahle as a keynote speaker, saying, "The Internet Archive plays an important role in our lives, ensuring that Internet users for years to come will be able to view all of the Web exactly as it was at a specific point in history. Our focus at this year's LibrePlanet is to 'free the future,' and Brewster's work reminds all of us that we cannot have a future without a reliable history. The FSF is honored to have Brewster keynoting the conference."

The FSF will announce further keynote speakers before the start of the conference, and the full LibrePlanet 2020 schedule is expected very soon. Thousands of people have attended LibrePlanet over the years: some in person, and some by tuning into the livestream of the event, which the FSF produces using only free software. LibrePlanet has welcomed visitors from up to fifteen countries each year, and individuals from many others participate online. The conference's video archive contains talks recorded throughout the conference's history, including keynote talks by Edward Snowden and Cory Doctorow.

About LibrePlanet

LibrePlanet is the annual conference of the Free Software Foundation. Over the last decade, LibrePlanet has blossomed from a small gathering of FSF associate members into a vibrant multi-day event that attracts a broad audience of people who are interested in the values of software freedom. LibrePlanet 2020 will be held on March 14th and 15th, 2020. To sign up for announcements about LibrePlanet 2020, visit https://lists.gnu.org/mailman/listinfo/libreplanet-discuss.

Registration for LibrePlanet: "Free the Future" is open. Attendance is free of charge to FSF associate members and students.

For information on how your company can sponsor LibrePlanet or have a table in our exhibit hall, email campaigns@fsf.org.

Keynote speakers at LibrePlanet 2019 included Bdale Garbee, who has contributed to the free software community since 1979, and Tarek Loubani, who runs the Glia Project, which seeks to provide medical supplies to impoverished locations. The closing keynote was given by Micky Metts, a hacker, activist and organizer, as well as a member of Agaric, a worker-owned cooperative of Web developers.

About the Free Software Foundation

The Free Software Foundation, founded in 1985, is dedicated to promoting computer users' right to use, study, copy, modify, and redistribute computer programs. The FSF promotes the development and use of free (as in freedom) software -- particularly the GNU operating system and its GNU/Linux variants -- and free documentation for free software. The FSF also helps to spread awareness of the ethical and political issues of freedom in the use of software, and its Web sites, located at https://www.fsf.org and https://www.gnu.org, are an important source of information about GNU/Linux. Donations to support the FSF's work can be made at https://donate.fsf.org. Its headquarters are in Boston, MA, USA.

MEDIA CONTACT

Greg Farough
Campaigns Manager
Free Software Foundation
+1 (617) 542 5942
campaigns@fsf.org

Photo by Vera de Kok © 2015. Licensed under CC-BY-SA 4.0.

15 January, 2020 10:41PM

sed @ Savannah

sed-4.8 released [stable]

This is to announce sed-4.8, a stable release.

There have been 21 commits by 2 people in the 56 weeks since 4.7.

See the NEWS below for a brief summary.

Thanks to everyone who has contributed!
The following people contributed changes to this release:

  Assaf Gordon (4)
  Jim Meyering (17)

Jim [on behalf of the sed maintainers]
==================================================================

Here is the GNU sed home page:
    http://gnu.org/s/sed/

For a summary of changes and contributors, see:
  http://git.sv.gnu.org/gitweb/?p=sed.git;a=shortlog;h=v4.8
or run this command from a git-cloned sed directory:
  git shortlog v4.7..v4.8

To summarize the 865 gnulib-related changes, run these commands
from a git-cloned sed directory:
  git checkout v4.8
  git submodule summary v4.7

Here are the compressed sources:
  https://ftp.gnu.org/gnu/sed/sed-4.8.tar.gz   (2.2MB)
  https://ftp.gnu.org/gnu/sed/sed-4.8.tar.xz   (1.3MB)

Here are the GPG detached signatures[*]:
  https://ftp.gnu.org/gnu/sed/sed-4.8.tar.gz.sig
  https://ftp.gnu.org/gnu/sed/sed-4.8.tar.xz.sig

Use a mirror for higher download bandwidth:
  https://www.gnu.org/order/ftp.html

[*] Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact.  First, be sure to download both the .sig file
and the corresponding tarball.  Then, run a command like this:

  gpg --verify sed-4.8.tar.gz.sig

If that command fails because you don't have the required public key,
then run this command to import it:

  gpg --keyserver keys.gnupg.net --recv-keys 7FD9FCCB000BEEEE

and rerun the 'gpg --verify' command.

This release was bootstrapped with the following tools:
  Autoconf 2.69.202-d78a
  Automake 1.16a
  Gnulib v0.1-3167-g6b9d15b8b

NEWS

* Noteworthy changes in release 4.8 (2020-01-14) [stable]

** Bug fixes

  "sed -i" now creates temporary files with correct umask (limited to u=rwx).
  Previously sed would incorrectly set umask on temporary files, resulting
  in problems under certain fuse-like file systems.
  [bug introduced in sed 4.2.1]

** Release

  distribute gzip-compressed tarballs once again

** Improvements

  a year's worth of gnulib development, including improved DFA performance

15 January, 2020 04:45AM by Jim Meyering

January 14, 2020

GNU Guix

Reproducible computations with Guix

This post is about reproducible computations, so let's start with a computation. A short, though rather uninteresting, C program is a good starting point. It computes π in three different ways:

#include <math.h>
#include <stdio.h>

int main()
{
    printf( "M_PI                         : %.10lf\n", M_PI);
    printf( "4 * atan(1.)                 : %.10lf\n", 4.*atan(1.));
    printf( "Leibniz' formula (four terms): %.10lf\n", 4.*(1.-1./3.+1./5.-1./7.));
    return 0;
}

This program uses no source of randomness, such as a random number generator, nor any parallelism. It's strictly deterministic. It is reasonable to expect it to produce exactly the same output, on any computer and at any point in time. And yet, many programs whose results should be perfectly reproducible are in fact not. Programs using floating-point arithmetic, such as this short example, are particularly prone to seemingly inexplicable variations.

My goal is to explain why deterministic programs often fail to be reproducible, and what it takes to fix this. The short answer to that question is "use Guix", but even though Guix provides excellent support for reproducibility, you still have to use it correctly, and that requires some understanding of what's going on. The explanation I will give is rather detailed, to the point of discussing parts of the Guile API of Guix. You should be able to follow the reasoning without knowing Guile, though; you will just have to believe me that the scripts I will show do what I claim they do. And in the end, I will provide a ready-to-run Guile script that will let you explore package dependencies right from the shell.

Dependencies: what it takes to run a program

One keyword in discussions of reproducibility is "dependencies". I will revisit the exact meaning of this term later, but to get started, I will define it loosely as "any software package required to run a program". Running the π computation shown above is normally done using something like

gcc pi.c -o pi
./pi

C programmers know that gcc is a C compiler, so that's one obvious dependency for running our little program. But is a C compiler enough? That question is surprisingly difficult to answer in practice. Your computer is loaded with tons of software (otherwise it wouldn't be very useful), and you don't really know what happens behind the scenes when you run gcc or pi.

Containers are good

A major element of reproducibility support in Guix is the possibility to run programs in well-defined environments that contain exactly the software packages you request, and no more. So if your program runs in an environment that contains only a C compiler, you can be sure it has no other dependencies. Let's create such an environment:

guix environment --container --ad-hoc gcc-toolchain

The option --container ensures the best possible isolation from the standard environment that your system installation and user account provide for day-to-day work. This environment contains nothing but a C compiler and a shell (which you need for typing in commands), and has access to no files other than those in the current directory.

If the term "container" makes you think of Docker, note that this is something different. Note also that the option --container requires support from the Linux kernel, which may not be present on your system, or may be disabled by default. Finally, note that by default, a containerized environment has no network access, which may be a problem. If for whatever reason you cannot use --container, use --pure instead. This yields a less isolated environment, but it is usually good enough. For a more detailed discussion of these options, see the Guix manual.

The above command leaves me in a shell inside my environment, where I can now compile and run my little program:

gcc pi.c -o pi
./pi
M_PI                         : 3.1415926536
4 * atan(1.)                 : 3.1415926536
Leibniz' formula (four terms): 2.8952380952

It works! So now I can be sure that my program has a single dependency: the Guix package gcc-toolchain. I'll leave that special-environment shell by typing Ctrl-D, as otherwise the following examples won't work.

Perfectionists who want to exclude the possibility that my program requires a shell could run each step in a separate container:

guix environment --container --ad-hoc gcc-toolchain -- gcc pi.c -o pi
guix environment --container --ad-hoc gcc-toolchain -- ./pi
M_PI                         : 3.1415926536
4 * atan(1.)                 : 3.1415926536
Leibniz' formula (four terms): 2.8952380952

Welcome to dependency hell!

Now that we know that our only dependency is gcc-toolchain, let's look at it in more detail:

guix show gcc-toolchain
name: gcc-toolchain
version: 9.2.0
outputs: out debug static
systems: x86_64-linux i686-linux
dependencies: binutils@2.32 gcc@9.2.0 glibc@2.29 ld-wrapper@0
location: gnu/packages/commencement.scm:2532:4
homepage: https://gcc.gnu.org/
license: GPL 3+
synopsis: Complete GCC tool chain for C/C++ development  
description: This package provides a complete GCC tool chain for C/C++
+ development to be installed in user profiles.  This includes GCC, as well as
+ libc (headers and binaries, plus debugging symbols in the `debug' output),
+ and Binutils.

name: gcc-toolchain
version: 8.3.0
outputs: out debug static
systems: x86_64-linux i686-linux
dependencies: binutils@2.32 gcc@8.3.0 glibc@2.29 ld-wrapper@0
location: gnu/packages/commencement.scm:2532:4
homepage: https://gcc.gnu.org/
license: GPL 3+
synopsis: Complete GCC tool chain for C/C++ development  
description: This package provides a complete GCC tool chain for C/C++
+ development to be installed in user profiles.  This includes GCC, as well as
+ libc (headers and binaries, plus debugging symbols in the `debug' output),
+ and Binutils.

name: gcc-toolchain
version: 7.4.0
outputs: out debug static
systems: x86_64-linux i686-linux
dependencies: binutils@2.32 gcc@7.4.0 glibc@2.29 ld-wrapper@0
location: gnu/packages/commencement.scm:2532:4
homepage: https://gcc.gnu.org/
license: GPL 3+
synopsis: Complete GCC tool chain for C/C++ development  
description: This package provides a complete GCC tool chain for C/C++
+ development to be installed in user profiles.  This includes GCC, as well as
+ libc (headers and binaries, plus debugging symbols in the `debug' output),
+ and Binutils.

name: gcc-toolchain
version: 6.5.0
outputs: out debug static
systems: x86_64-linux i686-linux
dependencies: binutils@2.32 gcc@6.5.0 glibc@2.29 ld-wrapper@0
location: gnu/packages/commencement.scm:2532:4
homepage: https://gcc.gnu.org/
license: GPL 3+
synopsis: Complete GCC tool chain for C/C++ development  
description: This package provides a complete GCC tool chain for C/C++
+ development to be installed in user profiles.  This includes GCC, as well as
+ libc (headers and binaries, plus debugging symbols in the `debug' output),
+ and Binutils.

name: gcc-toolchain
version: 5.5.0
outputs: out debug static
systems: x86_64-linux i686-linux
dependencies: binutils@2.32 gcc@5.5.0 glibc@2.29 ld-wrapper@0
location: gnu/packages/commencement.scm:2532:4
homepage: https://gcc.gnu.org/
license: GPL 3+
synopsis: Complete GCC tool chain for C/C++ development  
description: This package provides a complete GCC tool chain for C/C++
+ development to be installed in user profiles.  This includes GCC, as well as
+ libc (headers and binaries, plus debugging symbols in the `debug' output),
+ and Binutils.

name: gcc-toolchain
version: 4.9.4
outputs: out debug static
systems: x86_64-linux i686-linux
dependencies: binutils@2.32 gcc@4.9.4 glibc@2.29 ld-wrapper@0
location: gnu/packages/commencement.scm:2532:4
homepage: https://gcc.gnu.org/
license: GPL 3+
synopsis: Complete GCC tool chain for C/C++ development  
description: This package provides a complete GCC tool chain for C/C++
+ development to be installed in user profiles.  This includes GCC, as well as
+ libc (headers and binaries, plus debugging symbols in the `debug' output),
+ and Binutils.

name: gcc-toolchain
version: 4.8.5
outputs: out debug static
systems: x86_64-linux i686-linux
dependencies: binutils@2.32 gcc@4.8.5 glibc@2.29 ld-wrapper@0
location: gnu/packages/commencement.scm:2532:4
homepage: https://gcc.gnu.org/
license: GPL 3+
synopsis: Complete GCC tool chain for C/C++ development  
description: This package provides a complete GCC tool chain for C/C++
+ development to be installed in user profiles.  This includes GCC, as well as
+ libc (headers and binaries, plus debugging symbols in the `debug' output),
+ and Binutils.

Guix actually knows about several versions of this toolchain. We didn't ask for a specific one, so what we got is the first one in this list, which is the one with the highest version number. Let's check that this is true:

guix environment --container --ad-hoc gcc-toolchain -- gcc --version
gcc (GCC) 9.2.0
Copyright (C) 2019 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

The output of guix show contains a line about dependencies. These are the dependencies of our dependency, and you may already have guessed that they will have dependencies as well. That's why reproducibility is such a difficult job in practice! The dependencies of gcc-toolchain@9.2.0 are:

guix show gcc-toolchain@9.2.0 | recsel -P dependencies
binutils@2.32 gcc@9.2.0 glibc@2.29 ld-wrapper@0

To dig deeper, we can try feeding these dependencies to guix show, one by one, in order to learn more about them:

guix show binutils@2.32
name: binutils
version: 2.32
outputs: out
systems: x86_64-linux i686-linux
dependencies: 
location: gnu/packages/base.scm:415:2
homepage: https://www.gnu.org/software/binutils/
license: GPL 3+
synopsis: Binary utilities: bfd gas gprof ld  
description: GNU Binutils is a collection of tools for working with binary
+ files.  Perhaps the most notable are "ld", a linker, and "as", an assembler.
+ Other tools include programs to display binary profiling information, list the
+ strings in a binary file, and utilities for working with archives.  The "bfd"
+ library for working with executable and object formats is also included.
guix show gcc@9.2.0
guix show: error: gcc@9.2.0: package not found

This looks a bit surprising. What's happening here is that gcc is defined as a hidden package in Guix. The package is there, but it is hidden from package queries. There is a good reason for this: gcc on its own is rather useless; you need gcc-toolchain to actually use the compiler. But if both gcc and gcc-toolchain showed up in a search, that would be more confusing than helpful for most users. Hiding the package is a way of saying "for experts only".

Let's take this as a sign that it's time to move on to the next level of Guix hacking: Guile scripts. Guile, an implementation of the Scheme language, is Guix' native language, so using Guile scripts, you get access to everything there is to know about Guix and its packages.

A note in passing: the emacs-guix package provides an intermediate level of Guix exploration for Emacs users. It lets you look at hidden packages, for example. But much of what I will show in the following really requires Guile scripts. Another nice tool for package exploration is guix graph, which creates a diagram showing dependency relations between packages. Unfortunately that diagram is legible only for a relatively small number of dependencies, and as we will see later, most packages end up having lots of them.
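As a first taste of such scripts, here is a minimal sketch showing that Guile code can reach the hidden gcc package directly through its variable; I am assuming here that the module (gnu packages gcc) exports it as gcc-9. You can try the snippet by pasting it into guix repl, for example; it prints the package's name, version, and whether it carries the hidden? property.

(use-modules (guix packages)
             (gnu packages gcc))

;; Sketch: access the hidden gcc package through its Guile variable
;; (assumed here to be exported as gcc-9), bypassing the query
;; interface that guix show uses.
(format #t "~a@~a, hidden: ~a\n"
        (package-name gcc-9)
        (package-version gcc-9)
        (hidden-package? gcc-9))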

Anatomy of a Guix package

From the user's point of view, a package is a piece of software with a name and a version number that can be installed using guix install. The packager's point of view is quite a bit different. In fact, what users consider a package is more precisely called the package's output in Guix jargon. The package is a recipe for creating this output.

To see how all these concepts fit together, let's look at an example of a package definition: xmag. I have chosen this package not because I care much about it, but because its definition is short while showcasing all the features I want to explain. You can access it most easily by typing guix edit xmag. Here is what you will see:

(package
  (name "xmag")
  (version "1.0.6")
  (source
   (origin
     (method url-fetch)
     (uri (string-append
           "mirror://xorg/individual/app/" name "-" version ".tar.gz"))
     (sha256
      (base32
       "19bsg5ykal458d52v0rvdx49v54vwxwqg8q36fdcsv9p2j8yri87"))))
  (build-system gnu-build-system)
  (arguments
   `(#:configure-flags
     (list (string-append "--with-appdefaultdir="
                          %output ,%app-defaults-dir))))
  (inputs
   `(("libxaw" ,libxaw)))
  (native-inputs
   `(("pkg-config" ,pkg-config)))
  (home-page "https://www.x.org/wiki/")
  (synopsis "Display or capture a magnified part of a X11 screen")
  (description "Xmag displays and captures a magnified snapshot of a portion
of an X11 screen.")
  (license license:x11))

The package definition starts with the name and version information you expected. Next comes source, which says how to obtain the source code and from where. It also provides a hash that allows checking the integrity of the downloaded files. The next four items, build-system, arguments, inputs, and native-inputs, supply the information required for building the package, which is what creates its outputs. The remaining items are documentation for human consumption, important for other reasons but not for reproducibility, so I won't say any more about them. (See this packaging tutorial if you want to define your own package.)

The example package definition has native-inputs in addition to "plain" inputs. There's a third variant, propagated-inputs, but xmag doesn't have any. The differences between these variants don't matter for my topic, so I will just refer to "inputs" from now on. Another omission I will make is the possibility to define several outputs for a package. This is done for particularly big packages, in order to reduce the footprint of installations, but for the purposes of reproducibility, it's OK to treat all outputs of a package as a single unit.

The following figure illustrates how the various pieces of information from a package are used in the build process (done explicitly by guix build, or implicitly when installing or otherwise using a package).

Diagram of a Guix package

It may help to translate the Guix jargon to the vocabulary of C programming:

| Guix package | C program        |
|--------------+------------------|
| source code  | source code      |
| inputs       | libraries        |
| arguments    | compiler options |
| build system | compiler         |
| output       | executable       |

Building a package can be considered a generalization of compiling a program. We could in fact create a "GCC build system" for Guix that would simply run gcc. However, such a build system would be of little practical use, since most real-life software consists of more than just one C source code file, and requires additional pre- or post-processing steps. The gnu-build-system used in the example is based on tools such as make and autoconf, in addition to gcc.

Package exploration in Guile

Guix uses a Guile record type called <package> to represent packages; it is defined in the module (guix packages). There is also a module (gnu packages), which contains the actual package definitions - be careful not to confuse the two (as I always do). Here is a simple Guile script that shows some package information, much like the guix show command that I used earlier:

(use-modules (guix packages)
             (gnu packages)) 

(define gcc-toolchain
  (specification->package "gcc-toolchain"))

(format #t "Name   : ~a\n" (package-name gcc-toolchain))
(format #t "Version: ~a\n" (package-version gcc-toolchain))
(format #t "Inputs : ~a\n" (package-direct-inputs gcc-toolchain))
Name   : gcc-toolchain
Version: 9.2.0
Inputs : ((gcc #<package gcc@9.2.0 gnu/packages/gcc.scm:524 7fc2d76af160>) (ld-wrapper #<package ld-wrapper@0 gnu/packages/base.scm:505 7fc2d306f580>) (binutils #<package binutils@2.32 gnu/packages/commencement.scm:2187 7fc2d306fdc0>) (libc #<package glibc@2.29 gnu/packages/commencement.scm:2145 7fc2d306fe70>) (libc-debug #<package glibc@2.29 gnu/packages/commencement.scm:2145 7fc2d306fe70> debug) (libc-static #<package glibc@2.29 gnu/packages/commencement.scm:2145 7fc2d306fe70> static))

This script first calls specification->package to look up the package using the same rules as the guix command line interface: pick the latest available version if none is explicitly requested. Then it extracts various information about the package. Note that package-direct-inputs returns the combination of package-inputs, package-native-inputs, and package-propagated-inputs. As I said above, I don't care about the distinction here.
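As an aside, specification->package understands the same name@version syntax as the command line, so a script can pin a version explicitly. Here is a small sketch; the version string is just an example and must be one that your Guix revision knows about:

(use-modules (guix packages)
             (gnu packages))

;; Ask for an explicit version instead of the newest one.
;; "8.3.0" is only an example version.
(define older-toolchain
  (specification->package "gcc-toolchain@8.3.0"))

(format #t "Version: ~a\n" (package-version older-toolchain))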

The inputs are not shown in a particularly nice form, so let's write two Guile functions to improve it:

(use-modules (guix packages)
             (gnu packages)
             (ice-9 match))

(define (package->specification package)
  (format #f "~a@~a"
          (package-name package)
          (package-version package)))

(define (input->specification input)
  (match input
    ((label (? package? package) . _)
     (package->specification package))
    (other-item
     (format #f "~a" other-item))))

(define gcc-toolchain
  (specification->package "gcc-toolchain"))

(format #t "Package: ~a\n"
        (package->specification gcc-toolchain))
(format #t "Inputs : ~a\n"
        (map input->specification (package-direct-inputs gcc-toolchain)))
Package: gcc-toolchain@9.2.0
Inputs : (gcc@9.2.0 ld-wrapper@0 binutils@2.32 glibc@2.29 glibc@2.29 glibc@2.29)

That looks much better. As you can see from the code, a list of inputs is a bit more than a list of packages. It is in fact a list of labelled package outputs. That also explains why we see glibc three times in the input list: glibc defines three distinct outputs, all of which are used in gcc-toolchain. For reproducibility, all we care about is the package references. Later on, we will deal with much longer input lists, so as a final cleanup step, let's show only unique package references from the list of inputs:

(use-modules (guix packages)
             (gnu packages)
             (srfi srfi-1)
             (ice-9 match))

(define (package->specification package)
  (format #f "~a@~a"
          (package-name package)
          (package-version package)))

(define (input->specification input)
  (match input
    ((label (? package? package) . _)
     (package->specification package))
    (other-item
     (format #f "~a" other-item))))

(define (unique-inputs inputs)
  (delete-duplicates
   (map input->specification inputs)))

(define gcc-toolchain
  (specification->package "gcc-toolchain"))

(format #t "Package: ~a\n"
        (package->specification gcc-toolchain))
(format #t "Inputs : ~a\n"
        (unique-inputs (package-direct-inputs gcc-toolchain)))
Package: gcc-toolchain@9.2.0
Inputs : (gcc@9.2.0 ld-wrapper@0 binutils@2.32 glibc@2.29)
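To double-check that the three glibc entries in the earlier, non-deduplicated list really are outputs of one and the same package, you can ask for the package's outputs directly. A one-line sketch:

(use-modules (guix packages)
             (gnu packages))

;; List the outputs defined by glibc; this should print something
;; like (out debug static).
(format #t "glibc outputs: ~a\n"
        (package-outputs (specification->package "glibc")))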

Dependencies

You may have noticed the absence of the term "dependency" from the last two sections. There is a good reason for that: the term is used in somewhat different meanings, and that can create confusion. Guix jargon therefore avoids it.

The figure above shows three kinds of input to the build system: source, inputs, and arguments. These categories reflect the packagers' point of view: source is what the authors of the software supply, inputs are other packages, and arguments is what the packagers themselves add to the build procedure. It is important to understand that from a purely technical point of view, there is no fundamental difference between the three categories. You could, for example, define a package that contains C source code in the build system arguments, but leaves source empty. This would be inconvenient, and confusing for others, so I don't recommend you actually do this. The three categories are important, but for humans, not for computers. In fact, even the build system is not fundamentally distinct from its inputs. You could define a special-purpose build system for one package, and put all the source code in there. At the level of the CPU and the computer's memory, a build process (as in fact any computation) is just an undifferentiated stream of bits being transformed. It is human interpretation that decomposes it into code and data, and in a next step into data, program, and environment. We can go on and divide the environment into operating system, development tools, and application software, for example, but the further we go in decomposing the input to a computation, the more arbitrary it gets.

From this point of view, a software's dependencies consist of everything required to run it in addition to its source code. For a Guix package, the dependencies are thus:

  • its inputs
  • the build system arguments
  • the build system itself
  • Guix (which is a piece of software as well)
  • the GNU/Linux operating system (kernel, file system, etc.).

In the following, I will not mention the last two items any more, because they are a common dependency of all Guix packages, but it's important not to forget about them. A change in Guix or in GNU/Linux can actually make a computation non-reproducible, although in practice that happens very rarely. Moreover, Guix is actually designed to run older versions of itself, as we will see later.

Build systems are (mostly) packages as well

I hope that by now you have a good idea of what a package is: a recipe for building outputs from source and inputs, with inputs being the outputs of other packages. The recipe involves a build system and arguments supplied to it. So... what exactly is a build system? I have introduced it as a generalization of a compiler, which describes its role. But where does a build system come from in Guix?

The ultimate answer is of course the source code. Build systems are pieces of Guile code that are part of Guix. But this Guile code is only a shallow layer orchestrating invocations of other software, such as gcc or make. And that software is defined by packages. So in the end, from a reproducibility point of view, we can replace the "build system" item in our list of dependencies by "a bundle of packages". In other words: more inputs.

Before Guix can build a package, it must gather all the required ingredients, and that includes replacing the build system by the packages it represents. The resulting list of ingredients is called a bag, and we can access it using a Guile script:

(use-modules (guix packages)
             (gnu packages)
             (srfi srfi-1)
             (ice-9 match))

(define (package->specification package)
  (format #f "~a@~a"
          (package-name package)
          (package-version package)))

(define (input->specification input)
  (match input
    ((label (? package? package) . _)
     (package->specification package))
    ((label (? origin? origin))
     (format #f "[source code from ~a]"
             (origin-uri origin)))
    (other-input
     (format #f "~a" other-input))))

(define (unique-inputs inputs)
  (delete-duplicates
   (map input->specification inputs)))

(define hello
  (specification->package "hello"))

(format #t "Package       : ~a\n"
        (package->specification hello))
(format #t "Package inputs: ~a\n"
        (unique-inputs (package-direct-inputs hello)))
(format #t "Build inputs  : ~a\n"
        (unique-inputs
         (bag-direct-inputs
          (package->bag hello))))
Package       : hello@2.10
Package inputs: ()
Build inputs  : ([source code from mirror://gnu/hello/hello-2.10.tar.gz] tar@1.32 gzip@1.10 bzip2@1.0.6 xz@5.2.4 file@5.33 diffutils@3.7 patch@2.7.6 findutils@4.6.0 gawk@5.0.1 sed@4.7 grep@3.3 coreutils@8.31 make@4.2.1 bash-minimal@5.0.7 ld-wrapper@0 binutils@2.32 gcc@7.4.0 glibc@2.29 glibc-utf8-locales@2.29)

I have used a different example, hello, because for gcc-toolchain, there is no difference between package inputs and build inputs (check for yourself if you want!). My new example, hello (a short demo program printing "Hello, world" in the language of the system installation), is interesting because it has no package inputs at all. All the build inputs except for the source code have thus been contributed by the build system.

If you compare this script to the previous one that printed only the package inputs, you will notice two major new features. In input->specification, there is an additional case for the source code reference. And in the last statement, package->bag constructs a bag from the package, before bag-direct-inputs is called to get that bag's input list.

Inputs are outputs

I have mentioned before that one package's inputs are other packages' outputs, but that fact deserves a more in-depth discussion because of its crucial importance for reproducibility. A package is a recipe for building outputs from source and inputs. Since these inputs are outputs, they must have been built as well. Package building is therefore a process consisting of multiple steps. An immediate consequence is that any computation making use of packaged software is a multi-step computation as well.

Remember the short C program computing π from the beginning of this post? Running that program is only the last step in a long series of computations. Before you can run pi, you must compile pi.c. That requires the package gcc-toolchain, which must first be built. And before it can be built, its inputs must be built. And so on. If you want the output of pi to be reproducible, the whole chain of computations must be reproducible, because each step can have an impact on the results produced by pi.

So... where does this chain start? Few people write machine code these days, so almost all software requires some compiler or interpreter. And that means that for every package, there are other packages that must be built first. The question of how to get this chain started is known as the bootstrapping problem. A rough summary of the solution is that the chain starts on somebody else's computer, which creates a bootstrap seed, an ideally small package that is downloaded in precompiled form. See this post by Jan Nieuwenhuizen for details of this procedure. The bootstrap seed is not the real start of the chain, but as long as we can retrieve an identical copy at a later time, that's good enough for reproducibility. In fact, the reason for requiring the bootstrap seed to be small is not reproducibility, but inspectability: it should be possible to audit the seed for bugs and malware, even in the absence of source code.

Reaching closure

Now we are finally ready for the ultimate step in dependency analysis: identifying all packages on which a computation depends, right up to the bootstrap seed. The starting point is the list of direct inputs of the bag derived from a package, which we looked at in the previous script. For each package in that list, we must apply this same procedure, recursively. We don't have to write this code ourselves, because the function package-closure in Guix does that job. These closures have nothing to do with closures in Lisp, and even less with the Clojure programming language. They are a case of what mathematicians call transitive closures: starting with a set of packages, you extend the set repeatedly by adding the inputs of the packages that are already in the set, until there is nothing more to add. If you have a basic knowledge of Scheme, you should now be able to understand the implementation of this function (a simplified sketch of the idea follows the example below). Let's add it to our dependency analysis code:

(use-modules (guix packages)
             (gnu packages)
             (srfi srfi-1)
             (ice-9 match))

(define (package->specification package)
  (format #f "~a@~a"
          (package-name package)
          (package-version package)))

(define (input->specification input)
  (match input
    ((label (? package? package) . _)
     (package->specification package))
    ((label (? origin? origin))
     (format #f "[source code from ~a]"
             (origin-uri origin)))
    (other-input
     (format #f "~a" other-input))))

(define (unique-inputs inputs)
  (delete-duplicates
   (map input->specification inputs)))

(define (length-and-list lists)
  (list (length lists) lists))

(define hello
  (specification->package "hello"))

(format #t "Package        : ~a\n"
        (package->specification hello))
(format #t "Package inputs : ~a\n"
        (length-and-list (unique-inputs (package-direct-inputs hello))))
(format #t "Build inputs   : ~a\n"
        (length-and-list
         (unique-inputs
          (bag-direct-inputs
           (package->bag hello)))))
(format #t "Package closure: ~a\n"
        (length-and-list
         (delete-duplicates
          (map package->specification
               (package-closure (list hello))))))
Package        : hello@2.10
Package inputs : (0 ())
Build inputs   : (20 ([source code from mirror://gnu/hello/hello-2.10.tar.gz] tar@1.32 gzip@1.10 bzip2@1.0.6 xz@5.2.4 file@5.33 diffutils@3.7 patch@2.7.6 findutils@4.6.0 gawk@5.0.1 sed@4.7 grep@3.3 coreutils@8.31 make@4.2.1 bash-minimal@5.0.7 ld-wrapper@0 binutils@2.32 gcc@7.4.0 glibc@2.29 glibc-utf8-locales@2.29))
Package closure: (84 (m4@1.4.18 libatomic-ops@7.6.10 gmp@6.1.2 libgc@7.6.12 libltdl@2.4.6 libunistring@0.9.10 libffi@3.2.1 pkg-config@0.29.2 guile@2.2.6 libsigsegv@2.12 lzip@1.21 ed@1.15 perl@5.30.0 guile-bootstrap@2.0 zlib@1.2.11 xz@5.2.4 ncurses@6.1-20190609 libxml2@2.9.9 attr@2.4.48 gettext-minimal@0.20.1 gcc-cross-boot0-wrapped@7.4.0 libstdc++@7.4.0 ld-wrapper-boot3@0 bootstrap-binaries@0 ld-wrapper-boot0@0 flex@2.6.4 glibc-intermediate@2.29 libstdc++-boot0@4.9.4 expat@2.2.7 gcc-mesboot1-wrapper@4.7.4 mesboot-headers@0.19 gcc-core-mesboot@2.95.3 bootstrap-mes@0 bootstrap-mescc-tools@0.5.2 tcc-boot0@0.9.26-6.c004e9a mes-boot@0.19 tcc-boot@0.9.27 make-mesboot0@3.80 gcc-mesboot0@2.95.3 binutils-mesboot0@2.20.1a make-mesboot@3.82 diffutils-mesboot@2.7 gcc-mesboot1@4.7.4 glibc-headers-mesboot@2.16.0 glibc-mesboot0@2.2.5 binutils-mesboot@2.20.1a linux-libre-headers@4.19.56 linux-libre-headers-bootstrap@0 gcc-mesboot@4.9.4 glibc-mesboot@2.16.0 gcc-cross-boot0@7.4.0 bash-static@5.0.7 gettext-boot0@0.19.8.1 python-minimal@3.5.7 perl-boot0@5.30.0 texinfo@6.6 bison@3.4.1 gzip@1.10 libcap@2.27 acl@2.2.53 glibc-utf8-locales@2.29 gcc-mesboot-wrapper@4.9.4 file-boot0@5.33 findutils-boot0@4.6.0 diffutils-boot0@3.7 make-boot0@4.2.1 binutils-cross-boot0@2.32 glibc@2.29 gcc@7.4.0 binutils@2.32 ld-wrapper@0 bash-minimal@5.0.7 make@4.2.1 coreutils@8.31 grep@3.3 sed@4.7 gawk@5.0.1 findutils@4.6.0 patch@2.7.6 diffutils@3.7 file@5.33 bzip2@1.0.6 tar@1.32 hello@2.10))

That's 84 packages, just for printing "Hello, world!". As promised, it includes the bootstrap seed, called bootstrap-binaries. It may be more surprising to see Perl and Python in the dependency list of what is a pure C program. The explanation is that the build process of gcc and glibc contains Perl and Python code. Considering that both Perl and Python are written in C and use glibc, this hints at why bootstrapping is a hard problem!
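For intuition about what package-closure does, here is a simplified sketch of a transitive closure over bag inputs. It is not Guix's actual implementation, which handles the bookkeeping more carefully and efficiently, but it follows the same idea and should find essentially the same set of packages for hello:

(use-modules (guix packages)
             (gnu packages)
             (srfi srfi-1)
             (ice-9 match))

;; Simplified sketch of a transitive closure over bag inputs; Guix's
;; real package-closure does the same job more carefully and faster.
(define (naive-package-closure packages)
  (let loop ((todo packages)
             (seen '()))
    (cond ((null? todo)
           seen)
          ((memq (car todo) seen)
           (loop (cdr todo) seen))
          (else
           (let* ((package (car todo))
                  (inputs  (filter-map (match-lambda
                                         ((label (? package? input) . _) input)
                                         (_ #f))
                                       (bag-direct-inputs
                                        (package->bag package)))))
             (loop (append inputs (cdr todo))
                   (cons package seen)))))))

(format #t "~a packages in the closure of hello\n"
        (length (naive-package-closure
                 (list (specification->package "hello")))))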

Get ready for your own analyses

As promised, here is a Guile script that you can download and run from the command line to do dependency analyses much like the ones I have shown. Just give it the packages whose combined list of dependencies you want to analyze. For example:

./show-dependencies.scm hello
Packages: 1
  hello@2.10
Package inputs: 0 packages
 
Build inputs: 20 packages
  [source code from mirror://gnu/hello/hello-2.10.tar.gz] bash-minimal@5.0.7 binutils@2.32 bzip2@1.0.6 coreutils@8.31 diffutils@3.7 file@5.33 findutils@4.6.0 gawk@5.0.1 gcc@7.4.0 glibc-utf8-locales@2.29 glibc@2.29 grep@3.3 gzip@1.10 ld-wrapper@0 make@4.2.1 patch@2.7.6 sed@4.7 tar@1.32 xz@5.2.4
Package closure: 84 packages
  acl@2.2.53 attr@2.4.48 bash-minimal@5.0.7 bash-static@5.0.7 binutils-cross-boot0@2.32 binutils-mesboot0@2.20.1a binutils-mesboot@2.20.1a binutils@2.32 bison@3.4.1 bootstrap-binaries@0 bootstrap-mes@0 bootstrap-mescc-tools@0.5.2 bzip2@1.0.6 coreutils@8.31 diffutils-boot0@3.7 diffutils-mesboot@2.7 diffutils@3.7 ed@1.15 expat@2.2.7 file-boot0@5.33 file@5.33 findutils-boot0@4.6.0 findutils@4.6.0 flex@2.6.4 gawk@5.0.1 gcc-core-mesboot@2.95.3 gcc-cross-boot0-wrapped@7.4.0 gcc-cross-boot0@7.4.0 gcc-mesboot-wrapper@4.9.4 gcc-mesboot0@2.95.3 gcc-mesboot1-wrapper@4.7.4 gcc-mesboot1@4.7.4 gcc-mesboot@4.9.4 gcc@7.4.0 gettext-boot0@0.19.8.1 gettext-minimal@0.20.1 glibc-headers-mesboot@2.16.0 glibc-intermediate@2.29 glibc-mesboot0@2.2.5 glibc-mesboot@2.16.0 glibc-utf8-locales@2.29 glibc@2.29 gmp@6.1.2 grep@3.3 guile-bootstrap@2.0 guile@2.2.6 gzip@1.10 hello@2.10 ld-wrapper-boot0@0 ld-wrapper-boot3@0 ld-wrapper@0 libatomic-ops@7.6.10 libcap@2.27 libffi@3.2.1 libgc@7.6.12 libltdl@2.4.6 libsigsegv@2.12 libstdc++-boot0@4.9.4 libstdc++@7.4.0 libunistring@0.9.10 libxml2@2.9.9 linux-libre-headers-bootstrap@0 linux-libre-headers@4.19.56 lzip@1.21 m4@1.4.18 make-boot0@4.2.1 make-mesboot0@3.80 make-mesboot@3.82 make@4.2.1 mes-boot@0.19 mesboot-headers@0.19 ncurses@6.1-20190609 patch@2.7.6 perl-boot0@5.30.0 perl@5.30.0 pkg-config@0.29.2 python-minimal@3.5.7 sed@4.7 tar@1.32 tcc-boot0@0.9.26-6.c004e9a tcc-boot@0.9.27 texinfo@6.6 xz@5.2.4 zlib@1.2.11

You can now easily experiment yourself, even if you are not at ease with Guile. For example, suppose you have a small Python script that plots some data using matplotlib. What are its dependencies? First you should check that it runs in a minimal environment:

guix environment --container --ad-hoc python python-matplotlib -- python my-script.py

Next, find its dependencies:

./show-dependencies.scm python python-matplotlib

I won't show the output here because it is rather long - the package closure contains 499 packages!

OK, but... what are the real dependencies?

I have explained dependencies along these lines in a few seminars. There's one question that someone in the audience is bound to ask: What do the results of a computation really depend on? The output of hello is "Hello, world!", no matter which version of gcc I use to compile it, and no matter which version of python was used in building glibc. The package closure is a worst-case estimate: it contains everything that can potentially influence the results, though most of it doesn't in practice. Unfortunately, there is no way to identify the dependencies that matter automatically, because answering that question in general (i.e. for arbitrary software) is equivalent to solving the halting problem.

Most package managers, such as Debian's apt or the multi-platform conda, take a different point of view. They define the dependencies of a program as all packages that need to be loaded into memory in order to run it. They thus exclude the software that is required to build the program and its run-time dependencies but that can be discarded afterwards. Whereas Guix' definition errs on the safe side (its dependency list is often longer than necessary but never too short), the run-time-only definition is both too vast and too restrictive: many run-time dependencies don't have an impact on most programs' results, but some build-time dependencies do.

One important case where build-time dependencies matter is floating-point computations. For historical reasons, they are surrounded by an aura of vagueness and imprecision, which goes back to their early days, when many details were poorly understood and implementations varied a lot. Today, all computers used for scientific computing respect the IEEE 754 standard that precisely defines how floating-point numbers are represented in memory and what the result of each arithmetic operation must be. Floating-point arithmetic is thus perfectly deterministic and even perfectly portable between machines, if expressed in terms of the operations defined by the standard. However, high-level languages such as C or Fortran do not allow programmers to do that. Their designers assume (probably correctly) that most programmers do not want to deal with the intricate details of rounding. Therefore they provide only a simplified interface to the arithmetic operations of IEEE 754, which incidentally also leaves compiler writers more liberty for code optimization. The net result is that the complete specification of a program's results is its source code plus the compiler and the compilation options. You can thus get reproducible floating-point results if you include all compilation steps in the perimeter of your computation, at least for code running on a single processor. Parallel computing is a different story: it involves voluntarily giving up reproducibility in exchange for speed. Reproducibility then becomes a best-effort approach of limiting the collateral damage done by optimization through the clever design of algorithms.
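To see concretely why the order of operations matters even though each individual IEEE 754 operation is exactly specified, here is a tiny Guile illustration of the non-associativity of floating-point addition; a compiler or a parallel reduction that regroups the sums can therefore legitimately change the last digits of a result:

;; Floating-point addition is deterministic but not associative:
;; regrouping the same three terms gives two different results
;; (0.6000000000000001 versus 0.6 in IEEE 754 double precision).
(display (+ (+ 0.1 0.2) 0.3)) (newline)
(display (+ 0.1 (+ 0.2 0.3))) (newline)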

Reproducing a reproducible computation

So far, I have explained the theory behind reproducible computations. The take-home message is that to be sure to get exactly the same results in the future, you have to use the exact same versions of all packages in the package closure of your immediate dependencies. I have also shown you how you can access that package closure. There is one missing piece: how do you actually run your program in the future, using the same environment?

The good news is that doing this is a lot simpler than understanding my lengthy explanations (which is why I leave this for the end!). The complex dependency graphs that I have analyzed up to here are encoded in the Guix source code, so all you need to re-create your environment is the exact same version of Guix! You get that version using

guix describe
Generation 15 Jan 06 2020 13:30:45    (current)
  guix 769b96b
    repository URL: https://git.savannah.gnu.org/git/guix.git
    branch: master
    commit: 769b96b62e8c09b078f73adc09fb860505920f8f

The critical information here is the unpleasant-looking string of hexadecimal digits after "commit". This is all it takes to uniquely identify a version of Guix. And to re-use it in the future, all you need is Guix' time machine:

guix time-machine --commit=769b96b62e8c09b078f73adc09fb860505920f8f -- environment --ad-hoc gcc-toolchain
Updating channel 'guix' from Git repository at 'https://git.savannah.gnu.org/git/guix.git'...
gcc pi.c -o pi
./pi
M_PI                         : 3.1415926536
4 * atan(1.)                 : 3.1415926536
Leibniz' formula (four terms): 2.8952380952

The time machine actually downloads the specified version of Guix and passes it the rest of the command line. You are running the same code again. Even bugs in Guix will be reproduced faithfully! As before, guix environment leaves us in a special-environment shell which needs to be terminated by Ctrl-D.

For many practical use cases, this technique is sufficient. But there are two variants you should know about for more complicated situations:

  • If you need an environment with many packages, you should use a manifest rather than list the packages on the command line (a minimal manifest sketch is shown after this list). See the manual for details.

  • If you need packages from additional channels, i.e. packages that are not part of the official Guix distribution, you should store a complete channel description in a file using

guix describe -f channels > guix-version-for-reproduction.txt

and feed that file to the time machine:

guix time-machine --channels=guix-version-for-reproduction.txt -- environment --ad-hoc gcc-toolchain
Updating channel 'guix' from Git repository at 'https://git.savannah.gnu.org/git/guix.git'...
gcc pi.c -o pi
./pi
M_PI                         : 3.1415926536
4 * atan(1.)                 : 3.1415926536
Leibniz' formula (four terms): 2.8952380952
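Coming back to the first variant above, a manifest is itself a small Guile file. Here is a minimal sketch; the package names are only examples:

;; manifest.scm -- a minimal manifest sketch; the package names are examples.
(specifications->manifest
 '("gcc-toolchain"
   "python"
   "python-matplotlib"))

You would then pass this file to guix environment with the --manifest (-m) option instead of listing packages after --ad-hoc, including in combination with guix time-machine as shown above.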

Last, if your colleagues do not use Guix yet, you can pack your reproducible software for use on other systems: as a tarball, or as a Docker or Singularity container image. For example:

guix pack            \
     -f docker       \
     -C none         \
     -S /bin=bin     \
     -S /lib=lib     \
     -S /share=share \
     -S /etc=etc     \
     gcc-toolchain
/gnu/store/iqn9yyvi8im18g7y9f064lw9s9knxp0w-docker-pack.tar

will produce a Docker container image, and with the knowledge of the Guix commit (or channel specification), you will be able in the future to reproduce this container bit-to-bit using guix time-machine.

And now... congratulations for having survived to the end of this long journey! May all your computations be reproducible, with Guix.

About GNU Guix

GNU Guix is a transactional package manager and an advanced distribution of the GNU system that respects user freedom. Guix can be used on top of any system running the kernel Linux, or it can be used as a standalone operating system distribution for i686, x86_64, ARMv7, and AArch64 machines.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. When used as a standalone GNU/Linux distribution, Guix offers a declarative, stateless approach to operating system configuration management. Guix is highly customizable and hackable through Guile programming interfaces and extensions to the Scheme language.

14 January, 2020 04:30PM by Konrad Hinsen

January 13, 2020

Join GNU Guix through Outreachy

We are happy to announce that for the fourth time GNU Guix offers a three-month internship through Outreachy, the inclusion program for groups traditionally underrepresented in free software and tech. We currently propose four subjects to work on:

  1. Implement netlink bindings for Guile.
  2. Improve internationalization support for the Guix Data Service.
  3. Add accessibility support for the Guix System Installer.
  4. Add monitoring support for the Guix daemon and Cuirass.

The initial applications for this round open on Jan. 20, 2020 at 4PM UTC and the initial application deadline is on Feb. 25, 2020 at 4PM UTC.

The final project list is announced on Feb. 25, 2020.

For further information, check out the timeline, information about the application process, and the eligibility rules.

If you’d like to contribute to computing freedom, Scheme, functional programming, or operating system development, now is a good time to join us. Let’s get in touch on the mailing lists and on the #guix channel on the Freenode IRC network, or come chat with us at FOSDEM!

Last year we had the pleasure to welcome Laura Lazzati as an Outreachy intern working on documentation video creation, which led to the videos you can now see on the home page.

About GNU Guix

GNU Guix is a transactional package manager and an advanced distribution of the GNU system that respects user freedom. Guix can be used on top of any system running the kernel Linux, or it can be used as a standalone operating system distribution for i686, x86_64, ARMv7, and AArch64 machines.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. When used as a standalone GNU/Linux distribution, Guix offers a declarative, stateless approach to operating system configuration management. Guix is highly customizable and hackable through Guile programming interfaces and extensions to the Scheme language.

13 January, 2020 02:30PM by Gábor Boskovits

libredwg @ Savannah

libredwg-0.10.1 released

Major bugfixes:
  * Fixed dwg2SVG htmlescape overflows and off-by-ones (#182)
  * Removed direct usages of fprintf and stderr in the lib. All can be
    redefined now. (#181)

Minor bugfixes:
  * Fuzzing fixes for dwg2SVG, dwgread. (#182)
  * Fixed eed.raw leaks

Here are the compressed sources:

  http://ftp.gnu.org/gnu/libredwg/libredwg-0.10.1.tar.gz   (10.9MB)
  http://ftp.gnu.org/gnu/libredwg/libredwg-0.10.1.tar.xz   (4.5MB)

Here are the GPG detached signatures[*]:

  http://ftp.gnu.org/gnu/libredwg/libredwg-0.10.1.tar.gz.sig
  http://ftp.gnu.org/gnu/libredwg/libredwg-0.10.1.tar.xz.sig

Use a mirror for higher download bandwidth:

  https://www.gnu.org/order/ftp.html

Here are more binaries:

  https://github.com/LibreDWG/libredwg/releases/tag/0.10.1

Here are the SHA256 checksums:

6539a9a762f74e937f08000e2bb3d3d4dddd326b85b5361f7532237b68ff0ae3  libredwg-0.10.1.tar.gz
0fa603d5f836dfceb8ae4aac28d1e836c09dce3936ab98703bb2341126678ec3  libredwg-0.10.1.tar.xz

[*] Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact.  First, be sure to download both the .sig file
and the corresponding tarball.  Then, run a command like this:

  gpg --verify libredwg-0.10.1.tar.gz.sig

If that command fails because you don't have the required public key,
then run this command to import it:

  gpg --keyserver keys.gnupg.net --recv-keys B4F63339E65D6414

and rerun the 'gpg --verify' command.

13 January, 2020 09:40AM by Reini Urban

GNU Guile

GNU Guile 2.9.9 (beta) released

We are delighted to announce the release of GNU Guile 2.9.9. This is the ninth and final pre-release of what will eventually become the 3.0 release series.

See the release announcement for full details and a download link.

This release fixes a number of bugs, omissions, and regressions. Notably, it fixes the build on 32-bit systems.

We plan to release a final Guile 3.0.0 on 17 January: this Friday! Please do test this prerelease; build reports, good or bad, are very welcome; send them to guile-devel@gnu.org. If you know you found a bug, please do send a note to bug-guile@gnu.org. Happy hacking!

13 January, 2020 08:44AM by Andy Wingo (guile-devel@gnu.org)

Applied Pokology

First Poke-Conf at Mont-Soleil - A report

Poking at Mont-Soleil

This last weekend we had the first gathering of poke developers, as part of the GNU Hackers Meeting at Mont-Soleil, in Switzerland. I can say we had a lot of fun, and it was quite a productive meeting too: many patches were written, and many technical aspects designed and clarified.

13 January, 2020 12:00AM

January 12, 2020

GNUnet News

GNUnet 0.12.2

GNUnet 0.12.2 released

We are pleased to announce the release of GNUnet 0.12.2.
This is a new bugfix release. In terms of usability, users should be aware that there are still a large number of known open issues in particular with respect to ease of use, but also some critical privacy issues especially for mobile users. Also, the nascent network is tiny and thus unlikely to provide good anonymity or extensive amounts of interesting information. As a result, the 0.12.2 release is still only suitable for early adopters with some reasonable pain tolerance.

Download links

The GPG key used to sign is: 3D11063C10F98D14BD24D1470B0998EF86F59B6A

Note that due to mirror synchronization, not all links might be functional early after the release. For direct access try http://ftp.gnu.org/gnu/gnunet/

Noteworthy changes in 0.12.2 (since 0.12.1)

  • GNS: Resolver clients are now able to specify a recursion depth limit.
  • TRANSPORT/TNG: The transport rewrite (aka TNG) is underway and various transport components have been worked on, including TCP, UDP and UDS communicators.
  • RECLAIM: Added preliminary support for third party attested credentials.
  • UTIL: The cryptographic changes introduced in 0.12.0 broke ECDSA ECDH and consequently other components. The offending ECDSA key normalization was dropped.

Known Issues

  • There are known major design issues in the TRANSPORT, ATS and CORE subsystems which will need to be addressed in the future to achieve acceptable usability, performance and security.
  • There are known moderate implementation limitations in CADET that negatively impact performance.
  • There are known moderate design issues in FS that also impact usability and performance.
  • There are minor implementation limitations in SET that create unnecessary attack surface for availability.
  • The RPS subsystem remains experimental.
  • Some high-level tests in the test-suite fail non-deterministically due to the low-level TRANSPORT issues.

In addition to this list, you may also want to consult our bug tracker at bugs.gnunet.org which lists about 190 more specific issues.

Thanks

This release was the work of many people. The following people contributed code and were thus easily identified: Christian Grothoff, Florian Dold, Christian Ulrich, dvn, lynx and Martin Schanzenbach.

12 January, 2020 11:00PM

January 11, 2020

GNS Specification Milestone 2/4

GNS Technical Specification Milestone 2/4

We are happy to announce the completion of the second milestone for the GNS Specification. The second milestone consists of documenting the GNS name resolution process and record handling.
With the release of GNUnet 0.12.x, the currently specified protocol is implemented according to the specification. As before, the draft specification LSD001 can be found at:

As already announced on the mailing list, the Go implementation of GNS is also proceeding as planned and implements the specification.

The next and third milestone will cover namespace revocation.

This work is generously funded by NLnet as part of their Search and discovery fund.

11 January, 2020 11:00PM

January 10, 2020

GNU Guix

Meet Guix at FOSDEM

As usual, GNU Guix will be present at FOSDEM on February 1st and 2nd. This year, we’re happy to say that there will be quite a few talks about Guix and related projects!

The Minimalistic, Experimental and Emerging Languages devroom will also feature talks about Racket, Lua, Crystal, Nim, and Pharo that you should not miss under any circumstances!

Guix Days logo.

For the third time, we are also organizing the Guix Days as a FOSDEM fringe event, a two-day Guix workshop where contributors and enthusiasts will meet. The workshop takes place on Thursday Jan. 30th and Friday Jan. 31st at the Institute of Cultural Affairs (ICAB) in Brussels.

Again this year there will be few talks; instead, the event will consist primarily of “unconference-style” sessions focused on specific hot topics about Guix, the Shepherd, continuous integration, and related tools and workflows.

Attendance to the workshop is free and open to everyone, though you are invited to register (there are only a few seats left!). Check out the workshop’s wiki page for registration and practical info. Hope to see you in Brussels!

About GNU Guix

GNU Guix is a transactional package manager and an advanced distribution of the GNU system that respects user freedom. Guix can be used on top of any system running the kernel Linux, or it can be used as a standalone operating system distribution for i686, x86_64, ARMv7, and AArch64 machines.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. When used as a standalone GNU/Linux distribution, Guix offers a declarative, stateless approach to operating system configuration management. Guix is highly customizable and hackable through Guile programming interfaces and extensions to the Scheme language.

10 January, 2020 02:30PM by Manolis Ragkousis

January 08, 2020

FSF Blogs

Bring the planet to LibrePlanet by sponsoring an attendee

LibrePlanet 2020: Free the Future is only ten weeks away! On March 14 and 15, we will welcome free software enthusiasts and experts to Boston for the Free Software Foundation's (FSF) annual conference on technology and social justice.

We're hard at work creating an event with engaging talks from speakers from all over the world, and without spoiling any future announcements, we're very excited about the program we have so far. It promises to be a year filled with talks about interesting and successful projects. Anticipated talks will explore the fascinating parallels between free software and other social movements, dig into community-related subjects, and as always, examine the latest issues in licensing, security, education, and government adoption of free software with experts from these fields.

The FSF is proud of the fact that the LibrePlanet audience and speakers come from a diverse range of backgrounds, countries, and cultures. We believe that anyone who wants to attend or speak at the conference should not be held back by financial burdens, so if you have a few dollars to spare, why not make a donation in support of the LibrePlanet Scholarship Fund? You'll be supporting a robust, diverse free software community by helping to reduce the financial barrier for those who need the help.

Those who are awarded travel scholarships bring unique ideas to the conference, and commit to sharing what they learn at LibrePlanet with their local community.

We look forward to sharing this year's LibrePlanet conference program soon, and welcoming members of the free software community from all corners of the world. Registration for the event is open. If you're not already a member of the FSF, we have multiple good reasons for you to join today:

  • FSF associate members can attend LibrePlanet free of charge! (We nevertheless ask you to register so we'll know how many people to expect.)

  • As part of our current fundraising drive, we're offering exclusive membership gifts to all new members through January 17th!

  • FSF associate members get a 5% discount at Technoethical until January 17th.

Volunteers also attend gratis, and get an exclusive LibrePlanet 2020 T-shirt.

On top of all the free software work we fund and do year-round, with your financial support we can invite speakers who can enlighten us with their knowledge and experience. Your donations will also help free software enthusiasts attend who would otherwise not have the means to do so. Your contribution, even if it's only a couple of dollars, can be the difference between someone attending and not.

08 January, 2020 09:10PM

libredwg @ Savannah

libredwg-0.10 released

Some minor API changes and bugfixes, mostly stabilization.

API breaking changes:
  * added a new int *isnewp argument to all dynapi utf8text getters,
    indicating whether the returned string is freshly malloced or not.
  * removed the UNKNOWN supertype, there are only UNKNOWN_OBJ and UNKNOWN_ENT
    left, with common_entity_data.
  * renamed BLOCK_HEADER.preview_data to preview, preview_data_size to preview_size
  * renamed SHAPE.shape_no to style_id
  * renamed CLASS.wasazombie to is_zombie

Major bugfixes:
  * Improved building the perl5 binding, proper dependencies.
    Set proper -I and -L paths, create LibreDWG.c not swig_perl.c
  * Harmonized INDXFB with INDXF, removed extra src/in_dxfb.c (#134).
    Slimmed the .so size by 260Kb. Still untested though.
  * Fixed encoding of added r2000 AUXHEADER address (broken since 0.9)
  * Fixed EED encoding from dwgrewrite (a dxf2dwg regression from 0.9) (#180)

Minor bugfixes:
  * Many fuzzing and static analyzer fixes for dwg2dxf, dxf2dwg, dwgrewrite,
    including a stack-overflow on outdxf cquote. (#172-174, #178, #179).
    dwgrewrite and indxf are pretty robust now, but still highly experimental,
    as many dxf2dwg import and DWG validity tests are missing.
    indxf still has many asserts on many structural DXF errors.
  * Protect indxf from many NULL ptr, overflows and truncation.
  * Fixed most indxf and encode leaks. (#151)
  * More section decoders protections from invalid (fuzzed) values.
  * Stabilized the ASAN leak tests for make check.
  * Fix MULTILEADER.ctx.lline handles <r2010
  * Fix indxf color.alpha; at DXF 440
  * Fixed most important make scan-build warnings, the rest are mostly bogus.

Other newsworthy changes:
  * Added LIBREDWG_VERSION et al to include/dwg.h
  * Added support for AcDb3dSolid history_id (r2007+)
  * Improved the indxf speed in new_object. Do a proper linear search, and
    break on first found type.
  * Rename the ./dxf helper to ./dwg, and added a ./dxf test helper.
  * dxf2dwg got a new experimental --force-free option to check for leaks and
    UAF or double-frees.
  * Allow -o /dev/null sinks for dxf2dwg and dwg2dxf, for faster fuzzing.
  * Harmonized *.spec formatting and adjusted gen-dynapi.pl
  * Harmonized out_dxfb with out_dxf, e.g. the new mspace improvements (#173).

Here are the compressed sources:
  http://ftp.gnu.org/gnu/libredwg/libredwg-0.10.tar.gz   (10.9MB)
  http://ftp.gnu.org/gnu/libredwg/libredwg-0.10.tar.xz   (4.5MB)

Here are the GPG detached signatures[*]:
  http://ftp.gnu.org/gnu/libredwg/libredwg-0.10.tar.gz.sig
  http://ftp.gnu.org/gnu/libredwg/libredwg-0.10.tar.xz.sig

Use a mirror for higher download bandwidth:
  https://www.gnu.org/order/ftp.html

Here are more binaries:
  https://github.com/LibreDWG/libredwg/releases/tag/0.10

Here are the SHA256 checksums:

e890b4d3ab8071c78c4eb36e6f7ecd30e7f54630b0e2f051b3fe51395395d5f7  libredwg-0.10.tar.gz
8c37c4ef985e4135e3d2020c502c887b6115cdbbab2148b2e730875d5659cd66  libredwg-0.10.tar.xz

[*] Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact.  First, be sure to download both the .sig file
and the corresponding tarball.  Then, run a command like this:

  gpg --verify libredwg-0.10.tar.gz.sig

If that command fails because you don't have the required public key,
then run this command to import it:

  gpg --keyserver keys.gnupg.net --recv-keys B4F63339E65D6414

and rerun the 'gpg --verify' command.

08 January, 2020 04:52PM by Reini Urban

January 06, 2020

FSF Blogs

Extending our offer for exclusive membership gifts through January

In the final weeks of 2019, the Free Software Foundation (FSF) welcomed nearly 300 new associate members. That is a strong achievement, but we need to boost our numbers further in order to continue our work to educate others about free software and defend copyleft.

Every day, millions of new people globally are gaining access to software, and are integrating it into their lives. We need to continue to spread the message of software freedom far and wide to reach these newcomers, and the millions of longtime software users who are unaware of how proprietary software is being used to exploit and abuse them. It’s a big challenge.

At the beginning of this new decade, we're inspired to dream up a freer future. To help turn this dream into reality, we're extending our membership drive and our offer for exclusive associate membership gifts as an extra incentive for people to join the movement until January 17th. To assist us further, our friends at Technoethical are offering a 5% discount for FSF members until this date as well.

Will you start out the new decade with an FSF associate membership?

If you can't join us yourself, we also offer these membership gifts as a special thank-you if you convince just three others to join the FSF -- email campaigns@fsf.org and let us know who they are.

You care about free software, just like we do. Let your friends, family and followers know that free software needs their support. If you're looking for something to help get the message across online, we recommend using the ShoeTool video, as well as these images.

Let's make 2020 the start of a decade in which we liberate the future for generations to come.

06 January, 2020 06:05PM

January 02, 2020

grep @ Savannah

grep-3.4 released [stable]

This is to announce grep-3.4, a stable release.
Special thanks to Paul Eggert and Norihiro Tanaka for their many fine contributions.

There have been 71 commits by 4 people in the 54 weeks since 3.3.

See the NEWS below for a brief summary.

Thanks to everyone who has contributed!
The following people contributed changes to this release:

  Jim Meyering (31)
  Norihiro Tanaka (5)
  Paul Eggert (34)
  Zev Weiss (1)

Jim [on behalf of the grep maintainers]
==================================================================

Here is the GNU grep home page:
  http://gnu.org/s/grep/

For a summary of changes and contributors, see:
  http://git.sv.gnu.org/gitweb/?p=grep.git;a=shortlog;h=v3.4
or run this command from a git-cloned grep directory:
  git shortlog v3.3..v3.4

To summarize the 819 gnulib-related changes, run these commands
from a git-cloned grep directory:
  git checkout v3.4
  git submodule summary v3.3

==================================================================
Here are the compressed sources and a GPG detached signature[*]:
  https://ftp.gnu.org/gnu/grep/grep-3.4.tar.xz
  https://ftp.gnu.org/gnu/grep/grep-3.4.tar.xz.sig

Use a mirror for higher download bandwidth:
  https://ftpmirror.gnu.org/grep/grep-3.4.tar.xz
  https://ftpmirror.gnu.org/grep/grep-3.4.tar.xz.sig

[*] Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact.  First, be sure to download both the .sig file
and the corresponding tarball.  Then, run a command like this:

  gpg --verify grep-3.4.tar.xz.sig

If that command fails because you don't have the required public key,
then run this command to import it:

  gpg --keyserver keys.gnupg.net --recv-keys 7FD9FCCB000BEEEE

and rerun the 'gpg --verify' command.

This release was bootstrapped with the following tools:
  Autoconf 2.69.197-b8fd7-dirty
  Automake 1.16a
  Gnulib v0.1-3121-gc3c36de58

==================================================================
NEWS

* Noteworthy changes in release 3.4 (2020-01-02) [stable]

** New features

  The new --no-ignore-case option causes grep to observe case
  distinctions, overriding any previous -i (--ignore-case) option.
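 
  A minimal illustration of that behavior (the input here is made up):

    echo Foo | grep -i foo                    # matches, prints "Foo"
    echo Foo | grep -i --no-ignore-case foo   # no match: case is observed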

** Bug fixes

  '.' no longer matches some invalid byte sequences in UTF-8 locales.
  [bug introduced in grep 2.7]

  grep -Fw can no longer false match in non-UTF-8 multibyte locales
  For example, this command would erroneously print its input line:
    echo ab | LC_CTYPE=ja_JP.eucjp grep -Fw b
  [Bug#38223 introduced in grep 2.28]

  The exit status of 'grep -L' is no longer incorrect when standard
  output is /dev/null.
  [Bug#37716 introduced in grep 3.2]

  A performance bug has been fixed when grep is given many patterns,
  each with no back-reference.
  [Bug#33249 introduced in grep 2.5]

  A performance bug has been fixed for patterns like '01.2' that
  cause grep to reorder tokens internally.
  [Bug#34951 introduced in grep 3.2]

** Build-related

  The build procedure no longer relies on any already-built src/grep
  that might be absent or broken.  Instead, it uses the system 'grep'
  to bootstrap, and uses src/grep only to test the build.  On Solaris
  /usr/bin/grep is broken, but you can install GNU or XPG4 'grep' from
  the standard Solaris distribution before building GNU Grep yourself.
  [bug introduced in grep 2.8]

02 January, 2020 10:32PM by Jim Meyering

GNU Guile

GNU Guile 2.9.8 (beta) released

We are delighted to announce the release of GNU Guile 2.9.8. This is the eighth and possibly final pre-release of what will eventually become the 3.0 release series.

See the release announcement for full details and a download link.

This release fixes an error in libguile that could cause Guile to crash in some particular conditions, and was notably experienced by users compiling Guile itself on Ubuntu 18.04.

We plan to release a final Guile 3.0.0 on 17 January, though we may require another prerelease in the meantime. However until then, note that GNU Guile 2.9.8 is a beta release, and as such offers no API or ABI stability guarantees. Users needing a stable Guile are advised to stay on the stable 2.2 series.

As always, experience reports with GNU Guile 2.9.8, good or bad, are very welcome; send them to guile-devel@gnu.org. If you know you found a bug, please do send a note to bug-guile@gnu.org. Happy hacking!

02 January, 2020 01:33PM by Andy Wingo (guile-devel@gnu.org)

January 01, 2020

bison @ Savannah

Bison 3.5 released [stable]

We are very happy to announce the release of Bison 3.5, the best release
ever of Bison!  Better than 3.4, although it was a big improvement over 3.3,
which was a huge upgrade compared to 3.2, itself way ahead of Bison 3.1.  Ethics
demands that we don't mention 3.0.  Rumor has it that Bison 3.5 is not as
good as 3.6 will be though...

Paul Eggert revised the use of integral types in both the generator and the
generated parsers.  As a consequence small parsers have a smaller footprint,
and very large automata are now possible with the default back-end (yacc.c).
If you are interested in smaller parsers, also have a look at api.token.raw.

Adrian Vogelsgesang contributed lookahead correction for C++.

The purpose of string literals has been clarified.  Indeed, they are used
for two different purposes: freeing from having to implement the keyword
matching in the scanner, and improving error messages.  Most of the time
both can be achieved at the same time, but on occasions, it does not work so
well.  We promote their use for error messages.  We still support the former
case (at least for historical skeletons), but it is not a recommended
practice.  The documentation now warns against this use.  A new warning,
-Wdangling-alias, should help users who want to enforce the use of aliases
only for error messages.

An experimental back-end for the D programming language was added thanks to
Oliver Mangold and H. S. Teoh.  It is looking for active support from the D
community.

Happy parsing!

==================================================================

Bison is a general-purpose parser generator that converts an annotated
context-free grammar into a deterministic LR or generalized LR (GLR) parser
employing LALR(1) parser tables.  Bison can also generate IELR(1) or
canonical LR(1) parser tables.  Once you are proficient with Bison, you can
use it to develop a wide range of language parsers, from those used in
simple desk calculators to complex programming languages.

Bison is upward compatible with Yacc: all properly-written Yacc grammars
work with Bison with no change.  Anyone familiar with Yacc should be able to
use Bison with little trouble.  You need to be fluent in C, C++ or Java
programming in order to use Bison.
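 
A flavor of the input, as a toy sketch (not taken from this release):

   %token NUM
   %%
   exp: exp '+' NUM | NUM;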

Here is the GNU Bison home page:
   https://gnu.org/software/bison/

==================================================================

Here are the compressed sources:
  https://ftp.gnu.org/gnu/bison/bison-3.5.tar.gz   (5.1MB)
  https://ftp.gnu.org/gnu/bison/bison-3.5.tar.xz   (3.1MB)

Here are the GPG detached signatures[*]:
  https://ftp.gnu.org/gnu/bison/bison-3.5.tar.gz.sig
  https://ftp.gnu.org/gnu/bison/bison-3.5.tar.xz.sig

Use a mirror for higher download bandwidth:
  https://www.gnu.org/order/ftp.html

[*] Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact.  First, be sure to download both the .sig file
and the corresponding tarball.  Then, run a command like this:

  gpg --verify bison-3.5.tar.gz.sig

If that command fails because you don't have the required public key,
then run this command to import it:

  gpg --keyserver keys.gnupg.net --recv-keys 0DDCAA3278D5264E

and rerun the 'gpg --verify' command.

This release was bootstrapped with the following tools:
  Autoconf 2.69
  Automake 1.16.1
  Flex 2.6.4
  Gettext 0.19.8.1
  Gnulib v0.1-2971-gb943dd664

==================================================================

* Noteworthy changes in release 3.5 (2019-12-11) [stable]

** Backward incompatible changes

  Lone carriage-return characters (aka \r or ^M) in the grammar files are no
  longer treated as end-of-lines.  This changes the diagnostics, and in
  particular their locations.

  In C++, line numbers and columns are now represented as 'int' not
  'unsigned', so that integer overflow on positions is easily checkable via
  'gcc -fsanitize=undefined' and the like.  This affects the API for
  positions.  The default position and location classes now expose
  'counter_type' (int), used to define line and column numbers.

** Deprecated features

  The YYPRINT macro, which works only with yacc.c and only for tokens, was
  obsoleted long ago by %printer, introduced in Bison 1.50 (November 2002).
  It is deprecated and its support will be removed eventually.

** New features

*** Lookahead correction in C++

  Contributed by Adrian Vogelsgesang.

  The C++ deterministic skeleton (lalr1.cc) now supports LAC, via the
  %define variable parse.lac.
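 
  For instance, a sketch of enabling full LAC in a C++ grammar (the rest
  of the grammar is left out here):

    %language "c++"
    %define parse.lac full
    %define parse.error verbose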

*** Variable api.token.raw: Optimized token numbers (all skeletons)

  In the generated parsers, tokens have two numbers: the "external" token
  number as returned by yylex (which starts at 257), and the "internal"
  symbol number (which starts at 3).  Each time yylex is called, a table
  lookup maps the external token number to the internal symbol number.

  When the %define variable api.token.raw is set, tokens are assigned their
  internal number, which saves one table lookup per token, and also saves
  the generation of the mapping table.

  The gain is typically moderate, but in extreme cases (very simple user
  actions), a 10% improvement can be observed.
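 
  A sketch of enabling it (only sensible when the scanner can emit the
  internal symbol numbers directly, e.g. when it is generated from the
  same grammar):

    %define api.token.raw
    %token NUM "number"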

*** Generated parsers use better types for states

  Stacks now use the best integral type for state numbers, instead of always
  using 15 bits.  As a result "small" parsers now have a smaller memory
  footprint (they use 8 bits), and there is support for large automata (16
  bits), and extra large (using int, i.e., typically 31 bits).

*** Generated parsers prefer signed integer types

  Bison skeletons now prefer signed to unsigned integer types when either
  will do, as the signed types are less error-prone and allow for better
  checking with 'gcc -fsanitize=undefined'.  Also, the types chosen are now
  portable to unusual machines where char, short and int are all the same
  width.  On non-GNU platforms this may entail including <limits.h> and (if
  available) <stdint.h> to define integer types and constants.

*** A skeleton for the D programming language

  For the last few releases, Bison has shipped a stealth experimental
  skeleton: lalr1.d.  It was first contributed by Oliver Mangold, based on
  Paolo Bonzini's lalr1.java, and was cleaned and improved thanks to
  H. S. Teoh.

  However, because nobody has committed to improving, testing, and
  documenting this skeleton, it is not clear that it will be supported in
  the future.

  The lalr1.d skeleton *is functional*, and works well, as demonstrated in
  examples/d/calc.d.  Please try it, enjoy it, and... commit to support it.
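 
  A sketch of requesting it from a grammar file (see examples/d/calc.d in
  the distribution for a complete example):

    %skeleton "lalr1.d"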

*** Debug traces in Java

  The Java backend no longer emits code and data for parser tracing if the
  %define variable parse.trace is not defined.

** Diagnostics

*** New diagnostic: -Wdangling-alias

  String literals, which allow for better error messages, are (too)
  liberally accepted by Bison, which might result in silent errors.  For
  instance

    %type <exVal> cond "condition"

  does not define "condition" as a string alias to 'cond' (nonterminal
  symbols do not have string aliases).  It is rather equivalent to

    %nterm <exVal> cond
    %token <exVal> "condition"

  i.e., it gives the type 'exVal' to the "condition" token, which was
  clearly not the intention.

  Also, because string aliases need not be defined, typos such as "baz"
  instead of "bar" will not be reported.

  The option -Wdangling-alias catches these situations.  On

    %token BAR "bar"
    %type <ival> foo "foo"
    %%
    foo: "baz" {}

  bison -Wdangling-alias reports

    warning: string literal not attached to a symbol
          | %type <ival> foo "foo"
          |                  ^~~~~
    warning: string literal not attached to a symbol
          | foo: "baz" {}
          |      ^~~~~

   The -Wall option does not (yet?) include -Wdangling-alias.

*** Better POSIX Yacc compatibility diagnostics

  POSIX Yacc restricts %type to nonterminals.  This is now diagnosed by
  -Wyacc.

    %token TOKEN1
    %type  <ival> TOKEN1 TOKEN2 't'
    %token TOKEN2
    %%
    expr:

  gives with -Wyacc

    input.y:2.15-20: warning: POSIX yacc reserves %type to nonterminals [-Wyacc]
        2 | %type  <ival> TOKEN1 TOKEN2 't'
          |               ^~~~~~
    input.y:2.29-31: warning: POSIX yacc reserves %type to nonterminals [-Wyacc]
        2 | %type  <ival> TOKEN1 TOKEN2 't'
          |                             ^~~
    input.y:2.22-27: warning: POSIX yacc reserves %type to nonterminals [-Wyacc]
        2 | %type  <ival> TOKEN1 TOKEN2 't'
          |                      ^~~~~~

*** Diagnostics with insertion

  The diagnostics now display the suggestion below the underlined source.
  Replacements for undeclared symbols are now also suggested.

    $ cat /tmp/foo.y
    %%
    list: lis '.' |

    $ bison -Wall foo.y
    foo.y:2.7-9: error: symbol 'lis' is used, but is not defined as a token and has no rules; did you mean 'list'?
        2 | list: lis '.' |
          |       ^~~
          |       list
    foo.y:2.16: warning: empty rule without %empty [-Wempty-rule]
        2 | list: lis '.' |
          |                ^
          |                %empty
    foo.y: warning: fix-its can be applied.  Rerun with option '--update'. [-Wother]

*** Diagnostics about long lines

  Quoted sources may now be truncated to fit the screen.  For instance, on a
  30-column wide terminal:

    $ cat foo.y
    %token FOO                       FOO                         FOO
    %%
    exp: FOO
    $ bison foo.y
    foo.y:1.34-36: warning: symbol FOO redeclared [-Wother]
        1 | …         FOO                  …
          |           ^~~
    foo.y:1.8-10:      previous declaration
        1 | %token FOO                     …
          |        ^~~
    foo.y:1.62-64: warning: symbol FOO redeclared [-Wother]
        1 | …         FOO
          |           ^~~
    foo.y:1.8-10:      previous declaration
        1 | %token FOO                     …
          |        ^~~

** Changes

*** Debugging glr.c and glr.cc

  The glr.c skeleton always had asserts to check its own behavior (not the
  user's).  These assertions are now under the control of the parse.assert
  %define variable (disabled by default).

*** Clean up

  Several new compiler warnings in the generated output have been avoided.
  Some unused features are no longer emitted.  Cleaner generated code in
  general.

** Bug Fixes

  Portability issues in the test suite.

  In theory, parsers using %nonassoc could crash when reporting verbose
  error messages. This unlikely bug has been fixed.

  In Java, %define api.prefix was ignored.  It now behaves as expected.

01 January, 2020 09:37AM by Akim Demaille

www-zh-cn @ Savannah

2019 summary

By GNU

Dear GNU translators!

This year, the number of new translations was similar to 2018,
but our active teams generally tracked the changes
in the original articles more closely than in 2018, especially
in the second half of the year.  Our French, Spanish,
Brazilian Portuguese and ("Simplified") Chinese teams were
particularly good; I think that the figures for them may be
within the precision of my evaluations.  Unfortunately, many of our
teams are inactive or not as active as desirable, and
few teams get re-established.

      General Statistics

The number of translations per file in important directories
continued growing.  Currently it is at its maximum (8.79 translations
per original file, and 8.03 translations when weighted by article
size).

The table below shows the number and size of newly translated articles
and the translations that were converted to the PO format in important
directories (as of 2019-12-31).

+--team--+------new-------+----converted---+---to convert---+-&-outdated-+
|  ca    |   0 (  0.0Ki)  | ^ 2 ( 91.1Ki)  |   1 (120.5Ki)  |  21 (30%)  |
+--------+----------------+----------------+----------------+------------+
|  de    |   0 (  0.0Ki)  |   0 (  0.0Ki)  |   0 (  0.0Ki)  |  76 (35%)  |
+--------+----------------+----------------+----------------+------------+
|  el    | * 1 (  6.9Ki)  |   0 (  0.0Ki)  |   0 (  0.0Ki)  |  26 (55%)  |
+--------+----------------+----------------+----------------+------------+
|  es    |  24 (453.8Ki)  |   0 (  0.0Ki)  |   0 (  0.0Ki)  | 3.2 (1.8%) |
+--------+----------------+----------------+----------------+------------+
|  fi    |   0 (  0.0Ki)  |   3 (118.5Ki)  |   0 (  0.0Ki)  |            |
+--------+----------------+----------------+----------------+------------+
|  fr    |   7 ( 57.0Ki)  |   0 (  0.0Ki)  |   0 (  0.0Ki)  | 0.9 (0.3%) |
+--------+----------------+----------------+----------------+------------+
|  hr    | * 1 (  6.9Ki)  |   0 (  0.0Ki)  |   0 (  0.0Ki)  |  38 (55%)  |
+--------+----------------+----------------+----------------+------------+
|  it    |   0 (  0.0Ki)  |   0 (  0.0Ki)  |   0 (  0.0Ki)  |  38 (29%)  |
+--------+----------------+----------------+----------------+------------+
|  ja    | * 1 (  6.9Ki)  |   0 (  0.0Ki)  |   0 (  0.0Ki)  |  59 (41%)  |
+--------+----------------+----------------+----------------+------------+
|  ko    |   0 (  0.0Ki)  | ^19 (357.1Ki)  |   2 (218.7Ki)  |  23 (53%)  |
+--------+----------------+----------------+----------------+------------+
|  ml    |   3 ( 68.3Ki)  |   0 (  0.0Ki)  |   0 (  0.0Ki)  |  13 (48%)  |
+--------+----------------+----------------+----------------+------------+
|  ms    |   1 (  3.0Ki)  |   0 (  0.0Ki)  |   0 (  0.0Ki)  |            |
+--------+----------------+----------------+----------------+------------+
|  nl    | * 1 (  6.9Ki)  |   0 (  0.0Ki)  |   0 (  0.0Ki)  |  40 (31%)  |
+--------+----------------+----------------+----------------+------------+
|  pl    | % 2 ( 15.7Ki)  | ^ 4 (192.0Ki)  |   1 (181.0Ki)  |  56 (39%)  |
+--------+----------------+----------------+----------------+------------+
|  pt-br |  25 (275.0Ki)  |   0 (  0.0Ki)  |   0 (  0.0Ki)  | 0.4 (0.3%) |
+--------+----------------+----------------+----------------+------------+
|  ru    |  10 ( 71.3Ki)  |   0 (  0.0Ki)  |   0 (  0.0Ki)  | 1.9 (0.6%) |
+--------+----------------+----------------+----------------+------------+
|  sq    |   0 (  0.0Ki)  |   0 (  0.0Ki)  |   0 (  0.0Ki)  | 2.2 (6.7%) |
+--------+----------------+----------------+----------------+------------+
|  tr    |  25 (277.7Ki)  |   0 (  0.0Ki)  |   0 (  0.0Ki)  |  18 (66%)  |
+--------+----------------+----------------+----------------+------------+
|  zh-cn |  14 (305.8Ki)  |   0 (  0.0Ki)  |   0 (  0.0Ki)  | 0.4 (0.4%) |
+--------+----------------+----------------+----------------+------------+
|  zh-tw |   4 ( 32.0Ki)  |   7 ( 65.6Ki)  |  16 (232.6Ki)  | 4.9 (18%)  |
+--------+----------------+----------------+----------------+------------+
+--------+----------------+----------------+----------------+
| total  | 118 (1574.5Ki) |  35 ( 824.3Ki) |  68 (1761.4Ki) |
+--------+----------------+----------------+----------------+

& Typical number of outdated GNUNified translations throughout
  the year.

* The translations of a new page,
  /education/edu-free-learning-resources.html,
  were picked from translations of an older page by Thérèse Godefroy.

^ The translations were GNUNified by Thérèse Godefroy.

% The files were committed back in 2017, but were technically
  incomplete; Thérèse Godefroy filled the missing strings.

For reference: 7 new articles were added, amounting to 57Ki,
and there were about 700 modifications in more than 100 English
files in important directories.

Most of our active teams have no old translations to GNUNify,
and the ("Traditional") Chinese team converted a considerable part
of old translations this year.

      Orphaned Teams, New and Reformed Teams

The Catalan, Czech, and Greek teams were orphaned due to inactivity
(no commits for more than 3 years).

The Malayalam team was re-established, and now we have one more
translation of the Free Software Definition!

The situation with the Turkish team is transitional: in June,
T. E. Kalaycı was appointed an admin of www-tr, but the team
still lacks a co-ordinator, despite being one of our most active
teams this year.

In August, we installed our first Malay translation,
/p/stallmans-law.ms.html; however, the volunteer didn't proceed
with further translations.

Executives of two free software-related businesses independently
offered us help with establishing a team for Swahili translations,
but we failed to overcome the organizational issues.

A Finnish volunteer updated the few existing old translations
in August and September.

A Romanian volunteer updated a few important translations in May.

People also offered help with Hindi (twice), Arabic and Danish
translations, but didn't succeed.

      Changes in the Page Regeneration System

GNUN had no releases this year, though there are a few minor
but incompatible changes, so the GNUN 1.0 release is still pending.

Happy GNU year, and thank you for your contributions!

(I see nothing secret in this message, so if you think it may be
interesting to people who are not subscribed to the list, please
feel free to forward it).

01 January, 2020 07:19AM by Wensheng XIE

Christopher Allan Webber

201X in review

Well, this has been a big decade for me. At the close of 200X I was still very young as a programmer, had just gotten married to Morgan, had just started my job at Creative Commons, and was pretty sure everyone would figure out I was a fraud and that it would all come crashing down around me when everyone found out. (Okay, that last part is still true, but now I have more evidence I've gotten things done despite apparently pulling the wool over everyone's eyes.)

At work my boss left and I temporarily became tech lead, and we did some interesting things like kick off CC BY-SA and GPL compatibility work (which made it into the 4.0 license suite) and ran Liberated Pixel Cup (itself an interesting form of success, but I would like to spend more time talking about what the important lessons of it were... another time).

In 2011 I started MediaGoblin as a side project, but felt like I didn't really know what I was doing, yet people kept showing up and we were pushing out releases. Some people were even running the thing, and it felt fairly amazing. I left my job at Creative Commons in late 2012 and decided to try to make working on network freedom issues my main thing. It's been my main thing since, and I'm glad I've stayed focused in that way.

What I didn't expect was that the highlight of my work in the decade wasn't MediaGoblin itself but the standard we started participating in, which became ActivityPub. The work on ActivityPub arguably caused MediaGoblin to languish, but on the other hand ActivityPub was successfully ratified by the W3C as a standard, now has over 3.5 million registered users on the network, and is used by dozens (at least 50) of pieces of (mostly) interoperable software. That's a big success for everyone who worked on it (and there were quite a few of us), and in many ways I think it is the actual legacy of MediaGoblin.

After ActivityPub became a W3C Recommendation, I took a look around and realized that other projects were using ActivityPub to accomplish the work of MediaGoblin, maybe even better than MediaGoblin. The speed at which this decade passed made me conscious of how short time is and made me wonder how I should best budget it. After all, the most successful thing I worked on turned out not to be the networked software itself but the infrastructure for building networks. That led me to reconsider whether my role was more importantly to advance the state of the art, which has more recently led me to start work on the federation laboratory called Spritely, which I've written a bit about here.

My friend Serge Wroclawski and I also launched a podcast in the last year, Libre Lounge. I've been very proud of it; we have a lot of great episodes, so check the archive.

Keeping this work funded has turned out to be tough. In MediaGoblin land, we ran two crowdfunding campaigns, the first of which paid for my work, the second of which paid for Jessica Tallon's work on federation. The first campaign got poured entirely into MediaGoblin, the second one surprisingly resulted in making space so that we could do ActivityPub's work. (I hope people feel happy with the work we did, I do think ActivityPub wouldn't have happened without MediaGoblin's donors' support. That seems worth celebrating and a good outcome to me personally, at least.) I also was fortunate enough to get accepted into Stripe's Open Source Retreat and more recently my work on Spritely has been funded by the Samsung Stack Zero grant. Recently, people have been helping by donating on Patreon and both my increase in prominence from ActivityPub and Libre Lounge have helped grow that. That probably sounds like a lot of funding and success, but still most of this work has had to be quite... lean. You stretch that stuff out over nearly a decade and it doesn't account for nearly enough. To be honest, I've also had to pay for a lot of it myself too, especially by frequently contracting with other organizations (such as Open Tech Strategies and Digital Bazaar, great folks). But I've also had to rely on help from family resources at times. I'm much more privileged than other people, and I can do the work, and I think the work is necessary, so I've devoted myself to it. Sometimes I get emails asking how to be completely dedicated to FOSS without lucking out at a dedicated organization and I feel extra imposter-y in responding because I mean, I don't know, everything still feels very hand-to-mouth. A friend pointed to a blogpost from Fred Hicks at Evil Hat about how behind the scenes, things don't feel as sustainable sometimes, and that struck a chord with me (it was especially surprising to me, because Evil Hat is one of the most prominent tabletop gaming companies.) Nonetheless, I'm still privileged enough that I've been able to keep it working and stay dedicated, and I've received a lot of great support from all the resources mentioned above, and I'm happy about all that. I just wish I could give better advice on how to "make it work"... I'm in search of a good answer for that myself.

In more personal reflections of this decade, Morgan and I went through a number of difficult moves and some very difficult family situations, but I think our relationship is very strong, and some of the hardest stuff strengthened our resolve as a team. We've finally started to settle down, having bought a house and all that. Morgan completed one graduate program and is on the verge of completing her next one. A decade into our marriage (and 1.5 decades into our relationship), things are still wonderfully weird.

I'm halfway through my 30s now. This decade made it clearer to me how fast time goes. In the book A Deepness in the Sky, a space-trading civilization is described that measures time in seconds, kiloseconds, megaseconds, gigaseconds, etc. Increasingly I feel like the number of seconds ahead in life are always shorter than we feel like they will be; time is a truly precious resource. How much more time do I have to do interesting and useful things, to live a nice life? How much more time do we have left to get our shit together as a human species? (We seem to be doing an awful lot to shorten our timespan.)

I will say that I am kicking off 202X by doing something to hopefully contribute to lengthening both the timespan of myself and (more marginally individually, more significantly if done collectively) human society: 2020 will be the "Year of No Travel" for me. I hate traveling; it's bad for myself, bad for the environment. It's seemed, most importantly, to be the main thing that continues to throw my health out of whack, over and over again.

But speaking of time and its resource usage, a friend once told me that I had a habit in talks to "say the perfect thing, then ruin it by saying one more thing". I probably did something similar above (well, not claiming anything I write is perfect), but I'll preserve it anyway.

Everything, especially this blog, is imperfect anyway. Hopefully this next decade is imperfect and weird in a way we can, for the most part, enjoy.

Goodbye 201X, hello 202X.

01 January, 2020 04:46AM by Christopher Lemmer Webber

December 30, 2019

FSF Blogs

Last chance to help us reach our membership goal in 2019!

The pace and demands of modern life pressure us to carry computers in our pockets laden with nonfree software (our cell phones), and new threats to our privacy are popping up on every street corner, via proprietary Amazon Ring cameras, and on many kitchen counters, via “smart” home devices. Back when our movement was born, software freedom was only of great concern to people who were actively involved in development. Today, nobody in the world can afford to ignore the crucial importance of knowing what our software is doing, and keeping it from doing us harm.

As the battles and triumphs of 2019 fade into the past and the new challenges of 2020 emerge, the Free Software Foundation (FSF) continues our commitment to the goal we’ve had from our earliest days: a future in which all software is free, and can be trusted to serve the needs and best interests of every user. Our strength depends on your support: we need you to boldly carry the message and goal of software freedom to everyone you know, bring them into the fold, and help us mobilize them to use and talk about free software.

That's why the central message of our fall/winter fundraiser has been membership: because our associate members are the most committed fighters for the cause of software freedom, pledging not just financial support but also a vote of confidence. This is your final chance to help us reach our goal of 600 new members by the end of 2019 (and your last chance for a tax deduction if you're in the US), so today is the day to join the FSF! The numbers matter, and FSF membership is a very powerful gesture to make for only $10 a month ($5 if you are a student).

As a special bonus, all new and renewing annual associate members ($120+) can choose to receive one of our exclusive year-end gifts. If you get a minimum of three people to mention you as a referral, you can get the gifts, too!

While financial support is a must, using your voice to boost our movement is an equally important role for every member and supporter. We're behind on our new membership goal for this fundraiser period, so please help us get the word out today! You can expand our reach by sharing our posts on social media, sharing our articles via email, and talking to your friends, family, and coworkers about why free software should matter to them. We've prepared some images as fun conversation starters you can share: use the hashtag #ISupportFreeSoftware and help them spread!

We’re spending the end of this year making plans to make 2020 the best year for the FSF ever: you can read about some of these plans in the reports from our tech team, licensing and compliance team, and campaigns team. Will this be the year that we make user freedom a kitchen table issue? We’ll never stop trying – and we hope you’ll be by our side all the way.

30 December, 2019 04:50PM

December 29, 2019

pspp @ Savannah

PSPP now supports .spv files

I just pushed support for SPV files to the master branch of PSPP. This means a number of new features:

  • There is a new program "pspp-output" that can convert .spv files into other formats, e.g. you can use it to produce text or PDF files from SPSS viewer files (see the sketch after this list).
  • The "pspp" program can now output to .spv files.
  • PSPPIRE can now read and write .spv files.  The support for reading them is not refined enough (it simply dumps them to the output window), so it's not really documented yet.
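 
For instance, converting a viewer file to PDF might look like this (a sketch; the file names are made up, and pspp-output --help shows the exact syntax):

  pspp-output convert results.spv results.pdf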

I would appreciate experience reports, positive or negative.  The main known limitation is that graphs are not yet supported (this is actually a huge amount of work due to the way that SPSS implements graphs).

29 December, 2019 07:04AM by Ben Pfaff

December 28, 2019

Christopher Allan Webber

Noncommercial Doesn't Compose (and it never will)

NOTE: I actually posted this some time ago on license-discuss and some people suggested that I blog it. A good idea, which I never did, until now. The topic hasn't become any less relevant, so...

It's sad to see history repeat itself, but that's what history does, and it seems like we're in an awfully echo'y period of time. Given the volume of submissions in favor of some sort of noncommercial style license, I feel I must weigh in on the issue in general. Most of my thoughts on this developed when I worked at Creative Commons (which famously includes a "noncommercial" clause that can be mixed into a few licenses), and it took me a while to sort out why there was so much conflict and unhappiness over that clause. What was clear was that Non-Commercial and No-Derivatives were both not considered "free culture" licenses, and I was told this was drawn from the lessons of the free software world, but here we are hashing it out again so anyway...

(I'm not suggesting this is a CC position; Creative Commons hasn't to my knowledge taken an official stance on whether NonCommercial is right, and not everyone internally agreed, and also I don't work there anymore anyhow.)

I thank Rob Myers for a lot of clarity here; he used to joke that NC (the shorthand name for Non-Commercial) really stood for "No Community". I think that's true, but I'll argue that even more so it stands for "No Composition", which is just as much of a threat or more, as I hope to explain below.

As a side note, I am of course highly empathetic to the motivations of trying to insert a noncommercial clause; I've worn many hats, and funding the software I've worked on has by far been the hardest. At first glance, an NC approach appears to be a way to solve the problem. Unfortunately, it doesn't work.

The first problem with noncommercial is that nobody really knows for sure what it means. Creative Commons made a great effort to gather community consensus. But my read from going through that is that it's still very "gut feel" for a lot of people, and while there's some level of closeness, the report's results yield nothing crisp enough to be considered "defined" in my view. Personally I think that nothing will ever hit that point. For instance, which of these is commercial, and which is noncommercial?

  • Playing music at home
  • Playing music overhead, in a coffee shop
  • A song I produced being embedded in a fundraising video by the Red Cross
  • Embedding my photo in a New York Times article
  • Embedding my photo in a Mother Jones article
  • Embedding my photo on Wikipedia (if you think this is a clear and easy example btw, perhaps you'd like to take a selfie with this monkey?)

But this actually isn't the most significant part of why noncommercial fails, has always failed, and will always fail in the scope of FOSS: it simply doesn't compose.

Using the (A)GPL as the approximate maximum (and not saying it's the only possible maximum) of license restrictions, we still have full composition from top to bottom. Lax and copyleft code can be combined and reused with all participants intending to participate in growing the same commons, and with all participants on equal footing.

Unfortunately, NC destroys the stack. NC has the kind of appeal that a lottery does: it's very fun to think about participating when you imagine yourself as the recipient. The moment you have to deal with it underneath, it becomes a huge headache.

I had an argument about this with someone I tend to work closely with; they began arguing for the need to insert NC-style clauses into code, because developers gotta eat, which is at any rate a point I don't disagree with. But recently several pieces of formerly FOSS infrastructure switched to using an NC license, and they began to express that this opened up a minefield underneath them. If it felt like a minefield with just one or two libraries or utilities following the NC path, what will happen once it's the whole system?

What would using Debian be like if 1/4 of the packages were under NC licenses? Would you deploy it on your home machine? Would you deploy it on your personal VPS? Would you deploy it at your corporate datacenter? Even if I am a "noncommercial" user, if my VPS is at Linode, would Linode have to pay? What about Amazon? Worse yet... what if some of the package authors were dead or defunct?

To me it's no coincidence that we're seeing an interest in NC right at exactly the same time that faith in proprietary relicensing of copyleft code as a business strategy has begun to wane. If you were at my talk at CopyleftConf, you may have heard me talk about this in some detail (and some other things that aren't relevant to this right now). You can see the original table on slide 8 from my presentation, but here it is reproduced:

               Libre Commoner            Proprietary Relicensor
  Motivation   Protect the commons       Develop income
  Mitigating   Tragedy of the commons    Free rider problem
  Wants        Compliance                Non-compliance

(By "tragedy of the commons", here I mean "prevent the commons from being eaten away".)

The difference in the "wants" field is extremely telling: the person I will call the "libre commoner" wants everyone to be able to abide by the terms of the license. The "proprietary relicensor" actually hopes and dreams that some people will not comply with the license at all, because their business strategy depends on it. And in this part, I agree with Rob's "No-Community" pun.

Let me be clear: I'm not arguing with the desire to pay developers in this system, I'm arguing that this is a non-solution. To recap, here are the problems with noncommercial:

  • What commercial/noncommercial are is hard to define
  • NC doesn't compose; a tower of noncommercial-licensed tools is a truly brittle one to audit and resolve
  • The appeal of NC is in non-compliance

Noncommercial fails in its goals and it fails the community. It sounds nice when you think you'll be the only one on top, but it doesn't work, and it never will.

28 December, 2019 12:46AM by Christopher Lemmer Webber

December 27, 2019

FSF Blogs

Bringing the free software vision to 2020

2019 has been an eye-opening, transformative year for free software and the Free Software Foundation (FSF), bringing some major changes both internally and in the world around us. As we navigate these changes, we are guided by the FSF's founding vision -- the four freedoms that define free software, and our mission to make all software compatible with human freedom. It must be honest, transparent, and shareable, and it must truly work in service of its users.

For the last sixteen years, I have been steeped in these principles, and along with so many of you, have absorbed them into my heart and soul. Thank you for being a member of this community, for your advocacy and code and commitment. It is your support that has put us in a position to be able to face new challenges, and to continue evolving into an organization that can last for as long as the work still needs to be done.

Over the last month and a half, we've been sharing highlights of the work our campaigns, licensing, tech, and operations team have done in 2019. We don't have a full-time position dedicated to fundraising, so you've heard these details directly from the people doing the work. I'm proud of what our teams have accomplished this year with your support: huge steps forward for the Respects Your Freedom product certification program, significant updates to the infrastructure we provide for thousands of free software developers and users worldwide, an impactful International Day Against Digital Restrictions Management, a successful pilot program to teach public school students about free software, and of course our new ShoeTool video. Our intense focus on program work earned us another 4-star top rating from Charity Navigator.

Author and activist Cory Doctorow said recently of the FSF, "You interact with code that they made possible a million times a day, and they never stop working to make sure that the code stays free." We need your help now to be able to continue this work. But we can't stop there. We need to take the free software vision much further in 2020. We have to be better in the areas of mobile devices, network services, software-driven cars, artificial intelligence, and machine learning.

Most of all, we need to do better at communicating and spreading the free software vision in different ways so that others, from all walks of life, will join us in tackling these problems. This means building a stronger, kinder, more united, and powerful community.

We are behind on our goal of welcoming 600 new associate members by December 31st. But I know we can still reach that goal. In my sixteen years, I have seen single individuals inspire a dozen people to join in a week. Will you join us in our final year-end push by becoming a member or renewing your membership? Show your friends, family, and colleagues the ShoeTool video, and explain to them the reasons you support free software. We'll even send you a special thank-you gift if you convince just three others -- email campaigns@fsf.org and let us know who they are.

Thank you for everything you've done for free software, and for believing in us to carry the vision forward into 2020.

27 December, 2019 10:30PM

GNU Spotlight with Mike Gerwitz: 14 new GNU releases in December!

For announcements of most new GNU releases, subscribe to the info-gnu mailing list: https://lists.gnu.org/mailman/listinfo/info-gnu.

To download: nearly all GNU software is available from https://ftp.gnu.org/gnu/, or preferably one of its mirrors from https://www.gnu.org/prep/ftp.html. You can use the URL https://ftpmirror.gnu.org/ to be automatically redirected to a (hopefully) nearby and up-to-date mirror.

This month, we welcome Martin Schanzenbach as comaintainer of GNUnet.

A number of GNU packages, as well as the GNU operating system as a whole, are looking for maintainers and other assistance: please see https://www.gnu.org/server/takeaction.html#unmaint if you'd like to help. The general page on how to help GNU is at https://www.gnu.org/help/help.html.

If you have a working or partly working program that you'd like to offer to the GNU project as a GNU package, see https://www.gnu.org/help/evaluation.html.

As always, please feel free to write to us at maintainers@gnu.org with any GNUish questions or suggestions for future installments.

27 December, 2019 06:07PM

December 25, 2019

GNUnet News

Upcoming GNUnet Talks

There will be various talks in the next few months on GNUnet and related projects on both the Chaos Communication Congress (36C3) as well as FOSDEM. Here is an overview:

Privacy and Decentralization @ 36c3 (YBTI)

We are pleased to have 5 talks to present as part of our "youbroketheinternet/wefixthenet" session, taking place on the OIO (Open Infrastructure Orbit) stage:

  • "re:claimID - Self-sovereign, Decentralised Identity Management and Personal Data Sharing" by Hendrik Meyer zum Felde will take place at 2019-12-27 18:30 in OIO Stage. Info
  • "Buying Snacks via NFC with GNU Taler" by Dominik Hofer will take place at 2019-12-27 21:20 in OIO Stage Info
  • "CloudCalypse 2: Social network with net2o" by Bernd Paysan will take place at 2019-12-28 21:20 in OIO Stage Info
  • "Delta Chat: e-mail based messaging, the Rustocalypse and UX driven approach" by holger krekel will take place at 2019-12-29 17:40 in OIO Stage Info
  • "Cryptography of Killing Proof-of-Work" by Jeff Burdges will take place at 2019-12-30 12:00 in OIO Stage Info

In addition to these talks, we will be hosting a snack machine which accepts Taler for payment. The first of its kind! It will be filled with various goodies, including Swiss chocolates, books, and electronics. The machine will be located somewhere in the OIO assembly, and there will be a station at which you may exchange Euro for digital Euro for immediate use. We welcome all to come try it out. :)

Decentralized Internet and Privacy devroom @FOSDEM 2020

We have 2 GNUnet-related talks at the Decentralized Internet and Privacy devroom at FOSDEM 2020 in February:

  • GNUnet: A network protocol stack for building secure, distributed, and privacy-preserving applications Info
  • Knocking Down the Nest: secushareBOX - p2p, encrypted IoT and beyond... Info

25 December, 2019 11:00PM

libredwg @ Savannah

libredwg-0.9.3 released

This is another minor patch update, with some bug fixes for issues found in fuzzed DWGs.

Here are the compressed sources:

  http://ftp.gnu.org/gnu/libredwg/libredwg-0.9.3.tar.gz   (9.8MB)
  http://ftp.gnu.org/gnu/libredwg/libredwg-0.9.3.tar.xz   (3.7MB)

Here are the GPG detached signatures[*]:

  http://ftp.gnu.org/gnu/libredwg/libredwg-0.9.3.tar.gz.sig
  http://ftp.gnu.org/gnu/libredwg/libredwg-0.9.3.tar.xz.sig

Use a mirror for higher download bandwidth:

  https://www.gnu.org/order/ftp.html

Here are more binaries:

  https://github.com/LibreDWG/libredwg/releases/tag/0.9.3

Here are the SHA256 checksums:

e53d4134208ee35fbf866171ee2052edd73bf339ab5b091acbc2769d8c20c43f  libredwg-0.9.3.tar.gz
62df9eb21e7b8f107e7b2eaf0e61ed54e7939ee10fd10b896a57d59319f09483  libredwg-0.9.3.tar.xz

[*] Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact.  First, be sure to download both the .sig file
and the corresponding tarball.  Then, run a command like this:

  gpg --verify libredwg-0.9.3.tar.gz.sig

If that command fails because you don't have the required public key,
then run this command to import it:

  gpg --keyserver keys.gnupg.net --recv-keys B4F63339E65D6414

and rerun the 'gpg --verify' command.

25 December, 2019 09:55PM by Reini Urban

December 24, 2019

GNUnet News

GNUnet 0.12.1

GNUnet 0.12.1 released

We are pleased to announce the release of GNUnet 0.12.1.
This is a very minor release. It largely fixes one function that is needed by GNU Taler 0.6.0. Please read the release notes for GNUnet 0.12.0, as they still apply. Updating is only recommended for those using GNUnet in combination with GNU Taler.

Download links

The GPG key used to sign is: D8423BCB326C7907033929C7939E6BE1E29FC3CC

Note that due to mirror synchronization, not all links might be functional early after the release. For direct access try http://ftp.gnu.org/gnu/gnunet/

24 December, 2019 11:00PM

December 23, 2019

Parabola GNU/Linux-libre

manual intervention required (xorgproto dependency errors)

due to some recent changes in arch, manual intervention may be required if you hit any errors of the form:

:: installing xorgproto (2019.2-2) breaks dependency * required by *

to correct dependency errors related to 'xorgproto':

# pacman -Qi libdmx      &> /dev/null && pacman -Rdd libdmx
# pacman -Qi libxxf86dga &> /dev/null && pacman -Rdd libxxf86dga
# pacman -Syu

23 December, 2019 05:41AM by bill auger

December 22, 2019

parallel @ Savannah

GNU Parallel 20191222 ('Impeachment') released [stable]

GNU Parallel 20191222 ('Impeachment') [stable] has been released. It is available for download at: http://ftpmirror.gnu.org/parallel/

No new functionality was introduced so this is a good candidate for a stable release.

GNU Parallel is 10 years old next year on 2020-04-22. You are hereby invited to a reception on Friday 2020-04-17.

See https://www.gnu.org/software/parallel/10-years-anniversary.html

Quote of the month:

  GNU parallel all the way!
    -- David Manouchehri @DaveManouchehri@twitter

New in this release:

  • Bug fixes and man page updates.

Get the book: GNU Parallel 2018 http://www.lulu.com/shop/ole-tange/gnu-parallel-2018/paperback/product-23558902.html

GNU Parallel - For people who live life in the parallel lane.

About GNU Parallel

GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.
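
As a minimal sketch of such a replacement (the file names are only illustrative):

  # sequential shell loop: compress one log file at a time
  for f in *.log; do bzip2 "$f"; done

  # the same jobs handed to GNU Parallel, run concurrently
  parallel bzip2 ::: *.log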

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.

For example you can run this to convert all jpeg files into png and gif files and have a progress bar:

  parallel --bar convert {1} {1.}.{2} ::: *.jpg ::: png gif

Or you can generate big, medium, and small thumbnails of all jpeg files in sub dirs:

  find . -name '*.jpg' |
    parallel convert -geometry {2} {1} {1//}/thumb{2}_{1/} :::: - ::: 50 100 200

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with:

    $ (wget -O - pi.dk/3 || lynx -source pi.dk/3 || curl pi.dk/3/ || \
       fetch -o - http://pi.dk/3 ) > install.sh
    $ sha1sum install.sh | grep 3374ec53bacb199b245af2dda86df6c9
    12345678 3374ec53 bacb199b 245af2dd a86df6c9
    $ md5sum install.sh | grep 029a9ac06e8b5bc6052eac57b2c3c9ca
    029a9ac0 6e8b5bc6 052eac57 b2c3c9ca
    $ sha512sum install.sh | grep f517006d9897747bed8a4694b1acba1b
    40f53af6 9e20dae5 713ba06c f517006d 9897747b ed8a4694 b1acba1b 1464beb4
    60055629 3f2356f3 3e9c4e3c 76e3f3af a9db4b32 bd33322b 975696fc e6b23cfb
    $ bash install.sh

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your command line will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2018): GNU Parallel 2018, March 2018, https://doi.org/10.5281/zenodo.1146014.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/ Identi.ca/Google+/Twitter/Facebook/Linkedin/mailing lists
  • Get the merchandise https://gnuparallel.threadless.com/designs/gnu-parallel
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference

If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)

If GNU Parallel saves you money:

  • (Have your company) donate to FSF https://my.fsf.org/donate/

About GNU SQL

GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.
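
As a small illustration (the credentials, host, and database names below are placeholders):

  # run a single query against a MySQL database addressed by a DBURL
  sql mysql://user:password@dbhost/mydb "SELECT * FROM users;"

  # with no command given, you get the database's interactive shell
  sql postgresql://user@dbhost/mydb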

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.

About GNU Niceload

GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.
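
For instance (the load limit and the command are just examples), a heavy job can be throttled like this:

  # suspend big_backup_job whenever the load average rises above 6
  niceload -l 6 big_backup_job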

22 December, 2019 02:43PM by Ole Tange

December 19, 2019

GNUnet News

GNUnet 0.12.0

GNUnet 0.12.0 released

We are pleased to announce the release of GNUnet 0.12.0.
This is a new major release. It breaks protocol compatibility with the 0.11.x versions. Please be aware that Git master is thus henceforth INCOMPATIBLE with the 0.11.x GNUnet network, and interactions between old and new peers will result in signature verification failures. 0.11.x peers will NOT be able to communicate with Git master or 0.12.x peers.
In terms of usability, users should be aware that there are still a large number of known open issues in particular with respect to ease of use, but also some critical privacy issues especially for mobile users. Also, the nascent network is tiny and thus unlikely to provide good anonymity or extensive amounts of interesting information. As a result, the 0.12.0 release is still only suitable for early adopters with some reasonable pain tolerance.

Download links

The GPG key used to sign is: 3D11063C10F98D14BD24D1470B0998EF86F59B6A

Note that due to mirror synchronization, not all links might be functional early after the release. For direct access try http://ftp.gnu.org/gnu/gnunet/

Noteworthy changes in 0.12.0 (since 0.11.8)

  • GNS:
    • Changed key derivation protocols to adhere with LSD001. #5921
    • Names are now expected to be UTF-8 (as opposed to IDNA). #5922
    • NSS plugin now properly handles non-standard IDNA names. #5927
    • NSS plugin will refuse to process requests from root (as GNUnet code should never run as root). #5907
    • Fixed BOX service/protocol label parsing (for TLSA et al)
  • GNS/NSE: Zone revocation proof of work algorithm changed to be less susceptible to specialized ASIC hardware. #3795
  • TRANSPORT: UDP plugin moved to experimental as it is known to be unstable.
  • UTIL:
    • Improved and documented RSA binary format. #5968
    • Removed redundant hashing in EdDSA signatures. #5398
    • The gnunet-logread script for log auditing (requires perl) can now be installed.
    • Now using TweetNaCl for ECDH implementation.
  • Buildsystem: A significant number of build system issues have been fixed and improvements implemented, including:
    • GLPK dependency dropped.
    • Fixed guix package definition.
  • Documentation: Improvements to the handbook and documentation.

Known Issues

  • There are known major design issues in the TRANSPORT, ATS and CORE subsystems which will need to be addressed in the future to achieve acceptable usability, performance and security.
  • There are known moderate implementation limitations in CADET that negatively impact performance.
  • There are known moderate design issues in FS that also impact usability and performance.
  • There are minor implementation limitations in SET that create unnecessary attack surface for availability.
  • The RPS subsystem remains experimental.
  • Some high-level tests in the test-suite fail non-deterministically due to the low-level TRANSPORT issues.

In addition to this list, you may also want to consult our bug tracker at bugs.gnunet.org which lists about 190 more specific issues.

Thanks

This release was the work of many people. The following people contributed code and were thus easily identified: ng0, Christian Grothoff, Florian Dold, xrs, Naomi Phillips and Martin Schanzenbach.

19 December, 2019 11:00PM

Christopher Allan Webber

telnet postcard.sfconservancy.org 2333

Conservancy card running in a terminal

Try running "telnet postcard.sfconservancy.org 2333" in your terminal... you won't regret it, I promise! You should get to see a fun "season's greetings postcard" animated live in your terminal. My friends at Software Freedom Conservancy are running a supporters drive and I was happy to do this to help (if you were a supporter in the past, you should be getting a matching postcard in the mail!)

As you probably guessed, this is built on top of the same tech stack as Terminal Phase. Speaking of, at the time of writing we're getting quite close to that... only $40/month to go! Maybe we'll make it by the end of the year?

In the meanwhile, if you want to show your support for Conservancy and convince your friends to join, there are some matching banners you can put up on your website! And if you want to run the animated postcard yourself or generate any of its associated images, well... it's free software!

19 December, 2019 06:11PM by Christopher Lemmer Webber

remotecontrol @ Savannah

Amazon, Apple, Google and Zigbee join forces for an open smart home standard

19 December, 2019 05:06AM by Stephen H. Dawson DSL

December 16, 2019

GNU Guix

Reproducible Builds Summit, 5th edition

For several years now, the Reproducible Builds Summit has been this pleasant and fruitful retreat where we Guix hackers like to go and share, brainstorm, and hack with people from free software projects and companies who share this interest in reproducible builds and related issues. This year, several of us had the chance to be in Marrakesh for the fifth Reproducible Builds Summit, which was attended by about thirty people.

Reproducible Builds logo

This blog post summarizes takeaways from the different sessions we attended, and introduces some of the cool hacks that came to life on the rooftop of the lovely riad that was home to the summit.

Java

Java is a notoriously difficult topic, as far as bootstrapping and reproducibility go. For instance, Gradle is now the most common tool for building Java code, and in particular Android apps. However, the current way of building Gradle is to use Gradle and a build script written in Kotlin. The Kotlin project, in turn, is also built using Gradle and a build script written in Kotlin. So we end up with the most cyclic graph one can imagine with two packages: a circle between the two, and two additional loops from the packages to themselves. However, the Kotlin dependency of Gradle was introduced less than two years ago, so there is some hope we can disentangle the bootstrapping mess...

Andreas took part in a session on bootstrapping the Android toolchain, with a very vague hope of getting more than adb, fastboot and a few more utilities into Guix. The task looks daunting, since the source code is spread over a large number of git repositories with gigabytes of data, and the idea of modular builds apparently has not influenced the design decisions. But all is not lost: Sylvain from Android Rebuilds has done a lot of work to disentangle the sources, and we could also look for inspiration from the Replicant project. Interestingly, the Android NDK, which provides a foreign function interface to C libraries, appears to be an easier target.

Another working group, in which none of us took part, evolved around Maven; Hans wrote a short summary of the outcome.

Some discussions have also evolved around F-Droid, the free app store for Android, and the topic of building the apps reproducibly and adding relevant information to the competition.

Verifying and sharing build results

Speaking of which, the website retracing reproducibility feats and issues was also the subject of a cross-distribution discussion round between Debian, Arch, Nix, Guix and OpenWRT. Currently the page is tightly connected to a continuous integration instance rebuilding distributions such as Debian and Arch. We have discussed a file format (probably based on JSON) that would help to separate the process of creating the reproducibility information from collecting, evaluating and displaying it. From a Guix point of view, the idea would be to have the website communicate with an instance of the Guix Data Service.

Additionally, Bernhard started a discussion about a possible new site that would easily show, for a given package, whether it builds reproducibly in different distributions; this is mentioned in this post about the summit. This would probably also consume some data about the reproducibility of packages within Guix from the Guix Data Service.

Guix Data Service

This nifty project can serve to collect data from a number of independent Guix build farms (of which we currently have two: the farm behind ci.guix.gnu.org, and the farmlet of one or two machines behind bayfront.guix.gnu.org). Meeting in person was the occasion to update the bayfront configuration to mimic more closely that of ci; in particular, the build farm results are now exported to the web frontend.

We had quite some discussion (so far without conclusion) about the exact boundaries between Cuirass and the Guix Data Service: should the former only be a thin layer on top of the Guix daemon with the latter processing all the data towards a web frontend, or should Cuirass continue to handle its own web page?

While the Guix Data Service is not currently running at data.guix.gnu.org as the server is down for maintenance, lots of progress was made with the code. Information about normalized archives (nars), such as package binaries, that are provided by substitute servers can now be imported and stored in the database, and the ability to fetch and store builds from Cuirass has been improved. This is building towards being able to automatically and continuously track the reproducibility of Guix packages.

Bootstrapping

This year the summit had an official extended format, encouraging participants to attend for a full week by adding coding time around the usual three more structured core days, which were facilitated in a lovely, productive, and high-energy fashion by Gunner and Evelyn of Aspiration Tech.

Even before the core days started, David had packaged GNU Mes for Nix with the aim of creating a Reduced Binary Seed bootstrap for NixOS. As Vagrant managed to get Mes into Debian unstable before the summit, he expressed that we should do something with it. We decided to attempt a cross-distribution Diverse Double Compilation of Mes. Initially, David (Nix), Vagrant (Debian) and janneke (GNU Guix) took up the challenge, soon to be joined by Jelle (Arch). David was the first to do a diffoscope comparison and found that Mes v0.21 actually embeds a store file name. Always nice to see Reproducible meet Bootstrappable ;-) Upstream was easily convinced to write a patch. More news on this real soon!

Ludovic and janneke took the opportunity to take the Guix Scheme-only bootstrap a couple of steps further. In a joint effort the last functional bug was fixed and Ludovic came up with a way to avoid actually adding Gash and Gash Core Utils to the bootstrap binary seeds. The idea of bootstrapping from the current %bootstrap-mes (v0.19) instead of updating to v0.21 presented itself and was implemented by janneke right after the summit.

Andreas was wondering about the use of GCC 2.95.3 in the Guix bootstrap and then worked to create a patch to compile GMP, MPFR, and MPC using TinyCC. That work is helping the effort to remove the intermediate GCC 2.95.3 from the Guix bootstrap and instead target GCC 4.6.4 directly.

All in all a very productive and especially inspiring summit for bootstrapping with more people and projects on board, giving new perspectives to work on... and dream about.

In the last extreme bootstrapping work session, Hannes from MirageOS was inspired to start an initial port of Mes to FreeBSD and gave rise to...

Extreme bootstrapping!

As part of the discussions about bootstrapping, people noted that Guix’ build daemon is usually ignored from bootstrapping considerations, and wondered whether it should be taken into account. In effect, the build daemon emulates builds from scratch, as if one had booted into an empty machine. It does that by creating isolated build environments that contain nothing but the explicitly declared inputs. However, the build daemon is part of the Trusted Computing Base (TCB): like compilers in the “trusting trust” attack, it could inject backdoors into build results. Thus, the question becomes: how can we reduce the TCB by removing guix-daemon from it?

Vagrant came up with this crazy-looking idea: what if we started building things straight from the initrd? That way, our TCB would be stripped of guix-daemon, the Shepherd, and other services running on a normal system. Since Guix has all the build information available in the form of derivations, which are normally interpreted by the daemon, we found that it shouldn’t be that hard to convert them to a minimal Guile script that would be executed during startup, from the initrd. Some hack hours later, we had a proof-of-concept branch, adding a (gnu system bootstrap) module with all the necessary machinery:

  1. a function that converts an arbitrary derivation to a linear build script that builds the complete dependency graph in topological order;
  2. the declaration of an operating system that boots into such a script from the initrd;
  3. a function to run a pure-Scheme SHA256 implementation to compute and display the hash of the build result.

More on that in a future post! Interestingly, we learned that NBS is taking a similar approach — building from the initrd — though with different binary seeds and specific build and packaging tooling.

We went on exploring the space of what we called “extreme bootstrapping” some more. How could we further reduce the TCB? The kernel is an obvious target: as long as we use the Linux kernel, we could disable many optional features, even perhaps networking and storage drivers. Fabrice Bellard’s 2004 impressive tcc-boot experiment reminds us that we could even aim for a bootloader that builds the OS kernel before it boots it; this removes Linux entirely from the TCB, in exchange for TinyCC. As part of the “Bootstrappable Debian” project, asmc takes a similar approach: providing a very small OS kernel that’s enough to compile simple things. This is like going “from inorganic matter to organic molecules”, as Giovanni Mascellani nicely puts it.

When a Mirage developer and hackers familiar with GNU/Hurd talk about bootstrapping, it is no surprise that they end up looking at library OSes and microkernels. Indeed, one could imagine booting into a dedicated Mirage unikernel (though it would lack a POSIX personality), or booting into GNU Mach with few or no Hurd services initially running. That would be a way to strip the TCB to a bare minimum… It will be some time before we get there, but it could well be our horizon!

More cool hacks

During the summit, support for system provenance tracking in guix system landed in Guix. This allows a deployed system to embed the information needed to rebuild it: its channels and its configuration file. In other words, the result is what we could call a source-carrying system, which could also be thought of as a sort of Quine. For users it's a convenient way to map a running system or virtual machine image back to its source, or to verify that its binaries are genuine by rebuilding it.
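
Assuming a system deployed after this change, the recorded provenance (channels and configuration file) should be visible with something like:

guix system describe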

The guix challenge command started its life shortly before the first summit. During this year's hacking sessions, it gained a --diff option that automates the steps of downloading, decompressing, and diffing non-reproducible archives, possibly with Diffoscope. The idea came up some time ago, and it's good that we can finally cross that item off our to-do list.
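
As a sketch of how this can be invoked (the package name is arbitrary):

guix challenge openssl --diff=diffoscope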

Thanks!

We are grateful to everyone who made this summit possible: Gunner and Evelyn of Aspiration, Hannes, Holger, Lamby, Mattia, and Vagrant, as well as our kind hosts at Priscilla. And of course, thanks to all fellow participants whose open-mindedness and focus made this both a productive and a pleasant experience!

About GNU Guix

GNU Guix is a transactional package manager and an advanced distribution of the GNU system that respects user freedom. Guix can be used on top of any system running the kernel Linux, or it can be used as a standalone operating system distribution for i686, x86_64, ARMv7, and AArch64 machines.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. When used as a standalone GNU/Linux distribution, Guix offers a declarative, stateless approach to operating system configuration management. Guix is highly customizable and hackable through Guile programming interfaces and extensions to the Scheme language.

16 December, 2019 02:00PM by Christopher Baines, Ludovic Courtès, Andreas Enge, Jan Nieuwenhuizen

December 15, 2019

Riccardo Mottola

ArcticFox 27.9.19 release

Arctic Fox 27.9.19 has been released!

Plenty of enhancements, still supports your trusty Mac from 10.6 up and Linux PowerPC!

Arctic Fox on Devuan amd64


The code has been fixed to support newer compilers. On Linux, the highest supported compiler currently remains gcc 6.5; more recent versions do compile this release, but then fail to link with errors on very standard symbols. Help appreciated! On NetBSD, however, gcc 7 now works fine.


15 December, 2019 06:16PM by Riccardo (noreply@blogger.com)

December 13, 2019

GNU Guile

GNU Guile 2.9.7 (beta) released

We are delighted to announce GNU Guile 2.9.7, the seventh and hopefully penultimate beta release in preparation for the upcoming 3.0 stable series. See the release announcement for full details and a download link.

This release makes Guile go faster. Compared to 2.9.6 there are some significant improvements:

Comparison of microbenchmark performance for Guile 2.9.6 and 2.9.7

The cumulative comparison against 2.2 is finally looking like we have no significant regressions:

Comparison of microbenchmark performance for Guile 2.2.6 and 2.9.7

Now we're on the home stretch! Hopefully we'll get out just one more prerelease and then release a stable Guile 3.0.0 in January. However, until then, note that GNU Guile 2.9.7 is a beta release, and as such offers no API or ABI stability guarantees. Users needing a stable Guile are advised to stay on the stable 2.2 series.

As always, experience reports with GNU Guile 2.9.7, good or bad, are very welcome; send them to guile-devel@gnu.org. If you know you found a bug, please do send a note to bug-guile@gnu.org. Happy hacking!

13 December, 2019 01:31PM by Andy Wingo (guile-devel@gnu.org)

December 08, 2019

www-zh-cn @ Savannah

The FSF tech team: Doing more for free software

Dear CTT translators:

At the Free Software Foundation, we like to set big goals for ourselves, and cover a lot of ground in a short time. The FSF tech team, for example, has just four members -- two senior systems administrators, one Web developer, and a part-time chief technology officer -- yet we manage to run over 120 virtual servers. These run on about a dozen machines hosted at four different data centers. These include many public-facing Web sites and community services, as well as every single IT requirement for the staff: workstations, data storage and backup, networking, printing, accounting, telephony, email, you name it.

We don't outsource any of our daily software needs because we need to be sure that they are done using only free software. Remember, there is no "cloud," just other people's computers. For example: we don't outsource our email, so every day we send over half a million messages to thousands of free software hackers through the community mailing lists we host. We also don't outsource our Web storage or networking, so we serve tens of thousands of free software downloads -- over 1.5 terabytes of data -- a day. And our popularity, and the critical nature of the resources we make available, make us a target for denial of service attacks (one is ongoing as we write this), requiring constant monitoring by the tech team, whose members take turns being ready for emergency work so that the resources our supporters depend on stay available.

As hard as we work, we still want to do more, like increasing our already strict standards on hardware compliance, so in 2020, we will finish replacing the few remaining servers that require a nonfree BIOS. To be compliant with our own high standards, we need to be working with devices that are available through Respects Your Freedom retailers. We plan to add new machines to our farm, so that we can host more community servers like the ones we already host for KDE, SugarLabs, GNU Guix, Replicant, gNewSense, GNU Linux-Libre, and FSFLA. We provide completely virtual machines that these projects use for their daily operations, whether that's Web hosting, mailing lists, software repositories, or compiling and testing software packages.

We know that many software projects and individual hackers are looking for more options on code hosting services that focus on freedom and privacy, so we are working to set up a public site that anybody can use to publish, collaborate, or document their progress on free software projects. We will follow strict criteria to ensure that this code repository hosts only fully free software, and that it follows the very best practices towards freedom and privacy.

Another project that we are very excited about for this year is a long-awaited refresh of https://www.fsf.org. Not only will it be restyled, but it will also be easier to browse on mobile devices. As our campaigns and licensing teams are eager to create and publish more resources in different formats, we will also work to improve the support for publishing audio and video files on the site. And to enable you to do more, too, we are also developing a site to organize petitions and collect signatures, so that together we can run more effective grassroots campaigns and fight for the freedom of all computer users.

All of these efforts require countless hours of hard work, and the use of high quality hardware. These come to us at a significant cost, not just to purchase, but to keep running and to host at specialized data centers (if you have rack space in the Boston area, we are always looking for donors). For all this work, we depend on the continuous commitment of individual contributors to keep providing the technical foundation to fight for software freedom.

In solidarity,

Ruben Rodriguez Perez
Chief Technology Officer

08 December, 2019 02:20AM by Wensheng XIE

December 06, 2019

GNU Guile

GNU Guile 2.9.6 (beta) released

We are delighted to announce GNU Guile 2.9.6, the sixth beta release in preparation for the upcoming 3.0 stable series. See the release announcement for full details and a download link.

This release fixes bugs caught by users of the previous 2.9.5 prerelease, and adds some optimizations as well as a guile-3 feature for cond-expand.

In this release, we also took the opportunity to do some more rigorous benchmarking:

Comparison of microbenchmark performance for Guile 2.2.6 and 2.9.6

GNU Guile 2.9.6 is a beta release, and as such offers no API or ABI stability guarantees. Users needing a stable Guile are advised to stay on the stable 2.2 series.

As always, experience reports with GNU Guile 2.9.6, good or bad, are very welcome; send them to guile-devel@gnu.org. If you know you found a bug, please do send a note to bug-guile@gnu.org. Happy hacking!

06 December, 2019 01:15PM by Andy Wingo (guile-devel@gnu.org)

Gary Benson

GNOME 3 won’t unlock

Every couple of days something on my RHEL 7 box goes into a swapstorm and uses up all the memory. I think it's Firefox, but I never figured out why; I generally have four different Firefoxes running with four different profiles, so it's hard to tell which one is failing (if it even is that). Anyway, sometimes it makes the screen lock crash or something, and I can't get in, and I can never remember which process you have to kill to get back in, so here it is: gnome-shell. You have to killall -9 gnome-shell, and it lets you back in. Also killall -STOP firefox and killall -STOP "Web Content" are handy if the swapstorm is still under way.

06 December, 2019 10:17AM by gbenson

November 27, 2019

GNU Guix

Guix on an ARM Board

Increasingly, people discovering Guix want to try it on an ARM board instead of their x86 computer. There might be various reasons for that, from power consumption to security. In my case, I found these ARM boards practical for self-hosting, and I think the unique properties of GNU Guix make it very suitable for that purpose. I have installed GNU Guix on a Cubietruck, so my examples below will be about that board. However, you should be able to adapt the examples for your own use case.

Installing the Guix System on an ARM board is not as easy as installing it on an x86 desktop computer: there is no installation image. However, Guix supports ARM and can be installed on a foreign distribution running on that architecture. The trick is to use the Guix installed on that foreign distribution to initialize the Guix System. This article will show you how to install the Guix System on your board, without using an installer image. As we have previously mentioned, it is possible to generate an installation image yourself, if your board is supported.

Most boards can be booted from an existing GNU+Linux distribution. You will need to install a distribution (any of them) and install GNU Guix on it, using e.g. the installer script (see the sketch below). My plan was then to install the Guix System on an external SSD drive, instead of the SD card, but we will see that both are perfectly possible.
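
A minimal sketch of that step, assuming the installer script is still published at its usual location in the Guix repository:

cd /tmp
wget https://git.savannah.gnu.org/cgit/guix.git/plain/etc/guix-install.sh
chmod +x guix-install.sh
sudo ./guix-install.sh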

The first part of the article will focus on creating a proper u-boot configuration and an operating system declaration that suits your board. The second part of this article will focus on the installation procedure, when there is no installer working for your system.

Writing a configuration file for an ARM board

A configuration file for an ARM board is not very different from a configuration file for a desktop or a server running on another architecture. However, most boards use the u-boot bootloader and require some less common modules to be available at boot time.

The root file system

First of all, you should decide where your root file system is going to be installed. In my case, I wanted to install it on the external SSD, so I chose it:

(file-systems
  (cons* (file-system
           (mount-point "/")
           (device "/dev/sda1")
           (type "ext4"))
         %base-file-systems))

If you instead want to install the root file system on an SD card, you'll need to find its device name, usually /dev/mmcblk0, and the partition number. The device corresponding to the first partition should be /dev/mmcblk0p1. In that case, you would have:

(file-systems
  (cons* (file-system
           (mount-point "/")
           (device "/dev/mmcblk0p1")
           (type "ext4"))
         %base-file-systems))

The bootloader

Because of the way the Guix System is designed, you cannot use an already existing bootloader to boot your system: it wouldn't know where to look for the kernel, because it doesn't know its store path. It wouldn't be able to let you boot older generations either. Most boards use the u-boot bootloader, so we will focus on that bootloader here.

Contrary to grub, there are multiple variants of u-boot, one per board type. The installation procedure for u-boot is also somewhat specific to the board, so there are two things that you need to take care of: the u-boot package and the bootloader declaration.

Guix already defines a few u-boot-based bootloaders, such as u-boot-a20-olinuxino-lime-bootloader or u-boot-pine64-plus-bootloader among others. If your board already has a u-boot-*-bootloader defined in (gnu bootloader u-boot), you're lucky and you can skip this part of the article!

Otherwise, maybe the bootloader package is defined in (gnu packages bootloaders), such as the u-boot-cubietruck package. If so, you're a bit lucky and you can skip creating your own package definition.

If your board doesn't have a u-boot-* package defined, you can create one. It could be as simple as (make-u-boot-package "Cubietruck" "arm-linux-gnueabihf"). The first argument is the board name, as expected by the u-boot build system. The second argument is the target triplet that corresponds to the architecture of the board. You should refer to the documentation of your board for selecting the correct values. If you're really unlucky, you'll need to do some extra work to make the u-boot package you just created work, as is the case for u-boot-puma-rk3399 for instance: it needs additional phases to install firmware.

You can add the package definition to your operating system configuration file like so, before the operating-system declaration:

(use-modules (gnu packages bootloaders))

(define u-boot-my-board
  (make-u-boot-package "Myboard" "arm-linux-gnueabihf"))

(operating-system
  [...])

Then, you need to define the bootloader. A bootloader is a structure that has a name, a package, an installer, a configuration file and a configuration file generator. Fortunately, Guix already defines a base u-boot bootloader, so we can inherit from it and only redefine a few things.

The Cubietruck happens to be based on an allwinner core, for which there is already a u-boot bootloader definition u-boot-allwinner-bootloader. This bootloader is not usable as is for the Cubietruck, but it defines most of what we need. In order to get a proper bootloader for the Cubietruck, we define a new bootloader based on the Allwinner bootloader definition:

(define u-boot-cubietruck-bootloader
  (bootloader
    (inherit u-boot-allwinner-bootloader)
    (package u-boot-cubietruck)))

Now that we have our definitions, we can choose where to install the bootloader. In the case of the Cubietruck, I decided to install it on the SD card, because it cannot boot from the SSD directly. Refer to your board documentation to make sure you install u-boot on a bootable device. As we said earlier, the SD card is /dev/mmcblk0 on my device.

We can now put everything together like so:

(use-modules (gnu packages bootloaders))

(define u-boot-cubietruck
  (make-u-boot-package "Cubietruck" "arm-linux-gnueabihf"))

;; u-boot-allwinner-bootloader is not exported by (gnu bootloader u-boot) so
;; we use @@ to get it.  (@ (module) variable) means: get the value of "variable"
;; as defined (and exported) in (module).  (@@ (module) variable) is the same, but
;; it doesn't care whether it is exported or not.
(define u-boot-allwinner-bootloader
  (@@ (gnu bootloader u-boot) u-boot-allwinner-bootloader))

(define u-boot-cubietruck-bootloader
  (bootloader
    (inherit u-boot-allwinner-bootloader)
    (package u-boot-cubietruck)))

(operating-system
  [...]
  (bootloader
    (bootloader-configuration
      (target "/dev/mmcblk0")
      (bootloader u-boot-cubietruck-bootloader)))
  [...])

The kernel modules

In order for Guix to be able to load the system from the initramfs, it will probably need to load some modules, especially to access the root file system. In my case, the SSD is on an ahci device, so I need a driver for it. The kernel defines ahci_sunxi for that device on any sunxi board. The SD card itself also requires two drivers: sunxi-mmc and sd_mod.

Your own board may need other kernel modules to boot properly; however, they are hard to discover. Guix can tell you when a module is missing from your configuration file if it is loaded as a module. Most distros, however, build these modules into the kernel directly, so Guix cannot detect them reliably. Another way to find what drivers might be needed is to look at the output of dmesg. You'll find messages such as:

[    5.193684] sunxi-mmc 1c0f000.mmc: Got CD GPIO
[    5.219697] sunxi-mmc 1c0f000.mmc: initialized, max. request size: 16384 KB
[    5.221819] sunxi-mmc 1c12000.mmc: allocated mmc-pwrseq
[    5.245620] sunxi-mmc 1c12000.mmc: initialized, max. request size: 16384 KB
[    5.255341] mmc0: host does not support reading read-only switch, assuming write-enable
[    5.265310] mmc0: new high speed SDHC card at address 0007
[    5.268723] mmcblk0: mmc0:0007 SD32G 29.9 GiB

or

[    5.614961] ahci-sunxi 1c18000.sata: controller can't do PMP, turning off CAP_PMP
[    5.614981] ahci-sunxi 1c18000.sata: forcing PORTS_IMPL to 0x1
[    5.615067] ahci-sunxi 1c18000.sata: AHCI 0001.0100 32 slots 1 ports 3 Gbps 0x1 impl platform mode
[    5.615083] ahci-sunxi 1c18000.sata: flags: ncq sntf pm led clo only pio slum part ccc 
[    5.616840] scsi host0: ahci-sunxi
[    5.617458] ata1: SATA max UDMA/133 mmio [mem 0x01c18000-0x01c18fff] port 0x100 irq 37
[    5.933494] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)

Also note that module names are not consistent between what Guix expects and what is printed by dmesg, especially when they contain a "-" or a "_". You will find the correct file name by building (or using a substitute for) linux-libre beforehand:

find `guix build linux-libre`/lib/modules -name '*mmc*'

Here, I could find a file named "kernel/drivers/mmc/host/sunxi-mmc.ko", hence the module name sunxi-mmc. For the other driver, I found a "kernel/drivers/ata/ahci_sunxi.ko", hence the name ahci_sunxi, even if dmesg suggested ahci-sunxi.

Once you have found the modules you want to load before mounting the root partition, you can add them to your operating-system declaration file:

(initrd-modules (cons* "sunxi-mmc" "sd_mod" "ahci_sunxi" %base-initrd-modules))

Installing the Guix System

Installing on another drive

In my case, I wanted to install the system on an external SSD, while the currently running foreign distribution was running from the SD card. What is nice with this setup is that, in case of real trouble (your SSD caught fire or broke), you can still boot from the old foreign system with an installed Guix and all your tools by re-flashing only the bootloader.

In this scenario, we use the foreign system as we would the installer ISO, using the manual installation procedures described in the manual. Essentially, you have to partition your SSD to your liking, format your new partitions and make sure to reference the correct partition for the root file system in your configuration file. Then, initialize the system with:

mount /dev/sda1 /mnt
mkdir /mnt/etc
$EDITOR /mnt/etc/config.scm # create the configuration file
guix system init /mnt/etc/config.scm /mnt

You can now reboot and enjoy your new Guix System!

Installing on the same drive

Another option is to install the Guix System over the existing foreign distribution, replacing it entirely. Note that the root filesystem for the new Guix System is the current root filesystem, so no need to mount it. The following will initialize your system:

$EDITOR /etc/config.scm # create the configuration file
guix system init /etc/config.scm /

Make sure to remove the files from the old system. You should at least get rid of the old /etc directory, like so:

mv /etc{,.bak}
mkdir /etc

Make sure there is an empty /etc, or the new system won't boot properly. You can copy your config.scm to the new /etc directory. You can now reboot and enjoy your new Guix System!

About GNU Guix

GNU Guix is a transactional package manager and an advanced distribution of the GNU system that respects user freedom. Guix can be used on top of any system running the kernel Linux, or it can be used as a standalone operating system distribution for i686, x86_64, ARMv7, and AArch64 machines.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. When used as a standalone GNU/Linux distribution, Guix offers a declarative, stateless approach to operating system configuration management. Guix is highly customizable and hackable through Guile programming interfaces and extensions to the Scheme language.

27 November, 2019 12:00PM by Julien Lepiller

www-zh-cn @ Savannah

Come together for free software

Here at the Free Software Foundation (FSF), we strongly believe that one person can make a difference. Our main task, as the principal organization in the fight for user freedom, is one of connection: to bring people together around an unwavering set of principles. We will achieve global software freedom by staying the course, by focusing on education, and by making tools and solutions available, all by working together with this passionate and diverse community.

Every individual that takes action now will help us reach our goal of welcoming 600 new associate members by December 31st. Associate members give us the strength to amplify the free software message -- each new member exponentially increases our reach and our ability to make change. Visit fsf.org/appeal to learn more about all the different ways we can stand strong together and for access to engaging images to help you spread the message using the hashtag #ISupportFreeSoftware!

The FSF is supported by thousands of individuals like you who form the heart of the movement. Thank you for being an associate member we can depend on. While it's humbling to have thousands of people around the world giving us their votes of confidence, we also know we need to connect with millions more. Please take this moment to publicly share your passion for free software. If each associate member inspires just one other, we can double our strength. Plus, if you manage to get a minimum of three people to mention you as a referral before December 31st, you will receive an exclusive year-end gift. If you can do a little more, any extra donation you can make will help us reach even more people on your behalf.

This year, our staff of only fourteen used your financial support to unite people all over the world around our mission, with increased opportunities both in-person and online.

The only way to make sure free software stays free is through enforcing copyleft licenses, like the GNU General Public License, according to the Principles of Community-Oriented GPL Enforcement. In addition to their GPL enforcement work, our Licensing and Compliance Lab also provides educational resources to guide people through myriad licensing choices. With the help of a dedicated volunteer team, they help organizations and individuals properly distribute software while protecting user freedom.

    This October we organized a Continuing Legal Education (CLE) seminar on GPL Compliance and Legal Ethics, educating law professionals, students, and anyone interested about a range of relevant free software licensing topics.

    We invested in the development of a brand new Respects Your Freedom (RYF) Web site, so that we can connect retailers with potential customers. Our RYF program provides anyone looking for freedom-respecting products with an increasingly wide range of options. Our Licensing and Compliance Lab does the important leg work of verifying devices for users.

More and bigger seminars are in the pipeline, and we are currently processing 55 RYF certification applications. Any financial support will go into increased infrastructure, sourcing volunteers, certification, and hosting in-person, educational events.

    For years, people everywhere have been able to participate remotely in the annual LibrePlanet conference through our fully free livestream. This year, our tech team was able to share that knowledge with the EmacsConf team, as well as with the WikiConference team so that they could successfully stream their online conferences using only free software for the first time. For EmacsConf, the FSF also organized one of only two satellite instances, and hosted two of the speakers.

    Our tech team supports the development of free software by providing server space for FSF infrastructure, for the GNU Project, and for other free software projects. We recently moved our cluster of 100+ virtual machines to a new location, where we will continue to work on upgrading and expanding it. With an upgraded cluster, we'll be able to provide even more new and promising free software projects with a fully free hosting location. Rent, equipment, and RYF-compliant hardware are necessary to be able to offer this service more professionally to the public.

    The thirteenth International Day Against DRM (IDAD) hosted activists in Boston for protests and a sprint on writing educational materials, but also brought together fourteen online partners who amplified IDAD further worldwide. They organized activities ranging from promotional offers to increased writing and local activism. The dust jacket that was specially designed for the event was translated into eight languages by supporters from around the world, which is a testament to its effectiveness and our reach.

    Lastly, every six months, we find ourselves in the FSF office working with some thirty volunteers to hold true to our promise of sending the print version of our biannual free software publication to our associate members. A combined 300 hours of connecting with local free software enthusiasts allows us to send over 12,500 Bulletins to dedicated advocates in 53 countries. It will soon be published online as well.

This is just a snapshot of the many ways we were able to form new connections this year. Upholding free software and copyleft standards; providing technical infrastructure for free software developers globally; educating about free software; campaigning; organizing events; speaking and tabling at other industry events; and publishing advocacy articles, are at the core of the Foundation's work. We use funds for design, venue logistics, equipment, and operational support; we offer the possibility of attending our events to those who typically would not have the funds; and we also provide guidance and fiscal sponsorship for other free software projects and conferences who are making a difference.

We will continue to do this work and to establish and motivate connections that allow us to build awareness about the unjust power of proprietary software. We achieve a lot for little with the help of volunteers, and often repurpose equipment where we can. We have received Charity Navigator's top rating for six consecutive years. And you can read our financial statements and annual reports online.

Thank you for everything you do to help this cause. The faces behind the free software movement may change, but with your support, the Free Software Foundation will not diverge from our continued defense of the four freedoms -- not now, not ever. We advocate for and facilitate the creation of free software because it is the right thing to do -- and we need you. Our connection with you is valuable to us because you connect the movement to the world.

Thank you.

Zoë Kooyman
Program Manager

27 November, 2019 09:36AM by Wensheng XIE

November 26, 2019

gnuastro @ Savannah

Gnuastro 0.11 released

The 11th release of GNU Astronomy Utilities (Gnuastro) is now available. Please see the announcement for more.

26 November, 2019 04:46PM by Mohammad Akhlaghi

November 25, 2019

FSF News

Contract opportunity: Bookkeeper

The Free Software Foundation (FSF), a Massachusetts 501(c)(3) charity with a worldwide mission to protect computer user freedom, seeks a motivated and talented Boston-based individual to provide bookkeeping and financial operations support. This is a temporary, part-time contract opportunity with potential for additional hours and/or extension.

The contractor will work closely with our business operations manager and the rest of the operations team to ensure that the organization's day-to-day financial functions run smoothly. We are looking for a hands-on and detail-oriented professional who is comfortable working both independently and with multiple teams as needed. Ideal candidates will be proactive and highly adaptable, with an aptitude for learning new tools and paying close attention to minutiae despite dense financial material. Applicants should have at least three years of experience with nonprofit bookkeeping and finance. Familiarity with tools we use is a plus, such as SQL Ledger, CiviCRM, LibreOffice, and Request Tracker.

Contract expectations include:

  • preparing weekly accounts receivable, payables, deposits, and purchasing,

  • assisting with monthly financial reconciliation,

  • processing incoming tickets in our internal/external ticketing system, and

  • supporting the annual audit.

Contract details

This is a 3-month contract position at 10 to 20 hours per week, with responsibilities to be performed on-site at the FSF's downtown Boston office. All work will be done in the office with free software. Compensation is competitive.

Application instructions

Applications must be submitted via email to hiring@fsf.org. The email must contain the subject line "Bookkeeper." A complete application should include:

  • cover letter,
  • resume,
  • hourly rate requirements, and
  • three recent references.

All materials must be in a free format (such as text, LibreOffice, or PDF files). Email submissions that do not follow these instructions will probably be overlooked. No phone calls, please.

Applications will be reviewed on a rolling basis until the position is filled. To guarantee consideration, submit your application by December 11, 2019.

The FSF is an equal opportunity employer and does not discriminate against any employee, contractor, or applicant for employment or contracting, on the basis of race, color, marital status, religion, age, sex, sexual orientation, national origin, handicap, or any other legally protected status recognized by federal, state or local law. We value diversity in our workplace.

About the Free Software Foundation

The Free Software Foundation, founded in 1985, is dedicated to promoting computer users' right to use, study, copy, modify, and redistribute computer programs. The FSF promotes the development and use of free (as in freedom) software -- particularly the GNU operating system and its GNU/Linux variants -- and free documentation for free software. The FSF also helps to spread awareness of the ethical and political issues of freedom in the use of software, and its Web sites, located at fsf.org and gnu.org, are an important source of information about GNU/Linux. Donations to support the FSF's work can be made at https://donate.fsf.org. We are based in Boston, MA, USA.

25 November, 2019 09:25PM

November 23, 2019

health @ Savannah

GNU Health patchset 3.6.2 released !

Dear community

GNU Health 3.6.2 patchset has been released !

Priority: High

Table of Contents

  • About GNU Health Patchsets
  • Updating your system with the GNU Health control Center
  • Summary of this patchset
  • Installation notes
  • List of issues related to this patchset

About GNU Health Patchsets

We provide "patchsets" to stable releases. Patchsets allow applying bug fixes and updates on production systems. Always try to keep your production system up-to-date with the latest patches.

Patches and Patchsets maximize uptime for production systems, and keep your system updated, without the need to do a whole installation.

NOTE: Patchsets are applied to previously installed systems only. For new, fresh installations, download and install the whole tarball (i.e., gnuhealth-3.6.2.tar.gz).

Updating your system with the GNU Health control Center

Starting with the GNU Health 3.x series, you can do automatic updates of the GNU Health HMIS kernel and modules using the GNU Health control center program.

Please refer to the administration manual section ( https://en.wikibooks.org/wiki/GNU_Health/Control_Center )

The GNU Health control center works on standard installations (those done following the installation manual on wikibooks). Don't use it if you use an alternative method or if your distribution does not follow the GNU Health packaging guidelines.

Summary of this patchset

Patch 3.6.2 fixes a problem in the obstetric history (OBS command) when representing the weeks at the end of pregnancy.

Installation Notes

You must apply previous patchsets before installing this patchset. If your patchset level is 3.6.1, then just follow the general instructions.

In most cases, the GNU Health control center (gnuhealth-control) takes care of applying the patches for you.
Log in as the gnuhealth user and run:

$ cdutil
$ gnuhealth-control update

For detailed information, follow the general instructions at

After applying the patches, make a full update of your GNU Health database as explained in the documentation.
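
As a rough sketch only (the database name is a placeholder, and the exact invocation, including any -c <config> argument, depends on your installation; the documented procedure takes precedence), the full update is typically done with Tryton's admin tool:

$ trytond-admin --all -d health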

  • Restart the GNU Health server

List of issues and tasks related to this patchset

  • GH HMIS server, health_gyneco. Bug #57292: OBS command generates a traceback
  • GH HMIS server, health_federation. Enforce unicode on setup read method

 For detailed information about each issue, you can visit https://savannah.gnu.org/bugs/?group=health
 For detailed information about each task, you can visit https://savannah.gnu.org/task/?group=health

 For detailed information you can read about Patches and Patchsets

Happy and healthy hacking !

Dr. Luis Falcon, MD, MSc
President, GNU Solidario
GNU Health: Freedom and Equity in Healthcare
http://www.gnuhealth.org
GPG Fingerprint: ACBF C80F C891 631C 68AA  8DC8 C015 E1AE 0098 9199

Join us at GNU Health Con 2019 https://www.gnuhealthcon.org

23 November, 2019 04:51PM by Luis Falcon

parallel @ Savannah

GNU Parallel 20191122 ('Quantum Supremacy') released [stable]

GNU Parallel 20191122 ('Quantum Supremacy') [stable] has been released. It is available for download at: http://ftpmirror.gnu.org/parallel/

No new functionality was introduced so this is a good candidate for a stable release.

GNU Parallel turns 10 years old next year, on 2020-04-22. You are hereby invited to a reception on Friday, 2020-04-17.

See https://www.gnu.org/software/parallel/10-years-anniversary.html

Quote of the month:

  [L]earning about parallel was amazing for me, it gives us many beautiful solutions.
    -- SergioAraujo@stackoverflow

New in this release:

  • Bug fixes and man page updates.

Get the book: GNU Parallel 2018 http://www.lulu.com/shop/ole-tange/gnu-parallel-2018/paperback/product-23558902.html

GNU Parallel - For people who live life in the parallel lane.

About GNU Parallel

GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.
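
For the pipe case, a minimal sketch (the input file, block size, and command below are arbitrary placeholders) could look like:

  cat access.log | parallel --pipe --block 10M wc -l    # count lines in each 10 MB block, in parallel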

If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.
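
As a small illustration of the loop replacement (a hedged sketch; gzip and the glob are just placeholders), a sequential loop such as

  for f in *.log; do gzip "$f"; done

can usually be written as

  parallel gzip ::: *.log    # one gzip job per file, run in parallel across your cores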

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.

For example you can run this to convert all jpeg files into png and gif files and have a progress bar:

  parallel --bar convert {1} {1.}.{2} ::: *.jpg ::: png gif

Or you can generate big, medium, and small thumbnails of all jpeg files in sub dirs:

  find . -name '*.jpg' |
    parallel convert -geometry {2} {1} {1//}/thumb{2}_{1/} :::: - ::: 50 100 200

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with:

    $ (wget -O - pi.dk/3 || lynx -source pi.dk/3 || curl pi.dk/3/ || \
       fetch -o - http://pi.dk/3 ) > install.sh
    $ sha1sum install.sh | grep 3374ec53bacb199b245af2dda86df6c9
    12345678 3374ec53 bacb199b 245af2dd a86df6c9
    $ md5sum install.sh | grep 029a9ac06e8b5bc6052eac57b2c3c9ca
    029a9ac0 6e8b5bc6 052eac57 b2c3c9ca
    $ sha512sum install.sh | grep f517006d9897747bed8a4694b1acba1b
    40f53af6 9e20dae5 713ba06c f517006d 9897747b ed8a4694 b1acba1b 1464beb4
    60055629 3f2356f3 3e9c4e3c 76e3f3af a9db4b32 bd33322b 975696fc e6b23cfb
    $ bash install.sh

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your command line will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2018): GNU Parallel 2018, March 2018, https://doi.org/10.5281/zenodo.1146014.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/ Identi.ca/Google+/Twitter/Facebook/Linkedin/mailing lists
  • Get the merchandise https://gnuparallel.threadless.com/designs/gnu-parallel
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference

If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)

If GNU Parallel saves you money:

About GNU SQL

GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.
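
For instance (a rough sketch; the vendor, credentials, host, and database name are placeholders), running a query through a DBURL and dropping into the interactive client might look like:

  sql mysql://user:password@dbhost/mydb "SELECT COUNT(*) FROM patients;"   # run one query
  sql mysql://user:password@dbhost/mydb                                    # no command: interactive shell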

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.

About GNU Niceload

GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.
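
For example (a minimal sketch; the threshold and command are placeholders, and the exact option spelling should be checked against the niceload man page), suspending a backup job whenever the load average goes above 4 might look like:

  niceload -l 4 tar czf backup.tar.gz /data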

23 November, 2019 01:03PM by Ole Tange

November 22, 2019

Sylvain Beucler

Android Rebuilds updates

What is it already?

Android Rebuilds provides freely-licensed builds of Android development tools written by somebody else.

New builds

SDK 10 (API 29) and NDK 20 rebuilds are now available, as unattended build scripts as well as binaries you shan't trust.

sdkmanager integration will be complete when we figure out how to give our repo precedence over somebody else's.

Evolution of the situation

SDK build remains monolithic and growing (40GB .git, 7h multi-core build, 200GB build space).

But there are fewer build issues, thanks to newer "prebuilts" deps straight in Git, now including OpenJDK.
I expect we'll soon chroot in Git before build.

Also, for the first time ever, I could complete an NDK Windows build.

Licensing

Official binaries are still click-wrapped with a proprietary license.

It was discovered that this license also covers past versions of android.jar et al., hidden in a prebuilts directory and somehow necessary for the builds.
Archaeological work has already started, successfully rebuilding SDKs from the start of the decade.

Fanbase

Android Rebuilds is showcased in ungoogled-chromium-android, a lightweight approach to removing Google web service dependency.

F-Droid mirror

After some back and forth, the F-Droid mirror is stable and limited to the experimental sdkmanager repository.
F-Droid showed high dedication to implementing upload restrictions and establishing procedures.
I have great hope that they will soon show the same level of dedication to dropping non-free licenses and freeing their build server.

22 November, 2019 01:35PM

November 19, 2019

health @ Savannah

GNU Health patchset 3.6.1 released !

Dear community

GNU Health 3.6.1 patchset has been released !

Priority: High

Table of Contents

  • About GNU Health Patchsets
  • Updating your system with the GNU Health control Center
  • Summary of this patchset
  • Installation notes
  • List of issues related to this patchset

About GNU Health Patchsets

We provide "patchsets" to stable releases. Patchsets allow applying bug
fixes and updates on production systems. Always try to keep your
production system up-to-date with the latest patches.

Patches and Patchsets maximize uptime for production systems, and keep
your system updated, without the need to do a whole installation.

NOTE: Patchsets are applied on previously installed systems only. For
new, fresh installations, download and install the whole tarball (ie,
gnuhealth-3.6.1.tar.gz)

Updating your system with the GNU Health control Center

Starting with the GNU Health 3.x series, you can perform automatic
updates of the GNU Health HMIS kernel and modules using the GNU Health
control center program.

Please refer to the administration manual section (
https://en.wikibooks.org/wiki/GNU_Health/Control_Center )

The GNU Health control center works on standard installations (those
done following the installation manual on wikibooks). Don't use it if
you use an alternative method or if your distribution does not follow
the GNU Health packaging guidelines.

Summary of this patchset

Patch 3.6.1 mainly fixes crashes and tracebacks in medicament
computation, which now uses the standard product location quantities.
Version 3.6.1 also provides the latest gnuhealth-control center tool.

Installation Notes

You must apply previous patchsets before installing this patchset. If
your patchset level is 3.6.0, then just follow the general
instructions.

In most cases, GNU Health Control center (gnuhealth-control) takes care
of applying the patches for you.
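
Logged in as the gnuhealth user, the update typically amounts to the same two commands shown in the 3.6.2 patchset announcement earlier on this page:

$ cdutil
$ gnuhealth-control update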

Follow the general instructions at

After applying the patches, make a full update of your GNU Health
database as explained in the documentation.

  • Restart the GNU Health server

List of issues and tasks related to this patchset

  • Bug #57267: https://savannah.gnu.org/bugs/?57267

  • Bug #57266: Remove the 2to3 conversion in updates
    https://savannah.gnu.org/bugs/?57266

For detailed information about each issue, you can visit
 https://savannah.gnu.org/bugs/?group=health

For more information you can read about Patches and Patchsets

Happy and healthy hacking !
--
Dr. Luis Falcon, M.D.
President, GNU Solidario
GNU Health: Freedom and Equity in Healthcare
http://www.gnuhealth.org
GPG Fingerprint :ACBF C80F C891 631C 68AA  8DC8 C015 E1AE 0098 9199

Join us at GNU Health Con 2019 https://www.gnuhealthcon.org

19 November, 2019 11:04PM by Luis Falcon

November 18, 2019

Sylvain Beucler

SCP Foundation needs you!

SCP is a mind-blowing, diverse, high-quality collection of writings and illustrations, all released under the CC-BY-SA free license.
If you have never read horror stories written in a scientific style -- have a try :)

[obviously this has nothing to do with OpenSSH Secure CoPy ;)]

Faced with a legal threat through the aggressive use of a RU/EU trademark, the SCP project is raising a legal fund.
I suggest you have a look.

18 November, 2019 12:57PM

November 17, 2019

denemo @ Savannah

Version 2.3 was released

This is a belated notice that version 2.3 has been released.

New Features

    Seek Locations in Scores
        Specify type of object sought
        Or valid note range
        Or any custom condition
        Creates a clickable list of locations
        Each location is removed from list once visited
    Syntax highlighting in LilyPond view
    Playback Start/End markers draggable
    Source window navigation by page number
        Page number always visible
    Rapid marking of passages
    Two-chord Tremolos
    Allowing breaks at half-measure for whole movement
        Also breaks at every beat
    Passages
        Mark Passages of music
        Perform tasks on the marked passages
        Swapping musical material with staff below implemented
    Search for lost scores
        Interval-based
        Searches whole directory hierarchy
        Works for transposed scores
    Compare Scores
    Index Collection of Scores
        All scores below a start directory indexed
        Index includes typeset incipit for music
        Title, Composer, Instrumentation, Score Comment fields
        Sort by composer surname
        Filter by any Scheme condition
        Open files by clicking on them in Index
    Intelligent File Opening
        Re-interprets file paths for moved file systems
    Improved Score etc editor appearance
    Print History
        History records what part of the score was printed
        Date and printer included
    Improvements to Scheme Editor
        Title bar shows open file
        Save dialog gives help
    Colors now differentiate palettes, titles etc. in main display
    Swapping Display and Source positions
        for switching between entering music and editing
        a single keypress or MIDI command
    Activate object from keyboard
        Fn2 key equivalent to mouse-right click
        Shift and Control right-click via Shift-Fn2 and Control-Fn2
    Help via Email
    Auto-translation to Spanish

Bug Fixes

    Adding buttons to palettes no longer brings hidden buttons back

    MIDI playback of empty measures containing non-notes

    Instrument name with Ambitus clash in staff properties menu fixed

    Visibility of Emmentaler glyphs fixed

    Update of layout on staff to voice change

    Open Recent anomalies fixed

    Failures to translate menu titles and palettes fixed

17 November, 2019 05:05PM by Richard Shann

November 12, 2019

FSF Events

Hang out with the FSF staff in Seattle, November 15

We are hosting this get-together to show our appreciation for your support of the FSF's work and to provide an opportunity to meet other FSF members and supporters in the area. We'll give updates on what the FSF is currently working on and we are curious to hear your thoughts, as well as answer any questions you may have.

  • WHEN: Friday, November 15, from 18:30 - 20:30 PST

  • WHERE: Herb & Bitter, 516 Broadway, East Seattle, WA 98102

  • RSVP: RSVP to campaigns@fsf.org before Thursday, November 14, if you're thinking of attending

We will be providing appetizers, including vegan/vegetarian-friendly items. We reserve some space in the venue for the number of people we expect based on RSVPs, but we still welcome people who have not planned ahead. This is an informal gathering for anyone who is interested in participating in the free software community or wants to learn more about the FSF; you don't have to be a current member to attend.

We look forward to meeting you in person!

12 November, 2019 03:50PM

November 10, 2019

health @ Savannah

GNU Health HMIS 3.6 released !

Dear community:

I am very proud to announce the release of the GNU Health 3.6 series !

This version is the result of many developments and integration of ideas from the community.

We are now 11 years old. We should all be very proud, because not only have we built the best Libre Health and Hospital Information System, but we have also created a strong, committed, and friendly international community around it.

What is new in GNU Health 3.6 series

  • Both GNU Health client and server are now in Python3
  • Remove Python2 support
  • GH HMIS server uses Tryton 5.0 LTS kernel (5 year support)
  • Client is based on Tryton GTK client 5.2
  • Automation on the GH Federation queue management
  • Integration to Orthanc DICOM server
  • Pages of Life models fully integrated with patient evaluation & GH Federation
  • GNU Health camera plugin integrated with the latest OpenCV
  • GH Client uses GI. Removed pygtkcompat.
  • GH Federation HIS has been migrated from MongoDB to PostgreSQL
  • New demo database downloader
  • Thalamus now uses uwsgi as the default WSGI server
  • SSL is the default method for Thalamus and the GH Federation

Upgrading from GNU Health 3.4

  • Make a FULL BACKUP of your kernel, database and attach directories !!!
  • Follow the instructions on the Wikibooks
  • Read the specific instructions found under scripts/upgrade/3.6 in the main source installation tarball, and apply the scripts in the specified order.

Development focus

In addition to the GH HMIS server, we will focus development on the following areas of the GNU Health ecosystem:

  • The GNU Health Federation Portal
  • The mobile client
  • Interoperability

Work on the GH Federation Portal has already started. It is a VueJS application that provides a single point of entry to the GNU Health information system for individuals, health professionals, and epidemiologists.

The GNU Health Federation now receives information coming from many health institutions and people from a region or country. The GH Federation portal will make it possible to manage those resources, and will serve as the main point for analytics and reporting on the massive amounts of demographic and health data generated nationwide. People, health centers, and research institutions (e.g., in genomics) can already enjoy the benefits of the GNU Health Federation.

Development of the mobile client (MyGNUHealth) will remain in Qt, will focus on KDE Plasma Mobile technology, and will run on libre mobile operating systems and devices (such as the Pine64). We need fully libre mobile devices if we want to preserve privacy in healthcare.

As far as interoperability goes, GNU Health is now very interoperable. It uses open coding standards, as well as open formats (XML, JSON, ...), to exchange messages. We currently support read operations in HL7 FHIR for a number of resources. Needless to say, we are open to other open-standard communities that are willing to integrate with GNU Health.
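
As an illustration only (the server URL, credentials, and patient ID below are hypothetical, not an actual GNU Health endpoint), an HL7 FHIR read is simply an HTTP GET of a resource path:

  curl -u user:password https://health.example.org/fhir/Patient/1    # FHIR read: GET [base]/[ResourceType]/[id]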

Last but not least: no matter how hard we try to avoid them, there will be bugs, so please test the new system, upgrade process, and languages, and give us your feedback via health@gnu.org

Happy and Healthy Hacking !

--
Dr. Luis Falcon, M.D.
President, GNU Solidario
GNU Health: Freedom and Equity in Healthcare
https://www.gnuhealth.org
GNUPG Fingerprint :ACBF C80F C891 631C 68AA  8DC8 C015 E1AE 0098 9199

10 November, 2019 10:42PM by Luis Falcon

November 08, 2019

libredwg @ Savannah

libredwg-0.9.2 released

This is a minor patch update.

  • Added the -x, --extnames option to dwglayers for r13-r14 DWGs
  • Fixed some more leaks
  • Added DICTIONARY.itemhandles[] for r13 and r14
  • Added geom utils to some programs: dwg2SVG and dwg2ps
  • Added basic POLYLINE_2D and LWPOLYLINE support to dwg2SVG

Here are the compressed sources:
  http://ftp.gnu.org/gnu/libredwg/libredwg-0.9.2.tar.gz   (9.8MB)
  http://ftp.gnu.org/gnu/libredwg/libredwg-0.9.2.tar.xz   (3.7MB)

Here are the GPG detached signatures[*]:
  http://ftp.gnu.org/gnu/libredwg/libredwg-0.9.2.tar.gz.sig
  http://ftp.gnu.org/gnu/libredwg/libredwg-0.9.2.tar.xz.sig

Use a mirror for higher download bandwidth:
  https://www.gnu.org/order/ftp.html

Here are more binaries:
  https://github.com/LibreDWG/libredwg/releases/tag/0.9.2

Here are the SHA256 checksums:

e80dd6006c3622df76d2a684e3119d32f61c1d522d54922799149f6ab84aada4  libredwg-0.9.2.tar.gz
d4ba88bfd031a0901f6f3ad007ec87f5d9f328fb10d1bce2daf66315625d0364  libredwg-0.9.2.tar.xz
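
To check a downloaded tarball against these checksums, a standard coreutils invocation such as the following can be used:

  # prints "libredwg-0.9.2.tar.gz: OK" if the file is intact
  echo "e80dd6006c3622df76d2a684e3119d32f61c1d522d54922799149f6ab84aada4  libredwg-0.9.2.tar.gz" | sha256sum -c -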

[*] Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact.  First, be sure to download both the .sig file
and the corresponding tarball.  Then, run a command like this:

  gpg --verify libredwg-0.9.2.tar.gz.sig

If that command fails because you don't have the required public key,
then run this command to import it:

  gpg --keyserver keys.gnupg.net --recv-keys B4F63339E65D6414

and rerun the 'gpg --verify' command.

08 November, 2019 08:32AM by Reini Urban

November 07, 2019

FSF News

Talos II Mainboard and Talos II Lite Mainboard now FSF-certified to Respect Your Freedom

Talos II

BOSTON, Massachusetts, USA -- Thursday, November 7th, 2019 -- The Free Software Foundation (FSF) today awarded Respects Your Freedom (RYF) certification to the Talos II and Talos II Lite mainboards from Raptor Computing Systems, LLC. The RYF certification mark means that these products meet the FSF's standards in regard to users' freedom, control over the product, and privacy.

While these are the first devices from Raptor Computing Systems to receive RYF certification, the FSF has supported their work since 2015, starting with the original Talos crowdfunding effort. Raptor Computing Systems has worked very hard to protect the rights of users.

"From our very first products through our latest offerings, we have always placed a strong emphasis on returning control of computing to the owner of computing devices -- not retaining it for the vendor or the vendor's partners. We hope that with the addition of our modern, powerful, owner-controlled systems to the RYF family, we will help spur on industry adoption of a similar stance from the many silicon vendors required to support modern computing," said Timothy Pearson, Chief Technology Officer, Raptor Computing Systems, LLC.

These two mainboards are the first PowerPC devices to receive certification. Several GNU/Linux distributions endorsed by the FSF are currently working towards offering support for the PowerPC platform.

"These certifications represent a new era for the RYF program. Raptor's new boards were designed to respect our rights, and will open up new possibilities for free software users everywhere," said the FSF's executive director, John Sullivan.

The Talos II and Talos II Lite also represent an interesting first in terms of reproducible builds. When two people compile the same code, the resulting object code usually differs slightly because of variables like build timestamps and other differences in the build environment. Enabling users to independently reproduce exactly the same builds of important free software programs means that anyone can distribute those builds with greater certainty that they do not contain hidden malware. For the Talos II, the FSF was able to reproduce the build that is loaded onto the FPGA chip of the board that was tested, and will include the checksum of that build along with the source code we publish.

"We want to congratulate Raptor Engineering on this, and we encourage vendors to ship more reproducible builds, which we will be happy to reproduce as part of the RYF certification," said the FSF's senior system administrator, Ian Kelling.

To learn more about the Respects Your Freedom certification program, including details on the certification of these Raptor Computing Systems devices, please visit https://ryf.fsf.org.

Retailers interested in applying for certification can consult https://ryf.fsf.org/about/criteria.

About the Free Software Foundation

The Free Software Foundation, founded in 1985, is dedicated to promoting computer users' right to use, study, copy, modify, and redistribute computer programs. The FSF promotes the development and use of free (as in freedom) software -- particularly the GNU operating system and its GNU/Linux variants -- and free documentation for free software. The FSF also helps to spread awareness of the ethical and political issues of freedom in the use of software, and its Web sites, located at https://fsf.org and https://gnu.org, are an important source of information about GNU/Linux. Donations to support the FSF's work can be made at https://donate.fsf.org. Its headquarters are in Boston, MA, USA.

More information about the FSF, as well as important information for journalists and publishers, is at https://www.fsf.org/press.

About Raptor Computing Systems, LLC

Raptor Computing Systems, LLC is focused on developing and marketing user-controlled devices.

Media Contacts

Donald Robertson, III
Licensing and Compliance Manager
Free Software Foundation
+1 (617) 542 5942
licensing@fsf.org

Raptor Computing Systems, LLC sales@raptorcs.com

Image of Talos II by Raptor Computing Systems, LLC Copyright 2018 licensed under CC-BY-SA 4.0.

07 November, 2019 07:13PM

LibrePlanet returns in 2020 to Free the Future! March 14-15, Boston area

BOSTON, Massachusetts, USA -- Thursday, November 7, 2019 -- The Free Software Foundation (FSF) today announced that registration is open for the twelfth LibrePlanet conference on free software. The annual technology and social justice conference will be held in the Boston area on March 14 and 15, 2020, with the theme "Free the Future." Session proposals will be accepted through November 20.

The FSF invites activists, hackers, law professionals, artists, students, developers, young people, policymakers, tinkerers, newcomers to free software, and anyone looking for technology that respects their freedom to register to attend, and to submit a proposal for a session for LibrePlanet: "Free the Future."

Submissions to the call for sessions are being accepted through Wednesday, November 20, 2019, at 12:00 EST (17:00 UTC).

LibrePlanet provides an opportunity for community activists, domain experts, and people seeking solutions for themselves to come together in order to discuss current issues in technology and ethics.

"LibrePlanet attendees and speakers will be discussing the hot button issues we've all been reading about every day, and their connection to the free software movement. How do you fight Facebook? How do we make software-driven cars safe? How do we stop algorithms from making terrible, unreviewable decisions? How do we enjoy the convenience of mobile phones and digital home assistants without being constantly under surveillance? What is the future of digital currency? Can we have an Internet that facilitates respectful dialogue?" said FSF's executive director, John Sullivan.

The free software community has continuously demanded that users and developers be permitted to understand, study, and alter the software they use, offering hope and solutions for a free technological future. LibrePlanet speakers will display their unique combination of digital knowledge and educational skills in the two-day conference, as well as give more insights into their ethical dedication to envisioning a future rich with free "as in freedom" software and without network services that mistreat their users. The FSF's LibrePlanet 2020 edition is therefore aptly named "Free the Future."

"For each new technological convenience we gain, it seems that we lose even more in the process. To exchange intangible but vital rights to freedom and privacy for the latest new gadget can make the future of software seem bleak," said ZoĂŤ Kooyman, program manager for the FSF. "But there is resistance, and it is within our capabilities to reject this outcome."

Thousands of people have attended LibrePlanet over the years, both in person and remotely. The conference welcomes visitors from up to 15 countries each year, with many more joining online. Hundreds of impressive free software speaker sessions, including keynote talks by Edward Snowden and Cory Doctorow, can be viewed on the conference's MediaGoblin instance, in anticipation of further program announcements.

For those who cannot attend LibrePlanet in person, there are plenty of other ways to participate remotely. The FSF is encouraging free software advocates worldwide to use the tools provided on libreplanet.org to host satellite viewing parties and other events. They also opened applications for scholarships for people around the globe to attend the conference in Boston, and encourage supporters who are able to help others attend by donating to the LibrePlanet travel fund.

About the Free Software Foundation

The Free Software Foundation, founded in 1985, is dedicated to promoting computer users' right to use, study, copy, modify, and redistribute computer programs. The FSF promotes the development and use of free (as in freedom) software -- particularly the GNU operating system and its GNU/Linux variants -- and free documentation for free software. The FSF also helps to spread awareness of the ethical and political issues of freedom in the use of software, and its Web sites, located at https://www.fsf.org and https://www.gnu.org, are an important source of information about GNU/Linux. Donations to support the FSF's work can be made at https://donate.fsf.org. Its headquarters are in Boston, MA, USA.

MEDIA CONTACT

ZoĂŤ Kooyman
Program Manager
Free Software Foundation
+1 (617) 542 5942
campaigns@fsf.org

07 November, 2019 05:10PM

Christopher Allan Webber

Terminal Phase: building a space shooter that runs in your terminal

Yeah you read (and saw, via the above gif) that right! A space shooter! That runs in your terminal! Pew pew pew!

Well it's most of one, anyway. It's a prototype that I built as a test program for Spritely Goblins.

I've satisfied the technical needs I had in building the program; I might still finish it as a game, and it's close enough where making a satisfying game rather than just a short demo is super feasible, but I've decided to see whether or not there's actually enough interest in that at all by leaving that as a milestone on my Patreon. (We're actually getting quite close to meeting it... would be cool if it happened!)

But what am I, a person who is mostly known for work on a federated social web protocol, doing making a game demo, especially for a singleplayer game? Was it just for fun? It turns out it has more to do with my long term plans for the federated social web than it may appear.

And while it would be cool to get something out there that I would be proud of for its entertainment value, in the meanwhile the most interesting aspects of this demo to me are actually the technical ones. I thought I'd walk through what those are in this post, because in a sense it's a preview of some of the stuff ahead in Spritely. (Now that I've written most of this post, I have to add the forewarning that this blogpost wanders a lot, but I hope all the paths it goes down are sufficiently interesting.)

Racket and game development

Before we get to the Spritely stuff, I want to talk about why I think Racket is an underappreciated environment for making games. Well, in a certain sense Racket has been advertised for its association with game-making, but in some more obscure ways. Let's review those:

  • Racket has historically been built as an educational teaching-programming environment, for two audiences: middle schoolers, and college freshmen. Both of them to some degree involve using the big-bang "toy" game engine. It is, by default, functional in its design, though you can mix it with imperative constructs. And yet maybe to some that makes it sound like it would be more complicated, but it's very easy to pick up. (DrRacket, Racket's bundled editor, also makes this a lot easier.) If middle schoolers can learn it, so can you.
  • Along those lines, there's a whole book called Realm of Racket that's all about learning to program by making games. It's pretty good!
  • Game studio Naughty Dog has used Racket to build custom in-house tools for their games.
  • Pioneering game developer John Carmack built prototypes for the Oculus Rift using Racket. (It didn't become production code, though John's assessments of Racket's strengths appear to align with my own largely; I was really pleased to see him repost on twitter my blogpost about Racket being an acceptable Python. He did give a caveat, however.)

So, maybe you've already heard about Racket used in a game development context, but despite that I think most people don't really know how to start using Racket as a game development environment. It doesn't help that the toy game engine, big-bang, is slowed down by a lot of dynamic safety checks that are there to help newcomers learn about common errors.

It turns out there are a bunch of really delightful game development tools for Racket, but they aren't very well advertised and don't really have a nice comprehensive tutorial that explains how to get started with them. (I've written up one hastily for a friend; maybe I should polish it up and release it as its own document.) Most of these are written by Racket developer Jay McCarthy, whom I consider to be one of those kind of "mad genius hacker" types. They're fairly lego-like, so here's a portrait of what I consider to be the "Jay McCarthy game development stack":

  • Lux, a functional game engine loop. Really this is basically "big-bang for grown-ups"; the idea is very similar but the implementation is much faster. (It has some weird naming conventions in it though; turns out there's a funny reason for that...)
  • By default Lux ships with a minimal rendering engine based on the Racket drawing toolkit. Combined with (either of) Racket's functional picture combinator libraries (pict or the 2htdp/image library), this can be used to make game prototypes very quickly, but they're still likely to be quite slow. Fortunately, Lux has a clever design by which you can swap out different rendering engines (and input mechanisms) which can compose with it.
  • One of these engines is mode-lambda which has the hilarious tagline of "the best 2d graphics of the 90s, today!" If you're making 2d games, this library is very fast. What's interesting about this library is that it's more or less designed off of the ideas from the Super Nintendo graphics engine (including support for Mode-7 style graphics, the graphic hack that powered games like the Super Nintendo version of Mario Kart). Jay talks about this in his "Get Bonus!" talk from a few years ago. (As a side note, for a while I was confused about Get Bonus's relationship to the rest of the Jay McCarthy stack; at 2018's RacketCon I queried Jay about this and he explained to me that that's more or less the experimental testing grounds for the rest of his libraries, and when the ideas congeal they get factored out. That makes a lot of sense!)
  • Another rendering engine (and useful library in general) is Jay's raart library. This is a functional picture library like pict, but instead of building up raster/vector graphic images, you're building up ascii art. (It turns out that ascii art is a topic of interest for me so I really like raart. Unsurprisingly, this is what's powering Terminal Phase's graphics.)

As I said before, the main problem with these is knowing how to get started. While the reference documentation for each library is quite good, none of them really have a tutorial that shows off the core ideas. Fortunately each project does ship with examples in its git repository; I recommend looking at each one. What I did was simply split my editor and type in each example line by line so I could think about what it was doing. But yeah, we really could use a real tutorial for this stuff.

Okay, so 2d graphical and terminal programs are both covered. What about 3d? Well, there's really two options so far:

  • There's a very promising library called Pict3d that does functional 3d combinators. The library is so very cool and I highly, highly recommend you watch the Pict3d RacketCon talk which is hands down one of the coolest videos I've ever seen. If you use DrRacket, you can compose together shapes and not only will they display at the REPL, you can rotate around and view them from different angles interactively. Unfortunately it hasn't seen much love the last few years and it also mostly only provides tools out of the box for very basic geometric primitives. Most people developing 3d games would want to import their meshes and also their skeletal animations and etc etc and Pict3d doesn't really provide any of that yet. I don't think it even has support for textures. It could maybe have those things, but probably needs development help. At the moment it's really optimized for building something with very abstract geometry; a 3d space shooter that resembles the original Star Fox game could be a good fit. (Interestingly my understanding is that Pict3d doesn't really store things as meshes; I think it may store shapes in their abstract math'y form and raytrace on the GPU? I could be wrong about this.)
  • There's also an OpenGL library for Racket but it's very, very low level. (There's also this other OpenGL library but I haven't really looked at it.) However raw OpenGL is so extremely low level that you'll need to build a lot of abstractions on top of it to make it usable for anything.

So that covers the game loop, input, display.

That leaves us with audio and game behavior. I'll admit that I haven't researched audio options sufficiently; rsound seems like a good fit though. (I really need a proper Guix package of this for it to work on my machine for FFI-finding-the-library reasons...)

As for game behavior, that kind of depends on what your game is. When I wrote Racktris I really only had a couple of pieces of state to manage (the grid, where the falling piece was, etc) and it was fairly easy to do it using very basic functional programming constructs (really you're just creating a new version of these objects every time that a tick or input happens). As soon as I tried moving to a game with many independently operating objects doing their own thing, that approach fell apart.

The next thing I tried using was Racket's classes and object system. That was... okay, but it also felt like I was losing out on a lot. Previously I had the pleasant experience that "Woo, yeah! It's all functional!" A functional game engine is nice because returning to a prior snapshot in time is always possible (you can trivially make a game engine where you can rewind time, for instance; great for debugging) and it's much easier to poke and prod at a structure because you aren't changing it, you just are getting it back. Now I lost that feature.

In that conversation I had with Jay at RacketCon 2018, he suggested that I look at his DOS library for game behavior. I won't as quickly recommend this one; it's good for some game designs but the library (particularly the DOS/Win part) is fairly opinionated against objects ("processes") communicating with each other on the same tick; the idea is that each process can only read what the others wrote on the previous tick. The goal here as I understand it is to eliminate an entire class of bugs and race conditions, but I quickly found that trying to work around the restrictions led me to create terrible hacks that were themselves very buggy.

This became really clear to me when I tried to implement a very simple wild west "quick draw" game a-la the Kirby quick draw and Samurai Kirby type games. (All this happened months back, when I was doing the anniversary animation I made earlier this year.) These are very simple games where two players wait for "Draw!" to appear on the screen before they press the button to "fire their gun". Fire first, you win. Fire second, you lose. Fire too early, you lose. Both fire at the same time, you draw. This is a very simple game, but trying to build it on top of DOS/Win (or my DOS/Hurd variant) was extremely hard to do while splitting the judge and players into separate objects. I ended up writing very contorted code that ultimately did communicate on the same tick, but via a layered approach that ended up taking me an hour to track down all the bugs in. I can't imagine scaling it up further.

But DOS had some good ideas, and I got to thinking about how to extend the system to allow for immediate calls, what would it look like? That's when I hit a series of epiphanies which resulted in a big rewrite of the Spritely Goblins codebase (which turned out to make it more useful for programming game behavior in a way that even fits very nicely into the Lux game loop). But I suppose I should really explain the what and why of Spritely Goblins, and how it fits into the larger goals of Spritely.

Terminal Phase and Spritely Goblins and stuff

Spritely Goblins is part of the larger Spritely project. Given Spritely's ambitious goal of "leveling up" the fediverse by extending it into the realm of rich and secure virtual worlds, we have to support distributed programming in a way that assumes a mutually suspicious network. (To get in the right mindset for this, maybe both watch my keynote and Mark Miller's keynote from the ActivityPub conference.) We really want to bake that in at the foundation of our design to build this right.

Thus Spritely Goblins is an asynchronous actor-ish distributed programming system on top of Racket. Kind of like Erlang, but with a focus on object capability security. Most of the good ideas have been taken from the E programming language ("the most interesting programming language you've never heard of"). The only programming environments I would consider viable to build Spritely on top of are ones that have been heavily informed by E, the best other candidate being the stuff Agoric is building on top of Javascript, such as their SwingSet architecture and Jessie (big surprise, since the folks behind E are mostly the folks behind Agoric), or some other more obscure language environments like Monte or, yes, Goblins. (Though, currently despite hanging out in Racket-land, which drinks deeply into the dream of everyone building their own languages, Goblins is just a library. If you want to run code you don't trust though, you'll have to wait until I release Spritely Dungeon, which will be a secure module / language restriction system for Racket. All in due time.)

Spritely Goblins already has some interesting properties:

  • All objects/actors are actually just procedures, waiting to be invoked! All "state" is merely the lexical scope of the enclosed procedure. Upon being invoked, a procedure can both return a value to its invoker (or in asynchronous programming, that fulfills the promise it is listening to) as well as specify what the next version of itself should be (ie, what procedure should be called the next time it handles a message).
  • Objects can only invoke other objects they have a reference to. This, surprisingly, is a sufficient security model as the foundation for everything we need (well, plus sealers/unsealers but I won't get into those here). This is the core observation from Jonathan Rees's A Security Kernel Based on the Lambda Calculus; object capability security is really just everyday programming via argument passing, which pretty much all programmers know how to do. (This applies to modules too, but more on that in a future post.)
  • In most cases, objects live in a "vat". This strange term from the object capability literature really means an event loop. Objects/actors can send messages to other objects in other vats; for the most part it doesn't matter where (on what machine, in what OS process, etc) other objects are when it comes to asynchronous message passing.
  • When asynchronous message passing, information is eventually resolved via promises. (Initially I had promises hidden behind coroutines in the core of the system, but it turns out that opens you to re-entrancy attacks if you aren't very careful. That may come back eventually, but with great care.)
  • While any object can communicate with any other object on any vat via message passing, objects on the same vat can do something that objects on separate vats can't: they can perform immediate calls (ie, something that looks like normal straight-ahead programming code, no coroutines required: you invoke the other object like a procedure, and it returns with a value). It turns out this is needed if you want to implement many interesting transactional things like financial instruments built on top of pure object capabilities. This is also nice for a game like Terminal Phase, where we really aren't doing anything asynchronous, are running on a fixed frame rate, and want to be deterministic. But a user should remember (for important reasons I won't get into in this post) that immediate calls are strictly less universal than asynchronous message passing, since those can only be done between objects in the same vat. It's pleasant that Goblins can support both methods of development, including in an intermixed environment.
  • There is actually a lower level of abstraction than a vat, it turns out! This is something that is different than both E and Agoric's SwingSet I think and maybe even mildly novel; all the core operations (receiving a message, spawning an actor, etc) to operate on the actormap datastructure are exposed to the user. Furthermore, all of these operations are transactional! When using the lower-level actormap, the user receives a new actormap (a "transactormap") which is a delta to the parent actormap (either another transactormap or the root protected weak-hashtable actormap, a "whactormap").
  • This transactionality is really exciting. It means that if something bad happens, we can always roll back to a safe state (or rather, never commit the unsafe state at all). In the default vat, if a message is received and an uncaught exception occurs, the promise is broken, but all the effects caused by interactions from handling the message are as if they never occurred. (Well that is, as long as we use the "become this procedure" mechanism in Goblins to manage state! If you mutate a variable, you're on your own. A Racket #lang could prevent your users from doing such naughty things if you so care.)
  • It also means that snapshotting an actormap is really easy. Elm used to advertise having a "time traveling debugger" where they showed off Mario running around, and you could reverse time to a previous state. Apparently this was removed but maybe is coming back. Anyway it's trivial to do such a thing with Goblins' actormap, and I built such a (unpublished due to being unpolished) demo.
  • Most users won't work with the actormap though, they'll work with the builtin vat that takes care of all this stuff for them. You can build your own vat, or vat-like tools, though.

Anyway, all the above works and exists. Actors can even speak to each other across vats... though, what's missing so far is the ability to talk to other objects/vats on other machines. That's basically what's next on my agenda, and I know how to do it... it's just a matter of getting the stuff done.

Well, the other thing that's missing is documentation. That's competing for next thing on the agenda.

But why a synchronous game right now?

If the really exciting stuff is the distributed secure programming stuff, why did I stop to do a synchronous non-distributed game on top of Spritely Goblins? Before I plowed ahead, given that the non-distributed aspects still rest on the distributed aspects, I wanted to make sure that the fundamentals of Spritely Goblins were good.

A space shooter is simple enough to implement and using ascii art in a terminal meant I didn't need to spend too much time thinking about graphics (plus it's an interesting area that's under-explored... most terminal-based games are roguelikes or other turn-based games, not real time). Implementing it allowed me to find many areas that could be improved usability-wise in Goblins (indeed, it's been a very active month of development for Goblins). You really know what things are and aren't nice designs by using them.

It's also a great way to identify performance bottlenecks. I calculated that roughly 1 million actor invocations could happen per second on my cheapo laptop... not bad. But that was when the actors didn't update themselves; when it came to the transactional updates, I could only seem to achieve about 65k updates per second. I figured this must be the transactionality, but it turns out it wasn't; the transactionality feature is very cheap. Can you believe that I got a jump from 65k updates per second to 680k updates per second just by switching from a Racket contract to a manual predicate check? (I expected a mild performance hit for using a contract over a manual predicate, but 10x...?) (I also added a feature so you can "recklessly" commit directly to the actormap without transactions... I don't recommend this for all applications, but if you do that you can get up to about 790k updates per second... which means that transactionality adds only about a 17% overhead, which isn't even close to the 10x gap I was seeing.) Anyway, the thing that lead me to looking into that in the first place was doing an experiment where I decided I wanted to see how many objects I could have updating at once. I might not have caught it otherwise. So making a game demo is useful for that kind of thing.

I feel now that I've gotten most of the churn out of that layer of the design out of the way so that I can move forward with the design on the distributed side of things next. That allows me to have tighter focus of things in layers, and I'm happy about that.

What's next?

So with that out of the way, the next task is to work on both the mutually suspicious distributed programming environment and the documentation. I'm not sure in which order, but I guess we'll find out.

I'll do something similar with the distributed programming environment as well... I plan to write something basic which resembles a networked game at this stage to help me ensure that the components work nicely together.

In the meanwhile, Terminal Phase is very close to being a nice game to play, but I'm deciding to leave that as a funding milestone on my Patreon. This is because, as far as my technical roadmap has gone, Terminal Phase has performed the role it needs to play. But it would be fun to have, and I'm sure other people would like to play it as a finished game (heck, I would like to play it as a finished game), but I'd like to know... do people actually care enough about free software games? About this direction of work? Am I on the right track? Not to mention that funding this work is also simply damn hard.

But, at the time of writing, we're fairly close (about 85% of the way there), so maybe it will happen. If it sounds fun to you, maybe pitch in.

But one way or another, I'll have interesting things to announce ahead. Stay tuned here, or follow me on the fediverse or on Twitter if you so prefer.

Onwards and upwards!

07 November, 2019 03:15PM by Christopher Lemmer Webber

remotecontrol @ Savannah

Fitbit is doomed: Here's why everything Google buys turns to garbage | ZDNet

07 November, 2019 01:07PM by Stephen H. Dawson DSL

November 06, 2019

health @ Savannah

GNU Health 3.6RC3 available at community server & demo database

Dear community

The Release Candidate 3 (RC3) for the upcoming GNU Health 3.6 has been installed in the community server.

You can download the latest GTK client either using pip (from the PyPI test repository) or as a source tarball, as explained in the developer's corner chapter.
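
For the pip route, something along these lines should work (a hedged sketch: the exact package name and index options should be checked against the developer's corner chapter):

    $ pip3 install --user --index-url https://test.pypi.org/simple/ \
        --extra-index-url https://pypi.org/simple/ gnuhealth-client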

Login Info:
    Server information : federation.gnuhealth.org:9555
    Database: ghdemo36rc3
    Username: admin
    Password: gnusolidario

Alternatively, the demo database can be downloaded and installed locally via the demo db installer

    $ bash ./install_demo_database 36rc3

Please download and test the following files:

  • gnuhealth-3.6RC3.tar.gz: Server with the 45 packages
  • gnuhealth-client-3.6RC3.tar.gz  : The GH HMIS GTK client
  • gnuhealth-client-plugins-3.6RC1.tar.gz : The Federation Resource Locator, the GNU Health Camera, and the crypto plugin. Note that there have been no changes to the plugins

Remember that all the components of the 3.6 series run in Python 3

You can download the RC tarballs from the development dir:

https://www.gnuhealth.org/downloads/development/unstable/

There is a new section on the Wikibook for the GH Hackers.

https://en.wikibooks.org/wiki/GNU_Health/Developer's_corner

Please check it before you install it. It contains important information on dependencies and other installation instructions.

Happy and healthy hacking !

Luis

06 November, 2019 11:54PM by Luis Falcon