Planet GNU

Aggregation of development blogs from the GNU Project

April 24, 2018

FSF Blogs

Friday Free Software Directory IRC meetup time: April 27th starting at 12:00 p.m. EDT/16:00 UTC

Help improve the Free Software Directory by adding new entries and updating existing ones. Every Friday we meet on IRC in the #fsf channel on irc.freenode.org.

Tens of thousands of people visit directory.fsf.org each month to discover free software. Each entry in the Directory contains a wealth of useful information, from basic categories and descriptions to detailed info about version control, IRC channels, documentation, and licensing, all of which has been carefully checked by FSF staff and trained volunteers.

When a user comes to the Directory, they know that everything in it is free software, has only free dependencies, and runs on a free OS. With over 16,000 entries, it is a massive repository of information about free software.

While the Directory has been and continues to be a great resource to the world for many years now, it has the potential to be a resource of even greater value. But it needs your help! And since it's a MediaWiki instance, it's easy for anyone to edit and contribute to the Directory.

It was during this week, back in 1993, that the first browser to render text and pictures on the Web was released. To commemorate this event, the Directory meetup will look at browsers, plugins, and other Web renderers. One area where the Directory is lagging is browser plugins. The central listing is on the unapproved program pages. It would be great to lower this number a bit this week, and to get some other relevant programs updated.

If you are eager to help, and you can't wait or are simply unable to make it onto IRC on Friday, our participation guide will provide you with all the information you need to get started on helping the Directory today! There are also weekly Directory Meeting pages that everyone is welcome to contribute to before, during, and after each meeting.

24 April, 2018 06:29PM

GUIX Project news

Guix on Android!

Last year I thought to myself: since my phone is just a computer running an operating system called Android (or Replicant!), and Android is based on the Linux kernel, it's just another foreign distribution I could install GNU Guix on, right? It turned out that was absolutely the case. Today I was reminded on IRC of my attempt last year at installing GNU Guix on my phone; hence this blog post. I'll try to give you all the knowledge and commands required to install it on your own Android device.

Requirements

First of all, you will need an Android or Replicant device. Just like any installation of GNU Guix, you will need root access on that device. Unfortunately, in the Android world this is not very often the case by default. Then, you need a cable to connect your computer to your phone. Once the hardware is in place, you will need adb (the Android Debugging Bridge):

guix package -i adb

Exploring the device

Every Android device has its own partitioning layout, but basically it works like this:

  1. A boot partition for booting the device
  2. A recovery partition for booting the device in recovery mode
  3. A data partition for user data, including applications, the user home, etc.
  4. A system partition with the base system and applications. This is where phone companies put their own apps, so you can't remove them
  5. A vendor partition for drivers
  6. Some other partitions

During the boot process, the bootloader looks for the boot partition. It doesn't contain a filesystem, only a gzipped cpio archive (the initramfs) and the kernel. The bootloader loads them into memory and the kernel starts using the initramfs. Then, the init system from this initramfs mounts the other partitions at their respective locations: the system partition at /system, the vendor partition at /vendor, and the data partition at /data. Other partitions may be mounted as well.

And that's it. Android's root filesystem is actually the initramfs so any modification to its content will be lost after a reboot. Thankfully(?), Android devices are typically not rebooted often.
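If you are curious, you can check how this looks on your own device from the host; a purely illustrative command, since mount points and output vary between devices:

adb shell mount | grep -E 'rootfs|/system|/vendor|/data'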

Another issue is the Android C library (libc), called Bionic: it has less functionality than, and works completely differently from, the GNU libc (glibc). Since Guix is built against glibc, we will need to do something to make it work on our device.

Installing the necessary files

We will follow the binary installation guide. My hardware is aarch64, so I download the corresponding binary release.

Now it's time to start using adb. Connect your device and obtain root privileges for adb. You may have to authorize root access to the computer from your phone:

adb root

Now, we will transfer some necessary files:

adb push guix-binary-* /data

# Glibc needs these two files for networking.
adb push /etc/protocols /system/etc/
adb push /etc/services /system/etc/

# ... and this one to perform DNS queries.  You probably need
# to change nameservers if you use mobile data.
adb push /etc/resolv.conf /system/etc/

Note that some devices may not have /system/etc available. In that case, /etc may be available. If neither is available, create the directory by using adb shell to get a shell on your device, then push the files to that new directory, as sketched below.
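For instance, a minimal sketch of that fallback, assuming /system can be remounted read-write on your device (mount options and paths may differ):

# on the device, remount /system read-write and create the directory
adb shell 'mount -o remount,rw /system && mkdir -p /system/etc'
# then push the files again from the host
adb push /etc/protocols /system/etc/
adb push /etc/services /system/etc/
adb push /etc/resolv.conf /system/etc/
# optionally make /system read-only again
adb shell mount -o remount,ro /system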

Installing Guix itself

Now all the necessary files are present on the device, so we can connect to a shell on the device:

adb shell

From that shell, we will install Guix. The root filesystem is mounted read-only, since it normally doesn't make sense to modify it. Remember: it's a RAM filesystem. Remount it read-write and create the necessary directories:

mount -o remount,rw /
mkdir /gnu /var
mount -o remount,ro /

Now, we can't just copy the content of the binary archive to these folders, because the initramfs has a limited amount of space; and since Guix complains when /gnu or /gnu/store is a symlink, simply symlinking them to another partition won't work either. One solution consists in installing the content of the binary tarball on an existing partition that has enough free space (you can't easily modify the partition layout), typically the data partition, and then mounting that partition on both /var and /gnu.

Before that, you will need to find out what the data partition is on your system. Simply run mount | grep /data to see which partition is mounted there.
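On many devices you can also look partitions up by name; the by-name directory below is device-specific and only an illustration:

# what is currently mounted at /data?
mount | grep /data
# many devices also expose stable partition names (path varies per device):
ls -l /dev/block/bootdevice/by-name/ | grep -i data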

We mount the partition, extract the tarball and move the contents to their final location:

mount /dev/block/bootdevice/by-name/userdata /gnu
mount /dev/block/bootdevice/by-name/userdata /var
cd /data
tar xf guix-binary-...
mv gnu/store .
mv var/guix .
rmdir gnu
rmdir var

Finally, we need to create users and groups for Guix to work properly. Since Bionic doesn't use /etc/passwd or /etc/group to store the users, we need to create them from scratch. Note the addition of the root user and group, as well as the nobody user.

# create guix users and root for glibc
cat > /etc/passwd << EOF
root:x:0:0:root:/data:/sbin/sh
nobody:x:99:99:nobody:/:/usr/bin/nologin
guixbuilder01:x:994:994:Guix build user 01:/var/empty:/usr/bin/nologin
guixbuilder02:x:993:994:Guix build user 02:/var/empty:/usr/bin/nologin
guixbuilder03:x:992:994:Guix build user 03:/var/empty:/usr/bin/nologin
guixbuilder04:x:991:994:Guix build user 04:/var/empty:/usr/bin/nologin
guixbuilder05:x:990:994:Guix build user 05:/var/empty:/usr/bin/nologin
guixbuilder06:x:989:994:Guix build user 06:/var/empty:/usr/bin/nologin
guixbuilder07:x:988:994:Guix build user 07:/var/empty:/usr/bin/nologin
guixbuilder08:x:987:994:Guix build user 08:/var/empty:/usr/bin/nologin
guixbuilder09:x:986:994:Guix build user 09:/var/empty:/usr/bin/nologin
guixbuilder10:x:985:994:Guix build user 10:/var/empty:/usr/bin/nologin
EOF

cat > /etc/group << EOF
root:x:0:root
guixbuild:x:994:guixbuilder01,guixbuilder02,guixbuilder03,guixbuilder04,guixbuilder05,guixbuilder06,guixbuilder07,guixbuilder08,guixbuilder09,guixbuilder10
EOF

Running Guix

First, we install the root profile somewhere:

export HOME=/data
ln -sf /var/guix/profiles/per-user/root/guix-profile \
         $HOME/.guix-profile

Now we can finally run the Guix daemon. Chrooting is impossible on my device so I had to disable it:

export PATH="$HOME/.guix-profile/bin:$HOME/.guix-profile/sbin:$PATH"
guix-daemon --build-users-group=guixbuild --disable-chroot &

Finally, it's a good idea to authorize substitutes from hydra.gnu.org:

mkdir /etc/guix
guix archive --authorize < \
  $HOME/.guix-profile/share/guix/hydra.gnu.org.pub

Enjoy!

guix pull

Mobile phone running 'guix pull'.

Future work

So, now we can enjoy the Guix package manager on Android! One of the drawbacks is that after a reboot we will have to redo half of the steps: recreate /var and /gnu and mount the partitions onto them. Every time you launch a shell, you will have to export the PATH to be able to run guix, and you will have to start guix-daemon manually. To solve all of these problems at once, you would have to modify the boot image. That's tricky, and I have already put some effort into it, but the phone always ends up in a boot loop after I flash a modified boot image. The nice folks at #replicant suggested that I solder a cable to access a serial console where debug messages may show up. Let's see how many fingers I burn before I can boot a custom boot image!
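Until the boot image can be modified, a small helper script can redo the volatile steps after each reboot. This is only a sketch of the commands already shown above, assuming the same userdata partition path; adapt it to your device:

#!/system/bin/sh
# Re-create the mount points lost at reboot (the root filesystem is an initramfs).
mount -o remount,rw /
mkdir -p /gnu /var
mount -o remount,ro /
# Mount the data partition where Guix expects its store and state.
mount /dev/block/bootdevice/by-name/userdata /gnu
mount /dev/block/bootdevice/by-name/userdata /var
# Set up the environment and start the daemon.
export HOME=/data
export PATH="$HOME/.guix-profile/bin:$HOME/.guix-profile/sbin:$PATH"
guix-daemon --build-users-group=guixbuild --disable-chroot &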

About GNU Guix

GNU Guix is a transactional package manager for the GNU system. The Guix System Distribution or GuixSD is an advanced distribution of the GNU system that relies on GNU Guix and respects the user's freedom.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. Guix uses low-level mechanisms from the Nix package manager, except that packages are defined as native Guile modules, using extensions to the Scheme language. GuixSD offers a declarative approach to operating system configuration management, and is highly customizable and hackable.

GuixSD can be used on i686, x86_64, and armv7 machines. It is also possible to use Guix on top of an already installed GNU/Linux system, including on mips64el and aarch64.

24 April, 2018 08:00AM by Julien Lepiller

April 22, 2018

parallel @ Savannah

GNU Parallel 20180422 ('Tiangong-1') released

GNU Parallel 20180422 ('Tiangong-1') has been released. It is available for download at: http://ftpmirror.gnu.org/parallel/

Quote of the month:

Today I discovered GNU Parallel, and I don’t know what to do with all this spare time.
--Ryan Booker

New in this release:

  • --csv makes GNU Parallel parse the input sources as CSV. When used with --pipe it only passes full CSV records (see the sketch after this list).
  • Time in --bar is printed as 1d02h03m04s.
  • Optimization of --tee: it spawns one fewer process per value.
  • Bug fixes and man page updates.
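As an illustration of the new --csv option, a minimal sketch; the file name is hypothetical, and --csv may require the Text::CSV Perl module:

# people.csv contains lines such as: "Alice",42
parallel --csv echo {1} is {2} years old :::: people.csv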

GNU Parallel - For people who live life in the parallel lane.

About GNU Parallel

GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.
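Two minimal, illustrative invocations of the two modes described above (file names are placeholders):

# one job per input argument, several jobs run in parallel:
parallel gzip ::: *.log
# split standard input into chunks and pipe each chunk to a command:
cat bigfile.txt | parallel --pipe wc -w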

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with: (wget -O - pi.dk/3 || curl pi.dk/3/) | bash

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your commandline will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2018): GNU Parallel 2018, April 2018, https://doi.org/10.5281/zenodo.1146014.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/ Identi.ca/Google+/Twitter/Facebook/Linkedin/mailing lists
  • Get the merchandise https://gnuparallel.threadless.com/designs/gnu-parallel
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference

If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)

If GNU Parallel saves you money:

About GNU SQL

GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.
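A sketch of what a DBURL invocation looks like; the credentials and database name are placeholders:

# run a single query:
sql mysql://user:password@hostname/dbname "SELECT * FROM users LIMIT 10;"
# with no query given, drop into the database's interactive shell:
sql mysql://user:password@hostname/dbname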

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.

About GNU Niceload

GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.
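Two illustrative invocations; the backup command and pids are placeholders:

# start a backup, but suspend it whenever the load average is above niceload's limit:
niceload tar czf /backup/home.tar.gz /home
# slow down processes that are already running, by pid:
niceload -p 2345,6789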

22 April, 2018 09:19PM by Ole Tange

April 20, 2018

FSF Events

Richard Stallman will be in Buenos Aires, Argentina

This talk by Richard Stallman will not be technical and will be open to the public; everyone is invited to attend.

The title of the talk is to be determined.

Location: Centro Cultural de la Cooperación, Av. Corrientes 1543, Buenos Aires, Argentina

Please fill out this form so that we can contact you about future events in the Buenos Aires region.

20 April, 2018 08:05PM

Richard Stallman will be in Río Cuarto, Argentina

This talk by Richard Stallman will not be technical and will be open to the public; everyone is invited to attend.

The title, exact location, and time of the talk are to be determined.

Location: Viejo Mercado, Río Cuarto, Argentina

Please fill out this form so that we can contact you about future events in the Río Cuarto region.

20 April, 2018 07:55PM

Richard Stallman will be in Mendoza, Argentina

This talk by Richard Stallman will not be technical and will be open to the public; everyone is invited to attend.

The title, exact location, and time of the talk are to be determined.

Location: (to be determined)

Please fill out this form so that we can contact you about future events in the Mendoza region.

20 April, 2018 07:34PM

Richard Stallman will be in Tucumán, Argentina

This talk by Richard Stallman will not be technical and will be open to the public; everyone is invited to attend.

The title, exact location, and time of the talk are to be determined.

Location: (to be determined)

Please fill out this form so that we can contact you about future events in the Tucumán region.

20 April, 2018 04:25PM

Richard Stallman will be in Misiones Posadas, Argentina

This talk by Richard Stallman will not be technical and will be open to the public; everyone is invited to attend.

The title and exact location of the talk are to be determined.

Location: (to be determined)

Please fill out this form so that we can contact you about future events in the Misiones Posadas region.

20 April, 2018 04:16PM

Parabola GNU/Linux-libre

[From Arch] glibc 2.27-2 and pam 1.3.0-2 may require manual intervention

The new version of glibc removes support for NIS and NIS+. The default /etc/nsswitch.conf file provided by the filesystem package already reflects this change. Please make sure to merge the pacnew file, if it exists, prior to the upgrade.

NIS functionality can still be enabled by installing the libnss_nis package. There is no replacement for NIS+ in the official repositories.

pam 1.3.0-2 no longer ships the pam_unix2 module or the pam_unix_*.so compatibility symlinks. Before upgrading, review the PAM configuration files in the /etc/pam.d directory and replace removed modules with pam_unix.so. Users of pam_unix2 should also reset their passwords after such a change. The defaults provided by the pambase package do not need any modifications.
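A hedged sketch of that check; the grep pattern is only illustrative, and any matches should be reviewed and edited by hand:

# is there a pacnew of nsswitch.conf waiting to be merged?
ls /etc/nsswitch.conf.pacnew
# which PAM configuration files still reference the removed modules?
grep -rlE 'pam_unix2|pam_unix_' /etc/pam.d/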

20 April, 2018 03:18PM by David P.

April 19, 2018

FSF Blogs

Friday Free Software Directory IRC meetup time: April 20th starting at 12:00 p.m. EDT/16:00 UTC

Help improve the Free Software Directory by adding new entries and updating existing ones. Every Friday we meet on IRC in the #fsf channel on irc.freenode.org.

Tens of thousands of people visit directory.fsf.org each month to discover free software. Each entry in the Directory contains a wealth of useful information, from basic categories and descriptions to detailed info about version control, IRC channels, documentation, and licensing, all of which has been carefully checked by FSF staff and trained volunteers.

When a user comes to the Directory, they know that everything in it is free software, has only free dependencies, and runs on a free OS. With over 16,000 entries, it is a massive repository of information about free software.

While the Directory has been and continues to be a great resource to the world for many years now, it has the potential to be a resource of even greater value. But it needs your help! And since it's a MediaWiki instance, it's easy for anyone to edit and contribute to the Directory.

On April 20, 2008, 26-year-old Danica Patrick entered the Indy Japan 300 at Twin Ring Motegi in Motegi, Japan -- and became the first woman to win an IndyCar race. This week's Directory meetup will honor this milestone by focusing on programs that involve races and racing.

If you are eager to help, and you can't wait or are simply unable to make it onto IRC on Friday, our participation guide will provide you with all the information you need to get started on helping the Directory today! There are also weekly Directory Meeting pages that everyone is welcome to contribute to before, during, and after each meeting.

19 April, 2018 04:42PM

April 13, 2018

FSF Blogs

Private Internet Access: VPNs, education, and software freedom

Meet Private Internet Access

Private Internet Access (PIA) was a generous supporter of LibrePlanet 2018 and the Free Software Foundation as a patron. As one of the largest VPN services available, they have customers all around the world. Their VPN works with free software VPN clients like OpenVPN. They recently announced their intention to release some of the software they produce under a free license.

I had a conversation with Christel Dahlskjaer, their director of sponsorships and events, about their views on free software, digital rights, and user freedom. She emphasized that PIA uses their resources for the defense of free software, digital rights, and civil liberties, which are among their greatest concerns.

Through partnerships, PIA educates and helps people learn about and work towards better safety online and the mitigation of privacy risks. They bring this support to "activists, dissidents, journalists, whistleblowers, and digital nomads," in addition to everyday users of their VPN services, Dahlskjaer said in identifying their user base.

PIA is built on free software -- literally and ideologically. "We are acutely aware that free software is at the foundation of the vast majority of the technology that we rely on, and indeed founded our business on," said Dahlskjaer. "We believe that software freedom goes hand in hand with civil liberties and digital rights, and it is natural for us to support the development and use of free software."

"The digital landscape is changing and, as individuals, we are facing real risk from the monetization of data, surveillance, privatization, and risk of potential bad actors. Now is the time to act when it comes to taking (back) control of our data and our digital rights," she went on.

"Draconian leaders continue to be misinformed or have an agenda separate of the people, leading to poor legislation across the board, regardless of one's jurisdiction. It continues to become clear that the greatest challenge is the legal landscape for new technology. That being said, with the continued advancements we have seen with applied cryptography, the time when technology simply transcends is drawing nigh. We will win. I assure you, we will win."

The Free Software Foundation is excited to have the support of Private Internet Access in LibrePlanet, our other work, and the greater free software community.

You can read more about Private Internet Access on their Web site, and also read job descriptions online.

What is a VPN?

A VPN is a Virtual Private Network. It "extends a private network across a public network, and enables users to send and receive data across shared or public networks as if their computing devices were directly connected to the private network" (Wikipedia). This gives users of a VPN enhanced privacy and security. While frequently used by companies and organizations to provide access to servers or an intranet to those not in the office, a VPN is also invaluable for activists, journalists, whistleblowers, and any end user looking to increase the trust they have in their networks. VPNs can be used to access any Web sites or Web services.

13 April, 2018 08:04PM

April 12, 2018

Friday Free Software Directory IRC meetup time: April 13th starting at 12:00 p.m. EDT/16:00 UTC

Help improve the Free Software Directory by adding new entries and updating existing ones. Every Friday we meet on IRC in the #fsf channel on irc.freenode.org.

Tens of thousands of people visit directory.fsf.org each month to discover free software. Each entry in the Directory contains a wealth of useful information, from basic categories and descriptions to detailed info about version control, IRC channels, documentation, and licensing, all of which has been carefully checked by FSF staff and trained volunteers.

When a user comes to the Directory, they know that everything in it is free software, has only free dependencies, and runs on a free OS. With over 16,000 entries, it is a massive repository of information about free software.

While the Directory has been and continues to be a great resource to the world for many years now, it has the potential to be a resource of even greater value. But it needs your help! And since it's a MediaWiki instance, it's easy for anyone to edit and contribute to the Directory.

This week we are improving everyone's luck by resurrecting dead entries. While we always want to keep growing the Directory by adding new entries, every now and then we need to do some spring cleaning to wipe away the detritus. Millions of users visit the Directory each year, but we want to make sure they're not stumbling into a graveyard of out-of-date information. So this Friday the 13th, we'll be hunting dead links and reviving older entries!

If you are eager to help, and you can't wait or are simply unable to make it onto IRC on Friday, our participation guide will provide you with all the information you need to get started on helping the Directory today! There are also weekly Directory Meeting pages that everyone is welcome to contribute to before, during, and after each meeting.

12 April, 2018 03:43PM

April 09, 2018

RMS in The Guardian: “A radical proposal to keep your personal data safe”

Here at the Free Software Foundation (FSF), we're never surprised when another violation of privacy by Facebook or other bad actors is exposed: it has long since been obvious that Facebook is a gold mine for government surveillance and advertisers. However, we also recognize that social media has become a crucial part of everyday life, which is why we urge you to ditch Facebook and instead utilize freedom-respecting, distributed, user-controlled services like GNU social, Mastodon, or Diaspora.

In the meantime, something needs to be done to halt the overall abuse of data, and in today's issue of The Guardian, FSF president and founder Richard Stallman (RMS) offers a bold proposal: that systems need to be legally required to not collect data in the first place. “The basic principle is that a system must be designed not to collect certain data," he writes, "if its basic function can be carried out without that data.” He demands that this change go far beyond Facebook:

“Broader, meaning extending to all surveillance systems, not just Facebook. Deeper, meaning to advance from regulating the use of data to regulating the accumulation of data. Because surveillance is so pervasive, restoring privacy is necessarily a big change, and requires powerful measures.”

As an example of a system that could be adjusted to work this way, RMS notes that London trains and buses don't actually need to centrally record where people travel in order to work, although they now do so, which is a fundamental invasion of privacy. He also offers the example of GNU Taler, a convenient digital payment system that keeps payers anonymous: this is a system that already exists and respects users' privacy.

Read the rest of this article at The Guardian.

09 April, 2018 02:23PM

April 08, 2018

mcron @ Savannah

GNU Mcron 1.1.1 released

We are pleased to announce the release of GNU Mcron 1.1.1,
representing 48 commits, by 1 person over 3 weeks.

About

GNU Mcron is a complete replacement for Vixie cron. It is used to run
tasks on a schedule, such as every hour or every Monday. Mcron is
written in Guile, so its configuration can be written in Scheme; the
original cron format is also supported.

https://www.gnu.org/software/mcron/

Download

Here are the compressed sources and a GPG detached signature[*]:
https://ftp.gnu.org/gnu/mcron/mcron-1.1.1.tar.gz
https://ftp.gnu.org/gnu/mcron/mcron-1.1.1.tar.gz.sig

Use a mirror for higher download bandwidth:
https://ftpmirror.gnu.org/mcron/mcron-1.1.1.tar.gz
https://ftpmirror.gnu.org/mcron/mcron-1.1.1.tar.gz.sig

[*] Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact. First, be sure to download both the .sig file
and the corresponding tarball. Then, run a command like this:

gpg --verify mcron-1.1.1.tar.gz.sig

If that command fails because you don't have the required public key,
then run this command to import it:

gpg --keyserver keys.gnupg.net --recv-keys 0ADEE10094604D37

and rerun the 'gpg --verify' command.

This release was bootstrapped with the following tools:
Autoconf 2.69
Automake 1.16.1
Makeinfo 6.5
Help2man 1.47.5

NEWS

  • Noteworthy changes in release 1.1.1 (2018-04-08) [stable]
    • Bug fixes

The "--disable-multi-user" configure variable is not reversed anymore.
'cron' and 'crontab' are now installed unless this option is used.

The programs now set the GUILE_LOAD_PATH and GUILE_LOAD_COMPILED_PATH
environment variables to the location of the installed Guile modules.

'next-year-from', 'next-year', 'next-month-from', 'next-month',
'next-day-from', 'next-day', 'next-hour-from', 'next-hour',
'next-minute-from', 'next-minute', 'next-second-from', and 'next-second' no
longer crash when passed an optional argument.
[bug introduced in mcron-1.1]

    • Improvements

Some basic tests for the installed programs can be run after 'make install'
with 'make installcheck'.

The configuration files are now processed using a deterministic order.

The test suite code coverage for mcron modules is now at 66.8% in terms of
lines (mcron-1.1 was at 23.7%).

08 April, 2018 03:46PM by Mathieu Lirzin

April 05, 2018

GUIX Project news

Guix & reproducible builds at LibrePlanet 2018

LibrePlanet, the yearly free software conference organized by the Free Software Foundation, took place a week ago. Among the many great talks and workshops, David Thompson, a core Guix developer who also works in DevOps, presented many aspects of Guix and GuixSD in his talk, Practical, verifiable software freedom with GuixSD (video, slides).

In a similar domain, Chris Lamb, current Debian Project Leader and a driving force behind the Reproducible Builds effort, gave a talk entitled You think you're not a target? A tale of three developers... (video, slides).

About GNU Guix

GNU Guix is a transactional package manager for the GNU system. The Guix System Distribution or GuixSD is an advanced distribution of the GNU system that relies on GNU Guix and respects the user's freedom.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. Guix uses low-level mechanisms from the Nix package manager, except that packages are defined as native Guile modules, using extensions to the Scheme language. GuixSD offers a declarative approach to operating system configuration management, and is highly customizable and hackable.

GuixSD can be used on i686, x86_64, and armv7 machines. It is also possible to use Guix on top of an already installed GNU/Linux system, including on mips64el and aarch64.

05 April, 2018 12:00PM by Ludovic Courtès

Trisquel GNU/Linux

Trisquel 8.0 LTS Flidas

Trisquel 8.0, codename "Flidas" is finally here! This release will be supported with security updates until April 2021. The first thing to acknowledge is that this arrival has been severely delayed, to the point where the next upstream release (Ubuntu 18.04 LTS) will soon be published. The good news is that the development of Trisquel 9.0 will start right away, and it should come out closer to the usual release schedule of "6 months after upstream release".

But this is not to say that we shouldn't be excited about Trisquel 8.0, quite the contrary! It comes with many improvements over Trisquel 7.0, and its core components (kernel, graphics drivers, web browser and e-mail client) are fully up to date and will receive continuous upgrades during Flidas' lifetime.

Trisquel 8.0 has benefited from extensive testing, as many people have been using the development versions as their main operating system for some time. On top of that, the Free Software Foundation has been using it to run the LibrePlanet conference since last year, and it has been powering all of its new server infrastructure as well!

What's new?

The biggest internal change to the default edition is the switch from GNOME to MATE 1.12. The main reason for this change was that GNOME dropped support for their legacy desktop, which retained the GNOME 2.x user experience and didn't require 3D composition -- a feature that on many computers would still need non-free software to run at full speed. MATE provides a perfect drop-in replacement: it is very light and stable, and it retains all the user experience design that we are used to from previous Trisquel releases.

The next most important component is Abrowser 59 (based on Mozilla Firefox), which is not only fully featured and considerably faster than before, but has also been audited and tweaked to maximize the user's privacy without compromising usability. Abrowser will not start any network connections on its own (most popular web browsers connect for extension updates, telemetry, geolocation, and other data collection as soon as you open them, even if you haven't typed an address yet!) and it offers a list of easy-to-set, privacy-enhancing settings that users can opt in to depending on their needs. As a companion to it, and based on Mozilla Thunderbird, the IceDove mail client is also fully updated and set up for privacy.

Trisquel 8.0 also comes with the following preinstalled packages:

  • Linux-libre 4.4 by default, 4.13 available (and newer versions will be published as an optional rolling release)
  • Xorg 7.7 with optional rolling-release updates
  • LibreOffice 5.1.4
  • VLC 2.2.2

Trisquel-mini (the light edition based on LXDE) uses the Midori web browser, Sylpheed email client, Abiword text editor, and GNOME-Mplayer media player as its main preinstalled components. We also have the Trisquel TOAST edition, based on the Sugar learning environment v112 and complete with a selection of educational activities for K-12 and beyond. And of course, over 25,000 more free software packages that you can run, study, improve, and share are available from our repositories and mirrors.

Support our effort

Trisquel is a non-profit project; you can contribute by becoming a member, donating, or buying from our store.

Screenshots: the MATE desktop, boot menu, installer, privacy settings in Abrowser, the Sugar environment, and Trisquel-mini with the Midori web browser.

05 April, 2018 03:10AM by quidam

April 04, 2018

Jose E. Marchesi

Rhhw Friday 1 June 2018 - Sunday 3 June 2018 @ Stockholm

The Rabbit Herd will be meeting the weekend from 1 June to 3 June 2018.

04 April, 2018 12:00AM

April 01, 2018

sed @ Savannah

sed-4.5 released [stable]

01 April, 2018 02:18AM by Jim Meyering

March 26, 2018

foliot @ Savannah

GNU Foliot version 0.9.7

GNU Foliot version 0.9.7 is released!

This is a maintenance release, which brings GNU Foliot up to date with Guile 2.2, which introduced an incompatible, GOOPS-related module change.

For a list of changes since the previous version, visit the NEWS file. For a complete description, consult the git summary and git log.

26 March, 2018 02:52AM by David Pirotte

March 24, 2018

FSF News

Public Lab and Karen Sandler are 2017 Free Software Awards winners

CAMBRIDGE, Massachusetts, USA – Saturday, March 24, 2018 – The Free Software Foundation (FSF) today announced the winners of the 2017 Free Software Awards at a ceremony held during the LibrePlanet 2018 conference at the Massachusetts Institute of Technology (MIT). FSF president Richard M. Stallman presented the Award for Projects of Social Benefit and the Award for the Advancement of Free Software.

The Award for Projects of Social Benefit is presented to a project or team responsible for applying free software, or the ideas of the free software movement, to intentionally and significantly benefit society. This award stresses the use of free software in service to humanity.

This year, Public Lab received the award, which was accepted by Liz Barry, Public Lab co-founder, organizer, and director of community development, and Jeff Warren, Public Lab co-founder and research director, on behalf of the entire Public Lab community.

Public Lab is a community and non-profit organization with the goal of democratizing science to address environmental issues. Their community-created tools and techniques utilize free software and low-cost devices to enable people at any level of technical skill to investigate environmental concerns.

Stallman noted how crucial Public Lab's work is to the global community, and also how their use of free software is crucial to their mission, saying that "the environmental and social problems caused by global heating are so large that they cannot rationally be denied. When studies concerning the causes and the effects of global heating, or the environmental impact of pollution, industry, and policy choices, are conducted using proprietary software, that is a gratuitous obstacle to replicating them.

"Public Lab gets the tools to study and protect the world into the hands of everyone -- and since they are free (libre) software, they respect both the people who use them, and the community that depends on the results."

Jeff Warren, speaking on behalf of the Public Lab community, added that using free software is part of their larger mission to take science out of the hands of the experts and allow everyday people to participate: "At Public Lab, we believe that generating knowledge is a powerful thing. We aim to open research from the exclusive hands of scientific experts. By doing so, communities facing environmental justice issues are able to own the science and advocate for the changes they want to see.

"Building free software, hardware, and open data is fundamental to our work in the Public Lab community, as we see it as a key part of our commitment to equity in addressing environmental injustice."

Public Lab folks with award

The Award for the Advancement of Free Software goes to an individual who has made a great contribution to the progress and development of free software, through activities that accord with the spirit of free software.

This year, it was presented to Karen Sandler, the Executive Director of the Software Freedom Conservancy, as well as a perennial LibrePlanet speaker and friend to the FSF. She is known for her advocacy for free software, particularly in relation to the software on medical devices: she led an initiative advocating for free software on implantable medical devices after exploring the issues surrounding the software on her own implanted medical device (a defibrillator), which regulates an inherited heart condition. Sandler has served as the Executive Director of the GNOME Foundation, where she now serves on the Board of Directors, and before that, she was General Counsel of the Software Freedom Law Center. Finally, she co-organizes Outreachy, the award-winning outreach program that organizes paid internships in free software for people who are typically underrepresented in these projects.

Stallman praised Sandler's dedication to free software, emphasizing how sharing her personal experience has provided a window into the importance of free software for a broader audience: "Her vivid warning about backdoored nonfree software in implanted medical devices has brought the issue home to people who never wrote a line of code.

"Her efforts, usually not in the public eye, to provide pro bono legal advice to free software organizations and to organize infrastructure for free software projects and copyleft defense, have been equally helpful."

Sandler explained that her dedication to promoting free software was inevitable, given her personal experience: "Coming to terms with a dangerous heart condition should never have cost me fundamental control over the technology that my life relies on," she said. "The twists and turns of my own life, including my professional work at Conservancy, led me to understand how software freedom is essential to society. This issue is personal not just for me but for anyone who relies on software, and today that means every single person."

Karen Sandler with award

About the Free Software Foundation

The Free Software Foundation, founded in 1985, is dedicated to promoting computer users' right to use, study, copy, modify, and redistribute computer programs. The FSF promotes the development and use of free (as in freedom) software -- particularly the GNU operating system and its GNU/Linux variants -- and free documentation for free software. The FSF also helps to spread awareness of the ethical and political issues of freedom in the use of software, and its Web sites, located at https://fsf.org and https://gnu.org, are an important source of information about GNU/Linux. Donations to support the FSF's work can be made at https://my.fsf.org/donate. Its headquarters are in Boston, MA, USA.

More information about the FSF, as well as important information for journalists and publishers, is at https://www.fsf.org/press.

Media Contacts

John Sullivan

Executive Director

Free Software Foundation

+1 (617) 542 5942

campaigns@fsf.org

24 March, 2018 10:50PM

March 23, 2018

health @ Savannah

Red Cross (Cruz Roja) implements GNU Health!

We are very happy and proud to announce that the Red Cross, Cruz Roja Mexicana, has implemented GNU Health in Mexico.

The implementation covers over 850 patients per day, and the following functionality has been deployed:

  • Social Medicine and Primary Care
  • Health records and history
  • Hospital Management
  • Laboratory management
  • Imaging Diagnostics
  • Emergency and Ambulance management
  • Pharmacy
  • Human Resources
  • Financial Management

It has been implemented in three locations: Veracruz - Boca del Río, Ver., Mexico. Congratulations to Cruz Roja and Soluciones de Mercado for the fantastic job!


23 March, 2018 02:48PM by Luis Falcon

March 22, 2018

parallel @ Savannah

GNU Parallel 20180322 ('Hawking') released

GNU Parallel 20180322 ('Hawking') has been released. It is available for download at: http://ftpmirror.gnu.org/parallel/

Quote of the month:

If you aren’t nesting
gnu parallel calls in gnu parallel calls
I don’t know how you have fun.
-- Ernest W. Durbin III EWDurbin@twitter

New in this release:

  • niceload -p can now take multiple pids separated by comma
  • --timeout gives a warning when killing processes
  • --embed now uses the same code for all supported shells (see the sketch after this list)
  • --delay can now take arguments like 1h12m07s
  • Parallel. Straight from your command line https://medium.com/@alonisser/parallel-straight-from-your-command-line-feb6db8b6cee
  • Bug fixes and man page updates.
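A minimal sketch of --embed and of the new --delay duration syntax; file and script names are placeholders:

# copy GNU Parallel into a self-contained script, then append your own parallel calls at its end:
parallel --embed > my_wrapper.sh
# wait between job starts, using the new duration format:
parallel --delay 1m30s wget {} :::: urls.txt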

GNU Parallel - For people who live life in the parallel lane.

About GNU Parallel

GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with: (wget -O - pi.dk/3 || curl pi.dk/3/) | bash

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your commandline will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2011): GNU Parallel - The Command-Line Power Tool, ;login: The USENIX Magazine, February 2011:42-47.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/ Identi.ca/Google+/Twitter/Facebook/Linkedin/mailing lists
  • Get the merchandise https://gnuparallel.threadless.com/designs/gnu-parallel
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference

If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)

If GNU Parallel saves you money:

About GNU SQL

GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.

About GNU Niceload

GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.

22 March, 2018 08:06AM by Ole Tange

March 20, 2018

FSF News

LibrePlanet free software conference celebrates 10th anniversary, this weekend at MIT, March 24-25

CAMBRIDGE, Massachusetts, USA -- Tuesday, March 20, 2018 -- This weekend, the Free Software Foundation (FSF) and the Student Information Processing Board (SIPB) at the Massachusetts Institute of Technology (MIT) present the tenth annual LibrePlanet free software conference in Cambridge, March 24-25, 2018, at MIT. LibrePlanet is an annual conference for people who care about their digital freedoms, bringing together software developers, policy experts, activists, and computer users to learn skills, share accomplishments, and tackle challenges facing the free software movement. LibrePlanet 2018 will feature sessions for all ages and experience levels.

LibrePlanet's tenth anniversary theme is "Freedom Embedded." Embedded systems are everywhere, in cars, digital watches, traffic lights, and even within our bodies. We've come to expect that proprietary software's sinister aspects are embedded in software, digital devices, and our lives, too: we expect that our phones monitor our activity and share that data with big companies, that governments enforce digital restrictions management (DRM), and that even our activity on social Web sites is out of our control. This year's talks and workshops will explore how to defend user freedom in a society reliant on embedded systems.

Keynote speakers include Benjamin Mako Hill, social scientist, technologist, free software activist, and FSF board member, examining online collaboration and free software; Electronic Frontier Foundation senior staff technologist Seth David Schoen, discussing engineering tradeoffs and free software; Deb Nicholson, community outreach director for the Open Invention Network, talking about the key to longevity for the free software movement; and Free Software Foundation founder and president Richard Stallman, looking at current threats to and opportunities for free software, with a focus on embedded systems.

This year's LibrePlanet conference will feature over 50 sessions, such as "The battle to free the code at the Department of Defense," "Freedom, devices, and health," and "Standardizing network freedom," as well as workshops on free software and photogrammetry, digital music making, and desktops for kids.

"For ten years, LibrePlanet has brought together free software enthusiasts and newcomers from around the world to exchange ideas, collaborate, and take on challenges to software freedom," said Georgia Young, program manager of the FSF. "But the conference is not purely academic -- it works to build the free software community, offering opportunities for those who cannot attend to participate remotely by watching a multi-channel livestream and joining the conversation online. And this year, we're proud to offer several kid-friendly workshops, encouraging earlier engagement with fun, ethical free software!"

Advance registration is closed, but attendees may register in person at the event. Admission is gratis for FSF Associate Members and students. For all other attendees, the cost of admission is $60 for one day, $90 for both days, and includes admission to the conference's social events. For those who cannot attend, this year's sessions will be streamed at https://libreplanet.org/2018/live/, and recordings will be available after the event at https://media.libreplanet.org/.

Anthropologist and author Gabriella Coleman was scheduled to give the opening keynote at LibrePlanet 2018, but was forced to cancel.

About LibrePlanet

LibrePlanet is the annual conference of the Free Software Foundation, and is co-produced by MIT's Student Information Processing Board. What was once a small gathering of FSF members has grown into a larger event for anyone with an interest in the values of software freedom. LibrePlanet is always gratis for associate members of the FSF and students. Sign up for announcements about the LibrePlanet conference here.

LibrePlanet 2017 was held at MIT from March 25-26, 2017. About 400 attendees from all over the world came together for conversations, demonstrations, and keynotes centered around the theme of "The Roots of Freedom." You can watch videos from past conferences at https://media.libreplanet.org, including keynotes by Kade Crockford of the ACLU of Massachusetts and Cory Doctorow, author and special consultant to the Electronic Frontier Foundation.

About the Free Software Foundation

The FSF, founded in 1985, is dedicated to promoting computer users' right to use, study, copy, modify, and redistribute computer programs. The FSF promotes the development and use of free (as in freedom) software -- particularly the GNU operating system and its GNU/Linux variants -- and free documentation for free software. The FSF also helps to spread awareness of the ethical and political issues of freedom in the use of software, and its Web sites, located at https://fsf.org and https://gnu.org, are an important source of information about GNU/Linux. Donations to support the FSF's work can be made at https://donate.fsf.org. Its headquarters are in Boston, MA, USA.

More information about the FSF, as well as important information for journalists and publishers, is at https://www.fsf.org/press.

Media Contact

Georgia Young
Program Manager
Free Software Foundation
+1 (617) 542 5942
campaigns@fsf.org

20 March, 2018 02:39PM

March 19, 2018

mcron @ Savannah

GNU Mcron 1.1 released

We are pleased to announce the release of GNU Mcron 1.1,
representing 124 commits, by 3 people over 4 years.

Download

Here are the compressed sources and a GPG detached signature[*]:
https://ftp.gnu.org/gnu/mcron/mcron-1.1.tar.gz
https://ftp.gnu.org/gnu/mcron/mcron-1.1.tar.gz.sig

Use a mirror for higher download bandwidth:
https://ftpmirror.gnu.org/mcron/mcron-1.1.tar.gz
https://ftpmirror.gnu.org/mcron/mcron-1.1.tar.gz.sig

[*] Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact. First, be sure to download both the .sig file
and the corresponding tarball. Then, run a command like this:

gpg --verify mcron-1.1.tar.gz.sig

If that command fails because you don't have the required public key,
then run this command to import it:

gpg --keyserver keys.gnupg.net --recv-keys 0ADEE10094604D37

and rerun the 'gpg --verify' command.

This release was bootstrapped with the following tools:
Autoconf 2.69
Automake 1.16.1

NEWS

Noteworthy changes in release 1.1 (2018-03-19) [stable]
    • New features

The 'job' procedure now has a '#:user' keyword argument, which allows
specifying a different user that will run the job.

Additional man pages for 'cron(8)' and 'crontab(1)' are now generated using
GNU Help2man.

    • Bug fixes

Child processes created when executing a job are now properly cleaned up,
even when execution fails, by using the 'dynamic-wind' construct.

    • Improvements

GNU Guile 2.2 is now supported.

Some procedures are now written using functional style and include a
docstring. 'def-macro' usages are now replaced with hygienic macros.

Compilation is now done using a non-recursive Makefile, supports out-of-tree
builds, and uses silent rules by default.

Guile object file creation no longer relies on auto-compilation; the files
are installed in the 'site-ccache' directory.

Jobs are now internally represented using SRFI-9 records instead of vectors.

The ChangeLog is generated from Git logs when generating the tarball, using
the Gnulib gitlog-to-changelog script.

A test suite is now available and can be run with 'make check'.

    • Changes in behavior

The "--enable-debug" configure variable has been removed and replaced with
MCRON_DEBUG environment variable.

The "--disable-multi-user" configure variable is now used to not build and
install the 'cron' and 'crontab' programs. It has replaced the
"--enable-no-vixie-clobber" which had similar effect.

The (mcron core) module is now deprecated and has been superseded by
(mcron base).

Please report bugs to bug-mcron@gnu.org.

19 March, 2018 12:37AM by Mathieu Lirzin

March 16, 2018

Riccardo Mottola

Graphos printing fix

Important Graphos fix.

Graphos had issues when printing while the view was not at 100%: to speed up drawRect, all objects were represented pre-scaled, so that they did not have to be scaled each time, which is expensive, especially for Bézier paths with all their associated handles.

The issue is finally fixed by caching both original and zoomed values for each object and conditionally drawing them depending on the drawing context.

Here is the proof, with GSPdf showing the generated PDF!



Soon a new release then!

16 March, 2018 06:34PM by Riccardo (noreply@blogger.com)

March 12, 2018

Jose E. Marchesi

Rhhw Friday 16 March 2018 - Sunday 18 March 2018 @ Frankfurt am Main

The Rabbit Herd will be meeting the weekend from 16 March to 18 March.

12 March, 2018 12:00AM

February 28, 2018

FSF News

Free Software Foundation releases FY2016 Annual Report

BOSTON, Massachusetts, USA -- Wednesday, February 28, 2018 -- The Free Software Foundation (FSF) today published its Fiscal Year (FY) 2016 Annual Report.

The report is available in low-resolution (11.5 MB PDF) and high-resolution (207.2 MB PDF).

The Annual Report reviews the Foundation's activities, accomplishments, and financial picture from October 1, 2015 to September 30, 2016. It is the result of a full external financial audit, along with a focused study of program results. It examines the impact of the FSF's programs, and FY2016's major events, including LibrePlanet, the creation of ethical criteria for code-hosting repositories, and the expansion of the Respects Your Freedom computer hardware product certification program.

"More people and businesses are using free software than ever before," said FSF executive director John Sullivan in his introduction to the FY2016 report. "That's big news, but our most important measure of success is the support for the ideals. In that area, we have momentum on our side."

As with all of the Foundation's activities, the Annual Report was made using free software, including Inkscape, GIMP, and PDFsam, along with freely licensed fonts and images.

About the Free Software Foundation

The Free Software Foundation, founded in 1985, is dedicated to promoting computer users' right to use, study, copy, modify, and redistribute computer programs. The FSF promotes the development and use of free (as in freedom) software -- particularly the GNU operating system and its GNU/Linux variants -- and free documentation for free software. The FSF also helps to spread awareness of the ethical and political issues of freedom in the use of software, and its Web sites, located at https://fsf.org and https://gnu.org, are an important source of information about GNU/Linux. Donations to support the FSF's work can be made at https://my.fsf.org/donate. Its headquarters are in Boston, MA, USA.

More information about the FSF, as well as important information for journalists and publishers, is at https://www.fsf.org/press.

Media Contacts

Georgia Young
Program Manager
Free Software Foundation
+1 (617) 542 5942 x 17
campaigns@fsf.org

28 February, 2018 07:20PM

February 24, 2018

Parabola GNU/Linux-libre

[From Arch] zita-resampler 1.6.0-1 -> 2 update requires manual intervention

The zita-resampler 1.6.0-1 package was missing a library symlink that has been readded in 1.6.0-2. If you installed 1.6.0-1, ldconfig would have created this symlink at install time, and it will conflict with the one included in 1.6.0-2. In that case, remove /usr/lib/libzita-resampler.so.1 manually before updating.

24 February, 2018 04:46AM by Omar Vega Ramos

February 22, 2018

parallel @ Savannah

GNU Parallel 20180222 ('Henrik') released

GNU Parallel 20180222 ('Henrik') has been released. It is available for download at: http://ftpmirror.gnu.org/parallel/

Haiku of the month:

Alias and vars
export them more easily
with env_parallel
-- Ole Tange

New in this release:

  • --embed makes it possible to embed GNU parallel in a shell script. This is useful if you need to distribute your script to someone who does not want to install GNU parallel.
  • Bug fixes and man page updates.

GNU Parallel - For people who live life in the parallel lane.

About GNU Parallel

GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with: (wget -O - pi.dk/3 || curl pi.dk/3/) | bash

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your commandline will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2011): GNU Parallel - The Command-Line Power Tool, ;login: The USENIX Magazine, February 2011:42-47.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/ Identi.ca/Google+/Twitter/Facebook/Linkedin/mailing lists
  • Get the merchandise https://www.gnu.org/s/parallel/merchandise.html
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference

If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)

If GNU Parallel saves you money:

About GNU SQL

GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.

About GNU Niceload

GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.

22 February, 2018 09:56PM by Ole Tange

February 19, 2018

GUIX Project news

Join GNU Guix through Outreachy or GSoC

We are happy to announce that for the first time this year, GNU Guix offers a three-month internship through Outreachy, the inclusion program for groups traditionally underrepresented in free software and tech. We currently propose two subjects to work on:

  1. improving the user experience for the guix package command-line tool;
  2. enhancing Guile tools for the Guix package manager.

Eligible persons should apply by March 22nd.

Guix also participates in the Google Summer of Code (GSoC), under the aegis of the GNU Project. We have collected project ideas for Guix, GuixSD, and the GNU Shepherd, covering a range of topics. The list is far from exhaustive, so feel free to bring your own!

If you are an eligible student, make sure to apply by March 27th.

If you’d like to contribute to computing freedom, Scheme, functional programming, or operating system development, now is a good time to join us. Let’s get in touch on the mailing lists and on the #guix channel on the Freenode IRC network!

About GNU Guix

GNU Guix is a transactional package manager for the GNU system. The Guix System Distribution or GuixSD is an advanced distribution of the GNU system that relies on GNU Guix and respects the user's freedom.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. Guix uses low-level mechanisms from the Nix package manager, except that packages are defined as native Guile modules, using extensions to the Scheme language. GuixSD offers a declarative approach to operating system configuration management, and is highly customizable and hackable.
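
To give a concrete feel for what "packages defined as native Guile modules" means, here is a minimal, hypothetical package definition; the module name, URI, hash, and metadata below are all placeholders rather than a real package:

  (define-module (gnu packages example)
    #:use-module (guix packages)
    #:use-module (guix download)
    #:use-module (guix build-system gnu)
    #:use-module ((guix licenses) #:prefix license:))

  (define-public example
    (package
      (name "example")
      (version "1.0")
      (source (origin
                (method url-fetch)
                (uri (string-append "mirror://gnu/example/example-"
                                    version ".tar.gz"))
                ;; Placeholder hash; a real package records the actual sha256.
                (sha256
                 (base32 "0000000000000000000000000000000000000000000000000000"))))
      (build-system gnu-build-system)
      (synopsis "Placeholder package")
      (description "A sketch used only to illustrate the package syntax.")
      (home-page "https://www.gnu.org/")
      (license license:gpl3+)))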

GuixSD can be used on i686, x86_64, and armv7 machines. It is also possible to use Guix on top of an already installed GNU/Linux system, including on mips64el and aarch64.

19 February, 2018 04:00PM by Ludovic Courtès

February 17, 2018

libffcall @ Savannah

GNU libffcall 2.1 is released

libffcall version 2.1 is released.

New in this release:

  • Added support for Linux/arm with PIE-enabled gcc, Solaris 11.3 on x86_64, OpenBSD 6.1, HardenedBSD.
  • Fixed a bug regarding passing of pointers on Linux/x86_64 with x32 ABI.
  • Fixed a crash in trampoline on Linux/mips64el.

17 February, 2018 12:58PM by Bruno Haible

February 07, 2018

Andy Wingo

design notes on inline caches in guile

Ahoy, programming-language tinkerfolk! Today's rambling missive chews the gnarly bones of "inline caches", in general but also with particular respect to the Guile implementation of Scheme. First, a little intro.

inline what?

Inline caches are a language implementation technique used to accelerate polymorphic dispatch. Let's dive in to that.

By implementation technique, I mean that the technique applies to the language compiler and runtime, rather than to the semantics of the language itself. The effects on the language do exist though in an indirect way, in the sense that inline caches can make some operations faster and therefore more common. Eventually inline caches can affect what users expect out of a language and what kinds of programs they write.

But I'm getting ahead of myself. Polymorphic dispatch literally means "choosing based on multiple forms". Let's say your language has immutable strings -- like Java, Python, or Javascript. Let's say your language also has operator overloading, and that it uses + to concatenate strings. Well at that point you have a problem -- while you can specify a terse semantics of some core set of operations on strings (win!), you can't choose one representation of strings that will work well for all cases (lose!). If the user has a workload where they regularly build up strings by concatenating them, you will want to store strings as trees of substrings. On the other hand if they want to access codepoints by index, then you want an array. But if the codepoints are all below 256, maybe you should represent them as bytes to save space; otherwise as 4-byte codepoints, or maybe even as UTF-8 with a codepoint index side table.

The right representation (form) of a string depends on the myriad ways that the string might be used. The string-append operation is polymorphic, in the sense that the precise code for the operator depends on the representation of the operands -- despite the fact that the meaning of string-append is monomorphic!

Anyway, that's the problem. Before inline caches came along, there were two solutions: callouts and open-coding. Both were bad in similar ways. A callout is where the compiler generates a call to a generic runtime routine. The runtime routine will be able to handle all the myriad forms and combination of forms of the operands. This works fine but can be a bit slow, as all callouts for a given operator (e.g. string-append) dispatch to a single routine for the whole program, so they don't get to optimize for any particular call site.

One tempting thing for compiler writers to do is to effectively inline the string-append operation into each of its call sites. This is "open-coding" (in the terminology of the early Lisp implementations like MACLISP). The advantage here is that maybe the compiler knows something about one or more of the operands, so it can eliminate some cases, effectively performing some compile-time specialization. But this is a limited technique; one could argue that the whole point of polymorphism is to allow for generic operations on generic data, so you rarely have compile-time invariants that can allow you to specialize. Open-coding of polymorphic operations instead leads to code bloat, as the string-append operation is just so many copies of the same thing.

Inline caches emerged to solve this problem. They trace their lineage back to Smalltalk 80, gained in complexity and power with Self and finally reached mass consciousness through Javascript. These languages all share the characteristic of being dynamically typed and object-oriented. When a user evaluates a statement like x = y.z, the language implementation needs to figure out where y.z is actually located. This location depends on the representation of y, which is rarely known at compile-time.

However for any given reference y.z in the source code, there is a finite set of concrete representations of y that will actually flow to that call site at run-time. Inline caches allow the language implementation to specialize the y.z access for its particular call site. For example, at some point in the evaluation of a program, y may be seen to have representation R1 or R2. For R1, the z property may be stored at offset 3 within the object's storage, and for R2 it might be at offset 4. The inline cache is a bit of specialized code that compares the type of the object being accessed against R1, in that case returning the value at offset 3, otherwise against R2, returning the value at offset 4, and otherwise falling back to a generic routine. If this isn't clear to you, Vyacheslav Egorov wrote a fine article describing and implementing the object representation optimizations enabled by inline caches.
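
As a rough model of that dispatch, here is a toy inline cache written as user-level Scheme; a real IC lives in generated machine code, and this sketch keeps only a one-entry cache (the text's version chains R1, then R2, then the generic routine), with the shape table and offsets made up for illustration:

  ;; Two object "shapes" whose z field lives at different offsets,
  ;; as in the R1/R2 example above (offsets invented for the sketch).
  (define shapes '((R1 . (x y z))     ; z at offset 3
                   (R2 . (w x y z)))) ; z at offset 4

  ;; A property-access site with a one-entry cache: remember the last
  ;; representation seen and the offset of the property within it.
  (define (make-property-ic)
    (let ((cached-rep #f) (cached-offset #f))
      (lambda (obj property)
        (let ((rep (vector-ref obj 0)))            ; slot 0 holds the shape tag
          (if (eq? rep cached-rep)
              (vector-ref obj cached-offset)       ; fast path: cache hit
              ;; Miss: do a generic lookup (assumes rep is in the table),
              ;; then memoize the result for the next call.
              (let loop ((fields (cdr (assq rep shapes))) (i 1))
                (cond ((null? fields) (error "no such property" property))
                      ((eq? (car fields) property)
                       (set! cached-rep rep)
                       (set! cached-offset i)
                       (vector-ref obj i))
                      (else (loop (cdr fields) (+ i 1))))))))))

  (define get-z (make-property-ic))
  (get-z (vector 'R1 1 2 42) 'z)     ; => 42, slow path; cache now holds R1
  (get-z (vector 'R1 7 8 99) 'z)     ; => 99, fast path
  (get-z (vector 'R2 0 1 2 3) 'z)    ; => 3, miss: re-specializes to R2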

Inline caches also serve as input data to later stages of an adaptive compiler, allowing the compiler to selectively inline (open-code) only those cases that are appropriate to values actually seen at any given call site.

but how?

The classic formulation of inline caches from Self and early V8 actually patched the code being executed. An inline cache might be allocated at address 0xcabba9e5 and the code emitted for its call-site would be jmp 0xcabba9e5. If the inline cache ended up bottoming out to the generic routine, a new inline cache would be generated that added an implementation appropriate to the newly seen "form" of the operands and the call-site. Let's say that new IC (inline cache) would have the address 0x900db334. Early versions of V8 would actually patch the machine code at the call-site to be jmp 0x900db334 instead of jmp 0xcabba9e5.

Patching machine code has a number of disadvantages, though. It is inherently target-specific: you will need different strategies to patch x86-64 and armv7 machine code. It's also expensive: you have to flush the instruction cache after the patch, which slows you down. That is, of course, if you are allowed to patch executable code; on many systems that's impossible. Writable machine code is also a potential security liability, as it widens the attack surface for remote code execution.

Perhaps worst of all, though, patching machine code is not thread-safe. In the case of early Javascript, this perhaps wasn't so important; but as JS implementations gained parallel garbage collectors and JS-level parallelism via "service workers", this becomes less acceptable.

For all of these reasons, the modern take on inline caches is to implement them as a memory location that can be atomically modified. The call site is just jmp *loc, as if it were a virtual method call. Modern CPUs have "branch target buffers" that predict the target of these indirect branches with very high accuracy so that the indirect jump does not become a pipeline stall. (What does this mean in the face of the Spectre v2 vulnerabilities? Sadly, God only knows at this point. Saddest panda.)

cry, the beloved country

I am interested in ICs in the context of the Guile implementation of Scheme, but first I will make a digression. Scheme is a very monomorphic language. Yet, this monomorphism is entirely cultural. It is in no way essential. Lack of ICs in implementations has actually fed back and encouraged this monomorphism.

Let us take as an example the case of property access. If you have a pair in Scheme and you want its first field, you do (car x). But if you have a vector, you do (vector-ref x 0).

What's the reason for this nonuniformity? You could have a generic ref procedure, which when invoked as (ref x 0) would return the field in x associated with 0. Or (ref x 'foo) to return the foo property of x. It would be more orthogonal in some ways, and it's completely valid Scheme.
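
For concreteness, such a ref could be written today as an ordinary procedure that dispatches on the concrete type of its argument; this is only a sketch, and the set of clauses is arbitrary:

  ;; A generic "ref" as plain Guile Scheme: valid, but every call pays
  ;; for the run-time type dispatch that (car x) or (vector-ref x 0)
  ;; would avoid.
  (define (ref x key)
    (cond ((vector? x)     (vector-ref x key))
          ((pair? x)       (list-ref x key))
          ((string? x)     (string-ref x key))
          ((hash-table? x) (hash-ref x key))   ; e.g. (ref x 'foo)
          (else (error "don't know how to ref" x key))))

  (ref (vector 'a 'b 'c) 0)   ; => a
  (ref '(1 2 3) 0)            ; => 1
  (ref "hello" 0)             ; => #\h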

We don't write Scheme programs this way, though. From what I can tell, it's for two reasons: one good, and one bad.

The good reason is that saying vector-ref means more to the reader. You know more about the complexity of the operation and what side effects it might have. When you call ref, who knows? Using concrete primitives allows for better program analysis and understanding.

The bad reason is that Scheme implementations, Guile included, tend to compile (car x) to much better code than (ref x 0). Scheme implementations in practice aren't well-equipped for polymorphic data access. In fact it is standard Scheme practice to abuse the "macro" facility to manually inline code so that certain performance-sensitive operations get inlined into a closed graph of monomorphic operators with no callouts. To the extent that this is true, Scheme programmers, Scheme programs, and the Scheme language as a whole are all victims of their implementations. JavaScript, for example, does not have this problem -- to a small extent, maybe, yes, performance tweaks and tuning are always a thing, but JavaScript implementations' ability to burn away polymorphism and abstraction results in an entirely different character in JS programs versus Scheme programs.
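
The macro trick in question looks something like this contrived sketch, which wraps a hot accessor in a macro so the compiler only ever sees the monomorphic primitive:

  ;; Force "inlining" by expansion: callers of point-x never see a
  ;; procedure call, only the primitive vector-ref.
  (define-syntax-rule (point-x p) (vector-ref p 1))
  (define-syntax-rule (point-y p) (vector-ref p 2))

  (point-x (vector 'point 3 4))   ; expands to (vector-ref ... 1) => 3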

it gets worse

On the most basic level, Scheme is the call-by-value lambda calculus. It's well-studied, well-understood, and eminently flexible. However the way that the syntax maps to the semantics hides a constrictive monomorphism: the assumption that the "callee" of a call refers to a lambda expression.

Concretely, in an expression like (a b), in which a is not a macro, a must evaluate to the result of a lambda expression. Perhaps by reference (e.g. (define a (lambda (x) x))), perhaps directly; but a lambda nonetheless. But what if a is actually a vector? At that point the Scheme language standard would declare that to be an error.

The semantics of Clojure, though, would allow for ((vector 'a 'b 'c) 1) to evaluate to b. Why not in Scheme? There are the same good and bad reasons as with ref. Usually, the concerns of the language implementation dominate, regardless of those of the users who generally want to write terse code. Of course in some cases the implementation concerns should dominate, but not always. Here, Scheme could be more flexible if it wanted to.
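
Standard Scheme signals an error for ((vector 'a 'b 'c) 1), but the Clojure-style behaviour is easy to express explicitly; this is a sketch of the semantics, not a proposal for Guile:

  ;; Wrap a vector in a closure so it can sit in operator position.
  (define (callable v)
    (lambda (i) (vector-ref v i)))

  ((callable (vector 'a 'b 'c)) 1)   ; => b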

what have you done for me lately

Although inline caches are not a miracle cure for performance overheads of polymorphic dispatch, they are a tool in the box. But what, precisely, can they do, both in general and for Scheme?

To my mind, they have five uses. If you can think of more, please let me know in the comments.

Firstly, they have the classic named property access optimizations as in JavaScript. These apply less to Scheme, as we don't have generic property access. Perhaps this is a deficiency of Scheme, but it's not exactly low-hanging fruit. Perhaps this would be more interesting if Guile had more generic protocols such as Racket's iteration.

Next, there are the arithmetic operators: addition, multiplication, and so on. Scheme's arithmetic is indeed polymorphic; the addition operator + can add any number of complex numbers, with a distinction between exact and inexact values. On a representation level, Guile has fixnums (small exact integers, no heap allocation), bignums (arbitrary-precision heap-allocated exact integers), fractions (exact ratios between integers), flonums (heap-allocated double-precision floating point numbers), and compnums (inexact complex numbers, internally a pair of doubles). Also in Guile, arithmetic operators are "primitive generics", meaning that they can be extended to operate on new types at runtime via GOOPS.

The usual situation though is that any particular instance of an addition operator only sees fixnums. In that case, it makes sense to only emit code for fixnums, instead of the product of all possible numeric representations. This is a clear application where inline caches can be interesting to Guile.
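
At the user level, the specialization being described amounts to something like the following sketch; in Guile the check and fast path would be emitted by the compiler/VM, and the real fast path would be a machine add with an overflow check rather than a call to +:

  ;; Stand-in for a fixnum check (Guile exposes most-positive-fixnum
  ;; and most-negative-fixnum as ordinary bindings).
  (define (fixnum-like? n)
    (and (exact-integer? n)
         (<= most-negative-fixnum n most-positive-fixnum)))

  ;; An addition site specialized to the only case it has seen so far,
  ;; with everything else (bignums, ratios, flonums, compnums) handed
  ;; to the generic operator.
  (define (add-site a b slow-path)
    (if (and (fixnum-like? a) (fixnum-like? b))
        (+ a b)                ; fast path
        (slow-path a b)))      ; fallback

  (add-site 1 2 +)       ; => 3 via the fast path
  (add-site 1/2 2 +)     ; => 5/2 via the fallback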

Third, there is a very specific case related to dynamic linking. Did you know that most programs compiled for GNU/Linux and related systems have inline caches in them? It's a bit weird but the "Procedure Linkage Table" (PLT) segment in ELF binaries on Linux systems is set up in a way that when e.g. libfoo.so is loaded, the dynamic linker usually doesn't eagerly resolve all of the external routines that libfoo.so uses. The first time that libfoo.so calls frobulate, it ends up calling a procedure that looks up the location of the frobulate procedure, then patches the binary code in the PLT so that the next time frobulate is called, it dispatches directly. To dynamic language people it's the weirdest thing in the world that the C/C++/everything-static universe has at its cold, cold heart a hash table and a dynamic dispatch system that it doesn't expose to any kind of user for instrumenting or introspection -- any user that's not a malware author, of course.

But I digress! Guile can use ICs to lazily resolve runtime routines used by compiled Scheme code. But perhaps this isn't optimal, as the set of primitive runtime calls that Guile will embed in its output is finite, and so resolving these routines eagerly would probably be sufficient. Guile could use ICs for inter-module references as well, and these should indeed be resolved lazily; but I don't know, perhaps the current strategy of using a call-site cache for inter-module references is sufficient.

Fourthly (are you counting?), there is a general case of the former: when you see a call (a b) and you don't know what a is. If you put an inline cache in the call, instead of having to emit checks that a is a heap object and a procedure and then emit an indirect call to the procedure's code, you might be able to emit simply a check that a is the same as x, the only callee you ever saw at that site, and in that case you can emit a direct branch to the function's code instead of an indirect branch.

Here I think the argument is less strong. Modern CPUs are already very good at indirect jumps and well-predicted branches. The value of a devirtualization pass in compilers is that it makes the side effects of a virtual method call concrete, allowing for more optimizations; avoiding indirect branches is good but not necessary. On the other hand, Guile does have polymorphic callees (generic functions), and call ICs could help there. Ideally though we would need to extend the language to allow generic functions to feed back to their inline cache handlers.

Finally, ICs could allow for cheap tracepoints and breakpoints. If at every breakable location you included a jmp *loc, and the initial value of *loc was the next instruction, then you could patch individual locations with code to run there. The patched code would be responsible for saving and restoring machine state around the instrumentation.

Honestly I struggle a lot with the idea of debugging native code. GDB does the least-overhead, most-generic thing, which is patching code directly; but it runs from a separate process, and in Guile we need in-process portable debugging. The debugging use case is a clear area where you want adaptive optimization, so that you can omit debugging ceremony from the hottest code, knowing that you can fall back on some earlier tier. Perhaps Guile should bite the bullet and go this way too.

implementation plan

In Guile, monomorphic as it is in most things, probably only arithmetic is worth the trouble of inline caches, at least in the short term.

Another question is how much to specialize the inline caches to their call site. On the extreme side, each call site could have a custom calling convention: if the first operand is in register A and the second is in register B and they are expected to be fixnums, and the result goes in register C, and the continuation is the code at L, well then you generate an inline cache that specializes to all of that. No need to shuffle operands or results, no need to save the continuation (return location) on the stack.

The opposite would be to call ICs as if they were normal procedures: shuffle arguments into fixed operand registers, push a stack frame, and when the IC returns, shuffle the result into place.

Honestly I am leaning mostly towards the simple solution. I am concerned about code and heap bloat if I specialize to every last detail of a call site. Also maximum speed comes with an adaptive optimizer, and in that case simple lower tiers are best.

sanity check

To compare these impressions, I took a look at V8's current source code to see where they use ICs in practice. When I worked on V8, the compiler was entirely different -- there were two tiers, and both of them generated native code. Inline caches were everywhere, and they were gnarly; every architecture had its own implementation. Now in V8 there are two tiers, not the same as the old ones, and the lowest one is a bytecode interpreter.

As an adaptive optimizer, V8 doesn't need breakpoint ICs. It can always deoptimize back to the interpreter. In actual practice, to debug at a source location, V8 will patch the bytecode to insert a "DebugBreak" instruction, which has its own support in the interpreter. V8 also supports optimized compilation of this operation. So, no ICs needed here.

Likewise for generic type feedback, V8 records types as data rather than in the classic formulation of inline caches as in Self. I think WebKit's JavaScriptCore uses a similar strategy.

V8 does use inline caches for property access (loads and stores). Besides that there is an inline cache used in calls which is just used to record callee counts, and not used for direct call optimization.

Surprisingly, V8 doesn't even seem to use inline caches for arithmetic (any more?). Fair enough, I guess, given that JavaScript's numbers aren't very polymorphic, and even with a system with fixnums and heap floats like V8, floating-point numbers are rare in cold code.

The dynamic linking and relocation points don't apply to V8 either, as it doesn't receive binary code from the internet; it always starts from source.

twilight of the inline cache

There was a time when inline caches were recommended to solve all your VM problems, but it would seem now that their heyday is past.

ICs are still a win if you have named property access on objects whose shape you don't know at compile-time. But improvements in CPU branch target buffers mean that it's no longer imperative to use ICs to avoid indirect branches (modulo Spectre v2), and creating direct branches via code-patching has gotten more expensive and tricky on today's targets with concurrency and deep cache hierarchies.

Besides that, the type feedback component of inline caches seems to be taken over by explicit data-driven call-site caches, rather than executable inline caches, and the highest-throughput tiers of an adaptive optimizer burn away inline caches anyway. The pressure on an inline cache infrastructure now is towards simplicity and ease of type and call-count profiling, leaving the speed component to those higher tiers.

In Guile the bounded polymorphism on arithmetic combined with the need for ahead-of-time compilation means that ICs are probably a code size and execution time win, but it will take some engineering to prevent the calling convention overhead from dominating cost.

Time to experiment, then -- I'll let y'all know how it goes. Thoughts and feedback welcome from the compilerati. Until then, happy hacking :)

07 February, 2018 03:14PM by Andy Wingo

February 05, 2018

Andy Wingo

notes from the fosdem 2018 networking devroom

Greetings, internet!

I am on my way back from FOSDEM and thought I would share with yall some impressions from talks in the Networking devroom. I didn't get to go to all that many talks -- FOSDEM's hallway track is the hottest of them all -- but I did hit a select few. Thanks to Dave Neary at Red Hat for organizing the room.

Ray Kinsella -- Intel -- The path to data-plane micro-services

The day started with a drum-beating talk that was very light on technical information.

Essentially Ray was arguing for an evolution of network function virtualization -- that instead of running VNFs on bare metal as was done in the days of yore, that people started to run them in virtual machines, and now they run them in containers -- what's next? Ray is saying that "cloud-native VNFs" are the next step.

Cloud-native VNFs would move from "greedy" VNFs that take charge of the cores that are available to them, to some kind of resource sharing. "Maybe users value flexibility over performance", says Ray. It's the Care Bears approach to networking: (resource) sharing is caring.

In practice he proposed two ways that VNFs can map to cores and cards.

One was in-process sharing, which if I understood him properly meant running the network functions as nodes within a VPP process. Basically in this case VPP or DPDK is the scheduler and multiplexes two or more network functions in one process.

The other was letting Linux schedule separate processes. In networking, we don't usually do it this way: we run network functions on dedicated cores on which nothing else runs. Ray was suggesting that perhaps network functions could be more like "normal" Linux services. Ray doesn't know if Linux scheduling will work in practice. Also it might mean allowing DPDK to work with 4K pages instead of the 2M hugepages it currently requires. This obviously has the potential for more latency hazards and would need some tighter engineering, and ultimately would have fewer guarantees than the "greedy" approach.

Interesting side things I noticed:

  • All the diagrams show Kubernetes managing CPU node allocation and interface assignment. I guess in marketing diagrams, Kubernetes has completely replaced OpenStack.

  • One slide showed guest VNFs differentiated between "virtual network functions" and "socket-based applications", the latter ones being the legacy services that use kernel APIs. It's a useful terminology difference.

  • The talk identifies user-space networking with DPDK (only!).

Finally, I note that Conway's law is obviously reflected in the performance overheads: because there are organizational isolations between dev teams, vendors, and users, there are big technical barriers between them too. The least-overhead forms of resource sharing are also those with the highest technical consistency and integration (nodes in a single VPP instance).

Magnus Karlsson -- Intel -- AF_XDP

This was a talk about getting good throughput from the NIC to userspace, but by using some kernel facilities. The idea is to get the kernel to set up the NIC and virtualize the transmit and receive ring buffers, but to let the NIC's DMA'd packets go directly to userspace.

The performance goal is 40Gbps for thousand-byte packets, or 25 Gbps for traffic with only the smallest packets (64 bytes). The fast path does "zero copy" on the packets if the hardware has the capability to steer the subset of traffic associated with the AF_XDP socket to that particular process.

The AF_XDP project builds on XDP, a newish thing where a little kind of bytecode can run on the kernel or possibly on the NIC. One of the bytecode commands (REDIRECT) causes packets to be forwarded to user-space instead of handled by the kernel's otherwise heavyweight networking stack. AF_XDP is the bridge between XDP on the kernel side and an interface to user-space using sockets (as opposed to e.g. AF_INET). The performance goal was to be within 10% or so of DPDK's raw user-space-only performance.

The benefits of AF_XDP over the current situation would be that you have just one device driver, in the kernel, rather than having to have one driver in the kernel (which you have to have anyway) and one in user-space (for speed). Also, with the kernel involved, there is a possibility for better isolation between different processes or containers, when compared with raw PCI access from user-space.

AF_XDP is what was previously known as AF_PACKET v4, and its numbers are looking somewhat OK. Though it's not upstream yet, it might be interesting to get a Snabb driver here.

I would note that kernel-userspace cooperation is a bit of a theme these days. There are other points of potential cooperation or common domain sharing, storage being an obvious one. However I heard more than once this weekend the kind of "I don't know, that area of the kernel has a different culture" sort of concern as that highlighted by Daniel Vetter in his recent LCA talk.

François-Frédéric Ozog -- Linaro -- Userland Network I/O

This talk is hard to summarize. Like the previous one, it's again about getting packets to userspace with some support from the kernel, but the speaker went really deep and I'm not quite sure what in the talk is new and what is known.

François-Frédéric is working on a new set of abstractions for relating the kernel and user-space. He works on OpenDataPlane (ODP), which is kinda like DPDK in some ways. ARM seems to be a big target for his work; that x86-64 is also a target goes without saying.

His problem statement was, how should we enable fast userland network I/O, without duplicating drivers?

François-Frédéric was a bit negative on AF_XDP because (he says) it is so focused on packets that it neglects other kinds of devices with similar needs, such as crypto accelerators. Apparently the challenge here is accelerating a single large IPsec tunnel -- because the cryptographic operations are serialized, you need good single-core performance, and making use of hardware accelerators seems necessary right now for even a single 10Gbps stream. (If you had many tunnels, you could parallelize, but that's not the case here.)

He was also a bit skeptical about standardizing on the "packet array I/O model" which AF_XDP and most NICs use. What he means here is that most current NICs move packets to and from main memory with the help of a "descriptor array" ring buffer that holds pointers to packets. A transmit array stores packets ready to transmit; a receive array stores maximum-sized packet buffers ready to be filled by the NIC. The packet data itself is somewhere else in memory; the descriptor only points to it. When a new packet is received, the NIC fills the corresponding packet buffer and then updates the "descriptor array" to point to the newly available packet. This requires at least two memory writes from the NIC to memory: at least one to write the packet data (one per 64 bytes of packet data), and one to update the DMA descriptor with the packet length and possible other metadata.

Although these writes go directly to cache, there's a limit to the number of DMA operations that can happen per second, and with 100Gbps cards, we can't afford to make one such transaction per packet.

François-Frédéric promoted an alternative I/O model for high-throughput use cases: the "tape I/O model", where packets are just written back-to-back in a uniform array of memory. Every so often a block of memory containing some number of packets is made available to user-space. This has the advantage of packing in more packets per memory block, as there's no wasted space between packets. This increases cache density and decreases DMA transaction count for transferring packet data, as we can use each 64-byte DMA write to its fullest. Additionally there's no side table of descriptors to update, saving a DMA write there.

Apparently the only cards currently capable of 100 Gbps traffic, the Chelsio and Netcope cards, use the "tape I/O model".

Incidentally, the DMA transfer limit isn't the only constraint. Something I hadn't fully appreciated before was memory write bandwidth. Before, I had thought that because the NIC would transfer in packet data directly to cache, that this wouldn't necessarily cause any write traffic to RAM. Apparently that's not the case. Later over drinks (thanks to Red Hat's networking group for organizing), François-Frédéric asserted that the DMA transfers would eventually use up DDR4 bandwidth as well.

A NIC-to-RAM DMA transaction will write one cache line (usually 64 bytes) to the socket's last-level cache. This write will evict whatever was there before. As far as I can tell, there are three cases of interest here. The best case is where the evicted cache line is from a previous DMA transfer to the same address. In that case it's modified in the cache and not yet flushed to main memory, and we can just update the cache instead of flushing to RAM. (Do I misunderstand the way caches work here? Do let me know.)

However if the evicted cache line is from some other address, we might have to flush to RAM if the cache line is dirty. That causes memory write traffic. But if the cache line is clean, that means it was probably loaded as part of a memory read operation, and then that means we're evicting part of the network function's working set, which will later cause memory read traffic as the data gets loaded in again, and write traffic to flush out the DMA'd packet data cache line.

François-Frédéric simplified the whole thing by equating packet bandwidth with memory write bandwidth: yes, the packet goes directly to cache, but it is also written to RAM. I can't convince myself that that's the case for all packets, but I need to look more into this.

Of course the cache pressure and the memory traffic is worse if the packet data is less compact in memory; and worse still if there is any need to copy data. Ultimately, processing small packets at 100Gbps is still a huge challenge for user-space networking, and it's no wonder that there are only a couple devices on the market that can do it reliably, not that I've seen either of them operate first-hand :)

Talking with Snabb's Luke Gorrie later on, he thought that it could be that we can still stretch the packet array I/O model for a while, given that PCIe gen4 is coming soon, which will increase the DMA transaction rate. So that's a possibility to keep in mind.

At the same time, apparently there are some "coherent interconnects" coming too which will allow the NIC's memory to be mapped into the "normal" address space available to the CPU. In this model, instead of having the NIC transfer packets to the CPU, the NIC's memory will be directly addressable from the CPU, as if it were part of RAM. The latency to pull data in from the NIC to cache is expected to be slightly longer than a RAM access; for comparison, RAM access takes about 70 nanoseconds.

For a user-space networking workload, coherent interconnects don't change much. You still need to get the packet data into cache. True, you do avoid the writeback to main memory, as the packet is already in addressable memory before it's in cache. But, if it's possible to keep the packet on the NIC -- like maybe you are able to add some kind of inline classifier on the NIC that could directly shunt a packet towards an on-board IPSec accelerator -- in that case you could avoid a lot of memory transfer. That appears to be the driving factor for coherent interconnects.

At some point in François-Frédéric's talk, my brain just died. I didn't quite understand all the complexities that he was taking into account. Later, after he kindly took the time to dispel some more of my ignorance, I understand more of it, though not yet all :) The concrete "deliverable" of the talk was a model for kernel modules and user-space drivers that uses the paradigms he was promoting. It's a work in progress from Linaro's networking group, with some support from NIC vendors and CPU manufacturers.

Luke Gorrie and Asumu Takikawa -- SnabbCo and Igalia -- How to write your own NIC driver, and why

This talk had the most magnificent beginning: a sort of "repent now ye sinners" sermon from Luke Gorrie, a seasoned veteran of software networking. Luke started by describing the path of righteousness leading to "driver heaven", a world in which all vendors have publicly accessible datasheets which parsimoniously describe what you need to get packets flowing. In this blessed land it's easy to write drivers, and for that reason there are many of them. Developers choose a driver based on their needs, or they write one themselves if their needs are quite specific.

But there is another path, says Luke, that of "driver hell": a world of wickedness and proprietary datasheets, where even when you buy the hardware, you can't program it unless you're buying a hundred thousand units, and even then you are smitten with the cursed non-disclosure agreements. In this inferno, only a vendor is practically empowered to write drivers, but their poor driver developers are only incentivized to get the driver out the door deployed on all nine architectural circles of driver hell. So they include some kind of circle-of-hell abstraction layer, resulting in a hundred thousand lines of code like a tangled frozen beard. We all saw the abyss and repented.

Luke described the process that led to Mellanox releasing the specification for its ConnectX line of cards, something that was warmly appreciated by the entire audience, users and driver developers included. Wonderful stuff.

My Igalia colleague Asumu Takikawa took the last half of the presentation, showing some code for the driver for the Intel i210, i350, and 82599 cards. For more on that, I recommend his recent blog post on user-space driver development. It was truly a ray of sunshine in dark, dark Brussels.

Ole Trøan -- Cisco -- Fast dataplanes with VPP

This talk was a delightful introduction to VPP, but without all of the marketing; the sort of talk that makes FOSDEM worthwhile. Usually at more commercial, vendory events, you can't really get close to the technical people unless you have a vendor relationship: they are surrounded by a phalanx of salesfolk. But in FOSDEM it is clear that we are all comrades out on the open source networking front.

The speaker expressed great personal pleasure in having been able to work on open source software; his relief was palpable. A nice moment.

He also had some kind words about Snabb, too, saying at one point that "of course you can do it on snabb as well -- Snabb and VPP are quite similar in their approach to life". He trolled the horrible complexity diagrams of many "NFV" stacks whose components reflect the org charts that produce them more than the needs of the network functions in question (service chaining anyone?).

He did get to drop some numbers as well, which I found interesting. One is that recently they have been working on carrier-grade NAT, aiming for 6 terabits per second. Those are pretty big boxes and I hope they are getting paid appropriately for that :) For context he said that for a 4-unit server, these days you can build one that does a little less than a terabit per second. I assume that's with ten dual-port 40Gbps cards, and I would guess to power that you'd need around 40 cores or so, split between two sockets.

Finally, he finished with a long example on lightweight 4-over-6. Incidentally this is the same network function my group at Igalia has been building in Snabb over the last couple years, so it was interesting to see the comparison. I enjoyed his commentary that although all of these technologies (carrier-grade NAT, MAP, lightweight 4-over-6) have the ostensible goal of keeping IPv4 running, in reality "we're day by day making IPv4 work worse", mainly by breaking the assumption that, just because you got traffic from port P on IP M, you can send traffic back to M from another port or another protocol and have it reach the target.

All of these technologies also have problems with IPv4 fragmentation. Getting it right is possible but expensive. Instead, Ole mentions that he and a cross-vendor cabal of dataplane people have a "dark RFC" in the works to deprecate IPv4 fragmentation entirely :)

OK that's it. If I get around to writing up the couple of interesting Java talks I went to (I know right?) I'll let yall know. Happy hacking!

05 February, 2018 05:22PM by Andy Wingo

February 02, 2018

freeipmi @ Savannah

FreeIPMI 1.6.1 Released

https://ftp.gnu.org/gnu/freeipmi/freeipmi-1.6.1.tar.gz

FreeIPMI 1.6.1 - 02/02/18
-------------------------
o Add IPv6 hostname support to FreeIPMI, all of FreeIPMI can now take
  IPv6 addresses as inputs to "host" parameters, options, or inputs.
o Support significant portions of IPMI IPv6 configuration in libfreeipmi.
o Add --no-session option in ipmi-raw.
o Add SDR cache options to ipmi-config.
o Legacy -f short option for --flush-cache and -Q short option for
  quiet-cache. Backwards compatible for tools that supported it before.
o In ipmi-oem, support Gigabyte get-bmc-services and set-bmc-services.
o Various performance improvements:
  - Remove excessive calls to secure_memset to clear memory.
  - Remove excessive memsets and clears of data.
  - Remove unnecessary "double input checks".
  - Remove expensive input checks in libfreeipmi fiid library. Fallout
    from this may include FIID_ERR_FIELD_NOT_FOUND errors in different
    fiid functions.
  - Remove unnecessary input checks in libfreeipmi fiid library.
  - Add recent 'lookups' of fields in fiid library to internal cache.
o Various minor fixes/improvements:
  - Update libfreeipmi core API to use poll() instead of select(), to
    avoid issues with applications with a high number of threads.

02 February, 2018 11:47PM by Albert Chu

January 30, 2018

FSF News

Free Software Foundation receives $1 million donation from Pineapple Fund

BOSTON, Massachusetts, USA -- Tuesday, January 30, 2018 -- The Free Software Foundation (FSF) announced it has received a record-breaking charitable contribution of 91.45 Bitcoin from the Pineapple Fund, valued at $1 million at the time of the donation. This gift is a testament to the importance of free software, computer user freedom, and digital rights when technology is interwoven with daily life.

"Free software is more than open source; it is a movement that encourages community collaboration and protects users' freedom," wrote Pine, the Pineapple Fund's founder. "The Free Software Foundation does amazing work, and I'm certain the funds will be put to good use."

"The FSF is honored to receive this generous donation from the Pineapple Fund in service of the free software movement," said John Sullivan, FSF executive director. "We will use it to further empower free software activists and developers around the world. Now is a critical time for computer user freedom, and this gift will make a tremendous difference in our ability, as a movement, to meet the challenges."

The anonymous Pineapple Fund, created to give away $86 million worth of Bitcoin to charities and social causes, "is about making bold and smart bets that hopefully impact everyone in our world."

The FSF believes free software does impact everyone, and this gift from the Pineapple Fund will be used to:

  • Increase innovation and the number of new projects in high priority areas of free software development, including the GNU Project;

  • Expand the FSF's licensing, compliance, and hardware device certification programs;

  • Bring the free software movement to new audiences;

  • Contribute to the long-term stability of the organization.

About the Free Software Foundation

The Free Software Foundation, founded in 1985, is dedicated to promoting computer users' right to use, study, copy, modify, and redistribute computer programs. The FSF promotes the development and use of free (as in freedom) software -- particularly the GNU operating system and its GNU/Linux variants -- and free documentation for free software. The FSF also helps to spread awareness of the ethical and political issues of freedom in the use of software, and its Web sites, located at https://fsf.org and https://gnu.org, are an important source of information about GNU/Linux. Donations to support the FSF's work can be made at https://donate.fsf.org. Its headquarters are in Boston, MA, USA.

More information about the FSF, as well as important information for journalists and publishers, is at https://www.fsf.org/press.

Media Contacts

John Sullivan
Executive Director
Free Software Foundation
+1 (617) 542 5942
campaigns@fsf.org

30 January, 2018 04:25PM

January 29, 2018

GUIX Project news

Meet Guix at FOSDEM

GNU Guix will be present at FOSDEM in the coming days with a couple of talks:

We are also organizing a one-day Guix workshop where contributors and enthusiasts will meet, thanks to the efforts of Manolis Ragkousis and Pjotr Prins. The workshop takes place on Friday Feb. 2nd at the Institute of Cultural Affairs (ICAB) in Brussels. The morning will be dedicated to talks—among other things, we are happy to welcome Eelco Dolstra, the founder of Nix, without which Guix would not exist today. The afternoon will be a more informal discussion and hacking session.

Attendance to the workshop is free and open to everyone, though you are invited to register. Check out the workshop’s wiki page for the program, registration, and practical info. Hope to see you in Brussels!

About GNU Guix

GNU Guix is a transactional package manager for the GNU system. The Guix System Distribution or GuixSD is an advanced distribution of the GNU system that relies on GNU Guix and respects the user's freedom.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. Guix uses low-level mechanisms from the Nix package manager, except that packages are defined as native Guile modules, using extensions to the Scheme language. GuixSD offers a declarative approach to operating system configuration management, and is highly customizable and hackable.

GuixSD can be used on i686, x86_64, and armv7 machines. It is also possible to use Guix on top of an already installed GNU/Linux system, including on mips64el and aarch64.

29 January, 2018 03:00PM by Ludovic Courtès

January 28, 2018

dico @ Savannah

Version 2.5

Version 2.5 of GNU dico is available for download. Main new feature in this release: support for four-column index files in dict.org format.

Previous versions of dico supported only three-column index files. This is the most common format. However, some dictionaries have four-column index files. When trying to load such dictionaries using prior versions of GNU dico, you would get the error message "X.index:Y: malformed entry". The present version fixes this problem.

28 January, 2018 02:49PM by Sergey Poznyakoff

January 27, 2018

GNUnet News

gnURL 7.58.0

I'm no longer publishing release announcements on gnunet.org. Read the full gnURL 7.58.0 release announcement on our developer mailing list and on info-gnu once my email has passed moderation.

27 January, 2018 03:48PM by ng0

January 26, 2018

Lonely Cactus

The Ridiculous Gopher Project: BBSs and ZModem

In the previous entry, I talked about the ridiculous Gopher project, in which I might try to make a presence for myself in Gopher Space.

So my first thought was that I would have a blog and a web gallery over gopher.

The blog entries are a very simple prospect, since they need to be plain text.  I don't really like the block paragraph style, but, I did sketch out a conversion from markdown to troff to text that does some nice formatting.

The directory of the blog entries is a bit more complicated.  I had an idea for a cgi that handled directory structures and indices that are date-based with a parallel directory structure and index that is keyword based.

But anyway, I got stuck on my first step, and fell down a rabbit hole, as per usual.

So I thought to myself, what if I wanted to have comments for my gopher blog?  How would that work?  What technology would I use?  Well, in the original Gopher spec, there is a capacity for a Telnet session.  I thought that I could make a tiny Telnet-based BBS with just enough functionality to let one leave a comment or read comments.

So I went on the internet to find a tiny BBS to examine.  I found just about the simplest BBS one could imagine.  It is called Puppy BBS.
I found it here: http://cd.textfiles.com/simtel/simtel20/MSDOS/FIDO/.index.html

So there's this California-based guy named Tom Jennings who does a lot of stuff in the intersection between tech and art. Once upon a time he was a driving force behind FidoNet, which was a pre-internet community of dial-up BBSs. He's done many cool things since FidoNet.

Check out his cool art at http://www.sensitiveresearch.com/

I guess Tom wrote PuppyBBS as a reaction to how complicated BBSs had become back in the late 1980s.

So I thought, hey, does this thing still build and run? Well, not exactly. First off, it uses an MS-DOS C library that handles serial comms, which, of course, doesn't work on Microsoft Windows 10 or on Linux. And even if that library did still exist, I couldn't try it even if I wanted to. I mean, if I wanted to try it I would need two landlines and two dial-up modems so I could call myself. I do have a dial-up modem in a box in the garage, but I'm not going to get another landline for this nonsense.

Anyway, I e-mailed Tom and asked if I could hack it up and post it on Github, and he said okay. And so that is what this is: PuppyBBS.

Puppy BBS has four functions:
  • write messages
  • read messages
  • upload files
  • download files
From there, I started writing a Telnet-based BBS, which became PupperBBS.  And that went pretty well.  It took very little time to get the message reading and writing running.  I was on a roll, so I decided that I would quickly tackle the other two functions that PuppyBBS had: uploading and downloading files.  And that was where it all got complicated.

PuppyBBS used XModem for file transfer, because it was the 80's and that was what people did.  But I thought ZModem, which was faster and more reliable, would be the way to go.  So, I thought I'd just link a zmodem library to the BBS and I'd be ready to go.

But I couldn't find a zmodem library that was ready to go.  All zmodem code seems to be derived from lrzsz, so I downloaded the code from lrzsz and made it into a library.  To do that, I had to understand the code, so I tried to read it.  That code is so very 1980s.  It is terrible, so I had to fix it.

(Let the record show that by "terrible" I mean terrible from a reader's point of view.  It was written with so much global state and no indication of which procedures modify that state.  There is no isolation, no separation of concerns.  As a practical matter, it works great.)

And that led to a full week of untangling it all, which is what became the libzmodem library.  Now my libzmodem isn't really much more readable than the original code, but, at least it makes more sense to me.

Great, now I linked libzmodem to PupperBBS to add some ZModem send and receive functionality.  Now to test it.  I set up PupperBBS.  I telnetted in to the system, got to the BBS, and tried to upload and download some files.  It became apparent that for ZModem to work, the telnet program itself has to have some partnership with rz and sz, launching one or the other as appropriate.

Since this had to have worked in the past, some internet searches led me to zssh on SourceForge.  zssh has a telnet program with built-in zmodem send and receive functionality.  Unfortunately, it wasn't packaged on Fedora and didn't compile out of the box, so I started trying to understand it and fix it.

So, anyway to summarize:
  1. Let's do a Gopher blog!
  2. How do you do comments?
  3. Telnet works on Gopher!
  4. Let's make a BBS
  5. BBS's do Zmodem
  6. Let's make a ZModem library
  7. Let's make a Telnet client that does ZModem.
And this is why I never finish anything.

26 January, 2018 05:38AM by Mike (noreply@blogger.com)

January 25, 2018

GUIX Project news

aarch64 build machines donated

Good news! We got a present for our build farm in the form of two SoftIron OverDrive 1000 aarch64 machines donated by ARM Holdings. One of them is already running behind our new build farm, which distributes binaries from https://berlin.guixsd.org, and the other one should be operational soon.

The OverDrive has 4 cores and 8 GiB of RAM. It comes in a fancy VCR-style case, which looks even more fancy with the obligatory stickers:

An OverDrive 1000 with its fancy Guix stickers.

A few months ago we reported on the status of the aarch64 port, which was already looking good. The latest releases include a pre-built binary tarball of Guix for aarch64.

Until now though, the project’s official build farms were not building aarch64 binaries. Consequently, Guix on aarch64 would build everything from source. We are glad that this is about to be fixed. We will need to expand our build capacity for this architecture and for ARMv7 as well, and you too can help!

Thanks to ARM Holdings and in particular to Richard Henwood for contributing to our build infrastructure!

About GNU Guix

GNU Guix is a transactional package manager for the GNU system. The Guix System Distribution or GuixSD is an advanced distribution of the GNU system that relies on GNU Guix and respects the user's freedom.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. Guix uses low-level mechanisms from the Nix package manager, except that packages are defined as native Guile modules, using extensions to the Scheme language. GuixSD offers a declarative approach to operating system configuration management, and is highly customizable and hackable.

GuixSD can be used on i686, x86_64, and armv7 machines. It is also possible to use Guix on top of an already installed GNU/Linux system, including on mips64el and aarch64.

25 January, 2018 09:00PM by Ludovic Courtès

Christopher Allan Webber

On standards divisions and collaboration (or: Why can't the decentralized social web people just get along?)

A couple of days ago I wrote about ActivityPub becoming a W3C Recommendation. This was one output of the Social Working Group, and the blogpost was about my experiences, and most of my experiences were on my direct work on ActivityPub. But the Social Working Group did more than ActivityPub; on the same day it also published WebSub, a useful piece of technology in its own right which, amongst other things, plays a significant role in ActivityPub's own history (though it is not used by ActivityPub itself), and it has also published several documents which are not compatible with ActivityPub at all and appear to play the same role. This, to outsiders, may appear confusing, but there are reasons which I will go into in this post.

On that note, my friend and Social Working Group co-participant Amy Guy just wrote a reasonably frustrated and (to my own feelings) highly empathizable blogpost (go ahead and read it before you finish this blogpost) about the kinds of comments you see with different members of different decentralized social web communities sniping at each other. Yes, reading the comments is always a precarious idea, particularly on tech news sites. But what's especially frustrating is seeing comments that we either:

These comments seem to be made by people who were not part of the standards process, so as someone who spent three years of their life on it, let me give the perspective of someone who was actually there.

So yes, first of all, it's true that in the end we pushed out two "stacks" that were mostly incompatible. These would more or less be the "restful + linked data" stack, which is ActivityPub and Linked Data Notifications using ActivityStreams as its core (but extensible) vocabulary (which are directly interoperable, and use the same "inbox" property for delivery), and the "Indieweb stack", which is Micropub and Webmention. (And there's also WebSub, which is not really either specifically part of one or the other of those "stacks" but which can be used with either, and is of such historical significance to federation that we wanted it to be standardized.) Amy Guy did a good job of mapping the landscape in her Social Web Protocols document.

Gosh, two stacks! It does kind of look confusing, if you weren't in the group, to see how this could have happened. Going through meeting logs is boring (though the meeting logs are up there if you feel like it) so here's what happened, as I remember it.

First of all, we didn't just start out with two stacks, we started out with three. At the beginning we had the linked data folks, the RESTful "just speak plain JSON" development type folks, and the Indieweb folks. Nobody really saw eye to eye at first, but eventually we managed to reach some convergence (though not as much as I would have liked). In fact we managed to merge two approaches entirely: ActivityPub is a RESTful API that can be read and interpreted as just JSON, but thanks to JSON-LD you have the power of linked data for extensions or maybe because you really like doing fancy RDF the-web-is-a-graph things. And ActivityPub uses the very same inbox of Linked Data Notifications, and is directly interoperable. Things did not start out as directly interoperable, but Sarven Capadisli and Amy Guy (who was not yet a co-author of ActivityPub) were willing to sit down and discuss and work out the details, and eventually we got there.
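
To make that concrete, here is a minimal sketch (my own illustration, not an excerpt from the spec; the actor URL is made up) of the kind of ActivityStreams object that gets exchanged. A plain-JSON consumer can just read the "type" and "content" fields, while a linked-data consumer can use the @context to interpret the same document as a graph:

  {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Note",
    "attributedTo": "https://social.example/users/alice",
    "content": "Hello, fediverse!"
  }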

Merging the RESTful + Linked Data stuff with the Indieweb stuff was a bit more of a challenge, but for a while it looked like even that might completely happen. For those that don't know, Linked Data type people and Indieweb type people have, for whatever reason, historically been at each others' throats despite (or perhaps because of) the enormous similarity between the kind of work that they're doing (the main disagreements being "should we treat everything like a graph" and "are namespaces a good idea" and also, let's be honest, just historical grudges). But Amy Guy long made the case in the group that actually the divisions between the groups were very shallow and that with just a few tweaks we could actually bridge the gap (this was the real origin of the Social Web Protocols document, which though it eventually became a document of the different things we produced, was originally an analysis of how they weren't so different at all). At the face to face summit in Paris (which I did not attend, but ActivityPub co-editor Jessica Tallon did) there was apparently an energetic meeting over a meal where I'm told that Jessica Tallon and Aaron Parecki (editor of Micropub and Webmention) hit some kind of epiphany and realized yes, by god, we can actually merge these approaches together. Attending remotely, I wasn't there for the meal, but when everyone returned it was apparent that something had changed: the conversation had shifted towards reconciling differences. Between the Paris face to face meeting and the next one, energy was high and discussions active on how to bring things together. Aaron even began to consider that maybe Micropub (and/or? I forget if it was just one) Webmention could support ActivityStreams, since ActivityStreams already had an extension mechanism worked out. At the next face to face meeting, things started out optimistic as well... and then suddenly, within the span of minutes, the whole idea of merging the specs fell apart. In fact it happened so quickly that I'm not even entirely sure what did it, but I think it was over two things: one, Micropub handled an update of fields where you could add or remove a specific element from a list (without giving the entire changed list as a replacement value) and it wasn't obvious how it could be done with ActivityPub, and two, something like "well we already have a whole vocabulary in Microformats anyway, we might as well stick with it." (I could have the details wrong here a bit... again, it happened very fast, and I remember in the next break trying to figure out whether or not things did just fall apart or not.)

With the dream of Linked Data and Indieweb stuff being reconciled given up on, we decided that at least we could move forward in parallel without clobbering, and in fact while actively supporting, each other. I think, at this point, this was actually the best decision possible, and in a sense it was even very fruitful. At this point, not trying to reconcile and compromise on a single spec, the authors and editors of the differing specifications still spent much time collaborating as the specifications moved forward. Aaron and other Indieweb folks provided plenty of useful feedback for ActivityPub and the ActivityPub folks provided plenty of useful feedback for the Indieweb folks, and I'd say all our specifications were improved greatly by this "friendly treaty" of sorts. If we could not unify, we could at least cooperate, and we did.

I'd even say that we came to a good amount of mutual understanding and respect between these groups within the Social Web Working Group. People approached these decentralization challenges with different building blocks, assumptions, principles, and goals... hence at some point they've encountered approaches that didn't quite jive with their "world view" on how to do it right (TM). And that's okay! Even there, we have plenty of space for cooperation and can learn from each other.

This is also true with the continuation of the Social Web Working Group, which is the SocialCG, where the two co-chairs are myself and Aaron Parecki, who are both editors of specifications of the conflicting "stacks". Within the Social Web Community Group we have a philosophy that our scope is to work on collaboration on social web protocols. If you use a different protocol than another person, you probably can still collaborate a lot, because there's a lot of overlap between the problem domains between social web protocols. Outside the SocialWG and SocialCG it still seems to be a different story, and sadly linked data people and Indieweb people seem to still show up on each others' threads to go after each other. I consider that a disappointment... I wish the external world would reflect the kind of sense of mutual understanding we got in the SocialWG and SocialCG.

Speaking of best attempts at bringing unity, my main goal at participating in the SocialWG, and my entire purpose of showing up in the first place, was always to bring unity. The first task I performed over the course of the first few months at the Social Working Group was to try to bring all of the existing distributed social networks to participate in the SocialWG calls. Even at that time, I was worried about the situation with a "fractured federation"... MediaGoblin was about to implement its own federation code, and I was unhappy that we had a bunch of libre distributed social network projects but none of them could talk to each other, and no matter what we chose we would just end up contributing to the problem. I was called out as naive (which I suppose, in retrospect, was accurate) for a belief that if we could just get everyone around the table we could reconcile our differences, agree on a standard that everyone could share in, and maybe we'd start singing Kumbaya or something. And yes, I was naive, but I did reach out to everyone I could think of (if I missed you somehow, I'm sorry): Diaspora, GNU Social, Pump.io (well, they were already there), Hubzilla, Friendica, Owncloud (later Nextcloud)... etc etc (Mastodon and some others didn't even exist at this point, though we would connect later)... I figured this was our one chance to finally get everyone on board and collaborate. We did have Diaspora and Owncloud participants for a time (and Nextcloud even has begun implementing ActivityPub), and plenty of groups said they'd like to participate, but the main barrier was that the standards process took a lot of time (true story), which not everyone was able to allocate. But we did our best to incorporate and respond to feedback wherever we got it. We did detailed analysis on what the major social networks were providing and what we needed to cover as a result. What I'm trying to say is: ActivityPub was my best attempt to bring unity to this space. It grew out of direct experiences from developing previous standards, OStatus and the Pump API, and from over a decade of developing social network protocols and software, including by people who pioneered much of the work in that territory. We tried through long and open comment periods to reconcile the needs of various groups and potential users. Maybe we didn't always succeed... but we did try, and always gave it our best. Maybe ActivityPub will succeed in that role or maybe it won't... I'm hopeful, but time is the true test.

Speaking of attempting to bring unity to the different decentralized social network projects, probably the main thing that disappoints me is the amount of strife we have between these different projects. For example, there are various threads pitting Mastodon vs GNU Social. In fact, Mastodon's lead developer and GNU Social's lead developer get along just fine... it's various members of the communities of each that tend to (sounds familiar?) be hostile.

Here's something interesting: decentralized social web initiatives haven't yet faced an all-out attack from what would presumably be their natural enemies in the centralized social web: Facebook, Twitter, et al. I mean, there have been some aggressions, in the sense that bridging projects that let users mirror their timelines get shut down as terms of service violations, and some comparatively minor things, but I don't know of (as of yet) an outright attack. But maybe they don't have to: participants in the decentralized social web are so good at fighting each other that apparently we do that work for them.

But it doesn't have to be that way. You might be able to come to consensus on a good way forward. And if you can't come to consensus, you can at least have friendly and cooperative communication.

And if somehow you can't do any of that, then at least don't openly attack each other. We've got enough of a fight ahead to make the federated social web work without also fighting ourselves. Thanks.

Update: A previous version of this article said "I even saw someone tried to write a federation history and characterize it as war", but it's been pointed out that I'm being unfair here, since the very article I'm pointing to itself refutes the idea of this being war. Fair point, and I've removed that bit.

25 January, 2018 08:35PM by Christopher Lemmer Webber

January 24, 2018

ActivityPub is a W3C Recommendation

Having spent the majority of the last three years of my life on it, I'm happy to announce that ActivityPub is now a W3C Recommendation. Whew! At last! Hooray! Finally! I've written some more words on this over on the FSF's blog so maybe read that.

As for things I didn't put there, that fit more on a personal blog? I guess that's where I speak about my personal experience of the process and my feelings about it, and I would say they're a mix of elation (for making it), relief (also for making it, because it wasn't always clear that we would), and burnout (I had no idea this process was going to suck up so much of my life).

I didn't expect this to take over my life so thoroughly. I did say this bit on the FSF blogpost but when Jessica Tallon and I got involved in the Social Working Group we figured we were just showing up for an hour a week to make sure things were on track. I did think the goal of the Social Working Group was the right one: we had a lot of libre social networks but they were largely fractured and failed at interoperability... surely we could do better if we got everyone in a room together! (Getting everyone in the room wasn't easy and didn't always happen, though I sure as heck tried, particularly early on.) But I figured the other people in the room would be the experts, the responsible ones, and we'd just be tagging along to make sure our needs were met. Well, the next thing you know we're co-editors of ActivityPub, and that time grew from an hour a week to filling most of my week, with sometimes urgent, grueling deadlines (granted, I made most of them a lot more complicated than they needed to be by doing example implementations in obscure languages, etc etc).

I'm feeling great about things now, but that wasn't always the case through this. I've come to learn how hard standards work is, and I've been doing other specification work recently too (more on that in a coming blogpost), but I'll say that for whatever reason (and I can think of quite a few, but it's not worth going into here), ActivityPub has been far harder than anything else I've worked on in the standards space. (Maybe that's just because it's the first standard I've gotten to completion though.)

In fact, in early-to-middle 2017 I was in quite a bit of despair, because it seemed clear that ActivityPub was going to not make it in time as an official recommended standard. The Social Working Group's charter was going to run out at mid-2017, and it had already been extended once... apparently getting a second extension was nearly unheard of. I resigned myself to the idea that ActivityPub would be published as a note, but that there was no way that we would be able to make it to getting the shiny foil stamp of being an actual recommended standard. Instead, I shifted my effort to making sure that my ActivityPub implementation work would support enough of ActivityStreams (which is what ActivityPub uses as its vocabulary) to make sure that at least that would make it as a standard with all the components we required, since we at least needed to be able to refer to that vocabulary.

But Mastodon saved ActivityPub. I'll admit that at first I was skeptical about all the hype I was hearing about Mastodon... but Amy Guy (co-author of ActivityPub, and whose PhD thesis, "Presentation of Self on a Decentralised Web", is worth a read at the memorable domain of dr.amy.gy) convinced me that I really ought to check out what was going on in Mastodon land. And I found I really did like what was happening there... and connected to a community that felt like what I had missed from the heyday of StatusNet/identi.ca, while having a bit of its own flavor of culture, one that I really felt at home in. It turned out this was good timing... Mastodon was having trouble meeting the privacy needs of its users on OStatus, and it turns out private addressing was exactly one of the reasons that ActivityPub was developed. (I'm not claiming credit for this, I'm just talking from my perspective... the Mastodon ActivityPub implementation issue can give you a better sense of where credit is due, and here I didn't really do much.) This interest came right at the right time... it began to drum up interest from many other participants too... and it pretty much directly led to another extension to the Social Working Group, giving us until the end of 2017 to wrap up the work on standardizing ActivityPub. Whew!

But Mastodon is not alone. Today there are a growing number of implementers of ActivityPub. I'd encourage you, if you haven't, to watch this video of PeerTube and Mastodon federating over ActivityPub. Pretty cool stuff! ActivityPub has been a massive group effort, and I'm relieved to see that all that hard work has paid off, for all of us.

Meanwhile, there's a lot to do still ahead. MediaGoblin, ironically, has fallen behind on its own federation support in the interest of advancing federation standards (we have some federation code, but it's for the old pre-ActivityPub Pump API, and it's bitrotted quite a bit) and I need to figure out what the next steps are and discuss with the community (expect more on that in the next few months, and sure to be discussed at my talk at Libreplanet 2018). And ActivityPub may be "done" in the sense that "it made it through the standards process", but some of the most interesting work is still ahead. The Social Web Community Group, of which I am co-chair, meets bi-weekly to talk and collaborate on the interesting problems that implementers of libre networks are encountering. (It's open to everyone, maybe you should join?)

On that note, in a recent Social Web Community Group meeting, Evan Prodromou was showing off some of his latest ActivityPub projects (tags.pub and places.pub). I'm paraphrasing here, but he said something interesting, which has stuck with me: "We did all that standardizing work, and that's great, but now we get to the fun part... now we get to build things."

I agree. I look forward to what the next few years of fun ActivityPub development bring. Onwards!

24 January, 2018 05:00AM by Christopher Lemmer Webber

January 22, 2018

parallel @ Savannah

GNU Parallel 20180122 ('Mayon') released [stable]

GNU Parallel 20180122 ('Mayon') [stable] has been released. It is
available for download at: http://ftpmirror.gnu.org/parallel/

No new functionality was introduced so this is a good candidate for a
stable release.

Quote of the month:

GNU Parallel is making me pretty happy this morning
-- satanpenguin

New in this release:

  • Bug fixes and man page updates.

GNU Parallel - For people who live life in the parallel lane.

About GNU Parallel

GNU Parallel is a shell tool for executing jobs in parallel using one
or more computers. A job can be a single command or a small script
that has to be run for each of the lines in the input. The typical
input is a list of files, a list of hosts, a list of users, a list of
URLs, or a list of tables. A job can also be a command that reads from
a pipe. GNU Parallel can then split the input and pipe it into
commands in parallel.

If you use xargs and tee today you will find GNU Parallel very easy to
use as GNU Parallel is written to have the same options as xargs. If
you write loops in shell, you will find GNU Parallel may be able to
replace most of the loops and make them run faster by running several
jobs in parallel. GNU Parallel can even replace nested loops.
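
For instance (a rough sketch; the lame encoder and the .wav files are just hypothetical stand-ins), a sequential shell loop such as

  for f in *.wav; do lame "$f"; done

can usually be replaced with

  parallel lame ::: *.wav

or, feeding the arguments through a pipe in the xargs style:

  ls *.wav | parallel lame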

GNU Parallel makes sure output from the commands is the same output as
you would get had you run the commands sequentially. This makes it
possible to use output from GNU Parallel as input for other programs.

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with: (wget -O -
pi.dk/3 || curl pi.dk/3/) | bash

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your commandline
will love you for it.

When using programs that use GNU Parallel to process data for
publication please cite:

O. Tange (2011): GNU Parallel - The Command-Line Power Tool, ;login:
The USENIX Magazine, February 2011:42-47.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/Identi.ca/Google+/Twitter/Facebook/LinkedIn/mailing lists

  • Invite me for your next conference

If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)

If GNU Parallel saves you money:

  • (Have your company) donate to FSF https://my.fsf.org/donate/

About GNU SQL

GNU sql aims to give a simple, unified interface for accessing
databases through all the different databases' command line clients.
So far the focus has been on giving a common way to specify login
information (protocol, username, password, hostname, and port number),
size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you
will get that database's interactive shell.
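
As a hedged sketch (the credentials, host, database, and table here are all made up; check the man page for the exact vendor prefixes supported), a DBURL and a query might look like:

  sql mysql://user:password@dbhost/mydatabase "SELECT COUNT(*) FROM items;"

and the same DBURL with no command given drops you into the interactive mysql shell.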

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different
Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.

About GNU Niceload

GNU niceload slows down a program when the computer load average (or
other system activity) is above a certain limit. When the limit is
reached the program will be suspended for some time. If the limit is a
soft limit the program will be allowed to run for short amounts of
time before being suspended again. If the limit is a hard limit the
program will only be allowed to run when the system is below the
limit.
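
A minimal sketch of typical usage (the file name is made up, and the threshold and soft/hard options are best taken from the man page rather than from here):

  niceload gzip big-logfile.log

This runs the compression as usual while the load average stays under the limit, and suspends it for a while whenever the limit is exceeded.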

22 January, 2018 04:28PM by Ole Tange

January 21, 2018

libtasn1 @ Savannah

libtasn1 moved to gitlab

The new primary development site is at:
https://gitlab.com/gnutls/libtasn1

21 January, 2018 09:47AM by Nikos Mavrogiannopoulos

January 17, 2018

Andy Wingo

instruction explosion in guile

Greetings, fellow Schemers and compiler nerds: I bring fresh nargery!

instruction explosion

A couple years ago I made a list of compiler tasks for Guile. Most of these are still open, but I've been chipping away at the one labeled "instruction explosion":

Now we get more to the compiler side of things. Currently in Guile's VM there are instructions like vector-ref. This is a little silly: there are also instructions to branch on the type of an object (br-if-tc7 in this case), to get the vector's length, and to do a branching integer comparison. Really we should replace vector-ref with a combination of these test-and-branches, with real control flow in the function, and then the actual ref should use some more primitive unchecked memory reference instruction. Optimization could end up hoisting everything but the primitive unchecked memory reference, while preserving safety, which would be a win. But probably in most cases optimization wouldn't manage to do this, which would be a lose overall because you have more instruction dispatch.

Well, this transformation is something we need for native compilation anyway. I would accept a patch to do this kind of transformation on the master branch, after version 2.2.0 has forked. In theory this would remove most all high level instructions from the VM, making the bytecode closer to a virtual CPU, and likewise making it easier for the compiler to emit native code as it's working at a lower level.

Now that I'm getting close to finished I wanted to share some thoughts. Previous progress reports on the mailing list.

a simple loop

As an example, consider this loop that sums the 32-bit floats in a bytevector. I've annotated the code with lines and columns so that you can correspond different pieces to the assembly.

   0       8   12     19
 +-v-------v---v------v-
 |
1| (use-modules (rnrs bytevectors))
2| (define (f32v-sum bv)
3|   (let lp ((n 0) (sum 0.0))
4|     (if (< n (bytevector-length bv))
5|         (lp (+ n 4)
6|             (+ sum (bytevector-ieee-single-native-ref bv n)))
7|          sum)))

The assembly for the loop before instruction explosion went like this:

L1:
  17    (handle-interrupts)     at (unknown file):5:12
  18    (uadd/immediate 0 1 4)
  19    (bv-f32-ref 1 3 1)      at (unknown file):6:19
  20    (fadd 2 2 1)            at (unknown file):6:12
  21    (s64<? 0 4)             at (unknown file):4:8
  22    (jnl 8)                ;; -> L4
  23    (mov 1 0)               at (unknown file):5:8
  24    (j -7)                 ;; -> L1

So, already Guile's compiler has hoisted the (bytevector-length bv) and unboxed the loop index n and accumulator sum. This work aims to simplify further by exploding bv-f32-ref.

exploding the loop

In practice, instruction explosion happens in CPS conversion, as we are converting the Scheme-like Tree-IL language down to the CPS soup language. When we see a Tree-IL primcall (a call to a known primitive), instead of lowering it to a corresponding CPS primcall, we inline a whole blob of code.

In the concrete case of bv-f32-ref, we'd inline it with something like the following:

(unless (and (heap-object? bv)
             (eq? (heap-type-tag bv) %bytevector-tag))
  (error "not a bytevector" bv))
(define len (word-ref bv 1))
(define ptr (word-ref bv 2))
(unless (and (<= 4 len)
             (<= idx (- len 4)))
  (error "out of range" idx))
(f32-ref ptr idx)

As you can see, there are four branches hidden in the bv-f32-ref: two to check that the object is a bytevector, and two to check that the index is within range. In this explanation we assume that the offset idx is already unboxed, but actually unboxing the index ends up being part of this work as well.

One of the goals of instruction explosion was that by breaking the operation into a number of smaller, more orthogonal parts, native code generation would be easier, because the compiler would only have to know about those small bits. However without an optimizing compiler, it would be better to reify a call out to a specialized bv-f32-ref runtime routine instead of inlining all of this code -- probably whatever language you write your runtime routine in (C, rust, whatever) will do a better job optimizing than your compiler will.

But with an optimizing compiler, there is the possibility of removing possibly everything but the f32-ref. Guile doesn't quite get there, but almost; here's the post-explosion optimized assembly of the inner loop of f32v-sum:

L1:
  27    (handle-interrupts)
  28    (tag-fixnum 1 2)
  29    (s64<? 2 4)             at (unknown file):4:8
  30    (jnl 15)               ;; -> L5
  31    (uadd/immediate 0 2 4)  at (unknown file):5:12
  32    (u64<? 2 7)             at (unknown file):6:19
  33    (jnl 5)                ;; -> L2
  34    (f32-ref 2 5 2)
  35    (fadd 3 3 2)            at (unknown file):6:12
  36    (mov 2 0)               at (unknown file):5:8
  37    (j -10)                ;; -> L1

good things

The first thing to note is that unlike the "before" code, there's no instruction in this loop that can throw an exception. Neat.

Next, note that there's no type check on the bytevector; the peeled iteration preceding the loop already proved that the bytevector is a bytevector.

And indeed there's no reference to the bytevector at all in the loop! The value being dereferenced in (f32-ref 2 5 2) is a raw pointer. (Read this instruction as, "sp[2] = *(float*)((byte*)sp[5] + (uptrdiff_t)sp[2])".) The compiler does something interesting; the f32-ref CPS primcall actually takes three arguments: the garbage-collected object protecting the pointer, the pointer itself, and the offset. The object itself doesn't appear in the residual code, but including it in the f32-ref primcall's inputs keeps it alive as long as the f32-ref itself is alive.

bad things

Then there are the limitations. Firstly, instruction 28 tags the u64 loop index as a fixnum, but never uses the result. Why is this here? Sadly it's because the value is used in the bailout at L2. Recall this pseudocode:

(unless (and (<= 4 len)
             (<= idx (- len 4)))
  (error "out of range" idx))

Here the error ends up lowering to a throw CPS term that the compiler recognizes as a bailout and renders out-of-line; cool. But it uses idx as an argument, as a tagged SCM value. The compiler untags the loop index, but has to keep a tagged version around for the error cases.

The right fix is probably some kind of allocation sinking pass that sinks the tag-fixnum to the bailouts. Oh well.

Additionally, there are two tests in the loop. Are both necessary? Turns out, yes :( Imagine you have a bytevector of length 1025. The loop continues until the last ref at offset 1024, which is within the bounds of the bytevector, but only one byte is available at that offset when four are needed, so we need to throw an exception at that point. The compiler did as good a job as we could expect it to do.

is it worth it? where to now?

On the one hand, instruction explosion is a step sideways. The code is more optimal, but it's more instructions. Because Guile currently has a bytecode VM, that means more total interpreter overhead. Testing on a 40-megabyte bytevector of 32-bit floats, the exploded f32v-sum completes in 115 milliseconds compared to around 97 for the earlier version.

On the other hand, it is very easy to imagine how to compile these instructions to native code, either ahead-of-time or via a simple template JIT. You practically just have to look up the instructions in the corresponding ISA reference, is all. The result should perform quite well.

I will probably take a whack at a simple template JIT first that does no register allocation, then ahead-of-time compilation with register allocation. Getting the AOT-compiled artifacts to dynamically link with runtime routines is a sufficient pain in my mind that I will put it off a bit until later. I also need to figure out a good strategy for truly polymorphic operations like general integer addition; probably involving inline caches.

So that's where we're at :) Thanks for reading, and happy hacking in Guile in 2018!

17 January, 2018 10:30AM by Andy Wingo

January 16, 2018

libsigsegv @ Savannah

libsigsegv 2.12 is released

libsigsegv version 2.12 is released.

New in this release:

  • Added support for catching stack overflow on Hurd/i386.
  • Added support for catching stack overflow on Haiku.
  • Corrected distinction between stack overflow and other fault on AIX.
  • Reliability improvements on Linux, FreeBSD, NetBSD.
  • NOTE: Support for Cygwin and native Windows is currently not up-to-date.

Download: https://ftp.gnu.org/gnu/libsigsegv/libsigsegv-2.12.tar.gz

16 January, 2018 08:47PM by Bruno Haible

FSF News

Announcing LibrePlanet 2018 keynote speakers

The keynote speakers for the tenth annual LibrePlanet conference will be anthropologist and author Gabriella Coleman, free software policy expert and community advocate Deb Nicholson, Electronic Frontier Foundation (EFF) senior staff technologist Seth Schoen, and FSF founder and president Richard Stallman. Register for this year's conference here!

LibrePlanet is an annual conference for people who care about their digital freedoms, bringing together software developers, policy experts, activists, and computer users to learn skills, share accomplishments, and tackle challenges facing the free software movement. The theme of this year's conference is Freedom. Embedded. In a society reliant on embedded systems -- in cars, digital watches, traffic lights, and even within our bodies -- how do we defend computer user freedom, protect ourselves against corporate and government surveillance, and move toward a freer world? LibrePlanet 2018 will explore these topics in sessions for all ages and experience levels.

Gabriella (Biella) Coleman is best known in the free software community for her book Coding Freedom: The Ethics and Aesthetics of Hacking. Trained as an anthropologist, Coleman holds the Wolfe Chair in Scientific and Technological Literacy at McGill University. Her scholarship explores the intersection of the cultures of hacking and politics, with a focus on the sociopolitical implications of the free software movement and the digital protest ensemble Anonymous, the latter in her book Hacker, Hoaxer, Whistleblower, Spy: The Many Faces of Anonymous.

Deb Nicholson is a free software policy expert and a passionate community advocate, notably contributing to GNU MediaGoblin and OpenHatch. She is the Community Outreach Director for the Open Invention Network, the world's largest patent non-aggression community, which serves the kernel Linux, GNU, Android, and other key free software projects. A perennial speaker at LibrePlanet, this is Nicholson's first keynote at the conference.

"They are all too modest to say it, but these speakers will blow your mind," said FSF executive director John Sullivan. "Don't miss this opportunity to hear about how technology controls our core freedoms, how people are working together in communities to build software that truly empowers, and how you can both benefit from and contribute to these efforts."

Seth David Schoen has worked at the EFF for over a decade, creating the Staff Technologist position and helping other technologists understand the civil liberties implications of their work, helping EFF staff better understand technology related to EFF's legal work, and helping the public understand what the products they use really do. Schoen last spoke at LibrePlanet in 2015, when he introduced Let's Encrypt, the automated, free software-based certificate authority.

FSF president Richard Stallman will present the Free Software Awards, and discuss pressing threats and important opportunities for software freedom. Dr. Richard Stallman launched the free software movement in 1983 and started the development of the GNU operating system (see www.gnu.org) in 1984. GNU is free software: everyone has the freedom to copy it and redistribute it, with or without changes. The GNU/Linux system, basically the GNU operating system with Linux added, is used on tens of millions of computers today. Stallman has received the ACM Grace Hopper Award, a MacArthur Foundation fellowship, the Electronic Frontier Foundation's Pioneer Award, and the Takeda Award for Social/Economic Betterment, as well as several doctorates honoris causa, and has been inducted into the Internet Hall of Fame.

About LibrePlanet

LibrePlanet is the annual conference of the Free Software Foundation. Begun as a modest gathering of FSF members, the conference now is a large, vibrant gathering of free software enthusiasts, welcoming anyone interested in software freedom and digital rights. Registration is now open, and admission is gratis for FSF members and students.

For the fifth year in a row, LibrePlanet will be held at the Massachusetts Institute of Technology in Cambridge, Massachusetts, on March 24th and 25th, 2018. Co-presented by the Free Software Foundation and MIT's Student Information Processing Board (SIPB), the rest of the LibrePlanet program will be announced soon. The opening keynote at LibrePlanet 2017 was given by Kade Crockford, Director of the Technology for Liberty Program at the ACLU of Massachusetts, and the closing keynote was given by Sumana Harihareswara, founder of Changeset Consulting.

About the Free Software Foundation

The Free Software Foundation, founded in 1985, is dedicated to promoting computer users' right to use, study, copy, modify, and redistribute computer programs. The FSF promotes the development and use of free (as in freedom) software — particularly the GNU operating system and its GNU/Linux variants — and free documentation for free software. The FSF also helps to spread awareness of the ethical and political issues of freedom in the use of software, and its Web sites, located at fsf.org and gnu.org, are an important source of information about GNU/Linux. Donations to support the FSF's work can be made at https://donate.fsf.org. Its headquarters are in Boston, MA, USA.

More information about the FSF, as well as important information for journalists and publishers, is at https://www.fsf.org/press.

Media Contact

Georgia Young
Program Manager
Free Software Foundation
+1 (617) 542 5942
campaigns@fsf.org

16 January, 2018 07:05PM

January 11, 2018

Andy Wingo

spectre and the end of langsec

I remember in 2008 seeing Gerald Sussman, creator of the Scheme language, resignedly describing a sea change in the MIT computer science curriculum. In response to a question from the audience, he said:

The work of engineers used to be about taking small parts that they understood entirely and using simple techniques to compose them into larger things that do what they want.

But programming now isn't so much like that. Nowadays you muck around with incomprehensible or nonexistent man pages for software you don't know who wrote. You have to do basic science on your libraries to see how they work, trying out different inputs and seeing how the code reacts. This is a fundamentally different job.

Like many I was profoundly saddened by this analysis. I want to believe in constructive correctness, in math and in proofs. And so with the rise of functional programming, I thought that this historical slide from reason towards observation was just that, historical, and that the "safe" languages had a compelling value that would be evident eventually: that "another world is possible".

In particular I found solace in "langsec", an approach to assessing and ensuring system security in terms of constructively correct programs. One obvious application is parsing of untrusted input, and indeed the langsec.org website appears to emphasize this domain as one in which a programming languages approach can be fruitful. It is, after all, a truth universally acknowledged, that a program with good use of data types, will be free from many common bugs. So far so good, and so far so successful.

The basis of language security is starting from a programming language with a well-defined, easy-to-understand semantics. From there you can prove (formally or informally) interesting security properties about particular programs. For example, if a program has a secret k, but some untrusted subcomponent C of it should not have access to k, one can prove if k can or cannot leak to C. This approach is taken, for example, by Google's Caja compiler to isolate components from each other, even when they run in the context of the same web page.

But the Spectre and Meltdown attacks have seriously set back this endeavor. One manifestation of the Spectre vulnerability is that code running in a process can now read the entirety of its address space, bypassing invariants of the language in which it is written, even if it is written in a "safe" language. This is currently being used by JavaScript programs to exfiltrate passwords from a browser's password manager, or bitcoin wallets.

Mathematically, in terms of the semantics of e.g. JavaScript, these attacks should not be possible. But practically, they work. Spectre shows us that the building blocks provided to us by Intel, ARM, and all the rest are no longer "small parts understood entirely"; that instead now we have to do "basic science" on our CPUs and memory hierarchies to know what they do.

What's worse, we need to do basic science to come up with adequate mitigations to the Spectre vulnerabilities (side-channel exfiltration of results of speculative execution). Retpolines, poisons and masks, et cetera: none of these are proven to work. They are simply observed to be effective on current hardware. Indeed mitigations are anathema to the correctness-by-construction approach: if you can prove that a problem doesn't exist, what is there to mitigate?

Spectre is not the first crack in the edifice of practical program correctness. In particular, timing side channels are rarely captured in language semantics. But I think it's fair to say that Spectre is the most devastating vulnerability in the langsec approach to security that has ever been uncovered.

Where do we go from here? I see but two options. One is to attempt to make the machines targeted by secure language implementations behave rigorously as architecturally specified, and in no other way. This is the approach taken by all of the deployed mitigations (retpolines, poisoned pointers, masked accesses): modify the compiler and runtime to prevent the CPU from speculating through vulnerable indirect branches (prevent speculative execution), or from using fetched values in further speculative fetches (prevent this particular side channel). I think we are missing a model and a proof that these mitigations restore target architectural semantics, though.

However if we did have a model of what a CPU does, we have another opportunity, which is to incorporate that model in a semantics of the target language of a compiler (e.g. micro-x86 versus x86). It could be that this model produces a co-evolution of the target architectures as well, whereby Intel decides to disclose and expose more of its microarchitecture to user code. Cacheing and other microarchitectural side-effects would then become explicit rather than transparent.

Rich Hickey has this thing where he talks about "simple versus easy". Both of them sound good but for him, only "simple" is good whereas "easy" is bad. It's the sort of subjective distinction that can lead to an endless string of Worse Is Better Is Worse Bourbaki papers, according to the perspective of the author. Anyway transparent caching in the CPU has been marvelously easy for most application developers and fantastically beneficial from a performance perspective. People needing constant-time operations have complained, of course, but that kind of person always complains. Could it be, though, that actually there is some other, better-is-better kind of simplicity that should replace the all-pervasive, now-treacherous transparent cacheing?

I don't know. All I will say is that an ad-hoc approach to determining which branches and loads are safe and which are not is not a plan that inspires confidence. Godspeed to the langsec faithful in these dark times.

11 January, 2018 01:44PM by Andy Wingo

January 07, 2018

gzip @ Savannah

gzip-1.9 released [stable]

07 January, 2018 10:50PM by Jim Meyering

nano @ Savannah

GNU nano 2.9.2 was released

The most important change in this version is that now you can use <Tab> to indent a marked region and <Shift+Tab> to unindent it. Furthermore, with the option 'set trimblanks' in your nanorc, nano will now snip those pesky trailing spaces when automatic hard-wrapping occurs (when using the --fill option, for example). Apart from those things, there are several small fixes and improvements. Recommended upgrade.
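
A minimal sketch of the matching nanorc line (the usual per-user location is ~/.nanorc):

  set trimblanks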

07 January, 2018 11:09AM by Benno Schulenberg

January 04, 2018

Lonely Cactus

The ridiculous gopher project

My primary New Year's Resolution for 2018 is to start no new projects, and to only finish old ones.  In looking over my repos  -- more aptly titled the graveyard of 1,000 Saturdays -- I have excavated a couple of projects from the earth.

I'm starting, for now, with what is one of the most ridiculous of all possible projects: a gopher-protocol blog.  Do you remember gopher?  It was a protocol and a network ecosystem that existed just before HTTP took over the world.  It presented the world as directories that contained files, and users could poke around and look at those files to their heart's content.

There is a reason that I'm nostalgic for those days, and it lies primarily in how all of the world of HTTP and the world of iPhone and Android applications are really data-mining spy operations.  The gopher protocol is too primitive to allow the wholesale data mining that the modern web has become.  It has no client-side scripting and no cookies.  And because the world of gopher is so strange and hard to reach, there is a bit of a pioneer mindset among aficionados.


So yeah, gopher.  A big directory of files of the types on the following list.  Take a look at this table of filetypes that Gopher handles natively.


Itemtype  Content
   0      Text file
   1      Directory
   5      PC binary
   6      UNIX uuencoded file
   8      Telnet Session
   9      Binary File
   g      GIF image
   s      Sound
   I      Image (other than GIF)

Pretty old school, eh?  Just feel the power of the 1990s.
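
For context, and as a sketch of my own (the hostname and selectors are made up): in a Gopher menu, each line the server sends begins with one of those itemtype characters, followed by a display string, a selector, a hostname, and a port, separated by tabs. Two menu lines might look like:

  0About this blog<TAB>/about.txt<TAB>gopher.example.org<TAB>70
  1Posts from 2018<TAB>/2018<TAB>gopher.example.org<TAB>70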

There are a lot of people running blogs in Gopher.  Really, they are just directories of plain text files, ordered by date.  It is very pure but slightly boring.  So I looked at that list and asked myself if I could create a modern (lol) Gopher blog engine.

You can, of course, write servers that push out dynamically generated content, but the clients only receive these static files.  Do you remember back when Perl5 was the way one would write CGI scripts that created "dynamic" HTML?  You can do the same thing here: make CGI scripts that create text files or GIF images.

In my conception, a modern gopher weblog engine would have text files of blog entries, a gallery of GIFs, and a commenting system.  Lacking any other method available in gopher, the commenting system would be a Telnet session.

So I have picked up a few old ideas: weblog software with a Gopher interface, a web gallery with a Gopher interface, and a tiny Telnet BBS where people can leave comments.  I've (re)started with the BBS, because it is the most ridiculous.

04 January, 2018 11:14PM by Mike (noreply@blogger.com)

January 02, 2018

Writing as little as possible

My New Year's Resolution for 2018 is to start no new projects.  For 2018, I will only finish my many, many uncompleted projects.

I've started up with one of my most pointless coding projects: a telnet BBS.  Writing a BBS in the late 1980's and early 1990's was something of a rite of passage.  Much like writing your own blog software was in the late 1990's and 2000's.

But going back to the idea of finishing things, I've given myself some additional constraints.
  • Write as little code as possible.
  • Use common libraries and components sensibly and liberally.
  • Bend my concept to the strengths and constraints created by the libraries and components, instead of wrangling them into matching my vision.
This ends up being very hard to do.  To be specific, it is very difficult to quash my ego and perfectionism; that perfectionism is why my repo has two dozen projects, of which only three are functional.

One of the forces that pushes me to write my own code, instead of using other people's code, is that reading and understanding other people's code and documentation is hard and it doesn't feel like an accomplishment.  To properly use another library, one really does need to put in the work of reading the docs and understanding their logic, which is deeply unsatisfying.

Will 2018 be the year I recover from Incompletion Syndrome?  Time will tell.

02 January, 2018 05:40AM by Mike (noreply@blogger.com)

January 01, 2018

health @ Savannah

Native GNU Health GTK client !

Dear community

I am happy to announce the native GNU Health GTK client for series 3.2 !

The GNU Health GTK Client

The GTK client allows you to connect to the GNU Health server from the desktop.

Starting from GNU Health version 3.2, you can directly download the gnuhealth client from GNU.org or pypi.

Installation

The GNU Health client is pip installable:

For a system-wide installation (you need to be root)

# pip install gnuhealth-client

Alternatively, you can do a local installation:

$ pip install --user gnuhealth-client

For the latest information about the GNU Health client on PyPI visit: https://pypi.python.org/pypi/gnuhealth-client

Alternatively, you can also install it from source:

$ wget https://ftp.gnu.org/gnu/health/gnuhealth-client-latest.tar.gz

Technology

The GNU Health GTK client derives from the Tryton GTK client, with specific features for GNU Health and the healthcare sector.

GNU Health client series 3.2.x uses GTK+ 2 and Python 2. This is a transition series for the upcoming 3.4, which will use GTK+ 3 and Python 3.

The default profile

The GNU Health client comes with a pre-defined profile, which points to the GNU Health community demo server

Server : health.gnusolidario.org
Port : 8000
User : admin
Passwd : gnusolidario

GNU Health Plugins

You can download GNU Health plugins for specific functionality.

For example:

  • The GNU Health Crypto plugin to digitally sign documents using GNUPG
  • The GNU Health Camera plugin to use cameras and store the images directly in the system (person registration, histological samples, etc.)

More information about the GNU Health plugins at:

https://en.wikibooks.org/wiki/GNU_Health/Plugins

The GNU Health client configuration file

The default configuration file resides in

$HOME/.config/gnuhealth/gnuhealth-client.conf

Using a custom greeter / banner

You can customize the login greeter banner to fit your institution.

In the section [client] include the banner param with the absolute path of the png file.

Something like

[client]
banner = /home/yourlogin/myhospitalbanner.png

The default resolution of the banner is 500 x 128 pixels. Adjust yours to approximately this size.

Development

The development of the GNU Health client will be done on GNU Savannah, using the Mercurial repository.

Tasks, bugs, and development mailing lists will be on health-dev@gnu.org.

General questions can be asked on the health@gnu.org mailing list.

Homepage

http://health.gnu.org

Documentation

The GNU Health GTK documentation will be at the corresponding chapter in the GNU Health Wikibook

https://en.wikibooks.org/wiki/GNU_Health

01 January, 2018 10:23PM by Luis Falcon

gdbm @ Savannah

Version 1.14

Version 1.14 is available for download. This is a bug-fix release. A list of important changes follows:

Make sure created databases are byte-for-byte reproducible

This fixes two longstanding bugs: (1) when allocating database file header blocks, the unused memory is filled with zeroes; (2) when expanding
a mmapped memory area, the added extent is filled with zeroes.

Fix build with --enable-gdbm-export

Make gdbm_error global variable thread safe

Fix possible segmentation violation in gdbm_setopt

Fix handling of group headers in --help output

01 January, 2018 09:59PM by Sergey Poznyakoff

December 27, 2017

coreutils @ Savannah

coreutils-8.29 released [stable]

27 December, 2017 06:44PM by Pádraig Brady