Planet GNU

Aggregation of development blogs from the GNU Project

May 26, 2018

GUIX Project news

Customize GuixSD: Use Stock SSH Agent Everywhere!

I frequently use SSH. Since I don't like typing my password all the time, I use an SSH agent. Originally I used the GNOME Keyring as my SSH agent, but recently I've switched to using the ssh-agent from OpenSSH. I accomplished this by doing the following two things:

  • Replace the default GNOME Keyring with a custom-built version that disables the SSH agent feature.

  • Start my desktop session with OpenSSH's ssh-agent so that it's always available to any applications in my desktop session.

Below, I'll show you in detail how I did this. In addition to being useful for anyone who wants to use OpenSSH's ssh-agent in GuixSD, I hope this example will help to illustrate how GuixSD enables you to customize your entire system to be just the way you want it!

The Problem: GNOME Keyring Can't Handle My SSH Keys

On GuixSD, I like to use the GNOME desktop environment. GNOME is just one of the various desktop environments that GuixSD supports. By default, the GNOME desktop environment on GuixSD comes with a lot of goodies, including the GNOME Keyring, which is GNOME's integrated solution for securely storing secrets, passwords, keys, and certificates.

The GNOME Keyring has many useful features. One of those is its SSH Agent feature. This feature allows you to use the GNOME Keyring as an SSH agent. This means that when you invoke a command like ssh-add, it will add the private key identities to the GNOME Keyring. Usually this is quite convenient, since it means that GNOME users basically get an SSH agent for free!

Unfortunately, prior to GNOME 3.28 (the current release), the GNOME Keyring's SSH agent implementation was not as complete as the stock SSH agent from OpenSSH. As a result, earlier versions of GNOME Keyring did not support many use cases. This was a problem for me, since GNOME Keyring couldn't read my modern SSH keys. To make matters worse, by design the SSH agent for GNOME Keyring and OpenSSH both use the same environment variables (e.g., SSH_AUTH_SOCK). This makes it difficult to use OpenSSH's ssh-agent everywhere within my GNOME desktop environment.

Happily, starting with GNOME 3.28, GNOME Keyring delegates all SSH agent functionality to the stock SSH agent from OpenSSH. They have removed their custom implementation entirely. This means that today, I could solve my problem simply by using the most recent version of GNOME Keyring. I'll probably do just that when the new release gets included in Guix. However, when I first encountered this problem, GNOME 3.28 hadn't been released yet, so the only option available to me was to customize GNOME Keyring or remove it entirely.

In any case, I'm going to show you how I solved this problem by modifying the default GNOME Keyring from the Guix package collection. The same ideas can be used to customize any package, so hopefully it will be a useful example. And what if you don't use GNOME, but you do want to use OpenSSH's ssh-agent? In that case, you may still need to customize your GuixSD system a little bit. Let me show you how!

The Solution: ~/.xsession and a Custom GNOME Keyring

The goal is to make OpenSSH's ssh-agent available everywhere when we log into our GNOME desktop session. First, we must arrange for ssh-agent to be running whenever we're logged in.

There are many ways to accomplish this. For example, I've seen people implement shell code in their shell's start-up files which basically manages their own ssh-agent process. However, I prefer to just start ssh-agent once and not clutter up my shell's start-up files with unnecessary code. So that's what we're going to do!

Launch OpenSSH's ssh-agent in Your ~/.xsession

By default, GuixSD uses the SLiM desktop manager. When you log in, SLiM presents you with a menu of so-called "desktop sessions", which correspond to the desktop environments you've declared in your operating system declaration. For example, if you've added the gnome-desktop-service to your operating system declaration, then you'll see an option for GNOME at the SLiM login screen.

You can further customize your desktop session with the ~/.xsession file. The contract for this file in GuixSD is the same as it is for many GNU/Linux distributions: if it exists, then it will be executed. The arguments passed to it will be the command line invocation that would normally be executed to start the desktop session that you selected from the SLiM login screen. Your ~/.xsession is expected to do whatever is necessary to customize and then start the specified desktop environment. For example, when you select GNOME from the SLiM login screen, your ~/.xsession file will basically be executed like this (for the exact execution mechanism, please refer to the source code linked above):

$ ~/.xsession gnome-session

The upshot of all this is that the ~/.xsession is an ideal place to set up your SSH agent! If you start an SSH agent in your ~/.xsession file, you can have the SSH agent available everywhere, automatically! Check it out: Put this into your ~/.xsession file, and make the file executable:

#!/run/current-system/profile/bin/bash
exec ssh-agent "$@"

When you invoke ssh-agent in this way, it executes the specified program in an environment where commands like ssh-add just work. It does this by setting environment variables such as SSH_AUTH_SOCK, which programs like ssh-add find and use automatically. Because GuixSD allows you to customize your desktop session like this, you can use any SSH agent you want in any desktop environments that you want, automatically!

Of course, if you're using GNOME Keyring version 3.27 or earlier (like I was), then this isn't quite enough. In that case, the SSH agent feature of GNOME Keyring will override the environment variables set by OpenSSH's ssh-agent, so commands like ssh-add will wind up communicating with the GNOME Keyring instead of the ssh-agent you launched in your ~/.xsession. This is bad because, as previously mentioned, GNOME Keyring version 3.27 and earlier don't support as many use cases as OpenSSH's ssh-agent.

How can we work around this problem?

Customize the GNOME Keyring

One heavy-handed solution would be to remove GNOME Keyring entirely. That would work, but then you would lose out on all the other great features that it has to offer. Surely we can do better!

The GNOME Keyring documentation explains that one way to disable the SSH agent feature is to include the --disable-ssh-agent configure flag when building it. Thankfully, Guix provides some ways to customize software in exactly this way!

Conceptually, we "just" have to do the following two things:

  • Customize the existing gnome-keyring package.

  • Make the gnome-desktop-service use our custom gnome-keyring package.

Create a Custom GNOME Keyring Package

Let's begin by defining a custom gnome-keyring package, which we'll call gnome-keyring-sans-ssh-agent. With Guix, we can do this in less than ten lines of code:

(define-public gnome-keyring-sans-ssh-agent
  (package
    (inherit gnome-keyring)
    (name "gnome-keyring-sans-ssh-agent")
    (arguments
     (substitute-keyword-arguments
         (package-arguments gnome-keyring)
       ((#:configure-flags flags)
        `(cons "--disable-ssh-agent" ,flags))))))

Don't worry if some of that code is unclear at first. I'll clarify it now!

In Guix, a <package> record like the one above is defined by a macro called define-record-type* (defined in the file guix/records.scm in the Guix source). It's similar to an SRFI-9 record. The inherit feature of this macro is very useful: it creates a new copy of an existing record, overriding specific fields in the new copy as needed.

In the above, we define gnome-keyring-sans-ssh-agent to be a copy of the gnome-keyring package, and we use inherit to change the name and arguments fields in that new copy. We also use the substitute-keyword-arguments macro (defined in the file guix/utils.scm in the Guix source) to add --disable-ssh-agent to the list of configure flags defined in the gnome-keyring package. The effect of this is to define a new GNOME Keyring package that is built exactly the same as the original, but in which the SSH agent is disabled.

I'll admit this code may seem a little opaque at first, but all code does when you first learn it. Once you get the hang of things, you can customize packages any way you can imagine. If you want to learn more, you should read the docstrings for the define-record-type* and substitute-keyword-arguments macros in the Guix source code. It's also very helpful to grep the source code to see examples of how these macros are used in practice. For example:

$ # Search the currently installed Guix for the current user.
$ grep -r substitute-keyword-arguments ~/.config/guix/latest
$ # Search the Guix Git repository, assuming you've checked it out here.
$ grep -r substitute-keyword-arguments ~/guix

Use the Custom GNOME Keyring Package

OK, we've created our own custom GNOME Keyring package. Great! Now, how do we use it?

In GuixSD, the GNOME desktop environment is treated as a system service. To make GNOME use our custom GNOME Keyring package, we must somehow customize the gnome-desktop-service (defined in the file gnu/services/desktop.scm) to use our custom package. How do we customize a service? Generally, the answer depends on the service. Thankfully, many of GuixSD's services, including the gnome-desktop-service, follow a similar pattern. In this case, we "just" need to pass a custom <gnome-desktop-configuration> record to the gnome-desktop-service procedure in our operating system declaration, like this:

(operating-system

  ...

  (services (cons*
             (gnome-desktop-service
              #:config my-gnome-desktop-configuration)
             %desktop-services)))

Here, the cons* procedure just adds the GNOME desktop service to the %desktop-services list, returning the new list. For details, please refer to the Guile manual.

Now the question is: what should my-gnome-desktop-configuration be? Well, if we examine the definition of this record type in the Guix source, we see the following:

(define-record-type* <gnome-desktop-configuration> gnome-desktop-configuration
  make-gnome-desktop-configuration
  gnome-desktop-configuration?
  (gnome-package gnome-package (default gnome)))

The gnome package referenced here is a "meta" package: it exists only to aggregate many GNOME packages together, including gnome-keyring. To see its definition, we can simply invoke guix edit gnome, which opens the file where the package is defined:

(define-public gnome
  (package
    (name "gnome")
    (version (package-version gnome-shell))
    (source #f)
    (build-system trivial-build-system)
    (arguments '(#:builder (mkdir %output)))
    (propagated-inputs
     ;; TODO: Add more packages according to:
     ;;       <https://packages.debian.org/jessie/gnome-core>.
     `(("adwaita-icon-theme"        ,adwaita-icon-theme)
       ("baobab"                    ,baobab)
       ("font-cantarell"            ,font-cantarell)
       [... many packages omitted for brevity ...]
       ("gnome-keyring"             ,gnome-keyring)
       [... many packages omitted for brevity ...]
    (synopsis "The GNU desktop environment")
    (home-page "https://www.gnome.org/")
    (description
     "GNOME is the graphical desktop for GNU.  It includes a wide variety of
applications for browsing the web, editing text and images, creating
documents and diagrams, playing media, scanning, and much more.")
    (license license:gpl2+)))

Apart from being a little long, this is just a normal package definition. We can see that gnome-keyring is included in the list of propagated-inputs. So, we need to create a replacement for the gnome package that uses our gnome-keyring-sans-ssh-agent instead of gnome-keyring. The following package definition accomplishes that:

(define-public gnome-sans-ssh-agent
  (package
    (inherit gnome)
    (name "gnome-sans-ssh-agent")
    (propagated-inputs
     (map (match-lambda
            ((name package)
             (if (equal? name "gnome-keyring")
                 (list name gnome-keyring-sans-ssh-agent)
                 (list name package))))
          (package-propagated-inputs gnome)))))

As before, we use inherit to create a new copy of the gnome package that overrides the original name and propagated-inputs fields. Since Guix packages are just defined using good old Scheme, we can use existing language features like map and match-lambda to manipulate the list of propagated inputs. The effect of the above is to create a new package that is the same as the gnome package but uses gnome-keyring-sans-ssh-agent instead of gnome-keyring.

Now that we have gnome-sans-ssh-agent, we can create a custom <gnome-desktop-configuration> record and pass it to the gnome-desktop-service procedure as follows:

(operating-system

  ...

  (services (cons*
             (gnome-desktop-service
              #:config (gnome-desktop-configuration
                        (gnome-package gnome-sans-ssh-agent)))
             %desktop-services)))

Wrapping It All Up

Finally, you need to run the following commands as root to create and boot into the new system generation (replace MY-CONFIG with the path to the customized operating system configuration file):

# guix system reconfigure MY-CONFIG
# reboot

After you log into GNOME, any time you need to use SSH, the stock SSH agent from OpenSSH that you started in your ~/.xsession file will be used instead of the GNOME Keyring's SSH agent. It just works! Note that it still works even if you select a non-GNOME desktop session (like XFCE) at the SLiM login screen, since the ~/.xsession is not tied to any particular desktop session.

In the unfortunate event that something went wrong and things just aren't working when you reboot, don't worry: with GuixSD, you can safely roll back to the previous system generation via the usual mechanisms. For example, you can run this from the command line to roll back:

# guix system roll-back
# reboot

This is one of the great benefits that comes from the fact that Guix follows the functional software deployment model. However, note that because the ~/.xsession file (like many files in your home directory) is not managed by Guix, you must manually undo the changes that you made to it in order to roll back fully.

Conclusion

I hope this helps give you some ideas for how you can customize your own GuixSD system to make it exactly what you want it to be. Not only can you customize your desktop session via your ~/.xsession file, but Guix also provides tools for you to modify any of the default packages or services to suit your specific needs.

Happy hacking!

Notices

CC0

To the extent possible under law, Chris Marusich has waived all copyright and related or neighboring rights to this article, "Customize GuixSD: Use Stock SSH Agent Everywhere!". This work is published from: United States.

The views expressed in this article are those of Chris Marusich and do not necessarily reflect the views of his past, present, or future employers.

About GNU Guix

GNU Guix is a transactional package manager for the GNU system. The Guix System Distribution or GuixSD is an advanced distribution of the GNU system that relies on GNU Guix and respects the user's freedom.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. Guix uses low-level mechanisms from the Nix package manager, except that packages are defined as native Guile modules, using extensions to the Scheme language. GuixSD offers a declarative approach to operating system configuration management, and is highly customizable and hackable.

GuixSD can be used on i686, x86_64, and armv7 machines. It is also possible to use Guix on top of an already installed GNU/Linux system, including on mips64el and aarch64.

26 May, 2018 03:00PM by Chris Marusich

health @ Savannah

openSUSE donates more Raspberry Pis to the GNU Health project

Dear community:

Today, in the context of the openSUSE Conference 2018 (oSC18), openSUSE donated 10 Raspberry Pis to the GNU Health project.

GNU Health Embedded is a project that delivers GNU Health on single-board machines, like the Raspberry Pi.

The #GNUHealthEmbedded project delivers a ready-to-run, full-blown installation of GNU Health: the GNU Health kernel, the database, and even a demo database that can be installed.

The user only needs to point the client to the server address.

The new Raspberry Pis will include:

  • Latest openSUSE Leap 15
  • GNU Health 3.4 server
  • Offline documentation
  • Lab interfaces

It will also include the Federation-related packages, so it can act as a relay or as a node in the distributed, federated model.

Thank you so much to openSUSE for their generous donation and commitment to the GNU Health project, as an active member and as a sponsor!

Main news from openSUSE portal:
https://news.opensuse.org/2018/05/26/opensuse-donates-10-more-raspberry-pis-to-gnu-health/

26 May, 2018 01:02PM by Luis Falcon

May 25, 2018

Sylvain Beucler

Testing GNU FreeDink in your browser

Ever wanted to try this weird GNU FreeDink game, but never had the patience to install it?
Today, you can play it with a single click :)

Play GNU FreeDink

This is a first version that can be polished further but it works quite well.
This is the original C/C++/SDL2 code with a few tweaks, cross-compiled to WebAssembly (and an alternate version in asm.js) with emscripten.
Nothing brand new I know, but things are getting smoother, and WebAssembly is definitely a performance boost.

I like distributed and autonomous tools, so I'm generally not inclined to web-based solutions.
In this case however, this is a local version of the game. There's no server side. Savegames are in your browser local storage. Even importing D-Mods (game add-ons) is performed purely locally in the in-memory virtual FS with a custom .tar.bz2 extractor cross-compiled to WebAssembly.
And you don't have to worry about all these Store policies (and Distros policies^W^W^W.

I'm interested in feedback on how well this works for you in your browsers and devices.

I'm also interested in tips on how to place LibreJS tags - this is all free JavaScript.

25 May, 2018 11:46PM

FSF Blogs

Success for net neutrality, success for free software

We've had great success with the United States Senate voting in support of net neutrality! Congratulations and thank you to everyone in the US for contacting your congresspeople, and all of you who helped spread the word.

However, it's not over yet. Here are more actions you can take if you're in the United States.

Now that the Congressional Review Act (CRA) resolution has passed the Senate, it moves to the House of Representatives. Just as we asked you to call your senators, now it's time to call your House representatives. Find their contact info here and use the script below to ask them to support the reinstatement of net neutrality protections.

The timing hasn't been set for future votes and hearings yet, but that's no reason to wait: make sure your representatives know how you feel.

Looking for a sample script?

Hello,

I live in CITY/STATE. I am calling to urge you to support net neutrality.

I hope REPRESENTATIVE will do the right thing and vote for the CRA to overturn the FCC's repeal of net neutrality protections.

Thank you for your time.

Want some bonus points?

If your senator voted in support of the CRA, call and thank them for their work. (See a list of senators and their votes here.)

  • Call your senator directly. Find their number here.

  • Or, dial the Senate at (202) 224-3121 and they will connect you.

We'll be updating you as things develop.

Share on social media!

Tell your friends by sharing on social media! Use the hashtags netneutrality and freesoftware.

Net neutrality in the US is a global issue

Even if you don't live in the US, what happens with net neutrality here still matters to you.

When companies -- and free software projects -- within the US are burdened by extra fees being charged by Internet Service Providers (ISPs), everyone in the world loses access to these pages and services. For example, blocking VoIP services in one country affects communication at a global scale. Access to the tools we use to build, download, and share free software is at risk when net neutrality is no longer there to protect them -- and users -- from ISPs controlling access to those sites.

Read these posts by the Free Software Foundation (FSF)

Want to know even more? Check out these posts from the FSF.

Looking for more ways to support digital rights? Consider becoming an FSF member today!

25 May, 2018 06:25PM

GNU Spotlight with Mike Gerwitz: 18 new GNU releases!

For announcements of most new GNU releases, subscribe to the info-gnu mailing list: https://lists.gnu.org/mailman/listinfo/info-gnu.

To download: nearly all GNU software is available from https://ftp.gnu.org/gnu/, or preferably one of its mirrors from https://www.gnu.org/prep/ftp.html. You can use the URL https://ftpmirror.gnu.org/ to be automatically redirected to a (hopefully) nearby and up-to-date mirror.

A number of GNU packages, as well as the GNU operating system as a whole, are looking for maintainers and other assistance: please see https://www.gnu.org/server/takeaction.html#unmaint if you'd like to help. The general page on how to help GNU is at https://www.gnu.org/help/help.html.

If you have a working or partly working program that you'd like to offer to the GNU project as a GNU package, see https://www.gnu.org/help/evaluation.html.

As always, please feel free to write to us at maintainers@gnu.org with any GNUish questions or suggestions for future installments.

25 May, 2018 04:08PM

May 23, 2018

GNUnet News

May 22, 2018

parallel @ Savannah

GNU Parallel 20180522 ('Great March of Return') released

GNU Parallel 20180522 ('Great March of Return') has been released. It is available for download at: http://ftpmirror.gnu.org/parallel/

Quote of the month:

Gnu parallel seems to work fine.
Sucking up the life out of all cores, poor machine ahahahahah
-- osxreverser

New in this release:

  • --tty allows for more programs accessing /dev/tty in parallel. Some programs require tty access without using it.
  • env_parallel --session will record names in current environment in $PARALLEL_IGNORED_NAMES and exit. It is only used with env_parallel, and can work like --record-env but in a single session.
  • Bug fixes and man page updates.

GNU Parallel - For people who live life in the parallel lane.

About GNU Parallel

GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with: (wget -O - pi.dk/3 || curl pi.dk/3/) | bash

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your commandline will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2018): GNU Parallel 2018, April 2018, https://doi.org/10.5281/zenodo.1146014.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/ Identi.ca/Google+/Twitter/Facebook/Linkedin/mailing lists
  • Get the merchandise https://gnuparallel.threadless.com/designs/gnu-parallel
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference

If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)

If GNU Parallel saves you money:

About GNU SQL

GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.

About GNU Niceload

GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.

22 May, 2018 10:57PM by Ole Tange

May 21, 2018

FSF Blogs

Friday Free Software Directory IRC meetup time: May 25th starting at 12:00 p.m. EDT/16:00 UTC

Help improve the Free Software Directory by adding new entries and updating existing ones. Every Friday we meet on IRC in the #fsf channel on irc.freenode.org.

Tens of thousands of people visit directory.fsf.org each month to discover free software. Each entry in the Directory contains a wealth of useful information, from basic category and descriptions, to providing detailed info about version control, IRC channels, documentation, and licensing info that has been carefully checked by FSF staff and trained volunteers.

When a user comes to the Directory, they know that everything in it is free software, has only free dependencies, and runs on a free OS. With over 16,000 entries, it is a massive repository of information about free software.

While the Directory has been and continues to be a great resource to the world for many years now, it has the potential to be a resource of even greater value. But it needs your help! And since it's a MediaWiki instance, it's easy for anyone to edit and contribute to the Directory.

Forty years ago, at a panel discussion, American management consultant Marilyn Loden coined the term "glass ceiling" to describe the invisible career barriers facing women. Two generations later, Loden notes that the problem is still very much alive, which shows just how insidious it is. To honor Loden's contribution, this week's theme for the Directory meetup is business software.

If you are eager to help, and you can't wait or are simply unable to make it onto IRC on Friday, our participation guide will provide you with all the information you need to get started on helping the Directory today! There are also weekly Directory Meeting pages that everyone is welcome to contribute to before, during, and after each meeting. To see the meeting start time in your time zone, run this in GNU bash: date --date='TZ="America/New_York" 12:00 this Fri'

21 May, 2018 03:43PM

Andy Wingo

correct or inotify: pick one

Let's say you decide that you'd like to see what some other processes on your system are doing to a subtree of the file system. You don't want to have to change how those processes work -- you just want to see what files those processes create and delete.

One approach would be to just scan the file-system tree periodically, enumerating its contents. But when the file system tree is large and the change rate is low, that's not an optimal thing to do.

Fortunately, Linux provides an API to allow a process to receive notifications on file-system change events, called inotify. So you open up the inotify(7) manual page, and are greeted with this:

With careful programming, an application can use inotify to efficiently monitor and cache the state of a set of filesystem objects. However, robust applications should allow for the fact that bugs in the monitoring logic or races of the kind described below may leave the cache inconsistent with the filesystem state. It is probably wise to do some consistency checking, and rebuild the cache when inconsistencies are detected.

It's not exactly reassuring is it? I mean, "you had one job" and all.

Reading down a bit farther, I thought that with some "careful programming", I could get by. After a day of trying, I am now certain that it is impossible to build a correct recursive directory monitor with inotify, and I am not even sure that "good enough" solutions exist.

pitfall the first: buffer overflow

Fundamentally, inotify races the monitoring process with all other processes on the system. Events are delivered to the monitoring process via a fixed-size buffer that can overflow, and the monitoring process provides no back-pressure on the system's rate of filesystem modifications. With inotify, you have to be ready to lose events.

This I think is probably the easiest limitation to work around. The kernel can let you know when the buffer overflows, and you can tweak the buffer size. Still, it's a first indication that perfect is not possible.

pitfall the second: now you see it, now you don't

This one is the real kicker. Say you get an event that says that a file "frenemies.txt" has been created in the directory "/contacts/". You go to open the file -- but is it still there? By the time you get around to looking for it, it could have been deleted, or renamed, or maybe even created again or replaced! This is a TOCTTOU race, built-in to the inotify API. It is literally impossible to use inotify without this class of error.

The canonical solution to this kind of issue in the kernel is to use file descriptors instead. Instead of or possibly in addition to getting a name with the file change event, you get a descriptor to a (possibly-unlinked) open file, which you would then be responsible for closing. But that's not what inotify does. Oh well!

pitfall the third: race conditions between inotify instances

When you inotify a directory, you get change notifications for just that directory. If you want to get change notifications for subdirectories, you need to open more inotify instances and poll on them all. However, now you have N² problems: as poll and the like return an unordered set of readable file descriptors, each with their own ordering, you no longer have access to a linear order in which changes occurred.

It is impossible to build a recursive directory watcher that definitively says "ok, first /contacts/frenemies.txt was created, then /contacts was renamed to /peeps, ..." because you have no ordering between the different watches. You don't know that there was ever even a time that /contacts/frenemies.txt was an accessible file name; it could have been only ever openable as /peeps/frenemies.txt.

Of course, this is the most basic ordering problem. If you are building a monitoring tool that actually wants to open files -- good luck bubster! It literally cannot be correct. (It might work well enough, of course.)

reflections

As far as I am aware, inotify came out to address the needs of desktop search tools like the belated Beagle (11/10 good pupper just trying to get his pup on). Especially in the days of spinning metal, grovelling over the whole hard-drive was a real non-starter, especially if the search database should be up-to-date.

But after looking into inotify, I start to see why someone at Google said that desktop search was in some ways harder than web search -- I mean we all struggle to find files on our own machines, even now, 15 years after the whole dnotify/inotify thing started. Part of it is that, given the choice between supporting reliable, fool-proof file system indexes on the one hand, and overclocking the IOPS benchmarks on the other, the kernel gave us inotify. I understand it, but inotify still sucks.

I dunno about you all but whenever I've had to document such an egregious uncorrectable failure mode as any of the ones in the inotify manual, I have rewritten the software instead. In that spirit, I hope that some day we shall send inotify to the pet cemetery, to rest in peace beside Beagle.

21 May, 2018 02:29PM by Andy Wingo

nano @ Savannah

GNU nano 2.9.7 was released

Accumulated changes over the last five releases include: the ability to bind a key to a string (text and/or escape sequences), a default color of bright white on red for error messages, an improvement to the way the Scroll-Up and Scroll-Down commands work, and the new --afterends option to make Ctrl+Right (next word) stop at the end of a word instead of at the beginning. Check it out.

21 May, 2018 10:36AM by Benno Schulenberg

May 18, 2018

FSF Events

Richard Stallman - "We must legislate to block collection of personal data" (HOPE, New York, NY)

Richard Stallman will be speaking at The Circle of HOPE (2018-07-22–24).

"We must legislate to block collection of personal data": With surveillance so pervasive, weak measures can only nibble around the edges. To restore privacy, we need strong measures. Companies are so adept at manufacturing users' consent that the requirement hardly hampers their surveillance, so what we need nowadays is to put strict limits on what data systems can collect.

Richard Stallman's speech will be nontechnical and the public is encouraged to attend.

Location: 18th floor, Hotel Pennsylvania, 401 Seventh Avenue at 33rd Street (15 Penn Plaza), New York, NY 10001

Please fill out our contact form, so that we can contact you about future events in and around New York City.

18 May, 2018 01:35PM

May 17, 2018

Richard Stallman will be in La Plata, Argentina

Richard Stallman's speech will be nontechnical and open to the public; everyone is invited to attend.

The title is to be determined.

Location: Facultad de informática de la Universidad Nacional de La Plata (calle 50 y 120), La Plata, Argentina

Please fill out this form so that we can contact you about future events in the Buenos Aires region.

17 May, 2018 07:40PM

May 16, 2018

Andy Wingo

lightweight concurrency in lua

Hello, all! Today I'd like to share some work I have done recently as part of the Snabb user-space networking toolkit. Snabb is mainly about high-performance packet processing, but it also needs to communicate with management-oriented parts of network infrastructure. These communication needs are performed by a dedicated manager process, but that process has many things to do, and can't afford to make blocking operations.

Snabb is written in Lua, which doesn't have built-in facilities for concurrency. What we'd like is to have fibers. Fortunately, Lua's coroutines are powerful enough to implement fibers. Let's do that!

fibers in lua

First we need a scheduling facility. Here's the smallest possible scheduler: simply a queue of tasks and a function to run those tasks.

local task_queue = {}

function schedule_task(thunk)
   table.insert(task_queue, thunk)
end

function run_tasks()
   local queue = task_queue
   task_queue = {}
   for _,thunk in ipairs(queue) do thunk() end
end

For our purposes, a task is just a function that will be called with no arguments.
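
To make the contract concrete, here's a trivial usage sketch that only exercises the two functions above:

schedule_task(function () print("hello") end)
schedule_task(function () print("world") end)
run_tasks()   -- prints "hello" and then "world"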

Now let's build fibers. This is easier than you might think!

local current_fiber = false

function spawn_fiber(fn)
   local fiber = coroutine.create(fn)
   schedule_task(function () resume_fiber(fiber) end)
end

function resume_fiber(fiber, ...)
   current_fiber = fiber
   local ok, err = coroutine.resume(fiber, ...)
   current_fiber = nil
   if not ok then
      print('Error while running fiber: '..tostring(err))
   end
end

function suspend_current_fiber(block, ...)
   -- The block function should arrange to reschedule
   -- the fiber when it becomes runnable.
   block(current_fiber, ...)
   return coroutine.yield()
end

Here, a fiber is simply a coroutine underneath. Suspending a fiber suspends the coroutine. Resuming a fiber runs the coroutine. If you're unfamiliar with coroutines, or coroutines in Lua, maybe have a look at the lua-users wiki page on the topic.

The difference between a fibers facility and just coroutines is that with fibers, you have a scheduler as well. Very much like Scheme's call-with-prompt, coroutines are one of those powerful language building blocks that should rarely be used directly; concurrent programming needs more structure than what Lua offers.

If you're following along, it's probably worth it here to think how you would implement yield based on these functions. A yield implementation should yield control to the scheduler, and resume the fiber on the next scheduler turn. The answer is here.
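
For reference, here is one possible sketch, consistent with the primitives above (the name yield_current_fiber is ours, and the answer linked from the original post may differ in details): suspend the current fiber with a block function that simply reschedules it, so it runs again on the next scheduler turn.

function yield_current_fiber()
   -- Reschedule the fiber immediately; it resumes on the next run_tasks().
   return suspend_current_fiber(function (fiber)
      schedule_task(function () resume_fiber(fiber) end)
   end)
end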

communication

Once you have fibers and a scheduler, you have concurrency, which means that if you're not careful, you have a mess. Here I think the Go language got the essence of the idea exactly right: Do not communicate by sharing memory; instead, share memory by communicating.

Even though Lua doesn't support multiple machine threads running concurrently, concurrency between fibers can still be fraught with bugs. Tony Hoare's Communicating Sequential Processes showed that we can avoid a class of these bugs by treating communication as a first-class concept.

Happily, the Concurrent ML project showed that it's possible to build these first-class communication facilities as a library, provided the language you are working in has threads of some kind, and fibers are enough. Last year I built a Concurrent ML library for Guile Scheme, and when in Snabb we had a similar need, I ported that code over to Lua. As it's a new take on the problem in a different language, I think I've been able to simplify things even more.

So let's take a crack at implementing Concurrent ML in Lua. In CML, the fundamental primitive for communication is the operation. An operation represents the potential for communication. For example, if you have a channel, it would have methods to return "get operations" and "put operations" on that channel. Actually receiving or sending a message on a channel occurs by performing those operations. One operation can be performed many times, or not at all.

Compared to a system like Go, for example, there are two main advantages of CML. The first is that CML allows non-deterministic choice between a number of potential operations in a generic way. For example, you can construct an operation that, when performed, will either get on one channel or wait for a condition variable to be signalled, whichever comes first. In Go, you can only select between operations on channels.

The other interesting part of CML is that operations are built from a uniform protocol, and so users can implement new kinds of operations. Compare again to Go where all you have are channels, and nothing else.

The CML operation protocol consists of three related functions: try, which attempts to directly complete an operation in a non-blocking way; block, which is called after a fiber has suspended, and which arranges to resume the fiber when the operation completes; and wrap, which is called on the result of a successfully performed operation.

In Lua, we can call this an implementation of an operation, and create it like this:

function new_op_impl(try, block, wrap)
   return { try=try, block=block, wrap=wrap }
end

Now let's go ahead and write the guts of CML: the operation implementation. We'll represent an operation as a Lua object with two methods. The perform method will attempt to perform the operation, and return the resulting value. If the operation can complete immediately, the call to perform will return directly. Otherwise, perform will suspend the current fiber and arrange to continue only when the operation completes.

The wrap method "decorates" an operation, returning a new operation that, if and when it completes, will "wrap" the result of the completed operation with a function, by applying the function to the result. It's useful to distinguish the sub-operations of a non-deterministic choice from each other.

Here our new_op function will take an array of operation implementations and return an operation that, when performed, will synchronize on the first available operation. As you can see, it already has the equivalent of Go's select built in.

function new_op(impls)
   local op = { impls=impls }
   
   function op.perform()
      for _,impl in ipairs(impls) do
         local success, val = impl.try()
         if success then return impl.wrap(val) end
      end
      local function block(fiber)
         local suspension = new_suspension(fiber)
         for _,impl in ipairs(impls) do
            impl.block(suspension, impl.wrap)
         end
      end
      local wrap, val = suspend_current_fiber(block)
      return wrap(val)
   end

   function op.wrap(f)
      local wrapped = {}
      for _, impl in ipairs(impls) do
         local function wrap(val)
            return f(impl.wrap(val))
         end
         local impl = new_op_impl(impl.try, impl.block, wrap)
         table.insert(wrapped, impl)
      end
      return new_op(wrapped)
   end

   return op
end

There's only one thing missing there, which is new_suspension. When you go to suspend a fiber because none of the operations that it's trying to do can complete directly (i.e. all of the try functions of its impls returned false), at that point the corresponding block functions will publish the fact that the fiber is waiting. However the fiber only waits until the first operation is ready; subsequent operations becoming ready should be ignored. The suspension is the object that manages this state.

function new_suspension(fiber)
   local waiting = true
   local suspension = {}
   function suspension.waiting() return waiting end
   function suspension.complete(wrap, val)
      assert(waiting)
      waiting = false
      local function resume()
         resume_fiber(fiber, wrap, val)
      end
      schedule_task(resume)
   end
   return suspension
end

As you can see, the suspension's complete method is also the bit that actually arranges to resume a suspended fiber.

Finally, just to round out the implementation, here's a function implementing non-deterministic choice from among a number of sub-operations:

function choice(...)
   local impls = {}
   for _, op in ipairs({...}) do
      for _, impl in ipairs(op.impls) do
         table.insert(impls, impl)
      end
   end
   return new_op(impls)
end

on cml

OK, I'm sure this seems a bit abstract at this point. Let's implement something concrete in terms of these primitives: channels.

Channels expose two similar but different kinds of operations: put operations, which try to send a value, and get operations, which try to receive a value. If there's a sender already waiting to send when we go to perform a get_op, the operation continues directly, and we resume the sender; otherwise the receiver publishes its suspension to a queue. The put_op case is similar.

Finally we add some synchronous put and get convenience methods, in terms of their corresponding CML operations.

function new_channel()
   local ch = {}
   -- Queues of suspended fibers waiting to get or put values
   -- via this channel.
   local getq, putq = {}, {}

   local function default_wrap(val) return val end
   local function is_empty(q) return #q == 0 end
   local function peek_front(q) return q[1] end
   local function pop_front(q) return table.remove(q, 1) end
   local function push_back(q, x) q[#q+1] = x end

   -- Since a suspension could complete in multiple ways
   -- because of non-deterministic choice, it could be that
   -- suspensions on a channel's putq or getq are already
   -- completed.  This helper removes already-completed
   -- suspensions.
   local function remove_stale_entries(q)
      local i = 1
      while i <= #q do
         if q[i].suspension.waiting() then
            i = i + 1
         else
            table.remove(q, i)
         end
      end
   end

   -- Make an operation that if and when it completes will
   -- rendezvous with a receiver fiber to send VAL over the
   -- channel.  Result of performing operation is nil.
   function ch.put_op(val)
      local function try()
         remove_stale_entries(getq)
         if is_empty(getq) then
            return false, nil
         else
            local remote = pop_front(getq)
            remote.suspension.complete(remote.wrap, val)
            return true, nil
         end
      end
      local function block(suspension, wrap)
         remove_stale_entries(putq)
         push_back(putq, {suspension=suspension, wrap=wrap, val=val})
      end
      return new_op({new_op_impl(try, block, default_wrap)})
   end

   -- Make an operation that if and when it completes will
   -- rendezvous with a sender fiber to receive one value from
   -- the channel.  Result is the value received.
   function ch.get_op()
      local function try()
         remove_stale_entries(putq)
         if is_empty(putq) then
            return false, nil
         else
            local remote = pop_front(putq)
            remote.suspension.complete(remote.wrap)
            return true, remote.val
         end
      end
      local function block(suspension, wrap)
         remove_stale_entries(getq)
         push_back(getq, {suspension=suspension, wrap=wrap})
      end
      return new_op({new_op_impl(try, block, default_wrap)})
   end

   function ch.put(val) return ch.put_op(val).perform() end
   function ch.get()    return ch.get_op().perform()    end

   return ch
end

a wee example

You might be wondering what it's like to program with channels in Lua, so here's a little example that shows a prime sieve based on channels. It's not a great example of concurrency in that it's not an inherently concurrent problem, but it's cute to show computations in terms of infinite streams.

function prime_sieve(count)
   local function sieve(p, rx)
      local tx = new_channel()
      spawn_fiber(function ()
         while true do
            local n = rx.get()
            if n % p ~= 0 then tx.put(n) end
         end
      end)
      return tx
   end

   local function integers_from(n)
      local tx = new_channel()
      spawn_fiber(function ()
         while true do
            tx.put(n)
            n = n + 1
         end
      end)
      return tx
   end

   local function primes()
      local tx = new_channel()
      spawn_fiber(function ()
         local rx = integers_from(2)
         while true do
            local p = rx.get()
            tx.put(p)
            rx = sieve(p, rx)
         end
      end)
      return tx
   end

   local done = false
   spawn_fiber(function()
      local rx = primes()
      for i=1,count do print(rx.get()) end
      done = true
   end)

   while not done do run_tasks() end
end

Here you also see an example of running the scheduler in the last line.
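
For instance, a hypothetical invocation, assuming the definitions above are loaded:

prime_sieve(10)   -- prints the first ten primes (2 through 29), one per line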

where next?

Let's put this into perspective: in a couple hundred lines of code, we've gone from minimal Lua to a language with lightweight multitasking, extensible CML-based operations, and CSP-style channels; truly a delight.

There are a number of possible ways to extend this code. One of them is to implement true multithreading, if the language you are working in supports that. In that case there are some small protocol modifications to take into account; see the notes on the Guile CML implementation and especially the Manticore Parallel CML project.

The implementation above is pleasantly small, but it could be faster with the choice of more specialized data structures. I think interested readers probably see a number of opportunities there.

In a library, you might want to avoid the global task_queue and implement nested or multiple independent schedulers, and of course in a parallel situation you'll want core-local schedulers as well.

The implementation above has no notion of time. What we did in the Snabb implementation of fibers was to implement a timer wheel, inspired by Juho Snellman's Ratas, and then add that timer wheel as a task source to Snabb's scheduler. In Snabb, every time the equivalent of run_tasks() is called, a scheduler asks its sources to schedule additional tasks. The timer wheel implementation schedules expired timers. It's straightforward to build CML timeout operations in terms of timers.
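
To give a flavor of what a timeout operation might look like without a timer wheel, here is a deliberately naive sketch in terms of the protocol above; the names sleep_op and run_expired_timers are hypothetical, timers live in a flat list, and coarse, second-granularity os.time() stands in for a real clock purely for illustration.

local timers = {}   -- pending {deadline=..., suspension=..., wrap=...} entries

function sleep_op(seconds)
   local deadline = os.time() + seconds
   local function try() return os.time() >= deadline, nil end
   local function block(suspension, wrap)
      table.insert(timers, {deadline=deadline, suspension=suspension, wrap=wrap})
   end
   return new_op({new_op_impl(try, block, function (val) return val end)})
end

-- Call this from the main loop, alongside run_tasks(), to fire expired timers.
function run_expired_timers()
   local now, pending = os.time(), {}
   for _, t in ipairs(timers) do
      if t.suspension.waiting() then
         if t.deadline <= now then
            t.suspension.complete(t.wrap, nil)
         else
            table.insert(pending, t)
         end
      end
   end
   timers = pending
end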

Additionally, your system probably has other external sources of communication, such as sockets. The trick to integrating sockets into fibers is to suspend the current fiber whenever an operation on a file descriptor would block, and arrange to resume it when the operation can proceed. Here's the implementation in Snabb.

The only difficult bit with getting nice nonblocking socket support is that you need to be able to suspend the calling thread when you see the EWOULDBLOCK condition, and for coroutines that is often only possible if you implemented the buffered I/O yourself. In Snabb that's what we did: we implemented a compatible replacement for Lua's built-in streams, in Lua. That lets us handle EWOULDBLOCK conditions in a flexible manner. Integrating epoll as a task source also lets us sleep when there are no runnable tasks.

Likewise in the Snabb context, we are also working on a TCP implementation. In that case you want to structure TCP endpoints as fibers, and arrange to suspend and resume them as appropriate, while also allowing timeouts. I think the scheduler and CML patterns are going to allow us to do that without much trouble. (Of course, the TCP implementation will give us lots of trouble!)

Additionally your system might want to communicate with fibers from other threads. It's entirely possible to implement CML on top of pthreads, and it's entirely possible as well to support communication between pthreads and fibers. If this is interesting to you, see Guile's implementation.

When I talked about fibers in an earlier article, I built them in terms of delimited continuations. Delimited continuations are fun and more expressive than coroutines, but it turns out that for fibers, all you need is the expressive power of coroutines -- multi-shot continuations aren't useful. Also I think the presentation might be more straightforward. So if all your language has is coroutines, that's still good enough.

There are many more kinds of standard CML operations; implementing those is also another next step. In particular, I have found semaphores and condition variables to be quite useful. Also, standard CML supports "guards", invoked when an operation is performed, and "nacks", invoked when an operation is definitively not performed because a choice selected some other operation. These can be layered on top; see the Parallel CML paper for notes on "primitive CML".
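
As an illustration of how little is needed to add a new kind of operation, here is a hypothetical sketch of a one-shot condition variable built on the same protocol (not the Guile fibers or Snabb implementation, just the obvious translation):

function new_condition()
   local signalled, waitq = false, {}
   local cond = {}
   function cond.signal()
      signalled = true
      for _, entry in ipairs(waitq) do
         if entry.suspension.waiting() then
            entry.suspension.complete(entry.wrap, nil)
         end
      end
      waitq = {}
   end
   -- An operation that completes once signal() has been called.
   function cond.wait_op()
      local function try() return signalled, nil end
      local function block(suspension, wrap)
         table.insert(waitq, {suspension=suspension, wrap=wrap})
      end
      return new_op({new_op_impl(try, block, function (val) return val end)})
   end
   function cond.wait() return cond.wait_op().perform() end
   return cond
end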

Also, the choice operator above is left-biased: it will prefer earlier impls over later ones. You might want to not always start with the first impl in the list.
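
One simple (and admittedly crude) way to mitigate that bias, sketched here as an assumption rather than what Snabb or Guile actually do, is to shuffle the flattened impls when building the choice:

function shuffled_choice(...)
   local impls = {}
   for _, op in ipairs({...}) do
      for _, impl in ipairs(op.impls) do
         table.insert(impls, impl)
      end
   end
   -- Fisher-Yates shuffle, so the try order varies between constructed choices.
   for i = #impls, 2, -1 do
      local j = math.random(i)
      impls[i], impls[j] = impls[j], impls[i]
   end
   return new_op(impls)
end

Note that this fixes an order once per constructed operation; re-randomizing on every perform would need a small change to new_op itself.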

The scheduler shown above is the simplest thing I could come up with. You may want to experiment with other scheduling algorithms, e.g. capability-based scheduling, or kill-safe abstractions. Do it!

Or, it could be you already have a scheduler, like some kind of main loop that's already there. Cool, you can use it directly -- all that fibers needs is some way to schedule functions to run.

godspeed

In summary, I think Concurrent ML should be better-known. Its simplicity and expressivity make it a valuable part of any concurrent system. Already in Snabb it helped us solve some longstanding gnarly issues by making the right solutions expressible.

As Adam Solove says, Concurrent ML is great, but it has a branding problem. Its ideas haven't penetrated the industrial concurrent programming world to the extent that they should. This article is another attempt to try to get the word out. Thanks to Adam for the observation that CML is really a protocol; I'm sure the concepts could be made even more clear, but at least this is a step forward.

All the code in this article is up on a gitlab snippet along with instructions for running the example program from the command line. Give it a go, and happy hacking with CML!

16 May, 2018 03:17PM by Andy Wingo

GUIX Project news

Tarballs, the ultimate container image format

A year ago we introduced guix pack, a tool that allows you to create “application bundles” from a set of Guix package definitions. On your Guix machine, you run:

guix pack -S /opt/gnu/bin=bin guile gnutls guile-json

and you get a tarball containing your favorite programming language implementation and a couple of libraries, where /opt/gnu/bin is a symlink to the bin directory containing, in this case, the guile command. Add -f docker and, instead of a tarball, you get an image in the Docker format that you can pass to docker load on any machine where Docker is installed. Overall that’s a relatively easy way to share software stacks with machines that do not run Guix.

The tarball format is plain and simple, it’s the one we know and love, and it’s been there “forever” as its name suggests. The tarball that guix pack produces can be readily extracted on another machine, one that doesn’t run Guix, and you’re done. The problem though, is that you’ll need to either unpack the tarball in the root file system or to play tricks with the unshare command, as we saw in the previous post. Why can’t we just extract such a tarball in our home directory and directly run ./opt/gnu/bin/guile for instance?

Relocatable packages

The main issue is that, except in the uncommon case where developers went to great lengths to make it possible (as with GUB, see the *-reloc*.patch files), packages built for GNU/Linux are not relocatable. ELF files embed things like the absolute file name of the dynamic linker, directories where libraries are to be searched for (they can be relative file names with $ORIGIN but usually aren't), and so on; furthermore, it's very common to embed things like the name of the directory that contains locale data or other application-specific data. For Guix-built software, all these are absolute file names under /gnu/store so Guix-built binaries won't run unless those /gnu/store files exist.

On machines where support for “user namespaces” is enabled, we can easily “map” the directory where users unpacked the tarball that guix pack produced to /gnu/store, as shown in the previous post:

$ tar xf /path/to/pack.tar.gz
$ unshare -mrf chroot . /opt/gnu/bin/guile --version
guile (GNU Guile) 2.2.0

It does the job but remains quite tedious. Can’t we automate that?

guix pack --relocatable

The --relocatable (or -R) option of guix pack, which landed a few days ago, produces tarballs with automatically relocatable binaries. Back to our earlier example, let’s say you produce a tarball with this new option:

guix pack --relocatable -S /bin=bin -S /etc=etc guile gnutls guile-json

You can send the resulting tarball to any machine that runs the kernel Linux (it doesn’t even have to be GNU/Linux) with user namespace support—which, unfortunately, is disabled by default on some distros. There, as a regular user, you can run:

$ tar xf /path/to/pack.tar.gz
$ source ./etc/profile    # define ’GUILE_LOAD_PATH’, etc.
$ ./bin/guile
guile: warning: failed to install locale
GNU Guile 2.2.3
Copyright (C) 1995-2017 Free Software Foundation, Inc.

Guile comes with ABSOLUTELY NO WARRANTY; for details type `,show w'.
This program is free software, and you are welcome to redistribute it
under certain conditions; type `,show c' for details.

Enter `,help' for help.
scheme@(guile-user)> ,use(json)
scheme@(guile-user)> ,use(gnutls)

We were able to run Guile and to use our Guile libraries since sourcing ./etc/profile augmented the GUILE_LOAD_PATH environment variable that tells Guile where to look for libraries. Indeed we can see it by inspecting the value of %load-path at the Guile prompt:

scheme@(guile-user)> %load-path
$1 = ("/gnu/store/w9xd291967cvmdp3m0s7739icjzgs8ns-profile/share/guile/site/2.2" "/gnu/store/b90y3swxlx3vw2yyacs8cz59b8cbpbw5-guile-2.2.3/share/guile/2.2" "/gnu/store/b90y3swxlx3vw2yyacs8cz59b8cbpbw5-guile-2.2.3/share/guile/site/2.2" "/gnu/store/b90y3swxlx3vw2yyacs8cz59b8cbpbw5-guile-2.2.3/share/guile/site" "/gnu/store/b90y3swxlx3vw2yyacs8cz59b8cbpbw5-guile-2.2.3/share/guile")

Wait, it’s all /gnu/store! As it turns out, guix pack --relocatable created a wrapper around guile that populates /gnu/store in the mount namespace of the process. Even though /gnu/store does not exist on that machine, our guile process “sees” our packages under /gnu/store:

scheme@(guile-user)> ,use(ice-9 ftw)
scheme@(guile-user)> (scandir "/gnu/store")
$2 = ("." ".." "0249nw8c7z626fw1fayacm160fpd543k-guile-json-0.6.0R" "05dvazr5wfh7lxx4zi54zfqnx6ha8vxr-bash-static-4.4.12" "0jawbsyafm93nxf4rcmkf1rsk7z03qfa-libltdl-2.4.6" "0z1r7ai6syi2qnf5z8w8n25b1yv8gdr4-info-dir" "1n59wjm6dbvc38b320iiwrxra3dg7yv8-libunistring-0.9.8" "2fg01r58vv9w41kw6drl1wnvqg7rkv9d-libtasn1-4.12" "2ifmksc425qcysl5rkxkbv6yrgc1w9cs-gcc-5.5.0-lib" "2vxvd3vls7c8i9ngs881dy1p5brc7p85-gmp-6.1.2" "4sqaib7c2dfjv62ivrg9b8wa7bh226la-glibc-2.26.105-g0890d5379c" "5kih0kxmipzjw10c53hhckfzkcs7c8mm-gnutls-3.5.13" "8hxm8am4ll05sa8wlwgdq2lj4ddag464-zlib-1.2.11" "90vz0r78bww7dxhpa7vsiynr1rcqhyh4-nettle-3.4" "b90y3swxlx3vw2yyacs8cz59b8cbpbw5-guile-2.2.3" "c4jrwbv7qckvnqa7f3h7bd1hh8rbg72y-libgc-7.6.0" "f5lw5w4nxs6p5gq0c2nb3jsrxc6mmxbi-libgc-7.6.0" "hjxic0k4as384vn2qp0l964isfkb0blb-guile-json-0.6.0" "ksyja5lbwy0mpskvn4rfi5klc00c092d-libidn2-2.0.4" "l15mx9lrwdflyvmb4a05va05v5yqizg5-libffi-3.2.1" "mm0zclrzj3y7rj74hzyd0f224xly04fh-bash-minimal-4.4.12" "vgmln3b639r68vvy75xhcbi7d2w31mx1-pkg-config-0.29.2" "vz3zfmphvv4w4y7nffwr4jkk7k4s0rfs-guile-2.2.3" "w9xd291967cvmdp3m0s7739icjzgs8ns-profile" "x0jf9ckd30k3nhs6bbhkrxsjmqz8phqd-nettle-3.4" "x8z6cr7jggs8vbyh0xzfmxbid63z6y83-guile-2.2.3R" "xbkl3nx0fqgpw2ba8jsjy0bk3nw4q3i4-gnutls-3.5.13R" "xh4k91vl0i8nlyrmvsh01x0mz629w5a9-gmp-6.1.2" "yx12x8v4ny9f6fipk8285jgfzqavii83-manual-database" "zksh1n0p9x903kqbvswgwy2vsk2b7255-libatomic-ops-7.4.8")

The wrapper is a small statically-linked C program. (Scheme would be nice and would allow us to reuse call-with-container, but it would also take up more space.) All it does is create a child process with separate mount and user namespaces, which in turn mounts the tarball’s /gnu/store to /gnu/store, bind-mounts other entries from the host root file system, and chroots into that. The result is a binary that sees everything a “normal” program sees on the host, but with the addition of /gnu/store, with minimal startup overhead.

In a way, it’s a bit of a hack: for example, what gets bind-mounted in the mount namespace of the wrapped program is hard-coded, which is OK, but some flexibility would be welcome (things like Flatpak’s sandbox permissions, for instance). Still, that it Just Works is a pretty cool feature.

Tarballs vs. Snap, Flatpak, Docker, & co.

Come to think of it: if you’re a developer, guix pack is probably one of the easiest ways to create an “application bundle” to share with your users; and as a user, these relocatable tarballs are about the simplest thing you can deal with since you don’t need anything but tar—well, and user namespace support. Plus, since they are bit-reproducible, anyone can rebuild them to ensure they do not contain malware or to check the provenance and licensing of their contents.

Application bundles cannot replace full-blown package management, which allows users to upgrade, get security updates, use storage and memory efficiently, and so on. For the purposes of quickly sharing packages with users or with Guix-less machines, though, you might find Guix packs to be more convenient than Snap, Flatpak, or Docker. Give it a spin and let us know!

About GNU Guix

GNU Guix is a transactional package manager for the GNU system. The Guix System Distribution or GuixSD is an advanced distribution of the GNU system that relies on GNU Guix and respects the user's freedom.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. Guix uses low-level mechanisms from the Nix package manager, except that packages are defined as native Guile modules, using extensions to the Scheme language. GuixSD offers a declarative approach to operating system configuration management, and is highly customizable and hackable.

GuixSD can be used on i686, x86_64, and armv7 machines. It is also possible to use Guix on top of an already installed GNU/Linux system, including on mips64el and aarch64.

16 May, 2018 09:00AM by Ludovic Courtès

May 15, 2018

FSF News

Zerocat Chipflasher "board-edition-1" now FSF-certified to Respect Your Freedom

chipflasher in action

This is the first device under The Zerocat Label to receive RYF certification. The Chipflasher enables users to flash devices such as laptops, allowing them to replace proprietary software with free software like Libreboot. While users are able to purchase RYF-certified laptops that already come with Libreboot pre-loaded, for the first time ever they are capable of freeing their own laptops using an RYF-certified device. The Zerocat Chipflasher board-edition-1 is now available for purchase as a limited edition at http://www.zerocat.org/shop-en.html. These first ten limited edition boards are signed by Kai Mertens, chief developer of The Zerocat Label, and will help to fund additional production and future development of RYF-certified devices.

"The certification of the Zerocat Chipflasher is a big step forward for the Respects Your Freedom program. Replacing proprietary boot firmware is one of the first tasks for creating a laptop that meets RYF's criteria, and now anyone can do so for their own devices with a flasher that is itself RYF-certified," said the FSF's executive director, John Sullivan.

An RYF-certified flashing device could also help to grow the number of laptops available via the RYF program.

"When someone sets out to start their own business selling RYF-certified devices, they now have a piece of hardware they can trust to help them with that process. We hope to see even more laptops made available under the program, and having those laptops flashed with a freedom-respecting device will help to set those retailers on the right path from the start," said the FSF's licensing & compliance manager, Donald Robertson, III.

"Free software tools carry the inherent message 'Let’s help our neighbors!', as this is basically the spirit of the licenses that these tools are shipped with. From a global perspective, we are all 'neighbors,' no matter which country. And from this point of view, I would be happy if the flasher will be regarded as a contribution towards worldwide cooperation and friendship," said Mertens.

To learn more about the Respects Your Freedom device certification program, including details on the certification of all these devices, please visit https://fsf.org/ryf.

Hardware sellers interested in applying for certification can consult https://www.fsf.org/resources/hw/endorsement/criteria.

About the Free Software Foundation

The Free Software Foundation, founded in 1985, is dedicated to promoting computer users' right to use, study, copy, modify, and redistribute computer programs. The FSF promotes the development and use of free (as in freedom) software -- particularly the GNU operating system and its GNU/Linux variants -- and free documentation for free software. The FSF also helps to spread awareness of the ethical and political issues of freedom in the use of software, and its Web sites, located at fsf.org and gnu.org, are an important source of information about GNU/Linux. Donations to support the FSF's work can be made at https://donate.fsf.org. Its headquarters are in Boston, MA, USA.

More information about the FSF, as well as important information for journalists and publishers, is at https://www.fsf.org/press.

About The Zerocat Label

The Zerocat Label has been set up in order to focus on free-design hardware development.

The development of free designs for hardware has many benefits. One of the most important is probably its capacity to conserve the Earth’s resources, and those of people. When we share knowledge, we work towards global solutions instead of individual profits.

Our current approach is to check which free design computer chips are available today, and to start creating something useful with them, even if bigger goals remain out of reach at this time. Creating tools in a modular manner allows us to combine them and to achieve complex solutions in the future.

Our next tasks are to find an answer to questions like “What free-design tools do we need?” and “What are we able to accomplish right now?” We hope to be able to build other free-design tools of wide interest by answering those questions. It is an experimental endeavor.

Media Contacts

Donald Robertson, III
Licensing and Compliance Manager
Free Software Foundation
+1 (617) 542 5942
licensing@fsf.org

Kai Mertens
Chief Developer
The Zerocat Label
zerocat@posteo.de

Image Copyright 2018 Kai Mertens, Licensed under Creative Commons Attribution-ShareAlike 4.0.

15 May, 2018 02:05PM

May 14, 2018

FSF Blogs

Friday Free Software Directory IRC meetup time: May 18th starting at 12:00 p.m. EDT/16:00 UTC

Help improve the Free Software Directory by adding new entries and updating existing ones. Every Friday we meet on IRC in the #fsf channel on irc.freenode.org.

Tens of thousands of people visit directory.fsf.org each month to discover free software. Each entry in the Directory contains a wealth of useful information, from basic categories and descriptions to detailed information about version control, IRC channels, documentation, and licensing, all of which has been carefully checked by FSF staff and trained volunteers.

When a user comes to the Directory, they know that everything in it is free software, has only free dependencies, and runs on a free OS. With over 16,000 entries, it is a massive repository of information about free software.

While the Directory has been and continues to be a great resource to the world for many years now, it has the potential to be a resource of even greater value. But it needs your help! And since it's a MediaWiki instance, it's easy for anyone to edit and contribute to the Directory.

During this week in 1928, Mickey Mouse made his first appearance in the cartoon Plane Crazy. However, his first appearance before the public was officially in the short Steamboat Willie, since it was released first. 70 years later, Mickey became infamous in a new way: as the derisive namesake of the US Copyright Term Extension Act (CTEA) of 1998, which extended US copyright terms and thereby harmonized them with those of the EU.

To honor this dubious milestone, the theme of the Directory meeting is graphics and drawing software. One immediate item that needs a refresh is GIMP. The other big drawing projects could be helped by adding screenshots and updating the entries. Of course, if you know of any projects that aren't listed, we hope you will take the time to add them to the Directory.

If you are eager to help, and you can't wait or are simply unable to make it onto IRC on Friday, our participation guide will provide you with all the information you need to get started on helping the Directory today! There are also weekly Directory Meeting pages that everyone is welcome to contribute to before, during, and after each meeting. To see the meeting start time in your time zone, run this in GNU bash: date --date='TZ="America/New_York" 12:00 this Fri'

14 May, 2018 07:53PM

May 12, 2018

German Arias

New release of eiffel-iup

A new version of eiffel-iup, a Liberty Eiffel wrapper for the IUP toolkit, is now available. With it you can build graphical applications in Eiffel using Liberty Eiffel, the GNU implementation of the Eiffel language. Happy hacking.

12 May, 2018 01:16AM by Germán Arias

May 11, 2018

librejs @ Savannah

LibreJS 7.14 released

GNU LibreJS aims to address the JavaScript problem described in Richard Stallman's article The JavaScript Trap. LibreJS is a free add-on for GNU IceCat and other Mozilla-based browsers. It blocks nonfree nontrivial JavaScript while allowing JavaScript that is free and/or trivial. https://www.gnu.org/philosophy/javascript-trap.en.html

The source tarball for this release can be found at:
http://ftp.gnu.org/gnu/librejs/librejs-7.14.tar.gz
http://ftp.gnu.org/gnu/librejs/librejs-7.14.tar.gz.sig

The installable extension file (compatible with Mozilla-based browsers version >= v57) is available here:
http://ftp.gnu.org/gnu/librejs/librejs-7.14.xpi
http://ftp.gnu.org/gnu/librejs/librejs-7.14.xpi.sig

GPG key: 05EF 1D2F FE61 747D 1FC8 27C3 7FAC 7D26 472F 4409
https://savannah.gnu.org/project/memberlist-gpgkeys.php?group=librejs

Version 7.14 is an extensive bugfix release that builds on the work done by Nathan Nichols, Nyk Nyby and Zach Wick to port LibreJS to the new WebExtensions format, and previously on the contributions by Loic Duros and myself among others.

Changes since version 7.13 (excerpt from the git changelog):

  • Check global licenses for pages
  • Enable legacy license matching and hash whitelist matching
  • Refactor whitelisting of domains
  • Generalize comment styles for license matching
  • Use multi-part fetch mechanism for read_script
  • Improved system that prevents parsing non-html documents
  • Do not process non-javascript scripts (json, templates, etc)
  • Do not run license_read on whitelisted scripts
  • Prevent parsing inline scripts if there is a global license
  • Prevent evaluation of external scripts, as they are always nontrivial
  • Avoid parsing empty whitespace sections
  • Correct tab and badge initialization to prevent race conditions
  • Generalize gpl-3.0 license text
  • Improved logging
  • Disable whitelisted and blacklisted sections on display panel for now
  • Hide per-script action buttons until functionality works
  • Fixes to the CSS plus showing links instead of hashes

11 May, 2018 10:28PM by Ruben Rodriguez

FSF News

Contract opportunity: JavaScript Developer for GNU LibreJS

The Free Software Foundation (FSF), a Massachusetts 501(c)(3) charity with a worldwide mission to protect computer user freedom, seeks a contract JavaScript Developer to work on GNU LibreJS, a free browser add-on that addresses the problem of nonfree JavaScript described in Richard Stallman's article The JavaScript Trap. This is a temporary, paid contract opportunity, with specific deliverables, hours, term, and payment to be determined with the selected candidate. We anticipate the contract being approximately 80 hours of full-time work, with the possibility of extension depending on results and project status.

Reporting to our technical team, the contractor will work to implement important missing features in the LibreJS extension. We are looking for someone with experience in backend JavaScript development, WebExtensions, and NodeJS/Browserify. Experience with software licensing is a plus. This is an urgent priority; we are seeking someone who is able to start now. Contractors can be based anywhere, but must be able to attend telephone meetings during Eastern Daylight Time business hours.

Examples of deliverables include, but are not limited to:

  • Web Labels support, plus Web Labels in JSON format
  • SPDX support
  • Unit and functional testing
  • User interface improvements
  • New and updated documentation

LibreJS is a critical component of the FSF's campaign for user freedom on the Web, and for freeing JavaScript specifically. Building on past contributions, this is an opportunity to help unlock a world where users can better protect their freedom as they browse, and collaborate with each other to make and share modified JavaScript.

Reference documentation

Proposal instructions

Proposals must be submitted via email to hiring@fsf.org. The email must contain the subject line "LibreJS Developer." A complete application should include:

  • Letter of interest
  • CV / portfolio with links to any previous work online, especially browser extensions
  • At least two recent client references

All materials must be in a free format. Email submissions that do not follow these instructions will probably be overlooked. No phone calls, please.

Proposals will be reviewed on a rolling basis until the contract is filled. To guarantee consideration, submit your proposal by Friday, May 18, 2018.

About the Free Software Foundation

The Free Software Foundation, founded in 1985, is dedicated to promoting computer users' right to use, study, copy, modify, and redistribute computer programs. The FSF promotes the development and use of free (as in freedom) software -- particularly the GNU operating system and its GNU/Linux variants -- and free documentation for free software. The FSF also helps to spread awareness of the ethical and political issues of freedom in the use of software, and its Web sites, located at fsf.org and gnu.org, are an important source of information about GNU/Linux. Donations to support the FSF's work can be made at https://donate.fsf.org. We are based in Boston, MA, USA.

11 May, 2018 02:18PM

FSF Events

Richard Stallman - "El software libre en la ética y en la práctica" (Barcelona, España)

Richard Stallman will speak about the goals and philosophy of the Free Software movement, and the status and history of the GNU operating system, which, together with the kernel Linux, is now used by tens of millions of people around the world.

This talk by Richard Stallman will be nontechnical and open to the public; everyone is invited to attend.

Location: Auditori UPC, Edifici Vertex, Universitat Politècnica de Catalunya, Carrer de Dulcet 3, 08034 Barcelona, Spain

Please fill out this form so that we can contact you about future events in the Barcelona area.

11 May, 2018 10:25AM

Richard Stallman to speak in Barcelona, Spain

Richard Stallman will be speaking at the Maker Faire (2018-06-16–17). His speech will be nontechnical, admission is gratis, and the public is encouraged to attend.

Speech topic to be determined.

Location: CaixaForum Barcelona,​ Av. de Francesc Ferrer i Guàrdia 6-8, 08038 Barcelona, Spain

Important: While (gratis) registration for this event is required, please note that you have the option of registering anonymously and without running nonfree software, at the venue.

Please fill out our contact form, so that we can contact you about future events in and around Barcelona.

11 May, 2018 09:53AM

May 10, 2018

Richard Stallman to speak in Pato Branco, Brazil

Richard Stallman's speech will be nontechnical, admission is gratis, and the public is encouraged to attend.

Speech topic and start time to be determined.

Location: to be determined

Please fill out our contact form, so that we can contact you about future events in and around Pato Branco.

10 May, 2018 04:35PM

FSF Blogs

Friday Free Software Directory IRC meetup time: May 11th starting at 12:00 p.m. EDT/16:00 UTC

Help improve the Free Software Directory by adding new entries and updating existing ones. Every Friday we meet on IRC in the #fsf channel on irc.freenode.org.

Tens of thousands of people visit directory.fsf.org each month to discover free software. Each entry in the Directory contains a wealth of useful information, from basic categories and descriptions to detailed information about version control, IRC channels, documentation, and licensing, all of which has been carefully checked by FSF staff and trained volunteers.

When a user comes to the Directory, they know that everything in it is free software, has only free dependencies, and runs on a free OS. With over 16,000 entries, it is a massive repository of information about free software.

While the Directory has been and continues to be a great resource to the world for many years now, it has the potential to be a resource of even greater value. But it needs your help! And since it's a MediaWiki instance, it's easy for anyone to edit and contribute to the Directory.

On May 11, 1981, Bob Marley died from cancer. Whether you're a fan or not, his cultural significance cannot be overstated. For many, he is seen as an iconic proponent of peace and love. This week, the Directory will honor Bob Marley with a focus on freshening up the music software.

If you are eager to help, and you can't wait or are simply unable to make it onto IRC on Friday, our participation guide will provide you with all the information you need to get started on helping the Directory today! There are also weekly Directory Meeting pages that everyone is welcome to contribute to before, during, and after each meeting. To see the meeting start time in your time zone, run this in GNU bash: date --date='TZ="America/New_York" 12:00 this Fri'

10 May, 2018 04:19PM

May 09, 2018

GUIX Project news

Paper on reproducible bioinformatics pipelines with Guix

I’m happy to announce that the bioinformatics group at the Max Delbrück Center that I’m working with has released a preprint of a paper on reproducibility with the title Reproducible genomics analysis pipelines with GNU Guix.

We built a collection of bioinformatics pipelines called "PiGx" ("Pipelines in Genomix") and packaged them as first-class packages with GNU Guix. Then we looked at the degree to which the software achieves bit-reproducibility, analysed sources of non-determinism (e.g. time stamps), discussed experimental reproducibility at runtime (e.g. random number generators, the interface provided by the kernel and the GNU C library, etc) and commented on the practice of using “containers” (or application bundles) instead.

Reproducible builds are a crucial foundation for computational experiments. We hope that PiGx and the reproducibility analysis we presented in the paper can serve as a useful case study demonstrating the importance of a principled approach to computational reproducibility, and the effectiveness of Guix in the pursuit of reproducible software management.

09 May, 2018 10:00AM by Ricardo Wurmus

May 08, 2018

libredwg @ Savannah

Smokers and mirrors

I've set up continuous integration testing for all branches and pull requests at https://travis-ci.org/LibreDWG/libredwg/builds for GNU/Linux, and at https://ci.appveyor.com/project/rurban/libredwg for Windows, which also generates binaries (a DLL) automatically.

There's also an official GitHub mirror at https://github.com/LibreDWG/libredwg
where pull requests are accepted. This repo drives the CIs.
See https://github.com/LibreDWG/libredwg/releases for the nightly Windows builds.

The first alpha release should be in June, when all the new permissions are finalized.

08 May, 2018 04:13PM by Reini Urban

May 05, 2018

Parabola GNU/Linux-libre

[From Arch] js52 52.7.3-2 upgrade requires intervention

Due to the SONAME of /usr/lib/libmozjs-52.so not matching its file name, ldconfig created an untracked file /usr/lib/libmozjs-52.so.0. This is now fixed and both files are present in the package.

To pass the upgrade, remove /usr/lib/libmozjs-52.so.0 prior to upgrading.
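
Concretely, that amounts to something like the following, run as root (pacman -Syu here simply stands for your usual full system upgrade):

rm /usr/lib/libmozjs-52.so.0
pacman -Syu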

05 May, 2018 06:35AM by Omar Vega Ramos

May 02, 2018

guile-cv @ Savannah

Guile-CV version 0.1.9

Guile-CV version 0.1.9 is released! (May 2018)
Changes since the previous version

For a list of changes since the previous version, visit the NEWS file. For a complete description, consult the git summary and git log.

02 May, 2018 04:11AM by David Pirotte

April 26, 2018

GUIX Project news

Guix welcomes Outreachy, GSoC, and Guix-HPC interns

We are thrilled to announce that five people will join Guix as interns over the next few months! As part of Google’s Summer of Code (GSoC), under the umbrella of the GNU Project, three people are joining us:

  • Tatiana Sholokhova will work on a Web interface for the Guix continuous integration (CI) tool, Cuirass, similar in spirit to that of Hydra. Cuirass was started as part of GSoC 2016.
  • uniq10 will take over the build daemon rewrite in Scheme, a project started as part of last year's GSoC by reepca. The existing code lives in the guile-daemon branch. Results from last year already got us a long way towards a drop-in replacement of the current C++ code base.
  • Ioannis P. Koutsidis will work on implementing semantics similar to that of systemd unit files in the Shepherd, the “init system” (PID 1) used on GuixSD.

Through Outreachy, the inclusion program for groups underrepresented in free software and tech, one person will join:

Finally, we are welcoming one intern as part of the Guix-HPC effort:

  • Pierre-Antoine Rouby arrived a couple of weeks ago at Inria for a four-month internship on improving the user experience of Guix in high-performance computing (HPC) and reproducible scientific workflows. Pierre-Antoine has already contributed a couple of HPC package definitions and will next look at tools such as hpcguix-web, guix pack, and more.

Gábor Boskovits, Ricardo Wurmus, and Ludovic Courtès will be their primary mentors, and the whole Guix crowd will undoubtedly help and provide guidance as it has always done. Welcome to all of you!

About GNU Guix

GNU Guix is a transactional package manager for the GNU system. The Guix System Distribution or GuixSD is an advanced distribution of the GNU system that relies on GNU Guix and respects the user's freedom.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. Guix uses low-level mechanisms from the Nix package manager, except that packages are defined as native Guile modules, using extensions to the Scheme language. GuixSD offers a declarative approach to operating system configuration management, and is highly customizable and hackable.

GuixSD can be used on i686, x86_64, and armv7 machines. It is also possible to use Guix on top of an already installed GNU/Linux system, including on mips64el and aarch64.

26 April, 2018 03:00PM by Ludovic Courtès

April 24, 2018

Guix on Android!

Last year I thought to myself: since my phone is just a computer running an operating system called Android (or Replicant!), and Android is based on the Linux kernel, it's just another foreign distribution I could install GNU Guix on, right? It turned out that was absolutely the case. Today I was reminded on IRC of my attempt last year at installing GNU Guix on my phone. Hence this blog post. I'll try to give you all the knowledge and commands required to install it on your own Android device.

Requirements

First of all, you will need an Android or Replicant device. Just like any installation of GNU Guix, you will need root access on that device. Unfortunately, in the Android world this is not very often the case by default. Then, you need a cable to connect your computer to your phone. Once the hardware is in place, you will need adb (the Android Debugging Bridge):

guix package -i adb

Exploring the device

Every Android device has its own partitioning layout, but basically it works like this:

  1. A boot partition for booting the device
  2. A recovery partition for booting the device in recovery mode
  3. A data partition for user data, including applications, the user home, etc
  4. A system partition with the base system and applications. This is the place where phone companies put their own apps so you can't remove them
  5. A vendor partition for drivers
  6. Some other partitions

During the boot process, the bootloader looks for the boot partition. It doesn't contain a filesystem, but only a gzipped cpio archive (the initramfs) and the kernel. The bootloader loads them in memory and the kernel starts using the initramfs. Then, the init system from this initramfs loads partitions in their respective directories: the system partition in /system, the vendor partition in /vendor and the data partition in /data. Other partitions may be loaded.

And that's it. Android's root filesystem is actually the initramfs so any modification to its content will be lost after a reboot. Thankfully(?), Android devices are typically not rebooted often.
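
If you are curious about the layout on your own device, a quick way to see where these partitions end up (assuming adb is already connected) is:

adb shell mount | grep -E '/system|/vendor|/data'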

Another issue is the Android C library (libc), called Bionic: it has less functionality and works completely differently from the GNU libc. Since Guix is built with the Glibc, we will need to do something to make it work on our device.

Installing the necessary files

We will follow the binary installation guide. My hardware is aarch64, so I download the corresponding binary release.

Now it's time to start using adb. Connect your device and obtain root privileges for adb. You may have to authorize root access to the computer from your phone:

adb root

Now, we will transfer some necessary files:

adb push guix-binary-* /data

# Glibc needs these two files for networking.
adb push /etc/protocols /system/etc/
adb push /etc/services /system/etc/

# ... and this one to perform DNS queries.  You probably need
# to change nameservers if you use mobile data.
adb push /etc/resolv.conf /system/etc/

Note that some devices may not have /system/etc available. In that case, /etc may be available. If none is available, create the directory by using adb shell to get a shell on your device, then push the files to that new directory.
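
For instance, a minimal sketch of that fallback, assuming your device lets you remount /system read-write:

adb shell mount -o remount,rw /system
adb shell mkdir /system/etc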

Installing Guix itself

Now all the necessary files are present on the device, so we can connect to a shell on the device:

adb shell

From that shell, we will install Guix. The root filesystem is mounted read-only as it doesn't make sense to modify it. Remember: it's a RAM filesystem. Remount it read-write and create the necessary directories:

mount -o remount,rw /
mkdir /gnu /var
mount -o remount,ro /

Now, we can't just copy the content of the binary archive to these folders, because the initramfs has a limited amount of space, and Guix complains when /gnu or /gnu/store is a symlink. One solution consists in installing the content of the binary tarball on an existing partition that has enough free space (you can't easily modify the partition layout), typically the data partition. Then this partition is mounted on both /var and /gnu.

Before that, you will need to find out what the data partition is in your system. Simply run mount | grep /data to see what partition was mounted.

We mount the partition, extract the tarball and move the contents to their final location:

mount /dev/block/bootdevice/by-name/userdata /gnu
mount /dev/block/bootdevice/by-name/userdata /var
cd /data
tar xf guix-binary-...
mv gnu/store .
mv var/guix .
rmdir gnu
rmdir var

Finally, we need to create users and groups for Guix to work properly. Since Bionic doesn't use /etc/passwd or /etc/group to store the users, we need to create them from scratch. Note the addition of the root user and group, as well as the nobody user.

# create guix users and root for glibc
cat > /etc/passwd << EOF
root:x:0:0:root:/data:/sbin/sh
nobody:x:99:99:nobody:/:/usr/bin/nologin
guixbuilder01:x:994:994:Guix build user 01:/var/empty:/usr/bin/nologin
guixbuilder02:x:993:994:Guix build user 02:/var/empty:/usr/bin/nologin
guixbuilder03:x:992:994:Guix build user 03:/var/empty:/usr/bin/nologin
guixbuilder04:x:991:994:Guix build user 04:/var/empty:/usr/bin/nologin
guixbuilder05:x:990:994:Guix build user 05:/var/empty:/usr/bin/nologin
guixbuilder06:x:989:994:Guix build user 06:/var/empty:/usr/bin/nologin
guixbuilder07:x:988:994:Guix build user 07:/var/empty:/usr/bin/nologin
guixbuilder08:x:987:994:Guix build user 08:/var/empty:/usr/bin/nologin
guixbuilder09:x:986:994:Guix build user 09:/var/empty:/usr/bin/nologin
guixbuilder10:x:985:994:Guix build user 10:/var/empty:/usr/bin/nologin
EOF

cat > /etc/group << EOF
root:x:0:root
guixbuild:x:994:guixbuilder01,guixbuilder02,guixbuilder03,guixbuilder04,guixbuilder05,guixbuilder06,guixbuilder07,guixbuilder08,guixbuilder09,guixbuilder10
EOF

Running Guix

First, we install the root profile somewhere:

export HOME=/data
ln -sf /var/guix/profiles/per-user/root/guix-profile \
         $HOME/.guix-profile

Now we can finally run the Guix daemon. Chrooting is impossible on my device so I had to disable it:

export PATH="$HOME/.guix-profile/bin:$HOME/.guix-profile/sbin:$PATH"
guix-daemon --build-users-group=guixbuild --disable-chroot &

Finally, it's a good idea to allow substitutes from hydra:

mkdir /etc/guix
guix archive --authorize < \
  $HOME/.guix-profile/share/guix/hydra.gnu.org.pub

Enjoy!

guix pull

Mobile phone running 'guix pull'.

Future work

So, now we can enjoy the Guix package manager on Android! One of the drawbacks is that after a reboot we will have to redo half of the steps: recreate /var and /gnu, and mount the partitions onto them. Every time you launch a shell, you will have to export the PATH to be able to run guix, and you will have to start guix-daemon manually. To solve all of these problems at once, you should modify the boot image. That's tricky and I've already put some effort into it, but the phone always ends up in a boot loop after I flash a modified boot image. The nice folks at #replicant suggested that I solder a cable to access a serial console where debug messages may show up. Let's see how many fingers I burn before I can boot a custom boot image!
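
In the meantime, here is a small sketch of a helper snippet that simply replays the steps above after each reboot (the guix-restart.sh name is hypothetical, and the partition path is the one from my device; adjust it to yours). Source it from a root shell so that the PATH export also applies to your session:

# guix-restart.sh -- hypothetical helper; source it from a root shell
# after each reboot to replay the post-reboot steps above
mount -o remount,rw /
mkdir -p /gnu /var
mount -o remount,ro /
mount /dev/block/bootdevice/by-name/userdata /gnu
mount /dev/block/bootdevice/by-name/userdata /var
export HOME=/data
export PATH="$HOME/.guix-profile/bin:$HOME/.guix-profile/sbin:$PATH"
guix-daemon --build-users-group=guixbuild --disable-chroot &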

About GNU Guix

GNU Guix is a transactional package manager for the GNU system. The Guix System Distribution or GuixSD is an advanced distribution of the GNU system that relies on GNU Guix and respects the user's freedom.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. Guix uses low-level mechanisms from the Nix package manager, except that packages are defined as native Guile modules, using extensions to the Scheme language. GuixSD offers a declarative approach to operating system configuration management, and is highly customizable and hackable.

GuixSD can be used on i686, x86_64, and armv7 machines. It is also possible to use Guix on top of an already installed GNU/Linux system, including on mips64el and aarch64.

24 April, 2018 08:00AM by Julien Lepiller

April 22, 2018

parallel @ Savannah

GNU Parallel 20180422 ('Tiangong-1') released

GNU Parallel 20180422 ('Tiangong-1') has been released. It is available for download at: http://ftpmirror.gnu.org/parallel/

Quote of the month:

Today I discovered GNU Parallel, and I don’t know what to do with all this spare time.
--Ryan Booker

New in this release:

  • --csv makes GNU Parallel parse the input sources as CSV. When used with --pipe it only passes full CSV-records (see the example after this list).
  • Time in --bar is printed as 1d02h03m04s.
  • Optimization of --tee: it spawns one process less per value.
  • Bug fixes and man page updates.
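
For example, a minimal sketch of the new --csv option (input.csv is a hypothetical file whose first two columns are echoed for each record):

parallel --csv echo 'first={1} second={2}' :::: input.csv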

GNU Parallel - For people who live life in the parallel lane.

About GNU Parallel

GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.
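
For example, a typical one-liner runs one job per input file, here compressing every log file in the current directory in parallel (the file names are just an illustration):

parallel gzip ::: *.log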

If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with: (wget -O - pi.dk/3 || curl pi.dk/3/) | bash

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your commandline will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2018): GNU Parallel 2018, April 2018, https://doi.org/10.5281/zenodo.1146014.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/ Identi.ca/Google+/Twitter/Facebook/Linkedin/mailing lists
  • Get the merchandise https://gnuparallel.threadless.com/designs/gnu-parallel
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference

If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)

If GNU Parallel saves you money:

About GNU SQL

GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.
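
For example, a hypothetical DBURL and query might look like this (user, password, host, and table names are placeholders; leaving off the query drops you into the interactive shell instead):

sql mysql://user:password@host/mydb "SELECT * FROM users;"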

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.

About GNU Niceload

GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.

22 April, 2018 09:19PM by Ole Tange

April 20, 2018

Parabola GNU/Linux-libre

[From Arch] glibc 2.27-2 and pam 1.3.0-2 may require manual intervention

The new version of glibc removes support for NIS and NIS+. The default /etc/nsswitch.conf file provided by the filesystem package already reflects this change. Please make sure to merge the pacnew file, if it exists, prior to upgrading.

NIS functionality can still be enabled by installing libnss_nis package. There is no replacement for NIS+ in the official repositories.

pam 1.3.0-2 no longer ships pam_unix2 module and pam_unix_*.so compatibility symlinks. Before upgrading, review PAM configuration files in the /etc/pam.d directory and replace removed modules with pam_unix.so. Users of pam_unix2 should also reset their passwords after such change. Defaults provided by pambase package do not need any modifications.
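
A couple of generic commands can help with both checks before you upgrade (the paths are the standard ones; nothing here is specific to this advisory):

find /etc -name '*.pacnew'                       # pacnew files still waiting to be merged
grep -rlE 'pam_unix2|pam_unix_' /etc/pam.d/      # PAM files still using the removed modules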

20 April, 2018 03:18PM by David P.

April 19, 2018

remotecontrol @ Savannah

April 08, 2018

mcron @ Savannah

GNU Mcron 1.1.1 released

We are pleased to announce the release of GNU Mcron 1.1.1,
representing 48 commits, by 1 person over 3 weeks.

About

GNU Mcron is a complete replacement for Vixie cron. It is used to run
tasks on a schedule, such as every hour or every Monday. Mcron is
written in Guile, so its configuration can be written in Scheme; the
original cron format is also supported.

https://www.gnu.org/software/mcron/

Download

Here are the compressed sources and a GPG detached signature[*]:
https://ftp.gnu.org/gnu/mcron/mcron-1.1.1.tar.gz
https://ftp.gnu.org/gnu/mcron/mcron-1.1.1.tar.gz.sig

Use a mirror for higher download bandwidth:
https://ftpmirror.gnu.org/mcron/mcron-1.1.1.tar.gz
https://ftpmirror.gnu.org/mcron/mcron-1.1.1.tar.gz.sig

[*] Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact. First, be sure to download both the .sig file
and the corresponding tarball. Then, run a command like this:

gpg --verify mcron-1.1.1.tar.gz.sig

If that command fails because you don't have the required public key,
then run this command to import it:

gpg --keyserver keys.gnupg.net --recv-keys 0ADEE10094604D37

and rerun the 'gpg --verify' command.

This release was bootstrapped with the following tools:
Autoconf 2.69
Automake 1.16.1
Makeinfo 6.5
Help2man 1.47.5

NEWS

  • Noteworthy changes in release 1.1.1 (2018-04-08) [stable]
    • Bug fixes

The "--disable-multi-user" configure variable is not reversed anymore.
'cron' and 'crontab' are now installed unless this option is used.

The programs now set the GUILE_LOAD_PATH and GUILE_LOAD_COMPILED_PATH
environment variables with the location of the installed Guile modules.

'next-year-from', 'next-year', 'next-month-from', 'next-month',
'next-day-from', 'next-day', 'next-hour-from', 'next-hour',
'next-minute-from', 'next-minute', 'next-second-from', and 'next-second' no
longer crash when passed an optional argument.
[bug introduced in mcron-1.1]

    • Improvements

Some basic tests for the installed programs can be run after 'make install'
with 'make installcheck'.

The configuration files are now processed using a deterministic order.

The test suite code coverage for mcron modules is now at 66.8% in terms of
number of lines (mcron-1.1 was at 23.7%).

08 April, 2018 03:46PM by Mathieu Lirzin

April 05, 2018

Trisquel GNU/Linux

Trisquel 8.0 LTS Flidas

Trisquel 8.0, codenamed "Flidas", is finally here! This release will be supported with security updates until April 2021. The first thing to acknowledge is that this arrival has been severely delayed, to the point where the next upstream release (Ubuntu 18.04 LTS) will soon be published. The good news is that the development of Trisquel 9.0 will start right away, and it should come out closer to the usual release schedule of "6 months after upstream release".

But this is not to say that we shouldn't be excited about Trisquel 8.0, quite the contrary! It comes with many improvements over Trisquel 7.0, and its core components (kernel, graphics drivers, web browser and e-mail client) are fully up to date and will receive continuous upgrades during Flidas' lifetime.

Trisquel 8.0 has benefited from extensive testing, as many people have been using the development versions as their main operating system for some time. On top of that, the Free Software Foundation has been using it to run the Libreplanet conference since last year, and it has been powering all of its new server infrastructure as well!

What's new?

The biggest internal change to the default edition is the switch from GNOME to MATE 1.12. The main reason for this change was that GNOME dropped support for their legacy desktop, which retained the GNOME 2.x user experience and didn't require 3D composition -- a feature that on many computers would still need non-free software to run at full speed. MATE provides a perfect drop-in replacement: it is very light and stable, and it retains all the user experience design that we are used to from previous Trisquel releases.

The next most important component is Abrowser 59 (based on Mozilla Firefox), which is not only fully featured and considerably faster than before, but has also been audited and tweaked to maximize the user's privacy without compromising on usability. Abrowser will not start any network connections on its own (most popular web browsers connect for extension updates, telemetry, geolocation, and other data collection as soon as you open them, even if you haven't typed an address yet!) and it has a list of easy-to-set, privacy-enhancing settings that the user can opt in to depending on their needs. As a companion to it, and based on Mozilla Thunderbird, the IceDove mail client is also fully updated and set up for privacy.

Trisquel 8.0 also comes with the following preinstalled packages:

  • Linux-libre 4.4 by default, 4.13 available (and newer versions will be published as an optional rolling release)
  • Xorg 7.7 with optional rolling-release updates
  • LibreOffice 5.1.4
  • VLC 2.2.2

Trisquel-mini (the light edition based on LXDE) uses the Midori web browser, Sylpheed email client, Abiword text editor, and GNOME-Mplayer media player as its main preinstalled components. We also have the Trisquel TOAST edition, based on the Sugar learning environment v112, complete with a selection of educational activities for K-12 and beyond. And of course, available from our repositories and mirrors are over 25,000 more free software packages you can run, study, improve, and share.

Support our effort

Trisquel is a non-profit project; you can contribute by becoming a member, donating, or buying from our store.

The MATE desktop
Boot menu
Installer
Privacy settings in Abrowser
Sugar environment
Trisquel mini with Midori web browser

05 April, 2018 03:10AM by quidam

April 04, 2018

Jose E. Marchesi

Rhhw Friday 1 June 2018 - Sunday 3 June 2018 @ Stockholm

The Rabbit Herd will be meeting the weekend from 1 June to 3 June 2018.

04 April, 2018 12:00AM

April 01, 2018

sed @ Savannah

sed-4.5 released [stable]

01 April, 2018 02:18AM by Jim Meyering

March 26, 2018

foliot @ Savannah

GNU Foliot version 0.9.7

GNU Foliot version 0.9.7 is released!

This is a maintenance release that brings GNU Foliot up to date with Guile 2.2, which introduced an incompatible (GOOPS-related) module change.

For a list of changes since the previous version, visit the NEWS file. For a complete description, consult the git summary and git log.

26 March, 2018 02:52AM by David Pirotte

March 24, 2018

FSF News

Public Lab and Karen Sandler are 2017 Free Software Awards winners

CAMBRIDGE, Massachusetts, USA – Saturday, March 24, 2018 – The Free Software Foundation (FSF) today announced the winners of the 2017 Free Software Awards at a ceremony held during the LibrePlanet 2018 conference at the Massachusetts Institute of Technology (MIT). FSF president Richard M. Stallman presented the Award for Projects of Social Benefit and the Award for the Advancement of Free Software.

The Award for Projects of Social Benefit is presented to a project or team responsible for applying free software, or the ideas of the free software movement, to intentionally and significantly benefit society. This award stresses the use of free software in service to humanity.

This year, Public Lab received the award, which was accepted by Liz Barry, Public Lab co-founder, organizer, and director of community development, and Jeff Warren, Public Lab co-founder and research director, on behalf of the entire Public Lab community.

Public Lab is a community and non-profit organization with the goal of democratizing science to address environmental issues. Their community-created tools and techniques utilize free software and low-cost devices to enable people at any level of technical skill to investigate environmental concerns.

Stallman noted how crucial Public Lab's work is to the global community, and also how their use of free software is crucial to their mission, saying that "the environmental and social problems caused by global heating are so large that they cannot rationally be denied. When studies concerning the causes and the effects of global heating, or the environmental impact of pollution, industry, and policy choices, are conducted using proprietary software, that is a gratuitous obstacle to replicating them.

"Public Lab gets the tools to study and protect the world into the hands of everyone -- and since they are free (libre) software, they respect both the people who use them, and the community that depends on the results."

Jeff Warren, speaking on behalf of the Public Lab community, added that using free software is part of their larger mission to take science out of the hands of the experts and allow everyday people to participate: "At Public Lab, we believe that generating knowledge is a powerful thing. We aim to open research from the exclusive hands of scientific experts. By doing so, communities facing environmental justice issues are able to own the science and advocate for the changes they want to see.

"Building free software, hardware, and open data is fundamental to our work in the Public Lab community, as we see it as a key part of our commitment to equity in addressing environmental injustice."

Public Lab folks with award

The Award for the Advancement of Free Software goes to an individual who has made a great contribution to the progress and development of free software, through activities that accord with the spirit of free software.

This year, it was presented to Karen Sandler, the Executive Director of the Software Freedom Conservancy, as well as a perennial LibrePlanet speaker and friend to the FSF. She is known for her advocacy for free software, particularly in relation to the software on medical devices: she led an initiative advocating for free software on implantable medical devices after exploring the issues surrounding the software on her own implanted medical device (a defibrillator), which regulates an inherited heart condition. Sandler has served as the Executive Director of the GNOME Foundation, where she now serves on the Board of Directors, and before that, she was General Counsel of the Software Freedom Law Center. Finally, she co-organizes Outreachy, the award-winning outreach program that organizes paid internships in free software for people who are typically underrepresented in these projects.

Stallman praised Sandler's dedication to free software, emphasizing how sharing her personal experience has provided a window into the importance of free software for a broader audience: "Her vivid warning about backdoored nonfree software in implanted medical devices has brought the issue home to people who never wrote a line of code.

"Her efforts, usually not in the public eye, to provide pro bono legal advice to free software organizations and to organize infrastructure for free software projects and copyleft defense, have been equally helpful."

Sandler explained that her dedication to promoting free software was inevitable, given her personal experience: "Coming to terms with a dangerous heart condition should never have cost me fundamental control over the technology that my life relies on," she said. "The twists and turns of my own life, including my professional work at Conservancy, led me to understand how software freedom is essential to society. This issue is personal not just for me but for anyone who relies on software, and today that means every single person."

Karen Sandler with award

About the Free Software Foundation

The Free Software Foundation, founded in 1985, is dedicated to promoting computer users' right to use, study, copy, modify, and redistribute computer programs. The FSF promotes the development and use of free (as in freedom) software -- particularly the GNU operating system and its GNU/Linux variants -- and free documentation for free software. The FSF also helps to spread awareness of the ethical and political issues of freedom in the use of software, and its Web sites, located at https://fsf.org and https://gnu.org, are an important source of information about GNU/Linux. Donations to support the FSF's work can be made at https://my.fsf.org/donate. Its headquarters are in Boston, MA, USA.

More information about the FSF, as well as important information for journalists and publishers, is at https://www.fsf.org/press.

Media Contacts

John Sullivan

Executive Director

Free Software Foundation

+1 (617) 542 5942

campaigns@fsf.org

24 March, 2018 10:50PM

March 23, 2018

health @ Savannah

Red Cross (Cruz Roja) implements GNU Health!

We are very happy and proud to announce that the Red Cross, Cruz Roja Mexicana, has implemented GNU Health in Mexico.

The implementation covers over 850 patients per day, and the following functionality has been implemented:

  • Social Medicine and Primary Care,
  • Health records and history,
  • Hospital Management,
  • Laboratory management,
  • Imaging Diagnostics,
  • Emergency and Ambulance management
  • Pharmacy
  • Human Resources
  • Financial Management

It has been implemented in three locations: Veracruz - Boca del Rio, Ver., Mexico. Congratulations to Cruz Roja and Soluciones de Mercado for the fantastic job!

---

We are proud to announce that the Mexican Red Cross (Cruz Roja Mexicana) has implemented GNU Health!

The implementation serves a daily population of more than 850 patients, with the following functionality:

  • Social Medicine and Primary Care
  • Medical records
  • Hospital management
  • Laboratory
  • Imaging diagnostics
  • Emergency and ambulance management
  • Pharmacy
  • Human Resources
  • Financial management

It has been installed in three locations: Veracruz - Boca del Río, Ver., Mexico. Congratulations to the Red Cross and to our colleagues at Soluciones de Mercado for the excellent work!

23 March, 2018 02:48PM by Luis Falcon

March 22, 2018

parallel @ Savannah

GNU Parallel 20180322 ('Hawking') released

GNU Parallel 20180322 ('Hawking') has been released. It is available for download at: http://ftpmirror.gnu.org/parallel/

Quote of the month:

If you aren’t nesting
gnu parallel calls in gnu parallel calls
I don’t know how you have fun.
-- Ernest W. Durbin III EWDurbin@twitter

New in this release:

  • niceload -p can now take multiple PIDs separated by commas
  • --timeout gives a warning when killing processes
  • --embed now uses the same code for all supported shells
  • --delay can now take arguments like 1h12m07s (see the example after this list)
  • Parallel. Straight from your command line https://medium.com/@alonisser/parallel-straight-from-your-command-line-feb6db8b6cee
  • Bug fixes and man page updates.
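
For example, a minimal sketch of the extended --delay syntax (the delay value and inputs are just an illustration):

parallel --delay 1m30s echo ::: one two three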

GNU Parallel - For people who live life in the parallel lane.

About GNU Parallel

GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with: (wget -O - pi.dk/3 || curl pi.dk/3/) | bash

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your commandline will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2011): GNU Parallel - The Command-Line Power Tool, ;login: The USENIX Magazine, February 2011:42-47.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/ Identi.ca/Google+/Twitter/Facebook/Linkedin/mailing lists
  • Get the merchandise https://gnuparallel.threadless.com/designs/gnu-parallel
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference

If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)

If GNU Parallel saves you money:

About GNU SQL

GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.

About GNU Niceload

GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.

22 March, 2018 08:06AM by Ole Tange

March 20, 2018

FSF News

LibrePlanet free software conference celebrates 10th anniversary, this weekend at MIT, March 24-25

CAMBRIDGE, Massachusetts, USA -- Tuesday, March 20, 2018 -- This weekend, the Free Software Foundation (FSF) and the Student Information Processing Board (SIPB) at the Massachusetts Institute of Technology (MIT) present the tenth annual LibrePlanet free software conference in Cambridge, March 24-25, 2018, at MIT. LibrePlanet is an annual conference for people who care about their digital freedoms, bringing together software developers, policy experts, activists, and computer users to learn skills, share accomplishments, and tackle challenges facing the free software movement. LibrePlanet 2018 will feature sessions for all ages and experience levels.

LibrePlanet's tenth anniversary theme is "Freedom Embedded." Embedded systems are everywhere, in cars, digital watches, traffic lights, and even within our bodies. We've come to expect that proprietary software's sinister aspects are embedded in software, digital devices, and our lives, too: we expect that our phones monitor our activity and share that data with big companies, that governments enforce digital restrictions management (DRM), and that even our activity on social Web sites is out of our control. This year's talks and workshops will explore how to defend user freedom in a society reliant on embedded systems.

Keynote speakers include Benjamin Mako Hill, social scientist, technologist, free software activist, and FSF board member, examining online collaboration and free software; Electronic Frontier Foundation senior staff technologist Seth David Schoen, discussing engineering tradeoffs and free software; Deb Nicholson, community outreach director for the Open Invention Network, talking about the key to longevity for the free software movement; and Free Software Foundation founder and president Richard Stallman, looking at current threats to and opportunities for free software, with a focus on embedded systems.

This year's LibrePlanet conference will feature over 50 sessions, such as The battle to free the code at the Department of Defense, Freedom, devices, and health, and Standardizing network freedom, as well as workshops on free software and photogrammetry, digital music making, and desktops for kids.

"For ten years, LibrePlanet has brought together free software enthusiasts and newcomers from around the world to exchange ideas, collaborate, and take on challenges to software freedom," said Georgia Young, program manager of the FSF. "But the conference is not purely academic -- it works to build the free software community, offering opportunities for those who cannot attend to participate remotely by watching a multi-channel livestream and joining the conversation online. And this year, we're proud to offer several kid-friendly workshops, encouraging earlier engagement with fun, ethical free software!"

Advance registration is closed, but attendees may register in person at the event. Admission is gratis for FSF Associate Members and students. For all other attendees, the cost of admission is $60 for one day, $90 for both days, and includes admission to the conference's social events. For those who cannot attend, this year's sessions will be streamed at https://libreplanet.org/2018/live/, and recordings will be available after the event at https://media.libreplanet.org/.

Anthropologist and author Gabriella Coleman was scheduled to give the opening keynote at LibrePlanet 2018, but was forced to cancel.

About LibrePlanet

LibrePlanet is the annual conference of the Free Software Foundation, and is co-produced by MIT's Student Information Processing Board. What was once a small gathering of FSF members has grown into a larger event for anyone with an interest in the values of software freedom. LibrePlanet is always gratis for associate members of the FSF and students. Sign up for announcements about the LibrePlanet conference here.

LibrePlanet 2017 was held at MIT from March 25-26, 2017. About 400 attendees from all over the world came together for conversations, demonstrations, and keynotes centered around the theme of "The Roots of Freedom." You can watch videos from past conferences at https://media.libreplanet.org, including keynotes by Kade Crockford of the ACLU of Massachusetts and Cory Doctorow, author and special consultant to the Electronic Frontier Foundation.

About the Free Software Foundation

The FSF, founded in 1985, is dedicated to promoting computer users' right to use, study, copy, modify, and redistribute computer programs. The FSF promotes the development and use of free (as in freedom) software -- particularly the GNU operating system and its GNU/Linux variants -- and free documentation for free software. The FSF also helps to spread awareness of the ethical and political issues of freedom in the use of software, and its Web sites, located at https://fsf.org and https://gnu.org, are an important source of information about GNU/Linux. Donations to support the FSF's work can be made at https://donate.fsf.org. Its headquarters are in Boston, MA, USA.

More information about the FSF, as well as important information for journalists and publishers, is at https://www.fsf.org/press.

Media Contact

Georgia Young
Program Manager
Free Software Foundation
+1 (617) 542 5942
campaigns@fsf.org

20 March, 2018 02:39PM

March 19, 2018

mcron @ Savannah

GNU Mcron 1.1 released

We are pleased to announce the release of GNU Mcron 1.1,
representing 124 commits, by 3 people over 4 years.

Download

Here are the compressed sources and a GPG detached signature[*]:
https://ftp.gnu.org/gnu/mcron/mcron-1.1.tar.gz
https://ftp.gnu.org/gnu/mcron/mcron-1.1.tar.gz.sig

Use a mirror for higher download bandwidth:
https://ftpmirror.gnu.org/mcron/mcron-1.1.tar.gz
https://ftpmirror.gnu.org/mcron/mcron-1.1.tar.gz.sig

[*] Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact. First, be sure to download both the .sig file
and the corresponding tarball. Then, run a command like this:

gpg --verify mcron-1.1.tar.gz.sig

If that command fails because you don't have the required public key,
then run this command to import it:

gpg --keyserver keys.gnupg.net --recv-keys 0ADEE10094604D37

and rerun the 'gpg --verify' command.

This release was bootstrapped with the following tools:
Autoconf 2.69
Automake 1.16.1

NEWS

Noteworthy changes in release 1.1 (2018-03-19) [stable]
    • New features

The 'job' procedure now has a '#:user' keyword argument, which allows
specifying a different user to run the job.
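
For instance, a job definition in an mcron configuration file might now
look like this (a rough sketch: the Vixie-style time string and the
command are illustrative examples only; the '#:user' keyword itself is
the new feature described above):

;; Run updatedb every day at 04:30, as root rather than the default user.
(job "30 4 * * *"        ; Vixie-style time specification (illustrative)
     "updatedb"          ; shell command to execute (illustrative)
     #:user "root")      ; new in 1.1: run the job as this user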

Additional man pages for 'cron(8)' and 'crontab(1)' are now generated using
GNU Help2man.

    • Bug fixes

Child processes created when executing a job are now properly cleaned up,
even when execution fails, thanks to the use of the 'dynamic-wind' construct.

    • Improvements

GNU Guile 2.2 is now supported.

Some procedures are now written using functional style and include a
docstring. 'def-macro' usages are now replaced with hygienic macros.

Compilation is now done using a non-recursive Makefile, supports out-of-tree
builds, and uses silent rules by default.

Guile object file creation no longer relies on auto-compilation; the compiled
files are installed in the 'site-ccache' directory.

Jobs are now internally represented using SRFI-9 records instead of vectors.

The ChangeLog is now generated from the Git logs when creating the tarball,
using the Gnulib gitlog-to-changelog script.

A test suite is now available and can be run with 'make check'.

    • Changes in behavior

The "--enable-debug" configure variable has been removed and replaced with
MCRON_DEBUG environment variable.

The "--disable-multi-user" configure variable is now used to not build and
install the 'cron' and 'crontab' programs. It has replaced the
"--enable-no-vixie-clobber" which had similar effect.

The (mcron core) module is now deprecated and has been superseded by
(mcron base).

Please report bugs to bug-mcron@gnu.org.

19 March, 2018 12:37AM by Mathieu Lirzin

March 16, 2018

Riccardo Mottola

Graphos printing fix

Important Graphos fix.

Graphos had issues when printing while the view was not at 100%: to speed up drawRect, all objects were kept in scaled form so that they did not have to be scaled on each redraw, which is expensive, especially for Bezier paths with all their associated handles.

The issue is finally fixed by caching both the original and the zoomed values for each object and conditionally drawing them depending on the drawingContext.

Here is the proof, with GSPdf showing the generated PDF!

Soon a new release then!

16 March, 2018 06:34PM by Riccardo (noreply@blogger.com)

March 12, 2018

Jose E. Marchesi

Rhhw Friday 16 March 2018 - Sunday 18 March 2018 @ Frankfurt am Main

The Rabbit Herd will be meeting the weekend from 16 March to 18 March.

12 March, 2018 12:00AM

February 28, 2018

FSF News

Free Software Foundation releases FY2016 Annual Report

BOSTON, Massachusetts, USA -- Wednesday, February 28, 2018 -- The Free Software Foundation (FSF) today published its Fiscal Year (FY) 2016 Annual Report.

The report is available in low-resolution (11.5 MB PDF) and high-resolution (207.2 MB PDF).

The Annual Report reviews the Foundation's activities, accomplishments, and financial picture from October 1, 2015 to September 30, 2016. It is the result of a full external financial audit, along with a focused study of program results. It examines the impact of the FSF's programs, and FY2016's major events, including LibrePlanet, the creation of ethical criteria for code-hosting repositories, and the expansion of the Respects Your Freedom computer hardware product certification program.

"More people and businesses are using free software than ever before," said FSF executive director John Sullivan in his introduction to the FY2016 report. "That's big news, but our most important measure of success is the support for the ideals. In that area, we have momentum on our side."

As with all of the Foundation's activities, the Annual Report was made using free software, including Inkscape, GIMP, and PDFsam, along with freely licensed fonts and images.

About the Free Software Foundation

The Free Software Foundation, founded in 1985, is dedicated to promoting computer users' right to use, study, copy, modify, and redistribute computer programs. The FSF promotes the development and use of free (as in freedom) software -- particularly the GNU operating system and its GNU/Linux variants -- and free documentation for free software. The FSF also helps to spread awareness of the ethical and political issues of freedom in the use of software, and its Web sites, located at https://fsf.org and https://gnu.org, are an important source of information about GNU/Linux. Donations to support the FSF's work can be made at https://my.fsf.org/donate. Its headquarters are in Boston, MA, USA.

More information about the FSF, as well as important information for journalists and publishers, is at https://www.fsf.org/press.

Media Contacts

Georgia Young
Program Manager
Free Software Foundation
+1 (617) 542 5942 x 17
campaigns@fsf.org

28 February, 2018 07:20PM

February 24, 2018

Parabola GNU/Linux-libre

[From Arch] zita-resampler 1.6.0-1 -> 2 update requires manual intervention

The zita-resampler 1.6.0-1 package was missing a library symlink that has been readded in 1.6.0-2. If you installed 1.6.0-1, ldconfig would have created this symlink at install time, and it will conflict with the one included in 1.6.0-2. In that case, remove /usr/lib/libzita-resampler.so.1 manually before updating.

24 February, 2018 04:46AM by Omar Vega Ramos

February 22, 2018

parallel @ Savannah

GNU Parallel 20180222 ('Henrik') released

GNU Parallel 20180222 ('Henrik') has been released. It is available for download at: http://ftpmirror.gnu.org/parallel/

Haiku of the month:

Alias and vars
export them more easily
with env_parallel
-- Ole Tange

New in this release:

  • --embed makes it possible to embed GNU parallel in a shell script. This is useful if you need to distribute your script to someone who does not want to install GNU parallel.
  • Bug fixes and man page updates.

GNU Parallel - For people who live life in the parallel lane.

About GNU Parallel

GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with: (wget -O - pi.dk/3 || curl pi.dk/3/) | bash

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your commandline will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2011): GNU Parallel - The Command-Line Power Tool, ;login: The USENIX Magazine, February 2011:42-47.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/ Identi.ca/Google+/Twitter/Facebook/Linkedin/mailing lists
  • Get the merchandise https://www.gnu.org/s/parallel/merchandise.html
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference

If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)

If GNU Parallel saves you money:

About GNU SQL

GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.

About GNU Niceload

GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.

22 February, 2018 09:56PM by Ole Tange

February 17, 2018

libffcall @ Savannah

GNU libffcall 2.1 is released

libffcall version 2.1 is released.

New in this release:

  • Added support for Linux/arm with PIE-enabled gcc, Solaris 11.3 on x86_64, OpenBSD 6.1, HardenedBSD.
  • Fixed a bug regarding passing of pointers on Linux/x86_64 with x32 ABI.
  • Fixed a crash in trampoline on Linux/mips64el.

17 February, 2018 12:58PM by Bruno Haible

February 07, 2018

Andy Wingo

design notes on inline caches in guile

Ahoy, programming-language tinkerfolk! Today's rambling missive chews the gnarly bones of "inline caches", in general but also with particular respect to the Guile implementation of Scheme. First, a little intro.

inline what?

Inline caches are a language implementation technique used to accelerate polymorphic dispatch. Let's dive in to that.

By implementation technique, I mean that the technique applies to the language compiler and runtime, rather than to the semantics of the language itself. The effects on the language do exist though in an indirect way, in the sense that inline caches can make some operations faster and therefore more common. Eventually inline caches can affect what users expect out of a language and what kinds of programs they write.

But I'm getting ahead of myself. Polymorphic dispatch literally means "choosing based on multiple forms". Let's say your language has immutable strings -- like Java, Python, or Javascript. Let's say your language also has operator overloading, and that it uses + to concatenate strings. Well at that point you have a problem -- while you can specify a terse semantics of some core set of operations on strings (win!), you can't choose one representation of strings that will work well for all cases (lose!). If the user has a workload where they regularly build up strings by concatenating them, you will want to store strings as trees of substrings. On the other hand if they want to access codepoints by index, then you want an array. But if the codepoints are all below 256, maybe you should represent them as bytes to save space, or maybe as 4-byte codepoints otherwise? Or maybe even UTF-8 with a codepoint index side table.

The right representation (form) of a string depends on the myriad ways that the string might be used. The string-append operation is polymorphic, in the sense that the precise code for the operator depends on the representation of the operands -- despite the fact that the meaning of string-append is monomorphic!

Anyway, that's the problem. Before inline caches came along, there were two solutions: callouts and open-coding. Both were bad in similar ways. A callout is where the compiler generates a call to a generic runtime routine. The runtime routine will be able to handle all the myriad forms and combination of forms of the operands. This works fine but can be a bit slow, as all callouts for a given operator (e.g. string-append) dispatch to a single routine for the whole program, so they don't get to optimize for any particular call site.

One tempting thing for compiler writers to do is to effectively inline the string-append operation into each of its call sites. This is "open-coding" (in the terminology of the early Lisp implementations like MACLISP). The advantage here is that maybe the compiler knows something about one or more of the operands, so it can eliminate some cases, effectively performing some compile-time specialization. But this is a limited technique; one could argue that the whole point of polymorphism is to allow for generic operations on generic data, so you rarely have compile-time invariants that can allow you to specialize. Open-coding of polymorphic operations instead leads to code bloat, as the string-append operation is just so many copies of the same thing.

Inline caches emerged to solve this problem. They trace their lineage back to Smalltalk 80, gained in complexity and power with Self and finally reached mass consciousness through Javascript. These languages all share the characteristic of being dynamically typed and object-oriented. When a user evaluates a statement like x = y.z, the language implementation needs to figure out where y.z is actually located. This location depends on the representation of y, which is rarely known at compile-time.

However for any given reference y.z in the source code, there is a finite set of concrete representations of y that will actually flow to that call site at run-time. Inline caches allow the language implementation to specialize the y.z access for its particular call site. For example, at some point in the evaluation of a program, y may be seen to have representation R1 or R2. For R1, the z property may be stored at offset 3 within the object's storage, and for R2 it might be at offset 4. The inline cache is a bit of specialized code that compares the type of the object being accessed against R1, in that case returning the value at offset 3; otherwise it checks against R2, returning the value at offset 4; and otherwise it falls back to a generic routine. If this isn't clear to you, Vyacheslav Egorov wrote a fine article describing and implementing the object representation optimizations enabled by inline caches.
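
As a rough illustration, here is the shape of that dispatch written as ordinary Scheme rather than as generated machine code. The objects are modelled as vectors whose first slot names their representation; rep-of, the offsets, and generic-lookup are all made up for the example and are not Guile internals.

;; Conceptual sketch only: the check an inline cache performs for a y.z access.
(define (rep-of obj) (vector-ref obj 0))    ; hypothetical representation tag

(define (cached-z-ref obj generic-lookup)
  (case (rep-of obj)
    ((R1) (vector-ref obj 3))          ; representation R1: z at offset 3
    ((R2) (vector-ref obj 4))          ; representation R2: z at offset 4
    (else (generic-lookup obj 'z))))   ; unseen representation: generic fallback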

Inline caches also serve as input data to later stages of an adaptive compiler, allowing the compiler to selectively inline (open-code) only those cases that are appropriate to values actually seen at any given call site.

but how?

The classic formulation of inline caches from Self and early V8 actually patched the code being executed. An inline cache might be allocated at address 0xcabba9e5 and the code emitted for its call-site would be jmp 0xcabba9e5. If the inline cache ended up bottoming out to the generic routine, a new inline cache would be generated that added an implementation appropriate to the newly seen "form" of the operands and the call-site. Let's say that new IC (inline cache) would have the address 0x900db334. Early versions of V8 would actually patch the machine code at the call-site to be jmp 0x900db334 instead of jmp 0xcabba9e5.

Patching machine code has a number of disadvantages, though. It is inherently target-specific: you will need different strategies to patch x86-64 and armv7 machine code. It's also expensive: you have to flush the instruction cache after the patch, which slows you down. That is, of course, if you are allowed to patch executable code; on many systems that's impossible. Writable machine code is a potential vulnerability if the system may be vulnerable to remote code execution.

Perhaps worst of all, though, patching machine code is not thread-safe. In the case of early Javascript, this perhaps wasn't so important; but as JS implementations gained parallel garbage collectors and JS-level parallelism via "service workers", this becomes less acceptable.

For all of these reasons, the modern take on inline caches is to implement them as a memory location that can be atomically modified. The call site is just jmp *loc, as if it were a virtual method call. Modern CPUs have "branch target buffers" that predict the target of these indirect branches with very high accuracy so that the indirect jump does not become a pipeline stall. (What does this mean in the face of the Spectre v2 vulnerabilities? Sadly, God only knows at this point. Saddest panda.)

cry, the beloved country

I am interested in ICs in the context of the Guile implementation of Scheme, but first I will make a digression. Scheme is a very monomorphic language. Yet, this monomorphism is entirely cultural. It is in no way essential. Lack of ICs in implementations has actually fed back and encouraged this monomorphism.

Let us take as an example the case of property access. If you have a pair in Scheme and you want its first field, you do (car x). But if you have a vector, you do (vector-ref x 0).

What's the reason for this nonuniformity? You could have a generic ref procedure, which when invoked as (ref x 0) would return the field in x associated with 0. Or (ref x 'foo) to return the foo property of x. It would be more orthogonal in some ways, and it's completely valid Scheme.
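
For concreteness, such a generic accessor can be written in ordinary Scheme along these lines (a sketch only, not a proposal for Guile's API; a real ref would also want to handle property-style keys on records, hash tables, and so on):

;; One possible generic 'ref': dispatch on the representation of x.
(define (ref x key)
  (cond ((vector? x) (vector-ref x key))
        ((pair? x)   (list-ref x key))
        ((string? x) (string-ref x key))
        (else (error "ref: unsupported object" x key))))

(ref (vector 'a 'b 'c) 0)   ; => a
(ref '(1 2 3) 0)            ; => 1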

We don't write Scheme programs this way, though. From what I can tell, it's for two reasons: one good, and one bad.

The good reason is that saying vector-ref means more to the reader. You know more about the complexity of the operation and what side effects it might have. When you call ref, who knows? Using concrete primitives allows for better program analysis and understanding.

The bad reason is that Scheme implementations, Guile included, tend to compile (car x) to much better code than (ref x 0). Scheme implementations in practice aren't well-equipped for polymorphic data access. In fact it is standard Scheme practice to abuse the "macro" facility to manually inline code so that certain performance-sensitive operations get inlined into a closed graph of monomorphic operators with no callouts. To the extent that this is true, Scheme programmers, Scheme programs, and the Scheme language as a whole are all victims of their implementations. JavaScript, for example, does not have this problem -- to a small extent, maybe, yes, performance tweaks and tuning are always a thing, but JavaScript implementations' ability to burn away polymorphism and abstraction results in an entirely different character in JS programs versus Scheme programs.

it gets worse

On the most basic level, Scheme is the call-by-value lambda calculus. It's well-studied, well-understood, and eminently flexible. However the way that the syntax maps to the semantics hides a constrictive monomorphism: the assumption that the "callee" of a call refers to a lambda expression.

Concretely, in an expression like (a b), in which a is not a macro, a must evaluate to the result of a lambda expression. Perhaps by reference (e.g. (define a (lambda (x) x))), perhaps directly; but a lambda nonetheless. But what if a is actually a vector? At that point the Scheme language standard would declare that to be an error.

The semantics of Clojure, though, would allow for ((vector 'a 'b 'c) 1) to evaluate to b. Why not in Scheme? There are the same good and bad reasons as with ref. Usually, the concerns of the language implementation dominate, regardless of those of the users who generally want to write terse code. Of course in some cases the implementation concerns should dominate, but not always. Here, Scheme could be more flexible if it wanted to.
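
In today's Scheme the closest you can get is to wrap the data in a procedure yourself, for example (a sketch; callable-vector is just an illustrative name):

;; Clojure-flavoured behaviour expressed in current Scheme: wrap the vector
;; in a closure so that it can appear in operator position.
(define (callable-vector v)
  (lambda (i) (vector-ref v i)))

((callable-vector (vector 'a 'b 'c)) 1)   ; => b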

what have you done for me lately

Although inline caches are not a miracle cure for performance overheads of polymorphic dispatch, they are a tool in the box. But what, precisely, can they do, both in general and for Scheme?

To my mind, they have five uses. If you can think of more, please let me know in the comments.

Firstly, they have the classic named property access optimizations as in JavaScript. These apply less to Scheme, as we don't have generic property access. Perhaps this is a deficiency of Scheme, but it's not exactly low-hanging fruit. Perhaps this would be more interesting if Guile had more generic protocols such as Racket's iteration.

Next, there are the arithmetic operators: addition, multiplication, and so on. Scheme's arithmetic is indeed polymorphic; the addition operator + can add any number of complex numbers, with a distinction between exact and inexact values. On a representation level, Guile has fixnums (small exact integers, no heap allocation), bignums (arbitrary-precision heap-allocated exact integers), fractions (exact ratios between integers), flonums (heap-allocated double-precision floating point numbers), and compnums (inexact complex numbers, internally a pair of doubles). Also in Guile, arithmetic operators are "primitive generics", meaning that they can be extended to operate on new types at runtime via GOOPS.

The usual situation though is that any particular instance of an addition operator only sees fixnums. In that case, it makes sense to only emit code for fixnums, instead of the product of all possible numeric representations. This is a clear application where inline caches can be interesting to Guile.
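
Conceptually, the specialization such a call-site cache would capture looks like the following plain-Scheme sketch (assuming Guile's bundled R6RS fixnum library for the representation check; the real machinery would live in the compiler and emit machine code, not Scheme):

(use-modules (rnrs arithmetic fixnums))  ; for fixnum? and fx+

;; Sketch of an addition site that has only ever seen fixnums: take the
;; fixnum-only fast path, otherwise fall back to the fully generic '+'.
;; A real fast path would also have to bail out to the generic case on
;; fixnum overflow, which fx+ signals as an error rather than handling.
(define (add-ic x y)
  (if (and (fixnum? x) (fixnum? y))
      (fx+ x y)    ; fast path: both operands are fixnums
      (+ x y)))    ; generic path: bignums, fractions, flonums, compnums, ...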

Third, there is a very specific case related to dynamic linking. Did you know that most programs compiled for GNU/Linux and related systems have inline caches in them? It's a bit weird but the "Procedure Linkage Table" (PLT) segment in ELF binaries on Linux systems is set up in a way that when e.g. libfoo.so is loaded, the dynamic linker usually doesn't eagerly resolve all of the external routines that libfoo.so uses. The first time that libfoo.so calls frobulate, it ends up calling a procedure that looks up the location of the frobulate procedure, then patches the binary code in the PLT so that the next time frobulate is called, it dispatches directly. To dynamic language people it's the weirdest thing in the world that the C/C++/everything-static universe has at its cold, cold heart a hash table and a dynamic dispatch system that it doesn't expose to any kind of user for instrumenting or introspection -- any user that's not a malware author, of course.

But I digress! Guile can use ICs to lazily resolve runtime routines used by compiled Scheme code. But perhaps this isn't optimal, as the set of primitive runtime calls that Guile will embed in its output is finite, and so resolving these routines eagerly would probably be sufficient. Guile could use ICs for inter-module references as well, and these should indeed be resolved lazily; but I don't know, perhaps the current strategy of using a call-site cache for inter-module references is sufficient.

Fourthly (are you counting?), there is a general case of the former: when you see a call (a b) and you don't know what a is. If you put an inline cache in the call, instead of having to emit checks that a is a heap object and a procedure and then emit an indirect call to the procedure's code, you might be able to emit simply a check that a is the same as x, the only callee you ever saw at that site, and in that case you can emit a direct branch to the function's code instead of an indirect branch.

Here I think the argument is less strong. Modern CPUs are already very good at indirect jumps and well-predicted branches. The value of a devirtualization pass in compilers is that it makes the side effects of a virtual method call concrete, allowing for more optimizations; avoiding indirect branches is good but not necessary. On the other hand, Guile does have polymorphic callees (generic functions), and call ICs could help there. Ideally though we would need to extend the language to allow generic functions to feed back to their inline cache handlers.

Finally, ICs could allow for cheap tracepoints and breakpoints. If at every breakable location you included a jmp *loc, and the initial value of *loc was the next instruction, then you could patch individual locations with code to run there. The patched code would be responsible for saving and restoring machine state around the instrumentation.

Honestly I struggle a lot with the idea of debugging native code. GDB does the least-overhead, most-generic thing, which is patching code directly; but it runs from a separate process, and in Guile we need in-process portable debugging. The debugging use case is a clear area where you want adaptive optimization, so that you can emit debugging ceremony from the hottest code, knowing that you can fall back on some earlier tier. Perhaps Guile should bite the bullet and go this way too.

implementation plan

In Guile, monomorphic as it is in most things, probably only arithmetic is worth the trouble of inline caches, at least in the short term.

Another question is how much to specialize the inline caches to their call site. On the extreme side, each call site could have a custom calling convention: if the first operand is in register A and the second is in register B and they are expected to be fixnums, and the result goes in register C, and the continuation is the code at L, well then you generate an inline cache that specializes to all of that. No need to shuffle operands or results, no need to save the continuation (return location) on the stack.

The opposite would be to call ICs as if they were normal procedures: shuffle arguments into fixed operand registers, push a stack frame, and when the IC returns, shuffle the result into place.

Honestly I am looking mostly to the simple solution. I am concerned about code and heap bloat if I specialize to every last detail of a call site. Also maximum speed comes with an adaptive optimizer, and in that case simple lower tiers are best.

sanity check

To compare these impressions, I took a look at V8's current source code to see where they use ICs in practice. When I worked on V8, the compiler was entirely different -- there were two tiers, and both of them generated native code. Inline caches were everywhere, and they were gnarly; every architecture had its own implementation. Now in V8 there are two tiers, not the same as the old ones, and the lowest one is a bytecode interpreter.

As an adaptive optimizer, V8 doesn't need breakpoint ICs. It can always deoptimize back to the interpreter. In actual practice, to debug at a source location, V8 will patch the bytecode to insert a "DebugBreak" instruction, which has its own support in the interpreter. V8 also supports optimized compilation of this operation. So, no ICs needed here.

Likewise for generic type feedback, V8 records types as data rather than in the classic formulation of inline caches as in Self. I think WebKit's JavaScriptCore uses a similar strategy.

V8 does use inline caches for property access (loads and stores). Besides that there is an inline cache used in calls which is just used to record callee counts, and not used for direct call optimization.

Surprisingly, V8 doesn't even seem to use inline caches for arithmetic (any more?). Fair enough, I guess, given that JavaScript's numbers aren't very polymorphic, and even with a system with fixnums and heap floats like V8, floating-point numbers are rare in cold code.

The dynamic linking and relocation points don't apply to V8 either, as it doesn't receive binary code from the internet; it always starts from source.

twilight of the inline cache

There was a time when inline caches were recommended to solve all your VM problems, but it would seem now that their heyday is past.

ICs are still a win if you have named property access on objects whose shape you don't know at compile-time. But improvements in CPU branch target buffers mean that it's no longer imperative to use ICs to avoid indirect branches (modulo Spectre v2), and creating direct branches via code-patching has gotten more expensive and tricky on today's targets with concurrency and deep cache hierarchies.

Besides that, the type feedback component of inline caches seems to be taken over by explicit data-driven call-site caches, rather than executable inline caches, and the highest-throughput tiers of an adaptive optimizer burn away inline caches anyway. The pressure on an inline cache infrastructure now is towards simplicity and ease of type and call-count profiling, leaving the speed component to those higher tiers.

In Guile the bounded polymorphism on arithmetic combined with the need for ahead-of-time compilation means that ICs are probably a code size and execution time win, but it will take some engineering to prevent the calling convention overhead from dominating cost.

Time to experiment, then -- I'll let y'all know how it goes. Thoughts and feedback welcome from the compilerati. Until then, happy hacking :)

07 February, 2018 03:14PM by Andy Wingo

February 05, 2018

Andy Wingo

notes from the fosdem 2018 networking devroom

Greetings, internet!

I am on my way back from FOSDEM and thought I would share with yall some impressions from talks in the Networking devroom. I didn't get to go to all that many talks -- FOSDEM's hallway track is the hottest of them all -- but I did hit a select few. Thanks to Dave Neary at Red Hat for organizing the room.

Ray Kinsella -- Intel -- The path to data-plane micro-services

The day started with a drum-beating talk that was very light on technical information.

Essentially Ray was arguing for an evolution of network function virtualization -- that instead of running VNFs on bare metal as was done in the days of yore, that people started to run them in virtual machines, and now they run them in containers -- what's next? Ray is saying that "cloud-native VNFs" are the next step.

Cloud-native VNFs would move from "greedy" VNFs that take charge of the cores that are available to them to some kind of resource sharing. "Maybe users value flexibility over performance", says Ray. It's the Care Bears approach to networking: (resource) sharing is caring.

In practice he proposed two ways that VNFs can map to cores and cards.

One was in-process sharing, which if I understood him properly was actually as nodes running within a VPP process. Basically in this case VPP or DPDK is the scheduler and multiplexes two or more network functions in one process.

The other was letting Linux schedule separate processes. In networking, we don't usually do it this way: we run network functions on dedicated cores on which nothing else runs. Ray was suggesting that perhaps network functions could be more like "normal" Linux services. Ray doesn't know if Linux scheduling will work in practice. Also it might mean allowing DPDK to work with 4K pages instead of the 2M hugepages it currently requires. This obviously has the potential for more latency hazards and would need some tighter engineering, and ultimately would have fewer guarantees than the "greedy" approach.

Interesting side things I noticed:

  • All the diagrams show Kubernetes managing CPU node allocation and interface assignment. I guess in marketing diagrams, Kubernetes has completely replaced OpenStack.

  • One slide showed guest VNFs differentiated between "virtual network functions" and "socket-based applications", the latter ones being the legacy services that use kernel APIs. It's a useful terminology difference.

  • The talk identifies user-space networking with DPDK (only!).

Finally, I note that Conway's law is obviously reflected in the performance overheads: because there are organizational isolations between dev teams, vendors, and users, there are big technical barriers between them too. The least-overhead forms of resource sharing are also those with the highest technical consistency and integration (nodes in a single VPP instance).

Magnus Karlsson -- Intel -- AF_XDP

This was a talk about getting good throughput from the NIC to userspace, but by using some kernel facilities. The idea is to get the kernel to set up the NIC and virtualize the transmit and receive ring buffers, but to let the NIC's DMA'd packets go directly to userspace.

The performance goal is 40Gbps for thousand-byte packets, or 25 Gbps for traffic with only the smallest packets (64 bytes). The fast path does "zero copy" on the packets if the hardware has the capability to steer the subset of traffic associated with the AF_XDP socket to that particular process.

The AF_XDP project builds on XDP, a newish thing where a little kind of bytecode can run on the kernel or possibly on the NIC. One of the bytecode commands (REDIRECT) causes packets to be forwarded to user-space instead of handled by the kernel's otherwise heavyweight networking stack. AF_XDP is the bridge between XDP on the kernel side and an interface to user-space using sockets (as opposed to e.g. AF_INET). The performance goal was to be within 10% or so of DPDK's raw user-space-only performance.

The benefits of AF_XDP over the current situation would be that you have just one device driver, in the kernel, rather than having to have one driver in the kernel (which you have to have anyway) and one in user-space (for speed). Also, with the kernel involved, there is a possibility for better isolation between different processes or containers, when compared with raw PCI access from user-space.

AF_XDP is what was previously known as AF_PACKET v4, and its numbers are looking somewhat OK. Though it's not upstream yet, it might be interesting to get a Snabb driver here.

I would note that kernel-userspace cooperation is a bit of a theme these days. There are other points of potential cooperation or common domain sharing, storage being an obvious one. However I heard more than once this weekend the kind of "I don't know, that area of the kernel has a different culture" sort of concern as that highlighted by Daniel Vetter in his recent LCA talk.

François-Frédéric Ozog -- Linaro -- Userland Network I/O

This talk is hard to summarize. Like the previous one, it's again about getting packets to userspace with some support from the kernel, but the speaker went really deep and I'm not quite sure what in the talk is new and what is known.

François-Frédéric is working on a new set of abstractions for relating the kernel and user-space. He works on OpenDataPlane (ODP), which is kinda like DPDK in some ways. ARM seems to be a big target for his work; that x86-64 is also a target goes without saying.

His problem statement was, how should we enable fast userland network I/O, without duplicating drivers?

François-Frédéric was a bit negative on AF_XDP because (he says) it is so focused on packets that it neglects other kinds of devices with similar needs, such as crypto accelerators. Apparently the challenge here is accelerating a single large IPsec tunnel -- because the cryptographic operations are serialized, you need good single-core performance, and making use of hardware accelerators seems necessary right now for even a single 10Gbps stream. (If you had many tunnels, you could parallelize, but that's not the case here.)

He was also a bit skeptical about standardizing on the "packet array I/O model" which AF_XDP and most NICS use. What he means here is that most current NICs move packets to and from main memory with the help of a "descriptor array" ring buffer that holds pointers to packets. A transmit array stores packets ready to transmit; a receive array stores maximum-sized packet buffers ready to be filled by the NIC. The packet data itself is somewhere else in memory; the descriptor only points to it. When a new packet is received, the NIC fills the corresponding packet buffer and then updates the "descriptor array" to point to the newly available packet. This requires at least two memory writes from the NIC to memory: at least one to write the packet data (one per 64 bytes of packet data), and one to update the DMA descriptor with the packet length and possible other metadata.

Although these writes go directly to cache, there's a limit to the number of DMA operations that can happen per second, and with 100Gbps cards, we can't afford to make one such transaction per packet.

François-Frédéric promoted an alternative I/O model for high-throughput use cases: the "tape I/O model", where packets are just written back-to-back in a uniform array of memory. Every so often a block of memory containing some number of packets is made available to user-space. This has the advantage of packing in more packets per memory block, as there's no wasted space between packets. This increases cache density and decreases DMA transaction count for transferring packet data, as we can use each 64-byte DMA write to its fullest. Additionally there's no side table of descriptors to update, saving a DMA write there.

Apparently the only cards currently capable of 100 Gbps traffic, the Chelsio and Netcope cards, use the "tape I/O model".

Incidentally, the DMA transfer limit isn't the only constraint. Something I hadn't fully appreciated before was memory write bandwidth. Before, I had thought that because the NIC would transfer in packet data directly to cache, that this wouldn't necessarily cause any write traffic to RAM. Apparently that's not the case. Later over drinks (thanks to Red Hat's networking group for organizing), François-Frédéric asserted that the DMA transfers would eventually use up DDR4 bandwidth as well.

A NIC-to-RAM DMA transaction will write one cache line (usually 64 bytes) to the socket's last-level cache. This write will evict whatever was there before. As far as I can tell, there are three cases of interest here. The best case is where the evicted cache line is from a previous DMA transfer to the same address. In that case it's modified in the cache and not yet flushed to main memory, and we can just update the cache instead of flushing to RAM. (Do I misunderstand the way caches work here? Do let me know.)

However if the evicted cache line is from some other address, we might have to flush to RAM if the cache line is dirty. That causes memory write traffic. But if the cache line is clean, that means it was probably loaded as part of a memory read operation, and then that means we're evicting part of the network function's working set, which will later cause memory read traffic as the data gets loaded in again, and write traffic to flush out the DMA'd packet data cache line.

François-Frédéric simplified the whole thing by equating packet bandwidth with memory write bandwidth: yes, the packet goes directly to cache, but it is also written to RAM. I can't convince myself that that's the case for all packets, but I need to look more into this.

Of course the cache pressure and the memory traffic is worse if the packet data is less compact in memory; and worse still if there is any need to copy data. Ultimately, processing small packets at 100Gbps is still a huge challenge for user-space networking, and it's no wonder that there are only a couple devices on the market that can do it reliably, not that I've seen either of them operate first-hand :)

Talking with Snabb's Luke Gorrie later on, he thought that it could be that we can still stretch the packet array I/O model for a while, given that PCIe gen4 is coming soon, which will increase the DMA transaction rate. So that's a possibility to keep in mind.

At the same time, apparently there are some "coherent interconnects" coming too which will allow the NIC's memory to be mapped into the "normal" address space available to the CPU. In this model, instead of having the NIC transfer packets to the CPU, the NIC's memory will be directly addressable from the CPU, as if it were part of RAM. The latency to pull data in from the NIC to cache is expected to be slightly longer than a RAM access; for comparison, RAM access takes about 70 nanoseconds.

For a user-space networking workload, coherent interconnects don't change much. You still need to get the packet data into cache. True, you do avoid the writeback to main memory, as the packet is already in addressable memory before it's in cache. But, if it's possible to keep the packet on the NIC -- like maybe you are able to add some kind of inline classifier on the NIC that could directly shunt a packet towards an on-board IPSec accelerator -- in that case you could avoid a lot of memory transfer. That appears to be the driving factor for coherent interconnects.

At some point in François-Frédéric's talk, my brain just died. I didn't quite understand all the complexities that he was taking into account. Later, after he kindly took the time to dispel some more of my ignorance, I understand more of it, though not yet all :) The concrete "deliverable" of the talk was a model for kernel modules and user-space drivers that uses the paradigms he was promoting. It's a work in progress from Linaro's networking group, with some support from NIC vendors and CPU manufacturers.

Luke Gorrie and Asumu Takikawa -- SnabbCo and Igalia -- How to write your own NIC driver, and why

This talk had the most magnificent beginning: a sort of "repent now ye sinners" sermon from Luke Gorrie, a seasoned veteran of software networking. Luke started by describing the path of righteousness leading to "driver heaven", a world in which all vendors have publicly accessible datasheets which parsimoniously describe what you need to get packets flowing. In this blessed land it's easy to write drivers, and for that reason there are many of them. Developers choose a driver based on their needs, or they write one themselves if their needs are quite specific.

But there is another path, says Luke, that of "driver hell": a world of wickedness and proprietary datasheets, where even when you buy the hardware, you can't program it unless you're buying a hundred thousand units, and even then you are smitten with the cursed non-disclosure agreements. In this inferno, only a vendor is practically empowered to write drivers, but their poor driver developers are only incentivized to get the driver out the door deployed on all nine architectural circles of driver hell. So they include some kind of circle-of-hell abstraction layer, resulting in a hundred thousand lines of code like a tangled frozen beard. We all saw the abyss and repented.

Luke described the process that led to Mellanox releasing the specification for its ConnectX line of cards, something that was warmly appreciated by the entire audience, users and driver developers included. Wonderful stuff.

My Igalia colleague Asumu Takikawa took the last half of the presentation, showing some code for the driver for the Intel i210, i350, and 82599 cards. For more on that, I recommend his recent blog post on user-space driver development. It was truly a ray of sunshine in dark, dark Brussels.

Ole Trøan -- Cisco -- Fast dataplanes with VPP

This talk was a delightful introduction to VPP, but without all of the marketing; the sort of talk that makes FOSDEM worthwhile. Usually at more commercial, vendory events, you can't really get close to the technical people unless you have a vendor relationship: they are surrounded by a phalanx of salesfolk. But in FOSDEM it is clear that we are all comrades out on the open source networking front.

The speaker expressed great personal pleasure at having been able to work on open source software; his relief was palpable. A nice moment.

He also had some kind words about Snabb, too, saying at one point that "of course you can do it on snabb as well -- Snabb and VPP are quite similar in their approach to life". He trolled the horrible complexity diagrams of many "NFV" stacks whose components reflect the org charts that produce them more than the needs of the network functions in question (service chaining anyone?).

He did get to drop some numbers as well, which I found interesting. One is that recently they have been working on carrier-grade NAT, aiming for 6 terabits per second. Those are pretty big boxes and I hope they are getting paid appropriately for that :) For context he said that for a 4-unit server, these days you can build one that does a little less than a terabit per second. I assume that's with ten dual-port 40Gbps cards, and I would guess to power that you'd need around 40 cores or so, split between two sockets.

Finally, he finished with a long example on lightweight 4-over-6. Incidentally this is the same network function my group at Igalia has been building in Snabb over the last couple years, so it was interesting to see the comparison. I enjoyed his commentary that although all of these technologies (carrier-grade NAT, MAP, lightweight 4-over-6) have the ostensible goal of keeping IPv4 running, in reality "we're day by day making IPv4 work worse", mainly by breaking the assumption that if you get traffic from port P on IP M, you can send traffic to M from another port or another protocol and have it reach the target.

All of these technologies also have problems with IPv4 fragmentation. Getting it right is possible but expensive. Instead, Ole mentions that he and a cross-vendor cabal of dataplane people have a "dark RFC" in the works to deprecate IPv4 fragmentation entirely :)

OK that's it. If I get around to writing up the couple of interesting Java talks I went to (I know right?) I'll let yall know. Happy hacking!

05 February, 2018 05:22PM by Andy Wingo

February 02, 2018

freeipmi @ Savannah

FreeIPMI 1.6.1 Released

https://ftp.gnu.org/gnu/freeipmi/freeipmi-1.6.1.tar.gz

FreeIPMI 1.6.1 - 02/02/18
-------------------------
o Add IPv6 hostname support to FreeIPMI, all of FreeIPMI can now
take IPv6 addresses as inputs to "host" parameters, options, or
inputs.
o Support significant portions of IPMI IPv6 configuration in
libfreeipmi.
o Add --no-session option in ipmi-raw.
o Add SDR cache options to ipmi-config.
o Legacy -f short option for --flush-cache and -Q short option
for quiet-cache. Backwards compatible for tools that supported
it before.
o In ipmi-oem, support Gigabyte get-bmc-services and set-bmc-
services.
o Various performance improvements:
- Remove excessive calls to secure_memset to clear memory.
- Remove excessive memsets and clears of data.
- Remove unnecessary "double input checks".
- Remove expensive input checks in libfreeipmi fiid library.
Fallout from this may include FIID_ERR_FIELD_NOT_FOUND errors
in different fiid functions.
- Remove unnecessary input checks in libfreeipmi fiid library.
- Add recent 'lookups' of fields in fiid library to internal
cache.
o Various minor fixes/improvements
- Update libfreeipmi core API to use poll() instead of
select(), to avoid issues with applications with a high
number of threads.

02 February, 2018 11:47PM by Albert Chu

January 28, 2018

dico @ Savannah

Version 2.5

Version 2.5 of GNU dico is available for download. Main new feature in this release: support for four-column index files in dict.org format.

Previous versions of dico supported only three-column index files. This is the most common format. However, some dictionaries have four-column index files. When trying to load such dictionaries using prior versions of GNU dico, you would get the error message "X.index:Y: malformed entry". The present version fixes this problem.

28 January, 2018 02:49PM by Sergey Poznyakoff

January 27, 2018

GNUnet News

gnURL 7.58.0

I'm no longer publishing release announcements on gnunet.org. Read the full gnURL 7.58.0 release announcement on our developer mailing list and on info-gnu once my email has passed the moderation.

27 January, 2018 03:48PM by ng0

January 26, 2018

Lonely Cactus

The Ridiculous Gopher Project: BBSs and ZModem

In the previous entry, I talked about the ridiculous Gopher project, in which I might try to make a presence for myself in Gopher Space.

So my first thought was that I would have a blog and a webgallery over gopher.

The blog entries are a very simple prospect, since they need to be plain text.  I don't really like the block paragraph style, but, I did sketch out a conversion from markdown to troff to text that does some nice formatting.

The directory of the blog entries is a bit more complicated.  I had an idea for a cgi that handled directory structures and indices that are date-based with a parallel directory structure and index that is keyword based.

But anyway, I got stuck on my first step, and fell down a rabbit hole, as per usual.

So I thought to myself, what if I wanted to have comments for my gopher blog?  How would that work?  What technology would I use?  Well, in the original Gopher spec, there is a capacity for a Telnet session.  I thought that I could make a tiny Telnet-based BBS with just enough functionality to let one leave a comment or read comments.

So I went on the internet to find a tiny BBS to examine.  I found just about the simplest BBS one could imagine.  It is called Puppy BBS.
I found it in here: http://cd.textfiles.com/simtel/simtel20/MSDOS/FIDO/.index.html

So there's this California-based guy named Tom Jennings who does a lot of stuff in the intersection between tech and art. Once upon a time he was a driving force behind FidoNet, which was a pre-internet community of dial-up BBSs. He's done many cool things since FidoNet.

Check out his cool art at http://www.sensitiveresearch.com/

I guess Tom wrote PuppyBBS as a reaction to how complicated BBSs had become back in the late 1980s.

So I thought, hey, does this thing still build and run? Well, not exactly. First off, it uses a MS-DOS C library that handles serial comms, which, of course, doesn't work on Microsoft Windows 10 or on Linux. And even if that library did still exist, I couldn't try it even if I wanted to. I mean, if I wanted to try it I would need two landlines and two dial-up modems so I could call myself. I do have a dial-up modem in a box in the garage, but, I'm not going to get another landline for this nonsense.

Anyway, I e-mailed Tom and asked if I could hack it up and post it on Github, and he said okay. And so that's what this is: PuppyBBS.

Puppy BBS has four functions:
  • write messages
  • read messages
  • upload files
  • download files
From there, I started writing a Telnet-based BBS, which became PupperBBS.  And that went pretty well.  It took very little time to get the message reading and writing running.  I was on a roll, so I decided that I would quickly tackle the other two functions that PuppyBBS had: uploading and downloading files.  And that was where it all got complicated.
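
Before getting to the complicated part, here is a rough sense of why the message half went so quickly. This is not PupperBBS's actual code, just a minimal line-oriented sketch of a server that a telnet client can talk to (it ignores telnet option negotiation, and the prompts and port are invented):

    # Sketch: a tiny in-memory message board reachable over a raw TCP/telnet
    # connection. Illustration only, not PupperBBS; telnet negotiation is ignored.
    import socketserver

    MESSAGES = []  # in-memory message store

    class BBSHandler(socketserver.StreamRequestHandler):
        def handle(self):
            self.wfile.write(b"Welcome! Commands: read, write, quit\r\n")
            while True:
                self.wfile.write(b"> ")
                cmd = self.rfile.readline().strip().lower()
                if cmd == b"read":
                    for n, msg in enumerate(MESSAGES, 1):
                        self.wfile.write(b"%d: %s\r\n" % (n, msg))
                elif cmd == b"write":
                    self.wfile.write(b"Message: ")
                    MESSAGES.append(self.rfile.readline().strip())
                elif cmd in (b"quit", b""):
                    self.wfile.write(b"Bye!\r\n")
                    return

    if __name__ == "__main__":
        socketserver.ThreadingTCPServer(("", 2323), BBSHandler).serve_forever()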

PuppyBBS used XModem for file transfer, because it was the '80s and that was what people did.  But I thought ZModem, which was faster and more reliable, would be the way to go.  So I figured I'd just link a ZModem library to the BBS and be ready to go.

But I couldn't find a ZModem library that was ready to go.  All ZModem code seems to be derived from lrzsz, so I downloaded the lrzsz code and made it into a library.  To do that, I had to understand the code, so I tried to read it.  That code is so very 1980s.  It is terrible, so I had to fix it.

(Let the record show that by "terrible" I mean terrible from a reader's point of view.  It was written with so much global state and no indication of which procedures modify that state.  There is no isolation, no separation of concerns.  As a practical matter, it works great.)

And that led to a full week of untangling it all, which is what became the libzmodem library.  Now my libzmodem isn't really much more readable than the original code, but, at least it makes more sense to me.

Great: now I linked libzmodem to PupperBBS to add some ZModem send and receive functionality.  Now to test it.  I set up PupperBBS, telnetted into the system, got to the BBS, and tried to upload and download some files.  It became apparent that for ZModem to work, the telnet program itself has to have some partnership with rz and sz, launching one or the other as appropriate.
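
Concretely, the "partnership" means the telnet program has to watch the byte stream for ZModem's start-of-transfer announcement and then hand the connection over to rz or sz. As I read the lrzsz code, a sending sz announces itself with "rz\r" followed by a hex header that begins with ZPAD ZPAD ZDLE 'B'; treat that trigger sequence as my assumption rather than gospel. A sketch of the detection:

    # Sketch: watch a byte stream for ZModem's start marker, then hand off to
    # rz/sz. The trigger bytes (ZPAD ZPAD ZDLE 'B') are my reading of lrzsz,
    # not a verified constant.

    ZMODEM_MARKER = b"**\x18B"  # ZPAD ZPAD ZDLE, then 'B' introducing a hex header

    def zmodem_starting(buffer: bytes) -> bool:
        """Return True if the remote side appears to be starting a ZModem transfer."""
        return ZMODEM_MARKER in buffer

    # In the telnet client's read loop, roughly:
    #   data = sock.recv(4096)
    #   if zmodem_starting(data):
    #       spawn rz (or sz) and hand it the connection's file descriptors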

Since this had to have worked in the past, some internet searches led me to zssh on SourceForge.  zssh includes a telnet program with built-in ZModem send and receive functionality.  Unfortunately, it isn't packaged on Fedora and didn't compile out of the box, so I started trying to understand it and fix it.

So, anyway, to summarize:
  1. Let's do a Gopher blog!
  2. How do you do comments?
  3. Telnet works on Gopher!
  4. Let's make a BBS
  5. BBSs do ZModem
  6. Let's make a ZModem library
  7. Let's make a Telnet client that does ZModem.
And this is why I never finish anything.

26 January, 2018 05:38AM by Mike (noreply@blogger.com)

January 25, 2018

Christopher Allan Webber

On standards divisions and collaboration (or: Why can't the decentralized social web people just get along?)

A couple of days ago I wrote about ActivityPub becoming a W3C Recommendation. This was one output of the Social Working Group, and the blogpost was about my experiences, most of which were my direct work on ActivityPub. But the Social Working Group did more than ActivityPub: on the same day it also published WebSub, a useful piece of technology in its own right which, amongst other things, plays a significant role in ActivityPub's own history (though it is not used by ActivityPub itself), and it has also published several documents which are not compatible with ActivityPub at all and appear to play the same role. This may appear confusing to outsiders, but there are reasons, which I will go into in this post.

On that note, friend and Social Working Group co-participant Amy Guy just wrote a reasonable and (to my own feelings) highly relatable, frustrated blogpost (go ahead and read it before you finish this one) about the kinds of comments you see with different members of different decentralized social web communities sniping at each other. Yes, reading the comments is always a precarious idea, particularly on tech news sites. But what's especially frustrating is seeing comments that we either:

These comments seem to come from people who were not part of the standards process, so as someone who spent three years of their life on it, let me give the perspective of someone who was actually there.

So yes, first of all, it's true that in the end we pushed out two "stacks" that were mostly incompatible. These would more or less be the "restful + linked data" stack, which is ActivityPub and Linked Data Notifications using ActivityStreams as its core (but extensible) vocabulary (which are directly interoperable, and use the same "inbox" property for delivery), and the "Indieweb stack", which is Micropub and Webmention. (And there's also WebSub, which is not really either specifically part of one or the other of those "stacks" but which can be used with either, and is of such historical significance to federation that we wanted it to be standardized.) Amy Guy did a good job of mapping the landscape in her Social Web Protocols document.
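
To make the Micropub/Webmention side a bit more concrete: per the Webmention spec, sending a mention is just a form-encoded POST of a source URL and a target URL to the target page's advertised webmention endpoint. A minimal sketch, with the endpoint hard-coded and every URL made up (a real sender first discovers the endpoint from the target's Link header or HTML):

    # Sketch: send a Webmention. Endpoint discovery is skipped for brevity;
    # every URL below is a placeholder.
    import urllib.parse
    import urllib.request

    def send_webmention(endpoint, source, target):
        data = urllib.parse.urlencode({"source": source, "target": target}).encode()
        req = urllib.request.Request(endpoint, data=data, method="POST")
        with urllib.request.urlopen(req) as resp:
            return resp.status  # any 2xx means the receiver accepted it

    # send_webmention("https://example.org/webmention",
    #                 "https://my.site/replies/1",
    #                 "https://example.org/posts/42")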

Gosh, two stacks! It does kind of look confusing, if you weren't in the group, to see how this could have happened. Going through meeting logs is boring (though the meeting logs are up there if you feel like it) so here's what happened, as I remember it.

First of all, we didn't just start out with two stacks, we started out with three. At the beginning we had the linked data folks, the RESTful "just speak plain JSON" development type folks, and the Indieweb folks. Nobody really saw eye to eye at first, but eventually we managed to reach some convergence (though not as much as I would have liked). In fact we managed to merge two approaches entirely: ActivityPub is a RESTful API that can be read and interpreted as just JSON, but thanks to JSON-LD you have the power of linked data for extensions or maybe because you really like doing fancy RDF the-web-is-a-graph things. And ActivityPub uses the very same inbox of Linked Data Notifications, and is directly interoperable. Things did not start out as directly interoperable, but Sarven Capadisli and Amy Guy (who was not yet a co-author of ActivityPub) were willing to sit down and discuss and work out the details, and eventually we got there.
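
To illustrate the "just JSON, but also JSON-LD" point: an ActivityPub activity is an ActivityStreams object carrying the standard @context, and server-to-server delivery is a POST of that JSON to the recipient's inbox (the same inbox property Linked Data Notifications uses). Here is a sketch with made-up actor and inbox URLs, and with authentication (HTTP signatures and the like, which real servers generally require) omitted:

    # Sketch: build an ActivityStreams "Create" activity and POST it to an inbox.
    # Plain-JSON consumers can ignore @context; JSON-LD consumers can expand it.
    # The actor, recipient, and inbox URLs are made up; auth is omitted.
    import json
    import urllib.request

    activity = {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Create",
        "actor": "https://social.example/users/alyssa",
        "to": ["https://chatty.example/users/ben"],
        "object": {
            "type": "Note",
            "attributedTo": "https://social.example/users/alyssa",
            "to": ["https://chatty.example/users/ben"],
            "content": "Say, did you finish reading that book I lent you?"
        }
    }

    def deliver(inbox_url, activity):
        body = json.dumps(activity).encode("utf-8")
        req = urllib.request.Request(
            inbox_url, data=body, method="POST",
            headers={"Content-Type": "application/activity+json"})
        with urllib.request.urlopen(req) as resp:
            return resp.status

    # deliver("https://chatty.example/users/ben/inbox", activity)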

Merging the RESTful + Linked Data stuff with the Indieweb stuff was a bit more of a challenge, but for a while it looked like even that might completely happen. For those that don't know, Linked Data type people and Indieweb type people have, for whatever reason, historically been at each others' throats despite (or perhaps because of) the enormous similarity between the kind of work that they're doing (the main disagreements being "should we treat everything like a graph" and "are namespaces a good idea" and also, let's be honest, just historical grudges). But Amy Guy long made the case in the group that actually the divisions between the groups were very shallow and that with just a few tweaks we could actually bridge the gap (this was the real origin of the Social Web Protocols document, which though it eventually became a document of the different things we produced, was originally an analysis of how they weren't so different at all). At the face to face summit in Paris (which I did not attend, but ActivityPub co-editor Jessica Tallon did) there was apparently an energetic meeting over a meal where I'm told that Jessica Tallon and Aaron Parecki (editor of Micropub and Webmention) hit some kind of epiphany and realized yes, by god, we can actually merge these approaches together. Attending remotely, I wasn't there for the meal, but when everyone returned it was apparent that something had changed: the conversation had shifted towards reconciling differences. Between the Paris face to face meeting and the next one, energy was high and discussions active on how to bring things together. Aaron even began to consider that maybe Micropub (and/or? I forget if it was just one) Webmention could support ActivityStreams, since ActivityStreams already had an extension mechanism worked out. At the next face to face meeting, things started out optimistic as well... and then suddenly, within the span of minutes, the whole idea of merging the specs fell apart. In fact it happened so quickly that I'm not even entirely sure what did it, but I think it was over two things: one, Micropub handled an update of fields where you could add or remove a specific element from a list (without giving the entire changed list as a replacement value) and it wasn't obvious how it could be done with ActivityPub, and two, something like "well we already have a whole vocabulary in Microformats anyway, we might as well stick with it." (I could have the details wrong here a bit... again, it happened very fast, and I remember in the next break trying to figure out whether or not things did just fall apart or not.)

With the dream of reconciling the Linked Data and Indieweb stuff given up on, we decided that we could at least move forward in parallel without clobbering, and in fact while actively supporting, each other. I think, at this point, this was actually the best decision possible, and in a sense it was even very fruitful. No longer trying to reconcile and compromise on a single spec, the authors and editors of the differing specifications still spent much time collaborating as the specifications moved forward. Aaron and other Indieweb folks provided plenty of useful feedback for ActivityPub, and the ActivityPub folks provided plenty of useful feedback for the Indieweb folks, and I'd say all our specifications were improved greatly by this "friendly treaty" of sorts. If we could not unify, we could at least cooperate, and we did.

I'd even say that we came to a good amount of mutual understanding and respect between these groups within the Social Web Working Group. People approached these decentralization challenges with different building blocks, assumptions, principles, and goals... hence at some point they've encountered approaches that didn't quite jibe with their "world view" on how to do it right (TM). And that's okay! Even there, we have plenty of space for cooperation and can learn from each other.

This is also true with the continuation of the Social Web Working Group, which is the SocialCG, where the two co-chairs are myself and Aaron Parecki, who are both editors of specifications of the conflicting "stacks". Within the Social Web Community Group we have a philosophy that our scope is to work on collaboration on social web protocols. If you use a different protocol than another person, you probably can still collaborate a lot, because there's a lot of overlap between the problem domains between social web protocols. Outside the SocialWG and SocialCG it still seems to be a different story, and sadly linked data people and Indieweb people seem to still show up on each others' threads to go after each other. I consider that a disappointment... I wish the external world would reflect the kind of sense of mutual understanding we got in the SocialWG and SocialCG.

Speaking of best attempts at bringing unity, my main goal in participating in the SocialWG, and my entire purpose in showing up in the first place, was always to bring unity. The first task I performed over the course of the first few months at the Social Working Group was to try to bring all of the existing distributed social networks to participate in the SocialWG calls. Even at that time, I was worried about the situation with a "fractured federation"... MediaGoblin was about to implement its own federation code, and I was unhappy that we had a bunch of libre distributed social network projects but none of them could talk to each other, and no matter what we chose we would just end up contributing to the problem. I was called out as naive (which I suppose, in retrospect, was accurate) for a belief that if we could just get everyone around the table we could reconcile our differences, agree on a standard that everyone could share in, and maybe we'd start singing Kumbaya or something. And yes, I was naive, but I did reach out to everyone I could think of (if I missed you somehow, I'm sorry): Diaspora, GNU Social, Pump.io (well, they were already there), Hubzilla, Friendica, Owncloud (later Nextcloud)... etc etc (Mastodon and some others didn't even exist at this point, though we would connect later)... I figured this was our one chance to finally get everyone on board and collaborate. We did have Diaspora and Owncloud participants for a time (and Nextcloud has even begun implementing ActivityPub), and plenty of groups said they'd like to participate, but the main barrier was that the standards process took a lot of time (true story), which not everyone was able to allocate. But we did our best to incorporate and respond to feedback whenever we got it. We did detailed analysis on what the major social networks were providing and what we needed to cover as a result. What I'm trying to say is: ActivityPub was my best attempt to bring unity to this space. It grew out of direct experience developing previous standards, OStatus and the Pump API, and out of over a decade of developing social network protocols and software, including by people who pioneered much of the work in that territory. We tried, through long and open comment periods, to reconcile the needs of various groups and potential users. Maybe we didn't always succeed... but we did try, and always gave it our best. Maybe ActivityPub will succeed in that role or maybe it won't... I'm hopeful, but time is the true test.

Speaking of attempting to bring unity to the different decentralized social network projects, probably the main thing that disappoints me is the amount of strife we have between these different projects. For example, there are various threads pitting Mastodon vs GNU Social. In fact, Mastodon's lead developer and GNU Social's lead developer get along just fine... it's various members of the communities of each that tend to (sounds familiar?) be hostile.

Here's something interesting: decentralized social web initiatives haven't yet faced an all-out attack from what would presumably be their natural enemies in the centralized social web: Facebook, Twitter, et al. I mean, there have been some aggressions, in the sense that bridging projects that let users mirror their timelines have been shut down as terms-of-service violations, and other comparatively minor things, but I don't know of (as of yet) an outright attack. But maybe they don't have to: participants in the decentralized social web are so good at fighting each other that apparently we do that work for them.

But it doesn't have to be that way. You might be able to come to consensus on a good way forward. And if you can't come to consensus, you can at least have friendly and cooperative communication.

And if somehow you can't do any of that, you can at least not openly attack each other. We've got enough of a fight ahead to make the federated social web work without fighting ourselves. Thanks.

Update: A previous version of this article said "I even saw someone tried to write a federation history and characterize it as war", but it's been pointed out that I'm being unfair here, since the very article I'm pointing to itself refutes the idea of this being war. Fair point, and I've removed that bit.

25 January, 2018 08:35PM by Christopher Lemmer Webber

January 24, 2018

ActivityPub is a W3C Recommendation

Having spent the majority of the last three years of my life on it, I'm happy to announce that ActivityPub is now a W3C Recommendation. Whew! At last! Hooray! Finally! I've written some more words on this over on the FSF's blog, so maybe read that.

As for things I didn't put there, that fit more on a personal blog? I guess that's where I speak about my personal life experience and feelings about it and I would say they're a mix of elation (for making it), relief (also for making it, because it wasn't always clear that we would), and burnout (I had no idea this process was going to suck up so much of my life).

I didn't expect this to take over my life so thoroughly. I did say this bit on the FSF blogpost, but when Jessica Tallon and I got involved in the Social Working Group we figured we were just showing up for an hour a week to make sure things were on track. I did think the goal of the Social Working Group was the right one: we had a lot of libre social networks, but they were largely fractured and failed at interoperability... surely we could do better if we got everyone in a room together! (Getting everyone in the room wasn't easy and didn't always happen, though I sure as heck tried, particularly early on.) But I figured the other people in the room would be the experts, the responsible ones, and we'd just be tagging along to make sure our needs were met. Well, the next thing you know we're co-editors of ActivityPub, and that time grew from an hour a week to filling most of my week, with sometimes urgent, grueling deadlines (granted, I made most of them a lot more complicated than they needed to be by doing example implementations in obscure languages, etc etc).

I'm feeling great about things now, but that wasn't always the case through this. I've come to learn how hard standards work is, and I've been doing other specification work recently too (more on that in a coming blogpost), but I'll say that for whatever reason (and I can think of quite a few, but it's not worth going into here), ActivityPub has been far harder than anything else I've worked on in the standards space. (Maybe that's just because it's the first standard I've gotten to completion though.)

In fact, in early-to-middle 2017 I was in quite a bit of despair, because it seemed clear that ActivityPub was not going to make it in time as an official recommended standard. The Social Working Group's charter was going to run out in mid-2017, and it had already been extended once... apparently getting a second extension was nearly unheard of. I resigned myself to the idea that ActivityPub would be published as a note, and that there was no way we would make it to the shiny foil stamp of an actual recommended standard. Instead, I shifted my effort to making sure that my ActivityPub implementation work would support enough of ActivityStreams (which is what ActivityPub uses as its vocabulary) to ensure that at least that would make it as a standard with all the components we required, since we at least needed to be able to refer to that vocabulary.

But Mastodon saved ActivityPub. I'll admit that at first I was skeptical about all the hype I was hearing about Mastodon... but Amy Guy (co-author of ActivityPub, and whose PhD thesis, "Presentation of Self on a Decentralised Web", is worth a read at the memorable domain of dr.amy.gy) convinced me that I really ought to check out what was going on in Mastodon land. And I found I really did like what was happening there... and connected to a community that felt like what I had missed from the heyday of StatusNet/identi.ca, while having a bit of its own flavor of culture, one that I really felt at home in. It turned out this was good timing... Mastodon was having trouble meeting the privacy needs of its users on OStatus, and it turns out private addressing was exactly one of the reasons that ActivityPub was developed. (I'm not claiming credit for this, I'm just talking from my perspective... the Mastodon ActivityPub implementation issue can give you a better sense of where credit is due, and here I didn't really do much.) This interest came at just the right time... it began to drum up interest from many other participants too... and it pretty much directly led to another extension to the Social Working Group, giving us until the end of 2017 to wrap up the work on standardizing ActivityPub. Whew!
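
Since private addressing is the feature in question, a quick illustration of what it buys you: in ActivityStreams/ActivityPub, who receives an activity is determined by its addressing properties (to, cc, bto, bcc), and a post is public only if it is addressed to the special Public collection. The user and follower-collection URLs below are made up; the Public IRI is the real one defined by the vocabulary.

    # Sketch: the difference between a public and a followers-only note is just
    # addressing. URLs other than the Public collection IRI are placeholders.
    PUBLIC = "https://www.w3.org/ns/activitystreams#Public"

    public_note = {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Note",
        "attributedTo": "https://social.example/users/alyssa",
        "to": [PUBLIC],
        "cc": ["https://social.example/users/alyssa/followers"],
        "content": "Anyone can see this."
    }

    followers_only_note = dict(public_note,
        to=["https://social.example/users/alyssa/followers"],
        cc=[],
        content="Only my followers' servers get this delivered.")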

But Mastodon is not alone. Today there are a growing number of implementers of ActivityPub. I'd encourage you, if you haven't, to watch this video of PeerTube and Mastodon federating over ActivityPub. Pretty cool stuff! ActivityPub has been a massive group effort, and I'm relieved to see that all that hard work has paid off, for all of us.

Meanwhile, there's a lot to do still ahead. MediaGoblin, ironically, has fallen behind on its own federation support in the interest of advancing federation standards (we have some federation code, but it's for the old pre-ActivityPub Pump API, and it's bitrotted quite a bit) and I need to figure out what the next steps are and discuss with the community (expect more on that in the next few months, and sure to be discussed at my talk at Libreplanet 2018). And ActivityPub may be "done" in the sense that "it made it through the standards process", but some of the most interesting work is still ahead. The Social Web Community Group, of which I am co-chair, meets bi-weekly to talk and collaborate on the interesting problems that implementers of libre networks are encountering. (It's open to everyone, maybe you should join?)

On that note, in a recent Social Web Community Group meeting, Evan Prodromou was showing off some of his latest ActivityPub projects (tags.pub and places.pub). I'm paraphrasing here, but he said something interesting, which has stuck with me: "We did all that standardizing work, and that's great, but now we get to the fun part... now we get to build things."

I agree. I look forward to what the next few years of fun ActivityPub development bring. Onwards!

24 January, 2018 05:00AM by Christopher Lemmer Webber