Planet GNU

Aggregation of development blogs from the GNU Project

August 11, 2019

unifont @ Savannah

Unifont 12.1.03 Released

11 August 2019 Unifont 12.1.03 is now available. Significant changes in this version include the replacement of the Jiskan glyphs in the Japanese version, unifont_jp, with Izumi public domain glyphs, as well as modifications to Limbu, Buginese, Tai Tham, Adlam, and Mayan Numerals, plus a redrawn Indian Rupee Sign.  Full details are in the ChangeLog file.

Download this release at:

https://ftpmirror.gnu.org/unifont/unifont-12.1.03/

or if that fails,

https://ftp.gnu.org/gnu/unifont/unifont-12.1.03/

or, as a last resort,

ftp://ftp.gnu.org/gnu/unifont/unifont-12.1.03/

11 August, 2019 08:53PM by Paul Hardy

August 08, 2019

FSF Blogs

May 2019: Photos from Aalborg and Copenhagen

Free Software Foundation president Richard Stallman (RMS) was in Denmark in May 2019.

After a visit to the beach in nearby Slettestrand the day before, RMS went to Aalborg, where he delivered his speech “Free software and your freedom”1 at Aalborg University (AAU), on May 6th.

Photos courtesy of Aalborg University (copyright © 2019, CC BY 4.0).

The next day, he went on to Odense, where he gave his speech “The danger of mass surveillance” at Syddansk Universitet (the University of Southern Denmark, or SDU). Next, he headed on to Copenhagen, where he gave the following three speeches.

On May 8th, he was at the IT-Universitetet i København (IT University of Copenhagen, or ITU) to give his speech “Free Software and your freedom in computing,” to an audience of about three hundred people.

Photos courtesy of ITU Innovators (copyright © 2019, CC BY 4.0).

On May 9th, he was in the Lundbeckfond Auditorium of the Copenhagen Biocenter, at Københavns Universitet (University of Copenhagen, or KU), to give his speech “Computing, freedom, and privacy.”

Photos courtesy of the University of Copenhagen (copyright © 2019, CC BY 4.0).

On May 10th, he gave his speech "Should we have more surveillance than the USSR?" at Danmarks Tekniske Universitet (the Technical University of Denmark, or DTU).

On the day after that, while still in Copenhagen, he visited the Danish-French School, which is the only school in Denmark that we are aware of to use free software exclusively.

Thank you to everyone who made this trip possible!

Please fill out our contact form, so that we can inform you about future events in and around Aalborg, Odense, and Copenhagen.

Please see www.fsf.org/events for a full list of all of RMS's confirmed engagements,
and contact rms-assist@gnu.org if you'd like him to come speak.


1. The recording will soon be posted on our audio-video archive.

08 August, 2019 03:35PM

August 07, 2019

Fundraiser membership drive comes to an end and we all win!

The Free Software Foundation (FSF) spring fundraiser has come to an end and we would like to thank you for your help in surpassing our ambitious goal of 200 new members in 28 days, and for all the inspirational words of support we've received over the past weeks. The motivations people give for becoming associate members are gratifying, and these are only a few:

  • I think non-free software is unethical.

  • GNU OS has enabled me to build my companies and remain independent of big capital

  • I thought it was about time that I started supporting the software I've been using for years...

We're extremely thankful for all of the ways you may have contributed. For instance, you may have taken the time to explain the concept of free software to someone new. Perhaps you licensed your program under the GNU GPLv3 or later, or you contributed to free software by giving your spare time writing code. Or you heard our call and left your copy of the Free Software Foundation Bulletin in a public space and shared our images online. Because of your help, we managed to welcome 206 new members to our associate membership program, and new people continue to join daily!

We are only 14 staff here at the FSF, so we rely on our community to keep moving forward in the fight for user freedom. Whether online or offline, the support we get from your financial generosity, positive feedback, and sharing our mission for user freedom is very much appreciated.

We are powered by donors, members, and volunteers like you -- all of the work of the FSF is accomplished with your help.

A warm thank GNU!

Zoe Kooyman
Program Manager

07 August, 2019 03:43PM

Christopher Allan Webber

ActivityPub Conf 2019 Speakers

Good news everyone! The speaker list for ActivityPub Conf 2019 is here! (In this document, below, but also in ODT and PDF formats.)

(Bad news everyone: registration is closed! We're now at 40 people registered to attend. However, we do aim to post recordings of the event afterwards for those who couldn't register in time.)

But, just in case you'd rather see the list of speakers on a webpage rather than download a document, here you go:

Keynote: Mark Miller, “Architectures of Robust Openness”

Description coming soon! But we're very excited about Mark Miller keynoting.

Keynote: Christopher Lemmer Webber, “ActivityPub: past, present, future”

This talk gives an overview of ActivityPub: how did we get to this point? Where are we now? Where do we need to go? We'll paint a chart from past to a hopeful future with better privacy, richer interactions, and more security and control for our users.

Matt Baer, “Federated Blogging with WriteFreely”

We're building out one idea of what federated blogging could look like with separate ActivityPub-powered platforms, WriteFreely and Read.as -- one for writing and one for reading. Beyond the software, we're also offering hosting services and helping new instances spring up to make community-building more accessible, and to get ActivityPub-powered software into more hands. In this talk I'll go over our approach so far and where we're headed next.

Caleb James DeLisle, “The case for the unattributed message”

Despite its significant contribution to internet culture, the archetype of the anonymous image board has been largely ignored by protocol designers. Perhaps this is because it's all too easy to conflate unattributed speech with unmoderated speech, which has shown itself to be a dead end. But as we've seen from Twitter and Facebook, putting a name on everything hasn't actually worked that well at improving the quality of discourse; what it does do is put already marginalized people at greater risk.

What I credit as one of the biggest breakthroughs of the fediverse is the loose federation which allows a person to choose their moderator, completely sidestepping the question of undemocratic censorship vs. toxic free speech. Now I want to start a conversation about how we might marry this powerful moderation system to a forum which divorces the expression of thought from all forms of identity.

Cristina DeLisle, “OSS compliance with privacy by default and design”

Privacy is becoming more and more central in shaping the future of tech, and data protection legislation has contributed significantly to making this happen. Privacy by default and by design are core principles that are fundamental to how software should be envisioned. The GDPR, which came into the spotlight, has a strong case to become a standard even outside European borders, influencing the way we protect personal data. Whatever its impact might be, its implementation is still in its infancy. OSS has found itself facing this situation, and one aspect that is particularly interesting on the tech side is how to incorporate the principles of privacy by default and by design into the software that we build.

This talk is going to be an overview of how the GDPR has impacted FOSS communities, what we mean by privacy by default and by design, and how we could envision these principles applied in our OSS. It will bring examples from which we might find something interesting to learn, regardless of whether we look at them as mistakes, best practices, or just ways of doing things.

Michael Demetriou, “I don't know what I'm talking about: a newbie's introduction to ActivityPub”

I have just started my development journey in ActivityPubLand and I hope to have a first small application ready before ActivityPubConf. I was thinking that since I have close to zero experience with ActivityPub development, I could document my first month of experience, describe the onboarding process and point out useful resources and common pitfalls. In the end I can showcase what I've done during this period.

Luc Didry, “Advice to new fediverse administrators and developers”

Hosting an ActivityPub service is not like hosting other services… and the same goes for developing ActivityPub software. Here is some advice based on Framasoft's experience, errors, and observations (we host a Mastodon instance and develop two pieces of ActivityPub software: PeerTube and Mobilizon, the latter not yet released).

Maloki, “Is ActivityPub paving the way to web 3.0?”

A talk about how we're walking away from Web 2.0, and paving the way to Web 3.0 with ActivityPub development. We'll discuss what this could mean for the future of the web, we'll look at some of the history of the web, and also consider the social implications moving forward.

Pukkamustard, “The Semantic Social Network”

ActivityPub uses JSON-LD as its serialization. This means @context fields all over the place. But there is really more behind this: ActivityPub speaks Linked Data. In this talk we would like to show what this means and how it can be used to do cool things. We might even convince you that the Fediverse is a huge distributed graph that could be queried in very interesting ways - that the Fediverse is a Semantic Social Network.

Schmittlauch, “Decentralised Hashtag Search and Subscription in Federated Social Networks”

Hashtags have become an important tool for organising topic-related posts in all major social networks, and have even managed to spark social movements like #MeToo. Unfortunately, in federated social networks the view of all posts under a hashtag has so far been fragmented between instances.

For a student research paper I came up with an architecture for search and subscription of hashtag-posts in federated social networks. This additional backend for instances augments the Fediverse with a little bit of P2P technology.

As this architecture is still at a conceptual stage, after presenting my work I'd like to gather ideas and feedback from various Fediverse stakeholders: What do global hashtags mean for marginalised people and moderation, are they more a tool of empowerment or of harassment? How can this concept be represented in the ActivityPub protocol? And what stories do server devs have to tell about common attack scenarios?

Serge Wroclawski, “Keeping Unwanted Messages off the Fediverse”

Spam, scams and harassment pose a threat to all social networks, including the Fediverse. In this talk, we discuss a multilayered approach to mitigating these threats. We explore spam mitigation techniques of the past as well as new techniques such as OcapPub and Postage.

07 August, 2019 02:54PM by Christopher Lemmer Webber

August 03, 2019

GNU Guile

GNU Guile 2.9.3 (beta) released

We are delighted to announce GNU Guile 2.9.3, the third beta release in preparation for the upcoming 3.0 stable series. See the release announcement for full details and a download link.

This release improves the quality of the just-in-time (JIT) native code generation, resulting in up to 50% performance improvements on some workloads. See the article "Fibs, lies, and benchmarks" for an in-depth discussion of some of the specific improvements.

GNU Guile 2.9.3 is a beta release, and as such offers no API or ABI stability guarantees. Users needing a stable Guile are advised to stay on the stable 2.2 series.

Experience reports with GNU Guile 2.9.3, good or bad, are very welcome; send them to guile-devel@gnu.org. If you know you found a bug, please do send a note to bug-guile@gnu.org. Happy hacking!

03 August, 2019 02:20PM by Andy Wingo (guile-devel@gnu.org)

gnuastro @ Savannah

Gnuastro 0.10 released

The 10th release of GNU Astronomy Utilities (Gnuastro) is now available. Please see the announcement for more.

03 August, 2019 02:16AM by Mohammad Akhlaghi

August 02, 2019

FSF Events

Richard Stallman - "Copyright vs Community" (Moscow, Russia)

This speech by Richard Stallman will be nontechnical, admission is gratis, and the public is encouraged to attend.

Copyright developed in the age of the printing press, and was designed to fit with the system of centralized copying imposed by the printing press. But the copyright system does not fit well with computer networks, and only draconian punishments can enforce it.
The global corporations that profit from copyright are lobbying for draconian punishments, and to increase their copyright powers, while suppressing public access to technology. But if we seriously hope to serve the only legitimate purpose of copyright–to promote progress, for the benefit of the public–then we must make changes in the other direction.

Location: комната A-202, Московский политехнический университет, Большая Семеновская ул., 38 (вход на территорию, 55.78163° N, 37.70975° E; здание, 55.78137° N, 37.71110° E), Москва, Moscow Oblast, 107023 (room A-202, Moscow Polytech, B. Semyenovskaya St., 38 (entrance to area, 55.78163° N, 37.70975° E; building, 55.78137° N, 37.71110° E), Moscow, Moscow Oblast, Russia, 107023)

Please fill out our contact form, so that we can contact you about future events in and around Moscow.

02 August, 2019 05:20PM

Event - EmacsConf

EmacsConf is the conference about the joy of Emacs, Emacs Lisp, and memorizing key sequences.

See event page for more information.

Location: virtual (online)

Please subscribe to our online newsletter, the Free Software Supporter, so that we can contact you about future events.

02 August, 2019 05:15PM

Richard Stallman - "Free software and your freedom" (TechTrain, Saint Petersburg, Russia)

Richard Stallman will be speaking at TechTrain (2019-08-24–25). His speech will be nontechnical and the public is encouraged to attend.

The Free Software Movement campaigns for computer users' freedom to cooperate and control their own computing. The Free Software Movement developed the GNU operating system, typically used together with the kernel Linux, specifically to make these freedoms possible.

Location: павильон H, КВЦ «Экспофорум», Петербургское шоссе 64к1 лит. А, Санкт-Петербург (Pavillion H, Saint Petersburg ExpoForum, Peterburgskoye Shosse 64k1, building A, Saint Petersburg, Russia)

Note: Anonymous registration (in cash and without the need to run nonfree software) will be possible, at the venue, on the day of the event.

Please fill out our contact form, so that we can contact you about future events in and around Saint Petersburg.

02 August, 2019 05:10PM

Richard Stallman - "Copyright vs Community" (Saint Petersburg, Russia)

Copyright developed in the age of the printing press, and was designed to fit with the system of centralized copying imposed by the printing press. But the copyright system does not fit well with computer networks, and only draconian punishments can enforce it.
The global corporations that profit from copyright are lobbying for draconian punishments, and to increase their copyright powers, while suppressing public access to technology. But if we seriously hope to serve the only legitimate purpose of copyright–to promote progress, for the benefit of the public–then we must make changes in the other direction.

This speech by Richard Stallman will be nontechnical, admission is gratis, and the public is encouraged to attend.

Location: Россия, Санкт-Петербург, 10 линия 33 (10 line VO 33, Saint Petersburg, Russia - Phoenix learning center)

Please fill out our contact form, so that we can contact you about future events in and around Saint Petersburg.

02 August, 2019 04:55PM

August 01, 2019

libc @ Savannah

The GNU C Library version 2.30 is now available

The GNU C Library
=================

The GNU C Library version 2.30 is now available.

The GNU C Library is used as the C library in the GNU system and
in GNU/Linux systems, as well as many other systems that use Linux
as the kernel.

The GNU C Library is primarily designed to be a portable
and high performance C library.  It follows all relevant
standards including ISO C11 and POSIX.1-2017.  It is also
internationalized and has one of the most complete
internationalization interfaces known.

The GNU C Library webpage is at http://www.gnu.org/software/libc/

Packages for the 2.30 release may be downloaded from:
        http://ftpmirror.gnu.org/libc/
        http://ftp.gnu.org/gnu/libc/

The mirror list is at http://www.gnu.org/order/ftp.html

NEWS for version 2.30
=====================

Major new features:

  • Unicode 12.1.0 Support: Character encoding, character type info, and
    transliteration tables are all updated to Unicode 12.1.0, using
    generator scripts contributed by Mike FABIAN (Red Hat).

  • The dynamic linker accepts the --preload argument to preload shared
    objects, in addition to the LD_PRELOAD environment variable.

  • The twalk_r function has been added.  It is similar to the existing
    twalk function, but it passes an additional caller-supplied argument
    to the callback function.

  • On Linux, the getdents64, gettid, and tgkill functions have been added.

  • Minguo (Republic of China) calendar support has been added as an
    alternative calendar for the following locales: zh_TW, cmn_TW, hak_TW,
    nan_TW, lzh_TW.

  • An entry for the new Japanese era has been added for the ja_JP locale.

  • The memory allocation functions malloc, calloc, realloc, reallocarray,
    valloc, pvalloc, memalign, and posix_memalign now fail when the total
    object size is larger than PTRDIFF_MAX.  This avoids potential undefined
    behavior with pointer subtraction within the allocated object, where
    results might overflow the ptrdiff_t type.

  • The dynamic linker no longer refuses to load objects which reference
    versioned symbols whose implementation has moved to a different soname
    since the object was linked.  The old error message, "symbol
    FUNCTION-NAME, version SYMBOL-VERSION not defined in file DSO-NAME with
    link time reference", is gone.

  • New POSIX-proposed pthread_cond_clockwait, pthread_mutex_clocklock,
    pthread_rwlock_clockrdlock, pthread_rwlock_clockwrlock, and sem_clockwait
    functions have been added.  These behave similarly to their "timed"
    equivalents, but also accept a clockid_t parameter to determine which
    clock their timeout should be measured against.  All functions allow
    waiting against CLOCK_MONOTONIC and CLOCK_REALTIME.  The decision of
    which clock to use is made at the time of the wait (unlike with
    pthread_condattr_setclock, which requires the clock choice at
    initialization time).

  • On AArch64 the GNU IFUNC resolver call ABI changed: old resolvers still
    work; new resolvers can use a second argument which can be extended in
    the future (currently it contains the AT_HWCAP2 value).

Deprecated and removed features, and other changes affecting compatibility:

  • The copy_file_range function fails with ENOSYS if the kernel does not
    support the system call of the same name.  Previously, user space
    emulation was performed, but its behavior did not match the kernel
    behavior, which was deemed too confusing.  Applications which use the
    copy_file_range function can no longer rely on glibc to provide a
    fallback on kernels that do not support the copy_file_range system
    call; if this function fails with ENOSYS, they will need to use their
    own fallback.  Support for copy_file_range on most architectures was
    added in version 4.5 of the mainline Linux kernel.

  • The functions clock_gettime, clock_getres, clock_settime,
    clock_getcpuclockid, and clock_nanosleep have been removed from the
    librt library for new applications (on architectures which had them).
    Instead, the definitions in libc, which have been available since
    glibc 2.17, will be used automatically.

  • The obsolete and never-implemented XSI STREAMS header files <stropts.h>
    and <sys/stropts.h> have been removed.

  • Support for the "inet6" option in /etc/resolv.conf and the RES_USE_INET6
    resolver flag (deprecated in glibc 2.25) have been removed.

  • The obsolete RES_INSECURE1 and RES_INSECURE2 option flags for the DNS
    stub resolver have been removed from <resolv.h>.

  • With --enable-bind-now, installed programs are now linked with the
    BIND_NOW flag.

  • Support for the PowerPC SPE ISA extension (powerpc-*-*gnuspe*
    configurations) has been removed, following the deprecation of this
    subarchitecture in version 8 of GCC, and its removal in version 9.

  • On 32-bit Arm, support for the port-based I/O emulation and the
    <sys/io.h> header have been removed.

  • The Linux-specific <sys/sysctl.h> header and the sysctl function have
    been deprecated and will be removed from a future version of glibc.
    Applications should access /proc directly instead.  For obtaining
    random bits, the getentropy function can be used.

Changes to build and runtime requirements:

  • GCC 6.2 or later is required to build the GNU C Library.  Older GCC
    versions and non-GNU compilers are still supported when compiling
    programs that use the GNU C Library.

Security related changes:

  CVE-2019-7309: The x86-64 memcmp used signed Jcc instructions to check
  size.  For x86-64, memcmp on an object size larger than SSIZE_MAX has
  undefined behavior.  On x32, the size_t argument may be passed in the
  lower 32 bits of the 64-bit RDX register with non-zero upper 32 bits.
  When this happened with the sign bit of the RDX register set, memcmp
  gave the wrong result, since it treated the size argument as zero.
  Reported by H.J. Lu.

  CVE-2019-9169: Attempted case-insensitive regular-expression match
  via proceed_next_node in posix/regexec.c leads to heap-based buffer
  over-read.  Reported by Hongxu Chen.

The following bugs are resolved with this release:

  [2872] locale: Transliteration Cyrillic -> ASCII fails
  [6399] libc: gettid() should have a wrapper
  [16573] malloc: mtrace hangs when MALLOC_TRACE is defined
  [16976] glob: fnmatch unbounded stack VLA for collating symbols
  [17396] localedata: globbing for locale by [[.collating-element.]]
  [18035] dynamic-link: pldd does no longer work, enters infinite loop
  [18465] malloc: memusagestat is built using system C library
  [18830] locale: iconv -c -f ascii with >buffer size worth of input before
    invalid input drops valid char
  [20188] nptl: libpthread IFUNC resolver for vfork can lead to crash
  [20568] locale: Segfault with wide characters and setlocale/fgetwc/UTF-8
  [21897] localedata: Afar locales: Fix mon, abmon, and abday
  [22964] localedata: The Japanese Era name will be changed on May 1, 2019
  [23352] malloc: __malloc_check_init still defined in public header
    malloc.h.
  [23403] nptl: Wrong alignment of TLS variables
  [23501] libc: nftw() doesn't return dangling symlink's inode
  [23733] malloc: Check the count before calling tcache_get()
  [23741] malloc: Missing _attribute_alloc_size_ in many allocation
    functions
  [23831] localedata: nl_NL missing LC_NUMERIC thousands_sep
  [23844] nptl: pthread_rwlock_trywrlock results in hang
  [23983] argparse: Missing compat versions of argp_failure and argp_error
    for long double = double
  [23984] libc: Missing compat versions of err.h and error.h functions for
    long double = double
  [23996] localedata: Dutch salutations
  [24040] libc: riscv64: unterminated call chain in __thread_start
  [24047] network: libresolv should use IP_RECVERR/IPV6_RECVERR to avoid
    long timeouts
  [24051] stdio: puts and putchar output to _IO_stdout instead of stdout
  [24059] nss: nss_files: get_next_alias calls fgets_unlocked without
    checking for NULL.
  [24114] regex: regexec buffer read overrun in "grep -i
    '\(\(\)*.\)*\(\)\(\)\1'"
  [24122] libc: Segfaults if 0 returned from la_version
  [24153] stdio: Some input functions do not react to stdin assignment
  [24155] string: x32 memcmp can treat positive length as 0 (if sign bit in
    RDX is set) (CVE-2019-7309)
  [24161] nptl: __run_fork_handlers self-deadlocks in malloc/tst-mallocfork2
  [24164] libc: Systemtap probes need to use "nr" constraint on 32-bit Arm,
    not the default "nor"
  [24166] dynamic-link: Dl_serinfo.dls_serpath[1] in dlfcn.h causes UBSAN
    false positives, change to modern flexible array
  [24180] nptl: pthread_mutex_trylock does not use the correct order of
    instructions while maintaining the robust mutex list due to missing
    compiler barriers.
  [24194] librt: Non-compatibility symbols for clock_gettime etc. cause
    unnecessary librt dependencies
  [24200] localedata: Revert first_weekday removal in en_IE locale
  [24211] nptl: Use-after-free in Systemtap probe in pthread_join
  [24215] nptl: pthread_timedjoin_np should be a cancellation point
  [24216] malloc: Check for large bin list corruption when inserting
    unsorted chunk
  [24228] stdio: old x86 applications that use legacy libio crash on exit
  [24231] dynamic-link: [sparc64] R_SPARC_H34 implementation falls through
    to R_SPARC_H44
  [24293] localedata: Missing Minguo calendar support for TW locales
  [24296] localedata: Orthographic mistakes in 'day' and 'abday' sections in
    tt_RU (Tatar) locale
  [24307] localedata: Update locale data to Unicode 12.0.0
  [24323] dynamic-link: dlopen should not be able open PIE objects
  [24335] build: "Obsolete types detected" with Linux 5.0 headers
  [24369] localedata: Orthographic mistakes in 'mon' and 'abmon' sections in
    tt_RU (Tatar) locale
  [24370] localedata: Add lang_name for tt_RU locale
  [24372] locale: Binary locale files are not architecture independent
  [24394] time: strptime %Ey mis-parses final year of era
  [24476] dynamic-link: __libc_freeres triggers bad free in libdl if dlerror
    was not used
  [24506] dynamic-link: FAIL: elf/tst-pldd with --enable-hardcoded-path-in-
    tests
  [24531] malloc: Malloc tunables give tcache assertion failures
  [24532] libc: conform/arpa/inet.h failures due to linux kernel 64-bit
    time_t changes
  [24535] localedata: Update locale data to Unicode 12.1.0
  [24537] build: nptl/tst-eintr1 test case can hit task limits on some
    kernels and break testing
  [24544] build: elf/tst-pldd doesn't work if you install with a --prefix
  [24556] build: [GCC 9] error: ‘%s’ directive argument is null
    [-Werror=format-overflow=]
  [24570] libc: alpha: compat msgctl uses __IPC_64
  [24584] locale: Data race in __wcsmbs_clone_conv
  [24588] stdio: Remove codecvt vtables from libio
  [24603] math: sysdeps/ieee754/dbl-64/branred.c is slow when compiled with
    -O3 -march=skylake
  [24614] localedata: nl_NL LC_MONETARY doesn't match CLDR 35
  [24632] stdio: Old binaries which use freopen with default stdio handles
    crash
  [24640] libc: __ppc_get_timebase_freq() always return 0 when using static
    linked glibc
  [24652] localedata: szl_PL spelling correction
  [24695] nss: nss_db: calling getpwent after endpwent crashes
  [24696] nss: endgrent() clobbers errno=ERRNO for 'group: db files' entry
    in /etc/nsswitch.conf
  [24699] libc: mmap64 with very large offset broken on MIPS64 n32
  [24740] libc: getdents64 type confusion
  [24741] dynamic-link: ld.so should not require that a versioned symbol is
    always implemented in the same library
  [24744] libc: Remove copy_file_range emulation
  [24757] malloc: memusagestat is linked against system libpthread
  [24794] libc: Partial test suite run builds corrupt test-in-container
    testroot

Release Notes
=============

https://sourceware.org/glibc/wiki/Release/2.30

Contributors
============

This release was made possible by the contributions of many people.
The maintainers are grateful to everyone who has contributed
changes or bug reports.  These include:

Adam Maris
Adhemerval Zanella
Alexandra Hájková
Andreas K. Hüttel
Andreas Schwab
Anton Youdkevitch
Aurelien Jarno
Carlos O'Donell
DJ Delorie
Daniil Zhilin
David Abdurachmanov
David Newall
Dmitry V. Levin
Egor Kobylkin
Felix Yan
Feng Xue
Florian Weimer
Gabriel F. T. Gomes
Grzegorz Kulik
H.J. Lu
Jan Kratochvil
Jim Wilson
Joseph Myers
Maciej W. Rozycki
Mao Han
Mark Wielaard
Matthew Malcomson
Mike Crowe
Mike FABIAN
Mike Frysinger
Mike Gerow
PanderMusubi
Patsy Franklin
Paul A. Clarke
Paul Clarke
Paul Eggert
Paul Pluzhnikov
Rafal Luzynski
Richard Henderson
Samuel Thibault
Siddhesh Poyarekar
Stan Shebs
Stefan Liebler
Szabolcs Nagy
TAMUKI Shoichi
Tobias Klauser
Tulio Magno Quites Machado Filho
Uros Bizjak
Vincent Chen
Vineet Gupta
Wilco Dijkstra
Wolfram Sang
Yann Droneaud
Zack Weinberg
mansayk
marxin

01 August, 2019 08:12PM by Carlos O'Donell

July 28, 2019

stow @ Savannah

GNU Stow 2.3.1 released

This release improves ease of installation by dropping some module dependencies which were introduced in 2.3.0.  It also fixes an issue with the test suite, and improves the release procedure.  See http://git.savannah.gnu.org/cgit/stow.git/tree/NEWS for more details.

Also note that 2.3.0 was released last month (June 2019) and announced on the mailing lists but not here on savannah.

28 July, 2019 01:43PM by Adam Spiers

July 26, 2019

FSF Blogs

GNU Spotlight with Mike Gerwitz: 16 new GNU releases in July!

For announcements of most new GNU releases, subscribe to the info-gnu mailing list: https://lists.gnu.org/mailman/listinfo/info-gnu.

To download: nearly all GNU software is available from https://ftp.gnu.org/gnu/, or preferably one of its mirrors from https://www.gnu.org/prep/ftp.html. You can use the URL https://ftpmirror.gnu.org/ to be automatically redirected to a (hopefully) nearby and up-to-date mirror.

A number of GNU packages, as well as the GNU operating system as a whole, are looking for maintainers and other assistance: please see https://www.gnu.org/server/takeaction.html#unmaint if you'd like to help. The general page on how to help GNU is at https://www.gnu.org/help/help.html.

If you have a working or partly working program that you'd like to offer to the GNU project as a GNU package, see https://www.gnu.org/help/evaluation.html.

As always, please feel free to write to us at maintainers@gnu.org with any GNUish questions or suggestions for future installments.

26 July, 2019 05:56PM

July 25, 2019

Strengthen free software by telling Congress to reject the STRONGER Patents Act

The Free Software Foundation (FSF) has long been opposed to the ways in which US and international patent law have been misused to allow for so-called "software patents." As these patents are not held over a particular piece of software, but rather any principle that can be used in its design, it is more accurate to describe them as software idea patents. Software idea patents are a grave threat to free software developers because they severely limit the scope of what they can or cannot implement, and also expose them to legal harassment from any group that has enough resources to enforce their unjust claim over a programming idea. Litigation on a software idea patent can spell financial ruin for developers and an abrupt end to any free software project. An upcoming bill in Congress would leave the free software community much more open to these kinds of attacks.

Despite its failure to pass in 2017, a bill with the appropriately Orwellian title of "Support Technology and Research of Our Nation's Growth and Economic Resilience" (STRONGER) Patents Act (S.2082) was reintroduced into Congress on July 10th, 2019. If passed, the Act would make software idea patents much more easily claimed and enforceable against developers in the free software community. Whatever its effects on other types of patents may be, the fact that it will prop up software idea patents is reason enough to reject it. As we have seen before, it is an attempt to patch a broken system. The proposed bill never bothers to question the validity of these patents as a category. To protect our community, it is our duty to inform Congress and legislators that the only remedy for the problem posed by software idea patents is to dismantle them entirely.

The FSF urges all of its supporters to contact their local congresspeople and advise them to vote against the STRONGER Patents Act. No matter where you are in the world, please stand firm in campaigning for the right of free software users and programmers to use, copy, and develop tools for user and community empowerment.

Take action!

If you're not in the US, let us know your country by updating your profile so we can send you more relevant info. In the meantime, please also help us spread the word to your contacts in the US.

Nervous? Try using the following script:

Hello,

I live in CITY/STATE. I am calling to urge you to vote against the "STRONGER" Patents Act, and protect programmers and developers from the threat of software idea patents.

Thank you for your time.

Don't know who to call?

  • To call your congressperson directly, dial (202) 224-3121 and the switchboard will connect you.
  • Alternatively, you can find contact information for both your local senator and House representative online in the Senate and House directories.

25 July, 2019 07:20PM

July 24, 2019

Fall internships at the FSF! Apply by September 2

Do you believe that free software is crucial to a free society? Do you want to help people understand why free software matters, and how to use it? Are you interested in diving into software freedom issues like copyleft, Digital Restrictions Management (DRM), or surveillance and encryption? Or are you interested in sysadmin work? We may have just the opportunity for you here at FSF!

These positions are unpaid, educational opportunities, and the FSF will provide any appropriate documentation you might need to receive funding and school credit from outside sources. We also provide lunch expense reimbursement and a monthly transportation pass that will give you free access to local subways and buses (MBTA). We place an emphasis on providing hands-on educational opportunities for interns, in which they work closely with staff mentors on projects that match their skills and interests.

Interns can choose from the following fields of work:

  • The FSF campaigns team is in charge of communicating with and expanding our audience of free software supporters, targeting important opportunities for free software adoption and development, and empowering people to act against specific threats to their freedom. Campaigns team interns might work on expanding and updating our resources on a particular area of the free software world. Or, the International Day Against DRM (IDAD) offers a great opportunity to plan and experience the online and offline elements of a full campaign in close collaboration with our campaigns manager.

  • The FSF licensing and compliance lab works with developers to license their packages under one of the GNU General Public Licenses and to help organizations maintain compliance. They also field all licensing and copyleft inquiries. Licensing team interns might assist with the Respects Your Freedom certification program, or they might work to improve the Free Software Directory or analyze the compatibility of other licenses with the GPL.

  • The FSF tech team maintains and improves the infrastructure for the FSF and the GNU Project. Tech team interns may choose from our current list of projects, or suggest one of their own. We have plenty of opportunities, from updating our video streaming toolkit, to improving our data management systems.

Fall internships start in mid to late September and typically run for twelve weeks. We prefer candidates who are able to work in our Boston office, but may consider remote interns. The deadline to apply is September 2.

To apply, send a letter of interest and a resume with two references to hiring@fsf.org. Please send all application materials in free software-friendly formats like .pdf, .odt, and .txt. Use "Fall internship application" as the subject line of your email. Please include links to your writing, design, or coding work if it applies -- personal, professional, or class work is acceptable. URLs are preferred, though email attachments in free formats are acceptable, too. Learn more about our internships, and direct any questions to info@fsf.org.

24 July, 2019 04:00PM

Christopher Allan Webber

Mark S. Miller keynoting at ActivityPub Conf 2019

I am extremely pleased to announce that Mark S. Miller is keynoting at ActivityPub Conf 2019!

It's hard for me to overstate how huge this is. Mark S. Miller works at Agoric, which is leading the way on modern application of object capabilities. That's convenient, since exploration of how to apply object capabilities to federated social networks is a major topic of interest on the fediverse.

But just leaving it at that would be leaving out too much. We can trace Mark's work back to the Agoric papers of 1988, which laid out the vision for a massive society and economy of computing agents. (And yes, that's where the Agoric company got its name.)

For 30 years Mark has been working towards that vision, and social networks have continued to intersect with his work. In the late 1990s Mark was involved in a company working on the game Electric Communities Habitat (it's hard to find information on it, but here's a rare video of it in action). (Although Mark Miller didn't work on it, Electric Communities Habitat has its predecessor in Lucasfilm's Habitat, which it turns out was a graphical multiplayer game which ran on the Commodore 64!(!!!) You can see the entertaining trailer for this game... keep in mind, this was released in 1986!)

People who have read my blog before may know that I've talked about building secure social spaces as virtual worlds: part of the reason I know it is possible is that Electric Communities Habitat in large part built it and proved the ideas possible. Electric Communities, the company, did not survive, but the ideas lived on in the E programming language, which I like to describe as "the most interesting and important programming language you may have never heard of".

While the oldschool design of the website may give you the impression that the ideas there are out of date, time and time again I've found that the answers to my questions about how to build things have all been found on erights.org and in Mark Miller's dissertation.

Mark's work hasn't stopped there. Many good ideas in Javascript (such as its promises system) were largely inspired by Mark's work on the E programming language (Mark joined the Javascript standardization process to make it possible to build ocap-safe systems on it), and... well, I can go on and on.

Instead, I'm going to pause and say that I'm extremely excited that Mark has agreed to come to ActivityPub Conf to help introduce the community to the ideas in object capabilities. I hope the history I laid out above helps make it clear that the work to coordinate cooperative behavior amongst machines overlaps strongly with our work in the federated social web of establishing cooperative behavior amongst communities of human beings. I look forward to Mark helping us understand how to apply these ideas to our space.

Has this post got you excited? At the time of me writing this, there's still space at ActivityPub Conf 2019, and there's still time (until Monday July 29th) to submit talks. See the conference announcement for more details, and hope to see you there!

EDIT: I incorrectly cited Mark Miller originally as being involved in Lucasfilm's Habitat; fixed and better explained its history.

24 July, 2019 04:00PM by Christopher Lemmer Webber

Jose E. Marchesi

Rhhw July 2019 @ Frankfurt am Main

The Rabbit Herd will be meeting the weekend from 26 July to 28 July 2019, in Frankfurt. If you are in the nearby and in the mood for some hacking, feel free to join us!

24 July, 2019 12:00AM

GNUnet News

GNUnet 0.11.6 released

2019-07-24: GNUnet 0.11.6 released

We are pleased to announce the release of GNUnet 0.11.6.

This is a bugfix release for 0.11.5, fixing a lot of minor bugs, improving stability and code quality. Further, our videos are back on the homepage. In this release, we again improved the webpage in general and updated our documentation. As always: In terms of usability, users should be aware that there are still a large number of known open issues in particular with respect to ease of use, but also some critical privacy issues especially for mobile users. Also, the nascent network is tiny (about 200 peers) and thus unlikely to provide good anonymity or extensive amounts of interesting information. As a result, the 0.11.6 release is still only suitable for early adopters with some reasonable pain tolerance.

Download links

gnunet-gtk and gnunet-fuse were not released again, as there were no changes and the 0.11.0 versions are expected to continue to work fine with gnunet-0.11.6.

Note that due to mirror synchronization, not all links might be functional early after the release. For direct access try http://ftp.gnu.org/gnu/gnunet/

Noteworthy changes in 0.11.6 (since 0.11.5)

  • gnunet-identity can now print private keys.
  • The REST service can be configured to echo the HTTP Origin header value for Cross-Origin-Resource-Sharing (CORS) when it is called by a browser plugin. Optionally, a CORS Origin to echo can be also be directly configured.
  • re:claimID tickets are now re-used whenever possible.
  • SUID binary detection mechanisms were implemented to improve compatibility with some distributions.
  • TRANSPORT, TESTBED and CADET tests now pass again on macOS.
  • The GNS proxy Certification Authority is now generated using gnutls-certtool, if available, with openssl/certtool as a fallback.
  • Documentation, comments, and code quality were improved.

Known Issues

  • There are known major design issues in the TRANSPORT, ATS and CORE subsystems which will need to be addressed in the future to achieve acceptable usability, performance and security.
  • There are known moderate implementation limitations in CADET that negatively impact performance. Also CADET may unexpectedly deliver messages out-of-order.
  • There are known moderate design issues in FS that also impact usability and performance.
  • There are minor implementation limitations in SET that create unnecessary attack surface for availability.
  • The RPS subsystem remains experimental.
  • Some high-level tests in the test-suite fail non-deterministically due to the low-level TRANSPORT issues.

In addition to this list, you may also want to consult our bug tracker at bugs.gnunet.org which lists about 190 more specific issues.

Thanks

This release was the work of many people. The following people contributed code and were thus easily identified: Martin Schanzenbach, Julius Bünger, ng0, Christian Grothoff, Alexia Pagkopoulou, rexxnor, xrs, lurchi and t3sserakt.

24 July, 2019 12:00AM

July 23, 2019

parallel @ Savannah

GNU Parallel 20190722 ('Ryugu') released

GNU Parallel 20190722 ('Ryugu') has been released. It is available for download at: http://ftpmirror.gnu.org/parallel/

GNU Parallel is 10 years old next year, on 2020-04-22. You are hereby invited to a reception on Friday 2020-04-17.

See https://www.gnu.org/software/parallel/10-years-anniversary.html

Quote of the month:

  It is SUPER easy to speed up jobs from the command line w/ GNU parallel.
    -- B3n @B3njaminHimes@twitter

New in this release:

  • {= uq; =} causes the replacement string to be unquoted. Example: parallel echo '{=uq;=}.jpg' ::: '*'
  • --tagstring {=...=} is now evaluated for each line with --linebuffer.
  • Use -J ./profile to read a profile in current dir.
  • Startup is 40% faster: on GNU/Linux the parent shell is found differently, and information about the CPU and which setpgrp method to use is cached.
  • $PARALLEL_SSHLOGIN can be used in the command line.
  • Bug fixes and man page updates.
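
The new unquoting can be sketched as follows. This is a hedged demo, not from the release notes: it guards on GNU Parallel (>= 20190722) being installed, and the .jpg file names are made up.

```shell
#!/bin/sh
# Demo of the new {= uq; =} replacement string. Skips gracefully if a
# sufficiently new GNU Parallel is not on PATH; file names are invented.
set -e
workdir=$(mktemp -d)
cd "$workdir"
touch a.jpg b.jpg
if command -v parallel >/dev/null 2>&1; then
  # Plain {} is quoted, so the literal argument '*' never reaches the shell:
  parallel echo 'arg is {}' ::: '*'
  # With uq; the replacement is left unquoted, so '*.jpg' is
  # glob-expanded by the shell that runs each job:
  parallel echo '{= uq; =}.jpg' ::: '*'
fi
```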

Get the book: GNU Parallel 2018 http://www.lulu.com/shop/ole-tange/gnu-parallel-2018/paperback/product-23558902.html

GNU Parallel - For people who live life in the parallel lane.

About GNU Parallel

GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.
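
The loop replacement described above can be sketched like this. A hedged example: the guarded line needs GNU Parallel, `gzip -k` needs GNU gzip >= 1.6, and the file contents are made up.

```shell
#!/bin/sh
# Replacing a sequential shell loop with GNU Parallel (illustrative).
set -e
workdir=$(mktemp -d)
cd "$workdir"
printf 'one\n' > a.txt
printf 'two\n' > b.txt
# A sequential loop over the input files...
for f in *.txt; do gzip -k "$f"; done
rm -f a.txt.gz b.txt.gz
# ...can usually be replaced one-for-one; the jobs then run in
# parallel, but the output arrives as if they had run sequentially:
if command -v parallel >/dev/null 2>&1; then
  parallel gzip -k ::: *.txt
fi
```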

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with:
(wget -O - pi.dk/3 || curl pi.dk/3/ || fetch -o - http://pi.dk/3) | bash

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your command line will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2018): GNU Parallel 2018, March 2018, https://doi.org/10.5281/zenodo.1146014.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/ Identi.ca/Google+/Twitter/Facebook/Linkedin/mailing lists
  • Get the merchandise https://gnuparallel.threadless.com/designs/gnu-parallel
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference

If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)

If GNU Parallel saves you money:

  • (Have your company) donate to FSF https://my.fsf.org/donate/

About GNU SQL

GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.
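
For illustration, a DBURL packs the login information into one URL-like string. The user, host, and database names below are invented, and actually running the query would need GNU sql plus a reachable MySQL server:

```shell
#!/bin/sh
# Hypothetical invocation (names are made up):
#
#   sql mysql://forumuser:secret@db.example.org:3306/forumdb \
#       "SELECT COUNT(*) FROM posts;"
#
# The pieces a DBURL carries, pulled apart with plain shell expansion:
dburl='mysql://forumuser:secret@db.example.org:3306/forumdb'
vendor=${dburl%%://*}     # which command line client to drive
database=${dburl##*/}     # which database to open
echo "vendor=$vendor database=$database"
```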

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.

About GNU Niceload

GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.
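
A hedged usage sketch of the soft and hard limits described above. The guarded lines need GNU niceload on PATH, and `make` stands in for any long-running job:

```shell
#!/bin/sh
# Soft vs. hard load limits with niceload (illustrative; skipped when
# niceload is not installed).
if command -v niceload >/dev/null 2>&1; then
  niceload -l 5 make          # soft limit: suspend make now and then while load > 5
  niceload --hard -l 2 make   # hard limit: run make only while load is below 2
fi
# The figure niceload watches by default is the load average; on Linux:
load=$(awk '{print $1}' /proc/loadavg 2>/dev/null || echo 0)
echo "1-minute load average: $load"
```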

23 July, 2019 07:00AM by Ole Tange

July 22, 2019

Christopher Allan Webber

ActivityPub Conf 2019

ActivityPub Conf flier

This flier also available in PDF and ODT formats.

UPDATE: As of August 5th, registrations have now filled up! See all of you who registered at ActivityPub Conf!

That's right! We're hosting the first ever ActivityPub Conf. It's immediately following Rebooting Web of Trust in Prague.

There's no admission fee to attend. (Relatedly, the conference is kind of being done on the cheap, because it is being funded by organizers who are themselves barely funded.) The venue, however, is quite cool: it's at the DOX Centre for Contemporary Art, which is itself exploring the ways the digital world is affecting our lives.

If you plan on attending (and maybe also speaking), you should get in your application soon (see the flier for details). We've never done one of these, and we have no idea what the response will be like, so this is going to be a smaller gathering (about 40 people). In some ways, it will be somewhere between a conference and a gathering of people-who-are-interested-in-activitypub.

As said in the flier, by attending, you are agreeing to the code of conduct, so be sure to read that.

The plan is that the first day will be talks (see the flier above for details on how to apply as a speaker) and the second day will be an unconference, with people splitting off into groups to work through problems of mutual interest.

Applications for general admission are first-come-first-serve. Additionally, we have reserved some slots for speakers specifically; the application to get in submissions for talks is 1 week from today (July 29th). We are hoping for and encouraging a wide range of participant backgrounds.

Hope to see you in Prague!

22 July, 2019 04:00PM by Christopher Lemmer Webber

hyperbole @ Savannah

GNU Hyperbole 7.0.3 is the latest release

Hyperbole is an amazing hypertextual information management system
that installs quickly and easily as an Emacs package.  It is part of
GNU Elpa, the Emacs Lisp Package Archive.

Hyperbole interlinks all your working information within Emacs for
fast access and editing, not just within special modes.  An hour
invested exploring Hyperbole's built-in interactive DEMO file will
save you hundreds of hours in your future work.

7.0.3 is a significant release with a number of interesting
improvements.  What's new in this release is described here:

   http://www.gnu.org/s/hyperbole/HY-NEWS.html

Hyperbole is described here:

   http://www.gnu.org/s/hyperbole

For use cases, see:

   http://www.gnu.org/s/hyperbole/HY-WHY.html

For what users think about Hyperbole, see:

  https://www.gnu.org/s/hyperbole/hyperbole.html#user-quotes

Hyperbole can supplement and extend Org mode's capabilities.  It adds
many features not found elsewhere in Emacs, including in Org mode; see:

   http://www.emacswiki.org/emacs/Hyperbole

Hyperbole includes its own easy-to-use hypertextual buttons and links
that can be created without the need for any markup language.

Hyperbole has an interactive demo to introduce you to its features as
well as a detailed reference manual, as explained here:

  https://www.gnu.org/s/hyperbole/hyperbole.html#invocation-and-doc

========================================================================

  • Quick Reasons to Try Hyperbole

========================================================================

It contains:

- the most flexible and easy-to-use hyperbuttons available, including
  implicit buttons automatically recognized by context, e.g. stack
  trace source line references.

- the only Emacs outliner with full legal item numbering,
  e.g. 1.4.2.6, and automatic permanent hyperlink anchors for every
  item

- the only free-form contact manager with full-text search for Emacs

- rapid and precise window, frame and buffer placement on screen

- an extensive menu of typed web searches, e.g. dictionary, wikipedia
  and stackoverflow, plus convenient, fast file and line finding
  functions

- immediate execution of a series of key presses just by typing them
  out.  For example, an M-RETURN press on: {C-x C-b C-s scratch RET
  C-a} will find the first buffer menu item that contains 'scratch';
  then leave point at the beginning of its line.  Build interactive
  tutorials with this.

========================================================================

  • The Magic of Implicit Buttons and the Action Key

========================================================================

For near instant gratification, try Hyperbole's 'implicit button'
capabilities (hyper-buttons that Hyperbole gives you for free by
recognizing all types of references embedded within text such as
pathnames or error message lines).  Below are more complex examples to
show the power; simpler ones can be found within the Hyperbole DEMO
file.

Implicit buttons are activated by pressing the Action Key, M-RETURN.
Once Hyperbole is loaded in your Emacs, pressing M-RETURN on any of
these examples in virtually any buffer will display the associated
referent in a chosen window or frame, handling all variable
substitution and full path resolution:

    "find-func.el"                            Find this file whether gzipped or not
                                              in the Emacs Lisp load-path

    "${hyperb:dir}/HY-NEWS"                   Resolve variable, show Hyperbole news

    "${PATH}/umask"                           Display a script somewhere in multi-dir PATH

    "${hyperb:dir}/DEMO#Hyperbole Menus"      Org mode outline, Markdown, and HTML # refs

    "(hyperbole)Menus"                        Texinfo and Info node links

    "c:/Users", "c:\Users", "/C/Users", "/c/Users", and "/mnt/c/Users"
                                            On Windows and Windows Subsystem for Linux,
                                            Hyperbole recognizes all of these as the
                                            same path and can translate between Windows
                                            and POSIX path formats in both directions

Git Links:
    git#branches                              List branches in current repo/project
    git#commits                               List and browse commits for current project
    git#tags                                  List tags in current project

    git#/hyperbole                            From any buffer, dired on the top
                                              directory of the local hyperbole
                                              project

    git#/hyperbole/55a1f0 or                  From any buffer, display hyperbole
    git#hyperbole/55a1f0                      local git commit diff

Github Links:
    gh@rswgnu                                 Display user's home page & projects

    github#rswgnu/hyperbole                   Display user's project
    gh#rswgnu/helm/global_mouse               Display user project's branch
    gh#rswgnu/hyperbole/55a1f0                Display user project's commit diff

Gitlab Links:
    gitlab@seriyalexandrov                    Display user's home page
    gl#gitlab-org/gitlab-ce/activity          Summarize user's project activity
    gl#gitlab-org/gitlab-ce/analytics         Display user project's cycle_analytics
    gl#gitlab-org/gitlab-ce/boards            Display user project's kanban-type issue boards

Once you set the default user and project variables, you can leave them off any reference links:

    (setq hibtypes-gitlab-default-user "gitlab-org")
    (setq hibtypes-gitlab-default-project "gitlab-ce")

    gl#issues or gl#list                      Display default project's issue list
    gl#labels                                 Display default project's issue categories
    gl#members                                Display default project's staff list
    gl#contributors                           Show contributor push frequency charts
    gl#merge_requests or gl#pulls             Display default project's pull requests
    gl#milestones                             Display default project's milestones status
    gl#pages                                  Display default project's web pages
    gl#snippets                               Project snippets, diffs and text with discussion
    gl#groups                                 List all available groups of projects
    gl#projects                               List all available projects

    gl#milestone=38                           Show a specific project milestone
    gl#snippet/1689487                        Show a specific project snippet

Even useful social media links:
    tw#travel or twitter#travel               Display twitter hashtag matches
    fb#technology                             Display facebook hashtag matches

Hyperbole uses simple prefix characters with paths to make them executable:
    "!/bin/date"                              Execute as a non-windowed program within a shell
    "&/opt/X11/bin/xeyes"                     Execute as a windowed program;
    "-find-func.el"                           Load/execute this Emacs Lisp library

    File "/usr/lib/python3.7/ast.py", line 37, in parse
                                              Jump to error/stack trace source

    "/ftp:anonymous@ftp.gnu.org:"             Tramp remote paths

22 July, 2019 04:16AM by Robert Weiner

July 21, 2019

Sylvain Beucler

Planet clean-up

planet.gnu.org logo

I did some clean-up / resync on the planet.gnu.org setup :)

  • Fix issue with newer https websites (SNI)
  • Re-sync Debian base config, scripts and packaging, update documentation; the planet-venus package is still in bad shape though, it's not officially orphaned but the maintainer is unreachable AFAICS
  • Fetch all Savannah feeds using https
  • Update feeds with redirections, which seem to mess up caching

21 July, 2019 04:57PM

July 13, 2019

Parabola GNU/Linux-libre

[From Arch] libbloom>=1.6-2 update requires manual intervention

The libbloom package prior to version 1.6-2 was missing a soname link. This has been fixed in 1.6-2, so the upgrade will need to overwrite the untracked soname link created by ldconfig. If you get an error

libbloom: /usr/lib/libbloom.so.1 exists in filesystem

when updating, use

pacman -Suy --overwrite usr/lib/libbloom.so.1

to perform the upgrade.

13 July, 2019 10:00PM by David P.

July 12, 2019

rush @ Savannah

Version 2.1

Version 2.1 is available for download from GNU and Puszcza archives.

This version fixes several minor bugs that appeared in the previous release, 2.0.

12 July, 2019 07:45PM by Sergey Poznyakoff

GNU Guix

Towards Guix for DevOps

Hey, there! I'm Jakob, a Google Summer of Code intern and new contributor to Guix. Since May, I've been working on a DevOps automation tool for the Guix System, which we've been calling guix deploy.

The idea for a Guix DevOps tool has been making rounds on the mailing lists for some time now. Years, in fact; Dave Thompson and Chris Webber put together a proof-of-concept for it way back in 2015. Thus, we've had plenty of time to gaze upon the existing tools for this sort of thing -- Ansible, NixOps -- and fantasize about a similar tool, albeit with the expressive power of Guile scheme and the wonderful system configuration facilities of Guix. And now, those fantasies are becoming a reality.

"DevOps" is a term that might be unfamiliar to a fair number of Guix users. I'll spare you the detour to Wikipedia and give a brief explanation of what guix deploy does.

Imagine that you've spent the afternoon playing around with Guile's (web) module, developing software for a web forum. Awesome! But a web forum with no users is pretty boring, so you decide to shell out a couple bucks for a virtual private server to run your web forum. You feel that Wildebeest admirers on the internet deserve a platform of their own for discussion, and decide to dedicate the forum to that.

As it turns out, C. gnou is a more popular topic than you ever would have imagined. Your web forum soon grows in size -- attracting hundreds of thousands of simultaneous users. Despite Guile's impressive performance characteristics, one lowly virtual machine is too feeble to support such a large population of Wildebeest fanatics. So you decide to use Apache as a load-balancer, and shell out a couple more bucks for a couple more virtual private servers. Now you've got a problem on your hands; you're the proud owner of five or so virtual machines, and you need to make sure they're all running the most recent version of either your web forum software or Apache.

This is where guix deploy comes into play. Just as you'd use an operating-system declaration to configure services and user accounts on a computer running the Guix System, you can now use that same operating-system declaration to remotely manage any number of machines. A "deployment" managing your Wildebeest fan site setup might look something like this:

...

;; Service for our hypothetical guile web forum application.
(define guile-forum-service-type
  (service-type (name 'guile-forum)
                (extensions
                 (list (service-extension shepherd-root-service-type
                                          guile-forum-shepherd-service)
                       (service-extension account-service-type
                                          (const %guile-forum-accounts))))
                (default-value (guile-forum-configuration))
                (description "A web forum written in GNU Guile.")))

...

(define %forum-server-count 4)

(define (forum-server n)
  (operating-system
    (host-name (format #f "forum-server-~a" n))
    ...
    (services
     (append (list (service guile-forum-service-type
                            (guile-forum-configuration
                             "GNU Fan Forum!")))
             %base-services))))

(define load-balancer-server
  (operating-system
    (host-name "load-balancer-server")
    ...
    (services
     (append (list (service httpd-service-type
                            (httpd-configuration
                             ...)))
             %base-services))))

;; One machine running our load balancer.
(cons (machine
       (system load-balancer-server)
       (environment managed-host-environment-type)
       (configuration (machine-ssh-configuration
                       ...)))

      ;; And a couple running our forum software!
      (let loop ((n 1)
                 (servers '()))
        (if (> n %forum-server-count)
            servers
            (loop (1+ n)
                  (cons (machine
                         (system (forum-server n))
                         (environment managed-host-environment-type)
                         (configuration (machine-ssh-configuration
                                         ...)))
                        servers)))))

The take-away from that example is that there's a new machine type atop the good ol' operating-system type, specifying how the machine should be provisioned. The version of guix deploy that's currently on the master branch only supports managed-host-environment-type, which is used for machines that are already up and running the Guix System. Provisioning, in that sense, only really involves opening an SSH connection to the host. But I'm sure you can imagine a linode-environment-type which automatically sets up a virtual private server through Linode, or a libvirt-environment-type that spins up a virtual machine for running your services. Those types are what I'll be working on in the coming months, in addition to cleaning up the code that's there now.

And yes, you did read that right. guix deploy is on the Guix master branch right now! In fact, we've already done a successful deployment right here on ci.guix.gnu.org. So, if this sounds as though it'd be up your alley, run guix pull, crack open the manual, and let us know how it goes!
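
As a hedged sketch of what kicking off a deployment looks like: `deploy.scm` is our name for a file holding a machine list like the one above, not something from the post, and the real command is guarded since guix is likely not installed here.

```shell
#!/bin/sh
# Invoking guix deploy on a deployment file (illustrative).
action='guix deploy deploy.scm'
if command -v guix >/dev/null 2>&1 && [ -f deploy.scm ]; then
  # Builds each machine's operating-system and pushes it over SSH,
  # much like a remote 'guix system reconfigure':
  $action
else
  echo "would run: $action"
fi
```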

About GNU Guix

GNU Guix is a transactional package manager and an advanced distribution of the GNU system that respects user freedom. Guix can be used on top of any system running the kernel Linux, or it can be used as a standalone operating system distribution for i686, x86_64, ARMv7, and AArch64 machines.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. When used as a standalone GNU/Linux distribution, Guix offers a declarative, stateless approach to operating system configuration management. Guix is highly customizable and hackable through Guile programming interfaces and extensions to the Scheme language.

12 July, 2019 07:00PM by Jakob L. Kreuze

FSF Events

John Sullivan - " 'Just don't buy it': Consumer choices in free software activism" (Curitiba, Brazil)

FSF executive director John Sullivan will be giving his speech “‘Just don't buy it’: Consumer choices in free software activism” at DebConf 19 (2019-07-21–28):

Movement activism often focuses on economic decisions. Buy this ethically made product; don't buy that one made by a company that funds terrible things. In free software, we encourage people to boycott (for example) Microsoft, and to instead support companies who sell machines with GNU/Linux.

It's an intuitive idea that, as individuals wanting to make the world better, we should use our willingness to spend or not spend money to reward those who do right and punish those who do wrong. Throughout history, this has sometimes been effective. But how effective? Can it be dangerous?

There is a danger of reducing activism and social change strategy to these decisions. We see this in the free software movement, when some activist campaigns aimed at persuading people to stop using proprietary software are met with responses like, “If you don't like Apple products, just don't buy them. Help make free products that are better than theirs. Why campaign against them?” or “How can you criticize proprietary software but still drive a car that has it?”

As an advocate, have you ever heard these responses, or felt like a hypocrite, or stumbled trying to explain to others why the situation is more complicated than “just don't buy it”?

How do we form a holistic movement strategy for advancing user freedom that takes consumer activism as far as possible, without overprioritizing it?

I hope those interested in effectively fighting for user freedom will join me as I share thoughts formed from 16 years of experience working on the Free Software Foundation's advocacy efforts, against the backdrop of some highlights from the history of other social movements.

Location: Auditório, Av. Sete de Setembro, 3165 Rebouças, Curitiba, PR 80.230-901, Brazil

Please fill out our contact form, so that we can contact you about future events in and around Curitiba.

12 July, 2019 09:55AM

July 09, 2019

Christopher Allan Webber

Racket is an acceptable Python

A little over a decade ago, there were some popular blogposts about whether Ruby was an acceptable Lisp or whether even Lisp was an acceptable Lisp. Peter Norvig was also writing at the time introducing Python to Lisp programmers. Lisp, those in the know knew, was the right thing to strive for, and yet seemed unattainable for anything aimed at production ever since the AI Winter shattered Lisp's popularity in the 80s/early 90s. If you can't get Lisp, what's the closest thing you can get?

This was around the time I was starting to program; I had spent some time configuring my editor with Emacs Lisp and loved every moment I got to do it; I read some Lisp books and longed for more. And yet when I tried to "get things done" in the language, I just couldn't make as much headway as I could with my preferred language for practical projects at the time: Python.

Python was great... mostly. It was easy to read, it was easy to write, it was easy-ish to teach to newcomers. (Python's intro material is better than most, but my spouse has talked before about some major pitfalls that the Python documentation has which make getting started unnecessarily hard. You can hear her talk about that at this talk we co-presented on at last year's RacketCon.) I ran a large free software project on a Python codebase, and it was easy to get new contributors; the barrier to entry to becoming a programmer with Python was low. I consider that to be a feature, and it certainly helped me bootstrap my career.

Most importantly of all though, Python was easy to pick up and run with because no matter what you wanted to do, either the tools came built in or the Python ecosystem had enough of the pieces nearby that building what you wanted was usually fairly trivial.

But Python has its limitations, and I always longed for a lisp. For a brief time, I thought I could get there by contributing to the Hy project, which was a lisp that transformed itself into the Python AST. "Why write Python in a syntax that's easy to read when you could add a bunch of parentheses to it instead?" I would joke when I talked about it. Believe it or not though, I do consider lisps easier to read, once you are comfortable to understand their syntax. I certainly find them easier to write and modify. And I longed for the metaprogramming aspects of Lisp.

Alas, Hy didn't really reach my dream. The macro expansion made debugging a nightmare, as Hy would lose track of where the line numbers were; it wasn't until then that I really realized that without line numbers, you're just lost in terms of debugging in Python-land. That, and Python didn't really have the right primitives: immutable datastructures for whatever reason never became first class, meaning that functional programming was hard; "cons" didn't really exist (actually this doesn't matter as much as people might think); recursive programming isn't really possible without tail call elimination; etc etc etc.

But I missed parentheses. I longed for parentheses. I dreamed in parentheses. I'm not kidding, the only dreams I've ever had in code were in lisp, and it's happened multiple times, programs unfolding before me. The structure of lisp makes the flow of code so clear, and there's simply nothing like the comfort of developing in front of a lisp REPL.

Yet to choose to use a lisp seemed to mean opening myself up to eternal yak-shaving of developing packages that were already available on the Python Package Index, or limiting my development community to an elite group of Emacs users. When I was in Python, I longed for the beauty of a Lisp; when I was in a Lisp, I longed for the ease of Python.

All this changed when I discovered Racket:

  • Racket comes with a full-featured editor named DrRacket built-in that's damn nice to use. It has all the features that previously made lisp hacking comfortable mostly only for Emacs users: parenthesis balancing, comfortable REPL integration, etc etc. But if you want to use Emacs, you can use racket-mode. Win-win.
  • Racket has intentionally been built as an educational language, not unlike Python. One of the core audiences of Racket is middle schoolers, and it even comes with a built-in game engine for kids. (The How to Design Programs prologue might give you an introductory taste, and Realm of Racket is a good book all about learning to program by building Racket games.)
  • My spouse and I even taught classes about how to learn to program for humanities academics using Racket. We found the age-old belief that "lisp syntax is just too hard" is simply false; the main thing that most people lack is decent lisp-friendly tooling with a low barrier to entry, and DrRacket provides that. The only people who were afraid of the parentheses turned out to be people who already knew how to program. Those who didn't even praised the syntax for its clarity and the way the editor could help show you when you made a syntax error (DrRacket is very good at that). "Lisp is too hard to learn" is a lie; if middle schoolers can learn it, so can more seasoned programmers.
  • Racket might even be more batteries included than Python. At least all the batteries that come included are generally nicer; Racket's GUI library is the only time I've ever had fun in my life writing GUI programs (and they're cross platform too). Constructing pictures with its pict library is a delight. Plotting graphs with plot is an incredible experience. Writing documentation with Scribble is the best non-org-mode experience I've ever had, but has the advantage over org-mode in that your document is just inverted code. I could go on. And these are just some packages bundled with Racket; the Package repository contains much more.
  • Racket's documentation is, in my experience, unparalleled. The Racket Guide walks you through all the key concepts, and the Racket Reference has everything else you need.
  • The tutorials are also wonderful; the introductory tutorial gets your feet wet not through composing numbers or strings but by building up pictures. Want to learn more? The next two tutorials show you how to build web applications and then build your own web server.
  • Like Python, even though Racket has its roots in education, it is more than ready for serious practical use. These days, when I want to build something and get it done quickly and efficiently, I reach for Racket first.

Racket is a great Lisp, but it's also an acceptable Python. Sometimes you really can have it all.

09 July, 2019 02:27PM by Christopher Lemmer Webber

July 03, 2019

Aleksander Morgado

DW5821e firmware update integration in ModemManager and fwupd

The Dell Wireless 5821e module is a Qualcomm SDX20 based LTE Cat16 device. This modem can work in either MBIM mode or QMI mode, and provides different USB layouts for each of the modes. In Linux kernel based and Windows based systems, the MBIM mode is the default one, because it provides easy integration with the OS (e.g. no additional drivers or connection managers required in Windows) and also provides all the features that QMI provides through QMI over MBIM operations.

The firmware update process of this DW5821e module is integrated in your GNU/Linux distribution, since ModemManager 1.10.0 and fwupd 1.2.6. There is no official firmware released in the LVFS (yet) but the setup is completely ready to be used, just waiting for Dell to publish an initial official firmware release.

The firmware update integration between ModemManager and fwupd involves different steps, which I’ll try to describe here so that it’s clear how to add support for more devices in the future.

1) ModemManager reports expected update methods, firmware version and device IDs

The Firmware interface in the modem object exposed in DBus contains, since MM 1.10, a new UpdateSettings property that provides a bitmask specifying which is the expected firmware update method (or methods) required for a given module, plus a dictionary of key-value entries specifying settings applicable to each of the update methods.

In the case of the DW5821e, two update methods are reported in the bitmask: “fastboot” and “qmi-pdc“, because both are required to have a complete firmware upgrade procedure. “fastboot” would be used to perform the system upgrade by using an OTA update file, and “qmi-pdc” would be used to install the per-carrier configuration files after the system upgrade has been done.

The list of settings provided in the dictionary contains the two mandatory fields required for all devices that support at least one firmware update method: “device-ids” and “version”. These two fields are designed so that fwupd can fully rely on them during its operation:

  • The “device-ids” field will include a list of strings providing the device IDs associated to the device, sorted from the most specific to the least specific. These device IDs are the ones that fwupd will use to build the GUIDs required to match a given device to a given firmware package. The DW5821e will expose four different device IDs:
    • “USB\VID_413C“: specifying this is a Dell-branded device.
    • “USB\VID_413C&PID_81D7“: specifying this is a DW5821e module.
    • “USB\VID_413C&PID_81D7&REV_0318“: specifying this is hardware revision 0x318 of the DW5821e module.
    • “USB\VID_413C&PID_81D7&REV_0318&CARRIER_VODAFONE“: specifying this is hardware revision 0x318 of the DW5821e module running with a Vodafone-specific carrier configuration.
  • The “version” field will include the firmware version string of the module, using the same format as used in the firmware package files used by fwupd. This requirement is obviously very important, because if the format used is different, the simple version string comparison used by fwupd (literally ASCII string comparison) would not work correctly. It is also worth noting that if the carrier configuration is also versioned, the version string should contain not only the version of the system, but also the version of the carrier configuration. The DW5821e will expose a firmware version including both, e.g. “T77W968.F1.1.1.1.1.VF.001” (system version being F1.1.1.1.1 and carrier config version being “VF.001”)
  • In addition to the mandatory fields, the dictionary exposed by the DW5821e will also contain a “fastboot-at” field specifying which AT command can be used to switch the module into fastboot download mode.
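To make the version layout concrete, here is a small sketch that splits the example string from above into its system and carrier-config parts. The split rule is inferred from the article's single example ("T77W968.F1.1.1.1.1.VF.001" with system "F1.1.1.1.1" and carrier "VF.001") and the helper name is made up, so treat it as an illustration rather than ModemManager behavior:

```python
def split_dw5821e_version(version: str) -> tuple[str, str]:
    # Assumption inferred from the article's example: the first segment is
    # the product prefix ("T77W968"), the last two segments are the carrier
    # config version, and everything in between is the system version.
    parts = version.split(".")
    return ".".join(parts[1:-2]), ".".join(parts[-2:])

system, carrier = split_dw5821e_version("T77W968.F1.1.1.1.1.VF.001")
print(system)   # F1.1.1.1.1
print(carrier)  # VF.001
```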

2) fwupd matches GUIDs and checks available firmware versions

Once fwupd detects a modem in ModemManager that is able to expose the correct UpdateSettings property in the Firmware interface, it will add the device as a known device that may be updated in its own records. The device exposed by fwupd will contain the GUIDs built from the “device-ids” list of strings exposed by ModemManager. E.g. for the “USB\VID_413C&PID_81D7&REV_0318&CARRIER_VODAFONE” device ID, fwupd will use GUID “b595e24b-bebb-531b-abeb-620fa2b44045”.
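fwupd derives these GUIDs by hashing the device-ID strings. To the best of my understanding the default scheme is an RFC 4122 version-5 (SHA-1) UUID in the DNS namespace; that namespace choice is an assumption here, and if it holds, this sketch reproduces the GUID quoted above (b595e24b-bebb-531b-abeb-620fa2b44045):

```python
import uuid

# Assumption: fwupd's default GUID-from-string scheme is a standard
# RFC 4122 v5 (SHA-1) UUID using the DNS namespace.
device_id = "USB\\VID_413C&PID_81D7&REV_0318&CARRIER_VODAFONE"
guid = uuid.uuid5(uuid.NAMESPACE_DNS, device_id)
print(guid)
```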

fwupd will then be able to look for firmware packages (CAB files) available in the LVFS that are associated to any of the GUIDs exposed for the DW5821e.

The CAB files packaged for the LVFS will contain one single firmware OTA file plus one carrier MCFG file for each supported carrier in the given firmware version. The CAB files will also contain one “metainfo.xml” file for each of the supported carriers in the released package, so that per-carrier firmware upgrade paths are available: only firmware updates for the currently used carrier should be considered. E.g. we don’t want users running with the Vodafone carrier config to get notified of upgrades to newer firmware versions that aren’t certified for the Vodafone carrier.

Each of the CAB files with multiple “metainfo.xml” files will therefore be associated to multiple GUID/version pairs. E.g. the same CAB file will be valid for the following GUIDs (using Device ID instead of GUID for a clearer explanation, but really the match is per GUID not per Device ID):

  • Device ID “USB\VID_413C&PID_81D7&REV_0318&CARRIER_VODAFONE” providing version “T77W968.F1.2.2.2.2.VF.002”
  • Device ID “USB\VID_413C&PID_81D7&REV_0318&CARRIER_TELEFONICA” providing version “T77W968.F1.2.2.2.2.TF.003”
  • Device ID “USB\VID_413C&PID_81D7&REV_0318&CARRIER_VERIZON” providing version “T77W968.F1.2.2.2.2.VZ.004”
  • … and so on.

Following our example, fwupd will detect our device exposing device ID “USB\VID_413C&PID_81D7&REV_0318&CARRIER_VODAFONE” and version “T77W968.F1.1.1.1.1.VF.001” in ModemManager and will be able to find a CAB file for the same device ID providing a newer version “T77W968.F1.2.2.2.2.VF.002” in the LVFS. The firmware update is possible!
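The matching described in this step can be sketched as follows. Names here are illustrative, not fwupd API; the one load-bearing detail, taken from the earlier note about the “version” field, is that the comparison is a literal ASCII string comparison:

```python
# What a CAB file "provides": GUID -> offered version. Device IDs are
# shown in place of GUIDs for readability, as in the article's example.
cab_provides = {
    "USB\\VID_413C&PID_81D7&REV_0318&CARRIER_VODAFONE": "T77W968.F1.2.2.2.2.VF.002",
    "USB\\VID_413C&PID_81D7&REV_0318&CARRIER_TELEFONICA": "T77W968.F1.2.2.2.2.TF.003",
}

def find_update(device_id, current_version):
    # An update applies when the GUID matches and the offered version
    # compares greater by plain ASCII string comparison.
    offered = cab_provides.get(device_id)
    if offered is not None and offered > current_version:
        return offered
    return None

print(find_update("USB\\VID_413C&PID_81D7&REV_0318&CARRIER_VODAFONE",
                  "T77W968.F1.1.1.1.1.VF.001"))  # T77W968.F1.2.2.2.2.VF.002
```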

3) fwupd requests device inhibition from ModemManager

In order to perform the firmware upgrade, fwupd requires full control of the modem. Therefore, when the firmware upgrade process starts, fwupd will use the new InhibitDevice(TRUE) method in the Manager DBus interface of ModemManager to request that a specific modem with a specific uid should be inhibited. Once the device is inhibited in ModemManager, it will be disabled and removed from the list of modems in DBus, and no longer used until the inhibition is removed.

The inhibition may be removed by calling InhibitDevice(FALSE) explicitly once the firmware upgrade is finished, and will also be automatically removed if the program that requested the inhibition disappears from the bus.

4) fwupd downloads CAB file from LVFS and performs firmware update

Once the modem is inhibited in ModemManager, fwupd can right away start the firmware update process. In the case of the DW5821e, the firmware update requires two different methods and two different upgrade cycles.

The first step would be to reboot the module into fastboot download mode using the AT command specified by ModemManager in the “fastboot-at” entry of the “UpdateSettings” property dictionary. After running the AT command, the module will reset itself and reboot with a completely different USB layout (and different vid:pid) that fwupd can detect as being the same device as before but in a different working mode. Once the device is in fastboot mode, fwupd will download and install the OTA file using the fastboot protocol, as defined in the “flashfile.xml” file provided in the CAB file:

<parts interface="AP">
  <part operation="flash" partition="ota" filename="T77W968.F1.2.2.2.2.AP.123_ota.bin" MD5="f1adb38b5b0f489c327d71bfb9fdcd12"/>
</parts>

Once the OTA file is completely downloaded and installed, fwupd will trigger a reset of the module also using the fastboot protocol, and the device will boot from scratch on the newly installed firmware version. During this initial boot, the module will report itself running in a “default” configuration not associated to any carrier, because the OTA file update process involves fully removing all installed carrier-specific MCFG files.

The second upgrade cycle performed by fwupd once the modem is detected again involves downloading all carrier-specific MCFG files one by one into the module using the QMI PDC protocol. Once all are downloaded, fwupd will activate the specific carrier configuration that was previously active before the download was started. E.g. if the module was running with the Vodafone-specific carrier configuration before the upgrade, fwupd will select the Vodafone-specific carrier configuration after the upgrade. The module is reset one last time using the QMI DMS protocol as the last step of the upgrade procedure.

5) fwupd removes device inhibition from ModemManager

The upgrade logic will finish by removing the device inhibition from ModemManager using InhibitDevice(FALSE) explicitly. At that point, ModemManager would re-detect and re-probe the modem from scratch, which should already be running in the newly installed firmware and with the newly selected carrier configuration.
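Condensing steps 3 through 5, the overall flow looks roughly like this sketch. Every helper and message here is an illustrative stand-in, not real fwupd or ModemManager API, and the fastboot AT command is left as a placeholder since it is device-provided; the sketch just records the order of operations:

```python
log = []  # ordered record of what the upgrade would do

def upgrade_modem(fastboot_at, mcfg_files, previous_carrier):
    log.append("InhibitDevice(TRUE)")             # step 3: take the modem from MM
    try:
        log.append(f"AT command: {fastboot_at}")  # reboot into fastboot mode
        log.append("fastboot flash ota")          # step 4, first cycle: system upgrade
        log.append("fastboot reset")              # module boots on new firmware
        for mcfg in mcfg_files:                   # step 4, second cycle: carrier configs
            log.append(f"QMI PDC download: {mcfg}")
        log.append(f"QMI PDC activate: {previous_carrier}")
        log.append("QMI DMS reset")               # final reset
    finally:
        log.append("InhibitDevice(FALSE)")        # step 5: hand the modem back

# Hypothetical inputs; the real AT command and MCFG file names come from
# the UpdateSettings dictionary and the CAB file.
upgrade_modem("<fastboot-at command>", ["carrier_a.mcfg", "carrier_b.mcfg"], "Vodafone")
print("\n".join(log))
```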

03 July, 2019 01:27PM by aleksander

July 01, 2019

rush @ Savannah

Version 2.0

Version 2.0 is available for download from GNU and Puszcza archives.

This release features a complete rewrite of the configuration support. It introduces a new configuration file syntax that offers a large set of control structures and transformation instructions for handling arbitrary requests.  Please see the documentation for details.

Backward compatibility with prior releases is retained and old configuration syntax is still supported.  This ensures that existing installations will remain operational without any changes. Nevertheless, system administrators are encouraged to switch to the new syntax as soon as possible.

01 July, 2019 08:15AM by Sergey Poznyakoff

June 30, 2019

GNU Guile

GNU Guile 2.2.6 released

We are pleased to announce GNU Guile 2.2.6, the sixth bug-fix release in the new 2.2 stable release series. This release represents 11 commits by 4 people since version 2.2.5. First and foremost, it fixes a regression introduced in 2.2.5 that would break Guile’s built-in HTTP server.

See the release announcement for details.

30 June, 2019 09:55PM by Ludovic Courtès (guile-devel@gnu.org)

June 29, 2019

trans-coord @ Savannah

Malayalam team re-established

After more than 8 years of being orphaned, the Malayalam team is active again.  The new team leader, Aiswarya Kaitheri Kandoth, made a new translation of the Free Software Definition, so now we have 41 translations of that page!

Currently, Malayalam is the only active translation team among the official languages of India.  It is a Dravidian language spoken by about 40 million people worldwide, with the most speakers living in the Indian state of Kerala.  Like many Indian languages, it uses a syllabic script derived from Brahmi.

Links to up-to-date translations are shown on the automatically generated report page.

29 June, 2019 06:54AM by Ineiev

June 27, 2019

Christopher Allan Webber

How do Spritely's actor and storage layers tie together?

I've been hacking away at Spritely (see previously). Recently I've been making progress on both the actor model (goblins and its rewrite goblinoid) as well as the storage layers (currently called Magenc and Crystal, but we are talking about probably renaming both of them into a suite called "datashards"... yeah, everything is moving and changing fast right now.)

In the #spritely channel on freenode a friend asked, what is the big picture idea here? Both the actor model layer and the storage layer describe themselves as using "capabilities" (or more precisely "object capabilities" or "ocaps") but they seem to be implemented differently. How does it all tie together?

A great question! I think the first point of confusion is that while both follow the ocap paradigm (which is to say, reference/possession-based authority... possessing the capability gives you access, and it does not matter what your identity is for the most part for access control), they are implemented very differently because they are solving different problems. The storage system is based on encrypted, persistent data, with its ideas drawn from Tahoe-LAFS and Freenet, and the way that capabilities work is based on possession of cryptographic keys (which are themselves embedded/referenced in the URIs). The actor model, on the other hand, is based on holding onto a reference to a unique, unguessable URL (well, that's a bit of an intentional oversimplification for the sake of this explanation but we'll run with it) where the actor at that URL is "live" and communicated with via message passing. (Most of the ideas from this come from E and Waterken.) Actors are connected to each other over secure channels to prevent eavesdropping or leakage of the capabilities.

So yeah, how do these two seemingly very different layers tie together? As usual, I find that I most easily explain things via narrative, so let's imagine the following game scenario: Alice is in a room with a goblin. First Alice sees the goblin, then Alice attacks the goblin, then the goblin and Alice realize that they are not so different and become best friends.

The goblin and Alice both manifest in this universe as live actors. When Alice walks into the room (itself an actor), the room gives Alice a reference to the goblin actor. To "see" the goblin, Alice sends a message to it asking for its description. It replies with its datashards storage URI with its 3d model and associated textures. Alice can now query the storage system to reconstruct these models and textures from the datashards storage systems she uses. (The datashards storage systems themselves can't actually see the contents if they don't have the capability itself; this is much safer for peers to help the network share data because they can help route things through the network without personally knowing or being responsible for what the contents of those messages are. It could also be possible for the goblin to provide Alice with a direct channel to a storage system to retrieve its assets from.) Hooray, Alice got the 3d model and images! Now she can see the goblin.

Assuming that the goblin is an enemy, Alice attacks! Attacking is common in this game universe, and there is no reason necessarily to keep around attack messages, so sending a message to the goblin is just a one-off transient message... there's no need to persist it in the storage system.

The attack misses! The goblin shouts, "Wait!" and makes its case, that both of them are just adventurers in this room, and shouldn't they both be friends? Alice is touched and halts her attack. These messages are also sent transiently; while either party could log them, they are closer to an instant messenger or IRC conversation rather than something intended to be persisted long-term.

They exchange their mailbox addresses and begin sending each other letters. These, however, are intended to be persisted; when Alice receives a message from the goblin in her mailbox (or vice versa), the message received contains the datashards URI to the letter, which Alice can then retrieve from the appropriate store. She can then always refer to this message, and she can choose whether or not to persist it locally or elsewhere. Since the letter has its own storage URI, when Alice constructs a reply, she can clearly mark that it was in reference to the previous letter. Even if Alice or the goblin's servers go down, either can continue to refer to these letters. Alice and the goblin have the freedom to choose what storage systems they wish, whether targeted/direct/local or via a public peer to peer routing system, with reasonable assumptions (given the continued strength of the underlying cryptographic algorithms used) that the particular entities storing or forwarding their data cannot read its content.

And so it is: live references of actors are able to send live, transient messages, but can only be sent to other actors whose (unguessable/unforgeable) address you have. This allows for highly dynamic and expressive interactions while retaining security. Datashards URIs allow for the storage and retrieval of content which can continue to be persisted by interested parties, even if the originating host goes down.

There are some things I glossed over in this writeup. The particular ways that the actors' addresses and references work is one thing (unguessable HTTP-based capability URLs on their own have leakage problems due to the way various web technologies are implemented, and not even every actor reference needs to be a long-lived URI; see CapTP for more details); others are how to establish connections between actor processes/servers (we can reuse TLS, or even better, something like Tor's onion services), how interactions such as fighting can be scoped to a room (this paper explains how), and how we can map human-meaningful names onto unguessable identifiers (the answer there is petnames). But I have plans for all of this and increasing confidence that it will come together... I think we're already on track.

Hopefully this writeup brings some clarity on how some of the components will work together, though!

27 June, 2019 06:15PM by Christopher Lemmer Webber

June 26, 2019

Andy Wingo

fibs, lies, and benchmarks

Friends, consider the recursive Fibonacci function, expressed most lovelily in Haskell:

fib 0 = 0
fib 1 = 1
fib n = fib (n-1) + fib (n-2)

Computing elements of the Fibonacci sequence ("Fibonacci numbers") is a common microbenchmark. Microbenchmarks are like Suzuki exercises for learning violin: not written to be good tunes (good programs), but rather to help you improve a skill.

The fib microbenchmark teaches language implementors to improve recursive function call performance.

I'm writing this article because after adding native code generation to Guile, I wanted to check how Guile was doing relative to other language implementations. The results are mixed. We can start with the most favorable of the comparisons: Guile present versus Guile of the past.


I collected these numbers on my i7-7500U CPU @ 2.70GHz 2-core laptop, with no particular performance tuning, running each benchmark 10 times, waiting 2 seconds between measurements. The bar value indicates the median elapsed time, and above each bar is an overlaid histogram of all results for that scenario. Note that the y axis is on a log scale. The 2.9.3* version corresponds to unreleased Guile from git.

Good news: Guile has been getting significantly faster over time! Over decades, true, but I'm pleased.

where are we? static edition

How good are Guile's numbers on an absolute level? It's hard to say because there's no absolute performance oracle out there. However, there are relative performance oracles, so we can perhaps try out some other language implementations.

First up would be the industrial C compilers, GCC and LLVM. We can throw in a few more "static" language implementations as well: compilers that completely translate to machine code ahead-of-time, with no type feedback, and a minimal run-time.


Here we see that GCC is doing best on this benchmark, completing in an impressive 0.304 seconds. It's interesting that the result differs so much from clang. I had a look at the disassembly for GCC and I see:

fib:
    push   %r12
    mov    %rdi,%rax
    push   %rbp
    mov    %rdi,%rbp
    push   %rbx
    cmp    $0x1,%rdi
    jle    finish
    mov    %rdi,%rbx
    xor    %r12d,%r12d
again:
    lea    -0x1(%rbx),%rdi
    sub    $0x2,%rbx
    callq  fib
    add    %rax,%r12
    cmp    $0x1,%rbx
    jg     again
    and    $0x1,%ebp
    lea    0x0(%rbp,%r12,1),%rax
finish:
    pop    %rbx
    pop    %rbp
    pop    %r12
    retq   

It's not quite straightforward; what's the loop there for? It turns out that GCC inlines one of the recursive calls to fib. The microbenchmark is no longer measuring call performance, because GCC managed to reduce the number of calls. If I had to guess, I would say this optimization doesn't have a wide applicability and is just to game benchmarks. In that case, well played, GCC, well played.

LLVM's compiler (clang) looks more like what we'd expect:

fib:
   push   %r14
   push   %rbx
   push   %rax
   mov    %rdi,%rbx
   cmp    $0x2,%rdi
   jge    recurse
   mov    %rbx,%rax
   add    $0x8,%rsp
   pop    %rbx
   pop    %r14
   retq   
recurse:
   lea    -0x1(%rbx),%rdi
   callq  fib
   mov    %rax,%r14
   add    $0xfffffffffffffffe,%rbx
   mov    %rbx,%rdi
   callq  fib
   add    %r14,%rax
   add    $0x8,%rsp
   pop    %rbx
   pop    %r14
   retq   

I bolded the two recursive calls.

Incidentally, the fib as implemented by GCC and LLVM isn't quite the same program as Guile's version. If the result gets too big, GCC and LLVM will overflow, whereas in Guile we overflow into a bignum. Also in C, it's possible to "smash the stack" if you recurse too much; compilers and run-times attempt to mitigate this danger but it's not completely gone. In Guile you can recurse however much you want. Finally in Guile you can interrupt the process if you like; the compiled code is instrumented with safe-points that can be used to run profiling hooks, debugging, and so on. Needless to say, this is not part of C's mission.
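The bignum difference is easy to see with arbitrary-precision integers (Python standing in for Guile's numeric tower here): the naive result keeps growing unbounded, while a signed 64-bit C result would first overflow at fib(93).

```python
# With arbitrary-precision integers, fib never overflows; a C program
# returning int64_t would wrap once the result exceeds 2**63 - 1.
def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

INT64_MAX = 2**63 - 1
print(fib(92) <= INT64_MAX)  # True: fib(92) still fits in a signed 64-bit int
print(fib(93) > INT64_MAX)   # True: int64_t would overflow at fib(93)
```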

Some of these additional features can be implemented with no significant performance cost (e.g., via guard pages). But it's fair to expect that they have some amount of overhead. More on that later.

The other compilers are OCaml's ocamlopt, coming in with a very respectable result; Go, also doing well; and V8 WebAssembly via Node. As you know, you can compile C to WebAssembly, and then V8 will compile that to machine code. In practice it's just as static as any other compiler, but the generated assembly is a bit more involved:

fib_tramp:
    jmp    fib

fib:
    push   %rbp
    mov    %rsp,%rbp
    pushq  $0xa
    push   %rsi
    sub    $0x10,%rsp
    mov    %rsi,%rbx
    mov    0x2f(%rbx),%rdx
    mov    %rax,-0x18(%rbp)
    cmp    %rsp,(%rdx)
    jae    stack_check
post_stack_check:
    cmp    $0x2,%eax
    jl     return_n
    lea    -0x2(%rax),%edx
    mov    %rbx,%rsi
    mov    %rax,%r10
    mov    %rdx,%rax
    mov    %r10,%rdx
    callq  fib_tramp
    mov    -0x18(%rbp),%rbx
    sub    $0x1,%ebx
    mov    %rax,-0x20(%rbp)
    mov    -0x10(%rbp),%rsi
    mov    %rax,%r10
    mov    %rbx,%rax
    mov    %r10,%rbx
    callq  fib_tramp
return:
    mov    -0x20(%rbp),%rbx
    add    %ebx,%eax
    mov    %rbp,%rsp
    pop    %rbp
    retq   
return_n:
    jmp    return
stack_check:
    callq  WasmStackGuard
    mov    -0x10(%rbp),%rbx
    mov    -0x18(%rbp),%rax
    jmp    post_stack_check

Apparently fib compiles to a function of two arguments, the first passed in rsi, and the second in rax. (V8 uses a custom calling convention for its compiled WebAssembly.) The first synthesized argument is a handle onto run-time data structures for the current thread or isolate, and in the function prelude there's a check to see that the function has enough stack. V8 uses these stack checks also to handle interrupts, for when a web page is stuck in JavaScript.

Otherwise, it's a more or less normal function, with a bit more register/stack traffic than would be strictly needed, but pretty good.

do optimizations matter?

You've heard of Moore's Law -- though it doesn't apply any more, it roughly translated into hardware doubling in speed every 18 months. (Yes, I know it wasn't precisely that.) There is a corresponding rule of thumb for compiler land, Proebsting's Law: compiler optimizations make software twice as fast every 18 years. Zow!

The previous results with GCC and LLVM were with optimizations enabled (-O3). One way to measure Proebsting's Law would be to compare the results with -O0. Obviously in this case the program is small and we aren't expecting much work out of the optimizer, but it's interesting to see anyway:


Answer: optimizations don't matter much for this benchmark. This investigation does give a good baseline for compilers from high-level languages, like Guile: in the absence of clever trickery like the recursive inlining thing GCC does and in the absence of industrial-strength instruction selection, what's a good baseline target for a compiler? Here we see for this benchmark that it's somewhere between 420 and 620 milliseconds or so. Go gets there, and OCaml does even better.

how is time being spent, anyway?

Might we expect V8/WebAssembly to get there soon enough, or is the stack check that costly? How much time does one stack check take anyway? For that we'd have to determine the number of recursive calls for a given invocation.

Friends, it's not entirely clear to me why this is, but I instrumented a copy of fib, and I found that the number of calls in fib(n) was a more or less constant factor of the result of calling fib. That ratio converges to twice the golden ratio, which means that since fib(n+1) ~= φ * fib(n), then the number of calls in fib(n) is approximately 2 * fib(n+1). I scratched my head for a bit as to why this is and I gave up; the Lord works in mysterious ways.

Anyway for fib(40), that means that there are around 3.31e8 calls, absent GCC shenanigans. So that would indicate that each call for clang takes around 1.27 ns, which at turbo-boost speeds on this machine is 4.44 cycles. At maximum throughput (4 IPC), that would indicate 17.8 instructions per call, and indeed on the n > 2 path I count 17 instructions.

For WebAssembly I calculate 2.25 nanoseconds per call, or 7.9 cycles, or 31.5 (fused) instructions at max IPC. And indeed counting the extra jumps in the trampoline, I get 33 instructions on the recursive path. I count 4 instructions for the stack check itself, one to save the current isolate, and two to shuffle the current isolate into place for the recursive calls. But, compared to clang, V8 puts 6 words on the stack per call, as opposed to only 4 for LLVM. I think with better interprocedural register allocation for the isolate (i.e.: reserve a register for it), V8 could get a nice boost for call-heavy workloads.

where are we? dynamic edition

Guile doesn't aim to replace C; it's different. It has garbage collection, an integrated debugger, a compiler that's available at run-time, and dynamic typing. It's perhaps more fair to compare it to languages that share some of these characteristics, so I ran these tests on versions of recursive fib written in a number of languages. Note that all of the numbers in this post include start-up time.


Here, the ocamlc line is the same program as before, but compiled with OCaml's bytecode compiler instead of the native compiler. It's a bit of an odd thing to include, but it performs so well I just had to.

I think the real takeaway here is that Chez Scheme has fantastic performance. I have not been able to see the disassembly -- does it do the trick like GCC does? -- but the numbers are great, and I can see why Racket decided to rebase its implementation on top of it.

Interestingly, as far as I understand, Chez implements stack checks in the straightforward way (an inline test-and-branch), not with a guard page, and instead of using the stack check as a generic ability to interrupt a computation in a timely manner as V8 does, Chez emits a separate interrupt check.

Since I originally published this article, I added a LuaJIT entry as well. As you can see, LuaJIT performs as well as Chez in this benchmark.

Haskell's call performance is surprisingly bad here, beaten even by OCaml's bytecode compiler; is this the cost of laziness, or just a lacuna of the implementation? I do not know. I do have this mental image that GHC is a good compiler, but apparently if that's the standard, so is Guile :)

Finally, in this comparison section, I was not surprised by cpython's relatively poor performance; we know cpython is not fast. I think though that it just goes to show how little these microbenchmarks are worth when it comes to user experience; like many of you I use plenty of Python programs in my daily work and don't find them slow at all. Think of micro-benchmarks like x-ray diffraction; they can reveal the hidden substructure of DNA but they say nothing at all about the organism.

where to now?

Perhaps you noted that in the last graph, the Guile and Chez lines were labelled "(lexical)". That's because instead of running this program:

(define (fib n)
  (if (< n 2)
      n
      (+ (fib (- n 1)) (fib (- n 2)))))

They were running this, instead:

(define (fib n)
  (define (fib* n)
    (if (< n 2)
        n
        (+ (fib* (- n 1)) (fib* (- n 2)))))
  (fib* n))

The thing is, historically, Scheme programs have treated top-level definitions as being mutable. This is because you don't know the extent of the top-level scope -- there could always be someone else who comes and adds a new definition of fib, effectively mutating the existing definition in place.

This practice has its uses. It's useful to be able to go into a long-running system and change a definition to fix a bug or add a feature. It's also a useful way of developing programs, incrementally building the program bit by bit.


But, I would say that as someone who has written and maintained a lot of Scheme code, it's not a normal occurrence to mutate a top-level binding on purpose, and it has a significant performance impact. If the compiler knows the target of a call, that unlocks a number of important optimizations: type check elision on the callee, more optimal closure representation, smaller stack frames, possible contification (turning calls into jumps), argument and return value count elision, representation specialization, and so on.

This overhead is especially egregious for calls inside modules. Scheme-the-language only gained modules relatively recently -- relative to the history of Scheme -- and one of the aspects of modules is precisely to allow reasoning about top-level module-level bindings. This is why running Chez Scheme with the --program option is generally faster than with --script (which I used for all of these tests): it opts in to the "newer" specification of what a top-level binding is.

In Guile we would probably like to move towards a more static way of treating top-level bindings, at least those within a single compilation unit. But we haven't done so yet. It's probably the most important single optimization we can make over the near term, though.

As an aside, it seems that LuaJIT also shows a similar performance differential for local function fib(n) versus just plain function fib(n).

It's true though that even absent lexical optimizations, top-level calls can be made more efficient in Guile. I am not sure if we can reach Chez with the current setup of having a template JIT, because we need two return addresses: one virtual (for bytecode) and one "native" (for JIT code). Register allocation is also something to improve but it turns out to not be so important for fib, as there are few live values and they need to spill for the recursive call. But, we can avoid some of the indirection on the call, probably using an inline cache associated with the callee; Chez has had this optimization since 1984!

what guile learned from fib

This exercise has been useful to speed up Guile's procedure calls, as you can see from the difference between the latest Guile 2.9.2 release and what hasn't been released yet (2.9.3).

To decide what improvements to make, I extracted the assembly that Guile generated for fib to a standalone file, and tweaked it in a number of ways to determine what the potential impact of different scenarios was. Some of the detritus from this investigation is here.

There were three big performance improvements. One was to avoid eagerly initializing the slots in a function's stack frame; this took a surprising amount of run-time. Fortunately the rest of the toolchain like the local variable inspector was already ready for this change.

Another thing that became clear from this investigation was that our stack frames were too large; there was too much memory traffic. I was able to improve this in the lexical-call by adding an optimization to elide useless closure bindings. Usually in Guile when you call a procedure, you pass the callee as the 0th parameter, then the arguments. This is so the procedure has access to its closure. For some "well-known" procedures -- procedures whose callers can be enumerated -- we optimize to pass a specialized representation of the closure instead ("closure optimization"). But for well-known procedures with no free variables, there's no closure, so we were just passing a throwaway value (#f). An unhappy combination of Guile's current calling convention being stack-based and a strange outcome from the slot allocator meant that frames were a couple words too big. Changing to allow a custom calling convention in this case sped up fib considerably.

Finally, and also significantly, Guile's JIT code generation used to manually handle calls and returns via manual stack management and indirect jumps, instead of using the platform calling convention and the C stack. This is to allow unlimited stack growth. However, it turns out that the indirect jumps at return sites were stalling the pipeline. Instead we switched to use call/return but keep our manual stack management; this allows the CPU to use its return address stack to predict return targets, speeding up code.

et voilà

Well, long article! Thanks for reading. There's more to do but I need to hit the publish button and pop this off my stack. Until next time, happy hacking!

26 June, 2019 10:34AM by Andy Wingo

June 25, 2019

libredwg @ Savannah

libredwg-0.8 released

This is a major release, adding the new dynamic API, which can read and write
all header and object fields by name. Many of the old dwg_api.h field
accessors are deprecated.
More here: https://www.gnu.org/software/libredwg/ and http://git.savannah.gnu.org/cgit/libredwg.git/tree/NEWS

Here are the compressed sources:
  http://ftp.gnu.org/gnu/libredwg/libredwg-0.8.tar.gz   (9.8MB)
  http://ftp.gnu.org/gnu/libredwg/libredwg-0.8.tar.xz   (3.7MB)

Here are the GPG detached signatures[*]:
  http://ftp.gnu.org/gnu/libredwg/libredwg-0.8.tar.gz.sig
  http://ftp.gnu.org/gnu/libredwg/libredwg-0.8.tar.xz.sig

Use a mirror for higher download bandwidth:
  https://www.gnu.org/order/ftp.html

Here are more binaries:
  https://github.com/LibreDWG/libredwg/releases/tag/0.8

Here are the SHA256 checksums:

087f0806220a0a33a9aab2c2763266a69e12427a5bd7179cff206289e60fe2fd  libredwg-0.8.tar.gz
0487c84e962a4dbcfcf3cbe961294b74c1bebd89a128b4929a1353bc7f58af26  libredwg-0.8.tar.xz

[*] Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact.  First, be sure to download both the .sig file
and the corresponding tarball.  Then, run a command like this:

  gpg --verify libredwg-0.8.tar.gz.sig

If that command fails because you don't have the required public key,
then run this command to import it:

  gpg --keyserver keys.gnupg.net --recv-keys B4F63339E65D6414

and rerun the 'gpg --verify' command.

25 June, 2019 09:55AM by Reini Urban

June 23, 2019

apl @ Savannah

GNU APL 1.8 Released

I am happy to announce that GNU APL 1.8 has been released.

GNU APL is a free implementation of ISO standard 13751, a.k.a.
"Programming Language APL, Extended".

This release contains:

  • bug fixes,
  • ⎕DLX (Donald Knuth's Dancing Links Algorithm),
  • ⎕FFT (fast Fourier transforms; real, complex, and windowed),
  • ⎕GTK (create GUI windows from APL),
  • ⎕RE (regular expressions), and
  • user-defined APL commands.

Also, you can now call GNU APL from Python.

23 June, 2019 01:03PM by Jürgen Sauermann

June 22, 2019

denemo @ Savannah

Release 2.3 is imminent - please test.

New Features
    Seek Locations in Scores
        Specify type of object sought
        Or valid note range
        Or any custom condition
        Creates a clickable list of locations
        Each location is removed from list once visited
    Syntax highlighting in LilyPond view
    Playback Start/End markers draggable
    Source window navigation by page number
        Page number always visible
    Rapid marking of passages
    Two-chord Tremolos
    Allowing breaks at half-measure for whole movement
        Also breaks at every beat
    Passages
        Mark Passages of music
        Perform tasks on the marked passages
        Swapping musical material with staff below implemented
    Search for lost scores
        Interval-based
        Searches whole directory hierarchy
        Works for transposed scores
    Compare Scores
    Index Collection of Scores
        All scores below a start directory indexed
        Index includes typeset incipit for music
        Title, Composer, Instrumentation, Score Comment fields
        Sort by composer surname
        Filter by any Scheme condition
        Open files by clicking on them in Index
    Intelligent File Opening
        Re-interprets file paths for moved file systems
    Improved Score etc editor appearance
    Print History
        History records what part of the score was printed
        Date and printer included
    Improvements to Scheme Editor
        Title bar shows open file
        Save dialog gives help
    Colors now differentiate palettes, titles etc. in main display
    Swapping Display and Source positions
        for switching between entering music and editing
        a single keypress or MIDI command
    Activate object from keyboard
        Fn2 key equivalent to mouse-right click
        Shift and Control right-click via Shift-Fn2 and Control-Fn2
    Help via Email
    Auto-translation to Spanish

Bug Fixes

    Adding buttons to palettes no longer brings hidden buttons back

    MIDI playback of empty measures containing non-notes

    Instrument name with Ambitus clash in staff properties menu fixed

    Visibility of Emmentaler glyphs fixed

    Update of layout on staff to voice change

    Open Recent anomalies fixed

    Failures to translate menu titles and palettes fixed

22 June, 2019 09:29AM by Richard Shann

June 21, 2019

parallel @ Savannah

GNU Parallel 20190622 ('HongKong') released

GNU Parallel 20190622 ('HongKong') has been released. It is available for download at: http://ftpmirror.gnu.org/parallel/

GNU Parallel turns 10 years old in a year, on 2020-04-22. You are hereby invited to a reception on Friday 2020-04-17.

See https://www.gnu.org/software/parallel/10-years-anniversary.html

Quote of the month:

  I want to make a shout-out for @GnuParallel, it's a work of beauty and power
    -- Cristian Consonni @CristianCantoro

New in this release:

  • --shard can now take a column name and optionally a perl expression. Similar to --group-by and replacement strings.
  • Bug fixes and man page updates.

Get the book: GNU Parallel 2018 http://www.lulu.com/shop/ole-tange/gnu-parallel-2018/paperback/product-23558902.html

GNU Parallel - For people who live life in the parallel lane.

About GNU Parallel

GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with:
(wget -O - pi.dk/3 || curl pi.dk/3/ || fetch -o - http://pi.dk/3) | bash

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your command line will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2018): GNU Parallel 2018, March 2018, https://doi.org/10.5281/zenodo.1146014.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/ Identi.ca/Google+/Twitter/Facebook/Linkedin/mailing lists
  • Get the merchandise https://gnuparallel.threadless.com/designs/gnu-parallel
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference

If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)

If GNU Parallel saves you money:

About GNU SQL

GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.

About GNU Niceload

GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.

21 June, 2019 01:36PM by Ole Tange

mailutils @ Savannah

Version 3.7

Version 3.7 of GNU mailutils is available for download.

This version introduces a new format for mailboxes: dotmail. Dotmail is a replacement for the traditional mbox format, proposed by Kurt Hackenberg. A dotmail mailbox is a single disk file, where messages are stored sequentially. Each message ends with a single dot on a line by itself (similar to the format used in the SMTP DATA command). A dot appearing at the start of a line is doubled, to prevent it from being interpreted as the end-of-message marker.

For a complete list of changes, please see the NEWS file.

21 June, 2019 01:15PM by Sergey Poznyakoff

June 20, 2019

GNU Guile

GNU Guile 2.2.5 released

We are pleased to announce GNU Guile 2.2.5, the fifth bug-fix release in the new 2.2 stable release series. This release represents 100 commits by 11 people since version 2.2.4. It fixes bugs that had accumulated over the last few months, notably in the SRFI-19 date and time library and in the (web uri) module. This release also greatly improves performance of bidirectional pipes, and introduces the new get-bytevector-some! binary input primitive that made it possible.

Guile 2.2.5 can be downloaded from the usual places.

See the release announcement for details.

Besides, we remind you that Guile 3.0 is in the works, and that you can try out version 2.9.2, which is the latest beta release of what will become 3.0.

Enjoy!

20 June, 2019 11:20AM by Ludovic Courtès (guile-devel@gnu.org)

June 17, 2019

GNU Guix

Substitutes are now available as lzip

For a long time, our build farm at ci.guix.gnu.org has been delivering substitutes (pre-built binaries) compressed with gzip. Gzip was never the best choice in terms of compression ratio, but it was a reasonable and convenient choice: it’s rock-solid, and zlib made it easy for us to have Guile bindings to perform in-process compression in our multi-threaded guix publish server.

With the exception of building software from source, downloads take the most time of Guix package upgrades. If users can download less, upgrades become faster, and happiness ensues. Time has come to improve on this, and starting from early June, Guix can publish and fetch lzip-compressed substitutes, in addition to gzip.

Lzip

Lzip is a relatively little-known compression format, initially developed by Antonio Diaz Diaz ca. 2013. It has several C and C++ implementations with surprisingly few lines of code, which is always reassuring. One of its distinguishing features is a very good compression ratio with reasonable CPU and memory requirements, according to benchmarks published by the authors.

Lzlib provides a well-documented C interface and Pierre Neidhardt set out to write bindings for that library, which eventually landed as the (guix lzlib) module.

With this in place we were ready to start migrating our tools, and then our build farm, to lzip compression, so we can all enjoy smaller downloads. Well, easier said than done!

Migrating

The compression format used for substitutes is not a core component like it can be in “traditional” binary package formats such as .deb since Guix is conceptually a “source-based” distro. However, deployed Guix installations did not support lzip, so we couldn’t just switch our build farm to lzip overnight; we needed to devise a transition strategy.

Guix asks for the availability of substitutes over HTTP. For example, a question such as:

“Dear server, do you happen to have a binary of /gnu/store/6yc4ngrsig781bpayax2cg6pncyhkjpq-emacs-26.2 that I could download?”

translates into prose to an HTTP GET of https://ci.guix.gnu.org/6yc4ngrsig781bpayax2cg6pncyhkjpq.narinfo, which returns something like:

StorePath: /gnu/store/6yc4ngrsig781bpayax2cg6pncyhkjpq-emacs-26.2
URL: nar/gzip/6yc4ngrsig781bpayax2cg6pncyhkjpq-emacs-26.2
Compression: gzip
NarHash: sha256:0h2ibqpqyi3z0h16pf7ii6l4v7i2wmvbrxj4ilig0v9m469f6pm9
NarSize: 134407424
References: 2dk55i5wdhcbh2z8hhn3r55x4873iyp1-libxext-1.3.3 …
FileSize: 48501141
System: x86_64-linux
Deriver: 6xqibvc4v8cfppa28pgxh0acw9j8xzhz-emacs-26.2.drv
Signature: 1;berlin.guixsd.org;KHNpZ25hdHV…

(This narinfo format is inherited from Nix and implemented here and here.) This tells us we can download the actual binary from /nar/gzip/…-emacs-26.2, and that it will be about 46 MiB (the FileSize field.) This is what guix publish serves.

The trick we came up with was to allow guix publish to advertise several URLs, one per compression format. Thus, for recently-built substitutes, we get something like this:

StorePath: /gnu/store/mvhaar2iflscidl0a66x5009r44fss15-gimp-2.10.12
URL: nar/gzip/mvhaar2iflscidl0a66x5009r44fss15-gimp-2.10.12
Compression: gzip
FileSize: 30872887
URL: nar/lzip/mvhaar2iflscidl0a66x5009r44fss15-gimp-2.10.12
Compression: lzip
FileSize: 18829088
NarHash: sha256:10n3nv3clxr00c9cnpv6x7y2c66034y45c788syjl8m6ga0hbkwy
NarSize: 94372664
References: 05zlxc7ckwflz56i6hmlngr86pmccam2-pcre-8.42 …
System: x86_64-linux
Deriver: vi2jkpm9fd043hm0839ibbb42qrv5xyr-gimp-2.10.12.drv
Signature: 1;berlin.guixsd.org;KHNpZ25hdHV…

Notice that there are two occurrences of the URL, Compression, and FileSize fields: one for gzip, and one for lzip. Old Guix instances will just pick the first one, gzip; newer Guix will pick whichever supported method provides the smallest FileSize, usually lzip. This will make migration trivial in the future, should we add support for other compression methods.

Users need to upgrade their Guix daemon to benefit from lzip. On a “foreign distro”, simply run guix pull as root. On standalone Guix systems, run guix pull && sudo guix system reconfigure /etc/config.scm. In both cases, the daemon has to be restarted, be it with systemctl restart guix-daemon.service or with herd restart guix-daemon.

First impressions

This new gzip+lzip scheme has been deployed on ci.guix.gnu.org for a week. Specifically, we run guix publish -C gzip:9 -C lzip:9, meaning that we use the highest compression ratio for both compression methods.

Currently, only a small subset of the package substitutes are available as both lzip and gzip; those that were already available as gzip have not been recompressed. The following Guile program that taps into the API of guix weather allows us to get some insight:

(use-modules (gnu) (guix)
             (guix monads)
             (guix scripts substitute)
             (srfi srfi-1)
             (ice-9 match))

(define all-packages
  (@@ (guix scripts weather) all-packages))

(define package-outputs
  (@@ (guix scripts weather) package-outputs))

(define (fetch-lzip-narinfos)
  (mlet %store-monad ((items (package-outputs (all-packages))))
    (return
     (filter (lambda (narinfo)
               (member "lzip" (narinfo-compressions narinfo)))
             (lookup-narinfos "https://ci.guix.gnu.org" items)))))

(define (lzip/gzip-ratio narinfo)
  (match (narinfo-file-sizes narinfo)
    ((gzip lzip)
     (/ lzip gzip))))

(define (average lst)
  (/ (reduce + 0 lst)
     (length lst) 1.))

Let’s explore this at the REPL:

scheme@(guile-user)> (define lst
                       (with-store s
                         (run-with-store s (fetch-lzip-narinfos))))
computing 9,897 package derivations for x86_64-linux...
updating substitutes from 'https://ci.guix.gnu.org'... 100.0%
scheme@(guile-user)> (length lst)
$4 = 2275
scheme@(guile-user)> (average (map lzip/gzip-ratio lst))
$5 = 0.7398994395478715

As of this writing, around 20% of the package substitutes are available as lzip, so take the following stats with a grain of salt. Among those, the lzip-compressed substitute is on average 26% smaller than the gzip-compressed one. What if we consider only packages bigger than 5 MiB uncompressed?

scheme@(guile-user)> (define biggest
                       (filter (lambda (narinfo)
                                 (> (narinfo-size narinfo)
                                    (* 5 (expt 2 20))))
                               lst))
scheme@(guile-user)> (average (map lzip/gzip-ratio biggest))
$6 = 0.5974238562384483
scheme@(guile-user)> (length biggest)
$7 = 440

For those packages, lzip yields substitutes that are 40% smaller on average. Pretty nice! Lzip decompression is slightly more CPU-intensive than gzip decompression, but downloads are bandwidth-bound, so the benefits clearly outweigh the costs.

Going forward

The switch from gzip to lzip has the potential to make upgrades “feel” faster, and that is great in itself.

Fundamentally though, we’ve always been looking in this project at peer-to-peer solutions with envy. Of course, the main motivation is to have a community-supported and resilient infrastructure, rather than a centralized one, and that vision goes hand-in-hand with reproducible builds.

We started working on an extension to publish and fetch substitutes over IPFS. Thanks to its content-addressed nature, IPFS has the potential to further reduce the amount of data that needs to be downloaded on an upgrade.

The good news is that IPFS developers are also interested in working with package manager developers, and I bet there’ll be interesting discussions at IPFS Camp in just a few days. We’re eager to pursue our IPFS integration work, and if you’d like to join us and hack the good hack, let’s get in touch!

About GNU Guix

GNU Guix is a transactional package manager and an advanced distribution of the GNU system that respects user freedom. Guix can be used on top of any system running the kernel Linux, or it can be used as a standalone operating system distribution for i686, x86_64, ARMv7, and AArch64 machines.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. When used as a standalone GNU/Linux distribution, Guix offers a declarative, stateless approach to operating system configuration management. Guix is highly customizable and hackable through Guile programming interfaces and extensions to the Scheme language.

17 June, 2019 02:30PM by Ludovic Courtès

June 05, 2019

GNUnet News

2019-06-05: GNUnet 0.11.5 released

2019-06-05: GNUnet 0.11.5 released

We are pleased to announce the release of GNUnet 0.11.5.

This is a bugfix release for 0.11.4, mostly fixing a few minor bugs and improving performance, in particular for identity management with a large number of egos. In the wake of this release, we also launched the REST API documentation. In terms of usability, users should be aware that there are still a large number of known open issues in particular with respect to ease of use, but also some critical privacy issues especially for mobile users. Also, the nascent network is tiny (about 200 peers) and thus unlikely to provide good anonymity or extensive amounts of interesting information. As a result, the 0.11.5 release is still only suitable for early adopters with some reasonable pain tolerance.

Download links

gnunet-gtk saw some minor changes to adapt it to API changes in the main code related to the identity improvements. gnunet-fuse was not released again, as there were no changes and the 0.11.0 version is expected to continue to work fine with gnunet-0.11.5.

Note that due to mirror synchronization, not all links might be functional early after the release. For direct access try http://ftp.gnu.org/gnu/gnunet/

Noteworthy changes in 0.11.5 (since 0.11.4)

  • gnunet-identity is much faster when creating or deleting egos given a large number of existing egos.
  • GNS now supports CAA records.
  • Documentation, comments and code quality was improved.

Known Issues

  • There are known major design issues in the TRANSPORT, ATS and CORE subsystems which will need to be addressed in the future to achieve acceptable usability, performance and security.
  • There are known moderate implementation limitations in CADET that negatively impact performance. Also CADET may unexpectedly deliver messages out-of-order.
  • There are known moderate design issues in FS that also impact usability and performance.
  • There are minor implementation limitations in SET that create unnecessary attack surface for availability.
  • The RPS subsystem remains experimental.
  • Some high-level tests in the test-suite fail non-deterministically due to the low-level TRANSPORT issues.

In addition to this list, you may also want to consult our bug tracker at bugs.gnunet.org which lists about 190 more specific issues.

Thanks

This release was the work of many people. The following people contributed code and were thus easily identified: Christian Grothoff, Florian Dold, Marcello Stanisci, ng0, Martin Schanzenbach and Bernd Fix.

05 June, 2019 12:00AM

June 04, 2019

gengetopt @ Savannah

2.23 released

A new version (2.23) has been released. The main changes were in the build system, so please report any issues you notice.

04 June, 2019 08:41AM by Gray Wolf

June 03, 2019

Andy Wingo

pictie, my c++-to-webassembly workbench

Hello, interwebs! Today I'd like to share a little skunkworks project with y'all: Pictie, a workbench for WebAssembly C++ integration on the web.

loading pictie...


wtf just happened????!?

So! If everything went well, above you have some colors and a prompt that accepts Javascript expressions to evaluate. If the result of evaluating a JS expression is a painter, we paint it onto a canvas.

But allow me to back up a bit. These days everyone is talking about WebAssembly, and I think with good reason: just as many of the world's programs run on JavaScript today, tomorrow many of them will also be written in languages compiled to WebAssembly. JavaScript isn't going anywhere, of course; it's around for the long term. It's the "also" aspect of WebAssembly that's interesting: it appears to be a computing substrate that is compatible with JS and which can extend the range of the kinds of programs that can be written for the web.

And yet, it's early days. What are programs of the future going to look like? What elements of the web platform will be needed when we have systems composed of WebAssembly components combined with JavaScript components, combined with the browser? Is it all going to work? Are there missing pieces? What's the status of the toolchain? What's the developer experience? What's the user experience?

When you look at the current set of applications targeting WebAssembly in the browser, mostly it's games. While compelling, games don't provide a whole lot of insight into the shape of the future web platform, inasmuch as there doesn't have to be much JavaScript interaction when you have an already-working C++ game compiled to WebAssembly. (Indeed, people are actively working on removing the incidental interactions with JS that are currently necessary -- bouncing through JS in order to call WebGL, for example -- so that WebAssembly can call platform facilities (WebGL, etc.) directly. But I digress!)

For WebAssembly to really succeed in the browser, there should also be incremental stories -- what does it look like when you start to add WebAssembly modules to a system that is currently written mostly in JavaScript?

To find out the answers to these questions and to evaluate potential platform modifications, I needed a small, standalone test case. So... I wrote one? It seemed like a good idea at the time.

pictie is a test bed

Pictie is a simple, standalone C++ graphics package implementing an algebra of painters. It was created not to be a great graphics package but rather to be a test-bed for compiling C++ libraries to WebAssembly. You can read more about it on its github page.

Structurally, pictie is a modern C++ library with a functional-style interface, smart pointers, reference types, lambdas, and all the rest. We use emscripten to compile it to WebAssembly; you can see more information on how that's done in the repository, or check the README.

Pictie is inspired by Peter Henderson's "Functional Geometry" (1982, 2002). "Functional Geometry" inspired the Picture language from the well-known Structure and Interpretation of Computer Programs computer science textbook.

prototype in action

So far it's been surprising how much stuff just works. There's still lots to do, but just getting a C++ library on the web is pretty easy! I advise you to take a look to see the details.

If you are thinking of dipping your toe into the WebAssembly water, maybe take a look also at Pictie when you're doing your back-of-the-envelope calculations. You can use it or a prototype like it to determine the effects of different compilation options on compile time, load time, throughput, and network traffic. You can check whether the different binding strategies are appropriate for your C++ idioms; Pictie currently uses embind (source), but I would like to compare to WebIDL as well. You might also use it if you're considering what shape your C++ library should have to minimize overhead in a WebAssembly context.

I use Pictie as a test-bed when working on the web platform: for example, with the weakref proposal, which adds finalization and leak detection, and when working on the binding layers around Emscripten. Eventually I'll be able to use it in other contexts as well, with the WebIDL bindings proposal, typed objects, and GC.

prototype the web forward

As the browser and adjacent environments have come to dominate programming in practice, we have lost a bit of the delightful variety of computing. JS is a great language, but it shouldn't be the only medium for programs. WebAssembly is part of this future world, waiting in potentia, where applications for the web can be written in any of a number of languages. But this future world will only arrive if it "works" -- if all of the various pieces, from standards to browsers to toolchains to virtual machines, fit together in some kind of sensible way. Now is the early phase of annealing, when the platform as a whole is actively searching for its new low-entropy state. We're going to need a lot of prototypes to get from here to there. In that spirit, may your prototypes be numerous and soon replaced. Happy annealing!

03 June, 2019 10:10AM by Andy Wingo

June 01, 2019

unifont @ Savannah

GNU Unifont 12.1.02 Released

1 June 2019 Unifont 12.1.02 is now available. This version introduces a Japanese TrueType version, unifont_jp, replacing over 10,000 ideographs from the default Unifont build with kanji from the public domain Jiskan16 font.  This version also contains redrawn Devanagari and Bengali glyphs.  Full details are in the ChangeLog file.

Download this release at:

https://ftpmirror.gnu.org/unifont/unifont-12.1.02/

or if that fails,

https://ftp.gnu.org/gnu/unifont/unifont-12.1.02/

or, as a last resort,

ftp://ftp.gnu.org/gnu/unifont/unifont-12.1.02/

01 June, 2019 10:43PM by Paul Hardy

May 28, 2019

GNU Guile

Join the Guile and Guix Days in Strasbourg, June 21–22!

We’re organizing Guile Days at the University of Strasbourg, France, co-located with the Perl Workshop, on June 21st and 22nd.

Guile Days 2019

Update: The program is now complete, view the schedule on-line.

The schedule is not complete yet, but we can already announce a couple of events:

  • Getting Started with GNU Guix will be an introductory hands-on session to Guix, targeting an audience of people who have some experience with GNU/Linux but are new to Guix.
  • During a “code buddy” session, experienced Guile programmers will be here to get you started programming in Guile, and to answer questions and provide guidance while you hack at your pace on the project of your choice.

If you’re already a Guile or Guix user or developer, consider submitting by June 8th, on the web site, talks on topics such as:

  • The neat Guile- or Guix-related project you’ve been working on.

  • Cool Guile hacking topics—Web development, databases, system development, graphical user interfaces, shells, you name it!

  • Fancy Guile technology—concurrent programming with Fibers, crazy macrology, compiler front-ends, JIT compilation and Guile 3, development environments, etc.

  • Guixy things: on Guix subsystems, services, the Shepherd, Guile development with Guix, all things OS-level in Guile, Cuirass, reproducible builds, bootstrapping, Mes and Gash, all this!

You can also propose hands-on workshops, which could last anything from an hour to a day. We expect newcomers at this event, people who don’t know Guile and Guix and want to learn about it. Consider submitting introductory workshops on Guile and Guix!

We encourage submissions from people in communities usually underrepresented in free software, including women, people in sexual minorities, or people with disabilities.

We want to make this event a pleasant experience for everyone, and participation is subject to a code of conduct.

Many thanks to the organizers of the Perl Workshop and to the sponsors of the event: RENATER, Université de Strasbourg, X/Stra, and Worteks.

28 May, 2019 09:11AM by Ludovic Courtès (guile-devel@gnu.org)

May 24, 2019

Andy Wingo

lightening run-time code generation

The upcoming Guile 3 release will have just-in-time native code generation. Finally, amirite? There's lots that I'd like to share about that and I need to start somewhere, so this article is about one piece of it: Lightening, a library to generate machine code.

on lightning

Lightening is a fork of GNU Lightning, adapted to suit the needs of Guile. In fact at first we chose to use GNU Lightning directly, "vendored" into the Guile source repository via the git subtree mechanism. (I see that in the meantime, git gained a kind of a subtree command; one day I will have to figure out what it's for.)

GNU Lightning has lots of things going for it. It has support for many architectures, even things like Itanium that I don't really care about but which a couple Guile users use. It abstracts the differences between e.g. x86 and ARMv7 behind a common API, so that in Guile I don't need to duplicate the JIT for each back-end. Such an abstraction can have a slight performance penalty, because maybe it missed the opportunity to generate optimal code, but this is acceptable to me: I was more concerned about the maintenance burden, and GNU Lightning seemed to solve that nicely.

GNU Lightning also has fantastic documentation. It's written in C and not C++, which is the right thing for Guile at this time, and it's also released under the LGPL, which is Guile's license. As it's a GNU project there's a good chance that GNU Guile's needs might be taken into account if any changes need be made.

I mentally associated Paolo Bonzini with the project, who I knew was a good no-nonsense hacker, as he used Lightning for a smalltalk implementation; and I knew also that Matthew Flatt used Lightning in Racket. Then I looked in the source code to see architecture support and was pleasantly surprised to see MIPS, POWER, and so on, so I went with GNU Lightning for Guile in our 2.9.1 release last October.

on lightening the lightning

When I chose GNU Lightning, I had in mind that it was a very simple library to cheaply write machine code into buffers. (Incidentally, if you have never worked with this stuff, I remember a time when I was pleasantly surprised to realize that an assembler could be a library and not just a program that processes text. A CPU interprets machine code. Machine code is just bytes, and you can just write C (or Scheme, or whatever) functions that write bytes into buffers, and pass those buffers off to the CPU. Now you know!)

Anyway indeed GNU Lightning 1.4 or so was that very simple library that I had in my head. I needed simple because I would need to debug any problems that came up, and I didn't want to add more complexity to the C side of Guile -- eventually I should be migrating this code over to Scheme anyway. And, of course, simple can mean fast, and I needed fast code generation.

However, GNU Lightning has a new release series, the 2.x series. This series is, in a way, a rewrite of the old version. On the plus side, this new series adds all of the weird architectures that I was pleasantly surprised to see. The old 1.4 didn't even have much x86-64 support, much less AArch64.

This new GNU Lightning 2.x series fundamentally changes the way the library works: instead of having a jit_ldr_f function that directly emits code to load a float from memory into a floating-point register, the jit_ldr_f function now creates a node in a graph. Before code is emitted, that graph is optimized, some register allocation happens around call sites and for temporary values, dead code is elided, and so on, then the graph is traversed and code emitted.

Unfortunately this wasn't really what I was looking for. The optimizations were a bit opaque to me and I just wanted something simple. Building the graph took more time than just emitting bytes into a buffer, and it takes more memory as well. When I found bugs, I couldn't tell whether they were related to my usage or in the library itself.

In the end, the node structure wasn't paying its way for me. But I couldn't just go back to the 1.4 series that I remembered -- it didn't have the architecture support that I needed. Faced with the choice between changing GNU Lightning 2.x in ways that went counter to its upstream direction, switching libraries, or refactoring GNU Lightning to be something that I needed, I chose the latter.

in which our protagonist cannot help himself

Friends, I regret to admit: I named the new thing "Lightening". True, it is a lightened Lightning, yes, but I am aware that it's horribly confusing. Pronounced like almost the same, visually almost identical -- I am a bad person. Oh well!!

I ported some of the existing GNU Lightning backends over to Lightening: ia32, x86-64, ARMv7, and AArch64. I deleted the backends for Itanium, HPPA, Alpha, and SPARC; they have no Debian ports and there is no situation in which I can afford to do QA on them. I would gladly accept contributions for PPC64, MIPS, RISC-V, and maybe S/390. At this point I reckon it takes around 20 hours to port an additional backend from GNU Lightning to Lightening.

Incidentally, if you need a code generation library, consider your choices wisely. It is likely that Lightening is not right for you. If you can afford platform-specific code and you need C, Lua's DynASM is probably the right thing for you. If you are in C++, copy the assemblers from a JavaScript engine -- C++ offers much more type safety, capabilities for optimization, and ergonomics.

But if you can only afford one emitter of JIT code for all architectures, you need simple C, you don't need register allocation, you want a simple library to just include in your source code, and you are good with the LGPL, then Lightening could be a thing for you. Check the gitlab page for info on how to test Lightening and how to include it into your project.

giving it a spin

Yesterday's Guile 2.9.2 release includes Lightening, so you can give it a spin. The switch to Lightening allowed us to lower our JIT optimization threshold by a factor of 50, letting us generate fast code sooner. If you try it out, let #guile on freenode know how it went. In any case, happy hacking!

24 May, 2019 08:44AM by Andy Wingo

May 23, 2019

GNU Guile

GNU Guile 2.9.2 (beta) released

We are delighted to announce GNU Guile 2.9.2, the second beta release in preparation for the upcoming 3.0 stable series. See the release announcement for full details and a download link.

This release extends just-in-time (JIT) native code generation support to the ia32, ARMv7, and AArch64 architectures. Under the hood, we swapped out GNU Lightning for a related fork called Lightening, which was better adapted to Guile's needs.

GNU Guile 2.9.2 is a beta release, and as such offers no API or ABI stability guarantees. Users needing a stable Guile are advised to stay on the stable 2.2 series.

Users on the architectures that just gained JIT support are especially encouraged to report experiences (good or bad) to guile-devel@gnu.org. If you know you found a bug, please do send a note to bug-guile@gnu.org. Happy hacking!

23 May, 2019 09:00PM by Andy Wingo (guile-devel@gnu.org)

Andy Wingo

bigint shipping in firefox!

I am delighted to share with folks the results of a project I have been helping out on for the last few months: implementation of "BigInt" in Firefox, which is finally shipping in Firefox 68 (beta).

what's a bigint?

BigInts are a new kind of JavaScript primitive value, like numbers or strings. A BigInt is a true integer: it can take on the value of any finite integer (subject to some arbitrarily large implementation-defined limits, such as the amount of memory in your machine). This contrasts with JavaScript number values, which have the well-known property of only being able to precisely represent integers between -2^53 and 2^53.
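That precision limit is easy to demonstrate in any BigInt-capable engine:

```javascript
// Number silently loses precision past 2^53:
console.log(2 ** 53 === 2 ** 53 + 1);       // true (!)
console.log(Number.MAX_SAFE_INTEGER);       // 9007199254740991

// BigInt stays exact at any size:
console.log(2n ** 53n === 2n ** 53n + 1n);  // false
console.log(2n ** 64n);                     // 18446744073709551616n
```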

BigInts are written like "normal" integers, but with an n suffix:

var a = 1n;
var b = a + 42n;
b << 64n
// result: 793209995169510719488n

With the bigint proposal, the usual mathematical operations (+, -, *, /, %, <<, >>, **, and the comparison operators) are extended to operate on bigint values. As a new kind of primitive value, bigint values have their own typeof:

typeof 1n
// result: 'bigint'

Besides allowing for more kinds of math to be easily and efficiently expressed, BigInt also allows for better interoperability with systems that use 64-bit numbers, such as "inodes" in file systems, WebAssembly i64 values, high-precision timers, and so on.
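For the 64-bit interoperability case, BigInt.asUintN and BigInt.asIntN reinterpret a BigInt as an N-bit integer; note also that BigInts and Numbers deliberately never mix implicitly:

```javascript
// Wrap a BigInt to 64 bits, unsigned and signed:
console.log(BigInt.asUintN(64, -1n));       // 18446744073709551615n (2^64 - 1)
console.log(BigInt.asIntN(64, 2n ** 63n));  // -9223372036854775808n (wraps)

// Mixing BigInt and Number in arithmetic is a TypeError:
try {
  1n + 1;
} catch (e) {
  console.log(e.constructor.name);          // TypeError
}
```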

You can read more about the BigInt feature over on MDN, as usual. You might also like this short article on BigInt basics that V8 engineer Mathias Bynens wrote when Chrome shipped support for BigInt last year. There is an accompanying language implementation article as well, for those of y'all that enjoy the nitties and the gritties.

can i ship it?

To try out BigInt in Firefox, simply download a copy of Firefox Beta. This version of Firefox will be fully released to the public in a few weeks, on July 9th. If you're reading this in the future, I'm talking about Firefox 68.

BigInt is also shipping already in V8 and Chrome, and my colleague Caio Lima has a project in progress to implement it in JavaScriptCore / WebKit / Safari. Depending on your target audience, BigInt might be deployable already!

thanks

I must mention that my role in the BigInt work was relatively small; my Igalia colleague Robin Templeton did the bulk of the BigInt implementation work in Firefox, so large ups to them. Hearty thanks also to Mozilla's Jan de Mooij and Jeff Walden for their patient and detailed code reviews.

Thanks as well to the V8 engineers for their open source implementation of BigInt fundamental algorithms, as we used many of them in Firefox.

Finally, I need to make one big thank-you, and I hope that you will join me in expressing it. The road to ship anything in a web browser is long; besides the "simple matter of programming" that it is to implement a feature, you need a specification with buy-in from implementors and web standards people, you need a good working relationship with a browser vendor, you need willing technical reviewers, you need to follow up on the inevitable security bugs that any browser change causes, and all of this takes time. It's all predicated on having the backing of an organization that's foresighted enough to invest in this kind of long-term, high-reward platform engineering.

In that regard I think all people that work on the web platform should send a big shout-out to Tech at Bloomberg for making BigInt possible by underwriting all of Igalia's work in this area. Thank you, Bloomberg, and happy hacking!

23 May, 2019 12:13PM by Andy Wingo

May 22, 2019

parallel @ Savannah

GNU Parallel 20190522 ('Akihito') released

GNU Parallel 20190522 ('Akihito') has been released. It is available for download at: http://ftpmirror.gnu.org/parallel/

GNU Parallel will be 10 years old next year, on 2020-04-22. You are hereby invited to a reception on Friday, 2020-04-17.

See https://www.gnu.org/software/parallel/10-years-anniversary.html

Quote of the month:

  Amazingly useful script!
    -- unxusr@reddit.com

New in this release:

  • --group-by groups lines depending on value of a column. The value can be computed.
  • How to compress (bzip/gzip) a very large text quickly? https://medium.com/@gchandra/how-to-compress-bzip-gzip-a-very-large-text-quickly-27c11f4c6681
  • Simple tutorial to install & use GNU Parallel https://medium.com/@gchandra/simple-tutorial-to-install-use-gnu-parallel-79251120d618
  • Bug fixes and man page updates.

Get the book: GNU Parallel 2018 http://www.lulu.com/shop/ole-tange/gnu-parallel-2018/paperback/product-23558902.html

GNU Parallel - For people who live life in the parallel lane.

About GNU Parallel

GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.
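As an illustration, here is a small sketch. The parallel invocations are shown as comments because they assume GNU Parallel is installed; the runnable lines use the closest xargs analogue (same option style, though without parallel's output-ordering guarantee):

```shell
# With GNU Parallel one would write, e.g.:
#   parallel gzip ::: *.log          # one gzip job per CPU core
#   ls *.log | parallel gzip         # same, taking input from stdin

# Portable xargs analogue, demonstrated on two temporary files:
dir=$(mktemp -d)
printf 'x\n' > "$dir/a.log"
printf 'y\n' > "$dir/b.log"
# Compress both files, running up to 2 jobs at a time:
ls "$dir"/*.log | xargs -P 2 -n 1 gzip
ls "$dir"                            # a.log.gz and b.log.gz
```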

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with:
(wget -O - pi.dk/3 || curl pi.dk/3/ || fetch -o - http://pi.dk/3) | bash

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your command line will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2018): GNU Parallel 2018, March 2018, https://doi.org/10.5281/zenodo.1146014.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/ Identi.ca/Google+/Twitter/Facebook/Linkedin/mailing lists
  • Get the merchandise https://gnuparallel.threadless.com/designs/gnu-parallel
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference

If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)

If GNU Parallel saves you money:

About GNU SQL

GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.

About GNU Niceload

GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.

22 May, 2019 08:29PM by Ole Tange

May 21, 2019

GNU Guix

Creating and using a custom Linux kernel on Guix System

Guix is, at its core, a source-based distribution with substitutes, and as such building packages from their source code is an expected part of regular package installations and upgrades. Given this starting point, it makes sense that efforts are made to reduce the amount of time spent compiling packages, and recent changes and upgrades to the building and distribution of substitutes continue to be a topic of discussion within Guix.

One of the packages which I prefer not to build myself is the Linux-Libre kernel. The kernel, while not requiring an overabundance of RAM to build, does take a very long time on my build machine (which my children argue is actually their Kodi computer), and I will often delay reconfiguring my laptop while I wait for a substitute to be prepared by the official build farm. The official kernel configuration, as is the case with many GNU/Linux distributions, errs on the side of inclusiveness, and this is really what causes the build to take such a long time when I build the package for myself.

The Linux kernel, however, can also just be described as a package installed on my machine, and as such can be customized just like any other package. The procedure is a little bit different, although this is primarily due to the nature of how the package definition is written.

The linux-libre kernel package definition is actually a procedure which creates a package.

(define* (make-linux-libre version hash supported-systems
                           #:key
                           ;; A function that takes an arch and a variant.
                           ;; See kernel-config for an example.
                           (extra-version #f)
                           (configuration-file #f)
                           (defconfig "defconfig")
                           (extra-options %default-extra-linux-options)
                           (patches (list %boot-logo-patch)))
  ...)

The current linux-libre package is for the 5.1.x series, and is declared like this:

(define-public linux-libre
  (make-linux-libre %linux-libre-version
                    %linux-libre-hash
                    '("x86_64-linux" "i686-linux" "armhf-linux" "aarch64-linux")
                    #:patches %linux-libre-5.1-patches
                    #:configuration-file kernel-config))

Any keys which are not assigned values inherit their default value from the make-linux-libre definition. When comparing the two snippets above, you may notice that the code comment in the first doesn't actually refer to the #:extra-version keyword; it is actually for #:configuration-file. Because of this, it is not actually easy to include a custom kernel configuration from the definition, but don't worry, there are other ways to work with what we do have.

There are two ways to create a kernel with a custom kernel configuration. The first is to provide a standard .config file during the build process by including an actual .config file as a native input to our custom kernel. The following is a snippet from the custom 'configure phase of the make-linux-libre package definition:

(let ((build  (assoc-ref %standard-phases 'build))
      (config (assoc-ref (or native-inputs inputs) "kconfig")))

  ;; Use a custom kernel configuration file or a default
  ;; configuration file.
  (if config
      (begin
        (copy-file config ".config")
        (chmod ".config" #o666))
      (invoke "make" ,defconfig))

Below is a sample kernel package for one of my computers. Linux-Libre is just like other regular packages and can be inherited and overridden like any other:

(define-public linux-libre/E2140
  (package
    (inherit linux-libre)
    (native-inputs
     `(("kconfig" ,(local-file "E2140.config"))
      ,@(alist-delete "kconfig"
                      (package-native-inputs linux-libre))))))

In the same directory as the file defining linux-libre/E2140 is a file named E2140.config, which is an actual kernel configuration file. I left the defconfig keyword of make-linux-libre blank, so the only kernel configuration in the package is the one I included as a native input.
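With such a package defined, using it is just a matter of pointing the kernel field of an operating-system declaration at it. A sketch of the configuration fragment, with the other fields (host-name, bootloader, file systems, and so on) elided:

```scheme
;; Sketch: using the custom kernel in a Guix System configuration.
(use-modules (gnu))

(operating-system
  ;; ... host-name, bootloader, file-systems, etc. as usual ...
  (kernel linux-libre/E2140))
```

Reconfiguring with such a file then builds and installs the custom kernel like any other package.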

The second way to create a custom kernel is to pass a new value to the extra-options keyword of the make-linux-libre procedure. The extra-options keyword works with another function defined right below it:

(define %default-extra-linux-options
  `(;; https://lists.gnu.org/archive/html/guix-devel/2014-04/msg00039.html
   ("CONFIG_DEVPTS_MULTIPLE_INSTANCES" . #t)
   ;; Modules required for initrd:
   ("CONFIG_NET_9P" . m)
   ("CONFIG_NET_9P_VIRTIO" . m)
   ("CONFIG_VIRTIO_BLK" . m)
   ("CONFIG_VIRTIO_NET" . m)
   ("CONFIG_VIRTIO_PCI" . m)
   ("CONFIG_VIRTIO_BALLOON" . m)
   ("CONFIG_VIRTIO_MMIO" . m)
   ("CONFIG_FUSE_FS" . m)
   ("CONFIG_CIFS" . m)
   ("CONFIG_9P_FS" . m)))

(define (config->string options)
  (string-join (map (match-lambda
                      ((option . 'm)
                       (string-append option "=m"))
                      ((option . #t)
                       (string-append option "=y"))
                      ((option . #f)
                       (string-append option "=n")))
                    options)
               "\n"))

And in the custom configure script from the make-linux-libre package:

;; Appending works even when the option wasn't in the
;; file.  The last one prevails if duplicated.
(let ((port (open-file ".config" "a"))
      (extra-configuration ,(config->string extra-options)))
  (display extra-configuration port)
  (close-port port))

(invoke "make" "oldconfig"))))

So by not providing a configuration-file the .config starts blank, and then we write into it the collection of flags that we want. Here's another custom kernel which I have:

(define %macbook41-full-config
  (append %macbook41-config-options
          %filesystems
          %efi-support
          %emulation
          (@@ (gnu packages linux) %default-extra-linux-options)))

(define-public linux-libre-macbook41
  ;; XXX: Access the internal 'make-linux-libre' procedure, which is
  ;; private and unexported, and is liable to change in the future.
  ((@@ (gnu packages linux) make-linux-libre)
   (@@ (gnu packages linux) %linux-libre-version)
   (@@ (gnu packages linux) %linux-libre-hash)
   '("x86_64-linux")
   #:extra-version "macbook41"
   #:patches (@@ (gnu packages linux) %linux-libre-5.1-patches)
   #:extra-options %macbook41-full-config))

From the above example %filesystems is a collection of flags I compiled enabling different filesystem support, %efi-support enables EFI support and %emulation enables my x86_64-linux machine to act in 32-bit mode also. %default-extra-linux-options are the ones quoted above, which had to be added in since I replaced them in the extra-options keyword.

This all sounds like it should be doable, but how does one even know which modules are required for their system? The two places I found most helpful to try to answer this question were the Gentoo Handbook, and the documentation from the kernel itself. From the kernel documentation, it seems that make localmodconfig is the command we want.

In order to actually run make localmodconfig we first need to get and unpack the kernel source code:

tar xf $(guix build linux-libre --source)

Once inside the directory containing the source code run touch .config to create an initial, empty .config to start with. make localmodconfig works by seeing what you already have in .config and letting you know what you're missing. If the file is blank then you're missing everything. The next step is to run:

guix environment linux-libre -- make localmodconfig

and note the output. Do note that the .config file is still empty. The output generally contains two types of warnings. The first type starts with "WARNING" and can be ignored in our case. The second type reads:

module pcspkr did not have configs CONFIG_INPUT_PCSPKR

For each of these lines, copy the CONFIG_XXXX_XXXX portion into the .config in the directory, and append =m, so in the end it looks like this:

CONFIG_INPUT_PCSPKR=m
CONFIG_VIRTIO=m

After copying all the configuration options, run make localmodconfig again to make sure that you don't have any output starting with "module". Beyond these machine-specific modules, a couple more are needed: CONFIG_MODULES, so that you can build and load modules separately rather than having everything built into the kernel, and CONFIG_BLK_DEV_SD, which is required for reading from hard drives. It is possible that there are other modules which you will need.

This post does not aim to be a guide to configuring your own kernel, however; if you do decide to build a custom kernel, you'll have to seek out other guides to create one that is just right for your needs.

The second way to set up the kernel configuration makes more use of Guix's features and allows you to share configuration segments between different kernels. For example, all machines using EFI to boot have a number of EFI configuration flags that they need. It is likely that all the kernels will share a list of filesystems to support. By using variables it is easier to see at a glance which features are enabled, and to make sure a feature present in one kernel isn't missing from another.

Left undiscussed, however, is Guix's initrd and its customization. It is likely that you'll need to modify the initrd on a machine using a custom kernel, since certain modules expected to be built may not be available for inclusion in the initrd.

Suggestions and contributions toward a satisfactory custom initrd and kernel are welcome!

About GNU Guix

GNU Guix is a transactional package manager and an advanced distribution of the GNU system that respects user freedom. Guix can be used on top of any system running the kernel Linux, or it can be used as a standalone operating system distribution for i686, x86_64, ARMv7, and AArch64 machines.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. When used as a standalone GNU/Linux distribution, Guix offers a declarative, stateless approach to operating system configuration management. Guix is highly customizable and hackable through Guile programming interfaces and extensions to the Scheme language.

21 May, 2019 12:00PM by Efraim Flashner

May 19, 2019

GNU Guix 1.0.1 released

We are pleased to announce the release of GNU Guix version 1.0.1. This new version fixes bugs in the graphical installer for the standalone Guix System.

The release comes with ISO-9660 installation images, a virtual machine image, and with tarballs to install the package manager on top of your GNU/Linux distro, either from source or from binaries. Guix users can update by running guix pull.

It’s been just over two weeks since we announced 1.0.0—two weeks and 706 commits by 40 people already!

This is primarily a bug-fix release, specifically focusing on issues in the graphical installer for the standalone system:

  • The most embarrassing bug would lead the graphical installer to produce a configuration where %base-packages was omitted from the packages field. Consequently, the freshly installed system would not have the usual commands in $PATH (ls, ps, etc.), and Xfce would fail to start for that reason. See below for a “post-mortem” analysis.
  • The wpa-supplicant service would sometimes fail to start in the installation image, thereby breaking network access; this is now fixed.
  • The installer now allows you to toggle the visibility of passwords and passphrases, and it no longer restricts their length.
  • The installer can now create Btrfs file systems.
  • network-manager-applet is now part of %desktop-services, and thus readily usable not just from GNOME but also from Xfce.
  • The NEWS file has more details, but there were also minor bug fixes for guix environment, guix search, and guix refresh.

A couple of new features were reviewed in time to make it into 1.0.1:

  • guix system docker-image now produces an OS image with an “entry point”, which makes it easier to use than before.
  • guix system container has a new --network option, allowing the container to share networking access with the host.
  • 70 new packages were added and 483 packages were updated.
  • Translations were updated as usual and we are glad to announce a 20%-complete Russian translation of the manual.

Recap of bug #35541

The 1.0.1 release was primarily motivated by bug #35541, which was reported shortly after the 1.0.0 release. If you installed Guix System with the graphical installer, chances are that, because of this bug, you ended up with a system where all the usual GNU/Linux commands—ls, grep, ps, etc.—were not in $PATH. That in turn would also prevent Xfce from starting, if you chose that desktop environment for your system.

We quickly published a note in the system installation instructions explaining how to work around the issue:

  • First, install packages that provide those commands, along with the text editor of your choice (for example, emacs or vim):

    guix install coreutils findutils grep procps sed emacs vim
  • At this point, the essential commands you would expect are available. Open your configuration file with your editor of choice, for example emacs, running as root:

    sudo emacs /etc/config.scm
  • Change the packages field to add the “base packages” to the list of globally-installed packages, such that your configuration looks like this:

    (operating-system
      ;; … snip …
      (packages (append (list (specification->package "nss-certs"))
                        %base-packages))
      ;; … snip …
      )
  • Reconfigure the system so that your new configuration is in effect:

    guix pull && sudo guix system reconfigure /etc/config.scm

If you already installed 1.0.0, you can perform the steps above to get all these core commands back.

Guix is purely declarative: if you give it an operating system definition where the “base packages” are not available system-wide, then it goes ahead and installs precisely that. That’s exactly what happened with this bug: the installer generated such a configuration and passed it to guix system init as part of the installation process.

Lessons learned

Technically, this is a “trivial” bug: it’s fixed by adding one line to your operating system configuration and reconfiguring, and the fix for the installer itself is also a one-liner. Nevertheless, it’s obviously a serious bug for the impression it gives—this is not the user experience we want to offer. So how did such a serious bug go through unnoticed?

For several years now, Guix has had a number of automated system tests running in virtual machines (VMs). These tests primarily ensure that system services work as expected, but some of them specifically test system installation: installing to a RAID or encrypted device, with a separate /home, using Btrfs, etc. These tests even run on our continuous integration service (search for the “tests.*” jobs there).

Unfortunately, those installation tests target the so-called “manual” installation process, which is scriptable. They do not test the installer’s graphical user interface. Consequently, testing the user interface (UI) itself was a manual process. Our attention was presumably focused more on UI aspects, since (so we thought) the actual installation tests were already taken care of by the system tests. That the generated system configuration could be syntactically correct yet definitely wrong from a usability viewpoint perhaps didn’t occur to us. The end result is that the issue went unnoticed.

The lessons here are: manual testing should also look for issues in “unexpected places”, and more importantly, we need automated tests for the graphical UI. The Debian and Guix installer UIs are similar—both use the Newt toolkit. Debian tests its installer using “pre-seeds” (code), which are essentially answers to all the questions and choices the UI would present. We could adopt a similar approach, or we could test the UI itself at a lower level—reading the screen, and simulating key strokes. UI testing is notoriously tricky, so we’ll have to figure out how to get there.

Conclusion

Our 1.0 party was a bit spoiled by this bug, and we are sorry that installation was disappointing to those of you who tried 1.0. We hope 1.0.1 will allow you to try and see what declarative and programmable system configuration management is like, because that’s where the real value of Guix System is—the graphical installer is icing on the cake.

Join us on #guix and on the mailing lists!

About GNU Guix

GNU Guix is a transactional package manager and an advanced distribution of the GNU system that respects user freedom. Guix can be used on top of any system running the kernel Linux, or it can be used as a standalone operating system distribution for i686, x86_64, ARMv7, and AArch64 machines.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. When used as a standalone GNU/Linux distribution, Guix offers a declarative, stateless approach to operating system configuration management. Guix is highly customizable and hackable through Guile programming interfaces and extensions to the Scheme language.

19 May, 2019 11:30PM by Ludovic Courtès

bison @ Savannah

Bison 3.4 released [stable]

We are happy to announce the release of Bison 3.4.

A particular focus was put on improving the diagnostics, which are now
colored by default, and accurate with multibyte input.  Their format was
also changed, and is now similar to GCC 9's diagnostics.

Users of the default backend (yacc.c) can use the new %define variable
api.header.include to avoid duplicating the content of the generated header
in the generated parser.  There are two new examples installed, including a
reentrant calculator which supports recursive calls to the parser and
Flex-generated scanner.

See below for more details.

==================================================================

Bison is a general-purpose parser generator that converts an annotated
context-free grammar into a deterministic LR or generalized LR (GLR) parser
employing LALR(1) parser tables.  Bison can also generate IELR(1) or
canonical LR(1) parser tables.  Once you are proficient with Bison, you can
use it to develop a wide range of language parsers, from those used in
simple desk calculators to complex programming languages.

Bison is upward compatible with Yacc: all properly-written Yacc grammars
work with Bison with no change.  Anyone familiar with Yacc should be able to
use Bison with little trouble.  You need to be fluent in C, C++ or Java
programming in order to use Bison.

Here is the GNU Bison home page:
   https://gnu.org/software/bison/

==================================================================

Here are the compressed sources:
  https://ftp.gnu.org/gnu/bison/bison-3.4.tar.gz   (4.1MB)
  https://ftp.gnu.org/gnu/bison/bison-3.4.tar.xz   (3.1MB)

Here are the GPG detached signatures[*]:
  https://ftp.gnu.org/gnu/bison/bison-3.4.tar.gz.sig
  https://ftp.gnu.org/gnu/bison/bison-3.4.tar.xz.sig

Use a mirror for higher download bandwidth:
  https://www.gnu.org/order/ftp.html

[*] Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact.  First, be sure to download both the .sig file
and the corresponding tarball.  Then, run a command like this:

  gpg --verify bison-3.4.tar.gz.sig

If that command fails because you don't have the required public key,
then run this command to import it:

  gpg --keyserver keys.gnupg.net --recv-keys 0DDCAA3278D5264E

and rerun the 'gpg --verify' command.

This release was bootstrapped with the following tools:
  Autoconf 2.69
  Automake 1.16.1
  Flex 2.6.4
  Gettext 0.19.8.1
  Gnulib v0.1-2563-gd654989d8

==================================================================

NEWS


* Noteworthy changes in release 3.4 (2019-05-19) [stable]

** Deprecated features

  The %pure-parser directive has been deprecated in favor of '%define
  api.pure' since Bison 2.3b (2008-05-27), but no warning was issued;
  there is one now.  Note that since Bison 2.7 you are strongly
  encouraged to use '%define api.pure full' instead of '%define api.pure'.

** New features

*** Colored diagnostics

  As an experimental feature, diagnostics are now colored, controlled by the
  new options --color and --style.

  To use them, install the libtextstyle library before configuring Bison.
  It is available from

    https://alpha.gnu.org/gnu/gettext/

  for instance

    https://alpha.gnu.org/gnu/gettext/libtextstyle-0.8.tar.gz

  The option --color supports the following arguments:
    - always, yes: Enable colors.
    - never, no: Disable colors.
    - auto, tty (default): Enable colors if the output device is a tty.

  To customize the styles, create a CSS file similar to

    /* bison-bw.css */
    .warning   { }
    .error     { font-weight: 800; text-decoration: underline; }
    .note      { }

  then invoke bison with --style=bison-bw.css, or set the BISON_STYLE
  environment variable to "bison-bw.css".

*** Disabling output

  When given -fsyntax-only, the diagnostics are reported, but no output is
  generated.

  The name of this option is somewhat misleading, as bison does more
  than just check the syntax: every stage is run (including checking
  for conflicts, for instance), except the generation of the output
  files.

*** Include the generated header (yacc.c)

  Previously, when --defines was used, bison generated a header and
  pasted an exact copy of it into the generated parser implementation
  file.  If the header name is not "y.tab.h", it is now #included
  instead of being duplicated.

  To use an '#include' even if the header name is "y.tab.h" (which is what
  happens with --yacc, or when using the Autotools' ylwrap), define
  api.header.include to the exact argument to pass to #include.  For
  instance:

    %define api.header.include {"parse.h"}

  or

    %define api.header.include {<parser/parse.h>}

*** api.location.type is now supported in C (yacc.c, glr.c)

  The %define variable api.location.type defines the name of the type to use
  for locations.  When defined, Bison no longer defines YYLTYPE.

  This can be used in programs with several parsers to factor their
  definition of locations: let one of them generate them, and the others
  just use them.

** Changes

*** Graphviz output

  In conformance with the recommendations of the Graphviz team, if %require
  "3.4" (or better) is specified, the option --graph generates a *.gv file
  by default, instead of *.dot.

*** Diagnostics overhaul

  Column numbers were wrong with multibyte characters, which would also
  result in skewed diagnostics with carets.  Besides, because we were
  indenting the quoted source with a single space, lines with tab
  characters were incorrectly underlined.

  To address these issues, and to be clearer, Bison now issues diagnostics
  as GCC9 does.  For instance it used to display (there's a tab before the
  opening brace):

    foo.y:3.37-38: error: $2 of ‘expr’ has no declared type
     expr: expr '+' "number"        { $$ = $1 + $2; }
                                         ^~
  It now reports

    foo.y:3.37-38: error: $2 of ‘expr’ has no declared type
        3 | expr: expr '+' "number" { $$ = $1 + $2; }
          |                                     ^~

  Other constructs now also have better locations, resulting in more precise
  diagnostics.

*** Fix-it hints for %empty

  Running Bison with -Wempty-rules and --update will remove incorrect %empty
  annotations, and add the missing ones.

*** Generated reports

  The format of the reports (parse.output) was improved for readability.

*** Better support for --no-line.

  When --no-line is used, the generated files are now cleaner: no lines are
  generated instead of empty lines.  Together with using api.header.include,
  that should help people saving the generated files into version control
  systems get smaller diffs.

** Documentation

  A new example in C shows a simple infix calculator with a hand-written
  scanner (examples/c/calc).

  A new example in C shows a reentrant parser (capable of recursive calls)
  built with Flex and Bison (examples/c/reccalc).

  There is a new section about the history of Yaccs and Bison.

** Bug fixes

  A few obscure bugs were fixed, including the second oldest (known) bug in
  Bison: it was there when Bison was entered in the RCS version control
  system, in December 1987.  See the NEWS of Bison 3.3 for the previous
  oldest bug.

19 May, 2019 10:01AM by Akim Demaille

May 16, 2019

FSF News

Six more devices from ThinkPenguin, Inc. now FSF-certified to Respect Your Freedom

This is ThinkPenguin's second batch of devices to receive RYF certification this spring. The FSF announced certification of seven other devices from ThinkPenguin on March 21st. This latest collection of devices makes ThinkPenguin the retailer with the largest catalog of RYF-certified devices.

"It's unfortunate that so many of even the simplest devices out there have surprise proprietary software requirements. RYF is an antidote for that. It connects ethical shoppers concerned about their freedom with companies offering options respecting that freedom," said the FSF's executive director, John Sullivan.

Today's certifications expand the availability of RYF-certified peripheral devices. The Penguin USB 2.0 External USB Stereo Sound Adapter and the 5.1 Channels 24-bit 96KHz PCI Express Audio Sound Card help users get the most out of their computers in terms of sound quality. For wireless connectivity, ThinkPenguin offers the Wireless N PCI Express Dual-Band Mini Half-Height Card and the Penguin Wireless N Mini PCIe Card. For users with an older printer, the USB to Parallel Printer Cable lets them continue to use it with their more current hardware. Finally, the PCIe eSATA / SATA 6Gbps Controller Card helps users connect to external eSATA devices as well as internal SATA ones.

"I've spent the last 14 years working on projects aimed at making free software adoption easy for everyone, but the single greatest obstacle over the past 20 years has not been software. It's been hardware. The RYF program helps solve this problem by linking users to trustworthy sources where they can get hardware guaranteed to work on GNU/Linux, and be properly supported using free software," said Christopher Waid, founder and CEO of ThinkPenguin.

While ThinkPenguin has consistently sought certification since the inception of the RYF program -- gaining their first certification in 2013, and adding several more over the years since -- the pace at which they are gaining certifications now eclipses all past efforts.

"ThinkPenguin continues to impress with the rapid expansion of their catalog of RYF-certified devices. Adding 13 new devices in a little over a month shows their dedication to the RYF certification program and the protection of users it represents," said the FSF's licensing and compliance manager, Donald Robertson, III.

To learn more about the Respects Your Freedom certification program, including details on the certification of these ThinkPenguin devices, please visit https://fsf.org/ryf.

Retailers interested in applying for certification can consult https://www.fsf.org/resources/hw/endorsement/criteria.

About the Free Software Foundation

The Free Software Foundation, founded in 1985, is dedicated to promoting computer users' right to use, study, copy, modify, and redistribute computer programs. The FSF promotes the development and use of free (as in freedom) software -- particularly the GNU operating system and its GNU/Linux variants -- and free documentation for free software. The FSF also helps to spread awareness of the ethical and political issues of freedom in the use of software, and its Web sites, located at https://fsf.org and https://gnu.org, are an important source of information about GNU/Linux. Donations to support the FSF's work can be made at https://donate.fsf.org. Its headquarters are in Boston, MA, USA.

More information about the FSF, as well as important information for journalists and publishers, is at https://www.fsf.org/press.

About ThinkPenguin, Inc.

Started by Christopher Waid, founder and CEO, ThinkPenguin, Inc., is a consumer-driven company with a mission to bring free software to the masses. At the core of the company is a catalog of computers and accessories with broad support for GNU/Linux. The company provides technical support for end-users and works with the community, distributions, and upstream projects to make GNU/Linux all that it can be.

Media Contacts

Donald Robertson, III
Licensing and Compliance Manager
Free Software Foundation
+1 (617) 542 5942
licensing@fsf.org

ThinkPenguin, Inc.
+1 (888) 39 THINK (84465) x703
media@thinkpenguin.com

16 May, 2019 05:44PM

May 12, 2019

GNUnet News

2019-05-12: GNUnet 0.11.4 released

We are pleased to announce the release of GNUnet 0.11.4.

This is a bugfix release for 0.11.3, mostly fixing minor bugs, improving documentation and fixing various build issues. In terms of usability, users should be aware that there are still a large number of known open issues in particular with respect to ease of use, but also some critical privacy issues especially for mobile users. Also, the nascent network is tiny (about 200 peers) and thus unlikely to provide good anonymity or extensive amounts of interesting information. As a result, the 0.11.4 release is still only suitable for early adopters with some reasonable pain tolerance.

Download links

(gnunet-gtk and gnunet-fuse were not released again, as there were no changes and the 0.11.0 versions are expected to continue to work fine with gnunet-0.11.4.)

Note that due to mirror synchronization, not all links might be functional early after the release. For direct access try http://ftp.gnu.org/gnu/gnunet/

Note that GNUnet is now started using gnunet-arm -s. GNUnet should be stopped using gnunet-arm -e.

Noteworthy changes in 0.11.4

  • gnunet-arm -s no longer logs into the console by default and instead into a logfile (in $GNUNET_HOME).
  • The reclaim subsystem is no longer experimental. Further, the internal encryption scheme moved from ABE to GNS-style encryption.
  • GNUnet now depends on a more recent version of libmicrohttpd.
  • The REST API now includes read-only access to the configuration.
  • All manpages are now in mdoc format.
  • gnunet-download-manager.scm removed.

Known Issues

  • There are known major design issues in the TRANSPORT, ATS and CORE subsystems which will need to be addressed in the future to achieve acceptable usability, performance and security.
  • There are known moderate implementation limitations in CADET that negatively impact performance. Also CADET may unexpectedly deliver messages out-of-order.
  • There are known moderate design issues in FS that also impact usability and performance.
  • There are minor implementation limitations in SET that create unnecessary attack surface for availability.
  • The RPS subsystem remains experimental.
  • Some high-level tests in the test-suite fail non-deterministically due to the low-level TRANSPORT issues.

In addition to this list, you may also want to consult our bug tracker at bugs.gnunet.org which lists about 190 more specific issues.

Thanks

This release was the work of many people. The following people contributed code and were thus easily identified: ng0, Christian Grothoff, Hartmut Goebel, Martin Schanzenbach, Devan Carpenter, Naomi Phillips and Julius Bünger.

12 May, 2019 05:40PM

May 11, 2019

unifont @ Savannah

Unifont 12.1.01 Released

11 May 2019 Unifont 12.1.01 is now available. Significant changes in this version include the Reiwa Japanese era glyph (U+32FF), which was the only addition made in the Unicode 12.1.0 release of 7 May 2019; Rebecca Bettencourt has contributed many Under-ConScript Unicode Registry (UCSUR) scripts; and David Corbett and Johnnie Weaver modified glyphs in two Plane 1 scripts.  Full details are in the ChangeLog file.

Download this release at:

https://ftpmirror.gnu.org/unifont/unifont-12.1.01/

or if that fails,

https://ftp.gnu.org/gnu/unifont/unifont-12.1.01/

or, as a last resort,

ftp://ftp.gnu.org/gnu/unifont/unifont-12.1.01/

11 May, 2019 08:59PM by Paul Hardy

May 09, 2019

gettext @ Savannah

GNU gettext 0.20 released

Download from https://ftp.gnu.org/pub/gnu/gettext/gettext-0.20.tar.gz

New in this release:

  • Support for reproducible builds:

  - msgfmt now eliminates the POT-Creation-Date header field from .mo files.

  • Improvements for translators:

  - update-po target in Makefile.in.in now uses msgmerge --previous.

  • Improvements for maintainers:

  - msgmerge now has a new option, --for-msgfmt, which produces a PO file meant for use by msgfmt only.  This option saves processing time, in particular by omitting fuzzy matching, which is not useful in this situation.
  - The .pot file in a 'po' directory is now erased by "make maintainer-clean".
  - It is now possible to override xgettext options from the po/Makefile.in.in through options in XGETTEXT_OPTIONS (declared in po/Makevars).
  - The --intl option of the gettextize program (deprecated since 2010) is no longer available. Instead of including the intl sources in your package, we suggest making the libintl library an optional prerequisite of your package. This will simplify the build system of your package.
  - Accordingly, the Autoconf macro AM_GNU_GETTEXT_INTL_SUBDIR is gone as well.

  • Programming languages support:

  - C, C++:
    xgettext now supports strings in u8"..." syntax, as specified in C11 and C++11.
  - C, C++:
    xgettext now supports 'p'/'P' exponent markers in number tokens, as specified in C99 and C++17.
  - C++:
    xgettext now supports underscores in number tokens.
  - C++:
    xgettext now supports single-quotes in number tokens, as specified in C++14.
  - Shell:
    o The programs 'gettext', 'ngettext' now support a --context argument.
    o gettext.sh contains new function eval_pgettext and eval_npgettext for producing translations of messages with context.
  - Java:
    o xgettext now supports UTF-8 encoded .properties files (a new feature of Java 9).
    o The build system and tools now support Java 9, 10, and 11. On the other hand, support for old versions of Java (Java 5 and older, GCJ 4.2.x and older) has been dropped.
  - Perl:
    o Native support for context functions (pgettext, dpgettext, dcpgettext, npgettext, dnpgettext, dcnpgettext).
    o better detection of question mark and slash as operators (as opposed to regular expression delimiters).
  - Scheme:
    xgettext now parses the syntax for specialized byte vectors (#u8(...), #vu8(...), etc.) correctly.
  - Pascal:
    xgettext can now extract strings from .rsj files, produced by the Free Pascal compiler version 3.0.0 or newer.
  - Vala:
    xgettext now parses escape sequences in strings more accurately.
  - JavaScript:
    xgettext now parses template literals correctly.

  • Runtime behaviour:

  - The interpretation of the language preferences on macOS has been fixed.
  - Per-thread locales are now also supported on Solaris 11.4.
  - The replacements for the printf()/fprintf()/... functions that are provided through <libintl.h> on native Windows and NetBSD are now POSIX compliant.  There is no conflict any more between these replacements and other possible replacements provided by gnulib or mingw.

  • Libtextstyle:

  - This package installs a new library 'libtextstyle', together with a new header file <textstyle.h>.  It is a library for styling text output sent to a console or terminal emulator.  Packagers: please see the suggested packaging hints in the file PACKAGING.

09 May, 2019 02:15AM by Bruno Haible

May 07, 2019

cssc @ Savannah

CSSC-1.4.1 released

I'm pleased to announce the release of GNU CSSC, version
1.4.1.  This is a stable release.  The previous stable
release was 1.4.0.

Stable releases of CSSC are available from
https://ftp.gnu.org/gnu/cssc/.  Development releases and
release candidates are available from
https://alpha.gnu.org/gnu/cssc/.

CSSC ("Compatibly Stupid Source Control") is the GNU
project's replacement for the traditional Unix SCCS suite.
It aims for full compatibility, including precise nuances
of behaviour, support for all command-line options, and in
most cases bug-for-bug compatibility.  CSSC comes with an
extensive automated test suite.

If you are currently using SCCS to do version control of
software, you should be able to just drop in CSSC, even for
example if you have a large number of shell scripts which
are layered on top of SCCS and depend on it.  This should
allow you to develop on and for the GNU/Linux platform if
your source code exists only in an SCCS repository.  CSSC
also allows you to migrate to a more modern version control
system (such as git).

There is a mailing list for users of the CSSC suite.  To
join it, please send email to <cssc-users-request@gnu.org>
or visit the URL
http://lists.gnu.org/mailman/listinfo/cssc-users.

There is also a mailing list for (usually automated) mails
about bugs and changes to CSSC.  This is
http://lists.gnu.org/mailman/listinfo/bug-cssc.

For more information about CSSC, please see
http://www.gnu.org/software/cssc/.

These people have contributed to the development of CSSC :-

James Youngman, Ross Ridge, Eric Allman, Lars Hecking,
Larry McVoy, Dave Bodenstab, Malcolm Boff, Richard Polton,
Fila Kolodny, Peter Kjellerstedt, John Interrante, Marko
Rauhamaa, Achim Hoffann, Dick Streefland, Greg A. Woods,
Aron Griffis, Michael Sterrett, William W. Austin, Hyman
Rosen, Mark Reynolds, Sergey Ostashenko, Frank van
Maarseveen, Jeff Sheinberg, Thomas Duffy, Yann Dirson,
Martin Wilck

Many thanks to all the above people.

Changes since the previous stable release are:

New in CSSC-1.4.1, 2019-05-07

* This release - and future releases - of CSSC must be
  compiled with a C++ compiler that supports the 2011
  C++ standard.

* When the history file is updated (with admin or delta for
  example), it is no longer made executable simply because the
  'x' flag is set.  Instead, the executable-ness of the history
  file being replaced is preserved.

* This release is based on updated versions of gnulib and of
  the googletest unit test framework.

Checksums for the release file are:
$ for sum in sha1sum md5sum ; do $sum CSSC-1.4.1.tar.gz; done
bfb99cbd6255c7035e99455de7241d4753746fe1  CSSC-1.4.1.tar.gz
c9aaae7602e39b7a5d438b0cc48fcaa3  CSSC-1.4.1.tar.gz

Please report any bugs via this software to the CSSC bug
reporting page, http://savannah.gnu.org/bugs/?group=cssc

07 May, 2019 08:27PM by James Youngman

May 02, 2019

GNU Guix

GNU Guix 1.0.0 released

We are excited to announce the release of GNU Guix version 1.0.0!

The release comes with ISO-9660 installation images, a virtual machine image, and with tarballs to install the package manager on top of your GNU/Linux distro, either from source or from binaries. Guix users can update by running guix pull.

Guix 1.0!

One-point-oh always means a lot for free software releases. For Guix, 1.0 is the result of seven years of development, with code, packaging, and documentation contributions made by 260 people, translation work carried out by a dozen people, and artwork and web site development by a couple of individuals, to name some of the activities that have been happening. During those years we published no fewer than 19 “0.x” releases.

The journey to 1.0

We took our time to get there, which is quite unusual in an era where free software moves so fast. Why did we take this much time? First, it takes time to build a community around a GNU/Linux distribution, and a distribution wouldn’t really exist without it. Second, we feel like we’re contributing an important piece to the GNU operating system, and that is surely intimidating and humbling.

Last, we’ve been building something new. Of course we stand on the shoulders of giants, and in particular Nix, which brought the functional software deployment paradigm that Guix implements. But developing Guix has been—and still is!—a challenge in many ways: it’s a programming language design challenge, an operating system design challenge, a challenge for security, reproducibility, bootstrapping, usability, and more. In other words, it’s been a long but insightful journey! :-)

What GNU Guix can do for you

Some readers are probably discovering Guix today, so let’s recap what Guix can do for you as a user. Guix is a complete toolbox for software deployment in general, which makes it different from most of the tools you may be familiar with.

Guix manages packages, environments, containers, and systems.

This may sound a little abstract so let’s look at concrete use cases:

  • As a user, Guix allows you to install applications and to keep them up-to-date: search for software with guix search, install it with guix install, and maintain it up-to-date by regularly running guix pull and guix upgrade. Guix follows a so-called “rolling release” model, so you can run guix pull at any time to get the latest and greatest bits of free software.

    This certainly sounds familiar, but a distinguishing property here is dependability: Guix is transactional, meaning that you can at any time roll back to a previous “generation” of your package set with guix package --roll-back, inspect differences with guix package -l, and so on.

    Another useful property is reproducibility: Guix allows you to deploy the exact same software environment on different machines or at different points in time thanks to guix describe and guix pull.

    This, coupled with the fact that package management operations do not require root access, is invaluable notably in the context of high-performance computing (HPC) and reproducible science, which the Guix-HPC effort has been focusing on.

  • As a developer, we hope you’ll enjoy guix environment, which allows you to spawn one-off software environments. Suppose you’re a GIMP developer: running guix environment gimp spawns a shell with everything you need to hack on GIMP—much quicker than manually installing its many dependencies.

    Developers often struggle to get their work into users’ hands for quick feedback. The guix pack command provides an easy way to create container images for use by Docker & co., or even standalone relocatable tarballs that anyone can run, regardless of the GNU/Linux distribution they use.

    Oh, and you may also like package transformation options, which allow you to define package variants from the command line.

  • As a system administrator—and actually, we’re all system administrators of sorts on our laptops!—, Guix’s declarative and unified approach to configuration management should be handy. It surely is a departure from what most people are used to, but it is so reassuring: one configuration file is enough to specify all the aspects of the system config—services, file systems, locale, accounts—all in the same language.

    That makes it surprisingly easy to deploy otherwise complex services such as applications that depend on Web services. For instance, setting up CGit or Zabbix is a one-liner, even though behind the scenes that involves setting up nginx, fcgiwrap, etc. We’d love to see to what extent this helps people self-host services—sort of similar to what FreedomBox and YunoHost have been focusing on.

    With guix system you can instantiate a configuration on your machine, or in a virtual machine (VM) where you can test it, or in a container. You can also provision ISO images, VM images, or container images with a complete OS, from the same config, all with guix system.
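The user-facing workflow described in the first bullet boils down to a handful of commands. A sketch of a typical session (these commands assume an installed Guix; the package names are illustrative):

```shell
# Search for and install software.
guix search image editor
guix install gimp

# Stay up to date: refresh the package collection, then upgrade.
guix pull
guix upgrade

# Transactional safety: list the generations of your profile and
# roll back if an upgrade misbehaves.
guix package --list-generations
guix package --roll-back

# Reproducibility: record the exact channels in use, then replay
# them on another machine (or at a later date) with guix pull.
guix describe -f channels > channels.scm
guix pull -C channels.scm
```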
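For the developer workflow, a short sketch of guix environment and guix pack in action (again assuming an installed Guix; GIMP stands in for whatever you hack on):

```shell
# Spawn a one-off shell with everything needed to build GIMP.
guix environment gimp

# Add extra "ad hoc" tools alongside the build dependencies.
guix environment gimp --ad-hoc git gdb

# Ship your work: a Docker image, or a relocatable tarball that
# runs on any GNU/Linux distribution.
guix pack -f docker gimp
guix pack -RR gimp
```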
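The system-administration side follows the same pattern: every guix system action below reads the same declarative configuration file (here called config.scm, an illustrative name):

```shell
# Try the configuration in a throwaway virtual machine first.
guix system vm config.scm

# Instantiate it on the machine you are running on.
guix system reconfigure config.scm

# Or build a bootable disk image from the very same file.
guix system disk-image config.scm
```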

The quick reference card shows the important commands. As you start diving deeper into Guix, you’ll discover that many aspects of the system are exposed using consistent Guile programming interfaces: package definitions, system services, the “init” system, and a whole bunch of system-level libraries. We believe that makes the system very hackable, and we hope you’ll find it as much fun to play with as we do.

So much for the overview!

What’s new since 0.16.0

For those who’ve been following along, a great many things have changed over the last 5 months since the 0.16.0 release—99 people contributed over 5,700 commits during that time! Here are the highlights:

  • The ISO installation image now runs a cute text-mode graphical installer—big thanks to Mathieu Othacehe for writing it and to everyone who tested it and improved it! It is similar in spirit to the Debian installer. Whether you’re a die-hard GNU/Linux hacker or a novice user, you’ll certainly find that this makes system installation much less tedious than it was! The installer is fully translated to French, German, and Spanish.
  • The new VM image better matches user expectations: whether you want to tinker with Guix System and see what it’s like, or whether you want to use it as a development environment, this VM image should be more directly useful.
  • The user interface was improved: aliases for common operations such as guix search and guix install are now available, diagnostics are now colorized, more operations show a progress bar, there’s a new --verbosity option recognized by all commands, and most commands are now “quiet” by default.
  • There’s a new --with-git-url package transformation option, which goes along with --with-branch and --with-commit.
  • Guix now has a first-class, uniform mechanism to configure keyboard layout—a long overdue addition. Related to that, Xorg configuration has been streamlined with the new xorg-configuration record.
  • We introduced guix pack -R a while back: it creates tarballs containing relocatable application bundles that rely on user namespaces. Starting from 1.0, guix pack -RR (like “reliably relocatable”?) generates relocatable binaries that fall back to PRoot on systems where user namespaces are not supported.
  • More than 1,100 packages were added, leading to close to 10,000 packages, 2,104 packages were updated, and several system services were contributed.
  • The manual has been fully translated to French, the German and Spanish translations are nearing completion, and work has begun on a Simplified Chinese translation. You can help translate the manual into your language by joining the Translation Project.

That’s a long list already, but you can find more details in the NEWS file.

What’s next?

One-point-oh is a major milestone, especially for those of us who’ve been on board for several years. But with the wealth of ideas we’ve been collecting, it’s definitely not the end of the road!

If you’re interested in “devops” and distributed deployment, you will certainly be happy to help in that area, those interested in OS development might want to make the Shepherd more flexible and snappy, furthering integration with Software Heritage will probably be #1 on the to-do list of scientists concerned with long-term reproducibility, programming language tinkerers may want to push G-expressions further, etc. Guix 1.0 is a tool that’s both serviceable for one’s day-to-day computer usage and a great playground for the tinkerers among us.

Whether you want to help on design, coding, maintenance, system administration, translation, testing, artwork, web services, funding, organizing a Guix install party… your contributions are welcome!

We’re humans—don’t hesitate to get in touch with us, and enjoy Guix 1.0!

About GNU Guix

GNU Guix is a transactional package manager and an advanced distribution of the GNU system that respects user freedom. Guix can be used on top of any system running the kernel Linux, or it can be used as a standalone operating system distribution for i686, x86_64, ARMv7, and AArch64 machines.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. When used as a standalone GNU/Linux distribution, Guix offers a declarative, stateless approach to operating system configuration management. Guix is highly customizable and hackable through Guile programming interfaces and extensions to the Scheme language.

02 May, 2019 04:00PM by Ludovic Courtès

April 24, 2019

dico @ Savannah

Version 2.9

Version 2.9 of GNU dico is available for download from the GNU archive and from its main archive site.

This version fixes compilation on 32-bit systems.

24 April, 2019 06:57AM by Sergey Poznyakoff