Planet GNU

Aggregation of development blogs from the GNU Project

July 29, 2021

remotecontrol @ Savannah

July 28, 2021

FSF Blogs

FSF News

FSF job opportunity: Operations assistant

The Free Software Foundation (FSF), a Massachusetts 501(c)(3) charity with a worldwide mission to protect and promote computer-user freedom, seeks a motivated and organized Boston-based individual to be our full-time operations assistant.

28 July, 2021 08:35PM

July 25, 2021

health @ Savannah

Release of MyGNUHealth 1.0.3

Dear GNU community:

I am happy to announce the release of version 1.0.3 of MyGNUHealth, the GNU Health Personal Health Record (PHR) component.

This release updates the medical genetics domain with the latest human natural variant dataset from the UniProt Consortium (release 2021_03 of June 2, 2021).

Statistics for single amino acid variants:

             Likely pathogenic or pathogenic (LP/P):  31398
             Likely benign or benign         (LB/B):  39584
             Uncertain significance            (US):   8763
                                             --------------
                                              Total:  79745

In addition, there are some minor updates to the documentation and credits.

This latest version is already available at Savannah and on the Python Package Index (PyPI). It will shortly be available in your favorite Libre operating system / distribution as well.

Again, thanks to all of you who collaborate and make GNU Health a reality!

Happy and healthy hacking!
Luis

25 July, 2021 01:03PM by Luis Falcon

July 22, 2021

parallel @ Savannah

GNU Parallel 20210722 ('Blue Unity') released

GNU Parallel 20210722 ('Blue Unity') has been released. It is available for download at: lbry://@GnuParallel:4

Please help spread GNU Parallel by making a testimonial video like Juan Sierra Pons: http://www.elsotanillo.net/wp-content/uploads/GnuParallel_JuanSierraPons.mp4

It does not have to be as detailed as Juan's. It is perfectly fine if you just say your name, and what field you are using GNU Parallel for.

Quote of the month:

  We use gnu parallel now - and happier for it.
     -- Ben Davies @benjamindavies@twitter

New in this release:

  • parset supports associative arrays in bash, ksh, zsh.
  • Online HTML is now generated by Sphinx.
  • Bug fixes and man page updates.

News about GNU Parallel:

Get the book: GNU Parallel 2018 http://www.lulu.com/shop/ole-tange/gnu-parallel-2018/paperback/product-23558902.html

GNU Parallel - For people who live life in the parallel lane.

If you like GNU Parallel record a video testimonial: Say who you are, what you use GNU Parallel for, how it helps you, and what you like most about it. Include a command that uses GNU Parallel if you feel like it.

About GNU Parallel

GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.

For example you can run this to convert all jpeg files into png and gif files and have a progress bar:

  parallel --bar convert {1} {1.}.{2} ::: *.jpg ::: png gif

Or you can generate big, medium, and small thumbnails of all jpeg files in sub dirs:

  find . -name '*.jpg' |
    parallel convert -geometry {2} {1} {1//}/thumb{2}_{1/} :::: - ::: 50 100 200

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with:

    $ (wget -O - pi.dk/3 || lynx -source pi.dk/3 || curl pi.dk/3/ || \
       fetch -o - http://pi.dk/3 ) > install.sh
    $ sha1sum install.sh | grep c82233e7da3166308632ac8c34f850c0
    12345678 c82233e7 da316630 8632ac8c 34f850c0
    $ md5sum install.sh | grep ae3d7aac5e15cf3dfc87046cfc5918d2
    ae3d7aac 5e15cf3d fc87046c fc5918d2
    $ sha512sum install.sh | grep dfc00d823137271a6d96225cea9e89f533ff6c81f
    9c5198d5 31a3b755 b7910ece 3a42d206 c804694d fc00d823 137271a6 d96225ce
    a9e89f53 3ff6c81f f52b298b ef9fb613 2d3f9ccd 0e2c7bd3 c35978b5 79acb5ca
    $ bash install.sh

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your command line will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2018): GNU Parallel 2018, March 2018, https://doi.org/10.5281/zenodo.1146014.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/ Identi.ca/Google+/Twitter/Facebook/Linkedin/mailing lists
  • Get the merchandise https://gnuparallel.threadless.com/designs/gnu-parallel
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference

If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)

If GNU Parallel saves you money:

  • (Have your company) donate to FSF https://my.fsf.org/donate/

About GNU SQL

GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.
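For illustration, a DBURL packs the login information into one string with the familiar URL anatomy, e.g. `protocol://user:password@host:port/database`. The sketch below is not GNU sql's own parser, and the credentials and host are made up; it just takes a hypothetical DBURL apart with plain shell parameter expansion to show the shape:

```shell
# Pull a hypothetical DBURL apart with POSIX parameter expansion.
# Shape: protocol://user:password@host:port/database
dburl='mysql://alice:s3cret@db.example.com:3306/sales'

proto=${dburl%%://*}        # mysql
rest=${dburl#*://}          # alice:s3cret@db.example.com:3306/sales
userpass=${rest%%@*}        # alice:s3cret
hostportdb=${rest#*@}       # db.example.com:3306/sales
user=${userpass%%:*}        # alice
pass=${userpass#*:}         # s3cret
hostport=${hostportdb%%/*}  # db.example.com:3306
db=${hostportdb#*/}         # sales
host=${hostport%%:*}        # db.example.com
port=${hostport#*:}         # 3306

echo "$proto $user@$host:$port/$db"
```

With a real installation, the same DBURL is handed to the tool directly, e.g. `sql mysql://alice:s3cret@db.example.com/sales` for an interactive shell, with a query string appended to run a single command (check `man sql` for the exact options).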

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.

About GNU Niceload

GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.
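The hard-limit behaviour can be pictured with a small shell sketch. This is only an illustration of the idea, not niceload's implementation; `above_limit` and `run_throttled` are hypothetical helpers, and the real tool offers far more (soft limits, I/O and memory sensors, etc.):

```shell
# Illustration of a hard load limit: suspend the job with SIGSTOP while the
# 1-minute load average is above $LIMIT, resume it with SIGCONT otherwise.
LIMIT=${LIMIT:-2}

above_limit() {
    # Succeed when the current 1-minute load average exceeds $LIMIT.
    awk -v lim="$LIMIT" '{exit !($1 > lim)}' /proc/loadavg
}

run_throttled() {
    "$@" &                           # start the real job
    pid=$!
    (
        # Monitor: stop/continue the job once per second while it lives.
        while kill -0 "$pid" 2>/dev/null; do
            if above_limit; then
                kill -STOP "$pid" 2>/dev/null
            else
                kill -CONT "$pid" 2>/dev/null
            fi
            sleep 1
        done
    ) &
    mon=$!
    wait "$pid"
    rc=$?
    kill "$mon" 2>/dev/null
    return "$rc"
}
```

The real invocation is simply along the lines of `niceload -l 2 program`, which keeps `program` throttled whenever the load average is above 2.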

22 July, 2021 08:33PM by Ole Tange

July 21, 2021

FSF Blogs

Philippe Aigrain, co-founder of La Quadrature du Net

21 July, 2021 08:15PM

July 20, 2021

Freedom moving forward: An overview of the FSF's history

20 July, 2021 07:35PM

July 14, 2021

health @ Savannah

MyGNUHealth maintenance release 1.0.2 is out!

MyGNUHealth 1.0.2 is ready to be downloaded from GNU.org!

This maintenance release fixes some issues with global (drawer) menus in MATE, XFCE desktops, as well as in SXMO on the PinePhone.

In addition, the documentation has been updated.
(https://www.gnuhealth.org/docs/mygnuhealth)

Happy and healthy hacking!

14 July, 2021 11:13PM by Luis Falcon

July 13, 2021

FSF Blogs

July 12, 2021

GNU Health

Back to the Future

Leonardo da Vinci said “simplicity is the ultimate sophistication”, but it seems the “modern” computing world never heard that quote, or ignores it. Today, a single application takes hundreds of megabytes of both disk and RAM. Slow, buggy, inefficient systems at every level.

Probably the best example of this cluttered mess comes from mobile computing. Most phones are bloated with useless software that not only hinders the navigation experience, but also poses a threat to your privacy. Yes, all this software is proprietary. Worst of all, you cannot even uninstall it.

Fortunately, there is hope. Let me introduce SXMO, the Simple X on Mobile project. As the authors describe it, SXMO is a minimalist environment for Linux smartphones, such as the PinePhone. SXMO embraces simplicity, and simplicity is both elegant and efficient.

MyGNUHealth running on PinePhone and SXMO
Full screen mode of MyGNUHealth on SXMO

SXMO uses a tiling window manager called dwm (Dynamic Window Manager), which allocates the different applications in the most efficient way. The dwm project ships as a single binary, whose source code is intended not to exceed 2000 lines. That is amazing.

Simplicity is robust, and that again applies to SXMO. All the components expected on a mobile phone (making and receiving calls, browsing the Internet, SMS messaging, ...) just work. Moreover, SXMO comes with a scripting system that allows us to write solutions to our needs. For instance, the screenshots you see were taken with a script of 3 lines of code. Just place the little program under your “userscripts” directory, and voilà! You are ready to take screenshots from your PinePhone!

Browsing the Internet and the GNU Health homepage

Menu driven navigation in SXMO dwm in the PinePhone

In the end, most current desktop environments are huge, bloated and buggy. The discovery of SXMO has been an eye-opener. The perfect companion for my PinePhone.

I’m using SXMO on my PinePhone as a daily driver, and I just love it. Thanks to simple distributions such as Archlinux, Parabola or PostmarketOS, and simple desktop / window managers such as dwm, I am finally enjoying Libre mobile computing.

I feel projects like this take us back to the roots, to the beautiful world of simplicity, yet deliver the latest technology and show us the path to the future.

References:

SXMO: https://www.sxmo.org

Pine64: https://www.pine64.org/

GNU Health : https://www.gnuhealth.org

PostmarketOS: https://postmarketos.org/

Archlinux: https://www.archlinux.org

Parabola: https://www.parabola.nu/

Featured Image: Leonardo da Vinci, drawing of a flying machine. Public domain, via Wikimedia Commons

12 July, 2021 02:18PM by Luis Falcon

July 08, 2021

FSF Events

"Freedom ladder" IRC discussion and brainstorming: August 05

Learning how to find help / Trying a free operating system

08 July, 2021 05:10PM

"Freedom ladder" IRC discussion and brainstorming: July 29

Understanding encryption / Mobile phone freedom

08 July, 2021 05:10PM

"Freedom ladder" IRC discussion and brainstorming: July 22

Free replacements and installing your first free program

08 July, 2021 05:10PM

"Freedom ladder" IRC discussion and brainstorming: July 15

Understanding nonfree software / Finding your own reason to use free software

08 July, 2021 05:10PM

health @ Savannah

MyGNUHealth 1.0.1 is out!

Dear all

I just released 1.0.1 for the stable series 1.0 of MyGNUHealth, the GNU Health Personal Health Record.

This maintenance release for MyGNUHealth contains, in a nutshell:

  • Fix the download path within GNU.org. Now it points to https://ftp.gnu.org/gnu/health/mygnuhealth/
  • Include Changelog file
  • Include local / offline documentation (resides on /usr/share/doc/mygnuhealth)
  • Clean up __pycache__ from tarball

Happy and healthy hacking!
Luis

08 July, 2021 12:23AM by Luis Falcon

July 07, 2021

Parabola GNU/Linux-libre

[From Arch] Sorting out old password hashes

Starting with libxcrypt 4.4.21, weak password hashes (such as MD5 and SHA1) are no longer accepted for new passwords. Users that still have their passwords stored with a weak hash will be asked to update their password on their next login.

If the login just fails (for example from a display manager), switch to a virtual terminal (Ctrl-Alt-F2) and log in there once.
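To check which scheme a password is currently stored with, look at the `$id$` prefix of the hash field in `/etc/shadow`. The helper below is an illustrative sketch of that lookup, not part of libxcrypt; the prefixes themselves are the standard crypt(3)/libxcrypt identifiers:

```shell
# Map a crypt(3) hash prefix (second field of /etc/shadow) to its scheme.
hash_scheme() {
    case "$1" in
        '$1$'*)    echo "MD5 (weak)" ;;
        '$sha1$'*) echo "SHA-1 (weak)" ;;
        '$2'*)     echo "bcrypt" ;;
        '$5$'*)    echo "SHA-256" ;;
        '$6$'*)    echo "SHA-512" ;;
        '$7$'*)    echo "scrypt" ;;
        '$y$'*)    echo "yescrypt" ;;
        *)         echo "unknown (possibly legacy DES, weak)" ;;
    esac
}

# Inspect your own entry (reading /etc/shadow requires root):
#   sudo awk -F: -v u="$USER" '$1 == u {print $2}' /etc/shadow
hash_scheme '$y$j9T$salt$hash'
```

If the reported scheme is one of the weak ones, simply running `passwd` re-hashes the password with the current default.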

07 July, 2021 01:34AM by David P.

July 03, 2021

texinfo @ Savannah

Texinfo 6.8 released

We have released version 6.8 of Texinfo, the GNU documentation format.

It's available via a mirror (xz is much smaller than gz, but gz is available too just in case):

http://ftpmirror.gnu.org/texinfo/texinfo-6.8.tar.xz
http://ftpmirror.gnu.org/texinfo/texinfo-6.8.tar.gz

Please send any comments to bug-texinfo@gnu.org.

Full announcement: https://lists.gnu.org/archive/html/bug-texinfo/2021-07/msg00011.html

03 July, 2021 11:48AM by Gavin D. Smith

July 02, 2021

FSF News

Apply to be the FSF's next executive director

The Free Software Foundation (FSF), a Massachusetts 501(c)(3) charity with a worldwide mission to protect computer user freedom, seeks a principled, compassionate, and capable leader to be its new executive director. This position can be remote or based in our Boston office.

02 July, 2021 02:00PM

July 01, 2021

FSF takes next step in commitment to improving board governance

01 July, 2021 02:40PM

June 28, 2021

Christopher Allan Webber

Hello, I'm Chris Lemmer-Webber, and I'm nonbinary trans-femme

A picture of Chris and Morgan together

I recently came out as nonbinary trans-femme. That's a picture of me on the left, with my spouse Morgan Lemmer-Webber on the right.

In a sense, not much has changed, and so much has changed. I've dropped the "-topher" from my name, and given the common tendency to apply gender to pronouns in English, please either use nonbinary pronouns or feminine pronouns to apply to me. Other changes are happening as I wander through this space, from appearance to other things. (Probably the biggest change is finally achieving something resembling self-acceptance, however.)

If you want to know more, Morgan and I did a podcast episode which explains more from my present standing, and also explains Morgan's experiences with being demisexual, which not many people know about! (Morgan has been incredible through this whole process, by the way.)

But things may change further. Maybe a year from now those changes may be even more drastic, or maybe not. We'll see. I am wandering, and I don't know where I will land, but it won't be back to where I was.

At any rate, I've spent much of my life not being able to stand myself for how I look and feel. For most of my life, I have not been able to look at myself in a mirror for more than a second or two due to the revulsion I felt at the person I saw staring back at me. The last few weeks have been a shift change for me in that regard... it's a very new experience to feel so happy with myself.

I'm only at the beginning of this journey. I'd appreciate your support... people have been incredibly kind to me by and large so far but like everyone who goes through a process like this, it's very hard in those experiences where people aren't. Thank you to everyone who has been there for me so far.

28 June, 2021 10:53PM by Chris Lemmer-Webber

June 24, 2021

health @ Savannah

Welcome to MyGNUHealth, the GNU Health Libre Personal Health Record

Original article including MyGNUHealth pictures is at (https://meanmicio.org/2021/06/24/welcome-to-mygnuhealth-the-libre-personal-health-record/)


24 June, 2021 05:35PM by Luis Falcon

GNU Health

Welcome to MyGNUHealth, the Libre Personal Health Record

MyGNUHealth 1.0 is out! The GNU Health Libre Personal Health Record is now ready for prime time!

This is great news, because citizens around the world now have access to a Free/Libre application, focused on privacy, that puts them in control of their health.

Health is personal, and so is health data. It’s been years since I got the idea of expanding the GNU Health ecosystem beyond health professionals and institutions, making it personal and accessible to individuals. Now it is a reality!

Throughout these years, mobile health (mHealth) has been governed by private companies that profit from your health data. Private companies, private insurances, proprietary operating systems, proprietary health applications. Big business, no privacy.

MyGNUHealth running on KDE Plasma desktop and Arch Linux

GNU and Libre Software

The GNU Health ecosystem exists because of Free software. Thanks to communities such as GNU, we can have fully operational operating systems, desktop environments, databases and programming languages that allow us to use and write free software. GNU Health is one example.

The Libre Software movement fights for the advancement of our societies by providing universality in computing. In the case of GNU Health, that freedom and equity in computing is applied to the healthcare and social medicine domains. Health is a non-negotiable human right, and so must be health informatics.

What is MyGNUHealth?

MyGNUHealth (MyGH) is a Personal Health Record application focused on privacy that can be used on desktops and mobile devices.

MyGH embraces the main health domains (bio-psycho-social). All the components in the GNU Health ecosystem combine social medicine and primary care with the latest on bioinformatics and precision medicine. The complex interactions between these health domains play a key role in the state of health and disease of an individual, family and society.

MyGH has the functionality of a health and activity tracker, and that of a health diary / record. It records and tracks the main anthropometric and physiological measures, such as weight, blood pressure, blood sugar level or oxygen saturation. It keeps track of your lifestyle, nutrition, physical activity, and sleep, with numerous charts to visualize the trends.

MyGNUHealth is also a diary that records all relevant information from the medical and social domains and their context. In the medical domain, you can record your encounters, immunizations, hospitalizations, lab tests, genetic and family history, among others. In the genetic context, MyGH provides a dataset of over 30000 natural variants / SNPs from UniProt that are relevant in humans. Entering the RefSNP will automatically provide the information about that particular variant and its clinical significance.

The Social domain contains the key social determinants of health (Social Gradient, Early life development, Stress, Social exclusion, Working conditions, Education, Physical environment, Unemployment, Social Support, Addiction, Food, Transportation, Health services, Family functionality, Family violence, Bullying, War), most of them from the World Health Organization's social determinants of health.

A very important feature of MyGH is that it is part of the GNU Health Federation. That is, if you wish, you can share any of this data with your health professional in real time, and they will be able to study it.

Lifestyle and activity tracker
Social domain and its contexts, along the book of life
Mood and energy assessment
Medical genetics showing the relevant information on a particular natural variant / SNP

The PinePhone and the revolution in mobile computing

Of course, in a world of mobile phones and mobile computing, we need free/libre mobile applications. The problem I was facing until recently, which prevented me from writing MyGNUHealth, was that there was no libre mobile environment. The mobile computing market has been dominated by Google and Apple, which both deliver proprietary operating systems, Android and iOS respectively.

The emergence of the Pine64 community was an eye-opener and a game changer. A thriving community of talented people, determined to provide freedom in mobile computing. Pine64 provides, among others, a smartphone (PinePhone) and a smartwatch (PineTime), and I have adopted both.

Starting up MyGNUHealth application in the PinePhone
KDE Plasma mobile applications on the PinePhone

I wrote an article some weeks ago (“Liberating our mobile computing”), where I explained why I changed my Android phone for the PinePhone, and my watch for the PineTime.

Does the PinePhone have the best camera? Can we compare the PinePhone with Apple or Google products? It’s hard to compare a multi-billion dollar corporation with a fresh, community-oriented project. The business model, the technology components and the ethics behind are very different.

So, why make the move? I made the change because we, as a society, need to embrace a technology that is universal and that respects our freedom and privacy. A technology that focuses on the individual and not on the corporation. That move takes determination and commitment. There is a small price to pay, but freedom and privacy are priceless.

Taking MyGNUHealth and the PinePhone to the outdoors.

As a physician, I need to provide my patients with resources that use state-of-the-art technology and, at the same time, guarantee the privacy of their sensitive medical information. Libre software and open standards are key in healthcare. When my patients choose free/libre software, they have full control. They also have the possibility to share it with me or with other health professionals, in real time and with the highest levels of privacy.

We can only manage sensitive health data with technology that respects our privacy. In other words, we cannot put our personal information in the hands of corporate interests. Choosing Libre Software and Hardware means much more than just technology. Libre Software means embracing solidarity and cooperation. It means sharing knowledge, code and time with others. It means embracing open science for the advancement of our societies, especially for those who need it most.

MyGNUHealth will be included by default in many operating systems and distributions, so you don’t have to worry about the technical details. Just use your health companion! If your operating system does not have MyGH in its repositories, please ask them to include it.

Governments, institutions, and health professionals need affordable technology that respects their citizens' freedom. We need you to be part of this eHealth revolution.

Happy and healthy hacking!

About GNU Health:

MyGNUHealth is part of GNU Health, the Libre digital health ecosystem. GNU Health is developed by GNU Solidario, a humanitarian, not-for-profit organization focused on the advancement of Social Medicine. GNU Solidario develops health applications and uses exclusively Free/Libre software. GNU Health is an official GNU project.

Homepage : https://www.gnuhealth.org

Documentation portal : https://www.gnuhealth.org/docs

24 June, 2021 02:54PM by Luis Falcon

dejagnu @ Savannah

DejaGnu 1.6.3 released

DejaGnu 1.6.3 was released on 16 June 2021.  Many bugs are fixed in this release and active development is resuming, though perhaps at a slow pace.

24 June, 2021 01:48AM by Jacob Bachmeyer

June 23, 2021

texmacs @ Savannah

TeXmacs 2.1 released

This version of TeXmacs consolidates many developments that took place in the last decade. Most importantly, the interface is now based on Qt, which allowed us to develop native versions for Linux, macOS, and Windows. TeXmacs has evolved from a scientific text editor into a scientific office suite, with an integrated presentation mode, technical drawing editor, versioning tools, bibliography tool, etc. The typesetting quality has continued to improve with better support for microtypography and a large variety of fonts. The converters for LaTeX and HTML have also been further perfected, and TeXmacs now comes with native support for PDF.

23 June, 2021 12:48PM by Joris van der Hoeven

June 22, 2021

parallel @ Savannah

GNU Parallel 20210622 ('Protasevich') released [stable]

GNU Parallel 20210622 ('Protasevich') [stable] has been released. It is available for download at: lbry://@GnuParallel:4

No new functionality was introduced so this is a good candidate for a stable release.

Please help spread GNU Parallel by making a testimonial video like Juan Sierra Pons: http://www.elsotanillo.net/wp-content/uploads/GnuParallel_JuanSierraPons.mp4

It does not have to be as detailed as Juan's. It is perfectly fine if you just say your name, and what field you are using GNU Parallel for.

Quote of the month:

  GNU Parallel makes my life so much easier.
  I'm glad I don't have to implement multi-threaded Python scripts on the regular.
     -- Fredrick Brennan @fr_brennan@twitter

New in this release:

  • Bug fixes and man page updates.

News about GNU Parallel:

Get the book: GNU Parallel 2018 http://www.lulu.com/shop/ole-tange/gnu-parallel-2018/paperback/product-23558902.html

GNU Parallel - For people who live life in the parallel lane.

If you like GNU Parallel record a video testimonial: Say who you are, what you use GNU Parallel for, how it helps you, and what you like most about it. Include a command that uses GNU Parallel if you feel like it.

About GNU Parallel

GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today, you will find GNU Parallel very easy to use, as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.
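For instance, a sequential shell loop like the one below can be replaced by a single GNU Parallel invocation (the input names here are placeholders):

```shell
# Sequential: each item is processed one after another.
for f in a b c; do
  echo "processing $f"
done

# The GNU Parallel equivalent runs the jobs concurrently
# (shown as a comment since it requires parallel to be installed):
#   parallel echo "processing {}" ::: a b c
```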

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.
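GNU Parallel itself is written in Perl, but the output-grouping idea can be sketched in Python: run jobs concurrently, buffer each job's output, and release the buffers in input order. This is a toy illustration, not GNU Parallel's actual implementation; `fake_worker` is a made-up stand-in for running a command:

```python
from concurrent.futures import ThreadPoolExecutor

def run_jobs_grouped(commands, worker):
    """Run jobs in parallel, but yield each job's output whole and
    in input order, so output from different jobs never interleaves."""
    with ThreadPoolExecutor() as pool:
        # pool.map collects results per job and returns them in the
        # order the jobs were submitted, regardless of completion order.
        yield from pool.map(worker, commands)

def fake_worker(cmd):
    # Stand-in for actually running a command and capturing its output.
    return [f"{cmd}: line 1", f"{cmd}: line 2"]

for lines in run_jobs_grouped(["job-a", "job-b"], fake_worker):
    print("\n".join(lines))
```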

For example you can run this to convert all jpeg files into png and gif files and have a progress bar:

  parallel --bar convert {1} {1.}.{2} ::: *.jpg ::: png gif

Or you can generate big, medium, and small thumbnails of all jpeg files in sub dirs:

  find . -name '*.jpg' |
    parallel convert -geometry {2} {1} {1//}/thumb{2}_{1/} :::: - ::: 50 100 200

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with:

    $ (wget -O - pi.dk/3 || lynx -source pi.dk/3 || curl pi.dk/3/ || \
       fetch -o - http://pi.dk/3 ) > install.sh
    $ sha1sum install.sh | grep c82233e7da3166308632ac8c34f850c0
    12345678 c82233e7 da316630 8632ac8c 34f850c0
    $ md5sum install.sh | grep ae3d7aac5e15cf3dfc87046cfc5918d2
    ae3d7aac 5e15cf3d fc87046c fc5918d2
    $ sha512sum install.sh | grep dfc00d823137271a6d96225cea9e89f533ff6c81f
    9c5198d5 31a3b755 b7910ece 3a42d206 c804694d fc00d823 137271a6 d96225ce
    a9e89f53 3ff6c81f f52b298b ef9fb613 2d3f9ccd 0e2c7bd3 c35978b5 79acb5ca
    $ bash install.sh

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your command line will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2018): GNU Parallel 2018, March 2018, https://doi.org/10.5281/zenodo.1146014.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/ Identi.ca/Google+/Twitter/Facebook/Linkedin/mailing lists
  • Get the merchandise https://gnuparallel.threadless.com/designs/gnu-parallel
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference

If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)

If GNU Parallel saves you money:

  • (Have your company) donate to FSF https://my.fsf.org/donate/

About GNU SQL

GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.
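For example (hypothetical credentials and host), a query can be passed after the DBURL, while omitting it drops you into the database's own interactive shell:

```
# Run a single query against a MySQL database:
sql mysql://user:password@dbhost/mydatabase "SELECT * FROM users;"

# No command given: opens the mysql interactive shell:
sql mysql://user:password@dbhost/mydatabase
```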

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.

About GNU Niceload

GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.
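The suspend/run policy described above can be sketched as a small decision function (a simplified illustration, not niceload's actual code; the function name and return values are made up):

```python
def next_action(load, limit, hard, run_slice=1.0, suspend_slice=1.0):
    """Decide what a niceload-style supervisor does next.

    Below the limit the program runs.  Above a hard limit it stays
    suspended.  Above a soft limit it is suspended, but is still
    granted short run slices in between.
    """
    if load < limit:
        return ("run", run_slice)
    if hard:
        return ("suspend", suspend_slice)
    return ("suspend-then-run", suspend_slice)

# A real supervisor would re-read the load average each cycle
# (e.g. os.getloadavg()) and SIGSTOP/SIGCONT the child accordingly.
print(next_action(0.5, 1.0, hard=True))
print(next_action(2.0, 1.0, hard=False))
```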

22 June, 2021 05:30PM by Ole Tange

June 21, 2021

GNU Taler news

How to issue a central bank digital currency

We are happy to announce the publication of our article on "How to issue a central bank digital currency" by the Swiss National Bank.

21 June, 2021 10:00PM

June 18, 2021

GNU Guix

Substitutes now also available from bordeaux.guix.gnu.org

There have been a number of different project-operated sources of substitutes; for the last couple of years, the default source of substitutes has been ci.guix.gnu.org (with a few different URLs).

Now, in addition to ci.guix.gnu.org, bordeaux.guix.gnu.org is a default substitute server.

Put that way, this development maybe doesn't sound particularly interesting. Why is a second substitute server useful? There are some thoughts on that exact question in the next section. If you're just interested in how to use (or how not to use) substitutes from bordeaux.guix.gnu.org, then you can just skip ahead to the last section.

Why a second source of substitutes?

This change is an important milestone, following on from the work that started on the Guix Build Coordinator towards the start of 2020.

Back in 2020, substitute availability from ci.guix.gnu.org was often an issue. There seemed to be a number of contributing factors, including some parts of the architecture. Without going too much into the details of the issues, aspects of the design of the Guix Build Coordinator were specifically meant to avoid some of them.

While there were some very positive results from testing back in 2020, it's taken so long to bring the substitute availability benefits to general users of Guix that ci.guix.gnu.org has changed and improved significantly in the meantime. This means that any benefits in terms of substitute availability are less significant now.

One clearer benefit of just having two independent sources of substitutes is redundancy. While the availability of ci.guix.gnu.org has been very high (in my opinion), having a second independent substitute server should mean that if there's a future issue with users accessing either source of substitutes, the disruption should be reduced.

I'm also excited about the new possibilities offered by having a second substitute server, particularly one using the Guix Build Coordinator to manage the builds.

Substitutes for the Hurd have already been prototyped, so I'm hopeful that bordeaux.guix.gnu.org can start using childhurd VMs to build things soon.

Looking a bit further forward, I think there's some benefits to be had in doing further work on how the nar and narinfo files used for substitutes are managed. There are some rough plans already on how to address the retention of nars, and how to look at high performance mirrors.

Having two substitute servers is one step towards stronger trust policies for substitutes (as discussed on guix-devel), where you would only use a substitute if both ci.guix.gnu.org and bordeaux.guix.gnu.org have built it exactly the same. This would help protect against the compromise of a single substitute server.

Using substitutes from bordeaux.guix.gnu.org

If you're using Guix System, and haven't altered the default substitute configuration, updating guix (via guix pull), reconfiguring using the updated guix, and then restarting the guix-daemon should enable substitutes from bordeaux.guix.gnu.org.

If the ACL is being managed manually, you might need to add the public key for bordeaux.guix.gnu.org manually as well.

When using Guix on a foreign distribution with the default substitute configuration, you'll need to run guix pull as root, then restart the guix-daemon. You'll then need to add the public key for bordeaux.guix.gnu.org to the ACL.

guix archive --authorize < /root/.config/guix/current/share/guix/bordeaux.guix.gnu.org.pub

If you want to just use ci.guix.gnu.org, or bordeaux.guix.gnu.org for that matter, you'll need to adjust the substitute urls configuration for the guix-daemon to just refer to the substitute servers you want to use.
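On Guix System, for instance, this could look roughly like the following in the operating-system declaration (a sketch based on the guix-configuration service described in the manual; adjust the URL list to taste):

```scheme
(services
 (modify-services %base-services
   (guix-service-type
    config => (guix-configuration
               (inherit config)
               ;; Use only this substitute server.
               (substitute-urls
                (list "https://bordeaux.guix.gnu.org"))))))
```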

18 June, 2021 12:00PM by Christopher Baines

June 17, 2021

gdbm @ Savannah

Version 1.20

Version 1.20 is available for download.

Changes in this version:

New bucket cache

The bucket cache support has been rewritten from scratch.  The new code provides a significant speed-up of search operations.

Change in the mmap prereading strategy

Pre-reading of memory-mapped regions, introduced in version 1.19, can be advantageous only when doing intensive look-ups on a read-only
database.  It degrades performance otherwise, especially when doing multiple inserts.  Therefore, this version introduces a new flag
to gdbm_open: GDBM_PREREAD.  When given, it enables pre-reading of memory-mapped regions. (details)
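A sketch of how the new flag might be used when opening a read-only database for heavy look-ups (assumes gdbm >= 1.20 and linking with -lgdbm; the file name is hypothetical):

```c
#include <gdbm.h>
#include <stdio.h>

int main(void)
{
    /* Pre-reading pays off for intensive look-ups on a read-only
       database, so combine GDBM_PREREAD with GDBM_READER.  */
    GDBM_FILE db = gdbm_open("lookup.db", 0,
                             GDBM_READER | GDBM_PREREAD, 0644, NULL);
    if (!db) {
        fprintf(stderr, "gdbm_open failed: %s\n",
                gdbm_strerror(gdbm_errno));
        return 1;
    }
    /* ... perform look-ups ... */
    gdbm_close(db);
    return 0;
}
```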

17 June, 2021 11:07AM by Sergey Poznyakoff

June 16, 2021

GNU Taler news

How to issue a privacy-preserving central bank digital currency

We are happy to announce the publication of our policy brief on "How to issue a privacy-preserving central bank digital currency" by The European Money and Finance Forum.

16 June, 2021 10:00PM


June 15, 2021

FSF News

June 11, 2021

FSF and GNU move official IRC channels to Libera.Chat network

11 June, 2021 08:40PM

GNU Guix

Reproducible data processing pipelines

Last week, we at Guix-HPC published videos of a workshop on reproducible software environments we organized on-line. The videos are well worth watching—especially if you’re into reproducible research, and especially if you speak French or want to practice. This post, though, is more of a meta-post: it’s about how we processed these videos. “A workshop on reproducibility ought to have a reproducible video pipeline”, we thought. So this is what we did!

From BigBlueButton to WebM

Over the last year and half, perhaps you had the “opportunity” to participate in an on-line conference, or even to organize one. If so, chances are that you already know BigBlueButton (BBB), the free software video conferencing suite initially designed for on-line teaching. In a nutshell, it allows participants to chat (audio, video, and keyboard), and speakers can share their screen or a PDF slide deck. Organizers can also record the session.

BBB then creates a link to recorded sessions with a custom JavaScript player that replays everything: typed chat, audio and video (webcams), shared screens, and slide decks. This BBB replay is a bit too rough, though, and often not the thing you'd like to publish after the conference. Instead, you'd rather do a bit of editing: adjusting the start and end time of each talk, removing live chat from what's displayed (which allows you to remove info that personally identifies participants, too!), and so forth. Turns out this kind of post-processing is a bit of work, primarily because BBB does "the right thing" of recording each stream separately, in the most appropriate form: webcams and screen shares are recorded as separate videos, chat is recorded as text with timings, slide decks are recorded as a bunch of PNGs plus timings, and then there's a bunch of XML files with metadata putting it all together.

Anyway, with a bit of searching, we quickly found the handy bbb-render tool, which can first download all these files and then assemble them using the Python interface to the GStreamer Editing Services (GES). Good thing: we don’t have to figure out all these things; we “just” have to run these two scripts in an environment with the right dependencies. And guess what: we know of a great tool to control execution environments!

A “deployment-aware Makefile”

So we have a process that takes input files—those PNGs, videos, and XML files—and produces output files—WebM video files. As developers we immediately recognize a pattern and the timeless tool to deal with it: make. The web already seems to contain countless BBB post-processing makefiles (and shell scripts, too). We were going to contribute to this when we suddenly realized that we know of another great tool to express such processes: Guix! Bonus: while a makefile would address just the tip of the iceberg—running bbb-render—Guix can also take care of the tedious task of deploying the right environment to run bbb-render in.

What we did was to write some sort of a deployment-aware makefile. It’s still a relatively unconventional way to use Guix, but one that’s very convenient. We’re talking about videos, but really, you could use the same approach for any kind of processing graph where you’d be tempted to just use make.

The end result here is a Guix file that returns a manifest—a list of videos to “build”. You can build the videos with:

guix build -m render-videos.scm

Overall, the file defines a bunch of functions (procedures in traditional Scheme parlance), each of which takes input files and produces output files. More accurately, these functions return objects that describe how to build their output from the input files—similar to how a makefile rule describes how to build its target(s) from its prerequisite(s). (The reader familiar with functional programming may recognize a monad here, and indeed, those build descriptions can be thought of as monadic values in a hypothetical “Guix build” monad; technically though, they’re regular Scheme values.)

Let’s take a guided tour of this 300-line file.

Rendering

The first step in this file describes where bbb-render can be found and how to run it to produce a GES “project” file, which we’ll use later to render the video:

(define bbb-render
  (origin
    (method git-fetch)
    (uri (git-reference (url "https://github.com/plugorgau/bbb-render")
                        (commit "a3c10518aedc1bd9e2b71a4af54903adf1d972e5")))
    (file-name "bbb-render-checkout")
    (sha256
     (base32 "1sf99xp334aa0qgp99byvh8k39kc88al8l2wy77zx7fyvknxjy98"))))

(define rendering-profile
  (profile
   (content (specifications->manifest
             '("gstreamer" "gst-editing-services" "gobject-introspection"
               "gst-plugins-base" "gst-plugins-good"
               "python-wrapper" "python-pygobject" "python-intervaltree")))))

(define* (video-ges-project bbb-data start end
                            #:key (webcam-size 25))
  "Return a GStreamer Editing Services (GES) project for the video,
starting at START seconds and ending at END seconds.  BBB-DATA is the raw
BigBlueButton directory as fetched by bbb-render's 'download.py' script.
WEBCAM-SIZE is the percentage of the screen occupied by the webcam."
  (computed-file "video.ges"
                 (with-extensions (list (specification->package "guile-gcrypt"))
                  (with-imported-modules (source-module-closure
                                          '((guix build utils)
                                            (guix profiles)))
                    #~(begin
                        (use-modules (guix build utils) (guix profiles)
                                     (guix search-paths) (ice-9 match))

                        (define search-paths
                          (profile-search-paths #+rendering-profile))

                        (for-each (match-lambda
                                    ((spec . value)
                                     (setenv
                                      (search-path-specification-variable
                                       spec)
                                      value)))
                                  search-paths)

                        (invoke "python"
                                #+(file-append bbb-render "/make-xges.py")
                                #+bbb-data #$output
                                "--start" #$(number->string start)
                                "--end" #$(number->string end)
                                "--webcam-size"
                                #$(number->string webcam-size)))))))

First it defines the source code location of bbb-render as an “origin”. Second, it defines rendering-profile as a “profile” containing all the packages needed to run bbb-render’s make-xges.py script. The specification->manifest procedure creates a manifest from a set of package specs, and likewise specification->package returns the package that matches a given spec. You can try these things at the guix repl prompt:

$ guix repl
GNU Guile 3.0.7
Copyright (C) 1995-2021 Free Software Foundation, Inc.

Guile comes with ABSOLUTELY NO WARRANTY; for details type `,show w'.
This program is free software, and you are welcome to redistribute it
under certain conditions; type `,show c' for details.

Enter `,help' for help.
scheme@(guix-user)> ,use(guix profiles)
scheme@(guix-user)> ,use(gnu)
scheme@(guix-user)> (specification->package "guile@2.0")
$1 = #<package guile@2.0.14 gnu/packages/guile.scm:139 7f416be776e0>
scheme@(guix-user)> (specifications->manifest '("guile" "gstreamer" "python"))
$2 = #<<manifest> entries: (#<<manifest-entry> name: "guile" version: "3.0.7" …> #<<manifest-entry> name: "gstreamer" version: "1.18.2" …> …)

Last, it defines video-ges-project as a function that takes the BBB raw data, a start and end time, and produces a video.ges file. There are three key elements here:

  1. computed-file is a function to produce a file, video.ges in this case, by running the code you give it as its second argument—the recipe, in makefile terms.
  2. The recipe passed to computed-file is a G-expression (or “gexp”), introduced by this fancy #~ (hash tilde) notation. G-expressions are a way to stage code, to mark it for eventual execution. Indeed, that code will only be executed if and when we run guix build (without --dry-run), and only if the result is not already in the store.
  3. The gexp refers to rendering-profile, to bbb-render, to bbb-data and so on by escaping with the #+ or #$ syntax (they’re equivalent, unless doing cross-compilation). During build, these reference items in the store, such as /gnu/store/…-bbb-render, which is itself the result of “building” the origin we’ve seen above. The #$output reference corresponds to the build result of this computed-file, the complete file name of video.ges under /gnu/store.
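As a toy illustration of the same computed-file/gexp pattern (assuming it is evaluated in a Guix context, e.g. from a manifest file or at the guix repl prompt):

```scheme
(use-modules (guix gexp))

;; The recipe below is staged code: nothing runs until `guix build`
;; determines that hello.txt is missing from the store.
(define hello-file
  (computed-file "hello.txt"
                 #~(call-with-output-file #$output
                     (lambda (port)
                       (display "hello, gexp!\n" port)))))
```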

That’s quite a lot already! Of course, this real-world example is more intimidating than the toy examples you’d find in the manual, but really, pretty much everything’s there. Let’s see in more detail at what’s inside this gexp.

The gexp first imports a bunch of helper modules with build utilities and tools to manipulate profiles and search path environment variables. The for-each call iterates over search path environment variables—PATH, PYTHONPATH, and so on—setting them so that the python command is found and so that the needed Python modules are found.

The with-imported-modules form above indicates that the (guix build utils) and (guix profiles) modules, which are part of Guix, along with their dependencies (their closure), need to be imported in the build environment. What about with-extensions? Those (guix …) modules indirectly depend on additional modules, provided by the guile-gcrypt package, hence this spec.

Next comes the ges->webm function which, as the name implies, takes a .ges file and produces a WebM video file by invoking ges-launch-1.0. The end result is a video containing the recording’s audio, the webcam and screen share (or slide deck), but not the chat.

Opening and closing

We have a WebM video, so we’re pretty much done, right? But… we’d also like to have an opening, showing the talk title and the speaker’s name, as well as a closing. How do we get that done?

Perhaps a bit of a sledgehammer, but it turns out that we chose to produce those still images with LaTeX/Beamer, from these templates.

We need again several processing steps:

  1. We first define the latex->pdf function that takes a template .tex file, a speaker name and title. It copies the template, replaces placeholders with the speaker name and title, and runs pdflatex to produce the PDF.
  2. The pdf->bitmap function takes a PDF and returns a suitably-sized JPEG.
  3. image->webm takes that JPEG and invokes ffmpeg to render it as WebM, with the right resolution, frame rate, and audio track.

With that in place, we define a sweet and small function that produces the opening WebM file for a given talk:

(define (opening title speaker)
  (image->webm
   (pdf->bitmap (latex->pdf (local-file "opening.tex") "opening.pdf"
                            #:title title #:speaker speaker)
                "opening.jpg")
   "opening.webm" #:duration 5))

We need one last function, video-with-opening/closing, that given a talk, an opening, and a closing, concatenates them by invoking ffmpeg.

Putting it all together

Now we have all the building blocks!

We use local-file to refer to the raw BBB data, taken from disk:

(define raw-bbb-data/monday
  ;; The raw BigBlueButton data as returned by './download.py URL', where
  ;; 'download.py' is part of bbb-render.
  (local-file "bbb-video-data.monday" "bbb-video-data"
              #:recursive? #t))

(define raw-bbb-data/tuesday
  (local-file "bbb-video-data.tuesday" "bbb-video-data"
              #:recursive? #t))

No, the raw data is not in the Git repository (it’s too big and contains personally-identifying information about participants), so this assumes that there’s a bbb-video-data.monday and a bbb-video-data.tuesday in the same directory as render-videos.scm.

For good measure, we define a <talk> data type:

(define-record-type <talk>
  (talk title speaker start end cam-size data)
  talk?
  (title     talk-title)
  (speaker   talk-speaker)
  (start     talk-start)           ;start time in seconds
  (end       talk-end)             ;end time
  (cam-size  talk-webcam-size)     ;percentage used for the webcam
  (data      talk-bbb-data))       ;BigBlueButton data

… such that we can easily define talks, along with talk->video, which takes a talk and returns a complete, final video:

(define (talk->video talk)
  "Given a talk, return a complete video, with opening and closing."
  (define file-name
    (string-append (canonicalize-string (talk-speaker talk))
                   ".webm"))

  (let ((raw (ges->webm (video-ges-project (talk-bbb-data talk)
                                           (talk-start talk)
                                           (talk-end talk)
                                           #:webcam-size
                                           (talk-webcam-size talk))
                        file-name))
        (opening (opening (talk-title talk) (talk-speaker talk))))
    (video-with-opening/closing file-name raw
                                opening closing.webm)))

The very last bit iterates over the talks and returns a manifest containing all the final videos. Now we can build the ready-to-be-published videos, all at once:

$ guix build -m render-videos.scm
[… time passes…]
/gnu/store/…-emmanuel-agullo.webm
/gnu/store/…-francois-rue.webm
…

Voilà!

Image of an old TV screen showing a video opening.

Why all the fuss?

OK, maybe you’re thinking “this is just another hackish script to fiddle with videos”, and that’s right! It’s also worth mentioning another approach: Racket’s video language, which is designed to manipulate video abstractions, similar to GES but with a sweet high-level functional interface.

But look, this one’s different: it’s self-contained, it’s reproducible, and it has the right abstraction level. Self-contained is a big thing; it means you can run it and it knows what software to deploy, what environment variables to set, and so on, for each step of the pipeline. Granted, it could be simplified with appropriate high-level interfaces in Guix. But remember: the alternative is a makefile (“deployment-unaware”) completed by a README file giving a vague idea of the dependencies needed. The reproducible bit is pretty nice too (especially for a workshop on reproducibility). It also means there’s caching: videos or intermediate byproducts already in the store don’t need to be recomputed. Last, we have access to a general-purpose programming language where we can build abstractions, such as the <talk> data type, that makes the whole thing more pleasant to work with and more maintainable.

Hopefully that’ll inspire you to have a reproducible video pipeline for your next on-line event, or maybe that’ll inspire you to replace your old makefile and shelly habits for data processing!

High-performance computing (HPC) people might be wondering how to go from here and build “computing-resource-aware” or “storage-resource-aware” pipelines where each computing step could be submitted to the job scheduler of an HPC cluster and use distributed file systems for intermediate results rather than /gnu/store. If you’re one of these folks, do take a look at how the Guix Workflow Language addresses these issues.

Acknowledgments

Thanks to Konrad Hinsen for valuable feedback on an earlier draft.

About GNU Guix

GNU Guix is a transactional package manager and an advanced distribution of the GNU system that respects user freedom. Guix can be used on top of any system running the Hurd or the Linux kernel, or it can be used as a standalone operating system distribution for i686, x86_64, ARMv7, AArch64 and POWER9 machines.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. When used as a standalone GNU/Linux distribution, Guix offers a declarative, stateless approach to operating system configuration management. Guix is highly customizable and hackable through Guile programming interfaces and extensions to the Scheme language.

11 June, 2021 05:00PM by Ludovic Courtès

June 09, 2021

www-zh-cn @ Savannah

Welcome our new member - jiderlesi

Dear www-zh-cn-translators:

It's a good time to welcome our new member:

User Details:
 -------------
Name:    Yuqi Feng
Login:   jiderlesi
Email:   jiderlesi@outlook.de <mailto:jiderlesi@outlook.de>

We thank jiderlesi for his/her commitment to contributing to GNU Chinese Translation.
We wish jiderlesi a wonderful and successful free journey.

09 June, 2021 07:41AM by Wensheng XIE

June 07, 2021

GNU Health

IFMSA Bangladesh joins the GNU Health Alliance

The non-profit organization with 3500+ medical students and 65 universities across the country is now part of the GNU Health Alliance of Academic and Research Institutions

It’s a great day for Bangladesh. It’s a great day for public health! Today, GNU Solidario and the International Federation of Medical Students Association, IFMSA Bangladesh, have signed an initial 5-year partnership on the grounds of the GNU Health Alliance of Academic and Research Institutions.

IFMSA Bangladesh is a not-for-profit, non-political organization that comprises 3500+ medical students from over 65 schools of Medicine across Bangladesh. They are a solid, very well organized organization, with different standing committees and support divisions.

IFMSA's vision and mission fit very well with those of GNU Solidario and its advancement of Social Medicine. IFMSA has projects on Public Health (reproductive health; personal hygiene; cardiovascular disease and cancer prevention, …), Human Rights and Peace (campaigns to end violence against women; protection of underprivileged elders and children…). I am positive the GNU Health ecosystem will help them reach their goals in each of their projects!

The GNU Health Alliance of Academic and Research Institutions is extremely happy to have IFMSA Bangladesh as a member. IFMSA Bangladesh joins now a group of outstanding researchers and institutions that have made phenomenal advancements in health informatics and contributions to public health. Some examples:

  • The National University of Entre Ríos (UNER) has been awarded the project to use GNU Health as a real-time observatory for the COVID-19 pandemic, by the Government of Argentina. In the context of the GNU Health Alliance, UNER has also developed the oral health package for GNU Health; and implemented the GNU Health Hospital Management Information System component in many public health care institutions in the country. The team from the UNER has traveled to Cameroon to implement GNU Health HMIS in several health facilities in the country, as well as training their health professionals.
  • Thymbra Healthcare (R&D Labs) has contributed the medical genetics and precision medicine functionality. Currently, Thymbra is focused on MyGNUHealth, the GNU Health Personal Health Record (PHR) for KDE Plasma mobile and desktop devices, and is working on the integration of MyGNUHealth with the PinePhone.
  • Khadas has signed an agreement to work with the GNU Health community on Artificial Intelligence and medical imaging, as well as on integrating Single Board Computers (SBCs) with GNU Health (the GNU Health in a Box project)

The fact that an association of 3500+ medical students embraces GNU Health means that all these bright future doctors from Bangladesh will also bring the ethics and philosophy of Libre Software to their communities. Public Health can not be run by private corporations, nor by proprietary software.

IFMSA has 5 years ahead to make a wonderful revolution in the public health care system. Health institutions will be able to implement state-of-the-art health informatics. Medical students can learn GNU Health inside-out, and conduct workshops across the country in the Libre digital health ecosystem. Most importantly, I am positive GNU Health will provide a wonderful opportunity to improve the health promotion and disease prevention campaigns in Bangladesh.

As the president of GNU Solidario, I am truly honored, and I look forward to collaborating with our colleagues from Bangladesh and, when the pandemic is over, to meeting them in person.

My most sincere appreciation to IFMSA Bangladesh for becoming part of the GNU Health community. To the 3500+ members, a very warm welcome!

Let’s keep building communities that foster universal health care, freedom and social medicine around the world.

For further information about the GNU Health Alliance of Academic and Research Institutions, please contact us at:

GNU Health Alliance: alliance@gnuhealth.org

Press: press@gnuhealth.org

General Information : info@gnuhealth.org

07 June, 2021 06:44PM by Luis Falcon

edma @ Savannah

GNU/EDMA 0.19.1. Alpha Release

GNU/EDMA 0.19.1 has been released as an alpha version. This version tries to fix the long-standing issue with 64-bit platforms.

In order to fix that problem this version adds a dependency on `libffi`.

This is an alpha release; it is still under test. It can be downloaded from:

http://alpha.gnu.org/gnu/edma/

Any feedback or comments are welcome.

Best Regards
David

07 June, 2021 07:14AM by David Martínez Oliveira

June 05, 2021

gsl @ Savannah

GNU Scientific Library 2.7 released

Version 2.7 of the GNU Scientific Library (GSL) is now available. GSL provides a large collection of routines for numerical computing in C.

This release introduces some new features and fixes several bugs. The full NEWS file entry is appended below.

The file details for this release are:

ftp://ftp.gnu.org/gnu/gsl/gsl-2.7.tar.gz
ftp://ftp.gnu.org/gnu/gsl/gsl-2.7.tar.gz.sig

The GSL project homepage is http://www.gnu.org/software/gsl/

GSL is free software distributed under the GNU General Public License.

Thanks to everyone who reported bugs and contributed improvements.

Patrick Alken

-------------------------------

  • What is new in gsl-2.7:
    • fixed doc bug for gsl_histogram_min_bin (lhcsky at 163.com)
    • fixed bug #60335 (spmatrix test failure, J. Lamb)
    • clarified documentation on interpolation accelerators (V. Krishnan)
    • fixed bug #45521 (erroneous GSL_ERROR_NULL in ode-initval2, thanks to M. Sitte)
    • added support for native C complex number types in gsl_complex when using a C11 compiler
    • upgraded to autoconf 2.71, automake 1.16.3, libtool 2.4.6
    • updated exponential fitting example for nonlinear least squares
    • added banded LU decomposition and solver (gsl_linalg_LU_band)
    • New functions added to the library:

    - gsl_matrix_norm1
    - gsl_spmatrix_norm1
    - gsl_matrix_complex_conjtrans_memcpy
    - gsl_linalg_QL: decomp, unpack
    - gsl_linalg_complex_QR_* (thanks to Christian Krueger)
    - gsl_vector_sum
    - gsl_matrix_scale_rows
    - gsl_matrix_scale_columns
    - gsl_multilarge_linear_matrix_ptr
    - gsl_multilarge_linear_rhs_ptr
    - gsl_spmatrix_dense_add (renamed from gsl_spmatrix_add_to_dense)
    - gsl_spmatrix_dense_sub
    - gsl_linalg_cholesky_band: solvem, svxm, scale, scale_apply
    - gsl_linalg_QR_UD: decomp, lssolve
    - gsl_linalg_QR_UU: decomp, lssolve, QTvec
    - gsl_linalg_QR_UZ: decomp
    - gsl_multifit_linear_lcurvature
    - gsl_spline2d_eval_extrap

    • bug fix in checking vector lengths in gsl_vector_memcpy (dieggsy@pm.me)
    • made gsl_sf_legendre_array_index() inline and documented gsl_sf_legendre_nlm()

05 June, 2021 03:02PM by Patrick Alken

poke @ Savannah

GNU poke 1.3 released

I am happy to announce a new release of GNU poke, version 1.3.

This is a bug fix release in the poke 1.x series.

See the file NEWS in the released tarball for a detailed list of
changes in this release.

The tarball poke-1.3.tar.gz is now available at
https://ftp.gnu.org/gnu/poke/poke-1.3.tar.gz.

  GNU poke (http://www.jemarch.net/poke) is an interactive, extensible
  editor for binary data.  Not limited to editing basic entities such
  as bits and bytes, it provides a full-fledged procedural,
  interactive programming language designed to describe data
  structures and to operate on them.

This release is the product of a month of work resulting in 41
commits, made by 4 contributors.

Thanks to the people who contributed code and/or documentation to
this release.  In no particular order, they are:

  Mohammad-Reza Nabipoor <m.nabipoor@yahoo.com>
  Egeyar Bagcioglu <egeyar@gmail.com>
  Konstantinos Chasialis <sdi1600195@di.uoa.gr> 

As always, thank you all!

And this is all for now.
Happy poking!

--
Jose E. Marchesi
Frankfurt am Main
5 June 2021

05 June, 2021 10:55AM by Jose E. Marchesi

June 04, 2021

GNU Health

Liberating our mobile computing

Last week I got the PineTime, a free/libre smartwatch. In the past months, I’ve been working on MyGNUHealth and porting it to the PinePhone.

Why do this? Because running free/libre operating systems and having control of the applications on your mobile phone and wearables is the right thing to do.

Yesterday, I told myself: “This is the day to move away from Android and take control over my phone”. And I made the switch. Now I am using a PinePhone running KDE Plasma Mobile on Manjaro. I have also switched my smartwatch to the PineTime.

The mobile phone and smartwatch were the last pieces of hardware and software to liberate. All my computing is now libre. No proprietary operating systems, no closed-source applications. Not on my laptop, not on my desktop, not on my phone.

Facing and overcoming the social pressure

The moment I ditched Android, I felt an immense sense of relief and happiness. It took me back 30 years, to the early FreeBSD and GNU/Linux days, when I was in control of every component of my computer.

We cannot put our daily activities, electronic transactions and data in the hands of corporations. Android phones shipped today are full of “bloatware” and closed-source applications. We can safely call most of those applications spyware.

The PinePhone is a libre computer, with a phone. All the applications are Libre Software. I have SSH, and most of the cool KDE Plasma applications I enjoy on the desktop are now in my pocket. Again, most importantly, I am free.

Of course, freedom comes with a price: the price of facing social and corporate pressure. For instance, somebody asked me yesterday how to deal with banking without the app. My answer was that I have never used an app for banking. Running a proprietary financial application is shooting at the heart of your privacy. If your bank does not let you do your transactions from any standard web browser, then change your bank. A quick digression… the financial system and the big technological corporations are desperately trying to get rid of good old coins and bills. This is yet another attack on our privacy. Nobody needs to know when, where and what I buy.

A brighter future depends on us

Some people might argue that this technology is not ready for prime time yet. I am OK with that: the more of us who join and provide feedback, the better the end result will be.

The Pine64 project is mainly a community-oriented ecosystem. Its hardware, operating system and applications are from the community and for the community. I am developing the MyGNUHealth Personal Health Record to run on KDE Plasma, both on the desktop and on the PinePhone and other Libre mobile devices. It is my commitment to freedom, privacy and universal healthcare to deliver health informatics on Libre, privacy-focused platforms that anyone can adopt.

MyGNUHealth Personal Health Record running on the desktop and on the PinePhone, with the PineTime smartwatch as the next companion for MyGNUHealth. All these components are privacy-focused Free/Libre software and hardware.

It is up to you whether to be a prisoner of corporations and mass surveillance systems, or to be in full control of your computing, health information and life. It takes commitment to achieve: some components might be too bleeding edge, the camera might not have the highest resolution, and you won’t have the WhatsApp “app” (removing that application would actually be a blessing). It’s a very small price to pay for freedom and privacy. It’s a very small price to pay for the advancement of our society.

InfiniTime firmware upgrade using Siglo.

For many years I have been on the lookout for a truly libre phone. After many projects that fell by the wayside, the PinePhone is the first one that has gained momentum. Please support the PinePhone project. Support KDE Plasma Mobile. Support Arch, Manjaro, openSUSE, FreeBSD or your favorite Libre operating system. Support those who make Libre convergent applications that can run on mobile devices, like Kirigami. Support InfiniTime and any free/libre firmware for smartwatches, as well as their companion apps, such as Siglo or Amazfish.

The future of Libre mobile computing is now, more than ever, in your hands.

Happy and healthy hacking.

04 June, 2021 03:45PM by Luis Falcon

May 30, 2021

gnuastro @ Savannah

Gnuastro 0.15 released

The 15th release of Gnuastro is now available. See the full announcement for more.

30 May, 2021 11:41PM by Mohammad Akhlaghi

May 29, 2021

m4 @ Savannah

May 25, 2021

FSF Events

May 22, 2021

parallel @ Savannah

GNU Parallel 20210522 ('Gaza') released

GNU Parallel 20210522 ('Gaza') has been released. It is available for download at: lbry://@GnuParallel:4

Please help spreading GNU Parallel by making a testimonial video like Juan Sierra Pons: http://www.elsotanillo.net/wp-content/uploads/GnuParallel_JuanSierraPons.mp4

It does not have to be as detailed as Juan's. It is perfectly fine if you just say your name, and what field you are using GNU Parallel for.

Quote of the month:

  If you work with lots of files at once
  Take a good look at GNU parallel
  Change your life for the better
    -- French @notareverser@twitter

New in this release:

  • --plus includes {%%regexp} and {##regexp}.
  • Bug fixes and man page updates.

News about GNU Parallel:

Get the book: GNU Parallel 2018 http://www.lulu.com/shop/ole-tange/gnu-parallel-2018/paperback/product-23558902.html

GNU Parallel - For people who live life in the parallel lane.

If you like GNU Parallel record a video testimonial: Say who you are, what you use GNU Parallel for, how it helps you, and what you like most about it. Include a command that uses GNU Parallel if you feel like it.

About GNU Parallel

GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.

For example you can run this to convert all jpeg files into png and gif files and have a progress bar:

  parallel --bar convert {1} {1.}.{2} ::: *.jpg ::: png gif

Or you can generate big, medium, and small thumbnails of all jpeg files in sub dirs:

  find . -name '*.jpg' |
    parallel convert -geometry {2} {1} {1//}/thumb{2}_{1/} :::: - ::: 50 100 200

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with:

    $ (wget -O - pi.dk/3 || lynx -source pi.dk/3 || curl pi.dk/3/ || \
       fetch -o - http://pi.dk/3 ) > install.sh
    $ sha1sum install.sh | grep c82233e7da3166308632ac8c34f850c0
    12345678 c82233e7 da316630 8632ac8c 34f850c0
    $ md5sum install.sh | grep ae3d7aac5e15cf3dfc87046cfc5918d2
    ae3d7aac 5e15cf3d fc87046c fc5918d2
    $ sha512sum install.sh | grep dfc00d823137271a6d96225cea9e89f533ff6c81f
    9c5198d5 31a3b755 b7910ece 3a42d206 c804694d fc00d823 137271a6 d96225ce
    a9e89f53 3ff6c81f f52b298b ef9fb613 2d3f9ccd 0e2c7bd3 c35978b5 79acb5ca
    $ bash install.sh

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your command line will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2018): GNU Parallel 2018, March 2018, https://doi.org/10.5281/zenodo.1146014.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/ Identi.ca/Google+/Twitter/Facebook/Linkedin/mailing lists
  • Get the merchandise https://gnuparallel.threadless.com/designs/gnu-parallel
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference

If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)

If GNU Parallel saves you money:

  • (Have your company) donate to FSF https://my.fsf.org/donate/

About GNU SQL

GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.
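
To make the DBURL idea concrete, here is the general shape of such a URL together with a hypothetical invocation (the vendor prefix, host and credentials below are made-up examples; see sql's man page for the exact grammar):

```
vendor://[user[:password]@][host][:port]/[database]

# hypothetical example: run one query against a remote MySQL database
sql mysql://scott:tiger@db.example.com:3306/sales 'SELECT 1;'
```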

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.

About GNU Niceload

GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.

22 May, 2021 08:20PM by Ole Tange

May 20, 2021

freeipmi @ Savannah

FreeIPMI 1.6.8 Released

o Fix incorrect sensor read corner case on BMCs that use non-default LUNs (LP#1926299).
o Remove hard-coded paths from system config files (i.e. mostly files in /etc).  Paths are now updated based on options given to configure.

https://ftp.gnu.org/gnu/freeipmi/freeipmi-1.6.8.tar.gz

20 May, 2021 04:28AM by Albert Chu

May 13, 2021

Andy Wingo

cross-module inlining in guile

Greetings, hackers of spaceship Earth! Today's missive is about cross-module inlining in Guile.

a bit of history

Back in the day... what am I saying? I always start these posts with loads of context. Probably you know it all already. 10 years ago, Guile's partial evaluation pass extended the macro-writer's bill of rights to Schemers of the Guile persuasion. This pass makes local function definitions free in many cases: if they should be inlined and constant-folded, you are confident that they will be. peval lets you write clear programs with well-factored code and still have good optimization.

The peval pass did have a limitation, though, which wasn't its fault. In Guile, modules have historically been a first-order concept: modules are a kind of object with a hash table inside, which you build by mutating. I speak crassly but that's how it is. In such a world, it's hard to reason about top-level bindings: what module do they belong to? Could they ever be overridden? When you have a free reference to a, and there's a top-level definition of a in the current compilation unit, is that the a that's being referenced, or could it be something else? Could the binding be mutated in the future?

During the Guile 2.0 and 2.2 cycles, we punted on this question. But for 3.0, we added the notion of declarative modules. For these modules, bindings which are defined once in a module and which are not mutated in the compilation unit are declarative bindings, which can be reasoned about lexically. We actually translate them to a form of letrec*, which then enables inlining via peval, contification, and closure optimization -- in descending order of preference.

The upshot is that with Guile 3.0, top-level bindings are no longer optimization barriers, in the case of declarative modules, which are compatible enough with historic semantics and usage that they are on by default.

However, module boundaries have still been an optimization barrier. Take (srfi srfi-1), a little utility library on lists. One definition in the library is xcons, which is cons with arguments reversed. It's literally (lambda (cdr car) (cons car cdr)). But does the compiler know that? Would it know that (car (xcons x y)) is the same as y? Until now, no, because no part of the optimizer will look into bindings from outside the compilation unit.

mr compiler, tear down this wall

But no longer! Guile can now inline across module boundaries -- in some circumstances. This feature will be part of a future Guile 3.0.8.

There are actually two parts of this. One is that the compiler can identify a set of "inlinable" values from a declarative module. An inlinable value is a small copyable expression. A copyable expression has no identity (it isn't a fresh allocation), and doesn't reference any module-private binding. Notably, lambda expressions can be copyable, depending on what they reference. The compiler then extends the module definition that's residualized in the compiled file to include a little procedure that, when passed a name, will return the Tree-IL representation of that binding. The design of that was a little tricky; we want to avoid overhead when using the module outside of the compiler, even relocations. See compute-encoding in that module for details.

With all of that, we can call ((module-inlinable-exports (resolve-interface '(srfi srfi-1))) 'xcons) and get back the Tree-IL equivalent of (lambda (cdr car) (cons car cdr)). Neat!

The other half of the facility is the actual inlining. Here we lean on peval again, causing <module-ref> forms to trigger an attempt to copy the term from the imported module to the residual expression, limited by the same effort counter as the rest of peval.

The end result is that we can be absolutely sure that constants in imported declarative modules will inline into their uses, and fairly sure that "small" procedures will inline too.

caveat: compiled imported modules

There are a few caveats about this facility, and they are sufficiently sharp that I should probably fix them some day. The first one is that for an imported module to expose inlinable definitions, the imported module needs to have been compiled already, not loaded from source. When you load a module from source using the interpreter instead of compiling it first, the pipeline is optimized for minimizing the latency between when you ask for the module and when it is available. There's no time to analyze the module to determine which exports are inlinable and so the module exposes no inlinable exports.

This caveat is mitigated by automatic compilation, enabled by default, which will compile imported modules as needed.

It could also be fixed for modules by topologically ordering the module compilation sequence; this would allow some parallelism in the build but less than before, though for module graphs with cycles (these exist!) you'd still have some weirdness.

caveat: abi fragility

Before Guile supported cross-module inlining, there was only explicit inlining across modules in Guile, facilitated by macros. If you write a module that has a define-inlinable export and you think about its ABI, then you know to consider any definition referenced by the inlinable export, and you know by experience that its code may be copied into other compilation units. Guile doesn't automatically recompile a dependent module when a macro that it uses changes, currently anyway. Admittedly this situation leans more on culture than on tools, which could be improved.

However, with automatically inlinable exports, this changes. Any definition in a module could be inlined into its uses in other modules. This may alter the ABI of a module in unexpected ways: you think that module C depends on module B, but after inlining it may depend on module A as well. Updating module B might not update the inlined copies of values from B into C -- as in the case of define-inlinable, but less lexically apparent.

At higher optimization levels, even private definitions in a module can be inlinable; these may be referenced if an exported macro from the module expands to a term that references a module-private variable, or if an inlinable exported binding references the private binding. But these optimization levels are off by default precisely because I fear the bugs.

Probably what this cries out for is some more sensible dependency tracking in build systems, but that is a topic for another day.

caveat: reproducibility

When you make a fresh checkout of Guile from git and build it, the build proceeds in the following way.

Firstly, we build libguile, the run-time library implemented in C.

Then we compile a "core" subset of Scheme files at optimization level -O1. This subset should include the evaluator, reader, macro expander, basic run-time, and compilers. (There is a bootstrap evaluator, reader, and macro expander in C, to start this process.) Say we have source files S0, S1, S2 and so on; generally speaking, these files correspond to Guile modules M0, M1, M2 etc. This first build produces compiled files C0, C1, C2, and so on. When compiling a file S2 implementing module M2, which happens to import M1 and M0, it may be that M1 and M0 are provided by compiled files C1 and C0, or possibly they are loaded from the source files S1 and S0, or C1 and S0, or S1 and C0.

The bootstrap build uses make for parallelism, with each compile process starting afresh, importing all the modules that comprise the compiler and then using them to compile the target file. As the build proceeds, more and more module imports can be "serviced" by compiled files instead of source files, making the build go faster and faster. However this introduces system-specific nondeterminism as to the set of compiled files available when compiling any other file. This strategy works because it doesn't really matter whether module M1 is provided by compiled file C1 or source file S1; the compiler and the interpreter implement the same language.

Once the compiler is compiled at optimization level -O1, Guile then uses that freshly built compiler to build everything at -O2. We do it in this way because building some files at -O1 then all files at -O2 takes less time than going straight to -O2. If this sounds weird, that's because it is.

The resulting build is reproducible... mostly. There is a bug in which some unique identifiers generated as part of the implementation of macros can be non-reproducible in some cases; disabling parallel builds seems to solve the problem. The issue is that gensym (or equivalent) might be called a different number of times depending on whether you are loading a compiled module, or whether you need to read and macro-expand it. The resulting compiled files are equivalent under alpha-renaming but not bit-identical. This is a bug to fix.

Anyway, at optimization level -O1, Guile will record inlinable definitions. At -O2, Guile will actually try to do cross-module inlining. We run into two issues when compiling Guile; one is if we are in the -O2 phase, and we compile a module M which uses module N, and N is not in the set of "core" modules. In that case depending on parallelism and compile order, N may be loaded from source, in which case it has no inlinable exports, or from a compiled file, in which case it does. This is not a great situation for the reliability of this optimization. I think probably in Guile we will switch so that all modules are compiled at -O1 before compiling at -O2.

The second issue is more subtle: inlinable bindings are recorded after optimization of the Tree-IL. This is more optimal than recording inlinable bindings before optimization, as a term that is not inlinable due to its size in its initial form may become small enough to inline after optimization. However, at -O2, optimization includes cross-module inlining! A term that is inlinable at -O1 may become not inlinable at -O2 because it gets slightly larger, or vice-versa: terms that are too large at -O1 could shrink at -O2. We don't even have a guarantee that we will reach a fixed point even if we repeatedly recompile all inputs at -O2, because we allow non-shrinking optimizations.

I think this probably calls for a topological ordering of module compilation inside Guile and perhaps in other modules. That would at least give us reproducibility, provided we avoid the feedback loop of keeping around -O2 files compiled from a previous round, even if they are "up to date" (their corresponding source file didn't change).

and for what?

People who have worked on inliners will know what I mean that a good inliner is like a combine harvester: ruthlessly efficient, a qualitative improvement compared to not having one, but there is a pointy end with lots of whirling blades and it's important to stop at the end of the row. You do develop a sense of what will and won't inline, and I think Dybvig's "Macro writer's bill of rights" encompasses this sense. Luckily people don't lose fingers or limbs to inliners, but inliners can maim expectations, and cross-module inlining more so.

Still, what it buys us is the freedom to be abstract. I can define a module like:

(define-module (elf)
  #:export (ET_NONE ET_REL ET_EXEC ET_DYN ET_CORE))

(define ET_NONE		0)		; No file type
(define ET_REL		1)		; Relocatable file
(define ET_EXEC		2)		; Executable file
(define ET_DYN		3)		; Shared object file
(define ET_CORE		4)		; Core file

And if a module uses my (elf) module and references ET_DYN, I know that the module boundary doesn't prevent the value from being inlined as a constant (and possibly unboxed, etc).

I took a look and on our usual microbenchmark suite, cross-module inlining doesn't make a difference. But that's both a historical oddity and a bug: firstly, the benchmark suite comes from an old Scheme world that didn't have modules, and so won't benefit from cross-module inlining; secondly, Scheme definitions from the "default" environment that aren't explicitly recognized as primitives aren't inlined, as the (guile) module isn't declarative. (Probably we should fix the latter at some point.)

But still, I'm really excited about this change! Guile developers use modules heavily and have been stepping around this optimization boundary for years. I count 100 direct uses of define-inlinable in Guile, a number of them inside macros, and many of these are to explicitly hack around the optimization barrier. I really look forward to seeing if we can remove some of these over time, to go back to plain old define and just trust the compiler to do what's needed.

by the numbers

I ran a quick analysis of the modules included in Guile to see what the impact was. Of the 321 files that define modules, 318 of them are declarative, and 88 contain inlinable exports (27% of total). Of the 6519 total bindings exported by declarative modules, 566 of those are inlinable (8.7%). Of the inlinable exports, 388 (69%) are functions (lambda expressions), 156 (28%) are constants, and 22 (4%) are "primitives" referenced by value and not by name, meaning definitions like (define head car) (instead of re-exporting car as head).

On the use side, 90 declarative modules import inlinable bindings (29%), resulting in about 1178 total attempts to copy inlinable bindings. 902 of those attempts are to copy a lambda expression in operator position, which means that peval will attempt to inline their code. 46 of these attempts fail, perhaps due to size or effort constraints. 191 other attempts end up inlining constant values. 20 inlining attempts fail, perhaps because a lambda is used for a value. Somehow, 19 copied inlinable values get elided because they are evaluated only for their side effects, probably to clean up let-bound values that become unused due to copy propagation.

All in all, an interesting endeavor, and one to improve on going forward. Thanks for reading, and catch you next time!

13 May, 2021 11:25AM by Andy Wingo

May 11, 2021

GNU Guix

GNU Guix 1.3.0 released

Image of a Guix test pilot.

We are pleased to announce the release of GNU Guix version 1.3.0!

The release comes with ISO-9660 installation images, a virtual machine image, and with tarballs to install the package manager on top of your GNU/Linux distro, either from source or from binaries. Guix users can update by running guix pull.

It’s been almost 6 months since the last release, during which 212 people contributed code and packages, and a number of people contributed to other important tasks—code review, system administration, translation, web site updates, Outreachy mentoring, and more.

There have been more than 8,300 commits in that time frame, which we’ll humbly try to summarize in these release notes.

User experience

A distinguishing Guix feature is its support for declarative deployment: instead of running a bunch of guix install and guix remove commands, you run guix package --manifest=manifest.scm, where manifest.scm lists the software you want to install in a snippet that looks like this:

;; This is 'manifest.scm'.
(specifications->manifest
  (list "emacs" "guile" "gcc-toolchain"))

Doing that installs exactly the packages listed. You can have that file under version control and share it with others, which is convenient. Until now, one would have to write the manifest by hand—not insurmountable, but still a barrier to someone willing to migrate to the declarative model.

The new guix package --export-manifest command (and its companion --export-channels option) produces a manifest based on the contents of an existing profile. That makes it easy to transition from the classic “imperative” model, where you run guix install as needed, to the more formal declarative model. This was long awaited!

Users who like to always run the latest and greatest pieces of the free software commons will love the new --with-latest package transformation option. Using the same code as guix refresh, this option looks for the latest upstream release of a package, fetches it, authenticates it, and builds it. This is useful in cases where the new version is not yet packaged in Guix. For example, the command below, if run today, will (attempt to) install QEMU 6.0.0:

$ guix install qemu --with-latest=qemu 
The following package will be upgraded:
   qemu 5.2.0 → 6.0.0

Starting download of /tmp/guix-file.eHO6MU
From https://download.qemu.org//qemu-6.0.0.tar.bz2...
 …0.tar.bz2  123.3MiB                                                                                                                      28.2MiB/s 00:04 [##################] 100.0%

Starting download of /tmp/guix-file.9NRlvT
From https://download.qemu.org//qemu-6.0.0.tar.bz2.sig...
 …tar.bz2.sig  310B                                                                                                                         1.2MiB/s 00:00 [##################] 100.0%
gpgv: Signature made Thu 29 Apr 2021 09:28:25 PM CEST
gpgv:                using RSA key CEACC9E15534EBABB82D3FA03353C9CEF108B584
gpgv: Good signature from "Michael Roth <michael.roth@amd.com>"
gpgv:                 aka "Michael Roth <mdroth@utexas.edu>"
gpgv:                 aka "Michael Roth <flukshun@gmail.com>"
The following derivation will be built:
   /gnu/store/ypz433vzsbg3vjp5374fr9lhsm7jjxa4-qemu-6.0.0.drv

…

There’s one obvious caveat: this is not guaranteed to work. If the new version has a different build system, or if it requires extra dependencies compared to the version currently packaged, the build process will fail. Yet, it provides users with additional flexibility which can be convenient at times. For developers, it’s also a quick way to check whether a given package successfully builds against the latest version of one of its dependencies.

Several changes were made here and there to improve user experience. As an example, a new --verbosity level was added. By default (--verbosity=1), fewer details about downloads get printed, which matches the expectation of most users.

Another handy improvement is suggestions when making typos:

$ guix package --export-manifests
guix package: error: export-manifests: unrecognized option
hint: Did you mean `export-manifest'?

$ guix remve vim
guix: remve: command not found
hint: Did you mean `remove'?

Try `guix --help' for more information.

People setting up build offloading over SSH will enjoy the simplified process, where the guile executable no longer needs to be in PATH, with appropriate GUILE_LOAD_PATH settings, on target machines. Instead, offloading now channels all its operations through guix repl.

The Guix reference manual is fully translated into French, German, and Spanish, with preliminary translations in Russian, Chinese, and other languages. Guix itself is fully translated into French, German, and Slovak, and partially translated into almost twenty other languages. Translations are now handled on Weblate, and you can help!

Developer tools

We have good news for packagers! First, guix import comes with a new Go recursive importer, that can create package definitions or templates thereof for whole sets of Go packages. The guix import crate command, for Rust packages, now honors “semantic versioning” when used in recursive mode.

The guix refresh command now includes new “updaters”: sourceforge, for code hosted on SourceForge, and generic-html which, as the name implies, is a generic updater that works by scanning package home pages. This greatly improves guix refresh coverage.

Packagers and developers may also like the new --with-patch package transformation option, which provides a way to build a bunch of packages with a patch applied to one or several of them.

Building on the Guix System image API introduced in v1.2.0, the guix system vm-image and guix system disk-image commands are superseded by a unified guix system image command. For example,

guix system vm-image --save-provenance config.scm

becomes

guix system image -t qcow2 --save-provenance config.scm

while

guix system disk-image -t iso9660 gnu/system/install.scm

becomes

guix system image -t iso9660 gnu/system/install.scm

This brings performance benefits; while a virtual machine used to be involved in the production of the image artifacts, the low-level bits are now handled by the dedicated genimage tool. Another benefit is that the qcow2 format is now compressed, which removes the need to manually compress the images by post-processing them with xz or another compressor. To learn more about the guix system image command, you can refer to its documentation.

Last but not least, the introduction of the GUIX_EXTENSIONS_PATH Guix search path should make it possible for Guix extensions, such as the Guix Workflow Language, to have their Guile modules automatically discovered, simplifying their deployments.
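As a rough sketch of how this works in practice (the installation prefix below is hypothetical; consult the extension's own documentation for the actual location), an extension such as the Guix Workflow Language can be made discoverable like this:

```shell
# Hypothetical prefix where the extension's Guile modules were installed.
export GUIX_EXTENSIONS_PATH="$HOME/.guix-profile/share/guix/extensions"

# Guix now discovers the extension and exposes its subcommand --
# for the Guix Workflow Language, `guix workflow`.
guix workflow --help
```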

Performance

One thing you will hopefully notice is that substitute installation (downloading pre-built binaries) became faster, as we explained before. This is in part due to the opportunistic use of zstd compression, which has a high decompression throughput. The daemon and guix publish support zstd as an additional compression method, next to gzip and lzip.

Another change that can help fetch substitutes more quickly is local substitute server discovery. The new --discover option of guix-daemon instructs it to discover and use substitute servers on the local-area network (LAN) advertised with the mDNS/DNS-SD protocols, using Avahi. Similarly, guix publish has a new --advertise option to advertise itself on the LAN.

On Guix System, you can run herd discover guix-daemon on to turn discovery on temporarily, or you can enable it in your system configuration. Opportunistic use of neighboring substitute servers is entirely safe, thanks to reproducible builds.

In other news, guix system init has been optimized, which contributes to making Guix System installation faster.

On some machines with limited resources, building the Guix modules is an expensive operation. A new procedure, channel-with-substitutes-available from the (guix ci) module, can now be used to pull Guix to the latest commit which has already been built by the build farm. Refer to the documentation for an example.

POWER9 support, packages, services, and more!

POWER9 support is now available as a technology preview, thanks to the tireless work of those who helped port Guix to that platform. There aren't many POWER9 binary substitutes available yet, due to the limited POWER9 capacity of our build farm, but if you are not afraid of building many packages from source, we'd be thrilled to hear back from your experience!

2,000 packages were added, for a total of more than 17K packages; 3,100 were updated. The distribution comes with GNU libc 2.31, GCC 10.3, Xfce 4.16.0, Linux-libre 5.11.15, LibreOffice 6.4.7.2, and Emacs 27.2, to name a few. Among the many packaging changes, one that stands out is the new OCaml bootstrap: the OCaml package is now built entirely from source via camlboot. Package updates also include Cuirass 1.0, the service that powers our build farm.

The services catalog has also seen new additions such as wireguard, syncthing, ipfs, a simplified and more convenient service for Cuirass, and more! You can search for services via the guix system search facility.

The NEWS file lists additional noteworthy changes and bug fixes you may be interested in.

Try it!

The installation script has been improved to allow for more automation. For example, if you are in a hurry, you can run it with:

# yes | ./install.sh

to proceed to install the Guix binary on your system without any prompt!

You may also be interested in trying the Guix System demonstration VM image which now supports clipboard integration with the host and dynamic resizing thanks to the SPICE protocol, which we hope will improve the user experience.

To review all the installation options at your disposal, consult the download page and don't hesitate to get in touch with us.

Enjoy!

Credits

Luis Felipe (illustration)

About GNU Guix

GNU Guix is a transactional package manager and an advanced distribution of the GNU system that respects user freedom. Guix can be used on top of any system running the Hurd or the Linux kernel, or it can be used as a standalone operating system distribution for i686, x86_64, ARMv7, AArch64 and POWER9 machines.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. When used as a standalone GNU/Linux distribution, Guix offers a declarative, stateless approach to operating system configuration management. Guix is highly customizable and hackable through Guile programming interfaces and extensions to the Scheme language.

11 May, 2021 10:00PM by Ludovic Courtès, Maxim Cournoyer

m4 @ Savannah

GNU M4 1.4.18d released [beta]

More details at https://lists.gnu.org/archive/html/m4-discuss/2021-05/msg00002.html
This beta release gives translators time to prepare .po files before the stable release of 1.4.19 later this month.

11 May, 2021 05:15PM by Eric Blake

May 10, 2021

GNU Guile

GNU Guile 3.0.7 released

We are humbled to announce the release of GNU Guile 3.0.7. This release fixes a number of bugs, a couple of which were introduced in the previous release. For full details, see the NEWS entry. See the release note for signatures, download links, and all the rest. Happy hacking!

10 May, 2021 09:00AM by Andy Wingo (guile-devel@gnu.org)

May 09, 2021

GNUnet News

DISSENS: Decentralized Identities for Self-sovereign End-users

DISSENS: Decentralized Identities for Self-sovereign End-users (NGI TRUST)

Since mid-2020, a consortium of Taler Systems S.A., the Bern University of Applied Sciences and Fraunhofer AISEC has been working on bringing privacy-friendly payments using GNU Taler and self-sovereign identity using GNUnet's re:claimID together in an e-commerce framework.

Content

Account registration prior to receiving services online is the standard process for commercial offerings on the Internet, which depend on two cornerstones of the Web: payment processing and digital identities. The use of third-party identity provider services (IdPs) is practical as it delegates the task of verifying and storing personal information. The use of payment processors is convenient for the customer as it provides one-click payments. However, the quasi-oligopoly of service providers in those areas includes Google and Facebook for identities and PayPal or Stripe for payment processing. Those corporations are not only based in privacy-unfriendly jurisdictions, but also exploit private data for profit.

DISSENS makes the case that what is urgently needed are fundamentally different, user-centric and privacy-friendly alternatives to the above. Self-sovereign identity (SSI) management is the way to replace IdPs with a user-centric, decentralized mechanism where data and access control are fully under the control of the data subject. In combination with a privacy-friendly payment system, DISSENS aims to achieve the same one-click user experience that is currently achieved by privacy-invasive account-based Web shops, but without users having to set up accounts.

To achieve this, DISSENS integrates re:claimID with the GNU Taler payment system in a pilot in order to demonstrate the practical feasibility and benefits of privacy enhancing technologies for users and commercial service providers. DISSENS also implements a reference scenario which includes credentials issued by the partners Fraunhofer AISEC and BFH for employees and students, respectively. Users are able to access and use a pilot service developed by Taler Systems S.A. while being able to claim specific discounts for students and researchers.

This approach offers significant benefits over existing solutions built using other SSI systems such as Sovrin or serto (formerly uPort):

No gatekeepers; No vendor lock-in:

The approach is completely open to issuers and does not impose any registration restrictions (such as registration fees) in order to define domain-specific credentials. Further, the system does not impose a consortium-based governance model, which tends to eventually be driven by commercial interests rather than consumer interests. The design enables all participants in the ecosystem to participate without prior onboarding while offering them full transparency and control over their personal data and the processes involved.

Support for non-interactive business processes:

At the same time, unlike the SSI systems cited above, re:claimID offers a way to access user information without online interaction with the user. Offline access of shared identity data is a crucial requirement in almost any business process as such processes often occur after direct interaction with the user. For example, customer information such as billing addresses are required in — possibly recurring — back office billing processes which occur well after interaction with a customer.

Scalability and sustainability:

Finally, neither re:claimID as the SSI system nor Taler suffers from the usual predicament blockchain-based systems find themselves in: neither system requires a decentralized, public ledger. This eliminates the need for consensus mechanisms, which do not scale and are ecologically unsustainable. In fact, DISSENS employs decentralization only where it provides the most value and uses more efficient technology stacks where needed: re:claimID builds on top of the GNU Name System, which makes use of a DHT, an efficient (O(log n)) peer-to-peer data structure. For payments, GNU Taler uses centralized infrastructure operated by audited and regulated exchange providers and facilitates account-less end-to-end interactions between customers and services where all parties have O(1) transaction costs.
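To give intuition for why O(log n) matters here, the following toy sketch (not the GNS or DHT wire protocol; all numbers are illustrative) counts the hops of a lookup in which each routing step halves the set of candidate nodes, as structured peer-to-peer overlays do. Even with a million nodes, a lookup touches only about twenty of them:

```python
import math

def lookup_hops(n_nodes, key):
    """Count hops of a halving search for `key` over node IDs 0..n_nodes-1.

    Each iteration models one routing hop that halves the candidate
    interval, which is what bounds structured-DHT lookups to O(log n)."""
    lo, hi, hops = 0, n_nodes - 1, 0
    while lo < hi:
        mid = (lo + hi) // 2
        hops += 1
        if key <= mid:
            hi = mid
        else:
            lo = mid + 1
    return hops

n = 1_000_000
worst = max(lookup_hops(n, k) for k in (0, n // 3, n - 1))
print(worst, "hops for", n, "nodes; log2(n) ~", round(math.log2(n)))
# → 20 hops for 1000000 nodes; log2(n) ~ 20
```

By contrast, an unstructured flood would contact O(n) nodes, which is why the DHT underpinning GNS scales while global-consensus ledgers struggle.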

The result of DISSENS will provide businesses and credential issuers with ready-to-use and standards-compliant templates to build privacy-friendly services in the Web. The aim of the DISSENS project was to design a technology stack which combines privacy-friendly online payments with self-sovereign personal data management. The result enables users to be in complete control over their digital identity and personal information while at the same time being able to selectively share information necessary to use commercial services. The pilot demonstrates a sustainable, user-centric, standard-compliant and accessible use case for public service employees and students in the domain of commercial food delivery. It serves as an easy-to-adapt template for the integration of other scenarios and use cases.

Future work

GNUnet is working on maturing the underlying components to the point that Taler+re:claimID can be recommended to operators to enable account-less shopping with or without verified credentials. This will also require the continuation of our work on the low-level transport rewrite, as it is a core component of GNS, which in turn is what makes re:claimID spin.

Links

This work is generously funded by the EC's Next Generation Internet (NGI) initiative as part of their NGI TRUST programme.

09 May, 2021 10:00PM


May 08, 2021

m4 @ Savannah

GNU M4 1.4.18b released [beta]

More details at https://lists.gnu.org/archive/html/m4-discuss/2021-05/msg00001.html
This beta release gives translators time to prepare .po files before the stable release of 1.4.19 later this month.

08 May, 2021 10:56AM by Eric Blake

May 06, 2021

health @ Savannah

Beta testers for MyGNUHealth Personal Health Record

Dear community

I am very happy to announce that the documentation for MyGNUHealth beta is now online.

We would love beta testers both on the desktop (KDE Plasma) and on the PinePhone; if everything goes well, we will shortly be able to publish the first stable release.

We would also welcome **translators** for the documentation and the application itself. We are working with the KDE community in these areas.

For those of you interested, I suggest the first step is to read the current MyGNUHealth documentation and take it from there :)

https://www.gnuhealth.org/docs

Axel and other friends from the KDE community have been early testers since alpha stage. Now we need to go further and check it with a larger audience.

If you want to become a MyGNUHealth beta tester, send us a mail to info@gnuhealth.org .

All the best
Luis

06 May, 2021 09:38PM by Luis Falcon

remotecontrol @ Savannah

Nest Thermostat bug disables Google Home app control

https://9to5google.com/2021/05/04/nest-thermostat-migration-google-home-app-bug/

Nest Thermostat bug disables Google Home app control with endless account migration loop.

06 May, 2021 10:36AM by Stephen H. Dawson DSL

May 05, 2021

poke @ Savannah

Pokology: a community-driven website about GNU poke

We are happy to announce the availability of a new website, https://pokology.org.

Pokology is a community-driven live repository of knowledge relative to GNU poke, maintained by the poke developers, users and friends.

The site is similar to a wiki, collectively maintained as a git repository.  The contents are written in comfortable org-mode files which get automatically published to HTML.

Happy poking!

05 May, 2021 12:30AM by Jose E. Marchesi

April 28, 2021

GNU Guile

GNU Guile 3.0.6 released

We are pleased to announce the release of GNU Guile 3.0.6. This release improves source-location information for compiled code, removes the dependency on libltdl, fixes some important bugs, adds an optional bundled "mini-gmp" library, as well as the usual set of minor optimizations and bug fixes. For full details, see the NEWS entry. See the release note for signatures, download links, and all the rest. Happy hacking!

28 April, 2021 07:20AM by Andy Wingo (guile-devel@gnu.org)

April 27, 2021

dico @ Savannah

Version 2.11

Version 2.11 of GNU dico is available for download.  This version fixes several bugs and inconsistencies in the gcide module and the gcider utility.  Also, this version drops the support for Python 2.7.

27 April, 2021 07:40AM by Sergey Poznyakoff

April 24, 2021

Christopher Allan Webber

Beyond the shouting match: what is a blockchain, really?

If there's one thing that's true about the word "blockchain", it's that these days people have strong opinions about it. Open your social media feed and you'll see people either heaping praises on blockchains, calling them the saviors of humanity, or condemning them as destroying and burning down the planet and making the rich richer and the poor poorer and generally all the other kinds of fights that people like to have about capitalism (also a quasi-vague word occupying some hotly contested mental real estate).

There are good reasons to hold opinions about various aspects of what are called "blockchains", and I too have some pretty strong opinions I'll be getting into in a followup article. The followup article will be about "cryptocurrencies", which many people also seem to think of as synonymous with "blockchains", but this isn't particularly true either; we'll deal with that one then.

In the meanwhile, some of the fighting on the internet is kind of confusing, but even more importantly, kind of confused. Some of it might be what I call "sportsballing": for whatever reason, for or against blockchains has become part of your local sportsball team, and we've all got to be team players or we're gonna let the local team down already, right? And the thing about sportsballing is that it's kind of arbitrary and it kind of isn't, because you might pick a sportsball team because you did all your research or you might have picked it because that just happens to be the team in your area or the team your friends like, but god almighty once you've picked your sportsball team let's actually not talk against it because that might be giving in to the other side. But sportsballing kind of isn't arbitrary either because it tends to be initially connected to real communities of real human beings and there's usually a deeper cultural web than appears at surface level, so when you're poking at it, it appears surface-level shallow but there are some real intricacies beneath the surface. (But anyway, go sportsball team.)

But I digress. There are important issues to discuss, yet people aren't really discussing them, partly because people mean different things. "Blockchain" is a strange term that encompasses a wide idea space, and what people consider or assume essential to it vary just as widely, and thus when two people are arguing they might not even be arguing about the same thing. So let's get to unpacking.

"Blockchain" as handwaving towards decentralized networks in general

Years ago I was at a conference about decentralized networked technology, and I was having a conversation with someone I had just met. This person was telling me how excited they were about blockchains... finally we have decentralized network designs, and so this seems really useful for society!

I paused for a moment and said yes, blockchains can be useful for some things, though they tend to have significant costs or at least tradeoffs. It's good that we also have other decentralized network technology; for example, the ActivityPub standard I was involved in had no blockchains but did rely on the much older "classic actor model."

"Oh," the other person said, "I didn't know there were other kinds of decentralized network designs. I thought that 'blockchain' just meant 'decentralized network technology'."

It was as if a light had turned on and illuminated the room for me. Oh! This explained so many conversations I had been having over the years. Of course... for many people, blockchains like Bitcoin were the first exposure they had ever had (aside from email, which maybe they never gave much thought to as being decentralized) to something that involved a decentralized protocol. So for many people, "blockchain" and "decentralized technology" are synonyms, if not in technical design, then in terms of their understanding of the space.

Mark S. Miller, who was standing next to me, smiled and gave a very interesting followup: "There is only one case in which you need a blockchain, and that is in a decentralized system which needs to converge on a single order of events, such as a public ledger dealing with the double spending problem."
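Miller's point can be made concrete with a toy ledger (names and balances below are invented for illustration). Once all parties agree on a single order of transactions, a double spend is trivially detectable: whichever spend of the same funds the agreed order places second simply fails.

```python
def apply_ledger(transactions):
    """Apply (sender, receiver, amount) transfers in one agreed-upon order;
    reject any transfer the sender cannot fund (e.g. a double spend)."""
    balances = {"alice": 10, "bob": 0, "carol": 0}  # invented starting state
    rejected = []
    for sender, receiver, amount in transactions:
        if balances.get(sender, 0) >= amount:
            balances[sender] -= amount
            balances[receiver] = balances.get(receiver, 0) + amount
        else:
            rejected.append((sender, receiver, amount))
    return balances, rejected

# Alice tries to spend the same 10 coins twice; given a single agreed
# order of events, the second transfer is rejected.
balances, rejected = apply_ledger([
    ("alice", "bob", 10),
    ("alice", "carol", 10),  # double spend: no funds left
])
```

The hard part in a decentralized system is not this bookkeeping but getting everyone to agree on the order in the first place, which is exactly the case Miller says calls for a blockchain.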

Two revelations at once. It was a good conversation... it was a good start. But I think there's more.

Blockchains are the "cloud" of merkle trees

As time has gone on, the discourse over blockchains has gotten more dramatic. This is partly because what a "blockchain" is hasn't been well defined.

All terminology exists on an ever-present battle between fuzziness and crispness, with some terms being much clearer than others. The term "boolean" has a fairly crisp definition in computer science, but if I ask you to show me your "stove", the device you show me today may be incomprehensible to someone's definition a few centuries ago, particularly in that today it might not involve fire. Trying to define a term in terms of its functionality can also cause confusion: if I asked you to show me a stove, and you showed me a computer processor or a car engine, I might be fairly confused, even though technically people enjoy showing off that they can cook eggs on both of these devices when they get hot enough. (See also: Identity is a Katamari, language is a Katamari explosion.)

Still, some terms are fuzzier than others, and as far as terms go, "blockchain" is quite fuzzy. Hence my joke: "Blockchains are the 'cloud' of merkle trees."

This ~joke tends to get a lot of laughs out of a particular kind of audience, and confused looks from others, so let me explain. The one thing everyone seems to agree on is that it's a "chain of blocks", but all that really seems to mean is that it's a merkle tree... really, just an immutable datastructure where one node points at the parent node which points at the parent node all the way up. The joke then is not that this merkle tree runs on a cloud, but that "cloud computing" means approximately nothing: it's marketing speak for some vague handwavey set of "other peoples' computers are doing computation somewhere, possibly on your behalf sometimes." Therefore, "cloud of merkle trees" refers to the vagueness of the situation. (As everyone knows, jokes are funnier when fully explained, so I'll turn on my "STUDIO LAUGHTER" sign here.)

So, a blockchain is a chain of blocks, ie a merkle tree, and I mean, technically speaking, that means that Git is a blockchain (especially if the commits are signed), but when you see someone arguing on the internet about whether or not blockchains are "good" or "bad", they probably weren't thinking about git, which aside from having a high barrier of entry in its interface and some concerns about the hashing algorithm used, isn't really something likely to drag you into an internet flamewar.
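To make the "chain of blocks" point concrete, here is a minimal sketch (not any particular blockchain's format; the field names are invented): each block stores the hash of its parent, so tampering with any earlier block invalidates every hash after it, just as rewriting an old Git commit changes every descendant commit ID.

```python
import hashlib
import json

def block(parent_hash, payload):
    """A block is just some payload plus the hash of its parent."""
    body = {"parent": parent_hash, "payload": payload}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def valid(chain):
    """Re-hash every block and check each parent pointer."""
    for i, b in enumerate(chain):
        body = {"parent": b["parent"], "payload": b["payload"]}
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != b["hash"]:
            return False
        if i > 0 and b["parent"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = block(None, "genesis")
chain = [genesis, block(genesis["hash"], "second block")]
assert valid(chain)

# Tampering with the first block's payload breaks verification.
chain[0]["payload"] = "rewritten history"
assert not valid(chain)
```

Note that this immutable datastructure is the whole of what everyone agrees a "blockchain" has; consensus, mining, and currency are all separate, contested additions.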

"Blockchain" is to "Bitcoin" what "Roguelike" is to "Rogue"

These days it's common to see people either heaping praises on blockchains or criticizing them, and those people tend to be shouting past one another. I'll save unpacking that for another post. In the meanwhile though, it's worth noting that people might not be talking about the same things.

What isn't in doubt is whether or not Bitcoin is a blockchain... trying to understand and then explore the problem space around Bitcoin is what created the term "blockchain". It's a bit like the video game genre of roguelikes, which started with the game Rogue, particularly explored and expanded upon in NetHack, and then suddenly exploding into the indie game scene as a "genre" of its own. Except the genre has become fuzzier and fuzzier as people have explored the surrounding space. What is essential? Is a grid based layout essential? Is a non-euclidean grid acceptable? Do you have to provide an ascii or ansi art interface so people can play in their terminals? Dare we allow unicode characters? What if we throw out terminals altogether and just play on a grid of 2d pixelart? What about 3d art? What about permadeath? What about the fantasy theme? What about random level generation? What are the key features of a roguelike?

Well now we're at the point where I pick up a game like Blazing Beaks and it calls itself a "roguelite", which I guess is embracing the point that terminology has gotten extremely fuzzy... this game feels more like Robotron than Rogue.

So... if "blockchain" is to Bitcoin what "roguelike" is to Rogue, then what's essential to a blockchain? Does the blockchain have to be applied to a financial instrument, or can it be used to store updateable information about eg identity? Is global consensus required? Or what about a "trusted quorum" of nodes, such as in Hyperledger? Is "mining" some kind of asset a key part of the system? Is proof of work acceptable, or is proof of stake okay? What about proof of space, proof of space-time, proof of pudding?

On top of all this, some of the terms around blockchains have been absorbed as if into them. For instance, I think to many people, "smart contract" means something like "code which runs on a blockchain" thanks to Ethereum's major adoption of the term, but the E programming language described "smart contracts" as the "likely killer app of distributed capabilities" all the way back in 1999, and was borrowing the term from Nick Szabo, but really the same folks working on E had described many of those same ideas in the Agoric Papers back in 1988. Bitcoin wasn't even a thing at all until at least 2008, so depending on how you look at it, "smart contracts" precede "blockchains" by one or two decades. So "blockchain" has somehow even rolled up terms outside of its space as if within it. (By the way, I don't think anyone has given a good and crisp definition for "smart contract" either despite some of these people trying to give me one, so let me give you one that I think is better and embraces its fuzziness: "Smart contracts allow you to do the kinds of things you might do with legal contracts, but relying on networked computation instead of a traditional state-based legal system." It's too bad more people also don't know about the huge role that Mark Miller's "split contracts" idea plays into this space because that's what finally makes the idea make sense... but that's a conversation for another time.) (EDIT: Well, after I wrote this, Kate Sills lent me her definition, which I think is the best one: "Smart contracts are credible commitments using technology, and outside a state-provided legal system." I like it!)

So anyway, the point of this whole section is to say that kind of like roguelike, people are thinking of different things as essential to blockchains. Everyone roughly agrees on the jumping-off point of ideas but since not everyone agrees from there, it's good to check in when we're having the conversation. Wait, you do/don't like this game because it's a roguelike? Maybe we should check in on what features you mean. Likewise for blockchains. Because if you're blaming blockchains for burning down the planet, more than likely you're not condemning signed git repositories (or at least, if you're condemning them, you're probably doing so about it from an aspect that isn't the fundamental datastructure... probably).

This is an "easier said than done" kind of thing though, because of course, I'm kind of getting into some "in the weeds" level of details here... but it's the "in the weeds" where all the substance of the disagreements really are. The person you are talking with might not actually even know or consider the same aspects to be essential that you consider essential though, so taking some time to ask which things we mean can help us lead to a more productive conversation sooner.

"Blockchain" as an identity signal

First, a digression. One thing that's kind of curious about the term "virtue signal" is that in general it tends to be used as a kind of virtue signal. It's kind of like the word hipster in the previous decade, which weirdly seemed to be obsessively and pejoratively used by people who resembled hipsters more than anyone else. Hence I used to make a joke called "hipster recursion", which is that since hipsters seem more obsessed with pejorative labeling of hipsterism than anyone else, there's no way to call someone a "hipster" without yourself taking on hipster-like traits, and so inevitably even this conversation is N-levels deep into hipster recursion for some numerical value of N.

"Virtue signaling" appears similar, but even more ironically so (which is a pretty amazing feat given how much of hipsterdom seems to surround a kind of inauthentic irony). When I hear someone say "virtue signaling" with a kind of sneer, part of that seems to be acknowledging that other people are sending signals merely to impress others that they are part of the same group, but it seems as if it's being raised as in a you-know-and-I-know-that-by-me-acknowledging-this-I'm-above-virtue-signaling kind of way. Except that by any possible definition of virtue signaling, the above appears to be a kind of virtue signaling, so now we're into virtue signaling recursion.

Well, one way to claw our way out of the rabbithole of all this is to drop the pejorative aspect of it and just acknowledge that signaling is something that everyone does. Hence me saying "identity signaling" here. You can't really escape identity signaling, or even sportsballing, but you can acknowledge that it's a thing that we all do, and there's a reason for it: people only have so much time to find out information about each other, so they're searching for clues that they might align and that, if they introduce you to their peer group, that you might align with them as well, without access to a god-like view of the universe where they know exactly what you think and exactly what kinds of things you've done and exactly what way you'll behave in the future or whether or not you share the same values. (After all, what else is virtue ethics but an ethical framework that takes this in its most condensed form as its foundation?) But it's true that at its worst, this seems to result in shallow, quick, judgmental behavior, usually based on stereotypes of the other side... which can be unfortunate or unfair to whomever is being talked about. But also on the flip side, people also do identity signal to each other because they want to create a sense of community and bonding. That's what a lot of culture is. It's worth acknowledging then that this occurs, recognizing its use and limitations, without pretending that we are above it.

So wow, that's quite a major digression, so now let's get back to "identity signaling". There is definitely a lot of identity signaling that tends to happen around the word "blockchain", for or against. Around the critiques of the worst of this, I tend to agree: I find much of the machismo hyper-white-male-privilege that surrounds some of the "blockchain" space uncomfortable or cringey.

But I also have some close friends who are not male and/or are people of color and those ones tend to actually suffer the worst of it from these communities internally, but also seem to find things of value in them, but particularly seem to feel squeezed externally when the field is reduced to these kinds of (anti?-)patterns. There's something sad about that, where I see on the one hand friends complaining about blockchain from the outside on behalf of people who on the inside seem to be both struggling internally but then kind of crushed by being lumped into the same identified problems externally. This is hardly a unique problem but it's worth highlighting for a moment I think.

But anyway, I've taken a bunch of time on this, more than I care to, maybe because (irony again?) I feel that too much of public conversation is also hyperfocusing on this aspect... whether there's a subculture around blockchain, whether or not that subculture is good or bad, etc. There's a lot worthwhile in unpacking this discourse-wise, but some of the criticisms of blockchains as a technology (to the extent it even is coherently one) seem to get lumped up into all of this. It's good to provide thoughtful cultural critique, particularly one which encourages healthy social change. And we can't escape identity signaling. But as someone who's trying to figure out what properties of networked systems we do and don't want, I feel like I'm trying to navigate the machine and for whatever reason, my foot keeps getting caught in the gears here. Well, maybe that itself is pointing to some architectural mistakes, but socially architectural ones. But it's useful to also be able to draw boundaries around it so that we know where this part of the conversation begins and ends.

"Blockchain" as "decentralized centralization" (or "decentralized convergence")

One of the weird things about people having the idea of "blockchains" as being synonymous with "decentralization" is that it's kind of both very true and very untrue, depending on what abstraction layer you're looking at.

For a moment, I'm going to frame this in harsh terms: blockchains are decentralized centralization.

What? How dare I! You'll notice that this section is in harsh contrast to the "blockchain as handwaving towards decentralized networks in general" section... well, I am acknowledging the decentralized aspect of it, but the weird thing about a blockchain is that it's a decentralized set of nodes converging on (creating a centrality of!) a single abstract machine.

Contrast with classic actor model systems like CapTP in Spritely Goblins, or as less good examples (because they aren't quite as behavior-oriented as they are correspondence-oriented, usually) ActivityPub or SMTP (ie, email). All of these systems involve decentralized computation and collaboration stemming from sending messages to actors (aka "distributed objects"). With CapTP this is especially clear and extreme: computations happen in parallel across many collaborating machines (and even better, many collaborating objects on many collaborating machines), and the behavior of other machines and their objects is often even opaque to you. (CapTP survives this in a beautiful way, being able to do well on anonymous, peer to peer, "mutually suspicious" networks. But maybe read my rambling thoughts about CapTP elsewhere.)

While to some degree there are some very clever tricks in the world of cryptography where you may be able to get back some of the opacity, this tends to be very expensive, adding an expensive component to the already inescapable additional expenses of a blockchain. A multi-party blockchain with some kind of consensus will always, by definition be slower than a single machine operating alone.

If you are irritated by this framing: good. It's probably good to be irritated by it at least once, if you can recognize the portion of truth in it. But maybe that needs some unpacking to get there. It might be better to say "blockchains are decentralized convergence", but I have some other phrasing that might be helpful.

"Blockchain" as "a single machine that many people run"

There's value in having a single abstract machine that many people run. The most famous source of value is in the "double spending problem". How do we make sure that when someone has money, they don't spend that money twice?

Traditional accounting solves this with a linear, sequential ledger, and it turns out that the right solution boils down to the same thing in computers. Emphasis on sequential: in order to make sure money balances out right, we really do have to be able to order things.

Here's the thing though: the double spending problem was in a sense solved in terms of single-computers a long time ago in the object capability security community. Capability-based Financial Instruments was written about a decade before blockchains even existed and showed off how to make a "mint" (kind of like a fiat-currency bank) that can be implemented in about 25 lines of code in the right architecture (I've ported it to Goblins, for instance) and yet has both distributed accounts and is robust against corruption on errors.

However, this seems to be running on a "single-computer based machine", and again operates like a fiat currency. Anyone can create their own fiat currency like this, and they are cheap, cheap, cheap (and fast!) to make. But it does rely on sequentiality to some degree to operate correctly (avoiding a class of attacks called "re-entrancy attacks").
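To make the shape of that "mint" concrete, here is a rough Python transliteration of the object-capability mint pattern. The class and method names are my own illustration, not the original E code or the Goblins port, but the structure is the same: holding the mint is the authority to issue money, and purses can only transfer between each other within the same mint.

```python
# Illustrative Python sketch of the ocap "mint" pattern; names are
# hypothetical, not the original E or Goblins code.

class Purse:
    def __init__(self, mint, balance):
        self._mint = mint
        self.balance = balance

    def deposit(self, amount, source):
        # Only a purse from the same mint can pay into this one.
        if source._mint is not self._mint:
            raise ValueError("purse from a different mint")
        if not (0 <= amount <= source.balance):
            raise ValueError("insufficient funds")
        # Sequential execution makes this atomic: there is no
        # re-entrancy window between the debit and the credit.
        source.balance -= amount
        self.balance += amount

class Mint:
    def make_purse(self, initial_balance):
        # Holding the mint is the authority to create money from thin air.
        return Purse(self, initial_balance)

mint = Mint()
alice = mint.make_purse(100)
bob = mint.make_purse(0)
bob.deposit(30, alice)   # alice: 70, bob: 30
```

Note how the sequentiality the article mentions is doing real work here: because the debit and credit happen in one uninterrupted turn, the re-entrancy class of attacks has no opening.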

But this "single-computer based machine" might bother you for a couple reasons:

  • We might be afraid the server might crash and service will be interrupted, or worse yet, we will no longer be able to access our accounts.

  • Or, even if we could trade these on an open market, and maybe diversify our portfolio, maybe we don't want to have to trust a single operator or even some appointed team of operators... maybe we have a lot of money in one of these systems and we want to be sure that it won't suddenly vanish due to corruption.

Well, if our code operates deterministically, then what if from the same initial conditions (or saved snapshot of the system) we replay all input messages to the machine? Functional programmers know: we'll end up with the same result.

So okay, we might want to be sure this doesn't accidentally get corrupted, maybe for backup reasons. So maybe we submit the input messages to two computers, and then if one crashes, we just continue on with the second one until the other comes up, and then we can restore the first one from the progress the second machine made while the first one was down.

Oh hey, this is already technically a blockchain. Except our trust model is that we implicitly trust both machines.
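The replay idea can be sketched with a toy deterministic machine. The state and transition function here are illustrative stand-ins (not Goblins); the point is only that snapshot plus ordered input log fully determines the state, so a second replica fed the same inputs converges on the same result.

```python
# Toy deterministic "abstract machine": the same snapshot plus the same
# ordered input messages always yields the same state.

class Machine:
    def __init__(self, state=0):
        self.state = state
        self.log = []

    def handle(self, msg):
        self.log.append(msg)
        self.state += msg          # any deterministic transition works

    @classmethod
    def restore(cls, snapshot, log):
        # Rebuild a crashed replica by replaying the surviving log.
        m = cls(snapshot)
        for msg in log:
            m.handle(msg)
        return m

primary = Machine()
backup = Machine()
for msg in [5, -2, 10]:            # submit every input to both replicas
    primary.handle(msg)
    backup.handle(msg)
assert primary.state == backup.state == 13

# If the primary crashes, recover it from the backup's progress:
recovered = Machine.restore(0, backup.log)
assert recovered.state == primary.state
```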

Hm. Maybe we're now worried that we might have top-down government pressure to coerce some behavior on one of our nodes, or maybe we're worried that someone at a local datacenter is going to flip some bits to make themselves rich. So we actually want to spread this abstract machine out over three countries. So okay, we do that, and now we set a rule agreeing on what all the series of input messages are... if two of three nodes agree, that's good enough. Oh hey look, we've just invented the "small-quorum-style" blockchain/ledger!
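The "two of three nodes agree" rule is, at its core, just a counting rule over proposed inputs. A real consensus algorithm handles far more (equivocating nodes, liveness, view changes), but a minimal sketch of the agreement step might look like this:

```python
# Minimal sketch of a 2-of-3 quorum rule over the next input message.
from collections import Counter

def agree(proposals, quorum=2):
    """Return the message proposed by at least `quorum` nodes, else None."""
    msg, count = Counter(proposals).most_common(1)[0]
    return msg if count >= quorum else None

# Two of our three nodes saw the same next message: accepted.
assert agree(["pay bob 5", "pay bob 5", "pay bob 9"]) == "pay bob 5"
# No quorum: the machine does not advance.
assert agree(["a", "b", "c"]) is None
```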

(And yes, you can wire up Goblins to do just this; a hint as to how is seen in the Terminal Phase time travel demo. Actually, let's come back to that later.)

Well, okay. This is probably good enough for a private financial asset, but what about if we want to make something more... global? Where nobody is in charge!

Well, we could do that too. Here's what we do.

  • First, we need to prevent a "swarming attack" (okay, this is generally called a "sybil attack" in the literature, but for a multitude of reasons I won't get into, I don't like that term). If a global set of peers are running this single abstract machine, we need to make sure there aren't invocations filling up the system with garbage, since we all basically have to keep that information around. Well... this is exactly where those proof-of-foo systems come in the first time; in fact Proof of Work's origin is in something called Hashcash which was designed to add "friction" to disincentivize spam for email-like systems. If we don't do something friction-oriented in this category, our ledger is going to be too easily filled with garbage too fast. We also need to agree on what the order of messages is, so we can use this mechanism in conjunction with a consensus algorithm.

  • When are new units of currency issued? Well, in our original mint example, the person who set up the mint was the one given the authority to make new money out of thin air (and they can hand out attenuated versions of that authority to others as they see fit). But what if instead of handing this capability out to individuals we handed it out to anyone who can meet an abstract requirement? For instance, in zcap-ld an invoker can be any kind of entity which is specified with linked data proofs, meaning those entities can be something other than a single key... for instance, what if we delegated to an abstract invoker that was specified as being "whoever can solve the state of the machine's current proof-of-work puzzle"? Oh my gosh! We just took our 25-line mint and extended it for mining-style blockchains. And the fundamental design still applies!

With these two adjustments, we've created a "public blockchain" akin to bitcoin. And we don't need to use proof-of-work for either technically... we could swap in different mechanisms of friction / qualification.
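The Hashcash-style friction mentioned above fits in a few lines. The exact encoding below (SHA-256 over "message:nonce", counting leading hex zeros) is an illustrative choice rather than Hashcash's actual stamp format, but it shows the asymmetry that matters: producing a proof is expensive, checking one is cheap.

```python
# Hashcash-style proof of work: find a nonce so the digest of
# (message + nonce) has some number of leading zeros. Illustrative
# encoding, not Hashcash's real stamp format.
import hashlib

def mine(message, difficulty=3):
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{message}:{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest   # expensive to find...
        nonce += 1

def verify(message, nonce, difficulty=3):
    digest = hashlib.sha256(f"{message}:{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)   # ...cheap to check
```

Raising `difficulty` makes the search exponentially harder, which is exactly the tunable "friction" knob; swapping this function out for some other qualification mechanism is how you'd get a different proof-of-foo.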

If the set of inputs are stored as a merkle tree, then all of the system types we just looked at are technically blockchains:

  • A second machine as failover in a trusted environment

  • Three semi-trusted machines with small-scale private consensus

  • A public blockchain without global trust, with swarming-attack resistance and an interesting abstract capability accessible to anyone who can meet the abstract requirement (in this case, to issue some new currency).

The difference for choosing any of the above is really a question of: "what is your trust/failover requirements?"
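As a sketch of "inputs stored as a merkle tree" in its simplest linear form, here is a hash chain over the ordered inputs. (A real merkle tree additionally gives efficient partial proofs; the chain shows the core property that each entry commits to all history before it. The encoding is my own illustration.)

```python
# Hash-chain the ordered inputs: each entry commits to everything
# before it, so tampering with history is detectable.
import hashlib, json

def chain(inputs):
    entries, prev = [], "0" * 64        # genesis "previous hash"
    for msg in inputs:
        entry = {"prev": prev, "msg": msg}
        prev = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entries.append((entry, prev))
    return entries

log = chain(["deposit 10", "pay bob 3"])
# Each entry's hash must match the next entry's "prev" field.
assert log[1][0]["prev"] == log[0][1]
```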

Blockchains as time travel plus convergent inputs

If this doesn't sound believable to you, that you could create something like a "public blockchain" on top of something like Goblins so easily, consider how we might extend time travel in Terminal Phase to add multiplayer. As a reminder, here's an image:

Time travel in Spritely Goblins shown through Terminal Phase

Now, a secret thing about Terminal Phase is that the gameplay is deterministic (the random starfield in the background is not, but the gameplay is) and runs on a fixed frame-rate. This means that given the same set of keyboard inputs, the game will always play the same, every time.

Okay, well let's say we wanted to hand some way for someone to replay our last game. Chess games can be fully replayed with a very condensed syntax, meaning that merely by handing someone a short list of codes, they can precisely replay the same game, every time, deterministically.

Well okay, as a first attempt at thinking this through, what if for some game of Terminal Phase I played we wrote down each keystroke I entered on my keyboard, on every tick of the game? Terminal Phase runs at 30 ticks per second. So okay, if you replay these, each one at 30 ticks per second, then yeah, you'd end up with the same gameplay every time.

It would be simple enough for me to encode these as a linked list (cons, cons, cons!) and hand them to you. You could descend all the way to the root of the list and start playing them back up (ie, play the list in reverse order) and you'd get the same result as I did. I could even stream new events to you by giving you new items to tack onto the front of the list, and you could "watch" a game I was playing live.
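That cons-list replay might look like this toy sketch (not Terminal Phase's actual representation; the input names are made up):

```python
# Keystroke log as a cons list (newest event at the front),
# replayed oldest-first to reproduce the game.

def cons(head, tail):
    return (head, tail)

def replay(log):
    """Descend to the root of the list, then play inputs back in order."""
    events = []
    while log is not None:
        head, log = log
        events.append(head)
    return list(reversed(events))

# Streaming new events means consing onto the front of the list:
log = None
for tick_input in ["left", "fire", "up"]:
    log = cons(tick_input, log)

assert replay(log) == ["left", "fire", "up"]
```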

So now imagine that you and I want to play Terminal Phase together now, over the network. Let's imagine there are two ships, and for simplicity, we're playing cooperatively. (The same ideas can be extended to competitive, but for narrating how real-time games work it's easier to start with a cooperative assumption.)

We could start out by wiring things up on the network so that I am allowed to press certain keys for player 1 and you are allowed to press certain keys for player 2. (Now it's worth noting that a better way to do this doesn't involve keys on the keyboard but capability references, and really that's how we'd do things if we were to bring this multiplayer idea live, but I'm trying to provide a metaphor that's easy to think about without introducing the complicated sounding kinds of terms like "c-lists" and "vat turns" that we ocap people seem to like.) So, as a first attempt, maybe if we were playing on a local area network or something, we could synchronize at every game tick: I share my input with you and you share yours, and then and only then do both of our systems actually input them into that game-tick's inputs. We'll have achieved a kind of "convergence" as to the current game state on every tick. (EDIT: I wrote "a kind of consensus" instead of "a kind of convergence" originally, and that was an error, because it misleads on what consensus algorithms tend to do.)

Except this wouldn't work very well if you and I were living far away from each other and playing over the internet... the lag time for doing this for every game tick might slow the system to a crawl... our computers wouldn't get each others' inputs as fast as the game was moving along, and would have to pause until we received each others' moves.

So okay, here's what we'll do. Remember the time-travel GUI above? As you can see, we're effectively restoring from an old snapshot. Oh! So okay. We could save a snapshot of the game every second, and then both get each other our inputs to each other as fast as we can, but knowing it'll lag. So, without having seen your inputs yet, I could move my ship up and to the right and fire (and send that I did that to you). My game would be in a "dirty state"... I haven't actually seen what you've done yet. Now suddenly I get the last set of moves you did over the network... in the last five frames, you move down and to the left and fire. Now we've got each others' inputs... what our systems can do is secretly time travel behind the scenes to the last snapshot, then fast forward, replaying both of our inputs on each tick up until the latest state where we've both seen each others' moves (but we wouldn't show the fast forward process, we'd just show the result with the fast forward having been applied). This can happen fast enough that I might see your ship jump forward a little, and maybe your bullet will kill the enemy instead of mine and the scores shift so that you actually got some points that for a moment I thought I had, but this can all happen in realtime and we don't need to slow down the game at all to do it.

Again, all the above can be done, but with actual wiring of capabilities instead of the keystroke metaphor... and actually, the same set of ideas can be done with any kind of system, not just a game.

And oh hey, technically, technically, technically if we both hashed each of our previous messages in the linked list and signed each one, then this would qualify as a merkle tree and then this would also qualify as a blockchain... but wait, this doesn't have anything to do with cryptocurrencies! So is it really a blockchain?

"Blockchain" as synonym for "cryptocurrency" but this is wrong and don't do this one

By now you've probably gotten the sense that I really was annoyed with the first section of "blockchain" as a synonym for "decentralization" (especially because blockchains are decentralized centralization/convergence) and that is completely true. But even more annoying to me is the synonym of "blockchain" with "cryptocurrency".

"Cryptocurrency" means "cryptographically based currency" and it is NOT synonymous with blockchains. Digicash precedes blockchains by a dramatic amount, but it is a cryptocurrency. The "simple mint" type system also precedes blockchains and while it can be run on a blockchain, it can also run on a solo computer/machine.

But as we saw, we could perceive multiplayer Terminal Phase as technically, technically a blockchain, even though it has nothing to do with currencies whatsoever.

So again a blockchain is just a single, abstract, sequential machine, run by multiple parties. That's it. It's more general than cryptocurrencies, and it's not exclusive to implementing them either. One is a kind of programming-plus-cryptography-use-case (cryptocurrencies), the other one is a kind of abstracted machine (blockchains).

So please. They are frequently combined, but don't treat them as the same thing.

Blockchains as single abstract machines on a wider network

One of my favorite talks is Mark Miller's Programming Secure Smart Contracts talk. Admittedly, I like it partly because it well illustrates some of the low-level problems I've been working on, and that might not be as useful to everyone else. But it has this lovely diagram in it:

Machines / Vats / Ocaps / Erights layers of abstractions

This is better understood by watching the video, but the abstraction layers described here are basically as follows:

  • "Machines" are the lowest layer of abstraction on the network, but there are a variety of kinds of machines. Public blockchains are one, quorum blockchains are another, solo computer machines yet another (and the simplest case, too). What's interesting then is that we can see public chains and quorums abstractly demonstrated as machines in and of themselves... even though they are run by many parties.

  • Vats are the next layer of abstraction, these are basically the "communicating event loops"... actors/objects live inside them, and more or less these things run sequentially.

  • Replace "JS ocaps" with "language ocaps" and you can see actors/objects in both Javascript and Spritely living here.

  • Finally, at the top are "erights" and "smart contracts", which feed into each other... "erights" are "exclusive electronic rights", and "smart contracts" are generally patterns of cooperation involving achieving mutual goals despite suspicion, generally involving the trading of these erights things (but not necessarily).

Okay, well cool! This finally explains the worldview I see blockchains on. And we can see a few curious things:

  • The "public chain" and "quorum" kinds of machines still boil down to a single, sequential abstract machine.

  • Object connections exist between the machines... ocap security. No matter whether it's run by a single computer or multiple.

  • Public blockchains, quorum blockchains, solo-computer machines all talk to each other, and communicate between object references on each other.

Blockchains are not magical things. They are abstracted machines on the network. Some of them have special rules that let whoever can prove they qualify for them access some well-known capabilities, but really they're just abstracted machines.

And here's an observation: you aren't ever going to move all computation to a single blockchain. Agoric's CEO, Dean Tribble, explained beautifully why on a recent podcast:

One of the problems with Ethereum is it is as tightly coupled as possible. The entire world is a single sequence of actions that runs on a computer with about the power of a cell phone. Now, that's obviously hugely valuable to be able to do commerce in a high-integrity fashion, even if you can only share a cell phone's worth of compute power with the entire rest of the world. But that's clearly gonna hit a brick wall. And we've done lots of large-scale distributed systems whether payments or cyberspace or coordination, and the fundamental model that covers all of those is islands of sequential programming in a sea of asynchronous communication. That is what the internet is about, that's what the interchain is about, that's what physics requires you to do if you want a system to scale.

Put this way, it should be obvious: are we going to replace the entire internet with something that has the power of a cell phone? To ask the question is to know the answer: of course not. Even when we do admit blockchain'y systems into our system, we're going to have to have many of them communicating with each other.

Blockchains are just machines that many people/agents run. That's it.

Some of these are encoded with some nice default programming to do some useful things, but all of them can be done in non-blockchain systems because communicating islands of sequential processes is the generalization. You might still want a blockchain, ie you might want multiple parties running one of those machines as a shared abstract machine, but how you configure that blockchain from there might depend on your trust and integrity requirements.

What do I think of blockchains?

I've covered a wide variety of perspectives of "what is a blockchain" in this article.

On the worse end of things are the parts involving hand-wavey confusion about decentralization, mistaken ideas of them being tied to cryptocurrencies, marketing hype, cultural assumptions, and some real, but not intrinsic, cultural problems.

In the middle, I am particularly keen on highlighting the similarity between the term "blockchain" and the term "roguelike", how both of them might boil down to some key ideas or not, but more importantly they're both a rough family of ideas that diverge from one highly influential source (Bitcoin and Rogue respectively). This is also the source of much of the "shouting past each other", because many people are referring to different components that they view as essential or inessential. Many of these pieces may be useful or harmful in isolation, in small amounts, in large amounts, but much of the arguing (and posturing) involves highlighting different things.

On the better end of things is a revelation, that blockchains are just another way of abstracting a computer so that multiple parties can run it. The particular decisions and use cases layered on top of this fundamental design are highly variant.

Having made the waters clear again, we could muddy them. A friend once tried to convince me that all computers are technically blockchains, that blockchains are the generalization of computing, and the case of a solo computer is merely one where a blockchain is run only by one party and no transaction history or old state is kept around. Maybe, but I don't think this is very useful. You can go in either direction, and I think the time travel and Terminal Phase section maybe makes that clear to me, but I'm not so sure how it lands with others I suppose. But a term tends to be useful in terms of what it introduces, and calling everything a blockchain seems to make the term even less useful than it already is. While a blockchain could be one or more parties running a sequential machine as the generalization, I suggest we stick to two or more.

Blockchains are not magic pixie dust, putting something on a blockchain does not make it work better or more decentralized... indeed, what a blockchain really does is converge (or re-centralize) a machine from a decentralized set of computers. And it always does so with some cost, some set of overhead... but what those costs and overhead are varies depending on what the configuration decisions are. Those decisions should always stem from some careful thinking about what those trust and integrity needs are... one of the more frustrating things about blockchains being a technology of great hype and low understanding is that such care is much less common than it should be.

Having a blockchain, as a convergent machine, can be useful. But how that abstracted convergent machine is arranged can diverge dramatically; if we aren't talking about the same choices, we might shout past each other. Still, it may be an unfair ask to request that those without a deep technical background go into technical specifics, and I recognize that, and in a sense there can be some amount to be gained from speaking towards broad-sweeping, fuzzy sets and the patterns they seem to be carrying. A gut-sense assertion from a set of loosely observed behaviors can be a useful starting point. But to get at the root of what those gut senses actually map to, we will have to be specific, and we should encourage that specificity where we can (without being rude about it) and help others see those components as well.

But ultimately, as convergent machines, blockchains will not operate alone. I think the system that will hook them all together should be CapTP. But no matter the underlying protocol abstraction, blockchains are just abstract machines on the network.

Having finally disentangled what blockchains are, I think soon I would like to move onto what cryptocurrencies are. Knowing that they are not necessarily tied to blockchains opens us up to considering an ecosystem, even an interoperable and exchangeable one, of varying cryptographically based financial instruments, and the different roles and uses they might play. But that is another post of its own, for whenever I can get to it, I suppose.

ADDENDUM: After writing this post, I had several conversations with several blockchain-oriented people. Each of them roughly seemed to agree that Bitcoin was roughly the prototypical "blockchain", but each of them also seemed to highlight different things they thought were "essential" to what a "blockchain" is: some kinds of consensus algorithms being better than others, what kinds of social arrangements are enabled, whether transferrable assets are encoded on the chain, etc. To start with, I feel like this does confirm some of the premise of this post, that Bitcoin is the starting point, but like Rogue and "roguelikes", "blockchains" are an exploration space stemming from a particular influential technical piece.

However my friend Kate Sills (who also gave me a much better definition for "smart contracts", added above) highlighted something that I hadn't talked about much in my article so far, which I do agree deserves expansion. Kate said: "I do think there is something huge missing from your piece. Bitcoin is amazing because it aligns incentives among actors who otherwise have no goals in common."

I agree that there's something important here, and this definition of "blockchain" maybe does explain why while from a computer science perspective, perhaps signed git trees do resemble blockchains, they don't seem to fit within the realm of what most people are thinking about... while git might be a tool used by several people with aligned incentives, it is not generally itself the layer of incentive-alignment.

24 April, 2021 08:30PM by Chris Lemmer-Webber

April 23, 2021

GNU Guix

Building derivations, how complicated can it be?

Derivations are key to Guix: they're the low-level build instructions used for things like packages, disk images, and most things that end up in the store.

Around a year ago, the established approach to build derivations across multiple machines was daemon offloading. This offloading approach is mostly static in terms of the machines involved and uses SSH to communicate and move things between machines.

The Guix Build Coordinator project set out to provide an alternative approach, both to explore what's possible, but also to provide a usable tool to address two specific use cases.

The first use case was building things (mostly packages) for the purpose of providing substitutes. At the time, the daemon offloading approach was used on ci.guix.gnu.org, which is the default source of substitutes. This approach was not scaling particularly well, so there was room for improvement.

The second use case was more aspirational: supporting various quality assurance tasks, like building packages changed by patches, regularly testing fixed-output derivations, or building the same derivations across different machines to test for hardware-specific differences.

While both these tasks have quite a lot in common, there are still quite a lot of differences, which in part led to a lot of flexibility in the design of the Guix Build Coordinator.

Architecture

Like offloading, the Guix Build Coordinator works in a centralised manner. There's one coordinator process which manages state, and agent processes run on the machines that perform builds. The coordinator plans which builds to allocate to which agents, and agents ask the coordinator for builds, which they then attempt.

Once agents complete a build, they send the log file and any outputs back to the coordinator. This is shown in the diagram below. Note that the Guix Build Coordinator doesn't actually take care of building the individual derivations; that's still left to the guix-daemons on the machines involved.

Guix Build Coordinator sequence diagram

The builds to perform are worked through methodically: a build won't start unless all of its inputs are available. This behaviour replicates what guix-daemon does, but across all the machines involved.

If an agent can't set up to perform a build, it reports this back to the coordinator, which may then perform other builds to produce the required inputs.

Currently, HTTP is used for agents to communicate with the coordinator, although additional approaches could be implemented in the future. Similarly, SQLite is used as the database; supporting PostgreSQL has been planned from the start, but that's yet to be implemented.

Comparison to offloading

There's lots more to the internal workings of the Guix Build Coordinator, but how does this compare to the daemon offloading builds?

Starting from the outside and working in, the API for the Guix Build Coordinator is all asynchronous. When you submit a build, you get an ID back, which you can use to track the progress of that build. This is in contrast to the way the daemon is used, where you keep a connection established while builds are progressing.

Offloading is tightly integrated with the daemon, which can be useful, as offloading transparently applies to anything that would otherwise be built locally. However, this can also be a limitation, since the daemon is one of the harder components to modify.

With offloading, guix-daemon reaches out to another machine, copies over all the inputs and the derivation, and then starts the build. Rather than doing this, the Guix Build Coordinator agent pulls in the inputs and derivation using substitutes.

This pull approach has a few advantages. First, it removes the need to keep a large store on the machine running the coordinator; that large store requirement became a scalability problem for the offloading approach. Another advantage is that it makes deploying agents easier, as they don't need to be reachable from the coordinator over the network, which can be an issue with NATs or virtual machines.

When offloading builds, the outputs are copied back to the store on build success. Instead, the Guix Build Coordinator agent sends the outputs back as nar files, which the coordinator then processes to make substitutes available. This helps distribute the work of generating the nar files, which can be quite expensive.

These differences may be subtle, but the architecture makes a big difference: it's much easier to store and serve nars at scale when this doesn't require a large store managed by a single guix-daemon.

There's also quite a lot in common with the daemon offloading approach. Builds are still methodically performed across multiple machines, and load is taken into account when starting new builds.

A basic example

Getting the Guix Build Coordinator up and running does require some work. The following commands should get the coordinator and an agent running on a single machine. First, start the coordinator.

  guix-build-coordinator

Then, in another terminal, interact with the running coordinator to create an agent in its database.

  guix-build-coordinator agent new

Note the UUID of the generated agent.

  guix-build-coordinator agent <AGENT ID> password new

Note the generated password for the agent. With the UUID and password, the agent can then be started.

  guix-build-coordinator-agent --uuid=<AGENT ID> --password=<AGENT PASSWORD>

At this point, both processes should be running and the guix-build-coordinator should be logging requests from the agent.

In a third terminal, also at the root of the repository, generate a derivation, and then instruct the coordinator to have it built.

  guix build --no-grafts -d hello

Note the derivation that is output.

  guix-build-coordinator build <DERIVATION FILE>

This will return a randomly generated UUID that represents the build.

While running from the command line is useful for development and experimentation, there are services in Guix for running the coordinator and agents.

Applications of the Guix Build Coordinator

While I think the Guix Build Coordinator is a better choice than daemon offloading in some circumstances, it doesn't currently replace it.

At a high level, the Guix Build Coordinator is useful where there's a need to build derivations and do something with the outputs or build results, more than just having the outputs in the local store. This could be serving substitutes, or testing derivations for example.

At small scales, the additional complexity of the coordinator is probably unnecessary, but when it's useful to use multiple machines, either for the resources they provide or for a more diverse range of hardware, it makes much more sense to use the Guix Build Coordinator to coordinate what's going on.

Looking forward

The Guix Build Coordinator isn't just an alternative to daemon offloading; it's more of a general toolkit for coordinating the building of derivations.

Much of the functionality in the Guix Build Coordinator happens in hooks: bits of code that run when certain events happen, like a build being submitted or a build finishing successfully. These hooks are responsible for doing things like processing nars to be served as substitutes, or submitting retries for a failed build.

The hook that automatically retries building particular derivations is particularly useful when providing substitutes, where you want to lessen the impact of random failures, or for quality assurance purposes, where you want more data to better identify problems.

There are also more features such as build and agent tags and build priorities that can be really useful in some scenarios.

My hope is that the Guix Build Coordinator will enable a better substitute experience for Guix users, as well as enabling a whole new range of quality assurance tasks. It's already possible to see some impact from the Guix Build Coordinator, but there's still much work to do!

Additional material

About GNU Guix

GNU Guix is a transactional package manager and an advanced distribution of the GNU system that respects user freedom. Guix can be used on top of any system running the Hurd or the Linux kernel, or it can be used as a standalone operating system distribution for i686, x86_64, ARMv7, and AArch64 machines.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. When used as a standalone GNU/Linux distribution, Guix offers a declarative, stateless approach to operating system configuration management. Guix is highly customizable and hackable through Guile programming interfaces and extensions to the Scheme language.

23 April, 2021 08:00PM by Christopher Baines

April 22, 2021

parallel @ Savannah

GNU Parallel 20210422 ('Ever Given') released [stable]

GNU Parallel 20210422 ('Ever Given') [stable] has been released. It is available for download at: lbry://@GnuParallel:4

No new functionality was introduced so this is a good candidate for a stable release.

An easy way to support GNU Parallel is to tip on LBRY.

Please help spreading GNU Parallel by making a testimonial video like Juan Sierra Pons: http://www.elsotanillo.net/wp-content/uploads/GnuParallel_JuanSierraPons.mp4

It does not have to be as detailed as Juan's. It is perfectly fine if you just say your name, and what field you are using GNU Parallel for.

Quote of the month:

  GNU Parallel is your friend.
  Can shorten that time by X cores.
    -- iRODS @irods@twitter

New in this release:

  • Bug fixes and man page updates.

News about GNU Parallel:

Get the book: GNU Parallel 2018 http://www.lulu.com/shop/ole-tange/gnu-parallel-2018/paperback/product-23558902.html

GNU Parallel - For people who live life in the parallel lane.

If you like GNU Parallel record a video testimonial: Say who you are, what you use GNU Parallel for, how it helps you, and what you like most about it. Include a command that uses GNU Parallel if you feel like it.

About GNU Parallel

GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.
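As a small illustration of the loop replacement and output ordering described above (the .log filenames are placeholders), a sequential shell loop and its GNU Parallel equivalent; -k is the option that keeps output in input order:

```shell
# A sequential loop over files...
for f in *.log; do gzip -9 "$f"; done

# ...becomes a single invocation, running one job per CPU core by default:
parallel gzip -9 ::: *.log

# For commands that print output, -k (--keep-order) makes the output
# order match the input order, as in a sequential run:
parallel -k echo "processed {}" ::: a b c
```

Here {} is GNU Parallel's replacement string for the current input item.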

For example you can run this to convert all jpeg files into png and gif files and have a progress bar:

  parallel --bar convert {1} {1.}.{2} ::: *.jpg ::: png gif

Or you can generate big, medium, and small thumbnails of all jpeg files in sub dirs:

  find . -name '*.jpg' |
    parallel convert -geometry {2} {1} {1//}/thumb{2}_{1/} :::: - ::: 50 100 200

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with:

    $ (wget -O - pi.dk/3 || lynx -source pi.dk/3 || curl pi.dk/3/ || \
       fetch -o - http://pi.dk/3 ) > install.sh
    $ sha1sum install.sh | grep c82233e7da3166308632ac8c34f850c0
    12345678 c82233e7 da316630 8632ac8c 34f850c0
    $ md5sum install.sh | grep ae3d7aac5e15cf3dfc87046cfc5918d2
    ae3d7aac 5e15cf3d fc87046c fc5918d2
    $ sha512sum install.sh | grep dfc00d823137271a6d96225cea9e89f533ff6c81f
    9c5198d5 31a3b755 b7910ece 3a42d206 c804694d fc00d823 137271a6 d96225ce
    a9e89f53 3ff6c81f f52b298b ef9fb613 2d3f9ccd 0e2c7bd3 c35978b5 79acb5ca
    $ bash install.sh

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your command line will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2018): GNU Parallel 2018, March 2018, https://doi.org/10.5281/zenodo.1146014.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/Identi.ca/Google+/Twitter/Facebook/LinkedIn/mailing lists
  • Get the merchandise https://gnuparallel.threadless.com/designs/gnu-parallel
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference

If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)

If GNU Parallel saves you money:

  • (Have your company) donate to FSF https://my.fsf.org/donate/

About GNU SQL

GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.
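For example (the user, password, host, database, and table names below are all hypothetical placeholders), a DBURL bundles the login details, and any commands given after it run non-interactively:

```shell
# Run a single query; the login details are encoded in the DBURL:
sql mysql://user:password@dbhost/inventory "SELECT COUNT(*) FROM items;"

# With no command given, you get that database's interactive shell:
sql mysql://user:password@dbhost/inventory
```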

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.

About GNU Niceload

GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.
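As a sketch of both ways of using niceload (the command, paths, and PID below are placeholders; -l sets the load-average limit):

```shell
# Start a heavy job, suspending it whenever the load average exceeds 3:
niceload -l 3 tar czf /tmp/backup.tar.gz /srv/data

# Or throttle an already-running process by its PID:
niceload -l 3 -p 12345
```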

22 April, 2021 04:24PM by Ole Tange