Planet GNU

Aggregation of development blogs from the GNU Project

July 17, 2018

Riccardo Mottola

Graphos GNUstep and Tablet interface

I have acquired a ThinkPad X41 Tablet and worked quite a bit on it, making it usable and then installing Linux and of course GNUstep. The original battery was dead, and the compatible replacement I got is bigger; it works very well, but makes the device unbalanced.

Anyway, my interest was in how usable GNUstep applications would be, especially Graphos, its (and my) drawing application.

Using the interface in Tablet mode is different: the stylus is very precise and allows clicking by pointing the tip, and a second button is also available. However, in contrast to mouse use, the keyboard is folded away, so no keyboard modifiers are possible. Furthermore, GNUstep has no on-screen keyboard, so typing is not possible.

The classic OpenStep-style menus work exceedingly well with a touch interface: the menus are easy to click and, torn off, they remain as palettes, making toolbars unnecessary.
This is a good start!

However, Graphos was not easy to use: with no keyboard, several components required typing (e.g. entering the line width), quite apart from typing text itself.
I worked on the interface so that all these elements also have a clickable counterpart (e.g. stepper arrows). Duplicating certain items available in context menus in the regular menus, which can be detached, was also an enhancement.
Standard items like the color wheel already work very well.


Drawing on the screen is definitely very precise and convenient. Stay tuned for the upcoming release!

17 July, 2018 10:41AM by Riccardo (noreply@blogger.com)

July 16, 2018

FSF Events

Molly de Blanc - "What's the story with Munich?" (Portland, OR)

Free Software Foundation campaigns manager Molly de Blanc will be speaking at OSCON (2018-07-18–19).

Government adoption is an important step for the advancement of free software. When governments make the switch from proprietary technology, larger-scale change may follow: workers who use free technologies bring them home from the office, and students bring file formats, specialized software, and services like online homework submission systems home from school. Government offices also purchase software on massive scales, and their money can have large-scale impact on technology.

In 2003, the city council of Munich voted to plan a migration from a Microsoft-based system to a GNU/Linux one. By 2013, more than 15,000 machines were running on a customized GNU/Linux distribution. This success was short lived, however, as in late 2017 the city council voted to return to Windows and proprietary systems.

Molly de Blanc discusses the timeline of Munich’s tech procurement, where it succeeded, and what went wrong. Molly then covers other municipal procurement policies, the benefits of adopting free and open source technology at a government level, and how you can help.

Please fill out our contact form, so that we can contact you about future events in and around Portland.

16 July, 2018 02:17PM

July 15, 2018

Parabola GNU/Linux-libre

[From Arch] libutf8proc>=2.1.1-3 update requires manual intervention

The libutf8proc package prior to version 2.1.1-3 had an incorrect soname link. This has been fixed in 2.1.1-3, so the upgrade will need to overwrite the untracked soname link created by ldconfig. If you get an error

libutf8proc: /usr/lib/libutf8proc.so.2 exists in filesystem

when updating, use

pacman -Suy --overwrite usr/lib/libutf8proc.so.2

to perform the upgrade.

15 July, 2018 03:57PM by Omar Vega Ramos

July 13, 2018

libredwg @ Savannah

Revealing unknown DWG classes

I implemented three major buzzwords today in some trivial ways.

  • massive parallel processing
  • asynchronous processing
  • machine-learning: a self-improving program

The problem is mostly trivial, and so are the solutions. I need to
reverse-engineer a closed binary file format, but got some hints from
a related ASCII file format: DWG vs. DXF.

I have several pairs of files, and a helper library to convert the
ASCII data to the binary representation in the DWG. There are various
variants for most data values, and several fields are unknown: they
are not represented in the DXF, only in the DWG. So I wrote an example
program called unknown, which walks over all unknown binary blobs and
tries to find the matching known values. If a bit pattern is found only
once, we have a unique match; if it is found multiple times, there are
several possible ways the fields could be laid out; and if it is not
found, we have a problem: the binary representation is wrong.

When preparing the program called unknown, I decided to parse the log
files in perl and store the unknown blobs as C `.inc` files, to be
compiled into unknown as an array of structs.

Several DWG files are too large and either produce huge log files,
filling my hard disk, or cannot be parsed properly, leading to overly
large mallocs and endless loops, so these processes need to be killed
after a timeout of 10s.

So instead of

for d in test/test-data/*.dwg; do
    log=`basename "$d" .dwg`.log
    echo $d
    programs/dwgread -v5 "$d" 2>$log
done

I improved it to

for d in test/test-data/*.dwg; do
    log=`basename "$d" .dwg`.log
    echo $d
    programs/dwgread -v5 "$d" 2>$log &
    (sleep 10s; kill %1 2>/dev/null) &
done

The dwgread program is put into the background, with %1 being the bash
job specification for it, and `sleep 10s; kill %1` implements a simple
timeout via bash, not via perl. Both processes are in the background,
and the second optionally kills the first. So with some 100 files in
test-data, this is **massive parallelization**, as the dwgread
processes return immediately and their output appears some time later,
when each process has finished or been killed. So it's also
**asynchronous**, as I cannot see the result of each individual
process anymore: which returned SUCCESS, which returned ERROR, and
which was killed. You need to look at the logfiles, similar to
debugging hard real-world problems, like real-time controllers. This
processing is also massively faster, but mostly I did it to implement
a simple timeout mechanism in bash.
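A variant of the same timeout pattern captures the worker's PID with `$!` instead of relying on the `%1` job specification, which avoids depending on the shell's job table. A self-contained sketch, with `sleep` standing in for the dwgread call (the command and timings here are purely illustrative):

```shell
# Stand-in sketch of the bash timeout pattern; `sleep 5` replaces the
# long-running dwgread call, and the watchdog kills it after 1 second.
sleep 5 &                               # the "worker" in the background
pid=$!                                  # its PID, instead of the %1 job spec
( sleep 1; kill "$pid" 2>/dev/null ) &  # the watchdog
wait "$pid" 2>/dev/null                 # returns once killed or finished
echo "worker ended"
```

The watchdog's `kill` is silenced so that nothing is printed when the worker finishes on its own before the deadline.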

The next problem with the background processing is that I don't know
when all the background processes stopped, so I had to add one more
line:

while pgrep dwgread; do sleep 1; done

Otherwise I would continue processing the logfiles, creating my C
structs from these, but some logfiles would still grow and I would
miss several unknown classes.
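As a side note, when all the jobs are children of the same shell, bash's built-in `wait` is an alternative to polling with `pgrep`. A self-contained sketch, again with `sleep` standing in for dwgread:

```shell
# Launch a few stand-in background jobs (sleep replaces dwgread here).
for t in 1 2 3; do
    sleep 0.$t &
done
# wait blocks until every background child of this shell has exited,
# replacing the `while pgrep dwgread; do sleep 1; done` polling loop.
wait
echo "all jobs done"
```

The `pgrep` loop has the advantage of also catching dwgread processes started by other shells, which `wait` cannot see.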

The processed data is ~10 GB in size, so massively parallel processing
saves some time. The log files are only needed temporarily, to extract
the binary blobs, and can be removed later.

Eventually I turned off the massive parallelization, using another
timeout solution:

for d in test/test-data/*.dwg; do
    log=`basename "$d" .dwg`.log
    echo $d
    timeout -k 1 10 programs/dwgread -v5 "$d" 2>$log
done

To re-enable the parallelization and collect the asynchronous results
properly, I could also use GNU [parallel](https://www.gnu.org/software/parallel/parallel_tutorial.html) together with timeout:

parallel "timeout 10 programs/dwgread -v5 {} 2>{/.}.log" ::: test/test-data/*.dwg
cd test/test-data
parallel "timeout 10 ../../programs/dwgread -v5 {} 2>../../{/.}_{//}.log" ::: */*.dwg

So now the other interesting problem, the machine-learning part.
Let me show you first some real data I'm creating.

Parsing the logfiles and DXF data via some trivial perl scripts creates an array of such structs:

{ "ACDBASSOCOSNAPPOINTREFACTIONPARAM", "test/test-data/example_2000.dxf", 0x393, /* 473 */
"\252\100\152\001\000\000\000\000\000\000\074\057\340\014\014\200\345\020\024\126\310\100", 176, NULL },

/* ACDBASSOCOSNAPPOINTREFACTIONPARAM 393 in test/test-data/example_2000.dxf */
static const struct _unknown_field unknown_dxf_473[] = {
{ 5, "393", NULL, 0, BITS_UNKNOWN, {-1,-1,-1,-1,-1} },
{ 330, "392", NULL, 0, BITS_UNKNOWN, {-1,-1,-1,-1,-1} },
{ 100, "AcDbAssocActionParam", NULL, 0, BITS_UNKNOWN, {-1,-1,-1,-1,-1} },
{ 90, "0", NULL, 0, BITS_UNKNOWN, {-1,-1,-1,-1,-1} },
{ 1, "", NULL, 0, BITS_UNKNOWN, {-1,-1,-1,-1,-1} },
{ 100, "AcDbAssocCompoundActionParam", NULL, 0, BITS_UNKNOWN, {-1,-1,-1,-1,-1} },
{ 90, "0", NULL, 0, BITS_UNKNOWN, {-1,-1,-1,-1,-1} },
{ 90, "0", NULL, 0, BITS_UNKNOWN, {-1,-1,-1,-1,-1} },
{ 90, "1", NULL, 0, BITS_UNKNOWN, {-1,-1,-1,-1,-1} },
{ 360, "394", NULL, 0, BITS_UNKNOWN, {-1,-1,-1,-1,-1} },
{ 90, "0", NULL, 0, BITS_UNKNOWN, {-1,-1,-1,-1,-1} },
{ 90, "0", NULL, 0, BITS_UNKNOWN, {-1,-1,-1,-1,-1} },
{ 330, "0", NULL, 0, BITS_UNKNOWN, {-1,-1,-1,-1,-1} },
{ 100, "ACDBASSOCOSNAPPOINTREFACTIONPARAM", NULL, 0, BITS_UNKNOWN, {-1,-1,-1,-1,-1} },
{ 90, "0", NULL, 0, BITS_UNKNOWN, {-1,-1,-1,-1,-1} },
{ 90, "1", NULL, 0, BITS_UNKNOWN, {-1,-1,-1,-1,-1} },
{ 40, "-1.0", NULL, 0, BITS_UNKNOWN, {-1,-1,-1,-1,-1} },
{ 0, NULL, NULL, 0, BITS_UNKNOWN, {-1,-1,-1,-1,-1}}
};

I prefer the data to be compiled in, so it's not on the heap but in
the .DATA segment as const. And from this data, the program creates
this log:

ACDBASSOCOSNAPPOINTREFACTIONPARAM: 0x393 (176) test/test-data/example_2000.dxf
=bits: 010101010000001001010110100000000000000000000000000000000000000000000
0000000000000111100111101000000011100110000001100000000000110100111000010000
0101000011010100001001100000010
handle 0.2.393 (0)
search 5:"393" (24 bits of type HANDLE [0]) in 176 bits
=search (0): 010000001100000011001001
handle 4.2.392 (0)
search 330:"392" (24 bits of type HANDLE [1]) in 176 bits
=search (0): 010000101100000001001001
handle 8.0.0 (393)
search 330:"392" (8 bits of type HANDLE [1]) in 176 bits
=search (0): 00000001
330: 392 [HANDLE] found 3 at offsets 75-82, 94, 120 /176
100: AcDbAssocActionParam
search 90:"0" (2 bits of type BL [3]) in 176 bits
=search (0): 00
90: 0 [BL] found >5 at offsets 8-9, 9, 10, 11, 12, ... /176
search 1:"" (2 bits of type TV [4]) in 176 bits
=search (0): 00
1: [TV] found >5 at offsets 8-9, 9, 10, 11, 12, ... /176
100: AcDbAssocCompoundActionParam
search 90:"0" (2 bits of type BL [6]) in 176 bits
=search (0): 00
90: 0 [BL] found >5 at offsets 8-9, 9, 10, 11, 12, ... /176
search 90:"0" (2 bits of type BL [7]) in 176 bits
=search (0): 00
90: 0 [BL] found >5 at offsets 8-9, 9, 10, 11, 12, ... /176
search 90:"1" (10 bits of type BL [8]) in 176 bits
=search (0): 0000001000
field 90 already found at 176
search 90:"1" (10 bits of type BL [8]) in 7 bits
=search (169): 0000001000
search 90:"1" (10 bits of type BS [8]) in 7 bits
=search (169): 0000001000
handle 3.2.394 (0)
search 360:"394" (24 bits of type HANDLE [9]) in 176 bits
=search (0): 010011001100000000101001
handle 2.2.394 (393)
search 360:"394" (24 bits of type HANDLE [9]) in 176 bits
=search (0): 010001001100000000101001
handle 3.2.394 (393)
search 360:"394" (24 bits of type HANDLE [9]) in 176 bits
=search (0): 010011001100000000101001
handle 4.2.394 (393)
search 360:"394" (24 bits of type HANDLE [9]) in 176 bits
=search (0): 010000101100000000101001
handle 5.2.394 (393)
search 360:"394" (24 bits of type HANDLE [9]) in 176 bits
=search (0): 010010101100000000101001
handle 6.0.0 (393)
search 360:"394" (8 bits of type HANDLE [9]) in 176 bits
=search (0): 00000110
360: 394 [HANDLE] found 2 at offsets 109-116, 122-129 /176
search 90:"0" (2 bits of type BL [10]) in 176 bits
=search (0): 00
90: 0 [BL] found >5 at offsets 8-9, 9, 10, 11, 12, ... /176
search 90:"0" (2 bits of type BL [11]) in 176 bits
=search (0): 00
90: 0 [BL] found >5 at offsets 8-9, 9, 10, 11, 12, ... /176
handle 4.0.0 (0)
search 330:"0" (8 bits of type HANDLE [12]) in 176 bits
=search (0): 00000010
330: 0 [HANDLE] found 2 at offsets 8-15, 168-175 /176
100: ACDBASSOCOSNAPPOINTREFACTIONPARAM
search 90:"0" (2 bits of type BL [14]) in 176 bits
=search (0): 00
90: 0 [BL] found >5 at offsets 8-9, 9, 10, 11, 12, ... /176
search 90:"1" (10 bits of type BL [15]) in 176 bits
=search (0): 0000001000
field 90 already found at 168
search 90:"1" (10 bits of type BL [15]) in 7 bits
=search (169): 0000001000
search 90:"1" (10 bits of type BS [15]) in 7 bits
=search (169): 0000001000
search 40:"-1.0" (66 bits of type BD [16]) in 176 bits
=search (0): 000000000000000000000000000000000000000000000000001111001111010000
40: -1.0 [BD] found 1 at offset 32-97 /176
66/176=37.5%
possible: [ 8....8187 7......xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxxxxxxxxxxxxxxxxxxxxxx..81 77 7....8118.........9211 77 7..7 7...7 7..7
7..7 77 xxxxxxxx]

It converts each blob into bits of 0 or 1 and converts each DXF field
to some binary type, also logged as bits; then a handwritten
**membits()** search, similar to `memmem()` or `strstr()`, searches for
the bit pattern in the blob and records all instances found. In the
printed **possible [ ]** array, xxxx represents unique finds, `1-9`
and `.` multiple finds, and spaces holes for unknown fields not
represented in the DXF, i.e. DWG-only fields. These can be guessed
from the documentation and some thinking.
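The core idea of that search can be pictured with a toy stand-in in shell: blob and needle are spelled out as strings of '0'/'1' characters, and every matching offset is recorded (the values below are made up for illustration; the real membits() of course works on packed bits):

```shell
# Toy bit-pattern search: record every offset where the needle's
# bits occur inside the blob, like membits() but on 0/1 strings.
blob=0101010100000010
needle=000000
offsets=""
for ((i = 0; i <= ${#blob} - ${#needle}; i++)); do
    [ "${blob:i:${#needle}}" = "$needle" ] && offsets="$offsets $i"
done
echo "found at offsets:$offsets"   # prints: found at offsets: 8
```

One unique find is the good case; an empty or long offset list corresponds to the "problem" and "several possibilities" cases above.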

Most fields themselves are run-length bit-encoded in specialized ways,
so a 0.0 needs only the 2 bits 10, an empty string needs the same 2
bits 10, the number 0 as BL (bitlong) needs 2 bits 10, and 90:"1",
i.e. the number 1 as BL (bitlong), needs 10 bits 0010000000. So you
really need unique values in the sample data to get enough unique
finds. In this bad example, which so far really is the best example in
my data, I get 37.5% exact matches, with 6x 90:0, i.e. six occurrences
of the number 0. You cannot know which binary 00 is the real number 0.
Conversion from strings to floats is also messy and imprecise. While
representing a double as 64 binary bits is always well defined on
Intel chips, the reverse is not true: converting the string to a
double can lead to various representations, so I search various
variants, cutting off the mantissa precision, to find our matching
binary double.

What I'll be doing then is to shuffle the values a bit in the DXF to
represent uniquely identifiable values, like 1,2,3,4,5,6, convert this
DXF back to a DWG, and analyse this pair again. This is e.g. what I did to
uniquely identify the position of the header variables in earlier DWG
versions.

Classes with bad percentages are not processed anymore and are removed
from the program. So I create a constant feedback loop, with the
program creating these logs, plus a list of classes to skip and to
permute to create better matches. I could do this within my program
for a completely self-learning process, but I'd rather create
logfiles, re-analyse them, adjust the data structures shown above,
re-compile the program and run it again. Previously I did such a
re-compilation step via shared modules, which I can compile from
within the program, dlopen and dlclose it, but this is silly. Simple C
structs on disc are easier to follow than shared libs in memory. I
also store the intermediate steps in git, so I can check how the
self-improvement progresses, if at all. Several such objects got worse
and worse, deviating toward 0%, because there were no unique values
and finds anymore. Several representations are also still wrong; some
text values are really values from external references, such as the
layer name.

So this was a short intro into massive parallel processing,
asynchronous processing, and machine-learning: a self-improving program.
The code in question is here:
https://github.com/LibreDWG/libredwg/tree/master/examples
All the `*.inc` and `*.skip` files are automatically created by `make -C examples regen-unknown`.

The initial plan was to create a more complex backtracking solver to
find the best matches for all possible variants, but step-wise
refinement in controllable loops and usage of several trivial tools is
so far much easier than real AI. AI really is trivial if you do it
properly.

13 July, 2018 10:11PM by Reini Urban

July 12, 2018

FSF Blogs

Introducing Alyssa Rosenzweig, intern with the FSF tech team

Howdy there, fellow cyber denizens; 'tis I, Alyssa Rosenzweig, your friendly local biological life form! I'm a certified goofball, licensed to be silly under the GPLv3, but more importantly, I'm passionate about free software's role in society. I'm excited to join the Free Software Foundation as an intern this summer to expand my understanding of our movement. Well, that, and purchasing my first propeller beanie in strict compliance with the FSF office dress code!

Anywho, I hail from a family of engineers and was introduced to programming at an early age. As a miniature humanoid, I discovered that practice let me hit buttons on a keyboard and have my textual protagonist dance on my terminal -- that was cool! Mimicking those around me, I hacked with an Apple laptop, running macOS, compiling in Xcode, and talking on Skype. I was vaguely aware of the free software ethos, so sometimes I liberated my code. Sometimes I did not. I was little more than a button masher with a flashing TTY; I wrote video games while inside a video game, my life firewalled from reality.

I grew up. Offline, I learned in school about politics, civics, history. My fascination grew from PHP and C++ to Dr. King, Mahatma Gandhi, Cesar Chavez: real people, making real change, in the real world. Online, I added Richard Stallman to my nascent list of heroes. Discovering the free software movement transformed me. Soon, armed with both programming and politics, I watched the genie fly out of the bottle, granting me three wishes. I chose liberty, equality, and fraternity – Vive la philosophie! Yet I was restless. I was still. How could I? People lived. People died. Programs booted. Programs -9'd. The world spun. I sat. How could I? How could I? I put 10 and 10 together, and soon I knew my mission: to program for freedom, to write free software. Voracious, I read code and prose, and focused, I hacked and hacked. Today, this path has led me to copyleft my blog on free software, to condemn proprietary software at every turn, and most of all, to code, to collaborate, to contribute.

Critically, I have developed a focus on low-level freedom. I joined Libreboot, a free boot firmware, and through that immersion in boot freedom, I learned of two grave new threats: the Intel Management Engine and the AMD Platform Security Processor. It became clear that Intel and AMD's x86, the dominant architecture among free and proprietary software users alike, no longer belonged in our movement. I switched to ARM machines.

Unfortunately, free software support for ARM is lacking. On popular almost-free chipsets like the RK3288, the graphics processor requires proprietary blobs. Thus, we hackers are creating Panfrost, a free driver for modern ARM Mali chips. Today, on an RK3288 laptop, Panfrost is mature enough to run the famous benchmark, es2gears, with zero lines of proprietary code. But even with projects like Panfrost, intense ARM fragmentation has made the architectural jump a RISCy proposition for free software supporters. Indeed, there is not yet a user-friendly, fully free GNU/Linux distribution available for ARM.

This summer with the FSF, I am working to address these issues. My immediate focus is contributing to ARM-related resources like the LibrePlanet wiki and the FSF website. Longer-term, I seek to improve distribution support to enable x86-bound users to make the switch. No one -- and no zero -- has claimed the road ahead is easy. But little by little, together we can chip away at the proprietary monopoly, in the name of freer chips.

12 July, 2018 08:38PM

Sonali's Progress on the Free Software Directory, weeks 1-2

As a part of my project to make the Free Software Directory mobile friendly, I can add extensions, modify the code, and format the pages the way I like. I have complete freedom to experiment on their development site as much as I want. It's wonderful to be able to work on something I really enjoy under the guidance of experienced mentors.

What is the Free Software Directory?

The Free Software Directory is a project of the Free Software Foundation that catalogs free software and allows people to download free software with verified free licenses. In 2011, it was re-launched as a wiki, using MediaWiki and Semantic MediaWiki to give users greater freedom to add, use or modify its textual data.

My work:

After setting up Trisquel (a fully free operating system and GNU/Linux distribution) on my machine and getting access to the development server, I was good to go by the second day of my internship. I had already done a lot of research, so I knew what I was going for.

MediaWiki offers a simple way to make any wiki mobile-optimized. The extension MobileFrontend provides various site transformations that make your wiki mobile-friendly. It comes with useful features like a mobile menu and section collapsing, and has a very simple reader-oriented interface. It can be installed very easily. People with less experience don't need to spend time learning how to understand complex code. The CSS of the mobile theme can also be edited to achieve desired customizations using MediaWiki:Mobile.css (which is a Web page on your wiki, a counterpart of MediaWiki:Common.css). In short, MobileFrontend can give a very effective mobile-optimized view without much hassle. (For more information, see the project page.)

I installed MobileFrontend from the MediaWiki extension distributor and extracted the files locally. Then I uploaded them to the extensions directory in the root, i.e. /var/www/w/extensions.

Next, I had to edit LocalSettings.php. I used Vim for that. Vim is a very efficient and simple text editor. I was going to use Vim for the first time, so I spent some time before that going through Vimtutor.

After ensuring that the extension was installed properly, I couldn't wait to see how the site looked on mobile. I opened its mobile view on my desktop, and that was when I realized that while MobileFrontend is able to make most of the pages mobile optimized, it doesn't necessarily make it "mobile friendly." I checked the development site for various bugs and text which was not properly aligned.

Issues I faced:

  1. The HeaderTabs extension is incompatible with MobileFrontend. Only the first tab is displayed, and the rest of the tabs (and their text) disappear. The FSD uses HeaderTabs to display various entries, so it was necessary to either fix them or disable them in mobile view. The solution: I was able to disable HeaderTabs using the MobileDetect extension. It introduces a function called mobiledetect(), which returns true when a mobile device is detected, and false otherwise. I added a simple if statement in LocalSettings.php which excludes HeaderTabs from loading on mobile browsers.
  2. The mobile menu looks incomplete; there doesn't seem to be an easy way to add important links to the mobile menu.
  3. Noticed by Ian (one of my mentors): the 2-column view doesn't merge into a single column in mobile view, i.e. things that appear in the right column in a wide desktop view should come in-line in the mobile view. This was caused by the styling of the form: float: left; float: right, etc. Solution: I replaced those styles with the div classes left-float and right-float, then defined those classes in MediaWiki:Common.css and used a media query to bring the 2 columns in-line for screen sizes less than 800 px.
  4. Suggested by Andrew (my mentor), tables appear crammed in small screens. I am currently working on this issue. With the help of flexboxes and CSS, I am trying to make long horizontal tables appear vertical during mobile view.

Things I plan to work on in the coming week:

  1. Add the FSD logo to the mobile site
  2. Theme the mobile site so that it looks more like the FSF Directory theme
  3. Find a way to enable mobile view on desktops of smaller sizes (less than 800 px)
  4. And most importantly, spend time learning, researching, and building something useful.

12 July, 2018 07:06PM

Riccardo Mottola

DataBasin + DataBasinKit 1.0 released

A new release (1.0) for DataBasin and its framework DataBasinKit is out!

This release provides lots of news, most of the enhancements coming from the framework and exposed by the GUI:
  • Update login endpoint to login.salesforce.com (back again!)
  • Implement retrieve (get fields from a list of IDs, natively)
  • Support nillable fields on create
  • save HTML tables and pseudo-XLS in HTML-typed formats
  • Fix cloning of connections in case of threading
  • Implement Typing of fields after describing query elements (DBSFDataTypes)

DataBasin is a tool to access and work with SalesForce.com. It allows you to perform queries remotely, export and import data, inspect single records and describe objects. DataBasinKit is its underlying framework, which implements the APIs in Objective-C. It works on GNUstep (major Unix variants, and MinGW on Windows) and natively on macOS.

12 July, 2018 04:44PM by Riccardo (noreply@blogger.com)

July 11, 2018

FSF Events

Richard Stallman - « Contrôle ton ordinateur pour ne pas être contrôlé ! » (EduCode, Brussels, Belgium)

Richard Stallman will be speaking at EduCode 2018 (2018-08-27–29). His speech will be in French. It will be nontechnical, and the public is encouraged to attend.

Location: Bozar (École des Beaux Arts, Centre for Fine Arts), Rue Ravenstein 21, 1000 Bruxelles, Belgique (Belgium)

Important: Please note that, while registration is required, for Day 1 and Day 3 it can be done anonymously, in cash, at the venue.

Please fill out our contact form, so that we can contact you about future events in and around Brussels.

11 July, 2018 01:25PM

July 09, 2018

FSF News

FSF Events

Conference - "SeaGL 2018" (Seattle, WA)

The Seattle GNU/Linux Conference (November 9–10) is again going to take place this year at Seattle Central College.

From the website:

SeaGL is a grassroots technical conference dedicated to spreading awareness and knowledge about the GNU/Linux community and free/libre/open-source software/hardware. Our goal for SeaGL is to produce an event which is as enjoyable and informative for those who spend their days maintaining hundreds of servers as it is for a student who has only just started exploring technology options. SeaGL's first year was 2013. The SeaGL web site is built with Jekyll and we use OSEM for event management.

Admission is gratis and registration can be done anonymously and without running nonfree software.

Please fill out our contact form, so that we can contact you about future events in and around the Seattle area.

09 July, 2018 02:07PM

July 06, 2018

GNU Guix

GNU Guix and GuixSD 0.15.0 released

We are pleased to announce the new release of GNU Guix and GuixSD, version 0.15.0! This release brings us close to what we wanted to have for 1.0, so it’s probably one of the last zero-dot-something releases.

The release comes with GuixSD ISO-9660 installation images, a virtual machine image of GuixSD, and with tarballs to install the package manager on top of your GNU/Linux distro, either from source or from binaries.

It’s been 7 months (much too long!) since the previous release, during which 100 people contributed code and packages. The highlights include:

  • The unloved guix pull command, which allows users to upgrade Guix and its package collection, has been overhauled and we hope you will like it. We’ll discuss these enhancements in another post soon, but suffice it to say that the new guix pull now supports rollbacks (just like guix package) and that the new --list-generations option allows you to visualize past upgrades. It’s also faster, though not as fast as we’d like, so we plan to optimize it further in the near future.
  • guix pack can now produce relocatable binaries. With -f squashfs it can now produce images stored as SquashFS file systems. These images can then be executed by Singularity, a “container engine” deployed on some high-performance computing clusters.
  • GuixSD now runs on ARMv7 and AArch64 boxes! We do not provide an installation image though because the details depend on the board you’re targeting, so you’ll have to build the image yourself following the instructions. On ARMv7 it typically uses U-Boot, while AArch64 boxes such as the OverDrive rely on the EFI-enabled GRUB. Bootloader definitions are available for many boards—Novena, A20 OLinuXino, BeagleBone, and even NES.
  • We further improved error-reporting and hints provided by guix system. For instance, it will now suggest upfront kernel modules that should be added to the initrd—previously, you could install a system that would fail to boot simply because the initrd lacked drivers for your hard disk.
  • OS configuration has been simplified with the introduction of things like the initrd-modules field and the file-system-label construct.
  • There’s a new guix system docker-image command that does exactly what you’d expect. :-)
  • There’s a dozen new GuixSD services: the Enlightenment and MATE desktops, Apache httpd, support for transparent emulation with QEMU through the qemu-binfmt service, OpenNTPD, and more.
  • There were 1,200 new packages, so we’re now close to 8,000 packages.
  • Many bug fixes!
  • The manual is now partially translated into French and you can help translate it into your native language by joining the Translation Project.

See the release announcement for details.

About GNU Guix

GNU Guix is a transactional package manager for the GNU system. The Guix System Distribution or GuixSD is an advanced distribution of the GNU system that relies on GNU Guix and respects the user's freedom.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. Guix uses low-level mechanisms from the Nix package manager, except that packages are defined as native Guile modules, using extensions to the Scheme language. GuixSD offers a declarative approach to operating system configuration management, and is highly customizable and hackable.

GuixSD can be used on i686, x86_64, ARMv7, and AArch64 machines. It is also possible to use Guix on top of an already installed GNU/Linux system, including on mips64el and aarch64.

06 July, 2018 12:00PM by Ludovic Courtès

July 05, 2018

Luca Saiu

The European Parliament has rejected the copyright directive, for now

The EU copyright directive in its present form has deep and wide implications reaching far beyond copyright, and erodes core human rights and values. For more information I recommend Julia Reda’s analysis, which is accessible to the casual reader but also contains pointers to the text of the law. Today, on July 5, following a few weeks of very intense debate, campaigning and lobbying, including deliberate attempts to mislead politicians, the European Parliament voted in plenary session to reject the directive in its current form, as endorsed by the JURI committee, and instead reopen the debate. It ... [Read more]

05 July, 2018 10:47PM by Luca Saiu (positron@gnu.org)

libredwg @ Savannah

libredwg-0.5 released [alpha]

See https://www.gnu.org/software/libredwg/ and http://git.savannah.gnu.org/cgit/libredwg.git/tree/NEWS?h=0.5

Here are the compressed sources:
http://ftp.gnu.org/gnu/libredwg/libredwg-0.5.tar.gz (9.2MB)
http://ftp.gnu.org/gnu/libredwg/libredwg-0.5.tar.xz (3.4MB)

Here are the GPG detached signatures[*]:
http://ftp.gnu.org/gnu/libredwg/libredwg-0.5.tar.gz.sig
http://ftp.gnu.org/gnu/libredwg/libredwg-0.5.tar.xz.sig

Use a mirror for higher download bandwidth:
https://www.gnu.org/order/ftp.html

Here are the SHA256 checksums:

920c1f13378c849d41338173764dfac06b2f2df1bea54e5069501af4fab14dd1 libredwg-0.5.tar.gz
fd7b6d029ec1c974afcb72c0849785db0451d4ef148e03ca4a6c4a4221b479c0 libredwg-0.5.tar.xz

[*] Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact. First, be sure to download both the .sig file
and the corresponding tarball. Then, run a command like this:

gpg --verify libredwg-0.5.tar.gz.sig

If that command fails because you don't have the required public key,
then run this command to import it:

gpg --keyserver keys.gnupg.net --recv-keys B4F63339E65D6414

and rerun the 'gpg --verify' command.

05 July, 2018 05:35AM by Reini Urban

July 02, 2018

FSF Blogs

Take a stand before July 5th: Contact your MEP today

We recently asked you to contact your Members of the European Parliament (MEPs) to express your opposition to the Copyright Directive, a proposed policy including a section called Article 13. Article 13 threatens free speech, free culture, and free software. A number of you contacted your MEPs and wrote back to us -- thank you!

In spite of your efforts, 15 MEPs in the Legal Affairs Committee (JURI) voted against Web freedom and passed the Copyright Directive through its first round of approval.

But there's still hope! The second round is coming up this Thursday, July 5th. We're urging you to contact your MEPs before then.

Talk to your MEP before July 5th to #SaveYourInternet.

How to contact your MEP

Find their details

Europa.eu contains contact information for all MEPs. This includes email addresses, Twitter accounts, mailing addresses, and phone numbers.

Call

Calling is one of the most important things you can do. Try a sample script:

Hi, I'm [NAME]. I live in [LOCATION]. I'm calling to let you know that I oppose Article 13 of the Copyright Directive. Thank you.

It's that simple. If you want to say more, tell the person you're talking with a bit about yourself -- are you an artist concerned for the future of your art? A developer or free software activist looking out for software freedom? Maybe you're a student, parent, or voter concerned about the future of the Web.

Feel free to talk more about why you care about Article 13 and its impact, including the ways it's bad for the Web, free culture, free speech, and free software. Some key points include:

  • It harms free speech and free expression.
  • It limits lawful use of commentary, parody, and remix.
  • It will automatically filter out source code that is being legally used under free software licenses.

For more ideas, Julia Reda has written about some of the consequences of Article 13 passing.

Email your MEP

Not sure what to say? Try this sample script, or write your own:

Dear [NAME],

I am writing to urge you to vote against the Copyright Directive this July.

The Copyright Directive will turn code sharing platforms that are used to build software into censorship machines. When we say software, not only do we mean things like apps for our convenience, but also the digital infrastructure that runs our world.

Much of this technology is free software -- software that uses licenses that respect the freedom of users and developers of software. Among other things, it allows developers to use, modify, and reuse code. Automatic filtering would prevent the legal and rightful uploading of code that uses these licenses -- a major blow for development and digital freedom.

I hope you'll do the right thing and vote for freedom with a vote against the Copyright Directive.

Cheers,
[NAME]

Social media

We know that many of your MEPs maintain accounts on Twitter. You can tweet at them using the hashtags #SaveYourInternet, #CensorshipMachine, and #DeleteArt13.

What else can I do?

Share this post with your friends! Educating others and raising awareness of these issues helps people understand how important these policies are.

If you write to your MEP, share your letters with us (info@fsf.org) and those in your networks: having examples of what to say makes it easier to write a letter. Please feel free to also contact us about phone calls you make.

To learn more, you can check out the links below:

You can also support the work of the Free Software Foundation by supporting our Spring fundraising and membership drive.

02 July, 2018 08:13PM

May 2018: Photos from Brazil and Argentina

Free Software Foundation president Richard Stallman (RMS) went on a 12-city visit to Brazil and Argentina this past May and June. The trip took him…

…to the Federal Institute of São Paulo, in Araraquara, São Paulo, Brazil, where, on May 14th, he gave his speech "A Free Digital Society."1 The event was well attended, with universities and schools from neighboring cities having organized to shuttle their students to it by bus.

IFSP Professor José Rodolfo Beluzo, who had organized the visit, said the talk was "highly praised and helped people to reflect on the freedom of information." "It also helped us begin to reflect on the freedom of information in our school," he added.

(Copyright © 2018 Marta Kawamura Gonçalves. Photos licensed under CC BY 4.0.)

…to Campus Party, which this year was held in Salvador de Bahia, Bahia. On May 17th, he gave his free software speech1 on the main (Feel the Future) stage, to about 420 attendees…

(Copyright © 2018 Campus Party (photos by Gabriel Maciel and Rafael Martinelli). Photos licensed under CC BY 4.0.)

…and on to Argentina, starting in Misiones Posadas, where, on May 19th, in the Sala de Prosa of the Parque del Conocimiento, he gave his speech "Software libre en la ética y la práctica"1 to an audience of about 400 people, including government officials in telecommunications and legislators from Misiones.

(Copyright © 2018 Juan José Barrientos. Photos licensed under CC BY 4.0.)

(Copyright © 2018 Alejandro Prieto. Photos licensed under CC BY 4.0.)

…to San Miguel de Tucumán, in Tucumán, and then on to Mendoza, where he spoke at the Facultad Regional Mendoza de la Universidad Tecnológica Nacional. He again gave his free software speech,1 to a diverse audience of over a thousand people.

Adrián Sierra, the UTN's secretary of student affairs, who invited RMS, was pleased with the outcome, saying that before RMS's visit the issue of software freedom had been an "optional" one, and that after the visit it was clear to university authorities that using free software was about guaranteeing freedom and sovereignty, and was a moral duty for an educational institution. He added, "All the IT labs provide the option of running GNU/Linux and the Centro de Copiado Autogestionado of the Centro de Estudiantes runs and develops free software exclusively."

(Copyright © 2018 Lucas Sing. Photos licensed under CC BY 4.0.)

Before the speech started, the UTN, which trains half of Argentina's engineers, granted RMS the title of honorary professor, its highest distinction.

(Copyright © 2018 Lucas Sing. Photos licensed under CC BY 4.0.)

(Copyright © 2018 Pablo Cerezo. Photos licensed under CC BY 4.0.)

…to the Centro Cultural Teatrino de la Trapalanda, in Río Cuarto, Córdoba, where he gave his speech "Sociedad digital libre,"1 on May 26th. RMS drove home the point:

Any school level should teach only free software, because a school's social mission is to educate citizens of a society that is capable, strong, independent, caring, and free. Because to teach a proprietary program is a way of implanting dependence in the future. It's like teaching to smoke tobacco.

(Copyright © 2018 Lucas Bellomo. Photos licensed under CC BY 4.0.)

…to the Autonomous City of Buenos Aires, where, on May 28th, he spoke at the Instituto Universitario de la Policía Federal (IUPFA), giving his speech "Software libre: Soberanía e independencia tecnológica."

(Copyright © 2018 IUPFA. Photos licensed under CC BY 4.0.)

…and, on May 30th, at the Centro Cultural de la Cooperación Floreal Gorini, where people came to hear him give his speech "Copyright vs. Comunidad."1

(Copyright © 2018 Cristian Segarra. Photos licensed under CC BY 4.0.)

…to La Plata, Buenos Aires, where he spoke at the Universidad Nacional de La Plata (UNP), on May 31st.

(Copyright © 2018 Dirección de Innovación Tecnologica y Cadenas Productivas of the UNP. Photos licensed under CC BY 4.0.)

He finished with speeches in Mar del Plata, Buenos Aires, on June 2nd, and then, back in Brazil, in Pato Branco, Paraná, on June 4th and, finally, in São Paulo, on June 6th. Much work remains to be done; however, the trip did much to increase the public's awareness of the importance of computer-user freedom. Commenting on the situation in Brazil, Alexandre Oliva, founding member of Free Software Foundation Latin America and the 2017 winner of the FSF's Award for the Advancement of Free Software, concluded,

Last decade, free software had a lot of exposure and support in Brazilian governments, from federal to municipal. This declined significantly in the present decade: as the government tides turned, the movement lost many influential positions and even activists who moved on. Those of us who remained need to prepare the next generation of activists, focusing on the young. They might seem lost to anti-social networks, mobile phone cells and idIoT surveillance devices, but these are all additional reasons to inform them and urge them to join the resistance. Having RMS around to draw their attention to these issues is invaluable!

We hope you will join the movement!



Thank you to everyone who made all these trips possible! A special thanks to Alexandre for, among other things, making the Araraquara and São Paulo speeches possible, and to Javier Barcena for all his help on the Argentinian side.

Please fill out our contact form, so that we can inform you about future events in and around Araraquara, Misiones Posadas, Tucumán, Mendoza, Río Cuarto, Buenos Aires and La Plata, Mar del Plata, Pato Branco, and São Paulo, all of which RMS visited on this trip.

Please see www.fsf.org/events for a full list of all of RMS's confirmed engagements,
and contact rms-assist@gnu.org if you'd like him to come speak.


1. The recording will soon be posted on our audio-video archive.

02 July, 2018 05:20PM

GNU Guile

GNU Guile 2.2.4 released

We are delighted to announce GNU Guile 2.2.4, the fourth bug-fix release in the new 2.2 stable release series. It fixes many bugs that had accumulated over the last few months, in particular bugs that could lead to crashes of multi-threaded Scheme programs. This release also brings documentation improvements, the addition of SRFI-71, and better GDB support.

See the release announcement for full details and a download link. Enjoy!

02 July, 2018 09:00AM by Ludovic Courtès (guile-devel@gnu.org)

coreutils @ Savannah

coreutils-8.30 released [stable]

02 July, 2018 02:02AM by Pádraig Brady

June 28, 2018

Christopher Allan Webber

We Miss You, Charlie Brown

Morgan was knocking on the bathroom door. She wanted to know why I was crying when I was meant to be showering.

I was crying because my brain had played a cruel trick on me last night. It conjured a dream in which all the characters from the comic strip "Peanuts" represented myself and friends. Charlie Brown, the familiar but awkward everyman of the series, was absent from every scene, and in that way, heavily present.

I knew that Charlie Brown was absent because Charlie Brown had committed suicide.

I knew that Charlie Brown was my friend Matt Despears.

The familiar Peanuts imagery passed by: Linus (who was me), sat at the wall, but with nobody to talk to. Lucy held out the football, but nobody was there to kick it. Snoopy sat at his doghouse with an empty bowl, and nobody was there to greet him. And so on.

Then the characters, in silence, moved on with their lives, and the title scrolled by the screen... "We Miss You, Charlie Brown".

And so, that morning, I found myself in the shower, crying.

Why Peanuts? I don't know. I wouldn't describe myself as an overly energetic fan of the series. I also don't give dream imagery too much credit as being necessarily important, since I think much of it tends to be a byproduct of the brain's cleanup processes. But it hit home hard, probably because the imagery is so very familiar and repetitive, and so the absence of a key component amongst that familiarity stands out strongly. And maybe Charlie Brown was just a good fit for Matt: awkward but loveable.

It has now been over six years since Matt has passed, and I find myself thinking of him often, usually when I have the urge to check in with him and remember that he isn't there. Before this recent move I was going through old drives and CDs and cleaning out and wiping out old junk, and found an archive of old chat logs from when I was a teenager. I found myself reliving old conversations, and most of it was utter trash... I felt embarrassed with my past self and nearly deleted the entire archive. But then I went through and read those chat logs with Matt. I can't say they were of any higher quality... my conversations with Matt seemed even more absurd on average than the rest. But I kept the chat logs. I didn't want to lose that history.

I felt compelled to write this up, and I don't entirely know why. I also nearly didn't write this up, because I think maybe this kind of writing can be dangerous. That may sound absurd, but I can speak from my experience of someone who frequently experiences suicidal ideation that the phrase "would anyone really miss me when I'm gone" comes to mind, and maybe this reinforces that.

I do think that society tends to romanticize depression and suicide in some strange ways, particularly this belief that suffering makes art greater. A friend of mine pointed this out to me for the first time in reference to John Kennedy Toole's "A Confederacy of Dunces", often advertised and sold to others by, "and the author committed suicide before it was ever published!" But it would have been better to have more books by John Kennedy Toole instead.

So as for "will anyone miss me if I'm gone", I want to answer that without romanticizing it. The answer is just "Yes, but it would be better if you were here."

A group of friends and I got together to play a board game recently. We sat around the table and had a good time. I drew a picture of "Batpope", one of Matt's favorite old jokes, and we left it on an empty spot at the table for Matt. But we would rather have had Matt there. His absence was felt. And that's usually how it is... like in the dream, we pass through the scenes of our lives, and we carry on, but there's a missing space, and one can feel the shape. There's no romance to that... just absence and memories.

We miss you, Matt Despears.

28 June, 2018 04:50PM by Christopher Lemmer Webber

June 27, 2018

gdbm @ Savannah

Version 1.16

Version 1.16 has been released.

This version improves free space management and fixes a long-standing bug discovered recently due to introduction of strict database consistency checks.

27 June, 2018 07:05PM by Sergey Poznyakoff

June 26, 2018

FSF Blogs

European Union Public License v. 1.2 added to license list

We recently added the EUPL-1.2 to our list of Various Licenses and Comments About Them. This list helps users to understand whether a particular license is a free software license, and whether it is compatible with the GNU General Public License (GNU GPL). Like the previous version of the EUPL (EUPL-1.1), the EUPL-1.2 is included in the section for free licenses that are GNU GPL-incompatible, but with an important caveat. While the EUPL-1.2's copyleft by itself is incompatible with the GNU GPL, the license provides a few mechanisms for re-licensing which enable combination with GNU GPL-licensed works. We explain the situation more fully in the entry itself:

This is a free software license. By itself, it has a copyleft comparable
to the GPL's, and incompatible with it. However, it gives recipients
ways to relicense the work under the terms of other selected licenses,
and some of those—the Eclipse Public License in particular—only provide
a weaker copyleft. Thus, developers can't rely on this license to
provide a strong copyleft.

The EUPL allows relicensing to GPLv2 only and GPLv3 only, because those
licenses are listed as two of the alternative licenses that users may
convert to. It also, indirectly, allows relicensing to GPL version 3 or
any later version, because there is a way to relicense to the CeCILL v2,
and the CeCILL v2 gives a way to relicense to any version of the GNU GPL.

To do this two-step relicensing, you need to first write a piece of code
which you can license under the CeCILL v2, or find a suitable module
already available that way, and add it to the program. Adding that code
to the EUPL-covered program provides grounds to relicense it to the
CeCILL v2. Then you need to write a piece of code which you can license
under the GPLv3+, or find a suitable module already available that way,
and add it to the program. Adding that code to the CeCILL-covered
program provides grounds to relicense it to GPLv3+.

These comments are very similar to the ones we made for the EUPL-1.1, as the EUPL-1.2 is an update that is very much in line with its predecessor. The biggest change was adding the GNU GPLv3 only as an alternative license, simplifying the process of incorporating EUPL-1.2 code into a GNU GPLv3 only project. Note, however, that the two-step re-licensing process previously described is still needed in order to incorporate EUPL-1.2 code into a GNU GPLv3+ project.

With all that said, the EUPL-1.2 is the newest addition to our list of free licenses, but more will be added in the future. To keep up to date, here's what you can do:

26 June, 2018 09:35PM

FSF Events

Karen M. Sandler, Molly de Blanc - "Introduction to User Freedom" (HOPE, New York, NY)

Free Software Conservancy executive director Karen M. Sandler and FSF campaigns manager Molly de Blanc will be speaking at The Circle of Hope (2018-07-20–22).

If you are coding, writing, or making art or any other creative works, at some point you need to pick a license for how you want to share what you’ve done. A license represents a series of ethical, legal, and values decisions. Instead of proprietary "software" and "culture," you have "free software" and "free culture." The licenses used to accomplish this are the legal embodiment of a set of ideals represented in the four freedoms of free software. This talk will provide a historical and philosophical overview of just what it means for something to be free, why it matters, and what your responsibilities are in a world where our experiences, our selves, and our lives have become intellectual property that may not always belong to us.

This session is accessible to anyone with a general knowledge of what free software is and an awareness that open contribution communities power many free software projects.

Karen and Molly's speech will be nontechnical and the public is encouraged to attend.

Please check the event schedule for the date and time.

Location: Hotel Pennsylvania, 401 Seventh Avenue (15 Penn Plaza), New York, NY

Please fill out our contact form, so that we can contact you about future events in and around New York City.

26 June, 2018 03:45PM

June 21, 2018

foliot @ Savannah

GNU Foliot version 0.9.8

GNU Foliot version 0.9.8 is released (June 2018)

This is a maintenance release, which brings GNU Foliot up to date with Grip 0.2.0, upon which it depends. In addition, the default installation locations changed, and there is a new configure option.

For a list of changes since the previous version, visit the NEWS file. For a complete description, consult the git summary and git log.

21 June, 2018 03:37AM by David Pirotte

June 20, 2018

parallel @ Savannah

GNU Parallel 20180622 ('Kim Trump') released

GNU Parallel 20180622 ('Kim Trump') has been released. It is available for download at: http://ftpmirror.gnu.org/parallel/

Quote of the month:

GNU Parallel is just so convenient!! ("GNU Parallel 實在太方便啦!!")
Yucheng Chuang @yorkxin@twitter

New in this release:

  • Deal better with multibyte chars by forcing LC_ALL=C.
  • GNU Parallel was shown on Danish national news for 1.7 seconds: dr.dk
  • Bug fixes and man page updates.

GNU Parallel - For people who live life in the parallel lane.

About GNU Parallel

GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with: (wget -O - pi.dk/3 || curl pi.dk/3/) | bash

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your commandline will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2018): GNU Parallel 2018, April 2018, https://doi.org/10.5281/zenodo.1146014.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/ Identi.ca/Google+/Twitter/Facebook/Linkedin/mailing lists
  • Get the merchandise https://gnuparallel.threadless.com/designs/gnu-parallel
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference

If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)

If GNU Parallel saves you money:

About GNU SQL

GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.

About GNU Niceload

GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.

20 June, 2018 10:00PM by Ole Tange

June 19, 2018

Riccardo Mottola

GNUMail + Pantomime 1.3.0

A new release for GNUmail (Mail User Agent for GNUstep and MacOS) and Pantomime (portable MIME Framework): 1.3.0!


Pantomime APIs were updated to use safer types: counts and sizes were mostly transitioned to the more Cocoa-like NSUInteger/NSInteger or size_t/ssize_t where appropriate.
This required a major release, 1.3.0, for both Pantomime and GNUMail. In several functions, returning -1 was replaced by NSNotFound.

Note: When run for the first time, the new GNUMail will update your message cache to the new format. In case of problems, or if you revert to the old version, clean the cache. Message size is now encoded inside it as unsigned instead of signed.

Countless enhancements and bug fixes in both Pantomime and GNUMail should improve usability.
Previously, certain messages containing special characters would fail to load, and there were issues decoding the personal part of addresses.

Pantomime:

  • Correct signature detection as per RFC (caused issues when removing it during replies)
  • improved address and quoted parsing
  • generally improved header parsing
  • Encoding fixes
  • Serious iconv fix which could cause memory corruption due to realloc
  • Fixes for Local folders (should help fix #53063, #51852 and generally bugs with POP and Local accounts)
  • generally improved init methods to check for self, that may help avoid memory issues and debugging in the future
  • various code cleanup in Message loading for better readability
  • more logging code for debug build, should help debugging

GNUMail:

  • Possibility to create filters for To and CC directly in the message context menu
  • Read/Unread and Flag/Unflag actions directly in the message context menu
  • Size status for Messages in bytes, kilobytes, or megabytes, depending on size
  • Spelling fixes
  • Improved Menu Validation
  • fix for #52817
  • generally improved init methods to check for self, that may help avoid memory issues and debugging in the future
  • GNUstep Only: Find Panel is now GORM based

19 June, 2018 01:10PM by Riccardo (noreply@blogger.com)

June 16, 2018

gdbm @ Savannah

Version 1.15

GDBM version 1.15 is available for download. Important changes in this release:

Extensive database consistency checking

GDBM tries to detect inconsistencies in input database files as early as possible. When an inconsistency is detected, a helpful diagnostic is returned and the database is marked as needing recovery. From that moment on, any GDBM function trying to access the database will immediately return an error code (instead of eventually segfaulting, as previous versions did). To reconstruct the database and return it to a healthy state, use the gdbm_recover function.

Commands can be given to gdbmtool in the command line

The syntax is:

Multiple commands are separated by semicolon (take care to escape it), e.g.:

Fixed data conversion bugs in storing structured keys or content

New member in the gdbm_recovery structure: duplicate_keys

Upon return from gdbm_recover, this member holds the number of keys that were not recovered because the same key had already been stored in the database. The actual number of stored keys is thus:

New error codes

The following new error codes are introduced:

  • GDBM_BAD_BUCKET (Malformed bucket header)
  • GDBM_BAD_HEADER (Malformed database file header)
  • GDBM_BAD_AVAIL (Malformed avail_block)
  • GDBM_BAD_HASH_TABLE (Malformed hash table)
  • GDBM_BAD_DIR_ENTRY (Invalid directory entry)

Removed gdbm-1.8.3 compatibility layer

16 June, 2018 04:52PM by Sergey Poznyakoff

June 15, 2018

Riccardo Mottola

OresmeKit initial release: plotting for GNUstep and Cocoa

Finally a public release of OresmeKit.
Work on it started many years ago, and the moment for a first public release has finally come, now that I have put together a first draft of the documentation. Stay tuned for improvements and new graph types.

OresmeKit is useful for plotting and graphing data, natively both on Cocoa/MacOS and on GNUstep.

OresmeKit is a framework which provides NSView subclasses that can display data. It is useful to easily embed charts and graphs in your applications, e.g. monitoring apps, dashboards and such.
OresmeKit supports both GNUstep and Cocoa/MacOS.

Initial API documentation is also available, as well as two examples in the SVN repository.

15 June, 2018 02:41PM by Riccardo (noreply@blogger.com)

June 14, 2018

gsl @ Savannah

GNU Scientific Library 2.5 released

Version 2.5 of the GNU Scientific Library (GSL) is now available. GSL provides a large collection of routines for numerical computing in C.

This release introduces some new features and fixes several bugs. The full NEWS file entry is appended below. The file details for this release are:

ftp://ftp.gnu.org/gnu/gsl/gsl-2.5.tar.gz
ftp://ftp.gnu.org/gnu/gsl/gsl-2.5.tar.gz.sig

The GSL project homepage is http://www.gnu.org/software/gsl/

GSL is free software distributed under the GNU General Public License.

Thanks to everyone who reported bugs and contributed improvements.

Patrick Alken

14 June, 2018 07:02PM by Patrick Alken

June 12, 2018

FSF Events

Richard Stallman to speak in Amsterdam (We Make the City, Amsterdam, Netherlands)

Richard Stallman will be speaking twice at "Next Generation Cities - Strategies for Inclusive Digital Transformation," part of the festival We Make the City (2018-06-21).

He will take part in a panel:

  • What: panel - "Smart City, Spy City? Avenues for making a city 'smart' while respecting privacy and anonymity"
  • Abstract:
    For a free society, we must reduce the level of surveillance to below what the Soviet Union suffered. A discussion between RMS, Marleen Stikker, and Francesca Bria on the need for free, fair, and inclusive digital technology and infrastructures.
  • When: 10:15–10:45
  • Where: Q Factory, Atlantisplein 1, Amsterdam, Netherlands

and will give a speech:

  • What: speech - "How Can We Have Less Surveillance Than the USSR?"
  • Abstract:
    Digital technology has enabled governments to impose surveillance that Stalin could only dream of, making it next to impossible to talk with a reporter undetected. This puts democracy in danger. Stallman will present the absolute limit on general surveillance in a democracy, and suggest ways to design systems not to collect dossiers on all citizens.
  • When: 13:00–14:00
  • Where: Q Factory, Atlantisplein 1, Amsterdam, Netherlands

Please fill out our contact form, so that we can contact you about future events in and around Amsterdam.

12 June, 2018 10:50AM

June 11, 2018

libredwg @ Savannah

Major speedup for big DWG's

Thanks to David Bender and James Michael DuPont for convincing me that we need a hash table for really big DWGs. I got a DWG example with 42MB, which needed 2m to process and then 3m to free the dwg struct. I also had to fix a couple of internal problems.

We couldn't use David Bender's hashmap, which he took from Android (Apache 2 licensed), and I didn't like it too much either. So today I sat down and wrote a good int hashmap from scratch, with several performance adjustments, because we never get a key of 0 and we won't need to delete keys.
So it's extremely small and simple, using cache-friendly open addressing, and I got it right on the second attempt.

Performance with this hash table now got down to 7 seconds.
Then I also removed the unneeded dwg_free calls from some cmdline apps, because the kernel does it much better than libc malloc/free. 3 minutes for free() is longer than the slowest garbage collector I've ever seen.
So now processing this 42MB dwg needs 7s.

While I was there I also had to adjust several hard-coded limits used for testing our small test files; with realistic big DWG's they failed all over. There are still some minor problems, but the majority of the DWG's can be read.
And I now pass through all new error codes, with a bitmask of all the uncritical and critical errors that occurred. On any critical error the library aborts; on some errors it just aborts for that particular object; and some uncritical errors are just skipped over. See dwg.h or the docs for the error levels.

What's left is writing proper DXF's, which is apparently not that easy. With full precision mode as in newer acad's, with subclass markers and all kinds of internal handles, it's very strict, and I wasn't yet able to import any such DXF into acad. It should be possible, though, to import these into any other app.
So now I'm thinking of adding a third level of DXF: minimal, low and full (default). minimal are entities only, and are documented by acad. low would be new, the level of DXF other applications produce, such as libdxfrw or pythoncad. This is a basic DXF. Full is what teigha and acad produce.
In the end I want full, because I want to copy over parametric constraints from and to better modelers, full 3d data (3dsolid, surface) and full rendering and BIM data (Material, Sun, Lights, ...).

Reading DXF will have to wait for the next release, as I'm stuck with writing DXF's first. This needs to be elegant and maintainable, and not such a mess as with other libraries. I want to use the same code for reading and writing DXF, JSON (e.g. GeoJSON), and XML (e.g. OpenstreetMap, FreeCAD).

11 June, 2018 06:15PM by Reini Urban

unifont @ Savannah

Unifont 11.0.01 Released - Upgrade Recommended

Unifont 11.0.01 was released on 5 June 2018, coinciding with the formal release of Unicode 11.0.0 by The Unicode Consortium.

I wanted to check over this release before recommending that GNU/Linux distributions incorporate it. So far only one new bug appears to have been introduced: U+1C90 has an extra vertical line, making the character double-width instead of single-width. This will be fixed in the next release. Unifont 10.0.x went through seven updates in about half a year. I felt that was not stable enough for those trying to maintain GNU/Linux distributions, so I did not keep recommending that each update, with minor changes from one to the next, be propagated. I plan to have more stability in Unifont 11.0.x.

Unifont provides fonts with a glyph for each printable code point in the Unicode Basic Multilingual Plane, as well as wide coverage of the Supplemental Multilingual Plane and some ConScript Unicode Registry glyphs.

The Unifont package includes TrueType fonts for all of these ranges, and BDF and PCF fonts for the Unicode Basic Multilingual Plane. There is also a specialized PSF font for using GNU APL in console mode on GNU/Linux systems.

The web page for this project is https://savannah.gnu.org/projects/unifont/.

You can download the latest version from GNU mirror sites, accessible at http://ftpmirror.gnu.org/unifont/unifont-11.0.01. If the mirror site does not contain this latest version, you can download files directly from GNU at https://ftp.gnu.org/gnu/unifont/unifont-11.0.01/ or ftp://ftp.gnu.org/gnu/unifont/unifont-11.0.01/.

Highlights of this version:

Support for the brand new Unicode Copyleft glyph (U+01F12F), which was added in Unicode 11.0.0. This glyph is present in the Unifont package's TrueType fonts.

The addition of the space character (U+0020) to all Unifont package TrueType fonts, for more straightforward rendering of Unifont Upper (i.e., Unicode Plane 1) scripts that contain spaces.

The addition of several new scripts that were introduced in Unicode 11.0.0:

  • U+1C90..U+1CBF Georgian Extended
  • U+010D00..U+010D3F Hanifi Rohingya
  • U+010F00..U+010F2F Old Sogdian
  • U+010F30..U+010F6F Sogdian
  • U+011800..U+01184F Dogra
  • U+011D60..U+011DAF Gunjala Gondi
  • U+011EE0..U+011EFF Makasar
  • U+016E40..U+016E9F Medefaidrin
  • U+01D2E0..U+01D2FF Mayan Numerals
  • U+01EC70..U+01ECBF Indic Siyaq Numbers
  • U+01FA00..U+01FA6F Chess Symbols

Paul Hardy

GNU Unifont Maintainer

11 June, 2018 12:29PM by Paul Hardy

June 06, 2018

freedink @ Savannah

New FreeDink DFArc frontend 3.14 release

Here's a new release of DFArc, a frontend to run the GNU FreeDink game and manage its numerous add-on adventures or D-Mods :)
https://ftp.gnu.org/pub/gnu/freedink/dfarc-3.14.tar.gz

This release fixes CVE-2018-0496: Sylvain Beucler and Dan Walma discovered several directory traversal issues in DFArc (as well as in the RTsoft's Dink Smallwood HD / ProtonSDK version), allowing an attacker to overwrite arbitrary files on the user's system.

Also in this release:

- New Swedish and Friulian translations.

- Updated Catalan, Brazilian Portuguese and Spanish translations.

- Fix crash when clicking on 'Package' when there is no D-Mod present.

- Compilation fixes for OS X.

- Reproducible build process for Windows (as well as GNU/Linux depending on your distro) - see https://reproducible-builds.org/

A note about distros security support:

- Debian Security team graciously issued a CVE ID within 72 hours but declined both a security upload and a rationale for their choice; fix diverted to the next ~quarterly point release
- Fedora/RedHat security did not answer after 6 days; fortunately Fedora is flexible enough to allow package maintainers to upgrade DFArc in previous releases on their own
- Gentoo Security did not answer after 7 days
- FreeBSD ports and Mageia packagers were contacted but did not answer
- In Arch, package still stuck between orphaned and deleted state due to a 2017 bug

It seems that security support for games and for packages without a large user base is significantly delayed, at best.

About GNU FreeDink:

Dink Smallwood is an adventure/role-playing game, similar to Zelda, made by RTsoft. Besides twisted humor, it includes the actual game editor, allowing players to create hundreds of new adventures called Dink Modules or D-Mods for short.

GNU FreeDink is a new and portable version of the game engine, which runs the original game as well as its D-Mods, with close compatibility, under multiple platforms.

DFArc is an integrated frontend, .dmod installer and .dmod archiver for the Dink Smallwood game engine.

06 June, 2018 07:03PM by Sylvain Beucler

Sylvain Beucler

Best GitHub alternative: us

Why try to choose the host that sucks less, when hosting a single-file (S)CGI gets you decentralized git-like + tracker + wiki?

Fossil

https://www.fossil-scm.org/

We gotta take the power back.

06 June, 2018 06:16PM

GNUnet News

GNUnet 0.11.0pre66

Platform: 
Source Code (TGZ)

We are pleased to announce the release of GNUnet 0.11.0pre66.

This is a pre-release to help developers and downstream packagers test the package before the final release, which follows four years of development.

06 June, 2018 07:20AM by Christian Grothoff

June 05, 2018

health @ Savannah

GNU Health patchset 3.2.10 released

Dear community

GNU Health 3.2.10 patchset has been released!

Priority: Medium

Table of Contents

  • About GNU Health Patchsets
  • Updating your system with the GNU Health control Center
  • Summary of this patchset
  • Installation notes
  • List of issues related to this patchset

About GNU Health Patchsets

We provide "patchsets" to stable releases. Patchsets allow applying bug fixes and updates on production systems. Always try to keep your production system up-to-date with the latest patches.

Patches and Patchsets maximize uptime for production systems, and keep your system updated, without the need to do a whole installation.

NOTE: Patchsets are applied on previously installed systems only. For new, fresh installations, download and install the whole tarball (i.e., gnuhealth-3.2.10.tar.gz).

Updating your system with the GNU Health control Center

Starting with the GNU Health 3.x series, you can automatically update the GNU Health and Tryton kernel and modules using the GNU Health control center program.

Please refer to the administration manual section ( https://en.wikibooks.org/wiki/GNU_Health/Control_Center )

The GNU Health control center works on standard installations (those done following the installation manual on wikibooks). Don't use it if you use an alternative method or if your distribution does not follow the GNU Health packaging guidelines.

Summary of this patchset

Patch 3.2.10 fixes issues related to the CalDAV functionality, updating the CalDAV event after changing an appointment.

The gnuhealth-setup program has also been updated, including numpy for the latest pytools.

Refer to the List of issues related to this patchset for a comprehensive list of fixed bugs.

Installation Notes

You must apply previous patchsets before installing this patchset. If your patchset level is 3.2.9, then just follow the general instructions.
You can find the patchsets at GNU Health main download site at GNU.org (https://ftp.gnu.org/gnu/health/)

In most cases, GNU Health Control center (gnuhealth-control) takes care of applying the patches for you.

Follow the general instructions at

After applying the patches, make a full update of your GNU Health database as explained in the documentation.

  • Restart the GNU Health Tryton server

List of issues and tasks related to this patchset

  • bug #54055: Caldav event does not update after changing the appointment

For detailed information about each issue, you can visit https://savannah.gnu.org/bugs/?group=health
For detailed information about each task, you can visit https://savannah.gnu.org/task/?group=health

For detailed information you can read about Patches and Patchsets

05 June, 2018 11:53AM by Luis Falcon

June 04, 2018

gnuastro @ Savannah

Gnuastro 0.6 released

The sixth release of Gnuastro is now ready for download. Please see the announcement for more details.

04 June, 2018 04:25PM by Mohammad Akhlaghi

libredwg @ Savannah

Enabled r2018 support

I finished now reading the remaining DWG formats r2010, r2013 and r2018.
The only DWG read limitations are now:

  • pre-R13: some entities, all blocks
  • r2010+ Some AEC EED (Autodesk Architectural Desktop) objects.
  • Untested: FIELDLIST, AcDbField, TABLECONTENT, TABLEGEOMETRY, GEODATA, WIPEOUTVARIABLE
  • Unhandled (i.e. passed through): MATERIAL, CELLSTYLEMAP, MULTILEADER, PROXY, PROXY_ENTITY, DBCOLOR, PLOTSETTINGS, TABLESTYLE, VBA_PROJECT, DIMASSOC, ACDBSECTIONVIEWSTYLE, ACDBDETAILVIEWSTYLE, ACDBASSOCNETWORK, ACDBASSOC2DCONSTRAINTGROUP, ACDBASSOCGEOMDEPENDENCY, ACDB_LEADEROBJECTCONTEXTDATA_CLASS, NPOCOLLECTION, EXACXREFPANELOBJECT, ARCALIGNEDTEXT (2000+), UNDERLAYDEFINITION (2 strings), OBJECTCONTEXTDATA, AcDbAnnotScaleObjectContextDatax

Writing is only done for r13-r2000, with r2000 being the default format (I hope).

DXF support is coming. Writing DXF is done, but AutoCAD cannot import it yet, as I write all known fields, handles and references, unlike libdxfrw which only writes a limited set. You cannot map parametric constraints or advanced classes with that.

DXF reading is only planned, as I have to decide yet which strategy to use, with minimal maintenance. It should be a table driven parser, from the dwg.spec.

With the upcoming 0.5 release I'm only waiting on the fencepost permissions on GNU. You can get daily snapshots and Windows binaries from the GitHub release page: https://github.com/LibreDWG/libredwg/releases/

Reini Urban

04 June, 2018 08:40AM by Reini Urban

June 02, 2018

Sylvain Beucler

Reproducible Windows builds

I'm working again on making reproducible .exe-s. I thought I'd share my process:

Pros:

  • End users get a bit-for-bit reproducible .exe, known not to contain trojan and auditable from sources
  • Point releases can reuse the exact same build process and avoid introducing bugs

Steps:

  • Generate a source tarball (non reproducibly)
  • Debian Docker as a base, with fixed version + snapshot.debian.org sources.list
    • Dockerfile: install packaged dependencies and MXE(.cc) from a fixed Git revision
    • Dockerfile: compile MXE with SOURCE_DATE_EPOCH + fix-ups
  • Build my project in the container with SOURCE_DATE_EPOCH and check SHA256
  • Copy-on-release

Result:

git.savannah.gnu.org/gitweb/?p=freedink/dfarc.git;a=tree;f=autobuild/dfarc-w32-snapshot

Generate a source tarball (non reproducibly)

This is not reproducible due to using non-reproducible tools (gettext, automake tarballs, etc.) but it doesn't matter: only building from source needs to be reproducible, and the source is the tarball.

It would be better if the source tarball were perfectly reproducible, especially for large generated content (./configure, wxGlade-generated GUI source code...), but that can be a second step.

Debian Docker as a base

AFAIU the Debian Docker images are made by Debian developers but are in no way official images. That's a pity, and to be 100% safe I should start anew from debootstrap, but Docker provides a very efficient framework to build images, notably with caching of every build step, immediate fresh containers, and a public images repository.

This means with a single:

sudo -g docker make

you get my project reproducibly built from scratch with nothing to setup at all.

I avoid using a :latest tag, since it will change, and also backports, since they can be updated anytime. Here I'm using stretch:9.4 and no backports.

Using snapshot.debian.org in sources.list makes sure the installed packaged dependencies won't change at next build. For a dot release however (not for a rebuild), they should be updated in case there was a security fix that has an effect on built software (rare, but exists).

Last but not least, APT::Install-Recommends "false"; for better dependency control.

MXE

mxe.cc is a compilation environment to get MinGW (GCC for Windows) and selected dependencies rebuilt unattended with a single make. Doing this manually would be tedious because every other day, upstream breaks MinGW cross-compilation, and debugging an hour-long build process takes ages. Been there, done that.

MXE has a reproducible-boosted binutils with a patch for SOURCE_DATE_EPOCH that avoids getting date-based and/or random build timestamps in the PE (.exe/.dll) files. It's also compiled with --enable-deterministic-archives to avoid timestamp issues in .a files (but no automatic ordering).

I set SOURCE_DATE_EPOCH to the fixed Git commit date and I run MXE's build.

This does not apply to GCC, however, so I needed to patch out e.g. a __DATE__ in wxWidgets.
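The SOURCE_DATE_EPOCH convention itself is simple: wherever a tool would embed the current time, it uses the variable's value instead whenever it is set. A minimal sketch of that pattern (this shows the generic convention from the Reproducible Builds specification, not MXE's actual patch):

```c
#include <stdlib.h>
#include <time.h>

/* Return the timestamp to embed in a build artifact: SOURCE_DATE_EPOCH
   when set (so rebuilds are bit-for-bit identical), the current time
   otherwise. */
static time_t
build_timestamp (void)
{
  const char *epoch = getenv ("SOURCE_DATE_EPOCH");
  if (epoch)
    {
      char *end;
      long long t = strtoll (epoch, &end, 10);
      if (end != epoch && *end == '\0')
        return (time_t) t;
    }
  return time (NULL);
}
```

With this pattern in place, exporting SOURCE_DATE_EPOCH=$(git log -1 --format=%ct) before the build pins every embedded timestamp to the commit date.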

In addition, libstdc++.a has a file ordering issue (said ordering surprisingly stays stable between a container and a host build, but varies when using a different computer with the same distros and tools versions). I hence re-archive libstdc++.a manually.

It's worth noting that PE files don't have issues with build paths (and varying BuildID-s - unlike ELF... T_T).

Again, for a dot release, it makes sense to update the MXE Git revision so as to catch security fixes, but at least I have the choice.

Build project

With this I can start a fresh Docker container and run the compilation process inside, as a non-privileged user just in case.

I set SOURCE_DATE_EPOCH to the release date at 00:00UTC, or the Git revision date for snapshots.

This rebuild framework is excluded from the source tarball, so the latter stays stable during build tuning. I see it as a post-release tool, hence not part of the release (just like distros packaging).

The generated .exe is statically compiled which helps getting a stable result (only the few needed parts of dependencies get included in the final executable).

Since MXE is not itself reproducible, differences may come from MXE itself, which may need fixes as explained above. This is annoying and hopefully will be easier once they ship GCC 6. To debug, I unzip the different .zip-s, run upx -d on my .exe-s, and run diffoscope.

I use various tricks (stable ordering, stable timestamping, metadata cleaning) to make the final .zip reproducible as well. Post-processing tools would be an alternative if they were fixed.

reprotest

Any process is moot if it can't be tested.

reprotest helps by running 2 successive compilations with varying factors (build path, file system ordering, etc.), and check that we get the exact same binary. As a trade-off, I don't run it on the full build environment, just on the project itself. I plugged reprotest to the Docker container by running a sshd on the fly. I have another Makefile target to run reprotest in my host system where I also installed MXE, so I can compare results and sometimes find differences (e.g. due to using a different filesystem). In addition this is faster for debugging since changing anything in the early Dockerfile steps means a full 1h rebuild.

Copy-on-release

At release time I make a copy of the directory that contains all the self-contained build scripts and the Dockerfile, and rename it after the new release version. I'll continue improving upon the reproducible build system in the 'snapshot' directory, but the versioned directory will stay as-is and can be used in the future to get the same bit-for-bit identical .exe anytime.

This is the technique I used in my Android Rebuilds project.

Other platforms

For now I don't control the build process for other platforms: distros have their own autobuilders, so does F-Droid. Their problem :P

I have plans to make reproducible GNU/Linux AppImage-based builds in the future though. I should be able to use a finer-grained, per-dependency process rather than the huge MXE-based chunk I currently do.

I hope this helps other projects provide reproducible binaries directly! Comments/suggestions welcome.

02 June, 2018 05:12PM

May 30, 2018

FSF News

Minifree Libreboot X200 Tablet now FSF-certified to Respect Your Freedom

Libreboot X200 tablet

This is the third device from Minifree Ltd to receive RYF certification. The Libreboot X200 Tablet is a fully free laptop/tablet hybrid that comes with Trisquel and Libreboot pre-installed. The device is similar to the previously certified Libreboot X200 laptop, but with a built-in tablet that enables users to draw, sign documents, or make handwritten notes. Like all devices from Minifree Ltd., purchasing the Libreboot X200 Tablet helps to fund development of Libreboot, the free boot firmware that currently runs on all RYF-certified laptops. It may be purchased at https://minifree.org/product/libreboot-x200-tablet/, and comes with free technical support included.

"We need RYF-certified laptops of all shapes, sizes, and form factors, and for them to be available from multiple sources around the world so users have options. This is a welcome expansion of those options, as well as an opportunity for people to help unlock future possibilities by funding Libreboot development," said the FSF's executive director, John Sullivan.

"The Libreboot X200 Tablet is another great addition to the line-up of freedom respecting devices from Minifree, which has a long history of developing the software and tools that make RYF-certifiable devices possible," said the FSF's licensing & compliance manager, Donald Robertson, III.

"I'm happy that the FSF is now endorsing yet another Minifree product. Minifree's mission is to provide affordable, libre systems that are easy to use and therefore accessible to the public. Minifree's purpose is to provide funding to the Libreboot project, supporting it fully, and I'm delighted to once again cooperate with the FSF on this most noble goal," said Leah Rowe, Founder & CEO, Minifree Ltd.

To learn more about the Respects Your Freedom certification program, including details on the certification of the Libreboot X200 Tablet, please visit https://fsf.org/ryf.

Hardware sellers interested in applying for certification can consult https://www.fsf.org/resources/hw/endorsement/criteria.

About the Free Software Foundation

The Free Software Foundation, founded in 1985, is dedicated to promoting computer users' right to use, study, copy, modify, and redistribute computer programs. The FSF promotes the development and use of free (as in freedom) software -- particularly the GNU operating system and its GNU/Linux variants -- and free documentation for free software. The FSF also helps to spread awareness of the ethical and political issues of freedom in the use of software, and its Web sites, located at https://fsf.org and https://gnu.org, are an important source of information about GNU/Linux. Donations to support the FSF's work can be made at https://donate.fsf.org. Its headquarters are in Boston, MA, USA.

More information about the FSF, as well as important information for journalists and publishers, is at https://www.fsf.org/press.

About Minifree Ltd

Minifree Ltd, trading as Ministry of Freedom (formerly trading as Gluglug), is a UK supplier shipping worldwide that sells GNU/Linux-libre computers with the Libreboot firmware and Trisquel GNU/Linux-libre operating system pre-installed.

Libreboot is a free BIOS/UEFI replacement, offering faster boot speeds, better security, and many advanced features compared to most proprietary boot firmware.

Media Contacts

Donald Robertson, III
Licensing and Compliance Manager
Free Software Foundation
+1 (617) 542 5942
licensing@fsf.org

Leah Rowe
Founder & CEO
Minifree Ltd
+44 7442 425 835
info@gluglug.org.uk

Image Copyright 2018 Minifree Ltd, Licensed under Creative Commons Attribution-ShareAlike 4.0.

30 May, 2018 02:59PM

Parabola GNU/Linux-libre

Server outage

One of our servers, winston.parabola.nu, is currently offline for hardware reasons. It has been offline since 2018-05-30 00:15 UTC. Hang tight, it should be back online soon.

30 May, 2018 03:11AM by Luke Shumaker

May 28, 2018

bison @ Savannah

bison-3.0.5 released [stable]

28 May, 2018 05:04AM by Akim Demaille

May 26, 2018

GNU Guix

Customize GuixSD: Use Stock SSH Agent Everywhere!

I frequently use SSH. Since I don't like typing my password all the time, I use an SSH agent. Originally I used the GNOME Keyring as my SSH agent, but recently I've switched to using the ssh-agent from OpenSSH. I accomplished this by doing the following two things:

  • Replace the default GNOME Keyring with a custom-built version that disables the SSH agent feature.

  • Start my desktop session with OpenSSH's ssh-agent so that it's always available to any applications in my desktop session.

Below, I'll show you in detail how I did this. In addition to being useful for anyone who wants to use OpenSSH's ssh-agent in GuixSD, I hope this example will help to illustrate how GuixSD enables you to customize your entire system to be just the way you want it!

The Problem: GNOME Keyring Can't Handle My SSH Keys

On GuixSD, I like to use the GNOME desktop environment. GNOME is just one of the various desktop environments that GuixSD supports. By default, the GNOME desktop environment on GuixSD comes with a lot of goodies, including the GNOME Keyring, which is GNOME's integrated solution for securely storing secrets, passwords, keys, and certificates.

The GNOME Keyring has many useful features. One of those is its SSH Agent feature. This feature allows you to use the GNOME Keyring as an SSH agent. This means that when you invoke a command like ssh-add, it will add the private key identities to the GNOME Keyring. Usually this is quite convenient, since it means that GNOME users basically get an SSH agent for free!

Unfortunately, up until GNOME 3.28 (the current release), the GNOME Keyring's SSH agent implementation was not as complete as the stock SSH agent from OpenSSH. As a result, earlier versions of GNOME Keyring did not support many use cases. This was a problem for me, since GNOME Keyring couldn't read my modern SSH keys. To make matters worse, by design the SSH agent for GNOME Keyring and OpenSSH both use the same environment variables (e.g., SSH_AUTH_SOCK). This makes it difficult to use OpenSSH's ssh-agent everywhere within my GNOME desktop environment.

Happily, starting with GNOME 3.28, GNOME Keyring delegates all SSH agent functionality to the stock SSH agent from OpenSSH. They have removed their custom implementation entirely. This means that today, I could solve my problem simply by using the most recent version of GNOME Keyring. I'll probably do just that when the new release gets included in Guix. However, when I first encountered this problem, GNOME 3.28 hadn't been released yet, so the only option available to me was to customize GNOME Keyring or remove it entirely.

In any case, I'm going to show you how I solved this problem by modifying the default GNOME Keyring from the Guix package collection. The same ideas can be used to customize any package, so hopefully it will be a useful example. And what if you don't use GNOME, but you do want to use OpenSSH's ssh-agent? In that case, you may still need to customize your GuixSD system a little bit. Let me show you how!

The Solution: ~/.xsession and a Custom GNOME Keyring

The goal is to make OpenSSH's ssh-agent available everywhere when we log into our GNOME desktop session. First, we must arrange for ssh-agent to be running whenever we're logged in.

There are many ways to accomplish this. For example, I've seen people implement shell code in their shell's start-up files which basically manages their own ssh-agent process. However, I prefer to just start ssh-agent once and not clutter up my shell's start-up files with unnecessary code. So that's what we're going to do!

Launch OpenSSH's ssh-agent in Your ~/.xsession

By default, GuixSD uses the SLiM desktop manager. When you log in, SLiM presents you with a menu of so-called "desktop sessions", which correspond to the desktop environments you've declared in your operating system declaration. For example, if you've added the gnome-desktop-service to your operating system declaration, then you'll see an option for GNOME at the SLiM login screen.

You can further customize your desktop session with the ~/.xsession file. The contract for this file in GuixSD is the same as it is for many GNU/Linux distributions: if it exists, then it will be executed. The arguments passed to it will be the command line invocation that would normally be executed to start the desktop session that you selected from the SLiM login screen. Your ~/.xsession is expected to do whatever is necessary to customize and then start the specified desktop environment. For example, when you select GNOME from the SLiM login screen, your ~/.xsession file will basically be executed like this (for the exact execution mechanism, please refer to the source code linked above):

$ ~/.xsession gnome-session

The upshot of all this is that the ~/.xsession is an ideal place to set up your SSH agent! If you start an SSH agent in your ~/.xsession file, you can have the SSH agent available everywhere, automatically! Check it out: Put this into your ~/.xsession file, and make the file executable:

#!/run/current-system/profile/bin/bash
exec ssh-agent "$@"

When you invoke ssh-agent in this way, it executes the specified program in an environment where commands like ssh-add just work. It does this by setting environment variables such as SSH_AUTH_SOCK, which programs like ssh-add find and use automatically. Because GuixSD allows you to customize your desktop session like this, you can use any SSH agent you want in any desktop environments that you want, automatically!

Of course, if you're using GNOME Keyring version 3.27 or earlier (like I was), then this isn't quite enough. In that case, the SSH agent feature of GNOME Keyring will override the environment variables set by OpenSSH's ssh-agent, so commands like ssh-add will wind up communicating with the GNOME Keyring instead of the ssh-agent you launched in your ~/.xsession. This is bad because, as previously mentioned, GNOME Keyring version 3.27 or earlier doesn't support as many use cases as OpenSSH's ssh-agent.

How can we work around this problem?

Customize the GNOME Keyring

One heavy-handed solution would be to remove GNOME Keyring entirely. That would work, but then you would lose out on all the other great features that it has to offer. Surely we can do better!

The GNOME Keyring documentation explains that one way to disable the SSH agent feature is to include the --disable-ssh-agent configure flag when building it. Thankfully, Guix provides some ways to customize software in exactly this way!

Conceptually, we "just" have to do the following two things:

  • Customize the existing gnome-keyring package.

  • Make the gnome-desktop-service use our custom gnome-keyring package.

Create a Custom GNOME Keyring Package

Let's begin by defining a custom gnome-keyring package, which we'll call gnome-keyring-sans-ssh-agent. With Guix, we can do this in less than ten lines of code:

(define-public gnome-keyring-sans-ssh-agent
  (package
    (inherit gnome-keyring)
    (name "gnome-keyring-sans-ssh-agent")
    (arguments
     (substitute-keyword-arguments
         (package-arguments gnome-keyring)
       ((#:configure-flags flags)
        `(cons "--disable-ssh-agent" ,flags))))))

Don't worry if some of that code is unclear at first. I'll clarify it now!

In Guix, a <package> record like the one above is defined by a macro called define-record-type* (defined in the file guix/records.scm in the Guix source). It's similar to an SRFI-9 record. The inherit feature of this macro is very useful: it creates a new copy of an existing record, overriding specific fields in the new copy as needed.

In the above, we define gnome-keyring-sans-ssh-agent to be a copy of the gnome-keyring package, and we use inherit to change the name and arguments fields in that new copy. We also use the substitute-keyword-arguments macro (defined in the file guix/utils.scm in the Guix source) to add --disable-ssh-agent to the list of configure flags defined in the gnome-keyring package. The effect of this is to define a new GNOME Keyring package that is built exactly the same as the original, but in which the SSH agent is disabled.

I'll admit this code may seem a little opaque at first, but all code does when you first learn it. Once you get the hang of things, you can customize packages any way you can imagine. If you want to learn more, you should read the docstrings for the define-record-type* and substitute-keyword-arguments macros in the Guix source code. It's also very helpful to grep the source code to see examples of how these macros are used in practice. For example:

$ # Search the currently installed Guix for the current user.
$ grep -r substitute-keyword-arguments ~/.config/guix/latest
$ # Search the Guix Git repository, assuming you've checked it out here.
$ grep -r substitute-keyword-arguments ~/guix

Use the Custom GNOME Keyring Package

OK, we've created our own custom GNOME Keyring package. Great! Now, how do we use it?

In GuixSD, the GNOME desktop environment is treated as a system service. To make GNOME use our custom GNOME Keyring package, we must somehow customize the gnome-desktop-service (defined in the file gnu/services/desktop.scm) to use our custom package. How do we customize a service? Generally, the answer depends on the service. Thankfully, many of GuixSD's services, including the gnome-desktop-service, follow a similar pattern. In this case, we "just" need to pass a custom <gnome-desktop-configuration> record to the gnome-desktop-service procedure in our operating system declaration, like this:

(operating-system

  ...

  (services (cons*
             (gnome-desktop-service
              #:config my-gnome-desktop-configuration)
             %desktop-services)))

Here, the cons* procedure just adds the GNOME desktop service to the %desktop-services list, returning the new list. For details, please refer to the Guile manual.

Now the question is: what should my-gnome-desktop-configuration be? Well, if we examine the definition of this record type in the Guix source, we see the following:

(define-record-type* <gnome-desktop-configuration> gnome-desktop-configuration
  make-gnome-desktop-configuration
  gnome-desktop-configuration?
  (gnome-package gnome-package (default gnome)))

The gnome package referenced here is a "meta" package: it exists only to aggregate many GNOME packages together, including gnome-keyring. To see its definition, we can simply invoke guix edit gnome, which opens the file where the package is defined:

(define-public gnome
  (package
    (name "gnome")
    (version (package-version gnome-shell))
    (source #f)
    (build-system trivial-build-system)
    (arguments '(#:builder (mkdir %output)))
    (propagated-inputs
     ;; TODO: Add more packages according to:
     ;;       <https://packages.debian.org/jessie/gnome-core>.
     `(("adwaita-icon-theme"        ,adwaita-icon-theme)
       ("baobab"                    ,baobab)
       ("font-cantarell"            ,font-cantarell)
       [... many packages omitted for brevity ...]
       ("gnome-keyring"             ,gnome-keyring)
       [... many packages omitted for brevity ...]
    (synopsis "The GNU desktop environment")
    (home-page "https://www.gnome.org/")
    (description
     "GNOME is the graphical desktop for GNU.  It includes a wide variety of
applications for browsing the web, editing text and images, creating
documents and diagrams, playing media, scanning, and much more.")
    (license license:gpl2+)))

Apart from being a little long, this is just a normal package definition. We can see that gnome-keyring is included in the list of propagated-inputs. So, we need to create a replacement for the gnome package that uses our gnome-keyring-sans-ssh-agent instead of gnome-keyring. The following package definition accomplishes that:

(define-public gnome-sans-ssh-agent
  (package
    (inherit gnome)
    (name "gnome-sans-ssh-agent")
    (propagated-inputs
     (map (match-lambda
            ((name package)
             (if (equal? name "gnome-keyring")
                 (list name gnome-keyring-sans-ssh-agent)
                 (list name package))))
          (package-propagated-inputs gnome)))))

As before, we use inherit to create a new copy of the gnome package that overrides the original name and propagated-inputs fields. Since Guix packages are defined using good old Scheme, we can use existing language features like map and match-lambda to manipulate the list of propagated inputs. The effect of the above is to create a new package that is the same as the gnome package but uses gnome-keyring-sans-ssh-agent instead of gnome-keyring.

Now that we have gnome-sans-ssh-agent, we can create a custom <gnome-desktop-configuration> record and pass it to the gnome-desktop-service procedure as follows:

(operating-system

  ...

  (services (cons*
             (gnome-desktop-service
              #:config (gnome-desktop-configuration
                        (gnome-package gnome-sans-ssh-agent)))
             %desktop-services)))

Wrapping It All Up

Finally, you need to run the following commands as root to create and boot into the new system generation (replace MY-CONFIG with the path to the customized operating system configuration file):

# guix system reconfigure MY-CONFIG
# reboot

After you log into GNOME, any time you need to use SSH, the stock SSH agent from OpenSSH that you started in your ~/.xsession file will be used instead of GNOME Keyring's SSH agent. It just works! Note that it still works even if you select a non-GNOME desktop session (like XFCE) at the SLiM login screen, since the ~/.xsession file is not tied to any particular desktop session.

In the unfortunate event that something went wrong and things just aren't working when you reboot, don't worry: with GuixSD, you can safely roll back to the previous system generation via the usual mechanisms. For example, you can run this from the command line to roll back:

# guix system roll-back
# reboot

This is one of the great benefits that comes from the fact that Guix follows the functional software deployment model. However, note that because the ~/.xsession file (like many files in your home directory) is not managed by Guix, you must manually undo the changes that you made to it in order to roll back fully.

Conclusion

I hope this helps give you some ideas for how you can customize your own GuixSD system to make it exactly what you want it to be. Not only can you customize your desktop session via your ~/.xsession file, but Guix also provides tools for you to modify any of the default packages or services to suit your specific needs.

Happy hacking!

Notices

CC0

To the extent possible under law, Chris Marusich has waived all copyright and related or neighboring rights to this article, "Customize GuixSD: Use Stock SSH Agent Everywhere!". This work is published from: United States.

The views expressed in this article are those of Chris Marusich and do not necessarily reflect the views of his past, present, or future employers.

About GNU Guix

GNU Guix is a transactional package manager for the GNU system. The Guix System Distribution or GuixSD is an advanced distribution of the GNU system that relies on GNU Guix and respects the user's freedom.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. Guix uses low-level mechanisms from the Nix package manager, except that packages are defined as native Guile modules, using extensions to the Scheme language. GuixSD offers a declarative approach to operating system configuration management, and is highly customizable and hackable.

GuixSD can be used on i686, x86_64, and armv7 machines. It is also possible to use Guix on top of an already installed GNU/Linux system, including on mips64el and aarch64.

26 May, 2018 03:00PM by Chris Marusich

health @ Savannah

openSUSE donates more Raspberry Pis to the GNU Health project

Dear community:

Today, in the context of the openSUSE conference 2018, oSC18, openSUSE donated 10 Raspberry Pis to the GNU health project.

GNU Health Embedded is a project that delivers GNU Health on single-board machines, like the Raspberry Pi.

The #GNUHealthEmbedded project delivers a ready-to-run, full-blown installation of GNU Health: the GNU Health kernel, the database, and even a demo database that can be installed.

The user only needs to point the client to the server address.

The new Raspberry Pis will include:

  • Latest openSUSE Leap 15
  • GNU Health 3.4 server
  • Offline documentation
  • Lab interfaces

They will also include the Federation-related packages, so each device can act as a relay or as a node in the distributed, federated model.

Thank you so much to openSUSE for their generous donation and commitment to the GNU Health project, as an active member and as a sponsor!

Main news from openSUSE portal:
https://news.opensuse.org/2018/05/26/opensuse-donates-10-more-raspberry-pis-to-gnu-health/

26 May, 2018 01:02PM by Luis Falcon

May 25, 2018

Sylvain Beucler

Testing GNU FreeDink in your browser

Ever wanted to try this weird GNU FreeDink game, but never had the patience to install it?
Today, you can play it with a single click :)

Play GNU FreeDink

This is a first version that can be polished further but it works quite well.
This is the original C/C++/SDL2 code with a few tweaks, cross-compiled to WebAssembly (and an alternate version in asm.js) with emscripten.
Nothing brand new I know, but things are getting smoother, and WebAssembly is definitely a performance boost.

I like distributed and autonomous tools, so I'm generally not inclined to web-based solutions.
In this case however, this is a local version of the game. There's no server side. Savegames are in your browser local storage. Even importing D-Mods (game add-ons) is performed purely locally in the in-memory virtual FS with a custom .tar.bz2 extractor cross-compiled to WebAssembly.
And you don't have to worry about all these Store policies (and Distros policies^W^W^W).

I'm interested in feedback on how well this works for you in your browsers and on your devices.

I'm also interested in tips on how to place LibreJS tags - this is all free JavaScript.

25 May, 2018 11:46PM

May 23, 2018

GNUnet News

May 22, 2018

parallel @ Savannah

GNU Parallel 20180522 ('Great March of Return') released

GNU Parallel 20180522 ('Great March of Return') has been released. It is available for download at: http://ftpmirror.gnu.org/parallel/

Quote of the month:

Gnu parallel seems to work fine.
Sucking up the life out of all cores, poor machine ahahahahah
-- osxreverser

New in this release:

  • --tty allows for more programs accessing /dev/tty in parallel. Some programs require tty access without using it.
  • env_parallel --session will record names in current environment in $PARALLEL_IGNORED_NAMES and exit. It is only used with env_parallel, and can work like --record-env but in a single session.
  • Bug fixes and man page updates.

GNU Parallel - For people who live life in the parallel lane.

About GNU Parallel

GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.
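As a hedged sketch of that xargs affinity (the file names are made up, and the parallel line assumes GNU Parallel is installed; the guard skips it otherwise):

```shell
# Replace a sequential loop such as:
#   for x in a b c; do echo compress "$x"; done
# with an xargs-style pipeline:
printf '%s\n' a b c | xargs -n1 echo compress
# The same pipeline with GNU Parallel runs the jobs concurrently,
# while keeping the output order as if they had run sequentially.
if command -v parallel >/dev/null 2>&1; then
    printf '%s\n' a b c | parallel echo compress
fi
```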

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with: (wget -O - pi.dk/3 || curl pi.dk/3/) | bash

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your commandline will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2018): GNU Parallel 2018, April 2018, https://doi.org/10.5281/zenodo.1146014.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/ Identi.ca/Google+/Twitter/Facebook/Linkedin/mailing lists
  • Get the merchandise https://gnuparallel.threadless.com/designs/gnu-parallel
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference

If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)

If GNU Parallel saves you money:

  • (Have your company) donate to FSF https://my.fsf.org/donate/

About GNU SQL

GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.

About GNU Niceload

GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.

22 May, 2018 10:57PM by Ole Tange

May 21, 2018

Andy Wingo

correct or inotify: pick one

Let's say you decide that you'd like to see what some other processes on your system are doing to a subtree of the file system. You don't want to have to change how those processes work -- you just want to see what files those processes create and delete.

One approach would be to just scan the file-system tree periodically, enumerating its contents. But when the file system tree is large and the change rate is low, that's not an optimal thing to do.

Fortunately, Linux provides an API to allow a process to receive notifications on file-system change events, called inotify. So you open up the inotify(7) manual page, and are greeted with this:

With careful programming, an application can use inotify to efficiently monitor and cache the state of a set of filesystem objects. However, robust applications should allow for the fact that bugs in the monitoring logic or races of the kind described below may leave the cache inconsistent with the filesystem state. It is probably wise to do some consistency checking, and rebuild the cache when inconsistencies are detected.

It's not exactly reassuring, is it? I mean, "you had one job" and all.

Reading down a bit farther, I thought that with some "careful programming", I could get by. After a day of trying, I am now certain that it is impossible to build a correct recursive directory monitor with inotify, and I am not even sure that "good enough" solutions exist.

pitfall the first: buffer overflow

Fundamentally, inotify races the monitoring process with all other processes on the system. Events are delivered to the monitoring process via a fixed-size buffer that can overflow, and the monitoring process provides no back-pressure on the system's rate of filesystem modifications. With inotify, you have to be ready to lose events.

This I think is probably the easiest limitation to work around. The kernel can let you know when the buffer overflows, and you can tweak the buffer size. Still, it's a first indication that perfect is not possible.

pitfall the second: now you see it, now you don't

This one is the real kicker. Say you get an event that says that a file "frenemies.txt" has been created in the directory "/contacts/". You go to open the file -- but is it still there? By the time you get around to looking for it, it could have been deleted, or renamed, or maybe even created again or replaced! This is a TOCTTOU race, built-in to the inotify API. It is literally impossible to use inotify without this class of error.

The canonical solution to this kind of issue in the kernel is to use file descriptors instead. Instead of or possibly in addition to getting a name with the file change event, you get a descriptor to a (possibly-unlinked) open file, which you would then be responsible for closing. But that's not what inotify does. Oh well!

pitfall the third: race conditions between inotify instances

When you inotify a directory, you get change notifications for just that directory. If you want change notifications for subdirectories, you need to open more inotify instances and poll on them all. However, now you have N² problems: as poll and the like return an unordered set of readable file descriptors, each with its own ordering, you no longer have access to a linear order in which changes occurred.

It is impossible to build a recursive directory watcher that definitively says "ok, first /contacts/frenemies.txt was created, then /contacts was renamed to /peeps, ..." because you have no ordering between the different watches. You don't know that there was ever even a time that /contacts/frenemies.txt was an accessible file name; it could have been only ever openable as /peeps/frenemies.txt.

Of course, this is the most basic ordering problem. If you are building a monitoring tool that actually wants to open files -- good luck bubster! It literally cannot be correct. (It might work well enough, of course.)

reflections

As far as I am aware, inotify came out to address the needs of desktop search tools like the belated Beagle (11/10 good pupper just trying to get his pup on). Especially in the days of spinning metal, grovelling over the whole hard-drive was a real non-starter, especially if the search database was to be kept up-to-date.

But after looking into inotify, I start to see why someone at Google said that desktop search was in some ways harder than web search -- I mean we all struggle to find files on our own machines, even now, 15 years after the whole dnotify/inotify thing started. Part of it is that, given the choice between supporting reliable, fool-proof file system indexes on the one hand, and overclocking the IOPS benchmarks on the other, the kernel gave us inotify. I understand it, but inotify still sucks.

I dunno about you all but whenever I've had to document such an egregious uncorrectable failure mode as any of the ones in the inotify manual, I have rewritten the software instead. In that spirit, I hope that some day we shall send inotify to the pet cemetery, to rest in peace beside Beagle.

21 May, 2018 02:29PM by Andy Wingo

nano @ Savannah

GNU nano 2.9.7 was released

Accumulated changes over the last five releases include: the ability to bind a key to a string (text and/or escape sequences), a default color of bright white on red for error messages, an improvement to the way the Scroll-Up and Scroll-Down commands work, and the new --afterends option to make Ctrl+Right (next word) stop at the end of a word instead of at the beginning. Check it out.

21 May, 2018 10:36AM by Benno Schulenberg

May 16, 2018

Andy Wingo

lightweight concurrency in lua

Hello, all! Today I'd like to share some work I have done recently as part of the Snabb user-space networking toolkit. Snabb is mainly about high-performance packet processing, but it also needs to communicate with management-oriented parts of network infrastructure. These communication needs are performed by a dedicated manager process, but that process has many things to do, and can't afford to make blocking operations.

Snabb is written in Lua, which doesn't have built-in facilities for concurrency. What we'd like is to have fibers. Fortunately, Lua's coroutines are powerful enough to implement fibers. Let's do that!

fibers in lua

First we need a scheduling facility. Here's the smallest possible scheduler: simply a queue of tasks and a function to run those tasks.

local task_queue = {}

function schedule_task(thunk)
   table.insert(task_queue, thunk)
end

function run_tasks()
   local queue = task_queue
   task_queue = {}
   for _,thunk in ipairs(queue) do thunk() end
end

For our purposes, a task is just a function that will be called with no arguments.

Now let's build fibers. This is easier than you might think!

local current_fiber = false

function spawn_fiber(fn)
   local fiber = coroutine.create(fn)
   schedule_task(function () resume_fiber(fiber) end)
end

function resume_fiber(fiber, ...)
   current_fiber = fiber
   local ok, err = coroutine.resume(fiber, ...)
   current_fiber = nil
   if not ok then
      print('Error while running fiber: '..tostring(err))
   end
end

function suspend_current_fiber(block, ...)
   -- The block function should arrange to reschedule
   -- the fiber when it becomes runnable.
   block(current_fiber, ...)
   return coroutine.yield()
end

Here, a fiber is simply a coroutine underneath. Suspending a fiber suspends the coroutine. Resuming a fiber runs the coroutine. If you're unfamiliar with coroutines, or coroutines in Lua, maybe have a look at the lua-users wiki page on the topic.

The difference between a fibers facility and just coroutines is that with fibers, you have a scheduler as well. Very much like Scheme's call-with-prompt, coroutines are one of those powerful language building blocks that should rarely be used directly; concurrent programming needs more structure than what Lua offers.

If you're following along, it's probably worth it here to think how you would implement yield based on these functions. A yield implementation should yield control to the scheduler, and resume the fiber on the next scheduler turn. The answer is here.
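One possible answer (a sketch of my own, not necessarily the author's): suspend the current fiber with a block function that immediately reschedules it, so it resumes on the next scheduler turn.

```lua
-- Yield control to the scheduler; the fiber becomes runnable
-- again on the next call to run_tasks().
function yield_current_fiber()
   return suspend_current_fiber(function (fiber)
      schedule_task(function () resume_fiber(fiber) end)
   end)
end
```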

communication

Once you have fibers and a scheduler, you have concurrency, which means that if you're not careful, you have a mess. Here I think the Go language got the essence of the idea exactly right: Do not communicate by sharing memory; instead, share memory by communicating.

Even though Lua doesn't support multiple machine threads running concurrently, concurrency between fibers can still be fraught with bugs. Tony Hoare's Communicating Sequential Processes showed that we can avoid a class of these bugs by treating communication as a first-class concept.

Happily, the Concurrent ML project showed that it's possible to build these first-class communication facilities as a library, provided the language you are working in has threads of some kind, and fibers are enough. Last year I built a Concurrent ML library for Guile Scheme, and when in Snabb we had a similar need, I ported that code over to Lua. As it's a new take on the problem in a different language, I think I've been able to simplify things even more.

So let's take a crack at implementing Concurrent ML in Lua. In CML, the fundamental primitive for communication is the operation. An operation represents the potential for communication. For example, if you have a channel, it would have methods to return "get operations" and "put operations" on that channel. Actually receiving or sending a message on a channel occurs by performing those operations. One operation can be performed many times, or not at all.

Compared to a system like Go, for example, there are two main advantages of CML. The first is that CML allows non-deterministic choice between a number of potential operations in a generic way. For example, you can construct an operation that, when performed, will either get on one channel or wait for a condition variable to be signalled, whichever comes first. In Go, you can only select between operations on channels.

The other interesting part of CML is that operations are built from a uniform protocol, and so users can implement new kinds of operations. Compare again to Go where all you have are channels, and nothing else.

The CML operation protocol consists of three related functions: try, which attempts to directly complete an operation in a non-blocking way; block, which is called after a fiber has suspended, and which arranges to resume the fiber when the operation completes; and wrap, which is called on the result of a successfully performed operation.

In Lua, we can call this an implementation of an operation, and create it like this:

function new_op_impl(try, block, wrap)
   return { try=try, block=block, wrap=wrap }
end

Now let's go ahead and write the guts of CML: the operation implementation. We'll represent an operation as a Lua object with two methods. The perform method will attempt to perform the operation, and return the resulting value. If the operation can complete immediately, the call to perform will return directly. Otherwise, perform will suspend the current fiber and arrange to continue only when the operation completes.

The wrap method "decorates" an operation, returning a new operation that, if and when it completes, will "wrap" the result of the completed operation with a function, by applying the function to the result. It's useful to distinguish the sub-operations of a non-deterministic choice from each other.

Here our new_op function will take an array of operation implementations and return an operation that, when performed, will synchronize on the first available operation. As you can see, it already has the equivalent of Go's select built in.

function new_op(impls)
   local op = { impls=impls }
   
   function op.perform()
      for _,impl in ipairs(impls) do
         local success, val = impl.try()
         if success then return impl.wrap(val) end
      end
      local function block(fiber)
         local suspension = new_suspension(fiber)
         for _,impl in ipairs(impls) do
            impl.block(suspension, impl.wrap)
         end
      end
      local wrap, val = suspend_current_fiber(block)
      return wrap(val)
   end

   function op.wrap(f)
      local wrapped = {}
      for _, impl in ipairs(impls) do
         local function wrap(val)
            return f(impl.wrap(val))
         end
         local impl = new_op_impl(impl.try, impl.block, wrap)
         table.insert(wrapped, impl)
      end
      return new_op(wrapped)
   end

   return op
end

There's only one thing missing there, which is new_suspension. When you go to suspend a fiber because none of the operations that it's trying to do can complete directly (i.e. all of the try functions of its impls returned false), at that point the corresponding block functions will publish the fact that the fiber is waiting. However the fiber only waits until the first operation is ready; subsequent operations becoming ready should be ignored. The suspension is the object that manages this state.

function new_suspension(fiber)
   local waiting = true
   local suspension = {}
   function suspension.waiting() return waiting end
   function suspension.complete(wrap, val)
      assert(waiting)
      waiting = false
      local function resume()
         resume_fiber(fiber, wrap, val)
      end
      schedule_task(resume)
   end
   return suspension
end

As you can see, the suspension's complete method is also the bit that actually arranges to resume a suspended fiber.

Finally, just to round out the implementation, here's a function implementing non-deterministic choice from among a number of sub-operations:

function choice(...)
   local impls = {}
   for _, op in ipairs({...}) do
      for _, impl in ipairs(op.impls) do
         table.insert(impls, impl)
      end
   end
   return new_op(impls)
end

on cml

OK, I'm sure this seems a bit abstract at this point. Let's implement something concrete in terms of these primitives: channels.

Channels expose two similar but different kinds of operations: put operations, which try to send a value, and get operations, which try to receive a value. If there's a sender already waiting to send when we go to perform a get_op, the operation continues directly, and we resume the sender; otherwise the receiver publishes its suspension to a queue. The put_op case is similar.

Finally we add some synchronous put and get convenience methods, in terms of their corresponding CML operations.

function new_channel()
   local ch = {}
   -- Queues of suspended fibers waiting to get or put values
   -- via this channel.
   local getq, putq = {}, {}

   local function default_wrap(val) return val end
   local function is_empty(q) return #q == 0 end
   local function peek_front(q) return q[1] end
   local function pop_front(q) return table.remove(q, 1) end
   local function push_back(q, x) q[#q+1] = x end

   -- Since a suspension could complete in multiple ways
   -- because of non-deterministic choice, it could be that
   -- suspensions on a channel's putq or getq are already
   -- completed.  This helper removes already-completed
   -- suspensions.
   local function remove_stale_entries(q)
      local i = 1
      while i <= #q do
         if q[i].suspension.waiting() then
            i = i + 1
         else
            table.remove(q, i)
         end
      end
   end

   -- Make an operation that if and when it completes will
   -- rendezvous with a receiver fiber to send VAL over the
   -- channel.  Result of performing operation is nil.
   function ch.put_op(val)
      local function try()
         remove_stale_entries(getq)
         if is_empty(getq) then
            return false, nil
         else
            local remote = pop_front(getq)
            remote.suspension.complete(remote.wrap, val)
            return true, nil
         end
      end
      local function block(suspension, wrap)
         remove_stale_entries(putq)
         push_back(putq, {suspension=suspension, wrap=wrap, val=val})
      end
      return new_op({new_op_impl(try, block, default_wrap)})
   end

   -- Make an operation that if and when it completes will
   -- rendezvous with a sender fiber to receive one value from
   -- the channel.  Result is the value received.
   function ch.get_op()
      local function try()
         remove_stale_entries(putq)
         if is_empty(putq) then
            return false, nil
         else
            local remote = pop_front(putq)
            remote.suspension.complete(remote.wrap)
            return true, remote.val
         end
      end
      local function block(suspension, wrap)
         remove_stale_entries(getq)
         push_back(getq, {suspension=suspension, wrap=wrap})
      end
      return new_op({new_op_impl(try, block, default_wrap)})
   end

   function ch.put(val) return ch.put_op(val).perform() end
   function ch.get()    return ch.get_op().perform()    end

   return ch
end

a wee example

You might be wondering what it's like to program with channels in Lua, so here's a little example that shows a prime sieve based on channels. It's not a great example of concurrency in that it's not an inherently concurrent problem, but it's cute to show computations in terms of infinite streams.

function prime_sieve(count)
   local function sieve(p, rx)
      local tx = new_channel()
      spawn_fiber(function ()
         while true do
            local n = rx.get()
            if n % p ~= 0 then tx.put(n) end
         end
      end)
      return tx
   end

   local function integers_from(n)
      local tx = new_channel()
      spawn_fiber(function ()
         while true do
            tx.put(n)
            n = n + 1
         end
      end)
      return tx
   end

   local function primes()
      local tx = new_channel()
      spawn_fiber(function ()
         local rx = integers_from(2)
         while true do
            local p = rx.get()
            tx.put(p)
            rx = sieve(p, rx)
         end
      end)
      return tx
   end

   local done = false
   spawn_fiber(function()
      local rx = primes()
      for i=1,count do print(rx.get()) end
      done = true
   end)

   while not done do run_tasks() end
end

Here you also see an example of running the scheduler in the last line.

where next?

Let's put this into perspective: in a couple hundred lines of code, we've gone from minimal Lua to a language with lightweight multitasking, extensible CML-based operations, and CSP-style channels; truly a delight.

There are a number of possible ways to extend this code. One of them is to implement true multithreading, if the language you are working in supports that. In that case there are some small protocol modifications to take into account; see the notes on the Guile CML implementation and especially the Manticore Parallel CML project.

The implementation above is pleasantly small, but it could be faster with the choice of more specialized data structures. I think interested readers probably see a number of opportunities there.

In a library, you might want to avoid the global task_queue and implement nested or multiple independent schedulers, and of course in a parallel situation you'll want core-local schedulers as well.

The implementation above has no notion of time. What we did in the Snabb implementation of fibers was to implement a timer wheel, inspired by Juho Snellman's Ratas, and then add that timer wheel as a task source to Snabb's scheduler. In Snabb, every time the equivalent of run_tasks() is called, a scheduler asks its sources to schedule additional tasks. The timer wheel implementation schedules expired timers. It's straightforward to build CML timeout operations in terms of timers.

Additionally, your system probably has other external sources of communication, such as sockets. The trick to integrating sockets into fibers is to suspend the current fiber whenever an operation on a file descriptor would block, and arrange to resume it when the operation can proceed. Here's the implementation in Snabb.

The only difficult bit with getting nice nonblocking socket support is that you need to be able to suspend the calling thread when you see the EWOULDBLOCK condition, and for coroutines that is often only possible if you implemented the buffered I/O yourself. In Snabb that's what we did: we implemented a compatible replacement for Lua's built-in streams, in Lua. That lets us handle EWOULDBLOCK conditions in a flexible manner. Integrating epoll as a task source also lets us sleep when there are no runnable tasks.

Likewise in the Snabb context, we are also working on a TCP implementation. In that case you want to structure TCP endpoints as fibers, and arrange to suspend and resume them as appropriate, while also allowing timeouts. I think the scheduler and CML patterns are going to allow us to do that without much trouble. (Of course, the TCP implementation will give us lots of trouble!)

Additionally your system might want to communicate with fibers from other threads. It's entirely possible to implement CML on top of pthreads, and it's entirely possible as well to support communication between pthreads and fibers. If this is interesting to you, see Guile's implementation.
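A minimal sketch of such a bridge (Python threads rather than pthreads; the Inbox name is invented): foreign threads post thunks to a thread-safe queue, and the scheduler thread drains it on each turn:

```python
import queue
import threading

class Inbox:
    """Bridge from foreign threads into a single-threaded scheduler:
    other threads call post(); the scheduler drains each turn."""
    def __init__(self):
        self.q = queue.Queue()

    def post(self, thunk):
        self.q.put(thunk)  # safe to call from any thread

    def schedule_tasks(self, schedule):
        # Hand every pending thunk to the scheduler, without blocking.
        while True:
            try:
                schedule(self.q.get_nowait())
            except queue.Empty:
                return

inbox = Inbox()
results = []
t = threading.Thread(
    target=lambda: inbox.post(lambda: results.append('from-thread')))
t.start(); t.join()
# On the scheduler thread, run drained thunks immediately for this demo.
inbox.schedule_tasks(lambda thunk: thunk())
print(results)  # ['from-thread']
```

Only the queue is shared across threads; the thunks themselves run on the scheduler's thread, which keeps the fiber side free of locks.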

When I talked about fibers in an earlier article, I built them in terms of delimited continuations. Delimited continuations are fun and more expressive than coroutines, but it turns out that for fibers, all you need is the expressive power of coroutines -- multi-shot continuations aren't useful. Also I think the presentation might be more straightforward. So if all your language has is coroutines, that's still good enough.
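To illustrate that coroutines are enough, here is a hypothetical trampoline using Python generators, which are one-shot coroutines: a fiber yields a suspend function, and the trampoline hands that function a resume callback for waking the fiber later. The names are invented for this sketch:

```python
def run_fiber(gen, schedule):
    """Drive a generator-based fiber: each yield hands the trampoline
    a suspend function that captures how to resume."""
    def step(value=None):
        try:
            suspend = gen.send(value)
        except StopIteration:
            return  # fiber finished
        # The fiber is suspended: give the suspend function its resume.
        suspend(step)
    schedule(step)

pending = []   # stands in for a scheduler/event source holding resumes
log = []

def fiber():
    # Suspend once; the resume callback is invoked later with a value.
    got = yield (lambda resume: pending.append(lambda: resume('pong')))
    log.append(got)

tasks = []
run_fiber(fiber(), tasks.append)
tasks.pop(0)()     # first turn: run the fiber up to its yield
pending.pop(0)()   # later: resume the fiber with 'pong'
print(log)  # ['pong']
```

Note that each continuation (step) is invoked at most once per suspension, which is exactly why one-shot coroutines suffice and multi-shot continuations aren't needed.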

There are many more kinds of standard CML operations; implementing those is also another next step. In particular, I have found semaphores and condition variables to be quite useful. Also, standard CML supports "guards", invoked when an operation is performed, and "nacks", invoked when an operation is definitively not performed because a choice selected some other operation. These can be layered on top; see the Parallel CML paper for notes on "primitive CML".

Also, the choice operator above is left-biased: it will prefer earlier impls over later ones. You might want to not always start with the first impl in the list.
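A sketch of randomizing the choice (Python, invented names; impls here are simplified to thunks that return a value when ready and None otherwise, rather than the full CML protocol):

```python
import random

def choose(impls):
    """Try operation impls in random order instead of left-biased
    order; return the first ready result, or None if none are ready
    (a real CML choice would block at that point)."""
    shuffled = list(impls)
    random.shuffle(shuffled)
    for impl in shuffled:
        result = impl()
        if result is not None:
            return result
    return None

always_a = lambda: 'a'
always_b = lambda: 'b'
never = lambda: None

# With two always-ready impls, both get selected over many trials.
seen = {choose([always_a, always_b]) for _ in range(200)}
print(sorted(seen))  # almost surely ['a', 'b']
```

A left-biased version would return 'a' every time; shuffling restores fairness between simultaneously-ready operations.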

The scheduler shown above is the simplest thing I could come up with. You may want to experiment with other scheduling algorithms, e.g. capability-based scheduling, or kill-safe abstractions. Do it!

Or, it could be you already have a scheduler, like some kind of main loop that's already there. Cool, you can use it directly -- all that fibers needs is some way to schedule functions to run.
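For example, if asyncio were the main loop already in place, the adapter can be as small as handing fibers the loop's own call_soon (a Python sketch; the surrounding scaffolding is invented):

```python
import asyncio

async def main():
    log = []
    loop = asyncio.get_running_loop()
    schedule = loop.call_soon      # the entire adapter: "run this soon"
    schedule(lambda: log.append('task ran'))
    await asyncio.sleep(0)         # yield so the loop runs queued callbacks
    return log

print(asyncio.run(main()))  # ['task ran']
```

Anything that can accept a zero-argument function and promise to call it later is a workable scheduler for fibers.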

godspeed

In summary, I think Concurrent ML should be better-known. Its simplicity and expressivity make it a valuable part of any concurrent system. Already in Snabb it helped us solve some longstanding gnarly issues by making the right solutions expressible.

As Adam Solove says, Concurrent ML is great, but it has a branding problem. Its ideas haven't penetrated the industrial concurrent programming world to the extent that they should. This article is another attempt to try to get the word out. Thanks to Adam for the observation that CML is really a protocol; I'm sure the concepts could be made even more clear, but at least this is a step forward.

All the code in this article is up on a gitlab snippet along with instructions for running the example program from the command line. Give it a go, and happy hacking with CML!

16 May, 2018 03:17PM by Andy Wingo

GNU Guix

Tarballs, the ultimate container image format

A year ago we introduced guix pack, a tool that allows you to create “application bundles” from a set of Guix package definitions. On your Guix machine, you run:

guix pack -S /opt/gnu/bin=bin guile gnutls guile-json

and you get a tarball containing your favorite programming language implementation and a couple of libraries, where /opt/gnu/bin is a symlink to the bin directory containing, in this case, the guile command. Add -f docker and, instead of a tarball, you get an image in the Docker format that you can pass to docker load on any machine where Docker is installed. Overall that’s a relatively easy way to share software stacks with machines that do not run Guix.

The tarball format is plain and simple: it’s the one we know and love, and it’s been there “forever,” as its name suggests. The tarball that guix pack produces can be readily extracted on another machine, one that doesn’t run Guix, and you’re done. The problem, though, is that you’ll need to either unpack the tarball in the root file system or play tricks with the unshare command, as we saw in the previous post. Why can’t we just extract such a tarball in our home directory and directly run ./opt/gnu/bin/guile, for instance?

Relocatable packages

The main issue is that, except in the uncommon case where developers went to great lengths to make it possible (as with GUB, see the *-reloc*.patch files), packages built for GNU/Linux are not relocatable. ELF files embed things like the absolute file name of the dynamic linker, directories where libraries are to be searched for (they can be relative file names with $ORIGIN but usually aren’t), and so on; furthermore, it’s very common to embed things like the name of the directory that contains locale data or other application-specific data. For Guix-built software, all these are absolute file names under /gnu/store, so Guix-built binaries won’t run unless those /gnu/store files exist.

On machines where support for “user namespaces” is enabled, we can easily “map” the directory where users unpacked the tarball that guix pack produced to /gnu/store, as shown in the previous post:

$ tar xf /path/to/pack.tar.gz
$ unshare -mrf chroot . /opt/gnu/bin/guile --version
guile (GNU Guile) 2.2.0

It does the job but remains quite tedious. Can’t we automate that?

guix pack --relocatable

The --relocatable (or -R) option of guix pack, which landed a few days ago, produces tarballs with automatically relocatable binaries. Back to our earlier example, let’s say you produce a tarball with this new option:

guix pack --relocatable -S /bin=bin -S /etc=etc guile gnutls guile-json

You can send the resulting tarball to any machine that runs the kernel Linux (it doesn’t even have to be GNU/Linux) with user namespace support—which, unfortunately, is disabled by default on some distros. There, as a regular user, you can run:

$ tar xf /path/to/pack.tar.gz
$ source ./etc/profile    # define ’GUILE_LOAD_PATH’, etc.
$ ./bin/guile
guile: warning: failed to install locale
GNU Guile 2.2.3
Copyright (C) 1995-2017 Free Software Foundation, Inc.

Guile comes with ABSOLUTELY NO WARRANTY; for details type `,show w'.
This program is free software, and you are welcome to redistribute it
under certain conditions; type `,show c' for details.

Enter `,help' for help.
scheme@(guile-user)> ,use(json)
scheme@(guile-user)> ,use(gnutls)

We were able to run Guile and to use our Guile libraries since sourcing ./etc/profile augmented the GUILE_LOAD_PATH environment variable that tells Guile where to look for libraries. Indeed we can see it by inspecting the value of %load-path at the Guile prompt:

scheme@(guile-user)> %load-path
$1 = ("/gnu/store/w9xd291967cvmdp3m0s7739icjzgs8ns-profile/share/guile/site/2.2" "/gnu/store/b90y3swxlx3vw2yyacs8cz59b8cbpbw5-guile-2.2.3/share/guile/2.2" "/gnu/store/b90y3swxlx3vw2yyacs8cz59b8cbpbw5-guile-2.2.3/share/guile/site/2.2" "/gnu/store/b90y3swxlx3vw2yyacs8cz59b8cbpbw5-guile-2.2.3/share/guile/site" "/gnu/store/b90y3swxlx3vw2yyacs8cz59b8cbpbw5-guile-2.2.3/share/guile")

Wait, it’s all /gnu/store! As it turns out, guix pack --relocatable created a wrapper around guile that populates /gnu/store in the mount namespace of the process. Even though /gnu/store does not exist on that machine, our guile process “sees” our packages under /gnu/store:

scheme@(guile-user)> ,use(ice-9 ftw)
scheme@(guile-user)> (scandir "/gnu/store")
$2 = ("." ".." "0249nw8c7z626fw1fayacm160fpd543k-guile-json-0.6.0R" "05dvazr5wfh7lxx4zi54zfqnx6ha8vxr-bash-static-4.4.12" "0jawbsyafm93nxf4rcmkf1rsk7z03qfa-libltdl-2.4.6" "0z1r7ai6syi2qnf5z8w8n25b1yv8gdr4-info-dir" "1n59wjm6dbvc38b320iiwrxra3dg7yv8-libunistring-0.9.8" "2fg01r58vv9w41kw6drl1wnvqg7rkv9d-libtasn1-4.12" "2ifmksc425qcysl5rkxkbv6yrgc1w9cs-gcc-5.5.0-lib" "2vxvd3vls7c8i9ngs881dy1p5brc7p85-gmp-6.1.2" "4sqaib7c2dfjv62ivrg9b8wa7bh226la-glibc-2.26.105-g0890d5379c" "5kih0kxmipzjw10c53hhckfzkcs7c8mm-gnutls-3.5.13" "8hxm8am4ll05sa8wlwgdq2lj4ddag464-zlib-1.2.11" "90vz0r78bww7dxhpa7vsiynr1rcqhyh4-nettle-3.4" "b90y3swxlx3vw2yyacs8cz59b8cbpbw5-guile-2.2.3" "c4jrwbv7qckvnqa7f3h7bd1hh8rbg72y-libgc-7.6.0" "f5lw5w4nxs6p5gq0c2nb3jsrxc6mmxbi-libgc-7.6.0" "hjxic0k4as384vn2qp0l964isfkb0blb-guile-json-0.6.0" "ksyja5lbwy0mpskvn4rfi5klc00c092d-libidn2-2.0.4" "l15mx9lrwdflyvmb4a05va05v5yqizg5-libffi-3.2.1" "mm0zclrzj3y7rj74hzyd0f224xly04fh-bash-minimal-4.4.12" "vgmln3b639r68vvy75xhcbi7d2w31mx1-pkg-config-0.29.2" "vz3zfmphvv4w4y7nffwr4jkk7k4s0rfs-guile-2.2.3" "w9xd291967cvmdp3m0s7739icjzgs8ns-profile" "x0jf9ckd30k3nhs6bbhkrxsjmqz8phqd-nettle-3.4" "x8z6cr7jggs8vbyh0xzfmxbid63z6y83-guile-2.2.3R" "xbkl3nx0fqgpw2ba8jsjy0bk3nw4q3i4-gnutls-3.5.13R" "xh4k91vl0i8nlyrmvsh01x0mz629w5a9-gmp-6.1.2" "yx12x8v4ny9f6fipk8285jgfzqavii83-manual-database" "zksh1n0p9x903kqbvswgwy2vsk2b7255-libatomic-ops-7.4.8")

The wrapper is a small statically-linked C program. (Scheme would be nice and would allow us to reuse call-with-container, but it would also take up more space.) All it does is create a child process with separate mount and user namespaces, which in turn mounts the tarball’s /gnu/store to /gnu/store, bind-mounts other entries from the host root file system, and chroots into that. The result is a binary that sees everything a “normal” program sees on the host, but with the addition of /gnu/store, with minimal startup overhead.

In a way, it’s a bit of a hack: for example, what gets bind-mounted in the mount namespace of the wrapped program is hard-coded, which is OK, but some flexibility would be welcome (things like Flatpak’s sandbox permissions, for instance). Still, that it Just Works is a pretty cool feature.

Tarballs vs. Snap, Flatpak, Docker, & co.

Come to think of it: if you’re a developer, guix pack is probably one of the easiest ways to create an “application bundle” to share with your users; and as a user, these relocatable tarballs are about the simplest thing you can deal with since you don’t need anything but tar—well, and user namespace support. Plus, since they are bit-reproducible, anyone can rebuild them to ensure they do not contain malware or to check the provenance and licensing of their contents.

Application bundles cannot replace full-blown package management, which allows users to upgrade, get security updates, use storage and memory efficiently, and so on. For the purposes of quickly sharing packages with users or with Guix-less machines, though, you might find Guix packs to be more convenient than Snap, Flatpak, or Docker. Give it a spin and let us know!

About GNU Guix

GNU Guix is a transactional package manager for the GNU system. The Guix System Distribution or GuixSD is an advanced distribution of the GNU system that relies on GNU Guix and respects the user's freedom.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. Guix uses low-level mechanisms from the Nix package manager, except that packages are defined as native Guile modules, using extensions to the Scheme language. GuixSD offers a declarative approach to operating system configuration management, and is highly customizable and hackable.

GuixSD can be used on i686, x86_64, and armv7 machines. It is also possible to use Guix on top of an already installed GNU/Linux system, including on mips64el and aarch64.

16 May, 2018 09:00AM by Ludovic Courtès

May 15, 2018

FSF News

Zerocat Chipflasher "board-edition-1" now FSF-certified to Respect Your Freedom

chipflasher in action

This is the first device under The Zerocat Label to receive RYF certification. The Chipflasher enables users to flash devices such as laptops, allowing them to replace proprietary software with free software like Libreboot. While users are able to purchase RYF-certified laptops that already come with Libreboot pre-loaded, for the first time ever they are capable of freeing their own laptops using an RYF-certified device. The Zerocat Chipflasher board-edition-1 is now available for purchase as a limited edition at http://www.zerocat.org/shop-en.html. These first ten limited edition boards are signed by Kai Mertens, chief developer of The Zerocat Label, and will help to fund additional production and future development of RYF-certified devices.

"The certification of the Zerocat Chipflasher is a big step forward for the Respects Your Freedom program. Replacing proprietary boot firmware is one of the first tasks for creating a laptop that meets RYF's criteria, and now anyone can do so for their own devices with a flasher that is itself RYF-certified," said the FSF's executive director, John Sullivan.

An RYF-certified flashing device could also help to grow the number of laptops available via the RYF program.

"When someone sets out to start their own business selling RYF-certified devices, they now have a piece of hardware they can trust to help them with that process. We hope to see even more laptops made available under the program, and having those laptops flashed with a freedom-respecting device will help to set those retailers on the right path from the start," said the FSF's licensing & compliance manager, Donald Robertson, III.

"Free software tools carry the inherent message 'Let’s help our neighbors!', as this is basically the spirit of the licenses that these tools are shipped with. From a global perspective, we are all 'neighbors,' no matter which country. And from this point of view, I would be happy if the flasher will be regarded as a contribution towards worldwide cooperation and friendship," said Mertens.

To learn more about the Respects Your Freedom device certification program, including details on the certification of all these devices, please visit https://fsf.org/ryf.

Hardware sellers interested in applying for certification can consult https://www.fsf.org/resources/hw/endorsement/criteria.

About the Free Software Foundation

The Free Software Foundation, founded in 1985, is dedicated to promoting computer users' right to use, study, copy, modify, and redistribute computer programs. The FSF promotes the development and use of free (as in freedom) software -- particularly the GNU operating system and its GNU/Linux variants -- and free documentation for free software. The FSF also helps to spread awareness of the ethical and political issues of freedom in the use of software, and its Web sites, located at fsf.org and gnu.org, are an important source of information about GNU/Linux. Donations to support the FSF's work can be made at https://donate.fsf.org. Its headquarters are in Boston, MA, USA.

More information about the FSF, as well as important information for journalists and publishers, is at https://www.fsf.org/press.

About The Zerocat Label

The Zerocat Label was set up to focus attention on free-design hardware development.

The development of free designs for hardware has many benefits. One of the most important is probably its capacity to conserve the Earth’s resources, and those of people. When we share knowledge, we work towards global solutions instead of individual profits.

Our current approach is to check which free design computer chips are available today, and to start creating something useful with them, even if bigger goals remain out of reach at this time. Creating tools in a modular manner allows us to combine them and to achieve complex solutions in the future.

Our next tasks are to find an answer to questions like “What free-design tools do we need?” and “What are we able to accomplish right now?” We hope to be able to build other free-design tools of wide interest by answering those questions. It is an experimental endeavor.

Media Contacts

Donald Robertson, III Licensing and Compliance Manager Free Software Foundation +1 (617) 542 5942 licensing@fsf.org

Kai Mertens Chief Developer The Zerocat Label zerocat@posteo.de

Image Copyright 2018 Kai Mertens, Licensed under Creative Commons Attribution-ShareAlike 4.0.

15 May, 2018 02:05PM

May 12, 2018

German Arias

New release of eiffel-iup

A new version of eiffel-iup, a Liberty Eiffel wrapper for the IUP toolkit, is now available. You can build graphical applications in Eiffel using Liberty Eiffel, the GNU implementation of the Eiffel language. Happy hacking!

12 May, 2018 01:16AM by Germán Arias

May 11, 2018

librejs @ Savannah

LibreJS 7.14 released

GNU LibreJS aims to address the JavaScript problem described in Richard Stallman's article The JavaScript Trap. LibreJS is a free add-on for GNU IceCat and other Mozilla-based browsers. It blocks nonfree nontrivial JavaScript while allowing JavaScript that is free and/or trivial. https://www.gnu.org/philosophy/javascript-trap.en.html

The source tarball for this release can be found at:
http://ftp.gnu.org/gnu/librejs/librejs-7.14.tar.gz
http://ftp.gnu.org/gnu/librejs/librejs-7.14.tar.gz.sig

The installable extension file (compatible with Mozilla-based browsers version >= v57) is available here:
http://ftp.gnu.org/gnu/librejs/librejs-7.14.xpi
http://ftp.gnu.org/gnu/librejs/librejs-7.14.xpi.sig

GPG key: 05EF 1D2F FE61 747D 1FC8 27C3 7FAC 7D26 472F 4409
https://savannah.gnu.org/project/memberlist-gpgkeys.php?group=librejs

Version 7.14 is an extensive bugfix release that builds on the work done by Nathan Nichols, Nyk Nyby and Zach Wick to port LibreJS to the new WebExtensions format, and previously on the contributions by Loic Duros and myself among others.

Changes since version 7.13 (excerpt from the git changelog):

  • Check global licenses for pages
  • Enable legacy license matching and hash whitelist matching
  • Refactor whitelisting of domains
  • Generalize comment styles for license matching
  • Use multi-part fetch mechanism for read_script
  • Improved system that prevents parsing non-html documents
  • Do not process non-javascript scripts (json, templates, etc)
  • Do not run license_read on whitelisted scripts
  • Prevent parsing inline scripts if there is a global license
  • Prevent evaluation of external scripts, as they are always nontrivial
  • Avoid parsing empty whitespace sections
  • Correct tab and badge initialization to prevent race conditions
  • Generalize gpl-3.0 license text
  • Improved logging
  • Disable whitelisted and blacklisted sections on display panel for now
  • Hide per-script action buttons until functionality works
  • Fixes to the CSS plus showing links instead of hashes

11 May, 2018 10:28PM by Ruben Rodriguez

FSF News

Contract opportunity: JavaScript Developer for GNU LibreJS

The Free Software Foundation (FSF), a Massachusetts 501(c)(3) charity with a worldwide mission to protect computer user freedom, seeks a contract JavaScript Developer to work on GNU LibreJS, a free browser add-on that addresses the problem of nonfree JavaScript described in Richard Stallman's article The JavaScript Trap. This is a temporary, paid contract opportunity, with specific deliverables, hours, term, and payment to be determined with the selected candidate. We anticipate the contract being approximately 80 hours of full-time work, with the possibility of extension depending on results and project status.

Reporting to our technical team, the contractor will work to implement important missing features in the LibreJS extension. We are looking for someone with experience in backend JavaScript development, WebExtensions, and NodeJS/Browserify. Experience with software licensing is a plus. This is an urgent priority; we are seeking someone who is able to start now. Contractors can be based anywhere, but must be able to attend telephone meetings during Eastern Daylight Time business hours.

Examples of deliverables include, but are not limited to:

  • Web Labels support, plus Web Labels in JSON format
  • SPDX support
  • Unit and functional testing
  • User interface improvements
  • New and updated documentation

LibreJS is a critical component of the FSF's campaign for user freedom on the Web, and freeing JavaScript specifically. Building on past contributions, this is an opportunity to help unlock a world where users can better protect their freedom as they browse, and collaborate with each other to make and share modified JavaScript to use.

Reference documentation

Proposal instructions

Proposals must be submitted via email to hiring@fsf.org. The email must contain the subject line "LibreJS Developer." A complete application should include:

  • Letter of interest
  • CV / portfolio with links to any previous work online, especially browser extensions
  • At least two recent client references

All materials must be in a free format. Email submissions that do not follow these instructions will probably be overlooked. No phone calls, please.

Proposals will be reviewed on a rolling basis until the contract is filled. To guarantee consideration, submit your proposal by Friday, May 18, 2018.

About the Free Software Foundation

The Free Software Foundation, founded in 1985, is dedicated to promoting computer users' right to use, study, copy, modify, and redistribute computer programs. The FSF promotes the development and use of free (as in freedom) software -- particularly the GNU operating system and its GNU/Linux variants -- and free documentation for free software. The FSF also helps to spread awareness of the ethical and political issues of freedom in the use of software, and its Web sites, located at fsf.org and gnu.org, are an important source of information about GNU/Linux. Donations to support the FSF's work can be made at https://donate.fsf.org. We are based in Boston, MA, USA.

11 May, 2018 02:18PM

May 09, 2018

GNU Guix

Paper on reproducible bioinformatics pipelines with Guix

I’m happy to announce that the bioinformatics group at the Max Delbrück Center that I’m working with has released a preprint of a paper on reproducibility with the title Reproducible genomics analysis pipelines with GNU Guix.

We built a collection of bioinformatics pipelines called "PiGx" ("Pipelines in Genomix") and packaged them as first-class packages with GNU Guix. Then we looked at the degree to which the software achieves bit-reproducibility, analysed sources of non-determinism (e.g. time stamps), discussed experimental reproducibility at runtime (e.g. random number generators, the interface provided by the kernel and the GNU C library, etc) and commented on the practice of using “containers” (or application bundles) instead.

Reproducible builds are a crucial foundation for computational experiments. We hope that PiGx and the reproducibility analysis we presented in the paper can serve as a useful case study demonstrating the importance of a principled approach to computational reproducibility and the effectiveness of Guix in the pursuit of reproducible software management.

09 May, 2018 10:00AM by Ricardo Wurmus

May 08, 2018

libredwg @ Savannah

Smokers and mirrors

I've set up continuous integration testing for all branches and pull requests at https://travis-ci.org/LibreDWG/libredwg/builds for GNU/Linux, and at https://ci.appveyor.com/project/rurban/libredwg for Windows, which also generates binaries (a DLL) automatically.

There's also an official GitHub mirror at https://github.com/LibreDWG/libredwg
where pull requests are accepted. This repo drives the CIs.
See https://github.com/LibreDWG/libredwg/releases for the nightly Windows builds.

The first alpha release should be in June, when all the new permissions are finalized.

08 May, 2018 04:13PM by Reini Urban

May 05, 2018

Parabola GNU/Linux-libre

[From Arch] js52 52.7.3-2 upgrade requires intervention

Due to the SONAME of /usr/lib/libmozjs-52.so not matching its file name, ldconfig created an untracked file /usr/lib/libmozjs-52.so.0. This is now fixed and both files are present in the package.

To proceed with the upgrade, remove /usr/lib/libmozjs-52.so.0 beforehand.

05 May, 2018 06:35AM by Omar Vega Ramos

May 02, 2018

guile-cv @ Savannah

Guile-CV version 0.1.9

Guile-CV version 0.1.9 is released! (May 2018)
Changes since the previous version

For a list of changes since the previous version, visit the NEWS file. For a complete description, consult the git summary and git log.

02 May, 2018 04:11AM by David Pirotte

April 26, 2018

GNU Guix

Guix welcomes Outreachy, GSoC, and Guix-HPC interns

We are thrilled to announce that five people will join Guix as interns over the next few months! As part of Google’s Summer of Code (GSoC), under the umbrella of the GNU Project, three people are joining us:

  • Tatiana Sholokhova will work on a Web interface for the Guix continuous integration (CI) tool, Cuirass, similar in spirit to that of Hydra. Cuirass was started as part of GSoC 2016.
  • uniq10 will take over the build daemon rewrite in Scheme, a project started as part of last year's GSoC by reepca. The existing code lives in the guile-daemon branch. Results from last year already got us a long way towards a drop-in replacement of the current C++ code base.
  • Ioannis P. Koutsidis will work on implementing semantics similar to that of systemd unit files in the Shepherd, the “init system” (PID 1) used on GuixSD.

Through Outreachy, the inclusion program for groups underrepresented in free software and tech, one person will join:

Finally, we are welcoming one intern as part of the Guix-HPC effort:

  • Pierre-Antoine Rouby arrived a couple of weeks ago at Inria for a four-month internship on improving the user experience of Guix in high-performance computing (HPC) and reproducible scientific workflows. Pierre-Antoine has already contributed a couple of HPC package definitions and will next look at tools such as hpcguix-web, guix pack, and more.

Gábor Boskovits, Ricardo Wurmus, and Ludovic Courtès will be their primary mentors, and the whole Guix crowd will undoubtedly help and provide guidance as it has always done. Welcome to all of you!

About GNU Guix

GNU Guix is a transactional package manager for the GNU system. The Guix System Distribution or GuixSD is an advanced distribution of the GNU system that relies on GNU Guix and respects the user's freedom.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. Guix uses low-level mechanisms from the Nix package manager, except that packages are defined as native Guile modules, using extensions to the Scheme language. GuixSD offers a declarative approach to operating system configuration management, and is highly customizable and hackable.

GuixSD can be used on i686, x86_64, and armv7 machines. It is also possible to use Guix on top of an already installed GNU/Linux system, including on mips64el and aarch64.

26 April, 2018 03:00PM by Ludovic Courtès

April 22, 2018

parallel @ Savannah

GNU Parallel 20180422 ('Tiangong-1') released

GNU Parallel 20180422 ('Tiangong-1') has been released. It is available for download at: http://ftpmirror.gnu.org/parallel/

Quote of the month:

Today I discovered GNU Parallel, and I don’t know what to do with all this spare time.
--Ryan Booker

New in this release:

  • --csv makes GNU Parallel parse the input sources as CSV. When used with --pipe it only passes full CSV-records.
  • Time in --bar is printed as 1d02h03m04s.
  • Optimization of --tee: It spawns one fewer process per value.
  • Bug fixes and man page updates.

GNU Parallel - For people who live life in the parallel lane.

About GNU Parallel

GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with: (wget -O - pi.dk/3 || curl pi.dk/3/) | bash

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your commandline will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2018): GNU Parallel 2018, April 2018, https://doi.org/10.5281/zenodo.1146014.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/ Identi.ca/Google+/Twitter/Facebook/Linkedin/mailing lists
  • Get the merchandise https://gnuparallel.threadless.com/designs/gnu-parallel
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference

If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)

If GNU Parallel saves you money:

About GNU SQL

GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.

About GNU Niceload

GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.

22 April, 2018 09:19PM by Ole Tange

April 20, 2018

Parabola GNU/Linux-libre

[From Arch] glibc 2.27-2 and pam 1.3.0-2 may require manual intervention

The new version of glibc removes support for NIS and NIS+. The default /etc/nsswitch.conf file provided by the filesystem package already reflects this change. Please make sure to merge the .pacnew file, if it exists, prior to upgrading.

NIS functionality can still be enabled by installing libnss_nis package. There is no replacement for NIS+ in the official repositories.

pam 1.3.0-2 no longer ships the pam_unix2 module or the pam_unix_*.so compatibility symlinks. Before upgrading, review the PAM configuration files in the /etc/pam.d directory and replace removed modules with pam_unix.so. Users of pam_unix2 should also reset their passwords after such a change. The defaults provided by the pambase package do not need any modifications.

20 April, 2018 03:18PM by David P.