Planet GNU

Aggregation of development blogs from the GNU Project

August 15, 2018

FSF Events

Richard Stallman - "El software libre y libertad -- en la vida y en la educación" (Rancagua, Chile)

Richard Stallman will speak about the goals and philosophy of the Free Software Movement, and the status and history of the GNU operating system, which in combination with the kernel Linux is now used by tens of millions of users world-wide.

This talk by Richard Stallman will not be technical and will be open to the public; everyone is invited to attend.

Location: Bernardo O'Higgins Auditorio Regional, Plaza de los Héroes 445, Rancagua, Región del Libertador Gral, Rancagua, Chile

Please fill out this form so that we can contact you about future events in the Rancagua region.

15 August, 2018 05:30PM

Parabola GNU/Linux-libre

netctl 1.18-1 and systemd 239.0-2.parabola7 may require manual intervention

The new versions of netctl and systemd-resolvconf may not be installed together. Users who have both netctl and systemd-resolvconf installed will need to manually switch from systemd-resolvconf to openresolv before upgrading.

If you get an error

:: unable to satisfy dependency 'resolvconf' required by netctl

use

pacman -S openresolv

prior to upgrading.

15 August, 2018 05:05PM by Luke Shumaker

August 14, 2018

GNUnet News

GSoC 2018 - GNUnet Web-based User Interface

What was done?
In the context of Google Summer of Code 2018, my mentor (Martin Schanzenbach) and I have worked on creating and extending the REST API of GNUnet. So far, we have mirrored the functionality of the following commands:

gnunet-identity
gnunet-namestore
gnunet-gns
gnunet-peerinfo

Additionally, we developed a website with the JavaScript framework Angular 6 and the design framework iotaCSS to use the new REST API. The REST API of GNUnet is now documented with Sphinx.

14 August, 2018 07:55AM by Phil Buschmann

August 13, 2018

GNU Guix

GSoC 2018 report: Cuirass Web interface

For the last three months I have been working with the Guix team as a Google Summer of Code intern. The title of my project is "GNU Guix (Cuirass): Adding a web interface similar to the Hydra web interface".

Cuirass is a continuous integration system which monitors the Guix git repository, schedules builds of Guix packages, and presents the build status of all Guix packages. Before my project, Cuirass did not have a web interface. The goal of the project was to implement an interface for Cuirass which would allow a user to view the overall build progress, details about evaluations, build failures, etc. The web interface of Hydra is a good example of such a tool.

In this post, I present a final report on the project. The Cuirass repository with the changes made during the project is located at http://git.savannah.gnu.org/cgit/guix/guix-cuirass.git. A working instance of the implemented interface is available at https://berlin.guixsd.org/. You can find more examples and demonstrations of the achieved results below.

About Cuirass

Cuirass is designed to monitor a git repository containing Guix package definitions and build binaries from these package definitions. The state of planned builds is stored in a SQLite database. The key concepts of the Cuirass internal state are:

  • Job specification. A specification states what Cuirass actually has to do. It is defined by a Scheme data structure (an association list) which includes a job name, a repository URL, the branch to build, and a procedure proc that specifies how the project is to be built.

  • Evaluation. An evaluation is a high-level build action related to a certain revision of the repository of a given specification. For each specification, Cuirass continuously produces new evaluations which build different versions of the project, represented by revisions of the corresponding repository. Derivations and builds (see below) each belong to a specific evaluation.

  • Derivation. Derivations represent low-level build actions. They store information such as the name of a build script and its arguments, the inputs and outputs of a build action, the target system type, and the necessary environment variables.

  • Build. A build is the result of the build actions prescribed by a derivation. This could be a failed build, or a directory containing the files generated by compiling a package.

Besides the core which executes build actions and records their results in the database, Cuirass includes a web server which previously only responded to a handful of API requests with JSON containing information about the current status of builds.

Web interface

The Cuirass web interface implemented during the project is served by the Cuirass web server whose functionality has been extended to generating HTML responses and serving static files. General features of the interface are listed below.

  • The backend is written in Guile and implements request processing procedures which parse request parameters and extract specific data to be displayed from the database.

  • The frontend consists of HTML templates represented with Guile SXML and the Bootstrap 4 CSS library.

  • The appearance is minimalistic. Every page includes only specific content information and basic navigation tools.

  • The interface is lightweight and widely accessible. It does not use JavaScript which makes it available to users who do not want to have JavaScript running in the browser.

Structure

Let's review the structure of the interface and take a look at the information you can find in it. Note that the web interface screenshots presented below were obtained with synthetic data loaded into the Cuirass database.

Main page

The main page is accessible on the root request endpoint (/). The main page displays a list of all the specifications stored in the Cuirass database. Each entry of the list is a clickable link which leads to a page about the evaluations of the corresponding specification (see below).

Here is an example view of the main page.

Main page screenshot

Evaluations list

The evaluations list of a given specification with name <name> is located at /jobset/<name>/. On this page, you can see a list of evaluations of the given project starting from the most recent ones. You can navigate to older evaluations using the pagination buttons at the bottom of the page. In the table, you can find the following information:

  • The ID of the evaluation which is clickable and leads to a page with information about all builds of the evaluation (see below).

  • List of commits corresponding to the evaluation.

  • Build summary of the evaluation: number of succeeded (green), failed (red), and scheduled (grey) builds of this evaluation. You can open the list of builds with a certain status by clicking on one of these three links.

Here is a possible view of the evaluations list page:

Screenshot of evaluations list

Builds list

The builds list of an evaluation with ID <id> is located at /eval/<id>/. On this page, you can see a list of builds of the given evaluation ordered by their stop time, starting from the most recent one. Similarly to the evaluations list, there are pagination buttons located at the bottom of the page. For each build in the list, there is information about the build status (succeeded, failed, or scheduled), stop time, nixname (name of the derivation), system, and also a link to the corresponding build log. As said above, it is possible to filter builds with a certain status by clicking on the status link in the evaluations list.

Screenshot of builds list

Summary

Cuirass now has a web interface that makes it possible for users to get an overview of the status of Guix package builds in a user-friendly way. As a result of my GSoC internship, the core of the web interface was developed. There are now several possibilities for future improvements, and I would like to welcome everyone to contribute.

It was a pleasure for me to work with the Guix team. I would like to thank you all for this great experience! Special thanks to my GSoC mentors: Ricardo Wurmus, Ludovic Courtès, and Gábor Boskovits, and also to Clément Lassieur and Danny Milosavljevic for their guidance and help throughout the project.

13 August, 2018 03:00PM by Tatiana Sholokhova

libredwg @ Savannah

libredwg-0.6 released [alpha]

See https://www.gnu.org/software/libredwg/

API breaking changes:
* Removed dwg_obj_proxy_get_reactors(), use dwg_obj_get_reactors() instead.
* Renamed SORTENTSTABLE.owner_handle to SORTENTSTABLE.owner_dict.
* Renamed all -as-rNNNN program options to --as-rNNNN.

Other newsworthy changes:
* Removed all unused type-specific reactors and xdicobjhandle fields,
use the generic object and entity fields instead.
* Added signed BITCODE_RLd and BITCODE_BLd (int32_t) types.
* Added unknown_bits field to all UNSTABLE/DEBUGGING classes.
* Custom CFLAGS are now honored.
* Support for GNU parallel and coreutils timeout logfile and picat processing.

Important bugfixes:
* Fixed previously empty strings for r2007+ for some objects and entities (#34).
* Fixed r2010+ picture size calculation (DXF 160, 310), leading to wrong entity offsets.
* Added more checks for unstable objects: empty handles, controls, overflows, isnan.
* Fixed some common_entity_data, mostly with non-indexed colors and gradient filled HATCH
(#27, #28, #31)
* Fixed some proper relative handles, which were previously treated as NULL handle.
* Fixed writing TV strings, now the length includes the final \0 char.
* Fixed the initial minimal hash size, fixing an endless loop on very small
(truncated) DWG's (<1000 bytes).
* Many fewer memory leaks.
* Improved free, i.e. no more double free with EED data. (#33)
* Better perl bindings build support on Windows, prefer local dwg.h over
installed dwg.h on testing (#29).
* Fixed dejagnu compilation on C11 by using -fgnu89-inline (#2)

New features:
* Added unstable support for the objects ASSOCDEPENDENCY, ASSOCPLANESURFACEACTIONBODY,
DBCOLOR, DIMASSOC, DYNAMICBLOCKPURGEPREVENTER, HELIX, LIGHT, PERSSUBENTMANAGER,
UNDERLAYDEFINITION and the entities MULTILEADER, UNDERLAY.
* Added getopt_long() support to all programs, position independent options.
* Implemented examples/unknown to find field layouts of unknown objects.
With bd and bits helpers to decode unknowns.
Now with a http://picat-lang.org helper. See also HACKING and savannah News.
* Implemented parsing ACIS version 2 to the binary SAB format.
* Added all missing dwg_object_to_OBJECT() functions for objects.
* Added dwg_ent_minsert_set_num_cols(), dwg_ent_minsert_set_num_rows()
* Added --disable-dxf, --enable-debug configure options. With debug there are many
more unstable objects available.
* Added libredwg.pc (#30)
* Added valgrind suppressions for known darwin/glibc leaks.
* Changed and clarified the semver version numbering on development checkouts with
major.minor[.patch[.build.nonmastercommits-gittag]]. See HACKING.

Here are the compressed sources:
http://ftp.gnu.org/gnu/libredwg/libredwg-0.6.tar.gz (9.4MB)
http://ftp.gnu.org/gnu/libredwg/libredwg-0.6.tar.xz (3.5MB)

https://github.com/LibreDWG/libredwg/releases/tag/0.6 (also Windows binaries)

Here are the GPG detached signatures[*]:
http://ftp.gnu.org/gnu/libredwg/libredwg-0.6.tar.gz.sig
http://ftp.gnu.org/gnu/libredwg/libredwg-0.6.tar.xz.sig

Use a mirror for higher download bandwidth:
https://www.gnu.org/order/ftp.html

Here are the SHA256 checksums:

995da379a27492646867fb490ee406f18049f145d741273e28bf1f38cabc4d5c libredwg-0.6.tar.gz
6d525ca849496852f62ad6a11b7b801d0aafd1fa1366c45bdb0f11a90bd6f878 libredwg-0.6.tar.xz
21d9619c6858ea25f95a9b6d8d6946e387309023ec17810f4433d8f61e8836af libredwg-0.6-win32.zip
d029d35715b8d86a68f8dacc7fdb5a5ac6405bc0a1b3457e75fc49c6c4cf6e06 libredwg-0.6-win64.zip

[*] Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact. First, be sure to download both the .sig file
and the corresponding tarball. Then, run a command like this:

gpg --verify libredwg-0.6.tar.gz.sig

If that command fails because you don't have the required public key,
then run this command to import it:

gpg --keyserver keys.gnupg.net --recv-keys B4F63339E65D6414

and rerun the 'gpg --verify' command.

13 August, 2018 09:48AM by Reini Urban

August 11, 2018

GNUnet News

irc bot status

As of the early morning of 2018-08-09, we are having problems with our current IRC bot.
We have plans to migrate to a new bot as soon as possible, but hope to restore functionality to the existing one soon enough.

Update 2018-08-13: I have started working on a replacement; the Drupal bot is not coming back.

This post will be updated as soon as the bot is back online. The logs themselves are not affected.

We apologize for any inconvenience.

11 August, 2018 05:30PM by ng0

unifont @ Savannah

Unifont 11.0.02 Released

10 August 2018

Unifont 11.0.02 is now available. This is an interim release, with another release planned for the autumn of 2018. The main addition in this release is David Corbett's contribution of over 600 glyphs in the Sutton SignWriting Unicode block.

Download this release at:

https://ftpmirror.gnu.org/unifont/unifont-11.0.02/

or if that fails,

ftp://ftp.gnu.org/gnu/unifont/unifont-11.0.02/

Enjoy!

Paul Hardy

11 August, 2018 01:53PM by Paul Hardy

August 09, 2018

FSF News

FSF job opportunity: Business operations manager

This position, reporting to the executive director, works as part of our operations team to ensure the organization's financial, human resources, and administrative functions run smoothly and in compliance with all legal and policy requirements. We are looking for a hands-on and detail-oriented professional who is comfortable working independently and with multiple teams, including some remote coworkers. Ideal candidates will be proactive and highly adaptable, with an aptitude for learning new tools and coming up with creative solutions. Applicants should have at least three years of experience with bookkeeping and nonprofit operations; human resources experience a plus.

Examples of job responsibilities include, but are not limited to:

  • processing accounts receivable and payable, bank deposits, and monthly financial reconciliation,

  • preparing annual budget and regular financial reports for management, helping the organization maintain its fiscal health and excellent four-star rating on Charity Navigator,

  • assisting management with the annual audit,

  • working with the operations team to ensure that GNU Press (https://shop.fsf.org/) continues to support fundraising efforts,

  • purchasing for operational and programmatic purposes,

  • coordinating ongoing vendor review,

  • administering the FSF's payroll and benefits programs,

  • providing administrative assistance to management during hiring, onboarding, and offboarding,

  • monitoring legal and regulatory landscape for changes that may impact the FSF, and

  • pitching in to help with organization-wide projects, like our major fundraising activities and annual LibrePlanet conference.

Because the FSF works globally and seeks to have our materials distributed in as many languages as possible, multilingual candidates will have an advantage. With our small staff of thirteen, each person makes a clear contribution. We work hard, but offer a humane and fun work environment at an office located in the heart of downtown Boston. The FSF is a mature but growing organization that provides great potential for advancement; existing staff get the first chance at any new job openings.

Benefits and Salary

This job is a union position that must be worked on-site at the FSF's downtown Boston office. The salary is fixed at $61,672/year and is non-negotiable. Benefits include:

  • fully subsidized individual or family health coverage through Blue Cross Blue Shield,
  • partially subsidized dental plan,
  • four weeks of paid vacation annually,
  • seventeen paid holidays annually,
  • weekly remote work allowance,
  • public transit commuting cost reimbursement,
  • 403(b) program with employer match,
  • yearly cost-of-living pay increases based on government guidelines,
  • health care expense reimbursement,
  • ergonomic budget,
  • relocation (to Boston area) expense reimbursement,
  • conference travel and professional development opportunities, and
  • potential for an annual performance bonus.

Application Instructions

Applications must be submitted via email to hiring@fsf.org. The email must contain the subject line "Business Operations Manager." A complete application should include:

  • cover letter,
  • resume, and
  • two recent references.

All materials must be in a free format. Email submissions that do not follow these instructions will probably be overlooked. No phone calls, please.

Applications will be reviewed on a rolling basis until the position is filled. To guarantee consideration, submit your application by August 26, 2018.

The FSF is an equal opportunity employer and will not discriminate against any employee or applicant for employment on the basis of race, color, marital status, religion, age, sex, sexual orientation, national origin, handicap, or any other legally protected status recognized by federal, state or local law. We value diversity in our workplace.

About the Free Software Foundation

The Free Software Foundation, founded in 1985, is dedicated to promoting computer users' right to use, study, copy, modify, and redistribute computer programs. The FSF promotes the development and use of free (as in freedom) software -- particularly the GNU operating system and its GNU/Linux variants -- and free documentation for free software. The FSF also helps to spread awareness of the ethical and political issues of freedom in the use of software, and its Web sites, located at fsf.org and gnu.org, are an important source of information about GNU/Linux. Donations to support the FSF's work can be made at https://donate.fsf.org. We are based in Boston, MA, USA.

09 August, 2018 06:36PM

August 07, 2018

librejs @ Savannah

LibreJS 7.15 released

GNU LibreJS aims to address the JavaScript problem described in Richard Stallman's article The JavaScript Trap*. LibreJS is a free add-on for GNU IceCat and other Mozilla-based browsers. It blocks nonfree nontrivial JavaScript while allowing JavaScript that is free and/or trivial. * https://www.gnu.org/philosophy/javascript-trap.en.html

The source tarball for this release can be found at:
http://ftp.gnu.org/gnu/librejs/librejs-7.15.0.tar.gz
http://ftp.gnu.org/gnu/librejs/librejs-7.15.0.tar.gz.sig

The installable extension file (compatible with Mozilla-based browsers version >= v60) is available here:
http://ftp.gnu.org/gnu/librejs/librejs-7.15.0.xpi
http://ftp.gnu.org/gnu/librejs/librejs-7.15.0.xpi.sig

GPG key: 05EF 1D2F FE61 747D 1FC8 27C3 7FAC 7D26 472F 4409
https://savannah.gnu.org/project/memberlist-gpgkeys.php?group=librejs

Version 7.15 includes a partial rework of the mechanism for script loading and parsing, improving performance, reliability, and code maintainability. The release also adds per-script whitelisting and blacklisting, along with many smaller bugfixes. All of these contributions are thanks to the work of Giorgio Maone.

Changes since version 7.14 (excerpt from the git changelog):

Fixed whitelisting of scripts with query strings in URL.
Fixed report attempts when no tabId is available.
UI rewrite for better responsiveness and simplicity.
Broader detection of UTF-8 encoding in responses.
Fixed badge shouldn't be shown on privileged pages.
Fixed sub-frames resetting badge to green.
Uniform conventions for module importing paths.
Temporarily display back hidden old UI elements.
Refactoring list management in its own class.
Bug fixing and simplifying UI synchronization.
Whitelisted/blacklisted reporting and modification support.
Stateful response processing support.
Implement early whitelisting / blacklisting logic.
Display actual extension version number in UI.
White/Black lists back-end refactoring.
Refactor and fix HTTP response filtering.

07 August, 2018 07:16PM by Ruben Rodriguez

FSF Blogs

Stop Supreme Court nominee Kavanaugh to protect free software!

United States Supreme Court judges serve from the time they are appointed until they choose to retire -- it's a lifetime appointment. One judge recently stepped down, and Brett Kavanaugh was nominated to fill the empty seat. He comes with a firm stance against net neutrality.

Last year he wrote:

Supreme Court precedent establishes that Internet service providers have a First Amendment right to exercise editorial discretion over whether and how to carry Internet content.

Here, Kavanaugh argues that controlling the way you use the Internet is a First Amendment right that ISPs -- companies, not people -- hold. The First Amendment, which guarantees Americans the right to free speech, freedom of the press, and freedom to congregate, is one of the most dearly-held amendments of the United States Constitution. With this statement, he says that net neutrality protections -- policies that prevent companies from "editorializing" what you see on the Web -- are a violation of the Constitution. He believes net neutrality is unconstitutional. We know he's wrong.

We need you to contact your congressional representatives, asking them to vote against Kavanaugh's bid for the Supreme Court of the United States.

Why does net neutrality matter to free software?

There are so many reasons why we think net neutrality is important -- and why it's necessary for free software. We'll briefly mention that:

  • Free software is built collaboratively by using Web tools and Internet connections around the world.

  • Free software is most easily discoverable thanks to pages like the Free Software Directory.

  • Need to update your system or software quickly? You need an Internet connection to make that happen.

  • Organizations like the FSF use the Web to educate and share free software ideology and tools.

  • The FSF itself is run by a small team spread across the globe. Every day we use tools like IRC to communicate and work for user freedom together, with one another, and with you.

  • We promote decentralized free software replacements for centralized software services, and losing net neutrality means that centralized services will have a huge built-in advantage.

Without a free Web and Internet, what we can do online will be limited by what ISPs like Comcast and Verizon want. They will have the legal right to control which Web sites we can access and how fast that access will be -- and they will take advantage of their new ability to extort even greater fees from Web sites and consumers alike.

Call, microblog, or write

Call your senators!

Don't know who to call?

  • Call your Senator directly. Find their number here.

  • Dial the Senate at (202) 224-3121 and they will connect you.

(Note: The number for the Senate is a switchboard that will direct your call.)

If you're looking for help knowing what to say, try:

Hi, I'm [NAME] from [PLACE]. I think Brett Kavanaugh is unfit to serve on the Supreme Court. His position against net neutrality is harmful to all users of the Internet. Please vote against his nomination to the Supreme Court. Thank you.

Microblog

Many Senators are on microblogging services. You can microblog at yours!

Samples:

Keep Brett Kavanaugh off #SCOTUS for #freesoftware's sake.

Keep #netneutrality, protect #userfreedom, vote against Brett Kavanaugh #SaveSCOTUS

A vote against Kavanaugh is a vote for #netneutrality! #SaveSCOTUS

Writing to your Senators

If you're more comfortable reaching out to your senators over email, do it! Their email addresses are available online. If you're looking for ideas, try out:

Dear [SENATOR], I'm [NAME] and I live in your district. I am deeply troubled by the nomination of Brett Kavanaugh to the Supreme Court of the United States. He has come out against net neutrality, which is necessary for a free Web, a free society, and user freedom. If Kavanaugh becomes a Supreme Court Justice, the threat to net neutrality is undeniable. Thank you, [NAME]

If your senator voted for the CRA, consider thanking them for that as well.

Why does the Supreme Court matter for net neutrality?

While the regulations that are used to protect net neutrality are currently under the Federal Communications Commission, they might not stay this way. Should an Internet Service Provider (ISP) decide to sue for the "right" to ignore net neutrality protections, it is possible such a case would reach the Supreme Court. Should this happen, Kavanaugh's anti-net neutrality opinions could have a serious impact on our digital rights.

More on net neutrality from the FSF

07 August, 2018 03:55PM

August 06, 2018

FSF Events

Richard Stallman - "El software libre en la ética y en la práctica" (Quilpué, Chile)

Richard Stallman will speak about the goals and philosophy of the Free Software Movement, and the status and history of the GNU operating system, which in combination with the kernel Linux is now used by tens of millions of users world-wide.

This talk by Richard Stallman will not be technical and will be open to the public; everyone is invited to attend.

Location: Salón de Honor, Universidad Técnica Federico Santa María Sede Viña del Mar, Universidad Santa Maria, Quilpué, Chile

Please fill out this form so that we can contact you about future events in the Valparaíso region.

06 August, 2018 11:35AM

August 02, 2018

FSF Blogs

Respects Your Freedom certification program continues to grow

We recently had some exciting news for our Respects Your Freedom (RYF) certification program. Our program helps users to find hardware that they can trust to come with freedom inside. When a retailer receives certification on a device, it means users know they will receive hardware that meets with our strict standards on free software and documentation.

First up, on May 15th, we certified the Zerocat Chipflasher "board-edition-1", which can be purchased from the Zerocat Label. This device is a really exciting addition to the program. The Zerocat Chipflasher enables users to flash their own devices using only free software, replacing proprietary firmware with free software. One of the big steps currently for a retailer in creating an RYF-certified laptop is flashing laptops to replace proprietary boot firmware with Libreboot. With the Zerocat Chipflasher, for the first time ever, retailers (or any user) can flash their laptops with a device that can likewise be trusted to respect the rights of users. It means many more users will be able to free their own devices using only free software, and could even help spur the creation of more RYF-certified devices in the future. Currently they are selling a limited edition version, signed by the founder of the Zerocat Label, which will help to fund future availability of the device. Only five remain at this point, but once they are sold, there will be enough funding for future runs of the device.

Next up, on May 30th, we announced certification of the Minifree Libreboot X200 Tablet, which can be purchased here. While not the first laptop-tablet hybrid to be RYF-certified (that honor goes to the Technoethical TET-X200T), it offers users even more choices in finding a tablet computing device they can trust. Minifree is also run by the creator of the free boot firmware project Libreboot, so buying a device from them helps to support the continued development of that critical piece of free software.

Both of these devices show great potential for the future of the RYF certification program, whether it is enabling the next generation of retailers who can free devices for users, or continuing the development of the free software needed on those devices. The future in general is looking quite bright for RYF, as we currently have around fifty devices working their way through the certification program. Help us in congratulating the Zerocat Label and Minifree Ltd in their recent accomplishments. And if you would like to help support our work on RYF and other free software initiatives, here's what you can do:

02 August, 2018 05:10PM

August 01, 2018

libc @ Savannah

The GNU C Library version 2.28 is now available

The GNU C Library
=================

The GNU C Library version 2.28 is now available.

The GNU C Library is used as the C library in the GNU system and
in GNU/Linux systems, as well as many other systems that use Linux
as the kernel.

The GNU C Library is primarily designed to be a portable
and high performance C library. It follows all relevant
standards including ISO C11 and POSIX.1-2008. It is also
internationalized and has one of the most complete
internationalization interfaces known.

The GNU C Library webpage is at http://www.gnu.org/software/libc/

Packages for the 2.28 release may be downloaded from:
http://ftpmirror.gnu.org/libc/
http://ftp.gnu.org/gnu/libc/

The mirror list is at http://www.gnu.org/order/ftp.html

NEWS for version 2.28
=====================

Major new features:

  • The localization data for ISO 14651 is updated to match the 2016 Edition 4 release of the standard; this matches data provided by Unicode 9.0.0. This update introduces significant improvements to the collation of Unicode characters. This release deviates slightly from the standard in that the collation element ordering for lowercase and uppercase LATIN script characters is adjusted to ensure that regular expressions with ranges like [a-z] and [A-Z] don't interleave, e.g. A is not matched by [a-z]. With the update many locales have been updated to take advantage of the new collation information. The new collation information has increased the size of the compiled locale archive or binary locales.

  • The GNU C Library can now be compiled with support for Intel CET, AKA Intel Control-flow Enforcement Technology. When the library is built with --enable-cet, the resulting glibc is protected with indirect branch tracking (IBT) and shadow stack (SHSTK). CET-enabled glibc is compatible with all existing executables and shared libraries. This feature is currently supported on i386, x86_64 and x32 with GCC 8 and binutils 2.29 or later. Note that CET-enabled glibc requires CPUs capable of multi-byte NOPs, like x86-64 processors as well as Intel Pentium Pro or newer. NOTE: --enable-cet has been tested for i686, x86_64 and x32 on non-CET processors. --enable-cet has been tested for x86_64 and x32 on CET SDVs, but Intel CET support hasn't been validated for i686.

  • The GNU C Library now has correct support for ABSOLUTE symbols (SHN_ABS-relative symbols). Previously such ABSOLUTE symbols were relocated incorrectly or in some cases discarded. The GNU linker can make use of the newer semantics, but it must communicate it to the dynamic loader by setting the ELF file's identification (EI_ABIVERSION field) to indicate such support is required.

  • Unicode 11.0.0 Support: Character encoding, character type info, and transliteration tables are all updated to Unicode 11.0.0, using generator scripts contributed by Mike FABIAN (Red Hat).

  • <math.h> functions that round their results to a narrower type are added from TS 18661-1:2014 and TS 18661-3:2015:

- fadd, faddl, daddl and corresponding fMaddfN, fMaddfNx, fMxaddfN and fMxaddfNx functions.

- fsub, fsubl, dsubl and corresponding fMsubfN, fMsubfNx, fMxsubfN and fMxsubfNx functions.

- fmul, fmull, dmull and corresponding fMmulfN, fMmulfNx, fMxmulfN and fMxmulfNx functions.

- fdiv, fdivl, ddivl and corresponding fMdivfN, fMdivfNx, fMxdivfN and fMxdivfNx functions.

  • Two grammatical forms of month names are now supported for the following languages: Armenian, Asturian, Catalan, Czech, Kashubian, Occitan, Ossetian, Scottish Gaelic, Upper Sorbian, and Walloon. The following languages now support two grammatical forms in abbreviated month names: Catalan, Greek, and Kashubian.

  • Newly added locales: Lower Sorbian (dsb_DE) and Yakut (sah_RU) also include the support for two grammatical forms of month names.

  • Building and running on GNU/Hurd systems now works without out-of-tree patches.

  • The renameat2 function has been added, a variant of the renameat function which has a flags argument. If the flags are zero, the renameat2 function acts like renameat. If the flag is not zero and there is no kernel support for renameat2, the function will fail with an errno value of EINVAL. This is different from the existing gnulib function renameatu, which performs a plain rename operation in case of a RENAME_NOREPLACE flags and a non-existing destination (and therefore has a race condition that can clobber the destination inadvertently). (A usage sketch follows this list.)

  • The statx function has been added, a variant of the fstatat64 function with an additional flags argument. If there is no direct kernel support for statx, glibc provides basic stat support based on the fstatat64 function. (A usage sketch follows this list.)

  • IDN domain names in getaddrinfo and getnameinfo now use the system libidn2 library if installed. libidn2 version 2.0.5 or later is recommended. If libidn2 is not available, internationalized domain names are not encoded or decoded even if the AI_IDN or NI_IDN flags are passed to getaddrinfo or getnameinfo. (getaddrinfo calls with non-ASCII names and AI_IDN will fail with an encoding error.) Flags which used to change the IDN encoding and decoding behavior (AI_IDN_ALLOW_UNASSIGNED, AI_IDN_USE_STD3_ASCII_RULES, NI_IDN_ALLOW_UNASSIGNED, NI_IDN_USE_STD3_ASCII_RULES) have been deprecated. They no longer have any effect. (A usage sketch follows this list.)

  • Parsing of dynamic string tokens in DT_RPATH, DT_RUNPATH, DT_NEEDED, DT_AUXILIARY, and DT_FILTER has been expanded to support the full range of ELF gABI expressions including such constructs as '$ORIGIN$ORIGIN' (if valid). For SUID/GUID applications the rules have been further restricted, and where in the past a dynamic string token sequence may have been interpreted as a literal string it will now cause a load failure. These load failures were always considered unspecified behaviour from the perspective of the dynamic loader, and for safety are now load errors e.g. /foo/${ORIGIN}.so in DT_NEEDED results in a load failure now.

  • Support for ISO C threads (ISO/IEC 9899:2011) has been added. The implementation includes all the standard functions provided by <threads.h>:

- thrd_current, thrd_equal, thrd_sleep, thrd_yield, thrd_create, thrd_detach, thrd_exit, and thrd_join for thread management.

- mtx_init, mtx_lock, mtx_timedlock, mtx_trylock, mtx_unlock, and mtx_destroy for mutual exclusion.

- call_once for function call synchronization.

- cnd_broadcast, cnd_destroy, cnd_init, cnd_signal, cnd_timedwait, and cnd_wait for conditional variables.

- tss_create, tss_delete, tss_get, and tss_set for thread-local storage.

Application developers must link against libpthread to use ISO C threads. (A usage sketch follows this list.)
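
To make the renameat2 behaviour above concrete, here is a minimal sketch of an atomic "do not overwrite" rename that falls back when the running kernel lacks renameat2 (glibc then fails with EINVAL). It assumes glibc 2.28's declaration of renameat2 in <stdio.h> under _GNU_SOURCE; the RENAME_NOREPLACE value is taken from the Linux UAPI in case the headers do not define it.

#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>   /* AT_FDCWD */
#include <stdio.h>   /* renameat, renameat2 (glibc 2.28, _GNU_SOURCE) */

#ifndef RENAME_NOREPLACE
# define RENAME_NOREPLACE (1 << 0)   /* value from the Linux UAPI */
#endif

/* Rename oldpath to newpath, refusing to overwrite an existing file
   where the kernel supports it. */
static int rename_noreplace(const char *oldpath, const char *newpath)
{
    if (renameat2(AT_FDCWD, oldpath, AT_FDCWD, newpath, RENAME_NOREPLACE) == 0)
        return 0;
    if (errno == EINVAL)
        /* No kernel support for renameat2: fall back to a plain,
           non-atomic rename (a caller may prefer to refuse instead). */
        return renameat(AT_FDCWD, oldpath, AT_FDCWD, newpath);
    return -1;
}

int main(int argc, char **argv)
{
    if (argc != 3)
        return 2;
    return rename_noreplace(argv[1], argv[2]) == 0 ? 0 : 1;
}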
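
The statx addition can be exercised as below; a minimal sketch assuming the glibc 2.28 declarations in <sys/stat.h> under _GNU_SOURCE (struct statx, STATX_BASIC_STATS).

#define _GNU_SOURCE
#include <fcntl.h>     /* AT_FDCWD */
#include <stdio.h>
#include <sys/stat.h>  /* statx, struct statx, STATX_BASIC_STATS */

int main(int argc, char **argv)
{
    struct statx stx;

    if (argc != 2)
        return 2;
    /* Ask only for the basic stat fields; glibc emulates this with
       fstatat64 when the kernel has no statx syscall. */
    if (statx(AT_FDCWD, argv[1], 0, STATX_BASIC_STATS, &stx) != 0) {
        perror("statx");
        return 1;
    }
    printf("%s: %llu bytes, mode %o\n", argv[1],
           (unsigned long long) stx.stx_size, (unsigned int) stx.stx_mode);
    return 0;
}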
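
For the IDN change above, a small resolver sketch: with the system libidn2 installed, passing AI_IDN (a GNU extension, available with _GNU_SOURCE) makes getaddrinfo encode a non-ASCII host name before the lookup; without libidn2, such a lookup fails with an encoding error.

#define _GNU_SOURCE
#include <netdb.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

int main(int argc, char **argv)
{
    struct addrinfo hints, *res;
    int err;

    if (argc != 2)
        return 2;
    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_UNSPEC;
    hints.ai_flags = AI_IDN;   /* encode non-ASCII names via libidn2 */

    err = getaddrinfo(argv[1], NULL, &hints, &res);
    if (err != 0) {
        fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(err));
        return 1;
    }
    puts("name resolved");
    freeaddrinfo(res);
    return 0;
}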
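
And the new ISO C threads support looks like this in use; a minimal sketch, built with something like 'gcc demo.c -lpthread' since, as noted above, the functions live in libpthread.

#include <stdio.h>
#include <threads.h>

static mtx_t lock;
static int counter;

/* Thread start routine: C11 threads use int (*)(void *). */
static int worker(void *arg)
{
    (void) arg;
    mtx_lock(&lock);
    counter++;
    mtx_unlock(&lock);
    return 0;
}

int main(void)
{
    thrd_t threads[4];
    int i;

    if (mtx_init(&lock, mtx_plain) != thrd_success)
        return 1;
    for (i = 0; i < 4; i++)
        thrd_create(&threads[i], worker, NULL);
    for (i = 0; i < 4; i++)
        thrd_join(threads[i], NULL);
    mtx_destroy(&lock);
    printf("counter = %d\n", counter);
    return 0;
}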

Deprecated and removed features, and other changes affecting compatibility:

  • The nonstandard header files <libio.h> and <_G_config.h> are no longer installed. Software that was using either header should be updated to use standard <stdio.h> interfaces instead.

  • The stdio functions 'getc' and 'putc' are no longer defined as macros. This was never required by the C standard, and the macros just expanded to call alternative names for the same functions. If you hoped getc and putc would provide performance improvements over fgetc and fputc, instead investigate using (f)getc_unlocked and (f)putc_unlocked, and, if necessary, flockfile and funlockfile. (A usage sketch follows this list.)

  • All stdio functions now treat end-of-file as a sticky condition. If you read from a file until EOF, and then the file is enlarged by another process, you must call clearerr or another function with the same effect (e.g. fseek, rewind) before you can read the additional data. This corrects a longstanding C99 conformance bug. It is most likely to affect programs that use stdio to read interactive input from a terminal. (Bug #1190.) (A usage sketch follows this list.)

  • The macros 'major', 'minor', and 'makedev' are now only available from the header <sys/sysmacros.h>; not from <sys/types.h> or various other headers that happen to include <sys/types.h>. These macros are rarely used, not part of POSIX nor XSI, and their names frequently collide with user code; see https://sourceware.org/bugzilla/show_bug.cgi?id=19239 for further explanation. (A usage sketch follows this list.)

<sys/sysmacros.h> is a GNU extension. Portable programs that require these macros should first include <sys/types.h>, and then include <sys/sysmacros.h> if __GNU_LIBRARY__ is defined.

  • The tilegx*-*-linux-gnu configurations are no longer supported.
  • The obsolete function ustat is no longer available to newly linked binaries; the headers <ustat.h> and <sys/ustat.h> have been removed. This function has been deprecated in favor of fstatfs and statfs.

  • The obsolete function nfsservctl is no longer available to newly linked binaries. This function was specific to systems using the Linux kernel and could not usefully be used with the GNU C Library on systems with version 3.1 or later of the Linux kernel.

  • The obsolete function name llseek is no longer available to newly linked binaries. This function was specific to systems using the Linux kernel and was not declared in a header. Programs should use the lseek64 name for this function instead.

  • The AI_IDN_ALLOW_UNASSIGNED and NI_IDN_ALLOW_UNASSIGNED flags for the getaddrinfo and getnameinfo functions have been deprecated. The behavior previously selected by them is now always enabled.

  • The AI_IDN_USE_STD3_ASCII_RULES and NI_IDN_USE_STD3_ASCII_RULES flags for the getaddrinfo and getnameinfo functions have been deprecated. The STD3 restriction (rejecting '_' in host names, among other things) has been removed, for increased compatibility with non-IDN name resolution.

  • The fcntl function now has a Long File Support variant named fcntl64. It is added to fix some Linux Open File Description (OFD) locks usage on non-LFS mode. As for other *64 functions, fcntl64 semantics are analogous with fcntl and LFS support is handled transparently. Also for Linux, the OFD locks act as a cancellation entrypoint. (A usage sketch follows this list.)

  • The obsolete functions encrypt, encrypt_r, setkey, setkey_r, cbc_crypt, ecb_crypt, and des_setparity are no longer available to newly linked binaries, and the headers <rpc/des_crypt.h> and <rpc/rpc_des.h> are no longer installed. These functions encrypted and decrypted data with the DES block cipher, which is no longer considered secure. Software that still uses these functions should switch to a modern cryptography library, such as libgcrypt.

  • Reflecting the removal of the encrypt and setkey functions above, the macro _XOPEN_CRYPT is no longer defined. As a consequence, the crypt function is no longer declared unless _DEFAULT_SOURCE or _GNU_SOURCE is enabled.

  • The obsolete function fcrypt is no longer available to newly linked binaries. It was just another name for the standard function crypt, and it has not appeared in any header file in many years.

  • We have tentative plans to hand off maintenance of the passphrase-hashing library, libcrypt, to a separate development project that will, we hope, keep up better with new passphrase-hashing algorithms. We will continue to declare 'crypt' in <unistd.h>, and programs that use 'crypt' or 'crypt_r' should not need to change at all; however, distributions will need to install <crypt.h> and libcrypt from a separate project.

In this release, if the configure option --disable-crypt is used, glibc will not install <crypt.h> or libcrypt, making room for the separate project's versions of these files. The plan is to make this the default behavior in a future release.
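
As a concrete follow-up to the getc/putc note above, here is a minimal sketch of the suggested unlocked interfaces: the stream locks are taken once around the whole copy loop instead of once per character.

#include <stdio.h>

int main(void)
{
    int c;

    /* Take each stream's lock once, then use the _unlocked variants. */
    flockfile(stdin);
    flockfile(stdout);
    while ((c = getc_unlocked(stdin)) != EOF)
        putc_unlocked(c, stdout);
    funlockfile(stdout);
    funlockfile(stdin);

    return ferror(stdin) || ferror(stdout) ? 1 : 0;
}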
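
The sticky end-of-file behaviour above matters for programs that poll a growing file; a minimal sketch of a crude 'tail -f' style loop that now has to call clearerr before retrying.

#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    FILE *fp;
    int c;

    if (argc != 2 || (fp = fopen(argv[1], "r")) == NULL)
        return 1;
    for (;;) {
        while ((c = getc(fp)) != EOF)
            putchar(c);
        /* EOF is now sticky: clear it before trying to read data that
           another process may have appended in the meantime. */
        clearerr(fp);
        fflush(stdout);
        sleep(1);
    }
}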
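
For the major/minor/makedev change above, a sketch of the portable include order the notes recommend:

#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/sysmacros.h>   /* major(), minor(), makedev(): no longer
                                pulled in via <sys/types.h> */

int main(int argc, char **argv)
{
    struct stat st;

    if (argc != 2 || stat(argv[1], &st) != 0)
        return 1;
    /* Decompose the device number of the filesystem containing the file. */
    printf("%s lives on device %u:%u\n", argv[1],
           major(st.st_dev), minor(st.st_dev));
    return 0;
}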
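
And the fcntl64 note above concerns Open File Description (OFD) locks; a minimal sketch, assuming _GNU_SOURCE for the F_OFD_* constants and _FILE_OFFSET_BITS=64 so that 32-bit builds transparently get the LFS variant.

#define _GNU_SOURCE
#define _FILE_OFFSET_BITS 64   /* route 32-bit builds to the LFS/fcntl64 path */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    struct flock lk;
    int fd;

    if (argc != 2 || (fd = open(argv[1], O_RDWR)) < 0)
        return 1;

    memset(&lk, 0, sizeof lk);   /* OFD locks require l_pid == 0 */
    lk.l_type = F_WRLCK;
    lk.l_whence = SEEK_SET;
    lk.l_start = 0;
    lk.l_len = 0;                /* 0 means "lock the whole file" */

    if (fcntl(fd, F_OFD_SETLK, &lk) != 0) {
        perror("fcntl(F_OFD_SETLK)");
        return 1;
    }
    puts("OFD write lock acquired");
    return 0;
}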

Changes to build and runtime requirements:

GNU make 4.0 or later is now required to build glibc.

Security related changes:

CVE-2016-6261, CVE-2016-6263, CVE-2017-14062: Various vulnerabilities have
been fixed by removing the glibc-internal IDNA implementation and using
the system-provided libidn2 library instead. Originally reported by Hanno
Böck and Christian Weisgerber.

CVE-2017-18269: An SSE2-based memmove implementation for the i386
architecture could corrupt memory. Reported by Max Horn.

CVE-2018-11236: Very long pathname arguments to realpath function could
result in an integer overflow and buffer overflow. Reported by Alexey
Izbyshev.

CVE-2018-11237: The mempcpy implementation for the Intel Xeon Phi
architecture could write beyond the target buffer, resulting in a buffer
overflow. Reported by Andreas Schwab.

The following bugs are resolved with this release:

[1190] stdio: fgetc()/fread() behaviour is not POSIX compliant
[6889] manual: 'PWD' mentioned but not specified
[13575] libc: SSIZE_MAX defined as LONG_MAX is inconsistent with ssize_t,
when __WORDSIZE != 64
[13762] regex: re_search etc. should return -2 on memory exhaustion
[13888] build: /tmp usage during testing
[13932] math: dbl-64 pow unexpectedly slow for some inputs
[14092] nptl: Support C11 threads
[14095] localedata: Review / update collation data from Unicode / ISO
14651
[14508] libc: -Wformat warnings
[14553] libc: Namespace pollution loff_t in sys/types.h
[14890] libc: Make NT_PRFPREG canonical.
[15105] libc: Extra PLT references with -Os
[15512] libc: __bswap_constant_16 not compiled when -Werror -Wsign-
conversion is given
[16335] manual: Feature test macro documentation incomplete and out of
date
[16552] libc: Unify umount implementations in terms of umount2
[17082] libc: htons et al.: statement-expressions prevent use on global
scope with -O1 and higher
[17343] libc: Signed integer overflow in /stdlib/random_r.c
[17438] localedata: pt_BR: wrong d_fmt delimiter
[17662] libc: please implement binding for the new renameat2 syscall
[17721] libc: __restrict defined as /* Ignore */ even in c11
[17979] libc: inconsistency between uchar.h and stdint.h
[18018] dynamic-link: Additional $ORIGIN handling issues (CVE-2011-0536)
[18023] libc: extend_alloca is broken (questionable pointer comparison,
horrible machine code)
[18124] libc: hppa: setcontext erroneously returns -1 as exit code for
last constant.
[18471] libc: llseek should be a compat symbol
[18473] soft-fp: [powerpc-nofpu] __sqrtsf2, __sqrtdf2 should be compat
symbols
[18991] nss: nss_files skips large entry in database
[19239] libc: Including stdlib.h ends up with macros major and minor being
defined
[19463] libc: linknamespace failures when compiled with -Os
[19485] localedata: csb_PL: Update month translations + add yesstr/nostr
[19527] locale: Normalized charset name not recognized by setlocale
[19667] string: Missing Sanity Check for malloc calls in file 'testcopy.c'
[19668] libc: Missing Sanity Check for malloc() in file 'tst-setcontext-
fpscr.c'
[19728] network: out of bounds stack read in libidn function
idna_to_ascii_4i (CVE-2016-6261)
[19729] network: out of bounds heap read on invalid utf-8 inputs in
stringprep_utf8_nfkc_normalize (CVE-2016-6263)
[19818] dynamic-link: Absolute (SHN_ABS) symbols incorrectly relocated by
the base address
[20079] libc: Add SHT_X86_64_UNWIND to elf.h
[20251] libc: 32bit programs pass garbage in struct flock for OFD locks
[20419] dynamic-link: files with large allocated notes crash in
open_verify
[20530] libc: bswap_16 should use __builtin_bswap16() when available
[20890] dynamic-link: ldconfig: fsync the files before atomic rename
[20980] manual: CFLAGS environment variable replaces vital options
[21163] regex: Assertion failure in pop_fail_stack when executing a
malformed regexp (CVE-2015-8985)
[21234] manual: use of CFLAGS makes glibc detect no optimization
[21269] dynamic-link: i386 sigaction sa_restorer handling is wrong
[21313] build: Compile Error GCC 5.4.0 MIPS with -0S
[21314] build: Compile Error GCC 5.2.0 MIPS with -0s
[21508] locale: intl/tst-gettext failure with latest msgfmt
[21547] localedata: Tibetan script collation broken (Dzongkha and Tibetan)
[21812] network: getifaddrs() returns entries with ifa_name == NULL
[21895] libc: ppc64 setjmp/longjmp not fully interoperable with static
dlopen
[21942] dynamic-link: _dl_dst_substitute incorrectly handles $ORIGIN: with
AT_SECURE=1
[22241] localedata: New locale: Yakut (Sakha) locale for Russia (sah_RU)
[22247] network: Integer overflow in the decode_digit function in
puny_decode.c in libidn (CVE-2017-14062)
[22342] nscd: NSCD not properly caching netgroup
[22391] nptl: Signal function clear NPTL internal symbols inconsistently
[22550] localedata: es_ES locale (and other es_* locales): collation
should treat ñ as a primary different character, sync the collation
for Spanish with CLDR
[22638] dynamic-link: sparc: static binaries are broken if glibc is built
by gcc configured with --enable-default-pie
[22639] time: year 2039 bug for localtime etc. on 64-bit platforms
[22644] string: memmove-sse2-unaligned on 32bit x86 produces garbage when
crossing 2GB threshold (CVE-2017-18269)
[22646] localedata: redundant data (LC_TIME) for es_CL, es_CU, es_EC and
es_BO
[22735] time: Misleading typo in time.h source comment regarding
CLOCKS_PER_SECOND
[22753] libc: preadv2/pwritev2 fallback code should handle offset=-1
[22761] libc: No trailing `%n' conversion specifier in FMT passed from
`__assert_perror_fail ()' to `__assert_fail_base ()'
[22766] libc: all glibc internal dlopen should use RTLD_NOW for robust
dlopen failures
[22786] libc: Stack buffer overflow in realpath() if input size is close
to SSIZE_MAX (CVE-2018-11236)
[22787] dynamic-link: _dl_check_caller returns false when libc is linked
through an absolute DT_NEEDED path
[22792] build: tcb-offsets.h dependency dropped
[22797] libc: pkey_get() uses non-reserved name of argument
[22807] libc: PTRACE_* constants missing for powerpc
[22818] glob: posix/tst-glob_lstat_compat failure on alpha
[22827] dynamic-link: RISC-V ELF64 parser mis-reads flag in ldconfig
[22830] malloc: malloc_stats doesn't restore cancellation state on stderr
[22848] localedata: ca_ES: update date definitions from CLDR
[22862] build: _DEFAULT_SOURCE is defined even when _ISOC11_SOURCE is
[22884] math: RISCV fmax/fmin handle signalling NANs incorrectly
[22896] localedata: Update locale data for an_ES
[22902] math: float128 test failures with GCC 8
[22918] libc: multiple common of `__nss_shadow_database'
[22919] libc: sparc32: backtrace yields infinite backtrace with
makecontext
[22926] libc: FTBFS on powerpcspe
[22932] localedata: lt_LT: Update of abbreviated month names from CLDR
required
[22937] localedata: Greek (el_GR, el_CY) locales actually need ab_alt_mon
[22947] libc: FAIL: misc/tst-preadvwritev2
[22963] localedata: cs_CZ: Add alternative month names
[22987] math: [powerpc/sparc] fdim inlines errno, exceptions handling
[22996] localedata: change LC_PAPER to en_US in es_BO locale
[22998] dynamic-link: execstack tests are disabled when SELinux is
disabled
[23005] network: Crash in __res_context_send after memory allocation
failure
[23007] math: strtod cannot handle -nan
[23024] nss: getlogin_r is performing NSS lookups when loginid isn't set
[23036] regex: regex equivalence class regression
[23037] libc: initialize msg_flags to zero for sendmmsg() calls
[23069] libc: sigaction broken on riscv64-linux-gnu
[23094] localedata: hr_HR: wrong thousands_sep and mon_thousands_sep
[23102] dynamic-link: Incorrect parsing of multiple consecutive $variable
patterns in runpath entries (e.g. $ORIGIN$ORIGIN)
[23137] nptl: s390: pthread_join sometimes block indefinitely (on 31bit
and libc build with -Os)
[23140] localedata: More languages need two forms of month names
[23145] libc: _init/_fini aren't marked as hidden
[23152] localedata: gd_GB: Fix typo in "May" (abbreviated)
[23171] math: C++ iseqsig for long double converts arguments to double
[23178] nscd: sudo will fail when it is run in concurrent with commands
that changes /etc/passwd
[23196] string: __mempcpy_avx512_no_vzeroupper mishandles large copies
(CVE-2018-11237)
[23206] dynamic-link: static-pie + dlopen breaks debugger interaction
[23208] localedata: New locale - Lower Sorbian (dsb)
[23233] regex: Memory leak in build_charclass_op function in file
posix/regcomp.c
[23236] stdio: Harden function pointers in _IO_str_fields
[23250] nptl: Offset of __private_ss differs from GCC
[23253] math: tgamma test suite failures on i686 with -march=x86-64
-mtune=generic -mfpmath=sse
[23259] dynamic-link: Unsubstituted ${ORIGIN} remains in DT_NEEDED for
AT_SECURE
[23264] libc: posix_spawnp wrongly executes ENOEXEC in non compat mode
[23266] nis: stringop-truncation warning with new gcc8.1 in nisplus-
parser.c
[23272] math: fma(INFINITY,INFIITY,0.0) should be INFINITY
[23277] math: nan function should not have const attribute
[23279] math: scanf and strtod wrong for some hex floating-point
[23280] math: wscanf rounds wrong; wcstod is ok for negative numbers and
directed rounding
[23290] localedata: IBM273 is not equivalent to ISO-8859-1
[23303] build: undefined reference to symbol
'__parse_hwcap_and_convert_at_platform@@GLIBC_2.23'
[23307] dynamic-link: Absolute symbols whose value is zero ignored in
lookup
[23313] stdio: libio vtables validation and standard file object
interposition
[23329] libc: The __libc_freeres infrastructure is not properly run across
DSO boundaries.
[23349] libc: Various glibc headers no longer compatible with
<linux/time.h>
[23351] malloc: Remove unused code related to heap dumps and malloc
checking
[23363] stdio: stdio-common/tst-printf.c has non-free license
[23396] regex: Regex equivalence regression in single-byte locales
[23422] localedata: oc_FR: More updates of locale data
[23442] build: New warning with GCC 8
[23448] libc: Out of bounds access in IBM-1390 converter
[23456] libc: Wrong index_cpu_LZCNT
[23458] build: tst-get-cpu-features-static isn't added to tests
[23459] libc: COMMON_CPUID_INDEX_80000001 isn't populated for Intel
processors
[23467] dynamic-link: x86/CET: A property note parser bug

Release Notes
=============

https://sourceware.org/glibc/wiki/Release/2.28

Contributors
============

This release was made possible by the contributions of many people.
The maintainers are grateful to everyone who has contributed
changes or bug reports. These include:

Adhemerval Zanella
Agustina Arzille
Alan Modra
Alexandre Oliva
Amit Pawar
Andreas Schwab
Andrew Senkevich
Andrew Waterman
Aurelien Jarno
Carlos O'Donell
Chung-Lin Tang
DJ Delorie
Daniel Alvarez
David Michael
Dmitry V. Levin
Dragan Stanojevic - Nevidljivi
Florian Weimer
Flávio Cruz
Francois Goichon
Gabriel F. T. Gomes
H.J. Lu
Herman ten Brugge
Hongbo Zhang
Igor Gnatenko
Jesse Hathaway
John David Anglin
Joseph Myers
Leonardo Sandoval
Maciej W. Rozycki
Mark Wielaard
Martin Sebor
Michael Wolf
Mike FABIAN
Patrick McGehearty
Patsy Franklin
Paul Pluzhnikov
Quentin PAGÈS
Rafal Luzynski
Rajalakshmi Srinivasaraghavan
Raymond Nicholson
Rical Jasan
Richard Braun
Robert Buj
Rogerio Alves
Samuel Thibault
Sean McKean
Siddhesh Poyarekar
Stefan Liebler
Steve Ellcey
Sylvain Lesage
Szabolcs Nagy
Thomas Schwinge
Tulio Magno Quites Machado Filho
Valery Timiriliyev
Vincent Chen
Wilco Dijkstra
Zack Weinberg
Zong Li

01 August, 2018 07:08AM by Carlos O'Donell

July 31, 2018

FSF Blogs

Apple App Store anniversary marks ten years of proprietary appsploitation

It's been ten years since Apple opened the App Store. This created a whole new industry through which third party app creators and Apple themselves found new ways to threaten user freedom with technical tricks and legal loopholes. Since the beginning, we at the Free Software Foundation have recognized the threats posed by the iPhone and have reported on Apple on fsf.org and DefectiveByDesign, while free software supporters around the world have been taking action.

Apple controls your apps

The only way to install apps on a non-jailbroken iPhone is through the iOS App Store -- this means that your device can only run what Apple wants it to run.

Apple acts as a gatekeeper for which apps you're allowed to access. They control what becomes available -- and not every app gets to stay there. They regularly remove apps for many reasons and sometimes no reason at all. They claim this enhances your security, but are happy to abuse their power: Apple blocked updates to messaging app Telegram in Russia, after demands from the government. This was after Telegram was removed entirely from the App Store, only to be made available again later. When BitCoin posed a threat to Apple Pay, they removed all BitCoin apps.

Other instances of app removal include July 2017, when Apple removed apps that circumvented the Great Firewall, making them no longer available in the App Store. GNU Go was removed after issues with GPL compliance on Apple's end.

Apple loves DRM

Apple loves Digital Restrictions Management (DRM)! DRM is the use of technology (including software) to restrict access to digital media like ebooks, games, and music. Apple's use of DRM not only steps on the freedoms of users, but has proven to be downright dangerous. In 2016, AceDeceiver became the first iOS trojan exploiting flaws in iOS DRM.

In a DRM-free world, any player can play music purchased from any store, and any store can sell music which is playable on all players. This is clearly the best alternative for consumers, and Apple would embrace it in a heartbeat. - Steve Jobs

When DRM was dropped from the iTunes store, Steve Jobs wrote an essay titled "Thoughts on Music," which took a firm stance against DRM. It has since been removed from the Apple Web site. In it, Jobs called for the world to abandon DRM technologies, and for Apple to embrace a DRM-free future. This is clearly no longer Apple's stance on DRM.

Apple loves surveillance

In addition to colluding with Russia, Apple has allowed itself to be the tool of other governments looking to monitor and control their populations. For example: the National Security Agency's (NSA) PRISM program, which allows the NSA access to data, including "search history, the content of emails, file transfers, and live chats" of Apple users. While Apple claimed no knowledge of the program, the NSA reported that the company gave them this access. Whether or not this is true, we can never know due to the opaque and proprietary nature of Apple's code and business practices.

The iPhone X brought with it facial recognition capabilities. In light of government use of facial recognition and government contracts with Amazon to use their Rekognition facial scanning technology, we are deeply concerned about the transparency and surveillance issues associated with the widespread deployment of this technology.

While Apple has taken strong stances against certain types of government surveillance, they support surveillance in other contexts. By building proprietary technology designed to lock users into a system, they enable the possibility for gross surveillance and limitless anti-privacy policies. Apple CEO Tim Cook is currently preventing this. Should Cook change his mind, decide that being pro-privacy is no longer profitable, or leave Apple, things could change. There are no guarantees a new CEO would not kowtow to government demands.

Apple also offers other forms of surveillance, like the Screen Time feature, which allows users to control the ability to access different apps and functionality on an iPhone. This might seem great if you're looking to keep yourself off Twitter, but it's also a tool of monitoring and control. Tools like this can (and will) be used by domestic abusers looking to control other people's access to technology.

There are better choices you can make!

What can you do?

Talk to Apple

One thing we always recommend is contacting Apple about your stance on iOS devices.

Talk to others

If you're looking to get more hands-on, visit your local Apple store with flyers and stickers and hand them out to people going into the store. We also have DRM-specific flyers.

If you're organizing an event or action, email info@fsf.org and we can arrange to send you some items to hand out (gratis within the United States, for the cost of shipping outside the United States).

Buy a better mobile device

There are more ethical options for your mobile needs. Technoethical -- a company that sells a number of Respects Your Freedom (RYF) devices -- also sells mobile devices. These mobile devices come pre-installed with Replicant (see below), a version of Android that has had the nonfree parts removed. Purism is also working on creating a mobile device. (Please note that at the moment there are no RYF-certified mobile devices.)

Get better software

Android and iOS aren't your only choices for a mobile operating system! Replicant is a free software OS. We help support Replicant through the Working Together for Free Software fund, and consider it to be a High Priority Project.

Rather than using a proprietary app store, you can download and install apps using F-Droid, a free software marketplace that features free software and is free itself! F-Droid works on Replicant and Android devices.

Join a project

Projects like Replicant and F-Droid are always looking for volunteers with a range of skill sets, including designers, developers, translators, and writers. You can also package apps for F-Droid, helping to create a more robust selection of available apps.

31 July, 2018 03:55PM

FSF Events

Richard Stallman - "El movimiento del software libre" (Santiago de Chile)

Richard Stallman will speak about the goals and philosophy of the Free Software Movement, and about the status and history of the GNU operating system, which, in combination with the kernel Linux, is currently used by tens of millions of people worldwide.

Richard Stallman's talk will not be technical and will be open to the public; everyone is invited to attend.

Location: INACAP Renca, Bravo de Saravia 2980, comuna de Renca (from downtown Santiago, take bus 314), Santiago de Chile, Chile

Please fill out this form so that we can contact you about future events in the Santiago de Chile region.

31 July, 2018 03:10PM

Richard Stallman will be in Valparaíso, Chile

Richard Stallman's talk will not be technical and will be open to the public; everyone is invited to attend.

The title and the exact location are to be determined.

Location: Universidad Católica de Valparaíso, Valparaíso, Chile

Please fill out this form so that we can contact you about future events in the Valparaíso region.

31 July, 2018 02:55PM

Richard Stallman - "Por una sociedad digital libre" (Valparaíso, Chile)

There are many threats to freedom in the digital society, such as mass surveillance, censorship, digital handcuffs, proprietary software that controls users, and the war against the practice of sharing. The use of web services presents yet more threats to users' freedom. Finally, we have no concrete right to do anything on the Internet; all of our online activities are precarious, and we can continue them only as long as companies are willing to cooperate.

Richard Stallman's talk will not be technical and will be open to the public; everyone is invited to attend.

Location: Auditorio Escuela de Ingeniería Química, Avenida Brasil nº 2162, Universidad Católica de Valparaíso, Valparaíso, Chile

Please fill out this form so that we can contact you about future events in the Valparaíso region.

31 July, 2018 02:45PM

July 30, 2018

Lonely Cactus

On the Joys and Perils of YouTube

For want of a social aspect to my technology addiction, of late I have been recording video content and placing it on YouTube.  It has been an interesting endeavor, because it involves skills that I heretofore have never trained.  How does one look good on camera?  What does one do with one's hands?  What is the efficient way to record and edit video?  What is the right way to do lighting and audio?  It has been fun so far, largely because the videos I've recorded still look so amateurish.  That I will be able to learn and progress at something new enchants me.

YouTube is an amazing platform, and the result of untold man-years of effort. The voice recognition involved in the automatic closed captioning is impressive.

But as any graybeard GNU-ster will attest, placing your content solely in the hands of a faceless evil corporation like Google is unwise, since I am not their customer.  Their advertisers are their customers, and their users are just free content creators and a source of training data for their AI.  So, in parallel, I've been revisiting the idea of resurrecting my website.

It is a somewhat overwhelming idea for me because there are infinite possibilities.  I could (and should) just do a WordPress instance and call it a day, for that would be efficient, but I would love to take the opportunity to learn something new.

Over the weekend, I enumerated my many sources of internet content.  So far, I've discovered
  • YouTube
  • Twitter
  • A code blog on Blogger
  • A personal website, hosted by a hosting provider, that is never updated
  • Another personal website that is just a parked domain right now
  • Yet another personal website, on my home PC, that is rarely updated
  • A security camera
Over a weekend's pondering, I have decided that I will keep three projects, and that each of these will just be different skins on the same content.
  • The true content backend -- not publicly visible -- which will be the source.  Video, images, audio will be stored in their native resolution and formats
  • A website.  It will be GNU-friendly.  No weird javascript.  Video will be medium resolution to split the difference between quality and download time.  Probably 720p Ogg+Theora+Vorbis.  Audio will be Ogg+Vorbis or MP3.  Images will be JPEG.
  • YouTube.
  • A gopher server where the 1990s will live on forever.  Video will be shrunk to 352x288 pixel MP4 or CIF-sized 3gp+h.263+AMR_NB.  Audio will be MP3.  Images will be 640x480 GIF.
I will probably end up with a LAMP instance with Python/Django or whatever, because hosting providers like VMs set up that way.

---

In my personal archaeology, I also found these projects that are not externally visible or not working
  • An instance of the never-completed telnet PupperBBS, which has no content
  • A real-time chat service called Jozabad
  • A shoutcast/icecast server that has been serving up the same song for who knows how long

30 July, 2018 03:09PM by Mike (noreply@blogger.com)

July 27, 2018

FSF Blogs

GNU Spotlight with Mike Gerwitz: 15 new GNU releases!

For announcements of most new GNU releases, subscribe to the info-gnu mailing list: https://lists.gnu.org/mailman/listinfo/info-gnu.

To download: nearly all GNU software is available from https://ftp.gnu.org/gnu/, or preferably one of its mirrors from https://www.gnu.org/prep/ftp.html. You can use the URL https://ftpmirror.gnu.org/ to be automatically redirected to a (hopefully) nearby and up-to-date mirror.
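For example, fetching a source tarball through the redirector could look like this (the package and version here are chosen purely for illustration):

wget https://ftpmirror.gnu.org/hello/hello-2.10.tar.gz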

This month, we welcome Jan Nieuwenhuizen as maintainer of the new GNU Mes.

A number of GNU packages, as well as the GNU operating system as a whole, are looking for maintainers and other assistance: please see https://www.gnu.org/server/takeaction.html#unmaint if you'd like to help. The general page on how to help GNU is at https://www.gnu.org/help/help.html.

If you have a working or partly working program that you'd like to offer to the GNU project as a GNU package, see https://www.gnu.org/help/evaluation.html.

As always, please feel free to write to us at maintainers@gnu.org with any GNUish questions or suggestions for future installments.

27 July, 2018 03:14PM

libredwg @ Savannah

Revealing unknown DWG classes (2)

I've added more solver code and a more detailed explanation to the HACKING file, to find the binary layout of unknown DWG classes, in reference to public docs and generated DXF files. See https://savannah.gnu.org/forum/forum.php?forum_id=9197 for the first part.

So this is now the real AI part of examples/unknown. I've added the spec of the following classes in the meantime, in various states of completeness:
ASSOCDEPENDENCY, DIMASSOC, ASSOCACTION, the 4 SURFACE classes, HELIX, UNDERLAY and UNDERLAYDEFINITION (PDF, DGN, DWF), ASSOCALIGNEDDIMACTIONBODY, DYNAMICBLOCKPURGEPREVENTER, DBCOLOR, PERSSUBENTMANAGER, ASSOC2DCONSTRAINTGROUP, EVALUATION_GRAPH, ASSOCPERSSUBENTMANAGER, ASSOCOSNAPPOINTREFACTIONPARAM, ASSOCNETWORK, SUNSTUDY.
Many more are in the works; with the picat solver and backtracker I can now create the most promising solutions at scale.

There's a lot of code related to examples/unknown to automatically
find the field layout of yet unknown classes. At first you need
DWG/DXF pairs of unknown entities or objects and put them into
test/test-data/. At creation take care to create uniquely identifiable
names and numbers, not to create DXF fields all with the same value 0.
Otherwise you'll never know which field in the DWG is which.

Then run make -C examples regen-unknown, which does this:

run ./logs.sh to create -v5 logfiles with the binary blobs for all
UNKNOWN_OBJ and UNKNOWN_ENT instances in those DWG's.

Then the perl script log_unknown.pl creates the include file
alldwg.inc adding all those blobs.

The next perl script log_unknown_dxf.pl parses alldwg.inc and looks
for matching DXF files, and creates the 3 include files alldxf_0.inc
with the matching blob data from alldwg.inc, alldxf_1.inc with the
matching field types and values from the DXF and alldxf_2.inc to
workaround some static initialization issues in the C file.

Next run make unknown, which does this:

Compiles and runs examples/unknown, which creates, for every string
value in the DXF, some bit representations and tries to find them in
the UNKNOWN blobs. If it doesn't find them, either the string-to-bit
conversion lost too much precision to be able to find them, esp. with
doubles, or we have a different problem. make unknown creates a big
log file unknown-`git describe`.log in which you can see the
individual statistics and initial layout guesses.

E.g.
42/230=18.3%
possible: [34433333344443333334444333333311xxxxxxxxxx3443333...
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
11 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx 11 1]

The x stands for a fixed field, the numbers and a dot for the number
of variants this bit is used for (the dot for >9) and a space means
this is a hole for a field which is not represented as DXF field, i.e.
a FIELD_*(name, 0) in the dwg.spec with DXF group code 0.

unknown also creates picat data files in examples/ which are then used with
picat from http://picat-lang.org/ to enhance the search for the best layout
guess for each particular class. picat is a nice mix of a functional
programming tool with an optional constraint solver. The first part in
the picat process does almost the same as unknown.c, finding the fixed
layout, possible variants and holes in a straight-forward functional
fashion. This language is very similar to erlang, untyped haskell or prolog.
The second optimization part of picat uses a solver with
constraints to improve the layout of the found variants and holes to
find the best guess for the needed dwg.spec layout.
Note that picat list and array indices are one-based, so you need to
subtract 1 from each found offset. 1-32 mean the bits 0-31.

The field names are filled in by examples/log_unknown_dxf.pl automatically.
We could parse dwg.spec for this, but for now I went with a manual solution,
as the number of unknown classes gets less, not more.

E.g. for ACAD_EVALUATION_GRAPH.pi with a high percentage from the above
possible layout, it currently produces this:

Definite result:
----------------
HOLE([1,32],01000000010100000001010000000110) len = 32
FIELD_BL (edge_flags, 93); // 32 [33,42]
HOLE([43,52],0100000001) len = 10
FIELD_BL (node_edge1, 92); // -1 [53,86]
FIELD_BL (node_edge2, 92); // -1 [87,120]
FIELD_BL (node_edge3, 92); // -1 [121,154]
FIELD_BL (node_edge4, 92); // -1 [155,188]
HOLE([189,191],100) len = 3
FIELD_H (parenthandle, 330); // 6.0.0 [192,199]
FIELD_H (evalexpr, 360); // 3.2.2E2 [200,223]
HOLE([224,230],1100111) len = 7
----------------
Todo: 32 + 178 = 210, Missing: 20
FIELD_BL (has_graph, 96); // 1 0100000001 [[1,10],[11,20],[21,30],[43,52]]
FIELD_BL (unknown1, 97); // 1 0100000001 [[1,10],[11,20],[21,30],[43,52]]
FIELD_BL (nodeid, 91); // 0 10 [[2,3],[10,11],[12,13],[20,21],[22,23],[31,32],[44,45],[52,53],[189,190],[225,226]]
FIELD_BL (num_evalexpr, 95); // 1 0100000001 [[1,10],[11,20],[21,30],[43,52]]

The next picat steps do automate the following reasoning:

The first hole 1-32 is filled by the 3 1 values from BL96, BL97 and
BL95, followed by the 0 value from BL91. The second hole is clearly
another unknown BL with value 1. The third hole at 189-191
is padding before the handle stream, and can be ignored. This is from
an r2010 file, which has separate handle and text streams. The last
hole 224-230 could theoretically hold almost another unknown handle, but
practically it's also just padding. The last handles are always optional
reactors and the xdicobject handle for objects, and 7 bits is not enough
for a handle value. A code 4 null-handle would be 01000000.

You start by finding the DXF documentation and the ObjectARX header
file of the class, to get the names and description of the class.

You add the names and types to dwg.h and dwg.spec, change the class
type in classes.inc to DEBUGGING or UNTESTED. With DEBUGGING add the
-DDEBUG_CLASSES flag to CFLAGS in src/Makefile and test the dwg's with
programs/dwgread -v4. Some layouts are version dependent, some need
a REPEAT loop or vector with a num_field field.

The picat constraints module examples/unknown.pi is still being worked on
and is getting better and better at identifying all missing classes
automatically. The problem with AutoCAD DWG's is that everybody can
add their own custom classes as ObjectARX application, and that
reverse-engineering them never stops. So it has to be automated somehow.

27 July, 2018 09:55AM by Reini Urban

July 24, 2018

FSF Blogs

The Free Software Directory needs you! IRC meetups every Friday

The Free Software Directory is an essential catalog of free software online, composed and maintained by countless volunteers dedicated to the promotion of software that respects your personal liberty. Tens of thousands of people visit the Directory every month to discover free software and explore the information about version control, documentation, and licensing. Adding and maintaining entries to the Directory is crucial work to give people access to free software which has only free dependencies and runs on a free OS. All of this information is also exported in machine-readable formats, making it a valuable source of data for the study of trends in free software. The Directory is powered by MediaWiki, the same software used by Wikipedia.

Every Friday at 12:00-15:00 EDT (16:00 to 19:00 UTC), volunteers meet on IRC in the #fsf channel on irc.freenode.org to add new entries, update existing ones, and talk about free software together (to see the meeting start time in your time zone, run this in GNU bash: date --date='TZ="America/New_York" 12:00 this Fri'). As with any group composed of volunteers, the informal Directory team has people who come and go, and right now, it could really use some fresh new members to kick our efforts into high gear. The Directory just passed 16,000 entries this year, and as far as we're concerned, there's no limit to how high it should go!

Examples of ways you can help include updating information about existing entries, proposing new entries, or reviewing new entries submitted by others to make sure they meet the Directory's criteria.

If you can't wait or don't have the time to jump onto IRC on Friday afternoons, you can still help: check out the Free Software Directory Participation Guide for instructions.

No matter how you participate, improving the Free Software Directory is an easy, nuts-and-bolts way to make a contribution to the free software movement, bringing us a tiny step closer every day to a truly free society. We look forward to seeing you on IRC!

24 July, 2018 02:50PM

GNU Guix

Multi-dimensional transactions and rollbacks, oh my!

One of the highlights of version 0.15.0 was the overhaul of guix pull, the command that updates Guix and its package collection. In Debian terms, you can think of guix pull as:

apt-get update && apt-get install apt

Let’s be frank, guix pull does not yet run as quickly as this apt-get command—in the “best case”, when pre-built binaries are available, it currently runs in about 1m30s on a recent laptop. More about the performance story in a future post…

One of the key features of the new guix pull is the ability to roll back to previous versions of Guix. That’s a distinguishing feature that opens up new possibilities.

“Profile generations”

Transactional upgrades and rollbacks have been a distinguishing feature of Guix since Day 1. They come for free as a consequence of the functional package management model inherited from the Nix package manager. To many users, this alone is enough to justify using a functional package manager: if an upgrade goes wrong, you can always roll back. Let’s recap how this all works.

As a user, you install packages in your own profile, which defaults to ~/.guix-profile. Then from time to time you update Guix and its package collection:

$ guix pull

This updates ~/.config/guix/current, giving you an updated guix executable along with an updated set of packages. You can now upgrade the packages that are in your profile:

$ guix package -u
The following packages will be upgraded:
   diffoscope   93 → 96     /gnu/store/…-diffoscope-96
   emacs    25.3 → 26.1     /gnu/store/…-emacs-26.1
   gimp     2.8.22 → 2.10.4 /gnu/store/…-gimp-2.10.4
   gnupg    2.2.7 → 2.2.9   /gnu/store/…-gnupg-2.2.9

The upgrade creates a new generation of your profile—the previous generation of your profile, with diffoscope 93, emacs 25.3, and so on is still around. You can list profile generations:

$ guix package --list-generations
Generation 1  Jun 08 2018 20:06:21
   diffoscope   93     out   /gnu/store/…-diffoscope-93
   emacs        25.3   out   /gnu/store/…-emacs-25.3
   gimp         2.8.22 out   /gnu/store/…-gimp-2.8.22
   gnupg        2.2.7  out   /gnu/store/…-gnupg-2.2.7
   python       3.6.5  out   /gnu/store/…-python-3.6.5

Generation 2  Jul 12 2018 12:42:08     (current)
-  diffoscope   93     out   /gnu/store/…-diffoscope-93
-  emacs        25.3   out   /gnu/store/…-emacs-25.3
-  gimp         2.8.22 out   /gnu/store/…-gimp-2.8.22
-  gnupg        2.2.7  out   /gnu/store/…-gnupg-2.2.7
+  diffoscope   96     out   /gnu/store/…-diffoscope-96
+  emacs        26.1   out   /gnu/store/…-emacs-26.1
+  gimp         2.10.4 out   /gnu/store/…-gimp-2.10.4
+  gnupg        2.2.9  out   /gnu/store/…-gnupg-2.2.9

That shows our two generations with the diff between Generation 1 and Generation 2. We can at any time run guix package --roll-back and get our previous versions of gimp, emacs, and so on. Each generation is just a bunch of symlinks to those packages, so what we have looks like this:

Image of the profile generations.

Notice that python was not updated, so it’s shared between both generations. And of course, all the dependencies that didn’t change in between—e.g., the C library—are shared among all packages.
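As a quick sketch of the related commands (the generation numbers are only illustrative; these are the usual guix package options, nothing new in this post):

$ guix package --roll-back              # back to Generation 1
$ guix package --switch-generation=2    # jump to any other generation
$ guix package --delete-generations=1m  # drop generations older than one month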

guix pull generations

Like I wrote above, guix pull brings the latest set of package definitions from Git master. The Guix package collection usually contains only the latest version of each package; for example, current master only has version 26.1 of Emacs and version 2.10.4 of the GIMP (there are notable exceptions such as GCC or Python.) Thus, guix package -i gimp, from today’s master, can only install gimp 2.10.4. Often, that’s not a problem: you can keep old profile generations around, so if you really need that older version of Emacs, you can run it from your previous generation.

Still, having guix pull keep track of the changes to Guix and its package collection is useful. Starting from 0.15.0, guix pull creates a new generation, just like guix package does. After you’ve run guix pull, you can now list Guix generations as well:

$ guix pull -l
Generation 10   Jul 14 2018 00:02:03
  guix 27f7cbc
    repository URL: https://git.savannah.gnu.org/git/guix.git
    branch: origin/master
    commit: 27f7cbc91d1963118e44b14d04fcc669c9618176
Generation 11   Jul 20 2018 10:44:46
  guix 82549f2
    repository URL: https://git.savannah.gnu.org/git/guix.git
    branch: origin/master
    commit: 82549f2328c59525584b92565846217c288d8e85
  14 new packages: bsdiff, electron-cash, emacs-adoc-mode,
    emacs-markup-faces, emacs-rust-mode, inchi, luakit, monero-gui,
    nethack, openbabel, qhull, r-txtplot, stb-image, stb-image-write
  52 packages upgraded: angband@4.1.2, aspell-dict-en@2018.04.16-0,
    assimp@4.1.0, bitcoin-core@0.16.1, botan@2.7.0, busybox@1.29.1,
    …
Generation 12   Jul 23 2018 15:22:52    (current)
  guix fef7bab
    repository URL: https://git.savannah.gnu.org/git/guix.git
    branch: origin/master
    commit: fef7baba786a96b7a3100c9c7adf8b45782ced37
  20 new packages: ccrypt, demlo, emacs-dired-du,
    emacs-helm-org-contacts, emacs-ztree, ffmpegthumbnailer, 
    go-github-com-aarzilli-golua, go-github-com-kr-text, 
    go-github-com-mattn-go-colorable, go-github-com-mattn-go-isatty, 
    go-github-com-mgutz-ansi, go-github-com-michiwend-golang-pretty, 
    go-github-com-michiwend-gomusicbrainz, go-github-com-stevedonovan-luar, 
    go-github-com-wtolson-go-taglib, go-github-com-yookoala-realpath, 
    go-gitlab-com-ambrevar-damerau, go-gitlab-com-ambrevar-golua-unicode,
    guile-pfds, u-boot-cubietruck
  27 packages upgraded: c-toxcore@0.2.4, calibre@3.28.0,
    emacs-evil-collection@20180721-2.5d739f5, 
    …

The nice thing here is that guix pull provides high-level information about the differences between two subsequent generations of Guix.

In the end, Generation 1 of our profile was presumably built with Guix Generation 11, while Generation 2 of our profile was built with Guix Generation 12. We have a clear mapping between Guix generations as created by guix pull and profile generations as created with guix package:

Image of the Guix generations.

Each generation created by guix pull corresponds to one commit in the Guix repo. Thus, if I go to another machine and run:

$ guix pull --commit=fef7bab

then I know that I get the exact same Guix instance as my Generation 12 above. From there I can install diffoscope, emacs, etc. and I know I’ll get the exact same binaries as those I have above, thanks to reproducible builds.

These are very strong guarantees in terms of reproducibility and provenance tracking—properties that are typically missing from “applications bundles” à la Docker.

In addition, you can easily run an older Guix. For instance, this is how you would install the version of gimp that was current as of Generation 10:

$ ~/.config/guix/current-10-link/bin/guix package -i gimp

At this point your profile contains gimp coming from an old Guix along with packages installed from the latest Guix. Past and present coexist in the same profile. The historical dimension of the profile no longer matches exactly the history of Guix itself.

Composing Guix revisions

Some people have expressed interest in being able to compose packages coming from different revisions of Guix—say to create a profile containing old versions of Python and NumPy, but also the latest and greatest GCC. It may seem far-fetched but it has very real applications: there are large collections of scientific packages and in particular bioinformatics packages that don’t move as fast as our beloved flagship free software packages, and users may require ancient versions of some of the tools.

We could keep old versions of many packages but maintainability costs would grow exponentially. Instead, Guix users can take advantage of the version control history of Guix itself to mix and match packages coming from different revisions of Guix. As shown above, it’s already possible to achieve this by running the guix program off the generation of interest. It does the job, but can we do better?

In the process of enhancing guix pull we developed a high-level API that allows an instance of Guix to “talk” to a different instance of Guix—an “inferior”. It’s what allows guix pull to display the list of packages that were added or upgraded between two revisions. The next logical step will be to provide seamless integration of packages coming from an inferior. That way, users would be able to refer to “past” package graphs right from a profile manifest or from the command-line. Future work!

On coupling

The time traveler in you might be wondering: Why are package definitions coupled with the package manager, doesn’t it make it harder to compose packages coming from different revisions? Good point!

Tight coupling certainly complicates this kind of composition: we can’t just have any revision of Guix load package definitions from any other revision; this could fail altogether, or it could provide a different build result. Another potential issue is that guix pulling an older revision not only gives you an older set of packages, it also gives you older tools, bug-for-bug.

The reason for this coupling is that a package definition like this one doesn’t exist in a vacuum. Its meaning is defined by the implementation of package objects, by gnu-build-system, by a number of lower-level abstractions that are all defined as extensions of the Scheme language in Guix itself, and ultimately by Guile, which implements the language Guix is written in. Each instance created by guix pull brings all these components. Because Guix is implemented as a set of programming language extensions and libraries, the fact that package definitions depend on all these parts becomes manifest. Instead of being frozen, the APIs and package definitions evolve together, which gives us developers a lot of freedom in the changes we can make.

Nix results from a different design choice. Nix-the-package-manager implements the Nix language, which acts as a “frozen” interface. Package definitions in Nixpkgs are written in that language, and a given version of Nix can possibly interpret both current and past package definitions without further ado. The Nix language does evolve though, so at one point an old Nix inevitably becomes unable to evaluate a new Nixpkgs, and vice versa.

These two approaches make different tradeoffs. Nix’ loose coupling simplifies the implementation and makes it easy to compose old and new package definitions, to some extent; Guix’ tight coupling makes such composition more difficult to implement, but it leaves developers more freedom and, we hope, may support “time travel” over longer periods of time. Time will tell!

It’s like driving a DeLorean

Inside the cabin of the DeLorean time machine in “Back to the Future.”

Picture of a DeLorean cabin by Oto Godfrey and Justin Morton, under CC-BY-SA 4.0.

That profile generations are kept around already gave users a time machine of sorts—you can always roll back to a previous state of your software environment. With the addition of roll-back support for guix pull, this adds another dimension to the time machine: you can roll-back to a previous state of Guix itself and from there create alternative futures or even mix bits from the past with bits from the present. We hope you’ll enjoy it!

24 July, 2018 12:30PM by Ludovic Courtès

July 22, 2018

parallel @ Savannah

GNU Parallel 20180722 ('Crimson Hexagon') released [alpha]

GNU Parallel 20180722 ('Crimson Hexagon') [alpha] has been released. It is available for download at: http://ftpmirror.gnu.org/parallel/

This release has significant changes and is considered alpha quality.

Quote of the month:

I've been using GNU Parallel very much and effectively lately.
Such an easy way to get huge speed-ups with my simple bash/Perl/Python
programs -- parallelize them!
-- Ken Youens-Clark @kycl4rk@twitter

New in this release:

  • The quoting engine has been changed. Instead of using \-quoting GNU Parallel now uses '-quoting in bash/ash/dash/ksh. This should improve compatibility with different locales. This is a big change causing this release to be alpha quality.
  • The CPU calculation has changed. By default GNU Parallel uses the number of CPU threads as the number of CPUs. This can be changed to the number of CPU cores or number of CPU sockets with --use-cores-instead-of-threads or --use-sockets-instead-of-threads.
  • The detected number of sockets, cores, and threads can be shown with --number-of-sockets, --number-of-cores, and --number-of-threads (see the short sketch after this list).
  • env_parallel now supports mksh using env_parallel.mksh.
  • Bug fixes and man page updates.
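As a small sketch of how the new CPU options might be used (the gzip job is only an example, not from the release notes):

# show what GNU Parallel detects on this machine
parallel --number-of-sockets
parallel --number-of-cores
parallel --number-of-threads

# run one job per physical core instead of one per hardware thread
parallel --use-cores-instead-of-threads gzip -9 {} ::: *.log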

GNU Parallel - For people who live life in the parallel lane.

About GNU Parallel

GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with: (wget -O - pi.dk/3 || curl pi.dk/3/) | bash

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your commandline will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2018): GNU Parallel 2018, April 2018, https://doi.org/10.5281/zenodo.1146014.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/ Identi.ca/Google+/Twitter/Facebook/Linkedin/mailing lists
  • Get the merchandise https://gnuparallel.threadless.com/designs/gnu-parallel
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference

If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)

If GNU Parallel saves you money:

About GNU SQL

GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.
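As a minimal sketch of what that can look like (the host, user, and database names below are made up):

# run a single query through the database's own command line client
sql mysql://user:password@db.example.com/inventory "SELECT COUNT(*) FROM parts;"

# with no command given, drop into that database's interactive shell
sql postgresql://user@db.example.com/inventory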

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.

About GNU Niceload

GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.
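In its simplest form, niceload just wraps the program to be slowed down; the limits and the hard/soft behaviour are configurable through its options (see man niceload). A minimal sketch, with the backup job being only an example:

niceload tar czf /backup/home.tar.gz /home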

22 July, 2018 07:41AM by Ole Tange

July 17, 2018

Riccardo Mottola

Graphos GNUstep and Tablet interface

I have acquired a Thinkpad X41 Tablet and worked quite a bit on it making it usable and then installing Linux and of course GNUstep on it. The original battery was dead and the compatible replacement I got is bigger, it works very well, but makes the device unbalanced.

Anyway, my interest was in how usable GNUstep applications would be, and especially Graphos, its (and my) drawing application.

Using the interface in Tablet mode is different: the stylus is very precise and allows clicking by pointing the tip, and a second button is also available. However, contrary to mouse use, the keyboard is folded away, so no keyboard modifiers are possible. Furthermore, GNUstep has no on-screen keyboard, so typing is not possible.

The classic OpenStep-style menus work exceedingly well with a touch interface: the menus are easy to click and, once torn out, they remain like palettes, making toolbars unnecessary.
This is a good start!

However, Graphos was not easy to use: aside from entering text, several components required typing (e.g. inserting the Line Width), which is impossible without a keyboard.
I worked on the interface so that all these elements also have a clickable counterpart (e.g. Stepper Arrows). Duplicating certain items available in context menus in regular menus, which can be detached, also provided an enhancement.
Standard items like the color wheel already work very well.


Drawing on the screen is definitely very precise and convenient. Stay tuned for the upcoming release!

17 July, 2018 10:41AM by Riccardo (noreply@blogger.com)

July 15, 2018

Parabola GNU/Linux-libre

[From Arch] libutf8proc>=2.1.1-3 update requires manual intervention

The libutf8proc package prior to version 2.1.1-3 had an incorrect soname link. This has been fixed in 2.1.1-3, so the upgrade will need to overwrite the untracked soname link created by ldconfig. If you get an error

libutf8proc: /usr/lib/libutf8proc.so.2 exists in filesystem

when updating, use

pacman -Suy --overwrite usr/lib/libutf8proc.so.2

to perform the upgrade.

15 July, 2018 03:57PM by Omar Vega Ramos

July 13, 2018

libredwg @ Savannah

Revealing unknown DWG classes

I implemented three major buzzwords today in some trivial ways.

  • massive parallel processing
  • asynchronous processing
  • machine-learning: a self-improving program

The problem is mostly trivial, and the solutions also. I need to
reverse-engineer a binary closed file-format, but got some hints from
a related ASCII file-format, DWG vs DXF.

I have several pairs of files, and a helper library to convert the
ASCII data to the binary representation in the DWG. There are various
variants for most data values, and several fields are unknown, they
are not represented in the DXF, only in the DWG. So I wrote an example
program called unknown, which walks over all unknown binary blobs and
tries to find the matching known values. If a bitmap is found only
once, we have a unique match; if it's found multiple times, there are
several possibilities for how the fields could be laid out; and if it is not
found, we have a problem: the binary representation is wrong.

When preparing the program called unknown, I decided to parse the log
files in perl and store the unknown blobs as C `.inc` files, to be
compiled into unknown as array of structs.

Several DWG files are too large and either produce log files so large
that they fill my hard disc, or cannot be parsed properly, leading to overly
huge mallocs and invalid loops, so these processes need to be killed after a
timeout of 10s.

So instead of

for d in test/test-data/*.dwg; do
  log=`basename "$d" .dwg`.log
  echo $d
  programs/dwgread -v5 "$d" 2>$log
done

I improved it to

for d in test/test-data/*.dwg; do
  log=`basename "$d" .dwg`.log
  echo $d
  programs/dwgread -v5 "$d" 2>$log &
  (sleep 10s; kill %1 2>/dev/null) &
done

The dwgread program is put into the background, with %1 referring to that background job,
and `sleep 10s; kill %1` implements a simple timeout via bash, not via
perl. Both processes are in the background and the second optionally
kills the first. So with some 100 files in test-data, this is
**massive parallelization**, as the dwgread processes immediately
return, and their output appears some time later, when each process is
finished or killed. So it's also **asynchronous**, as I cannot see the
result of each individual process anymore, which returned SUCCESS,
which returned ERROR and which was killed. You need to look at the
logfiles, similar to debugging hard real-world problems, like
real-time controllers. This processing is also massively faster, but
mostly I did it to implement a simple timeout mechanism in bash.

The next problem with the background processing is that I don't know
when all the background processes stopped, so I had to add one more
line:

while pgrep dwgread; do sleep 1; done

Otherwise I would continue processing the logfiles, creating my C
structs from these, but some logfiles would still grow and I would
miss several unknown classes.

The processed data is ~10 GB, so massive parallel
processing saves some time. The log files are only temporarily needed
to extract the binary blobs and can be removed later.

Eventually I turned off the massive parallelization using another
timeout solution:

for d in test/test-data/*.dwg; do
  log=`basename "$d" .dwg`.log
  echo $d
  timeout -k 1 10 programs/dwgread -v5 "$d" 2>$log
done

I could also use GNU [parallel](https://www.gnu.org/software/parallel/parallel_tutorial.html) with timeout instead, to re-enable the
parallelization and collect the async results properly.

parallel timeout 10 programs/dwgread -v5 {} \2\>{/.}.log ::: test/test-data/*.dwg
cd test/test-data
parallel timeout 10 ../../programs/dwgread -v5 {} \2\>../../{/.}_{//}.log ::: \*/\*.dwg

So now the other interesting problem, the machine-learning part.
Let me show you first some real data I'm creating.

Parsing the logfiles and DXF data via some trivial perl scripts creates an array of such structs:

{ "ACDBASSOCOSNAPPOINTREFACTIONPARAM", "test/test-data/example_2000.dxf", 0x393, /* 473 */
"\252\100\152\001\000\000\000\000\000\000\074\057\340\014\014\200\345\020\024\126\310\100", 176, NULL },

/* ACDBASSOCOSNAPPOINTREFACTIONPARAM 393 in test/test-data/example_2000.dxf */
static const struct _unknown_field unknown_dxf_473[] = {
{ 5, "393", NULL, 0, BITS_UNKNOWN, {-1,-1,-1,-1,-1} },
{ 330, "392", NULL, 0, BITS_UNKNOWN, {-1,-1,-1,-1,-1} },
{ 100, "AcDbAssocActionParam", NULL, 0, BITS_UNKNOWN, {-1,-1,-1,-1,-1} },
{ 90, "0", NULL, 0, BITS_UNKNOWN, {-1,-1,-1,-1,-1} },
{ 1, "", NULL, 0, BITS_UNKNOWN, {-1,-1,-1,-1,-1} },
{ 100, "AcDbAssocCompoundActionParam", NULL, 0, BITS_UNKNOWN, {-1,-1,-1,-1,-1} },
{ 90, "0", NULL, 0, BITS_UNKNOWN, {-1,-1,-1,-1,-1} },
{ 90, "0", NULL, 0, BITS_UNKNOWN, {-1,-1,-1,-1,-1} },
{ 90, "1", NULL, 0, BITS_UNKNOWN, {-1,-1,-1,-1,-1} },
{ 360, "394", NULL, 0, BITS_UNKNOWN, {-1,-1,-1,-1,-1} },
{ 90, "0", NULL, 0, BITS_UNKNOWN, {-1,-1,-1,-1,-1} },
{ 90, "0", NULL, 0, BITS_UNKNOWN, {-1,-1,-1,-1,-1} },
{ 330, "0", NULL, 0, BITS_UNKNOWN, {-1,-1,-1,-1,-1} },
{ 100, "ACDBASSOCOSNAPPOINTREFACTIONPARAM", NULL, 0, BITS_UNKNOWN, {-1,-1,-1,-1,-1} },
{ 90, "0", NULL, 0, BITS_UNKNOWN, {-1,-1,-1,-1,-1} },
{ 90, "1", NULL, 0, BITS_UNKNOWN, {-1,-1,-1,-1,-1} },
{ 40, "-1.0", NULL, 0, BITS_UNKNOWN, {-1,-1,-1,-1,-1} },
{ 0, NULL, NULL, 0, BITS_UNKNOWN, {-1,-1,-1,-1,-1}}
};

I prefer the data to be compiled in, so it's not on the heap but in
the .DATA segment as const. And from this data, the program creates
this log:

ACDBASSOCOSNAPPOINTREFACTIONPARAM: 0x393 (176) test/test-data/example_2000.dxf
=bits: 010101010000001001010110100000000000000000000000000000000000000000000
0000000000000111100111101000000011100110000001100000000000110100111000010000
0101000011010100001001100000010
handle 0.2.393 (0)
search 5:"393" (24 bits of type HANDLE [0]) in 176 bits
=search (0): 010000001100000011001001
handle 4.2.392 (0)
search 330:"392" (24 bits of type HANDLE [1]) in 176 bits
=search (0): 010000101100000001001001
handle 8.0.0 (393)
search 330:"392" (8 bits of type HANDLE [1]) in 176 bits
=search (0): 00000001
330: 392 [HANDLE] found 3 at offsets 75-82, 94, 120 /176
100: AcDbAssocActionParam
search 90:"0" (2 bits of type BL [3]) in 176 bits
=search (0): 00
90: 0 [BL] found >5 at offsets 8-9, 9, 10, 11, 12, ... /176
search 1:"" (2 bits of type TV [4]) in 176 bits
=search (0): 00
1: [TV] found >5 at offsets 8-9, 9, 10, 11, 12, ... /176
100: AcDbAssocCompoundActionParam
search 90:"0" (2 bits of type BL [6]) in 176 bits
=search (0): 00
90: 0 [BL] found >5 at offsets 8-9, 9, 10, 11, 12, ... /176
search 90:"0" (2 bits of type BL [7]) in 176 bits
=search (0): 00
90: 0 [BL] found >5 at offsets 8-9, 9, 10, 11, 12, ... /176
search 90:"1" (10 bits of type BL [8]) in 176 bits
=search (0): 0000001000
field 90 already found at 176
search 90:"1" (10 bits of type BL [8]) in 7 bits
=search (169): 0000001000
search 90:"1" (10 bits of type BS [8]) in 7 bits
=search (169): 0000001000
handle 3.2.394 (0)
search 360:"394" (24 bits of type HANDLE [9]) in 176 bits
=search (0): 010011001100000000101001
handle 2.2.394 (393)
search 360:"394" (24 bits of type HANDLE [9]) in 176 bits
=search (0): 010001001100000000101001
handle 3.2.394 (393)
search 360:"394" (24 bits of type HANDLE [9]) in 176 bits
=search (0): 010011001100000000101001
handle 4.2.394 (393)
search 360:"394" (24 bits of type HANDLE [9]) in 176 bits
=search (0): 010000101100000000101001
handle 5.2.394 (393)
search 360:"394" (24 bits of type HANDLE [9]) in 176 bits
=search (0): 010010101100000000101001
handle 6.0.0 (393)
search 360:"394" (8 bits of type HANDLE [9]) in 176 bits
=search (0): 00000110
360: 394 [HANDLE] found 2 at offsets 109-116, 122-129 /176
search 90:"0" (2 bits of type BL [10]) in 176 bits
=search (0): 00
90: 0 [BL] found >5 at offsets 8-9, 9, 10, 11, 12, ... /176
search 90:"0" (2 bits of type BL [11]) in 176 bits
=search (0): 00
90: 0 [BL] found >5 at offsets 8-9, 9, 10, 11, 12, ... /176
handle 4.0.0 (0)
search 330:"0" (8 bits of type HANDLE [12]) in 176 bits
=search (0): 00000010
330: 0 [HANDLE] found 2 at offsets 8-15, 168-175 /176
100: ACDBASSOCOSNAPPOINTREFACTIONPARAM
search 90:"0" (2 bits of type BL [14]) in 176 bits
=search (0): 00
90: 0 [BL] found >5 at offsets 8-9, 9, 10, 11, 12, ... /176
search 90:"1" (10 bits of type BL [15]) in 176 bits
=search (0): 0000001000
field 90 already found at 168
search 90:"1" (10 bits of type BL [15]) in 7 bits
=search (169): 0000001000
search 90:"1" (10 bits of type BS [15]) in 7 bits
=search (169): 0000001000
search 40:"-1.0" (66 bits of type BD [16]) in 176 bits
=search (0): 000000000000000000000000000000000000000000000000001111001111010000
40: -1.0 [BD] found 1 at offset 32-97 /176
66/176=37.5%
possible: [ 8....8187 7......xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxxxxxxxxxxxxxxxxxxxxxx..81 77 7....8118.........9211 77 7..7 7...7 7..7
7..7 77 xxxxxxxx]

It converts each blob into bits of 0 or 1, converts each DXF field to
some binary types, also logged as bits, then some handwritten **membits()**
search similar to `memmem()` or `strstr()` searches the bitmask in the blob
and records all found instances. In the printed **possible [ ]** array the
xxxx represents unique finds, `1-9` and `.` multiple finds, and a space
is a hole for an unknown field not represented in the DXF, i.e. a DWG-only
field. These can be guessed from the documentation and some thinking.

Most fields themselves are specialized run-length bit-encoded, so a 0.0
needs only 2 bits 10, an empty string needs the same 2 bits 10, the
number 0 as BL (bitlong) needs 2 bits 10, and 90:"1", i.e. the number
1 as BL (bitlong) needs 10 bits 0010000000. So you really need unique
values in the sample data to get enough unique finds. In this bad
example, which so far really is the best example in my data, I get 37.5%
exact matches, with 6x 90:0, i.e. 6 times the number 0. You won't know
which binary 00 is the real number 0. Conversion from
strings to floats is also messy and imprecise. While representing a
double as 64 binary bits is always properly defined on Intel chips,
the reverse is not true: representing the string as a double can lead
to various representations, and I search various variants, cutting off
the mantissa precision, to find our matching binary double.

What I'll be doing then is to shuffle the values a bit in the DXF to
represent uniquely identifiable values, like 1,2,3,4,5,6, convert this
DXF back to a DWG, and analyse this pair again. This is e.g. what I did to
uniquely identify the position of the header variables in earlier DWG
versions.

Bad percentages are not processed anymore and removed from the
program. So I create a constant feedback loop, with the program
creating these logs, and a list of classes to skip and permute to
create better matches. I could do this within my program for a
complete self-learning process, but I rather create logfiles,
re-analyse them, adjust the data structures shown above, re-compile
the program and run it again. Previously I did such a re-compilation
step via shared modules, which I can compile from within the program,
dlload and dlunload it, but this is silly. Simple C structs on disc
are easier to follow than shared libs in memory. I also store the
intermediate steps in git, so I can check how the progress of the
self-improvement evolves, if at all. Several such objects were getting
worse and worse deviating to 0%, because there were no unique values
and finds anymore. Several representations are also still wrong, some
text values are really values from some external references, such as
the layer name.

So this was a short intro into massive parallel processing,
asynchronous processing, and machine-learning: a self-improving program.
The code in question is here:
https://github.com/LibreDWG/libredwg/tree/master/examples
All the `*.inc` and `*.skip` files are automatically created by make -C examples regen-unknown.

The initial plan was to create a more complex backtracking solver to
find the best matches for all possible variants, but step-wise
refinement in controllable loops and usage of several trivial tools is
so far much easier than real AI. AI really is trivial if you do it
properly.

Followed by https://savannah.gnu.org/forum/forum.php?forum_id=9203 for the actual AI part with picat.

13 July, 2018 10:11PM by Reini Urban

July 12, 2018

Riccardo Mottola

DataBasin + DataBasinKit 1.0 released

A new release (1.0) for DataBasin and its framework DataBasinKit is out!

This release provides lots of news, most of the enhancements coming from the framework and exposed by the GUI:
  • Update login endpoint to login.salesforce.com (back again!)
  • Implement retrieve (get fields from a list of IDs, natively)
  • Support nillable fields on create
  • save HTML tables and pseudo-XLS in HTML-typed formats
  • Fix cloning of connections in case of threading
  • Implement Typing of fields after describing query elements (DBSFDataTypes)

DataBasin is a tool to access and work with SalesForce.com. It allows you to perform queries remotely, export and import data, inspect single records, and describe objects. DataBasinKit is its underlying framework, which implements the APIs in Objective-C. It works on GNUstep (major Unix variants and MinGW on Windows) and natively on macOS.

12 July, 2018 04:44PM by Riccardo (noreply@blogger.com)

July 06, 2018

GNU Guix

GNU Guix and GuixSD 0.15.0 released

We are pleased to announce the new release of GNU Guix and GuixSD, version 0.15.0! This release brings us close to what we wanted to have for 1.0, so it’s probably one of the last zero-dot-something releases.

The release comes with GuixSD ISO-9660 installation images, a virtual machine image of GuixSD, and with tarballs to install the package manager on top of your GNU/Linux distro, either from source or from binaries.

It’s been 7 months (much too long!) since the previous release, during which 100 people contributed code and packages. The highlights include:

  • The unloved guix pull command, which allows users to upgrade Guix and its package collection, has been overhauled and we hope you will like it. We’ll discuss these enhancements in another post soon but suffice to say that the new guix pull now supports rollbacks (just like guix package) and that the new --list-generations option allows you to visualize past upgrades. It’s also faster, not as fast as we’d like though, so we plan to optimize it further in the near future.
  • guix pack can now produce relocatable binaries. With -f squashfs it can now produce images stored as SquashFS file systems. These images can then be executed by Singularity, a “container engine” deployed on some high-performance computing clusters (see the short sketch after this list).
  • GuixSD now runs on ARMv7 and AArch64 boxes! We do not provide an installation image though because the details depend on the board you’re targeting, so you’ll have to build the image yourself following the instructions. On ARMv7 it typically uses U-Boot, while AArch64 boxes such as the OverDrive rely on the EFI-enabled GRUB. Bootloader definitions are available for many boards—Novena, A20 OLinuXino, BeagleBone, and even NES.
  • We further improved error-reporting and hints provided by guix system. For instance, it will now suggest upfront kernel modules that should be added to the initrd—previously, you could install a system that would fail to boot simply because the initrd lacked drivers for your hard disk.
  • OS configuration has been simplified with the introduction of things like the initrd-modules field and the file-system-label construct.
  • There’s a new guix system docker-image command that does exactly what you’d expect. :-)
  • There’s a dozen new GuixSD services: the Enlightenment and MATE desktops, Apache httpd, support for transparent emulation with QEMU through the qemu-binfmt service, OpenNTPD, and more.
  • There were 1,200 new packages, so we’re now close to 8,000 packages.
  • Many bug fixes!
  • The manual is now partially translated into French and you can help translate it into your native language by joining the Translation Project.
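As a rough sketch of what the guix pack and docker-image items above look like in practice (the package names and the my-os-config.scm file are placeholders, not taken from the announcement):

$ guix pack --relocatable guile               # relocatable tarball with guile and its dependencies
$ guix pack -f squashfs bash guile            # SquashFS image, runnable with Singularity
$ guix system docker-image my-os-config.scm   # Docker image built from a GuixSD declaration

In each case, guix prints the path of the resulting image in the store.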

See the release announcement for details.

About GNU Guix

GNU Guix is a transactional package manager for the GNU system. The Guix System Distribution or GuixSD is an advanced distribution of the GNU system that relies on GNU Guix and respects the user's freedom.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. Guix uses low-level mechanisms from the Nix package manager, except that packages are defined as native Guile modules, using extensions to the Scheme language. GuixSD offers a declarative approach to operating system configuration management, and is highly customizable and hackable.

GuixSD can be used on i686, x86_64, ARMv7, and AArch64 machines. It is also possible to use Guix on top of an already installed GNU/Linux system, including on mips64el and aarch64.

06 July, 2018 12:00PM by Ludovic Courtès

July 05, 2018

Luca Saiu

The European Parliament has rejected the copyright directive, for now

The EU copyright directive in its present form has deep and wide implications reaching far beyond copyright, and erodes core human rights and values. For more information I recommend Julia Reda’s analysis, which is accessible to the casual reader but also contains pointers to the text of the law. Today, on July 5, following a few weeks of very intense debate, campaigning and lobbying, including deliberate attempts to mislead politicians, the European Parliament voted in plenary session to reject the directive in its current form endorsed by the JURI committee, and instead reopen the debate. It ... [Read more]

05 July, 2018 10:47PM by Luca Saiu (positron@gnu.org)

libredwg @ Savannah

libredwg-0.5 released [alpha]

See https://www.gnu.org/software/libredwg/ and http://git.savannah.gnu.org/cgit/libredwg.git/tree/NEWS?h=0.5

Here are the compressed sources:
http://ftp.gnu.org/gnu/libredwg/libredwg-0.5.tar.gz (9.2MB)
http://ftp.gnu.org/gnu/libredwg/libredwg-0.5.tar.xz (3.4MB)

Here are the GPG detached signatures[*]:
http://ftp.gnu.org/gnu/libredwg/libredwg-0.5.tar.gz.sig
http://ftp.gnu.org/gnu/libredwg/libredwg-0.5.tar.xz.sig

Use a mirror for higher download bandwidth:
https://www.gnu.org/order/ftp.html

Here are the SHA256 checksums:

920c1f13378c849d41338173764dfac06b2f2df1bea54e5069501af4fab14dd1 libredwg-0.5.tar.gz
fd7b6d029ec1c974afcb72c0849785db0451d4ef148e03ca4a6c4a4221b479c0 libredwg-0.5.tar.xz

[*] Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact. First, be sure to download both the .sig file
and the corresponding tarball. Then, run a command like this:

gpg --verify libredwg-0.5.tar.gz.sig

If that command fails because you don't have the required public key,
then run this command to import it:

gpg --keyserver keys.gnupg.net --recv-keys B4F63339E65D6414

and rerun the 'gpg --verify' command.

05 July, 2018 05:35AM by Reini Urban

July 02, 2018

GNU Guile

GNU Guile 2.2.4 released

We are delighted to announce GNU Guile 2.2.4, the fourth bug-fix release in the new 2.2 stable release series. It fixes many bugs that had accumulated over the last few months, in particular bugs that could lead to crashes of multi-threaded Scheme programs. This release also brings documentation improvements, the addition of SRFI-71, and better GDB support.

See the release announcement for full details and a download link. Enjoy!

02 July, 2018 09:00AM by Ludovic Courtès (guile-devel@gnu.org)

coreutils @ Savannah

coreutils-8.30 released [stable]

02 July, 2018 02:02AM by Pádraig Brady

June 28, 2018

Christopher Allan Webber

We Miss You, Charlie Brown

Morgan was knocking on the bathroom door. She wanted to know why I was crying when I was meant to be showering.

I was crying because my brain had played a cruel trick on me last night. It conjured a dream in which all the characters from the comic strip "Peanuts" represented myself and friends. Charlie Brown, the familiar but awkward everyman of the series, was absent from every scene, and in that way, heavily present.

I knew that Charlie Brown was absent because Charlie Brown had committed suicide.

I knew that Charlie Brown was my friend Matt Despears.

The familiar Peanuts imagery passed by: Linus (who was me), sat at the wall, but with nobody to talk to. Lucy held out the football, but nobody was there to kick it. Snoopy sat at his doghouse with an empty bowl, and nobody was there to greet him. And so on.

Then the characters, in silence, moved on with their lives, and the title scrolled by the screen... "We Miss You, Charlie Brown".

And so, that morning, I found myself in the shower, crying.

Why Peanuts? I don't know. I wouldn't describe myself as an overly energetic fan of the series. I also don't give too much credit for dream imagery as being necessarily important, since I think much tends to be the byproduct of the cleanup processes of the brain. But it hit home hard, probably because the imagery is so very familiar and repetitive, and so the absence of a key component amongst that familiarity stands out strongly. And maybe Charlie Brown was just a good fit for Matt: awkward but loveable.

It has now been over six years since Matt has passed, and I find myself thinking of him often, usually when I have the urge to check in with him and remember that he isn't there. Before this recent move I was going through old drives and CDs and cleaning out and wiping out old junk, and found an archive of old chat logs from when I was a teenager. I found myself reliving old conversations, and most of it was utter trash... I felt embarrassed with my past self and nearly deleted the entire archive. But then I went through and read those chat logs with Matt. I can't say they were of any higher quality... my conversations with Matt seemed even more absurd on average than the rest. But I kept the chat logs. I didn't want to lose that history.

I felt compelled to write this up, and I don't entirely know why. I also nearly didn't write this up, because I think maybe this kind of writing can be dangerous. That may sound absurd, but I can speak from my experience of someone who frequently experiences suicidal ideation that the phrase "would anyone really miss me when I'm gone" comes to mind, and maybe this reinforces that.

I do think that society tends to romanticize depression and suicide in some strange ways, particularly this belief that suffering makes art greater. A friend of mine pointed this out to me for the first time in reference to John Toole's "A Confederacy of Dunces", often advertised and sold to others by, "and the author committed suicide before it was ever published!" But it would have been better to have more books by John Toole instead.

So as for "will anyone miss me if I'm gone", I want to answer that without romanticizing it. The answer is just "Yes, but it would be better if you were here."

A group of friends and I got together to play a board game recently. We sat around the table and had a good time. I drew a picture of "Batpope", one of Matt's favorite old jokes, and we left it on an empty spot at the table for Matt. But we would have rathered that Matt was there. His absence was felt. And that's usually how it is... like in the dream, we pass through the scenes of our lives, and we carry on, but there's a missing space, and one can feel the shape. There's no romance to that... just absence and memories.

We miss you, Matt Despears.

28 June, 2018 04:50PM by Christopher Lemmer Webber

June 27, 2018

gdbm @ Savannah

Version 1.16

Version 1.16 has been released.

This version improves free space management and fixes a long-standing bug discovered recently due to the introduction of strict database consistency checks.

27 June, 2018 07:05PM by Sergey Poznyakoff

June 21, 2018

foliot @ Savannah

GNU Foliot version 0.9.8

GNU Foliot version 0.9.8 is released (June 2018)

This is a maintenance release, which brings GNU Foliot up-to-date with Grip 0.2.0, upon which it
depends. In addition, the default installation locations changed, and there is a new configure option.

For a list of changes since the previous version, visit the NEWS file. For a complete description, consult the git summary and git log.

21 June, 2018 03:37AM by David Pirotte

June 20, 2018

parallel @ Savannah

GNU Parallel 20180622 ('Kim Trump') released

GNU Parallel 20180622 ('Kim Trump') has been released. It is available for download at: http://ftpmirror.gnu.org/parallel/

Quote of the month:

GNU Parallel 實在太方便啦!! ("GNU Parallel is really so convenient!!")
Yucheng Chuang @yorkxin@twitter

New in this release:

  • Deal better with multibyte chars by forcing LC_ALL=C.
  • GNU Parallel was shown on Danish national news for 1.7 seconds: dr.dk
  • Bug fixes and man page updates.

GNU Parallel - For people who live life in the parallel lane.

About GNU Parallel

GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.
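
As a quick, hypothetical illustration (the file urls.txt, the job count, and the wget command are placeholders, not part of the announcement):

cat urls.txt | parallel -j 4 wget -q {}

Here parallel reads one URL per line from stdin, substitutes it for {}, and keeps four downloads running at a time.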

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with: (wget -O - pi.dk/3 || curl pi.dk/3/) | bash

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your commandline will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2018): GNU Parallel 2018, April 2018, https://doi.org/10.5281/zenodo.1146014.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/ Identi.ca/Google+/Twitter/Facebook/Linkedin/mailing lists
  • Get the merchandise https://gnuparallel.threadless.com/designs/gnu-parallel
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference

If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)

If GNU Parallel saves you money:

About GNU SQL

GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.
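
As a hypothetical example (the DBURL, database, and query below are made up for illustration):

sql mysql://user:password@dbhost/mydb "SELECT COUNT(*) FROM visits;"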

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.

About GNU Niceload

GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.
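
A minimal, hypothetical invocation (the workload is a placeholder; default limits apply when no options are given):

niceload gzip big-logfile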

20 June, 2018 10:00PM by Ole Tange

June 19, 2018

Riccardo Mottola

GNUMail + Pantomime 1.3.0

A new release for GNUmail (Mail User Agent for GNUstep and MacOS) and Pantomime (portable MIME Framework): 1.3.0!


Pantomime APIs were updated to have safer types: most counts and sizes were transitioned to the more Cocoa-like NSUInteger/NSInteger or size_t/ssize_t where appropriate.
This required a major release, 1.3.0, for both Pantomime and GNUMail. In several functions, returning -1 was replaced by NSNotFound.

Note: When you run the new GNUMail, it will update your message cache to the new format; message sizes are now encoded as unsigned instead of signed inside it. In case of problems (or if you revert to the old version), clean the cache.

Countless enhancements and bug fixes in both Pantomime and GNUMail should improve usability.
Previously, certain messages containing special characters would not load, and the personal part of addresses was sometimes decoded incorrectly.

Pantomime:

  • Correct signature detection as per RFC (caused issues when removing it during replies)
  • improved address and quoted parsing
  • generally improved header parsing
  • Encoding fixes
  • Serious iconv fix for a bug which could cause memory corruption due to realloc
  • Fixes for Local folders (should help fix #53063, #51852 and generally bugs with POP and Local accounts)
  • generally improved init methods to check for self, which may help avoid memory issues and ease debugging in the future
  • various code cleanups in message loading for better readability
  • more logging code in the debug build, which should help debugging

GNUMail:

  • Possibility to create filters for To and CC directly in the message context menu
  • Read/Unread and Flag/Unflag actions directly in the message context menu
  • Size status for messages in bytes, kilobytes, or megabytes, depending on size
  • Spelling fixes
  • Improved Menu Validation
  • fix for #52817
  • generally improved init methods to check for self, which may help avoid memory issues and ease debugging in the future
  • GNUstep Only: Find Panel is now GORM based

19 June, 2018 01:10PM by Riccardo (noreply@blogger.com)

June 16, 2018

gdbm @ Savannah

Version 1.15

GDBM version 1.15 is available for download. Important changes in this release:

Extensive database consistency checking

GDBM tries to detect inconsistencies in input database files as early as possible. When an inconsistency is detected, a helpful diagnostic is returned and the database is marked as needing recovery. From this moment on, any GDBM function trying to access the database will immediately return an error code (instead of eventually segfaulting as previous versions did). In order to reconstruct the database and return it to a healthy state, the gdbm_recover function should be used.

Commands can be given to gdbmtool in the command line

The syntax is:

Multiple commands are separated by semicolon (take care to escape it), e.g.:
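
(The example invocations from the original announcement were not preserved here; as a rough sketch, assuming gdbmtool is given the database file followed by commands, it would look something like this, with mydb.gdbm as a hypothetical database:)

gdbmtool mydb.gdbm count
gdbmtool mydb.gdbm count\; avail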

Fixed data conversion bugs in storing structured keys or content

New member in the gdbm_recovery structure: duplicate_keys

Upon return from gdbm_recover, this member holds the number of keys that were not recovered, because the same key had already been stored in the database. The actual number of stored keys is thus:
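
(The expression itself did not survive aggregation; judging from the members of the gdbm_recovery structure, it is presumably:)

recovered_keys - duplicate_keys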

New error codes

The following new error codes are introduced:

  • GDBM_BAD_BUCKET (Malformed bucket header)
  • GDBM_BAD_HEADER (Malformed database file header)
  • GDBM_BAD_AVAIL (Malformed avail_block)
  • GDBM_BAD_HASH_TABLE (Malformed hash table)
  • GDBM_BAD_DIR_ENTRY (Invalid directory entry)

Removed gdbm-1.8.3 compatibility layer

16 June, 2018 04:52PM by Sergey Poznyakoff

June 15, 2018

Riccardo Mottola

OresmeKit initial release: plotting for GNUstep and Cocoa

Finally a public release of OresmeKit.
Started many years ago, the moment has finally come for a first public release, since I have put together even a first draft of the documentation. Stay tuned for improvements and new graph types.

OresmeKit is useful for plotting and graphing data natively both on Cocoa/MacOS and on GNUstep.

OresmeKit is a framework which provides NSView subclasses that can display data. It is useful to easily embed charts and graphs in your applications, e.g. monitoring apps, dashboards and such.
OresmeKit supports both GNUstep and Cocoa/MacOS.

Initial API documentation is also available, as well as two examples in the SVN repository.

15 June, 2018 02:41PM by Riccardo (noreply@blogger.com)

June 14, 2018

gsl @ Savannah

GNU Scientific Library 2.5 released

Version 2.5 of the GNU Scientific Library (GSL) is now available. GSL provides a large collection of routines for numerical computing in C.

This release introduces some new features and fixes several bugs. The full NEWS file entry is appended below. The file details for this release are:

ftp://ftp.gnu.org/gnu/gsl/gsl-2.5.tar.gz
ftp://ftp.gnu.org/gnu/gsl/gsl-2.5.tar.gz.sig

The GSL project homepage is http://www.gnu.org/software/gsl/

GSL is free software distributed under the GNU General Public License.

Thanks to everyone who reported bugs and contributed improvements.

Patrick Alken

14 June, 2018 07:02PM by Patrick Alken

June 11, 2018

libredwg @ Savannah

Major speedup for big DWG's

Thanks to David Bender and James Michael DuPont for convincing me that we need a hash table for really big DWGs. I got a DWG example with 42MB, which needed 2m to process and then 3m to free the dwg struct. I also had to fix a couple of internal problems.

We couldn't use David Bender's hashmap, which he took from Android (Apache 2 licensed), and I didn't like it much either. So today I sat down and wrote a good int hashmap from scratch, with several performance adjustments, because we never get a key 0 and we won't need to delete keys.
So it's extremely small and simple, using cache-friendly open addressing, and I got it right at the second attempt.
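
(This is not the actual LibreDWG code, just a minimal sketch in C of the approach described: open addressing with linear probing, key 0 reserved as the empty marker, no deletion, and no resizing or allocation checks.)

#include <stdint.h>
#include <stdlib.h>

typedef struct { uint64_t key; void *value; } im_slot;
typedef struct { im_slot *slots; uint64_t size; } intmap;  /* size must be a power of two */

static intmap *
intmap_new (uint64_t size)
{
  intmap *m = malloc (sizeof *m);
  m->size = size;                               /* power of two, never filled completely */
  m->slots = calloc (size, sizeof (im_slot));   /* key 0 marks an empty slot */
  return m;
}

static void
intmap_put (intmap *m, uint64_t key, void *value)
{
  uint64_t i = (key * 0x9E3779B97F4A7C15ULL) & (m->size - 1);  /* cheap multiplicative mix */
  while (m->slots[i].key && m->slots[i].key != key)
    i = (i + 1) & (m->size - 1);                /* linear probing keeps lookups cache-friendly */
  m->slots[i].key = key;
  m->slots[i].value = value;
}

static void *
intmap_get (const intmap *m, uint64_t key)
{
  uint64_t i = (key * 0x9E3779B97F4A7C15ULL) & (m->size - 1);
  while (m->slots[i].key)
    {
      if (m->slots[i].key == key)
        return m->slots[i].value;
      i = (i + 1) & (m->size - 1);
    }
  return NULL;                                  /* hit an empty slot: key not present */
}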

Performance with this hash table now got down to 7 seconds.
Then I also removed the unneeded dwg_free calls from some cmdline apps, because the kernel does it much better than libc malloc/free. 3 minutes for free() is longer than the slowest garbage collector I've ever seen.
So now processing this 42MB dwg needs 7s.

While I was there I also had to adjust several hard coded limits used for testing our small test files. With realistic big DWG's they failed all over. There are still some minor problems, but the majority of the DWG's can be read.
And I now pass through all new error codes, with a bitmask of all the uncritical and critical errors that occurred. On any critical error the library aborts; on some errors it just aborts for this particular object, and some uncritical errors are just skipped over. See dwg.h or the docs for the error levels.

What's left is writing proper DXF's, which is apparently not that easy. With full precision mode as in newer acad's, with subclass markers and all kinds of internal handles, it's very strict, and I wasn't yet able to import any such DXF into acad. It should be possible though to import these into any other app.
So now I'm thinking of adding a third level of DXF: minimal, low and full (default). minimal are entities only, and are documented by acad. low would be new, the level of DXF other applications produce, such as libdxfrw or pythoncad. This is a basic DXF. Full is what teigha and acad produce.
In the end I want full, because I want to copy over parametric constraints from and to better modelers, full 3d data (3dsolid, surface) and full rendering and BIM data (Material, Sun, Lights, ...).

Reading DXF will have to wait for the next release, as I'm stuck with writing DXF's first. This needs to be elegant and maintainable, and not such a mess as with other libraries. I want to use the same code for reading and writing DXF, JSON (e.g. GeoJSON), and XML (e.g. OpenstreetMap, FreeCAD).

11 June, 2018 06:15PM by Reini Urban

unifont @ Savannah

Unifont 11.0.01 Released - Upgrade Recommended

Unifont 11.0.01 was released on 5 June 2018, coinciding with the formal release of Unicode 11.0.0 by The Unicode Consortium.

I wanted to check over this release before recommending that GNU/Linux distributions incorporate it. So far there only appears to be one new bug added: U+1C90 has an extra vertical line added to it, making the character double-width instead of single-width. This will be fixed in the next release. Unifont 10.0.x went through 7 updates in about half a year. I felt that was not stable enough for those trying to maintain GNU/Linux distributions, so I did not keep recommending that each update, with minor changes from one to the next, be propagated. I plan to have more stability in Unifont 11.0.x.

Unifont provides fonts with a glyph for each printable code point in the Unicode Basic Multilingual Plane, as well as wide coverage of the Supplemental Multilingual Plane and some ConScript Unicode Registry glyphs.

The Unifont package includes TrueType fonts for all of these ranges, and BDF and PCF fonts for the Unicode Basic Multilingual Plane. There is also a specialized PSF font for using GNU APL in console mode on GNU/Linux systems.

The web page for this project is https://savannah.gnu.org/projects/unifont/.

You can download the latest version from GNU mirror sites, accessible at http://ftpmirror.gnu.org/unifont/unifont-11.0.01. If the mirror site does not contain this latest version, you can download files directly from GNU at https://ftp.gnu.org/gnu/unifont/unifont-11.0.01/ or ftp://ftp.gnu.org/gnu/unifont/unifont-11.0.01/.

Highlights of this version:

Support for the brand new Unicode Copyleft glyph (U+01F12F), which was added in Unicode 11.0.0. This glyph is present in the Unifont package's TrueType fonts.

The addition of the space character (U+0020) to all Unifont package TrueType fonts, for more straightforward rendering of Unifont Upper (i.e., Unicode Plane 1) scripts that contain spaces.

The addition of several new scripts that were introduced in Unicode 11.0.0:

  • U+1C90..U+1CBF Georgian Extended
  • U+010D00..U+010D3F Hanifi Rohingya
  • U+010F00..U+010F2F Old Sogdian
  • U+010F30..U+010F6F Sogdian
  • U+011800..U+01184F Dogra
  • U+011D60..U+011DAF Gunjala Gondi
  • U+011EE0..U+011EFF Makasar
  • U+016E40..U+016E9F Medefaidrin
  • U+01D2E0..U+01D2FF Mayan Numerals
  • U+01EC70..U+01ECBF Indic Siyaq Numbers
  • U+01FA00..U+01FA6F Chess Symbols

Paul Hardy

GNU Unifont Maintainer

11 June, 2018 12:29PM by Paul Hardy

June 06, 2018

freedink @ Savannah

New FreeDink DFArc frontend 3.14 release

Here's a new release of DFArc, a frontend to run the GNU FreeDink game and manage its numerous add-on adventures or D-Mods :)
https://ftp.gnu.org/pub/gnu/freedink/dfarc-3.14.tar.gz

This release fixes CVE-2018-0496: Sylvain Beucler and Dan Walma discovered several directory traversal issues in DFArc (as well as in the RTsoft's Dink Smallwood HD / ProtonSDK version), allowing an attacker to overwrite arbitrary files on the user's system.

Also in this release:

- New Swedish and Friulian translations.

- Updated Catalan, Brazilian Portuguese and Spanish translations.

- Fix crash when clicking on 'Package' when there is no D-Mod present.

- Compilation fixes for OS X.

- Reproducible build process for Windows (as well as GNU/Linux depending on your distro) - see https://reproducible-builds.org/

A note about distros security support:

- The Debian Security team graciously issued a CVE ID within 72h, but declined both a security upload and a rationale for their choice; the fix was diverted to the next ~quarterly point release
- Fedora/RedHat security did not answer after 6 days; fortunately Fedora is flexible enough to allow package maintainers to upgrade DFArc in previous releases on their own
- Gentoo Security did not answer after 7 days
- FreeBSD ports and Mageia packagers were contacted but did not answer
- In Arch, package still stuck between orphaned and deleted state due to a 2017 bug

It seems that security support for packages without a large user base, and for games, is significantly delayed at best.

About GNU FreeDink:

Dink Smallwood is an adventure/role-playing game, similar to Zelda, made by RTsoft. Besides twisted humor, it includes the actual game editor, allowing players to create hundreds of new adventures called Dink Modules or D-Mods for short.

GNU FreeDink is a new and portable version of the game engine, which runs the original game as well as its D-Mods, with close
compatibility, under multiple platforms.

DFArc is an integrated frontend, .dmod installer and .dmod archiver for the Dink Smallwood game engine.

06 June, 2018 07:03PM by Sylvain Beucler

Sylvain Beucler

Best GitHub alternative: us

Why try to choose the host that sucks less, when hosting a single-file (S)CGI gets you decentralized git-like + tracker + wiki?

Fossil

https://www.fossil-scm.org/

We gotta take the power back.

06 June, 2018 06:16PM

GNUnet News

GNUnet 0.11.0pre66

We are pleased to announce the release of GNUnet 0.11.0pre66.

This is a pre-release to help developers and downstream packagers test the package before the final release, which follows four years of development.

06 June, 2018 07:20AM by Christian Grothoff

June 05, 2018

health @ Savannah

GNU Health patchset 3.2.10 released

Dear community

GNU Health 3.2.10 patchset has been released !

Priority: Medium

Table of Contents

  • About GNU Health Patchsets
  • Updating your system with the GNU Health control Center
  • Summary of this patchset
  • Installation notes
  • List of issues related to this patchset

About GNU Health Patchsets

We provide "patchsets" to stable releases. Patchsets allow applying bug fixes and updates on production systems. Always try to keep your production system up-to-date with the latest patches.

Patches and Patchsets maximize uptime for production systems, and keep your system updated, without the need to do a whole installation.

NOTE: Patchsets are applied on previously installed systems only. For new, fresh installations, download and install the whole tarball (i.e., gnuhealth-3.2.10.tar.gz)

Updating your system with the GNU Health control Center

Starting with the GNU Health 3.x series, you can do automatic updates of the GNU Health and Tryton kernel and modules using the GNU Health control center program.

Please refer to the administration manual section ( https://en.wikibooks.org/wiki/GNU_Health/Control_Center )

The GNU Health control center works on standard installations (those done following the installation manual on wikibooks). Don't use it if you use an alternative method or if your distribution does not follow the GNU Health packaging guidelines.

Summary of this patchset

Patch 3.2.10 fixes issues related to the CalDAV functionality, updating the CalDAV event after changing an appointment.

The gnuhealth-setup program has also been updated, including numpy for the latest pytools.

Refer to the List of issues related to this patchset for a comprehensive list of fixed bugs.

Installation Notes

You must apply previous patchsets before installing this patchset. If your patchset level is 3.2.9, then just follow the general instructions.
You can find the patchsets at GNU Health main download site at GNU.org (https://ftp.gnu.org/gnu/health/)

In most cases, GNU Health Control center (gnuhealth-control) takes care of applying the patches for you.

Follow the general instructions at

After applying the patches, make a full update of your GNU Health database as explained in the documentation.

  • Restart the GNU Health Tryton server

List of issues and tasks related to this patchset

  • bug #54055: Caldav event does not update after changing the appointment

For detailed information about each issue, you can visit https://savannah.gnu.org/bugs/?group=health
For detailed information about each task, you can visit https://savannah.gnu.org/task/?group=health

For detailed information you can read about Patches and Patchsets

05 June, 2018 11:53AM by Luis Falcon

June 04, 2018

gnuastro @ Savannah

Gnuastro 0.6 released

The sixth release of Gnuastro is now ready for download. Please see the announcement for more details.

04 June, 2018 04:25PM by Mohammad Akhlaghi

June 02, 2018

Sylvain Beucler

Reproducible Windows builds

I'm working again on making reproducible .exe-s. I thought I'd share my process:

Pros:

  • End users get a bit-for-bit reproducible .exe, known not to contain trojan and auditable from sources
  • Point releases can reuse the exact same build process and avoid introducing bugs

Steps:

  • Generate a source tarball (non reproducibly)
  • Debian Docker as a base, with fixed version + snapshot.debian.org sources.list
    • Dockerfile: install packaged dependencies and MXE(.cc) from a fixed Git revision
    • Dockerfile: compile MXE with SOURCE_DATE_EPOCH + fix-ups
  • Build my project in the container with SOURCE_DATE_EPOCH and check SHA256
  • Copy-on-release

Result:

git.savannah.gnu.org/gitweb/?p=freedink/dfarc.git;a=tree;f=autobuild/dfarc-w32-snapshot

Generate a source tarball (non reproducibly)

This is not reproducible due to using non-reproducible tools (gettext, automake tarballs, etc.) but it doesn't matter: only building from source needs to be reproducible, and the source is the tarball.

It would be better if the source tarball were perfectly reproducible, especially for large generated content (./configure, wxGlade-generated GUI source code...), but that can be a second step.

Debian Docker as a base

AFAIU the Debian Docker images are made by Debian developers but are in no way official images. That's a pity, and to be 100% safe I should start anew from debootstrap, but Docker provides a very efficient framework to build images, notably with caching of every build step, immediate fresh containers, and a public images repository.

This means with a single:

sudo -g docker make

you get my project reproducibly built from scratch with nothing to setup at all.

I avoid using a :latest tag, since it will change, and also backports, since they can be updated anytime. Here I'm using stretch:9.4 and no backports.

Using snapshot.debian.org in sources.list makes sure the installed packaged dependencies won't change at next build. For a dot release however (not for a rebuild), they should be updated in case there was a security fix that has an effect on built software (rare, but exists).

Last but not least, APT::Install-Recommends "false"; for better dependency control.

MXE

mxe.cc is a compilation environment to get MinGW (GCC for Windows) and selected dependencies rebuilt unattended with a single make. Doing this manually would be tedious because every other day, upstream breaks MinGW cross-compilation, and debugging an hour-long build process takes ages. Been there, done that.

MXE has a reproducible-boosted binutils with a patch for SOURCE_DATE_EPOCH that avoids getting date-based and/or random build timestamps in the PE (.exe/.dll) files. It's also compiled with --enable-deterministic-archives to avoid timestamp issues in .a files (but no automatic ordering).

I set SOURCE_DATE_EPOCH to the fixed Git commit date and I run MXE's build.
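
(One common way to derive such a value from a Git checkout, presumably close to what is done here, is:)

export SOURCE_DATE_EPOCH=$(git log -1 --format=%ct)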

This does not apply to GCC however, so I needed to e.g. patch a __DATE__ in wxWidgets.

In addition, libstdc++.a has a file ordering issue (said ordering surprisingly stays stable between a container and a host build, but varies when using a different computer with the same distros and tools versions). I hence re-archive libstdc++.a manually.

It's worth noting that PE files don't have issues with build paths (and varying BuildID-s - unlike ELF... T_T).

Again, for a dot release, it makes sense to update the MXE Git revision so as to catch security fixes, but at least I have the choice.

Build project

With this I can start a fresh Docker container and run the compilation process inside, as a non-privileged user just in case.

I set SOURCE_DATE_EPOCH to the release date at 00:00UTC, or the Git revision date for snapshots.

This rebuild framework is excluded from the source tarball, so the latter stays stable during build tuning. I see it as a post-release tool, hence not part of the release (just like distros packaging).

The generated .exe is statically compiled which helps getting a stable result (only the few needed parts of dependencies get included in the final executable).

Since MXE is not itself reproducible, differences may come from MXE itself, which may need fixes as explained above. This is annoying and hopefully will be easier once they ship GCC6. To debug I unzip the different .zip-s, upx -d my .exe-s, and run diffoscope.

I use various tricks (stable ordering, stable timestamping, metadata cleaning) to make the final .zip reproducible as well. Post-processing tools would be an alternative if they were fixed.

reprotest

Any process is moot if it can't be tested.

reprotest helps by running 2 successive compilations with varying factors (build path, file system ordering, etc.), and checks that we get the exact same binary. As a trade-off, I don't run it on the full build environment, just on the project itself. I plugged reprotest into the Docker container by running an sshd on the fly. I have another Makefile target to run reprotest in my host system where I also installed MXE, so I can compare results and sometimes find differences (e.g. due to using a different filesystem). In addition this is faster for debugging, since changing anything in the early Dockerfile steps means a full 1h rebuild.

Copy-on-release

At release time I make a copy of the directory that contains all the self-contained build scripts and the Dockerfile, and rename it after the new release version. I'll continue improving upon the reproducible build system in the 'snapshot' directory, but the versioned directory will stay as-is and can be used in the future to get the same bit-for-bit identical .exe anytime.

This is the technique I used in my Android Rebuilds project.

Other platforms

For now I don't control the build process for other platforms: distros have their own autobuilders, so does F-Droid. Their problem :P

I have plans to make reproducible GNU/Linux AppImage-based builds in the future though. I should be able to use a finer-grained, per-dependency process rather than the huge MXE-based chunk I currently do.

I hope this helps other projects provide reproducible binaries directly! Comments/suggestions welcome.

02 June, 2018 05:12PM

May 30, 2018

FSF News

Minifree Libreboot X200 Tablet now FSF-certified to Respect Your Freedom

Libreboot X200 tablet

This is the third device from Minifree Ltd to receive RYF certification. The Libreboot X200 Tablet is a fully free laptop/tablet hybrid that comes with Trisquel and Libreboot pre-installed. The device is similar to the previously certified Libreboot X200 laptop, but with a built-in tablet that enables users to draw, sign documents, or make handwritten notes. As with all devices from Minifree Ltd., purchasing the Libreboot X200 Tablet helps to fund development of Libreboot, the free boot firmware that currently runs on all RYF-certified laptops. It may be purchased at https://minifree.org/product/libreboot-x200-tablet/, and comes with free technical support included.

"We need RYF-certified laptops of all shapes, sizes, and form factors, and for them to be available from multiple sources around the world so users have options. This is a welcome expansion of those options, as well as an opportunity for people to help unlock future possibilities by funding Libreboot development," said the FSF's executive director, John Sullivan.

"The Libreboot X200 Tablet is another great addition to the line-up of freedom respecting devices from Minifree, which has a long history of developing the software and tools that make RYF-certifiable devices possible," said the FSF's licensing & compliance manager, Donald Robertson, III.

"I'm happy that the FSF is now endorsing yet another Minifree product. Minifree's mission is to provide affordable, libre systems that are easy to use and therefore accessible to the public. Minifree's purpose is to provide funding to the Libreboot project, supporting it fully, and I'm delighted to once again cooperate with the FSF on this most noble goal," said Leah Rowe, Founder & CEO, Minifree Ltd.

To learn more about the Respects Your Freedom certification program, including details on the certification of the Libreboot x200 Tablet, please visit https://fsf.org/ryf.

Hardware sellers interested in applying for certification can consult https://www.fsf.org/resources/hw/endorsement/criteria.

About the Free Software Foundation

The Free Software Foundation, founded in 1985, is dedicated to promoting computer users' right to use, study, copy, modify, and redistribute computer programs. The FSF promotes the development and use of free (as in freedom) software -- particularly the GNU operating system and its GNU/Linux variants -- and free documentation for free software. The FSF also helps to spread awareness of the ethical and political issues of freedom in the use of software, and its Web sites, located at https://fsf.org and https://gnu.org, are an important source of information about GNU/Linux. Donations to support the FSF's work can be made at https://donate.fsf.org. Its headquarters are in Boston, MA, USA.

More information about the FSF, as well as important information for journalists and publishers, is at https://www.fsf.org/press.

About Minifree Ltd

Minifree Ltd, trading as Ministry of Freedom (formerly trading as Gluglug), is a UK supplier shipping worldwide that sells GNU/Linux-libre computers with the Libreboot firmware and Trisquel GNU/Linux-libre operating system pre-installed.

Libreboot is a free BIOS/UEFI replacement, offering faster boot speeds, better security, and many advanced features compared to most proprietary boot firmware.

Media Contacts

Donald Robertson, III
Licensing and Compliance Manager
Free Software Foundation
+1 (617) 542 5942
licensing@fsf.org

Leah Rowe
Founder & CEO
Minifree Ltd
+44 7442 425 835
info@gluglug.org.uk

Image Copyright 2018 Minifree Ltd, Licensed under Creative Commons Attribution-ShareAlike 4.0.

30 May, 2018 02:59PM

Parabola GNU/Linux-libre

Server outage

One of our servers, winston.parabola.nu, is currently offline for hardware reasons. It has been offline since 2018-05-30 00:15 UTC. Hang tight, it should be back online soon.

30 May, 2018 03:11AM by Luke Shumaker

May 28, 2018

bison @ Savannah

bison-3.0.5 released [stable]

28 May, 2018 05:04AM by Akim Demaille

May 26, 2018

GNU Guix

Customize GuixSD: Use Stock SSH Agent Everywhere!

I frequently use SSH. Since I don't like typing my password all the time, I use an SSH agent. Originally I used the GNOME Keyring as my SSH agent, but recently I've switched to using the ssh-agent from OpenSSH. I accomplished this by doing the following two things:

  • Replace the default GNOME Keyring with a custom-built version that disables the SSH agent feature.

  • Start my desktop session with OpenSSH's ssh-agent so that it's always available to any applications in my desktop session.

Below, I'll show you in detail how I did this. In addition to being useful for anyone who wants to use OpenSSH's ssh-agent in GuixSD, I hope this example will help to illustrate how GuixSD enables you to customize your entire system to be just the way you want it!

The Problem: GNOME Keyring Can't Handle My SSH Keys

On GuixSD, I like to use the GNOME desktop environment. GNOME is just one of the various desktop environments that GuixSD supports. By default, the GNOME desktop environment on GuixSD comes with a lot of goodies, including the GNOME Keyring, which is GNOME's integrated solution for securely storing secrets, passwords, keys, and certificates.

The GNOME Keyring has many useful features. One of those is its SSH Agent feature. This feature allows you to use the GNOME Keyring as an SSH agent. This means that when you invoke a command like ssh-add, it will add the private key identities to the GNOME Keyring. Usually this is quite convenient, since it means that GNOME users basically get an SSH agent for free!

Unfortunately, up until GNOME 3.28 (the current release), the GNOME Keyring's SSH agent implementation was not as complete as the stock SSH agent from OpenSSH. As a result, earlier versions of GNOME Keyring did not support many use cases. This was a problem for me, since GNOME Keyring couldn't read my modern SSH keys. To make matters worse, by design the SSH agent for GNOME Keyring and OpenSSH both use the same environment variables (e.g., SSH_AUTH_SOCK). This makes it difficult to use OpenSSH's ssh-agent everywhere within my GNOME desktop environment.

Happily, starting with GNOME 3.28, GNOME Keyring delegates all SSH agent functionality to the stock SSH agent from OpenSSH. They have removed their custom implementation entirely. This means that today, I could solve my problem simply by using the most recent version of GNOME Keyring. I'll probably do just that when the new release gets included in Guix. However, when I first encountered this problem, GNOME 3.28 hadn't been released yet, so the only option available to me was to customize GNOME Keyring or remove it entirely.

In any case, I'm going to show you how I solved this problem by modifying the default GNOME Keyring from the Guix package collection. The same ideas can be used to customize any package, so hopefully it will be a useful example. And what if you don't use GNOME, but you do want to use OpenSSH's ssh-agent? In that case, you may still need to customize your GuixSD system a little bit. Let me show you how!

The Solution: ~/.xsession and a Custom GNOME Keyring

The goal is to make OpenSSH's ssh-agent available everywhere when we log into our GNOME desktop session. First, we must arrange for ssh-agent to be running whenever we're logged in.

There are many ways to accomplish this. For example, I've seen people implement shell code in their shell's start-up files which basically manages their own ssh-agent process. However, I prefer to just start ssh-agent once and not clutter up my shell's start-up files with unnecessary code. So that's what we're going to do!

Launch OpenSSH's ssh-agent in Your ~/.xsession

By default, GuixSD uses the SLiM desktop manager. When you log in, SLiM presents you with a menu of so-called "desktop sessions", which correspond to the desktop environments you've declared in your operating system declaration. For example, if you've added the gnome-desktop-service to your operating system declaration, then you'll see an option for GNOME at the SLiM login screen.

You can further customize your desktop session with the ~/.xsession file. The contract for this file in GuixSD is the same as it is for many GNU/Linux distributions: if it exists, then it will be executed. The arguments passed to it will be the command line invocation that would normally be executed to start the desktop session that you selected from the SLiM login screen. Your ~/.xsession is expected to do whatever is necessary to customize and then start the specified desktop environment. For example, when you select GNOME from the SLiM login screen, your ~/.xsession file will basically be executed like this (for the exact execution mechanism, please refer to the source code linked above):

$ ~/.xsession gnome-session

The upshot of all this is that the ~/.xsession is an ideal place to set up your SSH agent! If you start an SSH agent in your ~/.xsession file, you can have the SSH agent available everywhere, automatically! Check it out: Put this into your ~/.xsession file, and make the file executable:

#!/run/current-system/profile/bin/bash
exec ssh-agent "$@"

When you invoke ssh-agent in this way, it executes the specified program in an environment where commands like ssh-add just work. It does this by setting environment variables such as SSH_AUTH_SOCK, which programs like ssh-add find and use automatically. Because GuixSD allows you to customize your desktop session like this, you can use any SSH agent you want in any desktop environments that you want, automatically!
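
For instance, once you've logged in, any terminal in the session should be able to talk to the agent (the key path below is just a hypothetical example):

$ ssh-add ~/.ssh/id_ed25519
$ ssh-add -l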

Of course, if you're using GNOME Keyring version 3.27 or earlier (like I was), then this isn't quite enough. In that case, the SSH agent feature of GNOME Keyring will override the environment variables set by OpenSSH's ssh-agent, so commands like ssh-add will wind up communicating with the GNOME Keyring instead of the ssh-agent you launched in your ~/.xsession. This is bad because, as previously mentioned, GNOME Keyring version 3.27 or earlier doesn't support as many use cases as OpenSSH's ssh-agent.

How can we work around this problem?

Customize the GNOME Keyring

One heavy-handed solution would be to remove GNOME Keyring entirely. That would work, but then you would lose out on all the other great features that it has to offer. Surely we can do better!

The GNOME Keyring documentation explains that one way to disable the SSH agent feature is to include the --disable-ssh-agent configure flag when building it. Thankfully, Guix provides some ways to customize software in exactly this way!

Conceptually, we "just" have to do the following two things:

  • Customize the existing gnome-keyring package.

  • Make the gnome-desktop-service use our custom gnome-keyring package.

Create a Custom GNOME Keyring Package

Let's begin by defining a custom gnome-keyring package, which we'll call gnome-keyring-sans-ssh-agent. With Guix, we can do this in less than ten lines of code:

(define-public gnome-keyring-sans-ssh-agent
  (package
    (inherit gnome-keyring)
    (name "gnome-keyring-sans-ssh-agent")
    (arguments
     (substitute-keyword-arguments
         (package-arguments gnome-keyring)
       ((#:configure-flags flags)
        `(cons "--disable-ssh-agent" ,flags))))))

Don't worry if some of that code is unclear at first. I'll clarify it now!

In Guix, a <package> record like the one above is defined by a macro called define-record-type* (defined in the file guix/records.scm in the Guix source). It's similar to an SRFI-9 record. The inherit feature of this macro is very useful: it creates a new copy of an existing record, overriding specific fields in the new copy as needed.

In the above, we define gnome-keyring-sans-ssh-agent to be a copy of the gnome-keyring package, and we use inherit to change the name and arguments fields in that new copy. We also use the substitute-keyword-arguments macro (defined in the file guix/utils.scm in the Guix source) to add --disable-ssh-agent to the list of configure flags defined in the gnome-keyring package. The effect of this is to define a new GNOME Keyring package that is built exactly the same as the original, but in which the SSH agent is disabled.

I'll admit this code may seem a little opaque at first, but all code does when you first learn it. Once you get the hang of things, you can customize packages any way you can imagine. If you want to learn more, you should read the docstrings for the define-record-type* and substitute-keyword-arguments macros in the Guix source code. It's also very helpful to grep the source code to see examples of how these macros are used in practice. For example:

$ # Search the currently installed Guix for the current user.
$ grep -r substitute-keyword-arguments ~/.config/guix/latest
$ # Search the Guix Git repository, assuming you've checked it out here.
$ grep -r substitute-keyword-arguments ~/guix

Use the Custom GNOME Keyring Package

OK, we've created our own custom GNOME Keyring package. Great! Now, how do we use it?

In GuixSD, the GNOME desktop environment is treated as a system service. To make GNOME use our custom GNOME Keyring package, we must somehow customize the gnome-desktop-service (defined in the file gnu/services/desktop.scm) to use our custom package. How do we customize a service? Generally, the answer depends on the service. Thankfully, many of GuixSD's services, including the gnome-desktop-service, follow a similar pattern. In this case, we "just" need to pass a custom <gnome-desktop-configuration> record to the gnome-desktop-service procedure in our operating system declaration, like this:

(operating-system

  ...

  (services (cons*
             (gnome-desktop-service
              #:config my-gnome-desktop-configuration)
             %desktop-services)))

Here, the cons* procedure just adds the GNOME desktop service to the %desktop-services list, returning the new list. For details, please refer to the Guile manual.

Now the question is: what should my-gnome-desktop-configuration be? Well, if we examine the definition of this record type in the Guix source, we see the following:

(define-record-type* <gnome-desktop-configuration> gnome-desktop-configuration
  make-gnome-desktop-configuration
  gnome-desktop-configuration
  (gnome-package gnome-package (default gnome)))

The gnome package referenced here is a "meta" package: it exists only to aggregate many GNOME packages together, including gnome-keyring. To see its definition, we can simply invoke guix edit gnome, which opens the file where the package is defined:

(define-public gnome
  (package
    (name "gnome")
    (version (package-version gnome-shell))
    (source #f)
    (build-system trivial-build-system)
    (arguments '(#:builder (mkdir %output)))
    (propagated-inputs
     ;; TODO: Add more packages according to:
     ;;       <https://packages.debian.org/jessie/gnome-core>.
     `(("adwaita-icon-theme"        ,adwaita-icon-theme)
       ("baobab"                    ,baobab)
       ("font-cantarell"            ,font-cantarell)
       [... many packages omitted for brevity ...]
       ("gnome-keyring"             ,gnome-keyring)
       [... many packages omitted for brevity ...]
    (synopsis "The GNU desktop environment")
    (home-page "https://www.gnome.org/")
    (description
     "GNOME is the graphical desktop for GNU.  It includes a wide variety of
applications for browsing the web, editing text and images, creating
documents and diagrams, playing media, scanning, and much more.")
    (license license:gpl2+)))

Apart from being a little long, this is just a normal package definition. We can see that gnome-keyring is included in the list of propagated-inputs. So, we need to create a replacement for the gnome package that uses our gnome-keyring-sans-ssh-agent instead of gnome-keyring. The following package definition accomplishes that:

(define-public gnome-sans-ssh-agent
  (package
    (inherit gnome)
    (name "gnome-sans-ssh-agent")
    (propagated-inputs
     (map (match-lambda
            ((name package)
             (if (equal? name "gnome-keyring")
                 (list name gnome-keyring-sans-ssh-agent)
                 (list name package))))
          (package-propagated-inputs gnome)))))

As before, we use inherit to create a new copy of the gnome package that overrides the original name and propagated-inputs fields. Since Guix packages are just defined using good old Scheme, we can use existing language features like map and match-lambda to manipulate the list of propagated inputs. The effect of the above is to create a new package that is the same as the gnome package but uses gnome-keyring-sans-ssh-agent instead of gnome-keyring.

Now that we have gnome-sans-ssh-agent, we can create a custom <gnome-desktop-configuration> record and pass it to the gnome-desktop-service procedure as follows:

(operating-system

  ...

  (services (cons*
             (gnome-desktop-service
              #:config (gnome-desktop-configuration
                        (gnome-package gnome-sans-ssh-agent)))
             %desktop-services)))

Wrapping It All Up

Finally, you need to run the following commands as root to create and boot into the new system generation (replace MY-CONFIG with the path to the customized operating system configuration file):

# guix system reconfigure MY-CONFIG
# reboot

After you log into GNOME, any time you need to use SSH, the stock SSH agent from OpenSSH that you started in your ~/.xsession file will be used instead of the GNOME Keyring's SSH agent. It just works! Note that it still works even if you select a non-GNOME desktop session (like XFCE) at the SLiM login screen, since the ~/.xsession is not tied to any particular desktop session.

In the unfortunate event that something went wrong and things just aren't working when you reboot, don't worry: with GuixSD, you can safely roll back to the previous system generation via the usual mechanisms. For example, you can run this from the command line to roll back:

# guix system roll-back
# reboot

This is one of the great benefits that comes from the fact that Guix follows the functional software deployment model. However, note that because the ~/.xsession file (like many files in your home directory) is not managed by Guix, you must manually undo the changes that you made to it in order to roll back fully.

Conclusion

I hope this helps give you some ideas for how you can customize your own GuixSD system to make it exactly what you want it to be. Not only can you customize your desktop session via your ~/.xsession file, but Guix also provides tools for you to modify any of the default packages or services to suit your specific needs.

Happy hacking!

Notices

CC0

To the extent possible under law, Chris Marusich has waived all copyright and related or neighboring rights to this article, "Customize GuixSD: Use Stock SSH Agent Everywhere!". This work is published from: United States.

The views expressed in this article are those of Chris Marusich and do not necessarily reflect the views of his past, present, or future employers.

About GNU Guix

GNU Guix is a transactional package manager for the GNU system. The Guix System Distribution or GuixSD is an advanced distribution of the GNU system that relies on GNU Guix and respects the user's freedom.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. Guix uses low-level mechanisms from the Nix package manager, except that packages are defined as native Guile modules, using extensions to the Scheme language. GuixSD offers a declarative approach to operating system configuration management, and is highly customizable and hackable.

GuixSD can be used on i686, x86_64, and armv7 machines. It is also possible to use Guix on top of an already installed GNU/Linux system, including on mips64el and aarch64.

26 May, 2018 03:00PM by Chris Marusich

health @ Savannah

openSUSE donates more Raspberry Pis to the GNU Health project

Dear community:

Today, in the context of the openSUSE conference 2018, oSC18, openSUSE donated 10 Raspberry Pis to the GNU health project.

GNU Health Embedded is a project that delivers GNU Health on single-board machines, like the Raspberry Pi.

The #GNUHealthEmbedded project delivers a ready-to-run, full-blown installation of GNU Health: the GNU Health kernel, the database, and even a demo database that can be installed.

The user only needs to point the client to the server address.

The new Raspberry Pis will include:

  • Latest openSUSE Leap 15
  • GNU Health 3.4 server
  • Offline documentation
  • Lab interfaces

It will also include the Federation-related packages, so it could act as a relay or as a node in the distributed, federated model.

Thank you so much to openSUSE for their generous donation and commitment to the GNU Health project, as an active member and as a sponsor!

Main news from openSUSE portal:
https://news.opensuse.org/2018/05/26/opensuse-donates-10-more-raspberry-pis-to-gnu-health/

26 May, 2018 01:02PM by Luis Falcon

May 25, 2018

Sylvain Beucler

Testing GNU FreeDink in your browser

Ever wanted to try this weird GNU FreeDink game, but never had the patience to install it?
Today, you can play it with a single click :)

Play GNU FreeDink

This is a first version that can be polished further but it works quite well.
This is the original C/C++/SDL2 code with a few tweaks, cross-compiled to WebAssembly (and an alternate version in asm.js) with emscripten.
Nothing brand new I know, but things are getting smoother, and WebAssembly is definitely a performance boost.

I like distributed and autonomous tools, so I'm generally not inclined to web-based solutions.
In this case however, this is a local version of the game. There's no server side. Savegames are in your browser local storage. Even importing D-Mods (game add-ons) is performed purely locally in the in-memory virtual FS with a custom .tar.bz2 extractor cross-compiled to WebAssembly.
And you don't have to worry about all these Store policies (and Distros policies^W^W^W.

I'm interested in feedback on how well this works for you in your browsers and devices:

I'm also interested in tips on how to place LibreJS tags - this is all free JavaScript.

25 May, 2018 11:46PM

May 22, 2018

parallel @ Savannah

GNU Parallel 20180522 ('Great March of Return') released

GNU Parallel 20180522 ('Great March of Return') has been released. It is available for download at: http://ftpmirror.gnu.org/parallel/

Quote of the month:

Gnu parallel seems to work fine.
Sucking up the life out of all cores, poor machine ahahahahah
-- osxreverser

New in this release:

  • --tty allows for more programs accessing /dev/tty in parallel. Some programs require tty access without using it.
  • env_parallel --session will record names in current environment in $PARALLEL_IGNORED_NAMES and exit. It is only used with env_parallel, and can work like --record-env but in a single session.
  • Bug fixes and man page updates.

GNU Parallel - For people who live life in the parallel lane.

About GNU Parallel

GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with: (wget -O - pi.dk/3 || curl pi.dk/3/) | bash

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your commandline will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2018): GNU Parallel 2018, April 2018, https://doi.org/10.5281/zenodo.1146014.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/ Identi.ca/Google+/Twitter/Facebook/Linkedin/mailing lists
  • Get the merchandise https://gnuparallel.threadless.com/designs/gnu-parallel
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference

If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)

If GNU Parallel saves you money:

About GNU SQL

GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.

About GNU Niceload

GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.

22 May, 2018 10:57PM by Ole Tange

May 21, 2018

Andy Wingo

correct or inotify: pick one

Let's say you decide that you'd like to see what some other processes on your system are doing to a subtree of the file system. You don't want to have to change how those processes work -- you just want to see what files those processes create and delete.

One approach would be to just scan the file-system tree periodically, enumerating its contents. But when the file system tree is large and the change rate is low, that's not an optimal thing to do.

Fortunately, Linux provides an API to allow a process to receive notifications on file-system change events, called inotify. So you open up the inotify(7) manual page, and are greeted with this:

With careful programming, an application can use inotify to efficiently monitor and cache the state of a set of filesystem objects. However, robust applications should allow for the fact that bugs in the monitoring logic or races of the kind described below may leave the cache inconsistent with the filesystem state. It is probably wise to do some consistency checking, and rebuild the cache when inconsistencies are detected.

It's not exactly reassuring is it? I mean, "you had one job" and all.

Reading down a bit farther, I thought that with some "careful programming", I could get by. After a day of trying, I am now certain that it is impossible to build a correct recursive directory monitor with inotify, and I am not even sure that "good enough" solutions exist.
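
For orientation, here is roughly what basic single-directory usage of the API looks like, with every pitfall discussed below ignored (this is a generic sketch, not code from any particular monitoring tool):

#include <stdio.h>
#include <unistd.h>
#include <sys/inotify.h>

int
main (int argc, char **argv)
{
  char buf[4096] __attribute__ ((aligned (__alignof__ (struct inotify_event))));

  if (argc < 2)
    { fprintf (stderr, "usage: %s DIRECTORY\n", argv[0]); return 1; }

  int fd = inotify_init ();
  if (fd < 0 || inotify_add_watch (fd, argv[1], IN_CREATE | IN_DELETE) < 0)
    { perror ("inotify"); return 1; }

  for (;;)
    {
      ssize_t len = read (fd, buf, sizeof buf);   /* blocks until events arrive */
      if (len <= 0)
        break;
      for (char *p = buf; p < buf + len; )
        {
          struct inotify_event *ev = (struct inotify_event *) p;
          /* only IN_CREATE and IN_DELETE were requested above */
          printf ("%s: %s\n",
                  (ev->mask & IN_CREATE) ? "created" : "deleted",
                  ev->len ? ev->name : "(watched directory)");
          p += sizeof *ev + ev->len;
        }
    }
  return 0;
}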

pitfall the first: buffer overflow

Fundamentally, inotify races the monitoring process with all other processes on the system. Events are delivered to the monitoring process via a fixed-size buffer that can overflow, and the monitoring process provides no back-pressure on the system's rate of filesystem modifications. With inotify, you have to be ready to lose events.

This I think is probably the easiest limitation to work around. The kernel can let you know when the buffer overflows, and you can tweak the buffer size. Still, it's a first indication that perfect is not possible.

pitfall the second: now you see it, now you don't

This one is the real kicker. Say you get an event that says that a file "frenemies.txt" has been created in the directory "/contacts/". You go to open the file -- but is it still there? By the time you get around to looking for it, it could have been deleted, or renamed, or maybe even created again or replaced! This is a TOCTTOU race, built-in to the inotify API. It is literally impossible to use inotify without this class of error.

The canonical solution to this kind of issue in the kernel is to use file descriptors instead. Instead of or possibly in addition to getting a name with the file change event, you get a descriptor to a (possibly-unlinked) open file, which you would then be responsible for closing. But that's not what inotify does. Oh well!

pitfall the third: race conditions between inotify instances

When you inotify a directory, you get change notifications for just that directory. If you want to get change notifications for subdirectories, you need to open more inotify instances and poll on them all. However, now you have N² problems: as poll and the like return an unordered set of readable file descriptors, each with their own ordering, you no longer have access to a linear order in which changes occurred.

It is impossible to build a recursive directory watcher that definitively says "ok, first /contacts/frenemies.txt was created, then /contacts was renamed to /peeps, ..." because you have no ordering between the different watches. You don't know that there was ever even a time that /contacts/frenemies.txt was an accessible file name; it could have been only ever openable as /peeps/frenemies.txt.

Of course, this is the most basic ordering problem. If you are building a monitoring tool that actually wants to open files -- good luck bubster! It literally cannot be correct. (It might work well enough, of course.)

reflections

As far as I am aware, inotify came out to address the needs of desktop search tools like the belated Beagle (11/10 good pupper just trying to get his pup on). Especially in the days of spinning metal, grovelling over the whole hard-drive was a real non-starter, especially if the search database was to be kept up-to-date.

But after looking into inotify, I start to see why someone at Google said that desktop search was in some ways harder than web search -- I mean we all struggle to find files on our own machines, even now, 15 years after the whole dnotify/inotify thing started. Part of it is that, given the choice between supporting reliable, fool-proof file system indexes on the one hand, and overclocking the IOPS benchmarks on the other, the kernel gave us inotify. I understand it, but inotify still sucks.

I dunno about you all but whenever I've had to document such an egregious uncorrectable failure mode as any of the ones in the inotify manual, I have rewritten the software instead. In that spirit, I hope that some day we shall send inotify to the pet cemetery, to rest in peace beside Beagle.

21 May, 2018 02:29PM by Andy Wingo

nano @ Savannah

GNU nano 2.9.7 was released

Accumulated changes over the last five releases include: the ability to bind a key to a string (text and/or escape sequences), a default color of bright white on red for error messages, an improvement to the way the Scroll-Up and Scroll-Down commands work, and the new --afterends option to make Ctrl+Right (next word) stop at the end of a word instead of at the beginning. Check it out.

21 May, 2018 10:36AM by Benno Schulenberg